\section{Introduction} Quantum effective actions provide a framework within which to study the quantum dynamics of field-theoretic systems, both perturbatively and non-perturbatively, and, for instance, out of thermodynamic equilibrium. Moreover, by introducing a function that controls which quantum fluctuations are integrated in --- the so-called regulator --- we can derive exact flow equations, which provide information about how the parameters of a theory change with scale (for reviews, see, e.g., Refs.~\cite{Berges:2000ew, Bagnuls:2000ae, Pawlowski:2005xe, Gies:2006wv, Kopietz:2010zz, Rosten:2010vm}). This is at the heart of the functional renormalisation group programme. In this article, we make a concrete comparison of two approaches to deriving these exact flow equations. The first of these is based on the two-particle-irreducible (2PI) effective action~\cite{Cornwall:1974vz}, which we refer to as the regulator-sourced 2PI effective action~\cite{Alexander:2019cgw, Alexander:2019quf}. The second (see Refs.~\cite{Wetterich:1992yh, Morris:1993qb, Ellwanger:1993mw, Reuter:1996cp}) is based on the so-called average one-particle-irreducible (1PI) effective action~\cite{Wetterich:1989xg}, itself a modification of the 1PI effective action~\cite{Jackiw:1974cv}. The two effective actions differ from one another by a Legendre transform with respect to the regulator, and while they therefore describe the same physics, only one of these can be interpreted as the quantum-corrected action. The Hamiltonian and Routhian of classical mechanics provide an important analogy; while both describe the same dynamics, only one of these can be interpreted as the energy of the system. Additionally, the 1PI and 2PI approaches differ in the way that the loop expansion is organised and infinite series of field and loop insertions are resummed. 
The 2PI treatment described here is distinct from uses of 2PI and so-called $\Phi$-derivable approaches to improve approximations for the exact flow equations obtained from the average 1PI effective action (see, e.g., Refs.~\cite{Blaizot:2010zx, Blaizot:2021ikl}), or to make truncations of the same flow equations based on Bethe-Salpeter equations derived from the nPI effective action~\cite{Carrington:2012ea}. It is also distinct from the approach of Refs.~\cite{Wetterich:2002ky, Dupuis:2005ij, Dupuis2014, Carrington:2014lba, Rentrop2015, Carrington:2017lry, Carrington:2019fwp}, in that we will take the two-point source of the usual 2PI effective action to be the regulator directly, and from the approach of Ref.~\cite{Lavrov:2012xz}, wherein additional sources are introduced for the composite operators involving the regulator, $\mathcal{R}_k$ say, i.e., a source for the operator $\mathcal{R}_k\phi^2$ in what follows. To make our comparison as intuitive as possible, we dispense with the complication of dealing with functionals by working with a zero-dimensional ``field theory'', taking inspiration from an earlier work~\cite{Millington:2019nkw}. By doing so, we are able to evaluate the effective action analytically at a fixed order in the coupling; to illustrate how each of these approaches works, particularly in relation to the closure of the differential systems; to show that both are internally self-consistent; and to highlight their key differences in terms of the structure of the would-be loop corrections. This work represents a first step towards a systematic programme aimed at revisiting analyses of quantum field theories based on the average 1PI framework within the regulator-sourced 2PI approach. The remainder of this article is organised as follows. In Sec.~\ref{sec:Model}, we introduce the zero-dimensional model that is the focus of this work. 
We describe the 2PI approach in Sec.~\ref{sec:2PI}, paying particular attention to the expression for the would-be inverse two-point function in Sec.~\ref{sec:2point} [see also App.~\ref{sec:app_multi_field_convexity}] and the derivation of the 2PI flow equations in Sec.~\ref{sec:2PIflow}. We then compare with the average 1PI approach and the corresponding flow equations in Sec.~\ref{sec:1PI}. Some concluding remarks are offered in Sec.~\ref{sec:Conc}. \section{Model} \label{sec:Model} We consider the following zero-dimensional field theory, with partition function \begin{equation}\label{eq:Z} \mathcal{Z}(J,K)=\mathcal{N}\int{\rm d}\Phi\,\exp\left\{-\frac{1}{\hbar}\left[S(\Phi)-J\Phi-\frac{1}{2}K\Phi^2\right]\right\} \end{equation} and classical action \begin{equation} S(\Phi)=\frac{1}{2}\Phi^2+\frac{\lambda}{4!}\Phi^4. \end{equation} Herein, $\hbar$, $\lambda>0$, $J$ and $K$ are real numbers, and $\mathcal{N}$ is some normalisation. At second order in the coupling $\lambda$, the $\Phi$ integral can be done explicitly, giving \begin{align} \mathcal{Z}(J,K)&=\mathcal{N}'\frac{1}{\sqrt{1-K}}\exp\left[\frac{1}{2\hbar}\frac{J^2}{1-K}\right]\nonumber\\\nonumber &\times\Bigg\{1-\frac{\lambda}{4!\hbar}\frac{1}{\left(1-K\right)^2}\left[\frac{J^4}{\left(1-K\right)^2}+\frac{6\hbar J^2}{1-K}+3\hbar^2\right]\\\nonumber &+\frac{\lambda^2}{2\hbar^2(4!)^2}\frac{1}{(1-K)^4}\left[ \frac{J^8}{(1-K)^4}+\frac{28\hbar J^6}{(1-K)^3}+\frac{210\hbar^2 J^4}{(1-K)^2}+\frac{420\hbar^3J^2}{1-K}+105\hbar^4\right]\\ & +\mathcal{O}(\lambda^3)\Bigg\}, \end{align} where $\mathcal{N}'$ is a different numerical constant. 
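Because the model is zero-dimensional, this expansion can be checked by direct numerical quadrature. The following sketch (the parameter values are our own illustrative choices, not taken from the text) compares the $\Phi$ integral against the truncated series, normalised by the free Gaussian integral:

```python
import numpy as np
from scipy.integrate import quad

# illustrative parameter values (ours, not from the text)
hbar, lam, J, K = 1.0, 0.02, 0.3, 0.2

# Z(J,K) up to normalisation, by direct quadrature
integrand = lambda p: np.exp(-(0.5*p**2 + lam/24*p**4 - J*p - 0.5*K*p**2)/hbar)
Z_num, _ = quad(integrand, -np.inf, np.inf)

# free (lambda = 0) Gaussian reference integral
gauss = np.sqrt(2*np.pi*hbar/(1 - K))*np.exp(J**2/(2*hbar*(1 - K)))

# the braces of the series above, abbreviating a = 1/(1-K)
a = 1/(1 - K)
series = (1
          - lam/(24*hbar)*a**2*(J**4*a**2 + 6*hbar*J**2*a + 3*hbar**2)
          + lam**2/(1152*hbar**2)*a**4*(J**8*a**4 + 28*hbar*J**6*a**3
              + 210*hbar**2*J**4*a**2 + 420*hbar**3*J**2*a + 105*hbar**4))

# agreement up to the O(lambda^3) truncation error
assert abs(Z_num/gauss - series) < 1e-4
```

The residual is set by the neglected $\mathcal{O}(\lambda^3)$ term, so it shrinks rapidly as $\lambda$ is reduced.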
It follows that the Schwinger function \begin{equation} \mathcal{W}(J,K)=-\hbar\ln\mathcal{Z}(J,K) \end{equation} is, up to irrelevant constant terms, \begin{align} \mathcal{W}(J,K)=&-\frac{1}{2}\frac{J^2}{1-K}+\frac{\hbar}{2}\ln\left(1-K\right)+\frac{\lambda}{4!}\frac{1}{\left(1-K\right)^2}\left[\frac{J^4}{\left(1-K\right)^2}+\frac{6\hbar J^2}{1-K}+3\hbar^2\right]\nonumber\\ &-\frac{\lambda^2}{144}\frac{1}{(1-K)^4}\left[ \frac{2J^6}{(1-K)^3}+\frac{21\hbar J^4}{(1-K)^2}+\frac{48\hbar^2J^2}{1-K}+12\hbar^3\right]+\mathcal{O}(\lambda^3). \end{align} With the expression for the Schwinger function calculated, we are now able to construct the 2PI and average 1PI effective actions at second order in $\lambda$. It is clearly straightforward to work to higher order in $\lambda$, but the expressions become increasingly cumbersome without leading to further insight. \section{2PI effective action} \label{sec:2PI} The standard 2PI effective action $\Gamma^{\rm 2PI}(\phi,\Delta)$ is defined as the double Legendre transform of the Schwinger function as follows: \begin{subequations} \begin{align} \label{eq:Gamma_JK_phi_Delta} \Gamma^{\rm 2PI}(J,K;\phi,\Delta)&=\mathcal{W}(J,K)+J\phi+\frac{1}{2}K\left(\phi^2+\hbar\Delta\right),\\ \Gamma^{\rm 2PI}(\phi,\Delta)&=\max_{J,K}\Gamma^{\rm 2PI}(J,K;\phi,\Delta). 
\end{align} \end{subequations} The maximum is defined to occur at $(J,K)=(\mathcal{J},\mathcal{K})$, and maximisation gives \begin{subequations} \label{eq:onetwodefs} \begin{align} \nonumber \phi&\equiv\phi(\mathcal{J},\mathcal{K})=-\left.\frac{\partial \mathcal{W}(J,K)}{\partial J}\right|_{J=\mathcal{J},K=\mathcal{K}}\\\label{eq:phidef} &=\frac{\mathcal{J}}{1-\mathcal{K}}-\frac{\lambda}{6}\frac{\mathcal{J}}{\left(1-\mathcal{K}\right)^3}\left[\frac{\mathcal{J}^2}{1-\mathcal{K}}+3\hbar\right]+\frac{\lambda^2}{12}\frac{\mathcal{J}}{(1-\mathcal{K})^5}\left[ \frac{\mathcal{J}^4}{(1-\mathcal{K})^2}+\frac{7\hbar \mathcal{J}^2}{1-\mathcal{K}}+8\hbar^2\right]\nonumber\\&\phantom{=}+\mathcal{O}(\lambda^3),\\\nonumber\label{eq:Deltadef} \hbar\Delta&\equiv\hbar\Delta(\mathcal{J},\mathcal{K})=-2\left.\frac{\partial \mathcal{W}(J,K)}{\partial K}\right|_{J=\mathcal{J},K=\mathcal{K}}-\phi^2\\ &=\frac{\hbar}{1-\mathcal{K}}-\frac{\lambda}{2}\frac{\hbar}{\left(1-\mathcal{K}\right)^3}\left[\frac{\mathcal{J}^2}{1-\mathcal{K}}+\hbar\right]+\frac{\lambda^2}{12}\frac{\hbar}{(1-\mathcal{K})^5}\left[\frac{5\mathcal{J}^4}{(1-\mathcal{K})^2}+\frac{21\hbar\mathcal{J}^2}{1-\mathcal{K}}+8\hbar^2\right]\nonumber\\&\phantom{=}+\mathcal{O}(\lambda^3), \end{align} \end{subequations} wherein we have been careful to note that $\phi$ and $\Delta$ are functions of $\mathcal{J}\equiv\mathcal{J}(\phi,\Delta)$ and $\mathcal{K}\equiv\mathcal{K}(\phi,\Delta)$, chosen such that $\phi$ and $\Delta$ can be treated as independent variables. 
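These series follow from the explicit form of $\mathcal{W}(J,K)$ by direct differentiation, which can be confirmed symbolically. A sketch with sympy, abbreviating $a\equiv 1/(1-K)$:

```python
import sympy as sp

lam, hbar, J, K = sp.symbols('lambda hbar J K', positive=True)
a = 1/(1 - K)

# W(J,K) up to irrelevant constants, as given above
W = (-J**2*a/2 + hbar/2*sp.log(1 - K)
     + lam/24*a**2*(J**4*a**2 + 6*hbar*J**2*a + 3*hbar**2)
     - lam**2/144*a**4*(2*J**6*a**3 + 21*hbar*J**4*a**2
                        + 48*hbar**2*J**2*a + 12*hbar**3))

# definitions of the one- and two-point functions
phi = -sp.diff(W, J)
hDelta = -2*sp.diff(W, K) - phi**2

# series quoted in eqs. (phidef) and (Deltadef)
phi_ser = (J*a - lam/6*J*a**3*(J**2*a + 3*hbar)
           + lam**2/12*J*a**5*(J**4*a**2 + 7*hbar*J**2*a + 8*hbar**2))
hD_ser = hbar*(a - lam/2*a**3*(J**2*a + hbar)
               + lam**2/12*a**5*(5*J**4*a**2 + 21*hbar*J**2*a + 8*hbar**2))

# both agree to the stated O(lambda^2) accuracy
assert sp.simplify(sp.series(phi - phi_ser, lam, 0, 3).removeO()) == 0
assert sp.simplify(sp.series(hDelta - hD_ser, lam, 0, 3).removeO()) == 0
```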
Equation~\eqref{eq:onetwodefs} can be inverted to second order in $\lambda$, giving \begin{subequations} \begin{align}\label{eq:J_pert} \mathcal{J}&(\phi,\Delta)=\frac{\phi}{\Delta}-\frac{\lambda}{3}\phi^3+\frac{\hbar\lambda^2}{2}\Delta^2\phi^3,\\\label{eq:K_pert} \mathcal{K}(\phi,\Delta)&=\frac{\Delta-1}{\Delta}+\frac{\lambda}{2}\left(\phi^2+\hbar\Delta\right)-\frac{\lambda^2}{6}\left( 3\phi^2\hbar\Delta^2+\hbar^2\Delta^3\right)\\\label{eq:Delta_inv_lambda_sq} \Rightarrow\Delta^{-1}&=1-\mathcal{K}+\frac{\lambda}{2}(\phi^2+\hbar\Delta)-\frac{\lambda^2}{6}\left( 3\phi^2\hbar\Delta^2+\hbar^2\Delta^3\right). \end{align} \end{subequations} Equation~\eqref{eq:Delta_inv_lambda_sq} is essentially the Schwinger-Dyson equation, and it is the precursor to a key result that relates the inverse ``propagator'' to the source $\mathcal{K}$ and derivatives of the 2PI action. As we will see, it is this expression that gives the closure for the consistent set of flow equations in the regulator-sourced 2PI approach. Note that the expression for $\Delta^{-1}$ in Eq.~\eqref{eq:Delta_inv_lambda_sq} contains would-be loop corrections built self-consistently from $\Delta$. Thus, while it has been truncated at order $\lambda^2$, the solution for $\Delta$ obtained from Eq.~\eqref{eq:Delta_inv_lambda_sq} resums an infinite series of loop insertions to the two-point function. This is the power of the 2PI approach. 
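The inversion can be confirmed by substituting Eqs.~\eqref{eq:J_pert} and~\eqref{eq:K_pert} back into the series for $\phi(\mathcal{J},\mathcal{K})$ and $\Delta(\mathcal{J},\mathcal{K})$ and re-expanding in $\lambda$; a sympy sketch:

```python
import sympy as sp

lam, hbar, phi, D = sp.symbols('lambda hbar phi Delta', positive=True)

# sources in terms of (phi, Delta), eqs. (J_pert) and (K_pert)
J = phi/D - lam*phi**3/3 + hbar*lam**2*D**2*phi**3/2
K = ((D - 1)/D + lam*(phi**2 + hbar*D)/2
     - lam**2*(3*phi**2*hbar*D**2 + hbar**2*D**3)/6)

a = 1/(1 - K)
# phi(J,K) and Delta(J,K), eqs. (phidef) and (Deltadef)
phi_JK = (J*a - lam/6*J*a**3*(J**2*a + 3*hbar)
          + lam**2/12*J*a**5*(J**4*a**2 + 7*hbar*J**2*a + 8*hbar**2))
D_JK = (a - lam/2*a**3*(J**2*a + hbar)
        + lam**2/12*a**5*(5*J**4*a**2 + 21*hbar*J**2*a + 8*hbar**2))

# round-trip returns (phi, Delta) up to O(lambda^3)
assert sp.simplify(sp.series(phi_JK - phi, lam, 0, 3).removeO()) == 0
assert sp.simplify(sp.series(D_JK - D, lam, 0, 3).removeO()) == 0
```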
In addition, we have that \begin{equation} \frac{\mathcal{J}}{1-\mathcal{K}}=\phi\left[1+\frac{\lambda}{6} \Delta\left(\phi^2+3\hbar \Delta\right)+\frac{\lambda^2}{12}\Delta^2\left(\phi^4+4\hbar\phi^2\Delta+\hbar^2\Delta^2\right)\right] \end{equation} and we can eliminate the factors of $\mathcal{J}/(1-\mathcal{K})$ in favour of $\phi$ and $\Delta$ in the would-be two-point function~\eqref{eq:Deltadef} to give \begin{align}\label{eq:Delta_quadratic} \Delta&=\frac{1}{1-\mathcal{K}}-\frac{\lambda}{2}\frac{1}{\left(1-\mathcal{K}\right)^2}\left[\phi^2+\frac{\hbar}{1-\mathcal{K}}\right]\nonumber\\ &\quad+\frac{\lambda^2}{12}\frac{1}{(1-\mathcal{K})^2}\left[ \frac{8\hbar^2}{(1-\mathcal{K})^3}+\frac{21\hbar\phi^2}{(1-\mathcal{K})^2}+\frac{5\phi^4}{1-\mathcal{K}}-2\Delta\phi^2(\phi^2+3\hbar\Delta)\right]. \end{align} We can go further and note that Eq.~\eqref{eq:Delta_quadratic} is a quadratic equation in $\Delta$, and so can be solved to find \begin{subequations} \begin{align}\label{eq:Delta_K_phi} \Delta&=\frac{1}{1-\mathcal{K}}-\frac{\lambda}{2}\frac{1}{\left(1-\mathcal{K}\right)^2}\left[\phi^2+\frac{\hbar}{1-\mathcal{K}}\right] +\frac{\lambda^2}{12}\frac{1}{(1-\mathcal{K})^3}\left[ 3\phi^4+\frac{15\hbar\phi^2}{1-\mathcal{K}}+\frac{8\hbar^2}{(1-\mathcal{K})^2}\right],\\ \label{eq:Deltainv} \Rightarrow \Delta^{-1}&=1-\mathcal{K}+\frac{\lambda}{2}\left[\phi^2+\frac{\hbar}{1-\mathcal{K}}\right]-\frac{\lambda^2}{12}\frac{\hbar}{(1-\mathcal{K})^2}\left[9\phi^2+\frac{5\hbar}{1-\mathcal{K}} \right], \end{align} \end{subequations} both correct to second order in $\lambda$. It is worth pausing to note that Eq.~\eqref{eq:Delta_K_phi} shows explicitly how $\Delta$ depends on $\phi$ when we hold the source $\mathcal{K}$ fixed. 
The equation of motion for the would-be one-point function is given by \begin{equation}\label{eq:dGamm2PI_by_dphi} \frac{\partial \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi}=\mathcal{J}+\mathcal{K}\phi, \end{equation} and the Schwinger-Dyson equation for the would-be two-point function is obtained from \begin{equation}\label{eq:dGamm2PI_by_dDelta} \frac{\partial \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta}=\frac{\hbar}{2}\mathcal{K}. \end{equation} Herein, partial derivatives with respect to $\phi$ are understood at fixed $\Delta$ and vice versa. Note that by restricting the source $\mathcal{K}$, we can constrain the two-point function and thereby also the effective action (see Refs.~\cite{Millington:2019nkw, Garbrecht:2015cla}), as we will do later in order to obtain analogues of the exact flow equations of the functional renormalisation group (as was done in Refs.~\cite{Alexander:2019cgw, Alexander:2019quf}). The 2PI effective action can now be expressed in terms of only $\phi$ and $\Delta$, either by integrating Eqs.~\eqref{eq:dGamm2PI_by_dphi} and~\eqref{eq:dGamm2PI_by_dDelta} or by direct substitution into the definition of $\Gamma^{\rm 2PI}$ in Eq.~\eqref{eq:Gamma_JK_phi_Delta}, evaluated at $J=\mathcal{J}$ and $K=\mathcal{K}$, as given in Eqs.~\eqref{eq:J_pert} and~\eqref{eq:K_pert}. We find \begin{align}\label{eq:2PIfull} \Gamma^{\rm 2PI}(\phi,\Delta) &=\frac{1}{2}\phi^2+\frac{\lambda}{4!}\phi^4+\frac{\hbar}{2}\left[ \ln\Delta^{-1}+G^{-1}(\phi)\Delta-1\right]\nonumber\\ &\quad+\hbar^2\left[\frac{\lambda}{8}\Delta^2-\frac{\lambda^2}{12}\phi^2\Delta^3 \right]+\hbar^3\left[ -\frac{\lambda^2}{48}\Delta^4\right], \end{align} where $G^{-1}(\phi)=1+\frac{\lambda}{2}\phi^2$, which matches the form found in the full field theory case at 2PI (see, e.g., Ref.~\cite{Garbrecht:2015cla}). 
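Both stationarity conditions can be verified symbolically against the explicit expressions above; a sympy sketch:

```python
import sympy as sp

lam, hbar, phi, D = sp.symbols('lambda hbar phi Delta', positive=True)

# Gamma^{2PI}(phi, Delta) of eq. (2PIfull), with G^{-1} = 1 + lam*phi^2/2
Ginv = 1 + lam*phi**2/2
Gamma = (phi**2/2 + lam*phi**4/24
         + hbar/2*(sp.log(1/D) + Ginv*D - 1)
         + hbar**2*(lam*D**2/8 - lam**2*phi**2*D**3/12)
         - hbar**3*lam**2*D**4/48)

# sources from eqs. (J_pert) and (K_pert)
J = phi/D - lam*phi**3/3 + hbar*lam**2*D**2*phi**3/2
K = ((D - 1)/D + lam*(phi**2 + hbar*D)/2
     - lam**2*(3*phi**2*hbar*D**2 + hbar**2*D**3)/6)

# eqs. (dGamm2PI_by_dphi) and (dGamm2PI_by_dDelta)
assert sp.simplify(sp.diff(Gamma, phi) - (J + K*phi)) == 0
assert sp.simplify(sp.diff(Gamma, D) - hbar*K/2) == 0
```

At this truncation the two sides agree identically, not merely up to $\mathcal{O}(\lambda^3)$ remainders.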
In the limit $\mathcal{K}\to 0$, we recover the 1PI effective action \begin{equation}\label{eq:1PIfull} \Gamma^{\rm 1PI}(\phi) =\frac{1}{2}\phi^2+\frac{\lambda}{4!}\phi^4+\frac{\hbar}{2}\ln G^{-1}(\phi) +\hbar^2\left[\frac{\lambda}{8}G^2(\phi)-\frac{\lambda^2}{12}\phi^2G^3(\phi) \right]+\hbar^3\left[ -\frac{\lambda^2}{12}G^4(\phi)\right] \end{equation} at order $\lambda^2$, where we have used \begin{equation} \Delta\big|_{\mathcal{K}=0}=G(\phi)+\hbar\left[-\frac{\lambda}{2}G^3(\phi)+\frac{\lambda^2}{2}\phi^2G^4(\phi)\right]+\hbar^2\left[\frac{2\lambda^2}{3}G^4(\phi)\right]+\mathcal{O}(\lambda^3) \end{equation} from Eq.~\eqref{eq:Delta_inv_lambda_sq}. On the other hand, in the limit $\mathcal{K}\to-\infty$, we have that $\Delta \to 0^+$, and \begin{equation} \Gamma^{\rm 2PI}(\phi,0) =\frac{1}{2}\phi^2+\frac{\lambda}{4!}\phi^4-\frac{\hbar}{2}\lim_{\Delta\to 0^+}\ln \Delta, \end{equation} which is, up to an infinite constant shift, the original classical action. The infinite shift is the zero-dimensional analogue of the vacuum energy, which diverges logarithmically in zero spacetime dimensions. \section{Inverse \texorpdfstring{``two-point function''}{"two-point function"} from convexity} \label{sec:2point} For the closure of the flow equations that we shall be deriving, we require an expression for the inverse two-point function in terms of partial derivatives of the effective action. While we have seen that such an expression exists in the example above [Eq.~\eqref{eq:Delta_inv_lambda_sq}], it is important that an analogous expression should exist in the general case. 
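The 1PI limit can itself be checked symbolically: substituting $\Delta|_{\mathcal{K}=0}$ into Eq.~\eqref{eq:2PIfull} and expanding in $\hbar$ reproduces Eq.~\eqref{eq:1PIfull} at order $\lambda^2$. In the sketch below, only the $\mathcal{O}(\hbar)$ shift in $\Delta$ is needed, because $\Gamma^{\rm 2PI}$ is stationary in $\Delta$ at $\mathcal{K}=0$, so the $\mathcal{O}(\hbar^2)$ piece first contributes beyond $\hbar^3$:

```python
import sympy as sp

lam, hbar, phi, D = sp.symbols('lambda hbar phi Delta', positive=True)
G = 1/(1 + lam*phi**2/2)

Gamma2PI = (phi**2/2 + lam*phi**4/24
            + hbar/2*(sp.log(1/D) + D/G - 1)
            + hbar**2*(lam*D**2/8 - lam**2*phi**2*D**3/12)
            - hbar**3*lam**2*D**4/48)

# Delta at K=0, keeping only the O(hbar) correction (sufficient here)
D0 = G + hbar*(-lam*G**3/2 + lam**2*phi**2*G**4/2)

Gamma1PI = (phi**2/2 + lam*phi**4/24 + hbar/2*sp.log(1/G)
            + hbar**2*(lam*G**2/8 - lam**2*phi**2*G**3/12)
            - hbar**3*lam**2*G**4/12)

diff = Gamma2PI.subs(D, D0) - Gamma1PI
# truncate at O(hbar^3) and O(lambda^2)
diff = sp.series(sp.series(diff, hbar, 0, 4).removeO(), lam, 0, 3).removeO()
assert sp.simplify(diff) == 0
```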
To find the expression, we consider the convexity of the effective action using the natural convex-conjugate variables \smash{$\mathcal{J}'(\phi',\Delta')=\mathcal{J}(\phi,\Delta)$} and \smash{$\mathcal{K}'(\phi',\Delta')=\frac{1}{2}\mathcal{K}(\phi,\Delta)$} for the sources, and \smash{$\phi'(\mathcal{J}',\mathcal{K}')=\phi(\mathcal{J},\mathcal{K})$} and \smash{$\Delta'(\mathcal{J}',\mathcal{K}')=\hbar \Delta(\mathcal{J},\mathcal{K}) +\phi^2(\mathcal{J},\mathcal{K})$} for the fields \cite{Millington:2019nkw}. The starting point is \begin{align} \frac{\ensuremath{\partial}\mathcal{J}'(\phi',\Delta')}{\ensuremath{\partial}\mathcal{J}'}=1, \qquad\frac{\ensuremath{\partial}\mathcal{J}'(\phi',\Delta')}{\ensuremath{\partial}\mathcal{K}'}=0,\qquad \frac{\ensuremath{\partial}\mathcal{K}'(\phi',\Delta')}{\ensuremath{\partial}\mathcal{J}'}=0, \qquad\frac{\ensuremath{\partial}\mathcal{K}'(\phi',\Delta')}{\ensuremath{\partial}\mathcal{K}'}=1. \end{align} One can then use the chain rule to introduce $\phi'$ and $\Delta'$ derivatives. 
Finally, we use \begin{align} \mathcal{J}'=\frac{\ensuremath{\partial}\Gamma^{\rm 2PI}}{\ensuremath{\partial}\phi'},\qquad \mathcal{K}'=\frac{\ensuremath{\partial}\Gamma^{\rm 2PI}}{\ensuremath{\partial}\Delta'} \end{align} to give expressions that involve the second derivatives of the 2PI action, leading to the following identities: \begin{align}\label{eq:system} \left( \begin{array}{cc} \frac{\ensuremath{\partial}^2\Gamma^{\rm 2PI}}{\ensuremath{\partial}\phi'^2} & \frac{\ensuremath{\partial}^2\Gamma^{\rm 2PI}}{\ensuremath{\partial}\Delta'\ensuremath{\partial}\phi'}\\ \frac{\ensuremath{\partial}^2\Gamma^{\rm 2PI}}{\ensuremath{\partial}\phi'\ensuremath{\partial}\Delta'} & \frac{\ensuremath{\partial}^2\Gamma^{\rm 2PI}}{\ensuremath{\partial}\Delta'^2} \end{array} \right) \left( \begin{array}{cc} \frac{\ensuremath{\partial}^2\mathcal{W}}{\ensuremath{\partial} \mathcal{J}'^2} & \frac{\ensuremath{\partial}^2\mathcal{W}}{\ensuremath{\partial} \mathcal{K}'\ensuremath{\partial} \mathcal{J}'}\\ \frac{\ensuremath{\partial}^2\mathcal{W}}{\ensuremath{\partial} \mathcal{J}'\ensuremath{\partial} \mathcal{K} '} & \frac{\ensuremath{\partial}^2\mathcal{W}}{\ensuremath{\partial} \mathcal{K} '^2} \end{array} \right) &=-\mathbb 1. \end{align} The partial derivatives with respect to primed variables can be re-expressed in terms of partial derivatives with respect to the original variables via \begin{subequations} \begin{gather} \frac{\partial}{\partial \phi'}=\frac{\partial \phi}{\partial \phi'}\frac{\partial}{\partial \phi}+\frac{\partial \Delta}{\partial \phi'}\frac{\partial}{\partial \Delta}=\frac{\partial}{\partial \phi}-\frac{2}{\hbar}\phi\frac{\partial}{\partial \Delta},\\ \frac{\partial}{\partial \Delta'}=\frac{\partial \phi}{\partial \Delta'}\frac{\partial}{\partial \phi}+\frac{\partial \Delta}{\partial \Delta'}\frac{\partial}{\partial \Delta}=\frac{1}{\hbar}\frac{\partial}{\partial \Delta}. 
\end{gather} \end{subequations} Thus, we have \begin{subequations} \begin{align} \frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^{\prime2}}&=\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^2}-\mathcal{K}(\phi,\Delta)-\frac{4}{\hbar}\phi\left[\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{1}{\hbar}\phi\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\right],\\ \frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^{\prime}\partial\Delta'}&=\frac{1}{\hbar}\left[\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{2}{\hbar}\phi\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^2}\right],\\ \frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^{\prime 2}}&=\frac{1}{\hbar^2}\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^{2}}, \end{align} \end{subequations} and Eq.~\eqref{eq:system} yields the system \begin{subequations} \label{eq:system2} \begin{align} \label{eq:conv1} &\left\{\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^2}-\mathcal{K}(\phi,\Delta)-\frac{4}{\hbar}\phi\left[\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{1}{\hbar}\phi\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\right]\right\}\Delta\nonumber\\&\qquad-\frac{2}{\hbar}\left[\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{2}{\hbar}\phi\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^2}\right]\frac{\partial^2 \mathcal{W}(\mathcal{J},\mathcal{K})}{\partial\mathcal{J}\partial \mathcal{K}}=1,\\ &\left\{\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^2}-\mathcal{K}(\phi,\Delta)-\frac{4}{\hbar}\phi\left[\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{1}{\hbar}\phi\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\right]\right\}\frac{\partial^2 
\mathcal{W}(\mathcal{J},\mathcal{K})}{\partial\mathcal{J}\partial \mathcal{K}}\nonumber\\&\qquad+\frac{2}{\hbar}\left[\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{2}{\hbar}\phi\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^2}\right]\frac{\partial^2 \mathcal{W}(\mathcal{J},\mathcal{K})}{\partial \mathcal{K}^2}=0,\\ \label{eq:conv3} &\left[\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}-\frac{2}{\hbar}\phi\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^2}\right]\Delta-\frac{2}{\hbar}\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\frac{\partial^2 \mathcal{W}(\mathcal{J},\mathcal{K})}{\partial\mathcal{J}\partial \mathcal{K}}=0,\\ &-\frac{2}{\hbar}\left\{\left[\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial \Delta}-\frac{2}{\hbar}\phi\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial\Delta^2}\right]\frac{\partial^2 \mathcal{W}(\mathcal{J},\mathcal{K})}{\partial\mathcal{J}\partial \mathcal{K}}+\frac{2}{\hbar}\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\frac{\partial^2 \mathcal{W}(\mathcal{J},\mathcal{K})}{\partial \mathcal{K}^2}\right\}=1, \end{align} \end{subequations} where we have used Eqs.~\eqref{eq:dGamm2PI_by_dphi} and~\eqref{eq:dGamm2PI_by_dDelta}, along with \begin{equation} \frac{\partial^2\mathcal{W}(\mathcal{J},\mathcal{K})}{\partial \mathcal{J}^2}=-\Delta. \end{equation} Note that Eq.~\eqref{eq:system2} first appeared in footnote 11 of Ref.~\cite{Cornwall:1974vz}. 
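For the model at hand, this last relation can be confirmed directly from the explicit series of Sec.~\ref{sec:Model}: differentiating $\mathcal{W}$ twice with respect to $\mathcal{J}$ reproduces $-\Delta$ term by term. A sympy sketch, abbreviating $a\equiv 1/(1-K)$:

```python
import sympy as sp

lam, hbar, J, K = sp.symbols('lambda hbar J K', positive=True)
a = 1/(1 - K)

# W(J,K) up to irrelevant constants
W = (-J**2*a/2 + hbar/2*sp.log(1 - K)
     + lam/24*a**2*(J**4*a**2 + 6*hbar*J**2*a + 3*hbar**2)
     - lam**2/144*a**4*(2*J**6*a**3 + 21*hbar*J**4*a**2
                        + 48*hbar**2*J**2*a + 12*hbar**3))

# Delta(J,K) from eq. (Deltadef), divided through by hbar
Delta = (a - lam/2*a**3*(J**2*a + hbar)
         + lam**2/12*a**5*(5*J**4*a**2 + 21*hbar*J**2*a + 8*hbar**2))

# d^2 W / dJ^2 = -Delta, identically at this order
assert sp.simplify(sp.expand(sp.diff(W, J, 2) + Delta)) == 0
```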
We may then take Eqs.~\eqref{eq:conv1} and~\eqref{eq:conv3}, and solve them to find\footnote{This expression for the inverse two-point function disagrees with the incorrect expression appearing in Eq.~(21) and the seventh row of Tab.~I of Ref.~\cite{Alexander:2019cgw}; this has been corrected in an erratum to this work.} \begin{equation} \label{eq:inv2} \Delta^{-1}=\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi^2}-\mathcal{K}(\phi,\Delta)-\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}\left(\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \Delta^2}\right)^{-1}\frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \phi\partial\Delta}. \end{equation} It is straightforward to verify that the expression given in our example \eqref{eq:Delta_inv_lambda_sq} is consistent with this general result. In fact, one may take this approach and apply it to multiple fields, as is done in App.~\ref{sec:app_multi_field_convexity}. Doing so leads to a formula that is applicable also in the full field theory setting. \section{2PI flow equations} \label{sec:2PIflow} Our plan now is to see how the 2PI action changes as we vary the source $\mathcal{K}$. In the field theory case, this source, when taken to be the regulator of the renormalisation group flow, is what would be responsible for cutting off certain modes. Even in this zero-dimensional scenario, however, we may observe how $\Gamma^{\rm 2PI}$ depends on $\mathcal{K}$. From Eq.~\eqref{eq:Delta_K_phi}, we see that fixing $\mathcal{K}$ to a given value presents us with a relation between the otherwise independent variables $\phi$ and $\Delta$, giving a curve in the $\phi$--$\Delta$ plane; different $\mathcal{K}$ lead to different curves. We also know how $\Gamma^{\rm 2PI}$ depends on $\phi$ and $\Delta$ [see Eq.~\eqref{eq:2PIfull}], and so we are led to Fig.~\ref{fig:Gamma_phi_Delta}. 
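The consistency of Eq.~\eqref{eq:inv2} with Eq.~\eqref{eq:Delta_inv_lambda_sq} can be made explicit symbolically; a sympy sketch, truncating at order $\lambda^2$:

```python
import sympy as sp

lam, hbar, phi, D = sp.symbols('lambda hbar phi Delta', positive=True)

# Gamma^{2PI}(phi, Delta) of eq. (2PIfull)
Ginv = 1 + lam*phi**2/2
Gamma = (phi**2/2 + lam*phi**4/24
         + hbar/2*(sp.log(1/D) + Ginv*D - 1)
         + hbar**2*(lam*D**2/8 - lam**2*phi**2*D**3/12)
         - hbar**3*lam**2*D**4/48)

# the source K(phi, Delta) of eq. (K_pert)
K = ((D - 1)/D + lam*(phi**2 + hbar*D)/2
     - lam**2*(3*phi**2*hbar*D**2 + hbar**2*D**3)/6)

# right-hand side of eq. (inv2)
rhs = (sp.diff(Gamma, phi, 2) - K
       - sp.diff(Gamma, phi, D)**2/sp.diff(Gamma, D, 2))

# equals Delta^{-1} up to O(lambda^3)
assert sp.simplify(sp.series(rhs - 1/D, lam, 0, 3).removeO()) == 0
```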
\begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{order_lambda_flow_Gamma_phi_Delta.pdf} \caption{A plot of $\Gamma^{\rm 2PI}(\phi,\Delta)$ at order $\lambda^2$, showing lines of constant $\mathcal{K}$. The dashed line is $\mathcal{K}=0$, which is just the 1PI curve, and the lines above that are for increasing $\mathcal{K}=0.5,\;0.75,\;1.0$. The coupling is set at $\lambda=0.1$ with $\hbar=1$.} \label{fig:Gamma_phi_Delta} \end{figure} It is also useful to see how $\Gamma^{\rm 2PI}$ depends on $\phi$ for a sample of $\mathcal{K}$ source values, which we present in Fig.~\ref{fig:Gamma_phi}, as this shows how the form of the 2PI effective action changes from one fixed value of $\mathcal{K}$ to another. In the remainder of this section, we shall formalise this observation and present the flowing action in terms of flow equations for the action's parameters. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{order_lambda_flow_Gamma_phi.pdf} \caption{A plot of $\Gamma^{\rm 2PI}(\phi,\Delta_k(\phi))$ at order $\lambda^2$, for fixed $\mathcal{K}=\mathcal{R}_k$. The dashed line is for $\mathcal{K}=0$, and the others are at $\mathcal{K}=0.5,\;0.75,\;1.0$. The coupling is set at $\lambda=0.1$ with $\hbar=1$.} \label{fig:Gamma_phi} \end{figure} In order to more closely match the nomenclature found in the literature, we shall denote \begin{equation} \mathcal{K}(\phi,\Delta)=\mathcal{R}_k(\phi,\Delta), \end{equation} reminiscent of the regulator of the functional renormalisation group, where $k$ is a real parameter.\footnote{Note that we use a non-standard sign convention for the regulator.} This choice of $\mathcal{K}$ fixes $\Delta=\Delta_k$ (and $\mathcal{J}=\mathcal{J}_k$) to be a function of the parameter $k$ (but such that $\partial_k\phi=0$, and $\phi$ therefore remains a free parameter). 
The 2PI flow equation reads~\cite{Alexander:2019cgw} \begin{equation} \label{eq:flow2PI} \partial_k\Gamma^{\rm 2PI}(\phi,\Delta_k)=\frac{\partial \Gamma^{\rm 2PI}(\phi,\Delta_k)}{\partial \Delta_k}\partial_k\Delta_k=\frac{\hbar}{2}\mathcal{R}_k(\phi,\Delta_k)\partial_k\Delta_k. \end{equation} In order to proceed further, we make the following Ansatz for the 2PI effective action: \begin{equation} \label{eq:2PIansatz} \Gamma^{\rm 2PI}(\phi,\Delta_k)=\alpha_k(\Delta_k)+\frac{1}{2}\beta_k(\Delta_k)\phi^2+\frac{1}{4!}\gamma_k(\Delta_k)\phi^4, \end{equation} wherein we emphasise that the unknown functions $\alpha_k$, $\beta_k$ and $\gamma_k$ are functions of $\Delta_k$. \subsection{First order in \texorpdfstring{$\lambda$}{lambda}} In order to avoid unnecessary details, we shall start with the $\mathcal{O}(\lambda)$ computations, wherein Eq.~\eqref{eq:inv2} reduces to \begin{equation} \Delta^{-1}_k=\frac{\partial^2 \Gamma^{\rm 2PI}(\phi,\Delta_k)}{\partial \phi^2}-\mathcal{R}_k(\phi,\Delta_k)+\mathcal{O}(\lambda^2), \end{equation} and we therefore have \begin{equation} \Delta^{-1}_k=\beta_k(\Delta_k)-\mathcal{R}_k(\phi,\Delta_k)+\frac{1}{2}\gamma_k(\Delta_k)\phi^2. \end{equation} We can extract the flow equations for these functions by taking partial derivatives of the flow equation \eqref{eq:flow2PI} with respect to $\phi$ at fixed $\Delta_k$, and setting $\phi=0$. This process gives \begin{subequations} \begin{align} \partial_k\alpha_k(\Delta_k)&=\frac{\hbar}{2}\left[\mathcal{R}_k(\phi,\Delta_k)\partial_k\Delta_k\right]_{\phi=0},\\ \partial_k\beta_k(\Delta_k)&=\frac{\hbar}{2}\left[\frac{\partial^2\mathcal{R}_k(\phi,\Delta_k)}{\partial \phi^2}\partial_k\Delta_k\right]_{\phi=0},\\ \partial_k\gamma_k(\Delta_k)&=\frac{\hbar}{2}\left[\frac{\partial^4\mathcal{R}_k(\phi,\Delta_k)}{\partial \phi^4}\partial_k\Delta_k\right]_{\phi=0}. 
\end{align} \end{subequations} While it may seem strange to be varying the would-be regulator with respect to $\phi$,\footnote{We note that this subtlety of the 2PI approach was overlooked in Ref.~\cite{Alexander:2019quf}; an update to this work is in preparation.} it is helpful to recall that \begin{equation} \frac{\hbar}{2}\mathcal{R}_k(\phi,\Delta_k)=\frac{\partial \Gamma^{\rm 2PI}(\phi,\Delta_k)}{\partial \Delta_k}=\frac{\partial \alpha_k(\Delta_k)}{\partial \Delta_k}+\frac{1}{2}\phi^2\frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}+\frac{1}{4!}\phi^4\frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}. \end{equation} Making use of this, it follows that the individual flow equations take the form \begin{subequations} \begin{align} \partial_k\alpha_k(\Delta_k)&=\left.\frac{\partial \alpha_k(\Delta_k)}{\partial \Delta_k}\partial_k\Delta_k\right|_{\phi=0},\\ \partial_k\beta_k(\Delta_k)&=\left.\frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}\partial_k\Delta_k\right|_{\phi=0},\\ \partial_k\gamma_k(\Delta_k)&=\left.\frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}\partial_k\Delta_k\right|_{\phi=0}, \end{align} \end{subequations} which is nothing other than what we would expect by application of the chain rule. Now, in order to close the flow equations, we need to take partial derivatives with respect to $\phi$ of the Schwinger-Dyson equation, viz.~the expression for $\Delta^{-1}_k$. This process yields \begin{subequations} \begin{align} \frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}&=0,\\ \frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}&=\frac{\hbar}{2}\gamma_k(\Delta_k),\\ \frac{\partial \alpha_k(\Delta_k)}{\partial \Delta_k}&=\frac{\hbar}{2}\mathcal{R}_k(0,\Delta_k). \end{align} \end{subequations} Returning to the flow equations, we therefore have that \begin{equation} \partial_k\gamma_k(\Delta_k)=0, \end{equation} giving $\gamma_k=\lambda$. 
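At $\phi=0$, the results just obtained close the system: $\Delta_k^{-1}=\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)$ and $\partial \beta_k/\partial\Delta_k=\tfrac{\hbar}{2}\gamma_k$ together give $\partial_k\beta_k=\tfrac{\hbar\lambda}{2}\,\partial_k\left[\beta_k-\mathcal{R}_k(0,\Delta_k)\right]^{-1}$, which can be solved for $\partial_k\beta_k$ and integrated numerically. The sketch below does this for an illustrative regulator choice of our own, $\mathcal{R}_k=-k$ (so that $\mathcal{R}_k\to-\infty$ as $k\to\infty$):

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar, lam = 1.0, 0.01

# illustrative regulator choice (ours): R_k = -k
R = lambda k: -k
dR = lambda k: -1.0

# d(beta)/dk = (hbar*lam/2) d/dk [beta - R]^(-1), rearranged for d(beta)/dk
def rhs(k, y):
    beta = y[0]
    return [0.5*hbar*lam*dR(k)/((beta - R(k))**2 + 0.5*hbar*lam)]

k0 = 1.0e3                                  # stands in for k -> infinity
beta_init = 1 + 0.5*hbar*lam/(1 - R(k0))    # boundary condition at large k
sol = solve_ivp(rhs, (k0, 0.0), [beta_init], rtol=1e-10, atol=1e-12)

# compare with the first-order closed form at k = 0, beta = 1 + hbar*lam/2
beta_exact = 1 + 0.5*hbar*lam/(1 - R(0.0))
assert abs(sol.y[0, -1] - beta_exact) < 1e-3
```

The small residual is the expected $\mathcal{O}(\lambda^2)$ difference between the exact solution of this truncated flow and the first-order closed form.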
The integration constant has been fixed by matching to the limit $\mathcal{R}_k\to -\infty$ of the inverse two-point function, which we take to be the limit $k\to \infty$, emulating the true RG case. Next, we have \begin{equation} \partial_k\beta_k(\Delta_k)=\frac{\hbar\lambda}{2}\partial_k\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^{-1}. \end{equation} The solution is \begin{equation} \beta_k(\Delta_k)=1+\frac{\hbar\lambda}{2}\frac{1}{1-\mathcal{R}_k(0,\Delta_k)}, \end{equation} where we have again fixed the integration constant by matching to the limit $\mathcal{R}_k\to -\infty$ of the inverse two-point function. Finally, \begin{equation} \partial_k\alpha_k(\Delta_k)=\frac{\hbar}{2}\mathcal{R}_k(0,\Delta_k)\partial_k\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^{-1}. \end{equation} This can be solved perturbatively, by writing $\beta_k(\Delta_k)=\beta^{(0)}_k(\Delta_k)+\lambda\beta^{(1)}_k(\Delta_k)$, and using the boundary condition that $\beta^{(0)}_k=1$. In this way, we find the solution \begin{equation} \alpha_k(\Delta_k)=\alpha_0+\frac{\hbar}{2}\frac{1}{1-\mathcal{R}_k(0,\Delta_k)}+\frac{\hbar}{2}\ln\left[1-\mathcal{R}_k(0,\Delta_k)\right]+\frac{\lambda\hbar^2}{8}\frac{1-3\mathcal{R}_k(0,\Delta_k)}{\left[1-\mathcal{R}_k(0,\Delta_k)\right]^3}. \end{equation} Substituting these results back into the Ansatz \eqref{eq:2PIansatz}, and using the fact that \begin{equation} \mathcal{R}_k(0,\Delta_k)=\frac{\Delta_k-1}{\Delta_k}+\frac{\hbar\lambda}{2}\Delta_k, \end{equation} we recover the 2PI effective action in Eq.~\eqref{eq:2PIfull} for \begin{equation} \alpha_0=-\frac{\hbar}{2}. \end{equation} Thus, we have seen that the 2PI flow equation, along with the Ansatz \eqref{eq:2PIansatz}, is fully self-consistent. \subsection{Second order in \texorpdfstring{$\lambda$}{lambda}} Beyond first order in $\lambda$, the situation is more complicated but no less tractable. 
In this case, we need \begin{subequations} \begin{align} \frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta_k)}{\partial \phi\partial \Delta_k}&=\frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}\phi+\frac{1}{3!}\frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}\phi^3,\\ \frac{\partial^2\Gamma^{\rm 2PI}(\phi,\Delta_k)}{\partial \Delta_k^2}&=\frac{\partial^2\alpha_k(\Delta_k)}{\partial \Delta_k^2}+\frac{1}{2}\frac{\partial^2\beta_k(\Delta_k)}{\partial \Delta_k^2}\phi^2+\frac{1}{4!}\frac{\partial^2\gamma_k(\Delta_k)}{\partial \Delta_k^2}\phi^4. \end{align} \end{subequations} It then follows that \begin{align} \Delta_k^{-1}&=\beta_k(\Delta_k)-\frac{2}{\hbar}\left[\frac{\partial\alpha_k(\Delta_k)}{\partial \Delta_k}+\frac{1}{2}\frac{\partial\beta_k(\Delta_k)}{\partial \Delta_k}\phi^2+\frac{1}{4!}\frac{\partial\gamma_k(\Delta_k)}{\partial \Delta_k}\phi^4\right]+\frac{1}{2}\gamma_k(\Delta_k)\phi^2\nonumber\\&\qquad-\left[\frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}\phi+\frac{1}{3!}\frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}\phi^3\right]^2\left[\frac{\partial^2\alpha_k(\Delta_k)}{\partial \Delta_k^2}+\frac{1}{2}\frac{\partial^2\beta_k(\Delta_k)}{\partial \Delta_k^2}\phi^2+\frac{1}{4!}\frac{\partial^2\gamma_k(\Delta_k)}{\partial \Delta_k^2}\phi^4\right]^{-1}. \end{align} While this looks horrendous, the procedure is the same as before. We take derivatives with respect to $\phi$ and $\Delta_k$ and evaluate at $\phi=0$ in order to determine the various $\Delta_k$ derivatives of $\alpha_k$, $\beta_k$ and $\gamma_k$. 
In this way, we can show that (to order $\lambda^2$) \begin{subequations} \begin{align} \frac{\partial \alpha_k(\Delta_k)}{\partial \Delta_k}&=\frac{\hbar}{2}\mathcal{R}_k(0,\Delta_k),\\ \frac{\partial^2 \alpha_k(\Delta_k)}{\partial \Delta_k^2}&=\frac{\hbar}{2}\left\{\frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}+\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^2\right\},\\ \frac{\partial^3 \alpha_k(\Delta_k)}{\partial \Delta_k^3}&=\frac{\hbar}{2}\left\{\frac{\partial^2 \beta_k(\Delta_k)}{\partial \Delta_k^2}-2\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^3\right\},\\ \frac{\partial \beta_k(\Delta_k)}{\partial \Delta_k}&=\frac{\hbar}{2}\gamma_k(\Delta_k)\left\{1-\hbar\gamma_k(\Delta_k)\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^{-2}\right\},\\ \frac{\partial^2 \beta_k(\Delta_k)}{\partial \Delta_k^2}&=-\hbar^2\gamma_k^2(\Delta_k)\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^{-1},\\ \frac{\partial \gamma_k(\Delta_k)}{\partial \Delta_k}& =\mathcal{O}(\gamma_k^4). \end{align} \end{subequations} So we again have $\gamma_k=\lambda$ to leading order, and the remaining flow equations are \begin{subequations} \begin{align} \partial_k\alpha_k(\Delta_k)&=-\frac{\hbar}{2}\mathcal{R}_k(0,\Delta_k)\frac{\partial_k\beta_k(\Delta_k)-\partial_k\mathcal{R}_k(0,\Delta_k)}{\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^2},\\ \partial_k\beta_k(\Delta_k)&=-\left\{\frac{\hbar}{2}\lambda-\frac{\hbar^2}{2}\frac{\lambda^2}{\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^2}\right\}\frac{\partial_k\beta_k(\Delta_k)-\partial_k\mathcal{R}_k(0,\Delta_k)}{\left[\beta_k(\Delta_k)-\mathcal{R}_k(0,\Delta_k)\right]^2}. 
\end{align} \end{subequations} We again solve perturbatively, now writing $\beta_k(\Delta_k)=\beta^{(0)}_k(\Delta_k)+\lambda\beta^{(1)}_k(\Delta_k)+\lambda^2\beta^{(2)}_k(\Delta_k)$, making use of our knowledge of $\beta_k^{(0)}$ and $\beta_k^{(1)}$ from the previous subsection, and the solutions are \begin{subequations} \begin{align} \beta_k(\Delta_k)&=1+\frac{\hbar\lambda}{2}\frac{1}{1-\mathcal{R}_k(0,\Delta_k)}-\frac{5\hbar^2\lambda^2}{12}\frac{1}{\left[1-\mathcal{R}_k(0,\Delta_k)\right]^3},\\ \alpha_k(\Delta_k)&=\alpha_0+\frac{\hbar}{2}\frac{1}{1-\mathcal{R}_k(0,\Delta_k)}+\frac{\hbar}{2}\ln\left[1-\mathcal{R}_k(0,\Delta_k)\right]+\frac{\lambda\hbar^2}{8}\frac{1-3\mathcal{R}_k(0,\Delta_k)}{\left[1-\mathcal{R}_k(0,\Delta_k)\right]^3}\nonumber\\&-\frac{\hbar^3\lambda^2}{12}\frac{1-5\mathcal{R}_k(0,\Delta_k)}{\left[1-\mathcal{R}_k(0,\Delta_k)\right]^5}. \end{align} \end{subequations} Rewriting these results in terms of $\Delta_k$, using \begin{equation} \mathcal{R}_k(0,\Delta_k)=\frac{\Delta_k-1}{\Delta_k}+\frac{\hbar\lambda}{2}\Delta_k-\frac{\hbar^2\lambda^2}{6}\Delta_k^3, \end{equation} we find (again at order $\lambda^2$) \begin{subequations} \begin{align} \beta_k(\Delta_k)&=1+\frac{\hbar\lambda}{2}\Delta_k-\frac{\hbar^2\lambda^2}{6}\Delta_k^3,\\ \alpha_k(\Delta_k)&=\alpha_0+\frac{\hbar}{2}\left[\Delta_k+\ln\Delta_k^{-1}\right]+\frac{\hbar^2\lambda}{8}\Delta_k^2-\frac{\hbar^3\lambda^2}{48}\Delta_k^4. \end{align} \end{subequations} Setting $\alpha_0=-\hbar/2$, and substituting back into the Ansatz for $\alpha_k$, $\beta_k$ and $\gamma_k$, we recover the explicit expression for the 2PI effective action. \section{Average 1PI} \label{sec:1PI} Let us now compare the previous exposition with the average 1PI effective action and the associated Wetterich-Morris-Ellwanger flow equation \cite{Wetterich:1992yh,Morris:1993qb,Ellwanger:1993mw} (see also Ref.~\cite{Reuter:1996cp} in the context of gravity). 
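Before turning to that comparison, we note that the second-order 2PI results above lend themselves to a direct computer algebra check. The following sympy sketch (the script and variable names are ours, not part of the original derivation) substitutes $\mathcal{R}_k(0,\Delta_k)$ into the solutions for $\beta_k$ and $\alpha_k$, expands in $\lambda$, and confirms the expressions quoted above with $\alpha_0=-\hbar/2$:

```python
import sympy as sp

lam, hbar, D = sp.symbols('lambda hbar Delta', positive=True)

# Regulator expressed through the two-point function, to order lambda^2
R = (D - 1) / D + hbar * lam * D / 2 - hbar**2 * lam**2 * D**3 / 6

# Solutions of the 2PI flow equations, with alpha_0 = -hbar/2
beta = 1 + hbar * lam / 2 / (1 - R) \
    - sp.Rational(5, 12) * hbar**2 * lam**2 / (1 - R)**3
alpha = -hbar / 2 + hbar / 2 / (1 - R) + hbar / 2 * sp.log(1 - R) \
    + lam * hbar**2 / 8 * (1 - 3 * R) / (1 - R)**3 \
    - hbar**3 * lam**2 / 12 * (1 - 5 * R) / (1 - R)**5

# Expand in lambda and drop O(lambda^3)
beta_D = sp.series(beta, lam, 0, 3).removeO()
alpha_D = sp.series(alpha, lam, 0, 3).removeO()

# Expected results in terms of Delta_k
target_beta = 1 + hbar * lam * D / 2 - hbar**2 * lam**2 * D**3 / 6
target_alpha = -hbar / 2 + hbar / 2 * (D - sp.log(D)) \
    + hbar**2 * lam * D**2 / 8 - hbar**3 * lam**2 * D**4 / 48

diff_beta = sp.simplify(sp.expand_log(beta_D - target_beta, force=True))
diff_alpha = sp.simplify(sp.expand_log(alpha_D - target_alpha, force=True))
print(diff_beta, diff_alpha)  # both 0
```

Both differences vanish identically at order $\lambda^2$, confirming the self-consistency of the 2PI flow at this order.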
The average 1PI effective action is described as a modified Legendre transform \begin{equation} \Gamma^{\rm 1PI}_{\rm av}(\phi,K)=\max_{J}\left[\mathcal{W}(J,K)+J\phi+\frac{1}{2}K\phi^2\right]. \end{equation} In fact, it is the Routhian of the Schwinger function $\mathcal{W}(J,K)$, shifted by the term $K\phi^2/2$. The maximisation gives Eq.~\eqref{eq:phidef}, but evaluated at $K$ rather than $\mathcal{K}$, and we can verify that \begin{align} \mathcal{J}&=(1-K)\phi+\frac{\lambda}{6}\phi\left[ \phi^2+\frac{3\hbar}{1-K}\right]-\frac{\lambda^2}{12}\frac{\phi\hbar}{(1-K)^2}\left[3\phi^2+\frac{5\hbar}{1-K} \right],\\\nonumber \Delta&\stackrel{!}{=}-\left.\frac{\partial^2 \mathcal{W}(J,K)}{\partial J^2}\right|_{J=\mathcal{J}}=\frac{1}{1-K}-\frac{\lambda}{2}\frac{1}{\left(1-K\right)^2}\left[\phi^2+\frac{\hbar}{1-K}\right]\\ &\qquad\qquad\qquad\qquad\qquad +\frac{\lambda^2}{12}\frac{1}{(1-K)^3}\left[ 3\phi^4+\frac{15\phi^2\hbar}{1-K}+\frac{8\hbar^2}{(1-K)^2}\right], \end{align} such that the would-be two-point functions of the 2PI and average 1PI approaches coincide. Moreover, since we have the same form for the expressions $\phi(\mathcal{J},K)$ and $\Delta(\mathcal{J},K)$ in both 1PI and 2PI cases, it follows that $K=\mathcal{K}$ for the average 1PI approach. The difference, however, lies in the fact that the natural variables for the average 1PI effective action are $(\phi,\mathcal{K})$. Thus, while $\partial \Delta/\partial \phi$ (at fixed $\Delta$) is zero in the 2PI case, $\phi$ and $\mathcal{K}$ are independent for the 1PI case, such that $\partial \Delta/\partial\phi\neq 0$ (at fixed $\mathcal{K}$). 
We note that \begin{equation} \frac{\partial \Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{K})}{\partial \phi}=\mathcal{J}+\mathcal{K}\phi, \end{equation} as in the 2PI case, whereas we have \begin{equation} \frac{\partial \Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{K})}{\partial \mathcal{K}} =\frac{\ensuremath{\partial} \mathcal{W}}{\ensuremath{\partial} \mathcal{K}}+\frac{1}{2}\phi^2=\frac{\hbar}{2}\frac{\ensuremath{\partial}^2\mathcal{W}}{\ensuremath{\partial} \mathcal{J}^2}=-\frac{\hbar}{2}\Delta. \end{equation} The latter is non-zero in the limit $\mathcal{J},\mathcal{K}\to 0$, and this should be contrasted with the 2PI case, where \begin{align} \lim_{\mathcal{J},\mathcal{K}\to 0}\frac{\partial \Gamma^{\rm 2PI}(\phi,\Delta)}{\partial \mathcal{K}} &=\lim_{\mathcal{J},\mathcal{K}\to 0}\left[ \frac{\ensuremath{\partial} \Gamma^{\rm 2PI}}{\ensuremath{\partial}\phi}\frac{\ensuremath{\partial}\phi}{\ensuremath{\partial}\mathcal{K}} +\frac{\ensuremath{\partial} \Gamma^{\rm 2PI}}{\ensuremath{\partial}\Delta}\frac{\ensuremath{\partial}\Delta}{\ensuremath{\partial}\mathcal{K}}\right] \nonumber\\&=\lim_{\mathcal{J},\mathcal{K}\to 0}\left[ (\mathcal{J}+\mathcal{K}\phi)\frac{\ensuremath{\partial}\phi}{\ensuremath{\partial}\mathcal{K}} +\frac{\hbar}{2}\mathcal{K}\frac{\ensuremath{\partial}\Delta}{\ensuremath{\partial}\mathcal{K}}\right]\nonumber\\&=0. \end{align} Thus, while the average 1PI and regulator-sourced 2PI effective actions coincide as $\mathcal{K}\to 0$, their derivatives with respect to $\mathcal{K}$ do not. Notice, in addition, that while the two-point function in the 2PI case is a function of $(\mathcal{J},\mathcal{K})$, the two-point function of the 1PI case is a function of $(\phi,\mathcal{K})$.
Putting everything together, we find the explicit result \begin{align} \label{eq:1PIfinal} \Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{K}) &=\frac{1}{2}\left[\phi^2+\hbar\ln\left(1-\mathcal{K}\right)\right]+\frac{\lambda}{24}\left[\phi^4+\frac{6\hbar\phi^2}{1-\mathcal{K}}+\frac{3\hbar^2}{\left(1-\mathcal{K}\right)^2}\right]\nonumber\\ &\quad-\frac{\lambda^2}{48}\left[\frac{3\hbar\phi^4}{(1-\mathcal{K})^2}+\frac{10\hbar^2\phi^2}{(1-\mathcal{K})^3}+\frac{4\hbar^3}{(1-\mathcal{K})^4} \right]. \end{align} The inverse two-point function is obtained from the second derivative of the usual 1PI effective action, such that \begin{equation} \Delta^{-1}=\frac{\partial^2 \Gamma^{\rm 1PI}(\phi)}{\partial \phi^2}=\frac{\partial^2 \Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{K})}{\partial \phi^2}-\mathcal{K}, \end{equation} cf.~Eq.~\eqref{eq:inv2}. We can readily confirm that the result for $\Delta^{-1}$ agrees with the 2PI expression \eqref{eq:Deltainv} at order $\lambda^2$. At this point, it is important to remark that while the two-point function $\Delta$ is formally the same for the 2PI and 1PI cases, the way the perturbation theory is organised differs. In the 1PI case, the loop expansion is built out of tree-level propagators, wherein field insertions are resummed; in the 2PI case, the loop expansion is built out of two-point functions that are themselves solutions of the Schwinger-Dyson equation, wherein infinite series of loop insertions are also resummed, for instance, all proper 1PI self-energy insertions.
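These statements can be verified directly with a short sympy sketch (ours; it simply transcribes Eq.~\eqref{eq:1PIfinal} together with the expressions for $\mathcal{J}$ and $\Delta$ given above, at fixed $\mathcal{K}$):

```python
import sympy as sp

lam, hbar, phi, K = sp.symbols('lambda hbar phi K')
u = 1 / (1 - K)

# Average 1PI effective action, order lambda^2
Gamma = sp.Rational(1, 2) * (phi**2 + hbar * sp.log(1 - K)) \
    + lam / 24 * (phi**4 + 6 * hbar * phi**2 * u + 3 * hbar**2 * u**2) \
    - lam**2 / 48 * (3 * hbar * phi**4 * u**2 + 10 * hbar**2 * phi**2 * u**3
                     + 4 * hbar**3 * u**4)

# Two-point function and source from the maximisation conditions
Delta = u - lam / 2 * u**2 * (phi**2 + hbar * u) \
    + lam**2 / 12 * u**3 * (3 * phi**4 + 15 * phi**2 * hbar * u
                            + 8 * hbar**2 * u**2)
J = (1 - K) * phi + lam / 6 * phi * (phi**2 + 3 * hbar * u) \
    - lam**2 / 12 * phi * hbar * u**2 * (3 * phi**2 + 5 * hbar * u)

# dGamma/dphi = J + K*phi holds identically at this order
source_check = sp.simplify(sp.diff(Gamma, phi) - J - K * phi)

# Delta^{-1} = d^2 Gamma/dphi^2 - K inverts Delta up to O(lambda^3)
Dinv = sp.diff(Gamma, phi, 2) - K
inverse_check = sp.simplify(
    sp.series(sp.expand(Dinv * Delta), lam, 0, 3).removeO() - 1)

print(source_check, inverse_check)  # both 0
```

The first check confirms $\partial\Gamma^{\rm 1PI}_{\rm av}/\partial\phi=\mathcal{J}+\mathcal{K}\phi$ exactly at this order; the second confirms that the second $\phi$-derivative, shifted by $-\mathcal{K}$, inverts $\Delta$ up to terms of order $\lambda^3$.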
We now take $\mathcal{K}=\mathcal{R}_k$ and turn our attention to the flow equation,\footnote{Note again that we use an unconventional sign convention on the regulator $\mathcal{R}_k$.} which takes the form~\cite{Wetterich:1992yh,Morris:1993qb,Ellwanger:1993mw, Reuter:1996cp} \begin{equation} \partial_k\Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{R}_k)=\frac{\partial\Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{R}_k)}{\partial \mathcal{R}_k}\partial_k\mathcal{R}_k= -\frac{\hbar}{2}\Delta_k\partial_k\mathcal{R}_k. \end{equation} Compared to the 2PI flow equation \eqref{eq:flow2PI}, we see that the partial derivative with respect to $k$ hits the regulator directly. Thus, while the average 1PI effective action always flows in the presence of the regulator, the regulator-sourced 2PI effective action only flows if the two-point function does~\cite{Alexander:2019cgw}. Note that $\partial \mathcal{R}_k/\partial \phi=0$ in the 1PI case, since $\phi$ and $\mathcal{R}_k$ are the independent natural variables of the average 1PI effective action. We now make the Ansatz \begin{equation} \label{eq:1PIansatz} \Gamma^{\rm 1PI}_{\rm av}(\phi,\mathcal{R}_k)=\tilde{\alpha}_k(\mathcal{R}_k)+\frac{1}{2}\tilde{\beta}_k(\mathcal{R}_k)\phi^2+\frac{1}{4!}\tilde{\gamma}_k(\mathcal{R}_k)\phi^4. \end{equation} Note that we have distinguished the $\tilde{\alpha}_k$, $\tilde{\beta}_k$ and $\tilde{\gamma}_k$ of the 1PI case by a tilde, since they are not equal to their 2PI counterparts. Note also that, compared with Eq.~\eqref{eq:2PIansatz}, the $\tilde{\alpha}_k$, $\tilde{\beta}_k$ and $\tilde{\gamma}_k$ are functions of $\mathcal{R}_k$ rather than $\Delta_k$. In order to extract the flow equations for each of these functions, we now take partial derivatives with respect to $\phi$ at fixed $\mathcal{R}_k$ and evaluate at $\phi=0$. 
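The $\phi$-derivatives involved are elementary, and the two nontrivial ones can be confirmed with sympy (a sketch in our shorthand, with $A$ standing for $\tilde{\beta}_k-\mathcal{R}_k$ and $g$ for $\tilde{\gamma}_k$; neither symbol appears in the original text):

```python
import sympy as sp

# phi-derivatives of the regularised inverse propagator, evaluated at phi = 0
phi, A, g = sp.symbols('phi A g', positive=True)
f = 1 / (A + g * phi**2 / 2)

d2 = sp.diff(f, phi, 2).subs(phi, 0)
d4 = sp.diff(f, phi, 4).subs(phi, 0)

print(sp.simplify(d2 + g / A**2))         # 0, i.e. d2 = -g/A^2
print(sp.simplify(d4 - 6 * g**2 / A**3))  # 0, i.e. d4 = 6 g^2/A^3
```

Multiplying these derivatives by $-\frac{\hbar}{2}\partial_k\mathcal{R}_k$ reproduces the second-order flow equations for $\tilde{\beta}_k$ and $\tilde{\gamma}_k$ quoted in the text.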
This leads to the system \begin{subequations} \begin{align} \partial_k\tilde{\alpha}_k(\mathcal{R}_k)&=\left.-\frac{\hbar}{2}\left[\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k+\tilde{\gamma}_k(\mathcal{R}_k)\phi^2/2\right]^{-1}\right|_{\phi=0}\partial_k\mathcal{R}_k,\\ \partial_k\tilde{\beta}_k(\mathcal{R}_k)&=\left.-\frac{\hbar}{2}\left\{\frac{\partial^2}{\partial \phi^2}\left[\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k+\tilde{\gamma}_k(\mathcal{R}_k)\phi^2/2\right]^{-1}\right\}\right|_{\phi=0}\partial_k\mathcal{R}_k,\\ \partial_k\tilde{\gamma}_k(\mathcal{R}_k)&=\left.-\frac{\hbar}{2}\left\{\frac{\partial^4}{\partial \phi^4}\left[\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k+\tilde{\gamma}_k(\mathcal{R}_k)\phi^2/2\right]^{-1}\right\}\right|_{\phi=0}\partial_k\mathcal{R}_k. \end{align} \end{subequations} Proceeding to second order in $\lambda$, we find the flow equations \begin{subequations} \begin{align} \frac{\partial \tilde{\gamma}_k(\mathcal{R}_k)}{\partial k}&=-3\hbar\frac{\tilde{\gamma}_k^2(\mathcal{R}_k)\partial_k\mathcal{R}_k}{\left[\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k\right]^3},\\ \frac{\partial \tilde{\beta}_k(\mathcal{R}_k)}{\partial k}&=\frac{\hbar}{2}\frac{\tilde{\gamma}_k(\mathcal{R}_k)\partial_k\mathcal{R}_k}{\left[\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k\right]^2},\\ \frac{\partial \tilde{\alpha}_k(\mathcal{R}_k)}{\partial k}&=-\frac{\hbar}{2}\frac{\partial_k\mathcal{R}_k}{\tilde{\beta}_k(\mathcal{R}_k)-\mathcal{R}_k}. 
\end{align} \end{subequations} The solutions are \begin{subequations} \begin{align} \tilde{\gamma}_k(\mathcal{R}_k)&=\lambda-\frac{3\hbar\lambda^2}{2}\frac{1}{\left(1-\mathcal{R}_k\right)^2},\\ \tilde{\beta}_k(\mathcal{R}_k)&=1+\frac{\hbar\lambda}{2}\frac{1}{1-\mathcal{R}_k}-\frac{5\hbar^2\lambda^2}{12}\frac{1}{\left(1-\mathcal{R}_k\right)^3} ,\\ \tilde{\alpha}_k(\mathcal{R}_k)&=\frac{\hbar}{2}\ln\left(1-\mathcal{R}_k\right)+\frac{\hbar^2\lambda}{8}\frac{1}{\left(1-\mathcal{R}_k\right)^2}-\frac{\hbar^3\lambda^2}{12}\frac{1}{\left(1-\mathcal{R}_k\right)^4}, \end{align} \end{subequations} from which we readily reconstruct the explicit expression for the effective action in Eq.~\eqref{eq:1PIfinal}. We have fixed the constants of integration in $\tilde{\beta}_k$ and $\tilde{\gamma}_k$ in the limit $\mathcal{R}_k\to -\infty$ as per the 2PI case. The constant shift in $\tilde{\alpha}_k$ is arbitrary, and we have fixed it by matching to the full expression for the average 1PI effective action. We therefore conclude that the average 1PI approach is also self-consistent. Notice that in the average 1PI case the would-be quartic coupling runs for the zero-dimensional model at order $\lambda^2$, whereas the quartic coupling of the 2PI approach does not run until order $\lambda^4$. The reason for this is as follows. In the average 1PI action, whose natural variables are $(\phi,\mathcal{R}_k)$, we obtain a term $\sim \lambda^2\phi^4/(1-\mathcal{R}_k)^2$. In the 2PI approach, this diagram, which amounts to two insertions of $\lambda\phi^2$, is properly resummed into the two-point function $\Delta_k$. As such, this term is not present in the 2PI effective action, once it is written in terms of its natural variables $(\phi,\Delta_k)$. \section{Concluding remarks} \label{sec:Conc} We have compared two approaches to deriving exact flow equations based on the average 1PI and regulator-sourced 2PI effective actions by means of a zero-dimensional model.
We have clarified subtleties in the derivation of the 2PI flow equations and shown that both approaches are self-consistent. The two approaches differ in their natural variables and the way in which the perturbative expansion is structured. In addition, the regulator-sourced 2PI approach has the following properties: \begin{itemize} \item The variation of the 2PI effective action with respect to the regulator vanishes in the limit that the sources (including the regulator) vanish. \item The regulator-sourced 2PI approach inherits all of the properties of the 2PI effective action in terms of its consistent organisation of the resummation of would-be loop corrections to the two-point function. \item The 2PI effective action only runs if the two-point function is scale dependent; that is to say, if the two-point function responds to the presence of the regulator. \end{itemize} These properties motivate further systematic comparison of the regulator-sourced 2PI and average 1PI approaches to the exact flow equations of full field theoretic models, including, for instance, potential differences at non-trivial fixed points. \begin{acknowledgments} The work of PM was supported by a Nottingham Research Fellowship from the University of Nottingham; PMS acknowledges support from STFC Grant No.~ST/P000703/1. The authors would like to thank Dario Benedetti, Kevin Falls, Jan Pawlowski and Adam Rancon for constructive discussions at the 10th International Conference on Exact Renormalization Group 2020 (ERG2020), hosted by the Yukawa Institute for Theoretical Physics, Japan. \end{acknowledgments}
\section{Introduction} \label{sec:Intro} The physics of black holes is a crucial part of the current frontier of theoretical physics. Besides their importance for theoretical (and observational) astrophysics \cite{Bambi:2016lkv, Rezzolla:2016jxw}, they offer a precious laboratory for testing the foundations of our theories of fundamental interactions and of all other physical systems, and their mutual (in)compatibility: General Relativity, Quantum Mechanics (and its offspring, Quantum Information), and Statistical Mechanics. On the one hand, black hole thermodynamics \cite{Bekenstein:1972tm,Bardeen:1973gs,Bekenstein:1973ur,Hawking:1974rv, Hawking:1974sw} (see also \cite{Carlip:2014pma} for a recent review and references covering the various developments) remains to date a surprising set of (theoretical) facts, still in search of a microscopic (statistical) understanding. It constantly challenges our assumed foundations of physical theories, at least when framed within semi-classical physics: locality, conservation of information, the equivalence principle, Lorentz invariance, quantum monogamy, causality, to name only a few of the principles that have been suggested to be somehow in conflict with it. On the other hand, together with the cosmology of the very early universe, black holes (and their thermodynamics) are especially important for quantum gravity research, representing the main concrete challenge and the first testing ground at the non-perturbative level. Quantum gravity is supposed to complete the very definition of what black holes are, by providing the new physics replacing their central curvature singularity, and to identify the microscopic degrees of freedom whose statistical mechanics is ultimately responsible for their macroscopic thermodynamical properties.
In this paper, we tackle the issue of defining quantum states representing (spherically symmetric) black holes, and in particular their horizon degrees of freedom, within a full quantum gravity formalism, and of computing their macroscopic entropy from first principles, within the same formalism. The quantum gravity framework we use is group field theory \cite{Oriti:2013aqa,Oriti:2014uga}, closely related to random tensor models \cite{Gurau:2011xp} and to loop quantum gravity \cite{Rovelli:2004tv,Thiemann:2007zz}. To clarify our new contribution to this topic, it is in fact useful to relate and compare our work to the one done so far within loop quantum gravity, forming by now an extensive literature. In the canonical loop quantum gravity (LQG) approach, the main strategy for the computation of black hole entropy has been based on classical symmetry reduction. The starting point for modelling the black hole horizon is the local definition provided by the notion of Isolated Horizon (IH) \cite{Ashtekar:1998sp, Ashtekar:1999yj} (see \cite{DiazPolo:2011np, Perez:2017cmj} for reviews), which enters as a specific set of boundary conditions. Their implication is that a spherically symmetric isolated horizon appears at any $r_0$ for which the following classical geometrical relation holds \cite{Ashtekar:1999wa, Engle:2010kt}: \nopagebreak[3]\begin{eqnarray}\label{IH-cond} \mathcal{C}_{\scriptscriptstyle IH}\equiv{F}^i(A)+\frac{\pi }{\mathcal{A}_{\va IH}}(1-\gamma^2) {\Sigma}^i\label{FSigma}=0\,. \end{eqnarray} In the expression above, $\mathcal{A}_{\va IH}$ is the area of the isolated horizon, $F^i(A)$ the field strength of the Ashtekar--Barbero connection $A^i_a=\Gamma^i_a+\gamma K^i_a$ on the IH and $\Sigma^i\equiv \epsilon^i\,_{jk}\Sigma^{jk}$ the 2-form densitized triad pulled-back on the horizon.
Spherically symmetric geometries constrained by such boundary conditions are then taken to \emph{define} the classical degrees of freedom of the black hole system, which is then quantised. In the quantum theory, the IH condition \eqref{FSigma} induces a relation between the flux associated to a link coming from the bulk and ending on the horizon and the holonomy on the horizon around the given link. This is imposed as an operatorial constraint equation on the tensor product of the bulk Hilbert space, quantised through standard LQG techniques, and the boundary Hilbert space, constructed by relying on Chern-Simons theory formalism, which also defines the boundary dynamics. As a result, fluctuations of geometrical operators coming from the bulk get coupled to those living on the boundary, through identification of quantum numbers of the two Hilbert spaces, and Chern-Simons curvature excitations (\lq punctures\rq) get identified with quanta of space degrees of freedom. The microcanonical counting of these degrees of freedom \`a la Boltzmann in the semi-classical limit of large IH area yields a leading term for the entropy linear in $\mathcal{A}_{\va IH}$ together with a subleading logarithmic term \cite{Ashtekar:2000eq, Meissner:2004ju, Domagala:2004jt, Ghosh:2004wq, Corichi:2006wn, Corichi:2006bs, Agullo:2008yv, Kaul:2000kf, G.:2008mj, Engle:2011vf}. Extension of the $\mathrm{SU}(2)$-invariant formulation of isolated horizons to the distorted and the rotating cases has been achieved in \cite{Perez:2010pq, Frodden:2012en}.
The realisation that, in this classically reduced context, the relevant degrees of freedom are associated to punctures on the horizon (described by a simple Chern-Simons theory), combined with the general key result obtained in the full LQG theory that the area operator has a discrete spectrum, with spin networks as eigenstates and eigenvalues carried by the links of their supporting graphs, has motivated another well-explored strategy (in fact, the first one to be followed \cite{Smolin:1995vq, Krasnov:1996wc, Rovelli:1996dv}). This was the construction of several toy models within LQG, \emph{i.e.~} simple spin network states incorporating enough features of the mentioned description of quantised isolated horizons to have a chance to capture interesting black hole physics. Most such toy models consist of spin network states based on a single fixed graph, interpreted as having a number of nodes inside a black hole horizon (encoding the bulk degrees of freedom), but often limited to a single intertwiner state, and a number of links crossing it, providing for the horizon degrees of freedom. Both the mentioned strategies have produced very important and interesting results, and will certainly contribute to the complete understanding of the physics of quantum black holes in quantum gravity. The reliability of the results obtained within a symmetry reduced treatment is notoriously questionable, however, and the limitations of both strategies are apparent, motivating the search for a more complete description of quantum black holes within the full quantum gravity formalism. Moreover, in all the LQG-based constructions so far, the numerical value of the Barbero--Immirzi parameter needs to be fixed to recover exactly the coefficient $1/4$ in the Bekenstein--Hawking entropy area formula.
The long-standing issue, extensively debated in the literature \cite{Jacobson:2007uj, Ghosh:2011fc, Frodden:2012dq, Ghosh:2013iwa, Pranzetti:2013lma, Bodendorfer:2013hla, Ghosh:2014rra, Achour:2014eqa, BenAchour:2016mnn}, is whether this necessity might be signalling an incompleteness in the identification of the microscopic degrees of freedom counted in the LQG entropy calculation, or some other limitation in the usual constructions. On the basis of the semi-classical nature of the Bekenstein--Hawking formula and the expectation that the Barbero--Immirzi parameter should play no important role in the classical description of gravity, a natural option to remove this undesired feature of the entropy calculation is that other (non-internal and so far neglected) degrees of freedom should be taken into account. A recent proposal within the LQG framework, discussed in \cite{Freidel:2016bxd}, uses new boundary degrees of freedom representing information channels between gravitational subsystems. However, the implications of the discovery of these new degrees of freedom for the black hole entropy calculation have not been investigated in detail yet. In this paper, we will concentrate on an alternative proposal for the fundamental boundary (and bulk) degrees of freedom, working within the full quantum gravity formalism and using a construction that gives us access to a continuum description of a black hole quantum geometry. More precisely, we want to use the construction \cite{Oriti:2015qva} of GFT condensate states, generalising those used in a cosmological context \cite{Gielen:2013kla, Gielen:2013naa, Oriti:2016qtz, Gielen:2016dss}, and representing continuum spherically symmetric geometries within the group field theory (GFT) formalism. Leveraging the structure of the Fock representation of GFTs, it is possible to define a quantum black hole horizon in the full, non-truncated, theory and then compute its entropy.
A short report of some of these results appeared in \cite{Oriti:2015rwa}. Here we provide a much more detailed presentation of those calculations, as well as a more general entropy counting which extends to a wider and possibly more physically relevant class of generalized condensates for a spherically symmetric black hole. We will describe in some detail the construction of the class of states that our analysis relies on, and highlight their convenient features in the following; we note upfront, however, the main limitation of our construction. Implementing the dynamics for generic GFT condensates is not as easy as in the case of those used for cosmological applications \cite{Gielen:2013kla, Gielen:2013naa, Oriti:2016qtz, Gielen:2016dss}. Therefore we will treat these states as kinematical trial states, under the hypothesis that they can represent some reasonable approximation of realistic states, \emph{i.e.~} solutions of the full quantum dynamics, at least in some regime. This remains a hypothesis, but it is supported by three considerations. First, these states naturally include some form of homogeneity (specifically, wavefunction homogeneity, to be clarified in the following) which already restricts the possible shape of the states in combination with the combinatorial restrictions that allow one to assign them a clear topological interpretation (in particular, as encoding spherical symmetry). This sort of restriction is expected to apply also to an exact state (\emph{i.e.~} solving the equations of motion of the theory) with spherical symmetry, or some slightly less local variant (e.g. involving vertices along tangential loops), resulting from constraints on curvature observables. Second, very importantly (and for the first time, to the best of our knowledge), the quantum states we use already include a sum over a family of triangulations, which is an inevitable consequence of the interacting nature of the GFT equations of motion.
It is not obvious that the superposition of different graphs realised in physical states is of the same melonic nature that we are using, even if the dominance of melonic diagrams is a recurrent property of GFT (and random tensor) models \cite{Baratin:2013rja}; however, the superposition that we are using explores a whole family of triangulations obtained by refinement, which means that, while differing in some crucial aspects, we might still be close to the exact state (indeed, for general wavefunctions, states associated to different graphs are \emph{not} orthogonal \cite{Oriti:2014uga}). While the calculation of the overlap between our condensate states and an exact spherically symmetric state (assuming this notion makes sense in the full theory) is nontrivial, we can expect that this overlap is nonzero\footnote{Its precise value would be a quantitative measure of the goodness of the approximation used.}. Third, these particular states implement a non-obvious macroscopic property, \emph{i.e.~} holography, in a form compatible with a non-perturbative full quantum gravity regime. This means that they contain some of the structural properties necessary to match the dynamics of the classical theory. Still, it remains true that our analysis stays at the kinematical level (as in all the previous LQG literature on the subject). We will use, however, a possible proxy for the dynamics of the theory, because we will impose a condition of maximisation of the entropy that one could expect to be satisfied by solutions of the quantum dynamics and, in particular, by black hole configurations. The (maximal) entropy we will compute for our states will be shown to be interpretable both as a Boltzmann entropy of horizon degrees of freedom and as an entanglement entropy across the same horizon, to scale with the mean area of the black hole horizon, and to match (under additional assumptions, amounting to semi-classicality conditions) the Bekenstein--Hawking result.
We depart from the canonical LQG approach to black hole entropy in three main ways: $i)$ the horizon quantum state is defined including a sum over triangulations, including very refined ones, admitting in this way an interpretation in terms of a continuum geometry, whose information is encoded in a single collective variable, \emph{i.e.~} the condensate wavefunction; crucially, this basic aspect of the construction also implies that our states encode in their very definition a coarse-graining of microscopic degrees of freedom allowing one to control them via a limited number of variables---this point is also relevant for their (lack of) dynamical character: being the result of a coarse graining, we should not expect them to be exact solutions of the microscopic quantum dynamics---; $ii)$ the interior bulk degrees of freedom are not removed (or drastically reduced) by hand through the introduction of a single intertwiner model; space-time does not end at the horizon in our construction, but both an interior and an exterior bulk are included in the quantum state; $iii)$ our construction relies uniquely on structures and techniques proper of the GFT formalism, \emph{i.e.~} both boundary and bulk degrees of freedom are described in a unique, consistent way, removing ambiguities present in the canonical LQG approach, where spin-network states are coupled to a Chern-Simons theory on the boundary\footnote{See however \cite{Sahlmann:2011xu, Pranzetti:2014tla} for a more uniform treatment of bulk and boundary degrees of freedom through techniques developed in the context of 2+1 LQG \cite{Sahlmann:2011uh, Sahlmann:2011rv, Noui:2011im, Noui:2011aa, Pranzetti:2014xva}.}. Most importantly, with all the limitations of our work, and the inevitable approximations and assumptions, we work from beginning to end within the full quantum gravity formalism, without any preliminary classical symmetry reduction and with realistic quantum states of the full theory.
These points of departure from the standard treatment are also at the origin of a remarkable feature of the entropy calculation we present in this paper: by replacing the area operator eigenstates for the quantum isolated horizon by condensate states, the result of the entropy calculation yields the semi-classical Bekenstein--Hawking entropy area law with no explicit dependence on the value of the Barbero--Immirzi parameter. \section{Shell condensate state: group element representation} \label{sec:BH} We start by presenting in some detail the construction of our quantum black hole states in the GFT formalism. We will do so, in this and in the next sections, in two equivalent representations of the Hilbert (Fock) space of the theory. This serves also the purpose of showing the generality of our construction, and of our results. We take a sort of engineering approach, by building up our quantum states piece by piece, starting from the fundamental building blocks provided by the GFT formalism, \emph{i.e.~} GFT quanta corresponding to individual spin network vertices, in turn dual to fundamental 3D simplices. Specifically, we construct spherically symmetric configurations of quantum space as glueings (along a radial direction) of homogeneous spherical shells. As pointed out above, in this work we build upon the construction of a continuum quantum geometry representing a spherically symmetric shell performed in \cite{Oriti:2015qva}, to which we refer for notation and basic notions.\footnote{For the reader's convenience, we have reported the basic notions in Appendix \ref{app:GFTFock}.} The construction can be easily described: we start with a seed state for a given shell, containing few quanta, and then we act upon it with a series of refinement operators in order to increase the number of fundamental blocks (four-valent vertices), preserving the initial topology. The same wavefunction is associated to each new fundamental block, enforcing homogeneity and symmetry.
Therefore, the GFT condensate state for a given shell is formed by an infinite superposition of graphs, each with a certain number of 4-vertices connected together. A colour $t=\{B,W\}$ is associated to each 4-vertex and four $SU(2)$ group elements (denoted by the letter $g$ variously decorated) are assigned to the links departing from a given 4-vertex and labelled by a number $I=\{1,2,3,4\}$. A shell is formed of three parts: an outer boundary, an inner boundary and a bulk in between. In order to keep track of these three parts of each shell, we add a colour $s=\{+,0,-\}$ to the vertex wavefunction, labelling respectively these three parts, so that we can specify the region to which a given vertex of a shell belongs. Following the conventions of \cite{Oriti:2015qva} (see also Appendix \ref{app:GFTFock}), the action of the refinement operators on the initial seed state is such that each boundary of a given shell is formed by open radial links all with the same colour, with outer and inner boundaries having different colours. The next step is to glue shells together. In order to do this, while still being able to distinguish different shells, we need to introduce a single extra label $r\in \mathbb{N}$, also associated to the shell wavefunction, which can be interpreted as an effective radial coordinate. 
Therefore, the field operator associated to the fundamental building block $v$ reads \nopagebreak[3]\begin{equation}\label{c-field} \hat{\sigma}_{r,t^{\scriptscriptstyle v}s^{\scriptscriptstyle v}}(h^v_I) = \int \mathrm{d} g^v_I\; \sigma_{r,s^{\scriptscriptstyle v}}(h^v_Ig^v_I) \,\hat{\varphi}_ {t^{\scriptscriptstyle v}}(g^v_I)\,, \quad\quad \hat{\sigma}^{\dagger}_{r,t^{\scriptscriptstyle v}s^{\scriptscriptstyle v}}(h^v_I) = \int \mathrm{d} g^v_I\; \overline{\sigma_{r,s^{\scriptscriptstyle v}}(h^v_Ig^v_I)} \,\hat{\varphi}^{\dagger}_ {t^{\scriptscriptstyle v}}(g^v_I)\, \end{equation} satisfying the commutation relations \nopagebreak[3]\begin{eqnarray}\label{c-comm} \left[\hat{\sigma}_{r,t^{\scriptscriptstyle v}s^{\scriptscriptstyle v}}(h^v_I), \hat{\sigma}^{\dagger}_{r',t^{\scriptscriptstyle w}s^{\scriptscriptstyle w}}(h^w_I) \right] &=& \delta_{r,r'}\delta_{t^{\scriptscriptstyle v}, t^{\scriptscriptstyle w}}\delta_{s^{\scriptscriptstyle v}, s^{\scriptscriptstyle w}}\Delta_{L}(h^v_I, h^w_I)\nonumber\\ &\equiv& \delta_{r,r'}\delta_{t^{\scriptscriptstyle v}, t^{\scriptscriptstyle w}}\delta_{s^{\scriptscriptstyle v}, s^{\scriptscriptstyle w}} \int_{\scriptscriptstyle SU(2)} d\gamma \prod_{I=1}^4 \delta(\gamma h^v_I (h^w_I)^{-1})\,, \end{eqnarray} where the r.h.s. guarantees the left gauge invariance of the vertex wavefunction, namely \begin{equation}\label{left-inv} \hat{\sigma}_{r,t^{\scriptscriptstyle v}s^{\scriptscriptstyle v}}(h^v_I) =\hat{\sigma}_{r,t^{\scriptscriptstyle v}s^{\scriptscriptstyle v}}(\gamma h^v_I)\,,~ \forall \gamma \in \mathrm{SU}(2). \end{equation} Moreover, the $\delta_{r,r'}$ implies that operators associated to different shells commute with each other. The above field operators are constructed out of the fundamental GFT field operators by convolution with the condensate wavefunction $\sigma$; they thus create/annihilate GFT quanta, all associated with such wavefunction. 
This association of a single wavefunction with all the quanta forming a given shell is what we call \lq wavefunction homogeneity\rq, which puts these states in correspondence with homogeneous (continuum) spatial geometries, and is also what characterizes the same states as GFT condensates. The condensate quanta are then glued to one another, for a given radial parameter, to form the 3D triangulations constituting the shell, with quantum correlations encoding spatial topology \cite{Donnelly:2008vx,Chirco:2017xjb}. In order to form a full space foliation we glue all the radial links belonging to the outer boundary of a given shell $r$ with the radial links belonging to the inner boundary of the shell $r+1$. For the glueing to be done consistently, the two boundaries have to have the same number and colour of radial links. Two glued shells can be graphically represented as \nopagebreak[3]\begin{equation}\label{Shells} \begin{array}{c} \includegraphics[width=3cm]{Shells.pdf}\,. \end{array} \end{equation} The use of bipartite coloured graphs to encode the information about the spatial topology suggests that it is most convenient to adopt a construction of the seed state in terms of melonic graphs, and of the associated refinement operators in terms of dipole (or melonic) moves.
Explicitly, the seed state for a given shell $r$ is graphically represented as \nopagebreak[3]\begin{equation}\label{seedgraph} \begin{array}{c} \includegraphics[width=7cm]{GraphShell.pdf} \end{array} \end{equation} and, in terms of field operators, the seed state is given by \nopagebreak[3]\begin{eqnarray}\label{tau} \ket{\tau} &=& \int (dg)^{10} \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, B+}(e,g_{2},g_{3},g_4) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, W+}(e,g'_{2},g_{3},g_4) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, B0}(g''_{1},g'_{2},g'''_{3},g''_4)\nonumber\\ &&~~~~~~~~~~~\hat{\sigma}^{\dagger}_{\scriptscriptstyle r, W0}(g''_{1},g_{2},g''_{3},g''_4) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, B-}(g''''_{1},g''''_{2},g''_{3},e) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, W-}(g''''_{1},g''''_{2},g'''_{3},e) \left| 0 \right\rangle \, , \end{eqnarray} where we have arbitrarily assigned the colour 1 to the radial links of the boundary $+$~and the colour 4 to the boundary $-$, and, for the moment, we have set the gluing group elements $h$'s associated to both sets of radial links to the identity. There are three refinement operators for each shell: Two refine the boundaries and one the bulk vertices. The complete set of refinement operators has been studied in \cite{Oriti:2015qva}. Here we just concentrate on one of the two boundaries, namely the $+$~one. The construction for the other one follows the same logic. 
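Before introducing the refinement operators, the combinatorics of the seed state \eqref{tau} can be cross-checked mechanically: every internal group element must appear in exactly two field operators, once on a black and once on a white vertex, and in the same argument slot (link colour), while the radial links of the $+$ and $-$ boundaries stay open. A minimal sketch of this bookkeeping in Python (the string labels are ours, with primes written as `p`; this is pure combinatorics, not a Fock-space computation):

```python
from collections import defaultdict

# Each vertex: (colour t, boundary label s) -> four group-element labels,
# slot index = link colour 1..4; 'e' marks an open (radial) link.
seed = {
    ("B", "+"): ("e", "g2", "g3", "g4"),
    ("W", "+"): ("e", "g2p", "g3", "g4"),
    ("B", "0"): ("g1pp", "g2p", "g3ppp", "g4pp"),
    ("W", "0"): ("g1pp", "g2", "g3pp", "g4pp"),
    ("B", "-"): ("g1pppp", "g2pppp", "g3pp", "e"),
    ("W", "-"): ("g1pppp", "g2pppp", "g3ppp", "e"),
}

links = defaultdict(list)
for (t, s), labels in seed.items():
    for slot, lab in enumerate(labels, start=1):
        if lab != "e":
            links[lab].append((t, slot))

for lab, ends in links.items():
    assert len(ends) == 2                 # every internal link glues two vertices
    (t1, c1), (t2, c2) = ends
    assert {t1, t2} == {"B", "W"}         # bipartite: black glued to white
    assert c1 == c2                       # matching link colour on both ends
# open radial links: colour 1 on the + boundary, colour 4 on the - boundary
assert all(seed[(t, "+")][0] == "e" and seed[(t, "-")][3] == "e" for t in "BW")
```

The check confirms the bipartite, topology-preserving glueing pattern of the six-vertex seed: ten internal links, each pairing a black and a white vertex through the same colour slot.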
The action of the operator for the refinement of white vertices has a simple graphical representation: \nopagebreak[3]\begin{equation} \widehat{\mathcal{M}}_{\scriptscriptstyle r, W +}:~~ \begin{array}{c} \includegraphics[width=1.8cm]{TW.pdf} \end{array}~~~\rightarrow~~~ \begin{array}{c} \includegraphics[width=5.5cm]{RefWv2.pdf}\label{refW}\,, \end{array} \end{equation} with the one for black vertices having a similar structure (notice, however, the colours of the edges in the loop): \nopagebreak[3]\begin{equation} \widehat{\mathcal{M}}_{\scriptscriptstyle r, B +}:~~ \begin{array}{c} \includegraphics[width=1.8cm]{TB.pdf} \end{array}~~~\rightarrow~~~ \begin{array}{c} \includegraphics[width=5.5cm]{RefB.pdf}\label{refB}\,. \end{array} \end{equation} These two moves are the ones involving the minimum number of vertices while keeping the topology fixed. In terms of the group fields, the two move operators read \begin{align} \widehat{\mathcal{M}}_{\scriptscriptstyle r, W +} \equiv \int & dk_2 dk_3 dk_4 dh_{4'} dh_{2'} dh_{3'} \nonumber \\ & \hat{\sigma}^{\dagger}_{\scriptscriptstyle r,W+}(e,k_{2},h_{3'},h_{4'}) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r,B+}(e,h_{2'},h_{3'},h_{4'}) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r,W+}(e,h_{2'},k_{3},k_{4}) \hat{\sigma}_{\scriptscriptstyle r,W+}(e,k_{2},k_{3},k_{4}) \label{move2} \end{align} and \begin{align} \widehat{\mathcal{M}}_{\scriptscriptstyle r, B +} \equiv \int & dk_2 dk_3 dk_4 dh_{4'} dh_{2'} dh_{3'} \nonumber \\ & \hat{\sigma}^{\dagger}_{\scriptscriptstyle r,B+}(e,h_{2'},h_{3'},k_{4}) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, W+}(e,h_{2'},h_{3'},h_{4'}) \hat{\sigma}^{\dagger}_{\scriptscriptstyle r, B+}(e,k_{2},k_{3},h_{4'}) \hat{\sigma}_{\scriptscriptstyle r,B+}(e,k_{2},k_{3},k_{4})\,.
\label{move1} \end{align} These are graph topology-preserving operators and their actions \eqref{refB}, \eqref{refW} can be straightforwardly verified by computing their commutators with $\hat{\sigma}^{\dagger}_{\scriptscriptstyle r, B+}(e,g_{2},g_{3},g_4)$ and $\hat{\sigma}^{\dagger}_{\scriptscriptstyle r, W+}(e,g_{2},g_{3},g_4)$, respectively. In a similar fashion, we can build refinement operators for vertices belonging to the other two components of the shell. We can thus arbitrarily refine the seed state \eqref{tau} by repeated action of the operators $\widehat{\mathcal{M}}_{\scriptscriptstyle r, t s}$ in order to implement the sum over triangulations while preserving the desired topology and the key feature of wavefunction homogeneity. In fact, the refinement moves are implemented through operators built out of field operators \eqref{c-field} dressed with the same wavefunction as the seed state; in this way, the geometric information of the finer shell states is still encoded in the same small number of parameters. The state of a given shell $r$ can then be written as \begin{equation}\label{shell-state} \ket{\Psi_r} = F_{r}(\widehat{\mathcal{M}}_{\scriptscriptstyle r, B s},\widehat{\mathcal{M}}_{\scriptscriptstyle r, W s}) \ket{\tau}\,, \end{equation} where, at this stage, $F_r$ is a generic function of the refinement operators associated to the given shell $r$. This completes the definition of the spherically symmetric quantum states that we will use in the following to describe the microstructure of quantum black holes. \subsection{The area operator}\label{sec:area} The physical properties encoded in our quantum states have to be extracted by evaluating suitable operators on them.
While the combinatorial aspects of the GFT formalism, shared with random tensor models, were crucial in the definition of our quantum states, alongside its second-quantization tools, the quantum geometric aspects, shared with loop quantum gravity (and simplicial approaches), become prominent in the physical interpretation of the same states and in the identification of interesting operators. An important geometric operator for our purposes is the area operator. Following the prescription in \cite{Oriti:2015qva}, a second-quantized version of the shell-boundary area operator is defined by \begin{equation}\label{area} \hat{\mathbb{A}}_{Jr,s}=\sum_{t=\scriptscriptstyle {B,W}}\hat{\mathbb{A}}_{Jr,t s} \equiv \kappa \sum_{t=\scriptscriptstyle {B,W}} \int dh_I^v \hat{\sigma}^{\dagger}_{r,ts}(h^v_I) \sqrt{E^{i}_{J} E^{j}_{J} \delta_{ij}} \rhd \hat{\sigma}_{r,ts}(h^v_I)\,, \end{equation} where $\kappa=8\pi\gamma \ell_P^2$, introducing the dependence on the Barbero--Immirzi parameter $\gamma$. In the expression above, the label $s$ takes values $\{+,-\}$, according to which boundary of the shell $r$ we want to compute the area of, and the index $J$ matches the colour associated to the radial links dual to the given boundary. The action of the operator \eqref{area} is computed using the definition \begin{equation} E^{i}_{J} \rhd f(g_I) := \lim_{\epsilon\rightarrow 0} \mathrm{i} \frac{d}{d\epsilon} f(g_{1},\ldots, e^{-\mathrm{i} \epsilon \tau^{i}}g_{J}, \ldots, g_{4}) \end{equation} for a given function $f:SU(2)^4\rightarrow \mathbb{C}$.
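Acting on Peter--Weyl modes, the flux operators $E^i_J$ reduce to the angular-momentum generators in the spin-$j_J$ representation carried by the link $J$, so the combination under the square root is the quadratic Casimir, with eigenvalue $j_J(j_J+1)$ on each Wigner matrix. A quick numerical check of this Casimir identity using the standard spin-$j$ matrices (an illustration of ours, not part of the construction above):

```python
import numpy as np

def spin_matrices(j):
    """Angular-momentum matrices (J_x, J_y, J_z) in the spin-j representation."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                  # magnetic numbers m = j, j-1, ..., -j
    Jz = np.diag(m)
    # nonzero elements <j, m+1 | J_+ | j, m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jm = Jp.T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

for j in (0.5, 1, 1.5, 2):
    Jx, Jy, Jz = spin_matrices(j)
    casimir = Jx @ Jx + Jy @ Jy + Jz @ Jz
    # the Casimir acts as j(j+1) times the identity on the spin-j representation
    assert np.allclose(casimir, j * (j + 1) * np.eye(int(round(2 * j)) + 1))
```

This is why the area operator has a simple action link by link, as used explicitly in the spin representation later on.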
It is immediate to see that the expectation value of the area operator \eqref{area} on a shell boundary state can be written as a function of the number of quanta and \emph{a single vertex expectation value}, \begin{equation}\label{Area-op} \langle \hat{\mathbb{A}}_{Jr,s} \rangle =\kappa \langle \widehat{n}_{r,s} \rangle \int dh^v_I dg^v_I \sigma_{r,s}(h^{v}_I g_I^{v}) \sqrt{E^{i}_{J} E^{j}_{J} \delta_{ij}} \rhd \overline{\sigma_{r,s}(h_I^{v} g_I^{v})}\equiv \langle \widehat{n}_{r,s} \rangle a_{Jr,s} , \end{equation} where we have defined $a_{Jr,s}$ as the expectation value of the area operator on a single radial link of colour $J$ in the boundary $s$ of the shell $r$, and $\widehat{n}_{r,s}$ is the number operator given by \nopagebreak[3]\begin{equation} \widehat{n}_{r,s}=\sum_{t=\scriptscriptstyle {B,W}}\widehat{n}_{r,ts} =\sum_{t=\scriptscriptstyle {B,W}} \int dh^v_I\, \hat{\sigma}^{\dagger}_{r,ts} (h^v_I) \hat{\sigma}_{r,ts} (h^v_I)\,. \end{equation} Notice that, due to the definition of the seed state and the refinement operators, after each refinement action the graphs are such that \nopagebreak[3]\begin{equation} n_{r, {\scriptscriptstyle B} s}=n_{r,{\scriptscriptstyle W} s}= \frac{n_{r,s}}{2} \end{equation} always holds, where $n\equiv\langle\widehat{n}\rangle $. An analogous one-body operator can be constructed for the volume. Also in this case, the factorisation property \eqref{Area-op} holds. The structure of these expectation values is natural given the structure of the wavefunction, boiling down to dimensional considerations (areas are extensive quantities) and to the fact that a single wavefunction has been used. \section{Shell condensate state: spin representation} Let us now introduce a `dual' spin representation for generalized GFT condensates, and a different example of condensate states constructed by the same scheme but relying on this dual representation.
This will provide a useful computational toolkit and, at the same time, allow us to circumvent the issue of non-normalisability in the kinematical Hilbert space (the Fock space) of the condensate states constructed out of the field operators \eqref{c-field}. In practice, when working in the group representation, and for the condensate states described in the previous section, a regularisation scheme is generally required in order to obtain finite expectation values for geometric operators like \eqref{Area-op}. On the other hand, when working in the spin representation at fixed spins, and constructing adapted condensate states using the same scheme, this issue does not arise. Besides these technical advantages, we detail this dual construction for two additional reasons. First, it shows the generality of our construction and results; in fact, starting from these two basic definitions of condensate states, one can envisage new definitions combining or interpolating between the two basic ones (which play the role of possible bases for linear combinations possessing similar properties). Second, this dual spin-based construction is the one producing quantum GFT condensate states that are the closest to the quantum states customarily used in loop quantum gravity as models of quantum black holes, based on eigenstates of the area operator and thus labelled by fixed spins on the links puncturing the black hole horizon. This will facilitate the comparison of our results with the LQG literature.
The spin representation follows from a straightforward Peter-Weyl decomposition of the vertex wavefunction, with the field operators \eqref{c-field} now becoming \nopagebreak[3]\begin{eqnarray} \hat{\sigma}_{r,ts}(h_I) &=& \int dg_I \sigma_{r,s}(h_Ig_I) \hat{\varphi}_ {t}(g_I) = \int dg_I \sum_{\{j\}, l_{\scriptscriptstyle R},l_L} \sigma_{r,s}^{j_1\ldots j_4 l_{\scriptscriptstyle L} l_{\scriptscriptstyle R}} \, \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle L}}_{m_1m_2m_3m_4} {\iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle R}}_{n_1n_2n_3n_4} } \prod_{I=1}^{4} D^{j_I}_{m_I o_I}(h_I) D^{j_I}_{o_I n_I}(g_I) \hat{\varphi}_ {t}(g_I) \nonumber\\ &=& \sum_{\{j\}, l_{\scriptscriptstyle R},l_L}\sigma^{j_1\ldots j_4 l_{\scriptscriptstyle L} l_{\scriptscriptstyle R}} \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle L}}_{m_1m_2m_3m_4} \hat{a}^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,ts)\,o_1 \ldots o_4} \prod_{I=1}^{4} D^{j_I}_{m_I o_I}(h_I)\,, \end{eqnarray} where we defined the following new field operators \begin{equation}\label{a-field} \hat{a}^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,ts)\, o_1 \ldots o_4} =\sigma_{r,s}\, \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle R}}_{n_1n_2n_3n_4} \int dg_I \prod_{I=1}^{4} D^{j_I}_{o_I n_I}(g_I) \hat{\varphi}_ {t}(g_I) \,, \end{equation} and we have assumed the factorisation property \nopagebreak[3]\begin{equation} \sigma_{r,s}^{j_1\ldots j_4 l_{\scriptscriptstyle L} l_{\scriptscriptstyle R}}=\sigma_{r,s}\,\sigma^{j_1\ldots j_4 l_{\scriptscriptstyle L} l_{\scriptscriptstyle R}}\,. \end{equation} One might wonder whether such a restriction is plausible. Let us notice that this particular choice means that different layers differ in their volume (controlled essentially by the number of quanta, determined by $|\sigma_{r,s}|^2$) rather than in their intrinsic geometry (type of curvature, for instance).
This situation is very natural in spherical symmetry, mirroring its very definition in the continuum setting: the geometry of different two-dimensional spheres singled out by the isometry group differs only by a rescaling determined by the radial coordinate, the rest being constrained by the isometry group itself. From the commutation relation of the basic field operators \begin{equation}\label{phi-comm} [\hat{\varphi}_t(g_I), \hat{\varphi}^{\dagger}_{t'}(g'_I)] =\delta_{t,t'} \Delta_{R}(g, g') \equiv\delta_{t,t'} \int_{\scriptscriptstyle SU(2)} d\gamma \prod_{I=1}^4 \delta(g_I \gamma (g'_I)^{-1})\, \end{equation} and the orthonormality relations between the Wigner matrices \nopagebreak[3]\begin{equation} \int dg D^j_{mn}(g)\overline{D^{j'}_{m'n'}(g)}=\frac{1}{d_j}\delta_{j,j'}\delta_{m,m'}\delta_{n,n'}\,, \end{equation} it is immediate to see that the new field operators \eqref{a-field} satisfy \begin{equation}\label{a-comm} [ \hat{a}^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,ts)\,o_1 \ldots o_4} , \hat{a}^{\dagger j'_1\ldots j'_4 l'_R}_{(r',t's')\,o'_1 \ldots o'_4} ] =|\sigma_{r,s}|^2\delta_{r,r'}\delta_{t,t'}\delta_{s,s'} \delta_{l_R,l'_{R}} n(j_1,j_2,j_3,j_4, l_{\scriptscriptstyle R}) \prod_{I=1}^4 \frac{1}{d_{j_I}} \delta_{o_I o'_I} \delta_{j_I j_I'}\,, \end{equation} where \nopagebreak[3]\begin{equation} \delta^{l,l'} n(j_1,j_2,j_3, j_4,l)=\sum_{\{ m \}} \suinter{1}{2}{3}{4}{l}{m} \suinter{1}{2}{3}{4}{l'}{m} \end{equation} is the intertwiner normalisation factor. With the convention \begin{equation} \suinter{1}{2}{3}{4}{l}{m} = \sum_{m,m'} C^{j_1 j_2 l}_{m_1 m_2 m} C^{j_3 j_4 l}_{m_3 m_4 m'} C^{l l 0}_{m m' 0} \,, \end{equation} where the $C$'s are the Clebsch-Gordan coefficients, the normalisation factor $n(j_1,j_2,j_3, j_4,l)$ is equal to 1 and we therefore omit it from now on.
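The statement that the normalisation factor equals one can be checked symbolically in the simplest case of four spin-$\tfrac12$ links, for which the allowed intermediate spins are $l=0,1$. A short verification of ours with \texttt{sympy} (with both internal recoupling labels of the convention set equal to $l$):

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

j = S.Half
ms = (S.Half, -S.Half)                    # magnetic indices of a spin-1/2 link

def iota(l, m1, m2, m3, m4):
    """Four-valent intertwiner iota^{jjjj l}_{m1..m4} built from Clebsch-Gordan
    coefficients, following the convention in the text (all four spins = 1/2)."""
    return sum(CG(j, m1, j, m2, l, m).doit()
               * CG(j, m3, j, m4, l, mp).doit()
               * CG(S(l), m, S(l), mp, 0, 0).doit()
               for m in range(-l, l + 1) for mp in range(-l, l + 1))

# for each intermediate spin l, the normalisation n = sum_m iota^2 equals 1
for l in (0, 1):
    norm = sum(iota(l, m1, m2, m3, m4) ** 2
               for m1 in ms for m2 in ms for m3 in ms for m4 in ms)
    assert simplify(norm - 1) == 0
```

The same Clebsch-Gordan orthogonality that makes this sum collapse to one is what produces the $\delta^{l,l'}$ in the definition of $n$ above.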
The wavefunction of the new field operators then reads \nopagebreak[3]\begin{equation}\label{a-wave} a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,s)\, o_1 \ldots o_4}(g_I)\equiv \langle g_t|\hat{a}^{\dagger j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,ts)\, o_1 \ldots o_4} |0\rangle =\sigma_{r,s}\, \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle R}}_{n_1n_2n_3n_4} \prod_{I=1}^{4} D^{j_I}_{o_I n_I}(g_I) \,. \end{equation} \subsection{Refinement operators} We now write the refinement operators $\widehat{\mathcal{M}}_{r, \scriptscriptstyle B s},\widehat{\mathcal{M}}_{r, \scriptscriptstyle W s}$ in terms of the spin-representation field operators \eqref{a-field}. We concentrate on the operators for the outer boundary $s=+$; refinement operators for the other boundaries can be constructed straightforwardly in a similar fashion. Assuming again the radial links of this boundary to carry the colour 1, we define \nopagebreak[3]\begin{eqnarray}\label{MB} \widehat{\mathcal{M}}^{j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\, o_1 o'_1 o''_1 o'''_1}=\frac{1}{|\sigma_{r,s}|^2}\left(\prod_{I=1}^4 {d_{j_I}}\right)\sum_{\{m\}} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o_1 m_2 m_3 m'_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,W+)\,\,o''_1 m'_2 m'_3 -m'_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o'_1 -m'_2 -m'_3 m_4}\hat{a}^{j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o'''_1 m_2 m_3 m_4}\,, \end{eqnarray} where we notice the absence of sign flips in the last term, due to the fact that it involves $\hat a$ instead of $\hat a^\dagger$.
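The operator content of \eqref{MB}, three creation operators and one annihilation operator, means that each application removes one vertex and creates three, as in the move \eqref{refB}. This bookkeeping can be illustrated in a drastically simplified single-mode toy model (our own illustration: colours, magnetic indices and gauge invariance are all dropped), where the analogous identity is $[(\hat a^\dagger)^3\hat a,\, \hat a^\dagger]=(\hat a^\dagger)^3$:

```python
import numpy as np

N = 12                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
ad = a.T                                      # creation operator

M = ad @ ad @ ad @ a                          # toy move: destroy one quantum, create three
comm = M @ ad - ad @ M

# [M, a^dagger] = (a^dagger)^3 holds exactly; in the truncated space it is
# reproduced on all columns (initial occupation numbers) far from the cutoff.
assert np.allclose(comm[:, :N - 4], (ad @ ad @ ad)[:, :N - 4])
```

The net effect, one quantum in and three out, is exactly the $1\to 3$ vertex replacement that the full operators implement on the graph.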
By means of \eqref{a-comm}, we can now verify that the refinement operator above realises the move depicted in \eqref{refB}, namely \nopagebreak[3]\begin{eqnarray} [\widehat{\mathcal{M}}^{j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o_1 o'_1 o''_1 o'''_1}, \hat{a}^{\dagger\,j'_1j'_2j'_3 j'_4 l'_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,n_1 n_2 n_3 n_4}]= \sum_{\{m\}} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o_1 m'_2 n_3 n_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,W+)\,\,o''_1 -m'_2 m'_3 m'_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o'_1 n_2 -m'_3 -m'_4}\delta_{o'''_1, n_1}\delta_{l_{\scriptscriptstyle R}, l'_{\scriptscriptstyle R}}\prod_I\delta_{j_I,j'_I}\,. \end{eqnarray} Similarly, we have \nopagebreak[3]\begin{eqnarray}\label{MW} \widehat{\mathcal{M}}^{j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{ (\scriptscriptstyle r,W+)\,\, o_1 o'_1 o''_1 o'''_1}=\frac{1}{|\sigma_{r,s}|^2} \left(\prod_{I=1}^4 {d_{j_I}}\right)\sum_{\{m\}} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,W+)\,\,o_1 m'_2 m_3 m_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,B+)\,\,o''_1 -m'_2 m'_3 m'_4} \hat{a}^{\dagger\,j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,W+)\,\,o'_1 m_2 -m'_3 -m'_4} \hat{a}^{j_1j_2j_3 j_4 l_{\scriptscriptstyle R}}_{(\scriptscriptstyle r,W+)\,\,o'''_1 m_2 m_3 m_4}\,, \end{eqnarray} and a direct calculation as in the black case shows that the operator above implements the action \eqref{refW}. \subsection{Area expectation value} Let us compute the area expectation value on a single-vertex state created by the field operator \eqref{a-field}. Consistently with our previous convention, the vertex radial link dual to the boundary of interest has colour $1$.
Using \eqref{Area-op}, \eqref{a-wave}, we get \nopagebreak[3]\begin{eqnarray} a_{1r,s}&=&\kappa\int dg_I a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,s)\, o_1 \ldots o_4}(g_I) \sqrt{E^{i}_{1} E^{j}_{1} \delta_{ij}} \rhd\overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,s)\, o_1 \ldots o_4}(g_I)}\nonumber\\ &=&\kappa\int dg_I |\sigma_{r,s}|^2 \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle R}}_{m_1m_2m_3m_4} \iota^{j_1 j_2 j_3 j_4 l_{\scriptscriptstyle R}}_{n_1n_2n_3n_4} \sqrt{j_1(j_1+1)} \prod_{I=1}^{4} D^{j_I}_{o_I m_I}(g_I) \overline{D^{j_I}_{o_I n_I}(g_I) }\nonumber\\ &=&\kappa|\sigma_{r,s}|^2\sqrt{j_1(j_1+1)}\prod_{I=1}^{4}\frac{1}{d_{j_I}}\,. \end{eqnarray} As anticipated above, the only dependence on the vertex wavefunction is through the normalisation factor $|\sigma_{r,s}|^2$. As we have already remarked, this might seem suspicious, but it is to be expected if we take seriously the idea that these states have leaves supporting the same symmetry group. \section{Shell density matrix}\label{sec:dens} Given the single shell state \eqref{shell-state}, we can construct a pure state associated to a full 3D spatial foliation through a shell-gluing procedure. This can be obtained through the definition of a full foliation refinement operator built out of the single-shell ones. At this kinematical stage, there is considerable freedom left in such a definition, with the only constraint coming from the requirement of spatial topology preservation. More precisely, the refinement and gluing operations have to be carried out in such a way that the outer boundary of a given shell $r$ always contains the same number of vertices as the inner boundary of the shell $r+1$ it is glued to, so that no open links are created. It should be remarked, however, that the structure of the refinement operators associated to the three different components of a given shell is such that these can act independently of one another.
Therefore, without further inputs coming from the dynamics, the most generic complete-foliation state can be written as a product of single-shell states, namely \begin{equation}\label{full-state} \ket{\Psi}=\prod_r \ket{\Psi_r} \,, \end{equation} upon which the constraint on the `synchronisation' of the refinement of nearby shell boundaries is applied (thus removing the factorisation into shells and introducing correlations). From the pure state \eqref{full-state} we can obtain the density matrix \nopagebreak[3]\begin{equation} \hat \rho=|\Psi\rangle\langle \Psi| \end{equation} of the full 3D spatial foliation. \subsection{Reduced Density Matrix for a Single Shell} We now want to compute the reduced density matrix associated to the outer boundary of a single shell $r$ by tracing over the rest of the bulk state. This reduced density matrix will be the central object for our entropy calculation. In general, the boundary component of a given shell state \eqref{shell-state} contains a superposition of all graphs that can be obtained from all possible combinations of strings of refinement operators associated to the boundary. The coefficients of such a superposition are determined by the specific form of the function $F_r$. We will come back to this important point in a moment. For now, in order to understand the entanglement structure between different shells, let us start by simply considering a given graph $A$ associated to the outer boundary of the shell $r$ and the graph $B$ of the inner boundary of the shell $r+1$ right outside. The result of this simple example can be easily generalised to the rest of the graph.
Therefore, if we assume that both graphs are formed by $n$ vertices (in order to be properly glued they must have the same number of building blocks), where again we take the connecting radial links of colour 1, we can write the wavefunction of these two components as \nopagebreak[3]\begin{equation} \psi(g^{\scriptscriptstyle {A_1}}_I,\ldots,g^{\scriptscriptstyle {A_n}}_I,g^{\scriptscriptstyle {B_1}}_I,\ldots,g^{\scriptscriptstyle {B_n}}_I)= \prod_{i=1}^{n} a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, m^i_1 \ldots m^i_4}(g^{\scriptscriptstyle A_i}_I) a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle B_i}\, n^i_1 \ldots n^i_4}(g^{\scriptscriptstyle B_i}_I)\delta_{m^i_1,-n^{t^m_1(i)}_1}\prod_{J=2}^4\delta_{m^i_J,-m^{t^m_J(i)}_J}\delta_{n^i_J,-n^{t^n_J(i)}_J}\,, \end{equation} where the $\delta$'s are used to keep track of the connectivity of the whole graph $A\cup B$, with the notation $t^m_J(i)$ $(t^n_J(i))$ indicating the target vertex in the graph $A$ $(B)$ of the edge of colour $J$ departing from the vertex $i$ in the graph $A$ $(B)$, and similarly for $t^m_1(i)$ with the target vertex in the graph $B$ (instead of $A$), encoding the connectivity between the two boundaries through the radial links of colour 1. 
We can thus write the total density matrix as \nopagebreak[3]\begin{eqnarray} &&\rho^{(n)}(g_I^{{\scriptscriptstyle A_1}},...,g_I^{{\scriptscriptstyle A_n}},g_I^{{\scriptscriptstyle B_1}},...,g_I^{{\scriptscriptstyle B_n}}; {g'_I}^{{\scriptscriptstyle A_1}},...,{g'_I}^{{\scriptscriptstyle A_n}},{g'_I}^{{\scriptscriptstyle B_1}},...,{g'_I}^{{\scriptscriptstyle B_n}})\\ &&= C \prod_{i=1}^{n} a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, m^i_1 \ldots m^i_4}(g^{\scriptscriptstyle A_i}_I) a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle B_i}\, n^i_1 \ldots n^i_4}(g^{\scriptscriptstyle B_i}_I)\delta_{m^i_1,-n^{t^m_1(i)}_1}\prod_{J=2}^4\delta_{m^i_J,-m^{t^m_J(i)}_J}\delta_{n^i_J,-n^{t^n_J(i)}_J}\nonumber\\ && \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, {m'}^i_1 \ldots {m'}^i_4}({g'}^{\scriptscriptstyle A_i}_I) a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle B_i}\, {n'}^i_1 \ldots {n'}^i_4}({g'}^{\scriptscriptstyle B_i}_I)}\delta_{{m'}^i_1,-{n'}^{t^{m'}_1(i)}_1}\prod_{J=2}^4\delta_{{m'}^i_J,-{m'}^{t^{m'}_J(i)}_J}\delta_{{n'}^i_J,-{n'}^{t^{n'}_J(i)}_J}\,, \end{eqnarray} where $C$ is a normalisation factor. 
If we now use the relation \begin{equation}\label{delta-spin} \int dg_I \,a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,s)\, m_1 \ldots m_4}(g_I)\overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{(r,s)\, n_1 \ldots n_4}(g_I)}= |\sigma_{r,s}|^2 \prod_{I=1}^4 \frac{1}{d_{j_I}} \delta_{m_I, n_I}\,, \end{equation} following from the commutation relation \eqref{a-comm}, to integrate away the $B$ part, we get \nopagebreak[3]\begin{eqnarray}\label{red-spin} \rho^{(n)}_{A}(g_I^{1},...,g_I^{n};{g'_I}^{1},...,{g'_I}^{n})&=& \left( \frac{\prod_{I=1}^4 d_{j_I}}{|\sigma_{r,s}|^2}\right)^n \prod_{i=1}^{n} a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, m^i_1 \ldots m^i_4}(g^i_I) \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, {m'}^i_1 \ldots {m'}^i_4}({g'_I}^i)}\nonumber\\ &\times&\delta_{m^i_1,{m'}^i_1} \prod_{J=2}^4\delta_{m^i_J,-m^{t^m_J(i)}_J} \, \delta_{{m'}^i_J,-{m'}^{t^{m'}_J(i)}_J}\,, \end{eqnarray} where we have set the normalisation factor $C=\left( \prod_{I=1}^4 d_{j_I}/|\sigma_{r,s}|^2\right)^{2n}$. We see that the normalised reduced density matrix we obtain is mixed, as a consequence of the relation $m^i_1={m'}^i_1$ imposed by the first set of $\delta$'s and following from the property \eqref{delta-spin}. Remarkably, as emerges from this simple example, any information about the part of the graph that has been traced away disappears: different completions of the same visible portion of the state lead to the same reduced density matrix. This is a direct consequence of the commutation relation \eqref{a-comm} (which implies \eqref{delta-spin}). This holographic property of our states remains valid also when tracing away a bigger portion of the bulk graph, through exactly the same mechanism.
This means that, given a graph for the whole 3D space foliation, if we choose the boundary of an arbitrary shell $r$ and trace away all the rest of the graph, we end up with a mixed reduced density matrix which contains {\it no information} about the bulk degrees of freedom. The only entanglement that remains in the reduced density matrix is that induced by the radial links of the closest shell. Let us reiterate the message: for these particular states, all the information about the remaining graph disappears from the reduced density matrix, regardless of how big or intricate the rest of the graph is. This means that the entanglement entropy contribution comes uniquely from the entanglement with the closest shell, as expected from standard calculations in QFT on a sphere \cite{Solodukhin:2011gn}. \subsection{Entanglement Entropy}\label{sec:eentropy} To compute the entanglement entropy we now need to diagonalise the reduced density matrix, that is, we need to compute the eigenvectors of \eqref{red-spin}.
Let us consider the state \nopagebreak[3]\begin{equation}\label{eigenstates-spin} \Psi^{(n)}_A(n_1, {g}) =\left( \frac{\prod_{I=1}^4d_{j_I}}{|\sigma_{r,s}|^2}\right)^{n/2} \prod_{i=1}^{n} \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, n^i_1 \ldots n^i_4}(g_I^i)} \prod_{J=2}^4\delta_{n^i_J,-n^{t^n_J(i)}_J}\,, \end{equation} which satisfies \nopagebreak[3]\begin{equation} \langle \Psi^{(n)}_A(n_1, {g})| \Psi^{(n)}_A(n'_1, {g})\rangle= \prod_{i=1}^n \delta_{n^i_1, n'^i_1}\,, \end{equation} and compute \nopagebreak[3]\begin{eqnarray} && \int \prod_{i=1}^{n} dg_I^{i}\rho^{(n)}_{A}(g_I^{{1}},...,g_I^{{n}};{g'_I}^{{1}},...,{g'_I}^{{n}}) \Psi^{(n)}_A(n_1, g)\nonumber\\ && =\left( \frac{\prod_{I=1}^4 d_{j_I}}{|\sigma_{r,s}|^2}\right)^{\frac{3}{2}n}\int \prod_{i=1}^{n} dg_I^{i} a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, m^i_1 \ldots m^i_4}(g^{i}_I) \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, {m'}^i_1 \ldots {m'}^i_4}({g'_I}^{i})} \delta_{m^i_1,{m'}^i_1} \prod_{J=2}^4\delta_{m^i_J,-m^{t^m_J(i)}_J} \, \delta_{{m'}^i_J,-{m'}^{t^{m'}_J(i)}_J}\nonumber\\ &&\times\, \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\, n^i_1 \ldots n^i_4}(g^{i}_I)} \prod_{J=2}^4\delta_{n^i_J,-n^{t^n_J(i)}_J}\nonumber\\ &&=\left( \frac{\prod_{I=1}^4 d_{j_I}}{|\sigma_{r,s}|^2}\right)^{\frac{n}{2}} \prod_{i=1}^{n} \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{{\scriptscriptstyle A_i}\,m^i_1 \ldots {m}^i_4}({g'_I}^{i})} \prod_{J=2}^4 \delta_{{m}^i_J,-{m}^{t^{m}_J(i)}_J}\nonumber\\ &&= \Psi^{(n)}_A(m_1, {g'})\,, \end{eqnarray} where we have used the relation \eqref{delta-spin} again. From the calculation above, we see that the states \eqref{eigenstates-spin} are eigenstates of the shell reduced density matrix \eqref{red-spin} with eigenvalue $1$.
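The flat (eigenvalue-$1$) spectrum found above is what ultimately controls the entropy: once normalised, a density matrix that is maximally mixed over the magnetic indices $m^i_1$ of the $n$ radial links has von Neumann entropy equal to the logarithm of the number of such eigenstates. A minimal numerical illustration (the values $j_1=\tfrac12$ and $n=3$ are hypothetical and serve only to exhibit the counting, not the full GFT state):

```python
import numpy as np

j1 = 0.5                              # spin on the radial links (hypothetical value)
d = int(2 * j1 + 1)                   # dimension per radial link
n = 3                                 # number of boundary vertices

# normalised density matrix, maximally mixed over the d**n magnetic indices m^i_1
rho = np.eye(d ** n) / d ** n
evals = np.linalg.eigvalsh(rho)
entropy = -np.sum(evals * np.log(evals))

# one factor of log(d) of entropy per radial link crossing the boundary
assert np.isclose(entropy, n * np.log(d))
```

In other words, each radial link puncturing the boundary contributes $\log(2j_1+1)$ to the entanglement entropy in this fixed-spin setting.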
We can thus label all the eigenstates of the shell reduced density matrix using a graph basis and the notation $\Psi_{r,s}^{(n)}(\Gamma_\alpha)$ to denote the states \eqref{eigenstates-spin}, where the structure of the given graph $\Gamma_\alpha$ is encoded in the product of deltas $\prod_{J=2}^4\delta_{n^i_J,-n^{{}^\alpha t^n_J(i)}_J}$. Given that such orthogonality does not hold in general \cite{Oriti:2014uga}, it is instructive to show explicitly the orthogonality of the states $\Psi_{r,s}^{(n)}(\Gamma_\alpha)$ for different graphs $\Gamma_\alpha, \Gamma_{\alpha'}$. We want to prove that \nopagebreak[3]\begin{equation}\label{ortho} \langle \Psi_{r,s}^{(n)}(\Gamma_\alpha) |\Psi_{r,s}^{(n)}(\Gamma_{\alpha'}) \rangle=\delta_{\alpha, \alpha'}\prod_{i=1}^n \delta_{n^i_1, n'^i_1}\,. \end{equation} To do so, for all $\alpha\neq \alpha'$, it is enough to consider one simple case, namely graphs with $n_{\scriptscriptstyle B}=n_{\scriptscriptstyle W}=n/2=2$ (recall that all the graphs always have the same number of black and white vertices). These states are created by acting once with the refinement operators \eqref{refW}, \eqref{refB} on the seed state for the shell boundary \nopagebreak[3]\begin{equation}\label{seed} \includegraphics[width=6cm]{Shell.pdf}\,, \end{equation} where links-1 are the radial ones and links-2 are connected to the bulk of the shell. The two possible states with 4 vertices, when acting with the black \eqref{refB} and white \eqref{refW} refinement operators respectively, thus are \nopagebreak[3]\begin{eqnarray}\label{Psi1} &&|\Psi^{(4)}_{r,1}\rangle:~~ \begin{array}{c} \includegraphics[width=7.5cm]{RefW2v2.pdf} \end{array}\\ && |\Psi^{(4)}_{r,2}\rangle:~~ \begin{array}{c} \includegraphics[width=5.7cm]{RefB2.pdf} \end{array}\label{Psi2}\,. 
\end{eqnarray} The scalar product between these two states gives: \nopagebreak[3]\begin{eqnarray*} \langle \Psi^{(4)}_{r,2}|\Psi^{(4)}_{r,1}\rangle&=& \int dg^1_Idg^2_Idg^3_Idg^4_Idg'^1_Idg'^2_Idg'^3_Idg'^4_I\\ &&\overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ m^1_1 m^1_2 m^1_3 m^1_4}(g_I^1)} \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ m^2_1 m^2_2 m^2_3 -m^1_4}(g_I^2)} \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ m^3_1 -m^2_2 -m^2_3 m^3_4}(g_I^3)} \overline{a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ m^4_1 m^4_2 -m^1_3 -m^3_4}(g_I^4)} \\ && {a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ n^1_1 n^1_2 n^1_3 n^1_4}({g'_I}^1)} {a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ n^2_1 n^2_2 -n^1_3 -n^1_4}({g'_I}^2)} {a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ n^3_1 -n^2_2 n^3_3 n^3_4}({g'_I}^3)} {a^{j_1\ldots j_4 l_{\scriptscriptstyle R}}_{ n^4_1 n^4_2 -n^3_3 -n^3_4}({g'_I}^4)}\\ &&\langle0|\hat{\varphi}_ {\scriptscriptstyle W}({g'_I}^1) \hat{\varphi}_ {\scriptscriptstyle B}({g'_I}^2) \hat{\varphi}_ {\scriptscriptstyle W}({g'_I}^3) \hat{\varphi}_ {\scriptscriptstyle B}({g'_I}^4) \hat{\varphi}^{\dagger}_ {\scriptscriptstyle B}(g^1_I) \hat{\varphi}^{\dagger}_ {\scriptscriptstyle W}(g^2_I) \hat{\varphi}^{\dagger}_ {\scriptscriptstyle B}(g^3_I) \hat{\varphi}^{\dagger}_ {\scriptscriptstyle W}(g^4_I)|0\rangle\\ &=& \left(\prod_{I=1}^4\frac{1}{d_{j_I}}\right)^{4} \Big(\delta_{m^1_1,n^1_1}\delta_{m^1_2,n^1_2}\delta_{m^1_3,n^1_3}\delta_{m^1_4,n^1_4} \delta_{m^3_1,n^3_1}\delta_{m^2_2,n^2_2}\delta_{-m^2_3,n^3_3}\delta_{m^3_4,n^3_4}\\ &&\phantom{\prod_{I=1}^4\frac{1}{d_{j_I}}}+ \delta_{m^1_1,n^3_1}\delta_{m^1_2,-n^2_2}\delta_{m^1_3,n^3_3}\delta_{m^1_4,n^3_4} \delta_{m^3_1,n^1_1}\delta_{-m^2_2,n^1_2}\delta_{-m^2_3,n^1_3}\delta_{m^3_4,n^1_4}\Big)\times\\ &&\phantom{\prod_{I=1}^4\frac{1}{d_{j_I}}}\Big(\delta_{m^2_1,n^2_1}\delta_{m^2_2,n^2_2}\delta_{m^2_3,-n^1_3}\delta_{m^1_4,n^1_4} 
\delta_{m^4_1,n^4_1}\delta_{m^4_2,n^4_2}\delta_{m^1_3,n^3_3}\delta_{m^3_4,n^3_4}\\ &&\phantom{\prod_{I=1}^4\frac{1}{d_{j_I}}}+\delta_{m^2_1,n^4_1}\delta_{m^2_2,n^4_2}\delta_{m^2_3,-n^3_3}\delta_{m^1_4,n^3_4} \delta_{m^4_1,n^2_1}\delta_{m^4_2,n^2_2}\delta_{m^1_3,n^1_3}\delta_{m^3_4,n^1_4} \Big)\,, \end{eqnarray*} where we have performed the Wick contractions and used \eqref{phi-comm}. If we now expand this last expression, we end up with four products of $\delta$'s, each of which gives rise to one of the following relations \nopagebreak[3]\begin{equation} \delta_{m^1_3,-m^2_3}\,,~~~ \delta_{m^1_4, m^3_4}\,. \end{equation} Neither of these two relations can be satisfied, however, due to the structure of the graph of the state \eqref{Psi2}. In fact, they would both change the shell topology by creating two disconnected regions in the boundary graph. Therefore, the scalar product vanishes and the states \eqref{Psi1}, \eqref{Psi2} are orthogonal. Due to the local action of the refinement operators, similar relations would follow when computing the scalar product between any two eigenstates at given $n$ corresponding to the action of a sequence of refinement operators generating two different graphs. This implies that all the eigenvectors $\Psi_{r}^{(n)}(\Gamma_\alpha)$ of the total reduced density matrix \eqref{red-tot-spin} are orthogonal for $\alpha\neq \alpha'$. A similar calculation shows that when $\alpha=\alpha'$ \eqref{ortho} is again satisfied. 
This seemingly unremarkable result has a very important consequence, namely \nopagebreak[3]\begin{equation}\label{eigenvalues-spin} \rho^{(n)}_{r,s}(\Gamma_\alpha)\Psi_{r,s}^{(n)}(\Gamma_{\alpha'}) = \begin{cases} \Psi_{r,s}^{(n)}(\Gamma_{\alpha'})\quad\text{if}\;\alpha=\alpha'\\ \\ 0\,\,\,\quad\text{if}\;\alpha\neq \alpha'\,,\end{cases} \end{equation} which implies that we can diagonalise the reduced density matrix, since we have discovered its diagonal form in terms of graphs, even in the case in which the full sum over triangulations is kept. This result also shows that the computation of the entanglement entropy, \emph{at least in the particular corner of the Fock space that we are exploring}, becomes a classic `counting of graphs' problem. \section{Semi-classicality conditions}\label{sec:class} Our construction so far is general, in the sense that it can apply to any of the shells in the space-like hypersurface of our foliation, and thus to generic spherically symmetric geometries. We now want to restrict our attention to a horizon 2-sphere cross section, \emph{i.e.~} we want to be able to interpret one of our shells as defining a spherical {\it horizon}, and for this we need to specify horizon boundary conditions. As we recalled in the Introduction, in the standard LQG black hole entropy calculation these boundary conditions are specified by the notion of Isolated Horizon and they are implemented in the quantum theory through the imposition of the boundary conditions \eqref{IH-cond} (though most often this is already implemented at a classical level). 
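Because the eigenstates associated with different graphs are orthogonal, the reduced density matrix is diagonal in the graph basis, and its von Neumann entropy reduces to a weighted count of graphs. Schematically, in a minimal Python sketch (the weights below are made-up illustrative numbers, not derived from the theory):

```python
import math

# Diagonal form of the reduced density matrix:
# rho = sum_alpha w_alpha |Psi(Gamma_alpha)><Psi(Gamma_alpha)|, with the
# eigenstates Psi(Gamma_alpha) mutually orthogonal, cf. Eq. (eigenvalues-spin).
# The von Neumann entropy then only sees the spectrum {w_alpha}.

def von_neumann_entropy(weights):
    """S = -Tr(rho log rho) = -sum_alpha w_alpha log(w_alpha) in diagonal form."""
    return -sum(w * math.log(w) for w in weights if w > 0)

# Example: 5 graphs with some normalised weights w_n(Gamma_alpha)
w = [0.4, 0.3, 0.15, 0.1, 0.05]
assert abs(sum(w) - 1.0) < 1e-12

S = von_neumann_entropy(w)
# For uniform weights the entropy is the log of the number of graphs:
assert abs(von_neumann_entropy([1 / 5] * 5) - math.log(5)) < 1e-12
assert S <= math.log(5)
```

This is nothing more than the statement that, for an orthogonally diagonalised density matrix, the entanglement entropy coincides with the Shannon entropy of its eigenvalue distribution over graphs.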
In the context of the GFT condensate for a spherically symmetric shell, imposition of the boundary condition \eqref{IH-cond} amounts to a relation between the flux variable $E_1$ associated with the $r$ radial link (in the convention adopted so far for the outer boundary of a shell) of a given vertex of the shell and the holonomy around it, constructed out of the group elements associated with the orthogonal remaining links. The latter correspond to the group elements $g_2, g_3, g_4$ of the fundamental field operator $\hat{\varphi}(g_I) $. In fact, these are the group elements which have the geometrical interpretation of parallel transport from the centre of a given tetrahedron to its faces lying on the horizon. The identification of the horizon shell could then be implemented through the construction of a second quantised curvature operator around a given puncture to be interpreted as the GFT holonomy operator around the radial link-1 of the corresponding vertex. However, the GFT formalism is fundamentally discrete and lacks the continuum manifold structures that could serve as auxiliary tools to define quantum operators related to curvature. Such operators generically involve correlations across several graph vertices and correspond to intensive quantities from the GFT many-body perspective; their precise definition for GFT condensates thus remains one of the main open technical challenges of this second quantisation formalism. Given this obstruction, we are not going to explicitly impose IH boundary conditions in operatorial terms. Instead, we are going to rely on a maximum entropy argument in order to characterise the \emph{most generic} horizon shell geometry, as well as to capture some aspects of the semi-classical dynamics in our so far purely kinematical construction. 
Such treatment of the horizon semi-classical regime seems natural in light of the laws of black hole thermodynamics \cite{Bardeen:1973gs} and arguments for entropy bounds \cite{Bekenstein:1980jp, Bousso:1999xy}. The isolated horizon formalism will still play a role in our entropy calculation. In fact, we will check the compatibility of the single vertex Hilbert space degeneracy obtained through extremisation of the global horizon entropy with the constraint \eqref{IH-cond}, in the semi-classical limit. Moreover, we will also require consistency with the thermodynamical properties of isolated horizons. In addition to the notion of typicality, encoded in the maximum entropy argument, there are other geometrical requirements to be imposed in order to guarantee the validity of a semi-classical regime, namely: \begin{enumerate} \item large horizon shell boundary area; \item small horizon shell bulk volume; \item small fluctuations of the horizon shell geometrical operators. \end{enumerate} This last requirement is guaranteed by the factorisation property of the one-body geometrical operators (like area and volume, see Section \ref{sec:area}) in the large $n_{r,s}$ limit; from here on we thus take $n_{r_0,s}\gg 1$, with $r_0$ denoting the horizon shell. It also follows rather generically from the condensate nature of our quantum states. \subsection*{Holographic aspects of GFT black hole condensates} Before proceeding with the black hole entropy calculation we need to identify the relevant degrees of freedom which contribute to it. To better understand the features of the quantum states we have constructed and of their reduced density matrix, we recall here some basic aspects of the holographic principle, which these states implement in a specific manner. We can identify two notions of holography: strong and weak \cite{Smolin:2000ag}. 
In the `strong' version, the holographic principle becomes a fundamental feature of quantum gravity, asserting that all bulk degrees of freedom can be encoded on a boundary screen. This is the point of view adopted for instance in the AdS/CFT correspondence, when interpreted as an exact duality. There are various arguments, however, to believe that in a background independent quantum gravity approach only a weaker notion of holography can survive \cite{Smolin:2000ag}. In particular, in a thermodynamical context where interactions between subsystems play a fundamental role, the causal structure of a black hole diagram strongly suggests that the only relevant degrees of freedom to account for the atomistic description beyond macroscopic variables are those living in the proximity of the horizon; in fact these are the only ones in causal contact with the exterior subsystem and relevant for the analysis of an external stationary observer. For these reasons, we are going to invoke a weak holographic principle in order to select the relevant reduced density matrix for the black hole entropy calculation. Let us recall that in Section \ref{sec:dens} we modelled, before dynamical considerations are eventually implemented, the 3D foliation of a spherically symmetric black hole geometry as a GFT pure condensate state factorised as a product over components of different shells. In particular, this means that refinement operators could act independently on different shells, suggesting that indeed strong holography is not realised at the microscopic level, at least in the kinematical setting. Selection of the microscopic degrees of freedom at the origin of the statistical mechanical nature of the Bekenstein--Hawking area entropy law thus requires us to trace out all bulk shells, both in the exterior and the interior of the horizon shell $r_0$. 
Entanglement entropy calculations in QFT on a fixed classical background might lead to the expectation that it should be enough to trace out only part, but not all, of the bulk Hilbert space in order to obtain a result that scales with the area of the entangling surface \cite{Solodukhin:2011gn}. However, this expectation stems from examples in which gravity is treated classically and entropy is associated with matter degrees of freedom, not gravitational ones. As we have argued above, in a general full quantum gravitational regime holography probably has a weaker nature. At the same time, one could consider a subclass of the GFT condensate states we constructed in which nonlocal correlations between different shells are introduced by the imposition of some effective dynamics, which would allow an entropy counting compatible with a stronger form of holography. The extreme case of such a scenario would be physical states for which the refinement operators on a given shell are synchronized with those acting on all the other shells at the same time. In this case the number of boundary graphs would exhaust all the possible configurations of bulk graphs as well: Each boundary graph is in correspondence with just one graph in all the rest of the bulk and there is no extra degeneracy. Such states are in fact rather straightforward to construct. Before proceeding, a few remarks are in order. At this stage, there is no indication that such a specific sector is the physically relevant one, as this would ultimately be dictated by the quantum equations of motion, even considered in some approximate form. The counting discussed in what follows, obtained by tracing over all the bulk vertices outside of the $r_0$ shell, applies straightforwardly to this particular case of strong holography. Let us also point out that implementation of the weak holographic principle does not automatically yield an area law for the entropy. 
In fact, \emph{a priori} the horizon reduced density matrix could retain information about the rest of the bulk. It is therefore a non-trivial property of our construction that, as seen above, by tracing out bulk degrees of freedom, the resulting reduced density matrix of the horizon is mixed but loses all the information about the bulk beyond its existence. In addition, as we will see in a moment, there is also a further holographic feature concerning the degeneracy associated with the space of wavefunctions for the single vertex Hilbert space. This characteristic of our states is what we would expect from a causal barrier, suggesting that the foliation of space provided by our shell graphs could be naturally used to mimic a foliation by null surfaces. This does not obviously represent the most general case of foliation, but it is very well suited to the description of a black hole horizon. Moreover, if we were in a more general case where information about the bulk had not been washed away by the tracing operation, the implementation of weak holography for the horizon density matrix would have required some further restriction on the bulk wavefunctions, for instance coming from the dynamics. Due to the holographic nature of our particular states, such a restriction is automatically included. What remains to be checked is the compatibility of these holographic states and their associated geometrical data with the imposition of the dynamics. This is far from trivial, as we have repeatedly stressed, and it is left for future work. Finally, it is also remarkable that we could find an orthogonal set of eigenvectors for the horizon density matrix \eqref{red-tot-spin}. This allows us to diagonalize it and then, due to the property \eqref{eigenvalues-spin}, the calculation of the horizon von Neumann entropy can be performed precisely, without neglecting any entanglement contribution, and this corresponds exactly to a statistical counting \emph{\`a la} Boltzmann. 
We have thus proven that {\it the horizon entanglement entropy is the same as the horizon Boltzmann entropy}. Let us now perform the counting. \section{Combinatorial Contribution: Graph Counting} Our states are defined via iterated action of refinement moves on the seed of each shell. The generic parametrization \eqref{shell-state} makes use of a generic function of two sets $(B, W)$ of refinement operators. Written more explicitly: \begin{equation} \left( \sum_{n=0}^{\infty} \prod_{s}\prod_{m=1}^n \left( a^{n,m}_{\scriptscriptstyle r_0, B s}\, \widehat{\mathcal{M}}_{\scriptscriptstyle r_0, B s} + a^{n,m}_{\scriptscriptstyle r_0, W s} \,\widehat{\mathcal{M}}_{\scriptscriptstyle r_0, W s} \right) \right) \ket{\tau} \, . \end{equation} The refinement move operators, together with the seed state, specify the general class of possible microscopic combinatorial structures, \emph{i.e.~} the specific portion of the Fock space, that we are exploring. The parameters $a$ control instead the relative weights of the different possible strings of refinement move operators that can be applied to the seed state. As we have said, we will determine them following some global considerations, making use of a maximum entropy principle, appealing to typicality of the state given some mild general constraints. Let us clarify this point further. From the perspective of macroscopic, large scale dynamics, making reference to the detailed combinatorial structure of the state does not seem a plausible approach. Instead, the state has to be determined as the most general state compatible with global constraints on topology, symmetry and semi-classicality. 
A statistical approach based on typicality and maximum ignorance seems to be more suitable, exploring as much as possible the space of states that we have described so far, and limiting the truncations to specific sectors of the GFT kinematical space to the ones that respect mild constraints associated with the physical regime we are interested in. In Section \ref{sec:eentropy} we have studied the properties of the reduced density matrix, showing a basis of states diagonalising it. We showed how the problem of computing the entropy boils down to a (weighted!) count of graphs, but we did not proceed further in the counting. We will now resume the computation of the entropy by first evaluating the contribution arising from the graph proliferation. The action of the refinement moves, via Wick's theorem, generates a linear superposition of states possessing different combinatorial structure, according to each particular sequence of Wick contractions. Let $G_n$ be the set of graphs that can be obtained via any sequence of $n$ refinement moves per shell component starting from the given seed. The particular function in \eqref{shell-state} will then specify a set of weights $w_n(\Gamma)$, the coefficients entering the diagonal of the reduced density matrix. The detailed dependence of these weights on the coefficients $a$ is not needed here, but could be computed, in principle. Let us stress that the assumption that at each step a given horizon component gets refined with the same number of vertices as the other two is here made mainly to simplify the notation and could be relaxed without affecting the final result of the entropy counting. We have already shown that, for a given boundary graph, the reduced density matrix of the horizon takes the form \eqref{red-spin}, where $A=r_0$ corresponds to the horizon shell, including all its three components. 
As argued, this form remains unchanged even when we trace away all the bulk degrees of freedom (both interior and exterior ones). Therefore, we can write the total normalized reduced density matrix of the shell for a given number $n$ of boundary vertices as \nopagebreak[3]\begin{equation}\label{red-tot-spin} \rho_{tot, r_0}^{(n)}= \sum_{\alpha=1}^{\mathcal N} w_n(\Gamma_\alpha) \rho^{(n)}_{r_0}(\Gamma_\alpha)\,, \end{equation} where ${\mathcal N}=\#G_n$ is the total number of shell graphs for a given number of vertices (which are $2n$, given the presence of two colours for the vertices), obtained through all the possible actions of the refinement operators. Here, $\rho^{(n)}_{r_0}(\Gamma_\alpha)$ is the reduced density matrix in \eqref{red-spin}. The presence of several graphs (at fixed topology) results in a combinatorial contribution to the entropy: \begin{equation} S_{comb} = - \sum_{\alpha=1}^{\mathcal N} w_n(\Gamma_\alpha) \log\left( w_n(\Gamma_\alpha) \right)\,. \end{equation} As explained in Section \ref{sec:class}, we determine the weights $w_n$ (and then, implicitly, the function $F_r$ in \eqref{shell-state}) by maximising this entropy. It is immediate to do so: the most disordered configuration is the one in which the weights are all equal: \begin{equation} w^{\max}_n(\Gamma) = \frac{1}{{\mathcal N}} ,\qquad S_{comb}^{\max} = \log\left( {\mathcal N} \right)\,. \end{equation} Therefore, the only thing left to determine is the size of the set of graphs that can be obtained from the given set of refinement moves. The counting of the number of graphs generated by our refinement moves can be easily performed using familiar techniques. Let us focus on a given layer, for instance the outer boundary of the horizon shell. Following the convention adopted so far, let us suppress for a moment the edges of colour 1, whose only role is to connect the horizon shell with the next one. 
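The maximisation step above, uniform weights giving $S_{comb}^{\max}=\log\mathcal{N}$, is the familiar extremal property of the Shannon entropy. As an illustrative cross-check (a minimal Python sketch, with $\mathcal{N}$ chosen arbitrarily), any other normalised weight assignment yields a smaller entropy:

```python
import math
import random

def shannon_entropy(w):
    """S = -sum_i w_i log(w_i) for a normalised weight vector w."""
    return -sum(p * math.log(p) for p in w if p > 0)

N = 64  # stands in for the number of graphs \mathcal{N} at fixed vertex number
uniform = [1.0 / N] * N
S_max = shannon_entropy(uniform)
assert abs(S_max - math.log(N)) < 1e-12  # S_comb^max = log(N)

# Randomly sampled normalised weight assignments never exceed S_max.
random.seed(0)
for _ in range(200):
    raw = [random.random() + 1e-9 for _ in range(N)]
    tot = sum(raw)
    w = [x / tot for x in raw]
    assert shannon_entropy(w) <= S_max + 1e-12
```

The sketch only illustrates the standard concavity argument; in the text the same statement fixes the weights $w_n^{\max}(\Gamma)=1/\mathcal{N}$.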
They do not play a role in the counting since the state is completely determined by the specification of the combinatorial pattern of the gluings of edges of colours $2, 3, 4$. This effectively reduces the counting problem to a counting of graphs obtained connecting 3-valent vertices (\emph{i.e.~} a lower dimensional problem, as should be expected). At first inspection, following the reasoning in \cite{Bonzom:2011zz}, one would expect the counting of the boundary states to give us the Catalan numbers (for D = 2, in fact), as what we are looking at is the refinement of a melonic graph with the insertion of melons. However, some care is due in the case of the shell boundaries. In fact, we have only two moves at our disposal, inserting loops with colours (2,3) and (3,4) only (see \eqref{refW}, \eqref{refB}). The insertion of loops with colours (2,4) would cause a crossing with an edge of colour $1$. Therefore the strategy of \cite{Bonzom:2011zz} has to be slightly adapted. Notice that the refinement moves can be seen as the insertion of melons on links of colour 2 and 4 only (this corresponds to the addition of tetrahedra incident on the same dual edge, \emph{i.e.~} the increase of the curvature around that edge). Therefore, our problem can be seen as the calculation of the number of ways in which we can insert, into a line of colour 2 (for instance), a string of pairs of nodes connected according to our rules. This is easily done using generating functions. Let $u$ be a real (or complex) parameter and let $G(u)$ be the generating function whose Taylor coefficients at zero, which we wish to determine, count the graphs obtained by acting with our moves; \emph{i.e.~} the $n$-th coefficient is the number of graphs with $n$ pairs of nodes. The basic building blocks will be 1PI graphs, which are then combined to get all the graphs. The equation for the 1PI part can be obtained by inspection of the diagrammatics. 
We can obtain a recursion relation by observing that, due to the peculiar connectivity, the only 1PI diagrams are the ones in which the first black vertex and the last white vertex are connected by a line of colour 3. If this were not the case, one could cut the graph into two disconnected components by removing the link between the white vertex connected with the first black vertex and the next black vertex. Let us call $\Sigma(u)$ the function that generates the 1PI graphs. We have \begin{equation} G(u) = \frac{1}{1+\Sigma(u)}\, . \end{equation} However, it is easy to realize that $\Sigma(u) = uG(u)$, as the graphs that we would be counting are determined by all the possible ways of inserting $n-1$ pairs of nodes in the link that would connect the first and last node of the 1PI graph. The $u$ prefactor keeps track of the initial pair to be inserted. Therefore: \begin{equation} G(u) = \frac{1}{1+u G(u)}\,. \end{equation} The solution that is regular at $u=0$ is \begin{equation} G(u) = \frac{-1 + \sqrt{1+4u}}{2u}\,, \end{equation} which, as indeed expected, is the generating function for the Catalan numbers, namely \begin{equation} C_{k} = \frac{(2k)!}{(k+1)! k!}\,. \end{equation} Therefore, the number of distinct graphs obtained with $n$ refinement moves acting on $l$ layers of the shell initial seed state is: \begin{equation}\label{G} {\mathcal N} = \left( \frac{(2n-2)!}{n! (n-1)!} \right)^l\,. \end{equation} As explained above, in the horizon entropy counting we will consider the most general case in which each layer is refined independently, \emph{i.e.~} we will set $l=3$. As will become clear in a moment, considering a subclass of states in which the refinement proceeds in a more synchronized way among the different layers (and which therefore also implements a stronger form of holography) simply affects the numerical coefficient in front of the logarithmic corrections, but not the leading linear term. 
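As a cross-check of the counting \eqref{G}, the Taylor coefficients produced by the recursion $G(u)=1/(1+uG(u))$ can be compared with the closed form $C_k$; up to the alternating sign fixed by this convention, their magnitudes are exactly the Catalan numbers. A short Python sketch (purely illustrative; the function names are ours):

```python
from math import comb

def catalan(k):
    """Closed form C_k = (2k)! / ((k+1)! k!)."""
    return comb(2 * k, k) // (k + 1)

def g_coeffs(n_max):
    """Taylor coefficients of G(u) = 1/(1 + u*G(u)):
    G + u*G^2 = 1  =>  g_0 = 1,  g_n = -sum_{i=0}^{n-1} g_i * g_{n-1-i}."""
    g = [1]
    for n in range(1, n_max + 1):
        g.append(-sum(g[i] * g[n - 1 - i] for i in range(n)))
    return g

# Magnitudes reproduce the Catalan numbers 1, 1, 2, 5, 14, 42, ...
assert [abs(c) for c in g_coeffs(5)] == [catalan(k) for k in range(6)]

def num_graphs(n, l=3):
    """Number of shell graphs from n refinement moves on l layers, Eq. (G)."""
    return catalan(n - 1) ** l

assert num_graphs(3) == 2 ** 3  # C_2 = 2 possibilities on each of three layers
```

The convolution in `g_coeffs` mirrors the 1PI decomposition in the text: each coefficient is built from all ways of splitting the remaining node pairs between the two factors of $uG^2$.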
Taking the large-$n$ limit of \eqref{G}, in order to meet our semi-classicality requirements, by means of the Stirling formula we get \nopagebreak[3]\begin{equation} S_{\mathrm{comb}}^{\max} = \log\left( {\mathcal N}\right)=2n l \log{(2)}-\frac{l}{2}\log {(n)}\,. \end{equation} It is interesting to note that this is not the first time that Catalan numbers enter the calculation of black hole entropy, see \cite{Davidson:2011eu}, albeit in a different context. We will come back to this later. \section{Full Entropy: from Macro to Micro} The graph counting quantifies the contribution of the combinatorial degrees of freedom to the horizon entropy. This is not the entire entropy of the reduced state. There is also another, more `geometrical', component which arises from the information attached to each vertex by the wavefunction: the degeneracy of the Hilbert space for a single vertex, $\Delta(a)$, for fixed values of the macroscopic quantities. This quantity measures the size of the space of wavefunctions compatible with our semi-classicality restrictions (and with solutions of the dynamical equations). Since the geometric and combinatorial components are independent, at this (kinematical) stage of the construction, the total horizon entropy is then \nopagebreak[3]\begin{equation} S(n,a) = \log\left({\mathcal N} \Delta(a)\right)=2nl\log{(2)}+\log(\Delta(a))-\frac{l}{2}\log {(n)}\,. \end{equation} This is the entropy of the reduced density matrix of the most typical state that we can construct with our special class of condensates, at fixed number of vertices\footnote{If the number of vertices were not fixed, an additional contribution should be added to include the effect of the dispersion in the number of vertices.}. The next step is to maximise this entropy at fixed value of the classical area $\mathcal{A}_{\va IH}$ of the horizon. 
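The leading, linear-in-$n$ behaviour of $S_{\mathrm{comb}}^{\max}$ is easy to probe numerically, since $\log\mathcal{N}$ can be evaluated exactly from \eqref{G}. A small Python sketch (illustrative only; here we test just the dominant $2nl\log 2$ term):

```python
from math import comb, log

def log_num_graphs(n, l=3):
    """Exact log(N) with N = C_{n-1}^l = ((2n-2)!/(n!(n-1)!))^l."""
    c = comb(2 * n - 2, n - 1) // n  # Catalan number C_{n-1}
    return l * log(c)

l = 3
ratios = [log_num_graphs(n, l) / (2 * n * l * log(2)) for n in (10, 100, 1000)]
# The ratio to the leading term 2*n*l*log(2) approaches 1 from below
# as n grows, the deficit being the subleading logarithmic correction.
assert ratios[0] < ratios[1] < ratios[2] < 1.0
assert abs(ratios[2] - 1.0) < 0.01
```

Note that `math.log` accepts arbitrarily large Python integers, so the exact Catalan numbers can be used directly without overflow.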
Consider the function: \begin{equation}\label{ent-funct} \Sigma(n,a,\lambda) = S(n,a) + \lambda({\mathcal{A}_{\va IH}} - 2 a n)\,, \end{equation} where $\lambda$ is a Lagrange multiplier imposing the area constraint. Necessary conditions for the maximisation of \eqref{ent-funct} are: \nopagebreak[3]\begin{eqnarray} \frac{\partial \Sigma}{\partial \lambda} &=& {\mathcal{A}_{\va IH}} - 2 a n = 0\,, \label{con1}\\ \frac{\partial \Sigma}{\partial n} & \approx& 2l\log (2 ) - 2\lambda a =0\,,\label{con2}\\ \frac{\partial \Sigma}{\partial a} & =& \frac{\Delta'(a)}{\Delta(a)} - 2n\lambda=0\label{con3}\,, \end{eqnarray} where the prime indicates derivative w.r.t. $a$. In \eqref{con2} we have dropped a term of order $1/n$, since we are in the large $n$ regime, as the semi-classicality conditions of small fluctuations of the horizon geometrical properties imply. These are three equations for the values of the three quantities $(n,a,\lambda)$ which maximise the entropy functional \eqref{ent-funct}. We could solve them explicitly only if we knew the expression of the single vertex Hilbert space degeneracy $\Delta$ as a function of $a$. In principle, its evaluation could be achieved by the imposition of the isolated horizon boundary condition, by computing the number of solutions to that equation that are compatible with the dynamics. However, both tasks, \emph{i.e.~} solving the isolated horizon boundary condition and the equations of motion defining the microscopic dynamics, are highly non-trivial and currently out of our reach, as we have already emphasised. Alternatively, we could turn the argument around and use the three relations \eqref{con1}, \eqref{con2}, \eqref{con3} to get an explicit expression of $a$ and $\Delta(a)$ as functions of $ \mathcal{A}_{\va IH}$ and $\lambda$. 
At this point, then, we could require consistency with a further semi-classical property of a black hole horizon, namely its thermality, in order to determine the value of the Lagrange multiplier $\lambda$ and thus remove the last ambiguity left in our entropy calculation. This gives a different twist to this discussion, turning it into the inference of how certain microscopic quantities should look if we want them to be compatible with macroscopic observations. Following this strategy, we obtain \nopagebreak[3]\begin{eqnarray} &&a=\frac{l\log(2)}{\lambda}\label{a}\\ &&\Delta=c_0\exp{(2\lambda a n)}=c_0\exp{\left(\lambda {\mathcal{A}_{\va IH}}\right)}\label{Delta}\,, \end{eqnarray} where $c_0$ is an integration constant left unspecified for the moment. The horizon entropy we derive is then \nopagebreak[3]\begin{equation} S(\mathcal{A}_{\va IH})\approx 2\lambda {\mathcal{A}_{\va IH}}-\frac{l}{2} \log\left(\frac{\mathcal{A}_{\va IH}}{\ell_{\scriptscriptstyle P}^2}\right)\,. \end{equation} Notice that the solution \eqref{Delta} matches the expectation that the dimension of the Hilbert space for the wavefunction at fixed plaquette area should be finite once we implement the semi-classicality conditions listed in Section \ref{sec:class}. In particular, this would \begin{enumerate} \item exclude degenerate (zero-volume) configurations (which would be indeed incompatible with a reasonable semi-classical limit); \item impose a finite volume of the shell, again consistent with the semi-classical intuition. \end{enumerate} In other words, whenever $\Delta(a)$ diverges we should not worry about possible problems in the calculation of the entropy, since the corresponding geometry could hardly be interpreted as a semi-classical one in the first place. We can go further in our attempt to determine the properties of states that match the large scale behaviour of classical gravity. 
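The algebra behind \eqref{a} and \eqref{Delta} is simple enough to verify numerically: inserting $a = l\log 2/\lambda$ and $\log\Delta = 2\lambda a n$ (taking $c_0=1$) back into $S(n,a)$ indeed reduces it to $2\lambda\mathcal{A}_{IH} - \frac{l}{2}\log n$. A minimal Python sketch, with illustrative values for $\lambda$ and $n$:

```python
from math import log

l, lam, n = 3, 0.125, 10**6  # layers, Lagrange multiplier, vertex number

# Solution (a) of the stationarity condition (con2): a = l*log(2)/lam
a = l * log(2) / lam
assert abs(2 * l * log(2) - 2 * lam * a) < 1e-12  # (con2), up to the dropped O(1/n)

A = 2 * a * n                # area constraint (con1): A_IH = 2*a*n
log_Delta = 2 * lam * a * n  # from (con3) at fixed n, with c_0 = 1, Eq. (Delta)

# S(n,a) = 2*n*l*log(2) + log(Delta) - (l/2)*log(n) ...
S = 2 * n * l * log(2) + log_Delta - (l / 2) * log(n)
# ... collapses to 2*lam*A - (l/2)*log(n), since 2*n*l*log(2) = lam*A = log(Delta)
assert abs(S - (2 * lam * A - (l / 2) * log(n))) < 1e-6 * S
```

The key identity is that $2nl\log 2 = 2n\lambda a = \lambda\mathcal{A}_{IH}$ once \eqref{a} and \eqref{con1} hold, so the combinatorial and geometrical contributions each supply one factor of $\lambda\mathcal{A}_{IH}$.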
Since the semi-classical limit involves large values of $n$, the area constraint \eqref{con1} requires $a$ to be small. In the limit of $a\rightarrow 0$, the IH boundary condition fixes the holonomy around each radial link to be flat; this can be achieved only if the spin labels of the tangent links are 0. Therefore, in the limit $a\rightarrow 0$ the wavefunction should be a delta function peaked at $j_I=0$, which means $\Delta(0)\sim 1$. As soon as $a>0$, $\Delta (a)$ should grow\footnote{Notice that these arguments about semi-classical limits are used only to constrain the functional dependence of $\Delta(a)$ on $a$; there is no requirement that $j_l$ actually ever take the zero value, which may be problematic from the quantum geometric point of view.}. These expectations are matched by the Taylor expansion of the solution \eqref{Delta} around $a=0$, if we fix the integration constant $c_0=1$, namely \nopagebreak[3]\begin{equation} \Delta(a)\sim 1+\lambda a\,, \label{eq:densityofstates} \end{equation} for small $a$. The free parameter $\lambda$ can now be fixed by requiring consistency with the semi-classical thermodynamical condition relating the derivative of the entropy w.r.t. a local notion of energy at the horizon for fixed $n$ to the (inverse) Unruh\footnote{A geometric notion of temperature \cite{Pranzetti:2013lma} can be associated to a quantum IH by demanding the Kubo--Martin--Schwinger condition \cite{Haag:1992hx} to be satisfied for a sub-algebra of the holonomy-flux $*$-algebra of LQG. In the large IH area limit this notion of temperature coincides with the Unruh one.} $1/T_{\scriptscriptstyle U}=\beta_{\scriptscriptstyle U}=2\pi / (\ell_{\scriptscriptstyle P}^2\kappa) =2\pi \ell/\ell_{\scriptscriptstyle P}^2$. In the previous expression, $\ell$ is the proper distance of a local stationary observer and $\kappa=1/\ell$ is its surface gravity. 
For this purpose, we can use the local notion of energy introduced in \cite{Frodden:2011eb} for isolated horizons, namely \nopagebreak[3]\begin{equation} {\mathcal E}_{\scriptscriptstyle IH}=\frac{\mathcal{A}_{\va IH}}{8\pi\ell}\,. \end{equation} Dropping a subdominant term ($\propto 1/\mathcal{A}_{\va IH}$), which becomes immaterial for large enough areas, we get \nopagebreak[3]\begin{equation} \beta_{\scriptscriptstyle U}=\frac{2\pi \ell}{\ell_{\scriptscriptstyle P}^2}=\frac{\partial S}{\partial {\mathcal E}_{\scriptscriptstyle IH}}=8 \pi \ell \frac{\partial S}{\partial \mathcal{A}_{\va IH}} \approx{16 \pi \ell} \lambda\,, \end{equation} which leads, finally, to \nopagebreak[3]\begin{equation} \lambda \approx \frac{1}{8\ell^2_P}\,. \end{equation} Therefore, consistency with the IH thermodynamical properties yields an entropy \nopagebreak[3]\begin{equation}\label{entropy} S(\mathcal{A}_{\va IH})\approx \frac{\mathcal{A}_{\va IH}}{4\ell_{\scriptscriptstyle P}^2}-\frac{3}{2} \log\left(\frac{\mathcal{A}_{\va IH}}{\ell_{\scriptscriptstyle P}^2}\right) \end{equation} whose leading term reproduces the Bekenstein--Hawking formula; the numerical coefficient in front of the sub-leading logarithmic correction comes from explicitly setting $l=3$ as argued above, and it matches the result of previous calculations in LQG \cite{Kaul:2000kf, Livine:2005mw, Engle:2011vf} as well as those performed through CFT techniques \cite{Carlip:2000nv}. \section{Conclusions and final remarks} We now briefly summarise the results obtained and observations made in this paper. \ We have constructed, within the full quantum gravity formalism of group field theory, a class of quantum states that can be argued to describe (at least some key properties of) continuum spherically symmetric geometries. We have done this by making use of both its quantum geometric aspects, shared with loop quantum gravity, and its combinatorial aspects, shared with random tensor models. 
We have then imposed additional conditions on such states that support an interpretation of them as containing horizons, and thus as describing black hole geometries. For such states, we have then identified the microscopic degrees of freedom contributing to the horizon entropy and computed the latter explicitly. Under appropriate semi-classicality restrictions and the assumption that the entropy is maximised, we have recovered both a general area law and the exact Bekenstein--Hawking value. The fact that our states are realistic quantum states in the full theory, involving also a highly non-trivial sum over graphs, including arbitrarily refined ones, is one main improvement over the existing derivations of the same results in the loop quantum gravity literature. Let us now mention a few additional interesting aspects of our analysis and results. \ The entropy result \eqref{entropy} is completely independent of the Barbero--Immirzi parameter $\gamma$. This is a striking consequence of the GFT formalism in its Fock representation and the way in which it allows us to work with quantum gravity states. More precisely, the GFT fundamental field operators \eqref{c-field}, encoding the state geometrical data in the collective wavefunction $\sigma_r$ for a given shell, represent the key point of departure from canonical LQG. They allow us to introduce a well-defined number operator in the theory and to replace area eigenstates and eigenvalues for the quantum horizon with condensate states and \emph{area expectation values} on a single vertex state (which is the same for each fundamental block due to the condensate hypothesis). In this way, while $\gamma$ still enters the value of $a$, it completely disappears from the final expression of the entropy.
Another way to understand the same important point is that this is a consequence of our choice of states in a second quantised formalism (reasonably simple condensates), allowing us to perform a microscopic counting of degrees of freedom while not using eigenstates of the total area operator. The latter, customarily used in LQG, besides being dubious as representatives of semi-classical black hole states, immediately introduce a dependence on the Barbero--Immirzi parameter, since this appears inevitably in the area spectrum. This also means that obtaining our results, including this intriguing independence from the Barbero--Immirzi parameter, has been greatly facilitated by the GFT formalism, with its convenient organisation of spin network states into a Fock space (with the mentioned existence of a number operator), and the tools it provides for handling sums over different triangulations/graphs. The reason for this independence from the Barbero--Immirzi parameter can also be traced back to the appearance of \eqref{Delta} (see the related discussion in the text). It is not just the area value that matters, but also the single vertex `density of states'. The calculations that we have given above suggest that this should lead to terms compensating $\gamma$ in the physical quantities. They also suggest that LQG calculations might be overlooking an essential step, signalled by the appearance of the Barbero--Immirzi parameter in expressions for quantities that should be independent of it, at least at the classical level. The evaluation of the expectation values of operators on properly defined semi-classical states (which are not necessarily eigenvectors of the area operator for some fixed graph structure) seems to be a critical calculation in this respect. \ The graph basis provides, in this case, an orthonormal basis for the reduced density matrix.
This, together with the holographic property of our states, allows us to compute the horizon entanglement entropy exactly and prove the result \nopagebreak[3]\begin{equation} S_{\mathrm{von\ Neumann}}\equiv -\mathrm{Tr}{(\rho_{tot, r_0}^{(n)}\log(\rho_{tot, r_0}^{(n)}))}=\log(\mathcal N(n))\equiv S_{\mathrm{Boltzmann}}\,, \end{equation} which then reduces to a relatively simple counting exercise. As we have already mentioned, the particular class of states that we have chosen, obtained by what are very close to being melonic refinements of seed states, has a very specific growth rate with the number of vertices. This growth has also been considered in \cite{Davidson:2011eu}, in a different context and with a different picture in mind. Despite all the limitations that have been discussed, these results point to a specific behaviour for physical quantum gravity states describing black holes. Together with the observation about the disappearance of the Barbero--Immirzi parameter, related to the density of physical states per vertex, these facts suggest that the superposition of microscopic \emph{kinematical} configurations in the physical states cannot be neglected in such calculations, and it might also have to be severely constrained. \ Following up on the last point, the dynamics has essentially not yet been used. Our results can be seen as general, therefore, only provided that the states we have used turn out to be good representatives of the true microstates of physical black holes, not only encoding in a more precise manner the various properties that we expect from such particular spacetime geometries (at least semi-classically) and that we have imposed only implicitly, but also solving at least approximately the microscopic quantum dynamics of the theory.
Given the experience in the cosmological setting, one should expect that the imposition of the GFT equations of motion will result in some non-linear (set of) effective equation(s) on the wavefunction and on the coefficients of the linear combinations defining our quantum states in terms of graphs. From the discussion above, it might be relatively easy, once even only a qualitative understanding of the structure of some of the solutions to the equations is achieved, to check the compatibility of the dynamics with macroscopic properties, since quantities such as the rate of growth of graphs with the number of quanta are already rather constrained. We leave this type of analysis for future work, though. The maximum entropy principle that we have used may or may not itself be compatible with the full quantum dynamics. At this stage, we use it to fix the shape of the states given the available information and to infer consequences about the nature of the states, but nothing more. It goes without saying that it would be even more significant if we could recover this reasoning as an endpoint of a calculation that starts from the microscopic equations, especially given the deep relations between gravity and thermodynamics. \addcontentsline{toc}{section}{Acknowledgement} \section*{Acknowledgement} Research at Perimeter Institute for Theoretical Physics is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI.
\section{Introduction} Estimating treatment effects from observational data is difficult because ``treated" and ``control" samples typically differ on many characteristics besides treatment status. For example, consumers of nutritional supplements may be wealthier or more health-conscious than those not taking supplements. One popular tool for adjusting for such baseline imbalances is Inverse Propensity Weighting (IPW) \cite{ipw_review, ATTViaIPW, HiranoImbensRidder}. This technique re-weights treated and untreated samples to be similar along all observed characteristics and then compares outcomes in the weighted samples. The crucial assumption underlying this approach is that the weighted samples do not systematically differ along important \textit{unobserved} characteristics. This ``unconfoundedness" assumption is untestable, and often implausible. This paper studies how much can be learned when unconfoundedness does not hold, but one can bound the plausible degree of unobserved confounding. In particular, given a ``sensitivity assumption" controlling the degree of selection, we aim to answer two questions:\\ \begin{enumerate}[label=(\arabic*),topsep=0pt,itemsep=-1ex] \item \emph{Sensitivity analysis}. Can we bound how much the IPW point estimate from our ``primary analysis" might change if unobserved confounding were properly accounted for? \label{sensitivity_analysis} \item \emph{Partial identification}. Can we characterize the most informative bounds that could possibly be obtained from the sensitivity assumption with even an infinite amount of observational data? \label{partial_identification}\\ \end{enumerate} The specific sensitivity assumption used in this paper is the ``marginal sensitivity model" of Tan \cite{tan2006}, which extends the famous Rosenbaum model \cite{RosenbaumDesign, Rosenbaum1987, rosenbaum2002} from matched-pairs studies to IPW. 
This sensitivity assumption is quite popular in causal inference; see \cite{kallus2018interval, kallus2020confoundingrobust, confounding_robust_policy_improvement, kallus_zhou2020, causal_rule_ensemble, rosenman2021designing, rosenman2020combining, soriano2021interpretable, tan2006, zsb2019} for an incomplete list of references. As we will see, it lends itself to computationally-efficient sensitivity analyses which are simple enough to explain to any practitioner comfortable with IPW. Recently, Zhao, Small, and Bhattacharya \cite{zsb2019} (hereafter ZSB) introduced an interpretable IPW sensitivity analysis for the marginal sensitivity model. Their approach, based on linear fractional programming, has been largely responsible for the recent resurgence of interest in this sensitivity assumption. However, they did not answer question \ref{partial_identification}, leaving open the possibility that more informative bounds could be obtained from the same data and assumptions. Indeed, there are no existing partial identification results for the marginal sensitivity model which can be used to benchmark a sensitivity analysis. The first main contribution of this paper is to provide a complete answer to the partial identification question \ref{partial_identification}. We derive closed-form expressions for the largest and smallest values of the ``usual" estimands (e.g. average treatment effect) compatible with the marginal sensitivity assumption. These expressions show that the ZSB bounds are essentially always too conservative because they ignore an infinite collection of constraints implied by the distribution of observed characteristics. Tan \cite{tan2006} also identified these constraints, but deemed it intractable to incorporate them all in a sensitivity analysis. In contrast, our partial identification results show that this collection can actually be reduced to a \textit{single} constraint which is easy to incorporate. 
Our second main contribution is to introduce a new IPW sensitivity analysis, which we call the \textit{quantile balancing} method. The method has several desirable features:\\ \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item The quantile balancing sensitivity interval is always a subset of the ZSB interval. Outside of knife-edge cases, it is a strict subset. \item When the outcome's conditional quantiles can be estimated consistently, the bounds converge to the best possible bounds that can be obtained under the marginal sensitivity model. In the language of partial identification, quantile balancing is ``sharp." \item Under standard assumptions for IPW inference, the bounds can be converted into confidence intervals using the same bootstrap scheme proposed in \cite{zsb2019}. \item When the estimated quantiles are inconsistent, the sensitivity interval is too wide rather than too narrow and the confidence intervals over-cover rather than under-cover. In other words, our intervals are guaranteed to be valid, regardless of the quality of the additional input we demand. \\ \end{enumerate} We apply the quantile balancing method in several simulated examples and one real-data application, and find that it can substantially tighten the ZSB bounds when the covariates are good predictors of the outcome. One shortcoming we will mention is that our statistical guarantees assume the outcome is continuously distributed. This seems to be inevitable as our sensitivity analysis relies on quantile regression. Since our partial identification results also apply to discrete outcomes, we conjecture that the quantile balancing procedure could be modified to give sharp bounds in that setting too. \subsection{Setting and background} We consider the Neyman-Rubin potential outcomes model with a binary treatment \cite{neyman, rubin1974}. We observe i.i.d.
samples $(X_i, Y_i, Z_i)$ from a distribution $P$, where $X_i \in \mathcal{X} \subseteq \mathds{R}^d$ is a vector of covariates, $Z_i \in \{ 0, 1 \}$ is a binary treatment assignment indicator, and $Y_i \in \mathds{R}$ is a real-valued outcome. We assume that $Y_i = Z_i Y_i(1) + (1 - Z_i) Y_i(0)$ for some unobserved potential outcomes $(Y_i(0), Y_i(1))$. The goal is to use the observed data to draw inferences about a causal estimand $\psi_0$. For the purposes of exposition, we initially focus on the counterfactual means $\psi_{\textup{T}} = \mathbb{E}[ Y(1)]$ and $\psi_{\textup{C}} = \mathbb{E}[ Y(0)]$, although the examples of most practical interest are the average treatment effect (ATE) and the average treatment effect on the treated (ATT). \begin{align*} \psi_{\textup{ATE}} &= \mathbb{E}[ Y(1) - Y(0)] \\ \psi_{\textup{ATT}} &= \mathbb{E}[ Y(1) - Y(0) | Z = 1]. \end{align*} With minor modification, our identification results can also be applied to more complex estimands, including weighted average treatment effects and policy values of the type considered in \cite{athey2017efficient, confounding_robust_policy_improvement}. However, we do not present those extensions in this paper. Under the unconfoundedness assumption $(Y(0), Y(1)) \, \rotatebox[origin=c]{90}{$\models$} \, Z \mid X$, all of the above quantities can be consistently estimated using inverse propensity weighting. IPW estimators work by reweighting the observed sample by (some function of) the propensity score $e(x) := P(Z = 1 | X = x)$. For example, if the estimand of interest is $\psi_{\textup{T}}$, the (stabilized) IPW estimate is given by (\ref{ipw1}). \begin{align} \hat{\psi}_{\textup{T}} = \frac{\sum_{i = 1}^n Y_i Z_i / \hat{e}(X_i)}{\sum_{i = 1}^n Z_i/\hat{e}(X_i)}. \label{ipw1} \end{align} Here, $\hat{e}(X_i)$ is an estimate of the propensity score $e(X_i)$. An unstabilized version of $\hat{\psi}_{\textup{T}}$ which replaces the denominator in (\ref{ipw1}) by $n$ is also common. 
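For concreteness, the stabilized estimator (\ref{ipw1}) can be computed in a few lines; the following sketch (the function name is ours, purely illustrative) assumes the fitted propensity scores $\hat{e}(X_i)$ are already available:

```python
def ipw_treated_mean(y, z, e_hat):
    """Stabilized IPW estimate of psi_T = E[Y(1)], eq. (ipw1):
    a weighted mean of treated outcomes with weights 1/e_hat(X_i)."""
    num = sum(yi * zi / ei for yi, zi, ei in zip(y, z, e_hat))
    den = sum(zi / ei for zi, ei in zip(z, e_hat))
    return num / den

# With a constant fitted propensity score, the estimate reduces to the
# plain average of the treated outcomes.
print(ipw_treated_mean([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 0], [0.5] * 4))  # 2.0
```

The unstabilized variant simply replaces the denominator by the sample size $n$.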
Related estimators for the other estimands considered will be denoted by $\hat{\psi}_{\textup{C}}, \hat{\psi}_{\text{ATE}}$, and $\hat{\psi}_{\text{ATT}}$. See the articles by Austin and Stuart \cite{ipw_review} or Hirano and Imbens \cite{ATTViaIPW} for their exact formulas. We will assume some conditions which are required for identification and estimation under unconfoundedness: $0 < e(X) < 1$ almost surely and $\mathbb{E}[|Y|] < \infty$. However, we will not assume unconfoundedness. \section{The marginal sensitivity model} \label{section:msm} The marginal sensitivity model introduced by Tan \cite{tan2006} is a relaxation of unconfoundedness which has been applied in many causal inference problems. This one-parameter sensitivity assumption allows for the existence of unobserved confounders $U$, but limits the degree of selection bias that can be attributed to these confounders. \begin{manualassumption}{$\Lambda$} \label{assumption:msm} \textup{\textbf{(Marginal sensitivity model)}}\\ There exists a vector of unmeasured confounders\footnote{We have presented a slightly different version of the marginal sensitivity model from the one given in \cite{tan2006} and \cite{zsb2019}, which requires the unobserved confounder $U$ to be one or both of the potential outcomes. All of the identification results in this paper can be shown to hold under such an assumption.} $U$ that, if measured, would lead to unconfoundedness: $(Y(0), Y(1)) \, \rotatebox[origin=c]{90}{$\models$} \, Z \mid (X, U)$. However, within each stratum of the observed covariates, measuring $U$ can only change the odds of treatment by at most a factor of $\Lambda$, i.e. if we set $e_0(x, u) := P(Z = 1 | X = x, U = u)$, then (\ref{or_bound}) holds with probability one. 
\begin{align} \Lambda^{-1} \leq \frac{e_0(X, U)/[1 - e_0(X, U)]}{e(X)/[1 - e(X)]} \leq \Lambda \label{or_bound} \end{align} \end{manualassumption} To avoid confusion between $e_0$ and $e$, we will follow \cite{kallus_zhou2020} and refer to $e_0$ as the ``true propensity score" and $e$ as the ``nominal propensity score." Like the famous Rosenbaum model \cite{RosenbaumDesign, Rosenbaum1987, rosenbaum2002} for matched-pairs studies, Assumption \ref{assumption:msm} controls the degree of unobserved confounding with a single parameter. When $\Lambda = 1$, measuring additional confounders cannot change the odds of treatment at all, i.e. treatment assignment is unconfounded. As $\Lambda$ increases, stronger forms of confounding are allowed. For advice on how to choose this parameter, see Hsu and Small \cite{hsu_small2013}. We remark that the marginal sensitivity assumption is ``nonparametric" in the following sense: no assumptions are needed about how $e_0(x, u)$ depends on $u$. The dimension of the vector $U$ does not even need to be specified. To see how Assumption \ref{assumption:msm} can be used for sensitivity analysis, begin by considering how an oracle statistician who observes the confounders $U_i$ might estimate $\psi_{\textup{T}}$. One strategy would be to use the IPW estimator (\ref{oracle}), which is consistent under weak assumptions. \begin{align} \hat{\psi}_{\textup{T}}^* = \frac{\sum_{i = 1}^n Y_i Z_i / e_0(X_i, U_i)}{\sum_{i = 1}^n Z_i / e_0(X_i, U_i)} \label{oracle}. \end{align} In reality, $\{ U_i \}_{i \leq n}$ are not observed, but under Assumption \ref{assumption:msm}, it is possible to \textit{bound} the true propensity scores $e_0(X_i, U_i)$. In particular, the vector $(e_0(X_1, U_1), \cdots, e_0(X_n, U_n))$ must belong to the set $\mathcal{E}_n(\Lambda)$ defined in (\ref{msm_set}). 
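The odds-ratio bound (\ref{or_bound}) translates, at each covariate value, into an explicit interval of admissible propensity scores. A minimal sketch (the helper name is ours, not from any package):

```python
def propensity_bounds(e, lam):
    """Interval of putative propensity scores e_bar compatible with the
    marginal sensitivity model at nominal score e: the odds of e_bar must
    lie within a factor lam of the nominal odds e/(1-e), eq. (or_bound)."""
    odds = e / (1.0 - e)
    lo_odds, hi_odds = odds / lam, odds * lam
    return lo_odds / (1.0 + lo_odds), hi_odds / (1.0 + hi_odds)

# lam = 1 recovers unconfoundedness: the interval collapses to {e}.
print(propensity_bounds(0.5, 1.0))  # (0.5, 0.5)
# lam = 2 at e = 0.5: odds in [1/2, 2], so e_bar in [1/3, 2/3].
print(propensity_bounds(0.5, 2.0))
```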
\begin{align} \mathcal{E}_n(\Lambda) = \left\{ \bar{e} \in \mathds{R}^n \, : \, \Lambda^{-1} \leq \frac{\bar{e}_i/(1 - \bar{e}_i)}{e(X_i)/[1 - e(X_i)]} \leq \Lambda \right\} \label{msm_set} \end{align} ZSB proposed bounding the oracle statistician's IPW estimator (\ref{oracle}) with the largest and smallest IPW estimates that can be obtained using putative propensity weights in $\mathcal{E}_n(\Lambda)$. \begin{align} [\hat{\psi}_{\textup{T,ZSB}}^-, \hat{\psi}_{\textup{T,ZSB}}^+] = \left[ \min_{\bar{e} \in \mathcal{E}_n(\Lambda)} \frac{\sum_{i = 1}^n Y_i Z_i / \bar{e}_i}{\sum_{i = 1}^n Z_i / \bar{e}_i}, \max_{\bar{e} \in \mathcal{E}_n(\Lambda)} \frac{\sum_{i = 1}^n Y_i Z_i / \bar{e}_i}{\sum_{i = 1}^n Z_i / \bar{e}_i} \right] \label{zsb_bounds}. \end{align} Since the interval (\ref{zsb_bounds}) contains the consistent estimator $\hat{\psi}_{\textup{T}}^*$, the distance between the true estimand $\psi_{\textup{T}}$ and the sensitivity interval must tend to zero. ZSB show that this conclusion holds even if the nominal propensity score $e(x)$ is replaced by a suitably consistent estimate $\hat{e}(x)$ in the definition of $\mathcal{E}_n(\Lambda)$, which is important for practical applications as $e(x)$ is typically not known in observational studies. This simple idea is intuitive enough to explain to any practitioner who is comfortable with IPW and has been extended to estimands other than $\psi_{\textup{T}}$. ZSB also consider $\psi_{\text{ATE}}$ and $\psi_{\text{ATT}}$, and related work by \cite{kallus2018interval, kallus2020confoundingrobust, confounding_robust_policy_improvement, causal_rule_ensemble} takes the idea substantially further. Tan \cite{tan2006} applied a similar idea to a different propensity-score-based estimator, and \cite{AronowLeeInterpretable, MiratrixEtAl, tudballZhaoEtAl2019interval} used this approach in survey sampling problems. 
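The extrema in (\ref{zsb_bounds}) are linear fractional programs, and their solutions have a simple threshold structure: at the maximum, units with the largest outcomes receive the largest admissible weight $1/\bar{e}_i$, and symmetrically at the minimum. A small sketch exploiting this structure (our own illustrative implementation, not ZSB's code):

```python
def zsb_interval(y, z, e, lam):
    """ZSB sensitivity interval (zsb_bounds) for psi_T: extremize the
    stabilized IPW estimate over per-unit weights w_i = 1/e_bar_i with
    e_bar_i ranging over the odds-ratio box E_n(Lambda)."""
    # 1/e_bar - 1 = (1 - e_bar)/e_bar lies in [(1-e)/(lam*e), lam*(1-e)/e]
    data = sorted((yi, 1.0 + (1.0 - ei) / (lam * ei), 1.0 + lam * (1.0 - ei) / ei)
                  for yi, zi, ei in zip(y, z, e) if zi == 1)
    ys = [d[0] for d in data]
    lo = [d[1] for d in data]
    hi = [d[2] for d in data]
    n = len(data)

    def extremum(maximize):
        best = None
        for k in range(n + 1):
            # when maximizing, the n - k largest outcomes get the high weight
            w = (lo[:k] + hi[k:]) if maximize else (hi[:k] + lo[k:])
            val = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
            if best is None or (val > best if maximize else val < best):
                best = val
        return best

    return extremum(False), extremum(True)

# lam = 1 collapses the interval to the point estimate; lam > 1 widens it.
print(zsb_interval([1.0, 2.0, 3.0], [1, 1, 1], [0.5] * 3, 1.0))  # (2.0, 2.0)
print(zsb_interval([1.0, 2.0, 3.0], [1, 1, 1], [0.5] * 3, 2.0))  # (1.75, 2.25)
```

The $O(n^2)$ scan over split points is for clarity; prefix sums reduce it to $O(n \log n)$.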
\subsection{Sharpness and data-compatibility} \label{section:sharpness} The aforementioned works do not address the asymptotic optimality of the interval $[\hat{\psi}_{\textup{T,ZSB}}^-, \hat{\psi}_{\textup{T,ZSB}}^+]$. Does it converge to a limiting set containing all values of $\psi_{\textup{T}}$ compatible with Assumption \ref{assumption:msm} and no others? Sensitivity analyses with this asymptotic optimality property are called ``sharp" in the partial identification literature.\footnote{The partial identification literature more typically uses ``sharp" to refer to the limiting set itself and uses ``estimators of sharp bounds" \cite{FanPark2010}, ``asymptotically sharp" \cite{semenova2020better}, or similar to refer to the corresponding finite-sample estimates. We have found it clearer and more concise to use ``sharp" to refer to a consistent estimate of a sharp limiting set.} Sharpness is important for interpreting the results of a sensitivity analysis. If the primary analysis finds a positive treatment effect but the bounds associated with a very small value of $\Lambda$ include zero, one might be tempted to conclude that the primary analysis is sensitive to unobserved confounding. However, unless the bounds are known to be sharp, this inference is not warranted even in large samples. It could be the case that the bounds were just too conservative. Despite its attractive features, the ZSB sensitivity analysis is not sharp. It can even be arbitrarily conservative. To illustrate this, we only need to consider a very simple joint distribution of observables: \begin{align} \begin{split} X &\sim \mathcal{N}(0, \sigma^2)\\ Z \mid X &\sim \textup{Bernoulli}( \tfrac{1}{2})\\ Y \mid X, Z &\sim \mathcal{N}(X, 1). \label{example} \end{split} \end{align} Suppose that a data analyst receives i.i.d. samples $(X_i, Y_i, Z_i)$ from this distribution and is willing to posit that Assumption \ref{assumption:msm} is satisfied with $\Lambda = 2$. 
Let $\phi(\cdot)$ and $z_{\tau}$ denote the density and $\tau$-th quantile of the standard normal distribution, respectively. The following proposition\footnote{The proof of Proposition \ref{proposition:zsb_not_sharp} and all other results in this paper can be found in Appendix \ref{appendix:proofs}.} writes the set of values of $\psi_{\textup{T}}$ compatible with Assumption \ref{assumption:msm} explicitly in terms of these quantities and shows that this ``partially identified" set does not coincide with the ZSB interval. \begin{proposition} \label{proposition:zsb_not_sharp} Let $(X_i, Y_i, Z_i)$ be i.i.d. samples from the joint distribution (\ref{example}). \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item The set of values of $\psi_{\textup{T}}$ compatible with the bound $\Lambda = 2$ and the distribution (\ref{example}) is the interval $[ \pm \tfrac{3}{4} \phi( z_{2/3}) ] \approx [\pm 0.27]$.\label{item:ExampleIdentifiedSet} \item However, with probability one, $[ \pm 0.27 \sqrt{\sigma^2 + 1} ] \subseteq [ \hat{\psi}_{\textup{T,ZSB}}^-, \hat{\psi}_{\textup{T,ZSB}}^+]$ for all large $n$.\label{item:ZSBLimitSet} \end{enumerate} \end{proposition} The precise meaning of \ref{item:ExampleIdentifiedSet} is the following: for any $\psi \in [\pm \tfrac{3}{4} \phi(z_{2/3})]$, it is possible to construct a distribution $Q$ for the full data $(X, Y(0), Y(1), Z, U)$ which marginalizes to (\ref{example}), satisfies Assumption \ref{assumption:msm} with $\Lambda = 2$, and has $\mathbb{E}_{Q}[ Y(1)] = \psi$. On the other hand, for any $\psi$ not in this interval, it is impossible to construct such a distribution. Proposition \ref{proposition:zsb_not_sharp} implies that the ZSB interval typically includes many values of $\psi$ which cannot possibly be reconciled with the data. The explanation for this conservatism is that the odds-ratio bound (\ref{or_bound}) does not capture all of the restrictions on the true propensity score $e_0$. 
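The numerical value in part \ref{item:ExampleIdentifiedSet} is easy to verify with the standard library (a check of ours, for illustration):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal
z_23 = nd.inv_cdf(2.0 / 3.0)       # z_{2/3}
half_width = 0.75 * nd.pdf(z_23)   # (3/4) * phi(z_{2/3})
print(round(half_width, 4))        # roughly 0.27
```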
Additional information can be found in the marginal distribution of the \textit{observed} characteristics. For example, consider the putative propensity score (\ref{putative}). \begin{align} \bar{e}(x, u) = \left\{ \begin{array}{ll} 1/3 &\text{if } x < 0\\ 2/3 &\text{if } x \geq 0 \end{array} \right. \label{putative} \end{align} This certainly satisfies the odds-ratio bound (\ref{or_bound}) --- and is therefore a possible value of $\bar{e}$ in the problem (\ref{zsb_bounds}) --- but it could not possibly be the true propensity score $e_0$. If it were, we would observe $\mathbb{P}(Z = 1 | X \geq 0) = \tfrac{2}{3}$, while (\ref{example}) demands that $\mathbb{P}(Z = 1 | X \geq 0) = \tfrac{1}{2}$. In short, this choice of $\bar{e}$ is allowed in the domain of the ZSB optimization problem but is incompatible with the distribution of observed data. This example suggests that it should be possible to improve upon the ZSB bounds by only optimizing over the subset of $\mathcal{E}_n(\Lambda)$ which is ``data compatible." However, this is easier said than done, because the observed data distribution actually imposes an infinite number of constraints on putative propensity scores $\bar{e}$. For example, the true $e_0$ ``balances" all integrable functions $h : \mathcal{X} \rightarrow \mathds{R}$: \begin{align} \begin{split} \mathbb{E}[ h(X) Z / e_0(X, U)] & = \mathbb{E}[h(X) \mathbb{E}[Z | X, U] / e_0(X, U)] \\ & = \mathbb{E}[h(X) e_0(X, U) / e_0(X, U)] \\ & = \mathbb{E}[h(X)]. \label{population_balance_h} \end{split} \end{align} Every such $h$ gives rise to a testable ``balancing constraint" (\ref{balance_h}) which can be used to rule out incompatible values of $\bar{e}$.
\begin{align} \frac{\sum_{i = 1}^n h(X_i) Z_i / \bar{e}_i}{\sum_{i = 1}^n Z_i / \bar{e}_i} \approx \mathbb{E}[ h(X)] \label{balance_h} \end{align} In other words, any sharp sensitivity analysis must contend with an infinite number of constraints, which is typically computationally intractable \cite{BeresteanuEtAl, DaveziesDHault}. Previous works have considered relaxing these constraints by balancing only a finite set of functions \cite{tan2006, tudballZhaoEtAl2019interval}, but the resulting bounds are generally not sharp. \section{Partial identification results} \label{section:partial_identification} In this section, we show that at the \textit{population} level, it is possible to characterize the sharp bounds for $\psi_0 \in \{ \psi_{\textup{T}}, \psi_{\textup{C}}, \psi_{\textup{ATT}}, \psi_{\textup{ATE}} \}$ without ignoring or relaxing any of the (infinitely many) balancing constraints on the true propensity score. We apply these partial identification results to finite-sample sensitivity analysis in Section \ref{section:sensitivity_analysis}. To state these results formally, we need a few pieces of additional notation. Recall that Assumption \ref{assumption:msm} requires the true propensity score $e_0(X, U)$ to satisfy the following odds-ratio bound: \begin{align*} \Lambda^{-1} \leq \frac{e_0(X, U)/[1 - e_0(X, U)]}{e(X)/[1 - e(X)]} \leq \Lambda. \end{align*} Therefore, it is natural to define $\mathcal{E}_{\infty}(\Lambda)$ to be the set of all random variables $\bar{E}$ which satisfy the same condition: \begin{align} \mathcal{E}_{\infty}(\Lambda) := \left\{ \bar{E} \, : \, \Lambda^{-1} \leq \frac{\bar{E} / (1 - \bar{E})}{e(X)/(1 - e(X))} \leq \Lambda \text{ with probability one} \right\} \end{align} This can be viewed as the ``population" version of the ZSB constraint set $\mathcal{E}_n(\Lambda)$. 
Additionally, we define the conditional distribution function $F(y | x, z)$ and quantile function $Q_t(x, z)$ by: \begin{align*} F(y | x, z) &= P(Y \leq y | X = x, Z = z)\\ Q_t(x, z) &= \inf \{ q \in \mathds{R} \, : \, F(q | x, z) \geq t \}. \end{align*} Since these functions only refer to observed quantities, they are identified from the observed-data distribution. \subsection{Balancing bounds}\label{section:balancing_bounds_simpler} Our first partial identification result shows that the partially identified set for $\psi_{\textup{T}}$ can be computed by simply minimizing and maximizing $\mathbb{E}[ YZ / \bar{E}]$ over the set of $\bar{E}$ in $\mathcal{E}_{\infty}(\Lambda)$ which ``balance" a particular conditional quantile of $Y$. \begin{theorem} \label{theorem:psiT_identified_set} \textup{\textbf{(Optimal bounds for $\psi_{\textup{T}}$)}}\\ For any $\Lambda \geq 1$, the set of values of $\psi_{\textup{T}}$ compatible with the observed data distribution and Assumption \ref{assumption:msm} is a closed interval $[\psi_{\textup{T}}^-, \psi_{\textup{T}}^+]$. Moreover, if we define $\tau = \tfrac{\Lambda}{\Lambda + 1}$, then the interval endpoints solve (\ref{psiT_minus}) and (\ref{psiT_plus}). \begin{align} \psi_{\textup{T}}^- &= \min_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E}[ YZ/\bar{E}] \quad \text{subject to} \quad \mathbb{E}[ Q_{1 - \tau}(X, 1) Z / \bar{E}] = \mathbb{E}[ Q_{1 - \tau}(X, 1)] \label{psiT_minus} \\ \psi_{\textup{T}}^+ &= \max_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E}[ Y Z / \bar{E}] \quad \text{subject to} \quad \mathbb{E}[ Q_{\tau}(X, 1) Z / \bar{E}] = \mathbb{E}[ Q_{\tau}(X, 1)] \label{psiT_plus}. \end{align} \end{theorem} We will highlight a few important takeaways from this theorem. First, even if one adds additional balancing constraints of the form $\mathbb{E}[ h(X) Z / \bar{E}] = \mathbb{E}[ h(X)]$ in (\ref{psiT_minus}) and (\ref{psiT_plus}), the value of these problems will not change. 
Thus, for the purposes of computing bounds, the quantile balancing constraints in Theorem \ref{theorem:psiT_identified_set} capture all the information in the observed data. Second, the fact that only a single conditional quantile appears in each of the sharp bounds for $\psi_{\textup{T}}$ reflects a special advantage of the marginal sensitivity model. For alternative sensitivity assumptions, sharp bounds often involve distinct quantiles $Q_{\tau(x)}$ for each covariate level \cite{LeeSelection, MastenPoirerSharp}, complicating estimation by potentially requiring estimates of the entire conditional quantile process \cite{masten2020assessing, semenova2020better}. Third, this result shows that the ZSB sensitivity analysis can only be sharp when the conditional quantiles of $Y$ do not depend on $X$ at all. Since this is quite pathological, there is room for improvement over the ZSB method in almost all applications. We can extend the theorem to other estimands. To bound $\psi_{\textup{C}}$, exchange the labels ``treated" and ``control" and apply Theorem \ref{theorem:psiT_identified_set}. Sharp bounds on $\psi_{\textup{C}}$ can be translated into sharp bounds on $\psi_{\textup{ATT}}$ using the relation $\psi_{\text{ATT}} = \tfrac{\mathbb{E}[ Y] - \psi_{\textup{C}}}{P(Z = 1)}$. \begin{corollary} \label{corollary:psiC_att_identified_set} \textup{\textbf{(Optimal bounds for $\psi_{\textup{C}}$ and $\psi_{\textup{ATT}}$)}}\\ In the setting of Theorem \ref{theorem:psiT_identified_set}, the partially identified set for $\psi_{\textup{C}}$ is the interval $[\psi_{\textup{C}}^-, \psi_{\textup{C}}^+]$, where the interval endpoints solve (\ref{psiC_minus}) and (\ref{psiC_plus}).
\begin{align} \psi_{\textup{C}}^- &= \min_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E} [ Y \tfrac{1 - Z}{1 - \bar{E}}] \quad \text{subject to} \quad \mathbb{E}[Q_{1 - \tau}(X, 0) \tfrac{1 - Z}{1 - \bar{E}}] = \mathbb{E}[ Q_{1 - \tau}(X, 0)] \label{psiC_minus}\\ \psi_{\textup{C}}^+ &= \max_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E}[ Y \tfrac{1 - Z}{1 - \bar{E}}] \quad \text{subject to} \quad \mathbb{E}[ Q_{\tau}(X, 0) \tfrac{1 - Z}{1 - \bar{E}}] = \mathbb{E}[ Q_{\tau}(X, 0)] \label{psiC_plus} \end{align} The partially identified set for $\psi_{\textup{ATT}}$ is the interval $[ \psi_{\textup{ATT}}^-, \psi_{\textup{ATT}}^+]$, where $\psi_{\textup{ATT}}^{\mp} = \tfrac{\mathbb{E}[Y] - \psi_{\textup{C}}^{\pm}}{P(Z = 1)}$. \end{corollary} Finally, sharp bounds for $\psi_{\textup{ATE}}$ can be obtained by subtracting sharp bounds for $\psi_{\textup{T}}$ and $\psi_{\textup{C}}$. Equivalently, these bounds can be obtained by solving optimization problems with two quantile balancing constraints. Although this result is superficially similar to Theorem \ref{theorem:psiT_identified_set} and Corollary \ref{corollary:psiC_att_identified_set}, its proof requires a novel construction. \begin{theorem} \label{theorem:psiATE_identified_set} \textup{\textbf{(Optimal bounds for $\psi_{\textup{ATE}}$)}}\\ For any $\Lambda \geq 1$, the set of values of $\psi_{\textup{ATE}}$ compatible with the observed data distribution and Assumption \ref{assumption:msm} is a closed interval $[\psi_{\textup{ATE}}^-, \psi_{\textup{ATE}}^+]$ where $\psi_{\textup{ATE}}^- = \psi_{\textup{T}}^- - \psi_{\textup{C}}^+$ and $\psi_{\textup{ATE}}^+ = \psi_{\textup{T}}^+ - \psi_{\textup{C}}^-$. \end{theorem} These partial identification results apply to any observed-data distribution $P$ satisfying overlap and $\mathbb{E}_{P}[|Y|] < \infty$, but in certain special cases, the optimal bounds can be computed more explicitly.
Specifically, consider the Gaussian outcome model (\ref{eq:additive_noise}): \begin{align} X &\sim P_X\\ Z \mid X & \sim \textup{Bernoulli}(e(X))\\ Y \mid X, Z& \sim \mathcal{N}(\mu(X, Z), \sigma^2(X)). \label{eq:additive_noise} \end{align} The following proposition derives the ATE identified set in this model, which offers some intuition about the main factors that make a causal estimate more or less robust to unobserved confounding. \begin{proposition} \label{proposition:additive_noise_identified_set} Suppose the observed-data distribution has the factorization (\ref{eq:additive_noise}). Let $\psi_{\textup{ATE}} = \mathbb{E}_{P}[\mu(X, 1) - \mu(X, 0)]$ be the nominal ATE. Then the partially identified set for the ATE under Assumption \ref{assumption:msm} is: \begin{align} [\psi_{\textup{ATE}}^-, \psi_{\textup{ATE}}^+] &= [\psi_{\textup{ATE}} \pm \tfrac{\Lambda^2 - 1}{\Lambda} \phi(\Phi^{-1} (\tfrac{\Lambda}{\Lambda + 1})) \mathbb{E}[ \sigma(X)] ] \label{additive_noise_formula}. \end{align} Here, $\phi$ and $\Phi$ are the standard normal density and distribution function, respectively. \end{proposition} For a fixed bound $\Lambda$ on the degree of unobserved confounding, the formula (\ref{additive_noise_formula}) shows that two key features of the observed data distribution govern robustness. The first is the magnitude of the nominal ATE; all else equal, larger nominal effects are more robust. The second is the average noise level $\mathbb{E}[ \sigma(X)]$; the better the measured variables predict the outcome, the less unobserved confounding can affect our estimates. In the extreme case where $X$ and $Z$ perfectly predict $Y$, the ATE remains point-identified no matter how large $\Lambda$ is.
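The closed-form interval (\ref{additive_noise_formula}) is straightforward to evaluate numerically. Below is a minimal sketch; the function name and example inputs are our own, not from the paper:

```python
import numpy as np
from scipy.stats import norm

def gaussian_ate_bounds(psi_ate, mean_sigma, lam):
    """Identified interval for the ATE in the Gaussian outcome model:
    half-width = (Lambda^2 - 1)/Lambda * phi(Phi^{-1}(Lambda/(Lambda+1))) * E[sigma(X)]."""
    tau = lam / (lam + 1.0)
    half_width = (lam ** 2 - 1.0) / lam * norm.pdf(norm.ppf(tau)) * mean_sigma
    return psi_ate - half_width, psi_ate + half_width

# Example: nominal ATE 0.5, unit average noise level, Lambda = 2.
lo, hi = gaussian_ate_bounds(psi_ate=0.5, mean_sigma=1.0, lam=2.0)
```

At $\Lambda = 1$ the half-width vanishes and the interval collapses to the nominal ATE, consistent with point identification under unconfoundedness.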
\subsection{Data-compatible propensity scores} \label{section:explaining_bounds_APO} Although the qualitative implications of Proposition \ref{proposition:additive_noise_identified_set} are quite plausible, we nevertheless find the quantile balancing formulas of Section \ref{section:balancing_bounds_simpler} to be quite counterintuitive. After all, it is certainly not true that every random variable $\bar{E} \in \mathcal{E}_{\infty}(\Lambda)$ satisfying $\mathbb{E}[ Q_{\tau}(X, 1) Z / \bar{E}] = \mathbb{E}[ Q_{\tau}(X, 1)]$ could plausibly be the true propensity score $e_0(X, U)$. Indeed, the constraints of the quantile-balancing optimization problems do not even enforce that $\mathbb{E}[ Z / \bar{E}] = 1$. Our intuition for why the ZSB procedure is conservative suggests that the quantile balancing formulas should be conservative as well. To explain how these results are possible, we begin by characterizing which random variables $\bar{E}$ could plausibly be the true propensity score $e_0(X, U)$. The calculation (\ref{population_balance_h}) indicates that $\bar{E}$ should at least satisfy $\mathbb{E}[ h(X) Z / \bar{E}] = \mathbb{E}[ h(X)]$ for all integrable $h$, or equivalently, $\mathbb{E}[Z / \bar{E} | X] = 1$. Proposition \ref{proposition:data_compatibility_psiT} shows that for the purposes of bounding $\psi_{\textup{T}}$, this is actually the ``only'' constraint on $\bar{E}$. Similar results appear in \cite{BirminghamJRSSB, Robins_etal_2000, tan2006, graham2011, hristache_patilea2017, FranksEtAl, zsb2019}. \begin{proposition} \label{proposition:data_compatibility_psiT} For any random variable $\bar{E} \in \mathcal{E}_\infty(\Lambda)$ satisfying $\mathbb{E}[ Z / \bar{E} | X] = 1$, there is a distribution $Q$ for $(X, Y(0), Y(1), Z, U)$ with the following properties: \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item The distribution of the observables $(X, Y, Z)$ is the same under $P$ and $Q$. \item $Q$ satisfies Assumption \ref{assumption:msm}.
\item $\mathbb{E}_{Q}[Y(1)] = \mathbb{E}_{P}[ YZ / \bar{E}]$. \end{enumerate} \end{proposition} In short, this result says that $\mathbb{E}[ YZ / \bar{E}]$ is a plausible value of $\psi_{\textup{T}}$ as long as $\mathbb{E}[ Z / \bar{E} | X] = 1$. It is not hard to show that the converse also holds: if $\psi$ is a plausible value of $\psi_{\textup{T}}$, then $\psi = \mathbb{E}[ YZ / \bar{E}]$ for some random variable $\bar{E}$ satisfying $\mathbb{E}[ Z / \bar{E} | X] = 1$. As a result, the optimal bounds for $\psi_{\textup{T}}$ can be obtained by solving the variational problems in Corollary \ref{corollary:variational_problems}. \begin{corollary} \label{corollary:variational_problems} The endpoints of the partially identified interval for $\psi_{\textup{T}}$ solve: \begin{align} \psi_{\textup{T}}^- &= \min_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E}[ YZ / \bar{E}] \quad \text{subject to} \quad \mathbb{E}[ Z / \bar{E} | X] = 1 \label{psi_minus_variational}\\ \psi_{\textup{T}}^+ &= \max_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \mathbb{E}[ YZ/\bar{E}] \quad \text{subject to} \quad \mathbb{E}[ Z / \bar{E} | X] = 1 \label{psi_plus_variational} \end{align} \end{corollary} Even though the variational problems (\ref{psi_minus_variational}) and (\ref{psi_plus_variational}) can be infinite-dimensional optimization problems with infinitely-many constraints, they have several nice features that enable them to be solved explicitly. Some straightforward algebraic manipulation shows that the problem (\ref{psi_plus_variational}) can be written as: \begin{align} \begin{split} \text{maximize} &\quad \mathbb{E}[ \mathbb{E}[ YZ / \bar{E} | X]]\\ \text{subject to} &\quad \mathbb{E}[ Z / \bar{E} | X] = 1\\ \text{and} &\quad 1 + \tfrac{1 - e(X)}{e(X)} \Lambda^{-1} \leq 1 /\bar{E} \leq 1 + \tfrac{1 - e(X)}{e(X)} \Lambda. \end{split} \end{align} Not only is this problem \textit{linear} in the decision ``variable" $1/\bar{E}$, it also separates across levels of $X$. 
Therefore, it suffices to separately solve (\ref{x_specific_problem}) for each $x \in \mathcal{X}$. \begin{align} \begin{split} \text{maximize} &\quad \mathbb{E}[ YZ / \bar{E} | X = x]\\ \text{subject to} &\quad \mathbb{E}[ Z / \bar{E} | X = x] = 1\\ \text{and} &\quad 1 + \tfrac{1 - e(x)}{e(x)} \Lambda^{-1} \leq 1 / \bar{E} \leq 1 + \tfrac{1 - e(x)}{e(x)} \Lambda \label{x_specific_problem} \end{split} \end{align} The problem (\ref{x_specific_problem}) requires us to maximize one expectation subject to an equality constraint on another expectation. This resembles the problem solved by the Neyman-Pearson lemma, and in fact is a special case of the generalization due to \cite{dantzig_wald_1951}. The optimization problems posed in Theorem \ref{theorem:psiT_identified_set} also fall in this class. It turns out that both of these problems have a common solution, given in Proposition \ref{proposition:psiT_formulas}. \begin{proposition} \label{proposition:psiT_formulas} Let $\bar{E}_-$, $\bar{E}_+ \in \mathcal{E}_{\infty}(\Lambda)$ satisfy $\mathbb{E}[ Z / \bar{E}_- | X] = \mathbb{E}[ Z / \bar{E}_+ | X] = 1$ and also (\ref{cutoff_minus}) and (\ref{cutoff_plus}). \begin{align} 1 / \bar{E}_- &= \left\{ \begin{array}{ll} 1 + \tfrac{1-e(X)}{e(X)} \Lambda^{+1} &\text{if } Y < Q_{1 - \tau}(X, 1)\\ 1 + \tfrac{1-e(X)}{e(X)} \Lambda^{-1} &\text{if } Y > Q_{1 - \tau}(X, 1) \end{array} \right. \label{cutoff_minus}\\ 1 / \bar{E}_+ &= \left\{ \begin{array}{ll} 1 + \tfrac{1-e(X)}{e(X)} \Lambda^{+1} &\text{if } Y > Q_{\tau}(X, 1)\\ 1 + \tfrac{1-e(X)}{e(X)} \Lambda^{-1} &\text{if } Y < Q_{\tau}(X, 1) \end{array} \right. \label{cutoff_plus} \end{align} Then $\bar{E}_-$ solves both (\ref{psiT_minus}) and (\ref{psi_minus_variational}), and $\bar{E}_+$ solves both (\ref{psiT_plus}) and (\ref{psi_plus_variational}). 
\end{proposition} The form of the propensity score $\bar{E}_+$ gives us insight into the confounding structure which maximizes $\psi_{\textup{T}}$: in the worst case, all observations with ``high'' values of $Y$ are unlikely to be treated and thus receive large propensity weight, while all observations with ``low'' values of $Y$ are likely to be treated and thus receive small propensity weight. The cutoff between high and low is chosen to satisfy the data-compatibility condition $\mathbb{E}[ Z / \bar{E}_+ | X ] = 1$. The argument presented in this section extends immediately to $\psi_{\textup{C}}$ by swapping treatment and control labels, extends to $\psi_{\textup{ATT}}$ by the argument given in Section \ref{section:balancing_bounds_simpler}, and extends to other sensitivity models of the form $e_{\min}(X) \leq e_0(X, U) \leq e_{\max}(X)$ after modifying the constraints of (\ref{x_specific_problem}). \subsection{Data compatibility for the ATE}\label{section:explaining_bounds_ATE} Extending the argument from Section \ref{section:explaining_bounds_APO} to the ATE requires additional care. Although $\psi_{\textup{ATE}}^+ = \psi_{\textup{T}}^+ - \psi_{\textup{C}}^-$ is certainly a \textit{valid} upper bound for the partially identified set for $\psi_{\textup{ATE}}$, it is not obviously a sharp one. Proposition \ref{proposition:data_compatibility_psiT} only implies that there exists a distribution $Q$ matching the observed-data distribution which has $\mathbb{E}_{Q}[Y(1)] = \psi_{\textup{T}}^+$ and another distribution $Q'$ which has $\mathbb{E}_{Q'}[ Y(0)] = \psi_{\textup{C}}^-$, but these distributions need not be the same. In other words, the two bounds may not be simultaneously achievable. Theorem \ref{theorem:psiATE_identified_set} indicates that the worst-case bounds on the counterfactual means are simultaneously achievable in the marginal sensitivity model.
This is a surprising result, given that simultaneous achievability is \textit{not} expected to hold in the closely-related Rosenbaum sensitivity model. In that model, Yadlowsky et al. \cite{yadlowsky2018bounds} derived sharp bounds on $\psi_{\textup{T}}$ and $\psi_{\textup{C}}$ but required an extra symmetry assumption on the distribution of potential outcomes to establish sharpness of the resulting ATE bounds. The key to our bounds on $\psi_{\textup{ATE}}$ is the following proposition, which strengthens Proposition \ref{proposition:data_compatibility_psiT}. \begin{proposition}\label{proposition:data_compatibility_ATE} For any random variable $\bar{E} \in \mathcal{E}_\infty(\Lambda)$ satisfying $\mathbb{E} [Z / \bar{E} | X] = \mathbb{E} [ (1-Z)/(1-\bar{E}) | X] = 1$, there is a distribution $Q$ for the full data $(X, Y(0), Y(1), Z, U)$ with the following properties: \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item The distribution of the observables $(X, Y, Z)$ is the same under $P$ and $Q$.\label{item:data_compatibility_ATE:data_compatible} \item $Q$ satisfies Assumption \ref{assumption:msm}.\label{item:data_compatibility_ATE:MSM} \item $\mathbb{E}_{Q}[Y(1)] = \mathbb{E}_{P}[ YZ/\bar{E}]$ and $\mathbb{E}_{Q}[Y(0)] = \mathbb{E}_{P}[ Y(1-Z)/(1-\bar{E})]$.\label{item:data_compatibility_ATE:Means} \end{enumerate} \end{proposition} Unlike Proposition \ref{proposition:data_compatibility_psiT}, this result does not follow from the existing data-compatibility characterizations of \cite{BirminghamJRSSB, Robins_etal_2000, tan2006, zsb2019} and instead requires an original construction. Given this result, one can derive Theorem \ref{theorem:psiATE_identified_set} as a consequence of Theorem \ref{theorem:psiT_identified_set} and Corollary \ref{corollary:psiC_att_identified_set}. 
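The cutoff construction of Proposition \ref{proposition:psiT_formulas} can be checked by direct simulation. The sketch below uses a toy setup of our own devising (a single covariate stratum with Gaussian treated outcomes) to verify that the two-level weights are data-compatible, $\mathbb{E}[Z/\bar{E}_+] = 1$, and to compute the worst-case mean they attain; the formula for `shift` comes from integrating the cutoff weights against the Gaussian density (our calculation, not a result quoted from the paper):

```python
import numpy as np
from scipy.stats import norm

# Toy setup (our choice): one covariate stratum, e = P(Z=1), Y | Z=1 ~ N(mu, sigma^2).
lam, e, mu, sigma = 2.0, 0.3, 1.0, 1.0
tau = lam / (lam + 1.0)
q_tau = mu + sigma * norm.ppf(tau)           # tau-quantile of Y | Z = 1

rng = np.random.default_rng(0)
n = 400_000
Z = (rng.random(n) < e).astype(float)
Y = mu + sigma * rng.standard_normal(n)      # only used where Z = 1

# Cutoff rule for the upper bound: inverse weight with Lambda above the
# quantile and 1/Lambda below it.
inv_E = np.where(Y > q_tau,
                 1.0 + (1.0 - e) / e * lam,
                 1.0 + (1.0 - e) / e / lam)

norm_check = np.mean(Z * inv_E)              # data compatibility: should be ~1
psi_T_plus = np.mean(Z * Y * inv_E)          # worst-case E[Y Z / E-bar]
shift = (1.0 - e) * (lam - 1.0 / lam) * norm.pdf(norm.ppf(tau)) * sigma
```

With these draws, `norm_check` lands within Monte Carlo error of one and `psi_T_plus` within error of `mu + shift`: the two weight levels alone satisfy $\mathbb{E}[Z/\bar{E}_+ \mid X] = 1$ precisely because the cutoff is placed at the $\tau$-quantile with $\tau = \Lambda/(\Lambda+1)$.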
\section{Sensitivity analysis} \label{section:sensitivity_analysis} In this section, we give our proposal for translating the population-level partial identification results of Section \ref{section:partial_identification} into a practical sensitivity analysis for IPW, which we call the \textit{quantile balancing} method. Our proposal follows naturally from our partial identification results: on a high level, we modify the ZSB proposal described in Section \ref{section:msm} to incorporate the quantile-balancing constraints we derived in Theorem \ref{theorem:psiT_identified_set} and Corollary \ref{corollary:psiC_att_identified_set}. Throughout this section, we take $\Lambda \geq 1$ to be fixed and write $\tau = \Lambda/(\Lambda + 1)$. \subsection{Quantile balancing bounds} We begin by describing the quantile balancing bounds for the average treated outcome. Theorem \ref{theorem:psiT_identified_set} implies that the largest value of $\psi_{\textup{T}}$ compatible with Assumption \ref{assumption:msm} solves the optimization problem (\ref{population_optimization}): \begin{align} \psi_{\textup{T}}^+ &= \max_{\bar{E} \in \mathcal{E}_{\infty}(\Lambda)} \frac{\mathbb{E}[ YZ / \bar{E}]}{\mathbb{E}[ Z / \bar{E}]} \quad \text{s.t.} \quad \binom{\mathbb{E}[ Q_{\tau}(X, 1) Z / \bar{E}]}{\mathbb{E}[ Z / \bar{E}]} = \binom{\mathbb{E} [ Q_{\tau}(X, 1) Z / e(X)]}{\mathbb{E}[ Z / e(X)]}. \label{population_optimization} \end{align} We have included an additional constraint $\mathbb{E}[ Z / \bar{E}] = \mathbb{E}[ Z / e(X)]$ which does not appear in Theorem \ref{theorem:psiT_identified_set}, but this does not affect the value of the optimization problem. Our proposal is to estimate $\psi_{\textup{T}}^+$ by replacing all of the unknown quantities in (\ref{population_optimization}) with empirical counterparts. We estimate $\psi_{\textup{T}}^-$ by following the same principle. 
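Because the second constraint in (\ref{population_optimization}) pins the denominator of the stabilized estimator, the plug-in problem is linear in the inverse weights and can be handed to an off-the-shelf LP solver. The sketch below is our own direct formulation (the paper instead reduces the problem to quantile regression; see its appendix); `e_hat` and `q_hat` are hypothetical estimates of the propensity score and of the $\tau$-quantile of $Y$ given $X$ among the treated:

```python
import numpy as np
from scipy.optimize import linprog

def qb_upper_bound(Y, Z, e_hat, q_hat, lam):
    """Plug-in upper bound for psi_T: maximize the stabilized IPW estimate over
    inverse weights v_i = 1/e-bar_i, with the denominator pinned and the fitted
    quantile balanced.  Inputs e_hat, q_hat are hypothetical estimates."""
    t = Z.astype(bool)
    y, e, q = Y[t], e_hat[t], q_hat[t]
    lo = 1.0 + (1.0 - e) / e / lam           # bounds on v = 1/e-bar
    hi = 1.0 + (1.0 - e) / e * lam
    A_eq = np.vstack([q, np.ones_like(q)])   # balance q_hat; pin the denominator
    b_eq = np.array([np.sum(q / e), np.sum(1.0 / e)])
    res = linprog(-y, A_eq=A_eq, b_eq=b_eq,
                  bounds=list(zip(lo, hi)), method="highs")
    return -res.fun / np.sum(1.0 / e)
```

The weights $v_i = 1/\hat{e}(X_i)$ are always feasible, so at $\Lambda = 1$ the bound collapses to the stabilized IPW estimate, and enlarging $\Lambda$ can only enlarge the bound.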
To translate these estimates into confidence intervals, we employ the same simple percentile bootstrap scheme as ZSB. We will be concrete about what optimization problem we are proposing to solve. Let $\hat{Q}_{\tau}(x, z)$ be an estimate of the conditional quantile function of $Y$ obtained by some kind of quantile regression (e.g. \cite{generalized_random_forests, koenker_bassett_1978, quantile_random_forest, stone1977}). Let $\hat{e}$ be the data analyst's estimate of the nominal propensity score $e$ from their primary analysis. We define $\hat{\psi}_{\textup{T}}^+$ as the solution to the empirical maximization problem (\ref{qbalance_psiT}). \begin{align} \hat{\psi}_{\textup{T}}^+ &= \max_{\bar{e} \in \mathcal{E}_n(\Lambda)} \frac{\sum_{i = 1}^n Y_i Z_i / \bar{e}_i}{\sum_{i = 1}^n Z_i / \bar{e}_i} \quad \text{s.t.} \quad \binom{\tfrac{1}{n} \sum_{i = 1}^n \hat{Q}_{\tau}(X_i, 1) Z_i / \bar{e}_i}{\tfrac{1}{n} \sum_{i = 1}^n Z_i / \bar{e}_i} = \binom{\tfrac{1}{n} \sum_{i = 1}^n \hat{Q}_{\tau}(X_i, 1) Z_i / \hat{e}(X_i)}{\tfrac{1}{n} \sum_{i = 1}^n Z_i / \hat{e}(X_i)} \label{qbalance_psiT} \end{align} The lower bound $\hat{\psi}_{\textup{T}}^-$ is defined similarly, but with maximization replaced by minimization and $\hat{Q}_{\tau}(x, z)$ replaced by another quantile estimate $\hat{Q}_{1 - \tau}(x, z)$. Several immediate properties of the quantile balancing bounds (\ref{qbalance_psiT}) are collected below:\\ \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item When $\Lambda = 1$ (i.e. no confounding is allowed), the quantile balancing bounds collapse to the usual IPW estimate of $\psi_{\textup{T}}$ under unconfoundedness. \item The quantile balancing bounds are sample bounded, i.e. 
$\min_i Y_i \leq \hat{\psi}_{\textup{T}}^- \leq \hat{\psi}_{\textup{T}}^+ \leq \max_i Y_i$.\label{prop:SampleBound} \item The quantile balancing bounds are always a subset of the ZSB bounds and, outside of knife-edge cases, are a strict subset.\label{prop:ZSBSubset} \item The optimization problem (\ref{qbalance_psiT}) is convex and can be solved efficiently. In fact, it reduces to a standard quantile regression problem. See Appendix \ref{appendix:computation} for implementation details.\\ \end{enumerate} One can also apply quantile balancing to unstabilized IPW estimators at the cost of properties \ref{prop:SampleBound} and \ref{prop:ZSBSubset}. The quantile balancing idea extends easily to other causal estimands. To compute bounds for $\psi_{\textup{C}}$, one only needs to exchange the definitions of ``treated'' and ``control'' and solve the same optimization problem. Subtracting the bounds for $\psi_{\textup{T}}$ and $\psi_{\textup{C}}$ gives bounds for $\psi_{\textup{ATE}}$, and bounds for $\psi_{\textup{ATT}}$ follow from a similar principle (see Appendix \ref{appendix:computation} for the exact formula). To form confidence intervals based on quantile balancing, we follow ZSB \cite{zsb2019} and propose using the percentile bootstrap. If $[\hat{\psi}_b^-, \hat{\psi}_b^+]$ are quantile balancing bounds estimated in the $b^{\text{th}}$ of $B$ bootstrap samples, we report the quantile balancing $1 - \alpha$ confidence interval as: \begin{align} \textup{CI}(\alpha) = [ Q_{\alpha/2} ( \{ \hat{\psi}_b^- \}_{b \in [B]}), Q_{1 - \alpha/2}( \{ \hat{\psi}_b^+ \}_{b \in [B]})]. \label{ci} \end{align} As is standard for bootstrap-based IPW inference, we require re-estimating the nominal propensity score separately in each bootstrap replication. That requirement does not extend to the conditional quantiles. While the conditional quantiles can be re-estimated within each bootstrap sample, our inference results will also apply if they are taken from the main dataset.
This is important to keep inference computationally tractable. When the conditional quantiles are estimated using linear quantile regression (i.e. $\hat{Q}_t(x, z) = \hat{\beta}_t(z)^{\top} h(x)$ for some ``features" $h : \mathcal{X} \rightarrow \mathds{R}^k$), one could consider directly ``balancing" the features $h$ rather than the fitted quantile $\hat{Q}_t$ as in \cite{sieve_balancing, cbps, tan2006, tudballZhaoEtAl2019interval}. Although this approach has some nice features and theoretical support, our simulations find the resulting inference is less reliable in small samples. \subsection{Theoretical properties} We now state some theoretical properties of the quantile balancing bounds $[\hat{\psi}^-, \hat{\psi}^+]$ which apply when the outcome $Y$ has a continuous distribution. In short, the bounds are sharp when quantiles are estimated consistently and are valid even when quantiles are estimated inconsistently. Moreover, the percentile bootstrap yields valid confidence intervals if standard IPW inference conditions are satisfied and quantiles are estimated parametrically. To obtain these results, we need a few conditions. The first condition collects some standard IPW consistency requirements which we expect the data analyst to have already assumed in his or her primary analysis \cite{TargetedLearningVDLRose}. \begin{condition} \label{condition:ipw_conditions} \textup{\textbf{(IPW assumptions)}}\\ The nominal propensity score $e$ satisfies $\varepsilon \leq e(X) \leq 1 - \varepsilon$ with probability one for some $\varepsilon > 0$. The estimated propensity score $\hat{e}(\cdot) \equiv \hat{e}( \cdot, \{ X_i, Z_i \}_{i \leq n} )$ is uniformly consistent, and the variance of $Y$ is finite. \end{condition} The second condition requires that the outcome $Y$ has a bounded conditional density which is positive near the relevant conditional quantiles. 
This is a common identification condition for quantile regression \cite{generalized_random_forests, BelloniEtAl2019}. \begin{condition} \label{condition:density} \textup{\textbf{(Density)}}\\ The conditional distribution of $Y \mid X, Z$ has a uniformly bounded density $f(y | x, z)$. For each $(x, z) \in \mathcal{X} \times \{ 0, 1 \}$, the map $y \mapsto f( y | x, z)$ is continuous and positive near $Q_{1 - \tau}(x, z)$ and $Q_{\tau}(x, z)$. \end{condition} Finally, we make some assumptions about how the quantiles are estimated. For the standard linear quantile regression method of Koenker and Bassett \cite{koenker_bassett_1978}, one only needs to check that the regressors in the quantile regression have finite variance. We cover generic (possibly nonlinear) methods by requiring sample splitting to avoid overfitting. The specific form of sample splitting analyzed in our proofs is ``cross-fitting" \cite{schick1986, newey_robins_crossfitting, doubleML}, but leave-one-out or out-of-bag quantile estimates perform similarly in simulations. \begin{condition} \label{condition:quantile_estimates} \textup{\textbf{(Quantile estimates)}}\\ For each $t \in \{ 1 - \tau, \tau \}$, one of the following holds: \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item $\hat{Q}_t(x, z) = \hat{\beta}_t(z)^{\top} h(x)$ for some fixed ``features" $h_j(X)$ with finite variance. \label{linear} \item $\hat{Q}_t(x, z)$ is estimated using cross-fitting and satisfies Condition \ref{condition:nonlinear_regularity} in Appendix \ref{section:nonlinear_proof}. \label{crossfit} \end{enumerate} \end{condition} Condition \ref{condition:quantile_estimates} is essentially ``algorithmic," and neither \ref{linear} nor \ref{crossfit} impose any accuracy requirements on the estimated conditional quantiles. The appendix conditions in \ref{crossfit} are technical to state but very mild. 
For example, they are satisfied by quantile estimates based on nearest-neighbors \cite{stone1977, nn_quantile1986}, kernels \cite{kernel_quantile1990}, and random forests \cite{generalized_random_forests, quantile_random_forest} under Conditions \ref{condition:ipw_conditions} and \ref{condition:density}. Under these conditions, we have the following result on the asymptotic sharpness of the quantile balancing bounds. \begin{theorem} \label{theorem:sharpness} \textup{\textbf{(Sharpness and robustness)}}\\ For any $\psi_0 \in \{ \psi_{\textup{T}}, \psi_{\textup{C}}, \psi_{\textup{ATT}}, \psi_{\textup{ATE}} \}$, let $[\psi^-, \psi^+]$ be its partially identified interval under Assumption \ref{assumption:msm} and let $[ \hat{\psi}^-, \hat{\psi}^+]$ be the corresponding quantile balancing interval. Assume Conditions \ref{condition:ipw_conditions}, \ref{condition:density}, and \ref{condition:quantile_estimates}. \begin{enumerate}[label=(\roman*),topsep=0pt,itemsep=-1ex] \item If the quantile regression estimates are consistent, then $\hat{\psi}^- \xrightarrow{p} \psi^-$ and $\hat{\psi}^+ \xrightarrow{p} \psi^+$. \label{sharpness} \item Even if the quantile models are misspecified, we still have $\hat{\psi}^- \leq \psi^- + o_P(1)$ and $\psi^+ - o_P(1) \leq \hat{\psi}^+$. \label{robustness} \end{enumerate} \end{theorem} The result \ref{sharpness} is expected, so we offer some intuition for \ref{robustness}. The true worst-case propensity score $\bar{E}_+$ defined in Proposition \ref{proposition:psiT_formulas} ``balances'' all (integrable) functions of $X$, so it is ``almost'' feasible in the optimization problem (\ref{qbalance_psiT}). Thus, even if the quantile regression model is misspecified, the IPW estimator based on $\bar{E}_+$ will ``almost'' be in the domain of (\ref{qbalance_psiT}). We should therefore expect $\hat{\psi}^+$ to be too large rather than too small. The robustness result \ref{robustness} shows that it eventually is.
The validity of the confidence interval (\ref{ci}) follows under stronger parametric assumptions. We prove an inference result assuming the nominal propensity score is estimated by a correctly-specified parametric model and the conditional quantiles are estimated by a (potentially misspecified) parametric model. These assumptions are not much stronger than what we expect the primary analysis to assume for inference under unconfoundedness \cite{ATTViaIPW, zsb2019}. \begin{theorem} \label{theorem:inference} \textup{\textbf{(Inference)}}\\ Let $[ \psi^-, \psi^+]$ be as in Theorem \ref{theorem:sharpness}, and let $\textup{CI}(\alpha)$ be as in (\ref{ci}). Suppose Conditions \ref{condition:ipw_conditions}, \ref{condition:density}, and \ref{condition:quantile_estimates}.\ref{linear} are satisfied, and also that the nominal propensity score is estimated by a regular parametric model (e.g. logistic regression). Then we have \begin{align}\label{eq:InferentialGuarantee} \liminf_{n \rightarrow \infty} \mathbb{P}( [\psi^-, \psi^+] \subseteq \textup{CI}(\alpha)) \geq 1 - \alpha \end{align} for any $\alpha \in (0, 1)$. \end{theorem} The inferential guarantee (\ref{eq:InferentialGuarantee}) is stronger than some analysts may require. The confidence interval $\textup{CI}(\alpha)$ covers the entire partially identified set $[\psi^-, \psi^+]$ with probability asymptotically at least $1 - \alpha$. However, if one believes that the sensitivity assumption is actually correct, then the coverage of the true causal estimand will typically exceed $1 - \alpha$. Refined bounds intended only to capture $\psi_0$ could be constructed using an approach similar to the one from Imbens and Manski \cite{imbens_manski_2004}. Although we do not have theoretical support for the confidence interval $\text{CI}(\alpha)$ when quantiles are estimated by a nonlinear model, we find that approach performs reasonably well in the simulations of Section \ref{section:numerical_examples}. 
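The percentile-bootstrap scheme behind (\ref{ci}) amounts to a short loop. Below is a sketch, where `estimate_bounds` is a user-supplied (hypothetical) routine that re-fits the nominal propensity model on the resampled rows and returns point bounds:

```python
import numpy as np

def percentile_bootstrap_ci(data, estimate_bounds, alpha=0.10, B=1000, seed=0):
    """Percentile bootstrap for a partially identified interval.  The
    `estimate_bounds` callback (hypothetical here) must re-fit the propensity
    model on the resampled dataset and return (psi_minus_hat, psi_plus_hat)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    lower = np.empty(B)
    upper = np.empty(B)
    for b in range(B):
        boot = data[rng.integers(0, n, size=n)]  # resample rows with replacement
        lower[b], upper[b] = estimate_bounds(boot)
    return (np.quantile(lower, alpha / 2.0),
            np.quantile(upper, 1.0 - alpha / 2.0))
```

Per the discussion above, the conditional quantile fits may either be re-estimated inside `estimate_bounds` or held fixed from the main dataset.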
\section{Numerical examples}\label{section:numerical_examples} In this section, we illustrate the finite-sample performance of the quantile balancing method on several simulated datasets and one real-data example. We also compare several variants of the method against the ZSB method described in Section \ref{section:msm} on the simulated datasets. \subsection{Simulated data} We consider two data-generating processes (DGPs) in our simulated examples. The two DGPs differ in the distribution of the regression function $\mathbb{E}[ Y | X, Z]$, but otherwise can be described as follows: \begin{align} \begin{split} X &\sim \text{Uniform}([-1, 1]^5)\\ Z \mid X & \sim \text{Bernoulli} \left( \tfrac{1}{1 + \exp(-\sum_{j = 1}^5 X_{j}/\sqrt{5})} \right)\\ Y \mid X, Z &\sim \mathcal{N}( \mu(X), 1). \end{split} \end{align} In the first DGP, we use $\mu(x) = x_1 + \cdots + x_5$, and in the second DGP we use $\mu(x) = \tfrac{3}{2} \text{sign}(x_1) + \text{sign}(x_2)$. The estimand of interest is the ATE and we fix $\Lambda = 2$, i.e. unobserved confounders can double or halve the odds of treatment. We compare four methods for obtaining bounds on $\psi_{\textup{ATE}}$. \texttt{Linear} applies the quantile balancing method of Section \ref{section:sensitivity_analysis} with quantiles estimated following \cite{koenker_bassett_1978}. \texttt{Forest} is similar, but $\hat{Q}_t(x, z)$ is fitted using out-of-bag estimates from the random forest method of Athey et al. \cite{generalized_random_forests}. \texttt{Covariates} directly balances the features $X$ without first estimating quantiles. \texttt{ZSB} implements the unconstrained method described in Section \ref{section:msm}. All four methods estimate the nominal propensity score by logistic regression. Figure \ref{fig:simulation} shows the distribution of upper and lower bound point estimates from each of these four methods, estimated using 1,000 simulations with 500 observations each. 
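For reference, both simulation designs can be drawn with a few lines of NumPy; this is a sketch, with the function name and seeding our own:

```python
import numpy as np

def simulate_dgp(n, dgp=1, seed=0):
    """One dataset from the simulation designs: X uniform on [-1,1]^5,
    logistic propensity in sum(X)/sqrt(5), unit-variance Gaussian outcome."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, 5))
    e = 1.0 / (1.0 + np.exp(-X.sum(axis=1) / np.sqrt(5.0)))
    Z = (rng.random(n) < e).astype(float)
    if dgp == 1:
        mu = X.sum(axis=1)                               # linear outcome model
    else:
        mu = 1.5 * np.sign(X[:, 0]) + np.sign(X[:, 1])   # piecewise-constant model
    Y = mu + rng.standard_normal(n)
    return X, Z, Y
```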
Dashed lines indicate the true partially identified region. The results conform to the asymptotic predictions of Proposition \ref{proposition:zsb_not_sharp} and Theorem \ref{theorem:sharpness}: (i) when the quantile models are ``correctly specified," the quantile balancing point estimates are nearly unbiased; (ii) under misspecification, the range of point estimates is too wide rather than too narrow; and (iii) the \texttt{ZSB} range of point estimates is too wide in both cases. \begin{figure}[H] \centering \includegraphics[width=15cm]{n500.pdf} \caption{\textit{Boxplots of the ATE upper and lower bound point estimates for both DGPs and all considered methods. The dashed line indicates the boundary of the true partially identified set. In DGP1, the \texttt{Linear} and \texttt{Covariates} methods are correctly specified and give the most accurate bounds. In DGP2, the \texttt{Forest} method is well-suited to the piecewise-constant outcome model and gives the most accurate bounds.}} \label{fig:simulation} \end{figure} Confidence intervals based on the \texttt{Linear} and \texttt{Covariates} approaches exhibited some under-coverage at this sample size, at least in the ``well-specified" setting of DGP1. The 90\% bootstrap confidence intervals based on the \texttt{Linear} method had approximately 85.4\% coverage of the identified set, whereas those based on the \texttt{Covariates} method had 81.3\% coverage. This undercoverage is partially caused by a small bias in the original range of point estimates, which is not readily apparent from Figure \ref{fig:simulation}. Meanwhile, \texttt{Forest} had 95.5\% coverage, and \texttt{ZSB} had 100\% coverage. In DGP2, the \texttt{Forest} method had 91.6\% coverage and all other methods had at least $99\%$ coverage. \subsection{Real data} In this section, we apply our proposed sensitivity analysis to a subsample of data from the 1966-1981 National Longitudinal Survey (NLS) of Older and Young Men. 
We wish to estimate the impact of union membership on wages. Specifically, we consider the ATE of union membership on log wages. For illustrative reasons, we focus on the 1978 cross-section of Young Men and restrict our attention to craftsmen and laborers not enrolled in school. Our estimates are thus based on a sample of 668 respondents with measurements on wages, union membership, and eight other covariates. For our primary analysis, we use IPW to adjust for baseline imbalances in covariates between union and nonunion samples. Table \ref{table:balance} reports the covariate balance between union and nonunion samples before and after weighting by the (estimated) inverse propensity score. On several important characteristics, inverse propensity weighting dramatically improves balance across the two samples. \begin{table}[!ht] \centering \begin{tabular}{rrrrr} \hline &\multicolumn{2}{c}{Unweighted} &\multicolumn{2}{c}{Weighted}\\ Covariate &Union &Nonunion &Union &Nonunion\\ \hline Age & 30.1 & 30.0 & 30.0 & 30.0 \\ Black & 24\% & 24\% & 23\% & 24\% \\ Metropolitan & \textcolor{red}{74\%} & \textcolor{red}{57\%} & 66\% & 65\% \\ Southern & \textcolor{red}{32\%} & \textcolor{red}{53\%} & 42\% & 42\% \\ Married & 78\% & 75\% & 76\% & 76\% \\ Manufacturing & \textcolor{red}{42\%} & \textcolor{red}{32\%} & 37\% & 38\% \\ Laborer & \textcolor{red}{23\%} & \textcolor{red}{15\%} & 18\% & 18\% \\ Education & 12.2 & 11.7 & 12.1 & 12.0 \\ \hline \end{tabular} \caption{\textit{Covariate means among the nonunion and union subsamples, along with the means in the weighted samples. In \textcolor{red}{red}, we highlight particularly large imbalances. In the weighted samples, propensity weights are estimated using logistic regression.}} \label{table:balance} \end{table} The IPW point estimate of the ATE is 0.23 with an associated 90\% confidence interval of $[0.18, 0.27]$. 
Thus, our primary analysis concludes that union membership has a positive effect on wages, at least on average among craftsmen and laborers. Both the point estimate and the confidence interval are in agreement with prior literature studying the same problem using cross-sectional data. See \cite{jakubson1991, johnson1975} for overviews. Alternative analysis methods (e.g. regression, matching) give essentially the same point estimates when applied to this data. Freeman \cite{freemn1984_unions}, Mellow \cite{mellow1981}, and many other economists have argued that cross-sectional estimates of the union premium overestimate the true causal effect because higher-skill workers are simultaneously more likely to be selected for union jobs and earn high wages. Here, ``skill'' refers to an unobserved confounder which is only partially captured by the measured covariates. Is it plausible that the positive effect we find in the IPW analysis could be entirely due to selection on skill? A sensitivity analysis may help address this question. Figure \ref{fig:union_results} reports point estimate ranges and 90\% bootstrap confidence intervals from both quantile balancing and the ZSB sensitivity analysis for several values of the sensitivity parameter $\Lambda$. We estimate $\hat{Q}_t(x,z)$ for quantile balancing using the standard linear quantile model of Koenker and Bassett \cite{koenker_bassett_1978}. \begin{figure}[!ht] \centering \includegraphics[width=15cm]{nls78.pdf} \caption{\textit{Point estimate ranges and 90\% bootstrap confidence intervals for the ATE in the NLS dataset. For the quantile balancing method, conditional quantiles are estimated using the linear quantile regression method of Koenker and Bassett \cite{koenker_bassett_1978}.}} \label{fig:union_results} \end{figure} Both sensitivity analyses show that the positive effect found in the primary analysis is fairly robust to unobserved confounding, but quantile balancing refines the ZSB interval.
Even if the odds of union membership for ``skilled'' workers were double the odds for ``typical'' workers with the same observed covariates, the quantile balancing analysis would still find a statistically significant positive treatment effect. Meanwhile, when $\Lambda = 2$, the ZSB confidence intervals already include the null, although the range of point estimates (barely) excludes it. In this application, quantile balancing only slightly refined the ZSB range. Generally speaking, we expect to see larger relative improvements in problems where the covariates have more explanatory power. To put these figures in context, we follow Kallus and Zhou \cite{kallus_zhou2020} and compute the degree to which the (estimated) odds of union membership could change if \textit{measured} confounders were omitted from the dataset. Caveats to this approach and more sophisticated empirical calibration strategies are discussed in Hsu and Small \cite{hsu_small2013}, Zhang and Small \cite{zhang_small2020}, and Cinelli and Hazlett \cite{CinelliHazlett}. Omitting \texttt{Black} only changes the odds of treatment by a factor of 1.4, and omitting \texttt{Laborer} only changes the odds of treatment by a factor of 1.9. In fact, no measured confounder except \texttt{Southern} was able to double or halve the odds of union membership for any respondent. We interpret these results as showing that the qualitative conclusions of the primary analysis are fairly robust to unobserved confounding by skill. Incidentally, longitudinal estimates of union wage effects --- which control for individual-specific effects like ``skill'' --- come to similar conclusions as the one suggested by our sensitivity analysis. Although treatment effect estimates from longitudinal studies are generally smaller than those from cross-sectional studies, they still find evidence in favor of the ``union premium'' \cite{CHAMBERLAIN1982, jakubson1991, freemn1984_unions}.
\section{Conclusion}\label{section:conclusion} We have shown that quantile balancing --- a simple modification of the popular ZSB \cite{zsb2019} sensitivity analysis --- is feasible, robust, and sharp. This new sensitivity analysis for IPW is based on novel partial identification results for Tan \cite{tan2006}'s marginal sensitivity model. We point to several interesting directions for future work. While our partial identification results focus on counterfactual means and a few treatment effects, it should be possible to extend them to more complex estimands of the type considered in \cite{kallus2018interval, kallus2020confoundingrobust, confounding_robust_policy_improvement, kallus_zhou2020, causal_rule_ensemble}. Perhaps a similarly compact sensitivity analysis could even apply to dynamic treatment regimes. In addition, while our identification arguments generalize to any sensitivity assumption that only restricts the propensity score in a pointwise fashion (i.e. $e_{\min}(x) \leq e_0(x, u) \leq e_{\max}(x)$), the practicality of our sensitivity analysis and its theoretical properties rely quite heavily on the marginal sensitivity model. It would be interesting to see whether a practical and sharp sensitivity analysis could be developed for other sensitivity assumptions in this class. \newpage \printbibliography \newpage
\section{Introduction} Multiple-input-multiple-output (MIMO) has been recognized as a promising technology to enhance the capacity and the spectral efficiency of fifth-generation (5G) wireless networks~\cite{MIMO}. However, deploying and configuring a large number of antennas can lead to severe hardware impairment, heavy computational cost, and substantial power consumption. To overcome these limitations, reconfigurable intelligent surfaces (RISs) have emerged as a cost-effective solution~\cite{RIS_review_Liu21}. An RIS consists of a large number of low-cost reflecting elements that can proactively reconfigure the propagation of incident signals. By intelligently adjusting the phase shift of each RIS element, the communication channels can be effectively manipulated to enhance the spectral efficiency and the network coverage~\cite{Xiao_UAV-RIS, Xiao_NOMA_RIS}. Benefiting from the low-cost meta-materials, RISs can be seamlessly integrated with emerging technologies, such as MIMO, to further improve transmission throughput in a cost-effective manner. As a promising technology for supporting massive connectivity, non-orthogonal multiple access (NOMA)~\cite{NOMA_yuanwei} can be integrated into RIS-aided networks to further enhance the spectral efficiency. Moreover, the integration of NOMA and RIS can effectively improve the design flexibility of NOMA schemes~\cite{RIS_NOMA_interplay_Liu20}. However, the implementation of NOMA in RIS-aided networks increases the resource allocation difficulty. In particular, RISs can alter the channel quality of individual devices, which directly influences the decoding orders and the clustering results of NOMA systems. Extensive research contributions have been devoted to investigating the integration of RIS and NOMA techniques~\cite{SoA_RIS_NOMA_Hou20, SoA_RIS_NOMA_Mu20, SoA_RIS_NOMA_Yang20}.
In particular, the authors of~\cite{SoA_RIS_NOMA_Hou20} derived extensive analytical results, including ergodic rates, energy efficiency, and spectral efficiency, for RIS-assisted NOMA networks. The sum rate maximization problem of RIS-aided NOMA systems was investigated in~\cite{SoA_RIS_NOMA_Mu20}, where the passive beamforming at the RIS was jointly optimized with the active beamforming at the base station (BS) under both the ideal and non-ideal RIS elements. To maximize the sum rate while ensuring user fairness, the authors of~\cite{SoA_RIS_NOMA_Yang20} formulated a max-min problem for RIS-enhanced NOMA networks by jointly optimizing the power allocation and the RIS phase shift. The aforementioned research contributions revealed the potential of RISs when integrated into NOMA networks and established a foundation for solving various challenges in RIS-assisted networks. However, these contributions mainly investigated the implementations of conventional convex optimization techniques, which often suffer from high computational complexity and poor scalability. Moreover, the objective functions are often non-convex and cannot be directly tackled by convex optimization methods. Hence, alternative cost-effective non-convex optimization schemes are necessary to fulfil the requirements of massive connectivity in next-generation wireless networks. In recent years, artificial intelligence (AI) has emerged as a powerful technology to address the problems of exploding data volume, non-convex optimization, and computational complexity~\cite{ML_survey1}. In particular, deep learning (DL) techniques utilize an extensive offline training phase to reduce the algorithm complexity at the application stage and have received considerable research interest in the optimization of RIS-assisted wireless communication systems~\cite{RIS_BF_opt_transferDL_Ge, RIS_indoor_DL_Huang, RIS_DL_PS_Sheen}.
In~\cite{RIS_BF_opt_transferDL_Ge}, deep transfer learning was employed to solve the beamforming optimization problem in multiple-input-single-output (MISO) networks, based on a small amount of training data. The problem was further extended to the discrete phase shift case to accommodate hardware limitations. The authors of~\cite{RIS_indoor_DL_Huang} and~\cite{RIS_DL_PS_Sheen} utilized neural networks to learn the interactions between the receiver locations and the optimal RIS phase shift to achieve maximal communication throughput. The aforementioned contributions demonstrated the outstanding performance of DL-based techniques when solving high-dimensional and non-convex optimization problems in RIS-enhanced wireless networks. However, existing DL-based RIS optimization methods~\cite{RIS_BF_opt_transferDL_Ge, RIS_indoor_DL_Huang, RIS_DL_PS_Sheen} are all based on orthogonal multiple access (OMA) systems, where the RIS phase shift is the only optimization variable. To the best of our knowledge, there does not exist a DL-based solution for RIS-aided NOMA networks, which motivates this study. In this paper, we investigate the sum rate optimization problem in RIS-aided downlink MISO-NOMA networks, where both the RIS phase shift and the BS power allocation are optimized to maximize the total transmission sum rate. We adopt the zero-forcing (ZF) precoding method and the successive interference cancellation (SIC) decoding method to eliminate the effect of multi-user interference on the strong users. However, this approach causes the weak users to suffer from both inter-cluster and intra-cluster interference, leading to poor achievable rates. To improve the resource efficiency, we propose a quality-of-service (QoS)-based NOMA clustering method, which aims to maximize the minimum within-cluster QoS deviation. In terms of the DL model, we design a neural network to output the optimized power allocation given the RIS phase shift, resulting in a low-complexity model.
Meanwhile, the phase shift is optimized through a gradient descent algorithm, given the trained network. We further employ meta-learning in the training process to improve the convergence rate of the phase shift optimization. The main contributions are as follows: \begin{enumerate} \item We propose an RIS-enhanced NOMA downlink framework and formulate the sum rate maximization problem by jointly optimizing the phase shift of the RIS and the power allocation of the BS. To improve the resource efficiency, we propose a QoS-based NOMA clustering scheme, which maximizes the minimum within-cluster QoS deviation. \item We propose a model-agnostic meta-learning (MAML)-based DL algorithm to solve the joint optimization problem. The algorithm can output optimized solutions in as few as five iterations and the model has lower network complexity compared to the conventional design. \item Simulation results indicate that the implementation of RIS can induce approximately 5\% to 25\% throughput gain as the number of RIS elements increases from 8 to 64, in both NOMA and OMA networks. Results also show that the proposed QoS-based clustering method achieves higher throughput than the conventional channel condition-based approach. \end{enumerate} \section{System Model and Problem Formulation}\label{sec:system model} \subsection{System Model} \begin{figure}[t] \includegraphics[width=0.47\textwidth]{Figures/GC_model.pdf} \caption{Illustration of the RIS-assisted downlink MISO-NOMA network.} \label{fig: system model} \end{figure} As illustrated in Fig.~\ref{fig: system model}, we consider a downlink MISO system with one BS and $K$ mobile users (MUs). The BS is equipped with $M$ antenna elements and each MU is equipped with a single antenna. The communication between the BS and the MUs is assisted by an RIS with $N$ reflecting elements, whose phase shift and amplitude absorption can be adjusted by a controller. The channels between the BS and the RIS are modelled as Rician fading channels.
The BS-MU channels and the RIS-MU channels are modelled as Rayleigh fading channels. The path loss of a particular MU $k$ is modelled as $\text{PL}_k = d_k^{-\alpha}$, where $d_k$ is the distance, measured in meters, between the MU and the BS, and $\alpha$ denotes the path loss exponent. \subsection{NOMA Signal Model} In this subsection, we formulate the NOMA-based signal model and introduce the proposed QoS-based clustering method. \subsubsection{Signal model} The signal received at each MU is a composition of the signals derived from the direct link between the BS and the MU, and the signals derived from the reflecting link. In particular, for MU $i$ in the $l$-th cluster, we denote the RIS-MU link and the BS-MU link by $\mathbf{h}_{R,l,i}^{H} \in {\mathbb{C}^{1 \times N}}$ and $\mathbf{h}_{B,l,i}^{H} \in {\mathbb{C}^{1 \times M}}$, respectively. We further denote the BS-RIS link by ${\mathbf{H}}_{BR} \in {\mathbb{C}^{N \times M}}$. The phase shift of the RIS is denoted by $\boldsymbol{\theta} = [\theta_1, \cdots, \theta_n, \cdots, \theta_N]$, where $\theta_n \in [0, 2\pi)$. The diagonal phase-shifting matrix is expressed as $\mathbf{\Theta} = \text{diag}(\beta_1 e^{j\theta_1},\cdots,\beta_n e^{j\theta_n},\cdots,\beta_N e^{j\theta_N})$, where $\beta_n \in [0,1]$ is the amplitude reflection coefficient. For simplicity, we assume that all amplitude coefficients are equal to one, i.e., $\beta_n = 1, \forall n$. We assume that each cluster is formed by two users, and we denote the strong MU as MU $s$ and the weak MU as MU $w$. The signals transmitted to the strong and the weak MU in the $l$-th cluster are denoted by ${s_{l,s}}$ and ${s_{l,w}}$, respectively, and we denote the transmit power allocated to the strong and the weak MUs in the $l$-th cluster by $p_{l,s}$ and $p_{l,w}$, respectively. Hence, the transmit signal of the $l$-th cluster is formulated as ${x_l} = \sqrt {{p_{l,s}}} {s_{l,s}} + \sqrt {{p_{l,w}}} {s_{l,w}}$.
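As a minimal sketch of the quantities defined above, the following snippet builds the phase-shifting matrix $\mathbf{\Theta}$ with $\beta_n = 1$ and the combined channel $\mathbf{h}^H_{B} + \mathbf{h}^H_{R}\mathbf{\Theta}\mathbf{H}_{BR}$ for a single MU. The dimensions and the complex Gaussian channel draws are illustrative placeholders, not the simulation settings of this paper.

```python
import numpy as np

M, N = 4, 8        # BS antennas, RIS reflecting elements (example values)
rng = np.random.default_rng(0)

def cn(*shape):
    # i.i.d. unit-variance complex Gaussian entries (placeholder fading)
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

h_B = cn(M)        # direct BS-MU link
h_R = cn(N)        # RIS-MU link
H_BR = cn(N, M)    # BS-RIS link

theta = rng.uniform(0.0, 2.0 * np.pi, size=N)   # RIS phase shifts
Theta = np.diag(np.exp(1j * theta))             # beta_n = 1 for all n

# Combined (effective) channel seen by the MU, a 1 x M row vector.
h_eff = h_B.conj() + h_R.conj() @ Theta @ H_BR
```

Note that each diagonal entry of `Theta` has unit modulus, so the RIS only rotates the phase of the reflected signal without amplifying it.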
The corresponding signal received by MU $i$ in the $l$-th cluster can be expressed as \begin{align}\label{transmitsignalNOMA} {{y_{l,i}} = \left( {{\mathbf{h}}_{B,l,i}^H + {\mathbf{h}}_{R,l,i}^H{{\mathbf{\Theta }}}{{\mathbf{H}}_{BR}}} \right)\sum\limits_{j = 1}^{K/2} {{{\mathbf{w}}_j}{x_j}} + {n_{l,i}} }, \end{align} where ${{{\mathbf{w}}_j}}$ denotes the beamforming vector of the $j$-th cluster and $n_{l,i}$ denotes the additive white Gaussian noise (AWGN), modelled as $n_{l,i} \sim \mathcal{C}\mathcal{N}\left( {0,\sigma ^2} \right)$. To decode the received symbols from the multiplexed signal $x_l$, each strong MU employs the SIC technique, which eliminates the intra-cluster interference. The weak MU, however, decodes the signal directly without SIC. Additionally, we eliminate the inter-cluster interference for the strong MUs through the ZF beamforming technique. The normalized ZF precoding vector is given by \begin{align}\label{ZFNOMA} {\mathbf{w}_{l}} = \frac{{\mathbf{h}_{l,s}}{\left( {{{\mathbf{h}^H_{l,s}}}{\mathbf{h}_{l,s}}} \right)^{ - 1}}}{\rho_{l,s}}, \end{align} where $\mathbf{h}_{l,s}^H = \mathbf{h}^H_{B,l,s} + \mathbf{h}^H_{R,l,s}\mathbf{\Theta}\mathbf{H}_{BR}$ denotes the combined channel and $\rho_{l,s}$ denotes the normalizing constant, formulated as $\rho_{l,s} = |{{\mathbf{h}_{l,s}}{( {{{\mathbf{h}_{l,s}}^H}{\mathbf{h}_{l,s}}} )^{ - 1}}} |$ such that $\left| \mathbf{w}_l \right| = 1$. The corresponding ZF precoding constraints are expressed as follows \begin{align}\label{ZF2} \left\{ \begin{array}{*{20}{c}} \mathbf{h}_{j,s}^H \mathbf{w}_l &= 0, & {\kern 1pt} \forall j \ne l,{\kern 1pt} {\kern 1pt} {\kern 1pt} j = 1, \cdots, {K/2} , \\ \mathbf{h}_{j,s}^H \mathbf{w}_l&= \frac{1}{\rho_{l,s}}, & j = l. \end{array} \right. \end{align} Hence, without the interference, the signal received at the strong MU in the $l$-th cluster can be simplified into \begin{align}\label{receiveNOMA} {{y_{l,s}}{\kern 1pt} = \mathbf{h}^H_{l,s}{{\mathbf{w}}_l}\sqrt {{p_{l,s}}} {s_{l,s}} + {n_{l,s}}}.
\end{align} Based on \eqref{ZF2}, we can calculate the received SINR of the strong MU in the $l$-th cluster as \begin{align}\label{eq: SINRNOMA} {{\gamma _{l,s}} = \frac{{{{\left| {\mathbf{h}^H_{l,s}{{\mathbf{w}}_l}\sqrt {{p_{l,s}}} {s_{l,s}}} \right|}^2}}}{{\sigma ^2}} = \frac{p_{l,s}}{{\rho_{l,s}^2\sigma ^2}} }. \end{align} Since both inter-cluster and intra-cluster interference exist in the weak MU's received signal, the received SINR of the weak MU in the $l$-th cluster is derived as \begin{align}\label{eq: weak user SINR} {{\gamma _{l,w}} = \frac{{{{\left| {{\mathbf{h}}^H_{l,w}{{\mathbf{w}}_l}} \right|}^2}{p_{l,w}}}}{{{{\left| {{\mathbf{h}}^H_{l,w}{{\mathbf{w}}_l}} \right|}^2}{p_{l,s}} + {{\left| {{\mathbf{h}}^H_{l,w}\sum\limits_{j = 1,j \ne l}^{K/2} {{{\mathbf{w}}_j}{x_j}} } \right|}^2} + \sigma ^2}} }. \end{align} \subsubsection{QoS-based clustering scheme} When both the ZF precoding and the SIC decoding techniques are employed in NOMA, the weak MUs suffer from both the inter-cluster and the intra-cluster interference, resulting in low SINR and low achievable rates compared to the strong MUs, who are served in an interference-free manner. Conventional clustering methods allocate MUs by exploiting the difference between their channel conditions. However, when MUs have different QoS requirements, a weak MU may have a high QoS requirement, which is challenging to satisfy given the multiuser interference. Moreover, due to the low SINR, a great amount of transmit power has to be allocated to the weak MU to fulfil its QoS requirement. Hence, it is more sensible to assign MUs with lower QoS requirements as the weak MUs to improve the resource efficiency and to enhance the network throughput. Therefore, we propose a QoS-based clustering scheme, which assigns the MUs with higher or lower QoS requirements as the strong or weak MUs, respectively.
To be specific, the objective of the QoS-based clustering method is to maximize the minimum QoS deviation among all clusters. The clustering problem can be formulated as $\mathop {\max } \mathop{\min }\limits_{l={1, \cdots, K/2}}(R^{l,s}_{QoS} - R^{l,w}_{QoS})$, where $R^{l,s}_{QoS}$ and $R^{l,w}_{QoS}$ denote the QoS requirements of the strong and the weak MUs in the $l$-th cluster, respectively. To achieve the maximal QoS deviation, we propose a simple clustering method when each cluster consists of two MUs. We assume that $K$ is an even number and all MUs are ordered in terms of their QoS requirements, namely, the $k$-th MU has the $k$-th highest QoS requirement. The maximal QoS deviation can be achieved by assigning the $k$-th MU and the $(k+K/2)$-th MU into the same cluster, for all $k \leq K/2$. \subsection{Problem Formulation} The optimization goal is to maximize the total throughput of the network by jointly optimizing the RIS phase shift ${{\boldsymbol{\theta}}} = [{\theta _{1}}, \cdots ,{\theta _{n}}, \cdots ,{\theta _{N}}]$ and the BS power allocation vector $\mathbf{P}=[\mathbf{P}_s, \mathbf{P}_w]$, where $\mathbf{P}_s = [p_{1,s}, \cdots, p_{K/2,s}]$ and $\mathbf{P}_w = [p_{1,w}, \cdots,p_{K/2,w}]$. 
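The QoS-based pairing rule described above reduces to a few lines of code. In this sketch the QoS values are illustrative; the MUs are assumed to be already sorted in decreasing order of QoS requirement, so pairing the $k$-th MU with the $(k+K/2)$-th MU yields the maximal minimum within-cluster QoS deviation.

```python
# Illustrative QoS requirements, sorted in decreasing order (K = 6 MUs).
qos = [5.0, 4.0, 3.5, 2.0, 1.5, 1.0]
K = len(qos)
assert K % 2 == 0

# Pair MU k (strong, higher QoS) with MU k + K/2 (weak, lower QoS).
clusters = [(k, k + K // 2) for k in range(K // 2)]
deviations = [qos[s] - qos[w] for s, w in clusters]

print(clusters)        # [(0, 3), (1, 4), (2, 5)]
print(min(deviations)) # 2.5
```

Any other pairing of this sorted list (e.g. adjacent MUs) would produce a smaller minimum deviation, which is why the offset-by-$K/2$ rule is used.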
The optimization problem is formulated as \begin{center} \begin{subequations}\label{eq: short-term opt problem} \begin{align} \mathop {\max }\limits_{\boldsymbol{\theta}, \mathbf{P}} {\kern 1pt} {\kern 1pt} {\kern 1pt} R = \sum\nolimits_{l = 1}^{K/2} (R_{l,s} + R_{l,w}) \label{eq: short-term maxR}\\ {\text{s}}{\text{.t}}{\text{.}}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {R_{l,i}} \ge R_{\text{QoS} }^{l,i} ,\forall l, \forall i \in \{ s,w\} \label{eq: short-term constraint1}\\ \left| {{e^{j\theta_n}}} \right| = 1,\forall n \label{eq: short-term constraint2}\\ \sum\nolimits_{l=1}^{K/2} (p_{l,s} + p_{l,w}) \leq P_{\text{max}}\label{eq: short-term constraint3} \end{align} \end{subequations} \end{center} where $R_{l,i} = B_l\log_2(1+\gamma_{l,i})$ denotes the throughput achieved by MU $i$ in cluster $l$ and $R_{\text{QoS} }^{l,i}$ denotes the minimal QoS requirement of the given MU. Hence, \eqref{eq: short-term constraint1} represents the minimum transmit rate constraint. Moreover, \eqref{eq: short-term constraint2} denotes the phase shift constraint of the RIS and \eqref{eq: short-term constraint3} specifies the total transmit power constraint of the BS. Due to the non-convex constraint~\eqref{eq: short-term constraint2}, the optimization problem cannot be directly solved by conventional approaches. Hence, we propose to tackle the joint optimization problem utilizing machine learning techniques. \section{DL-Based Power Allocation and Phase Shift Optimization}\label{sec: DL solution} In this section, we introduce the proposed meta-learning enabled DL algorithm that jointly optimizes the power allocation and the RIS phase shift, given MUs' QoS requirements.
\subsection{Proposed MAML-based Phase Shift and Power Allocation Optimization Algorithm} The main idea of DL algorithms is to extensively train a neural network such that, given any inputs, the outputs of the network achieve minimal loss. A conventional design is to construct a neural network that outputs all optimization variables, namely, the RIS phase shift and the power allocation. However, this design results in an extremely large input space that consists of all channel information and QoS information. In particular, all channel matrices contribute $(2K\times N +2K\times M + 2N\times M)$ to the input dimension, leading to exceedingly expensive computational costs. Moreover, the phase shift and the power allocation have vastly different value ranges and distributions, which greatly increases the training difficulty. \begin{remark} Optimizing the phase shift requires the knowledge of all channels among the BS, the RIS and the MUs. However, the optimization of the power allocation only requires the information of the combined channel. \end{remark} Inspired by the fact that the combined channel of all MUs, denoted by $\mathbf{H} = [\mathbf{h}_{1,s}, \mathbf{h}_{1,w}, \cdots, \mathbf{h}_{K/2,s}, \mathbf{h}_{K/2,w}]$, provides sufficient channel information for optimizing $\mathbf{P}$ but not $\boldsymbol{\theta}$, we propose to design the neural network to take the combined channel as its input and to output the optimized power allocation. The real and the imaginary parts of the combined channel $\mathbf{H}$ contribute $(2K\times M)$ to the input dimension, which is significantly smaller than the input dimension of the conventional design.
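The saving in input dimension can be made concrete with a quick count; the values of $K$, $M$, and $N$ below are illustrative examples, not the paper's simulation settings.

```python
K, M, N = 8, 16, 32   # MUs, BS antennas, RIS elements (example values)

# Conventional design: feed all channel matrices to the network
# (real + imaginary parts of the BS-RIS, RIS-MU, and BS-MU links).
conventional = 2 * K * N + 2 * K * M + 2 * N * M

# Proposed design: feed only the combined channel H.
proposed = 2 * K * M

print(conventional, proposed)   # 1792 256, a 7x reduction here
```

The gap widens further as $N$ grows, since the conventional input scales with $N$ while the proposed input does not.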
The neural network $G_{\eta}$ is formulated as \begin{equation}\label{eq: P = G(H, R, theta)} \mathbf{P} = G_{\eta}(\mathbf{H}( \boldsymbol{\theta}), \mathbf{R}_{QoS}, \mathbf{L}_{\text{path}}), \end{equation} where $\mathbf{H}(\boldsymbol{\theta})$ denotes the combined channel calculated using the phase shift $\boldsymbol{\theta}$, $\mathbf{R}_{QoS}\in \mathbb{R}^{K}$ denotes the QoS requirement vector, and $\mathbf{L}_{\text{path}} \in \mathbb{R}^{K}$ is the path loss vector. The phase shift $\boldsymbol{\theta}$ is optimized separately using a gradient descent algorithm. The two optimization algorithms are connected in an alternating structure, each using the output of the other as its input. In contrast to the conventional alternating optimization approach, we train the neural network $G_{\eta}$ to output the optimized power allocation given any phase shift. Hence, given a trained $G_{\eta}$, we can find the optimized pair of $\boldsymbol{\theta}$ and $\mathbf{P}$ by solely performing the optimization on $\boldsymbol{\theta}$. To further improve the convergence rate of the gradient descent algorithm, the network $G_{\eta}$ is trained using MAML, such that the optimized pair of $\boldsymbol{\theta}$ and $\mathbf{P}$ can be obtained in as few as five iterations. MAML is a meta-learning technique, which is designed to optimize the model parameters such that a few gradient steps produce maximally effective performance on a new task~\cite{few_shot1_MAML}. As demonstrated in~\cite{DCS_DeepMind}, MAML can be employed to reduce the number of gradient descent steps required to optimize the network input space. In our model, the network inputs are optimized by adjusting the phase shift $\boldsymbol{\theta}$. Moreover, the gradient descent steps on $\boldsymbol{\theta}$ are performed by back-propagating through the weights of $G_{\eta}$.
Hence, we propose to train $G_{\eta}$ with MAML to obtain a set of network weights that can greatly reduce the number of gradient steps required to update $\boldsymbol{\theta}$. \subsection{Loss Functions} As described in~\eqref{eq: short-term opt problem}, both $\boldsymbol{\theta}$ and $\mathbf{P}$ need to be optimized to maximize the system throughput given the constraints. Hence, they share the same loss function, denoted by $\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\eta})$, where $\boldsymbol{\eta}$ denotes the weights of the neural network $G_{\eta}$ that outputs $\mathbf{P}$. The loss function consists of two parts, the total throughput and the constraint term enforcing the QoS requirements, given by \small{\begin{multline}\label{eq: loss function L()} \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\eta}) = w_{1} \sum_{l=1}^{K/2}\sum_{i=s,w} R_{l,i}(\boldsymbol{\theta}, \boldsymbol{\eta}) +\\ w_{2} \sum_{l=1}^{K/2}\sum_{i=s,w} \text{max}\big(R^{l,i}_{QoS} - R_{l,i}(\boldsymbol{\theta}, \boldsymbol{\eta}), 0\big), \end{multline}} \normalsize where $R_{l,i}(\boldsymbol{\theta}, \boldsymbol{\eta})$ is the achievable rate of MU $i$ in cluster $l$ calculated using $\boldsymbol{\theta}$ and $ \boldsymbol{\eta}$, and $\text{max}\big(R^{l,i}_{QoS} - R_{l,i}(\boldsymbol{\theta}, \boldsymbol{\eta}), 0\big)$ measures the QoS deficiency of MU $i$ in cluster $l$. The weights $w_1$ and $w_2$ are tuned during training. Since the loss is minimized through gradient descent, the sum rate term should be rewarded while the QoS deficiency should be penalized; hence, $w_1$ and $w_2$ should be negative and positive, respectively.
Suppose that we aim to optimize $\boldsymbol{\theta}$ in $J$ gradient steps. The gradient descent formula in the $j$-th gradient step of the $p$-th training episode is \begin{equation}\label{eq: phase shift update equation} \boldsymbol{\theta}^{(j)} \leftarrow \boldsymbol{\theta}^{(j-1)}- \gamma_\theta \frac{\partial}{\partial \boldsymbol{\theta}^{(j-1)}} \mathcal{L}(\boldsymbol{\theta}^{(j-1)},\boldsymbol{\eta}^{(p-1)}), \end{equation} where $\gamma_\theta$ denotes the step size and $\boldsymbol{\eta}^{(p-1)}$ denotes the neural network weights obtained in the previous training episode. In order to satisfy the phase shift constraint in \eqref{eq: short-term constraint2}, we clip the values of $\boldsymbol{\theta}$ to $[0, 2\pi]$ after each update. Based on \eqref{eq: phase shift update equation}, the loss function after completing the $J$-th gradient step is therefore $\mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta}^{(p-1)})$. Then, we optimize the neural network $G_{\eta}$ to further minimize $\mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta}^{(p-1)})$ through the following update formula \begin{equation}\label{eq: eta update equation} \boldsymbol{\eta}^{(p)} \leftarrow \boldsymbol{\eta}^{(p-1)}- \gamma_\eta \frac{\partial}{\partial \boldsymbol{\eta}^{(p-1)}} \mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta}^{(p-1)}), \end{equation} where $\gamma_\eta$ denotes the learning rate.
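The inner loop of \eqref{eq: phase shift update equation} can be sketched with a stand-in objective. Since the true loss \eqref{eq: loss function L()} requires the trained network $G_\eta$, a simple quadratic surrogate with a closed-form gradient is used here purely for illustration (an assumption, not the paper's loss); the clipping step enforces the phase constraint after each update.

```python
import numpy as np

J, gamma_theta, N = 5, 0.5, 8    # steps, step size, RIS elements (example)
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)    # theta^(0) ~ U(0, 2*pi)
target = rng.uniform(0.0, 2.0 * np.pi, size=N)   # surrogate optimum

def grad_loss(th):
    # Gradient of the surrogate loss 0.5 * ||th - target||^2; the real
    # algorithm backpropagates through G_eta and the rate expressions.
    return th - target

for _ in range(J):
    theta = theta - gamma_theta * grad_loss(theta)   # gradient step
    theta = np.clip(theta, 0.0, 2.0 * np.pi)         # enforce phase range

# After J = 5 steps the phases have contracted most of the way to the
# surrogate optimum, illustrating why few inner iterations can suffice.
```

In the actual algorithm the same $J$-step loop is differentiated through again in the outer MAML update of $\boldsymbol{\eta}$.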
\begin{algorithm}[t] \caption{Meta-learning Based Training Algorithm} \begin{algorithmic}[1] \label{Algorithm: train} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Channel matrix $\mathbf{H}$, QoS vector $\mathbf{R}_{QoS}$, MU locations, neural network $G_{\eta}$, number of phase shift update steps $J$, phase shift learning rate $\gamma_\theta$ \ENSURE Trained neural network $G_{\hat{\eta}}$ \\ Initialize $\boldsymbol{\eta}$ \REPEAT \FOR {each episode} \STATE Initialize phase shift $\theta_1, ..., \theta_N \overset{\text{iid}}{\sim} \mathcal{U}(0, 2\pi)$ \STATE Calculate path loss vector $\mathbf{L}_{\text{path}}$ \FOR {$j = 0$ to $J-1$} \STATE Obtain power allocation $\mathbf{P}^{(j)} = G_{\eta}(\mathbf{H}(\boldsymbol{\theta}^{(j)}), \mathbf{R}_{QoS}, \mathbf{L}_{\text{path}})$ \STATE Calculate loss function $\mathcal{L}(\boldsymbol{\theta}^{(j)}, \boldsymbol{\eta})$ \STATE Update phase shift using \eqref{eq: phase shift update equation} \ENDFOR \STATE Given the optimized phase shift $\boldsymbol{\theta}^{(J)}$, calculate the optimized power allocation $\mathbf{P}^{(J)} = G_{\eta}(\mathbf{H}(\boldsymbol{\theta}^{(J)}), \mathbf{R}_{QoS}, \mathbf{L}_{\text{path}})$ \STATE Calculate loss function $\mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta})$ using the optimized phase shift \STATE Update network weights using \eqref{eq: eta update equation} \ENDFOR \UNTIL{reaches the maximum training steps} \STATE \textbf{Return} $G_{\hat{\eta}}$ \end{algorithmic} \end{algorithm} To employ MAML, we calculate~\eqref{eq: eta update equation} by implicitly performing the second-order differentiation with respect to the loss function $\mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta}^{(p-1)})$ and back-propagating through the $J$ phase shift optimization steps in \eqref{eq: phase shift update equation}.
Moreover, we also apply MAML to the hyper-parameter $\gamma_\theta$ with respect to the loss function $\mathcal{L}(\boldsymbol{\theta}^{(J)}, \boldsymbol{\eta}^{(p-1)})$ to reduce the need for further hyper-parameter tuning. The update equation of $\gamma_\theta$ can be derived in the same way as in~\eqref{eq: eta update equation}. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{Figures/DL_network_figure.pdf} \caption{An illustration of the MAML-based training framework.} \label{fig: DL_network_figure} \end{figure} \subsection{Training Algorithm} As shown in Fig.~\ref{fig: DL_network_figure}, each training epoch can be divided into two stages, corresponding to the inner and outer MAML steps: \begin{enumerate} \item \textit{Phase shift optimization (inner step)}: The initial phase shift is sampled from a uniform distribution, i.e., $\boldsymbol{\theta}^{(0)} \sim \mathcal{U}(0, 2\pi)$. In the $j$-th gradient loop, the corresponding power allocation $\mathbf{P}^{(j)}$ is obtained based on~\eqref{eq: P = G(H, R, theta)}, using $\boldsymbol{\theta}^{(j)}$. Then, $\boldsymbol{\theta}^{(j)}$ is optimized with respect to the loss function $\mathcal{L}(\boldsymbol{\theta}^{(j)}, \boldsymbol{\eta})$, as in~\eqref{eq: phase shift update equation}. We repeat~\eqref{eq: phase shift update equation} for $J$ iterations. The final optimized phase shift is thus $\boldsymbol{\theta}^{(J)}$. \item \textit{Power allocation optimization (outer step)}: After completing $J$ gradient descent loops, the current optimal power allocation $\mathbf{P}^{(J)}$ can be computed using $\boldsymbol{\theta}^{(J)}$ and~\eqref{eq: P = G(H, R, theta)}. Then, the network weights $\boldsymbol{\eta}$ are updated according to~\eqref{eq: eta update equation}, by backpropagating through all $J$ gradient descent iterations.
\end{enumerate} The pseudocode of the training algorithm is presented in Algorithm~\ref{Algorithm: train}, where lines 2-8 correspond to the phase shift optimization steps and lines 9-11 correspond to the power allocation optimization steps. To apply the trained network on test datasets, we only need to perform the phase shift optimization procedure $J$ times, after which the optimized phase shift $\boldsymbol{\theta}^{(J)}$ and the corresponding power allocation $\mathbf{P}^{(J)}$ are the solution to our joint optimization problem in~\eqref{eq: short-term opt problem}. \subsection{Complexity Analysis} The computational complexity of the proposed joint optimization algorithm mainly depends on three factors, namely, the number of phase shift update steps $J$, the complexity of the loss function, and the complexity of the neural network. The complexity of~\eqref{eq: loss function L()} is dominated by the calculation of each MU's combined channel vector $\mathbf{h}^H_{l,i}$, which has complexity $\mathcal{O}(NM)$. Since we compute the combined channel for each MU, the total complexity induced by calculating the loss function is $\mathcal{O}(NKM)$. Then, we note that a fully-connected neural network of $D$ layers, including input and output layers, has a computational complexity of $\mathcal{O}(\sum_{i=1}^{D-1} n_i n_{i+1})$, where $n_i$ is the number of neurons in layer $i$. Therefore, the proposed algorithm has a complexity of $\mathcal{O}(JNKM\sum_{i=1}^{D-1} n_i n_{i+1})$. \section{Simulations}\label{sec:experiment} In this section, we present the simulation results for the proposed MAML-based optimization algorithm for RIS-assisted NOMA downlink networks. We consider MUs moving in a square area of width 10 meters. The BS is located at a corner of the area and the RIS is randomly placed in the area.
In all simulations, the path loss exponent is set to $\alpha=3$, the noise power spectral density is $-169$ dBm/Hz, and the total bandwidth is 4 MHz. The number of phase shift optimization steps is $J=5$ unless otherwise stated. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{Figures/DL_vs_DRL_K_ST-eps-converted-to.pdf} \caption{Sum rate versus the number of MUs, $K$, in NOMA and OMA systems using the QoS-based or the channel condition-based clustering schemes.} \label{fig: DL_DRL_rate_vs_K_ST} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{Figures/datarate_vs_N-eps-converted-to.pdf} \caption{Sum rate versus the number of reflecting elements $N$ for NOMA and OMA cases, given $M=16$ antennas and 20 dBm transmit power at BS.} \label{fig: rate_vs_N} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{Figures/datarate_vs_power-eps-converted-to.pdf} \caption{Sum rate versus total transmit power at BS for NOMA and OMA cases, given $M=16$ BS antennas and $N=16$ reflecting elements.} \label{fig: rate_vs_power} \end{figure} \subsection{QoS-based versus Channel Condition-based Clustering Methods} In Fig.~\ref{fig: DL_DRL_rate_vs_K_ST}, we compare the network throughput under different clustering methods, namely, the proposed QoS-based clustering scheme and the conventional channel condition-based clustering scheme. Simulations are performed with $M=24$ BS antennas and $P_{\text{max}} = 20$ dBm maximum transmit power. It can be observed that, for smaller numbers of MUs, i.e., $K\leq 8$, the performance difference between the two clustering schemes is small because their clustering results are likely to be similar. However, when there are more MUs in the system, i.e., $K\geq 12$, the proposed QoS-based method starts to achieve higher throughput than the conventional channel condition-based approach. The performance gain further increases as the number of MUs grows.
\subsection{Sum Rate versus the Number of RIS Elements} In Fig.~\ref{fig: rate_vs_N}, we observe that, even without the enhancement of the RIS, the proposed NOMA network with the QoS-based clustering scheme outperforms the conventional OMA network by around 4 dBm/Hz in sum rate. In both NOMA and OMA networks, the deployment of the RIS yields approximately 5\% to 25\% throughput gain as the number of reflecting elements ranges from $N=8$ to $N=64$. Higher performance gains can be attained by increasing the number of RIS elements; however, the optimization complexity and the deployment cost increase as well. \subsection{Sum Rate versus BS Total Transmission Power} Fig.~\ref{fig: rate_vs_power} compares the throughput of the OMA and NOMA systems as the BS power varies between 0 dBm and 30 dBm. It can be observed that the NOMA system outperforms the OMA system for all values of BS transmit power, given the same number of MUs. We also notice that the NOMA network with 4 MUs achieves higher throughput than the OMA network with 6 MUs when the BS power is less than 15 dBm. Moreover, as the number of MUs increases, the throughput of the NOMA networks increases by a larger amount than that of the OMA networks. \section{Conclusions}\label{sec:conclusion} In this article, we proposed a QoS-based clustering method to improve the resource efficiency of RIS-assisted NOMA networks. We formulated the sum rate maximization problem by jointly optimizing the RIS phase shift and the BS power allocation. The proposed DL solution utilized a low-complexity network architecture and employed MAML to improve the convergence rate. Simulation results demonstrated that the proposed QoS-based clustering method achieved higher transmission throughput than the baseline method.
Results also illustrated that the proposed QoS-based NOMA model achieved higher throughput than the conventional OMA models for a wide range of BS transmit powers and numbers of RIS elements. Moreover, the implementation of the RIS improved the network throughput by a substantial amount, a gain that grows further as the number of reflecting elements increases. \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro_sec} Recently, Antal \emph{et al.} \cite{antal} proposed a triad dynamics to model the approach to social balance. An essential ingredient in the algorithm is the reduction of frustration in the following sense. To an edge (or link) in the all-to-all topology is assigned a value of $+1$ or $-1$ if it connects two individuals who are friends or enemies, respectively. The sign $\pm 1$ of a link is also called its spin. If the product of links along the boundary of a triad is negative, the triad is called frustrated (or imbalanced), otherwise it is called balanced (or unfrustrated). The state of the network is called balanced if all triads are balanced. If the balanced state is achieved by all links being positive the state is called ``paradise''. The algorithm depends on a parameter $p \in [0,1]$ called propensity which determines the tendency of the system to reduce frustration via flipping a negative link to a positive one with probability $p$ or via flipping a positive link to a negative one with probability $1-p$. For an all-to-all topology Antal \emph{et al.} predict a transition from imbalanced stationary states for $p<1/2$ to balanced stationary states for $p\geq 1/2$. Here the dynamics is motivated by social applications so that the notion of frustration from physics goes along with frustration in the psychological sense. \\ Beyond frustration in social systems, within physics, the notion is familiar from spin glasses. It is the degree of frustration in spin glasses which determines the qualitative features of the energy landscape. A high [low] degree of frustration corresponds to many [few] local minima in the energy landscape. In terms of the energy landscape, it was speculated by Sasai and Wolynes \cite{wolynes} that it is the low degree of frustration in a genetic network which is responsible for the few stable cell states in the high-dimensional space of states.
\\ Calculational tools from spin-glass theory like the replica method \cite{parisi} turned out to be quite useful in connection with generic optimization problems (as they occur, for example, in computer science) whenever there is a map between the spin-glass Hamiltonian and a cost function. The goal of finding the ground-state energy of the Hamiltonian translates to the minimization of the costs. A particular class of these optimization problems consists of the satisfiability problems. More specifically, one has a system of $B$ Boolean variables and $Q$ logical constraints (clauses) between them. In this case, minimizing the costs means minimizing the number of violated constraints. If a configuration exists that violates no constraint, the problem is said to be satisfiable; in the Hamiltonian language, it has zero ground-state energy. Here it is obvious that computer algorithms designed to find the optimal solution have to reduce the frustration down to a minimal value. So the reduction of frustration is common to very different dynamical processes. \\ The algorithms we have to deal with belong to the so-called incomplete algorithms \cite{garey,weigt,semerjian} characterized by some kind of Monte-Carlo dynamics that tries to find the solution via stochastic local moves in configuration space, starting from a random initial configuration. It either finds the solution ``fast'' or never (this will be made more precise below). Among the satisfiability problems there are the $k$-SAT ($k$S) problems \cite{cook,mezard,mezard2}, for which actually no frustration-free solution exists above a certain threshold in the density of clauses imposed on the system. In this case the unsatisfiability is not a feature of the algorithm but intrinsic to the problem.
However, there is a special case of $k$S problems, the so-called $k$-XOR-SAT ($k$XS) problems \cite{weigt,semerjian,mezard2,cocco}, which are always solvable by some global algorithm, but pose a challenge for finding the solution by some kind of Monte-Carlo dynamics, very similar to the one used for solving the $k$S problem, where actually no solution may exist. Now it is these $k$XS problems and their solutions that are related to the social balance dynamics. \\ In particular it can be easily shown \cite{mezard,mezard2,cocco} that the satisfiability problem $3$S (and also the subclass $3$XS) can first be mapped onto a $3$-spin model that is a spin glass, and, as we shall show below, the $3$-spin glass model can next be mapped onto the triad dynamics of Antal \emph{et al.} \cite{antal}. The $k$XS problem is usually studied for diluted connections because the interesting changes in the phase structure of the $k$XS problem appear at certain threshold parameters in the dilution, while the all-to-all case is not of particular interest there. \\ Dilution of the all-to-all topology is not only needed for the mapping to the $3$XS problem in its usual form. It is also a natural generalization of the triad dynamics considered in \cite{antal} for social balance. A diluted network is more realistic than an all-to-all topology for two reasons: either two individuals may not know each other at all (this is very likely in the case of a large population size) or they neither like nor dislike each other, but are indifferent, as emphasized in \cite{cartwright} as an argument for the postulated absence of links. For introducing dilution into the all-to-all network considered by Antal \emph{et al.} it is quite natural to study random Erd\"os-R\'enyi networks \cite{erdos} for which two nodes are connected by a link with probability $w$.
On the other hand, dilution in the $k$XS problem is parameterized by the ratio $\alpha$ of the number of clauses over the number of variables (variables in the corresponding spin model or number of links in the triad dynamics). We will determine the map between both parameterizations. \\ \vspace{5pt} \\ In the first part of this paper (section \ref{model}) we generalize the triad dynamics to $k$-cycle dynamics, driven by the reduction of frustration, with arbitrary integer $k$. In the context of \emph{social balance} theory, Cartwright and Harary \cite{cartwright} introduced the notion of balance describing a balanced state with all $k$-cycles being balanced and $k$ not restricted to three. We first study such a model on fully connected networks (section \ref{complete}). For given fixed and integer $k\geq 3$ in the updating rules, we derive the differential equations for the time evolution due to the local dynamics (section \ref{evolution}) and we predict the stationary densities of $k$-cycles, with $k$ an arbitrary integer, containing $j \leq k$ negative links (section \ref{stationary}). As long as $k$ is odd (section \ref{odd}) in the updating dynamics, the results are only quantitatively different from the case of $k=3$ considered in \cite{antal}. An odd cycle of length three is, however, not an allowed loop in a bipartite graph, for which links may only exist between different types of vertices, so that the length of a loop of minimal size in a bipartite graph is four. In addition, a $4$-cycle with four negative links (that is, four individuals each of whom dislikes two others) is balanced and not frustrated, although it may be called the ``hell'', so it does not need to be updated in order to reduce its frustration. (To call the hell with four negative links balanced is not specific to the notion of frustration in physics; also in social balance theory it is the product over links in the loop which counts and decides about balance or frustration \cite{roberts}.)
This difference is essential as compared to the triad dynamics, in which a triad of three unfriendly links is always updated. It has important implications for the phase structure as we will show. For even values of $k$ larger than four, again there are only quantitative differences in the phase structure as compared to $k=4$ (section \ref{even}). \\ As in \cite{antal}, for odd values of $k$, we shall distinguish between stationary states in the infinite volume limit that can be either balanced (for $p \geq 1/2$) or frustrated (for $p< 1/2$), since it is not possible to reach the paradise in a finite time. They are predicted as solutions of mean field equations. In numerical simulations, fluctuations about their stationary values do not die out in the phase for $p< 1/2$ so that some frustration remains, while for $p \geq 1/2$ frozen states are always reached in the form of the paradise although other balanced states with a finite amount of negative links are in principle available, but are quite unlikely to be realized during the finite simulation time. They are exponentially suppressed due to their small weight in configuration space. We calculate the time it takes to reach a frozen state at and above the phase transition (section \ref{time_odd}). For even values of $k$ we have only two types of stationary frozen states, ``paradise'' and ``hell'', with all links being positive and negative, respectively. In this case the time to reach the frozen states at the transition can be calculated in two ways. The first possibility applies for both even and odd values of $k$ and is based on calculating the time it takes until a fluctuation is of the same order in size as the average density of unfriendly links.
The second one, applicable to the case of even values of $k$, can be obtained by mapping the social system to a Markov process known as the Wright-Fisher model for diploid organisms \cite{wright}, for which the decay time to one of the final configurations (all ``positive'' or all ``negative'' genes) increases quadratically in the size $N$ of the system (section \ref{sec:time_even}). \\ In the second part we generalize the $k$-cycle dynamics to diluted systems (section \ref{sec:diluted}). The dilution, originally given in terms of the probability of connecting two nodes in a random Erd\"os-R\'enyi network \cite{erdos}, is then parameterized in terms of the dilution parameter $\alpha$, and the results for stationary and frozen states and the time needed to reach them will be given as a function of $\alpha$ (section \ref{sec:ratio}). The original triad dynamics of Antal \emph{et al.} with propensity parameter $p$ on a diluted network contains, as a special case, the usual Random-Walk SAT (RWS) algorithm for finding the solution of the $3$XS problem, corresponding to the choice of $p=1/3$ in the triad dynamics. Therefore it is natural to generalize the RWS algorithm to generic $p\in [0,1]$ and to study the modifications in the performance of the algorithm as a function of $p$ (section \ref{sec:RWS}). For the $k$S problem, and similarly for the $k$XS problem, there are three thresholds in $\alpha$, namely $\alpha_d$, $\alpha_s$, and $\alpha_c$, with $\alpha_d<\alpha_s<\alpha_c$. Roughly speaking, the threshold $\alpha_d$ corresponds to a dynamical transition between a phase in which the RWS algorithm finds a solution in a time linearly increasing with the size of the system for $\alpha<\alpha_d$, and exponentially increasing with the system size for $\alpha>\alpha_d$.
The value $\alpha_s$ characterizes a transition in the structure of the solution space, from one cluster of exponentially many solutions ($\alpha<\alpha_s$) to exponentially many clusters of solutions ($\alpha>\alpha_s$). Finally, $\alpha_c$ refers to the transition between satisfiable and unsatisfiable $k$S problems; that is, for $\alpha>\alpha_c$ these models are in the UNSAT phase, in which not all constraints can be satisfied simultaneously, so that a finite amount of frustration remains. Above this last threshold lies a value $\alpha_m$ such that for $\alpha>\alpha_m$ the mean-field approximation is justified that was used for the maximum value of $\alpha$ in the all-to-all topology of the triad dynamics of \cite{antal}. We shall study the influence of the parameter $p$ on the value of $\alpha_d$ (section \ref{sec:alphad}) and on the Hamming distance for $\alpha$ smaller or larger than $\alpha_s$ (section \ref{sec:alphas}). Moreover we will show how the choice of $p$ affects the possibility of finding a solution for the $k$XS problem (section \ref{sec:pc}) and we will determine the validity range of the mean-field approximation (section \ref{sec:alpham}). As it turns out, the parameter $p$ introduces some bias in the RWS, accelerating the convergence to ``paradise'' and reducing the explored part of configuration space. On the other hand, an inappropriate choice of $p$ or too much dilution may prevent an approach to paradise. Fluctuations in the wrong direction, increasing the amount of frustration, go along with improved convergence to the balanced state. \section{The Model for Social Balance} \label{model} We represent individuals as vertices (or nodes) of a graph and a relationship between two individuals as a link (or edge) that connects the corresponding vertices.
Moreover we assign to a link $(i,j)$ between two nodes $i$ and $j$ a binary spin variable $s_{i,j}=\pm 1$, with $s_{i,j}=1$ if the individuals $i$ and $j$ are friends, and $s_{i,j}=-1$ if $i$ and $j$ are enemies. We consider the standard notion of \emph{social balance} extended to cycles of order $k$ \cite{cartwright,heider}. In particular a cycle of order $k$ (or a $k$-cycle) is defined as a closed path between $k$ distinct nodes $i_1$, $i_2$, \ldots, $i_k$ of the network, where the path is performed along the links of the network $(i_1,i_2)$, $(i_2,i_3)$, \ldots, $(i_{k-1},i_k)$, $(i_k,i_1)$. Given a value of $k$ we have $k+1$ different types $T_0$, $T_1$, \ldots, $T_j$, \ldots, $T_k$ of cycles of order $k$ containing $0$, $1$, \ldots, $j$, \ldots, $k$ negative links, respectively. A cycle of order $k$ in the network is considered as balanced if the product of the signs of links along the cycle equals $1$, otherwise the cycle is considered as imbalanced or frustrated. Accordingly, the network is considered as balanced if each $k$-cycle of the network is balanced. \\ We consider our social network as a dynamical system. We perform a local unconstrained dynamics obtained by a natural generalization of the local triad dynamics recently proposed by Antal \emph{et al.} \cite{antal}. We first fix a value of $k$. Next, at each update we choose at random a $k$-cycle $T_j$. If this $k$-cycle $T_j$ is balanced ($j$ is even) nothing happens. If $T_j$ is imbalanced ($j$ is odd) we change one of its links as follows: if $j<k$, then $T_j \to \; T_{j-1}$ occurs with probability $p$, while $T_j \to \; T_{j+1}$ occurs with probability $1-p$; if $j=k$, then $T_{j} \to \; T_{j-1}$ happens with probability $1$. During one update, the positive [negative] link which we flip to take a negative [positive] sign is chosen at random among all the possible positive [negative] links belonging to the $k$-cycle $T_j$.
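A minimal implementation of one such update event may be sketched as follows (the data layout, with spins stored in a dictionary keyed by unordered node pairs, is our own choice):

```python
import random

def update_cycle(spins, cycle, p, rng=random):
    """Apply one update event of the k-cycle dynamics to the cycle given by
    the node sequence `cycle`.  `spins` maps a frozenset {i, j} to the spin
    s_ij = +1 (friends) or -1 (enemies)."""
    k = len(cycle)
    edges = [frozenset((cycle[i], cycle[(i + 1) % k])) for i in range(k)]
    neg = [e for e in edges if spins[e] == -1]
    if len(neg) % 2 == 0:        # even j: balanced cycle, nothing happens
        return
    if len(neg) == k or rng.random() < p:
        spins[rng.choice(neg)] = +1              # T_j -> T_{j-1}
    else:
        pos = [e for e in edges if spins[e] == +1]
        spins[rng.choice(pos)] = -1              # T_j -> T_{j+1}
```

One time unit then corresponds to $L$ such update events applied to randomly chosen $k$-cycles of the network.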
One unit of time is defined as a number of updates equal to $L$, where $L$ is the total number of links of the network. In Figure \ref{fig:example} we show a simple scheme that illustrates the dynamical rules in the case $k=4$ (A) and $k=5$ (B). It is evident from the figure that for even values of $k$ the system remains the same if we simultaneously flip all the spins $s_{i,j} \to -s_{i,j}$ $\forall \; (i,j)$ and make the transformation $p \to 1-p$. The same is not true for odd values of $k$. The reason is that a $k$-cycle with only ``unfriendly'' links is balanced for even values of $k$, while it is imbalanced for odd values of $k$. The presence or absence of this symmetry property for even or odd values of $k$, respectively, is responsible for very different features in the phase structure. This will be studied in detail in the following sections. \begin{figure}[ht] \includegraphics*[width=0.47\textwidth]{example} \caption{Dynamical rules in case of $k=4$ (A) and $k=5$ (B). The cycles containing an odd number of ``unfriendly'' links are considered as imbalanced and evolve into balanced ones. Full and dashed lines represent ``friendly'' and ``unfriendly'' links respectively.} \label{fig:example} \end{figure} \section{Complete graphs} \label{complete} We first consider the case of fully connected networks. Later we extend the main results to the case of diluted networks in section \ref{sec:diluted}. In a complete graph every individual has a relationship with everyone else. Let $N$ be the number of nodes of this complete graph. The total number of links of the network is then given by $L={N \choose 2}$, while the total number of $k$-cycles is given by $M={N \choose k}\left(k-1\right)!/2$, since each set of $k$ nodes supports $\left(k-1\right)!/2$ distinct $k$-cycles. ${x \choose y}$ is the standard notation for the binomial coefficient. It counts the total number of different ways of choosing $y$ elements out of $x$ elements in total, with $0 \leq y\leq x$ and $x,y \in \mathbb{N}$.
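These counting relations can be verified by brute-force enumeration on small complete graphs, identifying each $k$-cycle with its set of links so that every closed loop is counted exactly once:

```python
from itertools import combinations, permutations
from math import comb, factorial

def k_cycles(N, k):
    """All distinct k-cycles of the complete graph K_N, each represented
    by its (frozen) set of edges."""
    cycles = set()
    for nodes in combinations(range(N), k):
        for order in permutations(nodes):
            cycles.add(frozenset(
                frozenset((order[i], order[(i + 1) % k])) for i in range(k)))
    return cycles

N, k = 6, 4
cycles = k_cycles(N, k)
# total number of k-cycles: C(N, k) * (k-1)! / 2
assert len(cycles) == comb(N, k) * factorial(k - 1) // 2
# each link belongs to (N-2)!/(N-k)! different k-cycles
through_01 = sum(1 for c in cycles if frozenset((0, 1)) in c)
assert through_01 == factorial(N - 2) // factorial(N - k)
# consistency: k * M = L * (number of k-cycles per link)
assert k * len(cycles) == comb(N, 2) * through_01
```

The last assertion expresses the fact that counting link-cycle incidences either cycle by cycle or link by link must give the same result.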
Moreover we define $M_j$ as the number of $k$-cycles containing $j$ negative links, and $m_j= \; M_j/\;M$ the respective density of $k$-cycles of type $T_j$. The total number of positive links $L^+$ is then related to the number of $k$-cycles by the relation \begin{equation} L^+ = \frac{\sum_{i=0}^k\left(k-i\right) \; M_i}{\left(N-2\right)! \;/\; \left(N-k\right)!}\;\;\; .\label{eq:link_positive} \end{equation} A similar relation holds for the total number of negative links $L^-$ \begin{equation} L^- = \frac{\sum_{i=0}^k i \; M_i}{\left(N-2\right)! \;/\; \left(N-k\right)!}\;\;\;. \label{eq:link_negative} \end{equation} In particular, in Eq.s (\ref{eq:link_positive}) and (\ref{eq:link_negative}) the numerators give the total number of positive and negative links in all the $k$-cycles, respectively, while the common denominator comes from the fact that one link belongs to $(N-2)(N-3)\cdots(N-k+1)=\left(N-2\right)!/ \left(N-k\right)!$ different $k$-cycles. Furthermore the density of positive links is $\rho=L^+/L=1-\frac{1}{k}\sum_{i=0}^k\; i \; m_i$ (each $k$-cycle carrying $k$ links), while the density of negative links is $1-\rho$. \subsection{Evolution Equations} \label{evolution} In view of deriving the mean field equations for the unconstrained dynamics, introduced in the former section \ref{model}, we need to define the quantity $M^+_j$ as the average number of $k$-cycles of type $T_j$ which are attached to a positive link. This number is given by \[ M^+_j = \frac{\left(k-j\right) \;\; M_j }{L^+}\;\;\; , \] while similarly \[ M^-_j = \frac{j \;\; M_j }{L^-} \] counts the average number of $k$-cycles of type $T_j$ attached to a negative link. In terms of densities we can easily write \begin{equation} m^+_j = \frac{\left(k-j\right) \;\; m_j }{\sum_{i=0}^k\; \left(k-i\right) \; m_i} \label{eq:density_positive} \end{equation} and \begin{equation} m^-_j = \frac{j \;\; m_j }{\sum_{i=0}^k\; i \; m_i}\;\;\;.
\label{eq:density_negative} \end{equation} Now let $\pi^+$ be the probability that a link flips its sign from positive to negative in one update event and $\pi^-$ the probability that a negative link changes its sign to $+1$ in one update event. We can write these probabilities as \begin{equation} \pi^+ = \left(1-p\right) \; \sum_{i=1}^{(k-1)/2} \; m_{2i-1} \label{eq:prob_positive_odd} \end{equation} and \begin{equation} \pi^- = p \; \sum_{i=1}^{(k-1)/2} \; m_{2i-1} \; + \; \; m_k \;\;\;, \label{eq:prob_negative_odd} \end{equation} valid for odd values of $k$. For even values of $k$, these probabilities read \begin{equation} \pi^+ = \left(1-p\right) \; \sum_{i=1}^{k/2} \; m_{2i-1} \label{eq:prob_positive_even} \end{equation} and \begin{equation} \pi^- = p \; \sum_{i=1}^{k/2} \; m_{2i-1} \; \; \; . \label{eq:prob_negative_even} \end{equation} Since each update changes $\left(N-2\right)! / \left(N-k\right)!$ $k$-cycles, and the number of updates in one time step is equal to $L$ update events, the rate equations in the mean field approximation can be written as \begin{equation} \left\{ \begin{array}{l} \frac{d}{dt}\; m_0 \; = \; \pi^- \; m^-_1 \; - \; \pi^+ \; m^+_0 \\ \\ \begin{array}{ll} \frac{d}{dt}\;m_1 \; = \; & \pi^+ \; m^+_0 \; + \; \pi^- \; m^-_2 \; + \\ & - \; \pi^- \; m^-_1 \; - \; \pi^+ \; m^+_1 \end{array} \\ \vdots \\ \begin{array}{ll} \frac{d}{dt}\;m_j \; = \; & \pi^+ \; m^+_{j-1} \; + \; \pi^- \; m^-_{j+1} \; + \\ & - \; \pi^- \; m^-_{j} \; - \; \pi^+ \; m^+_{j} \end{array} \\ \vdots \\ \begin{array}{ll} \frac{d}{dt}\;m_{k-1} \; = \; & \pi^+ \; m^+_{k-2} \; + \; \pi^- \; m^-_{k} \; + \\ & - \; \pi^- \; m^-_{k-1} \; - \; \pi^+ \; m^+_{k-1} \end{array} \\ \\ \frac{d}{dt}\;m_k \; = \; \pi^+ \; m^+_{k-1} \; - \; \pi^- \; m^-_k \end{array} \right. \;\;\;.
\label{eq:mean_field} \end{equation} We remark that the only difference between the cases of odd and even values of $k$ comes from Eq.s (\ref{eq:prob_positive_odd}) and (\ref{eq:prob_negative_odd}), and Eq.s (\ref{eq:prob_positive_even}) and (\ref{eq:prob_negative_even}), respectively. This difference is the main reason why the cases of odd and even values of $k$ lead to completely different behaviors and why we treat them separately in the following section \ref{stationary}. \subsection{Stationary states} \label{stationary} Next let us derive the stationary states from the rate equations (\ref{eq:mean_field}) that give a proper description of the unconstrained dynamics of $k$-cycles in a complete graph. Imposing the stationary condition $\frac{d}{dt}\;m_j = 0$, $\forall \; 0 \leq j \leq k$, we easily obtain \begin{equation} m^+_{j-1}\; = \; m^-_j \;\;\;, \; \forall \; 1 \leq j \leq k \;\;\; . \label{eq:stationary1} \end{equation} Then, forming products of the quantities appearing in Eq.(\ref{eq:stationary1}), we have \[ m^+_{j-1}\; m^-_{j+1}\; = \; m^+_j\; m^-_j\;\;\; , \; \forall \; 1 \leq j \leq k-1 \;\; \] and, using the definitions of Eq.s (\ref{eq:density_positive}) and (\ref{eq:density_negative}), we finally obtain \begin{equation} \left(k-j+1\right)\left(j+1\right) \; m_{j-1} \; m_{j+1} \; \; = \; \; \left(k-j\right) j \left(\; m_j\; \right)^2\;\;\; , \label{eq:stationary3} \end{equation} valid $\forall \; 1 \leq j \leq k-1$. Moreover the normalization condition $\sum_i \; m_i \; = \; 1$ should be satisfied. Furthermore, in the stationary state, the density of friendships should be fixed, so that we should impose $\pi^+ \; = \; \pi^-$.
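For $k=3$ the stationary conditions can be checked numerically. The following sketch (with $p=0.3$ as an arbitrary test value and a simple Euler integrator) verifies that binomial densities at the exact stationary value $\rho_\infty=1/[\sqrt{3(1-2p)}+1]$ of \cite{antal} annihilate the right-hand sides of the rate equations, and that the flow relaxes towards this point:

```python
import math

def rhs(m, p):
    """Right-hand side of the mean-field rate equations, valid for k = 3."""
    k = 3
    tot_pos = sum((k - j) * m[j] for j in range(k + 1))
    tot_neg = sum(j * m[j] for j in range(k + 1))
    mp = [(k - j) * m[j] / tot_pos for j in range(k + 1)]   # m_j^+
    mn = [j * m[j] / tot_neg for j in range(k + 1)]         # m_j^-
    pi_pos = (1.0 - p) * m[1]          # a positive link flips (odd k = 3)
    pi_neg = p * m[1] + m[3]           # a negative link flips
    d = [0.0] * (k + 1)
    d[0] = pi_neg * mn[1] - pi_pos * mp[0]
    for j in range(1, k):
        d[j] = (pi_pos * mp[j - 1] + pi_neg * mn[j + 1]
                - pi_neg * mn[j] - pi_pos * mp[j])
    d[k] = pi_pos * mp[k - 1] - pi_neg * mn[k]
    return d

p = 0.3
rho_inf = 1.0 / (math.sqrt(3.0 * (1.0 - 2.0 * p)) + 1.0)
# binomial densities at rho_inf are an exact stationary point
m_star = [math.comb(3, j) * rho_inf**(3 - j) * (1 - rho_inf)**j
          for j in range(4)]
assert max(abs(x) for x in rhs(m_star, p)) < 1e-12

# Euler integration from a binomial initial condition with rho_0 = 0.5
m = [math.comb(3, j) * 0.5**3 for j in range(4)]
for _ in range(20000):                 # 200 time units with dt = 0.01
    d = rhs(m, p)
    m = [x + 0.01 * y for x, y in zip(m, d)]
rho_num = 1.0 - sum(j * x for j, x in enumerate(m)) / 3.0
assert abs(rho_num - rho_inf) < 1e-3
```

The Euler step size and integration time are convenience choices; any standard ODE integrator gives the same stationary value.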
\subsubsection{The case of odd values of $k$}\label{odd} In the case of odd values of $k$, the condition for having a fixed density of friendships reads \begin{equation} m_k \; = \; \left(1 - 2p\right) \; \sum_{i=1}^{(k-1)/2} \; m_{2i-1}\;\;\;, \label{eq:stationary_odd1} \end{equation} where we used Eq.s (\ref{eq:prob_positive_odd}) and (\ref{eq:prob_negative_odd}). In principle the equations (\ref{eq:stationary3}) plus the normalization condition and the fixed-friendship relation (\ref{eq:stationary_odd1}) determine the stationary solution. For $k=3$ Antal \emph{et al.} \cite{antal} found \begin{equation} m_j \; = \; {3 \choose j} \; \rho_\infty^{3-j} \; \left(1-\rho_\infty\right)^j\;\;, \; \forall \; 0 \leq j \leq 3 \;\;\; , \label{eq:antal1} \end{equation} where \begin{equation} \rho_\infty \; = \; \left\{ \begin{array}{ll} 1/\left[ \sqrt{3\left(1-2p\right)} +1 \right] & \textrm{ , if } p \leq 1/2 \\ 1 & \textrm{ , if } p \geq 1/2 \end{array} \right. \label{eq:antal2} \end{equation} is the stationary density of friendly links. In the same manner also the case $k=5$ can be solved exactly, with the solution \begin{equation} m_j \; = \; {5 \choose j} \; \rho_\infty^{5-j} \; \left(1-\rho_\infty\right)^j \;\;, \; \forall \; 0 \leq j \leq 5 \;\;\; , \label{eq:k5a} \end{equation} where \begin{equation} \rho_\infty \; = \; \left[ \sqrt{5\left(1-2p\right) \left(1+\sqrt{1+\frac{1}{5(1-2p)}}\right)} +1 \right]^{-1} \label{eq:k5b} \end{equation} for $p\leq 1/2$, while $\rho_\infty=1$ for $p\geq 1/2$. \\ In Figure \ref{fig:t5} we plot the densities $m_j$ given by Eq.(\ref{eq:k5a}) and the stationary density of friendly links $\rho_\infty$ given by Eq.(\ref{eq:k5b}) as a function of $p$. Moreover we verified the validity of the solution by performing several numerical simulations on a complete graph with $N=64$ nodes (full dots).
We compute numerically the average density of positive links after $10^3$ time steps, where the average is taken over $10^2$ different realizations of the system. At the beginning of each realization we select at random the values of the signs of the links, each of them being positive or negative with the same probability, so that $\rho_0=0.5$. The numerical results perfectly reproduce our analytical predictions. \begin{figure}[ht] \includegraphics*[width=0.47\textwidth]{t5} \caption{(Color online) Exact stationary densities $m_j$ for the cycles of order $k=5$ from Eq.(\ref{eq:k5a}) and stationary density of friendly links $\rho_\infty$ from Eq.(\ref{eq:k5b}), both as a function of the dynamical parameter $p$. Numerical results are also reported for a system with $N=64$ vertices. Each value (full dot) is obtained by averaging the density of friendly links reached after $10^3$ time steps over $10^2$ different realizations with random initial conditions ($\rho_0 = 0.5$).} \label{fig:t5} \end{figure} \\ As one can easily see, both solutions (\ref{eq:antal1}) and (\ref{eq:k5a}) are just binomial distributions. This means that the densities of cycles of order $k=3$ or $k=5$ with $j$ negative links are simply given by the probability of finding such cycles on a complete graph in which each link is set equal to $1$ with probability $\rho_\infty$ or equal to $-1$ with probability $1-\rho_\infty$. (As already noticed in \cite{antal}, this result may come as a bit of a surprise, because the $3$-cycle or here the $5$-cycle dynamics seems to be biased towards the reduction of frustration; on the other hand, it is a bias for individual triads without any constraint of the type that the frustration of the whole ``society'' should get reduced.) \\ For odd values of $k>5$, a stationary solution always exists.
This solution becomes harder to find as $k$ increases, because the maximal order of the polynomials involved increases with $k$ (for $k=3$ we have polynomials of first order, for $k=5$ polynomials of second order, for $k=7$ of third order, and so on). So it becomes impossible to find the solution analytically once the maximal order of solvable equations is exceeded. Nevertheless we can give an approximate solution using a self-consistent approach as we shall outline in the following. We suppose that the general solution for the stationary densities is of the form \begin{equation} m_j \; = \; {k \choose j} \; \rho_\infty^{k-j} \; \left(1-\rho_\infty\right)^j \;\;, \; \forall \; 0 \leq j \leq k \;\;\; . \label{eq:kgen} \end{equation} Eq.(\ref{eq:kgen}) is an appropriate ansatz as we can directly see from the definition of the density of friendly links $\rho_\infty = 1-\frac{1}{k}\sum_{i=0}^k i\; m_i= 1- (1-\rho_\infty)$, where the last equality follows because the mean value of the binomial distribution is $k\left(1-\rho_\infty\right)$. (Actually, such a self-consistency condition is satisfied by any distribution of the $m_j$'s with mean value equal to $k\left(1-\rho_\infty\right)$.) Moreover the ansatz for the stationary solution in the form of Eq.(\ref{eq:kgen}) has the following features: first, it is valid for the special cases $k=3$ and $k=5$, and second, it is numerically supported. In Figure \ref{fig:test} we show some results obtained by numerical simulations. We plot the densities $m_j$ for different values of $k$ [$k=7$ (A), $k=9$ (B), $k=11$ (C) and $k=21$ (D)] and different values of $p$ [$p=0$ (black circles), $p=0.3$ (red squares), $p=0.44$ (green diamonds) and $p=0.49$ (blue crosses)]. We performed $50$ different realizations of a system of $N=64$ vertices, where the densities are extrapolated from $10^6$ samples ($k$-cycles) at each realization and after $5\cdot 10^2$ time steps of the simulations (so that we have reached the stationary state).
The initial values of the signs are chosen to be friendly or unfriendly with the same probability ($\rho_0=0.5$). The full lines are given by Eq.(\ref{eq:kgen}), for which the right value of $\rho_\infty$ is given by the average stationary density of friendly links, where the average is performed over all simulations. Furthermore, we numerically check whether Eq.(\ref{eq:kgen}) holds with the same $\rho_\infty$ if we measure the densities of cycles of order $k' \neq k$, and moreover whether it holds during the time evolution if we use the time-dependent density of friendly links $\rho(t)$ instead of the stationary one $\rho_\infty$. Since all these checks are positive, we may say that if at some time the distribution of friendly links (and consequently of unfriendly links) is uncorrelated, it will stay so forever. \begin{figure}[ht] \includegraphics*[width=0.47\textwidth]{test} \caption{(Color online) Stationary densities $m_j$ for the $k$-cycles with $j$ negative links and different values of $k$ [ $k=7$ (A) , $k=9$ (B), $k=11$ (C) and $k=21$ (D) ], and for different values of $p$ [ $p=0$ (black circles) , $p=0.3$ (red squares) , $p=0.44$ (green diamonds) and $p=0.49$ (blue crosses) ]. The numerical results (symbols) represent the histograms estimated from $10^6$ samples and over $50$ different realizations of the network. In particular the initial values of the spins are equally likely at each realization (so that $\rho_0=0.5$), the distributions are sampled after $5\cdot 10^2$ time steps and the system size is always $N=64$.
The prediction of Eq.(\ref{eq:kgen}) is plotted as a full line and the value of $\rho_\infty$ used is taken from the simulations as the average value of the stationary density of positive links.} \label{fig:test} \end{figure} \\ Assuming that the ansatz (\ref{eq:kgen}) is valid, we evaluate the unknown value of $\rho_\infty$ self-consistently by imposing the condition that the density of friendly links is fixed in the stationary state \[ \pi^+ \; = \; \pi^- \;\;\; \Leftrightarrow \;\; \left(1-2p\right) \sum_{i=1}^{(k-1)/2}\; m_{2i-1}\; = \; m_k \;\;\;. \] In particular we can write \begin{equation} \sum_{i=1}^{(k-1)/2}\; m_{2i-1} \; + \; m_k \; = \; \sum_{i=1}^{(k+1)/2}\; m_{2i-1} \; = \xi\;, \label{eq:kgen_a} \end{equation} and so \[ m_k\; = \; \left(1-2p\right)\left(\xi-\; m_k\right)\; \] from which \begin{equation} \rho_\infty\; = \; 1-\left[ \frac{\xi \left(1-2p\right)}{2\left(1-p\right)}\right]^{1/k}\;\;\;, \label{eq:kgen_fin} \end{equation} for $p \leq 1/2$, while $\rho_\infty=1$ for $p \geq 1/2$. In particular we notice that Eq.(\ref{eq:kgen_fin}) goes to zero as $k \to \infty$ for $p < 1/2$: since $0 \leq \xi \leq 1$, the expression in square brackets is a fixed number in $[0,1)$, whose $k$-th root tends to one. This means that in the limit of large $k$ the stationary density of friendly links takes the typical shape of a step function centered at $p=1/2$, with $\rho_\infty=0$ for $p<1/2$ and $\rho_\infty=1$ for $p>1/2$. This is exactly the result we find for the case of even values of $k$ (see the next section \ref{even}), and it is easily explained, since in the limit of large $k$ the distinction between odd and even values of $k$ should become irrelevant. \\ Furthermore it should be noticed that $\xi$ defined in Eq.(\ref{eq:kgen_a}) is nothing more than the sum of all odd terms of a binomial distribution.
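Eq.(\ref{eq:kgen_fin}) is straightforward to evaluate numerically. The sketch below is our own illustration (the function name and the treatment of $\xi$ as a free parameter are ours, not part of the original derivation); it also makes the large-$k$ step-function behaviour explicit:

```python
def rho_inf(p, k, xi=0.5):
    # Stationary density of friendly links, Eq. (kgen_fin) of the text.
    # xi is the sum of the odd binomial terms; xi ~ 1/2 is a good approximation.
    if p >= 0.5:
        return 1.0
    return 1.0 - (xi * (1.0 - 2.0 * p) / (2.0 * (1.0 - p))) ** (1.0 / k)
```

For fixed $p<1/2$ the bracket is a constant in $[0,1)$, so its $k$-th root tends to one and $\rho_\infty\to 0$, reproducing the step function discussed above.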
For large values of $k$ we should expect that the sum of the odd terms is equal to the sum of the even terms of the distribution, so that \[ \xi \; = \; \sum_{i=1}^{(k+1)/2}\; m_{2i-1} \; \simeq \frac{1}{2} \; \simeq \; \sum_{i=0}^{(k-1)/2}\; m_{2i} \;\;\;, \] because of the normalization. In Figure \ref{fig:theory} we plot the quantity $(1-\rho_\infty)^k$ obtained by numerical simulations for different values of $k$ [ $k=3$ (black circles) , $k=5$ (red squares) , $k=7$ (blue diamonds) , $k=9$ (violet triangles), $k=11$ (orange crosses) ] as a function of $p$. Each point represents the average value of the density of positive links (after $10^3$ time steps) over $10^2$ different realizations. The system size in our simulations is $N=64$, while, at the beginning of each realization, the links have the same probability of having positive or negative sign ($\rho_0 = 0.5$). From Eq.(\ref{eq:kgen_fin}) we expect that the numerical results collapse on the same curve $\xi(1-2p)/(2-2p)$, depending on the parameter $\xi$. Imposing $\xi=1/2$ [dashed line] we obtain an excellent fit for all values of $p$. Only for small values of $p$ is the fit less good than for intermediate and large values of $p$, which is explained by the plot in the inset of Figure \ref{fig:theory}. There Eq.(\ref{eq:kgen_a}) is shown as a function of $p$ for $k=3$ (black dotted line) and for $k=5$ (red full line). The values of $m_j$ are taken directly from the binomial distribution of Eq.(\ref{eq:kgen}), with the values of $\rho_\infty$ known exactly from Eq.s (\ref{eq:antal2}) and (\ref{eq:k5b}) for $k=3$ and $k=5$, respectively. We can see how well the approximation $\xi=1/2$ works already for $k=3$ and how it improves for $k=5$, with the only exception of small values of $p$, where $\xi > 1/2$. Furthermore we see that $\xi < 1/2$ for $p \simeq 1/2$, but in this range the dependence of Eq.(\ref{eq:kgen_fin}) on $\xi$ becomes weaker, since the factor $\xi(1-2p)$ tends to zero anyway.
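Under the binomial ansatz, $\xi$ even has a closed form: the sum of the odd terms of a binomial distribution with success probability $q=1-\rho$ is $[1-(1-2q)^k]/2=[1-(2\rho-1)^k]/2$, which makes the rapid approach $\xi\to 1/2$ explicit. A short check (our own illustration; the function names are not from the text):

```python
from math import comb

def xi_direct(k, rho):
    # Sum of the odd terms of the binomial ansatz, Eq. (kgen_a).
    return sum(comb(k, j) * rho ** (k - j) * (1.0 - rho) ** j
               for j in range(1, k + 1, 2))

def xi_closed(k, rho):
    # Closed form: with q = 1 - rho,
    # sum_{j odd} C(k,j) q^j (1-q)^{k-j} = (1 - (1-2q)^k)/2 = (1 - (2 rho - 1)^k)/2.
    return 0.5 * (1.0 - (2.0 * rho - 1.0) ** k)
```

For $\rho>1/2$ the correction $(2\rho-1)^k$ decays exponentially in $k$, while for $\rho<1/2$ and odd $k$ it is negative, giving $\xi>1/2$, consistent with the behaviour observed at small $p$.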
\begin{figure}[ht] \includegraphics*[width=0.47\textwidth]{theory} \caption{(Color online) Numerical results (symbols) and approximate solution (dashed line) for the function $\left(1-\rho_\infty\right)^k$, depending on the stationary density of positive links $\rho_\infty$ and the parameter $k$ [ $k=3$ (black circles) , $k=5$ (red squares) , $k=7$ (blue diamonds) , $k=9$ (violet triangles) , $k=11$ (orange crosses) ], as a function of the dynamical parameter $p$. The theoretical result, plotted here as a dashed line, is given by Eq.(\ref{eq:kgen_fin}) for $\xi=1/2$. This prediction is in good agreement with the numerical results obtained by averaging the density of friendly links after $10^3$ time steps over $10^2$ different realizations. The system size is $N=64$. Each simulation starts with random initial conditions ($\rho_0=0.5$). Moreover, as we can see from the inset, the value of $\xi$ calculated for $k=3$ (red full line) and for $k=5$ (black dotted line) is very close to $1/2$ for an extended range of $p$.} \label{fig:theory} \end{figure} \subsubsection{The case of even values of $k$}\label{even} The stability of a $k$-cycle with all negative links in the case of even $k$ (see Figure \ref{fig:example}) has deep implications for the global behavior of the model. Actually the elementary dynamics is now symmetric; only the value of $p$ gives a preferential direction (towards a completely friendly or unfriendly cycle) to the basic processes. For odd $k$ and $p < 1/2$, the tendency of the dynamics to reach the state with the smaller number of positive links in the elementary processes (involving no totally unfriendly cycles) is overbalanced by the process $T_{k}\rightarrow T_{k-1}$, which happens with probability one, so that in the thermodynamic limit the system ends up in an active steady state with a finite average density of negative links, due to the competition between the basic processes.
Instead, for even $k$, nothing prevents the system from reaching the ``hell'', that is, a state of only negative links, because here a completely negative cycle is stable. Only for $p=1/2$ do we expect to find a non-frozen fluctuating final state, since in this case the elementary dynamical processes are fully symmetric. Imposing the stationarity conditions on the system does not give detailed information about the final state. As we can see from Eq.s (\ref{eq:prob_positive_even}) and (\ref{eq:prob_negative_even}), for $p\neq1/2$ the only possibility to have $\pi^+=\pi^-$ is the trivial solution for which both probabilities are equal to zero, so that the system must reach a frozen configuration; for $p=1/2$, $\pi^+$ and $\pi^-$ are always equal, and in this case we expect the system to immediately reach an active steady state. In order to describe the final configuration of this active steady state more precisely, it is instructive to consider the mean-field equation for the density of positive links. For a generic even value of $k$, it is easy to see that the number of positive links increases in updates of type $T_{2j-1}\rightarrow T_{2(j-1)}$ with probability $p$, whereas it decreases in updates of type $T_{2j-1}\rightarrow T_{2j}$ with probability $1-p$, so that the mean-field equation governing the behavior of the density of friendly links is given by \begin{equation} \frac{d\rho}{dt}=(2p-1)\rho(1-\rho)\cdot\sum_{i=1}^{k/2} {k \choose 2i-1}\cdot\rho^{k-2i}(1-\rho)^{2(i-1)}\;\;\;. \label{mfr} \end{equation} For $p\neq1/2$ we have only two stationary states, $\rho_{\infty}=0$ and $\rho_{\infty}=1$ (the other roots of the steady-state equation are complex). It is easily understood that for $p<1/2$ the stable configuration is $\rho_{\infty}=0$, while for $p>1/2$ it is $\rho_{\infty}=1$. In contrast, for $p=1/2$ we have $\rho(t)=$const at all times, so that $\rho_{\infty}=\rho(t=0)=\rho_0$. These results are confirmed by numerical simulations.
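The flow implied by Eq.(\ref{mfr}) can be checked with a simple forward-Euler integration. The sketch below is our own illustration (step size, duration and function names are our choices, not from the text):

```python
from math import comb

def drho_dt(rho, p, k):
    # Right-hand side of the mean-field equation (mfr) for even k.
    s = sum(comb(k, 2 * i - 1) * rho ** (k - 2 * i) * (1.0 - rho) ** (2 * (i - 1))
            for i in range(1, k // 2 + 1))
    return (2.0 * p - 1.0) * rho * (1.0 - rho) * s

def evolve(rho0, p, k, dt=0.01, steps=20000):
    # Forward-Euler integration of the mean-field flow.
    rho = rho0
    for _ in range(steps):
        rho += dt * drho_dt(rho, p, k)
    return rho
```

For $p<1/2$ the flow drives $\rho$ to $0$, for $p>1/2$ to $1$, and for $p=1/2$ the density stays frozen at its initial value, exactly as stated above.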
Moreover, the convergence to the thermodynamic limit is quite fast, as can be seen in Figure \ref{thl}, where we plot the density of friendly links $\rho_\infty$ as a function of $p$ for the system sizes $N$ [ $N=8$ (dotted line), $N=16$ (dashed line) and $N=32$ (full line) ] and for $k=4$. Each curve is obtained from averages over $10^3$ different realizations of the dynamical system. In all simulations the links get initially assigned the values $\pm 1$ with equal probability, so that $\rho_0=0.5$. \begin{figure} \includegraphics*[angle=0,width=0.47\textwidth,clip]{therm_lim.eps} \caption{Behavior of the stationary density of friendly links $\rho_{\infty}$ as a function of $p$ for three (small) values of $N$ [ $N=8$ (dotted line), $16$ (dashed line) and $32$ (full line) ] and for $k=4$. The values of the initial configuration are randomly chosen to be $\pm 1$ with density of friendly links $\rho_0=0.5$. The curves are obtained from averages over $10^3$ different realizations.} \label{thl} \end{figure} \subsection{Frozen configurations}\label{frozen} When all $k$-cycles of the network are balanced, we say that the network itself is balanced. In particular, in the case of our unconstrained dynamics we can say that if the network is balanced it has reached a frozen configuration. The configuration is frozen in the sense that no dynamics is left, since the system cannot escape a balanced configuration. Furthermore it was proven \cite{cartwright} that if a graph (not necessarily a complete graph) is balanced, it is balanced independently of the choice of $k$, and that the only possible balanced configurations are given by bipartitions of the network into two subgroups (or ``cliques''), where all the individuals belonging to the same subgroup are friends, while every pair of individuals belonging to different subgroups are enemies (this result is also known as the \emph{Structure Theorem} \cite{roberts}).
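The Structure Theorem is easy to illustrate: assigning $+1$ within each clique and $-1$ across the two cliques, every cycle contains an even number of negative links, i.e. the product of its link signs is $+1$. A minimal sketch (our own construction, with hypothetical function names):

```python
from itertools import combinations

def clique_signs(N, clique):
    # Signed complete graph built from a bipartition: +1 inside a clique,
    # -1 across the two cliques (the balanced configurations of the theorem).
    signs = {}
    for i, j in combinations(range(N), 2):
        signs[(i, j)] = 1 if ((i in clique) == (j in clique)) else -1
    return signs

def cycle_balanced(signs, cycle):
    # A cycle is balanced iff the product of its link signs is +1,
    # i.e. it contains an even number of negative links.
    prod = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        prod *= signs[(min(a, b), max(a, b))]
    return prod == 1
```

Any cycle must cross between the two cliques an even number of times, which is why the bipartite construction is balanced for every $k$ at once.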
In the case of even values of $k$ the latter result is still valid if all the individuals of one subgroup are enemies, while two individuals belonging to different subgroups are friends. It should be noticed that one of the two cliques may be empty, and therefore the configuration of the paradise (where all the individuals are friends) is also included in this result, as well as, for the case of even values of $k$, the hell with all individuals being enemies. In the following we will combine our former results about the stationary states (section \ref{stationary}) with the notion of frozen configurations, in order to predict the probability of finding a particular balanced configuration and the time needed for freezing our unconstrained dynamical process. For clarity, we again analyze the cases of odd and even values of $k$ separately. \subsubsection{Freezing time for odd values of $k$} \label{time_odd} Let $0 \leq N_1 \leq N$ be the size of one of the two cliques. The other clique will therefore be of size $N-N_1$. In such a frozen configuration the total numbers of positive and negative links are related to $N_1$ and $N$ by \begin{equation} L^+ = \frac{N_1\left(N_1-1\right)}{2}+\frac{\left(N-N_1\right)\left(N-N_1-1\right)}{2} \label{eq:positive_frozen} \end{equation} and \begin{equation} L^- = N_1\left(N-N_1\right) \label{eq:negative_frozen} \end{equation} respectively. As we have seen in the former section \ref{odd}, for odd values of $k$ and $p<1/2$, all the $k$-cycles are uncorrelated during the unconstrained dynamical evolution, if we start from an initially uncorrelated configuration. In such cases, we can consider our system as a purely random process in which the values of the spins are chosen at random with a certain probability. In particular, the probability of a link being positive is given by $\rho$, the density of positive links ($1-\rho$ is the probability of a link being negative).
The probability of reaching a frozen configuration, characterized by two cliques of $N_1$ nodes and $N-N_1$ nodes, is then given by \begin{equation} P(\rho,N_1) = {N \choose N_1} \rho^{\frac{N(N-1)}{2}-N_1(N-N_1)} \left( 1 -\rho \right)^{N_1(N-N_1)} \;\;\; . \label{eq:frozen_prob} \end{equation} The binomial coefficient ${N \choose N_1}$ in Eq.(\ref{eq:frozen_prob}) counts the total number of possible bi-partitions into cliques with $N_1$ and $N-N_1$ nodes (i.e. the total number of different ways of choosing $N_1$ nodes out of $N$), and each of these bi-partitions is considered as equally likely because of the randomness of the process. We should also remark that in Eq.(\ref{eq:frozen_prob}) we omit the time dependence of $\rho$, whereas the density of positive links $\rho$ obeys the master equation \[ \frac{d\rho}{dt}= (1-\rho)^k+(2p-1) \sum_{i=1}^{(k-1)/2} {k \choose 2i-1} \rho^{k-2i+1} (1-\rho)^{2i-1} \;\;\;. \] Eq.(\ref{eq:frozen_prob}) shows that the probability of having a frozen configuration with cliques of $N_1$ and $N-N_1$ nodes is extremely small, because the number of the other equiprobable configurations with the same number of negative and positive links is equal to ${L \choose L^-} \gg {N \choose N_1}$, where $L^-$ should satisfy Eq.(\ref{eq:negative_frozen}). This allows us to ignore the transient time to reach the stationary state (we expect that the system goes to the stationary state exponentially fast for any $k$, as shown in \cite{antal} for $k=3$) and to consider the probability of obtaining a frozen configuration as \begin{equation} P\left(\rho_\infty \right) = \sum_{N_1=0}^N P\left(\rho_\infty,N_1\right) \;\;\; . \label{eq:frozen_prob_odd} \end{equation} This probability provides a good estimate for the order of magnitude of the time $\tau$ that is needed to reach a frozen configuration, because $\tau \sim 1/P\left(\rho_\infty \right)$.
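Because the exponents in Eq.(\ref{eq:frozen_prob}) grow like $N^2$, the individual probabilities underflow very quickly; evaluating them in the log domain (our own implementation choice, valid for $0<\rho<1$) shows directly how fast $P(\rho_\infty)$, and hence $1/\tau$, decays with $N$:

```python
from math import comb, log, exp

def log_P_frozen(rho, N, N1):
    # log of Eq. (frozen_prob); requires 0 < rho < 1.
    L = N * (N - 1) // 2
    Lminus = N1 * (N - N1)          # negative links, Eq. (negative_frozen)
    Lplus = L - Lminus              # positive links, Eq. (positive_frozen)
    return log(comb(N, N1)) + Lplus * log(rho) + Lminus * log(1.0 - rho)

def P_frozen_total(rho, N):
    # Eq. (frozen_prob_odd): sum over all clique sizes N1.
    return sum(exp(log_P_frozen(rho, N, N1)) for N1 in range(N + 1))
```

The freezing time can then be estimated as $\tau \sim 1/P(\rho_\infty)$, which indeed grows almost exponentially with $L\sim N^2$.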
Unfortunately this estimate reveals that the time needed for freezing the system becomes very large already for small sizes $N$ (i.e. $\tau$ increases almost exponentially as a function of $L\sim N^2$). This means that it is practically impossible to verify this estimate in numerical simulations. \\ \vspace{5pt} \\ At the transition, for the dynamical parameter $p=1/2$, we can follow the same procedure as used by Antal \emph{et al.} \cite{antal}. The procedure is based on calculating the time it takes until a fluctuation in the number of negative links reaches the same order of magnitude as the average number of negative links. In this case the system happens to reach the frozen configuration of the paradise due to a fluctuation. The number of unfriendly links $L^- \equiv A(t)$ can be written in the canonical form \cite{kampen} \begin{equation} A(t)=La(t)+\sqrt{L}\eta(t)\;\;\;, \label{qwe} \end{equation} where $a(t)$ is the deterministic part and $\eta(t)$ is a stochastic variable such that $\langle\eta\rangle=0$. Let us consider the elementary processes \begin{equation} A\longrightarrow\left\{\begin{array}{ll} A-1 & \textrm{ , rate } \quad M_k \\ A-1 & \textrm{ , rate } \quad p\sum_{i=1}^{(k-1)/2}M_{2i-1} \\ A+1 & \textrm{ , rate } \quad (1-p)\sum_{i=1}^{(k-1)/2}M_{2i-1} \end{array} \right. \label{elpr1} \end{equation} and therefore \begin{equation} A^2\longrightarrow\left\{\begin{array}{ll} A^2-2A+1 & \textrm{ , rate} \quad M_k \\ A^2-2A+1 & \textrm{ , rate} \quad p\sum_{i=1}^{(k-1)/2}M_{2i-1} \\ A^2+2A+1 & \textrm{ , rate} \quad (1-p)\sum_{i=1}^{(k-1)/2}M_{2i-1} \end{array} \right. \;.
\label{elpr2} \end{equation} We can then write the following equations for the mean values of $A$ and $A^2$: \[ \frac{d\langle A\rangle}{dt}=-\langle M_k\rangle+(1-2p)\sum_{i=1}^{(k-1)/2}\langle M_{2i-1}\rangle \] and \[ \begin{array}{ll} \frac{d\langle A^2\rangle}{dt}= & \langle(1-2A)M_k\rangle+ \\ & +p\left\langle(1-2A)\sum_{i=1}^{(k-1)/2}M_{2i-1}\right\rangle+ \\ & +(1-p)\left\langle(1+2A)\sum_{i=1}^{(k-1)/2}M_{2i-1}\right\rangle \end{array} \;\;\;. \] For $p=1/2$ we obtain \begin{equation} \frac{d\langle A\rangle}{dt}=-\langle M_k\rangle \label{fluc3} \end{equation} and \[ \frac{d\langle A^2\rangle}{dt}=\langle M_k\rangle+\sum_{i=1}^{(k-1)/2} \langle M_{2i-1}\rangle-2\langle AM_k\rangle \;\;\;. \] Since $\langle A\rangle\sim a$ and $\langle M_k\rangle\sim a^k$, we get from Eq.(\ref{fluc3}) \begin{equation} \frac{da}{dt}=-a^k \;\;\; , \label{fluc4B} \end{equation} from which \begin{equation} a(t)\sim t^{-\frac{1}{k-1}}\;\;\;. \label{fluc5} \end{equation} On the other hand, considering that $d\langle A\rangle^2/dt=2\langle A\rangle\cdot d\langle A\rangle/dt$ and, by definition, $\sigma=\langle A^2\rangle-\langle A\rangle^2=\langle\eta^2\rangle$, we have \begin{equation} \frac{d\sigma}{dt}=\langle M_k\rangle+\sum_{i=1}^{(k-1)/2} \langle M_{2i-1}\rangle-2(\langle AM_k\rangle-\langle A\rangle\langle M_k\rangle)\;\;\;. \label{fluc6} \end{equation} Moreover we can write \[ \begin{array}{rl} \langle AM_k\rangle-\langle A\rangle\langle M_k\rangle= &\langle(La+\sqrt{L}\eta)M_k\rangle-La\langle M_k\rangle= \\ = & \sqrt{L}\langle\eta M_k\rangle \end{array}\;\;\;. \] It is easy to see that $\langle\eta M_k\rangle\sim\langle \eta A^k\rangle=\langle\eta(La+\sqrt{L}\eta)^k\rangle$, so that \begin{equation} \begin{array}{ll} \langle\eta M_k\rangle\sim & \langle\eta\cdot(L^ka^k+kL^{k-1/2}a^{k-1} \eta+\dots+L^{k/2}\eta^k)\rangle= \\ & = kL^{k-1/2}a^{k-1}\langle\eta^2\rangle+O(\langle\eta^3\rangle)\;\;\;.
\end{array} \label{fluc8} \end{equation} Dividing Eq.(\ref{fluc6}) by Eq.(\ref{fluc4B}) and using Eq.(\ref{fluc8}), we get \begin{equation} \frac{d\sigma}{da}=-\left[2ka^{k-1}\sigma-\sum_{i=1}^{(k+1)/2} {k\choose 2i-1}a^{2i-1}(1-a)^{k-2i+1}\right]. \label{fluc9} \end{equation} Here we have taken into account that \begin{equation} \langle M_j\rangle\sim{k\choose j}a^j(1-a)^{k-j}\;\;\;. \label{eq:a} \end{equation} It is straightforward to find the solution of Eq.(\ref{fluc9}) as \[ \sigma(a)=Ca^{2k}+\frac{\gamma_k}{a}+\dots+\frac{\gamma_0}{a^{k-2}}\;\;\;, \] with $C$ and $\gamma_j$ suitable constants. From Eq.(\ref{fluc5}), for $t\to \infty$ we have \[ \sigma\sim a^{-(k-2)}\sim t^{\frac{k-2}{k-1}}\;\;\; . \] Since $\eta\sim\sqrt{\sigma}$, we finally obtain \[ \eta\sim t^{\frac{k-2}{2(k-1)}}\;\;\;. \] In general, the system will reach the frozen state of the paradise when the fluctuations of the number of negative links become of the same order as its mean value. (Note that in this case the mean-field approach is no longer valid.) Then, in order to find the freezing time $\tau$, we just have to set the two terms on the right-hand side of Eq.(\ref{qwe}) equal: \begin{equation} La(\tau)\sim\sqrt{L}\eta(\tau)\;\;\;. \label{eq:condition} \end{equation} Since $L\sim N^2$, we get a power-law behavior \begin{equation} \tau\sim N^\beta \label{eq:freezing_time_odd} \end{equation} with exponent $\beta$ given as a function of $k$ by \begin{equation} \beta=2\frac{k-1}{k}\;\;\;. \label{fluc11} \end{equation} It is worth noticing that in the limit $k \to \infty$ we obtain $\beta=2$, which is the same result as in the case of even values of $k$, as we shall see soon. The analytical results of this subsection are well confirmed by simulations, cf. Figure \ref{fig:time_odd}.
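The exponent in Eq.(\ref{fluc11}) follows from pure power counting, which can be mechanised with exact rational arithmetic. A sketch of the matching condition $La(\tau)\sim\sqrt{L}\eta(\tau)$ (our own variable names):

```python
from fractions import Fraction

def beta(k):
    # Freezing-time exponent from matching L*a(tau) ~ sqrt(L)*eta(tau).
    a_exp = Fraction(-1, k - 1)           # a(t) ~ t^{-1/(k-1)}, Eq. (fluc5)
    eta_exp = Fraction(k - 2, 2 * (k - 1))  # eta(t) ~ t^{(k-2)/(2(k-1))}
    # L^{1/2} = tau^{eta_exp - a_exp}  =>  tau = L^{1/(2*(eta_exp - a_exp))}
    tau_in_L = Fraction(1, 2) / (eta_exp - a_exp)
    return 2 * tau_in_L                   # L ~ N^2, so beta doubles the L-exponent
```

The result is $\beta=2(k-1)/k$, approaching $\beta=2$ as $k\to\infty$.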
There we study numerically the freezing time $\tau$ as a function of the system size $N$ for different odd values of $k$ [ $k=3$ (black circles) , $k=5$ (red squares) , $k=7$ (blue diamonds) , $k=9$ (violet triangles) and $k=15$ (orange crosses) ]. The freezing time is measured until all links have positive sign and the paradise is reached; other frozen configurations are too unlikely to be realized. Each point stands for the average value over a different number of realizations of the dynamical system [ $100$ realizations for sizes $N \leq 64$ , $50$ realizations for $64<N\leq 256$ and $10$ realizations for $N>256$ ], where the initial configuration is always chosen as an antagonistic society (all the links being negative, so that $\rho_0=0$) to reduce the statistical error. The standard deviations around the averages have sizes comparable with the symbol sizes. The full lines stand for power laws with exponents given by Eq.(\ref{fluc11}). They fit the numerical measurements perfectly. \begin{figure} \includegraphics*[width=0.47\textwidth]{time_odd} \caption{(Color online) Numerical results (full dots) for the freezing time $\tau$ as a function of the system size $N$ and for various $k$ [ $k=3$ (black circles) , $k=5$ (red squares) , $k=7$ (blue diamonds) , $k=9$ (violet triangles) and $k=15$ (orange crosses) ]. Each point is given by the average value over several realizations [ $100$ realizations for sizes $N \leq 64$ , $50$ realizations for $64<N\leq 256$ and $10$ realizations for $N>256$ ]. Moreover, as initial configuration of each realization the links are chosen all negative ($\rho_0=0$, antagonistic society) in order to reduce the statistical error (the standard deviation is comparable with the symbol size) caused by the small number of realizations at larger sizes of the system. The full lines have slope $2(k-1)/k$, as expected from Eq.(\ref{fluc11}).
The inset shows the numerical results for the freezing time $\tau$, for different values of $k$ (the same as in the main plot), as a function of the system size $N$ and for $p=3/4$. Each point of the inset is given by the average over $10^3$ different realizations with initial antagonistic society.} \label{fig:time_odd} \end{figure} \vspace{1cm} For $p>1/2$ the freezing time $\tau$ scales as \begin{equation} \tau \sim \ln{N}\;\;\;. \label{eq:time_pbigger} \end{equation} The derivation would be the same as in the paper of Antal \emph{et al.} \cite{antal}. It should be noticed that for $p>1/2$ the paradise is reached faster as $k$ increases. For simplicity let $p=1$ and imagine that the system is in the configuration closest to the paradise, for which only one link in the system has negative sign. This link belongs to $R = (N-2)!/(N-k)!$ different $k$-cycles. At each update event we select one $k$-cycle at random out of $M = {N \choose k}$ total $k$-cycles. This way we have to wait a number of update events $E \sim M/R$ until the paradise is reached, which leads to a freezing time $\tau \sim E/L$, with $L$ the total number of links (which is independent of $k$), so that \begin{equation} \tau \sim \frac{1}{k!}\;\;\;. \label{eq:time_biggerp} \end{equation} For values of $1/2<p<1$ the $k$-dependence of $\tau$ should be weaker than the one in Eq.(\ref{eq:time_biggerp}), but $\tau$ should still be a decreasing function of $k$. The inset of Figure \ref{fig:time_odd} shows the numerical results obtained for $p=3/4$ as a function of the size of the system $N$. The freezing time $\tau$ is measured for different values of $k$. We plot the average value over $10^3$ different realizations with initial condition $\rho_0=0$. \subsubsection{Freezing time for even values of $k$} \label{sec:time_even} In the case of even values of $k$ and $p=1/2$, the mean-field equation for the density of positive links [ Eq.(\ref{mfr}) ] reduces to $d\rho/dt=0$.
Therefore, the density of friendly links, $\rho$, should be constant in time for an infinitely large system. In finite-size systems the dynamics is subject to non-negligible fluctuations. This allows us to understand the scaling of the freezing time $\tau$ with the system size. The order of the fluctuations is $\sqrt{L}$, because the process is completely random, as we have seen for the case of odd values of $k$ and $p<1/2$. In contrast to the latter case, for even values of $k$ and $p=1/2$ the system has no tendency to go to a fixed point determined by $p$, because $d\rho/dt=0$. We can view the dynamical system as a Markov chain, with discrete steps in time and state space, for which the transition probability for passing from a state with $L^-(t-1)$ negative links at time $t-1$ to a state with $L^-(t)$ negative links at time $t$ is given by \begin{equation} \begin{array}{l} P\left[ \; L^-(t)\; | \; L^-(t-1) \; \right] = \\ = {L \choose L^-(t)} \left(\frac{L-L^-(t-1)}{L} \right)^{L-L^-(t)} \left(\frac{L^-(t-1)}{L} \right)^{L^-(t)}\;\;\;. \end{array} \label{eq:markov} \end{equation} Thus the probability of having $L^-(t)$ negative links at time $t$ is just a binomial distribution, where the probability of a single negative link is given by $\frac{L^-(t-1)}{L}$, the density of negative links at time $t-1$. This reflects both the randomness of the placement of the negative links and the absence of a particular fixed point dependent on $p$. The Markov process with transition probability given by Eq.(\ref{eq:markov}) is known as the Wright-Fisher model \cite{wright} from the context of biology. The Wright-Fisher model is a simple stochastic model for the reproduction of diploid organisms (diploid means that each organism has two genes, here named ``$-$'' and ``$+$''). It was proposed independently by R.A. Fisher and S. Wright at the beginning of the thirties \cite{wright}.
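Two defining properties of the kernel (\ref{eq:markov}) — normalisation and conservation of the mean number of negative links — can be verified directly. A minimal sketch (our own function names):

```python
from math import comb

def markov_kernel(L, Lm_prev):
    # Transition probabilities of Eq. (markov) for all final states L^-(t),
    # given L^-(t-1) = Lm_prev negative links out of L.
    q = Lm_prev / L
    return [comb(L, Lm) * (1.0 - q) ** (L - Lm) * q ** Lm for Lm in range(L + 1)]

probs = markov_kernel(20, 7)
norm = sum(probs)
mean = sum(Lm * p for Lm, p in enumerate(probs))
```

The conserved mean is the martingale property behind $d\rho/dt=0$ at $p=1/2$: on average the chain stays where it is, and only fluctuations drive it to absorption.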
The population size of organisms is fixed and equal to $L/2$, so that the total number of genes is $L$. Each organism lives only for one generation and dies after the offspring are made. Each offspring receives two genes, each selected with probability $1/2$ from the two genes of a parent, with the two parents randomly selected from the population of the former generation. Now let us assume that there is a random initial configuration of positive and negative genes with a slight surplus of negative genes. The offspring generation selects its genes randomly from this pool and provides the pool for the next offspring generation. Since the pools never get refreshed by a new random configuration, the initial surplus of negative links gets amplified in each offspring generation, until the whole population of genes is ``negative''. Actually the solution of the Wright-Fisher model is quite simple. The process always converges to a final state with $L^-=0$ [$L^+=L$] or $L^-=L$ [$L^+=0$], corresponding to our paradise and hell solutions for even values of $k$. The final state reached depends on the initial density of friendly links $\rho_0$: over several realizations of the same process, the system ends up in $L^-=0$ with probability $\rho_0$ and in $L^-=L$ with probability $1-\rho_0$, i.e. \[ P\left(L^-\right) = \rho_0\, \delta_{L^-,0} + \left(1-\rho_0\right) \delta_{L^-,L}\;\;\;, \] where $\delta_{x,y}$ denotes the Kronecker delta. Furthermore, on average, the number of negative links decays exponentially fast to one of the two extremal values, \[ \langle L^- (t) \rangle \simeq L \left\{ \begin{array}{l} e^{-t/L} \\ 1 - e^{-t/L} \end{array} \right. \;\;\;, \] with typical decay time \begin{equation} \tau \sim L \sim N^2\;\;\;. \label{eq:freezing_time_even} \end{equation} This result is perfectly reproduced by the numerical data plotted in Figure \ref{fig:time_even}.
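The Wright-Fisher chain of Eq.(\ref{eq:markov}) amounts to a very short simulation: each time step redraws all $L$ links independently, with the probability of a negative link given by the current negative fraction. A minimal sketch (our own code, with a fixed seed for reproducibility):

```python
import random

def wright_fisher(L, rho0, seed=1):
    # Markov chain of Eq. (markov): resample all L links each step with
    # P(negative) equal to the current density of negative links;
    # run until absorption at L^- = 0 or L^- = L.
    rng = random.Random(seed)
    Lm = round((1.0 - rho0) * L)   # initial number of negative links
    t = 0
    while 0 < Lm < L:
        p_neg = Lm / L
        Lm = sum(rng.random() < p_neg for _ in range(L))
        t += 1
    return Lm, t

final_Lm, absorption_time = wright_fisher(30, 0.5)
```

Averaging the absorption time over many seeds reproduces the $\tau\sim L\sim N^2$ scaling, and the fraction of runs absorbed at $L^-=0$ approaches $\rho_0$.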
The main plot shows the average time needed to reach a balanced configuration as a function of the size of the system $N$ and for different values of $k$ [ $k=4$ (black circles) , $k=6$ (red squares) , $k=8$ (blue diamonds) and $k=12$ (violet crosses) ]. The averages are performed over different numbers of realizations depending on the size $N$ [ $1000$ realizations for sizes $N \leq 128$ , $500$ realizations for $128<N\leq 384$ , $50$ realizations for $N=384$ and $N=512$, and $10$ realizations for $N=1024$ ]. The dashed line in Figure \ref{fig:time_even} has, in the log-log plane, a slope equal to $2$; all numerical data fit this line very well. Furthermore it should be noticed that there is no $k$-dependence of the freezing time $\tau$, as described by Eq.(\ref{eq:markov}). This is reflected by the fact that $\tau$ is the same for all the values of $k$ considered in the numerical measurements. \begin{figure} \includegraphics*[width=0.47\textwidth]{time_even} \caption{(Color online) Numerical results for the freezing time $\tau$ as a function of the system size $N$, for various even values of $k$ [ $k=4$ (black circles) , $k=6$ (red squares) , $k=8$ (blue diamonds) and $k=12$ (violet crosses) ] and for $p=1/2$. Each point is given by the average value over several realizations [ $100$ realizations for sizes $N \leq 64$ , $50$ realizations for $64<N\leq 256$ and $10$ realizations for $N>256$ ]. Moreover, at the beginning of each realization the links are chosen to be positive or negative with the same probability ($\rho_0=0.5$). The dashed line has, in the log-log plane, slope $2$, as expected from Eq.(\ref{eq:freezing_time_even}). The inset A) shows the numerical results for the freezing time $\tau$, for different values of $k$ (the same as in the main plot), as a function of the system size $N$ and for $p=3/4$. Each point of the inset is given by the average over $10^3$ different realizations with random initial conditions.
The full lines are all proportional to $\ln{N}$, as expected. The inset B) shows the non-normalized probability $P(N_1)$ as a function of the ratio $N_1/N$ and for different values of the system size $N$ [ $N=6$ (full line), $N=8$ (dashed line) and $N=10$ (dotted line) ]. As one can see, $P(N_1)$ is extremely small for values of $0 < N_1 < N$ already for $N=10$.} \label{fig:time_even} \end{figure} Nevertheless there is a difference between our model and the Wright-Fisher model that should be noticed. During the evolution of our model there is the possibility that the system freezes in a configuration different from the paradise ($L^-=0$) or the hell ($L^-=L$). The probability of this event is still given by Eq.(\ref{eq:frozen_prob}), with $\rho=L^+(N_1)/L$ imposed by the stationarity condition [ $L^+(N_1)$ is given by Eq.(\ref{eq:positive_frozen}) ]. In this way Eq.(\ref{eq:frozen_prob}) gives us $P(N_1)$, the non-normalized probability for the system to freeze in a balanced configuration with two cliques of $N_1$ and $N-N_1$ nodes, respectively. It is straightforward to see that $P(N_1)=1$ for $N_1=0$ or for $N_1=N$, so that the paradise has a non-vanishing probability of being a frozen configuration. In contrast, for any other value $0 < N_1 < N$, $P(N_1)$ decreases to zero faster than $1/N$. This means that for values of $N$ large enough it is appropriate to forget about the intermediate frozen configurations and to consider the features of our model as being very well approximated by those of the Wright-Fisher model. In the inset B) of Figure \ref{fig:time_even} the function $P(N_1)$ is plotted for different values of $N$ [ $N=6$ (full line), $N=8$ (dashed line) and $N=10$ (dotted line) ], with $N_1$ treated as a continuous variable for clarity of the figure (we approximate the factorials with Stirling's formula). Obviously $P(N_1)$ vanishes for $0 < N_1 < N$ as $N$ increases, already for reasonably small values of $N$.
\\ The dependence $\tau \sim N^2$ can also be obtained using the same procedure as in section \ref{time_odd} for the case of odd values of $k$ and $p=1/2$. In particular, for even values of $k$ we can rewrite Eq.(\ref{elpr1}) according to \begin{equation} A\longrightarrow\left\{\begin{array}{ll} A-1 & \textrm{ , rate}\quad p\sum_{i=1}^{k/2}M_{2i-1} \\ A+1 & \textrm{ , rate}\quad (1-p)\sum_{i=1}^{k/2}M_{2i-1} \end{array} \right. \label{elpr1_even} \end{equation} and therefore Eq.(\ref{elpr2}) according to \begin{equation} A^2\longrightarrow\left\{\begin{array}{ll} A^2-2A+1 & \textrm{ , rate}\quad p\sum_{i=1}^{k/2}M_{2i-1} \\ A^2+2A+1 & \textrm{ , rate}\quad (1-p)\sum_{i=1}^{k/2}M_{2i-1} \end{array} \right. \;\;\;. \label{elpr2_even} \end{equation} For $p=1/2$ we have \begin{equation} \frac{d\langle A\rangle}{dt}=0 \label{fluq1_even} \end{equation} and \[ \frac{d\langle A^2\rangle}{dt}=\sum_{i=1}^{k/2} \langle M_{2i-1} \rangle\;\;\;. \] Eq.(\ref{fluq1_even}) tells us that $a\sim \langle A\rangle=$const, so that, recalling Eq.(\ref{eq:a}), we have \[ \eta\sim \sqrt{t}\;\;\;. \] As in the previous case, for determining the freezing time we impose the condition that the average value is of the same order as the fluctuations [Eq.(\ref{eq:condition})], and, for $L\sim N^2$, we obtain again Eq.(\ref{eq:freezing_time_even}). \\ \vspace{5pt} \\ For even values of $k$ and for $p \neq 1/2$, the time $\tau$ needed for reaching a frozen configuration scales as $\tau \sim \ln{N}$. In the inset of Figure \ref{fig:time_even} numerical estimates of $\tau$ for $p=3/4$ and different values of $k$ demonstrate this dependence on the size $N$ of the system. Each point is obtained from averaging over $10^3$ different simulations with the same initial conditions $\rho_0=0.5$. Again, as in the case of odd $k$ and $p>1/2$, $\tau$ is a decreasing function of $k$, and the same argument used for obtaining Eq.(\ref{eq:time_biggerp}) can be applied here.
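The counting behind Eq.(\ref{eq:time_biggerp}) is easy to reproduce for $p=1$ (a sketch; the function name is ours): one residual negative link belongs to $R$ of the $M$ candidate $k$-cycles, so one waits $E\sim M/R$ update events, and $\tau\sim E/L$.

```python
from math import comb, factorial

def freezing_time_estimate(N, k):
    # One negative link left: it belongs to R = (N-2)!/(N-k)! k-cycles
    # out of M = C(N, k) in total, so tau ~ (M/R)/L with L the number of links.
    R = factorial(N - 2) // factorial(N - k)
    M = comb(N, k)
    L = N * (N - 1) // 2
    return (M / R) / L
```

Indeed $M/R = N(N-1)/k!$, so the estimate reduces to $\tau\sim 2/k!$, independent of $N$ and decreasing rapidly with $k$.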
\section{Diluted Networks}\label{sec:diluted} In this section we extend the former results, valid in the case of fully connected networks, to diluted networks. Real networks, apart from very small ones, cannot be represented by complete graphs. The situation in which all individuals know each other is in practice very unlikely. As mentioned in the introduction, links may also be missing because individuals neither like nor dislike each other but are simply indifferent. In the following we analyze the features of dynamical systems that still follow the unconstrained $k$-cycle dynamics, but live on topologies given by diluted networks. \\ For diluted networks there is an interesting connection to another set of problems that leads to a new interpretation of the social balance problem in terms of certain $k$-SAT ($k$S) problems (SAT stands for satisfiability) \cite{cook,mezard,mezard2}. In such a problem a formula $F$ consists of $Q$ logical clauses $\left\{C_q\right\}_{q=1,\ldots,Q}$ defined over a set of $B$ Boolean variables $\left\{x_i\right\}_{i=1,\ldots,B}$, each taking one of the two values $0=$\emph{FALSE} or $1=$\emph{TRUE}. Every clause contains $k$ randomly chosen Boolean variables that are connected by logical $OR$ operations ($\bigvee$); each variable appears negated with a certain probability. In the formula $F$, all clauses are connected by logical $AND$ operations ($\bigwedge$), \[ F=\bigwedge_{q=1}^Q C_q\;\;\;, \] so that all clauses $C_q$ must be simultaneously satisfied in order to satisfy the formula $F$. A particular formulation of the $k$S problem is the $k$-XOR-SAT ($k$XS) problem \cite{weigt,semerjian,mezard2,cocco}, in which each clause $C_q$ is a parity check of the kind \begin{equation} C_q= x_{i_1}^q+x_{i_2}^q+ \ldots + x_{i_k}^q \;\;\; \textrm{mod }2\;\;\;, \label{eq:xor} \end{equation} so that $C_q$ is \emph{TRUE} if the total number of true variables defining the clause is odd, and \emph{FALSE} otherwise.
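For concreteness, the parity check of Eq.(\ref{eq:xor}) can be evaluated in a few lines (an illustrative sketch; the helper name is ours and not part of any SAT library):

```python
def xor_clause_satisfied(values):
    # Eq. (eq:xor): the clause C_q is TRUE iff an odd number of the k
    # Boolean variables (0 = FALSE, 1 = TRUE) entering it is TRUE.
    return sum(values) % 2 == 1
```

For a 3-XOR clause, one or three \emph{TRUE} variables satisfy it, while zero or two do not.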
It is straightforward to map the $k$XS problem onto our former model for the case of odd values of $k$. Indeed, each clause $C_q$ corresponds to a $k$-cycle [$Q \equiv M$] and each variable $x_v$ to a link $(i,j)$ [$B \equiv L$], with the correspondence $s_{i,j}=1$ for $x_v=1$ and $s_{i,j}=-1$ for $x_v=0$. For the case of even values of $k$, one can use the same mapping but consider as clause, instead of $C_q$ in Eq.(\ref{eq:xor}), its negation $\overline{C_q}$. In this way, a clause $C_q$ with an odd number of true variables is satisfied, as required for balance for odd values of $k$, while the negated clause $\overline{C_q}$ is satisfied when this number is even, as required for balance for even values of $k$. \\ Moreover, a typical algorithm for finding a solution of the $k$S problem is the so-called Random-Walk SAT (RWS). The procedure is the following \cite{weigt,semerjian}: select one unsatisfied clause $C_q$ at random; then invert one randomly chosen variable $x_{i^*}^q$ among its $k$ variables; repeat this procedure until no unsatisfied clauses are left in the problem. Each update is counted as $1/B$ units of time. As one can easily see, this algorithm is very similar to our unconstrained dynamics, apart from two aspects. First, in our unconstrained dynamics we use the dynamical propensity parameter $p$, which is absent in the RWS. Second, in our unconstrained dynamics we also count the choice of a balanced $k$-cycle as an update event, although it does not change the system at all. For this reason, the literal application of the original algorithm of the unconstrained dynamics has very high computational costs when applied to diluted networks. Apart from the parameter $p$, we can therefore use the same RWS algorithm for our unconstrained dynamics of $k$-cycles. This algorithm is more efficient because at each update event it selects only imbalanced $k$-cycles, which are the only ones that actually need to be updated.
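The RWS loop just described can be sketched as follows for the $k$XS case (a minimal illustration, not the optimized code used in our simulations; the clause layout and names are ours):

```python
import random

def random_walk_sat(clauses, B, max_updates=10**6, seed=0):
    # Random-Walk SAT for k-XOR-SAT: `clauses` is a list of tuples of
    # variable indices; a clause is satisfied iff an odd number of its
    # variables is TRUE. Each update costs 1/B units of time.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(B)]  # random initial assignment
    t = 0.0
    for _ in range(max_updates):
        unsat = [c for c in clauses if sum(x[i] for i in c) % 2 == 0]
        if not unsat:
            return x, t                        # no unsatisfied clause left
        clause = rng.choice(unsat)             # random unsatisfied clause
        x[rng.choice(clause)] ^= 1             # flip one random variable
        t += 1.0 / B
    return None, t
```

Unlike the unconstrained dynamics, a balanced (satisfied) $k$-cycle is never selected, which is what makes the algorithm efficient on diluted networks.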
In the case of an all-to-all topology there are so many triads that a preordering according to the property of being balanced or not is too time consuming, so that in this case our former version is more appropriate. In order to count the time as in our original framework of the unconstrained dynamics, we impose that, at the $n$-th update event, the time increases as \begin{equation} t_n\; \; = \;\; t_{n-1}\;\;+ \;\;\frac{1}{L}\; \cdot \; \frac{\alpha}{\alpha^{(n-1)}_{u}}\;\;\;. \label{eq:time_scaling} \end{equation} Here $\alpha=M/L$ stands for the ratio between the total number of $k$-cycles of the system (i.e., the total number of clauses) and the total number of links (i.e., the total number of variables). The parameter $\alpha$ is called the ``dilution'' parameter; it can take all values in the interval $\left[0 , {L \choose k}/L\right]$. $\alpha^{(n-1)}_{u}=\sum_{i=1}^{(k+1)/2}M_{2i-1}/L$ is the ratio of the total number of imbalanced (or ``unsatisfied'') $k$-cycles to the total number of links, computed just before the $n$-th update event is implemented. The ratio $\alpha/\alpha^{(n-1)}_{u}$ therefore gives the inverse of the probability of finding an imbalanced $k$-cycle, out of all $k$-cycles, balanced or imbalanced, at the $n$-th update event. This is a good approximation to the time defined in the original unconstrained dynamics. It should be noticed that this algorithm runs faster in units of computational time, but the simulation time should be counted in the same units as defined for the unconstrained dynamics introduced in section \ref{model}. \\ The usual performance of the RWS is fully determined by the dilution parameter $\alpha$. For $\alpha \leq \alpha_d$ the RWS always finds a solution of the $k$S problem within a time that scales linearly with the number of variables $L$. In particular, for the $k$XS problem $\alpha_d=1/k$.
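In code, the reweighted clock of Eq.(\ref{eq:time_scaling}) is a one-line update (sketch; the function name is ours):

```python
def advance_time(t_prev, L, alpha, alpha_u):
    # Eq. (eq:time_scaling): each update advances the clock by 1/L times
    # alpha/alpha_u, the inverse probability of drawing an imbalanced
    # k-cycle among all k-cycles, restoring the time units of the
    # original unconstrained dynamics.
    return t_prev + (1.0 / L) * (alpha / alpha_u)
```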
For $\alpha_d < \alpha < \alpha_c$ the RWS is still able to find a solution of the $k$S problem, but the time needed to find it grows exponentially with the number of variables $L$. For the case of the $3$XS problem $\alpha_c \simeq 0.918$. $\alpha_d$ is the value of the dilution parameter at which the ``dynamical'' transition occurs, which depends on the dynamics of the algorithm, while $\alpha_c$ represents the transition between the SAT and the UNSAT regions: for values of $\alpha \geq \alpha_c$ the RWS is no longer able to find any solution of the $k$S problem, and in fact no solution with zero frustration exists for the random $k$S problem in this region. Furthermore, there is a third critical threshold $\alpha_s$, with $\alpha_d < \alpha_s <\alpha_c$. For values of $\alpha < \alpha_s$ all solutions of the $k$S problem found by the RWS are located in one large cluster of solutions, and the averaged and normalized Hamming distance inside this cluster is $\langle d \rangle \simeq 1/2$. For $\alpha > \alpha_s$ the solution space splits into a number of small clusters (growing exponentially with the number of variables $L$), for which the averaged and normalized Hamming distance inside each cluster is $\langle d \rangle \simeq 0.14$, while the averaged and normalized Hamming distance between two solutions lying in different clusters is still $\langle d \rangle \simeq 1/2$ \cite{cocco}. For the special case of the $3$XS problem, $\alpha_s$ was found to be $\alpha_s\simeq 0.818$. \\ In order to connect the problem of social balance on diluted networks and the $k$XS problem on a diluted system we shall first translate the parameters into each other. First of all we need to calculate the ratio $\alpha=M/L$ between the total number of $k$-cycles of the network and the total number of links $L$ (section \ref{sec:ratio}).
Next we consider the standard RWS applied to the $k$XS problem, taking care to compute the time in the right way, as given by the rule (\ref{eq:time_scaling}), and introducing the dynamical parameter $p$ (section \ref{sec:RWS}). In particular, we focus on the ``dynamical'' transition at $\alpha_d$ (section \ref{sec:alphad}) and the transition in solution space concerning the clustering properties of the solutions at $\alpha_s$ (section \ref{sec:alphas}). The dynamical parameter $p$, formerly called the propensity parameter, leads to a critical value $p_c$ above which it is always possible to find a solution within a time that grows at most linearly with the system size (section \ref{sec:pc}). Finally, in section \ref{sec:alpham} we decrease the dilution, i.e., increase $\alpha$ to $\alpha_m$, such that for $\alpha \geq \alpha_m$ the system is fully described by the mean-field equations of the former sections. We focus on the simplest case $k=3$, but all results presented here for $k=3$ should be qualitatively valid for any value of $k \geq 3$. \subsection{Ratio $\alpha$ for random networks} \label{sec:ratio} Let us first consider Erd\H{o}s-R\'enyi networks \cite{erdos} as a diluted version of the all-to-all topology that we have studied so far. An Erd\H{o}s-R\'enyi network, or random network, is a network in which each of the ${N \choose 2}$ different pairs of nodes is connected with probability $w$. The average number of links is simply $\langle L \rangle = w {N \choose 2}$. The average number of cycles of order $k$ is given by $\langle M \rangle = w^k {N \choose k}$, so that the average ratio $\langle \alpha \rangle$ can be estimated as \begin{equation} \langle \alpha \rangle \simeq w^{k-1} \frac{2N^{k-2}}{k!} \;\;\; . \label{eq:average_ratio} \end{equation} In Figure \ref{fig:ratio} we plot the numerical results obtained for the ratio $\alpha$ as a function of the probability $w$, in the particular case of cycles of order $k=3$.
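The estimate of Eq.(\ref{eq:average_ratio}) can be checked directly against the exact binomial counts $\langle M \rangle = w^k {N \choose k}$ and $\langle L \rangle = w {N \choose 2}$ (a quick numerical sketch; the values of $N$, $w$ and $k$ are illustrative):

```python
from math import comb, factorial

def alpha_exact(N, w, k):
    # <alpha> = <M>/<L> with <M> = w^k C(N, k) and <L> = w C(N, 2)
    return w**k * comb(N, k) / (w * comb(N, 2))

def alpha_approx(N, w, k):
    # Eq. (eq:average_ratio): <alpha> ~ w^(k-1) 2 N^(k-2) / k!
    return w**(k - 1) * 2 * N**(k - 2) / factorial(k)
```

For $k=3$ the exact ratio is $w^2(N-2)/3$, which approaches the estimate $w^2 N/3$ for large $N$.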
The reported results, from bottom to top, have been obtained for values of $N=16, 32, 48, 64, 96, 128, 192$ and $256$. Each point is given by the average over $10^3$ different network realizations. These numerical results agree very well with the expectation (full lines) of Eq.(\ref{eq:average_ratio}), especially for large values of $N$ and/or small values of $w$. Furthermore, the critical values $\alpha_d =1/3$, $\alpha_s =0.818$ and $\alpha_c =0.918$ (dotted lines) are used for extrapolating the numerical results for $w_d$ (open circles), $w_s$ (open squares) and $w_c$ (gray squares), respectively [see the inset of Figure \ref{fig:ratio}]. Here $w_i \; , \; i=d,s,c$, is the value of the probability at which the ratio $\alpha_i \; , \; i=d,s,c$, is attained. As expected, they follow the rule $w_i = \sqrt{3\alpha_i/N} \; , \; i=d,s,c$, predicted by Eq.(\ref{eq:average_ratio}) for $k=3$. \begin{figure} \includegraphics*[width=0.47\textwidth]{alpha} \caption{Numerical results (full dots) for the ratio $\alpha=M/L$ between the total number of cycles $M$ of order $k=3$ and the total number of links $L$ as a function of the probability $w$ for different sizes of Erd\H{o}s-R\'enyi networks. In particular, the numerical results refer to different network sizes $N$: from bottom to top $N=16, 32, 48, 64, 96, 128, 192$ and $256$. Each point is given by the average over $10^3$ network realizations. The full lines are the predicted values given by Eq.(\ref{eq:average_ratio}), while the dotted lines denote the critical values $\alpha_d =1/3$, $\alpha_s =0.818$ and $\alpha_c =0.918$ described in detail in the text. In particular, the numerical values of the probability $w$ at which these three critical values of $\alpha$ are realized are denoted by $w_d$ (open circles), $w_s$ (open squares) and $w_c$ (gray squares), respectively; they are plotted in the inset, where the full lines are extrapolated from Eq.(\ref{eq:average_ratio}) as $w_i = \sqrt{3\alpha_i/N} \; , \; i=d,s,c$.
The two upper curves for $w_s$ and $w_c$ almost coincide.} \label{fig:ratio} \end{figure} \\ According to the isomorphism established between the $k$XS problem and social balance for $k$-cycles, from now on we will not make any distinction between the words problem and network, variable and link, $k$-clause and $k$-cycle, value and sign (or spin), false and negative (or unfriendly), true and positive (or friendly), satisfied and balanced (or unfrustrated), unsatisfied and imbalanced (or frustrated), and so on. \subsection{$p$-Random-Walk SAT } \label{sec:RWS} So far we have established the connection between the $k$XS problem and the social balance of $k$-cycles proposed in this paper. In particular, we have determined how the dilution parameter $\alpha$ is related to diluted random networks parameterized by $w$. In this section we extend the known results for the standard RWS of \cite{weigt,semerjian} to the $p$-Random-Walk SAT ($p$RWS) algorithm, that is, the RWS algorithm extended by the dynamical parameter $p$, which played the role of the propensity parameter in connection with the social balance problem.
The steps of the $p$RWS are as follows: \begin{enumerate} \item{Select randomly one frustrated clause among all frustrated clauses.} \item{Instead of randomly inverting the value of one of its $k$ variables, as for an update in the case of the RWS, apply the following procedure: \begin{itemize} \item{if the clause contains both true and false variables, select with probability $p$ one of its false variables, randomly chosen among all the false variables belonging to the clause, and flip it to the true value;} \item{if the clause contains both true and false variables, select with probability $1-p$ one of its true variables, randomly chosen among all the true variables belonging to the clause, and flip it to the false value;} \item{if the clause contains only false variables ($k$ should be odd), select with probability $1$ one of its false variables, randomly chosen among all the false variables belonging to the clause, and flip it to the true value.} \end{itemize}} \item{Go back to point 1 until no unsatisfied clauses are present in the problem.} \end{enumerate} The update rules of point 2 are the same as those used in the case of the $k$-cycle dynamics and illustrated in Figure \ref{fig:example} for the cases $k=4$ (A) and $k=5$ (B). For the special case of the $3$XS problem, the standard RWS algorithm and the $p$RWS algorithm coincide for the dynamical parameter $p=1/3$. \subsubsection{Dynamical transition at $\alpha_d$} \label{sec:alphad} The freezing time $\tau$, that is, the time needed for finding a solution of the problem, abruptly changes its behavior at the dynamical critical point $\alpha_d = 1/k$. \\ Figure \ref{fig:time_diluted} reports the numerical estimate of the freezing time $\tau$ as a function of the dilution parameter $\alpha$ and for different values of the dynamical parameter $p$ [ $p=0$ (circles) , $p=1/3$ (squares) , $p=1/2$ (diamonds) and $p=1$ (crosses) ].
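The biased flip of point 2 can be condensed into a single helper (illustrative code; the function name and data layout are ours, and the clause is assumed frustrated with at least one false variable, as is always the case for odd $k$):

```python
import random

def prws_flip(x, clause, p, rng=random):
    # Point 2 of the pRWS: with probability p flip a random false
    # variable of the frustrated clause to true, with probability 1-p
    # flip a random true variable to false; an all-false clause
    # (possible only for odd k) is handled with probability 1.
    false_vars = [i for i in clause if x[i] == 0]
    true_vars = [i for i in clause if x[i] == 1]
    if true_vars and false_vars:
        target = rng.choice(false_vars) if rng.random() < p else rng.choice(true_vars)
    else:
        target = rng.choice(false_vars)  # only false variables present
    x[target] ^= 1
```

For $p=1/3$ and 3-XOR clauses this reproduces the uniform flip of the standard RWS: a frustrated clause has either one false and two true variables (each then flipped with probability $1/3$) or three false ones.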
As one can easily see, for $p=1/3$ and $p=0$, $\tau$ drastically changes around $\alpha_d$, increasing abruptly for values of $\alpha> \alpha_d$. For $p=1/2$ and for $p=1$ this drastic change is not observed. This is understandable from the fact that both values of $p$ provide a bias towards the paradise, while $p=1/3$ corresponds to a random selection of one of the three links of a triad, as in the original RWS, and $p=0$ would favor the approach to the hell if it were a balanced state. The simulations are performed for a system with $L=10^3$ variables. Moreover, each point stands for the average over $10^2$ different networks and $10^2$ different realizations of the dynamics on such topologies. At the beginning of each simulation the variables take the value $1$ or $0$ with the same probability. The inset shows the relation between the time $\tau^*$ calculated using the standard RWS and the time $\tau$ calculated according to Eq.(\ref{eq:time_scaling}). The almost linear relation (the dashed line has a slope equal to one) between $\tau^*$ and $\tau$ means that there is no qualitative change between the two different ways of counting the time. \begin{figure} \includegraphics*[width=0.47\textwidth]{time2} \caption{Time $\tau$ for reaching a solution for a system of $L=1000$ variables as a function of the ratio $\alpha$ and for different values of the dynamical parameter $p$ [ $p=0$ (circles), $p=1/3$ (squares), $p=1/2$ (diamonds) and $p=1$ (crosses) ]. The $p$RWS performed for $p=1/3$ shows a critical behavior around $\alpha_d=1/3$: for values of $\alpha \leq \alpha_d$, $\tau$ grows almost linearly with $\alpha$, while it jumps to an exponential growth with $\alpha$ for $\alpha> \alpha_d$. The same is qualitatively true for $p=0$, but the time $\tau$ needed for reaching a solution increases more slowly with respect to the case $p=1/3$ for $\alpha > \alpha_d$. For $p=1/2$ and $p=1$ there seems to be no drastic increment of $\tau$ for $\alpha > \alpha_d$.
Moreover, the inset shows the dependence of $\tau^*$, the freezing time as calculated in the standard RWS \cite{weigt,semerjian}, on the freezing time $\tau$ calculated according to Eq.(\ref{eq:time_scaling}). The almost linear dependence of $\tau^*$ on $\tau$ (the dashed line has slope one) shows that there is no qualitative change if we describe the dynamical features of the system in terms of $\tau$ or $\tau^*$ as the time used by the simulations.} \label{fig:time_diluted} \end{figure} Following the same argument as in \cite{weigt}, we can specify for the update event at time $t$ the variation of the number of unsatisfied clauses $M_t^{(u)}$ as \[ \Delta M_t^{(u)}=-\left(k \alpha_u(t) +1\right)+k \alpha_s(t) = k \alpha - 2 k \alpha_u(t)-1\;\;\;, \] where $\alpha_s(t)=\alpha-\alpha_u(t)$ denotes the density of satisfied clauses (not to be confused with the clustering threshold $\alpha_s$). This holds because, by flipping one variable of an unsatisfied clause, all the other unsatisfied clauses which share the same variable become satisfied, while all the satisfied clauses containing that variable become unsatisfied. In the thermodynamic limit $L\to \infty$, one can write $M_t^{(u)}=L\alpha_u(t)$. Moreover, the amount of time of one update event is given by Eq.(\ref{eq:time_scaling}), so that we can write \begin{equation} \dot{\alpha}_u(t)=\frac{\alpha_u(t)}{\alpha}\left(k\alpha-2k\alpha_u(t)-1\right)\;\;\;. \label{eq:differential} \end{equation} Eq.(\ref{eq:differential}) has a stationary state (a plateau) at \begin{equation} \alpha_u=\frac{k\alpha-1}{2k}\;\;\;. \label{eq:plateu} \end{equation} Therefore, when the ratio $\alpha$ (that is, the ratio of the number of clauses over the number of variables) exceeds the critical ``dynamical'' value \begin{equation} \alpha_d = \frac{1}{k}\;\;\;, \label{eq:weigt} \end{equation} the possibility of finding a solution for the problem drastically changes. This result was already found in \cite{weigt,semerjian}.
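The approach to the plateau can be checked by integrating Eq.(\ref{eq:differential}) numerically (a simple Euler sketch; the step size and parameter values are illustrative):

```python
def integrate_alpha_u(alpha, k, alpha_u0, t_max=100.0, dt=0.01):
    # Euler integration of Eq. (eq:differential):
    # d(alpha_u)/dt = (alpha_u/alpha) * (k*alpha - 2*k*alpha_u - 1)
    alpha_u = alpha_u0
    for _ in range(int(t_max / dt)):
        alpha_u += dt * (alpha_u / alpha) * (k * alpha - 2 * k * alpha_u - 1)
        alpha_u = max(alpha_u, 0.0)  # a density cannot become negative
    return alpha_u
```

For $k=3$ and $\alpha=0.5>\alpha_d$ the density of unsatisfied clauses settles at the plateau $(k\alpha-1)/(2k)=1/12$ of Eq.(\ref{eq:plateu}), while for $\alpha=0.3<\alpha_d$ it decays to zero.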
While for values of $\alpha \leq \alpha_d$ we can always find a solution, because the plateau of Eq.(\ref{eq:plateu}) is always smaller than or equal to zero, for $\alpha>\alpha_d$ the solution is reachable only if the system performs a fluctuation large enough to reach zero from the non-zero plateau of Eq.(\ref{eq:plateu}). In Figure \ref{fig:timebehav} we report some numerical simulations for $\alpha_u$ as a function of time for different values of $p$ [ A) $p=0$ , B) $p=1/3$ , C) $p=1/2$ , D) $p=1$ ] and for different values of the dilution parameter $\alpha$ [ $\alpha=0.3$ (black, bottom) , $\alpha=0.5$ (red, middle) , $\alpha=0.85$ (blue, top) ]. The numerical results [full lines] are compared with the numerical integration of Eq.(\ref{eq:differential}) [dashed lines]. They agree very well, apart from large values of $t$ for $\alpha=0.85$ with $p=1/2$ or $p=1$. The initial configuration in all cases is that of an antagonistic society ($x_i=0 \;\;\; , \; \forall \; i=1,\ldots ,L$), while the number of variables is $L=10^4$. \begin{figure} \includegraphics*[width=0.47\textwidth]{timebeah1} \caption{(Color online) Time behavior of the ratio $\alpha_u$ of unsatisfied clauses for different values of $p$ [ A) $p=0$ , B) $p=1/3$ , C) $p=1/2$ , D) $p=1$ ] and for different values of the dilution parameter $\alpha$ [ $\alpha=0.3$ (black, bottom) , $\alpha=0.5$ (red, middle) , $\alpha=0.85$ (blue, top) ]. Numerical results of simulations [full lines] are compared with the numerical integration of Eq.(\ref{eq:differential}) [dashed lines], leading to a very good fit in all cases except for $\alpha=0.85$ with $p=1/2$ and $p=1$.
The initial configuration in all cases is that of an antagonistic society ($x_i=0 \;\;\; , \; \forall \; i=1,\ldots ,L$), while the number of variables is $L=10^4$.} \label{fig:timebehav} \end{figure} \subsubsection{Clustering of solutions at $\alpha_s$} \label{sec:alphas} In order to study the transition in the clustering structure of solutions at $\alpha_s$, we numerically determine the Hamming distance between different solutions of the same problem. More precisely, given a problem of $L$ variables and $M$ clauses, we find $T$ solutions $\left\{x_i^r \right\}_{i=1,\ldots ,L}^{r=1,\ldots , T}$ of the given problem. This means that we start $T$ times from a random initial configuration and each time perform a $p$RWS until we end up with a solution. We then compute the distance between these $T$ solutions as the normalized Hamming distance \begin{equation} \langle d \rangle = \frac{1}{L\cdot T(T-1)}\sum_{r,s=1}^T \sum_{i=1}^L \left| x_i^r - x_i^s \right| \;\;\;. \label{eq:Hamming} \end{equation} The numerical results for $L=20$ are reported in Figure \ref{fig:dist}. We average the distance over $T=10^2$ trials and over $10^2$ different problems for each value of $\alpha$. As expected, for $p=1/3$ [squares] the distance between solutions drops down around $\alpha_s$ (actually it drops down before $\alpha_s$ because of the small number of variables). For different values of $p$ [ $p=0$ (circles) , $p=1/2$ (diamonds) and $p=1$ (crosses) ], the $p$RWS is less random and $\langle d \rangle$ drops down before $\alpha_s$ (or at least before the point at which the case $p=1/3$ drops down). In particular, if we plot (as in the inset) the distance $\langle d \rangle$ as a function of $p$ and for different values of $\alpha$ [$\alpha=0.3$ (full line) , $\alpha=0.5$ (dotted line) and $\alpha=0.85$ (dashed line)], we see a clear peak of the distance $\langle d \rangle$ around $p=1/3$.
This suggests that a completely random, unbiased RWS always explores a large region of phase space and leads to a larger variety of solutions. \begin{figure} \includegraphics*[width=0.47\textwidth]{dist} \caption{Normalized Hamming distance $\langle d \rangle$ [ Eq.(\ref{eq:Hamming}) ] between solutions as a function of the ratio $\alpha$ and for different values of the dynamical parameter $p$ [ $p=0$ (circles) , $p=1/3$ (squares) , $p=1/2$ (diamonds) and $p=1$ (crosses)]. For the standard RWS ($p=1/3$) the distance drops down around the critical point $\alpha_s$. Other values of $p$ correspond to biased walks and lead to effective values of $\alpha_s$ smaller than the former one. The inset shows the dependence of $\langle d \rangle$ on the dynamical parameter $p$. As shown for different values of $\alpha$ [$\alpha=0.3$ (full line) , $\alpha=0.5$ (dotted line) and $\alpha=0.85$ (dashed line)], the peak of the distance between solutions occurs for the truly random $p$RWS, that is, for $p=1/3$. All the points here, in the main plot as well as in the inset, are obtained for a system of $L=20$ variables. Each point is obtained by averaging over $10^2$ different networks, and on each of these networks the average distance is calculated over $10^2$ solutions. At the beginning of each simulation the value of each variable is chosen to be $1$ or $0$ with equal probability.} \label{fig:dist} \end{figure} \subsubsection{SAT/UNSAT transition at $\alpha_c$} \label{sec:pc} Differently from the general $k$S problem, the $k$XS problem considered here is known to be always solvable \cite{semerjian}, and the solution corresponds to one of the balanced configurations described in section \ref{frozen} for the all-to-all topology. Nevertheless, the challenge is whether the solutions can be found by a local random algorithm like the RWS.
In the application of the RWS it can happen that the algorithm is not able to find one of these solutions in a ``finite'' time, in which case the problem is called ``unsatisfied''. The notion is made more precise in \cite{cocco}. For practical reasons, the way of estimating the critical point $\alpha_c$ that separates the SAT from the UNSAT region is related to the so-called algorithmic complexity of the RWS. Here we follow the prescription of \cite{weigt,semerjian,schoning}. Fixing $k=3$ and calling one trial a RWS with an initial random assignment of the variables followed by $3L$ update events, one needs a total number of trials $T \gg \left(4/3\right)^L$ to be ``numerically'' sure of being in the UNSAT region. In fact, if after $T$ trials no solution is found, the problem is considered ``unsatisfied''. \\ The introduction of the dynamical parameter $p$ can strongly ``improve'' the performance of the RWS. For $p \neq 1/3$ the $p$RWS updates the variables following a well-prescribed direction: the tendency is to increase the number of negative variables for $p<1/3$ and to decrease their number for $p > 1/3$. In particular, as we have seen in the former sections, for $p \geq 1/2$ the $p$RWS approaches the configuration of the paradise even for the largest value $\alpha={L \choose k}/L \gg \alpha_c$, in a time that grows as $\tau \sim L^\beta$, so that there is no UNSAT region at all if we apply the former criterion for the numerical estimate of the UNSAT region. Clearly, if the bias goes in the wrong direction, the performance gets worse. \\ In this section we briefly give a qualitative description of the SAT/UNSAT region for the $p$RWS as a function of the dynamical parameter $p$. Let us define $^+p_c$ [$^-p_c$] as the minimum [maximum] value of $p$ for which the system can be satisfied.
Given a problem with $\alpha L$ clauses, we follow the algorithm: 1) set $p=1$ [$p=0$]; 2) set an initial random configuration and apply the $p$RWS; 3) if the $p$RWS finds the solution in a number of updates less than $U\cdot L$, decrease [increase] $p$ and go back to point 2); 4) if not, set $^+p_c=p$ [$^-p_c=p$]. This procedure can be carried out up to the desired sensitivity for the numerical estimate of $^+p_c$ [$^-p_c$]. The idea of defining an upper critical value $^+p_c$ and a lower critical value $^-p_c$ for the dynamical parameter $p$ is related to the fact that for $p=1/3$ the $p$RWS has the most trouble finding the solution. Figure \ref{fig:pcritic}B and Figure \ref{fig:pcritic}C show the numerical results for $^+p_c$ and $^-p_c$ as functions of the dilution parameter $\alpha$. The number of variables is $L=10^3$. We report the results for different values of the waiting time $T=L \cdot U$ [ $U=1$ (circles) , $U=2$ (squares) , $U=3$ (diamonds) , $U=10$ (crosses) ]. Each point is averaged over $10$ different problems and $10$ different $p$RWS runs applied to each problem. Qualitatively, it is seen that for $\alpha \leq \alpha_d$ the problem is always solvable ( $^+p_c=0$ and $^-p_c=1$ ), while for $\alpha > \alpha_d$ one needs $p \neq 1/3$ to solve the problem. Of course, the numerical values of $^+p_c$ and $^-p_c$ depend on the waiting time until the $p$RWS reaches a solution. Here, for simplicity, we do not wait long enough to see a similar behavior around $\alpha_c$ instead of $\alpha_d$. Furthermore, in Figure \ref{fig:pcritic}A we report the probability $P$, that is, the ratio of successes over the number of trials, of solving the problem as a function of $p$ for $\alpha=\alpha_c$. The waiting times are $U=1$ (circles) , $U=2$ (squares) , $U=3$ (diamonds) , $U=10$ (crosses) and $U=100$ (triangles), respectively. The probabilities are calculated over $10^2$ trials for each point ($10$ different problems times $10$ $p$RWS runs for each problem).
As the waiting time increases, the upper critical value $^+p_c$ above which a solution is found for sure decreases ( $^+p_c \simeq 0.8$ for $U=1$ , $^+p_c \simeq 0.7$ for $U=2$ , $^+p_c \simeq 0.6$ for $U=3$ and for $U=10$ , $^+p_c \simeq 0.5$ for $U=100$ ). This means that solutions can be found even for a less biased search, while $^-p_c$ remains at zero for the waiting times reported here, i.e., no value of $p$ below $1/3$ leads to a solution. This is as expected: if the variables are almost all negative, it is harder to find a solution of the problem (the paradise is a solution, while for odd $k$ the hell is not). \begin{figure} \includegraphics*[width=0.47\textwidth]{pcriticnew} \caption{Numerical estimate of the upper $^+p_c$ (B) and lower $^-p_c$ (C) critical values of $p$ [see the text for their definition] as a function of the dilution parameter $\alpha$. Here $L=10^3$ and the different symbols correspond to different maximum waiting times $T=U\cdot L$ [ $U=1$ (circles) , $U=2$ (squares) , $U=3$ (diamonds) , $U=10$ (crosses) ]. Each point is given by the average over $10$ different problems for each value of $\alpha$ and $10$ different $p$RWS runs for each problem (with random initial conditions). Moreover, in (A) we show the probability $P$ that the $p$RWS finds a solution at $\alpha =0.918 \simeq \alpha_c$ as a function of $p$. We used different waiting times [ $U=1$ (circles) , $U=2$ (squares) , $U=3$ (diamonds) , $U=10$ (crosses) , $U=100$ (triangles) ]. See the text for further comments.} \label{fig:pcritic} \end{figure} \subsubsection{Mean-field approximation down to $\alpha_m$} \label{sec:alpham} By construction, the ``topology'' of a $k$S problem is completely random (for this reason it is sometimes explicitly called the random $k$-SAT problem). Each of the $L$ variables appears in a given one of the $\alpha L$ clauses with probability $v=\frac{1}{L}+\frac{1}{L-1}+\ldots + \frac{1}{L-k+1}$. In particular, for $L \gg k$ one can simply write $v\simeq \frac{k}{L}$.
Then the probability $P_r$ that one variable belongs to $r$ clauses can be described by the Poisson distribution \begin{equation} P_r = \frac{\left(\alpha k \right)^r}{r!} e^{-\alpha k} \;\;\; , \label{eq:probdil} \end{equation} with mean value $\langle r \rangle = \alpha k$ and standard deviation $\sigma_r = \sqrt{\alpha k}$. $P_r$ is plotted in Figure \ref{fig:probdil}, where the numerical results [ symbols , $r=0$ (black circles) , $r=1$ (red squares) , $r=2$ (blue diamonds) and $r \geq 3$ (violet crosses) ] are compared to the analytical expectation [ lines , $r=0$ (black full line) , $r=1$ (red dotted line) , $r=2$ (blue dashed line) and $r \geq 3$ (violet dotted-dashed line) ]. \begin{figure} \includegraphics*[width=0.47\textwidth]{probdil} \caption{Probability $P_r$ that one variable belongs to $r$ clauses as a function of the dilution parameter $\alpha$. The symbols stand for numerical results obtained over $10^3$ different realizations for $L=128$ variables [ $r=0$ (black circles) , $r=1$ (red squares) , $r=2$ (blue diamonds) and $r \geq 3$ (violet crosses) ]. The lines stand for the analytical predictions of Eq.(\ref{eq:probdil}) [ $r=0$ (black full line) , $r=1$ (red dotted line) , $r=2$ (blue dashed line) and $r \geq 3$ (violet dotted-dashed line) ].} \label{fig:probdil} \end{figure} If we start from an antagonistic society (all variables false), the minimum value of the dilution $\alpha_m$ needed to reach the paradise (if $p \geq 1/2$) is set by requiring that all variables belong to at least one clause. This means that $P_0 < 1/L$, from which \begin{equation} \alpha_m = \frac{\ln{L}}{k} \;\;\; . \label{eq:meanfield} \end{equation} It is interesting to note that the same criterion applies for any $p$.
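Both Eq.(\ref{eq:probdil}) and the criterion behind Eq.(\ref{eq:meanfield}) are easy to verify numerically (a sketch; the values of $L$ and $k$ are illustrative):

```python
from math import exp, log

L, k = 128, 3

def p0_binomial(alpha):
    # Exact probability that a given variable appears in none of the
    # alpha*L clauses, each containing it with probability v ~ k/L;
    # compare with the Poisson estimate P_0 = exp(-alpha*k)
    return (1.0 - k / L) ** int(alpha * L)

# Eq. (eq:meanfield): requiring P_0 = 1/L fixes the minimum dilution
alpha_m = log(L) / k
```

At $\alpha=\alpha_m$ one has $P_0=e^{-k\alpha_m}=1/L$ exactly, so that for $\alpha>\alpha_m$ a typical variable is contained in at least one clause.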
In Figure \ref{fig:dendil} we plot the absolute value of the difference $\left| \; ^{(m)}\rho_\infty - \; ^{(t)}\rho_\infty \; \right| $ between $^{(t)}\rho_\infty$, the theoretical prediction for the stationary density of true variables [ Eq.(\ref{eq:antal2}) ], and the numerically measured value $^{(m)}\rho_\infty$, as a function of the dilution parameter $\alpha$. $^{(m)}\rho_\infty$ is obtained as the average of the density of friendly links (registered after a waiting time $T=200.0$, so that it is effectively the stationary density) over $50$ different problems and $50$ different $p$RWS runs for each problem. The results reported here are for $L=128$ (open symbols) and $L=256$ (gray filled symbols) and for different values of $p$ [ $p=0$ (circles) , $p=1/3$ (squares) , $p=1/2$ (diamonds) , $p=1$ (triangles) ]. The initial conditions are those of an antagonistic society. The dashed lines are proportional to $e^{-3 \alpha}$. Figure \ref{fig:dendil} shows that the mean-field approximation of Eq.(\ref{eq:antal2}) becomes valid exponentially fast as the dilution decreases, i.e., as $\alpha$ increases. Moreover, for the cases $p=0$ and $p=1/3$, we observe that the difference $\left| \; ^{(m)}\rho_\infty - \; ^{(t)}\rho_\infty \; \right| $ is always smaller than for $p=1/2$ and $p=1$. Qualitatively, this means that the dilution $\alpha$ of the system needed to reach the theoretical expectation of Eq.(\ref{eq:antal2}) is smaller than $\alpha_m$ for $p<1/2$. In general, we can say that $\alpha_m$ is a function of $p$, $\alpha_m=\alpha_m(p)$, where $\alpha_m$ is the minimum value of the dilution of the system for which we can effectively describe the diluted system as an all-to-all system for all values of $p$. Moreover, it should be noticed that for $\alpha > \alpha_m$ almost all variables belong to at least three clauses [ see Figure \ref{fig:probdil} ]. This fact allows the $p$RWS to explore a larger part of configuration space.
Let us assume that a variable belongs to fewer than three clauses: an update event that flips this variable, so that the one unbalanced triad becomes balanced, can never increase the total number of unsatisfied clauses, since it can frustrate at most one other clause that the variable belongs to. This is reminiscent of the situation in an energy landscape in which an algorithm gets stuck in a local minimum because it never accepts a change in the ``wrong'' direction, i.e. towards larger energy. \begin{figure} \includegraphics*[width=0.47\textwidth]{dendil} \caption{Difference $\left| \; ^{(m)}\rho_\infty - \; ^{(t)}\rho_\infty \; \right| $ between $^{(t)}\rho_\infty$, the theoretical prediction for the stationary density of friendly variables [ Eq.(\ref{eq:antal2}) ], and the numerically measured value $^{(m)}\rho_\infty$, as a function of the dilution parameter $\alpha$. $^{(m)}\rho_\infty$ is obtained as the average of the density of friendly links (registered after a waiting time $T=200.0$, so that it is effectively stationary) over $50$ different problems and $50$ different $p$RWS runs for each problem. The results displayed here are obtained for $L=128$ (open symbols) and $L=256$ (gray filled symbols) and for different values of $p$ [ $p=0$ (circles) , $p=1/3$ (squares) , $p=1/2$ (diamonds) , $p=1$ (triangles) ]. The initial conditions are those of an antagonistic society. The dashed lines are proportional to $e^{-3 \alpha}$.} \label{fig:dendil} \end{figure} \section{Summary and conclusions}\label{summary} In the first part of this paper we generalized the triad dynamics of Antal \emph{et al.} to a $k$-cycle dynamics \cite{antal}. Here we had to distinguish between even and odd values of $k$. For all integer values of $k$ there is again a critical threshold at $p_c=1/2$ in the propensity parameter.
For odd $k$ and $p<p_c$ the paradise can never be reached in the thermodynamic limit of infinite system size (as predicted by the mean-field equations, which we solved exactly for $k=5$ and approximately for $k>5$). In a finite volume one could in principle reach a balanced state made out of two cliques (a special case of this configuration is the ``paradise'', when one clique is empty). However, the probability of reaching such a type of frozen state decreases exponentially with the system size, so that in practice the fluctuations never die out in the numerical simulations. For $p>1/2$ the convergence time to reach the paradise grows logarithmically with the system size. At $p=1/2$ paradise is reached within a time that follows a power law in the size $N$, where we determined the $k$-dependence of the exponent. In particular, the densities of $k$-cycles with $j$ negative links, although evolved according to the rules of the $k$-cycle dynamics, could be equally well obtained from a random dynamics in which each link is set equal to $1$ with probability $\rho_\infty$ or equal to $-1$ with probability $1-\rho_\infty$. This feature was already observed by Antal \emph{et al.} for $k=3$ \cite{antal}. It means that the individual updating rules, which seem to be ``socially'' motivated in locally reducing the social tensions by changing links to friendly ones, end up with random distributions of friendly links. The reason is the absence of a constraint requiring that the overall number of frustrated $k$-cycles should not increase in an update event. Such a constrained dynamics was studied by Antal \emph{et al.} in \cite{antal}, but not in this paper. \\ For even values of $k$, the only stable solutions are ``heaven'' (i.e. paradise) and ``hell'' for $p>1/2$ and $p<1/2$, respectively, and the time to reach these frozen configurations grows logarithmically with $N$. At $p_c=1/2$ other realizations of the frozen configurations are possible, in principle.
However, they have negligible probability as compared to heaven and hell. Here the time to reach these configurations increases quadratically in $N$, independently of $k$. This result was obtained in two ways. Either from the criterion to reach the stable state when a large enough fluctuation drops the system into this state (so we had to calculate how long one has to wait for such a big fluctuation). Alternatively, the result could be read off from a mapping to a Markov process for diploid organisms ending up in a genetic pool of either all ``$+$''-genes or all ``$-$''-genes. The possible stable states of diploid organisms differ from ours in that two-clique stable solutions are in principle admissible for the even $k$-cycle dynamics; however, such clique states have so low a probability of being realized that the difference is irrelevant. \\ The difference between the even and odd $k$-cycle dynamics, in the exponent at $p_c$ and in the stable configurations above and below $p_c$, is due to the fact that ``hell'', a state with all links negative as in an antagonistic society, is a balanced state for even $k$, not only by the frustration criterion of physicists, but also according to the criterion of social scientists \cite{cartwright}. \\ \vspace{5pt} \\ As a second natural generalization of the social balance dynamics of Antal \emph{et al.} we considered a diluted network. One way of implementing the dilution is via a random Erd\"os-R\'enyi network, characterized by the probability $w$ for connecting a randomly chosen pair of nodes. Here we focused our studies on the case $k=3$. The mean-field description and the results about the phase structure remain valid down to a certain degree of dilution, characterized by $w_m$. This threshold for the validity of the mean-field description practically coincides with the criterion of whether a single link belongs to at least three triads (for $w>w_m$) or not ($w<w_m$).
If it does so, an update event can increase the number of frustrated triads. For $w<w_m$, or more precisely $w<w_d<w_m$, it becomes easier to realize frozen configurations different from the paradise. Isolated links do not get updated at all, and isolated triads can freeze to a ``$+$''-``$-$''-configuration. The time to reach such a frozen configuration (in general different from the paradise) then grows only linearly in the system size. Also the solution space, characterized by the average Hamming distance between solutions, has different features below and above another threshold, called $w_s$, with $w_d<w_s<w_m$. Therefore one of the main differences between the all-to-all and the sufficiently diluted topology is the set of frozen configurations. For the all-to-all case we observed in the numerical simulations the paradise above $p_c$ for both odd and even values of $k$, and the hell below $p_c$ for even values of $k$, because the probability to find a two-clique frozen configuration was calculated to be negligibly small. For larger dilution, also other balanced configurations were numerically found, as mentioned above, and the time needed in the numerical simulations for finding these solutions followed the theoretical predictions. \\ In section \ref{sec:diluted} we used, however, another parameterization in terms of the dilution parameter $\alpha$, the ratio of the number of triads (clauses) to the number of links [an approximate relation between $\alpha$ and $w$ was given in Eq.(\ref{eq:average_ratio})]. The reason for using this parameterization was a mapping of the $k$-cycle social balance of networks to a $k$-XOR-SAT ($k$XS) problem, a typical satisfiability problem in optimization tasks. We also traced a mapping between the ``social'' dynamical rules and the Random-Walk SAT (RWS) algorithm, one approach for solving this problem in a random, local way.
As we have shown, the diluted version of the $3$-cycle social dynamics with propensity parameter $p=1/3$ corresponds to a $3$XS problem solved by the RWS algorithm in its standard form (as used in \cite{weigt,semerjian}). \\ The $k$XS problem considered here is always solvable, like the $k$-cycle social balance, for which a two-clique solution always exists due to the structure theorem of \cite{cartwright}, containing as a special solution the so-called paradise. The common challenge, however, is to find this solution by a local stochastic algorithm. The driving force, shared by both sets of problems, is the reduction of frustration. The meaning of frustration depends on the context: for the $k$-cycle dynamics it is meant in a social sense as a reduction of social tension, for the $k$XS problem it corresponds to violated clauses. The mathematical criterion is the same. The local stochastic algorithm works in a certain parameter range, but outside this range it fails. The paradise is never reached for a propensity parameter $p<1/2$, independently of $k$. Similarly, the solution of the $k$XS problem is never found if the dilution parameter is larger than $\alpha_c$, and the RWS algorithm already needs an exponentially long time for $\alpha>\alpha_d$, with $\alpha_d<\alpha_c$. \\ We generalized the RWS algorithm, usually chosen for solving the $k$-SAT ($k$S) problem as well as the $k$XS problem, to include a parameter $p$ that formerly played the role of the propensity parameter in the social dynamics ($p$RWS). The effect of this parameter is a bias towards the solution, so that $\alpha_d$, the threshold between a linear and an exponential time for solving the problem, becomes a function of $p$. Problems for which the $p$RWS algorithm needed exponentially long times for $p=1/3$ now become solvable within a time that grows less than logarithmically in the system size for $p>1/2$ and less than power-like in the system size for $p=1/2$.
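For concreteness, the standard RWS step for the $3$XS problem, which the text identifies with the $p=1/3$ case of $p$RWS, can be sketched as follows (a minimal illustration on a planted satisfiable instance; all names and the instance generator are our own, and the propensity generalization is omitted):

```python
import random

def rws_3xorsat(clauses, n, max_steps=200000, rng=random):
    # standard Random-Walk SAT: pick a violated XOR clause uniformly
    # at random and flip a uniformly chosen variable of that clause
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_steps):
        violated = [c for c in clauses
                    if sum(x[v] for v in c[0]) % 2 != c[1]]
        if not violated:
            return x                      # all clauses satisfied
        vs, _ = rng.choice(violated)
        x[rng.choice(vs)] ^= 1
    return None                           # step budget exhausted

def planted_instance(n, m, rng=random):
    # satisfiable by construction: parities agree with a hidden assignment
    hidden = [rng.randint(0, 1) for _ in range(n)]
    return [(vs, sum(hidden[v] for v in vs) % 2)
            for vs in (rng.sample(range(n), 3) for _ in range(m))]

random.seed(1)
n, m = 20, 12                             # alpha = m/n well below alpha_d
clauses = planted_instance(n, m)
solution = rws_3xorsat(clauses, n)
```

In this underconstrained regime the walk finds a satisfying assignment quickly; close to $\alpha_d$ the same loop would exhaust its step budget with high probability.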
Along with the bias goes an exploration of the solution space that has on average a smaller Hamming distance between different solutions than in the case of the $\frac{1}{3}$RWS algorithm that was formerly considered \cite{weigt,semerjian}. \\ \vspace{5pt} \\ Our paper has illustrated that the reduction of frustration may be the driving force common to a number of dynamical systems. So far we have been concerned with ``artificial'' systems like social systems and satisfiability problems. It would be interesting to search for natural networks whose evolution is determined by the goal of reducing frustration, not necessarily to zero, but at least to a low degree. \begin{acknowledgments} It is a pleasure to thank Martin Weigt for drawing our attention to Random $k$-SAT problems in computer science and for having useful discussions with us while he was visiting the International University Bremen as an ICTS-fellow. \end{acknowledgments}
\section{Introduction} The prototype of residue theorems in geometry is the classical theorem of Hopf [H], which relates the zeroes of a vector field on a compact manifold to its Euler characteristic. In general, residue theorems associate topological invariants to the singularities of geometric objects. The aim of this paper is to study singularities of maps between bundles over an oriented manifold, in particular, to obtain residue theorems for such singularities. The general setting is as follows: Let $E$ and $F$ be vector bundles, real or complex, over a compact, oriented manifold $X$, and let $\alpha:E \mapsto F$ be a bundle map between $E$ and $F$ that drops rank on a closed submanifold $\Sigma$. The theory developed in this paper uses the notion of a pushforward connection, as developed by Harvey-Lawson [HL1], to compare the characteristic classes of $E$ and $F$ and relate them to $\Sigma$. When rank(E) = rank(F) and $\phi$ is an Ad-invariant polynomial, formulae of the type \[\phi(\Omega_{F}) - \phi(\Omega_{E}) = Res_{\phi}[\Sigma] + dT \] are derived, where: $[\Sigma]$ is the current associated to the submanifold $\Sigma$, $Res_{\phi}$ is a closed current computed in terms of the curvatures $\Omega_{E}$ and $\Omega_{F}$ of $E$ and $F$ and the twisting of the map $\alpha$, and $T$ is a canonical transgression form. The theory of generic bundle maps has been studied in great detail by Harvey-Lawson [HL1], [HL2] and by Harvey-Semmes [HS]. This paper uses several key ideas developed in the papers above, namely the notion of pushforward connections and the universal setting for residue theorems, but takes a different point of view. While the authors above study atomic maps between bundles, for which the singularities are of the expected codimension, we allow the singularities to be nongeneric, asking only that they be closed submanifolds.
This is not a strong restriction because a minor modification of the standard Thom Transversality Theorem (see [HL2]) shows that the set of smooth bundle maps which vanish nondegenerately is open and dense in the $C^{1}$-topology. In this general setting one cannot hope to obtain residue theorems for bundle maps but the key point of this paper is that up to homotopy it is always possible to do so. More precisely, up to homotopy of the bundle map $\alpha$, and the connections $D_{E}$ and $D_{F}$ of the bundles $E$ and $F$, we can write residue formulae for any Ad-invariant polynomial $\phi$. This approach leads to many interesting formulae and applications. To name a few, we obtain a generalized Hopf index theorem for bundle maps with finite singularities, a generalized Riemann-Hurwitz formula for smooth maps between manifolds of the same dimension, residue formulae for CR-singularities and residue formulae for Clifford and Spin bundles. The author wishes to thank Blaine Lawson for introducing him to the subject and for his invaluable help in shaping the results obtained in his thesis, from which this paper stems. Also, he wishes to thank ICTP, where a part of this work was realized, for its hospitality. \section{Pushforward Connections} \setcounter{equation}{0} In this section we review the concept of a pushforward connection introduced in [HL1]. The material covered in \S 1 to \S 3 can be found in the above paper in much greater detail. A notational convention: Throughout this paper $X$ will denote a manifold which is oriented and $\Sigma$ a submanifold of $X$. Suppose $E$ and $F$ are vector bundles (real or complex) over an oriented manifold $X$. Let $D_{E}$, respectively $D_{F}$, be a smooth connection on $E$, respectively $F$. Let $\alpha$ be a bundle map between $E$ and $F$ that is injective outside a submanifold $\Sigma$ of $X$. 
Then we can transplant the connection $D_{E}$ to define a pushforward connection on $F$ outside the singularity set $\Sigma$ as follows. \begin{equation} \vec{D} = \alpha D_{E} \beta + D_{F} (1 - \alpha \beta) \end{equation} Here $\beta$ is the `inverse of $\alpha$'. This is made precise below. Suppose $E$ and $F$ are equipped with metrics, not necessarily compatible with the connections $D_{E}$ and $D_{F}$. On the complement of $\Sigma$ let $I=im\alpha$ denote the image subbundle of $F$. We can now choose $\beta$ to be the orthogonal projection of $F$ onto $I$ followed by the inverse of the map \[\alpha : E \mapsto I\] The transplanted connection $\vec{D}$ is singular because the map $\beta$ is singular on $\Sigma$. A more concrete formula for $\beta$ is given by $$\beta=(\alpha^{\ast} \alpha)^{-1} \alpha^{\ast}$$ where $\alpha^{\ast}$ denotes the adjoint of $\alpha$. For the most part, this paper will concentrate on the equirank case, i.e., when rank(E) = rank(F). Then the singular pushforward connection $\vec{D}$ on $F$ is given by the simple formula $$\vec{D} = \alpha D_{E} \alpha^{-1} \quad \mbox{on} \quad X-\Sigma$$ On $X-\Sigma$ the pushforward connection $\vec{D}$ can be written in block form with respect to the splitting $F = I \oplus I^{\perp}$. The matrix form of $\vec{D}$ blocks as an upper triangular matrix with diagonal terms $\alpha D_{E} \beta$ and $(1 - P) D_{F} (1 - P)$. Here $P$ denotes the orthogonal projection of $F$ onto $I$. Therefore for any Ad-invariant polynomial $\phi$, on the Lie algebra $g\ell_{n}(\mathbb R)$, or $g\ell_{n}(\mathbb C)$, we have $$\phi (\vec{D}) = \phi (\alpha D_{E} \beta \oplus (1-P) D_{F} (1-P) ) \quad \mbox{on} \quad X-\Sigma$$ where to simplify notation we write $$\phi (D) \equiv \phi (\Omega)$$ This denotes the invariant polynomial evaluated on the appropriate curvature. This will be the notational convention adopted throughout the paper.
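The algebra behind $\beta$ can be checked numerically in the injective case: $\beta = (\alpha^{\ast}\alpha)^{-1}\alpha^{\ast}$ is a left inverse of $\alpha$, and $\alpha\beta$ is the orthogonal projection $P$ of $F$ onto $I = im\,\alpha$. A small sketch (our illustration only; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.standard_normal((5, 3))       # an injective map E -> F (rank 3)

# beta = (alpha* alpha)^{-1} alpha*
beta = np.linalg.inv(alpha.T @ alpha) @ alpha.T

# beta inverts alpha on E ...
assert np.allclose(beta @ alpha, np.eye(3))

# ... and alpha beta is the orthogonal projection P of F onto I = im(alpha)
P = alpha @ beta
assert np.allclose(P @ P, P)              # idempotent
assert np.allclose(P, P.T)                # self-adjoint
assert np.allclose(P @ alpha, alpha)      # acts as the identity on the image
```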
When restricted to sections of $I$, $\alpha D_{E} \beta = \alpha D_{E} \alpha^{-1}$ is gauge equivalent to $D_{E}$ and this implies that \begin{equation} \phi (\vec{D}) = \phi(D_{E} \oplus D_{I^{\perp}}) \quad \mbox{on} \quad X-\Sigma \end{equation} where $D_{I^{\perp}} = (1 - P) D_{F} (1 - P)$ is the connection induced on $I^{\perp} \subset F$ by $D_{F}$. \section{ Families of Pushforward Connections and Transgressions} \setcounter{equation}{0} To obtain a transgression formula via Chern-Weil theory we want to introduce a family of connections on $F$. There is a nice way of doing this by using the notion of an approximate one. By an approximate one we mean a $C^{\infty}$ function $$\chi : [ 0 , \infty ] \mapsto [ 0 , 1 ]$$ which satisfies $$\chi (0) = 0 , \quad \chi (\infty) = 1$$ and $$\chi' \geq 0$$ Given a bundle map $\alpha$ we can define approximations to the inverse of $\alpha$ based on $\chi$ by setting $$\beta_{s} = \frac{\alpha^{\ast}}{s^{2}} \chi (\frac{\alpha \alpha^{\ast}}{s^{2}})$$ The family of bundle maps $\beta_{s}$ is smooth for $0 < s \leq + \infty$ on $X$ with $\beta_{\infty} = 0$ and $\beta_{0} = \beta$ on $X-\Sigma$. We can now define a family of smooth connections $\vec{D}_{s}$ over the entire manifold $X$ including $\Sigma $ for $0 < s \leq + \infty$ by $$\vec{D}_{s} = \alpha D_{E} \beta_{s} + D_{F} (1 - \alpha \beta_{s})$$ Note that $\vec{D}_{\infty} = D_{F}$ and $\vec{D}_{0} = \vec{D}$ on $X-\Sigma$. We then have a family of curvature forms $\Omega_{s}$ corresponding to the family of connections $\vec{D}_{s}$. Using standard Chern-Weil theory (see [BoC]) we can write $$\phi (\vec{D}_{\infty}) - \phi (\vec{D}) = dT \quad \mbox{on} \quad X-\Sigma$$ and using (2.2) we can rewrite this as \begin{equation} \phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = dT \quad \mbox{on} \quad X-\Sigma \end{equation} Here $T$ denotes the transgression form for this family of connections.
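The smoothing behind $\beta_{s}$ can be illustrated numerically. The sketch below uses the regularised pseudoinverse $\beta_{s} = \alpha^{\ast}(s^{2} + \alpha\alpha^{\ast})^{-1}$ as an illustrative stand-in (our choice, not necessarily the normalisation of [HL1]): it is smooth in $\alpha$ for every $s > 0$, vanishes as $s \rightarrow \infty$, and converges to $\beta$ wherever $\alpha$ is injective.

```python
import numpy as np

def beta_s(alpha, s):
    # regularised approximate inverse: finite for every s > 0,
    # even at points where alpha drops rank (our stand-in formula)
    m = alpha.shape[0]
    return alpha.T @ np.linalg.inv(s ** 2 * np.eye(m) + alpha @ alpha.T)

rng = np.random.default_rng(1)
alpha = rng.standard_normal((5, 3))
beta = np.linalg.inv(alpha.T @ alpha) @ alpha.T

assert np.allclose(beta_s(alpha, 1e-4), beta, atol=1e-3)   # s -> 0 limit
assert np.abs(beta_s(alpha, 1e4)).max() < 1e-6             # s -> infinity limit

# unlike beta itself, beta_s stays finite when alpha is singular
alpha_sing = np.zeros((5, 3))             # a point of the singular set Sigma
assert np.allclose(beta_s(alpha_sing, 0.5), 0.0)
```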
The explicit form of $T$ is given as follows. $$T = \int_{0}^{\infty} \phi (\dot{D}_{t} ; \Omega_{t}) dt$$ where $\phi (\dot{D}_{t} ; \Omega_{t}) = \frac{d}{ds} \phi (\Omega_{t} + s\dot{D}_{t}) \mid_{s = 0}$ is the complete polarization of $\phi$. The aim of this paper is to extend equation (3.1) across the singularity set $\Sigma$ to the entire manifold $X$, thereby obtaining residue formulae. \begin{remark} We can also consider the case where $\alpha$ is a surjective map outside $\Sigma$. Here we can define a pullback connection $ \stackrel{\leftarrow}{D}$ on $E$ by $$ \stackrel{\leftarrow}{D} = \beta D_{F} \alpha + (1 - \beta\alpha)D_{E}$$ where $\beta$ denotes the inverse of the map $\alpha: K^{\perp} \mapsto F$ followed by the inclusion $K^{\perp} \subset E$, and $K \equiv ker \alpha$ is the kernel subbundle of $E$. Again by considering families of connections, we obtain, for any invariant polynomial $\phi$, \begin{equation} \phi(D_{E}) - \phi(D_{F} \oplus D_{K}) = dT \quad \mbox{on} \quad X-\Sigma \end{equation} For the sake of clarity we omit mentioning the surjective case explicitly in the exposition that follows. The formulae are the same in both cases; the reader just has to replace (3.1) with (3.2) to obtain the result for the surjective case. \end{remark} \section{The Universal Setting} \setcounter{equation}{0} It is often useful to consider the above setting universally. By this we mean transplanting the given data to $Hom^{\times} (E,F)$, the bundle of injective maps from $E$ to $F$. Let $\pi : Hom^{\times} (E,F) \mapsto X$ be the projection map onto the manifold $X$. Then we can pull back the bundles $E$ and $F$ by $\pi$ to obtain the bundles $\pi^{\ast} E$ and $\pi^{\ast} F$ over $Hom^{\times} (E,F)$. There is a tautological bundle map $$\tilde{\alpha} : \pi^{\ast} E \mapsto \pi^{\ast} F$$ which at a point $\alpha \in Hom^{\times} (E_{x} , F_{x})$ above $x \in X$ is simply defined to be $\alpha$. This tautological map is injective everywhere.
We can pull the connections on $E$ and $F$ back to $\pi^{\ast} E$ and $\pi^{\ast} F$ and apply Chern-Weil theory to this setting as we did in the previous section. We then have the following universal formula. \begin{equation} \phi (D_{\pi^{\ast} F}) - \phi(D_{\pi^{\ast} E} \oplus D_{\tilde{I}^{\perp}}) = d\tilde{T} \quad \mbox{on} \quad Hom^{\times} (E,F) \end{equation} A smooth bundle map $\alpha : E \mapsto F$ which is injective outside $\Sigma \subset X$ defines a cross-section of $Hom^{\times} (E,F)$ on $X-\Sigma$ and we have that $$\alpha^{\ast} (\pi^{\ast} E) = E$$ and $$\alpha^{\ast} (\pi^{\ast}F) = F$$ over $X-\Sigma$ as bundles with connections. Furthermore $$\alpha^{\ast} (\tilde{\alpha}) = \alpha \quad \mbox{on} \quad X-\Sigma$$ Thus every case is a pullback of the universal one; in particular, (4.1) pulls back to give (3.1) on $X-\Sigma$. \section{Residue Formulae} \setcounter{equation}{0} In this section we study equation (3.1) in more detail. We show how to obtain residue formulae when both the transgression form $T$ and the characteristic form $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ in the equation above extend as $L_{loc}^{1}$ forms across the singularities of the bundle map $\alpha$. Suppose that we are in the setting outlined in the previous sections, where $\alpha : E \mapsto F$ is a bundle map, defined and injective outside $\bigcup\Sigma_{i}$. Each $\Sigma_{i}$ is assumed to be a submanifold of $X$, disjoint from the others, but not necessarily of the same dimension. The submanifolds $\Sigma_{i}$ will be referred to as the $\bf{singularities}$ of the map $\alpha$. Then as in \S 3 we can write down the following: $$\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = dT \quad \mbox{on} \quad X - \bigcup \Sigma_{i}$$ Suppose that the transgression form $T$ and the characteristic form $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ in the equation above extend as $L_{loc}^{1}$ forms across the singularities.
We then have the following residue theorem. \begin{theorem} Let $\alpha : E \mapsto F$ be a map which is injective outside $\bigcup \Sigma_{i}$, where each $\Sigma_{i}$ is a closed submanifold of a compact, oriented manifold $X$. Suppose that $T$ and $\phi(D_{F}) -\phi(D_{E} \oplus D_{I^{\perp}})$ extend as $L_{loc}^{1}$ forms on $X$ for a given invariant polynomial $\phi$. Furthermore, assume that the extension of $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ is d-closed on $X$. Then \begin{equation} \phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = \sum_{i} Res_{\phi,i} [\Sigma_{i}] + dT \quad \mbox{on} \quad X \end{equation} where $$Res_{\phi,i}\equiv \lim_{\epsilon \rightarrow 0} \int_{\pi_{i}\mid_{\partial N_{i,\epsilon}}} T$$ is a closed current supported on $\Sigma_{i}$ and $deg(Res_{\phi,i}) = 2deg(\phi) - codim(\Sigma_{i})$. Here $\partial N_{i,\epsilon}$ denotes the boundary of an $\epsilon$-tubular neighborhood $N_{i,\epsilon}$ of $\Sigma_{i}$ and $\pi_{i} : \partial N_{i,\epsilon} \rightarrow \Sigma_{i}$ is projection. \end{theorem} \begin{proof} Choose $\epsilon$-tubular neighborhoods $N_{i,\epsilon}$ of $\Sigma_{i}$. Write $X = ( X - \bigcup N_{i,\epsilon} ) \cup \bigcup N_{i,\epsilon}$.
Since $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extends as an $L_{loc}^{1}$ form on $X$, we have $${\lim_{\epsilon \rightarrow 0}} (\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})) \wedge [X - \bigcup N_{i,\epsilon}] = (\phi(D_{F}) -\phi(D_{E} \oplus D_{I^{\perp}})) \wedge [X]$$ Then we can write \begin{eqnarray*} ( \phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) ) \wedge [X] & = & \lim_{\epsilon \rightarrow 0} dT \wedge [X - \bigcup N_{i,\epsilon}] \\ & = & \lim_{\epsilon \rightarrow 0} d(T \wedge [X - \bigcup N_{i,\epsilon} ]) \\ & & + \lim_{\epsilon \rightarrow 0} \sum_{i} T \wedge [ \partial N_{i,\epsilon} ] \\ \end{eqnarray*} Here $[X]$, $[ X - \bigcup N_{i,\epsilon} ]$ and $[ \partial N_{i,\epsilon} ]$ denote the currents associated with $X$, $X - \bigcup N_{i,\epsilon}$ and $ \partial N_{i,\epsilon}$ respectively. Since $T$ extends as an $L_{loc}^{1}$ form on $X$, the family $T \wedge [X - \bigcup N_{i,\epsilon}]$ converges to $T$ extended by zero, as currents on $X$. We now use the following convergence theorem found in Federer [F;4.1.19]. If $ a_{\epsilon} \rightarrow a$ and $b_{\epsilon} \rightarrow b$ as $L_{loc}^{1}$ forms, then $da_{\epsilon} + b_{\epsilon} \rightarrow da + b$ in flat norm. Applied here, this implies that $L_{i} \equiv \displaystyle{\lim_{\epsilon \rightarrow 0}} T \wedge [\partial N_{i,\epsilon}]$ exists in flat currents on $X$. Since $supp( L_{i} ) \subset \Sigma_{i}$, this current is intrinsic to $\Sigma_{i}$, by Federer's flat support theorem found in Federer [F;4.1.15], i.e., $\pi_{i \ast} L_{i} = L_{i}$ where $\pi_{i} : N_{i,\epsilon} \rightarrow \Sigma_{i}$ is the fibration of the tubular neighborhood over $\Sigma_{i}$.
Now \begin{eqnarray*} L_{i} & = & \pi_{i \ast} L_{i} \\ & = & \pi_{i \ast} \Big[ \lim_{\epsilon \rightarrow 0} T \wedge [\partial N_{i,\epsilon}] \Big] \\ & = & \lim_{\epsilon \rightarrow 0} \pi_{i \ast} \Big[ T \wedge [\partial N_{i,\epsilon}] \Big] \\ & = & \lim_{\epsilon \rightarrow 0} \int_{\pi_{i}\mid_{\partial N_{i,\epsilon}}} T \wedge [\Sigma_{i}] \\ & = & Res_{\phi,i} [\Sigma_{i}] \\ \end{eqnarray*} We note that if $\Sigma_{i}$ is nonorientable then integration over the fiber defines a current with twisted coefficients in the orientation bundle of $N_{i,\epsilon}$. Since $X$ is orientable, $Res_{\phi,i}$ and $[\Sigma_{i}]$ lie in the same orientation class, hence $Res_{\phi,i}[\Sigma_{i}]$ is well-defined as a current on $X$. If $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extends as a closed $L_{loc}^{1}$ form and each $\Sigma_{i}$ is a closed submanifold then applying the exterior derivative on both sides of (5.1) gives that each $Res_{\phi,i}$ is a closed current. \end{proof} \begin{remark} We now explain integration over the fibers a little more carefully. If $\Sigma_{i}$ is orientable, integration over the fibers gives a closed current $Res_{\phi,i}$ on $X$ supported on $\Sigma_{i}$, which defines an element of $H^{\ast} (X;\mathbb R)$. If $\Sigma_{i}$ is nonorientable, integration over the fibers gives a closed current on $\Sigma_{i}$ with twisted coefficients in the orientation bundle $o(N_{i,\epsilon})$ of $N_{i,\epsilon}$. Taking the limit as $\epsilon \rightarrow 0$ does not change the orientation bundle $o(N_{i,\epsilon})$, so in fact the limit is well defined.
Furthermore, since $$ N_{i,\epsilon} \oplus T\Sigma_{i} = TX\mid_{\Sigma_{i}}$$ and $X$ is orientable, we have that $$o(N_{i,\epsilon}) \otimes o(T\Sigma_{i}) = \underline{\mathbb R}$$ where $o(N_{i,\epsilon})$ and $o(T\Sigma_{i})$ are the orientation bundles of $N_{i,\epsilon}$ and $\Sigma_{i}$ respectively, and $\underline{\mathbb R}$ is the trivial bundle. Hence $\Sigma_{i}$ is in the same orientation class as $Res_{\phi,i}$, so the pairing $Res_{\phi,i}[\Sigma_{i}]$ defines a closed current on $X$, supported on $\Sigma_{i}$. Here $[\Sigma_{i}]$ denotes the current associated with $\Sigma_{i}$ with twisted coefficients in $o(\Sigma_{i})$. With this in hand, it is to be understood that whenever we speak of currents and differential forms, we mean currents and differential forms with {\bf twisted coefficients} when $\Sigma_{i}$ is {\bf nonorientable}. \end{remark} The hypotheses of the theorem recur throughout the paper, so for convenience we make the following definition. \begin{defn} We say that a bundle map $\alpha$ with singularities $\Sigma_{i}$ which are closed submanifolds is {\bf extendable} if for any invariant polynomial $\phi$ the smooth forms $T$ and $\phi(D_{F}) -\phi(D_{E} \oplus D_{I^{\perp}})$ in $X - \bigcup \Sigma_{i}$ extend as $L_{loc}^{1}$ forms on the manifold $X$. \end{defn} \begin{remark} If rank(E) = rank(F), then $I^{\perp}$ does not appear in the equation above. Since $D_{E}$ and $D_{F}$ are smooth connections, $\phi(D_{F}) - \phi(D_{E})$ extends as a closed smooth form on $X$. Therefore, the only hypothesis needed in this case is that $T$ extend to be $L_{loc}^{1}$ on $X$. \end{remark} \section{Normalized Maps and Normalized Bundles} \setcounter{equation}{0} In this section we discuss when the transgression form $T$ and the characteristic form $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extend as $L_{loc}^{1}$ forms on $X$. This involves the notions of normalized bundles and normalized maps. We define these notions below.
First we define a tubular neighborhood structure of a closed submanifold $\Sigma$ to be an $\epsilon$-tubular neighborhood $N_{\epsilon}$ of $\Sigma$ with a given smooth identification with the normal disk bundle to $\Sigma$. Let $\pi : N \rightarrow \Sigma$ denote the bundle projection and $\rho:N-\Sigma \rightarrow \partial N$ denote the radial projection onto the boundary induced from the vector bundle structure of $\pi : N \rightarrow \Sigma$. \begin{defn} We say that a bundle $E$ with connection $D_{E}$ over a manifold $X$ is {\bf normalized} at a submanifold $\Sigma$, if for some tubular neighborhood structure $N$ of $\Sigma$, the pair $(E,D_{E})$ can be written as a pullback from $\Sigma$, i.e., $$ (E , D_{E}) \mid_{N} \quad \equiv \quad \pi^{\ast} (E , D_{E}) \mid_{\Sigma}$$ where $\pi : N \rightarrow \Sigma$ is projection. \end{defn} Since $\Sigma$ is a deformation retract of $N$, any bundle over $N$ is equivalent to a pullback of a bundle on $\Sigma$. However, the pullback connection is only homotopic to the original one. The condition that a bundle be normalized at $\Sigma$ ensures that the pullback connection is equal to the original one. If $\Sigma$ is a point on $X$ then the normalization condition implies that $E$ is flat in a neighborhood of the point. In general, normalization can be viewed as `flatness' in radial directions. We now go on to define normalized maps. \begin{defn} A bundle map $\alpha : E \rightarrow F$ defined over $X-\Sigma$ is {\bf normalized} at $\Sigma$, a submanifold of $X$, if there exists a tubular neighborhood structure $N$ for which $(E,D_{E})$ and $(F,D_{F})$ are normalized and such that $\alpha$ is radially constant, i.e., $$ (\rho^{\ast}\alpha : E \mid_{\partial N} \rightarrow F \mid_{\partial N}) = (\alpha : E \rightarrow F) \mbox{ in } N - \Sigma$$ where $\rho : N - \Sigma \rightarrow \partial N$ is radial projection onto the boundary. \end{defn} With these definitions in hand we prove the main extension theorem.
\begin{theorem} Let $\alpha : E \rightarrow F$ be an injective map outside $\bigcup \Sigma_{i}$. Suppose that $\alpha$ is normalized at each $\Sigma_{i}$. Then for any invariant polynomial $\phi$, the forms $T$ and $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extend as $L_{loc}^{1}$ forms over the manifold $X$, i.e., $\alpha$ is extendable. \end{theorem} \begin{proof} Since $\alpha$ is normalized at each $\Sigma_{i}$, the family of pushforward connections $\vec{D}_{s}$ is also a pullback from $\partial N_{i,\epsilon}$, i.e., $$(F, \vec{D}_{s}) \mid_{N_{i,\epsilon} - \Sigma_{i}} = \rho^{\ast} (F , \vec{D}_{s}) \mid_{\partial N_{i,\epsilon}}$$ Here $N_{i,\epsilon}$ denotes the $\epsilon$-disk bundle of $\Sigma_{i}$ and $\partial N_{i,\epsilon}$ its boundary. This immediately implies that the transgression form is also a pullback from $\partial N_{i,\epsilon}$, i.e., $$ T \mid_{N_{i,\epsilon} - \Sigma_{i}} = \rho^{\ast} (T \mid_{\partial N_{i,\epsilon}})$$ Now we want to show that this pullback property implies that $T$ extends as an $L_{loc}^{1}$ form. Since this is a local property we need only construct a local argument. Without loss of generality we can assume that each $\Sigma_{i}$ is of dimension 0. Let $f : R^{n} - \{0\} \rightarrow S^{n - 1}$ be radial projection. In coordinates $f(x) = \frac{x}{\parallel x \parallel}$ where $x = (x_{1}, \ldots , x_{n})$. Then \begin{eqnarray*} f^{\ast} dx^{i} & = & d(\frac{x_{i}}{\parallel x \parallel})\\ & = & \frac{dx_{i}}{\parallel x \parallel} - \sum_{j} \frac{x_{i} x_{j}}{\parallel x \parallel^{3}} dx_{j}\\ \end{eqnarray*} This is a homogeneous form of degree 0 and the coefficients of $f^{\ast} dx^{i}$ are bounded by $\frac{n}{\parallel x \parallel}$. Therefore the coefficients of $f^{\ast} dx^{I}$, for a multiindex $I$, are bounded by $(\frac{n}{\parallel x \parallel})^{\mid I \mid}$. Hence it is $L_{loc}^{1}$ if $\mid I \mid \leq n -1$.
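The coefficient bound for $f^{\ast} dx^{i}$ can be verified numerically: the pullback coefficients are the entries of the Jacobian of $f(x) = x/\parallel x \parallel$. A quick finite-difference sketch (our illustration, for $n = 4$):

```python
import numpy as np

def f(x):
    # radial projection R^n - {0} -> S^{n-1}
    return x / np.linalg.norm(x)

def pullback_coeffs(x):
    # coefficients of f* dx^i: delta_ij/|x| - x_i x_j/|x|^3
    r = np.linalg.norm(x)
    return np.eye(len(x)) / r - np.outer(x, x) / r ** 3

rng = np.random.default_rng(2)
for _ in range(20):
    x = rng.standard_normal(4)
    while np.linalg.norm(x) < 0.5:        # stay away from the singular point
        x = rng.standard_normal(4)
    h = 1e-6
    J = np.empty((4, 4))                  # finite-difference Jacobian of f
    for j in range(4):
        e = np.zeros(4)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    assert np.allclose(J, pullback_coeffs(x), atol=1e-5)
    # coefficients bounded by n/|x|, hence L^1_loc up to degree n-1
    assert np.abs(pullback_coeffs(x)).max() <= 4 / np.linalg.norm(x)
```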
For any $\varphi \in E^{p} (S^{n - 1})$ where $$ \varphi = \sum a_{I} dx^{I} \mid_{S^{n - 1}},\quad \mid I \mid \leq n -1$$ the coefficients of $f^{\ast} \varphi$ are bounded by $\sum \sup \mid a_{I} \mid \frac{c}{\parallel x \parallel^{n -1}}$, where $c$ is a constant. Here the sup is taken over the sphere. Hence $f^{\ast} \varphi$ is $L_{loc}^{1}$. Applied to the transgression form this implies that $T$ extends as an $L_{loc}^{1}$ form. If $2 deg(\phi) < dim X$ then the same argument as above implies that $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extends as an $L_{loc}^{1}$ form. If $2 deg(\phi) = dim X$ then $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = 0$ in $N_{i,\epsilon} - \Sigma_{i}$ since it is a form of degree higher than the dimension of $\partial N_{i,\epsilon}$. Hence it extends by zero as an $L_{loc}^{1}$ form on $X$. \end{proof} \section{Residues} \setcounter{equation}{0} The residue is in general a current supported on the singularities of the bundle map. However, when $\alpha$ is normalized at the singularities the residue is a smooth form. \begin{lemma} Let $\alpha : E \rightarrow F$ be an injective map outside $\bigcup \Sigma_{i}$. Suppose that $\alpha$ is normalized at each $\Sigma_{i}$. Then the residue $Res_{\phi,i}$ is a smooth differential form supported on $\Sigma_{i}$ and is given by $$Res_{\phi,i} \equiv \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} T$$ where $\pi_{i} : \partial N_{i,\epsilon} \rightarrow \Sigma_{i}$ is projection. Furthermore if either $rank E = rank F$ or $2 deg(\phi) = dim X$ then $Res_{\phi,i}$ is closed. \end{lemma} \begin{proof} The normalization condition implies that the transgression form $T$ is a pullback from $\partial N_{i,\epsilon}$. This radial invariance makes $T\mid_{\partial N_{i,\epsilon}}$ essentially independent of $\epsilon$. In particular $$\lim_{\epsilon \rightarrow 0} \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} T = \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} T$$ for any sufficiently small $\epsilon$. 
Since $T$ is a smooth form outside the singularities, integrating over the fibres of the projection map $\pi_{i}$ yields a smooth form on $\Sigma_{i}$. As mentioned in Remark 5.4, if rank(E) = rank(F) then $\phi(D_{F}) - \phi(D_{E})$ extends as a closed smooth form on $X$, hence $Res_{\phi,i}$ is closed. Also, as observed in the proof of Theorem 6.3, the normalization of $\alpha$ implies that $\phi(D_{F}) -\phi(D_{E} \oplus D_{I^{\perp}})$ is a pullback from $\partial N_{i,\epsilon}$. If $2 deg(\phi) = dim X$ then $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = 0$ in $N_{i,\epsilon} - \Sigma_{i}$, since it is a form of degree higher than the dimension of $\partial N_{i,\epsilon}$. So $\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}})$ extends by zero to be a closed current and $Res_{\phi,i}$ is closed. \end{proof} \section{Homotopy, Normalized Maps, and Normalized Bundles} \setcounter{equation}{0} We showed in \S 6 that $\alpha$, $E$ and $F$ had to be normalized at the singularities $\Sigma_{i}$ for $T$ and $dT$ to extend as $L_{loc}^{1}$ forms over the manifold $X$. We now show that normalization is not a strong condition. More precisely we prove that any bundle $E$ with connection is smoothly homotopic to a bundle normalized at $\Sigma_{i}$ and that any bundle map $\alpha$ with singularities $\Sigma_{i}$ is smoothly homotopic to a normalized map on $X - \bigcup \Sigma_{i}$. We also show that any two normalized maps with singularities $\Sigma_{i}$ are homotopic through normalized maps. \begin{lemma} Any pair $(E,D_{E})$ is smoothly homotopic to a bundle normalized at a submanifold $\Sigma$ of $X$. \end{lemma} \begin{proof} Let $N_{\epsilon/2}$ be a tubular neighborhood of $\Sigma$. Define a map $\pi : X \rightarrow X$ as follows. 
$$\pi(v) = \lambda(\parallel v \parallel) v$$ where $\lambda : [0,\infty) \rightarrow [0,1]$ is a smooth function with the properties $$\lambda(s) = 0 \quad \mbox{for} \quad s<\epsilon/2$$ $$\lambda(s) = 1 \quad \mbox{for} \quad s \geq \epsilon$$ and $$\lambda '(s) \geq 0$$ Then $\pi \mid_{N_{\epsilon/2}} = p : N_{\epsilon/2} \rightarrow \Sigma$, where $p$ is projection onto $\Sigma$. Hence $\pi^{\ast}(E,D_{E})$ is equivalent to $E$ and is normalized at $\Sigma$. Now let $\pi_{t}(v) = (1-t)v + t\pi(v)$ for $0 \leq t \leq 1$. Define $$(E_{t},D_{t}) \equiv \pi_{t}^{\ast} (E,D_{E})$$ then $E_{t} \cong E$ for $0 \leq t \leq 1$ and $\pi_{1}^{\ast} (E,D_{E}) = \pi^{\ast} (E,D_{E})$. Hence $\pi_{t}^{\ast}$ is the required homotopy. \end{proof} \begin{lemma} Any bundle map $\alpha : E \rightarrow F$ with singularities $\Sigma_{i}$ is homotopic to a normalized map on $X - \bigcup \Sigma_{i}$. \end{lemma} \begin{proof} Let $N_{i,\epsilon}$ be an $\epsilon$-tubular neighborhood of $\Sigma_{i}$. We first normalize $E$ and $F$ at each $\Sigma_{i}$. Define a map $\rho : X - \bigcup \Sigma_{i} \rightarrow X - \bigcup \Sigma_{i}$ as follows. For $v\in N_{i,\epsilon} -\Sigma_{i}$, set $$\rho(v) = l(\parallel v \parallel)v$$ where $l : (0,\infty) \rightarrow [1,\infty)$ is a smooth function with the properties $$l(t) = \frac{1}{t} \quad \mbox{for} \quad t \leq \epsilon/2$$ $$l(t) = 1 \quad \mbox{for} \quad t \geq \epsilon$$ and $$l'(t) \leq 0$$ Then $\rho \mid_{N_{i,\epsilon/2}-\Sigma_{i}} : N_{i,\epsilon/2}-\Sigma_{i} \rightarrow \partial N_{i,\epsilon/2}$ is radial projection in the sense of Definition 6.2, and $\rho$ extends smoothly to be the identity outside $N_{i,\epsilon}$. Hence $\rho^{\ast} \alpha$ is normalized at $\Sigma_{i}$. Now let $\rho_{t}(v) = (1-t)v + t\rho(v)$ for $0 \leq t \leq 1$. Then $\rho_{t}^{\ast} (\alpha)$ is the required homotopy between $\alpha$ and $\rho^{\ast}\alpha$ defined on $X - \bigcup \Sigma_{i}$. \end{proof} By the same argument as above we have the following lemmas. 
\begin{lemma} Let $\alpha_{1} : E \rightarrow F$ and $\alpha_{2} : E \rightarrow F$ be normalized at $\Sigma_{i}$, with the same tubular neighborhood structure such that $$\alpha_{1} = \alpha_{2} \quad \mbox{on} \quad X - \bigcup N_{i,\epsilon}$$ Then there is a homotopy between $\alpha_{1}$ and $\alpha_{2}$ on $X - \bigcup \Sigma_{i}$ through normalized bundle maps. \end{lemma} \begin{lemma} Let $(E_{1},D_{E_{1}})$ and $(E_{2},D_{E_{2}})$ be normalized at $\Sigma_{i}$, with the same tubular neighborhood structure such that $$(E_{1},D_{E_{1}}) = (E_{2},D_{E_{2}}) \quad \mbox{on} \quad X - \bigcup N_{i,\epsilon}$$ Then there is a homotopy between $(E_{1},D_{E_{1}})$ and $(E_{2},D_{E_{2}})$ on $X$ through normalized bundles. \end{lemma} \section{Invariance of Residues under Homotopy} \setcounter{equation}{0} We discuss what happens to the residue when we homotope $\alpha$ through normalized maps at the singularities $\Sigma_{i}$. First we recall a double transgression formula found in [HL1]. \begin{lemma} Let $D_{s,t}$ be a 2-parameter family of connections, $0 \leq s \leq 1$, and $a \leq t \leq b$, with $D_{s,a} = D_{a}$ and $D_{s,b} = D_{b}$ for all $ 0 \leq s \leq 1$. Then the two transgressions, $T_{1}$ and $T_{0}$, determined by $D_{1,t}$ and $D_{0,t}$ satisfy $$T_{1} - T_{0} = dR$$ with $$R = \int_{a}^{b} \int_{0}^{1} \phi (\frac{\partial}{\partial s} w_{s,t} ; \frac{\partial}{\partial t} w_{s,t} ; \Omega_{s,t}) \, ds \, dt$$ where $$\phi (A,B;C) = \frac{\partial^{2}}{\partial s \partial t} \phi (C + sA + tB) \mid_{s = t = 0}$$ \end{lemma} The lemma above allows us to prove the following invariance of the residue classes for the equirank case. \begin{theorem} Let $\alpha_{0} : E \rightarrow F$ and $\alpha_{1} : E \rightarrow F$ be normalized maps at singularities $\Sigma_{i}$. Assume that rank(E) = rank(F). Then the residues, $Res_{\phi,i}^{0}$ and $Res_{\phi,i}^{1}$, define the same cohomology class on $\Sigma_{i}$. 
\end{theorem} \begin{proof} Since $E$ and $F$ are of the same rank and the normalization conditions are satisfied we have that $$\phi(D_{F}) - \phi(D_{E}) = \su Res_{\phi,i}^{0} [\Sigma_{i}] + dT_{0} \quad\mbox { for }\quad \alpha_{0}$$ and $$\phi(D_{F}) - \phi(D_{E}) = \su Res_{\phi,i}^{1} [\Sigma_{i}] + dT_{1} \quad\mbox { for }\quad \alpha_{1}$$ where $$Res_{\phi,i}^{0} \equiv \lim_{\epsilon \rightarrow 0} \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} T_{0}$$ and $$Res_{\phi,i}^{1} \equiv \lim_{\epsilon \rightarrow 0} \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} T_{1}$$ By Lemma 8.3 we can choose a smooth homotopy $\alpha_{t}$ between $\alpha_{0}$ and $\alpha_{1}$ through normalized maps, hence $T_{t}$ extends as an $L_{loc}^{1}$ form on $X$ for all $0 \leq t \leq 1$. The initial and final connections for the two-parameter family of pushforward connections defined by the homotopy are $D_{F}$ and $D_{E}$. Thus we are in the setting of the double transgression lemma above and we can write $$T_{0} - T_{1} = dR \quad \mbox{on} \quad X - \bigcup \Sigma_{i}$$ Since $T_{0}$ and $T_{1}$ extend as $L_{loc}^{1}$ forms on $X$ so does $dR$. Furthermore $R$ extends as an $L_{loc}^{1}$ form because it is a pullback under radial projection. We now apply the same argument as we used in Theorem 5.1. Since $T_{0}$ and $T_{1}$ extend as $L_{loc}^{1}$ forms, we have $\displaystyle{\lim_{\epsilon \rightarrow 0}} (T_{0} - T_{1})\wedge [X-\bigcup N_{i,\epsilon}] = (T_{0} - T_{1}) \wedge [X]$. 
Therefore \begin{eqnarray*} (T_{0} - T_{1}) \wedge [X] & = & \lim_{\epsilon \rightarrow 0} dR \wedge [X-\bigcup N_{i,\epsilon}]\\ & = & \lim_{\epsilon \rightarrow 0} d(R \wedge [X- \bigcup N_{i,\epsilon}]) + \lim_{\epsilon \rightarrow 0}\su R\wedge\partial N_{i,\epsilon}\\ & = & d(R\wedge[X]) + \lim_{\epsilon \rightarrow 0}\su R\wedge\partial N_{i,\epsilon}\\ \end{eqnarray*} Using the convergence lemma in Federer [F;4.1.19], as before we get that $\displaystyle{\lim_{\epsilon \rightarrow 0}}R\wedge\partial N_{i,\epsilon}$ exists. Now we observe that radial invariance makes $R\mid_{\partial N_{i,\epsilon}}$ essentially independent of $\epsilon$. In particular \begin{eqnarray*} \lim_{\epsilon \rightarrow 0} \int_{\pi_{i}\mid_{\partial N_{i,\epsilon}}}R & = & \int_{\pi_{i}\mid_{\partial N_{i,\epsilon}}}R\\ & \equiv & R_{0}\\ \end{eqnarray*} for any sufficiently small $\epsilon$. Therefore $R_{0}$ is a smooth form on $\Sigma_{i}$. This shows that \begin{eqnarray*} Res_{\phi,i}^{0} - Res_{\phi,i}^{1} & = & \lim_{\epsilon \rightarrow 0} \int_{\pi_{i}} dR\\ & = & d\{\lim_{\epsilon \rightarrow 0}\int_{\pi_{i}}R\}\\ & = & dR_{0}\\ \end{eqnarray*} Hence they define the same cohomology class on $\Sigma_{i}$. \end{proof} \section{The Universal Transgression Form} \setcounter{equation}{0} In \S 3 we wrote down the following transgression formula, which is valid on $Hom^{\times} (E,F)$, the bundle of injective maps from $E$ to $F$. $$\phi (D_{\pi^{\ast} F}) - \phi(D_{\pi^{\ast} E} \oplus D_{\tilde{I}^{\perp}}) = d\tilde{T} \quad \mbox{on} \quad Hom^{\times} (E,F)$$ where $\pi : Hom^{\times} (E,F) \rightarrow X$ is the projection map. 
If $\alpha : E \rightarrow F$ is an injective map outside $\bigcup \Sigma_{i}$ then the equation above pulls down by $\alpha$ to give $$\phi(D_{F}) - \phi(D_{E} \oplus D_{I^{\perp}}) = dT \quad \mbox{on} \quad X - \bigcup \Sigma_{i}$$ In particular, the transgression form $T$ is just a pullback of the universal transgression form $\tilde{T}$ outside the singularities, i.e., $$ T = \alpha^{\ast} \tilde{T} \quad \mbox{on} \quad X - \bigcup \Sigma_{i}$$ This implies that the residues can be expressed universally in terms of $\tilde{T}$ and the map $\alpha : X - \bigcup \Sigma_{i} \rightarrow Hom^{\times} (E,F)$, considered as a section of $Hom^{\times} (E,F)$ outside the singularities. More precisely, \begin{equation} Res_{\phi,i} = \lim_{\epsilon \rightarrow 0} \int_{\pi_{i} \mid_{\partial N_{i,\epsilon}}} \alpha^{\ast} \tilde{T} \end{equation} In a sense, this is a generalization of the notion of the index of a vector field, in that the residue measures the twisting of the map $\alpha$. \section{Obstructions} \setcounter{equation}{0} The most interesting applications of the main residue formula (5.1) arise when rank(E) = rank(F). For any invariant polynomial $\phi$, the characteristic form $\phi(D_{F}) - \phi(D_{E})$ extends as a closed, smooth differential form on $X$. Suppose the singularities $\Sigma_{i}$ of the bundle map $\alpha : E \rightarrow F$ are \textbf{orientable}, closed submanifolds of $X$. Furthermore, assume that $\alpha$ is extendable. The condition that the $\Sigma_{i}$'s are orientable arises in many natural settings, for instance, orientation preserving finite group actions on a compact manifold. The residues $Res_{\phi,i}$ are then closed currents on $X$ (see Remark 5.4). More precisely, by Theorem 5.1, $Res_{\phi,i} \in H^{2deg\phi - e_{i}} (X ; \mathbb R)$, where $e_{i} = codim \, \Sigma_{i}$. We then have the following immediate corollary. 
\begin{corollary} Let $\alpha : E \rightarrow F$ be an extendable bundle map with singularities $\Sigma_{i}$ that are orientable, closed submanifolds of $X$. Assume that $rank E = rank F$. Let $dim X = k$ and $codim \Sigma_{i} = e_{i}$. Suppose that the cohomology groups $H^{2deg\phi - e_{i}} (X ; \mathbb R)$ vanish for each $i$. Then $\phi(D_{F})$ and $\phi(D_{E})$ are cohomologous. \end{corollary} \begin{proof} By (5.1) we have that $$ \phi(D_{F}) - \phi(D_{E}) = \su Res_{\phi,i} [\Sigma_{i}] + dT$$ Since $H^{2deg\phi - e_{i}} (X ; \mathbb R) = 0$, each $Res_{\phi,i}$ is exact, i.e., $$Res_{\phi,i} = dS_{i}$$ for some $S_{i} \in \Omega^{2deg\phi - e_{i} - 1} (X ; \mathbb R)$. Hence $$ \phi(D_{F}) - \phi(D_{E}) = d ( \su S_{i} [\Sigma_{i}] + T )$$ which proves the assertion. \end{proof} This corollary can be viewed as an obstruction theorem to the existence of bundle maps with orientable singularities of a certain codimension. \begin{corollary} Let $E$ and $F$ be vector bundles over $X$ where $rank E = rank F$. Suppose that $\phi(D_{E})$ and $\phi(D_{F})$ are not cohomologous for a given invariant polynomial $\phi$. Also suppose that $H^{2deg\phi - k} (X ; \mathbb R) = 0$ for some $k < 2 deg \phi$. Then there cannot exist a bundle map $\alpha : E \rightarrow F$ with an orientable, closed singularity $\Sigma$ of codimension $k$. \end{corollary} \begin{proof} Assume that such an $\alpha$ exists. We can always homotope $\alpha$ such that it becomes extendable with the same singularity $\Sigma$. By Corollary 11.1, $\phi(D_{E})$ and $\phi(D_{F})$ are cohomologous, which contradicts the assumptions. \end{proof} \nopagebreak We also have the following. \begin{corollary} Let $\alpha : E \rightarrow F$ be an extendable bundle map with singularities $\Sigma_{i}$, where $rank E = rank F$ and $codim \Sigma_{i} = e_{i}$. Let $\phi$ be an invariant polynomial such that $2 deg \phi < min \hspace{2mm}codim \Sigma_{i}$. Then $\phi(D_{E})$ and $\phi(D_{F})$ are cohomologous. 
\end{corollary} \begin{proof} Again by Theorem 5.1, if $2 deg \phi < min \hspace{2mm} codim \Sigma_{i}$ then $Res_{\phi,i} = 0$ for each $i$. Hence $$\phi(D_{F}) - \phi(D_{E}) = dT$$ \nopagebreak \end{proof} \section{Singularities of Maps} Let $X$ and $Y$ be smooth Riemannian manifolds of equal dimension and consider a smooth mapping $$f : X \rightarrow Y$$ Consider the differential map $$df : TX \rightarrow f^{\ast}TY$$ Here we endow $TX$ and $f^{\ast}TY$ with the standard Riemannian connections, normalized at the singularities of $df$. We are now in the standard setting where $E = TX$ and $F = f^{\ast}TY$. Let $p_{k}(Y)$ and $p_{k}(X)$ be the k-th Pontryjagin forms in the normalized Riemannian curvatures of $X$ and $Y$. A straightforward application of the main residue theorem yields the following result. \begin{theorem} Suppose that $f : X \rightarrow Y$ is a smooth map between compact oriented Riemannian n-manifolds. Suppose that the differential map $df$ has submanifold singularities $\Sigma_{i}$ and is extendable. Then for any $k \leq n/4$ $$f^{\ast} p_{k}(Y) - p_{k}(X) = \su Res_{p_{k},i} [\Sigma_{i}] + dT$$ and $Res_{p_{k},i}=0$ if $codim\,\Sigma_{i} > 4k$. \end{theorem} \begin{proof} We just observe that $p_{k}(f^{\ast}TY) = f^{\ast}p_{k}(TY)$ and apply Theorem 5.1. \end{proof} An interesting special case occurs when n = 4k. This yields a 4k-dimensional analogue of the classical Riemann-Hurwitz theorem [M1],[M2],[R]. \begin{corollary} Suppose that $f : X \rightarrow Y$ is a smooth map between compact oriented 4k-manifolds. Suppose that the differential map $df$ has submanifold singularities $\Sigma_{i}$ and is extendable. Then $$M_{f}\, p_{Y} - p_{X} = \su \int_{\Sigma_{i}} Res_{p_{k},i} $$ where $M_{f}$ is the degree of the map $f$ and $p_{Y}$ and $p_{X}$ are the top Pontryjagin numbers of the manifolds $Y$ and $X$ respectively. \end{corollary} More generally we consider $\wp$, a homogeneous polynomial of weight k in k indeterminates. 
The Pontryjagin number associated to $\wp$ of a compact, oriented manifold $X$ of dimension 4k is defined to be $$\wp(X) = \int_{X} \wp(p_{1}(X), \ldots , p_{k}(X))$$ Then we have the following corollary. \begin{corollary} Suppose that $f : X \rightarrow Y$ is a smooth map between compact oriented 4k-manifolds. Suppose that the differential map $df$ has submanifold singularities $\Sigma_{i}$ and is extendable. Then $$M_{f}\, \wp(Y) - \wp(X) = \su \int_{\Sigma_{i}} Res_{\wp,i} $$ where $M_{f}$ is the degree of the map $f$ and $\wp(X)$ and $\wp(Y)$ are the Pontryjagin numbers associated to $\wp$ of the manifolds $X$ and $Y$ respectively. \end{corollary} For a topological approach to these formulae see [N], [GGV]. In particular, by the Hirzebruch signature formula [MS], which states that the signature of a compact, oriented manifold of dimension 4k is expressible as a polynomial in the Pontryjagin classes through the L-class, we have the following. \begin{corollary} Suppose that $f : X \rightarrow Y$ is a smooth map between compact oriented 4k-manifolds. Suppose that the differential map $df$ has submanifold singularities $\Sigma_{i}$ and is extendable. Then $$M_{f}\, sig(Y) - sig(X) = \su \int_{\Sigma_{i}} Res_{sig,i} $$ where $M_{f}$ is the degree of the map $f$ and sig(X) and sig(Y) are the signatures of the manifolds $X$ and $Y$ respectively. \end{corollary} \begin{remark} The results in this section apply naturally to branched coverings $$ \pi : X \rightarrow Y$$ branched along a submanifold $\Sigma$ of codimension 2 in $Y$ and also to finite group actions on a compact manifold. In a subsequent paper we study these important cases, where the singularities are of nongeneric codimension, in more detail, providing explicit calculations of the residues that appear in the formulae above. \end{remark} \begin{remark} It is clear that similar formulae hold for the case of maps between complex manifolds. 
\end{remark} \section{CR-Singularities} The methods introduced above yield interesting results in CR-geometry. Consider an immersion $$f : X \hookrightarrow Z$$ of a real manifold $X$ into a complex manifold $Z$ where $$ dim_{\mathbb R}(X) = n = dim_{\mathbb C}(Z)$$ Then the differential map $$df : TX \rightarrow f^{\ast}TZ$$ extends to a complex bundle map $$df_{\mathbb C} : TX \otimes_{\mathbb R} \mathbb C \rightarrow f^{\ast}TZ$$ Assume that the bundle map $df_{\mathbb C}$ has submanifold singularities $\Sigma_{i}$. This corresponds to the loci of points where $f_{\ast} T_{x}X$ contains a complex subspace of `excess' dimension, i.e., more complex tangency than expected. Specifically we have: \begin{lemma} $$\bigcup \Sigma_{i} = \{x \in X : dim_{\mathbb C}(T_{x}X \cap JT_{x}X) > 0\}$$ where $J$ denotes the complex structure of $Z$. \end{lemma} \begin{proof} We have that $\bigcup \Sigma_{i} = \{x \in X : rank(df_{\mathbb C}) < n\} $. Note that at $x \in X$, \begin{eqnarray*} ker(df_{\mathbb C}) & = & \{V + iW : V,W \in T_{x}X\quad\mbox{and}\quad V + JW = 0\}\\ & = & \{V + iJV : V,JV \in T_{x}X\}\\ & = & [(T_{x}X \cap JT_{x}X) \otimes \mathbb C]^{0,1}\\ \end{eqnarray*} Since rank($df_{\mathbb C}$) = n - $dim_{\mathbb C}ker(df_{\mathbb C})$ we have rank($df_{\mathbb C}$) $<$ n if and only if $dim_{\mathbb C}(T_{x}X \cap JT_{x}X) > 0$. \end{proof} We now suppose that $X$ carries a Riemannian metric and $Z$ carries a hermitian metric and a complex connection, and we normalize these connections at each $\Sigma_{i}$. Let $p_{i}(X)$ and $c_{i}(Z)$ be the ith Pontryjagin and Chern forms of $X$ and $Z$ respectively. Applying Theorem 5.1 yields the following result. \begin{theorem} Let $f : X \hookrightarrow Z$ be an immersion of a real n-manifold into a complex n-manifold. Assume that $df_{\mathbb C}$ has submanifold singularities $\Sigma_{i}$ and that it is extendable. 
Then $$f^{\ast} c_{2i}(Z) - p_{i}(X) = \su Res_{\phi,i} [\Sigma_{i}] + dT$$ \end{theorem} \begin{proof} Apply Theorem 5.1 to the bundle map $df_{\mathbb C}$ and observe that $$(-1)^{i}c_{2i}(TX \otimes \mathbb C) = p_{i}(X)$$ \end{proof} Consider the case when $Z = \mathbb C^{n}$. This gives the following. \begin{corollary} Let $f : X^{n} \hookrightarrow \mathbb C^{n}$ be an immersion with the property that $df_{\mathbb C}$ has submanifold singularities $\Sigma_{i}$ and that it is extendable. Then $$p_{i}(X) = \su Res_{\phi,i} [\Sigma_{i}] + dT$$ \end{corollary} \section{Finite Singularities and a Generalized Hopf Index Formula} The finite singularity sets of r-vector fields and r-plane fields were studied extensively by E. Thomas [T1],[T2], Atiyah [A], and Atiyah-Dupont [AD]. In certain cases they managed to relate these singularities to algebraic invariants of the manifold. We continue the study of finite singularities in the general setting of bundle maps over a compact manifold. In \S 9 we discussed the universal transgression form $\tilde{T}$. We now use the universal construction to prove the following theorem. \begin{theorem} Let $\alpha : E \rightarrow F$ be a bundle map over a compact manifold $X$ with isolated finite singularities $\{x_{i}\}$. Assume that $\alpha$ is normalized at each $x_{i}$. Let rank(E) = m and rank(F) = n. Then for any invariant polynomial of top degree, $$\int_{X} \phi(D_{F})-\phi(D_{E} \oplus D_{I^{\perp}})=\su \int_{S^{i}} \alpha^{\ast}\tilde{T}$$ where $\tilde{T}\in\Omega^{odd}(V_{n,m})$. Here $V_{n,m}$ is the Stiefel manifold of m-frames in $\mathbb R^{n}$ and the $S^{i}$ are small spheres around the points $\{x_{i}\}$. \end{theorem} \begin{proof} We recall that $T=\alpha^{\ast}\tilde{T}$ outside $x_{i}$. Since $E$ and $F$ are normalized at $x_{i}$, near $x_{i}$ we have $$Hom^{\times} (E,F) = Hom^{\times}(\mathbb R^{m},\mathbb R^{n}) = V_{n,m}$$ Now we apply Theorem 5.1 and integrate over the manifold. 
Since $\phi$ is of top degree, by Lemma 7.1 we do not need to take a limit for the residue. \end{proof} This theorem is a bundle map analogue of the classical Hopf index formula. It has the following interesting corollary. \begin{corollary} Consider a map $\alpha : E \rightarrow F$ over a compact manifold $X$ of dimension 4n with isolated finite singularities, where rank(E) = 2 and rank(F) = 2n. Suppose that $\alpha$ is normalized at the singularities. Then $$\int_{X} p_{n}(F)-p_{1}(E)p_{n-1}(I^{\perp}) = 0$$ In particular if $p_{1}(E) = 0$ then $p_{n}(F)=0$. Here $p_{i}$ denotes the i-th Pontryjagin class. \end{corollary} \begin{proof} This is just a consequence of dimension. By the theorem above $\tilde{T}$ is a differential form of degree 4n - 1 on $V_{2n,2}$. But $dim V_{2n,2} = 4n - 3$ and hence $\tilde{T}=0$. \end{proof} \begin{remark} The result above is not a consequence of obstruction theory because in general, $\pi_{4n-1}(V_{2n,2}) \neq 0$. \end{remark} We would like to know when $\tilde{T}$ is closed near $x_{i}$. Then the residue can be interpreted using the cohomology of $V_{n,m}$. For this we use the following construction. Let $G_{n,m}$ be the Grassmann manifold of m-planes in $\mathbb R^{n}$ and let $\rho : V_{n,m} \rightarrow G_{n,m}$ be the standard fiber map. Also let $\tau$ be the tautological bundle over $G_{n,m}$ and $\tau^{\perp}$ be its orthogonal complement. Give $\tau^{\perp}$ the connection induced by projection. This construction yields the following lemma. \begin{lemma} $\tilde{T}$ is closed near $x_{i}$ iff $\phi(\tau^{\perp}) = 0$ on $G_{n,m}$. \end{lemma} \begin{proof} We can write $$\phi(\mathbb R^{n}) - \phi(\mathbb R^{m}\oplus\tau^{\perp}) = d\hat{T} \quad \mbox{on} \quad G_{n,m}$$ By construction $\tilde{T}=\rho^{\ast}\hat{T}$. The equation above reduces to $$\phi(\tau^{\perp})=d\hat{T}$$ Therefore if $\phi(\tau^{\perp}) = 0$ on $G_{n,m}$, $\hat{T}$ is closed and hence $\tilde{T}$ is also closed. 
\end{proof} The lemma above yields the following corollary. \begin{corollary} Let $\alpha : E \rightarrow F$ be a bundle map over a compact manifold $X$ of dimension 4i with isolated finite singularities $\{x_{i}\}$. Assume that $\alpha$ is normalized at each $x_{i}$. Let rank(E) = m and rank(F) = n. Suppose that $2i>n-m$. Then $$\int_{X} p_{i}(F) - p_{i}(E\oplus I^{\perp}) = \su \int_{S^{i}} \alpha^{\ast}\tilde{T}$$ where $\tilde{T} \in H^{4i-1}(V_{n,m})$. \end{corollary} \begin{proof} We observe that $rank(\tau^{\perp}) = n-m$. If $2i>n-m$ then $$0=p_{i}(\tau^{\perp}) = d\hat{T}$$ Hence $\hat{T}$ is closed and $\tilde{T} \in H^{4i-1}(V_{n,m})$. \end{proof} \begin{corollary} Let $\alpha : E \rightarrow F$ be a bundle map over a compact manifold $X$ of dimension 2i with isolated finite singularities $\{x_{i}\}$. Assume that $\alpha$ is normalized at each $x_{i}$. Let rank(E) = m and rank(F) = n. Let $\phi$ be any multiplicative invariant polynomial and $\phi_{i}$ be its i-th degree term. Then for $m(n-m)<2i\leq m(n-m) + \frac{1}{2}m(m-1)$ we have that $$\int_{X} \phi_{i}(F) - \phi_{i}(E\oplus I^{\perp}) = \su \int_{S^{i}} \alpha^{\ast}\tilde{T}$$ where $\tilde{T} \in H^{2i-1}(V_{n,m})$. \end{corollary} \begin{proof} This again is a consequence of dimension. We have that $$dim V_{n,m} = m(n-m) + \frac{1}{2}m(m-1)$$ and $$dim G_{n,m} = m(n-m)$$ If $2i>dim G_{n,m}$ then $$0=\phi(\tau^{\perp})=d\hat{T}$$ because it is a differential form of degree higher than the dimension of $G_{n,m}$. Therefore $\hat{T}$ is closed and hence $\tilde{T} \in H^{2i-1}(V_{n,m})$. \end{proof} \section{Clifford and Spin Bundles} Let $\pi : F \rightarrow X$ be a 2n-dimensional vector bundle with spin structure (for a complete discussion of the following constructions see [LM]). Assume that $F$ is provided with a connection $D$. Let $ \mathcal{S}$ denote the complex spinor bundle associated to $F$ and let $D_{\mathcal{S}}$ be the connection on $\mathcal{S}$ induced from the one on $F$. 
There is a canonical decomposition $$ \mathcal{S} = \mathcal{S^{+}} \oplus \mathcal{S^{-}}$$ given by the complex volume form. Suppose that we are given an odd form $\alpha$, i.e., $\alpha \in \Gamma (\bigwedge^{odd} (F) )$. Then Clifford multiplication by $\alpha$ is a bundle map from $\mathcal{S^{+}}$ to $\mathcal{S^{-}}$. Assume that $$\alpha : \mathcal{S^{+}} \rightarrow \mathcal{S^{-}}$$ has closed submanifold singularities $\Sigma_{i}$. Then again we are in the setting of Theorem 5.1, with rank $\mathcal{S^{+}}$ = rank $\mathcal{S^{-}}$. Furthermore, if $E$ is any complex vector bundle with complex connection $D_{E}$, we let $$D_{\mathcal{S} \otimes E} = D_{\mathcal{S}} \otimes 1 + 1 \otimes D_{E}$$ denote the tensor product connection on $\mathcal{S} \otimes E$ and normalize this connection at $\Sigma_{i}$. This connection preserves the splitting of the spinor bundle. The odd form $\alpha$ is again a bundle map from $\mathcal{S^{+}} \otimes E$ to $\mathcal{S^{-}} \otimes E$ and we have the following. \begin{corollary} Let $(F , D)$ and $(\mathcal{S} \otimes E , D_{\mathcal{S} \otimes E})$ be as above. Suppose that $\alpha \in \Gamma (\bigwedge^{odd} (F) )$ has closed submanifold singularities $\Sigma_{i}$ as a bundle map $\alpha : \mathcal{S^{+}} \rightarrow \mathcal{S^{-}}$ and is extendable. Then $$ ch (D_{\mathcal{S^{+}} \otimes E}) - ch(D_{\mathcal{S^{-}} \otimes E}) = \su Res_{ch,i} [\Sigma_{i}] + dT$$ \end{corollary} \begin{remark} The same formula holds when $F$ is $spin^{c}$. \end{remark} Even if $F$ is not spin or $spin^{c}$, we can derive an interesting residue formula by considering the complexified Clifford bundle $C\ell \equiv C\ell(F) \otimes \mathbb{C}$ associated to $F$. Again we have a decomposition $$ C\ell = C\ell^{+} \oplus C\ell^{-}$$ and an odd form $\alpha \in \Gamma (\bigwedge^{odd} (F) )$ acts as a bundle map $$\alpha : C\ell^{+} \otimes E \rightarrow C\ell^{-} \otimes E$$ where $E$ is any complex vector bundle with complex connection $D_{E}$. 
Again we let $$D_{C\ell \otimes E} = D_{C\ell} \otimes 1 + 1 \otimes D_{E}$$ denote the tensor product connection on $C\ell \otimes E$ and we normalize this connection at $\Sigma_{i}$, the submanifold singularities of $\alpha$. As before this connection preserves the splitting of $C\ell$. We immediately have \begin{corollary} Let $(F,D)$ and $(C\ell \otimes E, D_{C\ell \otimes E})$ be as above. Suppose that $\alpha \in \Gamma (\bigwedge^{odd} (F) )$ has closed submanifold singularities as a bundle map $\alpha : C\ell^{+} \otimes E \rightarrow C\ell^{-} \otimes E$ and is extendable. Then $$ ch (D_{C\ell^{+} \otimes E}) - ch(D_{C\ell^{-} \otimes E}) = \su Res_{ch,i} [\Sigma_{i}] + dT$$ \end{corollary} \begin{remark} An explicit calculation of the residue in these two cases would be useful because it would yield analogues of Grothendieck-Riemann-Roch [AH]. \end{remark}
\section{Introduction} This paper, with the exception of Section 6, is essentially my lecture at The Eighth Congress of Romanian Mathematicians, 2015, Iasi, Romania. Classical Morse theory and Morse--Novikov theory consider a Riemannian manifold $(M,g)$ and a Morse real valued or a Morse angle valued map, $f:M\to \mathbb R$ or $f:M\to \mathbb S^1,$ and relate the following dynamical invariants of the vector field $grad_g f$: \begin{itemize} \item the rest points of $grad_g f$ = the critical points of $f,$ \item the instantons \footnote{Isolated trajectories between critical points.} between two rest points $x,y$ of $grad_g f,$ \item the closed trajectories of $grad_g f$ (when $f$ is angle valued), \end{itemize} \noindent to the algebraic topology of the underlying manifold $M$ or of the pair $(M, \xi_f)$ in case $f$ is an angle valued map. Here $\xi_f$ denotes the degree one integral cohomology class represented by $f.$ The results of the theory can be applied to any vector field $V$ on $M$ which admits a closed differential one form $\omega \in \Omega^1(M)$ as a Lyapunov form rather than $grad_g f$, since the dynamics of such a vector field $V$ (when generic) is the same as that of $grad_g f$ for some Riemannian metric $g$ and some angle valued map $f,$ cf.\ \cite{BH08}. The results of the theory can be used in both directions: knowledge of the dynamical invariants of $grad_g f$ permits one to calculate the topological invariants of $M$ or of $(M,\xi_f),$ and the algebraic topological invariants of $M$ or of $(M,\xi)$ provide significant constraints on the dynamics of a vector field with a Lyapunov map representing $\xi,$ cf.\ \cite{BH08}. 
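The prototype of the first direction is the classical Morse inequalities, recalled here for orientation (a standard fact of the classical theory, not a statement taken from the references above): if $f$ is a Morse function on a closed smooth manifold $M$ and $c_r(f)$ denotes the number of critical points of $f$ of index $r,$ then

```latex
% Classical Morse inequalities and the Euler characteristic identity:
% c_r(f) = number of index-r critical points, b_r(M;\kappa) = Betti numbers.
c_r(f) \;\geq\; b_r(M;\kappa) \quad \text{for all } r\geq 0,
\qquad
\sum_{r\geq 0} (-1)^{r}\, c_r(f) \;=\; \chi(M).
```

Here $b_r(M;\kappa)$ are the Betti numbers of $M$ with coefficients in a field $\kappa$ and $\chi(M)$ is the Euler characteristic; the critical points of $f$ thus bound the homology of $M$ from above, while the homology of $M$ forces a minimal number of critical points on any Morse function.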
\vskip .1in The AMN theory associates to a pair $(X,f),$ with $X$ a compact ANR and $f$ a continuous real or angle valued map defined on $X,$ and to a field $\kappa,$ a collection of invariants: the configurations $\delta^f_r, \ \hat \delta^f_r, \ \hat{\hat \delta}^f_r$ and the Jordan cells $\mathcal J_r(f), r\geq 0.$ The configuration $\delta^f_r$ is a finite collection of points with multiplicity located in $\mathbb C$ in case $f$ is real valued and in $\mathbb C\setminus 0$ in case $f$ is angle valued, and the configuration $\hat \delta^f_r$ is given by the same points but, instead of natural numbers as multiplicities, has $\kappa-$vector spaces or free $\kappa[t^{-1},t]-$modules assigned to them, where $\kappa[t^{-1},t]$ denotes the ring of Laurent polynomials with coefficients in $\kappa.$ A Jordan cell is a pair $(\lambda, k)$ with $\lambda$ a nonzero element in the algebraic closure of the field $\kappa$ and $k$ a positive integer. The pair $(\lambda,k)$ is an abbreviation for the $k\times k$ Jordan matrix $$T(\lambda, k)= \begin{pmatrix} \lambda&1&0&0&\cdots&0\\ 0&\lambda&1 &0&\cdots &0\\ \cdots \\ 0&0&\cdots& 0&\lambda& 1\\ 0&0&\cdots &0&0&\lambda\end{pmatrix}.$$ The configurations $\delta^f_r$ and the collections $\mathcal J_r(f), r\geq 0,$ are {\it computer friendly} in the sense that, for a simplicial complex and a simplicial map, they can be calculated by computer implementable algorithms. On one side these invariants refine basic algebraic topology invariants of $X$ and $(X;\xi_f)$ (Betti numbers or Novikov-Betti numbers, homology or Novikov homology, monodromy). On the other side they are close to the dynamical elements (rest points, instantons, closed trajectories) of a flow on $X$ which has $f$ as a Lyapunov map, and they permit one to detect the presence and to get information about the cardinality of such elements. The configuration $\delta^f_r$ is a configuration of points in the complex plane, each such point corresponding to a pair of critical values of $f$ (i.e. 
bar codes in the terminology of \cite {BD11}) whose multiplicity have homological interpretation. The configuration $\hat \delta^f$ is a configuration of vector spaces or modules indexed by complex numbers with the vector space or module $\hat \delta^f_r(z)$ of dimension or rank equal to $\delta^f_r(z)$ and specifying a piece of the homology $H_r(X)$ or Novikov homology $H^N_r(X, \xi_f).$ The Jordan cells $\mathcal J_r(f)$ are pairs $(\lambda, k)$ each providing a Jordan matrix which appears in the Jordan decomposition of the $r-$ monodromy of $\xi_f.$ In contrast with the classical Morse--Novikov theory concerned with critical points of $f,$ instantons and periodic orbits of $grad_g f$ for $X$ a smooth manifold and $f$ a Morse real or angle valued map, the configurations $\delta^f_r,$ $\hat \delta^f_r$ and the Jordan cells $\mathcal J_r,$ associated to $f$ in AMN-theory, --\ are defined for spaces $X$ and maps $f$ considerably more general than manifolds and Morse maps, --\ are computable by effective algorithms when $X$ is a finite simplicial complex and $f$ simplicial map, --\ enjoy robustness to $C^0-$ perturbation and satisfy Poincar\'e duality. This paper summarizes the definitions and the properties of the invariants $\delta^f_r, \hat\delta^f_r, \hat{\hat \delta}^f_r, \mathcal J_r(f) $ in AMN-theory and addresses only the first aspect of the theory, the algebraic topology aspect. It also indicates a few mathematical applications (section 6). The results are stated in Sextion 4. Details for the proofs are contained in \cite {B1}, \cite {B2}, \cite {B3} and partially in \cite {BH} where the computational aspects of these invariants are also addressed. \section{Preliminary definitions} \subsection {Configurations}\label {C} Let $X$ be a topological space and $\kappa$ a fixed field. 
A configuration of points in $X$ is a map $\delta:X\to \mathbb Z_{\geq 0}$ with finite support, and a configuration of $\kappa-$vector spaces or of free $\kappa[t^{-1},t]-$modules indexed by the points of $X$ is a map $\hat \delta$ defined on $X,$ with values $\kappa-$vector spaces or free $\kappa[t^{-1},t]-$modules, with finite support. A point $x\in X$ is in the support of $\delta$ if $\delta (x)\ne 0$ and in the support of $\hat\delta$ if $\hat\delta(x)$ is of dimension or of rank different from $0.$ The nonnegative integer $\sum_{x\in X} \delta(x)$ is referred to as the {\it cardinality} of $\delta.$ One denotes by $\mathcal C_N(X)$ the set of configurations of cardinality $N.$ One says that the configuration $\hat\delta$ refines the configuration $\delta$ if $\dim \hat\delta(x)= \delta(x)$ for all $x.$ If $\kappa= \mathbb C$ one can consider also configurations with values in Hilbert modules of finite type over a von Neumann algebra, in our discussion always $L^\infty(\mathbb S^1),$ the finite von Neumann algebra obtained by the von Neumann completion of the group ring $\mathbb C[\mathbb Z],$ which is exactly $\mathbb C[t^{-1},t].$ Let $V$ be a finite dimensional vector space over the field $\kappa,$ or a free finitely generated $\kappa[t^{-1},t]-$module, or a finite type Hilbert module over $L^\infty (\mathbb S^1).$ Consider the set $\mathcal P(V)$ of subspaces of $V,$ of split free submodules of $V,$ or of closed Hilbert submodules of $V,$ respectively. One denotes by $\mathcal C_V(X)$ the set of configurations with values in $\mathcal P(V)$ which satisfy the property that the induced map $I_{\hat{\hat\delta}}: \oplus _{x\in X} \hat{\hat\delta}(x)\to V$ is an isomorphism. An element of $\mathcal C_V(X)$ will be denoted by $\hat{\hat \delta}$ rather than $\hat \delta,$ to emphasize the additional properties. The sets $\mathcal C_N(X)$ and $\mathcal C_V (X)$ carry natural topologies, referred to as the {\it collision topology}.
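To make the definitions concrete, a configuration with finite support can be modelled as a finite dictionary of multiplicities. The following Python sketch (our illustration; the function names are ours, not notation from the text) computes the cardinality, the support, and the monic polynomial $P^\delta(z)=\prod_i(z-z_i)^{n_i}$ attached to a configuration of points in $\mathbb C$:

```python
# A configuration of points delta : X -> Z_{>=0} with finite support,
# modelled as a dictionary {point: multiplicity}.  Points are complex
# numbers, matching the case X = C.

def support(delta):
    """The points where delta is nonzero."""
    return {x for x, m in delta.items() if m > 0}

def cardinality(delta):
    """N = sum of the multiplicities; delta then lies in C_N(X)."""
    return sum(delta.values())

def monic_polynomial(delta):
    """Coefficients [1, c_{N-1}, ..., c_0] (descending powers) of
    P^delta(z) = prod_i (z - z_i)^{n_i}."""
    coeffs = [1 + 0j]
    for z_i, n_i in delta.items():
        for _ in range(n_i):
            new = [0j] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k] += c            # contribution of z * (current poly)
                new[k + 1] -= c * z_i  # contribution of -z_i * (current poly)
            coeffs = new
    return coeffs

# Example: the configuration with delta(1) = 2 and delta(-1) = 1.
delta = {1 + 0j: 2, -1 + 0j: 1}
assert cardinality(delta) == 3
assert support(delta) == {1 + 0j, -1 + 0j}
# (z - 1)^2 (z + 1) = z^3 - z^2 - z + 1
assert monic_polynomial(delta) == [1 + 0j, -1 + 0j, -1 + 0j, 1 + 0j]
```

Encoding a configuration by its monic polynomial in this way is exactly the identification of $\mathcal C_N(\mathbb C)$ with degree $N$ monic polynomials used in the Observation below.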
One way to describe these topologies is to specify for each $\delta$ or $\hat{\hat\delta}$ a system of {\it fundamental neighborhoods}. If $\delta$ has as support the set of points $\{x_1, x_2, \cdots, x_k\},$ a fundamental neighborhood $\mathcal U$ of $\delta$ is specified by a collection of $k$ disjoint open neighborhoods $U_1, U_2,\cdots, U_k$ of $x_1,\cdots, x_k,$ and consists of $\{\delta'\in \mathcal C_N(X)\mid \sum_{x\in U_i} \delta'(x)=\delta(x_i),\ i=1,\cdots, k\}.$ Similarly, if $\hat{\hat \delta}$ has as support the set of points $\{x_1, x_2, \cdots, x_k\}$ with $\hat{\hat\delta}(x_i)= V_i\subseteq V,$ a fundamental neighborhood $\mathcal U$ of $\hat{\hat \delta}$ is specified by a collection of $k$ disjoint open neighborhoods $U_1, U_2,\cdots, U_k$ of $x_1,\cdots, x_k,$ and consists of the configurations $\hat{\hat \delta}'$ which satisfy the following: a) for any $x\in U_i$ one has $\hat{\hat\delta}'(x)\subseteq V_i,$ b) $I_{\hat{\hat\delta}'} ( \oplus_{x\in U_i} \hat {\hat \delta}' (x))= V_i.$ \noindent Note that \begin{obs}\label {O211}\ \begin{enumerate} \item $\mathcal C_N(X)$ identifies to the $N-$fold symmetric product $X^N/\Sigma_N$ of $X$ \footnote {$\Sigma _N$ is the group of permutations of $N$ elements} and if $X$ is a metric space with distance $D$ then the collision topology is the same as the topology defined by the metric $\underline D$ on $X^N/\Sigma_N$ induced from the distance $D.$ This induced metric is referred to as the {\it canonical metric} on $\mathcal C_N(X).$ \item If $X= \mathbb C$ then $\mathcal C_N(X)$ identifies to the set of degree $N$ monic polynomials with complex coefficients, and if $X= \mathbb C\setminus 0$ to the set of degree $N$ monic polynomials with nonzero free coefficient. To the configuration $\delta$ whose support consists of the points $z_1, z_2, \cdots, z_k$ with $\delta(z_i)= n_i$ one associates the monic polynomial $P^\delta (z)= \prod _i (z- z_i)^{n_i}.$ Then as topological spaces $\mathcal C_N(\mathbb C)$ identifies to $\mathbb C^N$ and $\mathcal C_N(\mathbb C\setminus 0)$ to $\mathbb C^{N-1}\times (\mathbb C\setminus 0).$ \item If $X= \mathbb T:=\mathbb R^2/\mathbb Z,$ the quotient of $\mathbb R^2$ by the action $\mu (n, (a,b))= (a+2\pi n, b+2\pi n),$ then, since $\mathbb T$ can be identified with $\mathbb C\setminus 0$ by $\langle a, b\rangle \rightarrow e^{ia+(b-a)},$ the spaces $\mathcal C_N(\mathbb T)$ and $\mathcal C_N(\mathbb C\setminus 0)$ are homeomorphic. Here $\langle a, b\rangle$ denotes the $\mu-$orbit of $(a,b).$ \item The {\it canonical metric} $\underline D$ on $\mathcal C_N(\mathbb R^2)$ or $\mathcal C_N(\mathbb T)$ refers to the metric derived from the complete Euclidean metric $D$ on $\mathbb R^2$ or $\mathbb R^2/ \mathbb Z.$ Both these metrics are complete. Note that the standard metric on $\mathbb C\setminus 0$ is not complete, so although $\mathbb T$ and $\mathbb C\setminus 0$ are homeomorphic, and hence so are $\mathcal C_N(\mathbb T)$ and $\mathcal C_N(\mathbb C\setminus 0),$ when equipped with the canonical metrics they are not isometric. \end{enumerate} \end{obs} \subsection {Tame maps}\label {SS22} A space $X$ is an ANR if, whenever a closed subset $A$ of a metrizable space $B$ is homeomorphic to $X,$ the set $A$ has a neighborhood $U$ which retracts to $A$, cf \cite {Hu} chapter 3. Any space homeomorphic to a locally finite simplicial complex, to a finite dimensional topological manifold, or to an infinite dimensional manifold (i.e. a paracompact separable Hausdorff space locally homeomorphic to the infinite dimensional separable Hilbert space or to the Hilbert cube $[0,1]^\infty$\ \footnote{the product of countably many copies of the interval $[0,1]$}) is an ANR. \begin{enumerate} \item A continuous proper map $f:X\to \mathbb R,$ $X$ an ANR \footnote {This rules out infinite dimensional Hilbert manifolds}, is {\it weakly tame} if for any $t\in \mathbb R$ the level $f^{-1}(t)$ is an ANR.
Therefore for any bounded or unbounded closed interval $I$ the space $f^{-1}(I)$ is an ANR. \item The number $t\in \mathbb R$ is a {\it regular value} if there exists $\epsilon >0$ small s.t. for any $t'\in (t-\epsilon, t+\epsilon)$ the inclusion $f^{-1}(t' )\subset f^{-1}((t-\epsilon, t+\epsilon))$ is a homotopy equivalence. A number $t$ which is not a regular value is a {\it critical value}. In other words, the homotopy type of the $t-$level does not change in a neighborhood of a regular value and does change in any neighborhood of a critical value. One denotes by $Cr(f)$ the collection of critical values of $f.$ \item The map $f$ is called {\it tame} if it is weakly tame and in addition: i) the set of critical values $Cr(f)\subset \mathbb R$ is discrete, ii) the number $\epsilon (f):= \inf \{|c-c'| \mid c,c'\in Cr(f), c\ne c'\}$ satisfies $\epsilon(f)>0.$ If $X$ is compact then (i) implies (ii). \item An ANR for which the set of tame maps is dense in the space of all maps w.r.t. the fine $C^0-$topology is called a {\it good ANR}. There exist compact ANR's (actually compact homological $n-$manifolds) with no codimension one subsets which are ANR's, hence compact ANR's which are not {\it good}, cf \cite {DW}. \end{enumerate} The reader should be aware of the following rather obvious facts. \begin{obs}\label {O21}\ \begin{enumerate} \item If $f$ is a weakly tame map then the compact ANR $f^{-1}([a,b])$ has the homotopy type of a finite simplicial complex (cf \cite{Mi2}) and therefore has finite dimensional homology w.r.t. any field $\kappa.$ \item If $X$ is a locally finite simplicial complex and $f$ is linear on each simplex then $f$ is weakly tame with the set of critical values discrete. The critical values are among the values $f$ takes on vertices. If in addition $X$ is compact then $f$ is tame.
If $M$ is a smooth manifold and $f$ is a proper smooth map with all critical points of finite codimension, in particular if $f$ is a Morse map, then $f$ is weakly tame, and when $M$ is compact $f$ is tame. \item If $X$ is homeomorphic to a compact simplicial complex or to a compact topological manifold, the set of tame maps is dense in the set of all continuous maps equipped with the $C^0-$topology (= compact open topology). The same remains true if $X$ is a compact Hilbert cube manifold, defined in the next section. In particular all these spaces are good ANR's. \item On a smooth manifold the Morse functions are dense in the space of all continuous functions w.r.t. the fine $C^0-$topology and are generic in any $C^r-$topology, $r\geq 2.$ \end{enumerate} \end{obs} \subsection{Algebraic topology} Let $\kappa$ be a field. For an ANR $X$ denote by $H_r(X)$ the (singular) homology with coefficients in $\kappa;$ this is a $\kappa-$vector space which, when $X$ is compact, is finite dimensional by \cite{Mi2}.
Denote by $\beta_r(X):=\beta_r(X;\kappa)= \dim H_r(X),\ r \geq 0,$ referred to below as the $r-$th Betti number, and by $\chi(X)= \chi(X;\kappa)=\sum_r (-1)^r \beta_r(X)$ the Euler characteristic with coefficients in $\kappa.$ For a pair $(X, \xi\in H^1(X;\mathbb Z)),$ $X$ a compact ANR and $\xi$ a degree one integral cohomology class, consider $\pi: \tilde X\to X$ an infinite cyclic cover associated to $\xi$ (unique up to isomorphism), and let $\tau:\tilde X\to \tilde X$ be the generator of the group of deck transformations (the infinite cyclic group $\mathbb Z$). The space $\tilde X$ is a locally compact ANR and the $\kappa-$vector space $H_r(\tilde X)$ is a finitely generated $\kappa[t^{-1},t]-$module, with the multiplication by $t$ given by the isomorphism $T_r:H_r(\tilde X)\to H_r(\tilde X)$ induced by the homeomorphism $\tau.$ The submodule of torsion elements of $H_r(\tilde X),$ denoted by $V_r(X;\xi),$ when regarded as a $\kappa-$vector space is finite dimensional, and the $\kappa[t^{-1},t]-$module $ H_r(\tilde X)/ V_r(X;\xi)$ is free of finite rank. The isomorphism class of the $\kappa[t^{-1},t]-$module $V_r(X;\xi),$ equivalently of the pair $(V_r(X;\xi), T_r)$ with $V_r(X;\xi)$ viewed as a $\kappa-$vector space with a linear automorphism $T_r,$ is referred to as the $r-$th monodromy. The free $\kappa[t^{-1},t]-$module $H^N_r(X;\xi):= H_r(\tilde X)/ V_r(X;\xi)$ is referred to below as the Novikov homology in dimension $r,$ and its rank as the $r-$th Novikov--Betti number, denoted by $\beta^N_r(X;\xi).$ If $\kappa =\mathbb C$ is the field of complex numbers then the ring $\mathbb C[t^{-1},t],$ equivalently the group algebra $\mathbb C[\mathbb Z],$ has a canonical completion to the finite von Neumann algebra $L^\infty(\mathbb S^1)$ and the module $H^N_r( X;\xi)$ to a finite type $L^\infty(\mathbb S^1)-$Hilbert module, of von Neumann dimension $\beta^N_r(X;\xi).$ The completion of $H^N_r( X;\xi)$ is exactly the $L_2-$homology $H^{L_2}_r(\tilde X),$ cf \cite{B2}.
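A minimal worked example of the notions just introduced (ours, not from the text): take $X=\mathbb S^1$ and $\xi$ the generator of $H^1(\mathbb S^1;\mathbb Z).$

```latex
% Example (ours): X = S^1, xi the generator of H^1(S^1; Z).
The infinite cyclic cover is $\tilde X=\mathbb R$ with deck transformation
$\tau(x)=x+2\pi,$ which acts as the identity on $H_0(\mathbb R;\kappa)=\kappa,$
so, as a $\kappa[t^{-1},t]-$module,
\[
  H_0(\tilde X)\simeq \kappa[t^{-1},t]/(t-1).
\]
This module is all torsion: $V_0(X;\xi)=\kappa$ with monodromy
$T_0=\mathrm{id},$ i.e., the single Jordan cell $(\lambda,k)=(1,1).$
Since $H_r(\mathbb R)=0$ for $r>0,$ the Novikov homology vanishes:
\[
  H^N_r(X;\xi)=0, \qquad \beta^N_r(X;\xi)=0 \quad \text{for all } r.
\]
```

This already illustrates the general pattern: the torsion part of $H_r(\tilde X)$ carries the monodromy, while the free quotient is the Novikov homology.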
The completion of $H^N_r(X;\xi)$ is referred to as the von Neumann completion, and depends a priori on additional data, such as a Riemannian metric when $X$ is a compact smooth manifold, a triangulation when $X$ is a finite simplicial complex, or, more algebraically, an inner $\mathbb C[t^{-1}, t]-$product on $H^N_r(X;\xi);$ but all these data lead to isomorphic $L^\infty(\mathbb S^1)-$Hilbert modules, cf \cite{B2}. \vskip .1in \section {The configurations and the set of Jordan cells} Let $f:X\to \mathbb R$ be a proper continuous map, $X$ an ANR, and let $\kappa$ be a fixed field. Denote by: \vskip .1in --\ $X_a,$ the sub level $X_a: =f^{-1}((-\infty,a]),$ --\ $X^b,$ the super level $X^b:=f^{-1}([b,\infty)),$ --\ $\mathbb I^f_a(r):= \mathrm{img} (H_r(X_a)\to H_r(X)) \subseteq H_r(X),$ --\ $\mathbb I^b_f(r):= \mathrm{img} (H_r(X^b)\to H_r(X))\subseteq H_r(X),$ --\ $\mathbb F_r^f(a,b):= \mathbb I^f_a(r)\cap \mathbb I^b_f(r)\subseteq H_r(X),$ with $F^f_r(a,b):= \dim \mathbb F^f_r(a,b).$ \begin{obs}\label {O31}\ 1.\ If $a'\leq a$ and $b\leq b'$ then $\mathbb F_r^f(a',b')\subseteq\mathbb F_r^f(a,b).$ 2.\ If $a'\leq a$ and $b\leq b'$ then $\mathbb F_r^f(a',b)\cap \mathbb F_r^f(a,b')= \mathbb F_r^f(a',b').$ 3.\ $F^f_r(a,b)<\infty$ (cf \cite {B1} Proposition 3.4). 4.\ $\sup_{x\in X} |f(x)-g(x)| <\epsilon$ implies $\mathbb F^g_r(a-\epsilon, b+\epsilon) \subseteq\mathbb F^f_r(a,b).$ 5.\ If $f$ is weakly tame and the number $a\in \mathbb R$ is a {\it regular value} then there exists $\epsilon>0$ so that for any $0\leq t, t'<\epsilon$ the inclusions $\mathbb I^f_{(a-t)}(r) \subseteq \mathbb I^f_{(a+t')}(r)$ and $\mathbb I_f^{(a-t')}(r) \supseteq \mathbb I_f^{(a+t)}(r)$ are equalities for all $r$. \end{obs} \vskip .1in A set $B\subset \mathbb R^2$ of the form $B= (a',a]\times [b,b')$ with $a'<a, b<b'$ is called a {\it box}.
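To fix ideas, the inclusion--exclusion count $F^f_r(a,b)+F^f_r(a',b')-F^f_r(a',b)-F^f_r(a,b'),$ which the next paragraph assigns to a box $B=(a',a]\times[b,b'),$ can be tried out on a made-up dimension function. The Python sketch below is a hypothetical toy example of ours (not data from the text), chosen so that $F$ has the monotonicity of Observation 3.1:

```python
# Toy illustration of the box count for B = (a', a] x [b, b').
# F(a, b) plays the role of dim F^f_r(a, b): non-decreasing in a,
# non-increasing in b, as required by Observation 3.1 item 1.

def F(a, b):
    """Hypothetical dimension count for a map with a single homology
    class entering the sublevels at the critical value 1 and the
    superlevels at 0: the class lies in F^f_r(a, b) iff a >= 1, b <= 0."""
    return 1 if (a >= 1 and b <= 0) else 0

def box_count(a_prime, a, b, b_prime):
    """F^f_r(B) = F(a,b) + F(a',b') - F(a',b) - F(a,b')."""
    return F(a, b) + F(a_prime, b_prime) - F(a_prime, b) - F(a, b_prime)

# A box isolating the pair of critical values (1, 0) catches the class once:
assert box_count(0.5, 1.5, -0.5, 0.5) == 1
# A box away from that pair counts nothing:
assert box_count(2.0, 3.0, -0.5, 0.5) == 0
```

The second assertion shows why the alternating sum is the right count: away from a "corner" of the support, the four terms cancel in pairs.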
For $(a,b)\in \mathbb R^2$ and $\epsilon >0$ denote by $B(a,b;\epsilon)$ the box $B(a,b;\epsilon):= (a-\epsilon, a]\times [b, b+\epsilon).$ To the box $B= (a',a]\times [b,b')$ we assign the vector space $$\mathbb F^f_r(B):= \mathbb F^f_r(a,b)/ \left(\mathbb F^f_r(a',b)+\mathbb F^f_r(a,b')\right)$$ of dimension $$F^f_r(B): = \dim \mathbb F^f_r(B).$$ In view of Observation \ref{O31} item 3, $F^f_r(B) <\infty,$ and in view of Observation \ref{O31} item 2, $$F^f_r(B)= F^f_r(a,b) + F^f_r(a',b') - F^f_r(a',b)- F^f_r(a,b').$$ \vskip .1in \begin{figure} \begin{tikzpicture} \draw [<->] (0,3) -- (0,0) -- (3,0); \node at (-0.5,2.8) {y-axis}; \node at (2.9,-0.2) {x-axis}; \node at (1,0.3) {(a',b)}; \node at (2.9,0.3) {(a,b)}; \node at (3,2.3) {(a,b')}; \node at (1,2.3) {(a',b')}; \draw [<->] (0,-3) -- (0,0) -- (-3,0); \draw [thick] (1,0.5) -- (2.9,0.5); \draw [thick] (2.9,0.5) -- (2.9,2); \draw [dotted] (2.9,2) -- (1,2); \draw [dotted] (1,2) -- (1,0.5); \draw (0,0) -- (2.5,2.5); \draw (0,0) -- (-2.5,-2.5); \end{tikzpicture} \caption {The {\it box} $ B : =(a',a]\times [b,b')\subset \mathbb R^2$ } \end{figure} \newcommand{\mynewnewpicture}[1][ ]{ \begin{tikzpicture} [scale=1] \draw [dashed, ultra thick] (0,0) -- (0,3); \draw [line width=0.10cm] (5,0) -- (5,3); \draw [line width=0.10cm] (2,1) -- (2,3); \draw [line width=0.10cm] (0,1) -- (2,1); \draw [line width=0.10cm] (0,0) -- (5,0); \draw [dashed, ultra thick] (0,3) -- (5,3); \node at (1,2) {B'}; \node at (3.5,0.8) {B''}; \node at (2.5, -0.5) {Figure 4}; \end{tikzpicture} \hskip .5in \begin{tikzpicture} [scale=1] \draw [dashed, ultra thick] (0,0) -- (0,3); \draw [line width=0.10cm] (5,0) -- (5,3); \draw [dashed, ultra thick] (3,1.5) -- (5,1.5); \draw [dashed, ultra thick] (3,0) -- (3,1.5); \draw [line width=0.10cm] (0,0) -- (5,0); \draw [dashed, ultra thick] (0,3) -- (5,3); \node at (1,2) {B''}; \node at (3.5, 0.8) {B'}; \node at (2.5, -0.5) {Figure 5}; \end{tikzpicture}} \newcommand{\mynewpicture}[1][ ]{ \begin{tikzpicture} [scale=1] \draw [dashed,
ultra thick] (0,0) -- (0,3); \draw [line width=0.10cm] (5,0) -- (5,3); \draw [line width=0.10cm] (2,1) -- (2,3); \draw [line width=0.10cm] (0,1) -- (2,1); \draw [line width=0.10cm] (0,0) -- (5,0); \draw [dashed, ultra thick] (0,3) -- (5,3); \node at (1,2) {B'}; \node at (3.5,0.8) {B''}; \node at (2.5, -0.5) {Figure 4}; \end{tikzpicture} \hskip .5in \begin{tikzpicture} [scale=1] \draw [dashed, ultra thick] (0,0) -- (0,3); \draw [line width=0.10cm] (5,0) -- (5,3); \draw [dashed, ultra thick] (3,1.5) -- (5,1.5); \draw [dashed, ultra thick] (3,0) -- (3,1.5); \draw [line width=0.10cm] (0,0) -- (5,0); \draw [dashed, ultra thick] (0,3) -- (5,3); \node at (1,2) {B''}; \node at (3.5, 0.8) {B'}; \node at (2.5, -0.5) {Figure 5}; \end{tikzpicture}} \newcommand{\mypicture}[1][ ]{ \begin{tikzpicture} [scale=1] \draw [line width=0.10cm] (0,0) -- (5,0); \draw [line width=0.10cm] (5,0) -- (5,3); \draw [dashed, ultra thick] (0,3) -- (5,3); \draw [dashed, ultra thick] (0,0) -- (0,3); \draw [line width=0.10cm] (3,0) -- (3,3); \node at (1.5,1.5) {B1}; \node at (4,1.5) {B2}; \node at (2.5, -0.5){Figure 2}; \end{tikzpicture} \hskip .5in \begin{tikzpicture}[ ] [scale=0.8] \draw [line width=0.10cm] (7,0) -- (12,0); \draw [line width=0.10cm] (12,0) -- (12,3); \draw [dashed, ultra thick] (7,0) -- (7,3); \draw [dashed, ultra thick] (7,3) -- (12,3); \draw [line width=0.10cm] (7,1) -- (12,1); \node at (9.5,2) {B1}; \node at (9.5,0.5) {B2}; \node at(9.5, -0.5) {Figure 3}; \end{tikzpicture}} For $a''<a' <a$ and $b <b' <b''$, set $B'':=(a'',a]\times [b,b'')$ and $B':=(a',a]\times [b,b').$ The inclusion of vector spaces $ (\mathbb F^f_r(a'',b)+\mathbb F^f_r(a,b'')) \subseteq (\mathbb F^f_r(a',b)+\mathbb F^f_r(a,b'))$ induces the canonical surjective linear map $\pi^{B'}_{B'', r}: \mathbb F^f_r(B'') \to \mathbb F^f_r(B').$ \vskip .1in For $0 <\epsilon' < \epsilon$ consider $B(a,b;\epsilon ')\subset (a-\epsilon, a]\times [b, b+\epsilon')= B_1 \subset B(a,b; \epsilon)$ and $B(a,b;\epsilon ')\subset
(a-\epsilon', a]\times [b, b+\epsilon)= B_2 \subset B(a,b; \epsilon).$ One has \hskip 1.5in $\pi^{\epsilon'}_{\epsilon,r} = \pi _{B_1,r}^{B(a,b; \epsilon')}\cdot \pi _{B(a,b;\epsilon),r}^{B_1} = \pi _{B_2,r}^{B(a,b; \epsilon')}\cdot \pi _{B(a,b;\epsilon),r}^{B_2}.$ \vskip .1in Consider the diagram \hskip .5in $\xymatrix { &\mathbb F^f_r(a,b)\ar [ld]_{\pi^{\epsilon}_{(a,b),r}} \ar[rd]^{\pi^{\epsilon'}_{(a,b),r}} &\\ \mathbb F^f_r(B(a,b;\epsilon))\ar[rr]^{\pi_\epsilon ^{\epsilon'}}&&\mathbb F^f_r(B(a,b;\epsilon')) }$ \noindent and denote by $\hat\delta^f_r(a,b)$ and $\pi_r(a,b)$ the vector space $$\boxed{\hat \delta^f_r(a,b) = \varinjlim_{\epsilon\to 0} \mathbb F^f_r(B(a,b;\epsilon))}$$ and the surjective linear map $$\boxed{\pi_r (a,b): \mathbb F^f_r(a,b) \to \hat \delta^f_r(a,b), \quad \pi_r(a,b) = \varinjlim_{\epsilon\to 0} \pi_{(a,b)}^\epsilon}.$$ The space $\hat \delta^f_r(a,b)$ is of finite dimension, since $\mathbb F^f_r(a,b)$ is, and one denotes this dimension by $$ \boxed{\delta^f_r(a,b)= \dim \hat \delta^f_r(a,b)}.$$ \vskip .1in In view of Observation \ref{O31} item 4 one proposes the following definition. \begin{definition}\label {D1}\ A real number $t$ is a {\bf homologically regular value} (w.r.t. the field $\kappa$) if there exists $\epsilon (t) >0$ s.t. for any $0<\epsilon <\epsilon(t)$ the inclusions $\mathbb I_{t-\epsilon}(r)\subseteq\mathbb I_t(r)\subseteq \mathbb I_{t+\epsilon}(r)$ and $\mathbb I^{t-\epsilon}(r)\supseteq\mathbb I^t(r)\supseteq \mathbb I^{t+\epsilon}(r)$ are equalities, and a {\bf homologically critical value} if it is not a homologically regular value; let $CR(f)$ be the set of homologically critical values. \end{definition} By Observation \ref{O31} item 5, $f$ weakly tame implies $CR(f)\subseteq Cr(f).$ \begin{obs} \label {P42}(cf \cite {B1})\ If $X$ is an ANR and $f$ is a continuous proper map then $CR(f)$ is a discrete set.
If $\delta^f_r(a,b)\ne 0$ then $a,b\in CR(f).$ If $f$ is tame and $\delta^f_r(a,b)\ne 0$ then $\hat \delta^f_r(a,b)= \mathbb F^f_r(B(a,b;\epsilon))$ for any $\epsilon <\epsilon (f).$\footnote {This observation holds also for $f$ continuous, with the appropriate definition of $\epsilon (f).$} \end{obs} \vskip .1in {\bf The configurations in the case of a real valued map} \vskip .1in Suppose $f:X\to \mathbb R$ with $X$ a compact ANR and $f$ continuous. The assignment $\mathbb R^2\ni (a,b) \rightsquigarrow \delta^f_r(a,b)$ defined above is a {\it configuration of points} in $\mathbb R^2\equiv \mathbb C,$ which determines and is determined by a monic polynomial $P^f_r(z)$ whose roots are the points in the support of $\delta^f_r,$ with multiplicities the values of $\delta^f_r,$ and the assignment $\hat\delta^f_r$ is a {\it configuration of vector spaces} which refines $\delta^f_r.$ If $\kappa= \mathbb R$ or $\mathbb C$ and $H_r(X)$ is equipped with a scalar product then the canonical splitting $s_r(a,b): \hat\delta^f_r(a,b)\to \mathbb F^f_r(a,b)$ of $\pi_r(a,b):\mathbb F^f_r(a,b) \to \hat\delta^f_r(a,b),$ given by the orthogonal complement of $\ker \pi_r(a,b),$ realizes $\hat\delta^f_r(a,b)$ as a subspace $$\boxed{\hat{\hat \delta }^f_r(a,b):= s_r(a,b)(\hat\delta ^f_r(a,b))\subseteq \mathbb F^f_r(a,b)\subseteq H_r(X)}.$$ It turns out that the points $(a,b)\in \mathrm{supp}\ \delta^f_r$ with $a\leq b$ are exactly the closed $r-$bar codes $[a,b],$ and those with $a>b$ are exactly the $(r-1)-$open bar codes $(b,a),$ defined in \cite{CSD09} and \cite{BD11} for the level persistence of $f.$ \vskip .1in Note: One can view the configurations $\delta^f_r$ and $\hat{\hat \delta} ^f_r$ in analogy with the configuration $\delta^T$ of eigenvalues with multiplicity, and the configuration $\hat{\hat \delta}^T$ of corresponding generalized eigenspaces, associated to a linear map $T:V\to V,$ $V$ a finite dimensional complex vector space.
The comparison reveals remarkable similarities, which deserve to be inspected in the case of a compact smooth Riemannian manifold and a Morse function. \vskip .1in {\bf The configurations in the case of an angle valued map} \vskip .1in Suppose $f:X\to \mathbb S^1$ with $X$ compact. Let $\tilde f: \tilde X\to \mathbb R$ be an infinite cyclic cover of $f,$ and consider the homeomorphism $\tau : \tilde X\to \tilde X$ provided by the positive generator of the group of deck transformations $\mathbb Z;$ hence $\tilde f\cdot \tau = \tilde f + 2\pi.$ The map $\tau$ induces the isomorphism $T_r: H_r(\tilde X)\to H_r(\tilde X)$ which restricts to $T_r: \mathbb F_r^{\tilde f}(a,b)\to \mathbb F^{\tilde f}_r(a+2\pi, b+2\pi)$ and induces the isomorphism $T_r:\hat \delta^{\tilde f}_r(a,b)\to \hat\delta^{\tilde f}_r(a+2\pi, b+2\pi).$ \vskip .1in Consider the quotient space $\mathbb T:= \mathbb R^2/\mathbb Z$ identified with $\mathbb C\setminus 0$ by $\langle a, b\rangle \rightarrow e^{ia + (b-a)},$ cf subsection 2.1, and define $$\boxed{\delta^f_r (\langle a,b\rangle)= \delta^f_r(z):= \delta^{\tilde f}_r(a,b)},$$ $$\boxed {\hat \delta ^f_r(\langle a,b\rangle)= \hat \delta ^f_r(z):= \oplus_{n\in \mathbb Z} \hat\delta^{\tilde f}_r(a+2n\pi, b+2n\pi)}$$ and $$T_r(\langle a, b\rangle)= \oplus _{n\in \mathbb Z} T_r(a +2n\pi, b+2n\pi) : \hat\delta ^f_r(\langle a, b\rangle)\to \hat\delta ^f_r(\langle a, b\rangle).$$ The pair $({\hat \delta}^f_r(\langle a, b\rangle), T_r (\langle a, b\rangle))$ defines a $\kappa[t^{-1}, t]-$module which is free. It turns out that the points $e^{ia +(b-a)} \in \mathrm{supp}\ \delta^f_r$ with $a\leq b, a\in [0, 2\pi)$ are exactly the closed $r-$bar codes $[a,b],$ and those with $a>b, a\in [0,2\pi)$ are exactly the $(r-1)-$open bar codes $(b,a),$ defined in \cite{BD11}.
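The identification $\mathbb T=\mathbb R^2/\mathbb Z\to \mathbb C\setminus 0,$ $\langle a,b\rangle \mapsto e^{ia+(b-a)},$ used above to locate the support of $\delta^f_r,$ is easy to check numerically. A small Python sketch (our illustration; the function name is ours):

```python
import cmath
import math

# The map <a, b> -> e^{ia + (b - a)} from T = R^2/Z to C \ {0}.

def to_point(a, b):
    """Send the mu-orbit <a, b> to e^{ia} * e^{b - a}."""
    return cmath.exp(1j * a + (b - a))

two_pi = 2 * math.pi

# Well defined on orbits: (a, b) and (a + 2*pi, b + 2*pi) agree.
z1 = to_point(1.0, 2.5)
z2 = to_point(1.0 + two_pi, 2.5 + two_pi)
assert abs(z1 - z2) < 1e-9

# The modulus records the bar length b - a, the argument the angle a.
assert abs(abs(z1) - math.exp(1.5)) < 1e-9
assert abs(cmath.phase(z1) - 1.0) < 1e-9
```

In particular closed bar codes $[a,b]$ ($b\geq a$) land on or outside the unit circle and open bar codes land inside it, since $|z|=e^{b-a}.$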
\vskip .1in As already pointed out in subsection 2.3, when $\kappa = \mathbb C$ the algebra $\mathbb C[t^{-1},t]$ can be canonically completed to the finite von Neumann algebra $L^\infty (\mathbb S^1).$ Additional data (for example a $\mathbb C[t^{-1},t]-$inner product on $H^N_r(X;\xi),$ or a Riemannian metric on $X$ when $X$ is a Riemannian manifold, or a triangulation of $X$ when $X$ is a simplicial complex) lead to a completion of $H^N_r(X;\xi)$ as a Hilbert $L^\infty(\mathbb S^1)-$module, the $L_2-$homology $H^{L_2}_r(\tilde X),$ and of $\hat \delta^f_r(\langle a, b\rangle)$ as a closed Hilbert submodule of $H^{L_2}_r(\tilde X).$ The procedure of such completions is described in \cite {B2} section 2 and called the {\it von Neumann completion}. The assignments $\delta^f_r,$ $\hat \delta^f_r,$ and $\hat{\hat\delta}^f_r$ are configurations of points with multiplicities, of free $\kappa[t^{-1},t]-$modules, and of $L^{\infty}(\mathbb S^1)-$Hilbert modules, respectively. \vskip .2in {\bf The Jordan cells for an angle valued map} \vskip .1in For $f:X\to \mathbb S^1$ tame and $\theta\in \mathbb S^1$ denote by $X_\theta:=f^{-1}(\theta)$ and by $\overline X_\theta$ the two sided compactification of $f^{-1} (\mathbb S^1\setminus \theta)$ by two copies of $f^{-1}(\theta),$ called in \cite{B3} the {\it cut of $f$ at $\theta$}.
The space $\overline X_\theta$ is homeomorphic to the compact space $\tilde f^{-1}([t, t+2\pi])$ for any $t\in \mathbb R$ with $p(t)=\theta,$ where $p:\mathbb R\to \mathbb S^1$ denotes the covering projection. The inclusions $$\xymatrixcolsep{4pc}\xymatrix{X_\theta= \tilde f^{-1}(t)\ar[r]^{\subset}&\overline X_\theta = \tilde f^{-1}([t, t+2\pi]) & X_\theta= \tilde f^{-1}(t+2\pi)\ar[l]_{\supset}}$$ induce in homology the linear maps $$ \xymatrix{H_r(X_\theta)\ar[r]^{a}& H_r(\overline X_\theta)& H_r(X_\theta)\ar[l]_b}$$ which can be regarded as a {\it linear relation}, cf \cite {BH}, \cite {B3}, or as a representation of the oriented graph $G_2.$ The oriented graph $G_2$ has two vertices $v, w$ and two oriented edges from $v$ to $w,$ denoted by $\alpha$ and $\beta,$ as indicated below \hskip 2.5 in \xymatrixcolsep{5pc} \xymatrix{v \ar@/^/[r]^\alpha \ar@/_/[r]_\beta & w}. A linear representation $\rho$ of $G_2$ is provided by two finite dimensional vector spaces $V$ and $W$ associated to $v$ and $w$ and two linear maps $a,b: V\to W$ associated to the edges $\alpha, \beta$. The concepts of isomorphism of representations, direct sum of representations and indecomposable representation are obvious and, as in the case of an arbitrary finite oriented graph, each representation has a decomposition as a sum of a unique (up to isomorphism) collection of indecomposables; the decomposition itself is not unique. If $\kappa$ is algebraically closed the list of {\it indecomposables} can be recovered from an old theorem of Kronecker (a proof of Kronecker's theorem can be found in \cite {Be}) and is provided below.
\begin {enumerate} \item The representation denoted by $\rho^+(r)$ has $V=\kappa^{r}, W=\kappa^{r+1},$ $a=\begin{bmatrix} Id_r\\0 \end{bmatrix},$ $b=\begin{bmatrix} 0\\Id_r \end{bmatrix}.$ \item The representation denoted by $\rho^-(r)$ has $V=\kappa^{r+1}, W=\kappa^{r},$ $a=\begin{bmatrix} Id_r&0 \end{bmatrix}, $ $b=\begin{bmatrix} 0&Id_r \end{bmatrix}.$ \item The representation denoted by $(\lambda, k),$ called a Jordan cell, has $V=\kappa^k, W=\kappa^{k},$ $a= T(\lambda,k),$ $b= Id_k.$ \end{enumerate} One defines $\mathcal J_r(f,\theta)$ as the collection of the Jordan cells appearing in the decomposition of the $G_2-$representation given by $$ \xymatrix{H_r(X_\theta)\ar[r]^{a}& H_r(\overline X_\theta)& H_r(X_\theta)\ar[l]_b}.$$ \section {The results} As noticed in Section 2, the configuration $\delta^f_r$ defined in Section 3 can equally be regarded as a monic polynomial $P^f_r(z)$ whose zeros are the complex numbers $z\in \mathrm{supp}\ \delta^f_r$ with multiplicities equal to $\delta ^f_r(z).$ \vskip .1in {\bf Results about real valued maps} \begin{theorem} (Topological results) \label {T1}\ Suppose $X$ is compact and $f:X\to \mathbb R$ is continuous. Then the following hold. \begin{enumerate} \item If $P^f_r(z)=0,$ equivalently $\delta^f_r(z)\ne 0,$ with $z=a+i b,$ then $a,b \in CR(f).$ \item The configuration $\delta^f_r\in \mathcal C_{\dim H_r(X)}(\mathbb C),$ the configuration $\hat \delta^f_r$ satisfies $\oplus_{z\in \mathbb C} \hat \delta^f_r(z)\simeq H_r(X),$ and if $\kappa= \mathbb R$ or $\mathbb C$ and $H_r(X)$ is equipped with a Hilbert space structure (i.e.
a scalar product) then the configuration $\hat{\hat \delta} ^f_r\in \mathcal C_{H_r(X)}(\mathbb C)$ and satisfies $\hat{\hat\delta}^f_r(z)\perp \hat{\hat\delta}^f_r(z')$ for $z\ne z'.$ \item For $f$ in an open and dense subset of the space of continuous real valued maps equipped with the compact open topology one has $\delta_r^f(z)= 0$ or $1.$ \end{enumerate} \end{theorem} \vskip .1in \begin{theorem} (Stability)\label{T2}\ Suppose $X$ is a compact ANR. 1. The assignment $ f \rightsquigarrow \delta^f_r$ provides a continuous map from the space of real valued maps equipped with the compact open topology to the space of configurations $\mathcal C_{b_r} (\mathbb R^2) = \mathcal C_{b_r}(\mathbb C)\simeq \mathbb C^{b_r},$\ $ b_r=\dim H_r(X),$ equipped with the collision topology, equivalently to the space of monic polynomials of degree $b_r.$ Moreover, with respect to the canonical metric $\underline D$ (cf Observation \ref {O211}) on the space of configurations $\mathcal C_{b_r}(\mathbb R^2),$ and the metric $D(f,g):= || f-g||_\infty= \sup _{x\in X} |f(x)- g(x)|$ on the space of continuous maps, one has $$\underline D (\delta^f_r , \delta^g_r) < 2 D(f,g).$$ 2. If $\kappa= \mathbb R$ or $\mathbb C,$ and the $H_r(X)$ are equipped with scalar products, then the assignment $f\rightsquigarrow \hat{\hat \delta}^f_r$ is also continuous, provided that $\mathcal C_{H_r(X)}(\mathbb C)$ is equipped with the collision topology described in subsection 2.1. \end{theorem} \vskip .1in \begin{theorem} (Poincar\'e Duality) \label{T3}\ Suppose $X$ is a closed topological manifold of dimension $n$ which is $\kappa-$orientable and $f: X\to \mathbb R$ is a continuous map. Then the following hold. 1. $\delta^f_r(a,b)= \delta^{f}_{n-r}(b,a).$ 2.
If $\kappa= \mathbb R$ or $\mathbb C$ and the vector spaces $H_r(X)$ are equipped with scalar products, then the canonical isomorphism induced by Poincar\'e duality and the scalar products, $PD_r: H_r(X)\to H_{n-r}(X),$ intertwines the configurations $\hat{\hat \delta}^f_r$ and $\hat{\hat\delta}^{f}_{n-r}\cdot \tau,$ where $\tau (a,b)= (b,a).$ In particular if $X$ is a closed Riemannian manifold, hence $H_r(X)$ identifies with the space of harmonic $(n-r)-$differential forms, then the Hodge star operator intertwines $\hat{\hat \delta}^f_r$ with $\hat{\hat\delta}^{f}_{n-r}\cdot \tau.$ \end{theorem} \vskip .1in {\bf Results about angle valued maps} \vskip .1in Let $f:X\to \mathbb S^1$ be a continuous map, $X$ a compact ANR, and let $\xi:= \xi_f\in H^1(X;\mathbb Z)$ be the integral cohomology class represented by $f.$ Let $\tilde X\to X$ be an infinite cyclic cover associated to $\xi.$ If $\kappa=\mathbb C$ let $H_r^{L_2}(\tilde X)$ be the von Neumann completion of $H^N_r(X;\xi)$ as described in \cite{B2}. \begin{theorem} (Topological results) \label {T4}\ Suppose $X$ is a compact ANR and $f:X\to \mathbb S^1$ a continuous map. Then the following hold. \begin{enumerate} \item If $P^f_r(z)=0,$ equivalently $\delta_r^f(z) \ne 0,$ with $z= e^{ia +(b-a)},$ then $e^{ia}, e^{ib} \in CR(f)$ $(e^{ia}, e^{ib}\in \mathbb S^1).$ \item The configuration $ \delta^f_r\in \mathcal C_{\beta^N_r(X;\xi_f)} (\mathbb C\setminus 0),$ the configuration $\hat \delta^f_r$ satisfies $\oplus_z \hat\delta^f_r(z)\simeq H^N_r(X;\xi),$ and if $\kappa =\mathbb C$ then the configuration $\hat{\hat \delta} ^f_r\in \mathcal C_{H^{L_2}_r(\tilde X)}(\mathbb C\setminus 0)$ and satisfies $\hat{\hat \delta} ^f_r(z)\perp \hat{\hat \delta} ^f_r(z')$ for $z\ne z'.$
\item If $C_\xi(X, \mathbb S^1)$ denotes the set of continuous maps in the homotopy class determined by $\xi,$ equipped with the compact open topology, then for $f$ in an open and dense subset of $C_\xi(X,\mathbb S^1)$ one has $\delta^f_r(z)= 0$ or $1.$ \end{enumerate} \end{theorem} \vskip .1in \begin{theorem} (Stability)\label{T5} Suppose $X$ is a compact ANR and $\xi\in H^1(X;\mathbb Z).$ Then the following hold. 1. The assignment \hskip .7in $ C_\xi(X, \mathbb S^1) \ni f \rightsquigarrow \delta^f_r\in \mathcal C_{\beta^N_r(X;\xi)}(\mathbb C\setminus 0)\equiv \mathcal C_{\beta^N_r(X;\xi)}(\mathbb T),$ equivalently \hskip .9in $C_\xi(X, \mathbb S^1) \ni f \rightsquigarrow P_r^f(z)\in \mathbb C^{\beta_r^N(X;\xi)-1}\times (\mathbb C\setminus 0),$ provides a continuous map from $C_\xi(X, \mathbb S^1),$ the set of continuous maps in the homotopy class determined by $\xi$ equipped with the compact open topology, to the space of configurations $\mathcal C_{\beta^N_r(X;\xi)}(\mathbb C\setminus 0),$ equivalently to $\mathbb C^{\beta_r^N(X;\xi)-1}\times (\mathbb C\setminus 0).$ Moreover, with respect to the canonical metric $\underline D$ on $\mathcal C_{\beta^N_r(X;\xi)}(\mathbb T)$ and the complete metric $D$ on the space $C_\xi(X, \mathbb S^1)$ given by $D(f,g):= \sup _{x\in X} d(f(x),g(x)),$ $d$ the distance on $\mathbb S^1= \mathbb R/ 2\pi \mathbb Z,$ one has $$\underline D (\delta^f_r , \delta^g_r) < 2\pi D(f,g).$$ 2. If $\kappa= \mathbb C$ and the space of configurations $\mathcal C_{H_r^{L_2}(\tilde X)}(\mathbb C\setminus 0)$ is equipped with the collision topology then the assignment $f \rightsquigarrow \hat{\hat \delta}^f_r$ is continuous. \end{theorem} \begin{theorem} (Poincar\'e Duality) \label{T6} Suppose $M$ is a closed topological manifold of dimension $n$ which is $\kappa-$orientable and $f: M\to \mathbb S^1$ is a continuous map.
Then one has \begin{enumerate} \item $\delta^f_r(\langle a,b\rangle )= \delta^{f}_{n-r}(\langle b, a\rangle ),$ equivalently $\delta^f_r(z)= \delta^f_{n-r}(\tau z)$ with $\tau(z)= z^{-1} e^{i\ln |z|}.$ Here $\langle a, b \rangle$ denotes the element of $\mathbb T$ represented by $(a,b)\in \mathbb R^2.$ \item If $\kappa=\mathbb C$ and $M$ is a closed Riemannian manifold then the canonical isomorphism of $H^{L_2}_r(\tilde M)$ to $H^{L_2}_{n-r}(\tilde M)$ induced by the Riemannian metric (via $L_2$ harmonic forms and the Hodge star operator) intertwines the configuration $\hat{\hat \delta}^f_r$ and $\hat{\hat \delta}^f_{n-r}\cdot \tau$ when regarded as configurations on $\mathbb R^2/\mathbb Z =\mathbb T.$ \end{enumerate} \end{theorem} In Section 3, for a weakly tame map $f:X\to \mathbb S^1$ and an angle $\theta\in \mathbb S^1,$ we defined the collection of Jordan cells $\mathcal J_r(f,\theta),$ all computable by effective algorithms. They have the following properties. \begin{proposition}\label {P37}\ \begin{enumerate} \item If $f:X\to \mathbb S^1$ is a weakly tame map then the set $\mathcal J_r(f,\theta)$ is independent of $\theta,$ so the notation $\mathcal J_r(f,\theta)$ can be abbreviated to $\mathcal J_r(f).$ \item If $f_1:X_1\to \mathbb S^1$ and $f_2:X_2\to \mathbb S^1$ are two weakly tame maps and $\omega:X_1\to X_2$ a homeomorphism s.t. $f_2\cdot \omega$ and $f_1$ are homotopic then $\mathcal J_r(f_1)= \mathcal J_r(f_2).$ \end{enumerate} \end{proposition} This permits us to define for any pair $(X,\xi)$ with $X$ a space homotopy equivalent to a compact ANR and $\xi\in H^1(X; \mathbb Z)$ the invariant $\mathcal J_r(X,\xi)$ by $\mathcal J_r(X,\xi):=\mathcal J_r(f)$ where $f:Y\to \mathbb S^1$ is a simplicial map defined on the simplicial complex $Y$ homotopy equivalent to $X$ by a homotopy equivalence $\omega:X\to Y$ s.t. $f\cdot \omega$ represents $\xi.$ In view of the discussion on the topology of compact Hilbert cube manifolds such pairs $(Y, \omega)$ exist.
The invariant $\mathcal J_r(X;\xi)$ satisfies the following. \begin{theorem}\label {T7}\ \begin{enumerate} \item If $\omega:X_1\to X_2$ is a homotopy equivalence s.t. $\omega^\ast (\xi_2)= \xi_1,$ \ \ $\xi_1\in H^1(X_1,\mathbb Z),$ \ $\xi_2\in H^1(X_2,\mathbb Z),$ and $X_1$ and $X_2$ have the homotopy type of a compact ANR then $\mathcal J_r(X_1,\xi_1)= \mathcal J_r(X_2, \xi_2).$ \item If $X$ is a compact ANR then $\mathcal J_r(X,\xi)$ are exactly the Jordan cells of the monodromy $$T_r(X,\xi): V_r(X,\xi) \to V_r(X,\xi).$$ \end{enumerate} \end{theorem} \vskip .1in Introduce the set $$\mathcal J_r(X;\xi)(u):= \{ (\lambda, k)\in \mathcal J_r(X;\xi) \mid \lambda=u\}$$ and for a finite set $S$ denote by $\sharp S$ the cardinality of $S.$ For any field $\kappa$ one has the following relation between Betti numbers, Novikov--Betti numbers and Jordan cells. \begin{theorem}\ $\beta_r(X)= \beta^N_r(X,\xi) + \sharp \mathcal J_r(X,\xi)(1) + \sharp \mathcal J_{r-1}(X,\xi)(1).$ \end{theorem} \section {About the proof} The proofs of Theorems \ref{T1}, \ref{T2}, \ref{T3} are contained partially in \cite{BH} and, as stated, in \cite{B1}; those of Theorems \ref{T4}, \ref{T5}, \ref{T6} partially in \cite{BH} and, as stated, in \cite{B2}; and those of Proposition \ref{P37} and Theorem \ref{T7} in \cite{BH} and \cite{B3}. The proofs are done first for nice spaces (homeomorphic to simplicial complexes) and tame maps and then extended to an arbitrary compact ANR and arbitrary continuous map based on results on compact Hilbert cube manifolds as summarized in Theorem \ref{T54} below. As far as the first step is concerned the following propositions of various levels of complexity are essential intermediate results whose proofs are contained in \cite{B1}. \begin{proposition}\label {P1}\ Let $a'<a<a'',$ $b< b''$ and $B_1,$ $B_2,$ and $B$ the boxes $B_1= (a',a]\times [b,b''),$ $B_2= (a,a'']\times [b,b'')$ and $B= (a',a'']\times [b,b'') $ (see Figure 2). 1.
The inclusions $B_1\subset B$ and $B_2\subset B$ induce the linear maps $i_{B_1,r}^B: \mathbb F_r(B_1)\to \mathbb F_r(B)$ and $\pi_{B,r}^{B_2}: \mathbb F_r(B)\to \mathbb F_r(B_2)$ such that the following sequence is exact $$\xymatrix{0\ar [r]& \mathbb F_r(B_1)\ar[r]^{i_{B_1,r}^B} & \mathbb F_r(B)\ar[r]^{\pi_{B,r}^{B_2}} & \mathbb F_r(B_2)\ar[r]& 0}.$$ 2. If $\kappa = \mathbb R$ or $\mathbb C$ and $H_r(X)$ is equipped with a scalar product, hence the $\mathbb F_r(B)$'s are canonically realized as subspaces ${\bf H}_r(B)\subseteq H_r(X),$ then $${\bf H}_r(B_1) \perp {\bf H}_r(B_2)$$ and $${\bf H}_r(B)= {\bf H}_r(B_1) + {\bf H}_r(B_2).$$ \end{proposition} \begin{proposition}\label{P2}\ Let $a'<a,$ $b'<b< b''$ and $B_1,$ $B_2,$ and $B$ the boxes $B_1= (a',a]\times [b,b''),$ $B_2= (a',a]\times [b',b)$ and $B= (a',a]\times [b',b'') $ (see Figure 3). 1. The inclusions $B_1\subset B$ and $B_2\subset B$ induce the linear maps $i_{B_1,r}^B: \mathbb F_r(B_1)\to \mathbb F_r(B)$ and $\pi_{B,r}^{B_2}: \mathbb F_r(B)\to \mathbb F_r(B_2)$ such that the following sequence is exact $$\xymatrix{0\ar [r]& \mathbb F_r(B_1)\ar[r]^{i_{B_1,r}^B} & \mathbb F_r(B)\ar[r]^{\pi_{B,r}^{B_2}} & \mathbb F_r(B_2)\ar[r]& 0}.$$ 2. If $\kappa= \mathbb R$ or $\mathbb C$ and $H_r(X)$ is equipped with a scalar product then $${\bf H}_r(B_1) \perp {\bf H}_r(B_2)$$ and $${\bf H}_r(B)= {\bf H}_r(B_1) + {\bf H}_r(B_2).$$ \end{proposition} \mypicture {} \vskip .1in \begin{proposition} \label {P08} (cf.\ \cite{BH}, Proposition 5.6) Let $f:X\to \mathbb R$ be a tame map and $\epsilon <\epsilon(f)/3.$ For any map $g:X\to \mathbb R$ which satisfies $\| f- g \|_\infty <\epsilon$ and any critical values $a,b\in Cr(f)$ one has \begin{equation}\label{E3} \quad \sum_{x\in D(a,b;2\epsilon)} \delta^g_r(x)= \delta ^f_r(a,b), \end{equation} \begin{equation} \label{E40} \quad \quad \mathrm{supp}\, \delta^{g}_r\subset \bigcup_{(a,b)\in \mathrm{supp}\, \delta^{f}_r} D(a,b;2\epsilon).
\end{equation} If $\kappa= \mathbb R$ or $\mathbb C$ and in addition $H_r(X)$ is equipped with a scalar product the above statement can be strengthened to \begin{equation}\label{E5} x\in D(a,b;2\epsilon)\Rightarrow \hat{\hat \delta}^g_r(x)\subseteq \hat{\hat \delta} ^f_r(a,b), \ \quad \oplus_{x\in D(a,b;2\epsilon)}\hat \delta^g_r(x)= \hat \delta ^f_r(a,b). \end{equation} \end{proposition} \vskip .1in Theorems \ref{T1} and \ref{T3} follow essentially from the first two propositions which imply that $\mathbb F_r$ is a measure on the sigma algebra generated by {\it boxes} with $\delta^f_r$ the {\it measure density}. Theorems \ref{T2} and \ref{T5}, in case the source of the map $f$ is a simplicial complex, use essentially Proposition \ref{P08}, and Theorems \ref{T3} and \ref{T6} use manipulations of Poincar\'e duality and an alternative definition of $\hat \delta^f_r,$ cf.\ \cite{B1}. In the case of Theorem \ref{T6} a more elaborate manipulation involving Poincar\'e duality for the open manifold $\tilde M,$ the infinite cyclic cover of $f: M\to \mathbb S^1,$ and the description of the torsion of the $\kappa[t^{-1},t]$-module $H_r(\tilde M)$ are needed, cf.\ \cite{B2}. The proof of Theorem \ref{T7} involves the recognition of what in \cite{BH} and \cite{B3} is referred to as the {\it regular part of the linear relation} defined by the pair of linear maps $$ \xymatrix{H_r(X_\theta)\ar[r]^{a}& H_r(\overline X_\theta)& H_r(X_\theta)\ar[l]_b}.$$ Concerning the results about compact Hilbert cube manifolds used in this work, recall the following. The Hilbert cube $Q$ is the infinite product $Q = \prod_{i\in \mathbb Z_{\geq 0}} I_i$ with $I_i= [0,1];$ its topology is also given by the metric $d(\overline u ,\overline v)= \sum _i |u_i- v_i|/ 2^i$ with $\overline u= \{u_i\in I_i, i\in \mathbb Z_{\geq 0}\}$ and $\overline v= \{v_i\in I_i, i\in \mathbb Z_{\geq 0}\}.$ The space $Q$ is a compact ANR and so is any $X\times Q$ for $X$ any compact ANR.
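As a side illustration (ours, not part of the paper), the metric above is elementary to evaluate: for two points of $Q$ that agree in all but finitely many coordinates the series reduces to a finite sum, and truncating at index $N$ changes the distance by at most $2^{1-N}$. A minimal Python sketch:

```python
from fractions import Fraction

def hilbert_cube_distance(u, v):
    # d(u, v) = sum_i |u_i - v_i| / 2^i on Q = prod_i [0, 1]; u and v are
    # given as finite tuples, i.e. points agreeing in all later coordinates,
    # so the series reduces to a finite sum, computed exactly with Fraction.
    return sum(abs(Fraction(a) - Fraction(b)) / 2**i
               for i, (a, b) in enumerate(zip(u, v)))

u = (1, 0, Fraction(1, 2))
v = (0, 1, 0)
d = hilbert_cube_distance(u, v)
assert d == Fraction(13, 8)   # 1/1 + 1/2 + (1/2)/4
assert d <= 2                 # diameter bound: sum_i 1/2^i = 2
```

Since the tail of the series is dominated by a geometric sum, this metric induces the product topology on $Q$.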
A compact Hilbert cube manifold is a compact Hausdorff space locally homeomorphic to the Hilbert cube and is a compact ANR. The following basic results about Hilbert cube manifolds can be found in \cite{CH}. \begin{theorem} \label {T54}\ \begin{enumerate} \item (R. Edwards) $X$ is a compact ANR iff $X\times Q$ is a compact Hilbert cube manifold. \item (T. Chapman) Any compact Hilbert cube manifold is homeomorphic to $K\times Q$ \ for some finite simplicial complex $K.$ \item (T. Chapman) If $\omega:X\to Y$ is a simple homotopy equivalence between two finite simplicial complexes, i.e.\ with Whitehead torsion $\tau(\omega)=0,$ then there exists a homeomorphism $\omega': X\times Q\to Y\times Q$ s.t. $\omega'$ and $\omega\times id_Q$ are homotopic. If $\omega$ is only a homotopy equivalence the same conclusion holds for $Q$ replaced by $Q\times \mathbb S^1.$\footnote{Some partial but relevant results along the lines of Theorem \ref{T54} were due to J. West, as indicated in \cite{CH}.} \end{enumerate} \end{theorem} If one writes $I^\infty= I^k\times I^{\infty-k},$ observe that given $\epsilon >0,$ for any continuous real or angle valued map $f$ defined on $K\times Q,$ $K$ a simplicial complex, there exists $N$ large enough such that $f$ is $\epsilon$-close to $g\cdot \pi,$ $\pi: K\times I^\infty\to K\times I^N$ the canonical projection, with $g$ a simplicial map defined on $K\times I^N.$ In particular any compact Hilbert cube manifold is a good ANR. It can also be verified by using the definitions that if $f:X\to \mathbb R\ \rm{or}\ \mathbb S^1$ is a continuous map, $K$ a compact ANR, and $f^K= f\cdot \pi,$ $\pi: X\times K\to X$ the canonical projection, then $\hat \delta^{f^K}_r(\langle a,b\rangle )= \oplus_{k\geq 0} \hat \delta^f_{r-k}(\langle a,b\rangle) \otimes H_k(K).$ \section {Some applications} 1.
{\bf Geometric analysis} Theorem \ref{T1} ensures that a generic continuous function provides one-dimensional subspaces in homology with coefficients in a fixed field; in particular for $\kappa=\mathbb R$ or $\mathbb C$ and a closed Riemannian manifold, a generic continuous function provides an orthonormal basis (up to sign) in the space of harmonic forms. We expect (but have not found this result in the literature) that the eigenforms of the Laplace--Beltrami operator for a generic Riemannian metric in any dimension provide a similar decomposition for the smooth differential forms orthogonal to the harmonic forms. This is indeed the case in view of a result of Uhlenbeck\footnote{which claims that for a closed manifold equipped with a generic Riemannian metric the eigenvalues of the Laplace operator are simple} for degree zero forms, and for $n=2$ in all degrees. This shows that a generic pair, Riemannian metric and smooth function, provides an orthonormal basis up to sign (in the Fourier sense) in the space of all differential forms; in the same way trigonometric functions on $\mathbb S^1$ provide an orthonormal basis (in the Fourier sense) for smooth functions. This can be a useful tool in geometric analysis. \vskip .1in 2. {\bf Topology} \begin {obs} \label {O53}\ 1. Theorem \ref{T3} implies that for a closed orientable manifold of dimension $n$ one has $(c,c')\in \mathrm{supp}\, \delta^f_r$ iff $(c',c)\in \mathrm{supp}\, \delta^f_{n-r}$ and both pairs appear with equal multiplicity $\delta^f_r(c,c')= \delta^f_{n-r}(c',c).$ 2.
Theorem \ref {T6} remains valid with the same proof in case $M$ is a compact manifold with boundary $(M,\partial M),$ provided $H^N_r(\partial M; \xi_{f_{\partial M}})$\footnote{With $f_{\partial M}$ we denote the restriction of $f$ to $\partial M.$} vanishes for all $r.$ In particular, under the above vanishing hypothesis, $H^N_r(M;\xi_f)\simeq H^N_{n-r} (M;\xi_f).$ \end{obs} \begin {corollary}\label {C}\ Suppose $(M^{2n},\partial M^{2n})$ is a compact orientable manifold with boundary which has the homotopy type of a simplicial complex of dimension $\leq n$ and $\xi\in H^1(M;\mathbb Z)$ s.t.\ $H^N_r(\partial M; \xi_{\partial M})=0$ for all $r.$ Then for any field $\kappa$: 1. $\beta^N_r(M;\xi)= \begin{cases} 0 \ \rm{if}\ r\ne n\\ (-1)^n\chi (M)\ \rm{if}\ r= n\end{cases},$ with $\chi (M)$ the Euler--Poincar\'e characteristic with coefficients in $\kappa.$ 2. $\beta_r(M) =\begin{cases} \alpha_{r-1} + \alpha_r \ \rm{if}\ r\ne n\\ \alpha_{n-1} + \alpha_n +(-1)^n\chi (M)\ \rm{if}\ r= n\end{cases}, $ where $\alpha _r$ denotes the number of Jordan cells $(\lambda, k) \in \mathcal J_r(M,\xi)$ with $\lambda=1.$ \vskip.1in 3. If $V^{2n-1}\subset M^{2n}$ is a compact proper submanifold (i.e.\ $V\pitchfork \partial M$\footnote{$\pitchfork$ = transversal.} and $V\cap \partial M= \partial V$) representing a homology class in $H_{n-1}(M,\partial M)$ Poincar\'e dual to $\xi_f,$ and if $H_r(V)=0,$ then the set of Jordan cells $\mathcal J_r(M,\xi)$ is empty. \end{corollary} Item 1 follows from Observation \ref{O53} and the fact that both Betti numbers and Novikov--Betti numbers calculate the same Euler--Poincar\'e characteristic. Item 2 follows from Theorem 11, item c, in \cite{BH}, and Item 3 from the description of Jordan cells in terms of linear relations as provided in \cite{BH} or \cite{B3}.
As pointed out to us by L. Maxim, the complement $X = \mathbb C^n\setminus V$ of a complex hypersurface $V:= \{(z_1, z_2, \cdots z_n)\in \mathbb C^n \mid f(z_1, z_2, \cdots z_n)=0\}$ regular at infinity is an open manifold equipped with the canonical integral cohomology class $\xi_f\in H^1(X; \mathbb Z)$ defined by $f: X\to \mathbb C\setminus 0,$ equivalently represented by $f/|f|: X\to \mathbb S^1.$ This manifold has compactifications to manifolds with boundary whose cohomology class satisfies the hypotheses above. Item 1 recovers a calculation of L. Maxim, cf.\ \cite{M14} and \cite{FM16},\footnote{The Friedl--Maxim results state the vanishing of more general and more sophisticated $L_2$-homologies and Novikov type homologies. They can also be recovered via the appropriate Poincar\'e Duality type isomorphisms along similar lines.} that the complement of an algebraic hypersurface regular at infinity has vanishing Novikov homologies in all dimensions but $n.$
\section{Introduction} Given polynomials with integer coefficients, famous results and long-standing questions concern the {\it divisibility properties} of their values at integers, in particular their {\it primality}. The polynomial $x^2+x+41$, which assumes prime values at $0,1,\ldots,39$, is a striking example, going back to Euler. On the other hand, the values of a nonconstant polynomial $P(x)$ cannot all be prime numbers: if $P(0)$ is a prime, then the other value $P(kP(0)) = a_d(kP(0))^d+\cdots +a_1kP(0)+ P(0)$ is divisible by $P(0)$, and, for all but finitely many $k\in \Zz$, is different from $\pm P(0)$, and hence cannot be a prime. \medskip Whether a polynomial may assume infinitely many prime values is a deeper question. Even for $P(x)=x^2+1$, whether there are infinitely many prime numbers of the form $n^2+1$ with $n\in \Zz$ is out of reach. Bunyakovsky conjectured that the question always has an affirmative answer, under some natural assumption recalled below. The Schinzel Hypothesis generalizes this conjecture to several polynomials, concluding that they should simultaneously take prime values; see Section \ref{sec:Schinzel-hypotheses}. \medskip We follow this trend. Our main results are concerned with the \emph{common divisors of values} $P_1(n), \ldots, P_s(n)$ at integers $n$ of several polynomials, see Theorem \ref{th:Dstar} in Section \ref{sec:first} (proved in Section \ref{sec:proof-coprime}) and further complements in Section \ref{sec:more}. We can then investigate the \emph{coprimality of values} of polynomials. Generally speaking, we say that $n$ integers, with $n\geq 2$, are coprime if they have no common prime divisor. While the Schinzel Hypothesis is still open, we obtain this ``coprime'' version: under some suitable assumption, \emph{coprime polynomials assume coprime values at infinitely many integers} (Corollary \ref{th:schinzel-coprime}).
\medskip We deduce a ``modulo $m$'' variant of the Schinzel Hypothesis, and versions of the Goldbach and the Twin Primes conjectures, again ``modulo $m$''; see Section \ref{sec:Schinzel-hypotheses}. A coprimality criterion for polynomials is offered in Section \ref{ssec:coprime}. Finally, in Section 6, we discuss generalizations for which $\Zz$ is replaced by a polynomial ring. \section{Common divisors of values and the coprimality question} \label{sec:first} \emph{For the whole paper, $f_1(x),\ldots,f_s(x)$ are nonzero polynomials with integer coefficients.} \medskip Assume that the polynomials $f_1(x),\ldots,f_s(x)$ are coprime ($s\geq 2$), i.e.\ they have no common root in $\Cc$. Interesting phenomena occur when considering the greatest common divisors: $$d_n = \gcd(f_1(n),\ldots,f_s(n)) \quad \text{ with } n\in \Zz.$$ It may happen that $f_1(x),\ldots,f_s(x)$ never assume coprime values, i.e., that none of the integers $d_n$ is $1$. A simple example is $f_1(x)=x^2-x=x(x-1)$ and $f_2(x)=x^2-x+2$: all values $f_1(n)$ and $f_2(n)$ are even integers. More generally, for $f_1(x)=x^p-x$ and $f_2(x)=x^p-x+p$ with $p$ a prime number, all values $f_1(n)$, $f_2(n)$ are divisible by $p$, by Fermat's theorem. Rule out these polynomials by assuming that \emph{no prime $p$ divides all values $f_1(n),\ldots,f_s(n)$ with $n\in \Zz$}. Excluded polynomials are well-understood: modulo $p$, they vanish at every element of $\Zz/p\Zz$, hence are divisible by $x^p-x=\prod_{m\in\Zz/p\Zz} (x-m)$; so they are of the form $p g(x) + h(x) (x^p-x)$ with $g(x), h(x)\in \Zz[x]$ for some prime $p$. With this further assumption, is it always true that $f_1(n),\ldots,f_s(n)$ are coprime for at least one integer $n$? For example, this is the case for $n$ and $n+2$, which are coprime when $n$ is odd. In other words, does the set $$\mathcal{D}^\ast = \{d_n \mid n\in \Zz\}$$ contain $1$? Studying ${\mathcal D}^\ast$, which, as we will see, is quite intriguing, is a broader goal.
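Before turning to examples, note that the obstruction just described is easy to observe numerically. The following quick check (ours, purely illustrative) verifies Fermat's obstruction for $f_1(x)=x^p-x$, $f_2(x)=x^p-x+p$ with $p=5$, and the coprimality of $n$ and $n+2$ at odd $n$:

```python
from math import gcd

p = 5  # any prime works: Fermat's little theorem gives p | n^p - n for all n
f1 = lambda n: n**p - n
f2 = lambda n: n**p - n + p

# every value gcd(f1(n), f2(n)) is divisible by p: no n gives coprime values
assert all(gcd(f1(n), f2(n)) % p == 0 for n in range(200))

# by contrast, n and n + 2 take coprime values at every odd n
assert all(gcd(n, n + 2) == 1 for n in range(1, 200, 2))
```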
\begin{example} \label{example:intro} Let $f_1(x)=x^2 - 4$ and $f_2(x)=x^3 + 3x + 2$. These polynomials are coprime since no root of $f_1$ is a root of $f_2$. The values $d_n = \gcd(f_1(n),f_2(n))$, for $n=0,\ldots,20$ are: $$ 2\quad 3\quad 16\quad 1\quad 6\quad 1\quad 4\quad 3\quad 2\quad 1\quad 24 \quad 1\quad 2\quad 3\quad 4\quad 1\quad 6\quad 1\quad 64\quad 3\quad 2 $$ We have in fact $\mathcal{D}^\ast = \{ 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 192 \}$. No clear pattern emerges from these first terms, but at least the integer $1$ occurs. \end{example} A first general observation is that the set $\mathcal{D}^\ast$ is finite. This was noticed for two polynomials by Frenkel-Pelik\'{a}n \cite{FP}. In fact they showed more: the sequence $(d_n)_{n\in \Zz}$ is periodic. We will adjust their argument. A new result about the set ${\mathcal D}^\ast$ is the stability assertion of the following statement, which is proved in Section \ref{sec:proof-coprime}. \begin{theorem} \label{th:Dstar} Let $f_1(x),\ldots,f_s(x) \in \Zz[x]$ be nonzero coprime polynomials ($s\ge2$). The sequence $(d_n)_{n\in \Zz}$ is periodic and the finite set $\mathcal{D}^\ast = \{ d_n \}_{n\in\Zz}$ is stable under gcd and under lcm. Consequently, the gcd $d^\ast$ and the lcm $m^\ast$ of all integers $d_n$ ($n\in \Zz$) are in the set $\mathcal{D}^\ast$. \end{theorem} The stability under gcd means that for every $n_1,n_2\in \Zz$, there exists $n\in \Zz$ such that $\gcd(d_{n_1},d_{n_2})=d_n$. In Example \ref{example:intro}, the sequence $(d_n)_{n\in \Zz}$ can be checked to be periodic of period $192$ and the set $\mathcal{D}^\ast $ is indeed stable under gcd and lcm. A consequence of Theorem \ref{th:Dstar} is the following result, proved in \cite[Theorem 1]{Sc02}; as discussed below in Section \ref{sec:proof-coprime}, it is a ``coprime'' version of the Schinzel Hypothesis.
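The data of Example \ref{example:intro} and the stability assertion of Theorem \ref{th:Dstar} are easy to check numerically; here is a short script (ours, purely illustrative) doing so:

```python
from math import gcd

f1 = lambda n: n**2 - 4
f2 = lambda n: n**3 + 3*n + 2

# one full period of the sequence (d_n): it is periodic of period 192 here
ds = [gcd(f1(n), f2(n)) for n in range(192)]
assert ds[:6] == [2, 3, 16, 1, 6, 1]                       # matches the table
assert ds == [gcd(f1(n), f2(n)) for n in range(192, 384)]  # periodicity

D_star = sorted(set(ds))
assert D_star == [1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 192]

# empirical check of the stability under gcd and lcm
lcm = lambda a, b: a * b // gcd(a, b)
assert all(gcd(a, b) in D_star and lcm(a, b) in D_star
           for a in D_star for b in D_star)
```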
\begin{corollary} \label{th:schinzel-coprime} Assume that $s\geq 2$ and $f_1(x),\ldots,f_s(x)$ are coprime polynomials. Assume further that no prime number divides all integers $f_1(n),\ldots,f_s(n)$ for every $n\in \Zz$. Then there exist infinitely many $n\in \Zz$ such that $f_1(n),\ldots,f_s(n)$ are coprime integers. \end{corollary} In Example \ref{example:intro}, we have $f_1(1)=-3$ and $f_2(0)=2$, so no prime number divides $f_1(n)$, $f_2(n)$ for every $n\in\Zz$. Corollary \ref{th:schinzel-coprime} asserts that $f_1(n)$ and $f_2(n)$ are coprime integers for infinitely many $n \in \Zz$. Assuming Theorem \ref{th:Dstar}, here is how Corollary \ref{th:schinzel-coprime} is deduced. \begin{proof} The integer $d^\ast$, defined as the gcd of all the $d_n$, is also the gcd of all values $f_1(n),\ldots,f_s(n)$ with $n\in \Zz$. The assumption of Corollary \ref{th:schinzel-coprime} exactly says that $d^\ast = 1$. By Theorem \ref{th:Dstar}, we have $1 \in \mathcal{D}^\ast$, that is: there exists $n\in \Zz$ such that $f_1(n),\ldots,f_s(n)$ are coprime. Due to the periodicity of $(d_n)_{n\in \Zz}$, the set of such $n$ is infinite. \end{proof} \section{The Schinzel Hypothesis} \label{sec:Schinzel-hypotheses} The Schinzel Hypothesis is the following statement; it was denoted by (H) in \cite{SS}. \begin{hypothesis*} Assume that $s\geq 1$ and $f_1(x),\ldots,f_s(x)$ are irreducible in $\Zz[x]$. Assume further that no prime number divides the product $\prod_{i=1,\ldots,s}f_i(n)$ for every $n\in \Zz$. Then there exist infinitely many integers $n$ such that $f_1(n),\ldots,f_s(n)$ are all prime numbers. \end{hypothesis*} This statement would imply many other conjectures in number theory. For instance with $f_1(x)=x$ and $f_2(x)=x+2$, it yields the Twin Primes conjecture: there exist infinitely many primes $p$ such that $p+2$ is also a prime number. 
It also provides infinitely many prime numbers of the form $n^2+1$ with $n\in \Zz$; see \cite{SS} and \cite[Ch.~3 and Ch.~6]{Ri} for other problems. The Schinzel Hypothesis is however wide open. It is only known true when $s=1$ and $\deg(f_1)=1$, and this case is already quite deep. It is indeed the Dirichlet theorem: if $a$, $b$ are coprime nonzero integers, then there are infinitely many $\ell \in \Zz$ such that $a + \ell b$ is a prime number. Corollary \ref{th:schinzel-coprime} at least provides a ``coprime'' version of the Schinzel Hypothesis. This coprime version can then be conjoined with the Dirichlet theorem. This yields the following. \begin{corollary} \label{cor:schinzel-coprime} Assume that $f_1(x)$ and $f_2(x)$ are coprime polynomials and that no prime number divides $f_1(n)$ and $f_2(n)$ for every $n\in \Zz$. Then, for infinitely many $n\in \Zz$, there exist infinitely many $\ell \in \Zz$ such that $f_1(n)+\ell f_2(n)$ is a prime number. \end{corollary} \begin{proof} As no prime number divides $f_1(n)$ and $f_2(n)$ for every $n\in \Zz$, we can apply Corollary \ref{th:schinzel-coprime} to get infinitely many integers $n\in \Zz$ such that $f_1(n)$ and $f_2(n)$ are coprime. By the Dirichlet theorem for primes in an arithmetic progression, for each of these $n$ except roots of $f_2$, there exist infinitely many $\ell \in \Zz$ such that $f_1(n)+\ell f_2(n)$ is a prime number. \end{proof} Corollary \ref{cor:schinzel-coprime} extends to the case $s\geq 2$. Under the generalized assumption that no prime divides all $f_1(n),\ldots,f_s(n)$ for every $n\in \Zz$, the conclusion becomes: \emph{for infinitely many $n\in \Zz$, there exists a ``large'' \footnote{ ``large'' should be understood as \emph{Zariski dense} in $\Zz^{s-1}$; this is the generalization of ``infinite'' for a subset $\mathcal{L}\subset \Zz^{s-1}$: if a polynomial $P(x_2,\ldots,x_s)$ vanishes at every point of ${\mathcal L}$, it has to be the zero polynomial. 
} set $\mathcal{L} \subset \Zz^{s-1}$ of tuples $(\ell_2,\ldots,\ell_{s})$ such that $f_1(n)+\ell_2 f_2(n)+\cdots + \ell_s f_s(n)$ is a prime number.} We leave it to the reader to work out the generalization. We also obtain this ``modulo $m$'' version of the Schinzel Hypothesis. \begin{corollary} \label{cor:schinzel-modulo} For $s\geq 1$, assume that no prime integer divides $\prod_{i=1,\ldots,s}f_i(n)$ for every $n\in \Zz$. Then, given any integer $m>0$, there exists $n\in \Zz$ such that each of the values $f_1(n),\ldots,f_s(n)$ is congruent to a prime number modulo $m$. In fact, there are infinitely many integers $n$ such that for each $i=1,\ldots, s$, there are infinitely many prime numbers $p_i$ such that $f_i(n) = p_i \pmod{m}$. \end{corollary} \begin{proof} Fix an integer $m>0$. Consider the two polynomials $F_1(x) = \prod_{j=1,\ldots,s}f_j(x)$ and $F_2(x) = m$. Clearly, $F_1(x)$ and $F_2(x)$ satisfy the assumptions of Corollary \ref{th:schinzel-coprime}. It follows that there exists $n\in \Zz$ such that $F_1(n)=f_1(n)\cdots f_s(n)$ is coprime with $m$. In particular, each of the integers $f_1(n),\ldots,f_s(n)$ is coprime with $m$. Hence, by the Dirichlet theorem, for each $j=1,\ldots,s$ there exists a prime number $p_j$ such that $p_j = f_j(n) + a_j m$ (for some $a_j\in\Zz$). In fact the Dirichlet theorem asserts that there are infinitely many such primes $p_j$. For $j=1,\ldots,s$, the congruences $$f_j(n+\ell m) = f_j(n) \pmod{m}$$ provide infinitely many such integers $n$. These congruences are easily deduced from the basic ones for which $f_j(x)$ is a monomial $x^k$; they will again be used later. \end{proof} Corollary \ref{cor:schinzel-modulo} has this nice special case, which can also be found in Schinzel's paper \cite{Sc59} following works of Sierpi\'nski. \begin{example}[Goldbach Theorem modulo $m$] \emph{Let $m, \ell$ be two positive integers.
Then there exist infinitely many prime numbers $p$ and $q$ such that $p+q= 2\ell \pmod{m}$.} \emph{Proof.} Take $f_1(x)=x$ and $f_2(x)=2\ell-x$. As $f_1(1)f_2(1) = 2\ell-1$ and $f_1(-1)f_2(-1) = -(2\ell+1)$, no prime number divides $f_1(n)f_2(n)$ for every $n\in \Zz$. By Corollary \ref{cor:schinzel-modulo}, there exist $n\in \Zz$ and prime numbers $p$ and $q$ such that $f_1(n) = n$ is congruent to $p \pmod{m}$ and $f_2(n) = 2\ell - n$ is congruent to $q \pmod{m}$, whence $p+q=2\ell \pmod{m}$. Another example with $f_1(x)=x$ and $f_2(x)=x+2$ gives the \emph{Twin Primes Theorem modulo $m$:} \emph{For every $m>0$, there are infinitely many primes $p$, $q$ such that $q = p+2 \pmod{m}$.} \end{example} \section{Proof of Theorem \ref{th:Dstar}} \label{sec:proof-coprime} After a brief reminder in Section \ref{ssec:reminder}, Theorem \ref{th:Dstar} is proved in Sections \ref{ssec:proof-part1} and \ref{ssec:proof-part2}. Recall that $f_1(x),\ldots,f_s(x)$ are nonzero polynomials with integer coefficients. \subsection{Reminder on coprimality of polynomials} \label{ssec:reminder} Denote the gcd of $f_1(x),\ldots,f_s(x)$ in $\Qq[x]$ by $d(x)$; it is a polynomial in $\Qq[x]$, well-defined up to a nonzero multiplicative constant in $\Qq$. Polynomials $f_1(x),\ldots,f_s(x)$ are said to be \defi{coprime} if $d(x)$ is the constant polynomial equal to $1$. 
These characterizations are well-known: \begin{proposition} \label{prop:eqcoprime} For $s\geq 2$, the following assertions are equivalent: \begin{itemize} \item[(i)] $f_1(x),\ldots,f_s(x)$ are coprime polynomials (i.e.\ $d(x)=1$), \item[(ii)] the gcd of $f_1(x),\ldots,f_s(x)$ in $\Zz[x]$ is a constant polynomial, \item[(iii)] $f_1(x),\ldots,f_s(x)$ have no common complex roots, \item[(iv)] there exist $u_1(x),\ldots,u_s(x) \in \Qq[x]$ such that a Bézout identity is satisfied, i.e.: $$u_1(x) f_1(x) + \cdots + u_s(x) f_s(x) = 1.$$ \end{itemize} \end{proposition} A brief reminder: (iv) $\Rightarrow$ (iii) is obvious; so is (iii) $\Rightarrow$ (ii) (using that $\Cc$ is algebraically closed); (ii) $\Rightarrow$ (i) is an exercise based on ``removing the denominators'' and Gauss's Lemma \cite[IV, \S 2]{La02}; and (i) $\Rightarrow$ (iv) follows from $\Qq[x]$ being a Principal Ideal Domain. In the case of two polynomials, we have this additional equivalence: $f_1(x)$ and $f_2(x)$ are coprime if and only if their resultant $\Res(f_1,f_2) \in \Zz$ is non-zero. Section \ref{ssec:coprime} offers an alternate method to check coprimality of two or more polynomials. For the rest of this section, assume that $s\geq 2$ and $f_1(x),\ldots, f_s(x)$ are coprime. Denote by $\delta$ the smallest positive integer such that there exist $u_1(x),\ldots,u_s(x) \in \Zz[x]$ with $u_1(x)f_1(x)+\cdots+u_s(x)f_s(x)=\delta$. Such an integer exists from the Bézout identity of Proposition \ref{prop:eqcoprime}, rewritten after multiplication by the denominators. \subsection{Finiteness of ${\mathcal D}^\ast$ and periodicity of $(d_n)_{n\in \Zz}$} \label{ssec:proof-part1} \begin{proposition} \label{prop:dn} We have the following: \begin{itemize} \item Every integer $d_n$ divides $\delta$ ($n\in \Zz$). In particular, the set ${\mathcal D}^\ast$ is finite. \item The sequence $(d_n)_{n\in \Zz}$ is periodic of period $\delta$. 
\end{itemize} \end{proposition} Note that the integer $\delta$ need not be the smallest period. Proposition \ref{prop:dn} is an improved version of results by Frenkel and Pelik\'{a}n \cite{FP}: for two coprime polynomials $f_1(x)$, $f_2(x)$, they show that every $d_n$ divides the resultant $\Res(f_1,f_2)$ of $f_1(x)$ and $f_2(x)$. In fact our $\delta$ divides $\Res(f_1,f_2)$. The next example shows that $\Res(f_1,f_2)$ and $\delta$ may be huge and the sequence $(d_n)_{n\in \Zz}$ may have a complex behavior despite being periodic. \begin{example} \label{ex:knuth} Let $f(x) = x^8+x^6-3x^4-3x^3+x^2+2x-5$ and $g(x) = 3x^6+5x^4-4x^2-9x+21$. These two polynomials were studied by Knuth \cite[Division of polynomials, p.~427]{Kn}. We have $\Res(f,g) = 25\,095\,933\,394$ and $\delta = 583\,626\,358 = 2 \times 7^2 \times 43 \times 138\,497$. Here are the terms $d_n$ for $0\leq n\leq 39$: $$ 1\ 2\ 1\ 2\ 7\ 2\ 1\ 2\ 1\ 2\ 1\ 14\ 1\ 2\ 1\ 2\ 1\ 2\ 7\ 2\ 1\ 86\ 1\ 2\ 1\ 14\ 1\ 2\ 1\ 2\ 1\ 2\ 7\ 2\ 1\ 2\ 1\ 2\ 1\ 98 $$ Higher values occur: for instance $d_{1999} = 4214$, $d_{133\,139} = 276\,994$. For this example, the set ${\mathcal D}^\ast$ is exactly the set of all divisors of $\delta$ and the smallest period is $\delta$. \end{example} \begin{proof}[Proof of Proposition \ref{prop:dn}] The identity $u_1(n)f_1(n)+\cdots+u_s(n)f_s(n)=\delta$ implies that $d_n=\gcd(f_1(n),\ldots,f_s(n))$ divides $\delta$ ($n\in \Zz$). To prove that the sequence $(d_n)_{n\in \Zz}$ is periodic, we use again that $f_j(n+\ell \delta) = f_j(n) \pmod{\delta}$ for every $\ell \in \Zz$ and every $n \in \Zz$. Fix $n,\ell \in \Zz$. As $d_n$ divides $f_j(n)$ and $\delta$, then by this congruence, $d_n$ divides $f_j(n+\ell \delta)$. This is true for $j=1,\ldots,s$, whence $d_n$ divides $d_{n+\ell\delta}$. In the same way we prove that $d_{n+\ell\delta}$ divides $d_n$ ($n,\ell \in \Zz$). Thus $d_{n+\ell\delta}=d_n$ and $(d_n)_{n\in \Zz}$ is periodic of period $\delta$.
\end{proof} \subsection{Stability by gcd and lcm} \label{ssec:proof-part2} \begin{proposition} \label{th:gcdlcm} The set $\mathcal{D}^\ast$ is stable under gcd and lcm. \end{proposition} Denote by $d^\ast$ the gcd of all elements of $\mathcal{D}^\ast$ and by $m^\ast$ the lcm of those of $\mathcal{D}^\ast$. Using that $\gcd(a,b,c)=\gcd(a,\gcd(b,c))$ we obtain: \begin{corollary} \label{cor:gcd} The integers $d^\ast$ and $m^\ast$ are elements of ${\mathcal D}^\ast$. Furthermore $d^\ast = \min(\mathcal{D}^\ast)$ is the greatest integer dividing $f_1(n),\ldots,f_s(n)$ for every $n\in\Zz$. Similarly $m^\ast = \max(\mathcal{D}^\ast)$. \end{corollary} \begin{proof}[Proof of Proposition \ref{th:gcdlcm} for the gcd] We only prove the gcd-stability part and leave the lcm part (which we will not use) to the reader. Let $d_{n_1}$ and $d_{n_2}$ be two elements of $\mathcal{D}^\ast$. Let $d(n_1,n_2)$ be their gcd. The goal is to prove that $d(n_1,n_2)$ is an element of $\mathcal{D}^\ast$. The integer $d(n_1,n_2)$ can be written: $$d(n_1,n_2) = \prod_{i\in I} p_i^{\alpha_i}$$ where, for each $i\in I$, $p_i$ is a prime divisor of $\delta$ (see Proposition \ref{prop:dn}) and $\alpha_i \in \Nn$ (possibly $\alpha_i = 0$ for some $i\in I$). Fix $i\in I$. As $p_i^{\alpha_i+1}$ does not divide $d(n_1,n_2)$, $p_i^{\alpha_i+1}$ does not divide $d_{n_1}$ or does not divide $d_{n_2}$; we name it $d_{m_i}$ with $m_i$ equal to $n_1$ or $n_2$. The Chinese remainder theorem provides an integer $n$ such that $$n = m_i \pmod{p_i^{\alpha_i+1}} \quad \text{ for each } i\in I.$$ By definition, $p_i^{\alpha_i}$ divides $d(n_1,n_2)$, so $p_i^{\alpha_i}$ divides all $f_1(n_1),\ldots, f_s(n_1)$, $f_1(n_2),\ldots, f_s(n_2)$. In particular $p_i^{\alpha_i}$ divides $f_1(m_i),\ldots, f_s(m_i)$, hence also $f_1(n),\ldots, f_s(n)$. Whence $p_i^{\alpha_i}$ divides $d_n$ for each $i\in I$. Now $p_i^{\alpha_i+1}$ does not divide $f_{j_0}(m_i)$, for some $j_0 \in \{1,\ldots,s\}$.
As $f_{j_0}(n) = f_{j_0}(m_i) \pmod{p_i^{\alpha_i+1}}$, then $p_i^{\alpha_i+1}$ does not divide $f_{j_0}(n)$. Hence $p_i^{\alpha_i+1}$ does not divide $d_n$. We have proved that $p_i^{\alpha_i}$ is the greatest power of $p_i$ dividing $d_n$, for all $i\in I$. As $d_n$ divides $\delta$, each prime factor of $d_n$ is one of the $p_i$. Conclude that $d(n_1,n_2) = d_n$. \end{proof} \section{More on the set ${\mathcal D}^\ast$} \label{sec:more} Further questions on the set ${\mathcal D}^\ast$ are of interest. The stability under gcd and lcm gives it a remarkable ordered structure. Can more be said about elements of ${\mathcal D}^\ast$? The smallest element $d^\ast$ particularly stands out: it is also the gcd of all values $f_1(n),\ldots,f_s(n)$ with $n\in \Zz$. Can one determine or at least estimate $d^\ast$? \begin{proposition} \label{prop:dstar} Assume that $f_1(x),\ldots,f_s(x)$ are monic. Then $d^\ast$ divides each of the integers $(\deg f_1)!, \ldots, (\deg f_s)!$. \end{proposition} The proof relies on the following result. \begin{lemma} \label{lem:discrete} Let $f(x) = a_dx^d + \cdots + a_1x+a_0$ be a polynomial in $\Zz[x]$ of degree $d$. Fix an integer $T>0$ and fix $m \in \Zz$. If an integer $k$ divides each of $f(m), f(m+T), f(m+2T),\ldots$ then $k$ divides $a_d T^d d!$. \end{lemma} For $T=1$, this lemma was obtained by Schinzel in \cite{Sc57}. If $f(x)$ is assumed to be a primitive polynomial (i.e.\ the gcd of its coefficients is $1$) and $k$ divides $f(m+\ell T)$ (for all $\ell \in \Zz$) then Bhargava's paper \cite{Bh} implies that $k$ divides $T^d d!$ (see theorem 9 and example 17 there). Moreover using a theorem of P\'olya (see \cite[theorem 2]{Bh}), in Proposition \ref{prop:dstar}, we could replace the hypothesis ``$f_j(x)$ is monic'' by ``$f_j(x)$ is primitive'' with the same conclusion on $d^\ast$. We give an elementary proof below of Lemma \ref{lem:discrete} which was suggested to us by Bruno Deschamps. 
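A quick numerical sanity check of Lemma \ref{lem:discrete} before the proof (a sketch in Python; the test polynomial and the ranges of $m$, $T$ below are arbitrary choices, not part of the statement):

```python
from math import gcd, factorial

# Check of the Lemma on an arbitrary test polynomial:
# if k divides f(m), f(m+T), f(m+2T), ..., then k divides a_d * T^d * d!.
# The largest such k is the gcd of the values, so it is enough to check
# that this gcd divides a_d * T^d * d!.

def f(x):
    # test polynomial 2x^3 + 5x + 9, so a_d = 2 and d = 3
    return 2 * x**3 + 5 * x + 9

a_d, d = 2, 3
for m in range(-3, 4):
    for T in (1, 2, 6):
        g = 0
        for l in range(200):       # gcd of an initial segment of the values
            g = gcd(g, f(m + l * T))
        assert (a_d * T**d * factorial(d)) % g == 0, (m, T, g)
print("gcd of the values divides a_d * T^d * d! in every tested case")
```

Note that the induction below only ever uses finitely many of the values $f(m+\ell T)$, which is why a finite sample of values suffices in such a check.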
It uses the following operator: $$ \begin{array}{cccc} \Delta : & \Qq[x] & \longrightarrow & \Qq[x] \\ & P(x) & \longmapsto & \frac{P(x+T)-P(x)}{T}. \\ \end{array}$$ If $P(x) = a_d x^d + \cdots+ a_0$ is a polynomial of degree $d$, then $\Delta (P)(x)$ is a polynomial of degree $d-1$ of the form $\Delta (P)(x) = d a_d x^{d-1} + \cdots$ By induction, if we iterate this operator $d$ times, we obtain that $\Delta^d (P)(x) = d! a_d$ is a constant polynomial. The polynomial $\Delta (P)(x)$ is a discrete analog of the derivative $P'(x)$. In particular $\Delta^d (P)(x) = d! a_d$ should be related to the higher derivative $P^{(d)}(x) = d! a_d$. \begin{proof}[Proof of Lemma \ref{lem:discrete}] The key observation is that if $k$ divides $f(m)$ and $f(m+T)$, then $k$ divides $T \Delta(f)(m)$. We prove the statement by induction on the degree $d$. \begin{itemize} \item For $d=0$, ``$k$ divides $f(m)$'' is exactly saying ``$k$ divides $a_0$''. \item Fix $d>0$ and suppose that the statement is true for polynomials of degree less than $d$. Let $f(x) = a_dx^d+ \cdots+a_0$ be a polynomial of degree $d$ satisfying the hypothesis. As $k$ divides $f(m+\ell T)$ for all $\ell \in \Nn$, then $k$ divides $$T \Delta(f)(m+\ell T) = f(m +(\ell+1)T)-f(m+\ell T).$$ By induction applied to $T \Delta (f)(x) = T d a_d x^{d-1} + \cdots$, the integer $k$ divides the integer $(Tda_d) T^{d-1} (d-1)! = a_d T^d d!$. \end{itemize} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:dstar}] For each $j=1,\ldots, s$, the integer $d^\ast$ divides $f_j(n)$ for every $n\in\Zz$. Thus $d^\ast$ divides $(\deg f_j)!$ by Lemma \ref{lem:discrete} (applied with $T=1$ and $a_d=1$). \end{proof} We can also derive a result for $m^\ast = \max(\mathcal{D}^\ast) = \lcm(\mathcal{D}^\ast)$. \begin{proposition} \label{prop:mstar} Let $T$ be the smallest period of the sequence $(d_n)_{n\in\Zz}$ and $f_1(x) = a_d x^d + \cdots$ be a polynomial of degree $d$. 
Then: $$T | m^\ast \qquad \text{ and } \qquad m^\ast | a_d T^d d!$$ \end{proposition} \begin{proof} The proof that $m^\ast$ is a period is the same as the one for $\delta$ (see Proposition \ref{prop:dn}). It follows that $T$ divides $m^\ast$. On the other hand, if $(d_n)_{n\in \Zz}$ is periodic of period $T$, then every term $d_n$ divides $f_1(n+\ell T)$ for all $\ell \in \Zz$. By Lemma \ref{lem:discrete}, $d_n$ divides $a_d T^d d!$. This is true for each $n$, so $m^\ast = \lcm \{d_n\}_{n\in\Zz}$ also divides $a_d T^d d!$. \end{proof} \section{A coprimality criterion for polynomials} \label{ssec:coprime} A constant assumption of the paper has been that our polynomials $f_1(x),\ldots, f_s(x)$ are coprime. To test this condition, we offer here a criterion only using the values $f_1(n),\ldots, f_s(n)$ that may be more practical than the characterizations from Proposition \ref{prop:eqcoprime}. Define the \defi{normalized height} of a degree $d$ polynomial $f(x)= a_dx^d+ \cdots + a_0$ by $$H(f) = \max_{i=0,\ldots,d-1} \left| \frac{a_i}{a_d} \right|.$$ \begin{proposition} \label{prop:coprime} Let $H$ be the minimum of the normalized heights $H(f_1),\ldots,H(f_s)$. The polynomials $f_1(x),\ldots,f_s(x)$ are coprime if and only if there exists $n\ge 2H+3$ such that $\gcd(f_1(n),\ldots,f_s(n)) \le \sqrt{n}$. \end{proposition} In particular if $f_1(n),\ldots,f_s(n)$ are coprime (as integers) for some sufficiently large $n$ then $f_1(x),\ldots,f_s(x)$ are coprime (as polynomials). \begin{example} ~ \begin{itemize} \item Take $f_1(x) = x^4 - 7x^3 + 3$, $f_2(x) = x^3 -3x + 3$. We have $H(f_1)=7$, $H(f_2)=3$, so $H = 3$. For $n = 9 \, (= 2H+3)$, we have $f_1(n) = 1461$, $f_2(n) = 705$. Thus $\gcd(f_1(n),f_2(n)) = 3 \le \sqrt{n}$. From Proposition \ref{prop:coprime}, the polynomials $f_1(x)$ and $f_2(x)$ are coprime. \item Here is an example for which the polynomials are not coprime. Take $f_1(x) = x^2-1 = (x+1)(x-1)$, $f_2(x) = x^2+2x+1 = (x+1)^2$. 
Then $\gcd(f_1(x),f_2(x))=x+1$ and $\gcd(f_1(n),f_2(n)) \ge n+1$. \end{itemize} \end{example} \begin{remark*} Proposition \ref{prop:coprime} is a coprime analog of the classical idea consisting in using prime values of polynomials to prove their irreducibility. For instance there is this irreducibility criterion by Ram Murty \cite{RM}, which can be seen as a converse to the Bunyakovsky conjecture: \emph{Let $f(x) \in \Zz[x]$ be a polynomial of normalized height $H$. If $f(n)$ is prime for some $n\ge H+2$, then $f(x)$ is irreducible in $\Zz[x]$.} \end{remark*} We first need a classical estimate for the localization of the roots of a polynomial, as in \cite{RM}. \begin{lemma}[Cauchy bound] \label{lem:root} Let $f(x) = a_dx^d+\cdots +a_1x+a_0 \in \Zz[x]$ be a polynomial of degree $d$ and of normalized height $H$. Let $\alpha \in \Cc$ be a root of $f$. Then $|\alpha| < H+1$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:root}] We may assume $|\alpha|>1$, since for $|\alpha| \le 1$, Lemma \ref{lem:root} is obviously true. As $f(\alpha)=0$, $\alpha$ satisfies: $$|a_d \alpha^d| = \left| a_{d-1} \alpha^{d-1} + \cdots + a_1\alpha + a_0 \right| \le \sum_{i=0}^{d-1} \left| a_i \alpha^i \right|.$$ By dividing by $a_d$, we get: $$|\alpha^d| \le \sum_{i=0}^{d-1} H \left| \alpha^i \right| = H \frac{|\alpha|^d-1}{|\alpha|-1} \quad \text{ then } \quad |\alpha|-1 \le H \frac{|\alpha|^d-1}{|\alpha^d|} = H \left( 1 -\frac{1}{|\alpha|^d} \right).$$ So that $|\alpha|-1 \le H$ and the proof is over. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:coprime}] ~ \begin{itemize} \item $\Longrightarrow$ Since $f_1(x), \ldots, f_s(x)$ are coprime polynomials, we have a Bézout identity: $u_1(x)f_1(x)+\cdots+u_s(x)f_s(x)=1$ for some $u_1(x),\ldots,u_s(x)$ in $\Qq[x]$. By multiplying by an integer $k \in \Zz\setminus \{0\}$, we obtain $\tilde u_1(x)f_1(x)+\cdots+\tilde u_s(x)f_s(x)=k$, with $\tilde u_1(x),\ldots,\tilde u_s(x)$ being this time in $\Zz[x]$. 
This gives $\tilde u_1(n)f_1(n)+\cdots+\tilde u_s(n)f_s(n)=k$ for all $n\in\Zz$, so that $\gcd(f_1(n),\ldots,f_s(n))$ divides $k$. Thus the gcd of $f_1(n),\ldots, f_s(n)$ is bounded, hence it is $\leq \sqrt{n}$ for all sufficiently large $n$. \item $\Longleftarrow$ Let $d(x) \in \Zz[x]$ be a common divisor of $f_1(x),\ldots,f_s(x)$ in $\Zz[x]$. By contradiction, assume that $d(x)$ is not a constant polynomial. Consider an integer $n \ge 2H+3$ such that $\gcd(f_1(n),\ldots,f_s(n)) \le \sqrt{n}$. On the one hand $d(n)$ divides each of the $f_1(n),\ldots,f_s(n)$, so $|d(n)| \le \gcd(f_1(n),\ldots,f_s(n)) \le \sqrt{n}$. On the other hand $$d(n) = c\prod_{i\in I} (n-\alpha_i)$$ for some roots $\alpha_i\in\Cc$, $i\in I$, of $f_1$ (and of the other $f_j$), and $c \in \Zz\setminus \{0\}$. By Lemma \ref{lem:root}, we obtain: $$|d(n)| = |c| \prod_{i} |n-\alpha_i| > |c| \prod_{i} |n-(H+1)| \ge |n-(H+1)|.$$ We obtain $|n-(H+1)| \le \sqrt{n}$, which is impossible for $n \ge 2H+3$. We conclude that the common divisors of $f_1(x),\ldots,f_s(x)$ in $\Zz[x]$ are constant. Therefore by Proposition \ref{prop:eqcoprime}, the polynomials $f_1(x),\ldots,f_s(x)$ are coprime. \end{itemize} \end{proof} \section{Polynomials in several variables} \label{sec:polynomial-rings} The Schinzel Hypothesis and its coprime variant can be considered with the ring $\Zz$ replaced by a more general integral domain $Z$. Papers \cite{BDN19a} and \cite{BDN19b} are devoted to this. The special case that $Z$ is a polynomial ring $\Zz[\underline u]$ stands out; here $\underline u$ can be a single variable or a tuple $(u_1,\ldots,u_r)$ of several variables. ``Prime in $\Zz[\underline u]$'' then means ``irreducible in $\Zz[\underline u]$''. 
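Before moving on, here is a quick numerical illustration of the criterion of Proposition \ref{prop:coprime}, run on the first example above (a Python sketch):

```python
from math import gcd

# The coprimality criterion on the example
# f1(x) = x^4 - 7x^3 + 3, f2(x) = x^3 - 3x + 3.
f1 = lambda x: x**4 - 7 * x**3 + 3
f2 = lambda x: x**3 - 3 * x + 3

H = 3                      # min of the normalized heights H(f1) = 7, H(f2) = 3
n = 2 * H + 3              # n = 9
g = gcd(f1(n), f2(n))
print(n, f1(n), f2(n), g)  # prints: 9 1461 705 3
assert g * g <= n          # gcd(f1(n), f2(n)) <= sqrt(n): f1, f2 are coprime
```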
In \cite{BDN19a}, we prove the Schinzel Hypothesis for $\Zz[\underline u]$ instead of $\Zz$: \begin{theorem} \label{th:schinzel-polynomial} With $s\geq 1$, let $f_1(\underline u,x),\ldots,f_s(\underline u,x)$ be $s$ polynomials, irreducible in $\Zz[\underline u,x]$, of degree $\geq 1$ in $x$. Then there are infinitely many polynomials $n(\underline u) \in \Zz[\underline u]$ (with partial degrees as large as desired) such that $$f_i\big(\underline u,n(\underline u)\big)$$ is an irreducible polynomial in $\Zz[\underline u]$ for each $i=1,\ldots,s$. \end{theorem} We also prove the Goldbach conjecture for polynomials: \emph{any nonconstant polynomial in $\Zz[\underline u]$ is the sum of two irreducible polynomials of lower or equal degree.} Furthermore, Theorem \ref{th:schinzel-polynomial} is shown to also hold with the coefficient ring $\Zz$ replaced by more general rings $R$, e.g.\ $R=\Ff_q[t]$. However not all integral domains are allowed. For example, with $\underline u$ a single variable, the result is obviously false with $R=\Cc$, is known to be false for $R=\Ff_q$ by a result of Swan \cite{Sw} and is unclear for $R=\Zz_p$. In contrast, we prove in \cite{BDN19b} that the coprime analog of Theorem \ref{th:schinzel-polynomial} holds in a much bigger generality. \begin{theorem} \label{th:schinzel-coprime-polynomial} Let $R$ be a Unique Factorization Domain and assume that $R[\underline u]$ is not the polynomial ring $\Ff_q[u_1]$ in a single variable over a finite field. With $s\geq 2$, let $f_1(\underline u,x),\ldots,f_s(\underline u,x)$ be $s$ nonzero polynomials, with no common divisor in $R[\underline u,x]$ other than units of $R$. Then there are infinitely many polynomials $n(\underline u) \in R[\underline u]$ such that $$f_1\big(\underline u,n(\underline u)\big), \ \ldots, \ f_s\big(\underline u,n(\underline u)\big)$$ have no common divisor in $R[\underline u]$ other than units of $R$. 
\end{theorem} Theorem \ref{th:schinzel-coprime-polynomial} fails if $R[\underline u] = \Ff_q[u_1]$. Take indeed $f_1(u_1,x) = x^q -x+u_1$ and $f_2(u_1,x) = (x^q -x)^2 +u_1$. For every $n(u_1)\in \Ff_q[u_1]$, the constant term of $n(u_1)^q - n(u_1)$ is zero, so $f_1(u_1,n(u_1))$ and $f_2(u_1,n(u_1))$ are divisible by $u_1$. \bibliographystyle{plain}
\section{Introduction} In recent years, interest in high-energy collisions of particles in a strong gravitational field has increased significantly after the observation made by Ba\~{n}ados, Silk and West (hereafter, BSW). They noticed that if two particles collide in the vicinity of the Kerr black hole, the energy in the centre of mass (CM) frame $E_{c.m.}$ may grow unbound \cite{ban}. Later on, it was shown that the BSW effect is due to general properties of the black hole horizon \cite{prd}, so in this sense the effect has a universal character. Meanwhile, there are some difficulties in the astrophysical realization and observation of the BSW effect. The first one is that one of the particles should have a fine-tuned relation between the energy and angular momentum (a so-called critical particle). The second one is due to the fact that enormous $E_{c.m.}$ lead to relatively modest energies measured at infinity \cite{p} - \cite{z}. Meanwhile, alternative mechanisms of getting ultra-high energies in the CM frame also exist. One of them goes back to the works \cite{psk}, \cite{ps}, where unbound $E_{c.m.}$ were also obtained. The crucial difference between \cite{psk}, \cite{ps} and \cite{ban} consists in the type of trajectories of the colliding particles. In the BSW effect, both particles approach the horizon. In \cite{psk}, \cite{ps}, they move in opposite directions, so one of the particles has to move away from the horizon, which is rather difficult to realize for a black (not white) hole. However, there is no need for fine-tuning of parameters in this case \cite{cqg}. Another mechanism does not require the presence of the horizon at all. The following scenario in the background of the naked Kerr \cite{kerr} and Reissner-Nordstr\"{o}m (RN) \cite{rnn} metrics was considered. The first particle reflects from an infinite potential barrier and collides with the second one.
It turned out that $E_{c.m.}$ can be made as large as one likes provided the parameters of a metric are close to the threshold of forming an extremal black hole, so the charge $Q=M(1+\varepsilon )$ or $J=M^{2}(1+\varepsilon )$, where $M$ is the mass, $Q$ is the charge, $J$ is the angular momentum, $\varepsilon \ll 1$. Quite recently, it was shown that unbound $E_{c.m.}$ are still possible even if both horizons and naked singularities are absent \cite{nn1}. Such a scenario was further considered in detail for a particular example of colliding spherical dust shells \cite{4}. The aim of the present work is to extend the approach of \cite{nn1} from the spherically symmetric case to axially symmetric rotating configurations. We draw attention to the fact that the scenario with neither horizons nor naked singularities has a general character. It does not require the knowledge of the whole dynamics. We consider generic configurations with matter and reveal the main features of the effect in a model-independent way. The key ingredients are (i) the metric on the threshold of forming the extremal horizon, (ii) head-on collision, which is similar to \cite{psk}, \cite{ps} but without horizons. Point (i) is a feature typical of quasiblack holes (QBH) \cite{qbh}, for which ultra-high energy collisions are also possible \cite{acqbh}. However, the mechanism of getting large $E_{c.m.}$ in the two cases is essentially different (see below). Thus instead of considering particular metrics, trajectories or models \cite{kerr}, \cite{rnn}, \cite{4}, \cite{ax}, we reveal and discuss the underlying factors that ensure the existence of the effect. All features considered in the present paper imply that although there is no horizon as such, there exists a time-like surface ``close'' to it. There, the value of the lapse function becomes small although not exactly equal to zero.
Meanwhile, there is also another type of high energy process, which is due to the ergosphere, not the horizon \cite{ergo}, \cite{myergo}. We do not discuss it here. \section{General formalism} Let us consider the metric \begin{equation} ds^{2}=-N^{2}dt^{2}+g_{\phi }(d\phi -\omega dt)^{2}+\frac{dr^{2}}{A}+g_{\theta }d\theta ^{2}\text{,} \label{met} \end{equation} where the coefficients do not depend on $t$ and $\phi $. We use units in which the fundamental constants $G=c=\hbar =1.$ Equations of motion for a particle with the mass $m$ in the background (\ref{met}) read \begin{equation} m\dot{t}=\frac{X}{N^{2}}\text{,} \label{t} \end{equation} \begin{equation} m\dot{\phi}=\frac{L}{g_{\phi }}+\frac{\omega X}{N^{2}}\text{,} \label{fi} \end{equation} where dot denotes the derivative with respect to the proper time. Here, \begin{equation} X=E-\omega L\text{,} \label{x} \end{equation} $E=-mu_{0}$, $L=mu_{\phi }$. From the normalization condition it follows that \begin{equation} m\dot{r}=\pm \frac{\sqrt{A}}{N}Z\text{,} \label{r} \end{equation} \begin{equation} Z=\sqrt{X^{2}-N^{2}\left( \frac{L^{2}}{g_{\phi }}+g_{\theta }(p^{\theta })^{2}+m^{2}\right) }, \label{z} \end{equation} where $p^{\theta }=m\dot{\theta}$. For geodesic motion, $E$ and $L$ are conserved and have the meaning of the energy and angular momentum, respectively. However, the equations (\ref{t}) - (\ref{z}) are valid even if $E$ and $L$ are not conserved. The key quantity of interest is the energy in the CM frame $E_{c.m.}$ If two particles collide at some point, it can be defined at this point by analogy with the standard relation for one particle.
For two particles with masses $m_{1}$ and $m_{2}$ and four-velocities $u_{1}^{\mu }$ and $u_{2}^{\mu },$ the energy $E_{c.m.}$ at the collision event is the norm of their total four-momentum, \begin{equation} E_{c.m.}^{2}=-(p_{1}^{\mu }+p_{2}^{\mu })(p_{1\mu }+p_{2\mu })=m_{1}^{2}+m_{2}^{2}+2m_{1}m_{2}\gamma \label{cm} \end{equation} where \begin{equation} \gamma =-u_{1\mu }u_{2}^{\mu } \label{gamma} \end{equation} is the relative Lorentz factor. Then, by direct substitution into (\ref{gamma}), one can find that \begin{equation} m_{1}m_{2}\gamma =\frac{X_{1}X_{2}+\delta Z_{1}Z_{2}}{N^{2}}-\frac{L_{1}L_{2}}{g_{\phi }}-g_{\theta }p_{1}^{\theta }p_{2}^{\theta }\text{,} \label{ga} \end{equation} where $\delta =-1$ for particles moving in the same radial direction before collision and $\delta =+1$ otherwise. From now on, we consider scenarios for which $\delta =+1$ in (\ref{ga}) (head-on collisions). \section{Metric on the threshold of forming the extremal horizon} In what follows we assume that (i) $N^{2}>0$ everywhere, (ii) for some value $r=r_{0}$, it can be made as small as one likes, (iii) collision occurs at the point $r=r_{0}$. With these assumptions, a natural representation is \begin{equation} N^{2}(r,\theta )=B(r,\theta )(r-r_{+})(r-r_{-}) \label{nb} \end{equation} with $B(r_{0},\theta )>0$ separated from zero. It follows from (i) that both roots are complex and mutually conjugate. It is convenient to introduce the new parameter $\varepsilon $ and write $r_{\pm }=r_{0}\pm ir_{0}\varepsilon $, so \begin{equation} N^{2}=B(r,\theta )[(r-r_{0})^{2}+r_{0}^{2}\varepsilon ^{2}]. \label{e} \end{equation} Then, requirement (ii) leads to $\varepsilon \ll 1$. Then, near the point of collision $r-r_{0}=r_{0}O(\varepsilon )$, $N^{2}=O(\varepsilon ^{2}).$ Correspondingly, \begin{equation} \gamma =O(\varepsilon ^{-2}) \label{la} \end{equation} and can be made as large as one likes. In doing so, the metric is perfectly regular in the vicinity of $r_{0}$.
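The scaling (\ref{la}) is easy to check numerically from (\ref{x}), (\ref{z}), (\ref{ga}) together with (\ref{e}). A minimal Python sketch (all parameter values below are arbitrary illustrative numbers, with $p^{\theta }=0$ and $\delta =+1$):

```python
from math import sqrt

# gamma = O(1/eps^2) at the collision point r = r0: eqs. (x), (z), (ga)
# with N^2 from eq. (e). All numbers below are arbitrary illustrative values.
B, r0 = 1.0, 1.0
E1, L1, m1 = 1.0, 0.3, 1.0
E2, L2, m2 = 1.2, -0.2, 1.0
omega, g_phi = 0.1, 2.0

def gamma(eps):
    N2 = B * r0**2 * eps**2                           # eq. (e) at r = r0
    X1, X2 = E1 - omega * L1, E2 - omega * L2         # eq. (x)
    Z1 = sqrt(X1**2 - N2 * (L1**2 / g_phi + m1**2))   # eq. (z), p^theta = 0
    Z2 = sqrt(X2**2 - N2 * (L2**2 / g_phi + m2**2))
    # eq. (ga) with delta = +1 (head-on collision)
    return ((X1 * X2 + Z1 * Z2) / N2 - L1 * L2 / g_phi) / (m1 * m2)

# halving eps multiplies gamma by ~4, i.e. gamma ~ 1/eps^2
assert 3.9 < gamma(1e-4) / gamma(2e-4) < 4.1
```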
It is seen from the above consideration that under a continuous change of the parameter $\varepsilon $, the system passes through the state of the extremal black hole when $\varepsilon =0$. For $\varepsilon \ll 1$, eq. (\ref{cm}) gives us now \begin{equation} E_{c.m.}^{2}\approx \frac{4X_{1}(r_{0},\theta )X_{2}(r_{0},\theta )}{B(r_{0},\theta )r_{0}^{2}\varepsilon ^{2}}. \label{ed} \end{equation} If one tries to repeat the procedure for nonextremal would-be horizons, one is led to take real distinct roots in (\ref{nb}). However, this is inconsistent with assumption (i) since $N^{2}$ changes sign when $r$ passes through $r_{-}$ and $r_{+}$. Therefore, the effect under consideration is impossible. It is worth recalling that, by contrast, the BSW effect for nonextremal horizons is possible \cite{gp}, \cite{prd}. \subsection{Example: the Reissner-Nordstr\"{o}m metric} To illustrate the general situation, one can compare it to the previous results for the RN metric \cite{rnn}. In this case, \begin{equation} N^{2}=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}. \label{nq} \end{equation} Let, for simplicity, two particles have the same mass $m$ and let the collision occur at the point where $N^{2}$ reaches its minimum value. Then, according to eq. 16 of \cite{rnn}, \begin{equation} E_{c.m.}^{2}=\frac{4m^{2}}{1-\frac{M^{2}}{Q^{2}}}\text{.} \end{equation} Thus, unbound $E_{c.m.}^{2}$ corresponds to $Q\rightarrow M$, so the metric looks ``almost'' like the extremal RN black hole. There is a naked singularity inside at $r=0$ due to the last term in (\ref{nq}). This term is also responsible for the repulsion and reflection of the first particle. However, in a general case one can imagine some distribution of matter which smooths the singularity. \section{Mechanism of collision} Let particle 1 pass over $r=r_{0}$ and bounce back at some $r_{1}<r_{0}$. Then, it collides with particle 2, which moves from the outside region.
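A quick numerical sketch of the RN example (in units $M=1$): the minimum of $N^{2}$ from (\ref{nq}) sits at $r=Q^{2}/M$ and equals $1-M^{2}/Q^{2}$, so the quoted $E_{c.m.}^{2}$ is simply $4m^{2}/N_{\min }^{2}$ and blows up as $Q\rightarrow M$:

```python
# RN example: N^2 from eq. (nq) is minimal at r = Q^2/M, where it equals
# 1 - M^2/Q^2, so E_cm^2 = 4 m^2 / N^2_min diverges as Q -> M.
# Units with M = 1; m is the common particle mass.
M, m = 1.0, 1.0

def N2(r, Q):
    return 1 - 2 * M / r + Q**2 / r**2

for eps in (1e-2, 1e-3, 1e-4):
    Q = M * (1 + eps)
    r_min = Q**2 / M                  # root of dN^2/dr = 0
    assert abs(N2(r_min, Q) - (1 - M**2 / Q**2)) < 1e-12
    E_cm2 = 4 * m**2 / N2(r_min, Q)   # the formula quoted above
    print(f"eps={eps:g}  N2_min={N2(r_min, Q):.3e}  E_cm^2={E_cm2:.3e}")
```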
If the point of collision is adjusted to be at $r_{0}$ or in its immediate vicinity, we obtain $E_{c.m.}^{2}\sim \varepsilon ^{-2}$ in accordance with (\ref{cm}), (\ref{la}). To make particle 1 reflect at $r=r_{1}$, some potential barrier should exist at $r<r_{0}$. In some cases it is infinite, as in the RN case \cite{rnn}. Then, the effect under discussion reveals itself for any Killing energies. However, even if the potential barrier is of some finite height, the effect of unbound $E_{c.m.}$ persists, although with a restriction on the permitted range of energies $E$. As $N\rightarrow 0$ near $r_{0}$ but $N=O(1)$ inside, it is clear that such a barrier does exist. To some extent, the situation resembles that for quasiblack holes (QBH) in that the horizon is ``almost'' formed but does not form. However, there are crucial differences. We do not require $N$ to be small everywhere inside, in contrast to the QBH case \cite{qbh}. And, now particles move in opposite directions before collision, whereas it was assumed in \cite{acqbh} that they move in the same direction before collision, the BSW effect being due to the difference in the energy scales outside and inside the quasihorizon. \section{Time before collision} It is instructive to evaluate the time needed for collision and compare it with the corresponding time in the case of the BSW process. As is known, if we want the BSW effect to occur, we should choose one of the particles to be ``critical'' (with fine-tuned parameters). And, for such a particle the proper time required to reach the horizon diverges \cite{ted}, \cite{gp}, \cite{prd}. As a result, this mechanism prevents the actual release of infinite energy, as it should be in any physically meaningful process: the energy $E_{c.m.}$ remains finite in any act of collision although it can be made as large as one likes.
Now, one can expect that in the situation under discussion the proper time remains finite since the particles are assumed to be usual, without special fine-tuning. Let us consider, for simplicity, motion in the equatorial plane $\theta =\frac{\pi }{2}$. It follows from (\ref{r}), (\ref{z}) that, in the absence of the turning point, motion between $r_{i}$ and $r_{f}<r_{i}$ takes the proper time \begin{equation} \tau =m\int_{r_{f}}^{r_{i}}\frac{drN}{\sqrt{A}Z}\text{.} \label{tau} \end{equation} As the particle is taken to be usual, $X>0$ everywhere and $Z>0$ is separated from zero. Assuming additionally that $\sqrt{A}\sim N$ (as happens, say, for the Kerr metric), we see that the integrand in (\ref{tau}) is finite, so $\tau $ is also finite. When a particle reflects from the turning point and returns to $r_{0}$, the corresponding time is also finite by the same reasoning. Thus, in a general model-independent way and without calculations, we can conclude that the proper time before collision is finite. For the coordinate time $t$, it follows from (\ref{t}) that \begin{equation} t=\int_{r_{f}}^{r_{i}}\frac{drX}{ZN\sqrt{A}}. \end{equation} This time is also finite. But, in contrast to $\tau $, the time of travel between $r_{i}$ and $r_{f}=r_{0}$ grows unbound when $\varepsilon \rightarrow 0$ in (\ref{e}). Taking $A=N^{2}b^{2}$, where $b$ is some model-dependent nonzero coefficient, we obtain for the time of motion between $r_{i}>r_{0}$ and $r_{0}$ \begin{equation} t_{0}\approx \frac{\pi }{2b(r_{0})\varepsilon B(r_{0},\frac{\pi }{2})}. \end{equation} If one takes into account the time for the back motion to $r_{0}$, $t$ acquires an additional factor of 2. Comparison to (\ref{ed}) gives us that \begin{equation} E_{c.m.}\sim t_{0}. \end{equation} The content of the present section agrees with that of Sec.~II~D of \cite{rnn}, where the particular case of the Reissner-Nordstr\"{o}m metric was considered.
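The contrast between the finite proper time and the divergent coordinate time can be illustrated numerically with a toy profile (a Python sketch; $b$, $B$, $r_{0}$ and the particle parameters below are arbitrary illustrative values, with $L=0$ so that $X=E$):

```python
from math import sqrt

# Toy check: with A = N^2 b^2 and N^2 from eq. (e), the coordinate time of
# travel from r_i to r0 grows like 1/eps while the proper time stays finite.
b, B, r0 = 1.0, 1.0, 1.0
E, m, r_i = 1.0, 0.8, 2.0

def times(eps, steps=400000):
    t = tau = 0.0
    dr = (r_i - r0) / steps
    for k in range(steps):                        # midpoint rule
        r = r0 + (k + 0.5) * dr
        N2 = B * ((r - r0)**2 + r0**2 * eps**2)   # eq. (e)
        N, sqrtA = sqrt(N2), sqrt(N2) * b         # A = N^2 b^2
        Z = sqrt(E**2 - N2 * m**2)                # eq. (z) with L = 0
        t += E * dr / (Z * N * sqrtA)             # integrand of t
        tau += m * N * dr / (sqrtA * Z)           # integrand of tau
    return t, tau

t1, tau1 = times(1e-3)
t2, tau2 = times(2e-3)
# t scales like 1/eps; tau is essentially unchanged
assert 1.8 < t1 / t2 < 2.2 and abs(tau1 - tau2) / tau1 < 0.01
```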
\section{Case of motion along the axis of symmetry} There is a case deserving special attention. Recall that the coordinate $\theta $ has the meaning of the polar angle. Let us consider motion along the polar axis, so $\theta =0$ or $\theta =\pi $. The regularity of the metric near the axis $\theta =0$ requires $g_{\phi }\sim \theta ^{2}$. Then, the finiteness of the term $L^{2}/g_{\phi }$ in (\ref{z}) entails $L=0$, so in (\ref{x}) $X=E$. It follows from (\ref{z}) that \begin{equation} Z^{2}=E^{2}-N^{2}m^{2}. \label{l0} \end{equation} Let $N(0,0)=N_{0}$ and $N(\infty ,0)=N_{\infty }$. Then, we can choose any value of $E$ such that $E<mN_{0}$ since this guarantees the presence of the turning point. If $N_{\infty }<N_{0}$, a particle with an intermediate energy $mN_{\infty }<E<mN_{0}$ can fall from infinity. Otherwise, it oscillates between turning points. Assuming that the representation (\ref{e}) is valid with $\varepsilon \ll 1$, we obtain the unbound energy $E_{c.m.}$ according to (\ref{la}). This generalizes the observation made in \cite{ax} for the Kerr metric. The same formula (\ref{l0}) applies to the case $\theta =\frac{\pi }{2}=\mathrm{const}$, $L=0$. \section{After collision} Let us consider the scenario described above. One sends particle 1 towards the centre and, later, particle 2. Particle 1 enters the inner region, reflects from the potential barrier and collides with particle 2 near $r=r_{0}$. As a result, particles 3 and 4 are created. We assume that in the act of collision both the energy and angular momentum are conserved: \begin{equation} E_{1}+E_{2}=E_{3}+E_{4}\text{,} \label{en} \end{equation} \begin{equation} L_{1}+L_{2}=L_{3}+L_{4}\,\text{,} \label{l} \end{equation} whence \begin{equation} X_{1}+X_{2}=X_{3}+X_{4}\text{.} \label{x12} \end{equation} We also assume the forward-in-time condition \begin{equation} X_{i}>0,\text{ }1\leq i\leq 4\text{,} \label{ft} \end{equation} which follows from (\ref{t}) and $\dot{t}>0$.
In contrast to the black hole case, where $N=0$ on the horizon, now $N>0$ everywhere, so the case $X_{i}=0$ is excluded. Also, we assume that $X_{i}=O(1)$ do not become small near $r_{0}$. The conservation of the radial momentum gives us, according to (\ref{r}), that \begin{equation} Z_{1}-Z_{2}=Z_{4}-Z_{3}\text{.} \label{z12} \end{equation} For small $N$, \begin{equation} Z_{i}\approx X_{i}-\frac{N^{2}}{2X_{i}}\left( \frac{L_{i}^{2}}{g_{\phi }}+m_{i}^{2}\right) \text{.} \label{zn} \end{equation} The main terms give us \begin{equation} X_{1}-X_{2}\approx X_{4}-X_{3}\text{.} \label{x34} \end{equation} It follows from (\ref{x12}), (\ref{x34}) that \begin{equation} X_{1}\approx X_{4}\text{, }X_{2}\approx X_{3}\text{.} \label{x3} \end{equation} The main corrections give us from (\ref{z12}), (\ref{zn}) that \begin{equation} \frac{1}{X_{2}}\left( \frac{L_{2}^{2}}{g_{\phi }}+\frac{L_{3}^{2}}{g_{\phi }}+m_{2}^{2}+m_{3}^{2}\right) \approx \frac{1}{X_{1}}\left( \frac{L_{1}^{2}}{g_{\phi }}+\frac{L_{4}^{2}}{g_{\phi }}+m_{1}^{2}+m_{4}^{2}\right) \text{,} \end{equation} where $g_{\phi }$ is taken at the point of collision $r=r_{0}$. For fixed $E_{1,2}$ and $L_{1,2}$ (hence, $X_{1}$ and $X_{2}$), we are interested in the solutions for which $E_{3}$ grows with (\ref{ft}) satisfied. According to (\ref{x3}), $X_{3}=X_{2}$ is also fixed, hence this implies that $L_{3}=\frac{E_{3}-X_{3}}{\omega }$ is large. Let us assume that $\omega >0$ everywhere. Our goal can be achieved if, say, $E_{4}\rightarrow -\infty $, $L_{4}\rightarrow -\infty $, $E_{3}\rightarrow \infty $, $L_{3}\rightarrow \infty $. This implies that orbits with large negative energy do exist. In principle, this is possible even in the absence of the horizon. Then, insofar as all masses $m_{i}\ll M$, where $M$ is the mass corresponding to the metric (\ref{met}), there are no bounds on the ratio $\frac{E_{3}}{E_{1}+E_{2}}$ on this scale. In this sense, this is the standard situation for the Penrose process.
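The relations (\ref{x3}) can be checked directly: fixing $X_{1}$, $X_{2}$ and the combinations $L_{i}^{2}/g_{\phi }+m_{i}^{2}$, one solves (\ref{x12}) and (\ref{z12}) numerically for $X_{3}$, $X_{4}$. A toy Python sketch (all numbers are arbitrary illustrative values, with $p^{\theta }=0$):

```python
from math import sqrt

# Solve eqs. (x12) and (z12) for X3, X4 at small N and verify eq. (x3):
# X4 ~ X1 and X3 ~ X2 up to O(N^2) corrections, cf. eq. (zn).
N2 = 1e-6                            # N^2 at the collision point
c = [0.0, 1.09, 1.04, 1.25, 1.16]    # c_i = L_i^2/g_phi + m_i^2, i = 1..4
X1, X2 = 1.0, 1.5

def Z(X, ci):
    return sqrt(X * X - N2 * ci)     # eq. (z) at the collision point

S = X1 + X2                          # eq. (x12): X3 + X4 = S
target = Z(X1, c[1]) - Z(X2, c[2])   # eq. (z12): Z4 - Z3 must equal this

# bisection in X4; Z4 - Z3 is increasing in X4
lo, hi = 0.1, S - 0.1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Z(mid, c[4]) - Z(S - mid, c[3]) < target:
        lo = mid
    else:
        hi = mid
X4 = 0.5 * (lo + hi)
X3 = S - X4
assert abs(X4 - X1) < 1e-5 and abs(X3 - X2) < 1e-5
```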
\section{Example: regular star-like configurations versus vacuum-like black holes} In this section, we give an example of physically relevant objects to which the scenario of collision under discussion can apply. (Another example, based on the Bardeen spacetime \cite{bard}, was given in Sec. IV of \cite{nn1}.) In a sense, the RN or Kerr naked singularity can be obtained by deformation of the metric of the corresponding extremal black hole. In a similar way, the required regular star-like configuration can be obtained by deformation of a regular extremal black hole. As such an example, we can choose the regular black hole with the de Sitter core proposed in \cite{dym}. Let us consider the spherically symmetric metric \begin{equation} ds^{2}=-fdt^{2}+\frac{du^{2}}{f}+r^{2}(u)d\omega ^{2}\text{.} \label{f} \end{equation} In \cite{dym}, it is assumed that the matter satisfies the vacuum-like equation of state $p_{r}=-\rho $ ($p_{r}$ is the radial pressure, $\rho $ is the energy density). Then, $r=u$. However, this is not necessary. We can consider (\ref{f}) with more general equations of state (which become vacuum-like near the origin to ensure regularity). We only require (i) the de Sitter core for small $r$ \cite{dym}, (ii) asymptotic flatness, $f\rightarrow 1$ when $u\rightarrow \infty $. As $f=1$ both at infinity and near the origin $r=0$, it must have a minimum in between. For simplicity, we assume that there is only one such minimum. If $f$ has two zeros, we have a nonextremal black hole; if it has one double zero at the point of minimum, the black hole becomes extremal; if $f>0$, there is no black hole at all (a star-like configuration). The main purpose of \cite{dym} was to obtain a regular black hole. On the contrary, now we are interested in a star-like configuration which is close to the extremal black hole in the sense described above (see eq. (\ref{e})).
If \begin{equation} f=f_{0}+a(u-u_{0})^{2} \end{equation} near $u=u_{0}$, and $f_{0}\rightarrow 0$, the previous general consideration applies. Then, in the background (\ref{f}) with the aforementioned properties, one can obtain unbound $E_{c.m.}$ for test particles without a horizon or naked singularity. In the absence of the ergoregion, extraction of energy does not occur. However, due to large $E_{c.m.}$, creation of superheavy particles is possible. \section{Shells} Instead of test particles, let us consider the collision of shells which move in opposite directions. We can divide the act of collision into a set of individual collisions of small constituents. For example, if the shells are spherical, the natural division consists in collisions between particles with the same values of the angle variables. Then, for each pair of colliding elements, we can again apply eqs. (\ref{cm}), (\ref{ga}). The values of $X_{i}$ and $Z_{i}$ can be obtained (for given initial conditions) from the equations of motion. These equations differ from (\ref{t}) - (\ref{z}) due to the effect of self-gravitation. Say, for the collision of charged shells, one obtains (see eq. 74 of \cite{rnn}) that \begin{equation} E_{c.m.}^{2}=2m^{2}+\frac{2m^{2}}{f}(\left\vert \dot{R}_{1}\dot{R}_{2}\right\vert +\sqrt{\dot{R}_{1}^{2}+f}\sqrt{\dot{R}_{2}^{2}+f})\text{,} \end{equation} where $f\equiv N^{2}$ is taken in the region between the shells in the coincidence limit. The law that governs the dependence $R_{i}(\tau )$ is different for test particles and for the constituents of self-gravitating shells and can be described in terms of different effective potentials. However, the structure of the expression (\ref{ga}) is universal. Therefore, insofar as $f$ is small, $E_{c.m.}^{2}$ is large. And, the previous explanation based on the representation (\ref{e}) is still valid. (It is worth stressing that it is important that the shells move in opposite directions before collision.
For motion in the same direction, the effect of unbound $E_{c.m.}$ is absent \cite{sh}.) Thus, inasmuch as we are interested only in the effect of gaining unbound $E_{c.m.}$, there is no need to analyze the whole history of the shell (which, however, is of interest by itself). What is important is the smallness of $N^{2}$ and the possibility that an inner shell bounces back from some surface. This can be achieved either due to a naked singularity or due to an effective potential barrier of finite height. Actually, some restrictions (not related to our subject) on $E_{c.m.}$ come from the requirement that the description of the shells be macroscopic, so they should contain a large number of constituents (see Sec. III C of \cite{rnn}). \section{Discussion and conclusion} In previous scenarios, the following difficulties were present: (i) the necessity to ensure fine-tuning, (ii) severe bounds on the energy of the products of collisions measured at infinity, (iii) if collisions are arranged due to naked singularities, a problem with cosmic censorship arises, (iv) if collisions with large $E_{c.m.}$ occur with no horizons or naked singularities, this requires effects of self-gravity, so for test particles such collisions could not be realized. Meanwhile, we now see that all these difficulties can be avoided for regular star-like configurations, so the effect exists even for test particles. In particular, if there exists an ergoregion (which is, in principle, possible even without a horizon; see an example in \cite{noh}), the collisional Penrose process should become much more efficient than for the BSW effect. We have described a unified picture of the scenario that ensures unbound $E_{c.m.}$ in head-on collisions with metrics close to forming a horizon, but when the horizon does not form. Apart from this, an advantage of the collisions under discussion is that we can safely neglect the role of gravitational radiation \cite{berti}.
Such radiation bounds the BSW effect, since it ``spoils'' the special (critical) trajectories with the fine-tuning of parameters required for that effect. However, fine-tuning is now not required at all, so a small perturbation due to an additional force does not change the whole picture qualitatively. It is instructive to classify the main types of the effect under discussion; see Table 1.

\begin{tabular}{|l|l|l|l|l|l|}
\hline
Relevant references & Relative direction & Horizon & Naked singularity & Fine-tuning & Self-gravity \\ \hline
\cite{psk}, \cite{ps} & $-$ & $+$ & $-$ & $-$ & $-$ \\ \hline
\cite{ban} & $+$ & $+$ & $-$ & $+$ & $-$ \\ \hline
\cite{acqbh} & $+$ & $-$ & $-$ & $-$ & $-$ \\ \hline
\cite{kerr}, \cite{rnn}, \cite{ax} & $-$ & $-$ & $+$ & $-$ & $-$ \\ \hline
\cite{4} & $-$ & $-$ & $-$ & $-$ & $+$ \\ \hline
present paper & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline
\end{tabular}

Table 1. Different types of high-energy processes with horizons or would-be horizons. For brevity, in this table ``self-gravity'' means ``necessity of self-gravity to have unbound $E_{c.m.}$'' (collision of shells), etc. Thus the present kind of high-energy collision in a strong gravitational field looks promising, since it relaxes or weakens the strong restrictions typical of the BSW effect that are required for observation (at least in principle). In doing so, only the vicinity of the would-be horizon is important, in accordance with the spirit of black hole physics (even in the absence of a black hole!). In this sense, in all the effects described in Table 1, it is necessary that the system possess either a true horizon or a time-like surface which, in a sense, is close to it.
\section{Introduction} Real-life networks are of finite size, loopy, and display strong correlations. This complexity represents a challenge from several points of view: first, it is computationally expensive to investigate the network topology and to simulate dynamical systems upon it; moreover, it rapidly becomes analytically intractable, and one is obliged to make simplifying assumptions in order to perform calculations. While this effort is crucial to make problems practically tractable, it also embeds a deeper question: given the complexity of networks and their omnipresence in the real world, it is fundamental to distinguish between the essential variables, which are able to capture the main features of the topology, and those details which are unessential for a minimal though complete description. One such very fruitful simplification is \emph{sparseness}: the networks considered have in general a few links per vertex while the network size tends to infinity. More precisely, a network is \emph{sparse} if $k/N\rightarrow0$ when $N\rightarrow\infty$, $k$ being the average degree. This basic hypothesis leads to a crucial consequence: locally, the network can be approximated by a tree, which means the absence of \emph{finite loops}, i.e. finite closed paths, among the vertices. Sparseness and the local tree-likeness have proved essential to analytical studies of dynamical systems on networks: focusing just on small-world networks, we cite studies on the Ising model \cite{exactsolIsing_lopes2004,herrero2002ising}, percolation \cite{newman1999scaling} and, more recently, on the Kuramoto model (for a more complete overview, see \cite{dorogovtsev2008critical}).
Therefore, the advantage in terms of numerical computation is evident: in general, both numerical studies investigating the network topology \cite{barrat2000properties,newman2000mean} and studies of critical phenomena on networks \cite{exactsolIsing_lopes2004,herrero2002ising,kim_smallworld2001,medvedyeva2003dynamic} exploit the assumption of sparseness in its strongest form, taking the degree as constant. Nevertheless, with increasing link density, there exist networks which are still sparse, fulfilling the aforementioned condition, but which can no longer ensure tree-likeness because of the heavy presence of loops. It could hence be argued that the link density plays a non-negligible role both in the topological properties of those networks and in dynamical models defined upon them. Indeed, this is the case for the $XY$-rotors model on regular one-dimensional chains: we show that the passage between a sparse network in the sense of $k=\mathcal{O}(1)$ and a dense one ($k=\mathcal{O}(N)$) implies the emergence of a new metastable state for which the thermodynamic order parameter does not relax at equilibrium \cite{deNigris2013}. The link density hence triggers a non-trivial effect on the thermodynamic behavior of the $XY$ model, which by itself is known to possess a rich phenomenology, investigated in several numerical studies \cite{jain1986monte,janke1991numerical,kim2007novel,lee1984discrete,loft1987numerical,mccarthy1986numerical} on 2- and 3-D lattices. In particular, we recall, as one example among many, that the two-dimensional case with nearest-neighbor coupling is characterized by the famous Berezinskii-Kosterlitz-Thouless phase transition \cite{kosterlitz2002ordering,leoncini1998hamiltonian}, which implies that the correlation function switches from a power-law decay at low temperatures to an exponential one in the high-temperature regime.
In the mean-field limit, the $XY$ model, called the Hamiltonian Mean Field (HMF) model in this case, displays a wide variety of behaviors as well, this complexity being strongly entangled with the lack of additivity. Among its peculiarities we cite the presence of a second-order phase transition of the magnetization \cite{Campa2009} and, even more noteworthy, the presence of non-equilibrium quasi-stationary states of diverging duration in the thermodynamic limit \cite{antoniazzi2007maximum,ettoumi2011linear,latora2002fingerprints,chavanis2005dynamics,Levin2012_ergodicity,levin2011_coreHalo}. More recently, these models have been confronted with more complex network topologies: for instance, studies exist concerning the HMF model on random graphs \cite{ciani2011long} where, varying the link density, a second-order phase transition of the global magnetization is recovered for every density value in the thermodynamic limit. Furthermore, studies of the $XY$ model on small-world networks \cite{kim_smallworld2001,medvedyeva2003dynamic} proved that this lattice topology also supports complex thermodynamical responses of the model: a mean-field transition of the order parameter is retrieved, and its critical energy seems to depend on the network parameters. \\ The present work follows this line of research: we will focus, in the first instance, on regular networks, and then we will shuffle this regular topology with the introduction of a controlled amount of randomness. The first part of the paper dealing with regular networks, we detail the analytical calculations presented in \cite{deNigris2013}, showing that tuning the link density allows one to pass from a short-range regime to a long-range one. The analytical approach is preceded in Sec. \ref{sec:Thermodynamic-Behaviour regular} by the results of numerical simulations, which are also more extensively illustrated than in \cite{deNigris2013}.
Furthermore, we show that there exists, between those two regimes, a peculiar metastable state characterized by huge fluctuations of the order parameter. We then address, in the second part of the paper, small-world networks using the Watts-Strogatz model \cite{watts_strogatz1998SW}, aiming to shed light on the interplay between the link density and the injection of randomness in the network. In this regime, acting on the link density $\gamma$ and on the rewiring probability $p$, we first investigate the crossover from the regular lattice to the small-world topology. In Sec.\ref{sec:Thermodynamic-Behaviour} we consider the dynamics of the $XY$-rotors model on small-world networks and we show how the emergence of global coherence, via a mean-field phase transition of the order parameter, strongly depends on the topological conditions fixed by $p$ and $\gamma$. Furthermore, we discuss in the last part how this influence turns out to be \emph{quantitative}, affecting the critical energy $\varepsilon_{c}$ at which the phase transition occurs. \section{The $XY$-Rotors Model\label{sec:The-XY-Rotors-Model}} The $XY$-rotors model describes a set of $N$ spins interacting pairwise: each spin sits on a site of a one-dimensional ring and is assigned two canonically conjugated variables $\{\theta_{i},p_{i}\}$, $\theta_{i}\in\left[-\pi;\pi\right]$ being a rotation angle. The $XY$ Hamiltonian reads \cite{antoni1995clustering,leoncini1998hamiltonian}: \begin{equation} H=\sum_{i=1}^{N}\frac{p_{i}^{2}}{2}+\frac{J}{2k}\sum_{i,j}^{N}a_{i,j}(1-\cos(\theta_{i}-\theta_{j})),\label{eq:potential HMF} \end{equation} where $a_{i,j}$ is the matrix encoding the spin connections: \begin{equation} a_{i,j}=\begin{cases} 1 & \mbox{if } i\neq j \mbox{ and $i$, $j$ are connected}\\ 0 & \mbox{otherwise} \end{cases}.\label{eq:adiacency matrix} \end{equation} We take $J>0$, so that we are in the ferromagnetic case; in the following we set $J=1$, as well as the lattice spacing.
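For concreteness, the adjacency matrix of Eq.~(\ref{eq:adiacency matrix}) for the regular ring considered below (each spin coupled to its $k$ nearest neighbours, $k/2$ on each side) can be sketched as follows; the function name and the sizes are our choice.

```python
import numpy as np

def ring_adjacency(N, k):
    """Adjacency matrix a_ij of a regular ring: each of the N spins is
    linked to its k nearest neighbours (k even, k/2 on each side)."""
    a = np.zeros((N, N), dtype=int)
    for i in range(N):
        for d in range(1, k // 2 + 1):
            a[i, (i + d) % N] = 1
            a[i, (i - d) % N] = 1
    return a

N, k, J = 16, 4, 1.0
a = ring_adjacency(N, k)
theta = np.zeros(N)                  # fully aligned configuration
# potential part of the Hamiltonian: vanishes when all spins are aligned
V = (J / (2 * k)) * np.sum(a * (1.0 - np.cos(theta[:, None] - theta[None, :])))
```

The matrix is symmetric with zero diagonal and uniform row sums equal to $k$, and the aligned state has zero potential energy, as expected from the ferromagnetic coupling.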
Finally, the $1/k$ factor in Eq.~(\ref{eq:potential HMF}) ensures that the energy is an extensive quantity. $k$ is referred to as the \emph{degree} and, to control the density of links in the network, we define it as: \begin{equation} k=\frac{2^{2-\gamma}(N-1)^{\gamma}}{N}\sim2^{2-\gamma}N^{\gamma-1}.\label{eq:degree} \end{equation} Practically, we take the integer part of Eq.~(\ref{eq:degree}) since, once $\gamma$ and $N$ are fixed, $k$ is in general not an integer. Since we assign $k$ links per spin and we set periodic boundary conditions, the system is translationally invariant. The dynamics is given by the set of Hamilton equations: \begin{eqnarray} \dot{\theta_{i}} & = & \frac{\partial H}{\partial p_{i}}=p_{i},\label{eq:eq dynamics}\\ \dot{p_{i}} & = & -\frac{\partial H}{\partial\theta_{i}}=-\frac{J}{k}\sum_{j\in V_{i}}\sin\left(\theta_{i}-\theta_{j}\right)\nonumber \end{eqnarray} where $V_{i}$ represents the set of neighbors of rotor $i$. In order to gain insight into the macroscopic behavior, a global parameter, the magnetization, is defined by \begin{eqnarray} \mathbf{M} & = & \frac{1}{N}\left(\begin{array}{c} \sum_{i}\cos\theta_{i}\\ \sum_{i}\sin\theta_{i} \end{array}\right)=M\left(\begin{array}{c} \cos\varphi\\ \sin\varphi \end{array}\right)\label{eq:Magnetisation_def} \end{eqnarray} we expect finite values of $M$ to indicate the emergence of a coherent inhomogeneous state, while a vanishing magnetization signals the absence of long-range order. We first study the response of the total equilibrium magnetization $M$ to the change of the underlying network via the $\gamma$ parameter. Practically, for each $\gamma$, we perform simulations within the microcanonical ensemble, by direct numerical integration of Eqs.~(\ref{eq:eq dynamics}) with the fifth-order optimal symplectic integrator described in \cite{mclachlan1999accuracy}.
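The degree prescription of Eq.~(\ref{eq:degree}) can be checked directly. A minimal sketch (the values of $\gamma$ and $N$ are our choice) shows that the integer part of $k$ interpolates monotonically between a few links per spin at $\gamma=1$ and full coupling $k\simeq N-1$ at $\gamma=2$, with $k\propto\sqrt{N}$ at the intermediate value $\gamma=1.5$.

```python
def degree(N, gamma):
    """Integer part of k = 2^(2 - gamma) (N - 1)^gamma / N."""
    return int(2 ** (2 - gamma) * (N - 1) ** gamma / N)

N = 2 ** 12
ks = [degree(N, g) for g in (1.0, 1.25, 1.5, 1.75, 2.0)]
```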
The initial conditions for angles and momenta are picked from Gaussian distributions with identical variance (which corresponds to a low-temperature setting) and, to check the numerical integration, we monitor the conservation of the two constants of motion preserved by the dynamics: the energy $E=H$ and the total angular momentum $P=\sum_{i}p_{i}$, which we have set, without loss of generality, to $P=0$. Finally, the time step is $\Delta t=0.05$, and we average the thermodynamic quantities over time only when the system has reached equilibrium. \section{Thermodynamic Behavior on Regular Lattices\label{sec:Thermodynamic-Behaviour regular}} \subsection{Numerical Computation\label{sub::-Numerical-Computation}} The regular network that we consider is a one-dimensional chain of $N$ spins (rotors) with periodic boundary conditions, in which each spin is connected to its $k$ nearest neighbors. By tuning the parameter $\gamma$, $1\leq\gamma\leq2$, we act on the link density of the network. For $\gamma=1\,\,(k=2)$ the spins are connected to their nearest neighbors, while for $\gamma=2\,\,(k=N-1)$ the network is fully coupled. Heuristically, changing the value of $\gamma$ corresponds to changing the \emph{range} of interaction of each spin. Two limit behaviors then naturally emerge from this approach: the first is $\gamma\rightarrow1$, in which we expect the system to behave progressively like a one-dimensional short-range system with a continuous symmetry group, and so without any phase transition. On the other side, the $\gamma\rightarrow2$ limit leads to the mean-field regime, and we expect the HMF transition of the magnetization to appear above a specific threshold of the degree. We find this boundary value at $\gamma=1.5$, so that the two aforementioned limits translate more precisely into the two intervals $\gamma<1.5$ and $\gamma>1.5$.
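As an illustration of the integration scheme (not the fifth-order integrator of \cite{mclachlan1999accuracy} used in the paper, but a minimal second-order leapfrog of our own), the following sketch evolves Eqs.~(\ref{eq:eq dynamics}) on a nearest-neighbour ring with Gaussian initial data and monitors the two constants of motion $E$ and $P$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, J, dt = 32, 2, 1.0, 0.05

# Gaussian low-temperature initial data on a nearest-neighbour ring (k = 2)
theta = 0.1 * rng.standard_normal(N)
p = 0.1 * rng.standard_normal(N)
p -= p.mean()                        # total angular momentum P = 0

def force(theta):
    # dot p_i = -(J/k) sum_{j in V_i} sin(theta_i - theta_j), V_i = {i-1, i+1}
    return -(J / k) * (np.sin(theta - np.roll(theta, 1))
                       + np.sin(theta - np.roll(theta, -1)))

def energy(theta, p):
    # the ordered double sum of the Hamiltonian counts each ring bond twice
    return 0.5 * np.sum(p ** 2) \
        + (J / k) * np.sum(1.0 - np.cos(theta - np.roll(theta, -1)))

E0, P0 = energy(theta, p), p.sum()
for _ in range(2000):                # kick-drift-kick leapfrog
    p += 0.5 * dt * force(theta)
    theta += dt * p
    p += 0.5 * dt * force(theta)
E1, P1 = energy(theta, p), p.sum()
```

Being symplectic, the scheme keeps the energy error bounded over long runs, and the momentum is conserved to round-off because the pairwise forces cancel in the sum.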
Practically, for each value of $\gamma$, we monitor the average magnetization $\overline{M(N,\varepsilon)}$ (the bar indicates the temporal mean) for different sizes $N$ and for every energy density $\varepsilon=E/N$ in the physical range. The temporal mean is computed on the second half of the simulations: we start with the Gaussian initial conditions described in Sec. \ref{sec:The-XY-Rotors-Model} and we simulate the dynamics, calculating the magnetization at each time step. When the system reaches a stationary state for the magnetization, we take the temporal mean as the equilibrium value. We start our analysis with the $\gamma<1.5$ interval. The simulations are displayed \begin{figure} (a)\includegraphics[width=7.5cm, keepaspectratio]{Fig1a.eps}\\ (b)\includegraphics[width=7.5cm, keepaspectratio]{Fig1b.eps}\caption{\label{fig:low gamma}(a) Equilibrium magnetization versus energy density for $\gamma=1.25$ and different sizes. The error bars are of the size of the dots; (b) Residual magnetization for $\gamma=1.25$ at $\varepsilon=0.1$ versus the system size.} \end{figure} in Figs.~\ref{fig:low gamma}a-b and, as mentioned, the magnetization smoothly vanishes with the energy (Fig. \ref{fig:low gamma}a). We recall that, for low energies, the magnetization can be non-zero as a finite-size effect, so the results displayed should depend on the system size. This is confirmed in Fig.~\ref{fig:low gamma}, where the trend of the magnetization to vanish with increasing size is exhibited. To check with even larger sizes, we consider in Fig.~\ref{fig:low gamma}b the magnetization for a small energy density $\varepsilon=0.1$ and several sizes. The results clearly point out that the magnetization vanishes in the thermodynamic limit. When looking at relaxation scales, we found that larger sizes take more time to relax to equilibrium; typically, in our simulations we take as final time $t_{f}=20000$ for sizes up to $N=2^{16}$, and $t_{f}=30000$ for $N>2^{18}$.
Given these numerical results, we conclude that in the $\gamma<1.5$ interval the system is short-ranged and the Mermin-Wagner theorem applies, forcing the order parameter to vanish. Nevertheless, if long-range order is not possible, quasi-long-range order could still entail an infinite-order phase transition of the correlation function, as in the two-dimensional $XY$ model with nearest-neighbor interactions. We recall that this particular type of critical phenomenon, first detected by Berezinskii, Kosterlitz and Thouless \cite{kosterlitz2002ordering}, is characterized by two different types of decay of the correlation function with distance: a power-law or an exponential decay, respectively for low and high temperatures. In order to look for such a possibility, we computed the correlation function \[ c(j)=\frac{1}{N}\sum_{i=1}^{N}\cos(\theta_{i}-\theta_{i+j\left[N\right]})\:, \] for every $\varepsilon$ in the considered range. The results shown in Fig. \ref{fig:correlations 1.25} \begin{figure} \includegraphics[width=8cm, keepaspectratio]{Fig2.eps}\caption{(color online) Correlation function $c(j)$ for $\gamma=1.25$ and $N=2^{14}$.\label{fig:correlations 1.25}} \end{figure} indicate that the decay is exponential also at low energies, demonstrating the absence of the aforementioned phase transition. This could have been anticipated from the fact that finite-size effects on the magnetization, even though present, were small; but possibly tricky effects of the boundary conditions could come into play, so it was worthwhile checking. To summarize, we can conclude that, for $\gamma<1.5$, the spin degree is still too low for the system to show long-range or quasi-long-range order. Interestingly, the short-range behavior is still at play even for configurations like $\gamma=1.4$, where each spin is under the influence of quite an important neighborhood, since $k\propto N^{0.4}$ in this case.
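The behaviour of the correlation function $c(j)$ in the two extreme cases (perfect order, complete disorder) can be sketched as follows; the size and the seed are our choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096

def corr(theta, j):
    """c(j) = (1/N) sum_i cos(theta_i - theta_{i+j mod N})."""
    return float(np.mean(np.cos(theta - np.roll(theta, -j))))

aligned = np.zeros(N)                        # perfectly ordered state
disordered = rng.uniform(-np.pi, np.pi, N)   # completely disordered state
c_aligned = [corr(aligned, j) for j in (0, 1, N // 4, N // 2)]
c_dis = [corr(disordered, j) for j in (1, N // 4, N // 2)]
```

For the aligned state $c(j)=1$ at every distance, while for independent random angles $c(j)$ fluctuates around zero with amplitude $\mathcal{O}(1/\sqrt{N})$ for $j\neq0$.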
Taking now into account the symmetric interval $\gamma>1.5$, the spins are connected enough to allow a coherent state to emerge: in Fig. \ref{fig:various gamma} \begin{figure} \includegraphics[width=8cm, keepaspectratio]{Fig3.eps}\caption{Equilibrium magnetization for $N=2^{16}$ and different $\gamma$. For $\gamma\neq1.5$ the error bars are of the size of the dots. \label{fig:various gamma}} \end{figure} the magnetization undergoes a second-order phase transition at $\varepsilon_{c}=0.75$, which is well described by the HMF analytical curve. Again, around the delicate zone of the phase transition, finite-size effects induce a shift between the theoretical prediction of the HMF model and the simulations, but they can be smoothed out by increasing the size. We recall that this phenomenon is also present for the full coupling $\gamma=2$. As a consequence, we find that, even with a degree remarkably lower than the full-coupling condition (e.g. for $\gamma=1.6$), each spin possesses enough connections to trigger the global behavior of the system and give a finite magnetization (at low energies). Of course, in both intervals $\gamma\lessgtr1.5$, the equilibrium magnetization is still affected by fluctuations because of finite-size effects. To monitor these, we measured the magnetization variance $\sigma^{2}=\overline{(M-\overline{M})^{2}}$ and we show in Fig.~\ref{fig:variance 1.5} that it scales with the system size like \begin{equation} \sigma^{2}\propto1/N.\label{eq:scaling sigma} \end{equation} This scaling is the one expected for the equilibrium state, thus confirming that the values in Figs.~\ref{fig:low gamma}a-\ref{fig:various gamma} are representative of such a state.
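The equilibrium scaling of Eq.~(\ref{eq:scaling sigma}) can be illustrated on uncorrelated configurations, for which $N\sigma^{2}$ of a magnetization component tends to the constant $1/2$. This is a sketch of ours, with sizes and sample numbers chosen for speed, not the dynamical data of Fig.~\ref{fig:variance 1.5}:

```python
import numpy as np

rng = np.random.default_rng(2)

def mx_variance(N, samples=2000):
    """Variance of the component M_x = (1/N) sum_i cos(theta_i) over
    independent disordered configurations; for uncorrelated spins
    Var(M_x) = 1/(2N), i.e. N sigma^2 -> 1/2."""
    theta = rng.uniform(-np.pi, np.pi, size=(samples, N))
    return float(np.cos(theta).mean(axis=1).var())

scaled = [N * mx_variance(N) for N in (256, 1024)]
```

The rescaled variances collapse onto a size-independent constant, which is precisely the behaviour that breaks down at $\gamma=1.5$, as discussed below.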
\begin{figure} (a)\includegraphics[width=8cm, keepaspectratio]{Fig4a.eps}\\ (b)\includegraphics[width=8cm, keepaspectratio]{Fig4b.eps}\caption{(color online) Scaling of the magnetization variance $\left\langle \sigma^{2}\right\rangle $ with the size for $\gamma=1.75$ (a) and $\gamma=1.5$ (b).\label{fig:variance 1.5}} \end{figure} Given the results presented in the previous discussions, a natural critical value appears which characterizes the shift from the short-range picture to the long-range one: $\gamma_{c}\simeq 1.5$. We decided to investigate the system behavior at this critical threshold, imposing $\gamma=\gamma_{c}$. Indeed, we expect the system to be in a peculiar state of its own, which cannot be labeled as short- or long-ranged. Results are depicted in Fig.~\ref{fig:various gamma}. We observe that for low energies, $0.3\lesssim\varepsilon\leq0.75$, the averaged magnetization is finite even when increasing the size, but it remains lower than the mean-field value. The effect is clearer when we look at its temporal behavior. It is indeed totally different from that of the other two regimes, and the order parameter $M$ shows fluctuations which are orders of magnitude larger than for the other $\gamma$ regimes. We show in Fig.~\ref{fig:fluctuations} a comparison of time series for the same energy and system size and different values of $\gamma$: namely $\gamma=1.75$, which displays a finite magnetization with small fluctuations, and $\gamma=1.5$, with large fluctuations. In order to check that these fluctuations are not an artifact of our initial conditions, and that the system does not simply relax on timescales larger than those of the previous configurations, we considered computation times up to a final time $t_{f}=200000$. Results are presented in Fig.~\ref{fig:fluctuations}, where it appears that this regime with large fluctuations persists.
We recall that for $\gamma\lessgtr1.5$ the simulation time was at most $t_{f}=30000$, and it was enough to reach a stationary state. Proceeding further, we notice that the amplitude of these fluctuations does not depend on the system size. We compare for instance $N=2^{12}$ to $N=2^{18}$ in Fig.~\ref{fig:fluctuations}c, and we conclude that, for the aforementioned energies, there is no significant amplitude decrease with the system size. More precisely, \begin{figure} (a)\includegraphics[width=8cm, keepaspectratio]{Fig5a.eps}\\ (b)\includegraphics[width=8cm, keepaspectratio]{Fig5b.eps}\\ (c)\includegraphics[width=8cm, keepaspectratio]{Fig5c.eps}\\ (d)\includegraphics[width=8cm, keepaspectratio]{Fig5d.eps} \caption{(color online) Time series for the magnetization with (a) $N=2^{18}$, $\varepsilon=0.60$; (b) $N=2^{12}$; $\varepsilon=0.44$; $T_{f}=200000$; (c) Comparison of the fluctuations amplitude of $N=2^{12}$ and $N=2^{18}$ with $\varepsilon=0.52$ and (d) of $N=2^{18}$ and $N=2^{20}$ with $\varepsilon=0.44$.\label{fig:fluctuations}} \end{figure} if we consider the variance $\sigma^{2}$ as before, it appears that the scaling of the variance mentioned in Eq.~(\ref{eq:scaling sigma}), consistent with the $\gamma\ne1.5$ regimes, is replaced by a flat behavior with increasing $N$ (see the results in Fig.~\ref{fig:variance 1.5}b). It is worth noticing that the influence of the system size can be retrieved not in the fluctuation amplitude but in the typical fluctuation time scale. In Figs. \ref{fig:fluctuations}c-d, it becomes obvious that the fluctuations appear to slow down with the system size. This dependence of the time scale on the size is reminiscent of out-of-equilibrium behavior in systems with long-range interactions, namely the lifetime of the Quasi-Stationary States (QSS) \cite{chavanis2008out,van2010stationary,ettoumi2011linear,Ettoumi2013}, and further investigations are ongoing to shed light on this effect and on its potential analogy with the HMF results.
Heuristically, for $\gamma=\gamma_{c}$ it is as if each spin does not possess enough connections to create a global order and establish the mean field but, nevertheless, the degree is sufficiently high ($\gamma=1.5$ corresponds to $k\propto\sqrt{N}$) to prevent the vanishing of the order parameter in the thermodynamic limit. The resulting behavior is reminiscent of a bistable regime oscillating between the $M=0$ configuration and the mean-field value, which corresponds to a finite magnetization; we may thus expect some kind of intermittent behavior. Finally, the flatness of the variance further suggests that we observe a state with infinite susceptibility $\chi$, considering its canonical definition \begin{equation} \chi\sim\lim_{N\rightarrow\infty}N\sigma^{2}.\label{eq:chi} \end{equation} To conclude our analysis, in symmetry with the $\gamma\lessgtr1.5$ cases, we looked for a signature of this non-trivial state in the correlation function, but the fluctuations heavily affect it too, so that it oscillates without showing a proper scaling. \subsection{Analytical Calculation\label{sub:Analytical-Calculation}} The numerical investigations illustrated above point out that the degree triggers the shift from the pure one-dimensional topology to the mean-field frame. We now tackle this issue analytically, aiming to retrieve the influence of the topology, encoded in the adjacency matrix $a_{i,j}$ (Eq.~(\ref{eq:adiacency matrix})), on the thermodynamic properties. We thus compute the magnetization in the low-energy regime and check whether the correct behavior is recovered, namely a zero magnetization for $\gamma<1.5$ and a finite value for $\gamma>1.5$. At low energies we have a clear separation between the two magnetization values: $M=0$ and the mean-field one, for which $M\rightarrow1$ as $\varepsilon\rightarrow0$.
In this limit, due to the ferromagnetic coupling, it is natural to assume that the differences $\theta_{i}-\theta_{j}$ are small when $a_{i,j}=1$, so that connected spins are mostly aligned in order to minimize the free energy. We can hence expand the Hamiltonian at leading order: \begin{equation} H=\sum_{i}\frac{p_{i}^{2}}{2}+\frac{J}{4k}\sum_{i,j}a_{i,j}(\theta_{i}-\theta_{j})^{2}\:,\label{eq:Hamilton_dl} \end{equation} so that our system reduces to a collection of oscillators connected by $a_{i,j}$. We then choose to represent the spin field as a superposition of modes, following the recipe given in Refs.~\cite{leoncini1998hamiltonian,leoncini2001dynamical}: \begin{equation} \begin{array}{c} \theta_{i}=\sum_{l=0}^{N-1}\alpha_{l}(t)\cos(\frac{2\pi li}{N}+\phi_{l})\\ p_{i}=\sum_{l=0}^{N-1}\dot{\alpha_{l}}(t)\cos(\frac{2\pi li}{N}+\phi_{l}) \end{array}.\label{eq:representation} \end{equation} In Eq.~(\ref{eq:representation}), we sum over $N$ modes so that the change of variables is linear, and we observe that, given the periodic boundary conditions, it just corresponds to performing a discrete Fourier transform. The amplitudes $\alpha_{l}$ are, in our approach, the carriers of the temporal behavior; hence the representation of the momenta $p_{i}$ is related to that of the angles via the first Hamilton equation, $p_{i}=\dot{\theta_{i}}$. The phases $\phi_{l}$ are randomly distributed on the circle to ensure that the momenta $p_{i}$ are Gaussian distributed in the limit $N\rightarrow\infty$, as theoretically predicted for the microcanonical ensemble. Following the approach described in \cite{leoncini2001dynamical}, if we consider different sets of phases $\{\phi_{l}\}_{m}$, labeled by $m$, we can interpret each set as a realization of the system, i.e. a trajectory in phase space.
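The statement that Eq.~(\ref{eq:representation}) is just a discrete Fourier transform, and the statistical role of the random phases, can both be verified numerically; this sketch (sizes, seed, and test angles are our choice) reconstructs an arbitrary real field from its Fourier amplitudes and phases, and checks by Monte Carlo the phase-average identity used below.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
theta = rng.standard_normal(N)

# Amplitudes and phases from the discrete Fourier transform of the field
F = np.fft.fft(theta)
alpha, phi = np.abs(F) / N, np.angle(F)

# Reconstruct theta_i = sum_l alpha_l cos(2 pi l i / N + phi_l)
i = np.arange(N)
theta_rec = sum(alpha[l] * np.cos(2 * np.pi * l * i / N + phi[l])
                for l in range(N))
err = float(np.max(np.abs(theta_rec - theta)))

# Monte Carlo check of <cos(x + phi_i) cos(y + phi_j)> = delta_ij / 2
M = 200000
phi1, phi2 = rng.uniform(-np.pi, np.pi, (2, M))
same = float(np.mean(np.cos(0.3 + phi1) ** 2))                  # i = j
diff = float(np.mean(np.cos(0.3 + phi1) * np.cos(0.7 + phi2)))  # i != j
```

The reconstruction is exact to round-off because, for a real field, the cosine superposition is precisely the real part of the inverse transform.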
Hence, averaging over the random phases corresponds to ensemble averaging and leads to \emph{dynamic} equations which, nevertheless, embed information about the \emph{thermodynamic state} of the system via the phase averaging. If we now inject Eq.~(\ref{eq:representation}) into the Hamiltonian (\ref{eq:Hamilton_dl}), we obtain for the kinetic part $K$: \begin{equation} \frac{\left\langle K\right\rangle }{N}=\frac{1}{N}\left\langle \sum_{i}\frac{p_{i}^{2}}{2}\right\rangle =\frac{1}{4}\sum_{l}\dot{\alpha}_{l}^{2},\label{eq:average kinetic} \end{equation} where $\left\langle ...\right\rangle $ stands for the average over the random phases. In Eq.~(\ref{eq:average kinetic}) we used the relation: \[ \left\langle \cos(k_{i}+\phi_{i})\cos(k_{j}+\phi_{j})\right\rangle =\frac{1}{2}\delta_{i,j}. \] For the potential, we note that the adjacency matrix $a_{i,j}$ is circulant, because of the definition of the regular network given in Sec.~\ref{sec:The-XY-Rotors-Model}, which is translationally invariant. Hence we can diagonalize it, obtaining a real spectrum $\{\lambda_{j}\}$, since $a_{i,j}$ is a real symmetric matrix. The analytical expression of the spectrum reads in general: \begin{equation} \lambda_{j}=\frac{1}{k}\sum_{l=1}^{N-1}c_{l}e^{\frac{2\pi ijl}{N}},\label{eq:general spectrum} \end{equation} where $c_{l}$ is the coefficient vector whose cyclic permutations compose the matrix $a_{i,j}$.
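A numerical check of the circulant structure can be sketched as follows (with $N$ and $k$ chosen by us): the eigenvalues of $a_{i,j}/k$ obtained by dense diagonalization coincide with the Fourier transform of the coefficient vector $c_{l}$, Eq.~(\ref{eq:general spectrum}), and with the closed form of Eq.~(\ref{eq:spectrum}).

```python
import numpy as np

N, k = 64, 8

# Coefficient vector c_l of the circulant adjacency matrix: k/2 neighbours
# on each side of the ring (c_l = c_{N-l})
c = np.zeros(N)
for l in range(1, k // 2 + 1):
    c[l] = c[N - l] = 1.0

# lambda_j = (1/k) sum_l c_l e^{2 pi i j l / N}; real for symmetric c_l,
# so a fast Fourier transform suffices
lam_fft = np.real(np.fft.fft(c)) / k

# Closed form, valid for j = 1, ..., N-1 (the limit j -> 0 gives lambda_0 = 1)
j = np.arange(1, N)
lam_closed = (np.sin((k + 1) * j * np.pi / N) / np.sin(j * np.pi / N) - 1.0) / k

# Cross-check against a dense eigendecomposition of the circulant matrix
a = np.array([[c[(jj - ii) % N] for jj in range(N)] for ii in range(N)])
lam_dense = np.linalg.eigvalsh(a / k)
```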
Because of the two symmetries $c_{l}=c_{N-l}$ and $e^{\frac{2\pi ij(N-l)}{N}}=e^{\frac{-2\pi ilj}{N}}$, Eq.~(\ref{eq:general spectrum}) can be split into two sums: \begin{equation} \lambda_{j}=\frac{1}{k}\left(\sum_{l=1}^{\frac{N}{2}}c_{l}e^{\frac{2\pi ijl}{N}}+\sum_{l=1}^{\frac{N}{2}}c_{l}e^{\frac{-2\pi ijl}{N}}\right), \end{equation} which can hence be written as the sum of the real parts (the coefficients $c_{l}$ vanishing for $l>k/2$): \begin{equation} \lambda_{j}=\frac{2}{k}\sum_{l=1}^{k/2}\cos(\frac{2\pi lj}{N})=\frac{1}{k}\left[\frac{\sin[(k+1)j\pi/N]}{\sin(j\pi/N)}-1\right],\label{eq:spectrum} \end{equation} where $k$ is the spin degree of Eq.~(\ref{eq:degree}). At leading order, the potential hence takes the form: \begin{equation} \frac{V}{N}=\frac{1}{4kN}\sum_{i,j}a_{i,j}(\theta_{i}-\theta_{j})^{2}=\frac{1}{2}\sum_{l}(1-\lambda_{l})\left|\hat{\theta_{l}}\right|^{2}\label{eq:linearised potential} \end{equation} In Eq.~(\ref{eq:linearised potential}) we used the identity \begin{equation} \frac{1}{kN}\sum_{i,j}a_{i,j}\theta_{i}\theta_{j}=\frac{1}{kN}\Theta^{T}P^{*}DP\Theta=\sum_{l}\lambda_{l}\left|\hat{\theta_{l}}\right|^{2}\label{eq:identity}, \end{equation} where $\Theta=\left( \theta_{1}...\theta_{N}\right)$ and $a_{i,j}=P^{*}DP$. In the latter equation, $D$ is the diagonal form of the adjacency matrix and $P^{-1}=P^{*}$, since $P$ is unitary. The identity in Eq.~(\ref{eq:identity}) comes from the fact that the eigenvectors of a circulant matrix of size $N$ are the columns of the unitary discrete Fourier transform matrix of the same size. We can then inject the linear-wave representation into Eq.~(\ref{eq:linearised potential}) and average over the phases, as we did for the kinetic part of the Hamiltonian: \[ \frac{\left\langle V\right\rangle }{N}=\left\langle \frac{1}{2}\sum_{l}(1-\lambda_{l})\left|\hat{\theta_{l}}\right|^{2}\right\rangle =\frac{1}{4}\sum_{l}(1-\lambda_{l})\alpha_{l}^{2}.
\] Having obtained the averaged Hamiltonian $\left\langle H\right\rangle =\left\langle K\right\rangle +\left\langle V\right\rangle $, we can deduce the \emph{averaged equations of motion}, as anticipated, via the second Hamilton equation \[ \frac{d}{dt}\left(\frac{\partial\left\langle H\right\rangle }{\partial\dot{\alpha_{l}}}\right)=-\frac{\partial\left\langle H\right\rangle }{\partial\alpha_{l}}\:, \] and obtain \begin{equation} \ddot{\alpha_{l}}=-(1-\lambda_{l})\alpha_{l}=-\omega_{l}^{2}\alpha_{l}.\label{eq:dispersion_relation} \end{equation} We hence have an equation for a harmonic oscillator whose frequency depends on the spectrum of the adjacency matrix and, consequently, on the spin degree. We note that this approach relies on our low-temperature approximation but, as mentioned, we shall make use of it since, depending on the value of $\gamma$, we expect two clearly defined regimes of zero or finite magnetization. Our system is now completely encoded in terms of the wave amplitudes $\{\alpha_{l}\}$ and frequencies $\{\omega_{l}\}$, which can be linked by observing that, at equilibrium, we have equipartition of the modes (the $p_{i}$'s are Gaussian): \[ T=\frac{1}{N}\sum_{i}\left\langle p_{i}^{2}\right\rangle =\frac{1}{2}\sum_{l}\alpha_{l}^{2}\omega_{l}^{2}\Rightarrow\alpha_{l}^{2}=\frac{2T}{N(1-\lambda_{l})}. \] In order to compute $M$, we apply the same procedure, meaning that we average over the phases its expression given by Eq.~(\ref{eq:Magnetisation_def}), after having substituted the representation of Eq.~(\ref{eq:representation}). We obtain \cite{leoncini1998hamiltonian}: \begin{equation} \left\langle \mathbf{M}\right\rangle =\prod_{l}J_{0}(\alpha_{l})(\cos\theta_{0},\sin\theta_{0}),\label{eq:average magnetisation} \end{equation} where $J_{0}$ is the zeroth-order Bessel function and $\theta_{0}$ is the average of the angles $\{\theta_{i}\}$, $\theta_{0}=\frac{1}{N}\sum_{i}{\theta_{i}}$.
This quantity is conserved because of translational invariance, giving a constant total momentum $P$ which is set to $P=0$ by our choice of initial conditions. As the final step to evaluate Eq.~(\ref{eq:average magnetisation}), we recall that we are dealing with a low-temperature approximation, so we can take the amplitudes $\alpha_{l}^{2}$ to be small at equilibrium and in the large system size limit \cite{leoncini2001dynamical}. This allows us to expand the product of Bessel functions to leading order and, taking the logarithm of Eq.~(\ref{eq:average magnetisation}), we finally obtain: \begin{equation} \ln\left(\left\langle M\right\rangle \right)=-\sum_{l}\frac{\alpha_{l}^{2}}{4}=-\frac{T}{2N}\sum_{l}\frac{1}{1-\lambda_{l}}.\label{eq:final magnetisation} \end{equation} Eq.~(\ref{eq:final magnetisation}) combines the thermodynamic information with the topological one through the matrix spectrum. While it achieves our purpose of matching these two levels of description, the spectrum in Eq.~(\ref{eq:spectrum}) now carries the complexity of the system, so that Eq.~(\ref{eq:final magnetisation}) has to be evaluated numerically. In Fig.~\ref{fig:magn analitical} \begin{figure} \includegraphics[width=8cm, keepaspectratio]{Fig6.eps}\caption{Analytical magnetization $\left\langle M\right\rangle $ from Eq.~(\ref{eq:final magnetisation}) for $T=0.1$ versus $\gamma$. Theory refers to the exact analytical solution of the HMF model.\label{fig:magn analitical}} \end{figure} we show, increasing the size, how this approximate expression captures the correct asymptotic behavior, giving the mean field value in the high-$\gamma$ regime and vanishing for low $\gamma$. The transition at $\gamma_{c}\simeq 1.5$ becomes sharper with increasing size, hence confirming its critical nature, as already pointed out by our numerical simulations of Sec.~\ref{sec:Thermodynamic-Behaviour regular}. 
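The numerical evaluation of Eq.~(\ref{eq:final magnetisation}) only requires the closed-form spectrum of Eq.~(\ref{eq:spectrum}). A minimal sketch in Python could read as follows; the values of $N$, $\gamma$ and $T$ are illustrative, and the even rounding of the degree $k\sim N^{\gamma-1}$ is our assumption about Eq.~(\ref{eq:degree}), not code taken from the simulations:

```python
import math

def spectrum(N, k):
    # eigenvalues of the normalised circulant adjacency matrix, Eq. (spectrum):
    # lambda_j = (1/k) [ sin((k+1) j pi / N) / sin(j pi / N) - 1 ], j = 1..N-1
    lam = []
    for j in range(1, N):
        s = math.sin(j * math.pi / N)
        lam.append((math.sin((k + 1) * j * math.pi / N) / s - 1.0) / k)
    return lam

def magnetisation(N, gamma, T):
    # degree k ~ N^(gamma-1), rounded to an even integer (our assumption)
    k = 2 * max(1, round(N ** (gamma - 1.0) / 2.0))
    lam = spectrum(N, k)
    # ln<M> = -(T / 2N) sum_l 1 / (1 - lambda_l), the j = 0 zero mode excluded
    log_m = -T / (2.0 * N) * sum(1.0 / (1.0 - l) for l in lam)
    return math.exp(log_m)

# low-gamma (quasi-one-dimensional) regime vs high-gamma (mean-field) regime
m_low = magnetisation(2 ** 10, 1.2, 0.1)
m_high = magnetisation(2 ** 10, 1.9, 0.1)
```

Consistently with Fig.~\ref{fig:magn analitical}, the estimate at low $\gamma$ is strongly suppressed with respect to the high-$\gamma$ one.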
\section{The Small World Network Model\label{sec:The-Model}} In Secs.~\ref{sub::-Numerical-Computation}-\ref{sub:Analytical-Calculation} we considered a regular chain as network topology and we illustrated how the degree drives the thermodynamic response of the $XY$-model on those lattices from the short-range regime to the long-range one. The natural next step in reorganizing the topology is to break the translational invariance of the regular chain previously considered and to introduce some \emph{randomness} in how the spins are connected. To this purpose, we use the Watts-Strogatz (W-S) model \cite{watts_strogatz1998SW} for small-world networks, which interpolates between a regular network and a random one by the progressive introduction of random long-range connections. Following the algorithm devised in \cite{watts_strogatz1998SW}, each link is reconnected with probability $p$ to another, randomly chosen vertex or is left untouched with probability $1-p$: long-range connections are hence introduced, and the rewiring procedure injects disorder in the network since the degree $k$, fixed by Eq.~(\ref{eq:degree}) at the beginning, is non-uniform afterwards. The degree distribution decays exponentially since the rewiring is performed independently for every vertex \cite{barrat2000properties}. Moreover, since $k_{i}\approx\left\langle k\right\rangle $, a W-S network is not locally equivalent, even in the limit case of $p=1$, to a random graph, where isolated vertices may exist and the network may be fragmented into many parts \cite{barrat2000properties}. It is noteworthy for what follows that the rewiring mainly injects shortcuts whose length is of the order of the network size $\mathcal{O}\left(N\right)$, so that a fine tuning of the interaction range by means of the randomness $p$ is not possible. 
\subsection{Network analysis\label{sub:Network-analysis}} The small-world regime embeds characteristics of both the regular lattice and the random network: the network keeps track of the initial configuration since, after the rewiring, it still conserves a local neighborhood like a regular lattice; on the other hand, the network approaches, in the sense specified in Sec.~\ref{sec:The-Model}, the random graph topology because of the shortcuts induced by the rewiring. In our context the question naturally arises of how the degree, which scales as $k\sim N^{\gamma-1}$, could influence the scaling of topological quantities in competition with the rewiring probability $p$. For instance, a crucial passage in which the $\gamma$ parameter could play an important role is the crossover from the regular chain topology to the small-world regime. It is usually investigated through the scaling behavior of the average path length $l(p,\gamma)$, defined as the average shortest distance between spins. \begin{figure} \includegraphics[scale=0.4, keepaspectratio]{Fig7.eps}\caption{(color online) Path lengths starting from the blue vertex.\label{fig: path}} \end{figure} This quantity grows algebraically, $l\sim N$, for a regular one-dimensional lattice with fixed degree $k$, while for random networks it grows as $l\sim\log N$. The passage between those two regimes is driven by the long-range connections, which could allow the spins to behave coherently. In practice, since the network lacks a metric, the distance between two spins is calculated as the minimal number of edges to cross to go from one spin to the other, as shown in Fig.~\ref{fig: path}. To investigate the change between these two behaviors we perform numerical simulations, varying $\gamma$ and $p$: we use values for $\gamma$ from 1.2 to 1.5, and $p$ ranges from $10^{-7}$ up to $10^{-3}$. $N$ is fixed at $2^{14}$ and we average over 10 network realizations for each value of $p$. 
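The rewiring and the path-length measurement can be sketched in a few lines of Python (standard library only). The sizes below are much smaller than the $N=2^{14}$ used in our simulations, and the construction is our reading of the Watts-Strogatz algorithm, not the exact code used for the figures:

```python
import random
from collections import deque

def watts_strogatz(N, k, p, rng):
    # ring of N vertices, each joined to its k nearest neighbours (k even);
    # every edge is rewired with probability p to a uniformly chosen vertex
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for step in range(1, k // 2 + 1):
            j = (i + step) % N
            if rng.random() < p:
                j = rng.randrange(N)
                while j == i or j in adj[i]:
                    j = rng.randrange(N)
            adj[i].add(j)
            adj[j].add(i)
    return adj

def avg_path_length(adj):
    # mean shortest distance over all reachable ordered pairs (BFS from every vertex)
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
l_regular = avg_path_length(watts_strogatz(256, 6, 0.0, rng))
l_rewired = avg_path_length(watts_strogatz(256, 6, 0.2, rng))
```

Already at these small sizes the rewired network shows the drop of $l$ that signals the small-world regime.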
In Fig.~\ref{fig:length}a we plot $l(\gamma,p)/l(\gamma,0)$ versus $\gamma$. $l(\gamma,p)$ shows the known crossover behavior \cite{watts_strogatz1998SW} but, considering the probability $p_{SW}(N,p,\gamma)$ at which $l(\gamma,p)$ drops abruptly to the random network values, it appears evident that the latter is strongly dependent on $\gamma$. We have the following scaling for $p_{SW}(N,\gamma)$ \cite{newman1999scaling}, using the degree definition in Eq.~(\ref{eq:degree}): \begin{equation} p_{SW}\sim\frac{1}{N^{D}kD}\propto\left(\frac{1}{N}\right)^{\gamma},\label{eq: pSW scaling} \end{equation} where $D=1$ is the dimension of the initial regular lattice. In Fig.~\ref{fig:length}b we plot the estimate of $p_{SW}(N,\gamma)$ from the simulations versus $\gamma$, which indeed confirms the power law of Eq.~(\ref{eq: pSW scaling}). The degree is hence crucial to quantitatively determine the passage to the small-world regime; this dependence unveils its importance considering that, on small-world networks, a ``topological'' length scale can be defined \cite{newman1999scaling} as \begin{equation} \xi=1/(pkD)^{1/D}\label{eq:correlation} \end{equation} and then $p_{SW}$ in Eq.~(\ref{eq: pSW scaling}) is the probability of having $\xi=N$. This is the key condition to achieve global coherence, and it clearly appears that the density of links, governed by the parameter $\gamma$, and the randomness injected by $p$ act together in shaping the complexity of the network topology. In Sec.~\ref{sec:Thermodynamic-Behaviour}, we then move one step further, dealing with the thermodynamics of the $XY$-rotors model on the small-world network and looking for the topological signature of the $\gamma$ and $p$ parameters in its properties. 
\begin{figure} (a)\includegraphics[width=8cm, keepaspectratio]{Fig8a.eps}\\ (b)\includegraphics[width=8cm, keepaspectratio]{Fig8b.eps}\\ (c)\includegraphics[width=8cm, keepaspectratio]{Fig8c.eps}\\ (d)\includegraphics[width=8cm, keepaspectratio]{Fig8d.eps} \caption{(a) Average path lengths versus rewiring probability for different $\gamma$ values and $N=2^{14}$. (b) Power law scaling of $p_{SW}(\gamma)$. (c) Average path lengths versus rewiring probability for different $N$ values and $\gamma=1.3$. (d) Power law scaling of $p_{SW}(N)$ for $\gamma=1.3$. The curve slope is $\approx 1.27$, consistent with the scaling in Eq.~(\ref{eq: pSW scaling}). \label{fig:length}} \end{figure} \subsection{Thermodynamic Behavior on Small World Networks\label{sec:Thermodynamic-Behaviour}} In Sec.~\ref{sub:Network-analysis} we focused on the topological interplay of the $\gamma$ and $p$ parameters in establishing the small-world regime which, as explained, is noteworthy for its ambivalence, resembling both a regular lattice and a random graph. In this section we put the $XY$-rotors model on a small-world network: the question we now address is the thermodynamic counterpart of the complex network topology. We focus on the low-$\gamma$ regime, i.e. $\gamma<1.5$. In this case we recall that the degree is still too low to induce long-range order by itself, without the intervention of randomness, and the network behaves like a one-dimensional chain. In the interval $\gamma>1.5$ the high degree already induces a mean field phase transition of the magnetization, whose critical energy is $\varepsilon_{c}=0.75$, as shown in Sec.~\ref{sec:Thermodynamic-Behaviour regular}. In this interval, even without the contribution of long-range connections, the network is connected enough to behave like a fully coupled one, which is the case of the Hamiltonian Mean Field model. 
On the other hand, in the case of random networks, it has been shown that the mean field phase transition appears for all $\gamma>1$ \cite{ciani2011long}. We thus progressively introduce long-range connections with the rewiring probability $p$ since, from Eq.~(\ref{eq:correlation}), we expect to retrieve two regimes determined by $\gamma$ and $p$: $\xi>N$, in which long-range order is absent, and $\xi<N$, where the order parameter displays a second order phase transition. \begin{figure} (a)\includegraphics[width=8cm, keepaspectratio]{Fig9a.eps} (b)\includegraphics[width=8cm, keepaspectratio]{Fig9b.eps}\\ (c)\includegraphics[width=8cm, keepaspectratio]{Fig9c.eps} \caption{Average magnetization $M$ versus energy density $\varepsilon=E/N$ for several system sizes, $\gamma=1.25$ and $p=0.001\,\,(a),\,0.005\,\,(b),\,0.05\,\,(c).$ \label{fig:different probability}} \end{figure} In Figs.~\ref{fig:different probability}a-c we set $\gamma=1.25$ and, for each value of $p$, we consider several system sizes, above and below the threshold $\xi(1.25,p,N)=N$. The results displayed in Figs.~\ref{fig:different probability} show the equilibrium mean value of the magnetization $\overline{M}$ versus the energy density $\varepsilon=E/N$: for $N=2^{12}$, the probabilities $p=0.001$ and $0.005$ (Figs.~\ref{fig:different probability}a-b) are still too low to entail the crossover to the long-range regime and the system does not undergo a phase transition. On the other hand, the other two sizes considered, $N=2^{14}$ and $2^{16}$, are in the $\xi<N$ regime and the mean field phase transition is recovered for all the $p$ taken into account. As explained, increasing the randomness decreases the small-world threshold; hence all the sizes show the phase transition of the magnetization for $p=0.05$ (Fig.~\ref{fig:different probability}c). 
Those results suggest the importance of $\xi$ also from the statistical point of view: in Sec.~\ref{sub:Network-analysis}, we showed that it signals the topological passage from the regular to the small-world network, which manifests itself through a drop of the average path distance $l(N,\gamma,p)$; equivalently, in this short-$l(N,\gamma,p)$ regime the existence of long-range order is possible and, thus, we observe the second order phase transition of the thermodynamic order parameter. Remarkably, the critical energy $\varepsilon_{c}$ at which the transition occurs varies according to the randomness; we thus investigate this effect tuning $\gamma$ between $1.2$ and $1.5$ and $p$ from $10^{-7}$ to $10^{-3}$. As explained before, it is worth focusing on the interval $\gamma\leq1.5$. In this case the shortcuts introduced by the rewiring process are crucial for the achievement of global coherence; for $\gamma>1.5$ we already know that a phase transition with $\varepsilon_{c}=\varepsilon_{HMF}=0.75$ occurs both on regular chains \cite{deNigris2013} and on random networks \cite{ciani2011long}. \begin{figure} (a)\includegraphics[width=8cm, keepaspectratio]{Fig10a.eps}\\ (b)\includegraphics[width=8cm, keepaspectratio]{Fig10b.eps}\\ (c)\includegraphics[width=8cm, keepaspectratio]{Fig10c.eps} \caption{(a) Logarithmic dependence of the critical energy $\varepsilon_{c}$ versus the rewiring probability $p$ for different $\gamma$ values. (b) Power law scaling of $p_{MF}(\gamma)$. (c) Phase plot in the $\left(\gamma,p\right)$ plane. The thick line for $p=0$ and $1<\gamma<1.5$ stands for the absence of phase transitions in that parameter region. In the ``$MF$ phase transition'' region, the critical energy is the same as for the HMF model, $\varepsilon_{c}=0.75$. \label{fig:log dependence}} \end{figure} In Fig. 
\ref{fig:log dependence}a, we plot the critical energy $\varepsilon_{c}(p,\gamma)$ versus the rewiring probability $p$ for several values of $\gamma$, and we observe that the phase boundary seems to be well described by the logarithmic form: \begin{equation} \varepsilon_{c}=\log(g(\gamma)p^{C})\label{eq:critical energy} \end{equation} with $C\sim0.1$. Eq.~(\ref{eq:critical energy}) is consistent with the scaling proposed in \cite{kim_smallworld2001,medvedyeva2003dynamic} as far as the $p$ dependence is concerned. Remarkably, in \cite{kim_smallworld2001,medvedyeva2003dynamic}, those were results from Monte Carlo simulations in the canonical ensemble, while we work in the microcanonical one. Moreover, the aforementioned results of logarithmic scaling were found in the $p\rightarrow0$ regime, while here we explore regions with larger values of $p$. We also have to stress that Eq.~(\ref{eq:critical energy}) embeds an extra piece of information concerning the degree. Indeed, in our analysis the ``quantitative'' topological parameter $\gamma$ in its turn affects the critical energy $\varepsilon_{c}$ through the function $g(\gamma)$, showing the non-trivial role played by the link density in the thermodynamic behavior of the $XY$-rotors model. Another specific piece of information can be retrieved from Fig.~\ref{fig:log dependence}a. There is a threshold beyond which a ``saturation'' process exists: to be more explicit, for each value of $\gamma$, we define a threshold probability $p_{MF}(\gamma)$ for which the critical energy is $\varepsilon_{c}=0.75$, identical to the values obtained in the mean field case ($\gamma=2$) or for fully randomized networks \cite{ciani2011long}. For $p>p_{MF}(\gamma)$, increasing the randomness no longer influences the critical energy and, in some way, the resulting small-world network is, from the thermodynamic point of view, equivalent to a fully coupled graph. 
In Fig.~\ref{fig:log dependence}b, we show how this probability threshold $p_{MF}(\gamma)$ depends as a power law on the $\gamma$ parameter. Note, though, that we expect $p_{MF}\rightarrow0$ when $\gamma\rightarrow1.5$ because, as discussed before, in the $\gamma>1.5$ regime the system is already in the mean field state without any rewiring. Regarding Fig.~\ref{fig:log dependence}b, we therefore do not expect the results to be valid near $\gamma=1.5$. Indeed, a precise estimation of $p_{MF}$ proves to be very delicate since it relies in its turn on the determination of the critical energy of the transition, which is intrinsically a hard task. Moreover, the simulations are performed with finite size systems, so the measured $p_{MF}$ is influenced by finite size effects. We then actually have $p_{MF}(\gamma,N)$, and this dependence on $N$ can entail a finite value even for $\gamma=1.5$. We also have to mention that we average over a finite number of network realizations, which may affect the results as well. An interesting path to follow in order to avoid these effects and refine our estimates could be the use of finite-size scaling techniques. Moreover, since previous results exist in the canonical ensemble \cite{medvedyeva2003dynamic,kim_smallworld2001}, this analysis would be of interest in our approach because we deal with the microcanonical ensemble. It would then be possible to compare the characteristics of the phase transition in the two ensembles and shed light on their equivalence. Proceeding in our analysis, we recall that for the regular network a metastable state was found for $\gamma_{c}\simeq 1.5$, in which the order parameter is affected by strong fluctuations, suggesting that the system oscillates between low magnetization values, proper of the $\gamma<1.5$ regime, and the mean field value of the $\gamma>1.5$ case. We notice that, after the introduction of randomness, we do not observe this metastable state for $\gamma=1.5$ or any other value of $\gamma$. 
In fact, a small (possibly vanishing) $p$ is now enough to generate a phase transition. There therefore exists an interplay between the ``quantitative'' parameter $\gamma$ and the ``qualitative'' parameter $p$; nevertheless those parameters, as anticipated in Sec.~\ref{sec:The-Model}, are not equivalent when dealing with their influence on the thermodynamic behavior of the $XY$-model. This duality is so far not complete, since it was not possible to retrieve the metastable state in the $\gamma<1.5$ regime acting exclusively on the $p$ parameter. In this sense, the randomness is ``regularizing'' the thermodynamic behavior: the rewired network either supports the behavior of a regular lattice or, once the small-world regime is reached, gives rise to the phase transition of the magnetization. Summarizing, we can say that the noise created by the rewiring stabilizes the passage between the two regimes and destroys the delicate metastable state which arose in the regular lattice. \section{Conclusion} In conclusion, we have studied the influence of two different network topologies, the regular lattice and the small-world network, on the critical behavior of the $XY$-rotors model. In Sec.~\ref{sec:Thermodynamic-Behaviour regular}, we introduced the parameter $\gamma$, which allows us to tune the number of links from the linear chain to the fully coupled configuration. We identified two main parameter regions: the first, for $\gamma<1.5$, in which the model has a one-dimensional behavior and thus does not display long-range or quasi-long-range order, as shown by numerical simulations. On the contrary, in the second region $(\gamma>1.5)$, the spin degree is sufficiently high to lead to the emergence of a coherent state: we thus observe a mean field phase transition of the magnetization, identical to the one of the HMF model. More interestingly, we showed numerical and analytical evidence of an unstable state at the threshold between the two regions, for $\gamma_{c}\simeq1.5$. 
In this peculiar state, the magnetization is affected by fluctuations which seem to be size independent and, furthermore, this state does not reach equilibrium on the timescales considered. We then calculated analytically an approximate expression for the magnetization, obtained in the low-temperature regime, which demonstrates the topological critical nature of $\gamma_{c}\simeq1.5$. This expression correctly retrieves the two behaviors mentioned above and, since it contains the spectrum of the adjacency matrix, it points out the \emph{topological} origin of the three different phases shown by the simulations. We have then studied the role of the link density on the topology of small-world networks and its effect on the $XY$-rotors model dynamics. We focused, in Sec.~\ref{sec:The-Model}, on the crossover to the small-world regime while tuning the $\gamma$ parameter. We showed by numerical simulations that $p_{SW}$ has the scaling in Eq.~(\ref{eq: pSW scaling}), which is consistent with \cite{newman1999scaling}. Hence the link density, governed by $\gamma$, turns out to be crucial to enhance the crossover between the ``large-world'' regime and the small-world one, cooperating with the rewiring probability $p$ in the creation of long-range connections. We then investigated, in Sec.~\ref{sec:Thermodynamic-Behaviour}, the thermodynamic response of the $XY$-rotors model to the variations of the underlying network. We retrieved the emergence of a mean field transition of the magnetization once $p>p_{SW}$. This latter condition implies that the network is in the $\xi<N$ case, using the definition in Eq.~(\ref{eq:correlation}), so that the passage between the regular and the small-world topology also entails a difference in the behaviour of the model. Moreover, we found a logarithmic dependence of the critical energy $\varepsilon_{c}(p,\gamma)$ on $p$ and $\gamma$, which leads to the scaling in Eq.~(\ref{eq:critical energy}). 
The interplay between the topological parameters in modifying $\varepsilon_{c}$ saturates when $\varepsilon_{c}=0.75$, which is the critical energy of the Hamiltonian Mean Field model, and we defined a new threshold probability $p_{MF}$ which displays the power law scaling with $\gamma$ shown in Fig.~\ref{fig:log dependence}b. We hence found that a small (possibly vanishing) amount of randomness regularizes the $\gamma=1.5$ metastable state pointed out in Sec.~\ref{sec:Thermodynamic-Behaviour regular}; moreover, it was not possible to recreate it in the $\gamma<1.5$ interval by just adding long-range connections with $p$. Therefore, as far as the thermodynamic behavior is concerned, we conclude that $\gamma$ and $p$ are not equivalent when dealing with the transition to the mean field state; nevertheless, we anticipate here that a more refined criterion than randomness could be found in order to perturb the regular network in the low density regime $(\gamma<1.5)$ and enhance the creation of out-of-equilibrium effects like the $\gamma_{c}\simeq1.5$ metastable state. \begin{acknowledgments} X. L. is partially supported by the FET project Multiplex 317532 and S. d. N. is supported by DGA/MRIS. \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} Dispersive shock waves (DSWs) are observed in nonlinear optics in systems described by the nonlinear Schr\"{o}dinger equation, when the so-called hydrodynamical reduction is valid \cite{gurevich,bronskibook,kamchatnov,conti09,whitman,Besse12}. The introduction of a small amount of disorder competes with nonlinearity and hampers the shock formation \cite{Ghoprl12,Gentilini12}. This makes DSWs an appealing framework to study the interplay between randomness and nonlinear waves, a subject of growing interest as witnessed by recent theoretical \cite{Con11,Fol12,kivshar10} and experimental studies \cite{DelRe11,levi11,Fleisher2012}. At variance with ordered systems \cite{wan07}, the direct observation and characterization of optical shock waves in the presence of structural randomness is burdened by several technical difficulties in identifying an appropriate nonlinear medium, feasible excitation conditions, and relevant observables. There are several possibilities to characterize the excitation of \emph{undular bores} and related phenomena \cite{deykoon, wyller, wurtz,ell,hoefer,barsi:07}: the very definition and observation of wave-breaking phenomena in the presence of disorder and nonlinearity is an open issue. This calls for an extensive development of experimental techniques and the use of multiple methods to characterize the DSWs. In this paper we give a detailed review of our experimental investigations of the hydrodynamical regime in the generation of optical shocks during the nonlinear propagation of a continuous-wave (CW) laser beam in a thermal defocusing medium. The hydrodynamical regime is achieved when the nonlinear length is much smaller than the diffraction and loss (absorption and scattering) lengths. 
Our experimental technique allows the direct observation of an initially Gaussian laser beam propagating in a thermal nonlinear liquid with controllable disorder, obtained by a colloidal dispersion with a low refractive index contrast. We show that, by increasing the strength of the nonlinearity, shock formation is enhanced, while, on the other hand, the random scatterers limit and ultimately inhibit the wave-breaking phenomenon. We quantify such a competition by analyzing both the laser beam along the propagation direction and its far-field intensity distribution: these allow the measurement of the relevant scaling laws that relate the shock position \cite{Gho12,Ghoprl12} and the post-shock wave-vector spectrum to the input beam size, power, and the scale and strength of disorder \cite{Gentilini12}. These observables, namely the shock formation point and the output wave-vector spectrum, exhibit a threshold and determine a phase diagram identifying the parameter regions where the shock occurs. The paper is organized as follows: first we briefly review the theoretical framework. We then illustrate the experimental setup and the characterization of the samples used in the experiments. We then review the results in three different sections, corresponding to the experimental characterization of different observables. We present the analysis of the beam intensity distribution both during propagation and at the exit of the samples. We conclude with a summary of the obtained results. \section{Theoretical framework}\label{section1} In our experiments, CW laser beams propagate in dye-doped dispersions of dielectric colloidal beads. The beam is partially absorbed and scattered, activating the interplay between thermal defocusing and spatial disorder. 
Neglecting, in a first approximation, the spatial nonlocality \cite{ghofraniha}, the refractive index perturbation to the bulk index $n_0$ in the presence of nonlinearity and disorder is written as: \begin{equation} \Delta n=n_2I+\Delta n_R(X,Y,Z) \label{eq:refractiveindex} \end{equation} where $n_2<0$ accounts for the considered defocusing Kerr effect, $I$ is the optical intensity, and $\Delta n_R$ represents the random perturbation due to the colloidal beads. The propagation of a TEM$_{00}$ Gaussian beam inside the medium is described by the paraxial wave equation for the complex envelope, $A$, of a monochromatic electric field $E=(\frac{2}{c\epsilon_0n_0})^{1/2}A\exp({ikZ-i\omega T})$, \begin{equation} 2ik\frac{\partial A}{\partial Z}+\nabla^2_{X,Y}A+2k^2\frac{\Delta n}{n_0}A=0, \label{eq:paraxial} \end{equation} where $k=2\pi n_0/\lambda$ is the wave-vector, $c$ the velocity of light, and $\epsilon_0$ the electric permittivity of free space. In Eq.~(\ref{eq:paraxial}) $A$ is normalized such that $I=|A|^2$. Denoting by $I_0$ the input peak intensity, by $w_0$ the input beam waist, and by $L_{nl}=n_0/(k_0|n_2|I_0)$ the nonlinear length scale, and introducing the scaled coordinates $x,y,z=X/w_0,Y/w_0,Z/L$ and the normalized field $\psi=A/\sqrt{I_0}$, we obtain the following dimensionless equation: \begin{equation} i\epsilon\frac{\partial\psi}{\partial z}+\frac{\epsilon^2}{2}\nabla^2_{x,y}\psi-|\psi|^2\psi+U_R\psi=0, \label{eq:dimensionless} \end{equation} where $\epsilon\equiv L_{nl}/L=\sqrt{L_{nl}/L_d}$, with $L\equiv\sqrt{L_{nl}L_d}$ and $L_d=kw_0^2$ the diffraction length, and $U_R=\Delta n_R/(n_2I_0)$. The quantity $\epsilon$ measures the strength of the nonlinearity with respect to diffraction: a small value of $\epsilon$ implies negligible diffraction and a pronounced nonlinear response. $U_{R}$ is the ratio between the refractive index perturbation due to disorder and the nonlinear one. 
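For concreteness, the scales entering Eq.~(\ref{eq:dimensionless}) can be estimated in a few lines of Python; the wavelength and waist anticipate the experimental values quoted below, while $n_0$ and the nonlinear index change $|n_2|I_0\sim10^{-3}$ are illustrative assumptions:

```python
import math

# order-of-magnitude estimate of the scales entering the dimensionless equation;
# lam and w0 anticipate the experimental values, n0 (water) and the nonlinear
# index change |n2| I0 ~ 1e-3 are assumed, not measured, values
lam = 532e-9   # vacuum wavelength [m]
w0 = 10e-6     # input beam waist [m]
n0 = 1.33      # bulk refractive index (assumed, water)
dn_nl = 1e-3   # |n2| I0, nonlinear index perturbation (assumed)

k = 2 * math.pi * n0 / lam                 # wave-vector in the medium
L_d = k * w0 ** 2                          # diffraction length L_d = k w0^2
L_nl = n0 / ((2 * math.pi / lam) * dn_nl)  # nonlinear length n0 / (k0 |n2| I0)
eps = math.sqrt(L_nl / L_d)                # epsilon = L_nl / L = sqrt(L_nl / L_d)
```

With these numbers $L_{nl}\sim0.1$ mm, $L_d\sim1.6$ mm and $\epsilon\sim0.3$, i.e., the hydrodynamical regime $L_{nl}\ll L_d$ is within experimental reach.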
Setting $\psi=\sqrt{\rho(r,z)}\exp[i\phi(r,z)/\epsilon]$ in Eq.~(\ref{eq:dimensionless}) and retaining only the leading order in $\epsilon$, we obtain the following equation for the phase $\phi$: \begin{equation} \phi_z+\frac{1}{2}(\phi_x^2+\phi_y^2)+\rho-U_R=0. \label{eq:decoupled} \end{equation} Restricting to one dimension ($1$D, $\partial_y=0$), taking the transverse derivative of Eq.~(\ref{eq:decoupled}), and defining a \emph{velocity field} equal to the phase chirp, $u\equiv\phi_x$, we have \begin{equation} u_z+uu_x+\partial_x(\rho-U_R)=0. \label{eq:hopf} \end{equation} In the homogeneous case ($\rho=$const.) and for an ordered medium ($U_R=0$), Eq.~(\ref{eq:hopf}) takes the form of the Hopf equation \cite{whitman}, whose solution can develop discontinuities in the velocity profile, $u_x\rightarrow\infty$, and hence gives rise to shock waves. Here we remark that the hydrodynamical approximation implies a threshold in the nonlinearity: indeed, the approximation holds true when $L_{nl}\ll L_d$. Another threshold arises from the term $U_R=\Delta n_R/(n_2I_0)$, corresponding to the existence of a critical value for the amount of randomness, above which no shock is expected to occur: when the random index perturbation $\Delta n_R$ becomes comparable with the nonlinearity $n_2I_0$, the material refractive index fluctuations are so pronounced that the nonlinear effect is totally masked. Correspondingly, in our experiments (see below), in the absence of disorder we find a threshold in the laser power, while in the presence of disorder a threshold emerges also in the amount of randomness. 
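The gradient catastrophe of the Hopf equation can be made concrete via the method of characteristics: $u$ is constant along $x(z)=x_{0}+u_{0}(x_{0})z$, so the solution stays single-valued until $z_{s}=-1/\min_{x}u_{0}'(x)$. A short Python sketch follows; the initial chirp $u_{0}(x)=-xe^{-x^{2}}$ is an illustrative choice, not the experimental beam profile:

```python
import math

def u0(x):
    # illustrative initial velocity field (phase chirp): u0(x) = -x exp(-x^2)
    return -x * math.exp(-x * x)

# characteristics of u_z + u u_x = 0: x(z) = x0 + u0(x0) z; the gradient
# catastrophe (shock) occurs at z_s = -1 / min_x u0'(x)
xs = [-4 + 8 * i / 20000 for i in range(20001)]
h = 1e-6
slopes = [(u0(x + h) - u0(x - h)) / (2 * h) for x in xs]
z_s = -1.0 / min(slopes)

def is_monotone(z):
    # the solution is single-valued while the characteristic map stays monotone
    pts = [x + u0(x) * z for x in xs]
    return all(b > a for a, b in zip(pts, pts[1:]))
```

For this profile $\min u_{0}'=-1$ at $x=0$, so the characteristics first cross at $z_{s}=1$: before $z_{s}$ the map $x_{0}\mapsto x(z)$ is monotone, beyond it the velocity profile becomes multivalued and the hydrodynamical description breaks down.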
\section{Experimental setup} In the hydrodynamic limit, DSWs are expected to occur when the nonlinearity is dominant compared to diffraction; nevertheless diffraction, which is initially negligible, starts to play a major role in the proximity of the wave-breaking point, and regularizes the singularity through the appearance of characteristic oscillations (undular bores). Besides these regularizing oscillations, the singularity in the field phase and amplitude also results in a diffraction enhancement, evident in the funnel shape along the propagation direction (see below) appearing as the input power is increased. This shows that the shock involves the spatial spectrum of the beam, as detected in far-field measurements. \begin{figure*} \includegraphics[width=\textwidth]{fig1.pdf} \caption{(Color online) Experimental setup: (a) detection of the top fluorescence emission of the beam; (b) configuration for the collection of the far-field intensity; (c) details of the optical setup of panel (b); (d) sketch showing the position of the shock plane inside the sample.}\label{figexp1} \end{figure*} \emph{Near-field configuration} - In the near-field configuration our setup [Fig.~\ref{figexp1}(a)] allows a direct visualization of the propagating beam profile, i.e., the intensity as a function of the transverse coordinate, $X$, and of the propagation direction $Z$. This enables the identification of the shock point $Z_s$ as the propagation distance at which the maximum chirp occurs (see below). Typically, a CW laser at wavelength $\lambda=532$nm is focused inside the sample. The beam waist in the focus is $w_0=10\mu$m. The near-field configuration is sketched in Fig.~\ref{figexp1}(a). A $1$cm $\times1$cm $\times3$cm glass cell is used and the laser beam propagates along the $1$cm side. 
Top images of the fluorescence emission are collected by an MZ$16$ Leica microscope placed perpendicularly to the propagation direction, $Z$, and recorded by a $1024\times1392$ pixels CCD camera. \emph{Far-field configuration} - In Fig.~\ref{figexp1}(b) we show the setup for the far-field measurements. The CW laser beam is focused inside the sample ($w_0=50\mu$m). The liquid samples are placed in a $1$mm $\times1$cm $\times3$cm glass cell, the laser beam propagates along the $1$mm side, and the cell is placed vertically in order to moderate the effect of heat convection. As shown in Fig.~\ref{figexp1}(c), the intensity distribution of the Fourier transform of the transmitted beam is collected by a CCD camera placed at the focal distance from the collecting lens. We calibrate the CCD detector by fitting with the Airy function the experimentally obtained Fourier transform of a $500\mu$m diameter pinhole placed on the exit face of the cell. The angular spread $\theta$ is related to the transverse wave-vector by $k_{X,Y}=(2\pi/\lambda)\sin(\theta)$. Figure~\ref{figexp1}(d) shows the mutual positions of the focus plane, the shock plane and the output plane. \section{Sample characterization} As in previous experimental works, we use the thermal Kerr-like defocusing nonlinearity of absorbing dye-doped liquid media \cite{ghofraniha,Gho07,GhoLang,Gho12,Gentilini12,Ghoprl12}. \begin{figure}[ht] \includegraphics[width=8.5cm]{fig2.pdf} \caption{(Color online) (a)--(c) Near-field images of the fluorescence emission of the propagating beam at fixed laser power ($P=8$mW) and two different particle concentrations, (a) $c_{SiO_2}=0$, (c) $c_{SiO_2}=0.03$w/w; (b)--(d) corresponding intensity profiles of the far-field images of the transmitted field at the exit face of the $1$mm thick cell. 
\label{figexp2}} \end{figure} \begin{figure}[ht] \includegraphics[width=8.5cm]{fig3.pdf} \caption{(Color online) (a)--(b) Intensity profiles taken at three different $Z$ positions in the images of Figs. \ref{figexp2}(a)--\ref{figexp2}(c): the dashed line corresponds to $Z=0.3$mm, the dot-dashed line to $Z=0.7$mm, and the continuous line to $Z=1.0$mm. (c) Exponential (quasi-linear) decays of the beam intensity calculated from the images of Fig. \ref{figexp2}(a) (dashed line) and Fig. \ref{figexp2}(c) (dot-dashed line). \label{figexp3}} \end{figure} Our samples are aqueous solutions of Rhodamine B (RhB). We tailor the degree of absorption and nonlinearity by varying the concentration of RhB (c$_{RhB}$) from $0.05$ to $0.2$mM. We add disorder by using monodisperse $1\mu$m diameter silica (SiO$_2$) spheres. The degree of randomness is fixed by varying the concentration of SiO$_2$ (c$_{SiO_2}$) from $0.005$ to $0.04$w/w, in units of weight of silica particles over suspension weight. In terms of refractive index perturbation, the amount of disorder can be estimated by the following relation: \begin{equation} \langle\Delta n_R^2\rangle^{1/2}=c_{SiO_2}\rho_{H_2O}(n_{SiO_2}-n_{H_2O})/\rho_{SiO_2}, \label{eq:refractive_disorder} \end{equation} where n$_{SiO_2}$ (n$_{H_2O}$) and $\rho_{SiO_2}$ ($\rho_{H_2O}$) are the refractive index and the density of SiO$_2$ (H$_2$O), respectively. The angular brackets in Eq. (\ref{eq:refractive_disorder}) denote a volume average. Since the silica (water) density is $\rho_{SiO_2}=2$g/cm$^3$ ($\rho_{H_2O}\approx 1$g/cm$^3$ at $25$\textcelsius), for the considered range of $c_{SiO_2}$ concentrations, $\langle\Delta n^2_R\rangle^{1/2}$ varies between $4\times10^{-4}$ and $32\times10^{-4}$.
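The estimate of Eq. (\ref{eq:refractive_disorder}) is straightforward to reproduce numerically. In the minimal sketch below, the densities are those quoted above, while the refractive indices at $532$nm ($n_{SiO_2}\simeq1.46$, $n_{H_2O}\simeq1.33$) are representative values assumed by us, not taken from the text, so the output only approximates the quoted range.

```python
# Hedged sketch of the disorder estimate: rms refractive-index perturbation
# of the silica-water suspension. Densities are those quoted in the text;
# the refractive indices at 532 nm are ASSUMED representative values
# (n_SiO2 ~ 1.46, n_H2O ~ 1.33), so the printed numbers are indicative only.

def delta_n_rms(c_sio2, n_sio2=1.46, n_h2o=1.33, rho_sio2=2.0, rho_h2o=1.0):
    """Volume-averaged <Delta n_R^2>^(1/2) for a silica weight fraction c_sio2."""
    return c_sio2 * rho_h2o * (n_sio2 - n_h2o) / rho_sio2

for c in (0.005, 0.017, 0.030, 0.040):
    print(f"c_SiO2 = {c:.3f} w/w  ->  <dn^2>^(1/2) ~ {delta_n_rms(c):.2e}")
```

With these assumed indices the estimate reproduces the order of magnitude, $10^{-4}$--$10^{-3}$, of the range quoted above.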
Therefore, since the theory predicts a threshold when the amount of disorder $\langle\Delta n^2_R\rangle^{1/2}$ becomes comparable with the nonlinear perturbation $|n_2|I_0\cong10^{-3}$, such a threshold is expected at the silica concentration $c_{SiO_2}=0.030$w/w, as confirmed by our experiments (see below). In our samples there are two leading loss mechanisms: (i) absorption due to the RhB dye, and (ii) scattering due to SiO$_2$ particles. We find that scattering losses are predominant; this is shown in Fig. \ref{figexp2}, where we compare the images of the transverse beam intensity distribution versus the propagation direction $Z$ at two different SiO$_2$ concentrations, fixed laser power $P$ and fixed RhB concentration $c_{RhB}$. Figure \ref{figexp2}(a) shows the top fluorescence of the laser beam in a pure dye sample ($c_{SiO_2}=0$), and Fig. \ref{figexp2}(b) gives the corresponding far field. Figures \ref{figexp2}(c) and \ref{figexp2}(d) report the case of a silica-dye sample at $c_{SiO_2}=0.03$w/w; the beam is more diffused and the far field reveals an enhanced spectral content. In the presence of disorder ($c_{SiO_2}>0$) the transverse spread of the beam along $Z$ is enhanced. This is clarified by the analysis reported in Figs. \ref{figexp3}(a) and \ref{figexp3}(b), which show the intensity profiles at three different $Z$ positions for the $0.1$mM pure dye solution and for the dye solution with silica at $c_{SiO_2}=0.03$w/w concentration, respectively. At variance with linear absorption, scattering due to the SiO$_2$ beads broadens the beam. Figure \ref{figexp3}(c) shows the average intensity vs $Z$ calculated from the images of Figs. \ref{figexp2}(a) and \ref{figexp2}(c), together with the exponential decays that fit the data.
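The exponential fits of Fig. \ref{figexp3}(c) reduce to a linear least-squares fit of $\log I$ versus $Z$. A minimal sketch, with synthetic stand-in data in place of the measured profiles (the function name and data are ours):

```python
# Minimal sketch of the exponential fit of Fig. 3(c): the average intensity
# decays as I(Z) ~ I0 * exp(-Z/L), so a linear fit of log(I) versus Z
# yields the decay length L. The data below are noiseless synthetic
# stand-ins, built with the decay lengths quoted in the text.
import numpy as np

def decay_length(z_mm, intensity):
    """Return L (mm) from a least-squares fit of log(I) = log(I0) - Z/L."""
    slope, _ = np.polyfit(z_mm, np.log(intensity), 1)
    return -1.0 / slope

z = np.linspace(0.0, 1.0, 50)     # propagation coordinate (mm)
i_dye = np.exp(-z / 1.6)          # pure-dye sample: L_abs = 1.6 mm
i_sil = np.exp(-z / 1.2)          # silica-dye sample: L_los = 1.2 mm
print(f"L_abs ~ {decay_length(z, i_dye):.2f} mm")
print(f"L_los ~ {decay_length(z, i_sil):.2f} mm")
```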
The fitting coefficients of the exponential decays give the absorption length, $L_{abs}=1.6$mm, for the pure dye solution and the loss (i.e., absorption and scattering) length, $L_{los}=1.2$mm, for the $0.03$w/w silica-dye solution; this implies that the effect of the particles on the losses is very small. In summary, the above analysis shows that the role of disorder is predominantly to introduce a random phase modulation. Moreover, since absorption does not qualitatively affect shock formation, the disorder-induced phase scrambling is predominant over all the loss mechanisms in determining the shock point $Z_s$ measured below. \section{Shock point} In this section we report the procedure to identify the shock point $Z_s$ and to determine the threshold for the wave breaking in terms of laser power and SiO$_2$ concentration. In Figs. \ref{figexp4}(a)-\ref{figexp4}(c) and \ref{figexp5}(a)-\ref{figexp5}(c), we show the images of the propagating beam versus the $Z$ direction at low and high laser power, respectively. At low power, i.e., $P=10$mW, no nonlinear effects are visible. In Figs. \ref{figexp4}(d)-\ref{figexp4}(f) and \ref{figexp5}(d)-\ref{figexp5}(f) we show the corresponding images of the output intensity field. \begin{figure}[ht] \includegraphics[width=8.5cm]{fig4.pdf} \caption{(Color online) Top panels: low power ($P=10$mW) images of the fluorescence emission of the beam along $Z$ at different concentrations of silica spheres: (a) $c_{SiO_2}=0$, (b) $c_{SiO_2}=0.017$ w/w, (c) $c_{SiO_2}=0.030$ w/w; bottom panels: corresponding images of the transmitted (in the $X$-$Y$ plane) intensity at the exit facet of the $1$mm cell. \label{figexp4}} \end{figure} \begin{figure}[ht] \includegraphics[width=8.5cm]{fig5.pdf} \caption{(Color online) The same as Fig.
\ref{figexp4} at laser power $P=450$mW.} \label{figexp5} \end{figure} \begin{figure}[ht]\includegraphics[width=8.5cm]{fig6.pdf} \caption{(Color online) Scattered dots are the calculated steepness along the $Z$ direction for three different powers. The solid lines are polynomial fits of the steepness curves used to identify their maximum value, indicated by the arrows. \label{figexp6}} \end{figure} \begin{figure}[ht]\includegraphics[width=8.5cm]{fig7.pdf} \caption{(Color online) (a) Measured $Z_s$ vs $P$ for different $c_{SiO_2}$; (b) power-disorder diagram from the propagation measurements: filled circles are the threshold powers calculated from (a), the dashed line is a boundary due to the experimentally available observation window, and the dot-dashed line is the boundary as estimated by the theory. \label{figexp7}} \end{figure} Conversely, at higher beam power both the effects of nonlinearity and of disorder are simultaneously evident. The two effects compete: the shock features, i.e., the augmented beam diffraction and the appearance of the undular bores, are enhanced by the laser power and inhibited by the SiO$_2$ concentration, as is clear from the transverse and longitudinal intensity profiles of Fig. \ref{figexp5}. We determine the shock point $Z_s$ from the intensity profiles of Figs. \ref{figexp4} and \ref{figexp5}, recalling that the shock originates from a singularity in the phase chirp $|d\phi/dX|\rightarrow\infty$ \cite{ghofraniha,Gho12,Ghoprl12}. To retrieve the phase singularity from the intensity profile we use the following argument: in the hydrodynamical approximation the laser beam is mainly affected by the defocusing nonlinearity, in a regime of negligible losses and diffraction. Hence, to first approximation, the phase is proportional to the refractive index perturbation, which in turn depends on the intensity profile because of the Kerr nonlinearity: \begin{equation} \phi(X,Y,Z)=\frac{k_0Z}{n_0}\Delta n[I(X,Y,Z)].
\label{eq:phase}\end{equation} From Eq. (\ref{eq:phase}) we can estimate the occurrence of the singularity in the phase from the intensity profiles; in fact, \begin{equation} \nabla_{X,Y}\phi(X,Y,Z)\propto\nabla_{X,Y}I(X,Y,Z). \label{eq:shockpoint} \end{equation} Equation (\ref{eq:shockpoint}) shows that the point of maximum phase chirp is given by the maximum derivative of the intensity profile. This allows the estimation of the shock point as follows: we calculate the transverse derivative of the intensity normalized to the peak value, $I_N$, and we define the steepness $S(Z)$ as the maximum of this derivative with respect to the transverse coordinates: \begin{equation} S(Z)=\text{max}_{X,Y}[\nabla_{X,Y}I_N(X,Y,Z)]. \label{eq:stepness} \end{equation} The shock point, $Z_s$, is finally defined as the position of the maximum of the steepness along $Z$: \begin{equation} Z_s=\text{argmax}_Z[S(Z)]. \label{eq:Zs} \end{equation} In Fig. \ref{figexp6} we show the steepness curves $S(Z)$ at three different laser powers $P$. The shock occurs at progressively smaller propagation distances as the incident laser beam power increases, a signature that, for a fixed level of disorder, the increase of the nonlinearity enhances the shock formation. We note that the curve corresponding to the lowest $P$ shows a monotonic trend and reaches its maximum value at the edge of the observation window. This implies the existence of a threshold value of $P$ below which $Z_s$ assumes a constant value (equal to the size of the observation window $L_0\sim1$mm). In Fig. \ref{figexp7}(a) we plot the calculated $Z_s$ vs $P$ for all the prepared $c_{SiO_2}$ concentrations. We observe that the threshold power at which $Z_s$ starts to decrease below $L_0$ shifts towards higher values when increasing $c_{SiO_2}$. In Fig. \ref{figexp7}(b) we map the threshold $P$ in a disorder-power shock phase diagram.
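The steepness analysis just described can be sketched numerically as follows; the toy super-Gaussian beam, whose fronts steepen and then relax along $Z$, is our own stand-in for the measured fluorescence images, not the data.

```python
# Sketch of the shock-point estimate: S(Z) is the maximum transverse
# gradient of the normalized intensity I_N, and Z_s is the Z at which
# S(Z) peaks. The toy profile below (ours, not the measured data) has
# fronts that steepen up to Z ~ 0.6 mm and then relax again.
import numpy as np

def shock_point(z, intensity_xz):
    """intensity_xz[i, j]: intensity at transverse pixel i and position z[j]."""
    i_norm = intensity_xz / intensity_xz.max(axis=0)             # I_N per Z slice
    steepness = np.abs(np.gradient(i_norm, axis=0)).max(axis=0)  # S(Z)
    return z[np.argmax(steepness)], steepness                    # Z_s and S(Z)

x = np.linspace(-3.0, 3.0, 400)[:, None]          # transverse coordinate (a.u.)
z = np.linspace(0.1, 1.0, 40)[None, :]            # propagation coordinate (mm)
p = 2.0 + 8.0 * np.exp(-((z - 0.6) / 0.2) ** 2)   # edge steepening, peaked at 0.6
beam = np.exp(-np.abs(x) ** p)                    # toy steepening beam profile
z_s, s = shock_point(z.ravel(), beam)
print(f"estimated shock point Z_s ~ {z_s:.2f} mm")
```

The super-Gaussian exponent $p(Z)$ mimics the edge steepening: the maximum transverse slope grows with $p$, so $S(Z)$ peaks where the fronts are sharpest.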
We remark that the obtained $Z_s$ values are, in all the investigated cases, always smaller than the absorption length $L_{abs}$, confirming that the absorption only marginally affects the shock formation, which is instead connected to the phase scrambling due to the scattering from the silica particles. \section{Intensity correlation at the shock} \begin{figure}[ht] \includegraphics[width=8.5cm]{fig8.pdf} \caption{(Color online) (a) Correlation curves calculated from the far-field measurements for the ordered sample and for three different increasing values of $c_{SiO_2}$; (b) power-disorder phase diagram as calculated from the transverse intensity correlation curves of (a). \label{figexp8}} \end{figure} In the previous section we have quantitatively analyzed the top fluorescence near-field images of the propagating beam by calculating the shock position $Z_s$. In order to also analyze the transmitted profiles [bottom panels of Figs. \ref{figexp4} and \ref{figexp5}], we calculate the correlation function as follows: \begin{equation} C(P)=\Sigma_{i,j}I_P(i,j)I_0(i,j)/\Sigma_{i,j}I_0(i,j)I_0(i,j), \label{eq.correlation}\end{equation} where $P$ is the laser power, $I_P(i,j)$ is the intensity distribution on the CCD camera at power $P$, with $i$ and $j$ pixel indexes, and $I_0(i,j)$ is the reference image of the intensity distribution transmitted from the pure dye sample ($c_{RhB}=0.1$mM) at the laser power $P=160$mW; this reference image was selected as the first image clearly showing the post-shock rings. The function $C(P)$ provides an estimate of the degree of coherence after propagation in the scattering samples. Figure \ref{figexp8}(a) shows the correlation curves $C(P)$ calculated from the bottom images of Figs. \ref{figexp4} and \ref{figexp5}; we observe that the curves grow up to a maximum value and then decrease with increasing power $P$.
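Equation (\ref{eq.correlation}) amounts to a normalized overlap between images. A minimal sketch, where the random stand-in image and array shapes are our own assumptions:

```python
# Sketch of the correlation measure: C = sum_ij I_P * I_0 / sum_ij I_0^2,
# i.e., the overlap of the image at power P with the reference image,
# normalized to the reference. The images below are random stand-ins.
import numpy as np

def correlation(i_p, i_0):
    """Normalized overlap of image i_p with the reference image i_0."""
    return float((i_p * i_0).sum() / (i_0 * i_0).sum())

rng = np.random.default_rng(0)
i_0 = rng.random((64, 64))            # stand-in for the reference far field
print(correlation(i_0, i_0))          # identical images give C = 1
print(correlation(0.5 * i_0, i_0))    # same pattern, half intensity: C = 0.5
```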
The power $P$ of the peak value increases with the SiO$_2$ concentration, meaning that in the presence of disorder a stronger nonlinearity is necessary to overcome the dephasing effect due to the scattering with the silica particles. Figure \ref{figexp8}(b) shows the disorder-power shock phase diagram as calculated from the curves of Fig. \ref{figexp8}(a): the filled circles represent the threshold power $P$, defined as the power at which the maximum correlation between the ordered and the disordered samples is achieved. We stress that the shock phase diagram of Fig. \ref{figexp8}(b) is in good agreement with that of Fig. \ref{figexp7}(b) calculated from the $Z_s(P)$ curves; the slight discrepancy between the two phase diagrams derives from the different definition of the shock point. \section{Shock threshold from angular spreading measurements} \begin{figure*} \includegraphics[width=\textwidth]{fig9.pdf} \caption{(Color online) Images of the far-field intensity distribution of the transmitted beam after a $1$mm propagation distance. The left group of panels refers to ordered samples (i.e., fixed $c_{SiO_2}=0$) at fixed power $P=140$mW and various dye concentrations: (a) $c_{RhB}=0.05$mM, (b) $0.1$mM, and (c) $0.2$mM. The right group of panels shows the far-field intensity profiles of disordered samples (i.e., fixed $c_{RhB}=0.1$mM) at fixed laser power $P=130$mW and various SiO$_2$ concentrations: (a') $c_{SiO_2}=0.005$w/w, (b') $0.017$w/w and (c') $0.038$w/w. \label{figexp9}} \end{figure*} The characteristic post-shock annular structure and the diffraction enhancement displayed by the near-field transverse and longitudinal intensity distributions reveal a non-trivial involvement of the wave-vector spectrum in the shock phenomenon. In this section we report the investigation of the far-field intensity distribution of the transmitted beam after a $1$mm propagation distance. Such an investigation allows us to measure the angular aperture $\theta$. Fig.
\ref{figexp9} provides a qualitative overview of the whole set of far-field measurements. The panels on the left side [Figs. \ref{figexp9}(a)--\ref{figexp9}(c)] report images relative to the ordered samples ($c_{SiO_2}=0$) at fixed power $P=130$mW and at various dye concentrations $c_{RhB}$ ranging from $0.05$ to $0.2$mM; the right panels [Figs. \ref{figexp9}(a')--\ref{figexp9}(c')] refer to the disordered samples at fixed power $P=140$mW, prepared at $c_{RhB}=0.1$mM and varying $c_{SiO_2}$ between $0.005$w/w and $0.038$w/w. The way the nonlinearity and the disorder affect the shock phenomenology, i.e., the appearance of the characteristic rings and the enlargement of the spectral content, reveals that their effects on the shock formation are opposite: the images in Figs. \ref{figexp9}(a)--\ref{figexp9}(c) show an enhancement with the increase of $c_{RhB}$ (i.e., of the strength of the nonlinearity); conversely, those in Figs. \ref{figexp9}(a')--\ref{figexp9}(c') show the inhibition of the shock with increasing $c_{SiO_2}$. Note that in the images on the right the circular symmetry of the DSWs is lost because of the refractive index inhomogeneities. In other words, the shock wave has a partially randomized spatial distribution. In order to quantitatively analyze both sets of measurements, we perform a radial average of the two-dimensional collected profiles and we estimate the angular aperture $\theta$ as the full width at half maximum when the profile appears as a single peak, and as the distance between the two leading peaks when the profiles start to split because of the wave breaking due to the defocusing nonlinearity. In what follows we detail the results obtained for the ordered and disordered cases \cite{Gentilini12}.
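The two estimators of the angular aperture just described can be sketched as follows; the bare-bones peak detection and the synthetic Gaussian profiles are our own minimal stand-ins for the radially averaged data.

```python
# Sketch of the angular-aperture estimate: theta is the FWHM of the
# radially averaged profile when it shows a single peak, and the distance
# between the two leading peaks once the profile splits after the wave
# breaking. Peak detection here is a bare-bones stand-in (no smoothing).
import numpy as np

def angular_aperture(theta, profile):
    """theta: angle axis; profile: radially averaged far-field intensity."""
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]
    if len(peaks) >= 2:                                   # split profile
        peaks.sort(key=lambda i: profile[i], reverse=True)
        return abs(theta[peaks[0]] - theta[peaks[1]])     # peak-to-peak distance
    above = theta[profile >= profile.max() / 2.0]
    return above.max() - above.min()                      # FWHM of a single peak

theta = np.linspace(-50.0, 50.0, 1001)                    # angle axis (mrad)
single = np.exp(-theta**2 / (2.0 * 10.0**2))              # pre-shock: one peak
split = np.exp(-(theta - 15.0)**2 / 32.0) + np.exp(-(theta + 15.0)**2 / 32.0)
print(angular_aperture(theta, single))   # ~23.4 (FWHM = 2.355 * sigma)
print(angular_aperture(theta, split))    # ~30 (post-shock ring separation)
```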
\subsection{Ordered case} \begin{figure}[ht] \includegraphics[width=8.5cm]{fig10.pdf} \caption{(Color online) (a)--(f): spectral content of the transmitted beam for the ordered samples ($c_{SiO_2}=0$) at two laser powers $P$ and different dye concentrations: (a),(d) $c_{RhB}=0.05$mM; (b),(e) $0.1$mM; and (c),(f) $0.2$mM. Bottom panels: (g) $\theta$ vs $P$ curves; (h) threshold power $P$, as calculated from the curves of (g), vs $c_{RhB}$. \label{figexp10}} \end{figure} We study the occurrence of DSWs in the pure dye solutions ($c_{SiO_2}=0$) when varying the input laser power $P$ for different dye concentrations $c_{RhB}$. Figures \ref{figexp10}(a)-\ref{figexp10}(f) display the collected images of the far-field intensity distribution when a low [Figs. \ref{figexp10}(a)-\ref{figexp10}(c)] and a high [Figs. \ref{figexp10}(d)-\ref{figexp10}(f)] power laser beam impinges on the pure dye solutions. We note that the higher the dye concentration, the larger the spatial spectral content, due to the higher nonlinearity. Figure \ref{figexp10}(g) shows the curves of the calculated angular aperture $\theta$ vs $P$ as obtained for the different $c_{RhB}$ concentrations. In these measurements both control parameters contribute to strengthen the nonlinearity of the system. Consistently, we find that, above a critical power, $\theta$ starts to increase with $P$ because of the speedup of the shock formation due to the augmented nonlinearity; the slope of the curves increases with $c_{RhB}$, providing larger spectra at the same laser power $P$. Analogously to our analysis of the shock position $Z_s$, we also seek a threshold value of the laser power for the angular aperture. Such a threshold power can be identified as the power value beyond which $\theta$ starts to grow linearly with $P$. We plot the threshold power values in the diagram of $P$ versus $c_{RhB}$ in Fig. \ref{figexp10}(h).
\subsection{Disordered case} We consider the interplay between disorder and nonlinearity in the DSW formation by dispersing the SiO$_2$ particles in pure dye solutions at $c_{RhB}=0.05$mM and $c_{RhB}=0.1$mM. Figures \ref{figexp11}(a)-(f) show the spectral profiles for different $c_{SiO_2}$ and laser powers $P$ at fixed $c_{RhB}=0.05$mM. At this dye concentration and at any laser power $P$, no shock formation emerges from the spectra, as is also apparent from the trend of $\theta$ vs $P$ reported in Fig. \ref{figexp11}(g). This is a signature of the fact that at the lowest prepared dye concentration the nonlinearity is counteracted by the disorder, which prevents the appearance of any shock phenomenology. \begin{figure}[ht] \includegraphics[width=8.5cm]{fig11.pdf} \caption{(Color online) (a)-(f): low dye concentration ($c_{RhB}=0.05$mM) spectral content of the transmitted beam for disordered samples at two different laser powers $P$ and different SiO$_2$ concentrations: (a),(d) $c_{SiO_2}=0.007$w/w; (b),(e) $0.018$w/w and (c),(f) $0.038$w/w; (g) the angular aperture $\theta$ vs $P$: the curves show no threshold behavior in the presence of disorder. \label{figexp11}} \end{figure} In Fig. \ref{figexp12} we show the case of the disordered samples obtained from the $0.1$mM pure dye solution. At the higher power $P=140$mW the characteristic shock rings are clearly visible in the spectra corresponding to the lower concentrations of SiO$_2$, $c_{SiO_2}=0.007$ and $0.018$w/w, and disappear at the highest concentration $c_{SiO_2}=0.038$w/w. In Fig. \ref{figexp12}(g) we show the curves of $\theta$ vs $P$ calculated from the images of the upper panels [Figs. \ref{figexp12}(a)-\ref{figexp12}(f)]. We retrieve the expected threshold behavior with respect to both control parameters $P$ and $c_{SiO_2}$, which results in the power-disorder shock phase diagram of Fig. \ref{figexp12}(h).
\begin{figure}[ht] \includegraphics[width=8.5cm]{fig12.pdf} \caption{(Color online) (a)-(f) are the same as in Fig. \ref{figexp11} but at higher dye concentration ($c_{RhB}=0.1$mM); (g) the angular aperture $\theta$ vs $P$: at this higher dye concentration the curves recover the threshold behavior also in the presence of disorder; (h) power-disorder phase diagram from the curves of (g). \label{figexp12}} \end{figure} \section{Conclusion} \vspace{-0.5cm} We have reported a detailed analysis of our experiments aimed at understanding the role of disorder in the occurrence of dispersive shock waves in a thermal defocusing medium. We collected the propagating and the transmitted intensity profiles of a CW laser beam impinging on aqueous solutions of Rhodamine B, with a controllable amount of disorder added by dispersing silica beads at well-defined concentrations. Resorting to the hydrodynamical approximation, we analyzed the collected intensity distributions through two observables of the system: the shock point, from the propagating intensity profiles, and the angular aperture, from the transmitted intensity profiles. Both observables have evidenced the expected thresholds for the occurrence of the shock phenomenon with respect to both the degree of nonlinearity and the amount of disorder. The calculation of the shock point has in fact led to the first determination of the disorder-power shock phase diagram; moreover, the trend of the angular aperture versus the laser power for the different silica concentrations has allowed the calculation of two distinct shock diagrams for the ordered and disordered cases. We also analyzed the degree of correlation of the shock images with increasing disorder.
These experiments open the way to further investigations concerning the interplay between disorder and nonlinearity, with ramifications in several research directions, from basic physics, such as the study of nonlinear waves in random media, to applied research, where the exploitation of nonlinear effects in disordered media, such as biological tissue and the atmosphere, is expected to be fundamental for improving spectroscopy and imaging. \section{Acknowledgments} The research leading to these results has received funding from the European Research Council under the European Community's Seventh Framework Program (FP7/2007-2013)/ERC Grant No. 201766, from the Italian Ministry of Research (MIUR) through the PRIN Project No. 2009P3K72Z, and from the Italian Ministry of Education, University and Research under the Basic Research Investigation Fund (FIRB/2008) program/CINECA Grants No. RBFR08M3P4 and No. RBFR08E7VA. We thank M. Deen Islam for technical assistance.
\section*{Introduction} \label{introduction} In Riemann surface theory, Teichmueller theory, and the theory of moduli spaces, on the one hand, we benefit greatly from the cross-pollination of techniques coming from sometimes disparate fields like topology, complex analysis, algebraic geometry, and arithmetic geometry. On the other hand, however, passing from one structure/definition to another is quite often an arduous task, making the simultaneous use of different techniques rather tricky. In the case of a closed Riemann surface $\Sigma_{g}$ of genus $g \geq 2$, to make a smooth transition from \begin{displayquote} \textit{complex structures}. A \textit{complex structure} on $\Sigma_{g}$ is an equivalence class of complex atlases, where two atlases, say, $\{U_{i}, f_{i}\}$ and $\{ V_{i}, g_{i}\}$ are equivalent iff their union forms a new complex atlas. We denote the set of complex structures on $\Sigma_{g}$ by $\mathscr{C}(\Sigma_{g})$ \end{displayquote} to \begin{displayquote} \textit{hyperbolic structures}. A closed oriented surface $\Sigma_{g}$ of genus $g \geq 2$ endowed with a fixed hyperbolic metric, i.e., a Riemannian metric of constant sectional curvature $-1$, is known as a hyperbolic surface or $\Sigma_{g}$ equipped with a hyperbolic structure (see \cite{goldman1}, \cite{goldman2}, \cite[Chapter 5]{goldman3} for equivalent definitions of hyperbolic structures on $\Sigma_{g}$), \end{displayquote} we need the Uniformization theorem (\cite{abik}, \cite{klein}). Through the lens of the Korn-Lichtenstein theorem\footnote{The Korn-Lichtenstein theorem is the same as the Newlander-Nirenberg theorem for surfaces.} (\cite{Ch55}, \cite{NN57}) we watch the metamorphosis of \begin{displayquote} \textit{almost complex structures}.
\textit{An almost complex structure} on $\Sigma_{g}$ is a smooth bundle endomorphism $J: T \Sigma_{g} \longrightarrow T \Sigma_{g}$ such that for all $x \in \Sigma_{g}$, $J_{x}^{2}=-I_{x}$ and for all non-zero $v \in T_{x} \Sigma_{g}$, $(v, J_{x}(v))$ is an oriented basis for $T_{x}\Sigma_{g}.$ Equivalently, an almost complex structure $J$ is a smooth section of the fiber bundle $$\mathrm{GL}(\Sigma_{g}) \times_{\mathrm{GL}^{+}(2, \mathbb{R})} \mathrm{GL}^{+}(2, \mathbb{R}) / \mathrm{GL}(1, \mathbb{C}) \longrightarrow \Sigma_{g},$$ where $\mathrm{GL}(1, \mathbb{C})$ is the multiplicative group of non-zero complex numbers embedded in the group $\mathrm{GL}^{+}(2, \mathbb{R})$ of real $2 \times 2$ matrices with positive determinant. We denote the set of almost complex structures on $\Sigma_{g}$ by $\mathscr{A}(\Sigma_{g})$. Note that $\mathscr{A}(\Sigma_{g})$ is endowed with the $C^{\infty}$-topology and is clearly contractible because the homogeneous space $\mathrm{GL}^{+}(2, \mathbb{R}) / \mathrm{GL}(1, \mathbb{C})$ is contractible \end{displayquote} into \begin{displayquote} complex structures. \end{displayquote} Usually, the problems get even worse when passing from a single Riemann surface to either the parametrization space $\mathscr{T}(\Sigma_{g})$ - famously known as the Teichmueller space of $\Sigma_{g}$ - parameterizing hyperbolic structures/complex structures/almost complex structures on $\Sigma_{g}$ up to \textit{isotopy}, or to bundles of Riemann surfaces. As already mentioned, this problem is not confined to structures; it is also present when it comes to connecting different definitions and different descriptions of a mathematical object in Teichmueller theory.
For instance, the description of the Teichmueller space $\mathscr{T}(\Sigma_{g})$ of a closed oriented surface $\Sigma_{g}$ of genus $g \geq 2$ enjoys a multifaceted viewpoint, i.e., we can view $\mathscr{T}(\Sigma_{g})$ as \begin{itemize} \item the quotient space of the space $\mathscr{A}(\Sigma_{g})$ of almost complex structures on $\Sigma_{g}$ by the action of the group $\mathrm{Diff}^{+}_{0}(\Sigma_{g})$ of orientation preserving diffeomorphisms on $\Sigma_{g}$ that are isotopic to the identity. The group $\mathrm{Diff}_{0}^{+}(\Sigma_{g})$ acts on $\mathscr{A}(\Sigma_{g})$ in the following manner $$ (f^{\ast}J)_{x}:= (df_{x})^{-1}J_{f(x)}df_{x}; \quad f \in \mathrm{Diff}_{0}^{+}(\Sigma_{g});$$ \item the quotient space of the space $\mathscr{H}(\Sigma_{g})$ of Riemannian metrics of constant sectional curvature $-1$ on $\Sigma_{g}$ by the action of the group $\mathrm{Diff}^{+}_{0}(\Sigma_{g})$ (\cite{fischer}, \cite{Tr92}). The group $\mathrm{Diff}_{0}^{+}(\Sigma_{g})$ acts on $\mathscr{H}(\Sigma_{g})$ by pullback of metrics. Note that $\mathscr{H}(\Sigma_{g}) \subset \mathscr{M}(\Sigma_{g})$, where $\mathscr{M}(\Sigma_{g})$ denotes the space of all Riemannian metrics on $\Sigma_{g}$; \item the quotient space of $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ by the action of the Lie group $\mathrm{PSL}(2, \mathbb{R})$, where $\Gamma_{g}$ is the fundamental group of $\Sigma_{g}$ and $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is the space of homomorphisms $\Gamma_{g} \longrightarrow \mathrm{PSL}(2, \mathbb{R})$ which describe a discrete and cocompact action of $\Gamma_{g}$ on $\mathbb{H}^{2}$.
\end{itemize} The above-mentioned viewpoints are brought together in one-to-one correspondence by the following commutative diagram: \begin{figure}[H] \label{differentteich} \begin{center} \[ \xymatrixcolsep{1.5pc} \xymatrix{ \mathscr{M}(\Sigma_{g}) \ar@/_2pc/[ddrr]_\varsigma & & \mathscr{H}(\Sigma_{g}) \ar@{_{(}->}[ll]^-{i} \ar@{->>}[r]^-{p_{1}} \ar@[red][d]^\Theta & \mathscr{H}(\Sigma_{g}) / \mathrm{Diff}^{+}_{0}(\Sigma_{g}) \ar@/^1pc/[dr]^\Psi \ar@{=}[d] \\ & & \mathscr{C}(\Sigma_{g}) \ar@[red][d]^\Xi & \mathscr{T}(\Sigma_{g}) \ar@{=}[d] \ar@{=}[r] & \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R}) \\ & & \mathscr{A}(\Sigma_{g}) \ar@{->>}[r]^-{p_{2}} \ar@/^2pc/@[red][uu]^\Phi & \mathscr{A}(\Sigma_{g}) / \mathrm{Diff}^{+}_{0}(\Sigma_{g}) \ar@/_1pc/[ur]_\varLambda } \] \end{center} \caption{Different perspectives to look at the Teichmueller space $\mathscr{T}(\Sigma_{g})$: ``red'' arrows denote $\mathrm{Diff}^{+}_{0}(\Sigma_{g})$-equivariant bijective maps} \end{figure} In Figure \ref{differentteich}, $p_{1}$ and $p_{2}$ are the projection maps. In fact, $p_{1}$ and $p_{2}$ are principal $\mathrm{Diff}^{+}_{0}(\Sigma_{g})$-bundles (\cite{EE69}). Clearly, $i$ is the inclusion map. The map $\varsigma$ is also clear: a Riemannian metric $h$ on $\Sigma_{g}$, together with the orientation, determines the notion of angle, and hence an almost complex structure on $\Sigma_{g}$. The map $\Xi$ is an obvious (forgetful) map given by \begin{equation*} c \ni (U \subset \Sigma_{g}, \phi) \longmapsto \bigg(J_{\phi}(x):= d\phi^{-1}_{x} \hat{J} d\phi_{x}, x \in U, \hat{J}:= \begin{bmatrix} 0& -1 \\ 1 & 0 \end{bmatrix} \bigg). \end{equation*} The forgetful map $\Theta$ is a consequence of the Uniformization theorem. The continuous map $\Xi \circ \Theta$ is a bijection. The map $\Phi$ - the inverse of $\Xi \circ \Theta$ - is also continuous (\cite{EE69}, \cite{eellsscha}).
One of the main ingredients of the map $\Psi$ is the \textit{holonomy} representation (see \cite[Section 2]{burger}). For the description of the map $\varLambda$, see \cite{EE69} and \cite{robbin}. In the literature, $\mathscr{T}(\Sigma_{g})$ is also defined as a connected component of the representation variety $\mathrm{Hom}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R})$ (\cite{GM80}, \cite{GM88}, \cite{matsumoto}) and as the universal orbifold cover of the moduli space of algebraic curves $\mathfrak{M}_{g}$. In their own right, these descriptions are great motivations to study the Teichmueller space $\mathscr{T}(\Sigma_{g})$ in detail; however, this article will not discuss them further. In this article, $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R})$ will be our main definition of $\mathscr{T}(\Sigma_{g})$. Other than the Teichmueller space $\mathscr{T}(\Sigma_{g})$, there are many examples of spaces in Teichmueller theory that enjoy a kaleidoscopic picture.
One famous example is the tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$. Tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$ are best described using the theory of \textit{infinitesimal deformations}. The main slogan of the theory is to \textit{deform} a point in the Teichmueller space $\mathscr{T}(\Sigma_{g})$, be it \begin{itemize} \item a homomorphism $\rho$ representing $[\rho] \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R})$; \item or a complex structure on $\Sigma_{g}$ \end{itemize} with respect to a (real) parameter $t$ and then to analyze the local structure of the corresponding spaces. Recall the Taylor expansion of a smooth function $f$ (on a smooth manifold $M$) around a point $x \in M$: the first-order derivative at $x$ already provides good information about $f$. In the same way, certain cohomology groups provide basic and satisfactory information on deformations of a homomorphism $\rho \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$. Formally speaking, the deformation of a homomorphism $\rho \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ has the following meaning: we take a curve of maps $\rho_{t}$, where $\rho_{0}=\rho$ is a homomorphism, and ask for (infinitesimal) conditions which ensure that this curve $\rho_{t}$ satisfies the homomorphism condition \begin{equation*} \rho_{t}(\gamma_{1}\gamma_{2})=\rho_{t}(\gamma_{1})\rho_{t}(\gamma_{2}), \quad \forall \gamma_{1}, \gamma_{2} \in \Gamma_{g}. \end{equation*} Computing $\frac{d \rho_{t}}{dt}\big|_{t=0}$ and imposing this condition up to first order determines a $1$-cocycle with values in the vector space of Killing vector fields on $\mathbb{H}^{2}$, i.e., the Lie algebra $\mathfrak{g}$ of $\mathrm{PSL}(2, \mathbb{R})$. As a result, $T_{\rho}\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is nothing but the space of $\mathfrak{g}$-valued $1$-cocycles $Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}})$.
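For concreteness, the first-order computation can be sketched as follows (the notation $u(\gamma)$ for the right-translated derivative is ours): writing $\rho_{t}(\gamma)=\exp\big(t\,u(\gamma)+o(t)\big)\,\rho(\gamma)$ with $u(\gamma) \in \mathfrak{g}$, the homomorphism condition yields, at first order in $t$, \begin{equation*} u(\gamma_{1}\gamma_{2})=u(\gamma_{1})+\mathrm{Ad}_{\rho(\gamma_{1})}\,u(\gamma_{2}), \quad \forall \gamma_{1}, \gamma_{2} \in \Gamma_{g}, \end{equation*} which is precisely the cocycle condition defining $Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}})$.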
Next, considering ``trivial'' deformations $\rho_{t}$ of $\rho$, given by conjugation via elements of $\mathrm{PSL}(2, \mathbb{R})$, and solving the above-mentioned homomorphism condition up to first order determines a $1$-coboundary $c \in B^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}})$. Hence, $$T_{[\rho]}\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/\mathrm{PSL}(2, \mathbb{R}) \cong H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}}).$$ Therefore, $H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}})$ serves as the cohomological description of tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$. The space of infinitesimal deformations of a complex structure on $\Sigma_{g}$ is parametrized by the space $\mathrm{HQD}(\Sigma_{g})$ of \textit{holomorphic quadratic differentials} on $\Sigma_{g}$ (\cite{Kodaira}, \cite{Morrow}, \cite[Chapter 1]{wolpert2}), where a holomorphic quadratic differential is a holomorphic section of $Q_{\Sigma_{g}}$, the tensor square of the canonical line bundle $K_{\Sigma_{g}}$ of $\Sigma_{g}$. Hence, the analytic description of tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$ is given by $\mathrm{HQD}(\Sigma_{g})$. So, the main aim of this article is to construct explicit maps from $\mathrm{HQD}(\Sigma_{g})$ to $H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho})$ and vice-versa, i.e., \begin{equation} \label{mainthing} \xymatrix{ \mathrm{HQD}(\Sigma_{g}) \ar[r]^-{?} & H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho}) \ar[l]^-{?} } \end{equation} Now, we can ask ourselves the following question: what recipes are we going to use in the construction of these maps? Since the inception of Teichmueller's theorems, quasiconformal maps have been prevalent in classical Teichmueller theory. However, in this article, we do not focus much on quasiconformal maps.
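The coboundary arising from a trivial deformation can also be computed directly (a sketch, with the convention $u(\gamma) := \frac{d}{dt}\big|_{t=0}\, \rho_{t}(\gamma)\, \rho(\gamma)^{-1}$; conventions may differ up to sign). Take a curve $g_{t} \in \mathrm{PSL}(2, \mathbb{R})$ with $g_{0} = e$ and $X := \frac{d g_{t}}{dt}\big|_{t=0} \in \mathfrak{g}$, and set $\rho_{t}(\gamma) := g_{t}\, \rho(\gamma)\, g_{t}^{-1}$. Then
\begin{equation*}
u(\gamma) = \big( X \rho(\gamma) - \rho(\gamma) X \big)\, \rho(\gamma)^{-1} = X - \mathrm{Ad}_{\rho(\gamma)} X,
\end{equation*}
which is exactly a $1$-coboundary in $B^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}_{\rho}})$.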
We take an unconventional road that \textit{minimizes energy} to connect the above-mentioned descriptions of tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$. Our essential recipe will be the notion of a \textit{harmonic vector field} on the upper half plane $\mathbb{H}^{2}$ or the Poincar\'{e} disk $\mathbb{D}$ in constructing maps from $\mathrm{HQD}(\Sigma_{g})$ to $H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho})$ and vice-versa. The notion of a \textit{harmonic vector field} on $\mathbb{H}^{2}$ (or on $\mathbb{D}$) takes inspiration from the definition (see Definition \ref{harmonicdefn}) of a \textit{harmonic map} $\phi: \Sigma_{1} \longrightarrow \Sigma_{2}$ between Riemann surfaces equipped with conformal metrics. Harmonic maps are critical points of the energy functional $$E(\phi) = \int_{\Sigma_{1}} \| d \phi \|^{2} d\mu,$$ where $ \| \cdot \|$ is the Hilbert-Schmidt norm and $d\mu$ is the measure on $\Sigma_{1}$ determined by the Riemannian metric on $\Sigma_{1}$. The integrand is also known as the \textit{energy density} (see (\ref{energydensity})). Equivalently, harmonic maps satisfy the \textit{Euler-Lagrange partial differential equations} associated with the energy functional (see (\ref{euler})). These PDEs are non-linear and elliptic. Harmonic maps exist in the homotopy class of any diffeomorphism when the target surface is equipped with a strictly negatively curved metric, and are unique (\cite{ells}, \cite{hart}). Harmonic maps are intimately related to holomorphic quadratic differentials and hence play an important role in Teichmueller theory. This relation arises from the fact that \begin{displayquote} a diffeomorphism $\phi: (\Sigma_{1}, \sigma) \longrightarrow (\Sigma_{2}, \rho)$ between two Riemann surfaces equipped with conformal metrics is harmonic iff the quadratic differential $(\phi^{\ast}\rho)^{(2, 0)}$ on the source surface $\Sigma_{1}$ is holomorphic (see Example \ref{harmimplieshol} and \cite[Lemma 1.1]{jost1}).
\hfill $(\dagger)$ \end{displayquote} The use of harmonic maps in Teichmueller theory goes all the way back to Gerstenhaber and Rauch's program (\cite{Gerstenhaber}, \cite{Gerstenhaber1}, \cite{Reich}) to prove Teichmueller's Theorems (\cite{teichmu}) using harmonic maps. In order to state our main results, we need to define a harmonic vector field on the upper half plane $\mathbb{H}^{2}$ or the Poincar\'{e} disk $\mathbb{D}$: let $U$ be an open subset of $M$, where $M$ is either the upper half plane $\mathbb{H}^{2}$ or the Poincar\'{e} disk $\mathbb{D}$. Let $\{\phi_{t}\}_{t \in [0, \epsilon)}$ be a smooth family of smooth maps $$\phi_{t} : U \longrightarrow M$$ where $\phi_{0}$ is the inclusion. Then $\xi= \frac{d \phi_{t}}{dt} \vert_{t=0}$ is a vector field on $U$. \begin{maindefn}[Definition \ref{defn12}]\label{maindefn1} \normalfont The vector field $\xi$ on $U$ is harmonic if there exists a smooth family of smooth maps $\{\phi_{t}: U \longrightarrow M\}_{t \in [0, \epsilon)}$ which satisfies the following: \begin{enumerate} \item $\phi_{0}$ is the inclusion map, \item $\displaystyle\frac{d \phi_{t}}{dt}\Big\vert_{t=0} = \xi$, \item $\forall x \in U:~\displaystyle\frac{d}{dt}\Big\vert_{t=0}~\tau(\phi_{t})(x)=0\,$, where $\tau$ is the \textit{tension field} (see Definition \ref{tensionfieldharmonic}). \end{enumerate} \end{maindefn} An infinitesimal version of $(\dagger)$ is given by the following: \begin{mainprop}[Proposition \ref{thm2}] \label{firstprop} A smooth vector field $\xi$ on $\mathbb{H}^{2}$ or on $\mathbb{D}$ is harmonic iff $\big(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}}\big)^{(2, 0)}$ or $\big(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{D}}\big)^{(2, 0)}$ is holomorphic. \end{mainprop} Our first main theorem is based on the above Proposition and the fact that a holomorphic vector field on $U \subset \mathbb{H}^{2}$ is a harmonic vector field on $U \subset \mathbb{H}^{2}$. 
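To indicate why holomorphic vector fields are harmonic, we record a local computation (a sketch; we write $\textbf{g} = \lambda\, dz\, d\bar{z}$ for a conformal metric and $\xi = \varphi\, \partial_{z} + \overline{\varphi}\, \partial_{\bar{z}}$ for a real vector field, and factor conventions may differ elsewhere). Using $\mathcal{L}_{\xi}\, dz = d\varphi$ and $\mathcal{L}_{\xi}\, d\bar{z} = d\overline{\varphi}$, the $(2,0)$-part of the Lie derivative is
\begin{equation*}
\big( \mathcal{L}_{\xi} \textbf{g} \big)^{(2, 0)} = \lambda\, \overline{\partial_{\bar{z}} \varphi}\; dz^{2}.
\end{equation*}
In particular, if $\varphi$ is holomorphic, i.e., $\partial_{\bar{z}} \varphi = 0$, then $\big( \mathcal{L}_{\xi} \textbf{g} \big)^{(2, 0)} = 0$, which is (trivially) holomorphic; so holomorphic vector fields are harmonic and are annihilated by the map $\xi \longmapsto \big( \mathcal{L}_{\xi} \textbf{g} \big)^{(2, 0)}$.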
\begin{mainthmintro}[Theorem \ref{sess}]\label{maintheorem1} Let $\mathcal{HOL}$ denote the sheaf of holomorphic vector fields on $\mathbb{H}^{2}$, $\mathcal{HARM}$ denote the sheaf of harmonic vector fields on $\mathbb{H}^{2}$, and $\mathcal{HQD}$ denote the sheaf of holomorphic quadratic differentials on $\mathbb{H}^{2}$. Then the following sequence of sheaves \begin{equation} \label{maintheoremequation} \xymatrix{ \mathcal{HOL} \ar[r]^-{\alpha} & \mathcal{HARM} \ar[r]^-{\beta} & \mathcal{HQD} } \end{equation} is a short exact sequence of sheaves on $\mathbb{H}^{2}$. In (\ref{maintheoremequation}), $\alpha$ is the inclusion map and $\beta$ is given by the formula in Proposition \ref{firstprop}. \end{mainthmintro} \begin{mainremark} \normalfont Theorem \ref{maintheorem1} is also valid if we replace $\mathbb{H}^{2}$ with $\mathbb{D}$. \end{mainremark} \begin{mainremark} \normalfont (\ref{maintheoremequation}) is related to the following short exact sequence of sheaves in classical Teichmueller theory \begin{equation} \label{maintheoremequation3} \xymatrix{ 0 \ar[r] & \mathcal{S}_{\mathrm{Hol}}\big(T \mathbb{H}^{2}\big) \ar[r]^-{i} & \mathcal{S} \big(T \mathbb{H}^{2}\big) \ar[r]^-{\frac{\partial}{\partial \bar{z}}} & \mathcal{BEL} \ar[r] & 0 } \end{equation} where $\mathcal{S}_{\mathrm{Hol}}\big(T \mathbb{H}^{2}\big)$ is the sheaf of holomorphic vector fields on $\mathbb{H}^{2}$, $\mathcal{S} \big(T \mathbb{H}^{2}\big)$ is the sheaf of smooth vector fields on $\mathbb{H}^{2}$, and $\mathcal{BEL}$ is the sheaf of \textit{Beltrami differentials} on $\mathbb{H}^{2}$. (\ref{maintheoremequation3}) is a special case of a more general construction called the \textit{Dolbeault resolution} of the sheaf $\mathcal{S}_{\mathrm{Hol}}\big(T \mathbb{H}^{2}\big)$. See \cref{genesis} for more details. \end{mainremark} Our next main theorem establishes the global surjectivity of the map $\beta$ in (\ref{maintheoremequation}) in Theorem \ref{maintheorem1}.
\begin{mainthmintro} [Theorem \ref{thmglobharmvf} + Theorem \ref{boundary}] \label{maintheorem2} Let $q=f(z)dz^{2}$ be a holomorphic quadratic differential on $\mathbb{H}^{2}$. Suppose that $q$ satisfies the following boundedness conditions: \begin{enumerate} \item $q$ is bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$, i.e., \begin{equation*} \Arrowvert q \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} = \arrowvert f(z) \arrowvert \Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} \leq D, \end{equation*} where $\Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}}= \Im(z)^{2}$ and $D$ is a positive real number. \item The first and second covariant derivatives of $q$ w.r.t $\nabla$, the linear connection on $T^{\ast} \mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast} \mathbb{H}^{2}$, are bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$. \end{enumerate} Then there exists a harmonic vector field $\xi^{\mathrm{reg}}$ on $\mathbb{H}^{2}$ such that $\beta(\xi^{\mathrm{reg}})=q$, where $\beta$ is introduced in Theorem \ref{maintheorem1}. An explicit formula is \begin{equation*} \xi^{\mathrm{reg}}(z) = \lim_{c \to \infty} \Bigg( \xi_{c}(z) - \bigg( \xi_{c}(\iota) + \frac{\partial \xi_{c}}{\partial z}\bigg|_{z=\iota}\cdot (z-\iota)\bigg) \Bigg), \end{equation*} where $$\xi_{c}(z)= \bigg( \int_{y_{\ast}(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg) \eta(z)$$ and $c$ is a positive real number.
The harmonic vector field $\xi^{\mathrm{reg}}$, transported from $\mathbb{H}^{2}$ to the open unit disk $\mathbb{D}$ by the Cayley transform $C$, extends to a continuous vector field, say $\chi$, on $\overline{\mathbb{D}}$ defined as follows: \begin{equation*} \label{maintheoremequation1} \chi(C(z)) = \begin{cases} C_{\ast}(\xi^{\mathrm{reg}}(z)) & \quad z \in \mathbb{H}^{2} \\ C_{\ast}(\xi^{\mathrm{reg}}(z)) & \quad z \in \partial \mathbb{H}^{2} \setminus \{\infty\} \\ 0 & \quad z = \infty \end{cases} \end{equation*} where $C_{\ast}(\xi^{\mathrm{reg}}(z))$ is the pushforward of $\xi^{\mathrm{reg}}(z)$ by the Cayley transform $C$. \end{mainthmintro} \begin{mainremark} \normalfont We use the superscript $\mathrm{reg}$, short for ``regularisation'', to denote our required harmonic vector field. \end{mainremark} \begin{mainremark} \normalfont The global surjectivity of the map $\beta$ in Theorem \ref{maintheorem1} is proven independently by S. Wolpert in \cite[Section 2]{wolpert}. See the beginning of \textbf{\cref{harmonicexplicit}} in \textbf{\cref{chapter3}}. \end{mainremark} \smallskip Theorem \ref{maintheorem2} implies that the coboundary $\delta \chi$ $$\chi \longmapsto \big(\gamma \longmapsto \chi(\gamma) \gamma'^{-1} - \chi \big), \quad \forall \gamma \in \Gamma$$ where $\Gamma$ is a discrete cocompact subgroup of $\mathrm{Isom}^{+}(\mathbb{D})$, defines a $1$-cocycle with values in the vector space $\mathrm{HOL}$ of holomorphic vector fields on $\mathbb{D}$. Note that we view $\chi$ as a $0$-cocycle with values in the vector space of harmonic vector fields on $\mathbb{D}$.
The following results ensure that $\delta \chi$ is a $1$-cocycle with values in the vector space of Killing vector fields on $\mathbb{D}$: \begin{mainthmintro} [Theorem \ref{infinitedimproblem} + Theorem \ref{summarytheo}] \label{chap3theorem1} Let $q=fdz^{2}$ be a holomorphic quadratic differential on the Poincar\'{e} disk $\mathbb{D}$ which satisfies the following boundedness conditions: \begin{enumerate} \item $q$ is bounded in the hyperbolic metric on $\mathbb{D}$, i.e., $$\lVert q \rVert_{\textbf{g}_{\mathbb{D}}} \leq D, $$ where $D$ is a positive real number. \item The first and the second covariant derivatives of $q$ w.r.t the linear connection on $T^{\ast}\mathbb{D} \otimes_{\mathbb{C}} T^{\ast} \mathbb{D}$ are bounded in $\textbf{g}_{\mathbb{D}}$. \end{enumerate} Then there exists a harmonic vector field $\chi$ on $\mathbb{D}$ which admits an $L^{2}$-extension to the closed unit disk $\overline{\mathbb{D}}$ such that $(\mathcal{L}_{\chi}\textbf{g}_{\mathbb{D}})^{(2, 0)}= q$. Moreover, the restriction of that extension to the boundary circle $\mathbb{S}^{1}$ is tangential, and $\chi$ is unique up to the addition of holomorphic vector fields on $\mathbb{D}$ which extend tangentially to the boundary circle $\mathbb{S}^{1}$. Also, $\chi$ is unique up to the addition of the vector space $\mathfrak{g}$ of Killing vector fields on $\mathbb{D}$. \end{mainthmintro} \begin{maincoro}[Corollary \ref{summarytheocor}] \label{chap3coro1} Let $\Gamma$ denote a subgroup of $\mathrm{Isom}^{+}(\mathbb{D})$, where $\mathrm{Isom}^{+}(\mathbb{D})$ is the group of orientation preserving isometries of $\mathbb{D}$.
If $q=fdz^{2}$ and $\chi$ are related as in Theorem \ref{chap3theorem1} and if in addition to (1) and (2) in Theorem \ref{chap3theorem1}, $q$ is $\Gamma$-invariant, i.e., $$f(\gamma(z))\gamma'(z)^{2}=f(z), \quad \forall \gamma \in \Gamma, z \in \mathbb{D},$$ then $\delta \chi$ defined by $$ \gamma \longmapsto \chi(\gamma) \gamma'^{-1}-\chi, \quad \forall \gamma \in \Gamma$$ is a $1$-cocycle $c$ for the group $\Gamma$ with coefficients in the Lie algebra $\mathfrak{g}$ of $\mathrm{Isom}^{+}(\mathbb{D})$ and its cohomology class $[c]$ depends only on $q$. \end{maincoro} \begin{maincoro}[Corollary \ref{onewaymap}] \label{corosecondlast} Let $\Gamma$ be a discrete cocompact subgroup of $\mathrm{Isom}^{+}(\mathbb{D})$. Then we have an injective mapping $$\varPhi: \mathrm{HQD}(\mathbb{D}, \Gamma) \longrightarrow H^{1}(\Gamma; \mathfrak{g}) $$ $$q \longmapsto [c],$$ where $\mathrm{HQD}(\mathbb{D}, \Gamma)$ denotes the vector space of $\Gamma$-invariant holomorphic quadratic differentials on $\mathbb{D}$ and $c= \delta \chi$. \end{maincoro} \smallskip To construct an inverse of $\varPhi$ in Corollary \ref{corosecondlast}, we first construct a smooth vector field $\psi$ on $\mathbb{D}$ such that $\delta \psi =c$, where $c$ is a $1$-cocycle representing $[c] \in H^{1}(\Gamma; \mathfrak{g})$. We then show that $\psi$ admits an $L^{2}$-extension to the closed unit disk $\overline{\mathbb{D}}$ whose restriction to the boundary circle $\mathbb{S}^{1}$ is tangential. This construction relies on the existence of a $\Gamma$-invariant partition of unity on $\mathbb{D}$. See \textbf{\cref{cohomtoanalsection1}} in \textbf{\cref{cohomtoanal}}. \begin{mainlemma}[Lemma \ref{partition}] \label{lemmaintro} There exists a smooth function $\varphi$ on $\mathbb{D}$ such that \begin{enumerate} \item $0 \leq \varphi \leq 1$.
\item For each $z \in \mathbb{D}$, there is a neighborhood $U$ of $z$ and a finite subset $S$ of $\Gamma$ such that $\varphi=0$ on $\gamma(U)$ for every $\gamma \in \Gamma-S$. \item $\sum_{\gamma \in \Gamma} \varphi(\gamma(z))=1$ on $\mathbb{D}$. \end{enumerate} \end{mainlemma} \begin{mainremark} \normalfont We suspect that Lemma \ref{lemmaintro} is a simpler version of results on \textit{Kleinian groups} (see \cite{Kra}). \end{mainremark} \begin{mainlemma}[Lemma \ref{continuousvector}] \label{lemmaintro1} Given any $[c] \in H^{1}(\Gamma; \mathfrak{g})$ we set $$\psi(z)= - \sum_{\gamma \in \Gamma} \varphi(\gamma(z)) c_{\gamma}(z), \quad z \in \mathbb{D},$$ where $\varphi$ is introduced in Lemma \ref{lemmaintro}. Then $\psi$ is a $C^{\infty}$-vector field on $\mathbb{D}$ such that $\delta \psi = c$. \end{mainlemma} \begin{maincoro}[Corollary \ref{shortcoro}] \label{maincoro2} $\psi$ in Lemma \ref{lemmaintro1} admits a unique $L^{2}$-extension to the closed unit disk $\overline{\mathbb{D}}$ whose restriction $\psi^{\sharp}$ to the boundary circle $\mathbb{S}^{1}$ is tangential. \end{maincoro} \begin{mainremark} \normalfont The above-mentioned construction of a vector field on the boundary circle $\mathbb{S}^{1}$ from a cocycle $c$ representing $[c] \in H^{1}(\Gamma; \mathfrak{g})$ is in the spirit of \textit{universal Teichmueller theory}. See \cite{fletcher}, \cite{gardiner}, \cite{lehto1}, \cite{lehto2}, \cite{mar} for more details. \end{mainremark} For the construction of $\psi$ in Lemma \ref{lemmaintro1}, we can either use the $\Gamma$-invariant partition of unity method or the more difficult theory of \textbf{\cref{chapter3}} and \textbf{\cref{analtocohom}}, which produces a harmonic solution. Lemma \ref{lemmaintro1} is valid for both approaches, but the construction of an $L^{2}$-extension of $\psi$ to $\overline{\mathbb{D}}$ relies on the existence of harmonic vector fields.
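Assuming the cocycle convention of Corollary \ref{chap3coro1}, i.e., $c_{\gamma}(z) = \chi(\gamma(z))\gamma'(z)^{-1} - \chi(z)$, so that $c_{\gamma_{1}\gamma_{2}}(z) = c_{\gamma_{1}}(\gamma_{2}(z))\gamma_{2}'(z)^{-1} + c_{\gamma_{2}}(z)$, the identity $\delta \psi = c$ in Lemma \ref{lemmaintro1} can be checked by a direct computation (a sketch): for $\eta \in \Gamma$,
\begin{align*}
(\delta \psi)_{\eta}(z) &= \psi(\eta(z))\eta'(z)^{-1} - \psi(z) \\
&= - \sum_{\gamma \in \Gamma} \varphi(\gamma \eta(z))\, c_{\gamma}(\eta(z))\eta'(z)^{-1} + \sum_{\gamma \in \Gamma} \varphi(\gamma(z))\, c_{\gamma}(z) \\
&= - \sum_{\gamma \in \Gamma} \varphi(\gamma \eta(z)) \big( c_{\gamma \eta}(z) - c_{\eta}(z) \big) + \sum_{\gamma \in \Gamma} \varphi(\gamma(z))\, c_{\gamma}(z) \\
&= c_{\eta}(z) \sum_{\gamma \in \Gamma} \varphi(\gamma \eta(z)) = c_{\eta}(z),
\end{align*}
where in the last line the substitution $\gamma' = \gamma \eta$ cancels the two remaining sums and $\sum_{\gamma \in \Gamma} \varphi(\gamma \eta(z)) = 1$ by Lemma \ref{lemmaintro}.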
Therefore, it is worth asking the following: \begin{mainopen}[Open Problem \ref{openprob1}] Is there a more direct way of proving Corollary \ref{maincoro2} which does not take harmonicity into account? \end{mainopen} The final results of this article are based on the reincarnation (see \textbf{\cref{invariancepoisson}}) and adaptation of the \textit{Poisson integral formula} to the case of continuous tangential vector fields on $\mathbb{S}^{1}$. First, we construct a harmonic vector field on the open unit disk $\mathbb{D}$ from a continuous tangential vector field $X$ on $\mathbb{S}^{1}$. Note that a continuous tangential vector field $X$ on $\mathbb{S}^{1}$ can be written as $X=fY$, where $f$ is a real-valued continuous function on $\mathbb{S}^{1}$ and $Y$ is the norm $1$ tangential vector field on $\mathbb{S}^{1}$ given by $z \longmapsto \iota z$. \begin{mainthmintro}[Theorem \ref{kernelvectorfieldisharmonic}] \label{maintheorem4} Let $\mathcal{S}_{C^{0}}(T\mathbb{S}^{1})$ be the Banach space of (tangential) continuous vector fields on $\mathbb{S}^{1}$ and $\mathcal{S}_{C^{0}}(T\mathbb{D})$ be the space of continuous vector fields on the open disk $\mathbb{D}$. A linear map $$\mathcal{F}: \mathcal{S}_{C^{0}}(T\mathbb{S}^{1}) \longrightarrow \mathcal{S}_{C^{0}}(T\mathbb{D})$$ is given by the normalized convolution $$\mathcal{F}(X) = f \ast \textbf{K}, $$ where $\textbf{K}$ is the Poisson kernel vector field given by $$\textbf{K}(z) = \frac{\iota (1-|z|^{2})^{3}}{|1-\bar{z}|^{2} \cdot (1-\bar{z})^{2}}.$$ Moreover, $\mathcal{F}(X)$ is a harmonic vector field on the open unit disk $\mathbb{D}$. \end{mainthmintro} \begin{mainlemma}[Lemma \ref{finaltheorem}] \label{mainlemma3} $\mathcal{F}(X)$ and $X$ together define a continuous vector field on the closed unit disk $\overline{\mathbb{D}}$.
\end{mainlemma} We adapt Lemma \ref{mainlemma3} to the case of tangential $L^{2}$-vector fields on $\mathbb{S}^{1}$ as follows: \begin{maincoro}[Corollary \ref{lastcoro}] \label{lastcorointro} For an $L^{2}$-tangential vector field $X$ on $\mathbb{S}^{1}$, $X$ is an $L^{2}$-boundary extension of the smooth vector field $\mathcal{F}(X)$ on the open unit disk $\mathbb{D}$. \end{maincoro} \begin{mainremark} \normalfont We suspect that Corollary \ref{lastcorointro} is an infinitesimal version of the problem of finding harmonic extensions of quasiconformal maps (from $\mathbb{S}^{1}$ to itself) to the open unit disk $\mathbb{D}$ or the upper half plane $\mathbb{H}^{2}$. See \cite{hardt} for more details. \end{mainremark} We have not shown that there exists a \textit{unique} harmonic extension of a tangential $L^{2}$-vector field $X$ on $\mathbb{S}^{1}$ to the closed unit disk $\overline{\mathbb{D}}$. This brings us to our second open problem: \begin{mainopen}[Open Problem \ref{openprob2}] \label{openprobintro2} Given a tangential $L^{2}$-vector field $X$ on the boundary circle $\mathbb{S}^{1}$, does there exist a unique harmonic extension to the closed unit disk $\overline{\mathbb{D}}$? \end{mainopen} From Theorem \ref{maintheorem4} and Corollary \ref{lastcorointro}, we get the following result: \begin{mainthmintro}[Theorem \ref{lastsectiontheo}] \label{maintheorem5} Let $\Gamma$ be a discrete cocompact subgroup of $\mathrm{PSU}(1, 1)$. For every cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathfrak{g})$, there exists a smooth vector field $\psi$ on the open unit disk $\mathbb{D}$ such that $c= \delta \psi$. Moreover, any such $\psi$ admits an $L^{2}$-extension to $\overline{\mathbb{D}}$ whose restriction $\psi^{\sharp}$ to the boundary circle $\mathbb{S}^{1}$ is tangential.
There exists a homomorphism \begin{equation*} \label{mainmap2} \begin{split} \varPsi: H^{1}(\Gamma; \mathfrak{g}) & \longrightarrow \mathrm{HQD}(\mathbb{D}, \Gamma) \\ [c] & \longmapsto \big(\mathcal{L}_{\mathcal{F}(\psi^{\sharp})}\textbf{g}_{\mathbb{D}} \big)^{(2, 0)}, \end{split} \end{equation*} where the map $\mathcal{F}$ is introduced in Theorem \ref{maintheorem4} and $\mathcal{F}(\psi^{\sharp})$ is a harmonic vector field on the open disk $\mathbb{D}$. \end{mainthmintro} \begin{maincoro}[Corollary \ref{lastsectioncorol}] \label{maincoro5} $$ \varPhi \circ \varPsi = \mathrm{Id},$$ where $\varPhi$ is defined in Corollary \ref{corosecondlast} and $\varPsi$ is defined in Theorem \ref{maintheorem5}. \end{maincoro} \subsection*{Organisation of the article} The main goals of \textbf{\cref{beginning}} are to gather some necessary results, prove that the Teichmueller space $\mathscr{T}(\Sigma_{g})$ is a $(6g-6)$-dimensional manifold using techniques from differential topology, and briefly discuss tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$. We have attempted to follow a coherent narrative. The reader who is familiar with these notions can skip \cref{beginning}. \textbf{\cref{chapter3}} is dedicated to establishing the notion of a harmonic vector field on $\mathbb{H}^{2}$ (or on $\mathbb{D}$) and proving Proposition \ref{firstprop}, Theorem \ref{maintheorem1}, and Theorem \ref{maintheorem2}. It also discusses the main advantages of the method used in \textbf{\cref{chapter3}} to prove Theorem \ref{maintheorem2} over Scott Wolpert's method.
In \textbf{\cref{analtocohom}}, we construct an explicit map from the vector space of $\Gamma$-invariant holomorphic quadratic differentials $\mathrm{HQD}(\mathbb{D}, \Gamma)$ on $\mathbb{D}$ to $H^{1}(\Gamma; \mathfrak{g})$ by using the theory of $L^{2}$-vector fields on $\mathbb{S}^{1}$, where $\Gamma$ denotes a discrete cocompact subgroup of $\mathrm{PSU}(1, 1)$ and $\mathfrak{g}$ denotes the Lie algebra of $\mathrm{PSU}(1, 1)$. One of the main actors in \textbf{\cref{analtocohom}} is the notion of a \textit{tangential $L^{2}$-vector field} on $\mathbb{S}^{1}$ (see Definition \ref{defnoftan} and Example \ref{examoftan}). In \textbf{\cref{analtocohom}}, we prove Theorem \ref{chap3theorem1}, Corollary \ref{chap3coro1}, and Corollary \ref{corosecondlast}. \textbf{\cref{cohomtoanal}} is dedicated to constructing a map in the other direction in (\ref{mainthing}), i.e., from the cohomological description of tangent spaces to the analytic description of tangent spaces to the Teichmueller space $\mathscr{T}(\Sigma_{g})$. In \textbf{\cref{cohomtoanal}}, we prove Lemma \ref{lemmaintro}, Lemma \ref{lemmaintro1}, Corollary \ref{maincoro2}, Theorem \ref{maintheorem4}, Lemma \ref{mainlemma3}, Corollary \ref{lastcorointro}, Theorem \ref{maintheorem5}, and Corollary \ref{maincoro5}. In \textbf{\cref{connectionuniversal}}, we show how a connection on the \textit{universal Teichmueller curve} can be described using the notion of a harmonic vector field on $\mathbb{D}$ developed in \textbf{\cref{chapter3}}, \textbf{\cref{analtocohom}}, and \textbf{\cref{cohomtoanal}}. \cref{genesis} sheds light on how (\ref{maintheoremequation}) and (\ref{maintheoremequation3}) relate to each other. \subsection*{Acknowledgements} This article is based on the author's Ph.D. thesis. The author is immensely indebted to Michael Weiss for his constant support and encouragement.
The author would also like to thank Scott Wolpert for many enlightening correspondences and for discussing some of his work that has been used in this article. The author's thesis work was supported by the Alexander von Humboldt Professorship of Michael Weiss (2012-2017) and the Ada Lovelace Research Fellowship provided by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC 2044-390685587, Mathematics M\"unster: Dynamics-Geometry-Structure. \section{Preliminaries} \label{beginning} \subsection{Some facts from hyperbolic geometry} \label{factshyperbolic} The upper half plane $\mathbb{H}^{2}$ with the metric $\textbf{g}_{\mathbb{H}^{2}} = \frac{dx^{2}+dy^{2}}{y^{2}}$ and the Poincar\'{e} disk $\mathbb{D}$ with the metric $\textbf{g}_{\mathbb{D}}= \frac{4(dx^{2}+dy^{2})}{(1-(x^{2}+y^{2}))^{2}}$ are the common models for the hyperbolic plane. Semicircles and half lines orthogonal to $\mathbb{R}$ are the geodesics in the upper half plane model $\mathbb{H}^{2}$. In the Poincar\'{e} disk model $\mathbb{D}$, if two points $z_{1}$ and $z_{2}$ are on the same diameter then the geodesic from $z_{1}$ to $z_{2}$ is the Euclidean line segment joining them; otherwise the geodesic is the arc of the circle through $z_{1}$ and $z_{2}$ that is orthogonal to $\mathbb{S}^{1}$. Both $\mathbb{H}^{2}$ and $\mathbb{D}$ have curvature $-1$ w.r.t $\textbf{g}_{\mathbb{H}^{2}}$ and $\textbf{g}_{\mathbb{D}}$.
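The curvature assertion can be checked with the standard formula for a conformal metric $\lambda (dx^{2}+dy^{2})$, namely $K = -\frac{1}{2 \lambda} \Delta \log \lambda$. For $\textbf{g}_{\mathbb{H}^{2}}$ we have $\lambda = y^{-2}$, so
\begin{equation*}
\Delta \log \lambda = -2\, \Delta \log y = \frac{2}{y^{2}}, \qquad K = - \frac{y^{2}}{2} \cdot \frac{2}{y^{2}} = -1,
\end{equation*}
and a similar computation applies to $\textbf{g}_{\mathbb{D}}$.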
Both $\textbf{g}_{\mathbb{H}^{2}}$ and $\textbf{g}_{\mathbb{D}}$ are invariant under $$\mathrm{Aut}(\mathbb{H}^{2}) =\{f \in \mathrm{Aut}(\overline{\mathbb{C}})| f(\mathbb{H}^{2})=\mathbb{H}^{2} \},$$ where $\mathrm{Aut}(\overline{\mathbb{C}})$ is the automorphism group of the Riemann sphere $\overline{\mathbb{C}}$, and $$\mathrm{Aut}(\mathbb{D}) = \{f \in \mathrm{Aut}(\overline{\mathbb{C}})| f(\mathbb{D})=\mathbb{D} \}.$$ Note that $\mathrm{Aut}(\mathbb{H}^{2}) \cong \mathrm{PSL}(2, \mathbb{R}) \cong \mathrm{Isom}^{+}(\mathbb{H}^{2})$, where $\mathrm{Isom}^{+}(\mathbb{H}^{2})$ is the group of orientation preserving isometries of $\mathbb{H}^{2}$. Every element of $\mathrm{Isom}^{+}(\mathbb{H}^{2})$ has the form $\gamma(z)=\frac{az+b}{cz+d},$ where $a, b, c, d \in \mathbb{R}$ with $ad-bc=1$. We classify elements of $\mathrm{PSL}(2, \mathbb{R})$ based on an extremal problem on hyperbolic translation length as follows: for every $\gamma \in \mathrm{PSL}(2, \mathbb{R})$ except the identity element, set $$\alpha(\gamma) = \inf_{z \in \mathbb{H}^{2}} d_{\mathbb{H}^{2}}(z, \gamma(z)),$$ where $d_{\mathbb{H}^{2}}(-, -)$ denotes the hyperbolic distance. Then $\gamma$ is \begin{itemize} \item \textit{elliptic} if $\alpha(\gamma)=0$ and there exists a point $z \in \mathbb{H}^{2}$ with $\alpha(\gamma)= d_{\mathbb{H}^{2}}(z, \gamma(z))$; in other words, $z$ is a fixed point of $\gamma$; \item \textit{parabolic} if $\alpha(\gamma)=0$ but there exists no point $z \in \mathbb{H}^{2}$ with $\alpha(\gamma)= d_{\mathbb{H}^{2}}(z, \gamma(z))$; \item \textit{hyperbolic} if $\alpha(\gamma) > 0$ and there exists a point $z \in \mathbb{H}^{2}$ with $\alpha(\gamma)= d_{\mathbb{H}^{2}}(z, \gamma(z))$. \end{itemize}
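As an illustration of this trichotomy (a sketch): for $\gamma(z) = \lambda z$ with $\lambda > 1$, a short computation with the distance formula $\cosh d_{\mathbb{H}^{2}}(z_{1}, z_{2}) = 1 + \frac{|z_{1}-z_{2}|^{2}}{2\, \Im(z_{1}) \Im(z_{2})}$ shows that the infimum is attained on the imaginary axis, with
\begin{equation*}
\alpha(\gamma) = d_{\mathbb{H}^{2}}(\iota, \lambda \iota) = \int_{1}^{\lambda} \frac{dy}{y} = \log \lambda > 0,
\end{equation*}
so $\gamma$ is hyperbolic; while for $\gamma(z) = z+1$ one has $\cosh d_{\mathbb{H}^{2}}(\iota y, 1 + \iota y) = 1 + \frac{1}{2y^{2}} \longrightarrow 1$ as $y \longrightarrow \infty$, so $\alpha(\gamma) = 0$ but the infimum is never attained, and $\gamma$ is parabolic.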
Since $\mathbb{H}^{2}$ is isometric to $\mathbb{D}$, normal forms of above elements are given as follows: any elliptic element is conjugate to a rotation $z \longmapsto \lambda z$ in $\mathrm{Aut}(\mathbb{D})$, for some $\lambda$ with $|\lambda|=1$; any parabolic element is conjugate to either $z \longmapsto z+1$ or to $z \longmapsto z-1$ in $\mathrm{Aut}(\mathbb{H}^{2})$, and these maps are not conjugate to each other; any hyperbolic element is conjugate to $z \longmapsto \lambda z$ in $\mathrm{Aut}(\mathbb{H}^{2})$, where $\lambda > 1$. Since elements of $\mathrm{PSL}(2, \mathbb{R})$ have matrix representations, they are also classified by $\mathtt{trace}$, i.e., for a non-identity $\gamma \in \mathrm{PSL}(2, \mathbb{R})$ the following holds: $\gamma$ is parabolic iff $\mathtt{trace}^{2}(\gamma)=4$; $\gamma$ is elliptic iff $0 \leqq \mathtt{trace}^{2}(\gamma) < 4$; $\gamma$ is hyperbolic iff $\mathtt{trace}^{2}(\gamma)>4$. \cite{Beardon} and \cite{Iverson} are great references to absorb different flavours of hyperbolic geometry. \subsection{The Teichm\"{u}ller space, a kaleidoscopic view} \iffalse We denote the fundamental group of $\Sigma_{g}$ by $\Gamma_{g}$. Understanding and generalizing a `mathematical structure' on a `mathematical object' is an important concept in every discipline of pure mathematics. In (Riemann) surface theory the study of \textit{conformal structure}, \textit{complex structure}, and \textit{almost complex structure} on $\Sigma_{g}$ has received much attention. We begin by giving an overview of the above-mentioned structures on $\Sigma_{g}$ and also emphasize the interplay between them. \begin{defn}[Complex structure] \label{complexstructure} \normalfont A \textit{complex structure} $J$ on $\Sigma_{g}$ is an equivalence class of complex atlases, where two atlases, say, $\{U_{i}, f_{i}\}$ and $\{ V_{i}, g_{i}\}$ are equivalent iff their union forms a new complex atlas. 
\end{defn} \begin{defn}[Almost complex structure] \label{almostcomplexstructure} \normalfont \textit{An almost complex structure} on $\Sigma_{g}$ is a smooth bundle endomorphism $J: T \Sigma_{g} \longrightarrow T \Sigma_{g}$ such that \begin{enumerate} \item $\forall x \in \Sigma_{g}: J_{x}^{2}=-I_{x}$, \item $\forall \hspace{2pt} \text{nonzero} \hspace{2pt} v \in T_{x} \Sigma_{g}: (v, J_{x}(v)) \hspace{2pt} \text{is an oriented basis for} \hspace{2pt} T_{x}\Sigma_{g}.$ \end{enumerate} Equivalently, an almost complex structure is a smooth section of the fiber bundle $$\mathrm{GL}(\Sigma_{g}) \times_{\mathrm{GL}^{+}(2, \mathbb{R})} \mathrm{GL}^{+}(2, \mathbb{R}) / \mathrm{GL}(1, \mathbb{C}) \longrightarrow \Sigma_{g}.$$ $\mathrm{GL}(1, \mathbb{C})$ is the multiplicative group of non-zero complex numbers embedded in the group $\mathrm{GL}^{+}(2, \mathbb{R})$ of the real $2 \times 2$ matrices with positive determinant. \end{defn} \begin{defn}[Conformal structure] \label{conformal} \normalfont A \textit{conformal structure} on $\Sigma_{g}$ is an equivalence class of Riemannian metrics on $\Sigma_{g}$ where two Riemannian metrics $h_{1}$ and $h_{2}$ are equivalent if the following holds $$ h_{1}=e^{2u}h_{2},$$ where $u$ is a real valued $C^{\infty}$-function on $\Sigma_{g}$. \end{defn} We denote the set of almost complex structures on $\Sigma_{g}$ by $\mathcal{A}(\Sigma_{g})$ and the set of complex structures on $\Sigma_{g}$ by $\mathcal{C}(\Sigma_{g})$. $\mathcal{A}(\Sigma_{g})$ is endowed with the $C^{\infty}$-topology and is clearly contractible because the homogeneous space $\mathrm{GL}^{+}(2, \mathbb{R}^{2}) / \mathrm{GL}(1, \mathbb{C}^{1})$ is contractible. Getting an almost complex structure on $\Sigma_{g}$ from a complex structure on $\Sigma_{g}$ is obvious but the question of whether $\Sigma_{g}$ admits a complex structure whose underlying almost complex structure is the given one is answered by the \textit{Newlander-Nirenberg theorem}. 
Here is the precise formulation: \begin{theorem}[Korn-Lichtenstein Theorem \cite{Ch55}, \cite{NN57}] \label{theorem1} There is an obvious (forgetful) map \begin{equation*} \begin{split} \Xi&: \mathcal{C}(\Sigma_{g}) \longrightarrow \mathcal{A}(\Sigma_{g}) \\ & c \ni (U \subset \Sigma_{g}, \phi) \longmapsto \bigg(J_{\phi}(x):= d\phi^{-1}_{x} \hat{J} d\phi_{x}, x \in U, \hat{J}:= \begin{bmatrix} 0& -1 \\ 1 & 0 \end{bmatrix} \bigg) \end{split} \end{equation*} which is a bijection. \end{theorem} \begin{remark} \normalfont $J_{\phi}$ is independent of the choice of $\phi$ in the description of the map $\Xi$ above. \end{remark} Let $\mathrm{Diff}^{+}(\Sigma_{g})$ be the topological group of all orientation preserving diffeomorphisms of $\Sigma_{g}$ and let $\mathrm{Diff}^{+}_{0}(\Sigma_{g})$ be the open subgroup of those orientation preserving diffeomorphisms which are homotopic to the identity. The group $\mathrm{Diff}^{+}(\Sigma_{g})$ and $\mathrm{Diff}_{0}^{+}(\Sigma_{g})$ acts on $\mathcal{A}(\Sigma_{g})$ by $$ (f^{\ast}J)_{x}:= (df_{x})^{-1}J_{f(x)}df_{x}; \quad f \in \mathrm{Diff}^{+}(\Sigma_{g}).$$ The above action makes the bijective map $ \mathcal{C}(\Sigma_{g}) \longrightarrow \mathcal{A}(\Sigma_{g})$ in Theorem \ref{theorem1} $\mathrm{Diff}^{+}(\Sigma_{g})$-equivariant. Futhermore, we call a Riemannian metric $h$ on $\Sigma_{g}$ with an almost complex structure $J$ conformal if $J$ is orthogonal w.r.t $h$. From the Uniformization theorem, $\Sigma_{g}$ is biholomorphically equivalent to the quotient space $\mathbb{H}^{2} / \Gamma$, where $\Gamma$ is a group of holomorphic automorphisms of $\mathbb{H}^{2}$ acting freely and properly discontinuously and is identified with a discrete subgroup of $\mathrm{PSL}(2, \mathbb{R})$, i.e., a \textit{Fuchsian group}. In other words, in any conformal class of Riemannian metrics on $\Sigma_{g}$, there exists a unique Riemannian metric of constant curvature $-1$. 
In summary, almost complex structures, complex structures, conformal structures, and Riemannian metrics of constant curvature $-1$ are equivalent notions for $\Sigma_{g}$. \fi \subsubsection{Classical definition} \label{classical} We choose a basepoint $x_{0} \in \Sigma_{g}$. The fundamental group $\pi_{1}(\Sigma_{g}, x_{0})$ is generated by the homotopy classes $[a_{1}], [b_{1}], \ldots, [a_{g}], [b_{g}]$ induced from simple closed curves $a_{1}, b_{1}$, $\ldots, a_{g}, b_{g}$ with base point $x_{0}$ satisfying the relation $$[[a_{1}], [b_{1}]] \cdots [[a_{g}], [b_{g}]] = 1,$$ where $1$ is the unit element. We denote the fundamental group $\pi_{1}(\Sigma_{g}, x_{0})$ by $\Gamma_{g}$. By abuse of notation, we denote the generators of $\Gamma_{g}$ by $a_{1}, b_{1}, \ldots, a_{g}, b_{g}$, satisfying the fundamental relation $[a_{1}, b_{1}] \cdots [a_{g}, b_{g}] = 1$. By the Uniformization theorem, $\Gamma_{g}$ is isomorphic to a discrete cocompact subgroup of $\mathrm{PSL}(2, \mathbb{R})$. Before giving the classical definition of the Teichm\"{u}ller space, we describe the elements of $\Gamma_{g}$. \begin{prop}[\cite{katok}] \label{hyperbolicelements} Every non-identity element of $\Gamma_{g}$ is hyperbolic. \end{prop} \begin{proof} We prove the proposition by contradiction. Assume that $\gamma \in \Gamma_{g}-\{1\}$ is either parabolic or elliptic. Since $\Gamma_{g}$ acts freely on $\mathbb{H}^{2}$, it cannot contain elliptic elements. So assume that $\gamma \in \Gamma_{g}-\{1\}$ is parabolic. Since every parabolic element of $\mathrm{PSL}(2, \mathbb{R})$ is conjugate in $\mathrm{PSL}(2, \mathbb{R})$ to either $z \longmapsto z+1$ or $z \longmapsto z-1$ (see \textbf{\cref{factshyperbolic}}), we work with $\gamma(z)=z+1$ for the rest of the proof. Let $a$ be a positive real number.
Let us denote by $C_{a}$ the image of the segment joining $\iota a$ to $\gamma(\iota a)$ under the projection map $p: \mathbb{H}^{2} \longrightarrow \mathbb{H}^{2}/ \Gamma_{g}$. We note that $C_{a}$ is a closed curve. See Figure \ref{pehlachitra} below. \begin{figure}[H] \begin{center} \begin{tikzpicture} \draw [->](-2.5,0) -- (3,0); \draw [->](0,0) -- (0, 5); \draw (1,0) -- (1,5); \draw (0,0.5)node[left]{$\iota a_{2}$} -- (1,0.5) node[right]{$\gamma(\iota a_{2}$)}; \draw (0,1)node[left]{$\iota a_{3}$} -- (1,1)node[right]{$\gamma(\iota a_{3}$)}; \draw (0,2)node[left]{$\iota a_{4}$} -- (1,2)node[right]{$\gamma(\iota a_{4}$)} ; \draw (0,0.25) node[left]{$\iota a_{1}$} -- (1, 0.25)node[right]{$\gamma(\iota a_{1}$)}; \end{tikzpicture} \end{center} \caption{Line segments joining $\iota a_{i}$ to $\gamma(\iota a_{i})$} \label{pehlachitra} \end{figure} Recall that the Poincar\'{e} metric on $\mathbb{H}^{2}$ induces a hyperbolic metric on the compact surface $\mathbb{H}^{2} / \Gamma_{g}$. Let $l(C_{a})$ be the hyperbolic length of $C_{a}$ with respect to this hyperbolic metric on $\mathbb{H}^{2} / \Gamma_{g}$. There is a one-to-one correspondence between the free homotopy classes of closed curves on the compact surface $\mathbb{H}^{2} / \Gamma_{g}$ and the set of conjugacy classes in the fundamental group $\pi_{1}(\mathbb{H}^{2} / \Gamma_{g})$, so $C_{a}$ determines a conjugacy class in $\pi_{1}(\mathbb{H}^{2} / \Gamma_{g})$, namely that of $\gamma$. Now, $C_{a}$ is null-homotopic for $a$ large enough: for a sequence of positive numbers $\{ a_{i}\}_{i=1}^{\infty}$ with $a_{i} \longrightarrow \infty$ we have $l(C_{a_{i}}) = 1/a_{i} \longrightarrow 0$, while on the compact surface $\mathbb{H}^{2} / \Gamma_{g}$ the lengths of closed curves that are not null-homotopic are bounded below by a positive constant. On the other hand, $C_{a}$ cannot be null-homotopic as an element of the fundamental group $\pi_{1}(\mathbb{H}^{2} / \Gamma_{g}) \simeq \Gamma_{g}$, because it represents the conjugacy class of the non-identity element $\gamma \in \Gamma_{g}$. This is the desired contradiction.
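As a quick numerical illustration of the shrinking-length phenomenon used in this proof (this is only a sanity check, not part of the formal argument; the sampling resolution is an arbitrary choice), one can approximate the hyperbolic length of the segment joining $\iota a$ to $\gamma(\iota a)=\iota a+1$ and watch it decay like $1/a$:

```python
import numpy as np

def hyperbolic_length(path, n=100_000):
    """Approximate hyperbolic length of a path t -> z(t), t in [0, 1], in the
    upper half-plane, where the Poincare metric gives ds = |dz| / Im(z)."""
    t = np.linspace(0.0, 1.0, n)
    z = path(t)
    dz = np.abs(np.diff(z))
    y = z[:-1].imag
    return np.sum(dz / y)

# Segment from i*a to gamma(i*a) = i*a + 1 for the parabolic gamma(z) = z + 1.
lengths = []
for a in [1.0, 10.0, 100.0, 1000.0]:
    seg = lambda t, a=a: t + 1j * a
    lengths.append(hyperbolic_length(seg))

print(lengths)  # approximately [1.0, 0.1, 0.01, 0.001]
```

Each computed length agrees with the exact value $1/a$ (the integrand is constant along the horizontal segment), consistent with $l(C_{a_{i}}) \longrightarrow 0$.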
\hfill \qedsymbol \end{proof} \begin{lemma}[\cite{hub}, \cite{imayoshi}] \label{fuchsianlemma} Let $\gamma_{1}, \gamma_{2} \in \mathrm{PSL}(2, \mathbb{R}) -\{ 1 \}$ be hyperbolic, where $1$ denotes the identity element of $\mathrm{PSL}(2, \mathbb{R})$. Let $\mathrm{Fix}(\gamma_{1})$ and $\mathrm{Fix}(\gamma_{2})$ be the sets of fixed points of $\gamma_{1}$ and $\gamma_{2}$, where the set of fixed points of an element $\gamma \in \mathrm{PSL}(2, \mathbb{R})-\{ 1 \}$ is the set of all $z \in \mathbb{R} \cup \{\infty \}$ satisfying $\gamma(z)=z$. Then $\gamma_{1}$ and $\gamma_{2}$ commute iff they have at least one common fixed point, i.e., $$\mathrm{Fix}(\gamma_{1}) \cap \mathrm{Fix}(\gamma_{2}) \neq \emptyset.$$ \end{lemma} \begin{remark} \normalfont \label{cyclic} We denote the centralizer of $\Gamma_{g}$ in $\mathrm{PSL}(2, \mathbb{R})$ by $C_{\Gamma_{g}} \mathrm{PSL}(2, \mathbb{R})$. From Lemma \ref{fuchsianlemma}, it is easy to see that $C_{\Gamma_{g}} \mathrm{PSL}(2, \mathbb{R})$ is trivial. Here is an argument: by Lemma \ref{fuchsianlemma}, two hyperbolic elements $\gamma_{1}$ and $\gamma_{2}$ are noncommuting iff $\mathrm{Fix}(\gamma_{1}) \cap \mathrm{Fix}(\gamma_{2}) = \emptyset$. Now, assume that $\gamma \in \mathrm{PSL}(2, \mathbb{R})$ commutes with two noncommuting elements $\gamma_{1}, \gamma_{2} \in \Gamma_{g}$. Then $\gamma$ fixes the axes of $\gamma_{1}$ and $\gamma_{2}$, since $\gamma(\mathrm{Ax}(\gamma_{i})) = \mathrm{Ax}(\gamma \gamma_{i} \gamma^{-1}) = \mathrm{Ax}(\gamma_{i})$ for $i=1, 2$. Thus $\gamma$ maps $\mathrm{Fix}(\gamma_{i})$, $i=1, 2$, to itself. However, we cannot yet conclude that $\gamma(z)= z$ for $z \in \mathrm{Fix}(\gamma_{i})$, $i=1, 2$. There are two possibilities: \begin{enumerate} \item $\gamma$ is hyperbolic with the same axis as $\gamma_{1}$ and $\gamma_{2}$. \item $\gamma$ is elliptic of order $2$, i.e., $\gamma$ interchanges the fixed points of $\gamma_{1}$ and $\gamma_{2}$, and $\gamma \gamma_{i} \gamma^{-1} = \gamma_{i}^{-1}$.
\end{enumerate} We can exclude both possibilities: in case (1), $\gamma$ would share its axis with the distinct axes of $\gamma_{1}$ and $\gamma_{2}$ and hence have $4$ fixed points, a contradiction; and in case (2), $\gamma \gamma_{i} \gamma^{-1} = \gamma_{i}^{-1} \neq \gamma_{i}$, so $\gamma \notin C_{\Gamma_{g}} \mathrm{PSL}(2, \mathbb{R})$. Hence, $C_{\Gamma_{g}} \mathrm{PSL}(2, \mathbb{R})$ is trivial. \end{remark} \begin{defn} \label{classicaldefn} \normalfont The Teichm\"{u}ller space of $\Sigma_{g}$ is defined as the space of equivalence classes of \textit{marked hyperbolic surfaces}. By a \textit{marked hyperbolic surface} we mean a pair $(S, \phi)$ where $S$ is a hyperbolic surface and $\phi: \Sigma_{g} \longrightarrow S$ is an orientation preserving diffeomorphism. The equivalence relation is defined as follows: $$ (S, \phi) \sim (S', \psi)$$ if there exists an isometry $h: S \longrightarrow S'$ such that $\psi$ is isotopic to $h \circ \phi$. We denote the Teichm\"{u}ller space of $\Sigma_{g}$ by $\mathscr{T}(\Sigma_{g})$. \end{defn} \begin{remark} \label{glitchtopology} \normalfont Note that there is a glitch in the above definition, as we have not introduced a topology on the Teichm\"{u}ller space $\mathscr{T}(\Sigma_{g})$. There is a notion of the \textit{Teichm\"{u}ller metric} which gives a topology on $\mathscr{T}(\Sigma_{g})$. See \cite{farbmar} and \cite{imayoshi} for a complete treatment. \end{remark} \subsubsection{$\mathscr{T}(\Sigma_{g})$ as a representation variety} Let $\Gamma$ be a finitely generated group and $G$ be a connected Lie group. The most interesting case for us is when $\Gamma= \Gamma_{g}$ and $G= \mathrm{Isom}^{+}(\mathbb{H}^{2}) \cong \mathrm{PSL}(2, \mathbb{R})$. Let $\mathrm{Hom}(\Gamma, G)$ denote the space of all homomorphisms $\Gamma \longrightarrow G$ with the compact-open topology. In the cases of interest to us, $G$ can be described as a closed subgroup of $\mathrm{GL}(k, \mathbb{R})$ for some large $k$; therefore, we can think of $G$ as a real algebraic subgroup of $\mathrm{GL}(k, \mathbb{R})$. The space $\mathrm{Hom}(\Gamma, G)$ then has the structure of an algebraic variety.
The representation variety $\mathrm{Hom}(\Gamma, G)$ is isomorphic to an algebraic subvariety of $G^{n}$, where $n$ is the number of generators in a finite presentation of $\Gamma$. The isomorphism type of the variety $\mathrm{Hom}(\Gamma, G)$ does not depend on the choice of the presentation of $\Gamma$ (see \cite{KM99}, \cite{LM85}). Note that the spaces $\mathrm{Hom}(\Gamma, G)$ are not manifolds in general. The natural symmetries of the space $\mathrm{Hom}(\Gamma, G)$ come from the action of $\mathrm{Aut}(\Gamma) \times \mathrm{Aut}(G)$, described as follows: if $\gamma \in \mathrm{Aut}(\Gamma)$ and $\alpha \in \mathrm{Aut}(G)$, then $\rho^{(\gamma, \alpha)} \in \mathrm{Hom}(\Gamma, G)$ is defined by $$\rho^{(\gamma, \alpha)}(x)= (\alpha \circ \rho \circ \gamma^{-1})(x). $$ We will be mainly concerned with the quotient space of $\mathrm{Hom}(\Gamma, G)$ by $\mathrm{Inn}(G)$, which will be denoted by $\mathrm{Hom}(\Gamma, G)/G$. Note that $\mathrm{Inn}(G)$ does not act freely on $\mathrm{Hom}(\Gamma, G)$ in general. The isotropy group of a point $\rho \in \mathrm{Hom}(\Gamma, G)$ is the centralizer $C_{G}(\rho)$ in $\mathrm{Inn}(G)$, and $\mathrm{Inn}(G)$ acts freely on $\mathrm{Hom}(\Gamma, G)$ if $C_{G}(\rho)$ is trivial for all $\rho \in \mathrm{Hom}(\Gamma, G)$. In the case of our interest, i.e., when $\Gamma = \Gamma_{g}$ and $G= \mathrm{PSL}(2, \mathbb{R})$, we overcome this pathology (see Remark \ref{cyclic}). The quotient space $\mathrm{Hom}(\Gamma, G)/G$ is not in general a Hausdorff space unless $G$ is a compact Lie group. \begin{defn} \label{subspacesofrep} \normalfont \begin{equation*} \begin{split} \mathrm{Hom}_{\mathrm{DF}}(\Gamma, G) & := \{ \rho \in \mathrm{Hom}(\Gamma, G) | \rho \hspace{2pt} \text{is injective with discrete image}\}, \\ \mathrm{Hom}_{0}(\Gamma, G) & := \{ \rho \in \mathrm{Hom}_{\mathrm{DF}}(\Gamma, G) | G/ \rho(\Gamma)\hspace{2pt}\text{is compact}\}.
\end{split} \end{equation*} \end{defn} \begin{remark} \label{weilremark} \normalfont It is clear that $\mathrm{Hom}_{0}(\Gamma, G) \subset \mathrm{Hom}_{\mathrm{DF}}(\Gamma, G) \subset \mathrm{Hom}(\Gamma, G)$. $\mathrm{Hom}_{0}(\Gamma, G)$ is an open subset of $\mathrm{Hom}(\Gamma, G)$ \cite{Weil60}, \cite{Weil62}. \end{remark} \begin{defn} \label{teichrep} \normalfont The Teichm\"{u}ller space $\mathscr{T}(\Sigma_{g})$ of $\Sigma_{g}$ is (also) defined as the quotient space $$\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R}),$$ where $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is defined in Definition \ref{subspacesofrep}. \end{defn} The above definition will be the main definition of the Teichm\"{u}ller space in this thesis. Now, we prove the following general fact using techniques from differential topology: \begin{prop} \label{teichmanifold} $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$ has a preferred structure of smooth manifold of dimension $6g-6$. \end{prop} \begin{proof} We prove the statement in the following steps: \smallskip \paragraph{\textbf{Step I:}} Here we prove that $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is a smooth manifold of dimension $6g-3$. Since a homomorphism $\rho: \Gamma_{g} \longrightarrow \mathrm{PSL}(2, \mathbb{R})$ is determined by choosing the $2g$ images $\rho(a_{i}), \rho(b_{i}), 1 \leq i \leq g$, there is a natural inclusion of $\mathrm{Hom}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ into the direct product $\mathrm{PSL}(2, \mathbb{R})^{2g}$ of $2g$ copies of $\mathrm{PSL}(2, \mathbb{R})$. Consider the following map $$R: \mathrm{PSL}(2, \mathbb{R})^{2g} \longrightarrow \mathrm{PSL}(2, \mathbb{R}) $$ given by \begin{equation} \label{commutatormap} R(A_{1}, B_{1}, \ldots, A_{g}, B_{g}) = A_{1}B_{1}A_{1}^{-1}B_{1}^{-1} \cdots A_{g}B_{g}A_{g}^{-1}B_{g}^{-1}.
\end{equation} \textit{Claim:} We assume that $A_{1}$ and $B_{1}$ are noncommuting hyperbolic elements. Then the differential of $R$ at $(A_{1}, B_{1}, \ldots, A_{g}, B_{g}) \in \mathrm{PSL}(2, \mathbb{R})^{2g}$ is surjective. \smallskip \newline \textit{Proof of the Claim:} By precomposing the map $R$ given in (\ref{commutatormap}) with the map $$ \mathrm{PSL}(2, \mathbb{R}) \times \mathrm{PSL}(2, \mathbb{R}) \longrightarrow \mathrm{PSL}(2, \mathbb{R})^{2g},$$ $(A, B) \longmapsto (A, B, 1, \ldots, 1) $, we get a map $$ \mathrm{PSL}(2, \mathbb{R}) \times \mathrm{PSL}(2, \mathbb{R}) \longrightarrow \mathrm{PSL}(2, \mathbb{R})$$ given by \begin{equation} \label{reduced} (A, B) \longmapsto ABA^{-1}B^{-1}. \end{equation} We denote this composite map by $R$ as well. Therefore, proving the above-mentioned claim amounts to proving the following statement: Let $\mathfrak{g}$ denote the Lie algebra of $\mathrm{PSL}(2, \mathbb{R})$. If $A$ and $B$ are noncommuting hyperbolic elements, then the differential of the map $R$ given in (\ref{reduced}) \begin{equation} \label{differentialata} dR(A, B): T_{(A, B)} (\mathrm{PSL}(2, \mathbb{R}) \times \mathrm{PSL}(2, \mathbb{R})) \longrightarrow T_{R(A, B)} \mathrm{PSL}(2, \mathbb{R}) \end{equation} is surjective. For the calculation of the differential $dR(A, B)$ we can replace $\mathrm{PSL}(2, \mathbb{R})$ with $\mathrm{SL}(2, \mathbb{R})$. A simple calculation shows that $$T_{A}\mathrm{SL}(2, \mathbb{R}) = A \cdot \mathfrak{sl}(2, \mathbb{R}),$$ where $\mathfrak{sl}(2, \mathbb{R})$, the space of traceless real $2 \times 2$ matrices, is the Lie algebra of $\mathrm{SL}(2, \mathbb{R})$, equivalently, the tangent space at the identity. From this discussion on tangent spaces, we can write (\ref{differentialata}) as \begin{equation} \label{final} dR(A, B): A \mathfrak{sl}(2, \mathbb{R}) \times B \mathfrak{sl}(2, \mathbb{R}) \longrightarrow R(A, B) \mathfrak{sl}(2, \mathbb{R}). \end{equation} Now, we prove the surjectivity of the map given by (\ref{final}).
First, we calculate the differential of $R$ at $(A, B)$. Let $u, v \in \mathfrak{sl}(2, \mathbb{R})$. For $t \rightarrow 0$, we have \begin{equation*} \label{differentialapprox} \resizebox{0.95\hsize}{!}{$ \begin{split} R(A\exp tu, B\exp tv) - R(A, B) & \approx A(I+tu)B(I+tv)(I-tu)A^{-1}(I-tv)B^{-1} -ABA^{-1}B^{-1} \\ & \approx (A+Atu)(B+Btv)(A^{-1}-tuA^{-1}) (B^{-1}-tvB^{-1}) - ABA^{-1}B^{-1} \\ & \approx (AB+ABtv+AtuB) (A^{-1}B^{-1}-A^{-1}tvB^{-1}-tuA^{-1}B^{-1}) \\ & \quad - ABA^{-1}B^{-1} \\ & \approx ABA^{-1}B^{-1} - ABA^{-1} tv B^{-1} - ABtuA^{-1}B^{-1} + ABtvA^{-1}B^{-1} \\ & \quad + AtuBA^{-1}B^{-1} - ABA^{-1}B^{-1} \\ & \approx - ABA^{-1} tv B^{-1} - ABtuA^{-1}B^{-1} + ABtvA^{-1}B^{-1} + AtuBA^{-1}B^{-1} \\ & \approx AB \big(-A^{-1}tvA-tu+tv+B^{-1}tuB\big) A^{-1}B^{-1}. \\ \end{split} $} \end{equation*} Recall that in this proof we use the following convention for the adjoint representation $\mathrm{Ad}$ of $\mathrm{SL}(2, \mathbb{R})$ on $\mathfrak{sl}(2, \mathbb{R})$: \begin{equation*} \label{adjointrepresen} (\mathrm{Ad}A)w:= A^{-1}wA, \quad w \in \mathfrak{sl}(2, \mathbb{R}) \end{equation*} (in the more common convention $\mathrm{Ad}(A)w = AwA^{-1}$ this is the adjoint action of $A^{-1}$; the convention chosen here matches the expression just derived). Therefore, the differential $dR(A, B): A \mathfrak{sl}(2, \mathbb{R}) \times B \mathfrak{sl}(2, \mathbb{R}) \longrightarrow R(A, B) \mathfrak{sl}(2, \mathbb{R})$ is given by the following: \begin{equation*} \label{differentialcom} (Au, Bv) \longmapsto AB\big((\mathrm{Ad}B)u-u + v- (\mathrm{Ad}A)v \big) A^{-1}B^{-1}, \quad u, v \in \mathfrak{sl}(2, \mathbb{R}). \end{equation*} It is enough to show that the map $\mathfrak{sl}(2, \mathbb{R}) \times \mathfrak{sl}(2, \mathbb{R}) \longrightarrow \mathfrak{sl}(2, \mathbb{R})$ given by \begin{equation} \label{differentialcom2} (u, v) \longmapsto (\mathrm{Ad}B)u-u + v- (\mathrm{Ad}A)v, \quad u, v \in \mathfrak{sl}(2, \mathbb{R}) \end{equation} is surjective.
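Before turning to surjectivity, the first-order expansion just derived can be sanity-checked by a central finite difference (a sketch only; the matrices $A$, $B$, $u$, $v$ below are arbitrary illustrative choices, and $\mathrm{Ad}$ follows the convention $(\mathrm{Ad}A)w = A^{-1}wA$ of this proof):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via the power series (adequate for small 2x2 inputs)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def R(A, B):
    """Commutator map R(A, B) = A B A^{-1} B^{-1}."""
    return A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)

def Ad(A, w):
    """Adjoint action in the convention of this proof: (Ad A)w = A^{-1} w A."""
    return np.linalg.inv(A) @ w @ A

# Noncommuting hyperbolic elements of SL(2, R) (illustrative choices).
A = np.array([[2.0, 0.0], [0.0, 0.5]])
C = np.array([[1.0, 1.0], [0.0, 1.0]])
B = C @ A @ np.linalg.inv(C)          # hyperbolic, with a different axis

# Traceless directions u, v in sl(2, R).
u = np.array([[0.3, -1.2], [0.7, -0.3]])
v = np.array([[-0.5, 0.4], [1.1, 0.5]])

t = 1e-5
fd = (R(A @ expm(t * u), B @ expm(t * v))
      - R(A @ expm(-t * u), B @ expm(-t * v))) / (2 * t)
predicted = A @ B @ (Ad(B, u) - u + v - Ad(A, v)) @ np.linalg.inv(A) @ np.linalg.inv(B)

print(np.max(np.abs(fd - predicted)))  # small: agreement up to O(t^2)
```

The finite difference and the closed-form differential agree to the expected order, supporting the expansion above.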
\smallskip \newline \textit{Proof of surjectivity of the map given in (\ref{differentialcom2}):} Note that the Killing form is a nondegenerate bilinear form on $\mathfrak{sl}(2, \mathbb{R})$ which is preserved by the adjoint action, so that $\mathrm{PSL}(2, \mathbb{R})$ embeds into the isometry group of the Killing form on $\mathfrak{sl}(2, \mathbb{R})$. We write $B$ as an element of the one parameter subgroup generated by $b \in \mathfrak{sl}(2, \mathbb{R})$. The image of the linear map $u \longmapsto (\mathrm{Ad}B)u-u$ from $\mathfrak{sl}(2, \mathbb{R})$ to itself is precisely the $2$-dimensional subspace of $\mathfrak{sl}(2, \mathbb{R})$ which is perpendicular (in the sense of the Killing form) to $b$. Similarly, the image of the linear map $v \longmapsto v - (\mathrm{Ad}A)v$ from $\mathfrak{sl}(2, \mathbb{R})$ to itself is precisely the $2$-dimensional subspace of $\mathfrak{sl}(2, \mathbb{R})$ which is perpendicular (in the sense of the Killing form) to $a \in \mathfrak{sl}(2, \mathbb{R})$, the generator of the one parameter subgroup containing $A$. Since we have chosen $A$ and $B$ to be noncommuting hyperbolic elements, $a$ and $b$ are linearly independent in $\mathfrak{sl}(2, \mathbb{R})$; by nondegeneracy of the Killing form, $a^{\perp}$ and $b^{\perp}$ are then distinct $2$-dimensional subspaces of the $3$-dimensional space $\mathfrak{sl}(2, \mathbb{R})$, so their sum is all of $\mathfrak{sl}(2, \mathbb{R})$. The reader can also verify these two statements in coordinates, i.e., by making choices for $B$ (and $A$ respectively), $u$ (and $v$ respectively) and plugging these into $u \longmapsto (\mathrm{Ad}B)u-u$ and $v \longmapsto v - (\mathrm{Ad}A)v$. Therefore, the map $\mathfrak{sl}(2, \mathbb{R}) \times \mathfrak{sl}(2, \mathbb{R}) \longrightarrow \mathfrak{sl}(2, \mathbb{R})$ given in (\ref{differentialcom2}) is surjective. \smallskip \newline We denote by $W$ the subset of $\mathrm{PSL}(2, \mathbb{R})^{2g}$ consisting of elements $(A_{1}, B_{1}, \ldots, A_{g}, B_{g})$ such that $A_{1}, B_{1}$ are noncommuting hyperbolic elements. Since $W$ is open in $\mathrm{PSL}(2, \mathbb{R})^{2g}$, it is a manifold of dimension $6g$.
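The verification in coordinates suggested above can be carried out mechanically: write the linear map of (\ref{differentialcom2}) as a $3 \times 6$ matrix with respect to a basis of the traceless matrices $\mathfrak{sl}(2, \mathbb{R})$ (the tangent space at the identity of $\mathrm{SL}(2, \mathbb{R})$) and check that it has full rank. The matrices $A$, $B$ below are arbitrary noncommuting hyperbolic examples, and the adjoint action follows the convention $(\mathrm{Ad}A)w = A^{-1}wA$ used in this proof:

```python
import numpy as np

def Ad(A, w):
    # Convention of this proof: (Ad A)w = A^{-1} w A.
    return np.linalg.inv(A) @ w @ A

# A basis of sl(2, R), and coordinates of a traceless matrix w.r.t. it.
E = [np.array([[1.0, 0.0], [0.0, -1.0]]),
     np.array([[0.0, 1.0], [0.0, 0.0]]),
     np.array([[0.0, 0.0], [1.0, 0.0]])]

def coords(w):
    return np.array([w[0, 0], w[0, 1], w[1, 0]])

A = np.array([[2.0, 0.0], [0.0, 0.5]])
Cm = np.array([[1.0, 1.0], [0.0, 1.0]])
B = Cm @ A @ np.linalg.inv(Cm)   # hyperbolic with a different axis than A

# 3x6 matrix of the linear map (u, v) -> (Ad B)u - u + v - (Ad A)v.
cols = []
for e in E:
    cols.append(coords(Ad(B, e) - e))   # u-directions
for e in E:
    cols.append(coords(e - Ad(A, e)))   # v-directions
M = np.stack(cols, axis=1)

print(np.linalg.matrix_rank(M))  # 3, i.e. the map is surjective
```

One also sees that each of the two summands alone has rank $2$, matching the two perpendicular $2$-dimensional subspaces described above.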
From the above-mentioned claim, $1$ is a regular value of the restriction map $R|_{W}: W \longrightarrow \mathrm{PSL}(2, \mathbb{R})$. In fact, every value of the map $R|_{W}$ is a regular value. Hence, $R|_{W}^{-1}(1)$ is a submanifold of $W$ of dimension $6g-3$. Note that $R|_{W}^{-1}(1)$ is nothing but $\mathrm{Hom}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \cap W$. From Remark \ref{weilremark}, we know that $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is an open subset of $\mathrm{Hom}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$; therefore, $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is a $(6g-3)$-dimensional smooth manifold. \smallskip \paragraph{\textbf{Step II:}} In this step, we study the action of $\mathrm{PSL}(2, \mathbb{R})$ on $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$. Given $g \in \mathrm{PSL}(2, \mathbb{R})$ and $\rho \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$, we define $\rho^{g}: \Gamma_{g}\longrightarrow \mathrm{PSL}(2, \mathbb{R})$ by setting \begin{equation} \label{freeaction} \rho^{g}(\gamma) = g \rho(\gamma) g^{-1}, \quad \forall \gamma \in \Gamma_{g}. \end{equation} The map $(g, \rho) \longmapsto \rho^{g}$ is a continuous action of $\mathrm{PSL}(2, \mathbb{R})$ on $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$. We want to show that the action is free and that the orbit space of this action is again a smooth manifold. Consider the following map $$\psi_{1}: \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \longrightarrow \mathrm{Conf}_{3}(\partial \mathbb{H}^{2}),$$ $$ \rho \longmapsto (z_{1}, z_{2}, z_{3})$$ where $\mathrm{Conf}_{3}(\partial \mathbb{H}^{2})$ is the space of ordered configurations of $3$ distinct points in the boundary $\partial \mathbb{H}^{2}$.
In the above map, $z_{1}$ and $z_{2}$ are the \textit{attracting} and \textit{repelling} fixed points of $A_{1}$, i.e., $$\lim_{n \rightarrow \infty} A_{1}^{n}(z)=z_{1}, \forall z \in \mathbb{H}^{2}, \quad \lim_{n \rightarrow -\infty} A_{1}^{n}(z)=z_{2}, \forall z \in \mathbb{H}^{2}, $$ and $z_{3}$ is the attracting fixed point of $B_{1}$. Moreover, since the group $\mathrm{PSL}(2, \mathbb{R})$ acts sharply transitively on ordered triples of distinct points in $\partial \mathbb{H}^{2}$, we can also think of $\psi_{1}$ as a map $$ \psi_{1}: \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \longrightarrow \mathrm{PSL}(2, \mathbb{R}).$$ Note that we have identified $\mathrm{PSL}(2, \mathbb{R})$ with $\mathrm{Conf}_{3}(\partial \mathbb{H}^{2})$ by the map $g \longmapsto g \cdot (0, 1, \infty)$. Observe that $\psi_{1}$ is a $\mathrm{PSL}(2, \mathbb{R})$-equivariant map, i.e., $$\psi_{1}(g \cdot \rho) = g \cdot \psi_{1}(\rho), \quad \forall g \in \mathrm{PSL}(2, \mathbb{R}), $$ where the action on the L.H.S is by conjugation and the action on the R.H.S is by left-multiplication. In other words, if we change $\rho$ by conjugating it by an element $g \in \mathrm{PSL}(2, \mathbb{R})$, the three distinct points $z_{1}, z_{2}, z_{3}$ in $\partial \mathbb{H}^{2}$ are also transformed by the same element $g \in \mathrm{PSL}(2, \mathbb{R})$. The only thing left to show is that $\psi_{1}$ is differentiable. Here is an argument: $\psi_{1}$ extends to a small open neighborhood $\mathscr{U}$ of $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ in $\mathrm{PSL}(2, \mathbb{R})^{2g}$.
We know that an element $\rho \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ is determined by hyperbolic elements $(A_{1}, B_{1}, \ldots, A_{g}, B_{g}) \in \mathrm{PSL}(2, \mathbb{R})^{2g}$ satisfying the relation $$[A_{1}, B_{1}] \cdots [A_{g}, B_{g}]=1.$$ Since the set of hyperbolic elements forms an open subset of $\mathrm{PSL}(2, \mathbb{R})$ (see \textbf{\cref{factshyperbolic}}), an open neighborhood $\mathscr{U} \subseteq \mathrm{PSL}(2, \mathbb{R})^{2g}$ of $ \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ can be chosen to consist of tuples of hyperbolic elements $A'_{1}, B'_{1}$, $\ldots, A'_{g}, B'_{g}$, which need not satisfy $[A'_{1}, B'_{1}] \cdots [A'_{g}, B'_{g}]=1$. The upshot is that $\psi_{1}$ is smooth, because it is the restriction of a map defined on an open neighborhood $\mathscr{U}$ of $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ which is obviously smooth. The $\mathrm{PSL}(2, \mathbb{R})$-equivariance of $\psi_{1}$ makes it immediately clear that $\psi_{1}$ is everywhere regular. Therefore, $\psi_{1}^{-1}(1)$ is a submanifold of $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ of codimension $3$. We denote $\psi_{1}^{-1}(1)$ by $Z$. Tying it all together, the action of $\mathrm{PSL}(2, \mathbb{R})$ on $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ admits a transversal, i.e., there exists a submanifold $Z$ of $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ of codimension $3$ such that the action of $\mathrm{PSL}(2, \mathbb{R})$ gives us a diffeomorphism $$\psi_{2}: \mathrm{PSL}(2,\mathbb{R}) \times Z \longrightarrow \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$$ $$\psi_{2}(g, z) = gzg^{-1}.$$ Therefore, the orbit space $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$ is diffeomorphic to $Z$.
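The dynamics behind the definition of $\psi_{1}$ is easy to observe numerically: iterating the M\"{o}bius transformation of a hyperbolic element drives every interior point toward its attracting boundary fixed point (a sketch only, not part of the proof; the matrix below is an arbitrary hyperbolic example):

```python
import numpy as np

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

# A hyperbolic element of PSL(2, R): trace 4 > 2.  Its Mobius transformation
# fixes two boundary points, one attracting and one repelling.
A1 = np.array([[3.0, 2.0], [1.0, 1.0]])   # det = 1

# The fixed points on the boundary solve c z^2 + (d - a) z - b = 0;
# here z^2 - 2z - 2 = 0, i.e. z = 1 +/- sqrt(3).
a, b, c, d = A1.ravel()
roots = np.roots([c, d - a, -b])

# Iterating from an interior point converges to the attracting fixed point.
z = 0.3 + 1.0j
for _ in range(60):
    z = mobius(A1, z)

z1 = min(roots, key=lambda r: abs(r - z.real))   # attracting fixed point
print(z.real, z1)  # the iterates approach z1 = 1 + sqrt(3) on the boundary
```

The imaginary part of the iterates also tends to $0$, i.e., the orbit leaves every compact subset of $\mathbb{H}^{2}$ and accumulates on $\partial \mathbb{H}^{2}$.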
\hfill \qedsymbol \end{proof} \begin{remark} \normalfont Note that a different choice of generators for $\Gamma_{g}$ will give the same structure of smooth manifold on $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$. \end{remark} \begin{remark} \normalfont We became aware of Earle and Eells' paper \cite{EE69}, in which a sketch of a proof of Step I is given, only at the end of this thesis research. Many thanks to Johannes Ebert. \end{remark} \subsection{Tangent spaces to the Teichm\"{u}ller space $\mathscr{T}(\Sigma_{g})$} \label{tangentteichmuellerspace} \subsubsection{Cohomological description} \label{cohomo} Let $\Gamma$ be a finitely generated group and $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$. We can obtain a (linear) action of $\Gamma$ on $\mathfrak{g}$ by fixing a homomorphism $\rho_{0}:\Gamma \longrightarrow G$ and composing $\rho_{0}$ with the adjoint representation of $G$, thereby making $\mathfrak{g}$ a $k \Gamma$-module, where $k=\mathbb{R}$ or $\mathbb{C}$. We denote $\mathfrak{g}$ with the above-mentioned $\Gamma$-module structure by $\mathfrak{g}_{\mathrm{Ad}\rho_{0}}$. A map $c: \Gamma \longrightarrow \mathfrak{g}$ is called a \textit{1-cocycle} if \begin{equation} \label{cocycle} c(\gamma_{1} \gamma_{2})=c(\gamma_{1})+\mathrm{Ad}(\rho_{0}(\gamma_{1}))c(\gamma_{2}), \quad \forall \gamma_{1}, \gamma_{2} \in \Gamma. \end{equation} $c$ is a \textit{1-coboundary} if it is of the form \begin{equation} \label{coboundary} c(\gamma)= u-\mathrm{Ad}(\rho_{0}(\gamma))u \end{equation} for some $u \in \mathfrak{g}$. The (real vector) space of 1-cocycles is denoted by $Z^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$ and the (real vector) space of 1-coboundaries is denoted by $B^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$.
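As a small consistency check of these definitions (a sketch with arbitrary sample matrices, not part of the text's argument; here the adjoint action is the standard one, $\mathrm{Ad}(g)w = g w g^{-1}$), every 1-coboundary is indeed a 1-cocycle:

```python
import numpy as np

def Ad(g, w):
    # Standard adjoint action Ad(g)w = g w g^{-1}; with this convention the
    # cocycle and coboundary formulas are compatible.
    return g @ w @ np.linalg.inv(g)

def coboundary(u):
    """The 1-coboundary c(gamma) = u - Ad(rho_0(gamma))u, evaluated directly
    on group elements g = rho_0(gamma)."""
    return lambda g: u - Ad(g, u)

# Sample elements of SL(2, R) playing the role of rho_0(gamma_1), rho_0(gamma_2).
g1 = np.array([[2.0, 1.0], [1.0, 1.0]])
g2 = np.array([[1.0, 0.0], [3.0, 1.0]])
u = np.array([[0.4, -0.9], [1.3, -0.4]])   # a direction in sl(2, R)

c = coboundary(u)
lhs = c(g1 @ g2)                  # c(gamma_1 gamma_2)
rhs = c(g1) + Ad(g1, c(g2))       # c(gamma_1) + Ad(rho_0(gamma_1)) c(gamma_2)
print(np.max(np.abs(lhs - rhs)))  # 0 up to rounding: coboundaries are cocycles
```

This confirms, for these samples, that $B^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$ sits inside $Z^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$.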
Their quotient is the group cohomology \begin{equation*} H^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})= Z^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})/ B^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}). \end{equation*} When $\Gamma=\pi_{1}(M)$ for a topological space $M$, $H^{1}(\Gamma; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$ can be identified with $H^{1}(M; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$, the first cohomology of $M$ with coefficients in the local system given by $\mathfrak{g}_{\mathrm{Ad}\rho_{0}}$. For more details on group cohomology, the reader is referred to \cite{brown}. We are interested in the case when $\Gamma = \Gamma_{g}$, $G= \mathrm{PSL}(2, \mathbb{R})$, and $\mathfrak{g}$ is the Lie algebra of $\mathrm{PSL}(2, \mathbb{R})$. \begin{prop}[\protect{\cite[Theorem 2.6]{LM85}, \cite[Chapter VI]{raghu}}] \label{tangent1} $$T_{[\rho_{0}]} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R}) \cong H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}).$$ \end{prop} \begin{proof} We construct a linear map $$\Psi: T_{[\rho_{0}]} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R}) \longrightarrow H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$$ as follows: to the first order, a curve of maps $(\rho_{t})_{t \in [0, \epsilon)}$ in $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ through the point $\rho_{0}$ depending smoothly on the real parameter $t$ is described as: $$\rho_{t}(\gamma)= \exp\big(tc(\gamma)+O(t^{2})\big) \rho_{0}(\gamma), \quad \forall \gamma \in \Gamma_{g}.$$ The infinitesimal condition for $\rho_{t}$ to be a homomorphism is given as: \begin{equation*} \begin{split} \rho_{t}(\gamma_{1} \gamma_{2}) & = (e+tc_{\gamma_{1} \gamma_{2}}+O(t^{2})) \rho_{0}(\gamma_{1} \gamma_{2}) \\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t c_{\gamma_{1} \gamma_{2}} \rho_{0}(\gamma_{1} \gamma_{2}) + O(t^{2}) \\ & = \big(\rho_{0}(\gamma_{1})+t c_{\gamma_{1}}\rho_{0}(\gamma_{1}) 
\big)\big(\rho_{0}(\gamma_{2})+t c_{\gamma_{2}}\rho_{0}(\gamma_{2})\big)+O(t^{2})\\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t \big(\rho_{0}(\gamma_{1}) c_{\gamma_{2}} \rho_{0}(\gamma_{2}) + c_{\gamma_{1}} \rho_{0}(\gamma_{1})\rho_{0}(\gamma_{2})\big) +O(t^{2})\\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t \big( \rho_{0}(\gamma_{1}) c_{\gamma_{2}} + c_{\gamma_{1}}\rho_{0}(\gamma_{1}) \big) \rho_{0}(\gamma_{2}) +O(t^{2})\\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t \big( \rho_{0}(\gamma_{1}) c_{\gamma_{2}} \rho_{0}(\gamma_{1})^{-1} \rho_{0}(\gamma_{1}) + c_{\gamma_{1}}\rho_{0}(\gamma_{1}) \big) \rho_{0}(\gamma_{2}) +O(t^{2})\\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t \big( \rho_{0} (\gamma_{1}) c_{\gamma_{2}} \rho_{0}(\gamma_{1})^{-1} + c_{\gamma_{1}} \big) \rho_{0}(\gamma_{1}) \rho_{0}(\gamma_{2}) +O(t^{2})\\ & = \rho_{0}(\gamma_{1} \gamma_{2}) + t \big( \mathrm{Ad}(\rho_{0}(\gamma_{1})) c_{\gamma_{2}}+ c_{\gamma_{1}}\big) \rho_{0}(\gamma_{1} \gamma_{2}) + O(t^{2}). \end{split} \end{equation*} From the above equation, notice that $$ c_{\gamma_{1} \gamma_{2}}= \mathrm{Ad}(\rho_{0}(\gamma_{1})) c_{\gamma_{2}}+ c_{\gamma_{1}}.$$ Therefore, we define $\Psi \big(\frac{d}{dt} \rho_{t}|_{t=0}\big)$ to be the cocycle $c \in Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})$. We show next that $\Psi$ is injective. Suppose that the cocycle $c$ determined by $\rho_{t}$ is a coboundary, i.e., $c(\gamma)= u-\mathrm{Ad}(\rho_{0}(\gamma))u$ for some $u \in \mathfrak{g}$ (see (\ref{coboundary})). Then the curve $\rho_{t}(\gamma)= g_{t} \rho_{0}(\gamma)g_{t}^{-1}$ induced by a path $g_{t}=e+t u+O(t^{2}), u \in \mathfrak{g}$ is tangent at $t=0$ to the orbit $\mathrm{PSL}(2, \mathbb{R})\rho_{0}$ for all $\gamma \in \Gamma_{g}$. Moreover, $\Psi$ is surjective because of the fact that $\mathrm{dim} H^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}})= 6g-6$. 
The fact follows from a non-trivial result (see \cite{cohgol}) that given a connected Lie group $G$ and $\rho_{0} \in \mathrm{Hom}(\Gamma_{g}, G)$, $$\mathrm{dim}Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}) = (2g-1) \mathrm{dim} G+\mathrm{dim}C_{G}(\rho_{0}(\Gamma_{g})),$$ where $C_{G}(\rho_{0}(\Gamma_{g}))$ denotes the centralizer of $\rho_{0}(\Gamma_{g})$ in $G$, and $$\mathrm{dim}B^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}) = \mathrm{dim}G - \mathrm{dim}C_{G}(\rho_{0}(\Gamma_{g})).$$ For the case of our interest, i.e., when $G= \mathrm{PSL}(2, \mathbb{R})$, $C_{G}(\rho_{0}(\Gamma_{g}))$ is trivial (see Remark \ref{cyclic}). Therefore, $$\mathrm{dim}Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}) = (2g-1) \mathrm{dim} G = 6g-3, \quad \mathrm{dim}B^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho_{0}}) = \mathrm{dim}G =3.$$ \hfill \qedsymbol \end{proof} \begin{remark} \normalfont Note that $\rho_{0} \in \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))$ can be lifted to a homomorphism $\widetilde{\rho_{0}}:\Gamma_{g} \longrightarrow \mathrm{SL}(2, \mathbb{R})$ because the Euler class $e(\rho_{0})=2g-2$ of the oriented $\mathbb{S}^{1}$-bundle associated to $\rho_{0}$ is even, i.e., it equals twice the Euler number of the $\mathbb{R}^{2}$-bundle associated to $\widetilde{\rho_{0}}$. See \cite[Appendix C]{milsta} for more details. As a result, the expressions of $\rho_{t}(\gamma_{1}\gamma_{2})$ and $g_{t}$ in the above proof are justified. \end{remark} \subsubsection{Analytic description: Holomorphic quadratic differentials} \label{thespaceofhqd} Let $K_{\Sigma_{g}}$ be the canonical line bundle, that is, the line bundle over $\Sigma_{g}$ such that the fiber $K_{x}$ over any point $x \in \Sigma_{g}$ is the complex cotangent space $T_{x}^{\ast}\Sigma_{g}$ to $\Sigma_{g}$ at $x$. Let $Q_{\Sigma_{g}}$ be the tensor square of the canonical line bundle $K_{\Sigma_{g}}$.
The bundle $Q_{\Sigma_{g}}$ and its sections provide a glimpse into one of the important aspects of Teichm\"{u}ller theory. \begin{defn} \label{defnofhqd} \normalfont A holomorphic quadratic differential on $\Sigma_{g}$ is a holomorphic section of $Q_{\Sigma_{g}}$. \end{defn} We will denote a holomorphic quadratic differential on $\Sigma_{g}$ by $q$. Locally, in any atlas $\{(U_{i}, z_{i})\}$, $q$ can be described as $f_{i}(z_{i})dz_{i}^{2}$, where each $f_{i}$ is a holomorphic function on $U_{i}$ and $dz_{i}^{2}:= dz_{i} \otimes dz_{i}$ is a section of $Q_{\Sigma_{g}}$. We denote the space of holomorphic quadratic differentials on $\Sigma_{g}$ by $\mathrm{HQD}(\Sigma_{g})$. Since $K_{\Sigma_{g}}$ has degree $2g-2$, the Riemann-Roch formula (see \cite{farkaskra}) implies that $$\mathrm{dim}(\mathrm{HQD}(\Sigma_{g}))=\mathrm{deg}(Q_{\Sigma_{g}})-g+1=3g-3.$$ Note that the bundle $Q_{\Sigma_{g}}$ appears in a splitting of the bundle $S^{2}(T\Sigma_{g})$ of (real) symmetric bilinear forms on $T\Sigma_{g}$. This splitting is described as follows: one summand is the $1$-dimensional real vector subbundle spanned by the hyperbolic metric $\textbf{g}$ on $\Sigma_{g}$, regarded as an everywhere nonzero section. The other summand is the image of the bundle of quadratic differentials under the following embedding: \begin{equation} \label{quadmap} \psi: \mathrm{hom}_{\mathbb{C}} (T\Sigma_{g} \otimes_{\mathbb{C}} T\Sigma_{g}, \mathbb{C}) \longrightarrow S^{2}(T\Sigma_{g}) \end{equation} where $\psi(q)$ is the real part of $q$, viewed as a (family of) symmetric $\mathbb{R}$-bilinear forms. This subbundle is the \textit{trace-free} summand by definition. It is a $2$-dimensional subbundle of a $3$-dimensional (real) vector bundle, and it comes with the structure of a $1$-dimensional complex vector bundle.
We illustrate the above splitting as follows: \begin{example} \normalfont Let $U$ be an open subset of $\mathbb{C}$ with the complex structure induced from $\mathbb{C}$. Then $TU$ is identified with the trivial bundle $\mathbb{C} \times U \longrightarrow U$ and therefore, $\mathrm{hom}_{\mathbb{C}}(TU \otimes_{\mathbb{C}} TU, \mathbb{C})$ is also identified with a trivial bundle $\mathbb{C} \times U \longrightarrow U$. Therefore, quadratic differentials on $U$, whether holomorphic or not, are identified with complex valued functions on $U$. For such a function $f$, we get $$ \psi(f)(z)= \begin{bmatrix} \Re(f(z)) & -\Im(f(z)) \\ -\Im(f(z)) & - \Re(f(z)) \end{bmatrix}, $$ where $\psi$ is the map given in (\ref{quadmap}). This is very easy to check. The preferred ordered basis of $T_{z}U \cong \mathbb{C}$ as a real vector space is $\{1, \iota\}$. If $f(z)=x+y \iota$, then $\Re(1 \cdot f(z) \cdot 1)=x$, $\Re(\iota \cdot f(z) \cdot \iota)=-x$, and $\Re(1 \cdot f(z) \cdot \iota )=-y$. \end{example} From the above discussion, it follows that a holomorphic quadratic differential $q$ on $\Sigma_{g}$ gives a one parameter family $\{g(t)\}_{t \in [0, \epsilon)}$ of deformations of $\textbf{g}$ on $\Sigma_{g}$ such that $g(0)=\textbf{g}$ and $\frac{d g(t)}{dt}\big|_{t=0}=\psi(q)$. In other words, for $t$ close to $0$, $g(t) = \textbf{g} + t \psi(q)$. We view $g(t)$ as a curve in the space $\mathcal{M}$ of Riemannian metrics on $\Sigma_{g}$. Recall that a Riemannian metric on $\Sigma_{g}$ determines an almost complex structure $J$ on $\Sigma_{g}$, which further determines a complex structure on $\Sigma_{g}$; this is due to the Korn-Lichtenstein theorem. Consequently, we get a one parameter family of complex curves $\{\Sigma_{g}^{t}\}_{t \in [0, \epsilon)}$. By the Uniformization theorem, each of the complex curves in the family $\{\Sigma_{g}^{t}\}_{t \in [0, \epsilon)}$ has a preferred hyperbolic metric.
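The matrix formula in the example above can be confirmed mechanically (a throwaway check, not part of the text; the sample value of $f$ is arbitrary):

```python
import numpy as np

def psi_matrix(f):
    """Real part of the quadratic differential f dz^2 as a symmetric bilinear
    form on R^2 = C, in the ordered basis {1, i} (as in the example above)."""
    basis = [1.0, 1.0j]
    return np.array([[(v1 * f * v2).real for v2 in basis] for v1 in basis])

f = 2.0 - 5.0j          # f(z) = x + y*i with x = 2, y = -5 at some point z
M = psi_matrix(f)
x, y = f.real, f.imag
expected = np.array([[x, -y], [-y, -x]])
print(M)  # matches [[x, -y], [-y, -x]], symmetric and trace-free
```

In particular, the resulting bilinear form is symmetric and trace-free, as the splitting of $S^{2}(T\Sigma_{g})$ requires.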
Hence, we view $\{\Sigma_{g}^{t}\}_{t \in [0, \epsilon)}$ as a smooth curve in the Teichm\"{u}ller space $\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$ such that $$\frac{d \Sigma_{g}^{t} }{dt} \bigg|_{t=0} \in T_{[\rho_{0}]} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R}).$$ In summary, we have a linear map from $\mathrm{HQD}(\Sigma_{g})$ to $T_{[\rho_{0}]} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$. The injectivity of this linear map follows from \cite{sampson}, \cite{wolf}. Furthermore, this linear map is bijective because $\mathrm{HQD}(\Sigma_{g})$ and $T_{[\rho_{0}]} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))/ \mathrm{PSL}(2, \mathbb{R})$ have the same dimension as real vector spaces. \section{Explicit expressions of harmonic vector fields on $\mathbb{H}^{2}$} \label{chapter3} \subsection{Harmonic maps} \label{section13} \textit{Conventions:} All manifolds are finite-dimensional, connected, and Riemannian of class $C^{\infty}$, unless otherwise stated. All vector bundles and their sections are smooth, unless otherwise specified. Now we review some basic notions from the theory of harmonic maps. We make an effort to do our computations coordinate-free first and then in coordinates. The reader is referred to the textbook \cite{jost1} for proofs and many more details on harmonic maps. Other references on harmonic maps include \cite{eellslemaire}, \cite{lemaire4}, \cite{yau}, and \cite{jcwood}. \smallskip \newline Let $(M,g)$ and $(N, h)$ be $m$- and $n$-dimensional manifolds with the Levi-Civita connections $\nabla^{g}$ and $\nabla^{h}$, respectively. Let $\phi: M \longrightarrow N$ be a smooth map. The differential $$d \phi \in \Gamma(M,T^{\ast}M \otimes \phi^{-1}TN)$$ can be viewed as a $\phi^{-1}(TN)$-valued 1-form on $M$, i.e., $d \phi \in \mathscr{A}^{1}(\phi^{-1}(TN))$.
Before we define the notion of a \textit{harmonic map}, observe the following: \begin{enumerate} \item There exists a unique connection, $\phi^{-1}\nabla^{h}$, induced by $\phi$ on $\phi^{-1}(TN)$. Note that $\phi^{-1}(TN)$ is the pullback vector bundle on $M$ determined by $\phi$. \item The bundle $T^{\ast} M \otimes \phi^{-1} TN$ has a connection $\nabla$, naturally induced by $\nabla^{g}$ and $\phi^{-1}\nabla^{h}$. \end{enumerate} \begin{defn} \normalfont $\nabla d \phi \in \Gamma(M, \otimes^{2} T^{\ast}M \otimes \phi^{-1}TN)$ is called the \textit{second fundamental form of $\phi$}. \end{defn} \begin{defn} \label{tensionfieldharmonic} \normalfont $\mathrm{Trace}(\nabla d \phi) \in \Gamma(M, \phi^{-1}TN)$ is called the \textit{tension field} of $\phi$. It is usually denoted by $\tau(\phi)$. \end{defn} \begin{defn} \label{totgeo} \normalfont $\phi$ is said to be \textit{totally geodesic} if $\nabla d \phi =0$. \end{defn} \begin{defn} \label{harmonicdefn} \normalfont $\phi$ is said to be \textit{harmonic} if \begin{equation} \label{equ2} \tau(\phi)=0. \end{equation} We call $\tau$ the \textit{Eells-Sampson Laplacian}. \end{defn} \textit{In coordinate form:} By taking coordinate charts, the second fundamental form of $\phi$ at $x= (x^{1},\ldots, x^{m}) \in U \subset M$ can be represented as: \begin{equation} \label{trace} (\nabla d \phi)^{\alpha}_{ij}(x)= \frac{\partial^{2} \phi^{\alpha}}{\partial x^{i} \partial x^{j}}(x) - \Gamma^{k}_{ij} \frac{\partial \phi^{\alpha}}{\partial x^{k}}(x) + \Upsilon^{\alpha}_{\beta \gamma}(\phi(x)) \frac{\partial \phi^{\beta}}{\partial x^{i}}(x) \frac{\partial \phi^{\gamma}}{\partial x^{j}} (x), \end{equation} where $\Gamma^{k}_{ij}$, $\Upsilon^{\alpha}_{\beta \gamma}$ denote the Christoffel symbols of $\nabla^{g}$ and $\nabla^{h}$. Note that we have used the Einstein summation convention in (\ref{trace}).
In coordinate charts, \begin{equation*} \tau^{\alpha}(\phi)(x)= g^{ij}(x)\big(\nabla d \phi\big)^{\alpha}_{ij}(x), \end{equation*} where $g^{ij}$ denotes the inverse of the metric tensor $g_{ij}$. In coordinate form, (\ref{equ2}) can be expressed as: \begin{equation} \label{euler} \sum_{i,j=1}^{m} g^{ij} \Bigg( \frac{\partial^{2} \phi^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{m} \Gamma^{k}_{ij}(x) \frac{\partial \phi^{\alpha}}{\partial x^{k}} + \sum_{\beta, \gamma=1}^{n} \Upsilon^{\alpha}_{\beta \gamma}(\phi(x)) \frac{\partial \phi^{\beta}}{\partial x^{i}} \frac{\partial \phi^{\gamma}}{\partial x^{j}} \Bigg)=0, \quad 1 \leq \alpha \leq n. \end{equation} Note that in (\ref{euler}), the term $$\sum_{i,j=1}^{m} g^{ij} \bigg(\frac{\partial^{2} \phi^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{m} \Gamma^{k}_{ij} \frac{\partial \phi^{\alpha}}{\partial x^{k}}\bigg)$$ is the Laplace-Beltrami operator on $M$, a contribution of $\nabla^{g}$ in $T^{\ast}M$, while the other term $$\sum_{i,j=1}^{m} g^{ij} \bigg(\sum_{\beta, \gamma=1}^{n} \Upsilon^{\alpha}_{\beta \gamma}(\phi(x)) \frac{\partial \phi^{\beta}}{\partial x^{i}} \frac{\partial \phi^{\gamma}}{\partial x^{j}} \bigg),$$ a non-linear term containing the Christoffel symbols of $\nabla^{h}$, is a contribution of $\phi^{-1}\nabla^{h}$ in $\phi^{-1}TN$. Equation (\ref{euler}) is the \textit{Euler-Lagrange equation} for the \textit{energy} $E$ of $\phi$, which can be defined, under suitable conditions (for example, when $M$ is compact), as: \begin{equation*} E(\phi)= \int_{M} e(\phi) d \mu_{g}, \end{equation*} where $d \mu_{g}$ denotes the measure on $M$ induced by $g$ and $e(\phi)$ is the \textit{energy density} of $\phi$.
The energy density $e(\phi)$ of $\phi$ is defined by $$e(\phi)(x)= \frac{1}{2} \lVert d\phi(x)\rVert^{2} = \frac{1}{2} \mathrm{trace}(\phi^{\ast}h)(x),$$ where $ \lVert d\phi(x)\rVert$ is the Hilbert-Schmidt norm of the differential map $$d\phi(x): T_{x}M \longrightarrow T_{\phi(x)}N.$$ The energy density $e(\phi)$ of $\phi$ has the following expression in local coordinates: \begin{equation}\label{energydensity} e(\phi)= \frac{1}{2} g^{ij}(x) h_{\beta \gamma}(\phi) \frac{d \phi^{\beta}}{d x^{i}} \frac{d \phi^{\gamma}}{d x^{j}}, \quad x \in M. \end{equation} When $M$ is compact, we can define $\phi$ to be a harmonic map if it is a critical point of $E$. \begin{remark} \normalfont Harmonic maps are critical points of the energy functional and hence need not be energy minimizers. Below we give the formulation of the energy extremal problem in the case of harmonic maps: \smallskip \\Given a smooth map $\phi: (M,g) \longrightarrow (N, h)$, let \begin{equation*} E^{\ast}[\phi]=\inf \{ E(\phi'): \phi' \hspace{2pt} \text{smooth}, \hspace{2pt} \phi' \hspace{2pt} \text{homotopic to} \hspace{2pt} \phi \}. \end{equation*} A smooth map $\phi$ such that $E(\phi) = E^{\ast}[\phi]$ is called an energy minimizer. For the existence and the uniqueness of energy minimizers when the target manifold is equipped with a strictly negatively curved metric, see \cite{ells}, \cite{hart}. \end{remark} Now, if $M$ and $N$ are complex manifolds $\Sigma_{1}$ and $\Sigma_{2}$ carrying conformal metrics $$\sigma(z)^{2} dz d \bar{z}= \sigma(z)^{2}(dx^{2}+dy^{2}) \quad (z=x+ \iota y)$$ and $$\rho(u)^{2} du d \bar{u}= \rho(u)^{2}(du_{1}^{2}+du_{2}^{2}) \quad (u=u_{1}+ \iota u_{2}), $$ then the Laplace-Beltrami operator on $\Sigma_{1}$ is given by $\frac{1}{\sigma(z)^{2}}\frac{\partial}{\partial z} \frac{\partial}{\partial \bar{z}}$. According to J.
Jost (see \cite[Chapter 1]{jost1}), (\ref{euler}) in these coordinates takes the form \begin{equation} \label{equ3} \frac{1}{\sigma(z)^{2}}\phi_{z \bar{z}}+ \frac{1}{\sigma(z)^{2}} \frac{2 \rho_{\phi}}{\rho} \phi_{z} \phi_{\bar{z}}=0, \end{equation} where a subscript denotes a partial derivative and $\rho_{\phi}$ denotes the Wirtinger derivative of $\rho$ at the point $\phi(z)$. Therefore, a conformal map between Riemann surfaces with conformal metrics is harmonic. From (\ref{equ3}), we can see that for a smooth map $\phi: (\Sigma_{1}, \sigma(z)^{2} dz d \bar{z}) \longrightarrow (\Sigma_{2}, \rho(u)^{2} du d \bar{u}) $ between Riemann surfaces with conformal Riemannian metrics, only the conformal class of the metric on $\Sigma_{1}$ is needed to decide whether $\phi$ is harmonic, while the Riemannian metric on $\Sigma_{2}$ genuinely matters. More generally, the same is true for a smooth map from a Riemann surface to a Riemannian manifold. In summary, we see harmonic maps as a very efficient tool to compare the Riemannian metric structure of $\Sigma_{2}$ to the conformal structure of $\Sigma_{1}$. Next we discuss some basic examples of harmonic maps. \begin{example} \normalfont Totally geodesic maps are harmonic. This is clear from Definition \ref{totgeo}. \end{example} \begin{example} \normalfont The identity map $(M, g) \longrightarrow (M, g)$ is harmonic. \end{example} \begin{example} \normalfont Let $M=\mathbb{S}^{1}$ and let $N$ be compact without boundary; then every homotopy class of maps of $M$ into $N$ contains a closed geodesic, hence a harmonic map. \end{example} To discuss the next two examples, we make a small investment in algebra, which will lead us to some natural quantities. Recall the definition of an almost complex structure on $\Sigma_{g}$ from the introduction.
Extending an almost complex structure $J: T\Sigma_{g} \longrightarrow T\Sigma_{g}$ on $\Sigma_{g}$ to the complexified tangent bundle $(T\Sigma_{g})^{c}:=T\Sigma_{g} \otimes_{\mathbb{R}} \mathbb{C}$ amounts to having a decomposition of the complexified tangent space $(T_{x}\Sigma_{g})^{c}$ at each $x \in \Sigma_{g}$ into $(T_{x}\Sigma_{g})^{(1,0)}$ and $(T_{x}\Sigma_{g})^{(0, 1)}$ corresponding to eigenvalues $\iota$ and $-\iota$. That is, $$(T_{x}\Sigma_{g})^{(1,0)} =\{v \in (T_{x}\Sigma_{g})^{c} | Jv=\iota v \}, \quad (T_{x}\Sigma_{g})^{(0, 1)} = \{ v \in (T_{x}\Sigma_{g})^{c} | Jv=-\iota v\}.$$ $(T_{x}\Sigma_{g})^{(1,0)}$ and $(T_{x}\Sigma_{g})^{(0, 1)}$ are called holomorphic and antiholomorphic tangent spaces, spanned by \begin{equation*} \frac{\partial}{\partial z} = \frac{1}{2} \bigg( \frac{\partial}{\partial x} - \iota \frac{\partial}{\partial y} \bigg), \quad \frac{\partial}{\partial \bar{z}} = \frac{1}{2} \bigg( \frac{\partial}{\partial x} + \iota \frac{\partial}{\partial y} \bigg), \end{equation*} where $z=x+\iota y$. In a similar fashion, we can complexify the dual tangent bundle $T^{\ast}\Sigma_{g}$ and again, for every $x \in \Sigma_{g}$, we can decompose $(T_{x}^{\ast}\Sigma_{g})^{c}$ into its $\pm \iota$ eigenspaces - $(T_{x}^{\ast}\Sigma_{g})^{(1,0)}$ and $(T_{x}^{\ast}\Sigma_{g})^{(0, 1)}$. $(T_{x}^{\ast}\Sigma_{g})^{(1,0)}$ and $(T_{x}^{\ast}\Sigma_{g})^{(0, 1)}$ are spanned by $$dz=dx+\iota dy, \quad d \bar{z}=dx-\iota dy$$ respectively. Using the above decompositions, we can then decompose complexified tensor bundles and hence sections of tensor bundles. Most importantly, we will consider a symmetric tensor $s$ in the complexification of the bundle $T^{\ast} \Sigma_{g} \otimes T^{\ast} \Sigma_{g}$. Note that $s$ can be written in terms of $dz^{2}:= dz \otimes dz$, $d\bar{z}^{2}:= d\bar{z} \otimes d\bar{z}$, and $|dz^{2}|:= \frac{1}{2} (dz \otimes d\bar{z} + d\bar{z} \otimes dz)$. 
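The eigenvector claims above can be machine-checked in a couple of lines. The following \texttt{sympy} snippet (our own sketch, not part of the argument) verifies that $\partial/\partial z$ and $\partial/\partial \bar{z}$ are the $\pm\iota$-eigenvectors of the standard almost complex structure on $\mathbb{C}$ in the real basis $(\partial_{x}, \partial_{y})$.

```python
# Our own sanity check: in the real basis (d/dx, d/dy), the standard almost
# complex structure J acts by J(d/dx) = d/dy, J(d/dy) = -d/dx.  We verify
# that d/dz and d/dzbar are its eigenvectors with eigenvalues +i and -i.
import sympy as sp

J = sp.Matrix([[0, -1], [1, 0]])    # rotation by +90 degrees
ddz    = sp.Matrix([1, -sp.I])/2    # d/dz    = (d/dx - i d/dy)/2
ddzbar = sp.Matrix([1,  sp.I])/2    # d/dzbar = (d/dx + i d/dy)/2

assert J*ddz    == sp.I*ddz         # eigenvalue +i
assert J*ddzbar == -sp.I*ddzbar     # eigenvalue -i
```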
Tensors that have only a $(2, 0)$ part can be written locally as $fdz^{2}$ for some locally defined complex-valued function $f$ and are famously known as quadratic differentials (see \textbf{\cref{thespaceofhqd}}). \begin{example} \label{harmexamplenice} \normalfont When $M=\Omega \subset \mathbb{R}^{n}$ and $N=\mathbb{R}$, the harmonic map equation is the equation for harmonic functions, i.e., $$ \Delta \phi =0.$$ If $M$ is a surface with a complex structure and $N=\mathbb{R}$, then in the complex language the Laplace equation can be written as: $$ 4\frac{\partial}{\partial \bar{z}} \frac{\partial}{\partial z}\phi = 0.$$ We observe something important by rewriting the above equation as follows: \begin{equation} \label{equ4} \frac{\partial}{\partial \bar{z}} \bigg(\frac{\partial}{\partial z}\phi \bigg) = 0. \end{equation} We can also write (\ref{equ4}) more invariantly as follows: $$ \bar{\partial} \big((d \phi)^{(1,0)}\big)=0,$$ where the object in the parentheses is a ``holomorphic object'' (if and only if the equation holds). In other words, tied to the harmonicity of a map $\phi$ on a surface (with a complex structure) is a ``holomorphic object'', which is a holomorphic 1-form in the present case. \end{example} \begin{example} \label{harmimplieshol} \normalfont A diffeomorphism $\phi: (\Sigma_{1}, \sigma(z)^{2} dz d \bar{z}) \longrightarrow (\Sigma_{2}, \rho(u)^{2} du d \bar{u})$ is harmonic if and only if the $(2, 0)$-part of the pullback metric $\phi^{\ast}(\rho(u)^{2} du d \bar{u})$ is holomorphic. This can be seen as follows: we denote the conformal metric $\rho(u)^{2} du d \bar{u}$ on $\Sigma_{2}$ by $h$.
The pullback of $h$ by $\phi$ has the following local expression: \begin{equation} \label{equ5} \begin{split} \phi^{\ast} (h) & = (\phi^{\ast} (h))^{(2, 0)} + (\phi^{\ast} (h))^{(1, 1)} + (\phi^{\ast} (h))^{(0, 2)} \\ & = \langle \phi_{\ast} \partial_{z}, \phi_{\ast} \partial_{z} \rangle_{h} dz^{2} + \big(\Arrowvert \phi_{\ast} \partial_{z} \Arrowvert^{2}_{h} + \Arrowvert \phi_{\ast} \partial_{\bar{z}} \Arrowvert^{2}_{h} \big)\sigma^{2}(z) dz d \bar{z} + \langle \phi_{\ast} \partial_{\bar{z}}, \phi_{\ast} \partial_{\bar{z}} \rangle_{h} d\bar{z}^{2}. \end{split} \end{equation} Note that in the first equality we used the complex eigenspace decomposition $$\phi^{\ast} (h) = (\phi^{\ast} (h))^{(2, 0)} + (\phi^{\ast} (h))^{(1, 1)} + (\phi^{\ast} (h))^{(0, 2)}$$ under the action of $J$ on $T\Sigma_{1}$. Also, \begin{equation} \label{simplified1} \begin{split} \langle \phi_{\ast} \partial_{z}, \phi_{\ast} \partial_{z} \rangle_{h} dz^{2} & = h \Big(\frac{\partial \phi}{\partial z}, \frac{\partial \phi}{\partial z}\Big) dz^{2} \\ & = \bigg( h \Big(\frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial x}\Big) - h \Big(\frac{\partial \phi}{\partial y}, \frac{\partial \phi}{\partial y}\Big) - 2 \iota h \Big(\frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial y}\Big) \bigg)dz^{2} \\ & = \big( |\phi_{x}|^{2} - |\phi_{y}|^{2} - 2 \iota h(\phi_{x}, \phi_{y}) \big) dz^{2} \\ & = 4 \rho^{2} \phi_{z} \bar{\phi}_{z} dz^{2} \end{split} \end{equation} and \begin{equation} \label{simplified2} \big(\Arrowvert \phi_{\ast} \partial_{z} \Arrowvert^{2}_{h} + \Arrowvert \phi_{\ast} \partial_{\bar{z}} \Arrowvert^{2}_{h} \big)\sigma^{2}(z)= e(\phi), \end{equation} the energy density of $\phi$, expressed locally in (\ref{energydensity}).
Using the simplified expressions (\ref{simplified1}) and (\ref{simplified2}), (\ref{equ5}) takes the form \begin{equation*} \label{pullbackquad} \phi^{\ast} (h) = 4 \rho^{2} \phi_{z} \bar{\phi}_{z} dz^{2} + e(\phi) dz d \bar{z} + \overline{4 \rho^{2} \phi_{z} \bar{\phi}_{z} d z^{2}}. \end{equation*} Now, from \cite[Lemma 1.1]{jost1}, it is easy to see that \begin{equation*} \label{pullbackquad1} \begin{split} \partial_{\bar{z}}((\phi^{\ast} (h))^{(2, 0)}) & = \partial_{\bar{z}}(4 \rho^{2} \phi_{z} \bar{\phi}_{z} dz^{2} ) \\ & = \rho^{2} \big(\bar{\phi}_{z} \tilde{\tau}(\phi) + \phi_{z} \overline{\tilde{\tau}(\phi)} \big), \end{split} \end{equation*} where $\tilde{\tau}(\phi)= \phi_{z \bar{z}}+ \frac{2 \rho_{\phi}}{\rho} \phi_{z} \phi_{\bar{z}}$. Therefore, $\partial_{\bar{z}}((\phi^{\ast} (h))^{(2, 0)})=0$ when $\phi$ is harmonic, i.e., when $\tau(\phi)=0$ (see (\ref{equ3})) and hence $\tilde{\tau}(\phi)=0$. We denote $(\phi^{\ast} (h))^{(2, 0)}$ by $q$. Conversely, if $q$ is holomorphic, i.e., $$\bar{\phi}_{z} \tilde{\tau}(\phi) + \phi_{z} \overline{\tilde{\tau}(\phi)} =0 $$ and if $\tilde{\tau}(\phi) \neq 0$ at a point $p \in \Sigma_{1}$, this would imply $|\phi_{z}|=|\bar{\phi}_{z}|=|\phi_{\bar{z}}|$ and hence the Jacobian at $p$ is zero, which contradicts the fact that the Jacobian is nonzero everywhere since $\phi$ is a diffeomorphism. Furthermore, $q \equiv 0$ if and only if $\phi$ is conformal. \end{example} \subsection{The notion of a harmonic vector field} \label{chapter3section2} We introduce the notion of a \textit{harmonic vector field} on a Riemannian manifold $M$, regarded as the infinitesimal generator of local harmonic diffeomorphisms. Note that some sources use the term \textit{harmonic vector field} to mean vector fields whose associated 1-form is harmonic \cite{yano}, or vector fields that are harmonic as sections of the tangent bundle equipped with \textit{lift metrics} \cite{nauhaud}. Let $U$ be an open subset of a Riemannian manifold $M$.
Let $\{\phi_{t}\}_{t \in [0, \epsilon)}$ be a smooth family of smooth maps $$\phi_{t} : U \longrightarrow M$$ where $\phi_{0}$ is the inclusion. Then $\xi= \frac{d \phi_{t}}{dt} \vert_{t=0}$ is a vector field on $U$. \begin{defn}[Harmonic vector field] \label{defn12} \normalfont The vector field $\xi$ on $U$ is harmonic if there exists a smooth family of smooth maps $\{\phi_{t}: U \longrightarrow M\}_{t \in [0, \epsilon)}$ which satisfies the following: \begin{enumerate} \item $\phi_{0}$ is the inclusion map, \item $\displaystyle\frac{d \phi_{t}}{dt}\Big\vert_{t=0} = \xi$, \item $\forall x \in U:~\displaystyle\frac{d}{dt}\Big\vert_{t=0}~\tau(\phi_{t})(x)=0\,.$ \end{enumerate} \end{defn} \begin{remark} \normalfont Given $\xi$, we can always find a family $\{ \phi_{t} \}_{t \in [0, \epsilon)}$ satisfying (1) and (2) in Definition \ref{defn12}. \end{remark} \begin{remark} \normalfont Condition (3) in Definition \ref{defn12} does not depend on the choice of $\{\phi_{t}\}_{t \in [0, \epsilon)}$. \end{remark} Here $\tau$ is the \textit{Eells-Sampson Laplacian} introduced in Definition \ref{harmonicdefn}. Condition (3) in Definition \ref{defn12} can be expressed in coordinate form as: \begin{equation} \label{newharm} \frac{d}{dt}\bigg|_{t=0}\Bigg(\sum_{i,j=1}^{m} g^{ij}(x) \Bigg( \frac{\partial^{2} \phi_{t}^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{m} \Gamma^{k}_{ij}(x) \frac{\partial \phi_{t}^{\alpha}}{\partial x^{k}} + \sum_{\beta, \gamma=1}^{m} \Gamma^{\alpha}_{\beta \gamma}(\phi_{t}(x)) \frac{\partial \phi_{t}^{\beta}}{\partial x^{i}} \frac{\partial \phi_{t}^{\gamma}}{\partial x^{j}} \Bigg) \Bigg)=0, \quad 1 \leq \alpha \leq m. \end{equation} Now, for each $1 \leq i \leq m$, $\nabla_{t} \frac{\partial \phi_{t}}{\partial x^{i}} = \nabla_{i} \frac{\partial \phi_{t}}{\partial t}$.
Therefore, (\ref{newharm}) becomes \begin{equation*} \resizebox{0.97\hsize}{!}{$ \begin{split} \sum_{i,j=1}^{m} g^{ij}(x) \Bigg( \frac{\partial^{2}}{\partial x^{i} \partial x^{j}} \bigg(\frac{d \phi_{t}^{\alpha}}{dt}\bigg|_{t=0}\bigg) - \sum_{k=1}^{m} \Gamma^{k}_{ij}(x) \frac{\partial}{\partial x^{k}} \bigg(\frac{d \phi_{t}^{\alpha}}{dt}\bigg|_{t=0}\bigg) + \sum_{\beta, \gamma=1}^{m} \frac{d}{dt}\bigg|_{t=0} \bigg( \Gamma^{\alpha}_{\beta \gamma}(\phi_{t}(x)) \frac{\partial \phi_{t}^{\beta}}{\partial x^{i}} \frac{\partial \phi_{t}^{\gamma}}{\partial x^{j}} \bigg)\Bigg)=0, \end{split} $} \end{equation*} where $ 1 \leq \alpha \leq m$. Since $\xi^{\alpha}= \frac{d \phi_{t}^{\alpha}}{dt}\big|_{t=0}$, we have \begin{equation} \label{newharm1} \begin{split} \sum_{i,j=1}^{m} g^{ij}(x) \Bigg( \frac{\partial^{2} \xi^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{m} \Gamma^{k}_{ij}(x) \frac{\partial \xi^{\alpha}}{\partial x^{k}} + \sum_{\beta, \gamma=1}^{m} \bigg( \big(\Gamma^{\alpha}_{\beta \gamma}\big)'(\phi_{0}(x)) \cdot \xi \bigg) \frac{\partial \phi_{0}^{\beta}}{\partial x^{i}} \frac{\partial \phi_{0}^{\gamma}}{\partial x^{j}} \\ + \Gamma^{\alpha}_{\beta \gamma}(\phi_{0}(x)) \bigg( \frac{\partial \xi^{\beta}}{\partial x^{i}} \frac{\partial \phi_{0}^{\gamma}}{\partial x^{j}} + \frac{\partial \phi_{0}^{\beta}}{\partial x^{i}} \frac{\partial \xi^{\gamma}}{\partial x^{j}} \bigg)\Bigg)=0, \end{split} \end{equation} where $1 \leq \alpha \leq m$ and $\big(\Gamma^{\alpha}_{\beta \gamma}\big)'$ denotes the derivative of $\Gamma^{\alpha}_{\beta \gamma}$. 
Since $\phi_{0}: U \longrightarrow M$ is the inclusion map, we rewrite (\ref{newharm1}): \begin{equation} \label{newharm2} \sum_{i,j=1}^{m} g^{ij}(x) \Bigg( \frac{\partial^{2} \xi^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{m} \Gamma^{k}_{ij}(x) \frac{\partial \xi^{\alpha}}{\partial x^{k}} + \sum_{\beta, \gamma=1}^{m} \bigg( \big(\Gamma^{\alpha}_{\beta \gamma}\big)'(x) \cdot \xi \bigg) \delta_{\beta i} \delta_{\gamma j} + \Gamma^{\alpha}_{\beta \gamma}(x) \bigg( \frac{\partial \xi^{\beta}}{\partial x^{i}} \delta_{\gamma j} + \delta_{\beta i} \frac{\partial \xi^{\gamma}}{\partial x^{j}} \bigg)\Bigg)=0, \end{equation} where $1 \leq \alpha \leq m$, $\frac{\partial \phi_{0}^{\gamma}}{\partial x^{j}}= \delta_{\gamma j}$ and $\frac{\partial \phi_{0}^{\beta}}{\partial x^{i}}= \delta_{\beta i}$. \bigskip \newline We now assume that $M$ is $\mathbb{H}^{2}$ with the standard hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$, coordinatized as an open subset of $\mathbb{C}$. Rewriting (\ref{newharm2}), we get \begin{equation} \label{equ7} \begin{split} \sum_{i,j=1}^{2} \textbf{g}_{\mathbb{H}^{2}}^{ij}(x) \bigg( \frac{\partial^{2} \xi^{\alpha}}{\partial x^{i} \partial x^{j}} - \sum_{k=1}^{2} \Gamma^{k}_{ij}(x) \frac{\partial \xi^{\alpha}}{\partial x^{k}} + \sum_{\beta, \gamma=1}^{2} \bigg( \big(\Gamma^{\alpha}_{\beta \gamma}\big)'(x) \cdot \xi \bigg) \delta_{\beta i} \delta_{\gamma j} \\ + \sum_{\beta, \gamma=1}^{2} \Gamma^{\alpha}_{\beta \gamma}(x)\bigg( \frac{\partial \xi^{\beta}}{\partial x^{i}} \delta_{\gamma j}+\delta_{\beta i} \frac{\partial \xi^{\gamma}}{\partial x^{j}}\bigg) \bigg)=0, \end{split} \end{equation} where $1 \leq \alpha \leq 2$. The Christoffel symbols $\Gamma_{11}^{1}$, $\Gamma_{22}^{1}$, $\Gamma_{12}^{2}$ and $\Gamma_{21}^{2}$ for $\textbf{g}_{\mathbb{H}^{2}}$ vanish. Also $g_{\mathbb{H}^{2}}^{11}=g_{\mathbb{H}^{2}}^{22}=y^{2}$ and $g_{\mathbb{H}^{2}}^{12}=g_{\mathbb{H}^{2}}^{21}=0$. 
Hence (\ref{equ7}) simplifies to: \begin{equation} \label{equ8} \begin{split} y^{2}\frac{\partial^{2} \xi^{\alpha}}{\partial x^{2}} + y^{2}\frac{\partial^{2} \xi^{\alpha}}{\partial y^{2}}-\bigg(y^{2} \Gamma^{2}_{11} \frac{\partial \xi^{\alpha}}{\partial y}+ y^{2} \Gamma^{2}_{22} \frac{\partial \xi^{\alpha}}{\partial y}\bigg) + y^{2} \big(\big(\Gamma^{\alpha}_{11}\big)'(x) \cdot \xi \big)+ y^{2} \big( \big(\Gamma^{\alpha}_{22}\big)'(x) \cdot \xi \big) \\ + y^{2}\bigg( \Gamma^{\alpha}_{11}\bigg(\frac{\partial \xi^{1}}{\partial x}+ \frac{\partial \xi^{1}}{\partial x}\bigg)+ \Gamma^{\alpha}_{12}\bigg(0+\frac{\partial \xi^{2}}{\partial x}\bigg)+ \Gamma^{\alpha}_{21}\bigg(\frac{\partial \xi^{2}}{\partial x}+0\bigg) \bigg) \\ + y^{2} \bigg( \Gamma^{\alpha}_{12}\bigg(\frac{\partial \xi^{1}}{\partial y}+0\bigg)+ \Gamma^{\alpha}_{21}\bigg(0+\frac{\partial \xi^{1}}{\partial y}\bigg)+ \Gamma^{\alpha}_{22}\bigg(\frac{\partial \xi^{2}}{\partial y}+ \frac{\partial \xi^{2}}{\partial y}\bigg) \bigg)=0; \quad 1 \leq \alpha \leq 2. \end{split} \end{equation} The other Christoffel symbols for $\textbf{g}_{\mathbb{H}^{2}}$ are given as follows: $$\Gamma^{1}_{12}=\Gamma^{1}_{21}= \Gamma^{2}_{22}= -\frac{1}{y}, \quad \Gamma^{2}_{11}= \frac{1}{y}. $$ Substituting these values into (\ref{equ8}), we obtain the following two equations, which describe the conditions for $\xi$ to be a \textit{harmonic vector field} on $U$: \begin{equation} \label{equ9} \xi^{1}_{xx}+\xi^{1}_{yy}- \frac{2}{y} ( \xi^{2}_{x}+\xi^{1}_{y})=0 \end{equation} and \begin{equation} \label{equ10} \xi^{2}_{xx}+\xi^{2}_{yy}+ \frac{2}{y} (\xi^{1}_{x}-\xi^{2}_{y})=0. \end{equation} If the flow $(\phi_t)$ and the vector field $\xi$ are related as above, then we can describe $\phi_{t}$ to first order in terms of $\xi$ using the standard coordinates in $\mathbb{H}^{2}\subset\mathbb{C}$: \begin{equation*} \phi_{t}(p) \approx p+t \xi(p) \end{equation*} (for $p\in U$ and sufficiently small $t$).
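The Christoffel symbols used above, and equations (\ref{equ9})--(\ref{equ10}) themselves, can be verified symbolically. The following \texttt{sympy} snippet (our own sketch, not part of the argument; the sample vector fields are our own choices) recomputes the Christoffel symbols of $\textbf{g}_{\mathbb{H}^{2}}$ and tests the two equations.

```python
# Our own consistency check: recompute the Christoffel symbols of the
# hyperbolic metric g = (dx^2 + dy^2)/y^2 and test the two
# harmonic-vector-field equations on sample fields.
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)
X = [x, y]
g = sp.Matrix([[1/y**2, 0], [0, 1/y**2]])
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
    return sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, j], X[i])
                                       + sp.diff(g[l, i], X[j])
                                       - sp.diff(g[i, j], X[l]))
                           for l in range(2))/2)

# the values quoted in the text (indices 0, 1 correspond to x, y)
assert christoffel(0, 0, 1) == -1/y      # Gamma^1_{12}
assert christoffel(1, 1, 1) == -1/y      # Gamma^2_{22}
assert christoffel(1, 0, 0) == 1/y       # Gamma^2_{11}
assert christoffel(0, 0, 0) == 0         # Gamma^1_{11}

def is_harmonic(xi1, xi2):
    # the two displayed harmonic-vector-field equations
    e1 = sp.diff(xi1, x, 2) + sp.diff(xi1, y, 2) - (2/y)*(sp.diff(xi2, x) + sp.diff(xi1, y))
    e2 = sp.diff(xi2, x, 2) + sp.diff(xi2, y, 2) + (2/y)*(sp.diff(xi1, x) - sp.diff(xi2, y))
    return sp.simplify(e1) == 0 and sp.simplify(e2) == 0

assert is_harmonic(x**2 - y**2, 2*x*y)   # xi(z) = z^2, holomorphic
assert is_harmonic(-y**4/6, -x*y**3/3)   # a non-holomorphic harmonic field
assert not is_harmonic(y, 0)             # a field that is not harmonic
```

The second sample field is genuinely non-holomorphic, so the check exercises both equations nontrivially.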
We define a family of Riemannian metrics on $U$ as follows: \begin{equation} \label{equ11} t \longmapsto \rho_{t} = \phi_{t}^{\ast} \textbf{g}_{\mathbb{H}^{2}} \end{equation} More precisely, the map in (\ref{equ11}) can be viewed as \begin{equation} \label{equ12} t \longmapsto (D \phi_{t} : T_{p} U \longrightarrow T_{\phi_{t}(p)} \mathbb{H}^{2})^{\ast} \textbf{g}_{\mathbb{H}^{2}} \end{equation} To \textit{first order}, (\ref{equ12}) can be expressed as follows: \begin{equation*} t \longmapsto (\text{Id}+t\cdot d\xi : T_{p} U \longrightarrow T_{\phi_{t}(p)} \mathbb{H}^{2})^{\ast} \textbf{g}_{\mathbb{H}^{2}}, \end{equation*} where $d\xi$ is the derivative of $\xi$ (where $\xi$ is viewed as a smooth map $\mathbb{C} \longrightarrow \mathbb{C}$) at $p$, and it is an $\mathbb{R}$-linear map. Using the \textit{first-order} approximation, $\rho_{t}$ is given as: \begin{equation*} \begin{split} \rho_{t} & \approx (\text{Id}+t\cdot d\xi)^{T} (\textbf{g}_{\mathbb{H}^{2}}+t\, \textbf{g}'_{\mathbb{H}^{2}}(\xi)) (\text{Id}+t\cdot d\xi) \\ & \approx \textbf{g}_{\mathbb{H}^{2}} + t\cdot d\xi^{T} \textbf{g}_{\mathbb{H}^{2}} + t \textbf{g}'_{\mathbb{H}^{2}}(\xi) + t\cdot d\xi \textbf{g}_{\mathbb{H}^{2}} \\ & \approx \textbf{g}_{\mathbb{H}^{2}} + (t\cdot d\xi^{T} +t\cdot d\xi) \textbf{g}_{\mathbb{H}^{2}} + t \textbf{g}'_{\mathbb{H}^{2}}(\xi) \end{split} \end{equation*} In the above expression, $d\xi^{T}$ denotes the transpose of $d\xi$ when written in the local coordinates. Calculating \[ \frac{d \rho_{t}}{dt} \Big\vert_{t=0} \] gives us a section of $S^{2}(TU)$, the vector bundle of (real) symmetric bilinear forms on $TU$, and this is denoted by $\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}}$, the Lie derivative of $\textbf{g}_{\mathbb{H}^{2}}$ with respect to $\xi$. Therefore, \begin{equation} \label{equ13} \mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}} = (d\xi^{T} +d\xi) \textbf{g}_{\mathbb{H}^{2}} + \textbf{g}'_{\mathbb{H}^{2}}(\xi) \end{equation} in our preferred coordinates.
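As a cross-check of (\ref{equ13}) (our own sketch, assuming \texttt{sympy} is available; the sample field is our own choice): for the hyperbolic metric, which is a scalar multiple of the identity, the first-order formula $(d\xi^{T}+d\xi)\textbf{g}_{\mathbb{H}^{2}}+\textbf{g}'_{\mathbb{H}^{2}}(\xi)$ agrees with the standard coordinate expression $(\mathcal{L}_{\xi}g)_{ij}=\xi^{k}\partial_{k}g_{ij}+g_{kj}\partial_{i}\xi^{k}+g_{ik}\partial_{j}\xi^{k}$ of the Lie derivative.

```python
# Our own check that the first-order formula above agrees with the standard
# coordinate expression of the Lie derivative of a metric.
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)
X = [x, y]
g = sp.Matrix([[1/y**2, 0], [0, 1/y**2]])   # hyperbolic metric, a scalar times Id
xi = [x*y, x**2 - y**2]                     # sample vector field (xi^1, xi^2)

dxi = sp.Matrix(2, 2, lambda i, k: sp.diff(xi[i], X[k]))
gprime = sp.Matrix(2, 2, lambda i, j: sum(xi[k]*sp.diff(g[i, j], X[k]) for k in range(2)))

# first-order formula from the text (g acts as the scalar 1/y^2)
lhs = (dxi.T + dxi)*g[0, 0] + gprime
# standard coordinate formula for the Lie derivative of a metric
rhs = sp.Matrix(2, 2, lambda i, j:
      sum(xi[k]*sp.diff(g[i, j], X[k])
          + g[k, j]*sp.diff(xi[k], X[i])
          + g[i, k]*sp.diff(xi[k], X[j]) for k in range(2)))

assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
```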
Now, to obtain a local expression for $\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}} \in \Gamma(S^{2}(TU))$, we represent $d\xi$ by the following matrix \begin{equation*} d\xi= \begin{bmatrix} \xi^{1}_{x} & \xi^{1}_{y} \\ \xi^{2}_{x} & \xi^{2}_{y} \end{bmatrix} \end{equation*} Using the above expression for $d\xi$, the right-hand side of (\ref{equ13}) can be represented as \begin{equation*} \begin{aligned} \mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}} & = \frac{1}{y^{2}}\begin{bmatrix} 2\xi^{1}_{x} & \xi^{1}_{y}+\xi^{2}_{x} \\ \xi^{2}_{x}+\xi^{1}_{y} & 2\xi^{2}_{y} \end{bmatrix}+ \begin{bmatrix} \frac{-2\xi^{2}}{y^{3}} & 0 \\ 0 & \frac{-2\xi^{2}}{y^{3}} \end{bmatrix} \\ & = \underbrace{\frac{1}{y^{2}}\begin{bmatrix} \xi^{1}_{x} - \xi^{2}_{y} & \xi^{1}_{y} +\xi^{2}_{x} \\ \xi^{1}_{y} +\xi^{2}_{x} & \xi^{2}_{y} - \xi^{1}_{x} \end{bmatrix}}_{\mathrm{TF}} + \frac{1}{y^{2}}\begin{bmatrix} \xi^{1}_{x}+\xi^{2}_{y} & 0 \\ 0 & \xi^{1}_{x}+\xi^{2}_{y} \end{bmatrix} + \begin{bmatrix} \frac{-2\xi^{2}}{y^{3}} & 0 \\ 0 & \frac{-2\xi^{2}}{y^{3}} \end{bmatrix} \end{aligned} \end{equation*} Recall from \textbf{\cref{thespaceofhqd}} that the bundle $S^{2}(TU)$ of (real) symmetric bilinear forms on $TU$ splits into 1-dimensional real vector subbundle spanned by the everywhere nonzero section $\textbf{g}_{\mathbb{H}^{2}}$ and the image of the embedding (recall (\ref{quadmap})) $$ \psi: \mathrm{hom}_{\mathbb{C}} (TU \otimes_{\mathbb{C}} TU, \mathbb{C}) \longrightarrow S^{2}(TU),$$ where $\psi(q)$ is the real part of $q=fdz^{2}$. In particular, $\psi\big((\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}\big)$ is the real part of $(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}$. Using the above splitting, it is straightforward to check that the trace-free component of $\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}}$ is $\psi\big((\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}\big)=\psi(fdz^2)$ where $f(z) = \mathrm{TF}_{11}- \iota \mathrm{TF}_{12}$.
Notice that \begin{equation} \label{similarwolpertequation} \overline{f(z)} = \mathrm{TF}_{11}+ \iota \mathrm{TF}_{12}= \frac{2}{y^{2}} \frac{\partial \xi}{\partial \bar{z}} = \frac{-8}{(z-\bar{z})^{2}} \frac{\partial \xi}{\partial \bar{z}}. \end{equation} Furthermore, (\ref{similarwolpertequation}) is equivalent (up to a constant factor) to the following \textit{potential equation} (see Appendix \ref{genesis}) described by S. Wolpert in his paper \cite{wolpert} \begin{equation} \label{wolpertequation} \overline{f(z)} = \frac{1}{(z-\bar{z})^{2}} \frac{\partial \xi}{\partial \bar{z}}. \end{equation} Moreover, (\ref{equ9}) and (\ref{equ10}) are precisely the conditions that the corresponding quadratic differential $(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}$ is holomorphic, i.e., $f$ is holomorphic. Therefore, we can summarize our discussion as follows: \begin{prop} \label{thm2} $\xi$ is a harmonic vector field on $U$ if and only if the quadratic differential $(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}$ associated with it is holomorphic. In the standard coordinates, $(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)} = fdz^{2}$ where $\overline{f(z)} = \frac{-8}{(z-\bar{z})^{2}} \frac{\partial \xi}{\partial \bar{z}}$. \end{prop} \begin{remark} \normalfont Proposition \ref{thm2} is an infinitesimal version of Lemma 1.1 in \cite{jost1} and Example \ref{harmimplieshol}. In fact, the statement in \cite{jost1} is more general since it applies to harmonic maps between oriented 2-dimensional Riemannian manifolds. \end{remark} \begin{coro} \label{rem1} Every holomorphic vector field on $U$ is harmonic. \end{coro} \begin{proof} Let $\xi$ be a holomorphic vector field on $U$. Then the Cauchy-Riemann equations $\xi^{1}_{x}=\xi^{2}_{y}$ and $\xi^{1}_{y}=-\xi^{2}_{x}$ imply that $\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}}$ in (\ref{equ13}) is a multiple of $\textbf{g}_{\mathbb{H}^{2}}$, and therefore its trace-free part vanishes, i.e., $(\mathcal{L}_{\xi}\textbf{g}_{\mathbb{H}^{2}})^{(2, 0)}=0$.
\hfill \qedsymbol \end{proof} \subsubsection{Constructing harmonic vector fields on $U \subseteq \mathbb{H}^{2}$} \label{harmonicexplicit} \begin{theorem} \label{sess} Let $\mathcal{HOL}$ denote the sheaf of holomorphic vector fields on $\mathbb{H}^{2}$, $\mathcal{HARM}$ denote the sheaf of harmonic vector fields on $\mathbb{H}^{2}$ and $\mathcal{HQD}$ denote the sheaf of holomorphic quadratic differentials on $\mathbb{H}^{2}$. Then the following sequence of sheaves \begin{equation} \label{shortexact} \xymatrix{ \mathcal{HOL} \ar[r]^-{\alpha} & \mathcal{HARM} \ar[r]^-{\beta} & \mathcal{HQD} } \end{equation} is a short exact sequence of sheaves on $\mathbb{H}^{2}$. In (\ref{shortexact}), $\alpha$ is the inclusion map and $\beta$ is given by the formula in Proposition \ref{thm2}. \end{theorem} Before we prove Theorem \ref{sess}, we discuss the following result by S. Wolpert \cite[Section 2]{wolpert}: let $\eta$ be the vector field on $\mathbb{H}^{2}$ given by $\eta(z)=(1,0)$ everywhere. Given a holomorphic quadratic differential $q=f(z)dz^{2}$ on $\mathbb{H}^{2}$, there exists a global solution $\xi$ of the potential equation $\frac{\partial \xi}{\partial \bar{z}} = (z-\bar{z})^{2} \overline{f(z)}$ (see (\ref{wolpertequation})) and an explicit formula for $\xi$ is given as: \begin{equation} \label{wolpertformula} \xi(z)= \Bigg( \overline{\int_{w}^{z} (\bar{z}-\zeta)^{2} f(\zeta) d\zeta} \Bigg) \eta(z), \end{equation} where $w \in \mathbb{H}^{2}$ is fixed and $\zeta, z \in \mathbb{H}^{2}$. The formula for $\xi$ in (\ref{wolpertformula}) is path-independent since the integrand is holomorphic. \bigskip \newline \paragraph{\textit{Proof of Theorem \ref{sess}:}} Exactness at the term $\mathcal{HARM}$ in (\ref{shortexact}) follows from Proposition \ref{thm2} and Corollary \ref{rem1}. Now, let $q=f(z)dz^{2}$ be defined in a neighborhood $V$ of $w \in \mathbb{H}^{2}$, where $w \in \mathbb{H}^{2}$ is fixed.
To prove the local surjectivity of $\beta$, we have to produce a harmonic vector field $\xi$ whose associated holomorphic quadratic differential is $q$ on a possibly smaller neighborhood $U \subset V$ of $w$. It is clear that (\ref{wolpertformula}) gives the required solution for $\xi$ up to a constant factor. \hfill \qedsymbol \begin{coro} \label{coroharmonic} If a sequence of harmonic vector fields defined on an open set $U$ in $\mathbb{H}^{2}$ converges uniformly on compact subsets of $U$, and if all of them determine the same holomorphic quadratic differential $q$ on $U$, then the limit vector field is again harmonic and still determines the same holomorphic quadratic differential $q$ on $U$. \end{coro} We will now describe a more pedestrian approach to finding harmonic vector fields with prescribed holomorphic quadratic differential. This has certain advantages over Wolpert's formula, as we will see in \textbf{\cref{extendharmonic}}. First, we give an explicit expression for a harmonic vector field on $U \subset \mathbb{H}^{2}$ whose associated holomorphic quadratic differential $q$ is given. \begin{lemma} \label{lemma2} Let $U$ be an open subset of $\mathbb{H}^{2}$ with the usual hyperbolic metric. Let $\eta$ be the vector field on $U$ given by $\eta(z)=(1,0)$ everywhere: vectors parallel to the real axis, pointing left to right, of Euclidean length $1$. Let $f$ be a holomorphic function on $U$. The quadratic differential $q$ associated to the vector field $\xi=y^{n}f \eta$ is represented as: \begin{equation} \label{equ14} q=-n \iota y^{n-3} \overline{f} dz^{2}, \quad n \geq 3. \end{equation} \end{lemma} \begin{proof} We use the recipe in Proposition \ref{thm2}. It suffices to carry out the computation for $n=3$; the general case is identical.
From (\ref{similarwolpertequation}), we have \begin{equation*} \begin{split} \frac{\partial \xi}{\partial \bar{z}} & = \frac{\partial }{\partial \bar{z}} (y^{3}f) = \frac{\partial }{\partial \bar{z}} \bigg( \frac{(z- \bar{z})^{3}}{-8 \iota} f \bigg) = \frac{3 (z-\bar{z})^{2}}{8 \iota} f = \frac{3 \iota (z-\bar{z})^{2}}{-8} f = \frac{(z-\bar{z})^{2}}{-8} (\overline{-3\iota \bar{f}}), \end{split} \end{equation*} so that $q= -3\iota\bar{f}dz^{2}$. \hfill \qedsymbol \end{proof} \bigskip Using Lemma \ref{lemma2}, we can find an explicit expression for a harmonic vector field $\xi$ on $\mathbb{H}^{2}$ whose associated holomorphic quadratic differential is $$q= z^{n} dz^{2} (n \geq 0)$$ using the obvious expression $z^{n} = (\overline{z}+2 \iota y)^{n} = \sum_{k=0}^{n} \binom{n}{k} (2 \iota y)^{n-k} \bar{z}^{k}$. \begin{lemma} \label{lemma3} An explicit expression for a harmonic vector field $\xi$ on $ \mathbb{H}^{2}$ whose associated holomorphic quadratic differential is $q= f(z) dz^{2}$, where $f(z) = z^{n}$, for some $n \geq 0$ (a holomorphic function on $ \mathbb{H}^{2}$), is given as: \begin{equation*} \begin{split} \xi(z) & = \Bigg( \sum_{k=0}^{n} \binom{n}{k} \frac{(-2)(-2 \iota)^{n-k-1}}{n-k+3} y^{n-k+3}z^{k} \Bigg) \eta(z) \\ & = \Bigg( \sum_{k=0}^{n} \binom{n}{k} (-2)(-2 \iota)^{n-k-1} \Big(\int_{0}^{y} \zeta^{n-k+2} d \zeta \Big) z^{k} \Bigg) \eta(z) \\ & = \Bigg( \int_{0}^{y} \bigg( \sum_{k=0}^{n} \binom{n}{k} (-2)(-2 \iota)^{n-k-1} \zeta^{n-k+2} z^{k} \bigg) d \zeta \Bigg) \eta(z) \\ & = \Bigg( \int_{0}^{\Im(z)} -\iota \zeta^{2} (z-2 \iota \zeta)^{n} d \zeta \Bigg) \eta(z) \end{split} \end{equation*} \end{lemma} \begin{lemma} \label{lemma4} An explicit expression for a harmonic vector field $\xi$ on $U \subset \mathbb{H}^{2}$ whose associated holomorphic quadratic differential is $q= f(z) dz^{2}$, where $f(z) = (z-a)^{n} (n \geq 0)$ is a holomorphic function on $U \subset \mathbb{H}^{2}$ and $a \in \mathbb{H}^{2}$ fixed, is given as: \begin{equation} 
\label{explicit} \begin{split} \xi(z) & = \Bigg( \sum_{k=0}^{n} \binom{n}{k} (-\bar{a})^{n-k} \bigg( \int_{0}^{\Im(z)} -\iota \zeta^{2} (z-2 \iota \zeta)^{k} d \zeta \bigg) \Bigg) \eta(z) \\ & = \Bigg( \int_{0}^{\Im(z)} -\iota \zeta^{2} \bigg( \sum_{k=0}^{n} \binom{n}{k} (-\bar{a})^{n-k} (z-2 \iota \zeta)^{k} \bigg) d \zeta \Bigg) \eta(z) \\ & = \Bigg( \int_{0}^{\Im(z)} -\iota \zeta^{2} \big(z-\bar{a}-2 \iota \zeta \big)^{n} d \zeta \Bigg) \eta(z) \\ & = \Bigg( \int_{0}^{\Im(z)} -\iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z). \end{split} \end{equation} \end{lemma} \bigskip \paragraph{\textit{Another Proof of Theorem \ref{sess}:}} Exactness at the term $\mathcal{HARM}$ in (\ref{shortexact}) follows from Theorem \ref{thm2} and Corollary \ref{rem1}. Let $q=f(z)dz^{2}$ be defined in a neighborhood $V$ of $a \in \mathbb{H}^{2}$, where $a \in \mathbb{H}^{2}$ is fixed. To prove the local surjectivity of $\beta$ we have to get a solution for a harmonic vector field whose associated holomorphic quadratic differential is $q$ in a possibly smaller neighborhood $U \subset V$ of $a$. Note that we cannot use the expression in (\ref{explicit}) directly: as $\zeta$ runs from $0$ to $\Im(z)$, $f(\bar{z}+2 \iota \zeta)$ does not even make sense when $\zeta=0$, since $\bar{z}$ lies in the lower half plane. We try the following \begin{equation} \label{explicit1} \xi_{c}(z)= \Bigg( \int_{c}^{\Im(z)} -\iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z), \end{equation} where $c$ is any positive real number. \begin{figure}[h] \centering \includegraphics[height=4cm]{pic6.jpg} \caption{The expression for $\xi_{c}(\iota)$ defined along the hyperbolic line joining $\iota$ and $c$} \label{pic6} \end{figure} But there is a caveat: as $\zeta$ runs from $c$ to $\Im(z)$, $f(\bar{z}+2 \iota \zeta)$ may not be defined, since we are assuming that $f$ is defined only on $V \subset \mathbb{H}^{2}$.
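As an aside, the closed form (\ref{explicit}) can be checked symbolically where it is defined: writing $z = x + \iota y$ and using the Wirtinger derivative $\frac{\partial}{\partial \bar{z}} = \frac{1}{2}\big(\frac{\partial}{\partial x} + \iota \frac{\partial}{\partial y}\big)$, one verifies that $\xi$ solves the potential equation $\frac{\partial \xi}{\partial \bar{z}} = \frac{(z-\bar{z})^{2}}{-8} \overline{f(z)}$ of (\ref{similarwolpertequation}). A short sympy sketch; the base point $a = \tfrac{1}{2} + \iota$ and the exponents $n \leq 2$ are arbitrary sample choices, not part of the argument:

```python
from sympy import symbols, I, Rational, integrate, diff, expand

x, y, t = symbols('x y t', real=True)
z, zbar = x + I*y, x - I*y
abar = Rational(1, 2) - I          # conjugate of the sample base point a = 1/2 + i

for n in range(0, 3):              # q = (z - a)^n dz^2 for a few small exponents
    # closed form: xi(z) = \int_0^{Im z} -i t^2 (z - abar - 2 i t)^n dt
    xi = integrate(-I * t**2 * (z - abar - 2*I*t)**n, (t, 0, y))
    dzbar_xi = (diff(xi, x) + I*diff(xi, y)) / 2       # Wirtinger d/dzbar
    rhs = (z - zbar)**2 / (-8) * (zbar - abar)**n      # (z-zbar)^2/(-8) * conj(f(z))
    assert expand(dzbar_xi - rhs) == 0
```

Only the boundary term of the Leibniz rule contributes, which is why the choice of lower limit is the delicate point in what follows.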
Making the best possible choice of $c$, namely $c = \Im(a)$, we get the required solution \begin{equation} \label{rightsol} \xi_{\Im(a)}(z)= \Bigg( \int_{\Im(z)}^{\Im(a)} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z), \end{equation} defined on $$U=\{ z \in V | \bar{z}+2 \iota t \in V \hspace{2pt} \mathrm{for} \hspace{1pt} \mathrm{all} \hspace{2pt} t \in [\Im(z), \Im(a)] \}.$$ Evaluating the expression in (\ref{rightsol}) at $a$, we get \begin{equation*} \begin{split} \xi_{\Im(a)}(a) & = \Bigg( \int_{\Im(a)}^{\Im(a)} \iota \zeta^{2} \overline{f(\bar{a}+2 \iota \zeta)} d \zeta \Bigg) \eta(a) \\ & = 0. \end{split} \end{equation*}\hfill \qedsymbol \begin{remark} \label{wecanextend} \normalfont Let $q$ be a quadratic differential which is defined everywhere on $\mathbb{H}^{2}$ and is bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$, i.e., $$\Arrowvert q \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} = \arrowvert f(z) \arrowvert \Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} \leq D,$$ where $\Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}}= \Im(z)^{2}$ and $D$ is a positive real number. Note that $\xi_{c}$ in (\ref{explicit1}) has a continuous extension to $\mathbb{R}$. In other words, for $z$ such that $\Im(z)=0$, we define \begin{equation} \label{limitimprop} \xi_{c}(z) = \lim_{\epsilon \rightarrow 0}\Bigg( \int_{\epsilon}^{c} \iota \zeta^{2} \overline{f(z+2 \iota \zeta)} d \zeta \Bigg) \eta(z). \end{equation} To prove that the above limit exists, we use the Cauchy criterion of convergence of improper integrals: \begin{equation*} \begin{split} \bigg | \int_{\epsilon_{1}}^{\epsilon_{2}} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg| & \leq \int_{\epsilon_{1}}^{\epsilon_{2}} \zeta^{2} \frac{D}{4 \zeta^{2}} d\zeta \\ & = \frac{D}{4} (\epsilon_{2} -\epsilon_{1}). \end{split} \end{equation*} From the above estimate, it is clear that the limit in (\ref{limitimprop}) exists.
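This Cauchy-criterion estimate can also be probed numerically. A minimal Python sketch with the sample choice $f(w) = 1/w^{2}$ (holomorphic on $\mathbb{H}^{2}$, with $|f(w)| = 1/|w|^{2} \leq 1/\Im(w)^{2}$, so $D = 1$): truncations of the improper integral at two different cut-offs $\epsilon_{1} < \epsilon_{2}$ differ by at most $\frac{D}{4}(\epsilon_{2}-\epsilon_{1})$.

```python
def f(w):
    # sample bounded differential q = f dz^2: |f(w)| = 1/|w|^2 <= 1/Im(w)^2, so D = 1
    return 1.0 / (w * w)

def integrand(z, t):
    # i t^2 * conj(f(z + 2 i t)) at a boundary point z (Im z = 0)
    return 1j * t * t * f(z + 2j * t).conjugate()

def chunk(z, a, b, n=20000):
    # composite midpoint rule for the integral of the integrand over [a, b]
    h = (b - a) / n
    return sum(integrand(z, a + (k + 0.5) * h) for k in range(n)) * h

D, z = 1.0, 1.0
e1, e2 = 1e-4, 1e-2
# the truncations at e1 and e2 differ by the integral over [e1, e2],
# which the estimate bounds by (D/4)(e2 - e1)
gap = abs(chunk(z, e1, e2))
assert gap <= D / 4 * (e2 - e1) + 1e-9
```

The observed gap is far below the bound here, since $|f|$ decays like $1/(2\zeta)^{2}$ near the boundary.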
\end{remark} \begin{theorem} \label{thmglobharmvf} Let $q=f(z)dz^{2}$ be a holomorphic quadratic differential on $\mathbb{H}^{2}$. Suppose that $q$ satisfies the following boundedness conditions \begin{enumerate} \item $q$ is bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$, i.e. \begin{equation} \label{equ25} \Arrowvert q \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} = \arrowvert f(z) \arrowvert \Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} \leq D, \end{equation} where $\Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}}= \Im(z)^{2}$ and $D$ is a positive real number. \item The first and second covariant derivative of $q$ w.r.t the linear connection $\nabla$ on $T^{\ast} \mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast} \mathbb{H}^{2}$, are bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$. \end{enumerate} Then there exists a harmonic vector field $\xi^{\mathrm{reg}}$ on $\mathbb{H}^{2}$ such that $\beta(\xi^{\mathrm{reg}})=q$, where $\beta$ is introduced in Theorem \ref{sess}. An explicit formula is \begin{equation} \label{equ62} \xi^{\mathrm{reg}}(z) = \lim_{c \to \infty} \Bigg( \xi_{c}(z) - \bigg( \xi_{c}(\iota) + \frac{\partial \xi_{c}}{\partial z}\bigg|_{z=\iota}\cdot (z-\iota)\bigg) \Bigg), \end{equation} where $$\xi_{c}(z)= \Bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z)$$ and $c$ is a positive real number. \end{theorem} \begin{remark} \normalfont We have introduced a simple terminology $\mathrm{reg}$ short for ``regularisation'' to characterise our required harmonic vector field. 
\end{remark} \begin{remark} \normalfont The boundedness conditions on $q$ in the above theorem are satisfied if $q$ is invariant under the action of a discrete cocompact subgroup $\Gamma$ of $\mathrm{PSL}(2, \mathbb{R})$, i.e., $$ f(\gamma(z)) \gamma'(z)^{2}=f(z), \quad z \in \mathbb{H}^{2}, \quad \forall \gamma \in \Gamma.$$ \end{remark} \begin{remark} \label{conditioncov1} \normalfont In Theorem \ref{thmglobharmvf}, $\nabla$ is a first order linear differential operator \begin{equation*} \label{equ91} \mathscr{A}^{0}(\mathbb{H}^{2}, T^{\ast}\mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast}\mathbb{H}^{2}) \longrightarrow \mathscr{A}^{1}(\mathbb{H}^{2}, T^{\ast}\mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast}\mathbb{H}^{2}), \end{equation*} where on the L.H.S. we have sections of the vector bundle $T^{\ast}\mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast}\mathbb{H}^{2} \longrightarrow \mathbb{H}^{2}$ and on the R.H.S we have the space of $T^{\ast}\mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast}\mathbb{H}^{2}$-valued 1-forms, i.e., sections of the vector bundle $\mathrm{\mathbf{hom}}(T\mathbb{H}^{2}, T^{\ast}\mathbb{H}^{2} \otimes_{\mathbb{C}} T^{\ast}\mathbb{H}^{2}).$ Recall that the Levi-Civita connection $\nabla$ of the hyperbolic plane can be extended complex linearly to the complexification of the tangent and cotangent bundles - $(T\mathbb{H}^{2})^{c}$ and $(T^{\ast}\mathbb{H}^{2})^{c}$ - of the plane and their tensor products, and then decomposed as \begin{equation*} \nabla = \nabla_{\frac{\partial}{\partial z}} \oplus \nabla_{\frac{\partial}{\partial \bar{z}}}. \end{equation*} Recall the discussion just before Example \ref{harmexamplenice}. We view $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \bar{z}}$ as sections of the complexified tangent bundle $(T\mathbb{H}^{2})^{c}$, and $dz$ and $d \bar{z}$ as sections of the complexified cotangent bundle $(T^{\ast}\mathbb{H}^{2})^{c}$. 
Furthermore, $dz\big( \frac{\partial}{\partial z}\big) = 1$ and $dz \big( \frac{\partial}{\partial \bar{z}}\big) =0$. For example, applied to functions $f: \mathbb{H}^{2} \longrightarrow \mathbb{C}$, we have $\nabla_{\frac{\partial}{\partial z}} f = f_{z} dz$ and $\nabla_{\frac{\partial}{\partial \bar{z}}} f = f_{\bar{z}} d \bar{z}$. Now, for the hyperbolic plane with the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}=\rho^{2}dz d\bar{z}$, where $\rho(z)=1 / \Im(z)$, we get the following: \begin{equation} \label{attempt2} \begin{split} \nabla \frac{\partial}{\partial z} = \frac{2 \rho_{z}}{\rho} dz \otimes \frac{\partial}{\partial z}, \quad \nabla dz = dz \otimes \nabla_{\frac{\partial}{\partial z}} dz = - \frac{2 \rho_{z}}{\rho} dz \otimes dz \\ \nabla_{ \frac{\partial}{\partial \bar{z}}} \frac{\partial}{\partial z}=0, \quad \nabla_{\frac{\partial}{\partial z}} \frac{\partial}{\partial z} = \frac{2 \rho_{z}}{\rho} \frac{\partial}{\partial z}. \end{split} \end{equation} Equations in (\ref{attempt2}) are taken from \cite{mlee}. To get boundedness conditions on $f_{z}$ and $f_{zz}$ from boundedness conditions on $q$ and on the first and second covariant derivative of $q=fdz^{2}$, i.e., \begin{equation} \begin{split} \lVert q \lVert_{\textbf{g}_{\mathbb{H}^{2}}} & \leq D \\ \lVert \nabla q \lVert_{\textbf{g}_{\mathbb{H}^{2}}} & \leq D_{1} \\ \lVert \nabla^{2} q \lVert_{\textbf{g}_{\mathbb{H}^{2}}} & \leq D_{2}, \end{split} \end{equation} $D_{1}$ and $D_{2}$ are positive real numbers, we need to compute $\nabla q$ and $\nabla^{2} q$. 
Consider the first covariant derivative of $q$ w.r.t $\nabla$: \begin{equation} \label{firstcovariant} \begin{split} \nabla f dz^{2} & = \nabla (f) dz^{2} + f \nabla(dz \otimes dz) \\ & = f_{z} dz^{3} + f_{\bar{z}} d \bar{z} \otimes dz^{2} + f (\nabla dz \otimes dz + dz \otimes \nabla dz) \\ & = f_{z} dz^{3} + 0 + f \cdot -\frac{4 \rho_{z}}{\rho} dz^{3}, \end{split} \end{equation} where the last equality follows from (\ref{attempt2}) and the fact that $f$ is a holomorphic function. From (\ref{firstcovariant}), we have $$ \lVert f_{z} dz^{3} + f \cdot - \frac{4 \rho_{z}}{\rho} dz^{3} \rVert_{\textbf{g}_{\mathbb{H}^{2}}} \leq D_{1}$$ which implies \begin{equation} \label{firstcov} |f_{z}| \leq \frac{K_{1}}{\Im(z)^{3}}, \end{equation} where $K_{1}$ is a positive constant that depends upon the bounds for $f$. Now, using (\ref{attempt2}) and (\ref{firstcovariant}) consider the second covariant derivative of $q$ w.r.t $\nabla$: \begin{equation} \label{secondcov} \begin{aligned} \nabla \big( f_{z} dz^{3} + f \cdot -\frac{4 \rho_{z}}{\rho} dz^{3}\big) & = \nabla f_{z} dz^{3} + f_{z} \nabla(dz \otimes dz \otimes dz) + \nabla f \cdot -\frac{4 \rho_{z}}{\rho} dz^{3} \\ & \qquad - f \cdot \nabla \bigg(\frac{4 \rho_{z}}{\rho} \bigg) dz^{3} + f \cdot -\frac{4 \rho_{z}}{\rho} \nabla(dz \otimes dz \otimes dz) \\ & = f_{zz}dz^{4}+f_{z\bar{z}}d\bar{z} \otimes dz^{3}+f_{z} \cdot - \frac{6 \rho_{z}}{\rho} dz^{4} + f_{z} \cdot -\frac{4 \rho_{z}}{\rho} dz^{4} \\ & \qquad + f_{\bar{z}} \cdot -\frac{4 \rho_{z}}{\rho} d \bar{z} \otimes dz^{3} -f \cdot \rho^{2} dz^{4} + f \cdot \frac{4 \rho_{z}}{\rho} \cdot \frac{6 \rho_{z}}{\rho} dz^{4} \\ & = f_{zz}dz^{4} + 0 + f_{z} \cdot -\frac{6 \rho_{z}}{\rho} dz^{4} + f_{z} \cdot -\frac{4 \rho_{z}}{\rho} dz^{4} + 0 \\ & \qquad -f \cdot \rho^{2} dz^{4} + f \cdot \frac{24 \rho_{z}^{2}}{\rho^{2}} dz^{4} \\ & = f_{zz}dz^{4} + f_{z} \cdot -\frac{10 \rho_{z}}{\rho} dz^{4} -f \cdot \rho^{2} dz^{4} + f \cdot \frac{24 \rho_{z}^{2}}{\rho^{2}} dz^{4}. 
\end{aligned} \end{equation} From (\ref{secondcov}), the second covariant derivative of $q$ being bounded in the hyperbolic metric implies the following: \begin{equation} \label{secondcov1} |f_{zz}| \leq \frac{K_{2}}{\Im(z)^{4}}, \end{equation} where $K_{2}$ is a positive constant that depends upon the bounds for $f$ and $f_{z}$. \end{remark} Before we begin with the proof of Theorem \ref{thmglobharmvf} which establishes the global surjectivity of $\beta$ in Theorem \ref{sess}, we discuss the following abortive attempts to get a (global) harmonic vector field on the whole upper half plane $\mathbb{H}^{2}$. \begin{remark} \label{firstattempt} \normalfont Assume that $q$ is bounded in the hyperbolic metric, i.e. \begin{equation*} \Arrowvert q \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} = \arrowvert f(z) \arrowvert \Arrowvert dz^{2} \Arrowvert_{\textbf{g}_{\mathbb{H}^{2}}} \leq D, \end{equation*} where $D$ is a positive real number. We try to define \begin{equation} \label{equ19} \xi(z)= \Bigg( \int_{\Im(z)}^{\infty} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z) = \lim_{c \to \infty} \Bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z), \end{equation} hoping that the above limit exists. In this case, we say that the improper integral in (\ref{equ19}) \textit{converges} and its value is that of the limit. From the above mentioned boundedness condition on $q$ we get the following \begin{equation} \label{equ26} \arrowvert f(z) \arrowvert \leq \frac{D}{\Im(z)^{2}}, \forall z \in \mathbb{H}^{2}. 
\end{equation} From the Cauchy criterion of convergence of improper integrals, the improper integral $$\int_{\Im(z)}^{\infty} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta$$ in (\ref{equ19}) converges iff for every $\epsilon > 0$ there is a $K \geq \Im(z)$ so that for all $A, B \geq K$ we have $$\bigg \arrowvert \int_{A}^{B} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg \arrowvert < \epsilon.$$ Using (\ref{equ26}), we have \begin{equation} \label{inequalities1} \begin{split} \bigg \arrowvert \int_{A}^{B} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg \arrowvert & \leq \int_{A}^{B} \zeta^{2} \big \arrowvert \overline{f(\bar{z}+2 \iota \zeta)} \big \arrowvert d \zeta \\ & \leq \int_{A}^{B} \frac{D \zeta^{2}}{(2 \zeta - \Im(z))^{2}} d \zeta \end{split} \end{equation} Now, we assume that $A \geq \Im(z)$. Then the denominator $(2 \zeta - \Im(z))^{2}$ in the second inequality in (\ref{inequalities1}) is at least as big as $\zeta^{2}$. Rewriting (\ref{inequalities1}), we get \begin{equation*} \begin{split} \bigg \arrowvert \int_{A}^{B} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg \arrowvert & \leq \int_{A}^{B} \frac{D \zeta^{2}}{ \zeta^{2}} d \zeta \\ & = \int_{A}^{B} D d \zeta \\ & = D(B-A). \end{split} \end{equation*} From the above estimate, we cannot conclude that the limit in (\ref{equ19}) exists. \end{remark} \begin{remark} \normalfont Assume that both $q$ and its first covariant derivative w.r.t $\nabla$ are bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$. From Remark \ref{conditioncov1} and (\ref{firstcov}), the covariant derivative of $q$ (w.r.t $\nabla$) being bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$ implies the following: \begin{equation} \label{equ27} \arrowvert f_{z} \arrowvert \leq \frac{K_{1}}{\Im(z)^{3}}, \end{equation} where $f_{z}$ denotes the first complex derivative of $f$, $f$ being a holomorphic function on $\mathbb{H}^{2}$.
We try to define \begin{equation} \label{equ20} \xi(z)=\lim_{c \to \infty} (\xi_{c}(z)-\xi_{c}(\iota)) \end{equation} hoping that the above limit exists. We view $\xi_{c}(\iota)$ as the zeroth order Taylor approximation of $\xi_{c}(z)$ at $z=\iota$. Moreover, $\xi_{c}(\iota)$ is a constant vector field, hence a holomorphic vector field, depending on $c$. Note that the expression in (\ref{equ20}) resembles the idea of Weierstrass in constructing the Weierstrass $\wp$-function. Naively speaking, we want to compare the integral along a vertical hyperbolic line $\mathcal{L}_{1}$ joining some point $z$ to $\bar{z}+2 \iota c$ with the integral along a vertical hyperbolic line $\mathcal{L}_{2}$ joining $\iota$ to $(2c-1)\iota$. In fact, $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are asymptotic lines in the hyperbolic plane $\mathbb{H}^{2}$. Let's first spell out the expression $\xi_{c}(z)-\xi_{c}(\iota)$ on the R.H.S. of (\ref{equ20}). \paragraph{Case I: $2c \geq 1 \geq \Im(z)$} \begin{align} \xi_{c}(z)-\xi_{c}(\iota) & = \bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta-\int_{1}^{c} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)}d \zeta \bigg) \eta(z) \nonumber \\ & = \bigg( \int_{\Im(z)}^{1} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta + \int_{1}^{c} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta -\int_{1}^{c} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta \bigg) \eta(z) \nonumber \\ & = \bigg( \underbrace{\int_{1}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg)d \zeta}_{I_{c}}- \int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta \bigg) \eta(z) \label{esti} \end{align} \paragraph{Case II: $2c \geq \Im(z) \geq 1$} \begin{align} \xi_{c}(z)-\xi_{c}(\iota) & = \bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta-\int_{1}^{c} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)}d
\zeta \bigg) \eta(z) \nonumber \\ & = \bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta-\int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta -\int_{\Im(z)}^{c} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta \bigg) \eta(z) \nonumber \\ & = \bigg( \underbrace{\int_{\Im(z)}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg)d \zeta}_{II_{c}}- \int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta \bigg) \eta(z) \label{equ21} \end{align} Since $ \int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta$ and $ \int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta$ on the R.H.S of (\ref{esti}) and (\ref{equ21}) are independent of $c$ we only work with $I_{c}$ and $II_{c}$ to determine whether the limit in (\ref{equ20}) exists or not. Now if $A, B \geq c$, we have \begin{equation} \label{attemptnumber3} I_{B}-I_{A} = \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg)d \zeta, \end{equation} and \begin{equation} \label{attemptnumber3a} II_{B}-II_{A} = \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg)d \zeta. \end{equation} Using (\ref{equ27}), we have the following estimate for (\ref{attemptnumber3}) and (\ref{attemptnumber3a}) \begin{equation} \label{inequalities2} \begin{aligned} \bigg \arrowvert \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg) d \zeta \bigg \arrowvert & \leq \int_{A}^{B} \zeta^{2} \cdot \frac{K_{1}}{(2 \zeta - \Im(z))^{3}} \cdot |\bar{z} - \bar{\iota} | d \zeta, \end{aligned} \end{equation} where the inequality in (\ref{inequalities2}) follows from (\ref{equ27}). Now, we assume that $A \geq \Im(z)$. 
Then the denominator $(2 \zeta - \Im(z))^{3}$ in the inequality in (\ref{inequalities2}) is at least as big as $\zeta^{3}$. Rewriting (\ref{inequalities2}), we get \begin{equation*} \begin{aligned} \bigg \arrowvert \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} \bigg) d \zeta \bigg \arrowvert & \leq |\bar{z} - \bar{\iota} | \int_{A}^{B} \zeta^{2} \cdot \frac{K_{1}}{ \zeta^{3}} d \zeta \\ & = |\bar{z} - \bar{\iota} | \cdot K_{1}\log \bigg(\frac{B}{A} \bigg). \end{aligned} \end{equation*} Observe that the attempt in (\ref{equ20}) is much better than the attempt in (\ref{equ19}), but it still does not serve our purpose. \end{remark} \paragraph{\textit{Proof of Theorem \ref{thmglobharmvf}}:} Recall (\ref{secondcov1}). To begin with, we note that the second boundedness condition on $q$ can be translated as follows: \begin{equation} \label{equ63} \arrowvert f_{zz} \arrowvert \leq \frac{K_{2}}{\Im(z)^{4}}, \quad \forall z \in \mathbb{H}^{2}, \end{equation} where $f_{zz}$ denotes the second complex derivative of $f$, $f$ being a holomorphic function on $\mathbb{H}^{2}$. To prove that $\xi^{\mathrm{reg}}(z)$ converges we use the Cauchy criterion of convergence of improper integrals which has been stated in Remark \ref{firstattempt}. We notice that $$\xi_{c}(\iota)+\frac{\partial \xi_{c}}{\partial z}\bigg|_{z=\iota} \cdot (z- \iota)$$ in (\ref{equ62}) is the \textit{holomorphic part} of the first order Taylor approximation of $\xi_{c}(z)$ at $z=\iota$. Let's denote it by $T^{\mathrm{hol}}_{1, \iota}(\xi_{c}(z))$. Also, $\frac{\partial \xi_{c}}{\partial z}\big|_{z=\iota}$ is simply a complex number, because $\xi'_{c}(z)|_{z=\iota}$ as an $\mathbb{R}$-linear map from $\mathbb{C}$ to $\mathbb{C}$ can be written uniquely as a sum of a $\mathbb{C}$-linear map and a $\mathbb{C}$-conjugate linear map.
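The holomorphy fact that drives the Leibniz-rule computation below, namely that $\overline{f(\bar{z}+2 \iota \zeta)}$ depends holomorphically on $z$, so that its $\frac{\partial}{\partial \bar{z}}$-derivative vanishes, can itself be checked symbolically. A small sympy sketch with the sample choice $f(w) = (w-a)^{3}$, $a = 2 + 3\iota$ (an arbitrary illustration, not part of the proof):

```python
from sympy import symbols, I, diff, expand

x, y, t = symbols('x y t', real=True)
z = x + I*y

# For f(w) = (w - a)^3, conjugation turns f(zbar + 2 i t) into a polynomial
# in z alone: conj(f(zbar + 2 i t)) = (z - conj(a) - 2 i t)^3.
abar = 2 - 3*I                      # conj of the sample point a = 2 + 3i
g = (z - abar - 2*I*t)**3           # conj(f(zbar + 2 i t))

dzbar_g = (diff(g, x) + I*diff(g, y)) / 2   # Wirtinger d/dzbar
assert expand(dzbar_g) == 0
```

The same computation shows that the $\frac{\partial}{\partial z}$-derivative is the expected $3(z-\bar{a}-2\iota t)^{2}$.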
Let's denote the integrand $ \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}$ in the expression of $\xi_{c}(z)$ by $F(\zeta, z)$. As both $F(\zeta, z)$ and its partial derivatives are continuous in $\zeta$ and $z$, we can express $\xi'_{c}(z)|_{z=\iota}$ using the Leibniz rule as follows: \begin{equation} \label{equ71} \resizebox{1.06\hsize}{!}{$ \begin{aligned} \xi'_{c}(z)|_{z=\iota} & = \Bigg(- \iota \Im(z)^{2} \overline{f(\bar{z}+2 \iota \Im(z))} \cdot \Im'(z) + \int_{\Im(z)}^{c} \iota \zeta^{2} \bigg(\frac{\partial}{\partial z} \overline{f(\bar{z}+2 \iota \zeta) } dz + \frac{\partial}{\partial \bar{z}}\overline{f(\bar{z}+2 \iota \zeta) } d\bar{z}\bigg)d \zeta\Bigg) \bigg|_{z=\iota} \\ & = \Bigg(- \iota \Im(z)^{2} \overline{f(z)} \cdot \Im'(z) + \int_{\Im(z)}^{c} \iota \zeta^{2} \frac{\partial}{\partial z} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta\Bigg) \bigg|_{z=\iota} \\ & = \underbrace{- \iota \overline{f(\iota)} \cdot \Im'(z)|_{z=\iota}}_{K} + \Bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \frac{\partial}{\partial z} \overline{f(\bar{z}+2 \iota \zeta)} d \zeta\Bigg) \bigg|_{z=\iota} \end{aligned} $} \end{equation} where the second equality in (\ref{equ71}) follows from the fact that $f$ is a holomorphic function, hence we get \begin{equation*} \frac{\partial}{\partial \bar{z}} \overline{f(\bar{z}+2 \iota \zeta)} =\overline{ \Bigg( \frac{\partial}{\partial z} f(\bar{z}+2 \iota \zeta) \Bigg)} = 0. \end{equation*} Note that we have omitted $dz$ in $\frac{\partial}{\partial z} \overline{f(\bar{z}+2 \iota \zeta) } dz$ because $dz$ as a linear map can be viewed as the $2 \times 2$ identity matrix. Since the summand $K = - \iota \overline{f(\iota)} \cdot \Im'(z)|_{z=\iota}$ in (\ref{equ71}) does not depend on $c$, it is harmless to drop it from the expression of $T^{\mathrm{hol}}_{1, \iota}(\xi_{c}(z))$ when investigating convergence. We denote the resulting expression by $\Psi_{c}(z)$.
Using (\ref{equ71}), $\Psi_{c}(z)$ can be written as: \begin{equation} \label{equ72} \Psi_{c}(z) = \Bigg( \int_{1}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \Bigg) \eta(z). \end{equation} Then \begin{equation} \label{newxi} \xi^{\mathrm{reg}}(z) = \lim_{c \rightarrow \infty}\big(\xi_{c}(z)-\Psi_{c}(z) - K \cdot (z-\iota) \big). \end{equation} Let's first spell out the expression $\xi_{c}(z) - \Psi_{c}(z)$. \smallskip \paragraph{\textbf{Case I:} $2c \geq 1 \geq \Im(z)$} \begin{equation} \label{anotheresti} \begin{aligned} \xi_{c}(z) - \Psi_{c}(z) & = \Bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta \\ & \qquad -\int_{1}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \Bigg) \eta(z) \\ & = \Bigg( \int_{\Im(z)}^{1} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta + \int_{1}^{c} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta \\ & \qquad - \int_{1}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+ 2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \Bigg) \eta(z) \\ & = \Bigg( \underbrace{\int_{1}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} - \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta}_{I_{c}} \\ & \qquad - \int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta \Bigg) \eta(z).
\end{aligned} \end{equation} \paragraph{\textbf{Case II:} $2c \geq \Im(z) \geq 1$} \begin{equation} \label{anotherequ21} \begin{aligned} \xi_{c}(z)-\Psi_{c}(z) & = \Bigg( \int_{\Im(z)}^{c} \iota \zeta^{2} \overline{f(\bar{z}+2 \iota \zeta)}d \zeta \\ & \qquad -\int_{1}^{\Im(z)} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \\ & \qquad -\int_{\Im(z)}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \Bigg) \eta(z) \\ & = \Bigg( \underbrace{\int_{\Im(z)}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} - \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta}_{II_{c}} \\ & \qquad - \int_{1}^{\Im(z)} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta \Bigg) \eta(z). \end{aligned} \end{equation} Since the integrals $$\int_{1}^{\Im(z)} \iota \zeta^{2}\overline{f(\bar{z}+2 \iota \zeta)} d \zeta$$ and $$\int_{1}^{\Im(z)} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta$$ in R.H.S of (\ref{anotheresti}) and (\ref{anotherequ21}) are independent of $c$, we work with $I_{c}$ and $II_{c}$ in (\ref{anotheresti}) and (\ref{anotherequ21}) to prove the convergence of $\xi^{\mathrm{reg}}$. 
Now if $A, B \geq c$, we have \begin{equation*} I_{B}-I_{A} = \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} - \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta, \end{equation*} and \begin{equation*} II_{B}-II_{A} = \int_{A}^{B} \iota \zeta^{2} \bigg( \overline{f(\bar{z}+2 \iota \zeta) - f(\bar{\iota}+2 \iota \zeta)} - \bigg( \frac{\partial}{\partial z} \overline{f(\bar{z} +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z-\iota) \bigg)d \zeta. \end{equation*} Using the Remainder Estimation Theorem for $f$, we have \begin{equation} \label{inequalities3} \arrowvert I_{B}-I_{A} \arrowvert = \arrowvert II_{B}-II_{A} \arrowvert \leq \int_{A}^{B} \zeta^{2} \cdot \mathrm{max}_{w}\arrowvert f^{(2)}(w) \arrowvert \cdot |(\bar{z}+2 \iota \zeta)- (\bar{\iota}+2 \iota \zeta)|^{2} d \zeta, \end{equation} where $w$ is varying on the line segment connecting $\bar{z}+2 \iota \zeta$ and $\bar{\iota}+2 \iota \zeta$. We assume $A, B > \Im(z)$. Using (\ref{equ63}), we rewrite (\ref{inequalities3}) as follows: \begin{equation} \label{inequalities4} \begin{aligned} \arrowvert I_{B}-I_{A} \arrowvert = \arrowvert II_{B}-II_{A} \arrowvert & \leq \int_{A}^{B} \zeta^{2} \cdot \mathrm{max}_{w}\frac{K_{2}}{(\Im(w))^{4}} \cdot |\bar{z}-\bar{\iota}|^{2} d \zeta \\ & \leq |\bar{z}-\bar{\iota}|^{2} \int_{A}^{B} \zeta^{2} \cdot \frac{K_{2}}{(2 \zeta- \Im(z))^{4}} d \zeta \end{aligned} \end{equation} Also, the denominator $(2 \zeta- \Im(z))^{4}$ is at least as big as $\zeta^{4}$. As a result (\ref{inequalities4}) has the following form: \begin{equation} \label{inequalities5} \begin{aligned} \arrowvert I_{B}-I_{A} \arrowvert = \arrowvert II_{B}-II_{A} \arrowvert & \leq |\bar{z}-\bar{\iota}|^{2} \int_{A}^{B} \zeta^{2} \cdot \frac{K_{2}} {\zeta^{4}} d \zeta \\ & = |\bar{z}-\bar{\iota}|^{2} \cdot K_{2} \bigg(-\frac{1}{B}+\frac{1}{A}\bigg).
\end{aligned} \end{equation} These estimates show that $\xi^{\mathrm{reg}}$ is a well-defined vector field. Moreover, they show that $\xi^{\mathrm{reg}}$ is locally a uniform limit of harmonic vector fields which determine the same holomorphic quadratic differential. Therefore, $\xi^{\mathrm{reg}}$ is a harmonic vector field by Corollary \ref{coroharmonic}. \hfill \qedsymbol \subsection{Extending harmonic vector fields on $\mathbb{H}^{2}$ to the boundary circle $\mathbb{S}^{1}$} \label{extendharmonic} We refer to the extended real axis $\overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}$ as the boundary at infinity of $\mathbb{H}^{2}$. We are using the unit disc model so that we have a well-defined notion of the tangent space at the point $\{\infty\} \in \partial \mathbb{H}^{2}$, as there is a natural 1-1 correspondence between $\partial \mathbb{D}$ and $\partial \mathbb{H}^{2}$. The starting point is to compare the length of a vector $v \in T_{z} \mathbb{H}^{2}$ for some $z \in \mathbb{H}^{2}$ (measured in the Euclidean metric) with the length of the pushforward of $v$ (measured in the Euclidean metric) by a conformal map between $\mathbb{H}^{2}$ and $\mathbb{D}$. Consider the Cayley transformation \begin{equation} \label{cayley} C(z)= \frac{z-\iota}{z+\iota} \end{equation} mapping the upper half plane model of $\mathbb{H}^{2}$ to the unit disc model $\mathbb{D}$ of $\mathbb{H}^{2}$. Since $C'(z) = 2\iota/(z+\iota)^{2}$, and since $2/|z+\iota|^{2}$ is comparable to $1/|z|^{2}$ as $|z| \rightarrow \infty$, we have \begin{equation} \label{endeq} |dC_{z} (v)| = \frac{2| v |}{|z+\iota|^{2}}, \quad \forall v \in T_{z}\mathbb{H}^{2}.
\end{equation} \begin{theorem} \label{boundary} The harmonic vector field $\xi^{\mathrm{reg}}$ in Theorem \ref{thmglobharmvf}, transformed from $\mathbb{H}^{2}$ to the open unit disc $\mathbb{D} \subset\mathbb{C}$ by the Cayley transform $C$ given by (\ref{cayley}), extends to a continuous vector field, say $\chi$, on $\overline{\mathbb{D}}$ defined as follows: \begin{equation} \label{extendedvf} \chi(C(z)) = \begin{cases} C_{\ast}(\xi^{\mathrm{reg}}(z)) & \quad z \in \mathbb{H}^{2} \\ C_{\ast}(\xi^{\mathrm{reg}}(z)) & \quad z \in \partial \mathbb{H}^{2} \setminus \{\infty\} \\ 0 & \quad z = \{ \infty \} \end{cases} \end{equation} where $C_{\ast}(\xi^{\mathrm{reg}}(z))$ is the pushforward of $\xi^{\mathrm{reg}}(z)$ by the Cayley transform $C$. \end{theorem} Before we prove Theorem \ref{boundary}, we discuss the one and only disadvantage of Wolpert's formula (\ref{wolpertformula}) in the following remark: \begin{remark} \normalfont Recall Wolpert's global solution $\xi$ (see (\ref{wolpertformula})) for the potential equation (\ref{wolpertequation}). Given that $q=fdz^{2}$ is bounded in the hyperbolic metric $\textbf{g}_{\mathbb{H}^{2}}$, i.e., $|f(z)| \leq \frac{D}{\Im(z)^{2}}$ where $D$ is a positive constant, $\xi$ extends to the real line $\mathbb{R}$. This can be seen as follows: $f$ is not defined for $z$ such that $\Im(z)=0$, so the integral in (\ref{wolpertformula}) is improper, and for $z$ such that $\Im(z)=0$ we define \begin{equation*} \label{estimatewol} \xi(z)= \lim_{\epsilon \rightarrow 0} \Bigg( \Bigg( \overline{\int_{w}^{z+\iota \epsilon} (\overline{z+\iota \epsilon}-\zeta)^{2} f(\zeta) d\zeta} \Bigg) \eta(z) \Bigg). \end{equation*} The above limit exists, as can be seen by taking $w$ to be $\iota$ and using $|f(z)| \leq \frac{D}{\Im(z)^{2}}$. We have no reason to believe that $\xi$ extends to the point $\{\infty \}$ in the boundary $\mathbb{R} \cup \{\infty \}$.
Here is an argument: for the sake of convenience, we choose the line segment from $w=\iota$ to $z=c\iota$ as the path of integration in the expression of $\xi$, where $c>1$ is a positive real number. Then, $$\xi(c \iota) = \overline{\int_{\iota}^{c\iota} (\overline{c\iota}-\zeta)^{2} f(\zeta) d \zeta}.$$ Therefore, \begin{equation*} \begin{split} |\xi(c \iota)| & \leq D\int_{\iota}^{c\iota} \frac{|\overline{c\iota}-\zeta|^{2}}{\Im(\zeta)^{2}} |d\zeta| \\ & = D\int_{\iota}^{c\iota} \frac{|c\iota-\bar{\zeta}|^{2}}{\Im(\zeta)^{2}} |d\zeta| \\ & \leq D \cdot |c\iota - \iota| \cdot \mathrm{max}_{\zeta} \frac{|c\iota-\bar{\zeta}|^{2}}{\Im(\zeta)^{2}}, \end{split} \end{equation*} where $\zeta$ varies on the line segment from $\iota$ to $c\iota$. Writing $\zeta = t \iota$ with $t \in [1, c]$, the maximum equals $(c+1)^{2}$ (attained at $t=1$), so $|\xi(c\iota)| \leq D(c-1)(c+1)^{2}$. From this estimate, it is clear that $\xi$ is $O(|z|^{3})$ at the point $\{ \infty \}$ in the boundary $\mathbb{R} \cup \{\infty \}$. \end{remark} \bigskip \paragraph{\textit{Proof of Theorem \ref{boundary}:}} Recall Remark \ref{wecanextend}. For $z$ such that $\Im(z)=0$, the definition of $\xi^{\mathrm{reg}}$ makes sense because the improper integral in the expression of $\xi^{\mathrm{reg}}$ converges for such $z$ by the conditions given in (\ref{equ26}), (\ref{equ27}), and (\ref{equ63}). Now, we claim that for any sequence $\{z_{n}\}$ of points in $\mathbb{H}^{2}$ such that $|z_{n}| \rightarrow \infty$, where $|\cdot|$ denotes the absolute value, \begin{equation} \label{approachinfinity} \lim_{|z_{n}| \rightarrow \infty} | C_{\ast}(\xi^{\mathrm{reg}}(z_{n})) | =0, \end{equation} where $|C_{\ast}(\xi^{\mathrm{reg}}(z))|$ denotes the length of the pushforward of $\xi^{\mathrm{reg}}(z)$ measured in the Euclidean metric. Using (\ref{endeq}), (\ref{approachinfinity}) is equivalent to \begin{equation} \label{approachinfinity1} \lim_{|z_{n}| \rightarrow \infty} \frac{| \xi^{\mathrm{reg}}(z_{n})|} {|z_{n}|^{2}} =0.
\end{equation} The main idea is to split the integral $\int_{0}^{c} \iota \zeta^{2} \overline{f(z_{n}+2 \iota \zeta)} d \zeta$ at height $h$ such that $h = |z_{n}|$ and estimate the resulting integrals in different ways. Using (\ref{equ71}), (\ref{equ72}), and (\ref{newxi}), our expression for $\xi^{\mathrm{reg}}(z_{n})$ takes the following form: \begin{equation*} \label{equ120} \xi^{\mathrm{reg}}(z_{n}) = \underbrace{\xi_{h}(z_{n}) - \bigg( \xi_{h}(\iota) + \frac{\partial \xi_{h}}{\partial z}\bigg|_{z=\iota} \cdot (z_{n}-\iota)\bigg) }_{\xi_{1}^{\mathrm{reg}}(z_{n})} + \underbrace{\lim_{c \to \infty} \big( \xi_{h, c}(z_{n}) - \Psi_{h, c}(z_{n}) -K \big)}_{\xi_{2}^{\mathrm{reg}}(z_{n})}, \end{equation*} where \begin{equation*} \begin{split} \xi_{h}(z_{n}) = \bigg( \int_{0}^{h} \iota \zeta^{2} \overline{f(z_{n}+2 \iota \zeta)} d \zeta \bigg) \eta(z_{n}), & \quad \xi_{h, c}(z_{n}) = \bigg( \int_{h}^{c} \iota \zeta^{2} \overline{f(z_{n}+2 \iota \zeta)} d \zeta \bigg) \eta(z_{n}), \\ \xi_{h}(\iota) = \bigg( \int_{1}^{h} \iota \zeta^{2} \overline{f(\bar{\iota}+2 \iota \zeta)} d \zeta \bigg) \eta(\iota), & \quad \frac{\partial \xi_{h}}{\partial z}\bigg|_{z=\iota} = \bigg( \int_{1}^{h} \iota \zeta^{2} \bigg( \frac{\partial}{\partial z}\overline{f(z+2 \iota \zeta)} \bigg)\bigg|_{z=\iota} d \zeta \bigg) \eta(\iota), \end{split} \end{equation*} \begin{equation*} \Psi_{h, c}(z_{n}) = \Bigg( \int_{h}^{c} \iota \zeta^{2} \bigg( \overline{f(\bar{\iota}+2 \iota \zeta) } + \bigg( \frac{\partial}{\partial z} \overline{f(z +2 \iota \zeta) } \bigg) \bigg|_{z=\iota} \cdot (z_{n}-\iota) \bigg)d \zeta \Bigg) \eta(z_{n}). \end{equation*} Note that in the expressions above, $h=|z_{n}|$ is treated as a constant; in particular, it is not differentiated with respect to $z$.
Using (\ref{firstcov}), (\ref{secondcov1}), and (\ref{equ26}), each of the terms $\xi_{h}(z_{n})$, $\xi_{h}(\iota)$, and $\frac{\partial \xi_{h}}{\partial z}|_{z=\iota} \cdot (z_{n}-\iota)$ in the expression of $\xi_{1}^{\mathrm{reg}}(z_{n})$ satisfies the following inequalities when estimated in the Poincar\'{e} metric $\textbf{g}_{\mathbb{H}^{2}}$: \begin{equation*} \label{equ129} \begin{aligned} \arrowvert \xi_{h}(z_{n}) \arrowvert & \leq \frac{D}{4} |z_{n}|, \\ \arrowvert \xi_{h}(\iota) \arrowvert & \leq \frac{D}{4} |z_{n}|, \\ \bigg \arrowvert \frac{\partial \xi_{h}}{\partial z}\big|_{z=\iota} \cdot (z_{n}-\iota) \bigg \arrowvert & \leq \frac{K_{1}}{8} |z_{n}|. \end{aligned} \end{equation*} At this point (\ref{endeq}) comes in handy and shows us immediately that $C_{\ast}(\xi_{1}^{\mathrm{reg}}(z_{n})) \rightarrow 0$ as $|z_{n}| \rightarrow \infty$. From the estimate given in (\ref{inequalities5}) in the proof of Theorem \ref{thmglobharmvf}, we have $C_{\ast}(\xi_{2}^{\mathrm{reg}}(z_{n})) \rightarrow 0$ as $|z_{n}| \rightarrow \infty$. \hfill \qedsymbol \section{Going from the analytic description to the cohomological description} \label{analtocohom} \subsection{Vector fields on $\mathbb{D}$ and $\mathbb{S}^{1}$} \label{vectorfieldsonthecircle} We will denote the Hilbert space of measurable functions $f$ on $\mathbb{S}^{1}$ such that $$\int_{\mathbb{S}^{1}} | f(x)|^{2} dx < + \infty $$ modulo the equivalence relation of almost-everywhere equality by $L^{2}(\mathbb{S}^{1})$. We will not prove the completeness of $L^{2}(\mathbb{S}^{1})$ here; the main idea of the proof is that a Cauchy sequence of $L^{2}$-functions has a subsequence that converges pointwise off a set of measure $0$.
There is a different definition of $L^{2}(\mathbb{S}^{1})$, namely the completion of $C^{0}(\mathbb{S}^{1})$, the space of continuous $\mathbb{C}$-valued functions on $\mathbb{S}^{1}$, with respect to the norm \begin{equation} \label{l2norm} \lVert f \rVert:= \frac{1}{\sqrt{2 \pi}} \bigg(\int_{\mathbb{S}^{1}} |f(z)|^{2} dz \bigg)^{1/2}. \end{equation} The Fourier basis elements are the exponential functions $\psi_{k}(z):=z^{k}$ for $z \in \mathbb{S}^{1}$. The exponential functions $\{\psi_{k}|k \in \mathbb{Z}\}$ form an orthonormal set in $L^{2}(\mathbb{S}^{1})$, but it is not immediately clear that they form an orthonormal Hilbert basis (see \cite{beals}). From orthonormality, the \textit{Fourier coefficients} $a_{k} \in \mathbb{C}$ of $f$ are the inner products \begin{equation*} a_{k}= \langle f, \psi_{k} \rangle = \frac{1}{2 \pi}\int_{\mathbb{S}^{1}} f(z) \overline{\psi_{k}(z)} dz. \end{equation*} The Fourier expansion of $f \in L^{2}(\mathbb{S}^{1})$ is $$f(z)= \sum_{k \in \mathbb{Z}} a_{k} \psi_{k}(z)$$ where the equality means convergence of the partial sums to $f$ in the $L^{2}$-norm, or $$\lim_{N \rightarrow \infty} \frac{1}{\sqrt{2 \pi}} \int_{\mathbb{S}^{1}} \bigg|\sum_{k=-N}^{N} a_{k} \psi_{k}(z)-f(z)\bigg|^{2} dz=0.$$ The convenient algebraic property of the $\psi_{k}$ is that the basis is multiplicative, and multiplication of functions corresponds to the \textit{convolution of Fourier series}; this is obvious in our context since \begin{equation} \label{convolution} \psi_{k} \cdot \psi_{l} = \psi_{k+l}. \end{equation} From now on we will denote $L^{2}(\mathbb{S}^{1})$ by $\mathcal{H}$. There is an orthogonal sum splitting $\mathcal{H}= \mathcal{H}^{1} \oplus \mathcal{H}^{2}$, where $ \mathcal{H}^{1}$ is the closure of the span of $\{\psi_{k} | k <0 \}$ and $ \mathcal{H}^{2}$ is the closure of the span of $\{\psi_{k} | k \geq 0 \}$.
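For instance, the orthonormality of the exponential functions is a direct computation: parametrising $\mathbb{S}^{1}$ by $z= \exp(\iota \theta)$, $\theta \in [0, 2\pi)$, we have \begin{equation*} \langle \psi_{k}, \psi_{l} \rangle = \frac{1}{2 \pi} \int_{0}^{2 \pi} \exp(\iota k \theta) \overline{\exp(\iota l \theta)} \, d \theta = \frac{1}{2 \pi} \int_{0}^{2 \pi} \exp(\iota (k-l) \theta) \, d \theta = \begin{cases} 1 & \quad k=l \\ 0 & \quad k \neq l. \end{cases} \end{equation*}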
An element of $ \mathcal{H}^{2}$, say $$f:= \sum_{k \geq 0} a_{k} \psi_{k}$$ has a canonical extension to a function (in the $L^{2}$-sense) defined on the unit disk $\mathbb{D}$ in $\mathbb{C}$ by the formula $$ z \longmapsto \sum_{k \geq 0} a_{k} z^{k}.$$ This is in fact a convergent power series in the open unit disk $\mathbb{D}$, so it defines a holomorphic function on the open unit disk $\mathbb{D}$ in $\mathbb{C}$. So we should see $ \mathcal{H}^{2}$ as the linear subspace of $\mathcal{H}$ consisting of those $L^{2}$-functions on $\mathbb{S}^{1}$ which extend holomorphically to the open unit disk $\mathbb{D}$ in $\mathbb{C}$. Equivalently, think of $ \mathcal{H}^{2}$ as the complex vector space of $L^{2}$-vector fields on $\mathbb{S}^{1}$ which extend holomorphically to the open unit disk, i.e. $$\mathcal{H}^{2}= \{X: \mathbb{S}^{1} \longrightarrow \mathbb{R}^{2} | X \hspace{1pt} \mathrm{is} \hspace{1pt} L^{2} \hspace{2pt} \mathrm{and \hspace{2pt} extends \hspace{2pt} holomorphically \hspace{2pt} to} \hspace{2pt} \mathbb{D}, X(z) \in T_{z} \mathbb{R}^{2} \cong \mathbb{R}^{2} \cong \mathbb{C}, \forall z \in \mathbb{S}^{1}\},$$ where the norm on $X$ is taken in the sense of (\ref{l2norm}). \begin{remark} \label{l2cont} \normalfont A smooth or continuous vector field $X$ on the open unit disk $\mathbb{D}$ has an \textit{$L^{2}$-extension} to the closed disk $\overline{\mathbb{D}}$ if the following holds: for every $\epsilon > 0$, we get a continuous vector field $X_{\epsilon}$ on $\mathbb{S}^{1}_{1-\epsilon}$, a circle of radius $1-\epsilon$ (which can be identified canonically with $\mathbb{S}^{1}$ by stretching), by restricting $X$ to $\mathbb{S}^{1}_{1-\epsilon}$. Letting $\epsilon \rightarrow 0$, we get a family $\{X_{\epsilon}\}$ in the Hilbert space of $L^{2}$-vector fields on the boundary circle $\mathbb{S}^{1}$. If $\{X_{\epsilon}\}$ converges to an $L^{2}$-vector field on the boundary circle $\mathbb{S}^{1}$, then $X$ has an $L^{2}$-extension to the closed disk $\overline{\mathbb{D}}$.
\end{remark} \begin{defn} \label{defnoftan} \normalfont A vector field on $\mathbb{S}^{1}$ with values in $\mathbb{R}^{2}$ or $\mathbb{C}$ is called \textit{tangential} if its flow maps $\mathbb{S}^{1}$ into itself. \end{defn} We denote the space of tangential vector fields on $\mathbb{S}^{1}$ by $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1})$. It is a real vector space. To get more insight, consider the following example: \begin{example} \label{examoftan} \normalfont Consider the following complex-valued vector field on $\mathbb{S}^{1}$: $$X(x, y)= -y \frac{\partial}{\partial x}+x \frac{\partial}{\partial y}.$$ In complex coordinates, we express $X$ as $X(z) = \iota z$. It is clear that $X$ is a tangential vector field on $\mathbb{S}^{1}$ since $$\sigma(t, (x,y)) = (x \cos t- y \sin t, x \sin t+y \cos t)$$ is a flow generated by $X$ and the flow through $(x, y)$ is a circle whose centre is at the origin. Clearly, $\sigma(t, (x,y))=(x, y)$ if $t=2n \pi, n \in \mathbb{Z}$. See the left-hand side of Figure \ref{pic26}. \begin{figure}[h] \centering \includegraphics[height=5cm]{nicepic.jpg} \caption{An example of a tangential vector field on $\mathbb{S}^{1}$} \label{pic26} \end{figure} \end{example} Note that the above example gives only one tangential vector field on $\mathbb{S}^{1}$; we obtain all the others by multiplying the $X$ of Example \ref{examoftan} by real-valued functions on $\mathbb{S}^{1}$. Note that vector fields can be multiplied by functions; for simplicity, we think of multiplication of $L^{2}$-vector fields on $\mathbb{S}^{1}$ by real-valued functions on $\mathbb{S}^{1}$ as multiplication of functions by functions. \smallskip Recall that we have expressed an $L^{2}$-function $f$ on $\mathbb{S}^{1}$ with values in $\mathbb{C}$ as $\sum_{k \in \mathbb{Z}}a_{k} \psi_{k}$. It is a routine exercise in Fourier analysis to show that $f$ is real valued iff $a_{k}= \overline{a_{-k}}$ for all $k$.
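Indeed, since $\overline{\psi_{k}}=\psi_{-k}$ on $\mathbb{S}^{1}$, we have \begin{equation*} \overline{f} = \sum_{k \in \mathbb{Z}} \overline{a_{k}} \, \psi_{-k} = \sum_{k \in \mathbb{Z}} \overline{a_{-k}} \, \psi_{k}, \end{equation*} so $f= \overline{f}$ if and only if $a_{k}= \overline{a_{-k}}$ for all $k \in \mathbb{Z}$.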
Therefore the corresponding (real) Fourier expansion of $f$ is $$f(x)= \frac{1}{2}a'_{0} + \sum_{k=1}^{\infty} a'_{k} \cos(kx) + b'_{k} \sin(kx),$$ where $a'_{k}=a_{k}+a_{-k}$ and $b'_{k}= \iota(a_{k}-a_{-k})$. So, the real-valued functions $$\{1, \cos(kx), \sin(kx) | k=1, 2, 3, \ldots \}$$ also form an orthogonal basis of the space $\mathcal{H}$, since $$\cos(kx)= \frac{\exp(\iota k x) + \exp(-\iota k x)}{2} = \frac{z^{k}+z^{-k}}{2},$$ $$\sin(kx)= \frac{\exp(\iota k x) - \exp(-\iota k x)}{2 \iota} = \frac{z^{k}-z^{-k}}{2 \iota}.$$ Using (\ref{convolution}), i.e., the fact that the Fourier transform of the product of functions is the convolution of the Fourier transforms, we have the following real Hilbert basis of $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1})$: \begin{equation} \label{realbasis} \bigg \{\iota z, \frac{\iota z^{1+k}+\iota z^{1-k}}{2}, \frac{z^{1+k}-z^{1-k}}{2} \bigg| k=1, 2, 3, \ldots \bigg \}. \end{equation} Also, Killing vector fields on $\mathbb{D}$ are the infinitesimal generators of isometries of $\mathbb{D}$; hence they restrict to tangential vector fields on $\mathbb{S}^{1}$. We will denote the three-dimensional real vector space whose elements are Killing vector fields on $\mathbb{S}^{1}$ by $\mathfrak{X}_{\textrm{Killing}}(\mathbb{S}^{1})$. \begin{theorem} \label{infinitedimproblem} We have \begin{enumerate} \item $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1}) \cap \mathcal{H}^{2} = \mathfrak{X}_{\textrm{Killing}}(\mathbb{S}^{1}).$ \item $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1}) + \mathcal{H}^{2}$ is the vector space of all $L^{2}$-vector fields on $\mathbb{S}^{1}$. \end{enumerate} \end{theorem} \begin{proof} \normalfont (1). Since any complex vector space has an underlying real vector space, a real Hilbert basis of the space $\mathcal{H}^{2}$ is given by $$\{z^{k}, \iota z^{k} | k \geq 0 \}.$$ The basis for $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1})$ is given by (\ref{realbasis}).
Assume $X \in \mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1}) \cap \mathcal{H}^{2}$. Then $X = \sum_{k \geq 0}a_{k} z^{k} + b_{k} \iota z^{k}$ and $X = a'_{0}\iota z + \sum_{k \geq 1} a'_{k} \frac{\iota z^{1+k}+\iota z^{1-k}}{2} + b'_{k} \frac{z^{1+k}-z^{1-k}}{2}$. Since $$\sum_{k \geq 0}a_{k} z^{k} + b_{k} \iota z^{k} = a'_{0}\iota z + \sum_{k \geq 1} a'_{k} \frac{\iota z^{1+k}+\iota z^{1-k}}{2} + b'_{k} \frac{z^{1+k}-z^{1-k}}{2},$$ comparing the coefficients of $z^{k}$ and $\iota z^{k}$ in each expression, we obtain $a'_{0}=b_{1}$, $b_{2}=b_{0}=\frac{a'_{1}}{2}, a_{2}= \frac{b'_{1}}{2} = -a_{0}$, and all other coefficients are zero. Therefore, $X$ is a linear combination with real coefficients of $\iota z$, $\frac{z^{2}-1}{2}$, and $\frac{\iota z^{2}+\iota}{2}$. Note that $\iota z$, $\frac{z^{2}-1}{2}$, and $\frac{\iota z^{2}+\iota}{2}$ are linearly independent. Hence the vector space $\mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1}) \cap \mathcal{H}^{2}$ is a $3$-dimensional space which is nothing but $\mathfrak{X}_{\textrm{Killing}}(\mathbb{S}^{1}).$ \bigskip \newline (2) A real Hilbert basis of the space of $L^{2}$-vector fields on $\mathbb{S}^{1}$ is given by $\{z^{k}, \iota z^{k} | k \in \mathbb{Z} \}$. It is then easy to see that \begin{equation*} \begin{aligned} X(z) - \Bigg( \Bigg( \sum_{k \in \{2, 3, \ldots\}} b_{1-k}\big(\iota z^{1+k}+\iota z^{1-k}\big)\Bigg) - \sum_{k \in \{2, 3, \ldots\}} a_{1-k}\big(z^{1+k}-z^{1-k}\big) \Bigg)\\ \qquad = a_{0}+b_{0}\iota + a_{1}z+b_{1} \iota z + a_{2} z^{2}+b_{2} \iota z^{2} + (a_{3}+a_{-1})z^{3}+(b_{3}-b_{-1})\iota z^{3}+ \cdots, \end{aligned} \end{equation*} where $X(z)= \sum_{k \in \mathbb{Z}} a_{k}z^{k} + b_{k} \iota z^{k}$, $z \in \mathbb{S}^{1}$. Therefore, $X = X_{1} + X_{2}$, where $X_{1} \in \mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1})$ and $X_{2} \in \mathcal{H}^{2}$.
\hfill \qedsymbol \end{proof} \bigskip Before we state the conclusions of this section, we introduce some notions and conventions: \begin{enumerate} \item Let $\mathfrak{M}$ be a $\Gamma$-module, where $\Gamma$ is a subgroup of $\mathrm{PSU}(1, 1)$. A map $c: \Gamma \longrightarrow \mathfrak{M}$ is called a cocycle if \begin{equation*} \label{group1} c_{\gamma_{1} \circ \gamma_{2}}= \gamma_{2}^{\ast}c_{\gamma_{1}} + c_{\gamma_{2}}, \quad \gamma_{1}, \gamma_{2} \in \Gamma, \end{equation*} where $c_{\gamma}$ stands for $c(\gamma)$ and $\ast$ denotes the action of $\Gamma$ on $\mathfrak{M}$. If $m \in \mathfrak{M}$, its \textit{coboundary} $\delta m$ is the cocycle \begin{equation} \label{group2} \gamma \longmapsto \gamma^{\ast}m - m, \quad \gamma \in \Gamma. \end{equation} The first \textit{cohomology group} $H^{1}(\Gamma; \mathfrak{M})$ is the quotient $Z^{1}(\Gamma; \mathfrak{M}) / B^{1}(\Gamma; \mathfrak{M})$ of the space of cocycles by the space of coboundaries. \item The most important cases of $\mathfrak{M}$ from the viewpoint of this thesis are \begin{enumerate} \item $\mathcal{S}^{\infty}(T\mathbb{D})$, the vector space of smooth vector fields on $\mathbb{D}$. $\Gamma$ acts on $\mathcal{S}^{\infty}(T\mathbb{D})$ in the following manner: \begin{equation} \label{actiononvector} \gamma^{\ast}F = F(\gamma) \gamma'^{-1}, \quad \gamma \in \Gamma, F \in \mathcal{S}^{\infty}(T\mathbb{D}). \end{equation} \item $\mathrm{HOL}$, the vector space of holomorphic vector fields on $\mathbb{D}$. $\Gamma$ acts on $\mathrm{HOL}$ in the same manner as in (\ref{actiononvector}). \item $\mathfrak{g}$, the vector space of Killing vector fields on $\mathbb{D}$. Note that we have already dealt with this case in \textbf{\cref{cohomo}} in \textbf{\cref{tangentteichmuellerspace}}. \end{enumerate} \item Note that \begin{equation*} \label{group3} \mathfrak{g} \subset \mathrm{HOL} \subset \mathcal{S}^{\infty}(T\mathbb{D}). \end{equation*} \end{enumerate} Recall \textbf{\cref{chapter3section2}} in \textbf{\cref{chapter3}}.
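As a quick consistency check of these conventions, note that the action (\ref{actiononvector}) satisfies $(\gamma_{1} \circ \gamma_{2})^{\ast} = \gamma_{2}^{\ast} \circ \gamma_{1}^{\ast}$ by the chain rule, and hence every coboundary (\ref{group2}) is indeed a cocycle: for $\gamma_{1}, \gamma_{2} \in \Gamma$ and $m \in \mathfrak{M}$, \begin{equation*} (\delta m)_{\gamma_{1} \circ \gamma_{2}} = (\gamma_{1} \circ \gamma_{2})^{\ast}m - m = \gamma_{2}^{\ast}\big(\gamma_{1}^{\ast}m - m\big) + \big(\gamma_{2}^{\ast}m - m\big) = \gamma_{2}^{\ast}(\delta m)_{\gamma_{1}} + (\delta m)_{\gamma_{2}}. \end{equation*}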
Given a holomorphic quadratic differential $q$ on $\mathbb{D}$ which satisfies boundedness conditions, namely, $q$ is bounded in the hyperbolic metric $\textbf{g}_{\mathbb{D}}$ of $\mathbb{D}$, and the first and the second covariant derivatives of $q$ w.r.t. the linear connection on $T^{\ast} \mathbb{D} \otimes_{\mathbb{C}} T^{\ast}\mathbb{D}$ are bounded in $\textbf{g}_{\mathbb{D}}$, we obtain a harmonic vector field $\chi$ on $\mathbb{D}$ that extends continuously to the boundary circle $\mathbb{S}^{1}$ such that $(\mathcal{L}_{\chi}\textbf{g}_{\mathbb{D}})^{(2, 0)}= q$. Note that $\chi$ is not necessarily tangential to the boundary circle $\mathbb{S}^{1}$. We will denote the restriction of $\chi$ to $\mathbb{S}^{1}$ by $\chi|_{\mathbb{S}^{1}}$. Using Theorem \ref{infinitedimproblem} (2), we can write $\chi|_{\mathbb{S}^{1}}$ as $\chi_{1} +\chi_{2}$, where $\chi_{1} \in \mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1})$ and $\chi_{2} \in \mathcal{H}^{2}$. Since $\chi$ is a harmonic vector field on $\mathbb{D}$ whose associated holomorphic quadratic differential is $q$, and the holomorphic quadratic differential associated with $\chi_{2}$ is zero, the holomorphic quadratic differential associated with the vector field $\chi_{1}$ is the same $q$. Notice that in the expression $\chi_{1}=\chi-\chi_{2}$ we are working with the holomorphic extension of $\chi_{2}$ to the open unit disk $\mathbb{D}$. Now, the coboundary of $\chi$, i.e., $$\delta \chi(\gamma) = \chi(\gamma) \gamma'^{-1} - \chi, \quad \forall \gamma \in \Gamma$$ is a cocycle with values in $\mathrm{HOL}$ because of the $\Gamma$-invariance of $q$. But our goal is to get a cocycle with values in $ \mathfrak{g}$, where $\mathfrak{g}$ is the Lie algebra of $\mathrm{Isom}^{+}(\mathbb{D})$.
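In more detail: since the hyperbolic metric $\textbf{g}_{\mathbb{D}}$ is invariant under every $\gamma \in \Gamma$ and the Lie derivative is natural, we have $(\mathcal{L}_{\gamma^{\ast}\chi}\textbf{g}_{\mathbb{D}})^{(2, 0)} = \gamma^{\ast}\big((\mathcal{L}_{\chi}\textbf{g}_{\mathbb{D}})^{(2, 0)}\big) = \gamma^{\ast}q$, so that \begin{equation*} \big(\mathcal{L}_{\delta \chi(\gamma)}\textbf{g}_{\mathbb{D}}\big)^{(2, 0)} = \gamma^{\ast}q - q = 0 \end{equation*} by the $\Gamma$-invariance of $q$. Thus the holomorphic quadratic differential associated with $\delta \chi(\gamma)$ vanishes, which is precisely the reason $\delta \chi$ takes values in $\mathrm{HOL}$.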
Using Theorem \ref{infinitedimproblem} (1), we can easily see that for every $\gamma \in \Gamma$, $\delta (\chi_{1})(\gamma) \in \mathfrak{X}_{\textrm{tangential}}(\mathbb{S}^{1}) \cap \mathcal{H}^{2}$ and therefore we get a cocycle in $\mathfrak{X}_{\textrm{Killing}}(\mathbb{S}^{1}) \cong \mathfrak{g}$. We summarize our discussion as follows: \begin{theorem} \label{summarytheo} Given a holomorphic quadratic differential $q=fdz^{2}$ on the Poincar\'{e} disk $\mathbb{D}$ which satisfies the following boundedness conditions: \begin{enumerate} \item $q$ is bounded in the hyperbolic metric on $\mathbb{D}$, i.e., $$\lVert q \rVert_{\textbf{g}_{\mathbb{D}}} \leq D, $$ where $D$ is a positive real number. \item The first and the second covariant derivatives of $q$ w.r.t. the linear connection on $T^{\ast}\mathbb{D} \otimes_{\mathbb{C}} T^{\ast} \mathbb{D}$ are bounded in $\textbf{g}_{\mathbb{D}}$. \end{enumerate} Then there exists a harmonic vector field $\chi$ on $\mathbb{D}$ which $L^{2}$-extends to the closed disk $\overline{\mathbb{D}}$ such that $(\mathcal{L}_{\chi}\textbf{g}_{\mathbb{D}})^{(2, 0)}= q$. Moreover, the restriction of that extension to the boundary circle $\mathbb{S}^{1}$ is tangential, and $\chi$ is unique up to the addition of holomorphic vector fields on $\mathbb{D}$ which extend tangentially to the boundary circle $\mathbb{S}^{1}$. From Theorem \ref{infinitedimproblem} (1), $\chi$ is unique up to the addition of elements of the vector space $\mathfrak{g}$ of Killing vector fields on $\mathbb{D}$. \end{theorem} \begin{coro} \label{summarytheocor} Let $\Gamma$ denote a subgroup of $\mathrm{Isom}^{+}(\mathbb{D})$, where $\mathrm{Isom}^{+}(\mathbb{D})$ is the group of orientation-preserving isometries of $\mathbb{D}$.
If $q=fdz^{2}$ and $\chi$ are related as in Theorem \ref{summarytheo} and if, in addition to (1) and (2) in Theorem \ref{summarytheo}, $q$ is $\Gamma$-invariant, i.e., $$f(\gamma(z))\gamma'(z)^{2}=f(z), \quad \forall \gamma \in \Gamma, z \in \mathbb{D},$$ then $\delta \chi$ defined by $$ \gamma \longmapsto \chi(\gamma) \gamma'^{-1}-\chi, \quad \forall \gamma \in \Gamma$$ is a $1$-cocycle $c$ with coefficients in the $\Gamma$-module $\mathfrak{g}$, the Lie algebra of $\mathrm{Isom}^{+}(\mathbb{D})$, and its cohomology class $[c]$ depends only on $q$. \end{coro} \begin{proof} From Theorem \ref{summarytheo}, we know that $\chi$ is unique up to the addition of Killing vector fields on $\mathbb{D}$; hence, for every $\gamma \in \Gamma$, $\delta \chi(\gamma)$ is a holomorphic vector field which extends tangentially to the boundary circle $\mathbb{S}^{1}$. Therefore, for every $\gamma \in \Gamma$, $\delta \chi(\gamma) \in \mathfrak{g}$. Recall that we have, for every $\gamma \in \Gamma$, $c(\gamma) = \frac{\chi(\gamma)}{\gamma'} - \chi$. Since $\chi$ is well-defined up to the addition of a Killing vector field $X$ on $\mathbb{D}$, it follows that $c$ is well-defined up to the addition of $\delta X$. Hence, the cohomology class $[c]$ of $c$ is well-defined. \hfill \qedsymbol \end{proof} \begin{remark} \normalfont In Corollary \ref{summarytheocor}, we view $\chi$ as a $0$-cochain with values in the vector space of harmonic vector fields on $\mathbb{D}$. \end{remark} \begin{coro} \label{onewaymap} Let $\Gamma$ in Corollary \ref{summarytheocor} be a discrete cocompact subgroup of $\mathrm{Isom}^{+}(\mathbb{D})$. Then we have an injective mapping \begin{equation} \label{mainmap1} \begin{split} \varPhi: \mathrm{HQD}(\mathbb{D}, \Gamma) & \longrightarrow H^{1}(\Gamma; \mathfrak{g}) \\ q & \longmapsto [c], \end{split} \end{equation} where $\mathrm{HQD}(\mathbb{D}, \Gamma)$ denotes the vector space of $\Gamma$-invariant holomorphic quadratic differentials on $\mathbb{D}$ and $c= \delta \chi$.
\end{coro} \begin{proof} Suppose that $\varPhi(q)=[c]=0$. Then, there exists an element $X \in \mathfrak{g}$ such that $c = \delta (X)$. By setting $Y = \chi - X$ we notice that the holomorphic quadratic differential associated to $Y$ is $q$, and $\delta Y =0$, i.e., $Y$ is invariant under the action of $\Gamma$. Therefore, $Y$ can be viewed as a harmonic vector field on the surface $\mathbb{D} / \Gamma$. From \cite[Proposition 4.2]{dodson}, on a two-dimensional compact orientable Riemannian manifold without boundary, a harmonic vector field is a conformal vector field. Therefore, $q \equiv 0$. \hfill \qedsymbol \end{proof} \section{Going from the cohomological description to the analytic description} \label{cohomtoanal} First, we fix some conventions. The group $\mathrm{SU}(1, 1)$ is the set of matrices $$\mathrm{SU}(1,1) = \bigg \{ \begin{bmatrix} a & b \\ \bar{b} & \bar{a} \end{bmatrix} \in \mathrm{GL}(2, \mathbb{C}) \big| |a|^{2}-|b|^{2}=1 \bigg \},$$ with group multiplication given by matrix multiplication. Note that the group $\mathrm{SU}(1, 1)$ is isomorphic to the group $\mathrm{SL}(2, \mathbb{R})$ of $2 \times 2$ real matrices with determinant $1$. We identify the circle group $\mathrm{SO}(2)$ with the subgroup of $\mathrm{SU}(1, 1)$ given by $$\mathrm{SO}(2) = \bigg \{ \begin{bmatrix} \exp{(\iota \theta)} & 0 \\ 0 & \exp{(-\iota \theta)} \end{bmatrix} \bigg | \quad \theta \in [0, 2 \pi) \bigg \}. $$ Recall that $\mathrm{Aut}(\mathbb{D})$, the group of orientation-preserving isometries of the Poincar\'{e} disk $\mathbb{D}$ with the hyperbolic metric $\textbf{g}_{\mathbb{D}}$, is identified with $$\mathrm{PSU}(1, 1) = \mathrm{SU}(1, 1) / \{ \pm \mathrm{Id}\}$$ because every $\gamma \in \mathrm{PSU}(1,1)$ acts on $\mathbb{D}$ by the following formula \begin{equation*} \label{actiononthedisk} \gamma(z) = \frac{az+b}{\bar{b}z+\bar{a}}, \quad \gamma = \begin{bmatrix} a & b \\ \bar{b} & \bar{a} \end{bmatrix}, \quad |a|^{2}-|b|^{2}=1, \quad \forall z \in \mathbb{D}.
\end{equation*} \subsection{$\Gamma$-invariant partition of unity on $\mathbb{D}$} \label{cohomtoanalsection1} Recall that a partition of unity subordinate to an open covering $\{ U_{i}\}$ of a manifold $M$ is a collection $\{\varphi_{i}\}$ of non-negative smooth functions such that \begin{enumerate} \item $\mathrm{supp}(\varphi_{i}) \subset U_{i}$. \item Each $p \in M$ has a neighborhood that intersects only finitely many of the sets $\mathrm{supp}(\varphi_{i})$. \item $\sum \varphi_{i} =1$. \end{enumerate} Let $\Gamma$ be a discrete cocompact subgroup of $\mathrm{PSU}(1, 1)$. Below we establish the existence of a $\Gamma$-invariant partition of unity on $\mathbb{D}$. \begin{lemma} \label{partition} There exists a smooth function $\varphi$ on $\mathbb{D}$ such that \begin{enumerate} \item $0 \leq \varphi \leq 1$. \item For each $z \in \mathbb{D}$, there is a neighborhood $U$ of $z$ and a finite subset $S$ of $\Gamma$ such that $\varphi=0$ on $\gamma(U)$ for every $\gamma \in \Gamma-S$. \item $\sum_{\gamma \in \Gamma} \varphi(\gamma(z))=1$ on $\mathbb{D}$. \end{enumerate} \end{lemma} \begin{proof} We choose an open covering $\{U_{i}\}_{i \in I}$ of the closed surface $\mathbb{D} / \Gamma$, where each $U_{i}$ is simply connected, and a smooth partition of unity $\{\alpha_{i}\}$ subordinate to the covering $\{U_{i}\}_{i \in I}$. For each $U_{i}$, we choose a single component $V_{i}$ of $\pi^{-1}(U_{i})$, where $\pi: \mathbb{D} \longrightarrow \mathbb{D} / \Gamma$ is the projection map, and set \begin{equation*} \phi_{i}(z) = \begin{cases} \alpha_{i}(\pi(z)), & \quad z \in V_{i} \\ 0, & \quad z \in \mathbb{D} - V_{i}. \end{cases} \end{equation*} Note that the mapping $\pi$ restricted to each component of $\pi^{-1}(U_{i})$ is a one-to-one covering. It is clear that $\phi_{i} \in C^{\infty}(\mathbb{D})$, and that $\varphi = \sum_{i} \phi_{i}$ has the required properties.
\hfill \qedsymbol \end{proof} \begin{remark} \normalfont We suspect that Lemma \ref{partition} is a simpler version of results on \textit{Kleinian groups} (see \cite{Kra}). \end{remark} \smallskip To go from the cohomological description of tangent spaces (to the Teichmueller space) to the analytic description, which is given by the space of holomorphic quadratic differentials on $\Sigma_{g}$, we first construct a tangential vector field on the circle $\mathbb{S}^{1}$ (recall \textbf{\cref{vectorfieldsonthecircle}} from \textbf{\cref{analtocohom}}) from a cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathfrak{g})$, where $\mathfrak{g}$ is the Lie algebra of the group of orientation-preserving isometries of $\mathbb{D}$. We use Lemma \ref{partition} to get the following: given any $[c] \in H^{1}(\Gamma; \mathfrak{g})$ we set $$\psi(z)= - \sum_{\gamma \in \Gamma} \varphi(\gamma(z)) c_{\gamma}(z), \quad z \in \mathbb{D}.$$ \begin{lemma}[\cite{Kra}] \label{continuousvector} $\psi$ is a $C^{\infty}$-vector field on $\mathbb{D}$ such that for $A \in \Gamma$, $z \in \mathbb{D}$, \begin{equation} \label{actiononvector1} (A^{\ast}\psi)(z)-\psi(z)= c_{A}(z). \end{equation} \end{lemma} \begin{proof} Recall (\ref{actiononvector}). Considering the left-hand side of (\ref{actiononvector1}), we have \begin{equation*} \begin{split} (A^{\ast}\psi)(z)-\psi(z) & = - \sum_{\gamma \in \Gamma} \bigg( \varphi(\gamma (A z)) c_{\gamma} (Az) A'(z)^{-1} - \varphi(\gamma(z)) c_{\gamma}(z) \bigg) \\ & = - \sum_{\gamma \in \Gamma} \bigg(\varphi(\gamma (A z)) \bigg(c_{\gamma \circ A}(z) - c_{A}(z)\bigg) - \varphi(\gamma(z)) c_{\gamma}(z) \bigg) \\ & = \sum_{\gamma \in \Gamma} \varphi(\gamma (Az)) c_{A}(z) = c_{A}(z). \end{split} \end{equation*} The second equality in the above equation follows from the fact that $c$ is a cocycle.
Therefore, $$\delta \psi=c.$$ \hfill \qedsymbol \end{proof} \begin{remark} \normalfont Let $\mathcal{S}^{\infty}(T \mathbb{D})$ denote the vector space of $C^{\infty}$-vector fields on $\mathbb{D}$. From Lemma \ref{continuousvector}, we have $H^{1}(\Gamma; \mathcal{S}^{\infty}(T \mathbb{D}))= \{0\}$. \end{remark} \begin{coro} \label{continuousvector1} If $\mathrm{HOL}$ is the vector space of holomorphic vector fields on $\mathbb{D}$, then for every cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathrm{HOL})$, there is a $\psi \in \mathcal{S}^{\infty}(T \mathbb{D})$ such that $$c= \delta \psi.$$ \end{coro} \begin{proof} The injection of $\mathrm{HOL}$ into $\mathcal{S}^{\infty}(T \mathbb{D})$ induces a mapping $$H^{1}(\Gamma; \mathrm{HOL}) \longrightarrow H^{1}(\Gamma; \mathcal{S}^{\infty}(T \mathbb{D})).$$ Since $H^{1}(\Gamma; \mathcal{S}^{\infty}(T \mathbb{D}))= \{0\}$, the image of $[c]$ under this mapping vanishes; that is, $c= \delta \psi$ for some $\psi \in \mathcal{S}^{\infty}(T \mathbb{D})$. \hfill \qedsymbol \end{proof} \begin{remark} \label{continuousvector2} \normalfont Corollary \ref{continuousvector1} remains true if we replace $\mathrm{HOL}$ by the vector space of Killing vector fields $\mathfrak{g}$ on $\mathbb{D}$, because $\mathfrak{g} \subset \mathrm{HOL} \subset \mathcal{S}^{\infty}(T\mathbb{D})$. \end{remark} Let $c$ be a $1$-cocycle with values in the vector space $\mathfrak{g}$ of Killing vector fields on $\mathbb{D}$. From \textbf{\cref{chapter3}} and \textbf{\cref{analtocohom}} we know that there exists a harmonic vector field $\chi$ with a tangential $L^{2}$-extension to the boundary circle $\mathbb{S}^{1}$ such that $\delta \chi =c$. From Lemma \ref{continuousvector}, Corollary \ref{continuousvector1}, and Remark \ref{continuousvector2}, we get another $0$-cochain $\psi$ in $\mathcal{S}^{\infty}(T \mathbb{D})$ such that $\delta \psi=c$.
Therefore, $\chi-\psi$ is a $0$-cocycle in $\mathcal{S}^{\infty}(T \mathbb{D})$ and $\chi-\psi$ is invariant under the action of $\Gamma$, i.e., \begin{equation} \label{newvectorfield} \begin{split} (\chi-\psi) & = \gamma^{\ast}(\chi-\psi) \\ & = \big((\chi-\psi)(\gamma)\big)\gamma'^{-1}, \quad \forall \gamma \in \Gamma. \end{split} \end{equation} Hence, being $\Gamma$-invariant, $\chi-\psi$ descends to the compact surface $\mathbb{D} / \Gamma$ and is therefore bounded in the hyperbolic metric on $\mathbb{D}$. \begin{coro} \label{shortcoro} $\psi$ admits an $L^{2}$-extension to the closed unit disk $\overline{\mathbb{D}}$ whose restriction $\psi^{\sharp}$ to the boundary circle $\mathbb{S}^{1}$ is tangential. \end{coro} \begin{remark} \normalfont Note that in Corollary \ref{shortcoro} such an extension is unique and it depends only on $c$, not on the choice of $\varphi$ in Lemma \ref{partition}. \end{remark} \subsection{The Poisson map adapted to vector fields} \label{cohomtoanalsection2} To get a vector field which is harmonic on the interior of $\mathbb{D}$ from a tangential vector field on $\mathbb{S}^{1}$, we first give the reincarnation of \textit{the Poisson integral formula} and then adapt it to the case of vector fields. Recall that the \textit{Dirichlet problem} asks for a harmonic function $F$ on the disk $\mathbb{D}$, given a continuous function $f$ on the boundary circle $\mathbb{S}^{1}$, such that $F$ and $f$ together define a continuous function on the closed disk $\overline{\mathbb{D}}$. The Poisson integral map is an important tool to solve the Dirichlet problem: \begin{equation} \label{thepoissonmap} F(r e^{\iota \theta}) = \frac{1}{2 \pi} \int_{0}^{2 \pi} f(e^{\iota \phi}) \frac{1-r^{2}}{1+r^{2}-2r \cos(\theta-\phi)} d \phi. \end{equation} The term $\frac{1-r^{2}}{1+r^{2}-2r \cos(\theta-\phi)}$ is called the \textit{Poisson kernel} and denoted by $K$. When $z=r e^{\iota \theta}$ and $w=e^{\iota \phi}$, we have \begin{equation} \label{poissonnewform} K(w, z) = \frac{|w|^{2}-|z|^{2}}{|w-z|^{2}} = \Re \bigg( \frac{w+z}{w-z}\bigg).
\end{equation} Note that $K(w, z)$ is defined for $0 \leq |z| < |w| \leq 1$. Assume that $|w|=1$; then $$K(w, z)= \frac{1-|z|^{2}}{|1-z \bar{w}|^{2}},$$ since $|w-z|=|w\bar{w}-z\bar{w}|=|1-z \bar{w}|$. Therefore, $$\frac{1-|z|^{2}}{|1-z \bar{w}|^{2}} = \frac{1-z \bar{z}}{(1-z \bar{w})(1-\bar{z}w)} = \sum_{n=0}^{\infty} \bar{z}^{n} w^{n} + \sum_{n=1}^{\infty} z^{n} \bar{w}^{n}. $$ So, $$K(e^{\iota \phi}, r e^{\iota \theta})= \sum_{n=-\infty}^{\infty} r^{|n|} e^{\iota n(\phi-\theta)} = K_{r}(\phi-\theta).$$ It is obvious that $K$ is a positive function of $w$ and $z$. So, (\ref{thepoissonmap}) can also be written as $$F(re^{\iota \theta}) = \frac{1}{2 \pi} \int_{0}^{2 \pi} K_{r}(\theta-\phi) f(e^{\iota \phi}) d \phi,$$ where $K_{r}(\theta-\phi) = K_{r}(\phi-\theta)$. \subsubsection{Reincarnation of the Poisson integral formula} \label{invariancepoisson} We denote the space of continuous functions on the circle $\mathbb{S}^{1}$ by $C^{0}(\mathbb{S}^{1})$ and the space of continuous functions on the open unit disk $\mathbb{D}$ by $C^{0}(\mathbb{D})$. To construct and characterise the Poisson map $$P:C^{0}(\mathbb{S}^{1}) \longrightarrow C^{0}(\mathbb{D}) $$ given in (\ref{thepoissonmap}), which is continuous with respect to the topology of uniform convergence on both the source and the target space, we first observe that $P(f)(0)$ is nothing but the \textit{normalised Haar integral}\footnote{Let $G$ denote a locally compact group. The real vector space of the real valued continuous functions on $G$ with compact support is denoted by $C_{c}(G)$. The set of nonnegative functions in $C_{c}(G)$ is denoted by $C_{c}^{+}(G)$. A continuous linear functional $I: C_{c}(G) \longrightarrow \mathbb{R}$ is called a \textit{Haar integral} if the following hold: 1) if $f \in C_{c}^{+}(G)$, then $I(f) \geq 0$, 2) if $g \in G$ and $f \in C_{c}(G)$, then $I(gf)=I(f)$, 3) there exists a function $f \in C_{c}^{+}(G)$ with $I(f) > 0$. Note that for $r >0$, $rI$ is again a Haar integral.
For more information, see \cite{kap}, \cite{stro}. } $$\frac{1}{2\pi}\int_{\mathbb{S}^{1}} f.$$ By convention, the integral of the constant function $1$ over $\mathbb{S}^{1}$ is $2\pi$. Therefore, $P(f)(0)$ is linear, positive, continuous, and invariant under the circle group. To obtain the expression for $P(f)(z)$, $z \in \mathbb{D}$, we use the transitivity of the action of $\mathrm{PSU}(1, 1)$ on the open unit disk $\mathbb{D}$, i.e., $P(f)(z)= P(f)(\gamma(0))$ for some $\gamma \in \mathrm{PSU}(1, 1)$ such that $\gamma(0)=z$. Moreover, \begin{equation} \label{reincarnationpoisson} P(f)(z)= P(f)(\gamma(0)) = P(f \cdot \gamma)(0) = \frac{1}{2\pi} \int_{\mathbb{S}^{1}} f \cdot \gamma, \end{equation} where the second equality follows from the fact that the Poisson map $P$ is \textit{ $\mathrm{PSU}(1, 1)$-equivariant}, i.e., $P(f \cdot \gamma) = P(f) \cdot \gamma$, for all $\gamma \in \mathrm{PSU}(1, 1)$ and all $f \in C^{0}(\mathbb{S}^{1})$, where $\cdot$ denotes the action of $\mathrm{PSU}(1, 1)$ on $C^{0}(\mathbb{S}^{1})$ and $C^{0}(\mathbb{D})$ by pre-composition. The condition can also be understood as the following commutative diagram: $$\xymatrix{ C^{0}(\mathbb{S}^{1}) \ar[r]^{P} \ar[d]^{\gamma \cdot} & C^{0}(\mathbb{D}) \ar[d]^{\gamma \cdot} \\ C^{0}(\mathbb{S}^{1}) \ar[r]^{P} & C^{0}(\mathbb{D}) }$$ The $\mathrm{PSU}(1, 1)$-equivariance of the Poisson map follows from the uniqueness of solutions to the Dirichlet problem for Laplace's equation, i.e., for a given $f \in C^{0}(\mathbb{S}^{1})$, the Dirichlet problem for Laplace's equation \begin{equation*} \begin{split} \Delta F & = 0 \hspace{3pt} \text{on} \hspace{3pt} \mathbb{D} \\ F & = f \hspace{3pt} \text{on} \hspace{3pt} \mathbb{S}^{1} \end{split} \end{equation*} has at most one solution $F \in C^{2}(\mathbb{D}) \cap C^{1}(\overline{\mathbb{D}})$. 
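Formula (\ref{reincarnationpoisson}) can be sanity-checked numerically. The sketch below is not part of the text and assumes Python with \texttt{numpy}; it takes the boundary datum $f(e^{\iota \phi}) = \cos(2\phi)$, whose harmonic extension to $\mathbb{D}$ is $\Re(z^{2})$, and compares the Haar-integral expression $\frac{1}{2\pi}\int_{\mathbb{S}^{1}} f \cdot \gamma$ against that extension.

```python
import numpy as np

# Numerical sketch of the reincarnated Poisson formula: for the boundary
# datum f(e^{i phi}) = cos(2 phi) = Re(w^2), the harmonic extension is
# Re(z^2).  We check that the normalised Haar integral of f composed with
# gamma(w) = (w + z)/(w * conj(z) + 1), which sends 0 to z, recovers it.

def f(w):
    return np.real(w ** 2)          # boundary datum on |w| = 1

def reincarnated_poisson(z, n=20000):
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = np.exp(1j * phi)
    gamma_w = (w + z) / (w * np.conj(z) + 1.0)   # gamma(0) = z
    return np.mean(f(gamma_w))                   # (1/2pi) Haar integral

z = 0.3 * np.exp(0.7j)
assert abs(reincarnated_poisson(z) - np.real(z ** 2)) < 1e-8
```

The agreement reflects the mean value property: $F \circ \gamma$ is harmonic on $\mathbb{D}$, so its average over $\mathbb{S}^{1}$ equals $(F \circ \gamma)(0) = F(z)$.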
Transforming $f \in C^{0}(\mathbb{S}^{1})$ by an element $\gamma \in \mathrm{PSU}(1, 1)$ gives us a new harmonic extension $F_{1}$ of $f \cdot \gamma$ on $\mathbb{D}$. From the weak maximum principle applied to the harmonic function $F \circ \gamma-F_{1}$, we have $F\circ \gamma-F_{1} \leq \mathrm{max}_{\mathbb{S}^{1}} (F\circ \gamma-F_{1})=0$. Thus, $F\circ \gamma \leq F_{1}$ on $\mathbb{D}$. Similarly, we get $F_{1} \leq F\circ \gamma$. Therefore, $F \circ \gamma$ and $F_{1}$ coincide. Note that the last equality in (\ref{reincarnationpoisson}) follows from the fact that $P(f)(0)$ is the Haar integral. $P(f)(z)$ is well-defined, i.e., it does not depend on the choice of $\gamma \in \mathrm{PSU}(1, 1)$, and is unique up to a positive scaling factor, because if we take $z$ to be the origin again, then the stabilizer subgroup of $\mathrm{PSU}(1, 1)$ at the origin is the circle group $SO(2)$ and the Haar integral is invariant under rotations. We list the following properties which are satisfied by $P$: \begin{enumerate} \item $P$ is linear, \item $P$ is continuous, \item $P$ is $\mathrm{PSU}(1, 1)$-equivariant. \end{enumerate} \begin{prop} \label{zerotoz} Given a point $z \in \mathbb{D}$, the map $w \longmapsto \frac{w+z}{w\bar{z}+1}$ is a hyperbolic isometry that sends the origin to the point $z$. \end{prop} \begin{proof} We check that indeed $\gamma(0)=z$. Let $\gamma(w)= \frac{w+z}{w\bar{z}+1}$, and let $\gamma(w)=f(w)+\iota g(w)$. By differentiating, we get $\frac{d\gamma(w)}{dw}= \frac{1-|z|^{2}}{(w\bar{z}+1)^{2}}$. 
Observe that $$df(w)^{2}+dg(w)^{2}= (df(w)+\iota dg(w))(df(w)-\iota dg(w)) = d\gamma(w) \overline{d\gamma(w)}.$$ Therefore, \begin{equation*} \resizebox{1.03\hsize}{!}{$ \frac{2 \sqrt{df(w)^{2}+dg(w)^{2}}}{1-f(w)^{2}-g(w)^{2}} = \frac{2\sqrt{d\gamma(w) \overline{d\gamma(w)}}}{1-|\gamma(w)|^{2}} = \frac{2 \sqrt{\frac{d\gamma(w)}{dw} dw \frac{\overline{d\gamma(w)}}{\overline{dw}} \overline{dw}}}{1-|\gamma(w)|^{2}} = \frac{2 \sqrt{ \frac{(1-|z|^{2})^{2}} {(w\bar{z}+1)^{2}(\bar{w}z+1)^{2}} }}{1-|\gamma(w)|^{2}} \sqrt{dx^{2}+dy^{2}}. $} \end{equation*} Simplifying $1-|\gamma(w)|^{2}$ further, we get $$\frac{2 \sqrt{ \frac{(1-|z|^{2})^{2}} {(w\bar{z}+1)^{2}(\bar{w}z+1)^{2}} }}{1-|\gamma(w)|^{2}} = \frac{2}{1-|w|^{2}}.$$ Therefore, $\gamma$ is a hyperbolic isometry. It remains to check that $\gamma$ maps $\mathbb{D}$ to itself. Suppose that $|w| <1$. We want to show that $|\frac{w+z}{w\bar{z}+1}|<1$. This is equivalent to showing that $|w+z| < |w\bar{z}+1|$. Furthermore, it is enough to show that $(w+z)(\bar{w}+\bar{z}) < (w\bar{z}+1)(\bar{w}z+1)$, or equivalently, $w\bar{w} + z \bar{z} < w\bar{w} z \bar{z} +1$, or $(1-w\bar{w})(1-z \bar{z}) >0$, which is true since $1-w\bar{w}$ and $1-z \bar{z}$ are both positive. \hfill \qedsymbol \end{proof} \bigskip We summarize our discussion as follows: \begin{prop} \label{poissonrecons} Every continuous linear map $F: C^{0}(\mathbb{S}^{1}) \longrightarrow C^{0}(\mathbb{D})$ which is $\mathrm{PSU}(1, 1)$-equivariant is a scalar multiple of the continuous linear map $P: C^{0}(\mathbb{S}^{1}) \longrightarrow C^{0}(\mathbb{D})$ given by the following formula \begin{equation*} \label{poissonre} P(f)(z) = \frac{1}{2\pi} \int_{\mathbb{S}^{1}} f \cdot \gamma, \end{equation*} where $\gamma \in \mathrm{PSU}(1, 1)$ is given in Proposition \ref{zerotoz} such that $\gamma(0)=z$ and $f \in C^{0}(\mathbb{S}^{1})$. 
\end{prop} \begin{remark} \label{acharemark} \normalfont Alternatively, we can construct such a linear map $F: C^{0}(\mathbb{S}^{1}) \longrightarrow C^{0}(\mathbb{D})$ in Proposition \ref{poissonrecons} by plugging the Dirac distribution $\delta$ at the point $1 \in \mathbb{S}^{1}$ into the formula for $P$ instead of a continuous function $f$ on the circle $\mathbb{S}^{1}$. We adopt the view that $\delta$ is the limit of step functions $\{\epsilon^{-1} g_{\epsilon}\}$ where $g_{\epsilon}$ is the characteristic function of an arc of length $\epsilon$ centered at $1 \in \mathbb{S}^{1}$. Therefore, we define $\delta \cdot \gamma = \gamma^{\ast}\delta$ to be the Dirac distribution at the point $\gamma^{-1}(1)$ times $\big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big|$. This suggests \begin{equation*} F(\delta)(z) = \frac{1}{2 \pi}\int_{\mathbb{S}^{1}} \delta \cdot \gamma, \end{equation*} where $\gamma(0)=z$ and the explicit form of $\gamma$ is given by Proposition \ref{zerotoz}. Using $\gamma(w) = \frac{w+z}{w \bar{z}+1}$, we see that $$2 \pi \cdot \big(F(\delta)(z)\big) = \big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big| = \frac{1-|z|^{2}}{|1-\bar{z}|^{2}} = \frac{1-|z|^{2}}{|1-z|^{2}}. $$ We denote the real-valued (positive) function $z \longmapsto \frac{1-|z|^{2}}{|1-z|^{2}}$ defined on $\mathbb{D}$ for $0 \leq |z| < 1$ by $K$. The intuition is that $F(\delta)=\frac{K}{2 \pi}$, and therefore we define \begin{equation} \label{newpois} F(f) = f \ast K, \end{equation} where $f \in C^{0}(\mathbb{S}^{1})$ and $K(z)=\frac{1-|z|^{2}}{|1-z|^{2}}$, and $\ast$ denotes the convolution\footnote{The convolution of $K$ and $f$ is defined as: $ (f \ast K)(z):= \frac{1}{2 \pi} \int_{\mathbb{S}^{1}} f(w) K(zw^{-1}) dw.$} of $K$ and $f$. 
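The closed form just obtained lends itself to a quick numerical spot check. The following sketch (an illustration only, not part of the text; Python with \texttt{numpy} assumed) verifies that $\big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big|$ agrees with $K(z)=\frac{1-|z|^{2}}{|1-z|^{2}}$ for the explicit $\gamma$ of Proposition \ref{zerotoz}.

```python
import numpy as np

# Spot check: for gamma(w) = (w + z)/(w * conj(z) + 1) we have
# gamma^{-1}(1) = (1 - z)/(1 - conj(z)) and
# gamma'(w) = (1 - |z|^2)/(w * conj(z) + 1)^2, so
# |(gamma'(gamma^{-1}(1)))^{-1}| should equal (1 - |z|^2)/|1 - z|^2.

def K(z):
    return (1.0 - abs(z) ** 2) / abs(1.0 - z) ** 2

rng = np.random.default_rng(0)
for _ in range(100):
    z = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.7   # point in D
    w0 = (1.0 - z) / (1.0 - np.conj(z))                        # gamma^{-1}(1)
    dgamma = (1.0 - abs(z) ** 2) / (w0 * np.conj(z) + 1.0) ** 2
    assert abs(abs(1.0 / dgamma) - K(z)) < 1e-12
```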
To show the $\mathrm{PSU}(1, 1)$-equivariance, we first note that every element $A \in \mathrm{PSU}(1, 1)$ has a unique expression $A= BC$ where $B \in \mathrm{SO}(2)$ and $C$ is in the two-dimensional subgroup $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ of $\mathrm{PSU}(1, 1)$ consisting of all elements which fix the element $1$ in the boundary circle $\mathbb{S}^{1}$. Also, $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ acts transitively on $\mathbb{D}$. The general form of elements $\gamma$ of the group $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ is given by the following: \begin{equation} \label{gammafix1} \gamma(z)=\frac{az+b}{\bar{b}z+\bar{a}}, \quad |a|^{2}-|b|^{2}=1, \quad a+b = \overline{a+b}. \end{equation} Hence, showing the $\mathrm{PSU}(1, 1)$-equivariance of $F$ is equivalent to showing the $\mathrm{SO}(2)$-equivariance and $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$-equivariance of $F$. It is easy to see that $F$ in (\ref{newpois}) is $\mathrm{SO}(2)$-equivariant. To show the $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$-equivariance of $F$ in (\ref{newpois}), we claim that $\gamma^{\ast}K= cK$, where $c$ is a positive constant and $\gamma \in \mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$. 
We have \begin{equation*} \begin{split} K(\gamma(z)) & = \frac{1-|\gamma(z)|^{2}}{|1-\gamma(z)|^{2}} = \frac{1-\gamma(z)\overline{\gamma(z)}}{\big(1-\gamma(z)\big) \big(1-\overline{\gamma(z)}\big)}\\ & = \frac{1 - \frac{az+b}{\bar{b}z+\bar{a}} \cdot \frac{\bar{a}\bar{z}+\bar{b}}{b\bar{z}+a}} {\bigg(1-\frac{az+b}{\bar{b}z+\bar{a}}\bigg) \cdot \bigg(1-\frac{\bar{a}\bar{z}+\bar{b}}{b\bar{z}+a}\bigg)} = \frac{1 - \frac{|a|^{2}|z|^{2} + az \bar{b}+b\bar{a}\bar{z}+|b|^{2}} {|b|^{2}|z|^{2}+az \bar{b}+b\bar{a}\bar{z}+|a|^{2}}} {\bigg(\frac{(b-\bar{a})\bar{z} - (\bar{b}-a)}{b \bar{z}+a} \bigg) \bigg( \frac{(\bar{b}-a)z - (b-\bar{a})}{\bar{b} z+\bar{a}} \bigg)} \\ & = \frac{1-|z|^{2}} {(b\bar{z}+a) (\bar{b}z+\bar{a})} \cdot \Bigg(\frac{\big(b-\bar{a}\big)^{2} \cdot \big|1-\bar{z}\big|^{2}} {(b\bar{z}+a) (\bar{b}z+\bar{a})}\Bigg)^{-1} = \frac{1-|z|^{2}}{\big(b-\bar{a}\big)^{2} \cdot \big|1-\bar{z}\big|^{2}} \\ & = \frac{\gamma'(1)^{-1} \big(1-|z|^{2}\big)}{\big|1-\bar{z}\big|^{2}} = \gamma'(1)^{-1} K(z), \end{split} \end{equation*} where $\gamma'(1)=\big(\bar{b}+\bar{a}\big)^{-2}$; here, since $a+b$ is real and $|a|^{2}-|b|^{2}=1$, one checks that $(b-\bar{a})(\bar{b}+\bar{a})=-1$, so that $(b-\bar{a})^{-2} = \big(\bar{b}+\bar{a}\big)^{2} = \gamma'(1)^{-1}$. Note that $K$ is the real part of the holomorphic function $\frac{1+z}{1-z}$, hence harmonic. Therefore, $F(f)$ is also harmonic. \end{remark} \begin{coro} The map $F$ in (\ref{newpois}) is the Poisson map given in (\ref{thepoissonmap}). Hence, the map $P$ in Proposition \ref{poissonrecons} lands in the vector space of harmonic functions on the open unit disk $\mathbb{D}$. \end{coro} \bigskip Let $\mathcal{S}_{C^{0}}(T\mathbb{S}^{1})$ be the Banach space of (tangential) continuous vector fields on $\mathbb{S}^{1}$ and $\mathcal{S}_{C^{0}}(T\mathbb{D})$ be the space of continuous vector fields on the open disk $\mathbb{D}$. We want to mimic the reincarnation of the Poisson map in the case of vector fields. 
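The identity $K(\gamma(z)) = \gamma'(1)^{-1} K(z)$ established above can also be tested numerically. In the sketch below (Python with \texttt{numpy} assumed; the parametrisation of $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ by a real pair $(t, \beta)$ is ours, introduced only for the test), note that $\gamma'(1) = (\bar{b}+\bar{a})^{-2} = t^{-2}$.

```python
import numpy as np

# Check K(gamma(z)) = gamma'(1)^{-1} K(z) for gamma in Stab_{PSU(1,1)}(1).
# Hypothetical parametrisation: for real t != 0 and real beta, set
#   a = (1 + t^2)/(2t) + i*beta,   b = t - a.
# Then |a|^2 - |b|^2 = 1 and a + b = t is real, so gamma fixes 1,
# and gamma'(1) = (conj(b) + conj(a))^{-2} = t^{-2}.

def K(z):
    return (1.0 - abs(z) ** 2) / abs(1.0 - z) ** 2

rng = np.random.default_rng(1)
for _ in range(100):
    t = rng.uniform(0.5, 2.0)
    beta = rng.uniform(-1.0, 1.0)
    a = (1.0 + t * t) / (2.0 * t) + 1j * beta
    b = t - a
    z = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.6   # point in D
    gz = (a * z + b) / (np.conj(b) * z + np.conj(a))           # gamma(z)
    assert abs(K(gz) - t * t * K(z)) < 1e-9                    # gamma'(1)^{-1} = t^2
```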
\begin{prop} \label{so2linear} Every continuous and $\mathrm{SO}(2)$-equivariant linear functional $\Lambda$ from the real Banach space of continuous tangential vector fields on $\mathbb{S}^{1}$ to $\mathbb{C}$ has the following form: $$\Lambda(X)= \bigg(\int_{\mathbb{S}^{1}} X\bigg) \cdot v, $$ where $X$ is a tangential vector field on $\mathbb{S}^{1}$ and $v \in \mathbb{C}$. \end{prop} \begin{prop} \label{prop412} Every continuous linear map $$\mathcal{F}: \mathcal{S}_{C^{0}}(T\mathbb{S}^{1}) \longrightarrow \mathcal{S}_{C^{0}}(T\mathbb{D})$$ which is equivariant under the action of $\mathrm{PSU}(1,1)$ is a scalar multiple of the continuous linear map $$\mathcal{P}: \mathcal{S}_{C^{0}}(T\mathbb{S}^{1}) \longrightarrow \mathcal{S}_{C^{0}}(T\mathbb{D})$$ given by the following formula \begin{equation} \label{poissonvector} \mathcal{P}(X)(z)=\mathcal{P}(X)(\gamma(0)) = \gamma'(0) \cdot \Big(\mathcal{P}(\gamma^{\ast}X)(0) \Big) = \gamma'(0) \cdot \bigg(\frac{1}{2\pi}\int_{\mathbb{S}^{1}} \gamma^{\ast}X\bigg), \end{equation} for some $\gamma \in \mathrm{PSU}(1, 1)$ such that $\gamma(0)=z$. \end{prop} \begin{remark} \normalfont The third equality in the expression of $\mathcal{P}(X)(z)$ in (\ref{poissonvector}) follows from Proposition \ref{so2linear}. \end{remark} \begin{remark} \normalfont The scalar in Proposition \ref{prop412} can be any complex number. Also, note that $\mathcal{P}(X)(0) \in T_{0}\mathbb{D}$ and the second equality in (\ref{poissonvector}) follows from the $\mathrm{PSU}(1,1)$-equivariance of $\mathcal{P}$, i.e., $$\mathcal{P}(\gamma^{\ast}X)=\gamma^{\ast}(\mathcal{P}(X)), \quad \forall \gamma \in \mathrm{PSU}(1, 1),$$ where $\gamma^{\ast}X=X(\gamma)\gamma'^{-1}, \gamma \in \mathrm{PSU}(1,1), X \in \mathcal{S}_{C^{0}}(T\mathbb{S}^{1})$. 
\end{remark} \begin{remark} \normalfont We can also construct such a linear map $$\mathcal{F}: \mathcal{S}_{C^{0}}(T\mathbb{S}^{1}) \longrightarrow \mathcal{S}_{C^{0}}(T\mathbb{D})$$ in Proposition \ref{prop412} by plugging the \textit{Dirac vector field} $\boldsymbol \delta$ in the formula for $\mathcal{P}$ instead of a tangential vector field $X$ on the circle $\mathbb{S}^{1}$. We adopt the view that $\boldsymbol \delta$ is the limit of vector fields $\{\epsilon^{-1} g_{\epsilon}\}$ where $g_{\epsilon}$ is the norm $1$ (positively oriented) tangential vector field supported on an arc of length $\epsilon$ centered at $1 \in \mathbb{S}^{1}$. Therefore, we have \begin{equation} \label{pullbackdirac} \mathcal{F}(\boldsymbol \delta)(z) = \mathcal{F}(\boldsymbol \delta)(\gamma(0)) = \gamma'(0) \cdot \Big(\mathcal{F}\big(\gamma^{\ast} \boldsymbol \delta \big)(0)\Big) = \gamma'(0) \cdot \bigg(\frac{1}{2\pi}\int_{\mathbb{S}^{1}} \gamma^{\ast} \boldsymbol \delta \bigg), \end{equation} where $\gamma(0)=z$ and the explicit form of $\gamma$ is given by Proposition \ref{zerotoz}. Equation (\ref{pullbackdirac}) simplifies to \begin{equation} \label{pullbackdirac1} 2 \pi \cdot \big(\mathcal{F}(\boldsymbol \delta)(z)\big) = \gamma'(0) \cdot \Big(\iota \cdot \big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big| \cdot \big(\gamma'(\gamma^{-1}(1))\big)^{-1}\Big). \end{equation} Observe that the factor $\big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big|$ accounts for the stretching of the arc length when we pull back the Dirac vector field under $\gamma$ and the factor $\big(\gamma'(\gamma^{-1}(1))\big)^{-1}$ accounts for the stretching of vectors. Using $\gamma(w) = \frac{w+z}{w \bar{z}+1}$, the expression $$\gamma'(0) \cdot \Big(\iota \cdot \big|\big(\gamma'(\gamma^{-1}(1))\big)^{-1}\big| \cdot \big(\gamma'(\gamma^{-1}(1))\big)^{-1}\Big)$$ in (\ref{pullbackdirac1}) simplifies to \begin{equation} \label{finalharmonic} \frac{\iota (1-|z|^{2})^{3}}{|1-\bar{z}|^{2} \cdot (1-\bar{z})^{2}}. 
\end{equation} We call the vector field given by (\ref{finalharmonic}) the \textit{Poisson kernel vector field} and denote it by $\textbf{K}$. By definition, $\mathcal{F}(\boldsymbol \delta) = \frac{1}{2\pi}\textbf{K}$. Let $X$ be a tangential vector field on the boundary circle $\mathbb{S}^{1}$ of the form $f Y$ where $f$ is a real-valued continuous function on the boundary and $Y$ is the norm $1$ tangential vector field on $\mathbb{S}^{1}$ given by $z \longmapsto \iota z$. From the above discussion, a vector field $\mathcal{F}(X)$ on $\mathbb{D}$ is given by the convolution of the Poisson kernel vector field $\textbf{K}$ with a given function $f$ on $\mathbb{S}^{1}$, i.e., \begin{equation} \label{finals} \mathcal{F}(X) = f \ast \textbf{K}. \end{equation} \end{remark} \begin{prop} \label{propcon} The map $\mathcal{F}$ in (\ref{finals}) satisfies the conditions of the map $\mathcal{F}$ in Proposition \ref{prop412}. \end{prop} Before we prove Proposition \ref{propcon}, we state and prove the following: \begin{theorem} \label{kernelvectorfieldisharmonic} The Poisson kernel vector field $\textbf{K}$ given by (\ref{finalharmonic}) is harmonic at every point $z \in \mathbb{D}$. \end{theorem} \begin{proof} Recall Theorem \ref{thm2} in \textbf{\cref{chapter3section2}} in \textbf{\cref{chapter3}} where we show that a vector field $\xi$ on $\mathbb{D}$ is harmonic iff the quadratic differential $(L_{\xi}\textbf{g}_{\mathbb{D}})^{(2, 0)}$ associated with it is holomorphic. We first prove that $\textbf{K}$ is harmonic at the origin in $\mathbb{D}$. 
We write the Taylor approximation of $\textbf{K}$ up to the second order at the origin as follows: \begin{equation} \label{finalharmonic1} \begin{split} \textbf{K}(z) & = \frac{\iota (1-|z|^{2})^{3}}{|1-\bar{z}|^{2} \cdot (1-\bar{z})^{2}} \\ & = \frac{\iota (1-|z|^{2})^{3} }{ (1-\bar{z}) \overline{(1-\bar{z})} (1-\bar{z})^{2}} \\ & = \iota (1-|z|^{2})^{3} (1- \bar{z})^{-3} (1-z)^{-1} \\ & \approx \iota(1-3|z|^{2})(1+\bar{z}+\bar{z}^{2})^{3} (1+z+z^{2}) \\ & \approx \iota(1-3|z|^{2}) (1+3 \bar{z} + 6 \bar{z}^{2})(1+z+z^{2}) \\ & \approx \iota(1+ 3 \bar{z} + 6 \bar{z}^{2} - 3|z|^{2} ) (1+z+z^{2}) \\ & \approx \iota(1 + 3 \bar{z} + 6 \bar{z}^{2} - 3|z|^{2} + z + 3|z|^{2} + z^{2}) \\ & = \iota (1 + z + 3 \bar{z} + z^{2} + 6 \bar{z}^{2}) \\ & = \iota \big(1+ (x+ \iota y) + 3 (x- \iota y) + x^{2}-y^{2} + 2 \iota xy + 6 (x^{2}-y^{2}) - 12 \iota xy \big) \\ & = \iota (1+ 4x -2 \iota y + 7x^{2}-7y^{2} - 10 \iota xy) \\ & = (2y+10xy, 1+4x+7x^{2}-7y^{2}). \end{split} \end{equation} Note that the metric $\textbf{g}_{\mathbb{D}}$ at the origin does not change. Following the criterion for harmonicity of a vector field from \textbf{\cref{chapter3section2}} in \textbf{\cref{chapter3}}, we notice that the quadratic differential $q$ associated to $\textbf{K}$ is given as $(6\iota-24 \iota z) dz^{2}$. The function $f(z)= 6\iota-24 \iota z$ is holomorphic. Hence, $\textbf{K}$ is harmonic at the origin in $\mathbb{D}$. Now, we claim that when the vector field $\textbf{K}$ is transformed using elements $\gamma \in \mathrm{PSU}(1,1)$ which fix the element $1$ in the boundary circle $\mathbb{S}^{1}$, it changes only by multiplication by a non-zero real constant. \paragraph{\textit{Proof of the claim:}} Recall the general form of elements $\gamma$ of the group $\mathrm{PSU}(1, 1)$ which fix the element $1$ in the boundary circle $\mathbb{S}^{1}$ given by (\ref{gammafix1}) in Remark \ref{acharemark}. 
Now, $\gamma$ acts on $\textbf{K}$ in the usual way: \begin{equation} \label{actiononkernelvector} \begin{split} \gamma^{\ast}\textbf{K} & = \textbf{K}(\gamma(z))\gamma'(z)^{-1} \\ & = \frac{\iota \big(1-|\gamma(z)|^{2}\big)^{3}}{\big|1-\overline{\gamma(z)}\big|^{2} \cdot \big(1-\overline{\gamma(z)}\big)^{2}} \cdot \gamma'(z)^{-1}. \end{split} \end{equation} Using (\ref{gammafix1}), the numerator and the denominator of the term $ \frac{\iota \big(1-|\gamma(z)|^{2}\big)^{3}}{\big|1-\overline{\gamma(z)}\big|^{2} \cdot \big(1-\overline{\gamma(z)}\big)^{2}}$ in the R.H.S of (\ref{actiononkernelvector}) are explicitly written as: \begin{equation} \label{numerator} \begin{split} \iota \big(1-|\gamma(z)|^{2}\big)^{3} & = \iota \bigg(1- \gamma(z) \overline{\gamma(z)} \bigg)^{3} \\ & = \iota \bigg(1 - \frac{az+b}{\bar{b}z+\bar{a}} \cdot \frac{\bar{a}\bar{z}+\bar{b}}{b\bar{z}+a} \bigg)^{3} = \iota \bigg(1 - \frac{|a|^{2}|z|^{2} + az \bar{b}+b\bar{a}\bar{z}+|b|^{2}} {|b|^{2}|z|^{2}+az \bar{b}+b\bar{a}\bar{z}+|a|^{2}} \bigg)^{3} \\ & = \iota \bigg( \frac{1-|z|^{2}} {|b|^{2}|z|^{2}+az \bar{b}+b\bar{a}\bar{z}+|a|^{2}}\bigg)^{3} = \frac{\iota \big(1-|z|^{2}\big)^{3}}{\big((\bar{b}z+\bar{a})(b\bar{z}+a)\big)^{3}} \end{split} \end{equation} and \begin{equation} \label{denominator} \begin{split} \big|1-\overline{\gamma(z)}\big|^{2} \cdot \big(1-\overline{\gamma(z)}\big)^{2} & = \big(1-\overline{\gamma(z)}\big) \overline{\big( 1-\overline{\gamma(z)}\big)} \cdot \big(1-\overline{\gamma(z)}\big)^{2} \\ & = \big(1-\overline{\gamma(z)}\big) \big(1-\gamma(z)\big) \cdot \big(1-\overline{\gamma(z)}\big)^{2} \\ & = \bigg(\frac{(b-\bar{a})\bar{z} - (\bar{b}-a)}{b \bar{z}+a} \bigg) \bigg( \frac{(\bar{b}-a)z - (b-\bar{a})}{\bar{b} z+\bar{a}} \bigg) \bigg(\frac{(b-\bar{a})\bar{z} - (\bar{b}-a)}{b \bar{z}+a} \bigg)^{2} \\ & = \frac{\big(b-\bar{a}\big)^{2} \cdot \big|1-\bar{z}\big|^{2}} {(b\bar{z}+a) (\bar{b}z+\bar{a})} \cdot \frac{\big(b-\bar{a}\big)^{2}\big(1-\bar{z}\big)^{2}}{\big(b 
\bar{z}+a\big)^{2}} = \frac{\big(b-\bar{a}\big)^{4} \big|1-\bar{z}\big|^{2} \big(1-\bar{z}\big)^{2} } { (b\bar{z}+a)^{3} (\bar{b}z+\bar{a})}, \end{split} \end{equation} where in the last two equalities in (\ref{denominator}) we have used the fact that $b-\bar{a}$ is real, i.e., $b-\bar{a}=\bar{b}-a$. Also, $\gamma'(z)^{-1}= \big(\bar{b} z+\bar{a}\big)^{2}$. Using (\ref{numerator}) and (\ref{denominator}), the explicit form of the R.H.S of (\ref{actiononkernelvector}) is \begin{equation*} \begin{split} \frac{\iota \big(1-|\gamma(z)|^{2}\big)^{3}}{\big|1-\overline{\gamma(z)}\big|^{2} \cdot \big(1-\overline{\gamma(z)}\big)^{2}} \cdot \gamma'(z)^{-1} & = \frac{\iota \big(1-|z|^{2}\big)^{3} (b\bar{z}+a)^{3} (\bar{b}z+\bar{a})} {\big((\bar{b}z+\bar{a})(b\bar{z}+a)\big)^{3} \big(b-\bar{a}\big)^{4} \big|1-\bar{z}\big|^{2} \big(1-\bar{z}\big)^{2}} \cdot \big(\bar{b} z+\bar{a}\big)^{2} \\ & = \frac{1}{\big(b-\bar{a}\big)^{4}} \cdot \frac{\iota (1-|z|^{2})^{3}}{|1-\bar{z}|^{2} \cdot (1-\bar{z})^{2}} \\ & = \frac{1}{\big(b-\bar{a}\big)^{4}} \textbf{K}(z) = (\gamma'(1))^{-2} \textbf{K}(z). \end{split} \end{equation*} As mentioned in Remark \ref{acharemark}, every element $A \in \mathrm{PSU}(1, 1)$ has a unique expression $A= BC$ where $B \in \mathrm{SO}(2)$ and $C$ is in the two-dimensional subgroup $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ of $\mathrm{PSU}(1, 1)$ consisting of all elements which fix the element $1$ in the boundary circle $\mathbb{S}^{1}$. Therefore, $\textbf{K}$ is $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$-invariant up to multiplication by real scalars. Note that the harmonicity of a vector field on the open unit disk $\mathbb{D}$ is preserved by conformal automorphisms of $\mathbb{D}$. Hence, $\textbf{K}$ is harmonic everywhere on the open unit disk $\mathbb{D}$. 
\hfill \qedsymbol \end{proof} \begin{remark} \normalfont $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$-invariance of $\textbf{K}$ up to multiplication by real scalars suffices to ensure that $\textbf{K}$ is harmonic on the open unit disk $\mathbb{D}$ because $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$ acts transitively on the open unit disk $\mathbb{D}$. \end{remark} \begin{remark} \normalfont Since the Poisson kernel vector field $\textbf{K}$ is harmonic, $\mathcal{F}(X)$ given by (\ref{finals}) is also harmonic on $\mathbb{D}$, where $X$ is a tangential vector field on $\mathbb{S}^{1}$. \end{remark} \bigskip \paragraph{\textit{Proof of Proposition \ref{propcon}:}} The map $\mathcal{F}$, given by (\ref{finals}), is clearly $\mathrm{PSU}(1, 1)$-equivariant. This follows from $\mathrm{Stab}_{\mathrm{PSU}(1, 1)}(1)$-invariance of $\textbf{K}$ up to multiplication by real scalars (see the proof of Theorem \ref{kernelvectorfieldisharmonic}). Hence, it immediately follows that $\mathcal{F}$ satisfies all the conditions stated in Proposition \ref{prop412}. \hfill \qedsymbol \begin{coro} \label{coro4213} The map $\mathcal{F}$, given by (\ref{finals}), is the same as the map $\mathcal{P}$ in Proposition \ref{prop412}. Hence, the map $\mathcal{P}$ in Proposition \ref{prop412} lands in the vector space of harmonic vector fields on the open unit disk $\mathbb{D}$. \end{coro} \begin{lemma}\label{finaltheorem} For a continuous tangential vector field $X$ on $\mathbb{S}^{1}$, $\mathcal{F}(X)$ and $X$ together make up a continuous vector field on the closed unit disk $\overline{\mathbb{D}}$. \end{lemma} \begin{proof}[Sketch] For every $\epsilon > 0$, we get a continuous vector field $\textbf{K}_{1-\epsilon}$ on $\mathbb{S}^{1}$ by composing $\textbf{K}$ with the map $z \longmapsto (1-\epsilon)z$. 
We first notice that \begin{equation} \label{diractvector} \textbf{K}_{1-\epsilon}(z) = \frac{\iota \big(1-|(1-\epsilon)z|^{2}\big)^{3}} {\big(1-(1-\epsilon)\bar{z}\big)^{3} \cdot \big(1-(1-\epsilon)z\big)}, \end{equation} where $|z|=1$. Simplifying (\ref{diractvector}), we get \begin{equation} \label{diractvector1} \textbf{K}_{1-\epsilon}(z) \approx \frac{\iota 8 \epsilon^{3} z^{3}}{\big(1-(1-\epsilon)z\big) \cdot \big(z-(1-\epsilon) \big)^{3}}, \end{equation} where we used the fact that $\bar{z}=z^{-1}$. We put $1-\epsilon =s$ in (\ref{diractvector1}) and get \begin{equation*} \label{diractvector2} \textbf{K}_{1-\epsilon}(z) \approx \frac{\iota 8 \epsilon^{3} z^{3}}{(1-sz) \cdot (z-s)^{3}}. \end{equation*} Let $\lambda_{z}=|z-(1-\epsilon)|$. Notice that \begin{equation} \label{estimatedelta} |\textbf{K}_{1-\epsilon}(z)| \leq \frac{8}{\epsilon}. \end{equation} The estimate in (\ref{estimatedelta}) is independent of $z$. Next, we derive an upper bound for $|\textbf{K}_{1-\epsilon}(z)|$ that depends on $z$. The following estimate for $|\textbf{K}_{1-\epsilon}(z)|$ ensures that $\textbf{K}_{1-\epsilon}$ is ``very small'' outside the arc of length $\sqrt{2\epsilon}$ centered at $1$. \begin{equation} \label{estimatedelta1} |\textbf{K}_{1-\epsilon}(z)| \leq \frac{8 \epsilon^{3}}{\lambda_{z}^{4}}. \end{equation} (\ref{estimatedelta}) and (\ref{estimatedelta1}) have the following two consequences: \begin{enumerate} \item if $X=fY$, where $f$ is a real-valued continuous function on $\mathbb{S}^{1}$ and $Y$ is the norm $1$ continuous tangential vector field on $\mathbb{S}^{1}$, we have \begin{equation} \label{estimatedelta2} (f \ast \textbf{K}_{1-\epsilon})(z) \approx f(z) \cdot \bigg(\frac{1}{2 \pi} \int_{\mathbb{S}^{1}} \textbf{K}_{1-\epsilon} \bigg), \quad z \in \mathbb{S}^{1}. \end{equation} Therefore, it is enough to show that $$ \lim_{\epsilon \rightarrow 0}\bigg(\frac{1}{2 \pi} \int_{\mathbb{S}^{1}} \textbf{K}_{1-\epsilon} \bigg) = \iota. 
$$ \item we may replace the ordinary Haar integral by the complex path integral at the price of dividing by $\iota z$; since the mass of $\textbf{K}_{1-\epsilon}$ concentrates at $z=1$, the extra factor of $z$ tends to $1$ and does not affect the limit, so we suppress it below. \end{enumerate} Therefore, \begin{equation} \label{diractvector3} \begin{split} \lim_{\epsilon \rightarrow 0} \Bigg(\frac{1}{2\pi \iota} \int_{\mathbb{S}^{1}} \frac{\iota 8 \epsilon^{3} z^{3}}{\big(1-sz\big) \cdot \big(z-s \big)^{3}} dz \Bigg) & = \lim_{\epsilon \rightarrow 0} \Bigg( \frac{\iota 8 \epsilon^{3}}{2 \pi \iota} \int_{\mathbb{S}^{1}} \frac{ z^{3}}{\big(1-sz\big) \cdot \big(z-s \big)^{3}} dz \Bigg) \\ & = \lim_{\epsilon \rightarrow 0} \bigg( \frac{\iota 8 \epsilon^{3}}{2 \pi \iota} \big( 2 \pi \iota \cdot \mathrm{Res}(f, s) \big)\bigg), \end{split} \end{equation} where $f(z) = \frac{z^{3}}{(1-sz) \cdot (z-s)^{3}}$, and $$\mathrm{Res}(f, s) = \frac{6s-12s^{3}+8s^{5}-2s^{7}}{2(1-s^{2})^{4}} =\frac{4 \epsilon + 2 \epsilon^{2}+2\epsilon^{3}- 30 \epsilon^{4}+34\epsilon^{5}- 14 \epsilon^{6}+2 \epsilon^{7}} {2\big(16 \epsilon^{4}-32 \epsilon^{5}+24 \epsilon^{6}-8 \epsilon^{7}+\epsilon^{8}\big)}.$$ Rewriting (\ref{diractvector3}), we get \begin{equation*} \begin{split} \lim_{\epsilon \rightarrow 0} \bigg( \frac{\iota 8 \epsilon^{3}}{2 \pi \iota} \Big( 2 \pi \iota \cdot \mathrm{Res}(f, s) \Big)\bigg) & = \lim_{\epsilon \rightarrow 0} \Bigg(8 \iota \cdot \frac{4 \epsilon + 2 \epsilon^{2}+2\epsilon^{3}- 30 \epsilon^{4}+34\epsilon^{5}- 14 \epsilon^{6}+2 \epsilon^{7}} {2\big(16 \epsilon-32 \epsilon^{2}+24 \epsilon^{3}-8 \epsilon^{4}+\epsilon^{5}\big)} \Bigg) \\ & = 4 \iota \Bigg( \lim_{\epsilon \rightarrow 0} \frac{4 \epsilon + 2 \epsilon^{2}+2\epsilon^{3}- 30 \epsilon^{4}+34\epsilon^{5}- 14 \epsilon^{6}+2 \epsilon^{7}} {16 \epsilon-32 \epsilon^{2}+24 \epsilon^{3}-8 \epsilon^{4}+\epsilon^{5}} \Bigg) \\ & = \iota. 
\end{split} \end{equation*} \hfill \qedsymbol \end{proof} \begin{coro} \label{lastcoro} For an $L^{2}$-tangential vector field $X$ on $\mathbb{S}^{1}$, $X$ is an $L^{2}$-boundary extension of the smooth vector field $\mathcal{F}(X)$ on the open unit disk $\mathbb{D}$. \end{coro} \begin{proof} Notice that in the proof of Lemma \ref{finaltheorem}, we showed that $$\lim_{\epsilon \rightarrow 0} \textbf{K}_{1-\epsilon} = 2 \pi \boldsymbol \delta.$$ Hence, Corollary \ref{lastcoro} follows from Lemma \ref{finaltheorem} and \cite[Proposition 5.4]{shubin}. \hfill \qedsymbol \end{proof} \begin{remark} \normalfont We suspect that Corollary \ref{lastcoro} is an infinitesimal version of the problem of finding harmonic extensions of quasiconformal maps (from $\mathbb{S}^{1}$ to itself) to the open unit disk $\mathbb{D}$ or the upper half plane $\mathbb{H}^{2}$. See \cite{hardt} for more details. \end{remark} \subsection{A detailed map from $H^{1}(\Gamma; \mathfrak{g})$ to $\mathrm{HQD}(\mathbb{D}, \Gamma)$} Here we summarize the main consequences of \textbf{\cref{cohomtoanalsection1}} and \textbf{\cref{cohomtoanalsection2}}. \begin{theorem} \label{lastsectiontheo} Let $\Gamma$ be a discrete cocompact subgroup of $\mathrm{PSU}(1, 1)$. For every cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathfrak{g})$, there exists a smooth vector field $\psi$ on the open unit disk $\mathbb{D}$ such that $c= \delta \psi$. Moreover, any such $\psi$ admits an $L^{2}$-extension to $\overline{\mathbb{D}}$ whose restriction $\psi^{\sharp}$ to the boundary circle $\mathbb{S}^{1}$ is tangential. 
There exists a homomorphism \begin{equation} \label{mainmap2} \begin{split} \varPsi: H^{1}(\Gamma; \mathfrak{g}) & \longrightarrow \mathrm{HQD}(\mathbb{D}, \Gamma) \\ [c] & \longmapsto \big(\mathcal{L}_{\mathcal{F}(\psi^{\sharp})}\textbf{g}_{\mathbb{D}} \big)^{(2, 0)}, \end{split} \end{equation} where the map $\mathcal{F}$ is introduced in (\ref{finals}) and $\mathcal{F}(\psi^{\sharp})$ is a harmonic vector field on the open disk $\mathbb{D}$. \end{theorem} \begin{coro} \label{lastsectioncorol} $$ \varPhi \circ \varPsi = \mathrm{Id},$$ where the map $\varPhi$ is constructed in (\ref{mainmap1}) (see Corollary \ref{onewaymap}) and the map $\varPsi$ in (\ref{mainmap2}) (see Theorem \ref{lastsectiontheo}). \end{coro} \begin{proof}[Sketch] Recall from Corollary \ref{shortcoro} (and \textbf{\cref{cohomtoanalsection1}}) that given a cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathfrak{g})$, there exists a smooth vector field $\psi$ on the open unit disk $\mathbb{D}$ such that $c= \delta \psi$ and $\psi$ admits a unique $L^{2}$-extension to $\overline{\mathbb{D}}$. We denote the restriction of that extension to the boundary circle $\mathbb{S}^{1}$ by $\psi^{\sharp}$, and $\psi^{\sharp}$ is tangential. Note that $\delta \psi^{\sharp} = c^{\sharp}$, where $c^{\sharp}$ is a $1$-cocycle (determined by $c$) with values in the vector space of Killing vector fields on $\mathbb{S}^{1}$. The map $\mathcal{F}$ maps Killing vector fields on $\mathbb{S}^{1}$ to Killing vector fields on the open unit disk $\mathbb{D}$. Therefore, it is clear that $\delta \mathcal{F}(\psi^{\sharp}) =c$ and $\mathcal{F}(\delta \psi^{\sharp}(\gamma)) = c^{\sharp}(\gamma)$, for every $\gamma \in \Gamma$. 
\hfill \qedsymbol \end{proof} \subsection{Open Problems} We state the following non-exhaustive list of open problems based on this section: \medskip From Corollary \ref{shortcoro} we know that given a cocycle $c$ representing a cohomology class $[c] \in H^{1}(\Gamma; \mathfrak{g})$, there exists a smooth vector field $\psi$ on the open unit disk $\mathbb{D}$ such that $c= \delta \psi$ and $\psi$ admits a unique $L^{2}$-extension to the closed unit disk $\overline{\mathbb{D}}$ whose restriction $\psi^{\sharp}$ to the boundary circle $\mathbb{S}^{1}$ is tangential. For the construction of $\psi$ we can either use the $\Gamma$-invariant partition of unity method or the difficult theory of \textbf{\cref{chapter3}} and \textbf{\cref{analtocohom}} which produces a harmonic solution. Corollary \ref{shortcoro} is valid for all of these but the construction of an $L^{2}$-extension of $\psi$ to $\overline{\mathbb{D}}$ relies on the existence of harmonic vector fields. \begin{problem} \label{openprob1} Is there a more direct way of proving Corollary \ref{shortcoro} which does not take harmonicity into account? \end{problem} \medskip In \textbf{\cref{invariancepoisson}}, we have not shown that there exists a \textit{unique} harmonic extension of a tangential $L^{2}$-vector field $X$ on $\mathbb{S}^{1}$ to the closed unit disk $\overline{\mathbb{D}}$. \begin{problem} \label{openprob2} Given a tangential $L^{2}$-vector field $X$ on the boundary circle $\mathbb{S}^{1}$, does there exist a unique harmonic extension to the closed unit disk $\overline{\mathbb{D}}$? \end{problem} \section{Application: a connection on the universal Teichm\"uller curve} \label{connectionuniversal} \iffalse Recall that with the notion of a connection on a vector bundle $E \longrightarrow M$, we decode the meaning of the directional derivative $ds(p)X$ (also known as the covariant derivative), where $s:M \longrightarrow E$ is a section of the vector bundle $E$ and $X \in T_{p}M$ and $p \in M$. 
Formally speaking, a connection is a linear map $\nabla$ from the space $\mathcal{S}(E)$ of sections of $E$ to the space $\mathcal{S}(M, \mathrm{Hom}(TM, E))$ of $E$-valued $1$-forms, which satisfies the \textit{Leibniz rule}. See \cite{morris} for more details. Another interesting definition of a connection surfaces when we consider an arbitrary fiber bundle $E' \longrightarrow M$ because, in this case, we have $\nabla s(p) X \in T_{s(p)} E'_{p} \subset T_{s(p)}E'$, where $s$ is a section of the fiber bundle $E'$ and $X \in T_{p}M$. In fact, $\nabla s(p) X \in (T_{v} E')_{s(p)}$, where $T_{v}E'$ denotes the vertical tangent bundle of $E'$. This observation inspires the following definition of a connection on a fiber bundle: \fi \begin{defn}[Ehresmann's definition] \label{connectionnew} \normalfont A connection on a smooth fiber bundle $f: E \longrightarrow M$ is a smooth vector subbundle $T_{h}E$ (the horizontal tangent bundle) of the tangent bundle $TE \longrightarrow E$ such that $T_{h}E \oplus T_{v}E = TE$. \end{defn} \begin{remark} \normalfont According to \cite[Note 1, Page 287]{kob}, Definition \ref{connectionnew} of a connection on a smooth fiber bundle $f: E \longrightarrow M$ was given for the first time in \cite{ehres}. \end{remark} \begin{remark} \label{connectionnew1} \normalfont Equivalently, a connection on a smooth fiber bundle $f: E \longrightarrow M$ is a smooth vector bundle homomorphism $f^{\ast}TM \longrightarrow TE$ such that the composition $$f^{\ast}TM \longrightarrow TE \longrightarrow TE/T_{v}E $$ is the identity. \end{remark} \begin{remark} \label{remarkconnectionnew} \normalfont Consequently, the difference of two connections on a smooth fiber bundle $f: E \longrightarrow M$ is a vector bundle homomorphism $f^{\ast}TM \longrightarrow T_{v}E$. 
In particular, if $E$ is a product, i.e., $E= M \times F$, and $f$ is the projection, then there is a preferred choice of connection, and any other connection on this trivial bundle is described by a vector field on the fiber $F$ for every tangent vector $X \in T_{p}M$, where $p \in M$. \end{remark} \medskip With the help of Definition \ref{connectionnew} and Remark \ref{remarkconnectionnew}, we will describe a connection on the \textit{universal Teichm\"uller curve}. Associated to the $\mathrm{PSL}(2, \mathbb{R})$-principal bundle $$ \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \longrightarrow \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R})$$ we have the following smooth fiber bundle \footnote{Actually, $\pi$ is a proper submersion and from the Ehresmann fibration theorem, it follows that $\pi$ is a smooth fiber bundle.} known as the universal Teichm\"uller curve \begin{equation} \label{universalbundle} \pi: \Gamma_{g} \big \backslash \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \times_{\mathrm{PSL}(2, \mathbb{R})} \mathbb{H}^{2} \longrightarrow \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R}) \end{equation} with the fiberwise $\Gamma_{g}$-action (which is free and proper), given by $$\gamma \cdot [\rho, z] = [\rho, \rho(\gamma)z], \quad \forall \gamma \in \Gamma_{g}.$$ The kernel of the map $d\pi$ gives us a line bundle over the total space of the bundle $\pi$ given in (\ref{universalbundle}).
To make the process of describing a connection on the universal Teichm\"uller curve clearer and more digestible to the reader, we first restrict our attention to the trivial bundle $$ \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \times \mathbb{H}^{2} \longrightarrow \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})).$$ From Remark \ref{remarkconnectionnew}, we know that to describe a connection on the trivial bundle \begin{equation} \label{trivialbundle} \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \times \mathbb{H}^{2} \longrightarrow \mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) \end{equation} we need to describe a vector field $\mathfrak{Y}$ on $\mathbb{H}^{2}$ for every $1$-cocycle $$c \in T_{\rho}(\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R}))) \cong Z^{1}(\Gamma_{g}; \mathfrak{g}_{\mathrm{Ad}\rho}).$$ But there is more to it than meets the eye. We need to describe a connection that respects the $\Gamma_{g}$-action on each fiber of the bundle given in (\ref{trivialbundle}). This requirement translates to the following condition on $\mathfrak{Y}$: $$\delta \mathfrak{Y} =c, $$ where $\delta$ is the coboundary operator. \medskip Therefore, the description of a connection on the universal Teichm\"uller curve given in (\ref{universalbundle}) is equivalent to the description of a vector field $\mathfrak{Y}$ on $\mathbb{H}^{2}$ (or on $\mathbb{D}$) for every $c \in T_{\rho}(\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})))$ representing the cohomology class $[c] \in T_{[\rho]} (\mathrm{Hom}_{0}(\Gamma_{g}, \mathrm{PSL}(2, \mathbb{R})) / \mathrm{PSL}(2, \mathbb{R}))$ such that $$\delta \mathfrak{Y} =c.$$ We choose $\mathfrak{Y}$ so that it is the unique ``harmonic'' vector field on $\mathbb{D}$ satisfying $\delta \mathfrak{Y} =c$. See \textbf{\cref{chapter3section2}} and \textbf{\cref{analtocohom}}.
Note that the connection on the trivial bundle given in (\ref{trivialbundle}) so constructed is invariant not only under the action of $\Gamma_{g}$, but also under the action of $\mathrm{PSL}(2, \mathbb{R})$. Moreover, this connection on the universal Teichm\"uller curve is identical to the one proposed by S. Wolpert in \cite[Section 5]{wolpert1}. The reasons for this agreement are given in \cref{genesis}. \begin{problem} \normalfont The Chern form of the vertical bundle $\mathrm{ker}\,d\pi$ is calculated by S. Wolpert using the connection $1$-form and the curvature $2$-form of a smooth metric on the vertical bundle $\mathrm{ker}\,d\pi$ (\cite[Sections 4 \& 5]{wolpert1}). An obvious question is whether the Chern forms or the Riemann tensor are \textit{harmonic} (in the sense of Hodge theory) with respect to the Weil-Petersson metric (\cite{wolpert2}, \cite{yamada}). If not, what are the obstructions to the Chern class forms being harmonic? \end{problem} \begin{remark} \normalfont This open problem arose in an email correspondence between the author and S. Wolpert. \end{remark} \newpage
\section{Introduction} Since the pioneering work of Caves in 1981~\cite{Caves}, quantum metrology has made great progress as a successful application of quantum mechanics to enhance measurement precision~\cite{intro0,intro00,intro1,intro10,intro11,intro2, intro20,intro21,intro22,Lu,Lu1,intro3,intro31,intro32}. However, unlike single-parameter quantum metrology, multiparameter quantum metrology was for a long time not adequately studied. One major reason is that the quantum multiparameter Cram\'{e}r-Rao bound in general cannot be saturated. A decade ago, the condition for this bound to be tight for pure states was given~\cite{Matsumoto,Fujiwara}. Since then, several protocols for multiparameter estimation have been proposed in different scenarios~\cite{Walmsley,Yue,Pinel,Yao,Yao1,lang}. One of these works was given by Humphreys \emph{et al.} in Ref.~\cite{Walmsley}, where the phase imaging problem was mapped onto a multiparameter metrology process and a generalized form of NOON states was used as the input resource. They found that simultaneous estimation with the generalized NOON states outperforms independent estimation with the NOON state. On the other hand, the NOON state is not the only state able to reach the Heisenberg limit. Another useful state is the so-called entangled coherent state (ECS), which has been widely applied and studied in quantum metrology recently~\cite{Sanders2012,GRJin,Liupra,Joo,ECS,Gerry}. In a Mach-Zehnder interferometer, the ECS has been proved to be more powerful than the NOON state in giving a Heisenberg-scaling precision~\cite{Joo}. Even in a lossy interferometer, the ECS can still beat the shot-noise limit for a not very large loss rate~\cite{GRJin,Liupra}. Thus, it is reasonable to wonder whether a generalized form of the ECS could give a better theoretical precision than the NOON-state counterpart. This is the major motivation of this work.
In this paper, we apply a generalized form of the ECS in linear and nonlinear optical interferometers. By calculating the quantum Fisher information matrix (QFIM), we give the analytical expression of the quantum Cram\'{e}r-Rao bound (QCRB) for both linear and nonlinear protocols. In the linear protocol, the bound reaches the Heisenberg scaling for most values of the total photon number. Meanwhile, with respect to the parameter number $d$, in both protocols and for most values of the photon number, the bounds provide better precision than that given by the independent protocol with the ECS or the NOON state, in the same way as the generalized NOON state discussed in Ref.~\cite{Walmsley}. The paper is organized as follows. In Sec.~\ref{sec:Cramer-Rao-bound}, we briefly review the quantum Cram\'{e}r-Rao bound as well as the quantum Fisher information matrix for multiparameter estimation. In Sec.~\ref{sec:ECS}, we introduce a generalized form of the entangled coherent state for multiple modes and apply it in linear and nonlinear optical interferometers. Furthermore, a comparison between this state and the generalized NOON state is discussed. In Sec.~\ref{sec:Measurement}, we discuss the optimal measurement required to achieve the bound. In Sec.~\ref{random}, we extend our discussion to random variables and compare the generalized ECS and NOON state using the quantum Ziv-Zakai bound. Section~\ref{sec:Conclusion} concludes the paper. \section{Quantum Cram\'{e}r-Rao bound} \label{sec:Cramer-Rao-bound} In a multiparameter quantum metrology process, the quantum state $\rho$ depends on a set of deterministic parameters $\bm\theta=\{\theta_j\}$. The value of $\bm\theta$ is estimated by processing the observation data obtained by measuring the quantum system. A generalized quantum measurement is characterized by a positive-operator-valued measure $\{E_x\}$, with $x$ denoting the outcomes. According to quantum mechanics, the probability of obtaining the outcome $x$ is $p(x)=\mathrm{Tr}(E_x\rho)$.
Denote the estimator for $\theta_j$ by $\hat\theta_j$, which is a map from the measurement outcome $x$ to the estimates. The accuracy of the multiparameter estimation can be measured by the estimation-error covariance matrix: $C_{jk}:=\int\!dx\,[\hat\theta_j(x)-\theta_j][\hat\theta_k(x)-\theta_k]\mathrm{Tr}(E_x\rho)$. For (locally) unbiased estimators $\hat\theta_j$, the QCRB on the estimation error reads~\cite{Helstrom,Holevo} \begin{equation} C\geq(\nu \mathcal{F})^{-1},\label{eq:C_R} \end{equation} where $\nu$ is the number of repetitions of the experiment, and $\mathcal{F}$ is the quantum Fisher information matrix (QFIM). Let $L_j$ be the symmetric logarithmic derivative (SLD) for $\theta_j$, which is a Hermitian operator satisfying $\partial\rho/\partial\theta_j=\left(\rho L_j+L_j\rho\right)/2$. Then, the QFIM is defined by \begin{equation} \mathcal{F}_{jk}=\frac12\mathrm{Tr}\left[(L_j L_k + L_k L_j)\rho \right]. \end{equation} Recently, it has been found that, similarly to the quantum Fisher information~\cite{Liu2013}, the QFIM can also be expressed in the support of the density matrix~\cite{liu2014}. Denote $\partial_j:=\partial/\partial \theta_j$ for simplicity henceforth. For a pure state $\rho=|\psi\rangle\langle\psi|$, the elements of the QFIM can be expressed as~\cite{Helstrom,Holevo} \begin{equation} \mathcal{F}_{jk}=4\mathrm{Re}\left( \langle\partial_j\psi|\partial_k\psi\rangle- \langle\partial_j\psi|\psi\rangle\langle\psi|\partial_k\psi\rangle \right). \end{equation} In this work, we use the total variance $|\delta\hat{\bm\theta}|^{2}:=\mathrm{Tr}\,C$ as a figure of merit for the multiparameter estimation. Taking the trace on both sides of inequality (\ref{eq:C_R}) leads to \begin{equation} |\delta\bm{\hat{\theta}}|^{2}\geq\mathrm{Tr}(\mathcal{F}^{-1}),\label{eq:tot_var} \end{equation} where we have set $\nu=1$, as we are only interested in the quantum enhancements.
For a two-parameter case, Eq.~(\ref{eq:tot_var}) reduces to $|\delta\bm{\hat{\theta}}|^{2}\geq1/F_{e}$, where $F_{e}=\mathrm{det}\mathcal{F}/\mathrm{Tr}\mathcal{F}$, with $\mathrm{det}(\cdot)$ denoting the determinant, can be treated as an effective quantum Fisher information. In order to draw conclusions on the best possible estimation error from the QCRB, it is important to know whether the lower bound is achievable. The QCRB for multiparameter estimation is in general not achievable. However, for pure states, the QCRB can be saturated if $\mathrm{Im}\langle\psi|L_j L_k|\psi\rangle=0$ is satisfied for all $j$, $k$, and $\bm\theta$~\cite{Matsumoto,Fujiwara,Walmsley}. Note that $L_j=2\partial_j(|\psi\rangle\langle\psi|)$ is an SLD operator for $\theta_j$. It can be shown that $\mathrm{Im}\langle\psi|L_j L_k|\psi\rangle=0$ is equivalent to $\langle\partial_j \psi|\partial_k \psi\rangle\in\mathbb{R}$. For a unitary parametrization process, i.e., $|\psi\rangle=U_{\bm\theta}|\psi_{\mathrm{in}}\rangle$, this condition can be rewritten as~\cite{single2} \begin{equation} \langle \psi_{\mathrm{in}}|[\mathcal{H}_j,\mathcal{H}_k] |\psi_{\mathrm{in}}\rangle=0, \quad\forall\, j,k, \end{equation} where $\mathcal{H}_j:=i(\partial_j U_{\bm \theta}^{\dagger})U_{\bm\theta}$ is the characteristic operator for the parametrization of the $j$th parameter. \begin{figure}[t] \center\includegraphics[width=9cm]{parallel_and_series}\protect \caption{\label{fig:parallel} Two methods to sense multiple parameters: (a) the simultaneous protocols and (b) the sequential protocols.} \end{figure} Single-parameter unitary parametrization processes have been discussed in detail in recent works~\cite{single2,single1,single3}. For a multiparameter unitary parametrization process, there are two basic methods to sense the multiple parameters: the simultaneous protocols and the sequential protocols, as shown in Fig.~\ref{fig:parallel}. Let $\{H_j\}$ be a set of Hermitian operators.
Then, the simultaneous sensing is described by \begin{equation} U_{\mathrm{I}}=\exp\left(\sum_{j=1}^{d}i H_j\theta_j\right) \end{equation} with $d$ being the number of parameters, while the sequential sensing is described by \begin{equation} U_{\mathrm{II}}=\prod_{j=1}^{d}\exp(iH_{d-j+1}\theta_{d-j+1}). \end{equation} In this paper, we focus on phase estimation in an optical multi-arm interferometer, in which $H_j$ is a local operator on the $j$th mode. Thus, all the generating operators $H_j$ commute, and these two methods are equivalent. Moreover, in such a case, it is easy to show that $\mathcal{H}_j=H_j$, and thus $\langle\psi_{\mathrm{in}}|[\mathcal{H}_j, \mathcal{H}_k]|\psi_{\mathrm{in}}\rangle =\langle\psi_{\mathrm{in}}|[H_k,H_j]|\psi_{\mathrm{in}}\rangle=0$. This implies that the Cram\'{e}r-Rao bound is theoretically achievable. The elements of the QFIM can be written as \begin{equation} \mathcal{F}_{jk} = 4\Big( \langle\psi_{\mathrm{in}}|H_j H_k|\psi_{\mathrm{in}}\rangle -\langle\psi_{\mathrm{in}}|H_j|\psi_{\mathrm{in}}\rangle\langle\psi_{\mathrm{in}}|H_k|\psi_{\mathrm{in}}\rangle \Big). \label{eq:f} \end{equation} \section{Generalized entangled coherent state for multiparameter estimation}\label{sec:ECS} \subsection{Generalized entangled coherent state} The entangled coherent state (ECS) has been applied in several single-parameter quantum metrology protocols and proved powerful in beating the shot-noise limit~\cite{GRJin,Liupra,Joo}. Let $|\alpha\rangle$ be a coherent state and $|0\rangle$ the vacuum state. The ECS is given by~\cite{ECS,Gerry} \begin{equation} |\mathrm{ECS}\rangle=\mathcal{N}\left(|\alpha 0\rangle+|0\alpha\rangle\right), \end{equation} where $\mathcal{N}=[2(1+e^{-|\alpha|^{2}})]^{-1/2}$ is the normalizing factor.
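For commuting generators, Eq.~(\ref{eq:f}) can be checked numerically against the pure-state QFIM formula of the previous section. The sketch below is illustrative only: the input state and the diagonal generators are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Random normalized input state on a 4-dimensional Hilbert space.
rng = np.random.default_rng(0)
psi_in = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_in /= np.linalg.norm(psi_in)

# Two commuting (diagonal) generators, chosen arbitrarily for the check.
H = [np.diag([0.0, 1.0, 0.0, 1.0]), np.diag([0.0, 0.0, 1.0, 1.0])]
theta = [0.3, -0.7]

# |psi(theta)> = exp(i sum_j theta_j H_j)|psi_in>; since the generators
# commute with U, the exact derivatives are d_j|psi> = i U H_j |psi_in>.
U = np.diag(np.exp(1j * (theta[0] * np.diag(H[0]) + theta[1] * np.diag(H[1]))))
psi = U @ psi_in
dpsi = [1j * U @ (Hj @ psi_in) for Hj in H]

# Pure-state QFIM from the derivative formula.
F = np.zeros((2, 2))
for j in range(2):
    for k in range(2):
        F[j, k] = 4 * np.real(np.vdot(dpsi[j], dpsi[k])
                              - np.vdot(dpsi[j], psi) * np.vdot(psi, dpsi[k]))

# Covariance form, Eq. (f), evaluated on the input state.
F_cov = np.zeros((2, 2))
for j in range(2):
    for k in range(2):
        F_cov[j, k] = 4 * np.real(np.vdot(psi_in, H[j] @ (H[k] @ psi_in))
                                  - np.vdot(psi_in, H[j] @ psi_in)
                                  * np.vdot(psi_in, H[k] @ psi_in))

assert np.allclose(F, F_cov)
```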
Taking the ECS as the input, the QCRB for a parameter sensed by the Hamiltonian $H=a^\dagger a$, with $a$ denoting the annihilation operator of the first mode, is given by~\cite{Joo} \begin{equation} |\delta\bm{\hat{\theta}}|_{\mathrm{ECS}}^{2}=\frac{1}{4\mathcal{N}^{2}|\alpha|^{2} \left[1+|\alpha|^{2}\left(1-\mathcal{N}^{2}\right)\right]}. \end{equation} Therefore, for independent estimations of $d$ parameters, the QCRB on the total variance is $|\delta\bm{\hat{\theta}}|_{\mathrm{ind}}^{2}=d|\delta\bm{\hat{\theta}}|_{\mathrm{ECS}}^{2}$. In terms of the mean number of total photons involved, i.e., $N_{\mathrm{tot}}=2d\mathcal{N}^{2}|\alpha|^{2}$, we obtain \begin{equation} |\delta\bm{\hat{\theta}}|_{\mathrm{ind}}^{2}=\frac{d^{3}}{N_{\mathrm{tot}} \left[2d+N_{\mathrm{tot}}\left(\mathcal{N}^{-2}-1\right)\right]}. \end{equation} When $|\alpha|$ is large, $\mathcal{N}^{-2}\rightarrow2$, and the total variance \begin{equation} |\delta\bm{\hat{\theta}}|_{\mathrm{ind}}^{2}\rightarrow\frac{d^{3}} {N_{\mathrm{tot}}\left(N_{\mathrm{tot}}+2d\right)}. \end{equation} This bound is lower than $d^{3}/N_{\mathrm{tot}}^{2}$, which is given by the NOON state in the independent estimation~\cite{Walmsley}. Multiple parameters can also be sensed and estimated with entangled input states, e.g., using a multi-arm interferometer with generalized NOON states as input~\cite{Walmsley}. Here, we use a generalized form of the ECS instead, as the ECS is more powerful than the NOON state in the two-arm interferometer for a fixed mean number of total photons. For the multiparameter estimation scenario shown in Fig.~\ref{fig:parallel}, we set the reference beam as mode zero and the parametrized beams as mode $1$ to mode $d$.
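The rewriting of $|\delta\bm{\hat{\theta}}|_{\mathrm{ind}}^{2}$ in terms of $N_{\mathrm{tot}}$ can be verified with a short numerical check; the values of $d$ and $|\alpha|^2$ below are illustrative.

```python
import math

d, a2 = 5, 4.0                                   # illustrative d and |alpha|^2
N2 = 1.0 / (2.0 * (1.0 + math.exp(-a2)))         # normalization factor N^2
var_ecs = 1.0 / (4.0 * N2 * a2 * (1.0 + a2 * (1.0 - N2)))
var_ind = d * var_ecs                            # d independent estimations

N_tot = 2.0 * d * N2 * a2                        # mean total photon number
var_ind_Ntot = d**3 / (N_tot * (2 * d + N_tot * (1.0 / N2 - 1.0)))

# the two forms of the independent-estimation bound agree
assert abs(var_ind - var_ind_Ntot) < 1e-12
```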
Taking into consideration the symmetry among the $d$ modes, we generalize the ECS to the multi-mode case as \begin{equation} |\psi\rangle=b\sum_{j=1}^d|\alpha\rangle_j + c|\alpha\rangle_0,\label{eq:in} \end{equation} where $|\alpha\rangle_j=\exp(\alpha a^{\dagger}_j-\alpha^{*}a_j)|0\rangle$, with $|0\rangle$ the multi-mode vacuum, is the state with a coherent state in the $j$th mode and vacuum in all other modes. The coefficients $b$ and $c$ are complex numbers; due to the normalization of $|\psi\rangle$, they satisfy \begin{equation}\label{eq:normalization} |c|^2 + (bc^*+cb^*)v + |b|^2 u = 1 \end{equation} with \begin{equation}\label{eq:uv} u:=d+d(d-1)e^{-|\alpha|^2} \mbox{ and } v:=de^{-|\alpha|^2}. \end{equation} In this paper, we will use this generalized form of the ECS as the input state to sense the parameters. \subsection{Local parameterization} Let us consider that the parameters are sensed via $U_{\bm\theta}=\exp(i\sum_{j=1}^d H_j \theta_j)$ with $H_j=(a_j^{\dagger}a_j)^m$, where $m$ is a positive integer. Taking the generalized ECS Eq.~(\ref{eq:in}) as the input state, it follows that \begin{equation} \langle\psi|H_j|\psi\rangle = |b|^2f(m,\alpha) \quad\mbox{and}\quad \langle\psi|H_j H_k|\psi\rangle = |b|^2 f(2m,\alpha)\delta_{jk} \end{equation} with $f(m,\alpha) := \langle \alpha|(a^\dagger a)^m|\alpha\rangle$. From Eq.~(\ref{eq:f}), the elements of the QFIM are given by \begin{equation} \mathcal{F}_{jk} = 4 [\delta_{jk} |b|^2 f(2m,\alpha) - |b|^4 f(m,\alpha)^2]. \end{equation} Consequently, the QFIM can be expressed as \begin{equation} \mathcal{F}=4 |b|^2 f(2m,\alpha) \left(\openone - \frac{|b|^2 f(m,\alpha)^2}{f(2m,\alpha)}\mathcal{I}\right), \end{equation} where $\openone$ is the identity matrix, and $\mathcal{I}$ is the matrix with elements $\mathcal{I}_{jk}=1$ for all $j$ and $k$.
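Since the photon-number distribution of a coherent state is Poissonian with mean $|\alpha|^2$, the moments $f(m,\alpha)$ can be evaluated numerically. The sketch below (with an arbitrary test value of $|\alpha|^2$) verifies the closed forms for $m=1$, $2$, and $4$ that enter the linear and nonlinear protocols:

```python
import math

def f(m, alpha_sq, nmax=200):
    """E[n^m] for n ~ Poisson(|alpha|^2), i.e. <alpha|(a^dag a)^m|alpha>."""
    total, p = 0.0, math.exp(-alpha_sq)  # p = P(n = 0)
    for n in range(nmax):
        total += p * n**m
        p *= alpha_sq / (n + 1)          # Poisson recursion P(n+1) = P(n)*lambda/(n+1)
    return total

a2 = 1.7  # |alpha|^2, an arbitrary test value
assert abs(f(1, a2) - a2) < 1e-9
assert abs(f(2, a2) - a2 * (1 + a2)) < 1e-9
assert abs(f(4, a2) - (a2**4 + 6 * a2**3 + 7 * a2**2 + a2)) < 1e-9
```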
Noting that $\mathcal{I}^2=d\mathcal{I}$, it can be shown that \begin{equation} \left[\gamma(\openone+\omega\mathcal{I})\right]^{-1} = \frac{1}{\gamma}\left(\openone-\frac{\omega}{1+\omega d}\mathcal{I}\right) \end{equation} with $\gamma$ and $\omega$ being real numbers. Thus, we obtain the analytical result for the inverse of the QFIM as \begin{equation} \mathcal{F}^{-1} = \frac{1}{4|b|^2 f(2m,\alpha)} \left(\openone+ \frac{|b|^2f(m,\alpha)^2}{f(2m,\alpha)-|b|^2f(m,\alpha)^2d} \mathcal{I}\right). \end{equation} Tracing both sides of the above equality, we obtain the lower bound on the total variance: \begin{equation}\label{eq:general_bound} |\delta\hat{\bm\theta}|^2 \geq \mathrm{Tr}(\mathcal{F}^{-1})= \frac{d}{4f(2m,\alpha)}\left(\frac{1}{|b|^2}+\frac{1}{g-|b|^2d}\right) \end{equation} with $g:=f(2m,\alpha)/f(m,\alpha)^2$ being a nonnegative number. Since we are interested in the minimal estimation error, we minimize the QCRB on the total variance over $b$, which is equivalent to minimizing the quantity $w:=1/|b|^2+1/(g-|b|^2d)$ in Eq.~(\ref{eq:general_bound}) over $b$. Noting that $|b|^2$ takes values in a continuous range and $\mathrm{Tr}(\mathcal{F}^{-1})$ is nonnegative, it follows from Eq.~(\ref{eq:general_bound}) that we only need to investigate $|b|^2<g/d$, as $\mathrm{Tr}(\mathcal{F}^{-1})$ would become negative if $|b|^2$ passed through $g/d$. In this domain, the minimum of $w$ is attained where the derivative of $w$ with respect to $|b|^2$ vanishes, that is, at \begin{equation} |b|=b_\star:= \sqrt{g/(\sqrt{d} + d)}. \end{equation} Note that the exact domain of $|b|^2$ is determined by the normalization condition Eq.~(\ref{eq:normalization}). Due to an irrelevant global phase in the states in Eq.~(\ref{eq:in}), we can always assume that $c$ is real. Equation (\ref{eq:normalization}) then becomes \begin{equation} c^2 + 2(\Re\,b)vc + |b|^2 u - 1 = 0, \end{equation} which has a solution for $c$ only if $(\Re\,b)^2 v^2 - |b|^2 u + 1 \geq 0$.
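The inversion identity above rests only on $\mathcal{I}^2=d\mathcal{I}$ and can be checked directly; the values of $d$, $\gamma$, and $\omega$ below are illustrative.

```python
import numpy as np

d, gamma, omega = 5, 2.3, 0.7
I = np.eye(d)
J = np.ones((d, d))  # the all-ones matrix, called \mathcal{I} in the text

M = gamma * (I + omega * J)
M_inv = (1.0 / gamma) * (I - omega / (1.0 + omega * d) * J)  # claimed inverse

assert np.allclose(M @ M_inv, I)
assert np.allclose(M_inv @ M, I)
```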
Since $|b|\geq|\Re\,b|$, we obtain \begin{equation} |b|^2 \leq\frac1{u-v^2} = \frac1{d+d(d-1)e^{-|\alpha|^2}-d^2e^{-2|\alpha|^2}}\equiv\Gamma, \end{equation} which implies that the domain of $|b|^2$ is $[0,\Gamma]$. Therefore, we obtain \begin{equation}\label{eq:optimal} \min_{|b|\in[0,\sqrt\Gamma]}\mathrm{Tr}(\mathcal{F}^{-1})= \cases{ \frac{d(1+\sqrt d)^2}{4}\frac{f(m,\alpha)^2}{f(2m,\alpha)^2}& for $b_\star\leq\sqrt\Gamma$ \\ \frac{d}{4f(2m,\alpha)}\left(\frac{1}{\Gamma}+\frac{1}{g-\Gamma d}\right) & for $b_\star>\sqrt\Gamma$. } \end{equation} We first consider the linear parametrization protocol, for which $m=1$. Note that $f(1,\alpha)=|\alpha|^2$, $f(2,\alpha)=|\alpha|^2(1+|\alpha|^2)$, and therefore $g=1+|\alpha|^{-2}$. From Eq.~(\ref{eq:optimal}), we obtain the lower bound on the total variance: \begin{equation}\label{eq:general_bound1} |\delta\hat{\bm\theta}|^2 \geq \mathrm{Tr}(\mathcal{F}_{\mathrm{L}}^{-1})= \frac{d}{4|\alpha|^2(1+|\alpha|^2)} \left( \frac1{|b|^2} + \frac{1}{1+|\alpha|^{-2} -d |b|^2} \right). \end{equation} After minimizing over $b$, we obtain \begin{equation}\label{eq:CR_linear} |\delta\bm{\hat{\theta}}|^2\geq|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^2 =\frac{d(\sqrt{d}+1)^2}{4\left(1+|\alpha|^2\right)^2}, \end{equation} provided that $b_\star\leq\sqrt\Gamma$. Figure~\ref{fig:b_alpha} shows the parameter regime where $b_\star\leq\sqrt\Gamma$. The purple areas in both panels represent the regime where $b_\star$ is inside the domain of $b$. From Fig.~\ref{fig:b_alpha}(a), it can be seen that $b$ can equal $b_\star$ for a large $|\alpha|$. This is reasonable because, in the limit of infinite $|\alpha|$, $b_\star=1/\sqrt{d+\sqrt{d}}$ and $\Gamma=1/d$, so $b_\star$ is always less than $\sqrt\Gamma$. For a large $d$, $b_\star$ may lie beyond the domain of $b$ when $|\alpha|$ is very small. A more experimentally realizable regime is that in which both $d$ and $|\alpha|$ are not very large and comparable to each other, as shown in Fig.~\ref{fig:b_alpha}(b).
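A quick numerical minimization (with illustrative values of $d$ and $|\alpha|^2$) confirms both the location $b_\star$ of the minimum and the closed-form bound of Eq.~(\ref{eq:CR_linear}) for the linear protocol:

```python
import math

def trace_F_inv(x, d, f_m, f_2m):
    # Tr(F^{-1}) as a function of x = |b|^2, with g = f(2m)/f(m)^2
    g = f_2m / f_m**2
    return d / (4.0 * f_2m) * (1.0 / x + 1.0 / (g - x * d))

d, a2 = 5, 9.0                   # illustrative d and |alpha|^2, m = 1
f1, f2 = a2, a2 * (1 + a2)       # f(1, alpha), f(2, alpha)
g = f2 / f1**2

x_star = g / (math.sqrt(d) + d)  # |b_star|^2
closed = d * (1 + math.sqrt(d))**2 / (4 * (1 + a2)**2)  # closed-form bound
assert abs(trace_F_inv(x_star, d, f1, f2) - closed) < 1e-12

# a coarse scan over (0, g/d) confirms x_star is the minimizer
xs = [g / d * (k + 1) / 1002 for k in range(1000)]
assert all(trace_F_inv(x, d, f1, f2) >= closed - 1e-12 for x in xs)
```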
In this regime, $b_\star$ is inside the domain of $b$ in most of the parameter space. In particular, when $|\alpha|$ is larger than around $2$, $b_\star$ is always reachable, indicating that the corresponding bound $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}$ can always be reached. \begin{figure} \center\includegraphics[width=10cm,height=5cm]{b_alpha} \caption{\label{fig:b_alpha} Region partition according to whether $b_\star$ is in the domain of $b$ for the linear parameterization with $H_j=a_j^\dagger a_j$. The blue region represents where $b_\star$ is outside the domain of $b$. } \end{figure} We also consider a nonlinear parametrization protocol, for which $m=2$. It can be shown that $f(2,\alpha)=|\alpha|^2(1+|\alpha|^2)$ and $f(4,\alpha)=|\alpha|^{8}+6|\alpha|^{6}+7|\alpha|^{4}+|\alpha|^{2}$. After minimizing over $b$, we obtain \begin{equation} |\delta\bm{\hat{\theta}}|^2\geq|\delta\bm{\hat{\theta}}|_{\mathrm{NL}}^2 =\frac{d(\sqrt{d}+1)^2}{4}\left(\frac{1+|\alpha|^2}{|\alpha|^{6}+6|\alpha|^{4}+7|\alpha|^{2}+1}\right)^2, \end{equation} provided that $b_\star\leq\sqrt\Gamma$. In Fig.~\ref{fig:b_n_alpha} we can see that, similarly to Fig.~\ref{fig:b_alpha}, $b_\star\leq\sqrt\Gamma$ is satisfied in most of the parameter space. \begin{figure} \center\includegraphics[width=10cm,height=5cm]{b_n_alpha} \protect\caption{\label{fig:b_n_alpha} Region partition according to whether $b_\star$ is in the domain of $b$ for the nonlinear parameterization with $H_j=(a_j^\dagger a_j)^2$. The blue region represents where $b_\star$ is outside the domain of $b$. } \end{figure} \subsection{Analysis} Here, we give an analysis of the QCRB given by the generalized ECS and compare it with the one given by the generalized NOON states, which were applied to multiparameter metrology in Ref.~\cite{Walmsley}.
The generalized NOON state proposed in Ref.~\cite{Walmsley} reads $|\psi_{\mathrm{s}}\rangle=b\sum_{j=1}^d|N\rangle_j+c|N\rangle_0$, where $|N\rangle_j=|\cdots00N00\cdots\rangle$ is the state with $N$ photons in the $j$th mode and vacuum in the others. For the linear parametrization protocol $H=\sum_{j=1}^d a_j^{\dagger} a_j$, the minimal QCRB on the total variance over all generalized NOON states for a given $N$ is~\cite{Walmsley} \begin{equation} |\delta\bm{\hat{\theta}}|_{\mathrm{sL}}^{2}=\frac{d(\sqrt{d}+1)^{2}}{4N^{2}}, \end{equation} where the optimal value of $b$ is $b=1/\sqrt{d+\sqrt{d}}$. For the nonlinear parametrization protocol $H=\sum_{j=1}^d (a_j^{\dagger}a_j)^2$, through some straightforward calculations, we obtain the minimal QCRB \begin{equation} |\delta\bm{\hat{\theta}}|_{\mathrm{sNL}}^{2}=\frac{d(\sqrt{d}+1)^{2}}{4N^{4}}, \end{equation} which is also attained at $b=1/\sqrt{d+\sqrt{d}}$. From the expressions of $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}$, $|\delta\bm{\hat{\theta}}|_{\mathrm{NL}}^{2}$, $|\delta\bm{\hat{\theta}}|_{\mathrm{sL}}^{2}$, and $|\delta\bm{\hat{\theta}}|_{\mathrm{sNL}}^{2}$, we find that all these bounds share the same scaling with respect to the parameter number $d$; they are all proportional to $d(\sqrt{d}+1)^{2}$. Furthermore, both $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}$ and $|\delta\bm{\hat{\theta}}|_{\mathrm{NL}}^{2}$ provide an $\mathcal{O}(d)$ advantage compared to the independent estimation with the ECS or the NOON state, just as $|\delta\bm{\hat{\theta}}|_{\mathrm{sL}}^{2}$ does~\cite{Walmsley}. These protocols show different dependences on the average total photon number $N_{\mathrm{tot}}$. Obviously, the average total photon number of $|\psi_{\mathrm{s}}\rangle$ is $N_{\mathrm{s,tot}}=N$. Meanwhile, the average total photon number of $|\psi_{\alpha}\rangle$ is $N_{\alpha,\mathrm{tot}}=|\alpha|^2\left(d|b|^2+|c|^2\right)$, which depends on the values of $b$ and $c$.
When $\alpha$ is sufficiently large such that $d\exp(-|\alpha|^2)\ll1$, we have $N_{\alpha,\mathrm{tot}}\simeq|\alpha|^2$ as a result of $d|b|^2+|c|^2\simeq1$, implied by the normalization condition Eq.~(\ref{eq:normalization}). As a matter of fact, when $|\alpha|=4$, $\exp(-|\alpha|^{2})\simeq10^{-7}$. For a not very large $d$, choosing $d=5$ for example, at the optimal value $b_\star$ of $b$, the difference between $N_{\mathrm{\alpha,tot}}$ and $|\alpha|^{2}$ is around $10^{-6}$. Thus, for most values of $|\alpha|$ and $d$, the average photon number can be approximated as $|\alpha|^{2}$. With this approximation, $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}\varpropto N_{\mathrm{\alpha,tot}}^{-2}$ and $|\delta\bm{\hat{\theta}}|_{\mathrm{NL}}^{2}\varpropto N_{\mathrm{\alpha,tot}}^{-4}$. In Fig.~\ref{fig:N_tot}, we plot the four QCRBs, $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}$, $|\delta\bm{\hat{\theta}}|_{\mathrm{NL}}^{2}$, $|\delta\bm{\hat{\theta}}|_{\mathrm{sL}}^{2}$, and $|\delta\bm{\hat{\theta}}|_{\mathrm{sNL}}^{2}$, as functions of the average total photon number. Comparing $|\delta\bm{\hat{\theta}}|_{\mathrm{sL}}^{2}$ and $|\delta\bm{\hat{\theta}}|_{\mathrm{L}}^{2}$ for the linear protocols, in the regime of small average total photon numbers, even though $b_\star$ may be greater than $\sqrt\Gamma$, the generalized ECS still gives a lower QCRB than the generalized NOON states. However, this advantage shrinks as $N_{\mathrm{tot}}$ increases. For a very large $N_{\mathrm{tot}}$, the generalized ECS and the generalized NOON states are essentially equivalent in estimation precision. Besides, the nonlinear parametrization process is always better than the linear one for the same input state in this case, as expected. What is more interesting here is that, for a small $N_{\mathrm{tot}}$, the linear protocol with the generalized ECS can give a lower bound than the nonlinear counterpart with the generalized NOON states.
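The claim that $N_{\alpha,\mathrm{tot}}\simeq|\alpha|^{2}$ at $b_\star$ can be reproduced numerically for $d=5$ and $|\alpha|=4$. A sketch: $b$ is taken real and equal to $b_\star$ for the linear protocol, and $c$ is the positive root of the normalization condition.

```python
import math

d, a2 = 5, 16.0                        # d = 5, |alpha|^2 = 16
u = d + d * (d - 1) * math.exp(-a2)
v = d * math.exp(-a2)
g = 1.0 + 1.0 / a2                     # f(2,alpha)/f(1,alpha)^2 for m = 1
b = math.sqrt(g / (math.sqrt(d) + d))  # b_star, taken real and positive

# positive root of c^2 + 2 b v c + b^2 u - 1 = 0
c = -b * v + math.sqrt(b * b * v * v - b * b * u + 1.0)

N_tot = a2 * (d * b * b + c * c)
assert abs(N_tot - a2) < 1e-4          # deviation from |alpha|^2 is tiny
```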
This gives an alternative strategy for the small-photon-number scenario, in which the nonlinear parametrization is very challenging to perform. \begin{figure} \center\includegraphics[width=8cm]{variance_Ntot.eps} \protect\caption{\label{fig:N_tot} Variation of $|\delta\bm{\theta}|_{\mathrm{sL}}^{2}$, $|\delta\bm{\theta}|_{\mathrm{L}}^{2}$, $|\delta\bm{\theta}|_{\mathrm{sNL}}^{2}$, and $|\delta\bm{\theta}|_{\mathrm{NL}}^{2}$ as functions of the average total photon number $N_{\mathrm{tot}}$. The black and red solid lines represent $|\delta\bm{\theta}|_{\mathrm{sL}}^{2}$ and $|\delta\bm{\theta}|_{\mathrm{L}}^{2}$, the QCRBs with the generalized NOON states and the generalized ECS for the linear protocol, respectively. The black dash-dot and blue dash lines represent $|\delta\bm{\theta}|_{\mathrm{sNL}}^{2}$ and $|\delta\bm{\theta}|_{\mathrm{NL}}^{2}$, the counterparts with the generalized NOON states and the generalized ECS for the nonlinear protocol, respectively. The total parameter number is set to $d=5$ here.} \end{figure} \section{Measurement} \label{sec:Measurement} For an entire metrology process, the measurement has to be considered, as the QCRB cannot always be saturated by an arbitrary measurement. As a matter of fact, different measurement strategies give different classical Cram\'{e}r-Rao bounds and hence different metrology scalings. For the estimation of a single parameter, the projective measurement with respect to the eigenstates of the SLD operator can be used as the optimal measurement if the eigenstates are (locally) independent of the parameter under estimation~\cite{Caves94,Zhong14,Paris}. For the cases where the eigenstates of the SLD operator depend on the true value of the parameter, one has to resort to an adaptive measurement and estimation scheme to asymptotically attain the QCRB~\cite{Nagaoka1988,Fujiwara2006}. For multiparameter estimation, due to the non-commutativity of the SLDs, the QCRB in general cannot be attained.
However, for multiparameter estimation with pure states, the QCRB can be attained if $\mathrm{Im}\langle\psi|L_j L_k|\psi\rangle=0$ for all $j$, $k$, and $\bm\theta$~\cite{Matsumoto,Fujiwara}, which is satisfied for the case considered in this work. In principle, there exists an optimal measurement that asymptotically attains the QCRB, although this optimal measurement may be hard to implement experimentally. General methods to construct such an optimal measurement can be found in Refs.~\cite{Matsumoto, Walmsley}. \section{Deterministic parameter versus random parameter} \label{random} Throughout this paper, we treat the phase shifts as unknown but deterministic signals~\cite{Levy,Poor,Lehmann,Rivas}, which means that the true values of the phase shifts remain the same during the repetitions of the experiment. In other words, we have independent and identically distributed samples on which to perform measurements, and the parameters are estimated from the collection of measurement outcomes. For example, in the detection of gravity, gravity is commonly treated as a deterministic parameter, and optical interferometry is a major approach for this detection, for example in the LIGO (Laser Interferometer Gravitational-Wave Observatory) program. Thus, it is reasonable to treat the phase shifts as deterministic signals. However, in some other scenarios, for instance, when measuring the gravitational acceleration at a specific location on Earth, its value may be slightly affected by the flow of underground magma or geological movement. Thus, it is also reasonable to treat the signal as a random parameter in these scenarios. Recently, Tsang proposed a quantum version of the Ziv-Zakai bounds for estimating a random parameter~\cite{Tsang}.
Using this bound, Giovannetti and Maccone found that, in the high-prior-information regime, the accuracy given by sub-Heisenberg strategies is no better than that obtained by guessing according to the prior distribution~\cite{Giovannetti}. Thus, the precision for a random variable and that for a deterministic parameter may differ greatly. For the generalized NOON state, the quantum Ziv-Zakai bound has been given by Zhang and Fan in Ref.~\cite{zhang} as \begin{equation} |\delta\bm{\hat{\theta}}|^2\geq\mathrm{max}\left\{\frac{d(d+\sqrt{d})^2 }{80\lambda^2 N^2}, \frac{(\pi^2/16-0.5)d(d+\sqrt{d})^2}{(d+\sqrt{d}-1)N^2}\right\} \label{eq:ZZ_NOON} \end{equation} with $\lambda\simeq 0.7246$, where the prior distribution of the random parameters is assumed to be uniform with a large-width window. For a large $d$, the former expression is always larger than the latter one in the braces. Thus, the $\mathcal{O}(d)$ advantage vanishes if the parameter under estimation is a random variable. However, this bound is still better than that given by independent estimations~\cite{zhang}. Here, we obtain the quantum Ziv-Zakai bound for the generalized ECS. Through some straightforward calculations, the bound for the linear parametrization process is \begin{equation} |\delta \bm{\hat{\theta}}|^2\geq \mathrm{max}\left\{\frac{d(d+\sqrt{d})^2 }{80\lambda^2 (|\alpha|^2+1)^2}, \frac{(\pi^2/16-0.5)d(d+\sqrt{d})^2}{(d+\sqrt{d}-1) (|\alpha|^2+1)^2}\right\}. \end{equation} Similarly to the generalized NOON state, the $\mathcal{O}(d)$ advantage vanishes in this bound. However, for a not very small value of $|\alpha|$, $N_{\mathrm{tot}}\simeq|\alpha|^2$, and this bound is still lower than Eq.~(\ref{eq:ZZ_NOON}), which means that even for random variables, the generalized ECS can provide a better precision than the generalized NOON state.
\section{Conclusion\label{sec:Conclusion}} In this paper, we have proposed a generalized form of entangled coherent states and applied them as the input state of a multi-arm interferometer for estimating multiple phase shifts. We have obtained the QCRB on the estimation error for both linear and nonlinear protocols. As with the generalized NOON state, simultaneous estimation with the generalized entangled coherent state can provide a better precision than independent estimation. Meanwhile, we find that the bound from the generalized entangled coherent state is better than that given by the generalized NOON state. \ack The authors thank Heng-Na Xiong and Animesh Datta for valuable discussions. This work was supported by the NFRPC through Grant No. 2012CB921602 and the NSFC through Grants No. 11475146. X.-M. L. also acknowledges the support from the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and NSFC under Grant No. 11304196. Z. S. also acknowledges the support from NSFC under Grant No. 11375003, Natural Science Foundation of Zhejiang Province under Grant No. LZ13A040002, and Program for HNUEYT under Grant No. 2011-01-011. \section*{References}
\section{Introduction} It is hard to find specific content in a massive resource library. Generally, such content can be transformed into vectors of different lengths using suitable embedding algorithms. State-of-the-art examples include word2vec \cite{mikolov2013distributed} for text data and convolutional neural networks \cite{sharif2014cnn, gong2014multi} for image data. Using these embedding vectors of dozens to thousands of dimensions, the distance from a query to every entry in the database can be calculated, and the nearest ones can be found. The real problem is how to find the most similar contents to an arbitrary query in a large database with low delay when users request it. This is also the most computationally expensive part of many algorithms in fields as diverse as biology (gene classification \cite{pan2004comprehensive}), computer vision (local image feature matching \cite{lowe2004distinctive}), speech recognition (content-based music search \cite{li2004content}), and many others. We hope that both the speed and the accuracy of search algorithms remain high even when employed on large datasets. When working with high-dimensional features, which is often the case in computer vision applications, there is no known exact nearest-neighbor search algorithm that has acceptable performance. To obtain a speed improvement, researchers developed approximate search algorithms \cite{aumuller2017ann}. Generally these algorithms can return 80 percent or more of the correct neighbors while being much faster than exact search. When an even higher proportion of correct neighbors is required, for example 90 percent or 98 percent, the speed of most approximate search algorithms drops quickly. Furthermore, some of these algorithms need to train a codebook for indexing before searching, which is also time-consuming.
Since data sizes are becoming extremely large and the embedding vectors can be long to retain more information, distance calculation now requires a great number of basic arithmetic operations and comparisons, but the calculations for different pairs of embedding vectors do not affect each other. In this decade, great progress in GPU computation has been achieved. GPUs are much better suited than CPUs to the parallel execution of simple calculations, which matches the nearest-neighbor search problem. The exact search algorithm \cite{garcia2010k} and some approximate search algorithms \cite{pan2011fast,johnson2019billion} have already been implemented on GPU and obtained great improvements in performance. In this paper, we propose a fast and accurate algorithm implemented on GPU for the approximate nearest-neighbor search problem. A binary quantization method is proposed to compress floating-point numbers into 3- or 4-bit binary codes without training. Cosine similarity calculations of vectors are simplified to exclusive OR (XOR) operations on binary codes and are further optimized based on the parallel characteristics of the GPU. Based on the quantization method and the optimized calculations, cosine similarities of normalized data points can be calculated quickly on GPU even when both the data size and the vector length are large. As no training is needed for quantization, this method is useful in situations where the dataset distribution changes rapidly or only a few queries are issued against a large dataset. This paper makes the following contributions: \begin{itemize} \item We propose a new quantization method to encode numbers within $[-1,1]$ into an arbitrary number of bits (Section 3). \item We provide a novel view of transforming multiplication into bitwise XOR operations, and use this transformation to accelerate multiplications over a limited range (Section 3).
\item We propose a training-free algorithm to implement GPU exhaustive kNN-selection on large datasets, which is based on cosine similarity and has a series of parameters controlling the accuracy and speed (Section 3 \& 4). \item We conduct real-data experiments showing that the proposed algorithm has state-of-the-art search efficiency and high accuracy on large-scale nearest-neighbor search tasks. The algorithm is also extensible to multi-GPU configurations (Section 5). \end{itemize} \section{Related Work} Generally, K-nearest-neighbor search is to find, for each query vector, the top $K$ most similar vectors among $n$ vectors given a distance metric. Here each vector has $N$ components, and in this paper we specify the metric as descending cosine similarity, defined by the inner product of two normalized vectors. This section presents a review of previous work in this area. The brute-force way to solve the problem is to calculate the pairwise distance between the query vector and each candidate vector and use a minimum heap to store the top $K$ nearest vectors. This approach costs substantial computing resources (with time complexity $O(nN+n\log(K))$). Garcia et al. \cite{garcia2010k} implemented a parallel brute-force algorithm on NVIDIA GPUs using CUDA and CUBLAS, showing that it can be 25x and 62x faster than the highly optimized ANN C++ library on CPU. As this migration cannot really reduce the resource consumption of the computation, researchers have been providing solutions to calculate approximate nearest neighbors with high precision but much lower time complexity. Most of these techniques aim to reduce the search space. We review the most widely used K-NN search techniques, classified in three categories: hashing-based techniques, partitioning trees, and nearest-neighbor graph techniques.
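The brute-force baseline above can be sketched in a few lines (an illustrative NumPy/heapq version of ours, not the CUDA implementation of \cite{garcia2010k}; the function name \texttt{brute\_force\_knn} is hypothetical):

```python
import heapq
import numpy as np

def brute_force_knn(docs, query, k):
    """Exact top-k by cosine similarity, assuming rows of `docs` and
    `query` are L2-normalized so the inner product is the cosine."""
    heap = []  # min-heap of (similarity, index): the root is the worst kept
    for i, d in enumerate(docs):
        sim = float(np.dot(d, query))
        if len(heap) < k:
            heapq.heappush(heap, (sim, i))
        elif sim > heap[0][0]:
            heapq.heapreplace(heap, (sim, i))
    return sorted(heap, reverse=True)  # best first

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = docs[42]  # the query itself should be its own nearest neighbor
top = brute_force_knn(docs, query, k=5)
```

The min-heap keeps only $K$ candidates at a time, giving the $O(nN + n\log K)$ cost quoted above.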
The best-known hashing-based technique might be locality-sensitive hashing (LSH) \cite{andoni2006near}, in which many hash functions, with the property that similar elements are more likely to receive similar hashes, are used. Variants of LSH such as multi-probe LSH \cite{lv2007multi} and LSH Forest \cite{bawa2005lsh} help improve the performance of these techniques. As the performance of LSH is highly related to the quality of the hashing functions, much work focuses on improving hashing methods \cite{shakhnarovich2003fast,wang2010semi,xu2011complementary}. Pan implemented LSH-based nearest-neighbor search on GPU \cite{pan2011fast}, making searching much faster. As LSH is highly sensitive to the hashing function chosen, we do not compare it with our method in the experiment section. However, combining LSH with our method to further accelerate the search is possible, as we will mention later. Partitioning trees are also a popular technique for approximate nearest-neighbor search. The KD tree \cite{silpa2008optimised} is one of the best-known nearest-neighbor algorithms. It is effective on datasets with low dimensionality but performs poorly on datasets with high dimensionality. Gong encoded image matrices into binary values to find similar results and achieved good search recalls for image datasets \cite{gong2012angular,gong2013learning}. This method does not provide good results for non-classification problems. Other methods like annoy \cite{bernhardsson2018annoy} and ball trees \cite{liu2006new,omohundro1989five} use decision trees to make searching an $O(\log(N))$-level job. Herve Jegou proposed product quantization (PQ) to provide a short-code representation of vectors \cite{jegou2010product} and improved the search efficiency with the IVFADC algorithm \cite{jegou2011searching}.
In product quantization, the space is decomposed into a Cartesian product of low-dimensional subspaces, and data points are represented by compact codes computed as quantization indices in these subspaces. A codebook needs to be trained before indexing, using a training dataset whose distribution is similar to that of the population. The training phase can take a long time when the training set is large. The compact codes are then efficiently compared to the query points using an asymmetric approximate distance in the search phase. Using an inverted file system, PQ can efficiently search nearest neighbors on high-dimensional datasets. The inverted multi-index (IMI) proposed by Babenko and Lempitsky \cite{babenko2014inverted}, which replaces the standard quantizer in an inverted index with product quantization, obtains a denser subdivision of the search space. These methods are efficient at searching large datasets of high dimensionality and are now used and accelerated by GPU in Facebook's Faiss library \cite{johnson2019billion}. As the new approach proposed in this paper is also a vector-quantization-based technique, we will compare our results with the IVFADC version of PQ-based nearest-neighbor search. Nearest-neighbor graph methods are based on the idea that, when a query arrives, we start by calculating the distance from a random point to the query, and then search along the ``steepest descent direction'' of the distance between points along the path and the query. In practice, a graph structure is built in which points are vertices and edges connect each point to its friend points. For each point in the graph, the friend points are likely to be close to it. The query points are used to explore this graph using various strategies in order to get closer to their nearest neighbors. As an optimized graph must be built, these graph methods also have a training phase to build the graph, which takes a long time when the dataset is huge.
Malkov proposed an efficient and robust search algorithm using Hierarchical Navigable Small World (HNSW) graphs \cite{malkov2018efficient}. HNSW is one of the best practices of nearest-neighbor graph techniques so far. However, it is not a good idea to implement HNSW on GPU, which takes advantage of parallel computing, since the points must be accessed in a strict hierarchical order. We will compare the performance of our approach and HNSW with comparable computing resources. Faiss \cite{johnson2019billion} is a good solution that works on GPU, verified by the ANN-benchmark \cite{aumuller2017ann}. The performance of PQ methods on GPU has been optimized by Faiss. The library also integrates a CPU version of HNSW search. For both algorithms, the training and indexing steps for a large dataset take a long time, which is a crucial drawback for some real-time online data. They suffer from a painful trade-off among training time, recall/precision rate, and search speed. In the following sections we provide a novel approach with short preprocessing time, high recall/precision, and fast search speed. \section{Compress Vectors with XOR-friendly Binary Quantization} Given two floating-point vectors $ \bm{X} = (x_1, x_2, \cdots, x_N) $ and $ \bm{Y} = (y_1, y_2, \cdots, y_N) $ with $\Vert\bm{X}\Vert_2=\Vert\bm{Y}\Vert_2=1$, the cosine similarity between them is $$ \text{Similarity}(\bm{X},\bm{Y}) = \sum\limits_{i=1}^{N} x_i \cdot y_i $$ It requires $N$ multiplications and $(N-1)$ additions, resulting in intensive computational cost even though these operations can be parallelized by SIMD instructions. Besides, for floating-point vectors, the memory bandwidth can also limit the throughput when processing large-scale similarity computations. For example, the memory bandwidth of DDR4 2666 is 21.3 GB/s.
To solve these two problems, we propose a fast similarity search mechanism that quantizes 32-bit floating-point numbers to low-bit binary numbers and replaces high-complexity multiplications with XOR operations, which have low computational complexity. We first introduce the relationship between multiplication and the XOR operation. \subsection{Multiplication and XOR on simple sets} \par Consider two sets $G=\{+1, -1\}$ and $\bar{G}=\{0,1\}$. Define the operation on $G$ as ordinary multiplication ($\cdot$) and the operation on $\bar{G}$ as XOR ($\oplus$). The following proposition reveals the relationship between these two operations. \begin{proposition} \label{prop:groups} $(G, \cdot)$ and $(\bar{G}, \oplus)$ are isomorphic groups under the mapping $\sigma: G\rightarrow\bar{G}$, where \begin{equation} \label{eq:prop_1} \sigma(a)=\frac{1-a}{2}, \forall a\in G \end{equation} \end{proposition} The proposition can be verified directly by enumerating all possible operations. In these two groups, $(+1)$ and $0$ are the identity elements, while $(-1)$ and $1$ are their own inverses. Using Proposition \ref{prop:groups}, the multiplication on $G$ can be replaced by the XOR computation on $\bar{G}$ via the following formula: \begin{equation} \label{eq:mul_xor:2} \begin{aligned} a_1 \cdot a_2 &= \sigma^{-1}{({\bar{a}_1 \oplus \bar{a}_2})} \\ &= 1 - 2({\bar{a}_1 \oplus \bar{a}_2}) \end{aligned} \end{equation} Therefore, if all components of the vectors can be represented as combinations of $ a_i \in \{+1, -1\}$, then the high-complexity multiplication can be replaced by the low-complexity XOR computation. \subsection{XOR-Friendly Binary Quantization} To take advantage of the XOR computation, a special strategy is used to quantize floating-point numbers to binary numbers. For any real number $x\in(-1,+1)$, we use a mapping $f_B(\cdot)$ $(B\geq 1)$ to approximate $x$ within a subspace of $\mathbb{R}$. For brevity we write $x_B=f_B(x)$.
$f_B$ has the following representation: $$ \label{eq:product:0} f_B(x)=x_B = a_{B-1} \cdot \frac{1}{2} + \cdots + a_{1} \cdot \frac{1}{2^{B-1}} + a_0 \cdot \frac{1}{2^B} = \sum\limits_{i=0}^{B-1} \frac{1}{2^{B-i}} a_i $$ where $ a_i \in G,\ i\in\{0,1,...,B-1\} $. Specially we define $f_0(x)=0$. The value of each $a_i$ is then decided greedily by $$ \label{eq:product:how} a_{B-1-i} = sign(x-x_i),\ i\in\{0,1,...,B-1\}, \quad sign(x)= \left\{ \begin{aligned} 1 & & x\geq0 \\ -1 & & x<0 \\ \end{aligned} \right. $$ where $x_i$ denotes the partial approximation built from the $i$ most significant coefficients already determined, starting from $x_0=f_0(x)=0$. Similarly to binary representation, this approximation has some good properties. Let $\mathbb{N}^+$ be the set of all positive integers; then we have \begin{proposition} \label{prop:limit} $\forall x \in(-1,+1),\ \forall B\in \mathbb{N}^+,\ |x_B-x|\leq2^{-B}$, and $x_B$ converges uniformly to $x$: $\lim_{B\rightarrow+\infty} x_B = x.$ \end{proposition} The proof of this proposition is in Appendix \ref{sec:prof1}. When $ B \rightarrow +\infty $, $ x $ is represented exactly. Instead, if $ B $ is fixed to a finite number, $ x $ is represented approximately. In this case, if we take a fixed $B$ (the number of encoding bits), with the vector $\textbf{C} = [\frac{1}{2}, \frac{1}{2^2}, \cdots, \frac{1}{2^B}] $ as a fixed codebook, and denote $[a_{B-1}, \cdots, a_1, a_0]\cdot\textbf{C}$ as $(a_{B-1} \cdots a_1 a_0)_{(\cdot)}$, then every $x \in (-1, +1)$ can be encoded, with error at most $2^{-B}$, as \begin{equation} \label{eq:product:1} x = (a_{B-1} \cdots a_1 a_0)_{(\cdot)} \end{equation} where $ a_i \in G $. Next we show that, based on this quantization scheme, multiplication between floating-point numbers can be replaced by XOR.
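The greedy bit assignment above can be sketched as follows (an illustrative Python version of ours; the function name \texttt{quantize\_fb} is hypothetical):

```python
def quantize_fb(x, B):
    """Greedy sign quantization f_B: x in (-1, 1) -> B coefficients in {+1, -1}.

    a[0] is the most significant coefficient (weight 1/2); the partial
    approximation x_i is refined one bit at a time, as in Eq. (product:how).
    """
    a, approx = [], 0.0
    for i in range(B):
        bit = 1 if x - approx >= 0 else -1
        a.append(bit)
        approx += bit / 2 ** (i + 1)   # weight of the (i+1)-th bit is 2^-(i+1)
    return a, approx

# Proposition 3.2: the approximation error never exceeds 2^-B.
for x in [-0.99, -0.3, 0.0, 0.4142, 0.75]:
    for B in range(1, 10):
        _, xB = quantize_fb(x, B)
        assert abs(xB - x) <= 2 ** -B
```

The loop at the bottom checks the error bound $|x_B-x|\leq 2^{-B}$ numerically for a few sample values.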
Given the encoding bits $B_x, B_y$, the product of $x,y \in (-1, +1)$ can be calculated as \begin{equation} \label{eq:product:2} \begin{aligned} xy &= (a_{B_x-1} \cdots a_1 a_0)_{(\cdot)} (c_{B_y-1} \cdots c_1 c_0)_{(\cdot)} \\ &= \sum\limits_{i=0}^{B_x-1} \frac{1}{2^{B_x-i}} a_i \cdot \sum\limits_{i=0}^{B_y-1} \frac{1}{2^{B_y-i}} c_i = \frac{1}{2^{B_x+B_y}} \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} 2^{i+j} (a_ic_j) \\ \end{aligned} \end{equation} where $a_i,c_i \in \{+1, -1\}$. By Eq. \ref{eq:mul_xor:2}, Eq. \ref{eq:product:2} can be transformed into $$ \label{eq:product:3} \begin{aligned} xy &= \frac{1}{2^{B_x+B_y}} \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} 2^{i+j}\sigma^{-1}(\bar{a}_i \oplus \bar{c}_j)\\ &= \frac{1}{2^{B_x+B_y}} \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} \left(2^{i+j}-((\bar{a}_i \oplus \bar{c}_j) \ll (i+j+1))\right) \end{aligned} $$ where $\bar{a}_i,\bar{c}_i \in \{0, 1\}$ are the corresponding elements of $\bar{G}$ under the mapping of Proposition \ref{prop:groups}, and $\ll$ is the logical left shift ($a \ll b=a\cdot 2^b$). Thus, the multiplications are replaced by fast bitwise operations: XOR and bit shifts. For convenience of description, a new operator $\otimes$ is introduced. We define $\bar{x} = (\bar{a}_{B_x-1} \cdots \bar{a}_1 \bar{a}_0)_2$ and $ \bar{y} = (\bar{c}_{B_y-1} \cdots \bar{c}_1 \bar{c}_0)_2 $. Here $\bar{x}$ and $\bar{y}$ are integers represented by binary code. We name this kind of quantization XOR-Friendly Binary Quantization (XFBQ).
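The XOR-based scalar product can be checked numerically with a short, self-contained sketch (illustrative Python of ours; the helper names \texttt{quantize\_bits} and \texttt{xor\_product} are hypothetical):

```python
def quantize_bits(x, B):
    """Greedy quantization of x in (-1,1) into B signs a_i in {+1,-1},
    most significant first (weight 1/2). Illustrative helper."""
    signs, approx = [], 0.0
    for i in range(B):
        s = 1 if x - approx >= 0 else -1
        signs.append(s)
        approx += s / 2 ** (i + 1)
    return signs

def xor_product(x, y, Bx, By):
    """Recover the product of the quantized values via XOR and shifts,
    following xy = 2^-(Bx+By) * sum_{i,j} (2^(i+j) - ((a^c) << (i+j+1)))."""
    sigma = lambda a: (1 - a) // 2          # {+1,-1} -> {0,1}
    # signs are most-significant-first, so reverse: abar[i] has weight 2^i
    abar = [sigma(a) for a in reversed(quantize_bits(x, Bx))]
    cbar = [sigma(c) for c in reversed(quantize_bits(y, By))]
    total = 0
    for i in range(Bx):
        for j in range(By):
            total += (1 << (i + j)) - ((abar[i] ^ cbar[j]) << (i + j + 1))
    return total / 2 ** (Bx + By)
```

The result agrees exactly with the ordinary product of the quantized values, since only the representation of the arithmetic has changed.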
We introduce the new operation $\otimes$ to denote \begin{align*} \label{eq:product:4} \bar{x} \otimes \bar{y} &= (\bar{a}_{B_x-1} \cdots \bar{a}_1 \bar{a}_0)_2 \otimes (\bar{c}_{B_y-1} \cdots \bar{c}_1 \bar{c}_0)_2 \\ &= \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} ((\bar{a}_i \oplus \bar{c}_j) \ll (i+j)) \end{align*} The result of $\bar{x} \otimes \bar{y}$ can then be transformed back into the product $xy$: \begin{equation} \label{eq:product:6} \begin{aligned} xy &= \frac{1}{2^{B_x+B_y}} \left(\sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} 2^{i+j} - 2 \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} ((\bar{a}_i \oplus \bar{c}_j) \ll (i+j))\right) \\ &= \frac{1}{2^{B_x+B_y}} ((2^{B_x} - 1)(2^{B_y}-1) - 2 (\bar{x} \otimes \bar{y})) \end{aligned} \end{equation} Therefore, the multiplication of floating-point numbers can be optimized by the following steps: \begin{itemize} \item Given the encoding bits $B_x, B_y$, based on Equations \ref{eq:product:1} and \ref{eq:prop_1}, quantize the two operands $x, y \in (-1, +1)$ as $\bar{x} = (\bar{a}_{B_x-1} \cdots \bar{a}_1 \bar{a}_0)_2 $ and $\bar{y} = (\bar{c}_{B_y-1} \cdots \bar{c}_1 \bar{c}_0)_2 $. \item Calculate the result of $ \bar{x} \otimes \bar{y}$ using XOR operations. \item Use Equation \ref{eq:product:6} to calculate the product $xy$. \end{itemize} \subsection{Inner Product of Vectors with XFBQ} Based on the above quantization scheme for floating-point numbers, floating-point vectors can be quantized in an XOR-friendly way using the following scheme.
Given two N-dimension vectors $ \bm{X} = [x_0, x_1, \cdots, x_{N-1}]$ and $ \bm{Y} = [y_0, y_1, \cdots, y_{N-1}]$, $ x_k, y_k \in (-1, +1)$, with the encoding bits $B_x$ and $B_y$, we have \begin{equation} \label{eq:inner_product:0} \begin{aligned} x_k = (a_{(B_x-1)k} \cdots a_{1k} a_{0k})_{(\cdot)} &\Leftrightarrow \bar{x}_k = (\bar{a}_{(B_x-1)k} \cdots \bar{a}_{1k} \bar{a}_{0k})_2 \\ y_k = (c_{(B_y-1)k} \cdots c_{1k} c_{0k})_{(\cdot)} &\Leftrightarrow \bar{y}_k = (\bar{c}_{(B_y-1)k} \cdots \bar{c}_{1k} \bar{c}_{0k})_2 \end{aligned} \end{equation} where $a_{bk},c_{bk} \in \{+1, -1\}$ and $\bar{a}_{bk},\bar{c}_{bk} \in \{0, 1\}$. \par We denote $\sum_{k=0}^{N-1} \bar{x}_k \otimes \bar{y}_k$ by $\bar{\bm{X}} \otimes \bar{\bm{Y}}$. Using Eq. \ref{eq:product:6}, the result of $\bar{\bm{X}} \otimes \bar{\bm{Y}}$ can be transformed into the inner product by \begin{equation} \label{eq:inner_product:2} \bm{XY} = \frac{1}{2^{B_x+B_y}}(N(2^{B_x} - 1)(2^{B_y}-1) - 2(\bar{\bm{X}} \otimes \bar{\bm{Y}})) \end{equation} Furthermore, population count (POPCNT) operations are introduced to improve the performance of the inner product. Since \begin{equation} \label{eq:inner_product:3} \begin{aligned} \bar{\bm{X}} \otimes \bar{\bm{Y}} &= \sum\limits_{k=0}^{N-1} \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1}((\bar{a}_{ik} \oplus \bar{c}_{jk}) \ll (i+j)), \\ \end{aligned} \end{equation} notice that \begin{align*} \label{eq:inner_product:4} \sum\limits_{k=0}^{N-1} (\bar{a}_{ik} \oplus \bar{c}_{jk}) &= POPCNT((\bar{a}_{i(N-1)} \cdots \bar{a}_{i0} )_2
\oplus (\bar{c}_{j(N-1)} \cdots \bar{c}_{j0})_2) \end{align*} In order to utilize POPCNT operations and reduce the total number of instructions for faster calculation, the N-dimension quantized vectors are reconstructed as $B_x$($B_y$)-dimension vectors, \begin{equation} \label{eq:inner_product:5} \begin{aligned} \hat{\bm{X}} = [\hat{\bm{x}}_{B_x-1}, \cdots, \hat{\bm{x}}_1, \hat{\bm{x}}_0],&\text{ where }\hat{\bm{x}}_b = (\bar{a}_{b(N-1)} \cdots \bar{a}_{b0})_2 \\ \hat{\bm{Y}} = [\hat{\bm{y}}_{B_y-1}, \cdots, \hat{\bm{y}}_1, \hat{\bm{y}}_0],&\text{ where }\hat{\bm{y}}_b = (\bar{c}_{b(N-1)} \cdots \bar{c}_{b0})_2 \\ \end{aligned} \end{equation} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.5\textwidth]{encoding} \end{center} \caption[]{Preprocessing of the dataset in our method. The work is done in $O(BN)$ time. Encoding, transformation, and reconstruction can all be parallelized.} \label{fig:inner_product:0} \end{figure} The process of the reconstruction is shown in Figure \ref{fig:inner_product:0}. Then Eq. \ref{eq:inner_product:3} can be represented as \begin{equation}\label{eq:inner_product:6} \hat{\bm{X}} \times \hat{\bm{Y}} \triangleq \sum\limits_{i=0}^{B_x-1}\sum\limits_{j=0}^{B_y-1} (POPCNT(\hat{\bm{x}}_i \oplus \hat{\bm{y}}_j) \ll (i+j)) = \bar{\bm{X}} \otimes \bar{\bm{Y}}. \end{equation} The benefit of this storage scheme can be shown by an example. Suppose $\bm{x}$ and $\bm{y}$ are both 32-d vectors, and we set $B_x=B_y=3$. Without encoding, 32 multiplications are needed. In XFBQ, both vectors are encoded into 96 bits. Without this scheme we would need to issue $3*3*32=288$ XOR operations separately (9 per dimension), which takes a long time on instruction dispatch. If we store the data as $\hat{\bm{X}}$ and $\hat{\bm{Y}}$, then each $\hat{\bm{x}}_b$ and $\hat{\bm{y}}_b$ can be saved in a 4-byte integer, as all bits in the same integer share the same shift amount in the calculation.
Now both $\bm{x}$ and $\bm{y}$ are encoded into 3 integers each, and only $3*3=9$ XOR and POPCNT operations with corresponding shifts are needed. As POPCNT is faster than multiplication, and the number of instructions is decreased, we see a great speedup. Modern processing units also support POPCNT64, which performs the same operation on 8-byte integers. This instruction further improves the performance. Based on this storage scheme, XFBQ works better on higher-dimensional vectors. To sum up, calculating the inner product of two floating-point vectors can be optimized by the following steps: \begin{itemize} \item Given the encoding bits $B_x$ and $B_y$, based on Equations \ref{eq:inner_product:0} and \ref{eq:inner_product:5}, quantize and reconstruct the vectors as $\hat{\bm{X}}$ and $\hat{\bm{Y}}$. \item Based on Equation \ref{eq:inner_product:6}, calculate the result of $\hat{\bm{X}} \times \hat{\bm{Y}}$. \item Based on Equations \ref{eq:inner_product:2} and \ref{eq:inner_product:6}, obtain the inner product $\bm{X}\bm{Y}$. \end{itemize} \subsection{Complexity Analysis} \label{subsec:complexity} Given two N-dimension vectors $\bm{X}$ and $\bm{Y}$, where N is a multiple of 64 for convenience of description, these vectors can be quantized as $\hat{\bm{X}}$ and $\hat{\bm{Y}}$ by the above scheme, and bitwise operations can then be performed instead of multiplications for the inner product. As shown in Table \ref{tab:complexity:0}, the calculation of the inner product can be significantly optimized by the proposed scheme.
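The packed bit-plane inner product can be sketched end to end as follows (an illustrative, self-contained Python version of ours; arbitrary-precision integers stand in for the packed \texttt{uint64} words, \texttt{bin(...).count("1")} plays the role of POPCNT, and the helper names are hypothetical):

```python
def quantize_signs(x, B):
    """Greedy quantization of x in (-1,1): B signs in {+1,-1}, MSB first."""
    signs, approx = [], 0.0
    for i in range(B):
        s = 1 if x - approx >= 0 else -1
        signs.append(s)
        approx += s / 2 ** (i + 1)
    return signs

def pack(vec, B):
    """Build the bit planes: bit k of plane b is sigma(a_{bk}) in {0,1}."""
    planes = [0] * B
    for k, x in enumerate(vec):
        signs = quantize_signs(x, B)            # signs[0] is a_{B-1} (MSB)
        for b in range(B):                      # plane b carries weight 2^b
            abar = (1 - signs[B - 1 - b]) // 2  # sigma: {+1,-1} -> {0,1}
            planes[b] |= abar << k
    return planes

def xfbq_inner(X, Y, Bx, By):
    """XOR + popcount + shift, then recover the inner product of the
    quantized vectors via XY = 2^-(Bx+By) (N(2^Bx-1)(2^By-1) - 2 acc)."""
    N = len(X)
    px, py = pack(X, Bx), pack(Y, By)
    acc = 0
    for i in range(Bx):
        for j in range(By):
            acc += bin(px[i] ^ py[j]).count("1") << (i + j)
    return (N * (2 ** Bx - 1) * (2 ** By - 1) - 2 * acc) / 2 ** (Bx + By)

X = [0.25, -0.5, 0.75, -0.125]
Y = [0.5, 0.5, -0.25, 0.375]
result = xfbq_inner(X, Y, 3, 3)
```

The result matches the plain inner product of the quantized components exactly; only the quantization itself introduces error.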
\begin{table}[h] \caption{Analysis of computational complexity} \centering \begin{tabular} {c c c} \hline Calculation & Operations (count) & Memory (bytes) \\ \hline \hline $\bm{XY}$ & \parbox{3cm}{\centering N multiplications and (N-1) additions} & 2N * sizeof(float) \\ [2ex] \hline \parbox{1.5cm}{\centering $\hat{\bm{X}} \times \hat{\bm{Y}}$ } & \parbox{3cm}{\centering $B_xB_y * (N / 64)$ XOR and POPCNT, $(B_xB_y * (N / 64) - 1)$ bitwise shifts and additions } & \parbox{2cm}{\centering $(B_x+B_y)(N / 64)$ * sizeof(uint64)} \\ [25pt] \hline \end{tabular} \label{tab:complexity:0} \end{table} \subsection{Error Analysis of Inner Product Calculation} \label{sec:err} Using the quantization method provided above, a 4-byte floating-point number can easily be transformed into and stored in 3 or 4 bits, meaning we can store 8-11 times as much data as when storing the original values. We admit there is a large loss in precision, but with the following tricks, the loss in the final precision of the similarities is acceptable. \par Suppose $\bm{x}, \bm{y}$ are two vectors of the same dimension, and $\bm{x'}$ and $\bm{y'}$ are approximations of them. The similarity is then calculated as \begin{equation}\label{calcsim} \begin{aligned} |\bm{x}||\bm{y}|\cos(\bm{x},\bm{y})&=sim(\bm{x},\bm{y})\\ &\approx sim(\bm{x'},\bm{y'})=\bm{x'}\cdot \bm{y'}=|\bm{x'}||\bm{y'}|\cos(\bm{x'},\bm{y'}). \end{aligned} \end{equation} XFBQ can be seen as an approximation method. Using this formula, the similarity error introduced by quantization can be split into two parts: the length error and the angle error. \subsubsection{Length Error} Each component of a vector is approximated to the nearest point representable in a few bits, which introduces an error in each dimension. As these errors accumulate, the length of the stored quantized vector differs from the real value. That is the length error.
Fixing the number of bits used for one component, this error can be large when most of the components are small, which is exactly the case when a vector has many components. The good news is that in many fields, such as speech recognition and NLP, the data vectors are high-dimensional, and we can expect the vectors to be small in every dimension with high probability. On the popular embedding-vector dataset GloVe \cite{pennington2014glove}, all components are no larger than 0.5 after vector normalization. Components are even smaller in many fields needing nearest-neighbor search, such as fingerprint recognition and facial recognition. These findings make the following assumption reasonable: \newtheorem{assumption}{Assumption}[section] \begin{assumption} $\forall \bm{x}=(x_1,...,x_N)$, $\exists \epsilon\in(0, 1/2]$, $s.t.$\\ $\max_{i\in\{1,...,N\}} |x_i|<\epsilon$. \end{assumption} Even in cases where some vectors have large values in specific dimensions, we can apply a linear transformation using an orthogonal matrix $R$ \cite{gong2012angular} to make the assumption hold. Generally, $\epsilon$ shrinks as $N$ grows. We can expect $1/\epsilon \approx \log_2 \sqrt{N}$ (but it should be calculated specifically for every dataset). Under this assumption, the vectors can be scaled up by $scale=1/\epsilon$ before quantizing them. When there is no loss of precision, the inner product is scaled up by $1/\epsilon^2$ and the similarity results keep their order. Scaling up makes the data more scattered in $[-1,1]$ and reduces the length error introduced by the quantization. We tested our quantization approach on 128-d and 1000-d random data and the SIFT1M dataset. The results show that after quantization, the new vector length is about 5-10\% larger than that of the scaled vector, and the spread of lengths across different quantized vectors is about 5\% of their average length, making the error tolerable.
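The effect of pre-scaling on the length error can be illustrated with a small numerical experiment (our own sketch, not the paper's measurement code; \texttt{quantize\_val} is a hypothetical helper implementing $f_B$, and $scale=1/\epsilon$ follows the text):

```python
import math
import random

def quantize_val(x, B):
    """Greedy f_B quantization of x in (-1,1); returns the approximated value."""
    approx = 0.0
    for i in range(B):
        s = 1 if x - approx >= 0 else -1
        approx += s / 2 ** (i + 1)
    return approx

def rel_length_error(vec, B, scale=1.0):
    """Relative error of the vector length after (optionally scaled) quantization."""
    q = [quantize_val(scale * x, B) for x in vec]
    true_len = scale * math.sqrt(sum(x * x for x in vec))
    quant_len = math.sqrt(sum(v * v for v in q))
    return abs(quant_len - true_len) / true_len

random.seed(1)
# small-component vector, matching the assumption above with eps = 0.25
vec = [random.uniform(-0.25, 0.25) for _ in range(128)]
err_plain = rel_length_error(vec, B=3)            # quantize raw components
err_scaled = rel_length_error(vec, B=3, scale=4)  # scale by 1/eps first
```

With $B=3$ every unscaled component collapses onto $\pm 1/8$, so the scaled variant should show a noticeably smaller relative length error.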
\subsubsection{Angle Error} As the quantized vectors no longer point in the direction of the original ones, when we use the new vectors to calculate the cosine value in Eq. \ref{calcsim}, the value differs from the real value. This is the angle error. We can accept it when the change of angle is small while searching for the top $K$ nearest neighbors: the cosine function has a small derivative when its value is around 1, so the cosine value is not influenced much when the two vectors intrinsically point in similar directions. For example, we find that when $N=128$, the angle between the primitive and quantized vectors can be as much as 30 degrees, which seems unacceptable. Using the scaling-up trick proposed for the length error, we can decrease the angle error from the quantization as well. After scaling up, the typical difference between the angle of $x, y$ and the angle of $x', y'$ is just 5-10 degrees or less. Experiments in higher dimensions reinforce this finding. According to our trials, this angle error finally adds about 5\% relative error to the top $K$ nearest vectors when $K\leq 1000$. In the discussion above, we used a conservative $scale$ parameter, chosen so that no scaled component of any vector exceeds 1. In most real-world problems, we can relax this restriction so that it holds for only 98\% of all components, or even fewer. Experimental results show that a properly enlarged $scale$ can accelerate calculation while keeping the error acceptable. \subsection{Control Selection Error by Extra Distance} \label{sec:extra_dis} The methods proposed in Section \ref{sec:err} can already help find similar vectors. Next we hope to further reduce the impact of these errors and make the result precise. First, we show how the errors influence the distance we get. Eq. \ref{eq:inner_product:2} gives the relationship between the similarity ($\bm{X}\bm{Y}$) and the distance ($\bar{\bm{X}}\otimes\bar{\bm{Y}}$) we calculate.
In particular, when we quantize the query with 4 bits and the documents with 3 bits, the relationship is $$Distance = (constant) - 64 \cdot ScaledSimilarity.$$ There is a linear relationship between the distance and the similarity. If the error is considered, then the scaled similarity (SS) can be written as \begin{equation*} \begin{split} SS = (similarity + error_{angle})\cdot (scale+ error_{query\ length})\\ \cdot (scale + error_{doc\ length}) \end{split} \end{equation*} When we fix the query and find the top $K$ similar documents, the error in the query length is fixed, and only the length error of the documents and the angle error vary. The length error stays within 10\% of the scale, and the angle error can be assumed to be no larger than $\pm0.1$ for the top documents. In total, the fluctuation range of the distance is no larger than 15\% of the whole range of the distance. \subsubsection{Extra distance method} \par Our strategy for improving search accuracy is as follows. Given a query, distances from the query to candidate documents are calculated by the quantization algorithm, and the minimum distances are found to retrieve the top $K$ similar documents (gray part in Figure \ref{fig:extra_dis}). In addition, the $K$th minimum distance can optionally be increased by an ``extra distance'' to avoid missing good results because of the error. The documents whose distances are not greater than this threshold are recalled (gray and green parts in Figure \ref{fig:extra_dis}), and the floating-point similarities between these documents and the query are then calculated. Based on these similarities, the top $K$ results are finally returned. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{extra_distance.png} \caption{Sketch of the usage of the extra distance. The gray area includes the top $K$ vectors most similar to the query, and the green area includes the following vectors with distances no larger than the top $K$ distance plus the extra distance.
Vectors in both areas are sent to the refinement step.} \label{fig:extra_dis} \end{figure} \subsubsection{Choosing the extra distance} \par In numerical experiments, we fixed one vector (as the query) and randomly generated massive numbers of vectors to calculate the pairwise similarity by both the exact value and our quantization method. We normalize both sets of similarities so that the results of both methods have zero mean and unit variance. A sample of the distributions after normalization is shown in Figure \ref{fig:sim_dist}. Notice that they are close to each other in distribution. This suggests that the boundaries of the true top $K$ and the calculated top $K$ are close after normalization. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{similar_dist.png} \caption{A sample of the distributions of the exact similarity and the quantized similarity after normalization. Practically they have the same distribution, but the components are arranged in totally different ways.} \label{fig:sim_dist} \end{figure} \par Assume that the entire similarity space is divided into several buckets; then a vector with high similarity drops by no more than a few buckets with high probability. If we find the calculated top $K$ bound, also consider the vectors in those lower buckets, and sort the original similarities of these vectors, the resulting top $K$ will have a high accuracy/recall rate. \par As analyzed above, the fluctuation range of the distance is no larger than 15\% of the whole range of the distance. This theoretical upper bound helps determine the upper bound for the extra distance. In applications with 4-bit $\times$ 3-bit quantization, we believe that considering 5\% to 10\% of the SS range is enough for high recall rates ($>99\%$).
\section{XFBQ based K-NN Search Algorithm} \begin{algorithm}[!htbp] \caption{Fast K-NN Search Algorithm in CUDA}\label{alg:select} \algnewcommand{\LeftComment}[1]{\State \(\triangleright\) #1} \algblockdefx{ParallelFor}{EndParallelFor} [1]{\textbf{parallel in CUDA for} #1 \textbf{do}} {\textbf{end for}} \begin{algorithmic}[1] \Function{K-Select}{$[\bm{X}_1, \cdots, \bm{X}_n], \bm{Q}, K, extraDistance$} \LeftComment Quantize all document vectors and the query vector (with a scale factor) \State $[\bm{\bar{X}}_1, \cdots, \bm{\bar{X}}_n], \bm{\bar{Q}} = \Call{Quantize}{[\bm{X}_1, \cdots, \bm{X}_n], \bm{Q}}$ \State \LeftComment Calculate distances between documents and the query \State $D = [d_1, \cdots, d_n]$ \Comment initialize distances \ParallelFor {$i \gets 1 : n$} \State $d_i = \Call{CalcDistance}{\bm{\bar{X}}_i, \bm{\bar{Q}}}$ \EndParallelFor \State \LeftComment Histogram distances and select candidates by the $K$th minimum distance \State $H \gets [h_1, \cdots, h_M]$ \Comment initialize bins of the histogram \ParallelFor {$i \gets 1 : n$} \State \Call{Add}{$h_{d_i}$, 1} \EndParallelFor \State $maxDistance \gets (\arg\min_j\sum_{i=1}^j h_i \geq K) + extraDistance$ \Comment fix the boundary of top $K$ candidates \State $L \gets empty\ list$ \Comment initialize index array of candidates \ParallelFor{$index \gets 1 : n$} \If{$d_{index} \leq maxDistance$} \State Append $index$ to $L$ \EndIf \EndParallelFor \State \LeftComment Refine candidates \State $\bm{S} \gets empty\ list$ \ParallelFor{$i \gets 1 : L.length$} \State $similarity =$ \Call{CalcSimilarity}{$\bm{X}_{l_i}, \bm{Q}$} \State Append $(l_i, similarity)$ to $\bm{S}$ \EndParallelFor \State $\bm{S} \gets$ \Call{SortBySimilarity}{$\bm{S}$} \Comment sort candidates by similarities \State \Return $[s_1, \cdots, s_{K}]$ \EndFunction \end{algorithmic} \end{algorithm} As shown in Algorithm \ref{alg:select}, we perform the $K$-NN search in the following steps: \begin{itemize} \item Quantization.
Quantize all document vectors and the query vector. \item Distance Calculation. Calculate distances between the quantized query vector and all quantized document vectors. \item Histogram and select. Histogram the distances and find the $K$th minimum distance as mentioned in Section \ref{sec:extra_dis}. Based on this distance, select the candidate documents. \item Refine. Sort the candidate documents based on exact similarities calculated from the original floating-point vectors. This step can also be done on the CPU with little influence on overall performance, if GPU memory is limited. \end{itemize} \subsection{Details of the Selection Algorithm} The details of these steps are described as follows.\\\\ \textbf{Quantization.} The quantization of an $N$-dimensional floating-point vector is optimized with \textit{Advanced Vector Extensions} (AVX) intrinsics and some ingenious bit operations on the CPU. The quantization process consists of the following two steps. The first step quantizes the floating-point vectors in a \textit{Single Instruction Multiple Data} (SIMD) fashion, since the quantization of each dimension is independent. In the second step, the required bits are extracted from the quantized vectors and stored into \textit{uint64\_t} arrays for efficient access. By bitwise AND with carefully designed bit masks and a few other bitwise operations, the bits of eight quantized values can be extracted simultaneously.\\\\ \textbf{Distance Calculation.} In this step, distances between the quantized query vector and all quantized document vectors are calculated on the GPU. Beforehand, all quantized document vectors are reorganized into bundles of size 32 for warps (the units of execution on a GPU) and rearranged into a column-major access pattern. With this layout of the quantized document vectors, each warp can access global memory in a highly efficient way.
The calculation of the distance is implemented with bitwise XOR, the CUDA integer intrinsic \textit{\_\_popcll}, and bitwise shift operations, as shown in Equation \ref{eq:inner_product:6}. Distances are stored as the integer values $\bar{\bm{X}} \otimes \bar{\bm{Y}}$, since they are equivalent to the real inner products up to a linear transformation.\\\\ \textbf{Histogram and Select.} The top $K$ documents most similar to the query need to be selected from the unordered distances obtained above. The traditional sequential method maintains a min-heap to obtain the top $K$ results, but this is not well suited to parallel computing. Instead, a two-step parallel approach is used here. First, a histogram of the distances is built in parallel to find the $K$th minimum distance over all vectors. Since the integer distances stored in the previous step take only a limited number of unique values, each unique value can occupy one bin of the histogram, and this step can be done efficiently. In the second step, using the $K$th minimum distance (to which we often add an extra distance, as mentioned in Section \ref{sec:extra_dis}), the candidate top $K$ documents are selected from all documents. \\\\ \textbf{Refine.} To improve the recall rate, the selected candidate results are refined using exact similarities calculated from the floating-point vectors. This process can be done on either the CPU or the GPU. For the GPU implementation, considering warp execution and the 128-byte alignment of the L1 cache, vectors are grouped for calculation in a manner similar to loop unrolling. After the calculation, the candidate documents are sorted by their exact similarities to obtain the top $K$ nearest neighbors. \subsection{Complexity Analysis} \subsubsection{Time Complexity} In the preprocessing step, we quantize all the document vectors from $N$ floating-point numbers into a space of $B_d$ bits $\times N$ on the CPU and copy the quantized vectors to the GPU. In practice we choose $B_d = 3$. The time complexity here is $O(nN)$.
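The packed-word inner product underlying this step can be illustrated in pure Python. For quantized values decomposed into bit planes, $x = \sum_i 2^i x_i$ and $y = \sum_j 2^j y_j$, the inner product over 64 dimensions packed into 64-bit words is $\sum_{i,j} 2^{i+j}\,\mathrm{popcount}(x_i \wedge y_j)$; the actual XFBQ kernel uses an XOR/popcount form (Equation \ref{eq:inner_product:6}, not reproduced here), so the AND-based decomposition below is an illustrative stand-in with the same bit-plane structure and operation count.

```python
import random

def pack_bit_planes(values, bits):
    """Pack 64 unsigned quantized values into `bits` 64-bit bit-plane words."""
    assert len(values) == 64
    planes = []
    for b in range(bits):
        word = 0
        for dim, v in enumerate(values):
            word |= ((v >> b) & 1) << dim   # bit b of dimension `dim`
        planes.append(word)
    return planes

def popcount_inner_product(xp, yp):
    """Inner product of two bit-plane-packed 64-d vectors via AND + popcount."""
    total = 0
    for i, xi in enumerate(xp):
        for j, yj in enumerate(yp):
            total += (1 << (i + j)) * bin(xi & yj).count("1")
    return total

rng = random.Random(0)
x = [rng.randrange(8) for _ in range(64)]    # 3-bit document tokens
y = [rng.randrange(16) for _ in range(64)]   # 4-bit query tokens
xp, yp = pack_bit_planes(x, 3), pack_bit_planes(y, 4)

# 64 multiply-adds collapse into B_d * B_q popcounts on packed words.
assert popcount_inner_product(xp, yp) == sum(a * b for a, b in zip(x, y))
```

Note that the cost is $B_d B_q$ word operations per 64 dimensions, which matches the operation count used in the complexity analysis below.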
In the searching step, we first calculate the distances between the $n$ quantized document vectors and the query vector. Benefiting from the new representation, we can apply the special XOR operation to 64 pairs of tokens and accumulate them at the same time, using $(B_dB_q + B_d + B_q)$ bit operations and $(B_dB_q - 1)$ additions. Here $B_d$ and $B_q$ denote how many bits the document and query vectors are quantized into, respectively. Therefore, the time complexity here is $O(\frac{B_dB_q}{64}nN)$. As $B_d$ and $B_q$ are fixed after preprocessing, this step can be seen as $O(nN)$ with a much smaller constant than the brute-force approach. Next, in the histogram and select step, every distance contributes to its corresponding histogram bin on the GPU in parallel, and the time cost does not exceed the size of the bin with the most distances --- $O(n)$ in the worst case, but $O(n/\text{Distance Scale})$ on average. Here Distance Scale is a number related to the scale introduced in Section \ref{sec:err}. Generally speaking, the time spent here is negligible compared to the distance calculation. Finding the boundary of the top $K$ candidates takes $O(\text{Distance Scale})$ time, and finding the top $K$ candidate vectors on the GPU takes another $O(K)$ time. Finally, in the refine step, since $O(K)$ candidates are taken, $O(KN + K\log K)$ time is needed to calculate the exact similarities and sort them. Overall, searching takes $O(\frac{B_dB_q}{64}nN)$ time. It can be at least 5 - 6 times faster than the brute-force approach using the same computing resources. \subsubsection{Space Complexity} Besides storing the vector information in $O(nN)$ space on the CPU, we need an extra $O(\frac{B_d}{64}nN)$ space to store the quantized vectors on the GPU. The distance results are negligible compared to the vector storage. Details of the storage are given in Section \ref{subsec:complexity}.
Therefore, with the same space resources on the GPU, our approach can handle roughly 10x as many data points as the brute-force approach when all data are stored on the GPU from the very beginning. \section{Experiments Results} In this section, we compare our approach with other popular methods on different public datasets. The baseline is a brute-force similarity calculation implemented by ourselves that only takes advantage of the parallelization of GPUs. We also compare with state-of-the-art methods implemented in Faiss, including HNSW and IVF-HNSW on CPU, and IVF Flat and IVF Product Quantization on GPU. All programs are run under Ubuntu 16.04 LTS with 20 Intel Core i7-6950X CPU @ 3.00GHz and 1 - 4 Nvidia Titan V GPUs. As we did not optimize our program for multi-query requests, we report results for single-query requests here. The metrics we use for comparison are queries per second (QPS) and precision at top 1/10/100/1,000. Here we define $$\text{Precision@i} = |\text{(Calculated top K)}\cap\text{(Real top K)}|/K.$$ We first run experiments on synthetic datasets to choose the parameters $B_d$ and $B_q$ that make the search fast and accurate. When $B_d$ and $B_q$ are no larger than 2, the precision of the result is poor. $B_d, B_q \in \{3,4\}$ strikes a balance between speed and accuracy. When $B_d=B_q=3$, assuming sizeof(uint64) = 8 bytes and sizeof(float) = 4 bytes, Table \ref{tab:complexity:0} shows that XFBQ based inner product calculation costs only 28\% of the instructions and 9\% of the space of the brute-force algorithm. When $B_d=3$ and $B_q=4$, 37\% of the instructions and 11\% of the space are used compared to the brute-force algorithm. We choose two different open datasets for testing. The test purposes differ between these datasets. \textbf{Tencent AILab word embedding dataset} \cite{song2018directional} has 8,824,330 records of Chinese and English words. Each record is a 200-d vector.
On this dataset we aim to test the speed of XFBQ based inner product calculation and the performance of our approach on a single GPU. We randomly take 24,330 records of the dataset as queries and keep the others as the dictionary. In our approach we choose $B_d=3, B_q=4$ as a common setting, and $scale=2.95$ based on the range of the records. Precision is controlled by setting different extra distances (from 0 to 20). Figure \ref{fig:speed} shows that our approach is on average 10 times faster than common parallel computing for inner product calculation. Results on the precision of $k$-NN search are shown in Figure \ref{fig:AILabResult}. IVF-PQ \cite{jegou2010product} provides a fast but low-precision search for nearest neighbors. It also spends considerable time on training. HNSW \cite{malkov2018efficient} also takes thousands of CPU seconds on training, and its efficiency drops quickly when the required $K$ increases. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{compare} \caption{Time cost of calculating inner products with and without XFBQ. XFBQ saves 90\% of the time cost.} \label{fig:speed} \end{figure} \begin{figure}[!htbp] \subfigure[Results on the high-precision part]{ \begin{minipage}[b]{0.25\textwidth} \centering \includegraphics[width=\linewidth]{AILabPlot} \label{fig:ailab:side:a} \end{minipage} } \subfigure[Results of the IVF-PQ approach. Different curves with the same label show results for different PQ code lengths, 8/20/40 bytes from left to right.]{ \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[width=\linewidth]{AILab_IVFPQ} \label{fig:ailab:side:b} \end{minipage} } \caption[]{Queries per second as a function of precision at different top-$K$ levels on the Tencent AI Lab dataset. The IVF-PQ approach run by Faiss is shown in a separate figure due to the low precision it reaches. Our approach based on binary quantization outperforms all other state-of-the-art systems when high precision is required.
The efficiency is not influenced heavily when $K$ changes.} \label{fig:AILabResult} \end{figure} \begin{table}[!htbp] \caption{Preprocessing Time on Tencent AI Lab Dataset} \centering \begin{tabular}{lcc} \toprule Method & Resource Usage & \parbox{2cm}{Preprocessing Time(s)} \\ \midrule Our approach & 1 CPU + 1 GPU & 17 \\ IVF3072FLAT & 1 CPU + 1 GPU & 18 \\ IVF65536PQ8/20/40 & 1 CPU + 1 GPU & 440 - 800 \\ HNSW20/40/50 & 20 CPU & 480 - 600 \\ \bottomrule \end{tabular} \label{tab:AILabTrain} \end{table} When high accuracy is a must, our approach has the highest QPS using only one GPU. For the whole search process, our approach is 6 times faster than the brute-force approach with an optimized parallel implementation on GPU, and 1.5x - 2x faster (measured at the same precision) than the IVF3072 Flat search implemented by Faiss on GPU. Our approach also takes the least preprocessing time, similar to the IVF Flat approach, as shown in Table \ref{tab:AILabTrain}. The space used by our approach is similar to that of HNSW. The results above show that our approach can replace other state-of-the-art methods for $k$ nearest neighbor search tasks that require high precision and a quick start.\\\\ \textbf{Deep100M} consists of the first 100 million vectors of the dataset Deep1B \cite{babenko2016efficient}, which has 1 billion CNN representations of images with dimension 96. We design a test on only 100M records to avoid extremely long training times for the compared approaches. The query size is 10,000. We tested the HNSW approach with 20 CPU cores. The other approaches are tested with 1 CPU and 1 or 4 GPUs in order to show the performance of our algorithm on multiple devices. In our approach we again choose $B_d=3$ and $B_q=4$, but set $scale=2.0$ to fit the data. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{Deep100MPlot2} \caption{Queries per second as a function of precision at different top-$K$ levels on the Deep100M dataset.
The precision axis is rescaled with a lower limit of 0.9 to show the curve for the IVF-PQ method on some of the plots, as it cannot reach high precision on all tasks. No green points are shown in the plot because that approach cannot reach such a high precision.} \label{fig:Deep100M} \end{figure} The result is shown in Figure \ref{fig:Deep100M}. The scale of the precision axis is different from that in Figure \ref{fig:ailab:side:a}, as we try to show the performance of the best among all IVF-PQ variants. The improvement in search efficiency shows that our approach scales well to multiple GPUs. When more devices are provided, the QPS of our approach increases linearly. In other words, our approach can be deployed on a distributed system. Our approach can handle 2.5 billion tokens per gigabyte of video memory, so an Nvidia P40 with 24GB memory can handle 60 billion tokens, which exceeds the size of many real-time search problems. Our approach keeps a stable performance on all requests and is the fastest when the precision is over 99\%, using 4 GPUs. Compared to HNSW, we also have much lower memory usage. \begin{table}[!htbp] \centering \caption{Preprocessing Time on Deep100M Dataset. For our approach, the time is spent on encoding and copying to the GPU. For the others, only the time for training and adding the index is recorded.} \begin{tabular}{lcc} \toprule Method & Resource Usage & \parbox{2cm}{Preprocessing Time(s)} \\ \midrule Our approach & 1 CPU + 1 GPU & 57 \\ Our approach & 1 CPU + 4 GPU & 20 \\ FaissIVF65536PQ48 & 1 CPU + 1 GPU & 1424 \\ HNSW32 & 20 CPU & 6025 \\ \bottomrule \end{tabular} \label{table:Deep100M} \end{table} The 1-GPU results in Figure \ref{fig:Deep100M} show that our approach can still be limited by its heavy demand for computing power. This is its main drawback. As our approach focuses only on how to calculate the similarity, it can be further improved by combining it with other mature nearest-neighbor search techniques that reduce the search area.
For example, Locality Sensitive Hashing can be used in preprocessing, making the search non-exhaustive and eliminating most of the calculation. In this way the search will be $n$ times faster than the current speed when only $1/n$ of the samples are considered. Also, our techniques can be deployed on an inverted file system. Compared to the IVF-PQ structure, we can provide higher precision with lower preprocessing time. Therefore, our approach has great potential for accelerating search. \section{Conclusion} In this work, we have presented a new approach for performing efficient $k$ nearest-neighbor search with the cosine similarity metric on GPU. We propose a new binary quantization method (XFBQ) to compress the calculation. This technique is combined with a special $k$-selection method based on the calculated distances. Overall, our techniques provide significant reductions in preprocessing time compared to other popular approximate nearest-neighbor search algorithms and maintain state-of-the-art search efficiency, especially when high accuracy is needed. Since most of our work concerns accelerating the similarity calculation, our approach can be further combined with other popular techniques that reduce the search space, such as locality sensitive hashing and inverted file systems, to obtain an even faster speed. As a single high-performance GPU can handle the calculation work for hundreds of millions of vector products, CPU resources can be freed for work of higher complexity. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \seclab{intro} Inverse problems are ubiquitous in science and engineering. Perhaps the most popular family of inverse problems is to determine a set of parameters (or a function) given a set of indirect observations, which are in turn provided by a parameter-to-observable map plus observation uncertainties. For example, if one considers the problem of determining the heat conductivity of a thermal fin given measured temperature at a few locations on the thermal fin, then: i) the desired unknown parameter is the distributed heat conductivity, ii) the observations are the measured temperatures, iii) the parameter-to-observable map is the mathematical model that describes the temperature on the thermal fin as a function of the heat conductivity; indeed the temperature distribution is a solution of an elliptic partial differential equation (PDE) whose coefficient is the heat conductivity, and iv) the observation uncertainty is due to the imperfection of the measurement device and/or model inadequacy. The Bayesian inversion framework refers to a mathematical method that allows one to solve statistical inverse problems taking into account all uncertainties in a systematic and coherent manner. The Bayesian approach does this by reformulating the inverse problem as a problem in statistical inference, incorporating uncertainties in the observations, the parameter-to-observable map, and prior information on the parameter. In particular, we seek a statistical description of all possible (set of) parameters that conform to the available prior knowledge and at the same time are consistent with the observations. The solution of the Bayesian framework is the so-called posterior measure that encodes the degree of confidence on each set of parameters as the solution to the inverse problem under consideration. Mathematically the posterior is a surface in high dimensional parameter space. 
The task at hand is therefore to explore the posterior by, for example, characterizing the mean, the covariance, and/or higher moments. The nature of this task is to compute high dimensional integrals, for which most contemporary methods are intractable. Perhaps the most general method to attack these problems is the Markov chain Monte Carlo (MCMC) method, which shall be introduced in subsequent sections. Let us now summarize the content of the paper. We start with the description of the statistical inverse problem under consideration in Section \secref{infiniteBayes}. It is an inverse steady-state heat conduction problem governed by an elliptic PDE. We postulate a Gaussian measure prior on the parameter space to ensure that the inverse problem is well-defined. The prior itself is a well-defined object whose covariance operator is the inverse of an elliptic differential operator and whose mean function lives in the Cameron-Martin space of the covariance. The posterior is given by its Radon-Nikodym derivative with respect to the prior measure, which is proportional to the likelihood. Since the RMHMC simulation method requires the gradient, Hessian, and the derivative of the Fisher information operator, we discuss, in some depth, how to compute the derivatives of the potential function (the misfit functional) with PDE constraints efficiently using the adjoint technique in Section \secref{adjoint}. In particular, we define a Fisher information operator and show that it coincides with the well-known Gauss-Newton Hessian of the misfit. We next present a discretization scheme for the infinite-dimensional Bayesian inverse problem in Section \secref{FEM}. Specifically, we employ a standard continuous $H^1$-conforming finite element method (FEM) to discretize both the likelihood and the Gaussian prior. We choose to numerically compute the truncated Karhunen-Lo\`eve expansion, which requires one to solve an eigenvalue problem with a fractional Laplacian.
In order to accomplish this task, we use a matrix transfer technique (MTT) which leads to a natural discretization of the Gaussian prior measure. In Section \secref{HMC}, we describe the Riemannian manifold Hamiltonian Monte Carlo (RMHMC) method and its variants at length, and its application to our Bayesian inverse problem. Section \secref{lowrank} presents a low rank approach to approximate the Fisher information matrix and its inverse efficiently. This is possible due to the fact that the Gauss-Newton Hessian, and hence the Fisher information operator, is a compact operator. Various numerical results supporting our proposed approach are presented in Section \secref{results}. We begin this section with an extensive study and comparison of Riemannian manifold MCMC methods for problems with two parameters, and end the section with a $1025$-parameter problem. Finally, we conclude the paper in Section \secref{conclusions} with a discussion of future work. \section{Problem statement} \seclab{infiniteBayes} In order to clearly illustrate the challenges arising in PDE-constrained inverse problems for MCMC based Bayesian inference, we consider the following heat conduction problem governed by an elliptic partial differential equation in the open and bounded domain $\Omega \subset \R^n$: \begin{align*} -\Div\LRp{e^\u\Grad \w} &= 0 & \text{ in } \Omega \\ -e^\u\Grad \w \cdot \mb{n} &= Bi \,\w &\text{ on } \partial \Omega \setminus \Gamma_{R}, \\ -e^\u\Grad \w \cdot \mb{n} &= -1 &\text{ on } \Gamma_{R}, \end{align*} where $\w$ is the forward state, $\u$ the logarithm of the distributed thermal conductivity on $\Omega$, $\mb{n}$ the unit outward normal on $\pOmega$, and $Bi$ the Biot number. In the forward problem, the task is to solve for the temperature distribution $\w$ given a description of the distributed parameter $\u$. In the inverse problem, the task is to reconstruct $\u$ given some available observations, e.g., temperatures observed at some parts/locations of the domain $\Omega$.
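To give a concrete sense of the parameter-to-observable map, the following is a minimal one-dimensional finite-element sketch of the forward problem above, with $\Gamma_R = \{x=0\}$ and Robin boundary at $x=1$: solve $\int e^\u \w' v'\,dx + Bi\,\w(1)v(1) = v(0)$ and read off noisy pointwise temperatures. The mesh size, $Bi$, the observation points, and the noise level are illustrative choices, not those used in the paper.

```python
import numpy as np

def forward_map(u, Bi=0.1, n=101):
    """Solve the 1-D thermal-fin problem with P1 elements on [0,1].

    u: log-conductivity, one value per element (length n-1).
    Weak form: int e^u w' v' dx + Bi*w(1)*v(1) = v(0) (unit influx at x=0).
    """
    h = 1.0 / (n - 1)
    k = np.exp(u)                          # conductivity on the n-1 elements
    A = np.zeros((n, n))
    for e in range(n - 1):                 # assemble stiffness int e^u w' v'
        ke = k[e] / h
        A[e, e] += ke; A[e + 1, e + 1] += ke
        A[e, e + 1] -= ke; A[e + 1, e] -= ke
    A[-1, -1] += Bi                        # Robin term Bi * w(1) v(1)
    b = np.zeros(n); b[0] = 1.0            # unit influx on Gamma_R = {x=0}
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
n = 101
u_true = 0.5 * np.sin(2 * np.pi * np.linspace(0, 1, n - 1))
w = forward_map(u_true)

obs_idx = np.linspace(0, n - 1, 5, dtype=int)   # K = 5 observation points x_j
sigma = 1e-3
d = w[obs_idx] + sigma * rng.normal(size=obs_idx.size)  # d_j = w(x_j) + eta_j
print("observations:", np.round(d, 3))
```

For constant conductivity ($\u = 0$) the flux is constant, so $\w$ decreases linearly from $x=0$ and the Robin condition fixes $\w(1) = 1/Bi$; the discrete solution reproduces this, which is a useful sanity check.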
We initially choose to cast the inverse problem in the framework of PDE-constrained optimization. To begin, let us consider the following additive noise-corrupted pointwise observation model\footnote{We assume the forward state $\w$ is sufficiently regular, i.e. $\w \in H^{s}, s > n/2$, so that $\w$ is, by virtue of the Sobolev embedding theorem, continuous, and therefore it is meaningful to measure $\w$ pointwise.} \begin{equation}\eqnlab{pointwiseObs} \d_j := \w\LRp{\x_j} + \eta_j, \quad j = 1,\hdots,K, \end{equation} where $K$ is the total number of observation locations, $\LRc{\mb{x}_j}_{j=1}^K$ the set of points at which $\w$ is observed, $\eta_j$ the additive noise, and $\d_j$ the actual noise-corrupted observations. In this paper we work with synthetic observations and hence there is no model inadequacy in \eqnref{pointwiseObs}. Concatenating all the observations, one can rewrite \eqnref{pointwiseObs} as \begin{equation} \eqnlab{observation} \db := \mc{G}\LRp{\u} + \etab, \end{equation} with $\mc{G} := \LRs{\w\LRp{\mb{x}_1}, \hdots,\w\LRp{\mb{x}_K}}^T$ denoting the map from the distributed parameter $\u$ to the noise-free observables, $\etab$ being a random vector normally distributed as $\GM{0}{\L}$ with bounded covariance matrix $\L$, and $\db = \LRs{\d_1,\hdots,\d_K}^T$. For simplicity, we take $\L = \sigma^2\I$, where $\I$ is the identity matrix. Our inverse problem can now be formulated as \begin{align} &\min_{\u} \J\LRp{\u,\db} := \frac{1}{2}\snor{\db-\mc{G}\LRp{\u}}_\L^2 = \frac{1}{2\sigma^2} \sum_{j = 1}^K\LRp{\w\LRp{\x_j}-\d_j}^2 \eqnlab{Cost} \end{align} \SubjectTo \begin{subequations} \eqnlab{forwardEqn} \begin{align} -\Div\LRp{e^\u\Grad \w} &= 0 & \text{ in } \Omega, \eqnlab{forward} \\ -e^\u\Grad \w \cdot \mb{n} &= Bi \,\w &\text{ on } \partial \Omega \setminus \Gamma_{R}, \eqnlab{forwardbcRobin}\\ -e^\u\Grad \w \cdot \mb{n} &= -1 &\text{ on } \Gamma_{R}, \eqnlab{forwardbcNeumann}
\end{align} \end{subequations} where $\snor{\cdot}_\L:= \snor{\L^{-\half}\cdot}$ denotes the weighted Euclidean norm induced by the canonical inner product $\LRp{\cdot,\cdot}$ in $\R^K$. This optimization problem is, however, ill-posed. An intuitive reason is that the dimension of the observations $\db$ is much smaller than that of the parameter $\u$, and hence the observations provide limited information about the distributed parameter $\u$. As a result, the null space of the Jacobian of the parameter-to-observable map $\mc{G}$ is non-trivial. Indeed, we have shown that the Gauss-Newton approximation of the Hessian (which is the square of this Jacobian, and is also equal to the full Hessian of the data misfit $\mathcal{J}$ evaluated at the optimal parameter) is a compact operator \cite{Bui-ThanhGhattas12a, Bui-ThanhGhattas12, Bui-ThanhGhattas12f}, and hence its range space is effectively finite-dimensional. One way to overcome the ill-posedness is to use {\em Tikhonov regularization} (see, e.g., \cite{Vogel02}), which augments the cost functional \eqnref{Cost} with a quadratic term, i.e., \begin{equation} \tilde{\mc{J}} := \frac{1}{2}\snor{\db-\mc{G}\LRp{\u}}_\L^2 + \frac{\kappa}{2}\nor{R^{1/2}\u}^2, \eqnlab{CostRegularized} \end{equation} where $\kappa$ is a regularization parameter, $R$ some regularization operator, and $\nor{\cdot}$ some appropriate norm. This method is representative of deterministic inversion techniques that typically do not take into account the randomness due to measurements and other sources, though one can equip the deterministic solution with a confidence region by post-processing (see, e.g., \cite{Vexler04} and references therein). It should be pointed out that if the regularization term is replaced by the Cameron-Martin norm of $\u$ (the second term in \eqnref{MAP}), the Tikhonov solution is in fact identical to the maximum a posteriori point in \eqnref{MAP}.
However, such a point estimate is insufficient for the purpose of fully taking the randomness into account. In this paper, we choose to tackle the ill-posedness using a {\em Bayesian} framework \cite{Franklin70, LehtinenPaivarintaSomersalo89, Lasanen02, Stuart10, Piiroinen05}. We seek a statistical description of all possible $\u$ that conform to some prior knowledge and at the same time are consistent with the observations. The Bayesian approach does this by reformulating the inverse problem as a problem in {\em statistical inference}, incorporating uncertainties in the observations, the forward map $\mc{G}$, and prior information. This approach is appealing since it can incorporate most, if not all, kinds of randomness in a systematic manner. To begin, we postulate a Gaussian measure $\mu := \GM{\u_0}{\alpha^{-1}\mc{C}}$ on $\u$ in $\Ltwo$ where \[ \mc{C} := \LRp{I - \Delta}^{-s} =: \mc{A}^{-s} \] with the domain of definition \[ D\LRp{\mc{A}} := \LRc{u \in H^2\LRp{\Omega}: \pp{u}{\mb{n}} = 0 \text{ on } \pOmega}, \] where $H^2\LRp{\Omega}$ is the usual Sobolev space. Assume that the mean function $\u_0$ lives in the Cameron-Martin space of $\mc{C}$; then one can show (see \cite{Stuart10}) that the measure $\mu$ is well-defined when $s > n/2$ ($n$ is the spatial dimension), and in that case, any realization from the prior distribution $\mu$ is almost surely in the H\"older space $\X := C^{0,\beta}\LRp{\Omega}$ with $0 < \beta < s/2$. That is, $\mu\LRp{\X} = 1$, and the Bayesian posterior measure $\nu$ satisfies the Radon-Nikodym derivative \begin{equation} \eqnlab{RadonNikodym} \pp{\nu}{\mu}\LRp{\u|\db} \sim \exp\LRp{-\mc{J}\LRp{\u,\db}} = \exp\LRp{-\half\snor{\db - \mc{G}\LRp{\u}}^2_\L}, \end{equation} if $\mc{G}$ is a continuous map from $\X$ to $\R^K$. Note that the Radon-Nikodym derivative is proportional to the likelihood defined by \[ \like \sim \exp\LRp{-\mc{J}\LRp{\u,\db}}.
\] The maximum a posteriori (MAP) point is defined as \begin{equation} \eqnlab{MAP} \u^{MAP} := \arg\min_\u \mc{J}\LRp{\u, \db} :=\half\snor{\db - \mc{G}\LRp{\u}}^2_\L + \frac{\alpha}{2} \nor{\u}^2_{\mc{C}}, \end{equation} where $\nor{\cdot}_{\mc{C}} := \nor{\mc{C}^{-\half}\cdot}$ denotes the weighted $\Ltwo$ norm induced by the $\Ltwo$ inner product $\LRa{\cdot,\cdot}$. \section{Adjoint computation of gradient, Hessian, and the third derivative tensor} \seclab{adjoint} In this section, we briefly present the adjoint method to efficiently compute the gradient, Hessian, and the third derivative of the cost functional \eqnref{Cost}. We start by considering the weak form of the (first order) forward equation \eqnref{forwardEqn}: \begin{equation} \eqnlab{Forwardw} \iOm{e^\u\Grad\w \cdot \Grad\lamh} + \iGb{Bi\, \w \lamh} = \iGs{\lamh}, \end{equation} with $\lamh$ as the test function. Using the standard reduced space approach (see, e.g., a general discussion in \cite{NocedalWright06} and a detailed derivation in \cite{Bui-ThanhGhattas13}) one can show that the gradient $\Grad\mc{J}\LRp{\u}$, namely the Fr\'echet derivative of the cost functional $\mc{J}$, acting in any direction $\uo$ is given by \begin{equation} \eqnlab{Gradient} \LRa{\Grad\mc{J}\LRp{\u},\uo} = \iOm{\uo e^{\u}\Grad\w\cdot\Grad\lambda}, \end{equation} where the (first order) adjoint state $\lambda$ satisfies the adjoint equation \begin{equation} \eqnlab{Adjointw} \iOm{e^\u\Grad\lambda \cdot \Grad\wh} + \iGb{Bi\, \lambda \wh} = -\frac{1}{\sigma^2}\sum_{j=1}^K\LRp{\w\LRp{\x_j}-\d_j}\wh\LRp{\x_j}, \end{equation} with $\wh$ as the test function. 
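As a brief sketch of where \eqnref{Gradient} and \eqnref{Adjointw} come from (a standard Lagrangian construction, spelled out here for the reader's convenience), introduce \begin{align*} \mc{L}\LRp{\w,\u,\lambda} := \mc{J}\LRp{\u,\db} + \iOm{e^\u\Grad\w \cdot \Grad\lambda} + \iGb{Bi\, \w \lambda} - \iGs{\lambda}. \end{align*} Requiring stationarity of $\mc{L}$ with respect to $\lambda$ recovers the weak form \eqnref{Forwardw} of the forward equation, stationarity with respect to $\w$ yields the adjoint equation \eqnref{Adjointw}, and the variation of $\mc{L}$ with respect to $\u$ in a direction $\uo$, using the fact that the derivative of $e^\u$ is $e^\u$, reproduces the gradient expression \eqnref{Gradient}.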
On the other hand, the Hessian, the Fr\'echet derivative of the gradient, acting in directions $\uo$ and $\ut$ (superscript ``2'' means the second variation direction) reads \begin{equation} \eqnlab{FullHessian} \LRa{\LRa{\Grad^2\mc{J}\LRp{\u},\uo},\ut} = \iOm{\uo e^\u \Grad\w\cdot\Grad\lamt} + \iOm{\uo e^\u\Grad\wt\cdot\Grad\lambda} + \iOm{\uo\ut e^\u\Grad\w\cdot\Grad\lambda}, \end{equation} where the second order forward state $\wt$ obeys the second order forward equation \begin{equation} \eqnlab{Iforward} \iOm{e^\u\Grad\wt \cdot \Grad\lamh} + \iGb{Bi\, \wt \lamh} = - \iOm{\ut e^\u\Grad\w \cdot \Grad\lamh}, \end{equation} and the second order adjoint state $\lamt$ is governed by the second order adjoint equation \begin{equation} \eqnlab{Iadjoint} \iOm{e^\u\Grad\lamt \cdot \Grad\wh} + \iGb{Bi\, \lamt \wh} = -\frac{1}{\sigma^2}\sum_{j=1}^K\wt\LRp{\x_j}\wh\LRp{\x_j} - \iOm{\ut e^\u\Grad\lambda \cdot \Grad\wh}. \end{equation} We define the generalized Fisher information operator\footnote{Note that the Fisher information operator is typically defined for finite dimensional settings in which it is a matrix.} acting in directions $\uo$ and $\ut$ as \begin{equation} \eqnlab{Fisher} \LRa{\LRa{G\LRp{\u},\uo },\ut} := \Ex_\like\LRs{\LRa{\LRa{\Grad^2\mc{J}\LRp{\u},\uo},\ut}}, \end{equation} where the expectation is taken with respect to the likelihood---the distribution of the observation $\db$. Now, substituting \eqnref{FullHessian} into \eqnref{Fisher} and assuming that the integrals/derivatives can be interchanged we obtain \begin{align*} \LRa{\LRa{G\LRp{\u},\uo },\ut} &= \iOm{\uo e^\u \Grad\w\cdot\Grad\Ex_\like\LRs{\lamt}} + \iOm{\uo e^\u\Grad\wt\cdot\Grad\Ex_\like\LRs{\lambda}} \\ &+ \iOm{\uo\ut e^\u\Grad\w\cdot\Grad\Ex_\like\LRs{\lambda}}, \end{align*} where we have used the assumption that the parameter $\u$ is independent of observation $\db$ and the fact that $\w$ and $\wt$ do not depend on $\db$. 
The next step is to compute $\Grad\Ex_\like\LRs{\lambda}$ and $\Ex_\like\LRs{\lamt}$. To begin, let us take the expectation of the first order adjoint equation \eqnref{Adjointw} with respect to $\like$ to arrive at \begin{multline*} \iOm{e^\u\Grad\Ex_\like\LRs{\lambda} \cdot \Grad\wh} + \iGb{Bi\, \Ex_\like\LRs{\lambda} \wh} = \\-\frac{1}{\sigma^2}\sum_{j=1}^K\Ex_\like\LRs{\w\LRp{\x_j}-\d_j}\wh\LRp{\x_j} = 0, \end{multline*} where the second equality is obtained from \eqnref{pointwiseObs} and the assumption $\eta_j \sim \GM{0}{\sigma^2}$. Since the right hand side vanishes, $\Ex_\like\LRs{\lambda} = 0$, and hence \begin{equation} \eqnlab{ExAdjoint} \Grad\Ex_\like\LRs{\lambda} = 0. \end{equation} On the other hand, if we take the expectation of the second order adjoint equation \eqnref{Iadjoint} and use \eqnref{ExAdjoint} we have \begin{equation} \eqnlab{ExIadjoint} \iOm{e^\u\Grad\Ex_\like\LRs{\lamt} \cdot \Grad \wh} + \iGb{Bi\, \Ex_\like\LRs{\lamt} \wh} = -\frac{1}{\sigma^2}\sum_{j=1}^K\wt\LRp{\x_j}\wh\LRp{\x_j}. \end{equation} Let us define \[ \lamtt := \Ex_\like\LRs{\lamt}; \] then \eqnref{ExIadjoint} becomes \begin{equation} \eqnlab{ExIadjointF} \iOm{e^\u\Grad\lamtt \cdot \Grad\wh} + \iGb{Bi\, \lamtt \wh} = -\frac{1}{\sigma^2}\sum_{j=1}^K\wt\LRp{\x_j} \wh\LRp{\x_j}. \end{equation} As a result, the Fisher information operator acting along directions $\uo$ and $\ut$ reads \begin{equation} \eqnlab{FisherF} \LRa{\LRa{G\LRp{\u},\uo },\ut} = \iOm{\uo e^\u \Grad\w\cdot\Grad\lamtt}, \end{equation} where $\lamtt$ is the solution of \eqnref{ExIadjointF}, a variant of the second order adjoint equation \eqnref{Iadjoint}. The Fisher information operator therefore coincides with the Gauss-Newton Hessian of the cost functional \eqnref{Cost}. The procedure for computing the gradient acting on an arbitrary direction is clear. One first solves the first order forward equation \eqnref{Forwardw} for $\w$, then the first order adjoint equation \eqnref{Adjointw} for $\lambda$, and finally evaluates \eqnref{Gradient}.
Similarly, one can compute the Hessian (or the Fisher information operator) acting on two arbitrary directions by first solving the second order forward equation \eqnref{Iforward} for $\wt$, then the second order adjoint equation \eqnref{Iadjoint} (or its variant \eqnref{ExIadjointF}) for $\lamt$ (or $\lamtt$), and finally evaluating \eqnref{FullHessian} (or \eqnref{FisherF}). One of the main goals of the paper is to study the Riemann manifold Hamiltonian Monte Carlo method in the context of Bayesian inverse problems governed by PDEs. It is therefore essential to compute the derivative of the Fisher information operator. This task is straightforward for problems with closed form expressions of the likelihood and the prior available, but it is not so for those governed by PDEs. Nevertheless, using the adjoint technique we can compute the third order derivative tensor acting on three arbitrary directions with three extra PDE solves, as we now show. To that end, recall that the Fisher information operator acting on directions $\uo$ and $\ut$ is given by \eqnref{FisherF}. The Fr\'echet derivative of the Fisher information operator along the additional direction $\uth$ (superscript ``3'' means the third variation direction) is given by \begin{align} &\LRa{\LRa{\LRa{\T\LRp{\u}, \uo},\ut}, \uth} := \LRa{\Grad\LRa{\LRa{G\LRp{\u}, \uo},\ut}, \uth} \nonumber \\&= \iOm{\uo\uth e^\u\Grad\w \cdot \Grad\lamtt} + \iOm{\uo e^\u \Grad \wth \cdot \Grad\lamtt} + \iOm{\uo e^\u \Grad\w\cdot\Grad\lamtth}, \eqnlab{tensor} \end{align} where $\wth$ and $\lamtth$ are the variations of $\w$ and $\lamtt$ in the direction $\uth$, respectively. One can show that $\wth$ satisfies another second order forward equation \begin{equation} \eqnlab{Iforwardn} \iOm{e^\u\Grad\wth \cdot \Grad \lamh} + \iGb{Bi\, \wth \lamh} = - \iOm{\uth e^\u\Grad\w \cdot \Grad\lamh}.
\end{equation} Similarly, $\lamtth$ is the solution of the third order adjoint equation \begin{equation} \eqnlab{IIadjoint} \iOm{e^\u\Grad\lamtth \cdot \Grad \wh} + \iGb{Bi\, \lamtth \wh} = -\frac{1}{\sigma^2}\sum_{j=1}^K\wtth\LRp{\x_j} \wh\LRp{\x_j} - \iOm{\uth e^\u\Grad\lamtt \cdot \Grad\wh}, \end{equation} and $\wtth$, the variation of $\wt$ in the direction $\uth$, satisfies the following third order forward equation \begin{align} &\iOm{e^\u\Grad\wtth \cdot \Grad \lamh} + \iGb{Bi\, \wtth \lamh} = \nonumber\\ &-\iOm{\uth e^\u\Grad\wt \cdot \Grad \lamh} - \iOm{\uth\ut e^\u\Grad\w \cdot \Grad\lamh} - \iOm{\ut e^\u\Grad\wth \cdot \Grad\lamh}. \eqnlab{IIforward} \end{align} Note that it would require four extra PDE solves if one instead computed the third derivative of the full Hessian \eqnref{FullHessian}. It is important to point out that the operator $\T$ is only symmetric with respect to $\uo$ and $\ut$, since the Fisher information operator is symmetric, but not with respect to $\uo$ and $\uth$ or $\ut$ and $\uth$. The full symmetry only holds for the derivative of the full Hessian, that is, the true third derivative of the cost functional. \section{Discretization} \seclab{FEM} As presented in Section \secref{infiniteBayes}, we view our inverse problem from an infinite dimensional point of view. As such, to implement our approach on computers, we need to discretize the prior, the likelihood, and hence the posterior. We choose to use the finite element method. In particular, we employ the standard $H^1\LRp{\Omega}$ finite element method (FEM) to discretize the forward and adjoint equations (the likelihood) and the operator $\mc{A}$ (the prior). It should be pointed out that the Cameron-Martin space can be shown (see, e.g., \cite{Stuart10}) to be a subspace of the usual fractional Sobolev space $H^s\LRp{\Omega}$, which is in turn a subspace of $H^1\LRp{\Omega}$. Thus, we are using a non-conforming FEM approach (outer approximation).
For convenience, we further assume that the discretized state and parameter live on the same finite element mesh. Since the FEM approximation of elliptic operators is standard (see, e.g., \cite{Ciarlet78}), we will not discuss it here. Instead, we describe the matrix transfer technique (see, e.g., \cite{IlicLiuTurnerEtAl05} and the references therein) to discretize the prior. Define $\Q := \mc{C}^{1/2} = \mc{A}^{-s/2}$; then the eigenpairs $\LRp{\lambda_i,\v_i}$ of $\Q$ define the Karhunen-Lo\`eve (KL) expansion of the prior distribution as \[ \u = \u_0+ \frac{1}{\sqrt{\alpha}}\sum_{i=1}^\infty a_i \lambda_i \v_i, \] where $a_i \sim \GM{0}{1}$. We need to solve \[ \Q \v_i = \lambda_i\v_i, \] or equivalently \begin{equation} \eqnlab{eigenProblem} \mc{A}^{s/2} \v_i = \frac{1}{\lambda_i}\v_i. \end{equation} To solve \eqnref{eigenProblem} using the matrix transfer technique (MTT), let us denote by $\M$ the mass matrix, and by $\K$ the stiffness matrix resulting from the discretization of the negative Laplacian $-\Delta$. The representation of $\mc{A}$ in the finite element space (see, e.g., \cite{Simpson08} and the references therein) is given by \[ \A := \M^{-1}\K + \I. \] Let bold symbols denote the corresponding vectors of FEM nodal values, e.g., $\ub$ is the vector containing all FEM nodal values of $\u$. Define $\LRp{\sigma_i,\vb_i}$ as the eigenpairs of $\A$, i.e., \begin{equation} \eqnlab{eigenmatrix} \A \vb_i = \sigma_i \vb_i, \text{ or } \A \Vb = \Vb\boldsymbol{\Sigma}, \end{equation} where $\vb_i^T\M\vb_j = \delta_{ij}$ with $\delta_{ij}$ the Kronecker delta, and hence $\Vb^{-1} = \Vb^T\M$; here $\boldsymbol{\Sigma}$ is the diagonal matrix with entries $\sigma_i$. Since $\A$ is similar to $\M^{-\half}\LRp{\K + \M}\M^{-\half}$, a symmetric positive definite matrix, $\A$ has positive eigenvalues.
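As a numerical sketch of this setup (numpy only; helper names are ours), the $\M$-orthonormal eigenpairs of $\A$ can be obtained from the symmetric similarity transform $\M^{-1/2}\LRp{\K+\M}\M^{-1/2}$, after which fractional powers follow from the matrix transfer formula $\A^{s/2} = \Vb\boldsymbol{\Sigma}^{s/2}\Vb^T\M$, and a truncated-KL prior draw is a single matrix-vector product:

```python
import numpy as np

def fem_matrices(n):
    """1D linear-FEM mass and stiffness matrices on [0, 1] with n elements."""
    h = 1.0 / n
    M = np.zeros((n + 1, n + 1)); K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    return M, K

def m_orthonormal_eig(M, K):
    """Eigenpairs of A = M^{-1} K + I with V^T M V = I."""
    wm, U = np.linalg.eigh(M)
    Minvh = U @ np.diag(wm ** -0.5) @ U.T           # M^{-1/2}
    sig, W = np.linalg.eigh(Minvh @ (K + M) @ Minvh)
    return sig, Minvh @ W                           # sigma_i and V

n, s, alpha = 32, 0.6, 0.1
M, K = fem_matrices(n)
sig, V = m_orthonormal_eig(M, K)
lam = sig ** (-s / 2)                               # KL eigenvalues lambda_i
A_half_s = V @ np.diag(sig ** (s / 2)) @ V.T @ M    # matrix transfer: A^{s/2}

# truncated-KL prior draw (zero mean for illustration; all n + 1 discrete modes kept)
rng = np.random.default_rng(1)
u = (1.0 / np.sqrt(alpha)) * V @ (lam * rng.normal(size=n + 1))
```

Setting $s = 2$ in the transfer formula reproduces $\A$ itself, which gives a convenient sanity check of the construction.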
Using the MTT method, the matrix representation of \eqnref{eigenProblem} reads \[ \A^{s/2}\vb_i = \frac{1}{\lambda_i}\vb_i, \] where \[ \A^{s/2} := \Vb\boldsymbol{\Sigma}^{s/2}\Vb^{-1}. \] It follows that \[ \lambda_i = \sigma^{-s/2}_i. \] The Galerkin FEM approximation of the prior via the truncated KL expansion reads \begin{equation} \eqnlab{uKLtruncated} \ub = \ub_0+ \frac{1}{\sqrt{\alpha}}\sum_{i=1}^{N}a_i\lambda_i\vb_i, \end{equation} with $\ub$ as the FEM nodal value of the approximate prior sample $\u$ and $N$ as the number of FEM nodal points. Note that, for ease in writing, we have used the same notation $\u$ for both the infinite dimensional prior sample and its FEM approximation. Since $\u \in \Ltwo$, $\ub$ naturally lives in $\R^N_\M$, the Euclidean space with weighted inner product $\LRp{\cdot,\cdot}_\M := \LRp{\cdot,\M\cdot}$. A question arises: what is the distribution of $\ub$? Clearly $\ub$ is Gaussian with mean $\ub_0$ since the $a_i$ are Gaussian. The covariance matrix $\C$ for $\ub$ is defined by \[ \LRp{\zb, \C\yb}_\M := \Ex\LRs{\LRp{\ub-\ub_0,\M\zb}\LRp{\ub-\ub_0,\M\yb}} = \frac{1}{\alpha}\zb^T \M \Vb \boldsymbol{\Lambda}^2 \Vb^T\M\yb, \] where we have used \eqnref{uKLtruncated} to obtain the second equality and $\boldsymbol{\Lambda}$ is the diagonal matrix with entries $\Lambda_{ii} = \lambda_i$. It follows that \begin{equation} \eqnlab{eqnC} \C = \frac{1}{\alpha} \Vb \boldsymbol{\Lambda}^2 \Vb^T\M \end{equation} as a map from $\R^N_\M$ to $\R^N_\M$, and its inverse can be shown to be \[ \C^{-1} = \alpha \Vb \boldsymbol{\Lambda}^{-2} \Vb^T\M, \] whence the distribution of $\ub$ is \begin{equation} \eqnlab{uDist} \ub \sim \GM{\ub_0}{\C} \sim \exp\LRs{-\frac{\alpha}{2} \LRp{\ub-\ub_0}^T\M\Vb \boldsymbol{\Lambda}^{-2} \Vb^T\M\LRp{\ub-\ub_0}}.
\end{equation} As a result, the FEM discretization of the prior term can be written as \begin{align*} \frac{\alpha}{2}\nor{\u-\u_0}^2_{\mc{C}} &:= \frac{\alpha}{2} \nor{\mc{A}^{s/2}\LRp{\u-\u_0}}^2 \stackrel{\text{MTT}}{\approx} \frac{\alpha}{2}\LRp{\ub-\ub_0}^T\M\Vb\boldsymbol{\Lambda}^{-2}\Vb^T\M\LRp{\ub-\ub_0}. \end{align*} Thus, the FEM approximation of the posterior is given by \[ \post \sim \exp\LRp{-\half\snor{\db - \mc{G}\LRp{\u}}^2_\L} \times \exp\LRp{-\frac{\alpha}{2}\LRp{\ub-\ub_0}^T\M\Vb\boldsymbol{\Lambda}^{-2}\Vb^T\M\LRp{\ub-\ub_0}}. \] The detailed derivation of the FEM approximation of infinite dimensional Bayesian inverse problems in general, and of the prior in particular, will be presented elsewhere \cite{Bui-Thanh14}. \section{Riemannian manifold Langevin and Hamiltonian Monte Carlo methods} \seclab{HMC} In this section we give a brief overview of the MCMC algorithms that we consider in this work. We assume some familiarity with the concepts of MCMC, since an introduction to the subject is beyond the scope of this paper. \subsection{Metropolis-Hastings} For a random vector $\ub \in \mathbb{R}^N$ with density $\pi(\ub)$, the Metropolis-Hastings algorithm employs a proposal mechanism $q(\ub^{*}|\ub^{t-1})$, and proposed moves are accepted with probability $\min \left\{1,\pi(\ub^{*}) q(\ub^{t-1}|\ub^{*})/\pi(\ub^{t-1})q(\ub^{*}|\ub^{t-1})\right\}$. Tuning the Metropolis-Hastings algorithm involves selecting an appropriate proposal mechanism. A common choice is to use a Gaussian proposal of the form $q(\ub^{*}|\ub^{t-1}) = \mathcal{N}(\ub^{*}|\ub^{t-1},\boldsymbol{\Sigma})$, where $\mathcal{N}(\cdot|\boldsymbol{\mu},\boldsymbol{\Sigma})$ denotes the multivariate normal density with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. Selecting the covariance matrix, however, is far from trivial in most cases, since knowledge about the target density is required.
Therefore a simpler proposal mechanism is often considered, where the covariance matrix is replaced with a diagonal matrix such as $\boldsymbol{\Sigma}=\epsilon\I$, where the value of the scale parameter $\epsilon$ has to be tuned in order to achieve fast convergence and good mixing. Small values of $\epsilon$ imply small transitions and result in high acceptance rates but poor mixing of the Markov chain. Large values, on the other hand, allow for large transitions but result in most of the samples being rejected. Tuning the scale parameter becomes even more difficult in problems where the standard deviations of the marginal posteriors differ substantially, since different scales are required for each dimension, and when correlations between different variables exist. In the case of PDE-constrained inverse problems in very high dimensions, with strong nonlinear interactions inducing complex non-convex structures in the target posterior, this tuning procedure is typically doomed to fail in terms of convergence and mixing. There have been many subsequent developments of this basic algorithm; arguably the most important with regard to inverse problems is the formal definition of Metropolis-Hastings in an infinite dimensional function space. One of the main failings of Metropolis-Hastings is the drop-off in acceptance probability as the dimension of the problem increases. By defining the Metropolis acceptance probability in the appropriate Hilbert space, the acceptance probability becomes invariant to the dimension of the problem, as is described in a number of scenarios in \cite{CotterRobertsStuartEtAl13}. Furthermore, the definition of a Markov chain transition kernel directly in the Hilbert space, which exploits Hamiltonian dynamics in the proposal mechanism, followed in \cite{OttobrePillaiPinskiEtAl14}. These are important methodological advances for MCMC applied to inverse problems.
As the infinite dimensional nature of the problem is one of its fundamental aspects, it is sensible that this characteristic is embedded in the MCMC scheme. In a similar vein, noting that the statistical model associated with the specific inverse problem is generated from an underlying partial differential equation or system of ordinary differential equations, a natural geometric structure on the space of probability distributions is induced. This structure provides a rich source of model specific information that can be exploited in devising MCMC schemes that are informed by the underlying structure of the model itself. In \cite{GirolamiCalderhead11} a way around the tuning difficulties described above was provided by accepting that the statistical model can itself be considered as an object with an underlying geometric structure that can be embedded into the proposal mechanism. A class of MCMC methods was developed based on the differential geometric concepts underlying Riemannian manifolds. \subsection{Riemann Manifold Metropolis Adjusted Langevin Algorithm} Denoting the log of the target density as $\mathcal{L}(\ub) = \log \pi(\ub)$, the Riemann manifold Metropolis Adjusted Langevin Algorithm (RMMALA), \cite{GirolamiCalderhead11}, defines a Langevin diffusion with stationary distribution $\pi(\ub)$ on the Riemann manifold of density functions with metric tensor $\G(\ub)$.
By employing a first order Euler integrator for discretising the stochastic differential equation, a proposal mechanism with density $q(\ub^*|\ub^{t-1}) = \mathcal{N}(\ub^*| \boldsymbol{\mu}(\ub^{t-1},\epsilon),\epsilon^2\G^{-1}(\ub^{t-1}))$ is defined, where $\epsilon$ is the integration step size, a parameter which needs to be tuned, and the $k$th component of the mean function $\boldsymbol{\mu}(\ub,\epsilon)_k$ is \begin{eqnarray} \boldsymbol{\mu}(\ub,\epsilon)_k & = & \ub_k + \frac{\epsilon^2}{2}\left(\G^{-1}(\ub)\nabla_{\ub}\mathcal{L}(\ub)\right)_k - \epsilon^2 \sum_{i=1}^N\sum_{j=1}^N\G(\ub)_{i,j}^{-1}\Gamma_{i,j}^k, \label{eq:meanmMALA}\end{eqnarray} where $\Gamma_{i,j}^k$ are the Christoffel symbols of the metric in local coordinates. Note that the Christoffel symbols express the derivatives of the metric tensor, and they are computed using the adjoint method presented in Section \secref{adjoint}. Due to the discretisation error introduced by the first order approximation, convergence to the stationary distribution is no longer guaranteed, and thus the Metropolis-Hastings ratio is employed to correct for this bias. In \cite{GirolamiCalderhead11} a number of examples are provided illustrating the potential of such a scheme for challenging inference problems. One can interpret the proposal mechanism of RMMALA as a local Gaussian approximation to the target density, where the effective covariance matrix is the inverse of the metric tensor evaluated at the current position. Furthermore, a simplified version of the RMMALA algorithm, termed sRMMALA, can be derived by assuming a manifold with constant curvature, thus cancelling the last term in Equation (\ref{eq:meanmMALA}), which depends on the Christoffel symbols. Whilst this is a step forward, in that much information about the target density is now embedded in the proposal mechanism, it is still driven by a random walk.
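A minimal sketch of one such sRMMALA-style step, i.e., the proposal above with a constant metric tensor and the Christoffel term dropped, on a hypothetical two-dimensional Gaussian target (all values illustrative, not from the paper's experiments). Because the proposal mean depends on the current state, the Metropolis-Hastings correction must use both the forward and the reverse proposal densities:

```python
import numpy as np

# hypothetical target N(mu, P^{-1}); constant metric G, e.g. the precision at the MAP
mu = np.array([1.0, -0.5])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
G = P.copy(); Ginv = np.linalg.inv(G)
eps = 0.5                                   # integration step size

def log_target(u):
    r = u - mu
    return -0.5 * r @ P @ r                 # up to an additive constant

def grad_log_target(u):
    return -P @ (u - mu)

def proposal_mean(u):
    # simplified RMMALA drift: constant metric, Christoffel term cancelled
    return u + 0.5 * eps**2 * Ginv @ grad_log_target(u)

def log_q(v, u):
    """Log density (up to a constant) of proposing v from u."""
    r = v - proposal_mean(u)
    return -0.5 / eps**2 * r @ G @ r

def step(u, rng):
    """One Metropolis-adjusted Langevin step with constant metric G."""
    v = proposal_mean(u) + eps * np.linalg.cholesky(Ginv) @ rng.normal(size=u.size)
    log_r = log_target(v) + log_q(u, v) - log_target(u) - log_q(v, u)
    return v if np.log(rng.uniform()) < log_r else u
```

The acceptance ratio built from the asymmetric proposal densities satisfies the detailed-balance identity $\min(1,r)\,\pi(\ub)q(\vb|\ub) = \min(1,1/r)\,\pi(\vb)q(\ub|\vb)$, which is what guarantees the correct stationary distribution.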
The next approach to be taken goes beyond the direct and scaled random walk by defining proposals which follow the geodesic flows on the manifold of densities, and thus presents a potentially very powerful scheme to explore posterior distributions. \subsection{Riemann Manifold Hamiltonian Monte Carlo} The Riemann manifold Hamiltonian Monte Carlo (RMHMC) method defines a Hamiltonian on the Riemann manifold of probability density functions by introducing the auxiliary variables $\pb\sim \mathcal{N}(\boldsymbol{0},\G(\ub))$, which are interpreted as the momentum at a particular position $\ub$, and by considering the negative log of the target density as a potential function. More formally, the Hamiltonian defined on the Riemann manifold is \begin{equation} \Hc(\ub,\pb) = -\mathcal{L}(\ub) +\frac{1}{2}\log\left(2\pi|\G(\ub)|\right) + \frac{1}{2}\pb^T\G(\ub)^{-1}\pb, \end{equation} where $-\mathcal{L}(\ub) +\frac{1}{2}\log\left(2\pi|\G(\ub)|\right)$ and $\frac{1}{2}\pb^T\G(\ub)^{-1}\pb$ are the potential and kinetic energy terms, respectively. The dynamics given by Hamilton's equations are \begin{eqnarray} \frac{d \ub_k}{dt} &= \frac{\partial \Hc }{\partial \pb_k} = \left(\G(\ub)^{-1}\pb\right)_k \\ \frac{d\pb_k}{dt} &= -\frac{\partial \Hc }{\partial \ub_k} = \frac{\partial \mathcal{L}(\ub)}{\partial \ub_k} -\frac{1}{2}Tr\left[\G(\ub)^{-1}\frac{\partial \G(\ub)}{\partial \ub_k} \right] \nonumber\\ &+\frac{1}{2}\pb^T\G(\ub)^{-1}\frac{\partial \G(\ub)}{\partial \ub_k}\G(\ub)^{-1}\pb. \eqnlab{Hamilton} \end{eqnarray} These dynamics define geodesic flows at a particular energy level and, as such, make proposals which deterministically follow the most efficient path across the manifold from the current density to the proposed one. Simulating the Hamiltonian requires a time-reversible and volume preserving numerical integrator.
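When the metric tensor is constant (as in the simplified sRMHMC variant used later), the log-determinant term is constant, the Hamiltonian is separable, and the standard leapfrog scheme is an explicit integrator with exactly these two properties. A minimal sketch on a hypothetical Gaussian target (illustrative values only):

```python
import numpy as np

# hypothetical Gaussian target: -log pi(u) = 0.5 u^T P u; constant metric G = P
P = np.array([[3.0, 0.5], [0.5, 1.5]])
G = P.copy(); Ginv = np.linalg.inv(G)

def grad_potential(u):
    return P @ u                              # gradient of the potential -log pi

def hamiltonian(u, p):
    return 0.5 * u @ P @ u + 0.5 * p @ Ginv @ p

def leapfrog(u, p, eps, n_steps):
    """Time-reversible, volume-preserving integrator for the separable case."""
    u, p = u.copy(), p.copy()
    p = p - 0.5 * eps * grad_potential(u)     # initial half momentum step
    for _ in range(n_steps - 1):
        u = u + eps * Ginv @ p                # full position step
        p = p - eps * grad_potential(u)       # full momentum step
    u = u + eps * Ginv @ p
    p = p - 0.5 * eps * grad_potential(u)     # final half momentum step
    return u, p
```

Flipping the momentum and integrating again retraces the trajectory exactly (up to round-off), and the energy error stays bounded for moderate step sizes; both properties underlie the validity of the accept/reject correction discussed next.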
For this purpose the Generalised Leapfrog algorithm can be employed, providing a deterministic proposal mechanism for simulating from the conditional distribution, i.e., $\ub^*|\pb \sim \pi(\ub^*|\pb)$. More details about the Generalised Leapfrog integrator can be found in \cite{GirolamiCalderhead11}. To simulate a path (which turns out to be a local geodesic) across the manifold, the Leapfrog integrator is iterated $L$ times; the number of steps $L$, along with the integration step size $\epsilon$, is a parameter requiring tuning. Again, due to the discrete integration errors in simulating the Hamiltonian, the Metropolis-Hastings acceptance ratio is applied in order to ensure convergence to the stationary distribution. The RMHMC method has been shown to be highly effective in sampling from posteriors induced by complex statistical models, and it offers the means to efficiently explore the hugely complex and high dimensional posteriors associated with PDE-constrained inverse problems. \section{Low rank approximation of the Fisher information matrix} \seclab{lowrank} As presented in Section \secref{HMC}, we use the Fisher information matrix at the MAP point, augmented with the Hessian of the prior, as the metric tensor in our HMC simulations. It is therefore necessary to compute the augmented Fisher matrix and its inverse. In~\cite{Bui-ThanhGhattas12a, Bui-ThanhGhattas12, Bui-ThanhGhattas12f}, we have shown that the Gauss-Newton Hessian of the data misfit term in the cost functional \eqnref{Cost} is a compact operator, and that for smooth $\u$ its eigenvalues decay exponentially to zero. Thus, the range space of the Gauss-Newton Hessian is effectively finite-dimensional even before discretization, i.e., it is independent of the mesh. In other words, the Fisher information matrix admits accurate low rank approximations, and the accuracy can be improved as desired by simply increasing the rank of the approximation.
We shall exploit this fact to compute the augmented Fisher information matrix and its inverse efficiently. We start with the augmented Fisher information matrix in $\R^N_\M$: \begin{align*} \G := \M^{-1}\H + \alpha \Vb \boldsymbol{\Lambda}^{-2} \Vb^T\M = \alpha \Vb\boldsymbol{\Lambda}^{-1}\LRp{\frac{1}{\alpha}\boldsymbol{\Lambda}\Vb^T\H\Vb\boldsymbol{\Lambda} + \I}\boldsymbol{\Lambda}^{-1}\Vb^{-1}, \end{align*} where $\H$ is the Fisher information matrix obtained from \eqnref{FisherF} by taking $\uo$ and $\ut$ as FEM basis functions. Since $\H$ is the discretization of a compact operator (see, e.g., \cite{Bui-ThanhGhattas12a, Bui-ThanhGhattas12}) and $\boldsymbol{\Lambda}_{ii}$ decays to zero, we conclude that the prior-preconditioned Fisher information matrix \[ \Ht := \frac{1}{\alpha}\boldsymbol{\Lambda}\Vb^T\H\Vb\boldsymbol{\Lambda} \] also has eigenvalues decaying to zero; indeed, the eigenvalues of the prior-preconditioned matrix are expected to decay faster than those of the original matrix $\H$. The numerical results in Section \secref{results} will confirm this observation. It follows that (see, e.g., \cite{FlathWilcoxAkcelikEtAl11, Bui-ThanhGhattasMartinEtAl13} for similar decompositions) $\Ht$ admits a rank-$r$ approximation of the form \[ \Ht = \frac{1}{\alpha}\boldsymbol{\Lambda}\Vb^T\H\Vb\boldsymbol{\Lambda} \approx \Vb_r\S\Vb_r^T, \] where $\Vb_r$ and the diagonal matrix $\S$ contain the $r$ dominant eigenvectors and eigenvalues of $\Ht$, respectively. In this work, similar to \cite{Bui-ThanhGhattasMartinEtAl13}, we use the one-pass randomized algorithm in \cite{HalkoMartinssonTropp11} to compute the low rank approximation.
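The rank-$r$ truncation, and the Woodbury-type inverse used below, can be checked on a small synthetic matrix with an exponentially decaying spectrum (sizes and names ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 20, 4

# synthetic prior-preconditioned Fisher matrix with exponentially decaying spectrum
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
ev = 10.0 ** -np.arange(n)                   # eigenvalues 1, 1e-1, 1e-2, ...
Ht = Q @ np.diag(ev) @ Q.T

# rank-r truncation from the r dominant eigenpairs
sig, W = np.linalg.eigh(Ht)                  # ascending eigenvalues
S = np.diag(sig[-r:][::-1])
Vr = W[:, -r:][:, ::-1]
Ht_r = Vr @ S @ Vr.T

# Woodbury-type inverse of (I + Vr S Vr^T): I - Vr D Vr^T with D_ii = S_ii/(S_ii + 1)
Dmat = np.diag(np.diag(S) / (np.diag(S) + 1.0))
inv_lowrank = np.eye(n) - Vr @ Dmat @ Vr.T
```

The truncation error in the spectral norm is governed by the first dropped eigenvalue, and the low-rank inverse identity is exact, which is what makes the inversion of the (approximate) augmented Fisher matrix cheap.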
Consequently, the augmented Fisher information matrix becomes \[ \G \approx \alpha \Vb\boldsymbol{\Lambda}^{-1}\LRp{\Vb_r\S\Vb_r^T + \I}\boldsymbol{\Lambda}^{-1}\Vb^{-1}, \] from which we obtain the inverse, by using the Woodbury formula \cite{GolubVan96}, \[ \G^{-1} \approx \frac{1}{\alpha} \Vb\boldsymbol{\Lambda}\LRp{\I - \Vb_r\D\Vb_r^T }\boldsymbol{\Lambda}\Vb^{-1}, \] where $\D$ is a diagonal matrix with $\D_{ii} = \S_{ii}/\LRp{\S_{ii}+1}$. In the RMHMC method, we need to randomly draw the momentum variable as $\pb \sim \GM{\mb{0}}{\G}$. If one considers \[ \pb = \sqrt{\alpha} \Vb \boldsymbol{\Lambda}^{-1}\bb + \sqrt{\alpha} \Vb\boldsymbol{\Lambda}^{-1}\Vb_r\S^{1/2}\cb, \] where $\bb_i,\cb_i \sim \GM{0}{1}$, then one can show, by inspection, that $\pb$ is distributed as $\GM{\mb{0}}{\G}$. \section{Numerical results} \seclab{results} For convenience, let us recall that the finite element (FEM) approximation of the posterior is given as \begin{equation} \eqnlab{finalPosterior} \post \sim \exp\LRp{-\half\snor{\db - \mc{G}\LRp{\u}}^2_\L} \times \exp\LRp{-\frac{\alpha}{2}\LRp{\ub-\ub_0}^T\M\Vb\boldsymbol{\Lambda}^{-2}\Vb^T\M\LRp{\ub-\ub_0}}, \end{equation} where $\ub_0$ is the FEM nodal value of the prior mean function $\u_0$, $\M$ is the mass matrix, $\Vb$ the matrix of eigenvectors defined in \eqnref{eigenmatrix}, $\boldsymbol{\Lambda}$ the diagonal matrix introduced in \eqnref{eqnC}, $\L = \sigma^2\I$, $\db$ the vector of observation data, and $\mc{G}\LRp{\u}$ the forward map given by the forward equation \begin{align*} -\Div\LRp{e^\u\Grad \w} &= 0 & \text{ in } \Omega \\ -e^\u\Grad \w \cdot \mb{n} &= Bi \,\w &\text{ on } \partial \Omega \setminus \Gamma_{R}, \\ -e^\u\Grad \w \cdot \mb{n} &= -1 &\text{ on } \Gamma_{R}, \end{align*} which is discretized by the $H^1$-conforming FEM method. In this section, we study Riemann manifold Monte Carlo methods and their variations to explore the posterior \eqnref{finalPosterior}.
In particular, we compare the performance of four methods: i) sRMMALA, obtained by ignoring the third derivative in RMMALA; ii) RMMALA; iii) sRMHMC, obtained by first computing the augmented Fisher metric tensor at the MAP point and then using it as the constant metric tensor; and iv) RMHMC. For all methods, we form the augmented Fisher information matrix exactly using \eqnref{FisherF} with $\uo$ and $\ut$ as finite element basis vectors. For RMMALA and RMHMC we also need the derivative of the metric tensor, which is a third order tensor. It can be constructed exactly using \eqnref{tensor} with $\uo, \ut$ and $\uth$ as finite element basis vectors. The RMHMC method requires extra work, since each Stormer-Verlet step involves an implicit solve for both the first half step of the momentum and the full step of the position. For inverse problems such as those considered in this paper, the fixed point approach proposed in \cite{GirolamiCalderhead11} does not seem to converge. We therefore have to resort to a full Newton method. Since we explicitly construct the metric tensor and its derivative, it is straightforward for us to develop the Newton scheme. For all problems considered in this section, we have observed that it takes at most five Newton iterations to converge. Note that we limit ourselves to comparing these four methods in the Riemannian manifold MCMC sampling family. Clearly, other methods are available, but we avoid an ``unmatched comparison'' in terms of cost and the level of exploitation of the structure of the problem. Even in this limited family, RMHMC is the most expensive method, since it requires not only third derivatives but also implicit solves; however, its ability to generate almost independent samples is attractive, as we shall show. Though our proposed approach described in the previous sections is valid for any spatial dimension $d$, we restrict ourselves to a one dimensional problem, i.e., $d = 1$, to clearly illustrate our points and findings.
In particular, we take $\Omega = \LRs{0,1}$ and $\Gamma_R = \LRc{1}$. We set $Bi = 0.1$ for all examples. As discussed in Section \secref{infiniteBayes}, for the Gaussian prior to be well-defined, we take $s = 0.6 > d/2 = 1/2$. \subsection{Two-parameter examples} \seclab{twoparameter} We start our numerical experiments with two parameters. This will help demonstrate various aspects of RMHMC which are otherwise too computationally expensive to study for high dimensional problems. In particular, the two-parameter example allows us to compute the complete third derivative tensor and to perform the Newton method for each Stormer-Verlet step. This in turn allows us to show the capability of the full RMHMC over its simplified variants in tackling challenging posterior densities in which the metric tensor changes rapidly. In order to construct the case with two parameters, we consider FEM with one finite element. We assume that there is one observation point, i.e., $K = 1$, and that it is placed at the left boundary $x = 0$. In the first example, we take $s = 0.6$, $\sigma = 0.1$, and $\alpha = 0.1$. The posterior in this case is shown in Figure \figref{posterior1}. We start by taking a time step of $0.02$ with $100$ Stormer-Verlet steps for both sRMHMC and RMHMC. The acceptance rate for both methods is $1$. One would take a time step of $2$ for sRMMALA and RMMALA to be comparable with sRMHMC and RMHMC, but the acceptance rate would then be zero. Instead we take a time step of $1$, so that the acceptance rate is about $0.5$ for sRMMALA and $0.3$ for RMMALA. The MAP point is chosen as the initial state for all the chains, each with $5000$ samples excluding the first $100$ burn-in samples. The result is shown in Figure \figref{comparison1}.
\begin{figure}[h!tb] \subfigure[$s = 0.6$, $\alpha = 0.1$, and $\sigma = 0.1$]{ \includegraphics[trim=1cm 6.5cm 2cm 7.3cm,clip=true,width=0.32\columnwidth]{ThermalFin1DNelements1al60alpha1m-1noise1m-1} \figlab{posterior1} } \subfigure[$s = 0.6$, $\alpha = 1$, and $\sigma = 0.01$]{ \includegraphics[trim=1cm 6.5cm 2cm 7.3cm,clip=true,width=0.32\columnwidth]{ThermalFin1DNelements1al60alpha1p0noise1m-2} \figlab{posterior2} } \subfigure[$s = 0.6$, $\alpha = 0.1$, and $\sigma = 0.01$]{ \includegraphics[trim=1cm 6.5cm 2cm 7.3cm,clip=true,width=0.32\columnwidth]{ThermalFin1DNelements1al60alpha1m-1noise1m-2} \figlab{posterior3} } \caption{The contours of the posterior for three combinations of $s$ (the prior smoothness), $\alpha$ (the ``amount'' of the prior), and $\sigma$ (the noise standard deviation).} \figlab{Posterior1} \end{figure} \begin{figure}[h!tb] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,width=0.97\columnwidth]{RMHMC_Thermal_ComparisonN5000Epsilon21al60alpha1m-1noise1m-1} \caption{Comparison of sRMMALA, RMMALA, sRMHMC, and RMHMC: chains with 5000 samples, burn-in of $100$, starting at the MAP point. In this example, $s = 0.6$, $\alpha = 0.1$, and $\sigma = 0.1$. Time step is $\varepsilon = 1$ for sRMMALA and RMMALA, and $\varepsilon = 0.02$ with the number of time steps $L = 100$ for sRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red, and the shaded region is the $95\%$ credibility region. In the middle column: blue is the trace plot for $\u_1$ while green is for $\u_2$. In the right column: red and black are the autocorrelation function for $\u_1$ and $\u_2$, respectively.} \figlab{comparison1} \end{figure} As can be seen by comparing the second column (the trace plot) and the third column (the autocorrelation function, ACF), the RMHMC chain is the best in terms of mixing. Each RMHMC sample is almost uncorrelated with the previous ones.
The sRMHMC is the second best, but the samples are strongly correlated compared to those of RMHMC, e.g., one uncorrelated sample for every $40$. It is interesting to observe that the full RMMALA and sRMMALA have qualitatively similar performance in terms of auto-correlation length, at least in the first $5000$ samples. This is due to the RMMALA schemes being driven by a single step random walk that cannot fully exploit the curvature information available to the geodesic flows of RMHMC; see the rejoinder of \cite{GirolamiCalderhead11}. Note that it is not our goal to compare the behavior of the chains when they converge. Rather, we would like to qualitatively study how fast the chains become well-mixed (the mixing time). This is important for large-scale problems governed by PDEs, since an unpredictable mixing time implies a lot of costly waste in PDE solves, which one must avoid. Though RMHMC is expensive in generating a sample, the cost of generating an uncorrelated/independent sample is comparable to that of sRMHMC for this example. In fact, if we measure the cost in terms of the number of PDE solves, the total number of PDE solves for RMHMC is $42476480$ while it is $1020002$ for sRMHMC, a factor of about $40$ more expensive. However, the cost of generating an almost uncorrelated/independent sample is the same, since sRMHMC generates one uncorrelated sample out of every $40$ while it is one out of one for RMHMC. To see how each method distributes the samples, we plot one of every five samples in Figure \figref{comparison1Trajectory}. All methods seem to explore the high probability density region very well. This explains why the sample mean and the $95\%$ credibility region are similar for all methods in the first column of Figure \figref{comparison1}. \begin{figure}[h!t!b!]
\includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,width=0.97\columnwidth]{HelmholtzN5000BurnIn100Epsilon2Nsteps1001al60alpha1m-1noise1m-1Trajectory} \caption{Comparison of MCMC trajectories (plotting one of every five samples) among sRMMALA, RMMALA, sRMHMC, and RMHMC: chains with 5000 samples, burn-in of $100$, starting at the MAP point. In this example, $s = 0.6$, $\alpha = 0.1$, and $\sigma = 0.1$. Time step is $\varepsilon = 1$ for sRMMALA and RMMALA, and $\varepsilon = 0.02$ with the number of time steps $L = 100$ for sRMHMC and RMHMC.} \figlab{comparison1Trajectory} \end{figure} \begin{figure}[h!tb] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,width=0.97\columnwidth]{RMHMC_Thermal_ComparisonN5000Epsilon41al60alpha1p0noise1m-2} \caption{Comparison among sRMMALA, RMMALA, sRMHMC, and RMHMC: chains with 5000 samples, burn-in of $100$, starting at the MAP point. In this example, $s = 0.6$, $\alpha = 1$, and $\sigma = 0.01$. Time step is $\varepsilon = 1$ for sRMMALA and RMMALA, and $\varepsilon = 0.04$ with the number of time steps $L = 100$ for sRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red, and the shaded region is the $95\%$ credibility region. In the middle column: blue is the trace plot for $\u_1$ while green is for $\u_2$. In the right column: red and black are the autocorrelation function for $\u_1$ and $\u_2$, respectively.} \figlab{comparison2} \end{figure} In the second example we consider the combination $s = 0.6$, $\sigma = 0.01$, and $\alpha = 1$, which leads to the posterior shown in Figure \figref{posterior2}. For sRMHMC and RMHMC, we take time step $\varepsilon= 0.04$ with $L = 100$ time steps, while it is $1$ for both sRMMALA and RMMALA. Again, the acceptance rate is unity for both sRMHMC and RMHMC, while it is $0.65$ for sRMMALA and $0.55$ for RMMALA. The results for the four methods are shown in Figure \figref{comparison2}.
As can be seen, this example seems to be easier than the first one since, even though the time step is larger, the trace plot and the ACF look better. It is interesting to observe that sRMHMC is comparable with RMHMC (in fact the ACF seems to be a bit better) for this example. As a result, RMHMC is more expensive than sRMHMC for the less challenging posterior in Figure \figref{posterior2}. Here, by less challenging we mean that the posterior is quite well approximated by a Gaussian at the MAP point, e.g. the metric tensor is almost constant. This is true for the posterior in Figure \figref{posterior2} in which the Gaussian prior contribution is significant, i.e., $\alpha = 1$ instead of $\alpha = 0.1$. Conversely, the posterior is challenging if the metric tensor changes rapidly. Similar to the first example, one also sees that the sample mean and the $95\%$ credibility region are almost the same for all methods. \begin{figure}[h!tb] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,width=0.97\columnwidth]{RMHMC_Thermal_ComparisonN5000Epsilon21al60alpha1m-1noise1m-2} \caption{Comparison among simRMMALA, RMMALA, simRMHMC, and RMHMC: chains with 5000 samples, burn-in of $100$, starting at the MAP point. In this example, $s = 0.6$, $\alpha = 0.1$, and $\sigma = 0.01$. Time step is $\varepsilon = 0.7$ for simRMMALA and RMMALA, and $\varepsilon = 0.02$ with the number of time steps $L = 100$ for simRMHMC and RMHMC. In the left column: the exact synthetic solution is black, the sample mean is red, and the shaded region is the $95\%$ credibility region. In the middle column: blue is the trace plot for $\u_1$ while green is for $\u_2$. In the right column: red and black are the autocorrelation function for $\u_1$ and $\u_2$, respectively.} \figlab{comparison3} \end{figure} In the third example we consider the combination $s = 0.6$, $\sigma = 0.01$, and $\alpha = 0.1$ which leads to a skinny posterior with a long ridge as shown in Figure \figref{posterior3}.
For sRMHMC and RMHMC, we take time step $\varepsilon= 0.02$ with $L = 100$ time steps, while it is $1$ for both sRMMALA and RMMALA. Again, the acceptance rate is unity for both sRMHMC and RMHMC while it is $0.45$ for sRMMALA and RMMALA. The results for the four methods are shown in Figure \figref{comparison3}. For this example, RMHMC is more desirable than sRMHMC since the cost to generate an uncorrelated/independent sample is smaller for the former than for the latter. The reason is that the total number of PDE solves for the former is $40$ times that of the latter, but only one out of every sixty sRMHMC samples is uncorrelated/independent. \subsection{Multi-parameter examples} In this section we choose to discretize $\Omega = \LRs{0,1}$ with $2^{10} = 1024$ elements, and hence the number of parameters is $1025$. For all simulations in this section, we choose $s = 0.6$, $\alpha = 10$, and $\sigma = 0.01$. For synthetic observations, we take $K = 64$ observations at $x_j = (j-1)/2^6$, $j = 1,\hdots,K$. Clearly, using the full blown RMHMC is out of the question since it is too expensive to construct the third derivative tensor and to run the Newton method at each St\"ormer--Verlet step. For that reason, sRMHMC becomes the viable choice. As studied in Section \secref{twoparameter}, though sRMHMC loses the ability to efficiently sample from highly nonlinear posterior surfaces compared to the full RMHMC, it is much less expensive to generate a sample since it does not require the derivative of the Fisher information matrix. In fact sRMHMC requires (approximately) computing the Fisher information only once at the MAP point, which is then used as the fixed constant metric tensor throughout all leap-frog steps for all samples. Clearly, the gradient \eqnref{Gradient} has to be evaluated at each leap-frog step, but it can be computed efficiently using the adjoint method presented in Section \secref{adjoint}. Nevertheless, constructing the exact Fisher information matrix requires $2\times 1025$ PDE solves.
This is impractical if the dimension of the finite element space increases, e.g. by refining the mesh. Alternatively, due to the compactness of the Hessian of the prior-preconditioned misfit as discussed in Section \secref{lowrank}, we can use the randomized singular value decomposition (RSVD) technique \cite{HalkoMartinssonTropp11} to compute its low rank approximations. Shown in Figure \figref{eigenSpectrum} are the first 35 dominant eigenvalues of the Fisher information matrix and its prior-preconditioned counterpart. We also plot 20 approximate eigenvalues of the prior-preconditioned Fisher information matrix obtained from the RSVD method. As can be seen, the eigenspectrum of the prior-preconditioned Fisher information matrix decays faster than that of the original one. This is not surprising since the prior-preconditioned Fisher operator is a composition of the prior covariance, a compact operator, and the Fisher information operator, also a compact operator. The power of the RSVD is clearly demonstrated as the RSVD result for the first 20 eigenvalues is very accurate. \begin{figure}[h!tb] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,width=0.97\columnwidth]{GNspectrum1024al60alpha1000noise1} \caption{The eigenspectrum of the Fisher information matrix, the prior-preconditioned Fisher matrix, and the first 20 eigenvalues approximated using RSVD. Here, $s = 0.6$, $\alpha = 10$, and $\sigma = 0.01$.} \figlab{eigenSpectrum} \end{figure} \begin{figure}[h!tb] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,height = 0.7\columnwidth, width=1\columnwidth]{FullLowRankFisher1024al60alpha1000noise1} \caption{MCMC results of three sRMHMC methods with i) the low rank Gauss-Newton Hessian, ii) the exact Gauss-Newton Hessian, and iii) the full Hessian.
In the figure are the empirical mean (red line), the exact distributed parameter used to generate the observation (black line), and $95\%$ credibility (shaded region).} \figlab{UQmultiparameter} \end{figure} Next, we perform the sRMHMC method using three different constant metric tensors: i) the low rank Gauss-Newton Hessian, ii) the exact Gauss-Newton Hessian, and iii) the full Hessian. For each case, we start the Markov chain at the MAP point and compute $5100$ samples, the first $100$ of which are then discarded as burn-in. The empirical mean (red line), the exact distributed parameter used to generate the observation (black line), and the $95\%$ credibility region are shown in Figure \figref{UQmultiparameter}. As can be seen, the results from the three methods are indistinguishable. The first sRMHMC is the most appealing since it requires $2 \times 20 = 40$ PDE solves to construct the low rank Fisher information while the others need $2\times 1025$ PDE solves. For large-scale problems with computationally expensive PDE solves, the first approach is the method of choice. To further compare the three methods we record the trace plot of the first two ($1$ and $2$) and the last two ($1024$ and $1025$) parameters in Figure \figref{UQmultiparameterTrace}. As can be observed, the chains from the three methods seem to be well-mixed and it is hard to see any difference among them. We also plot the autocorrelation function for these four parameters. Again, results for the three sRMHMC methods are almost identical, namely, they generate almost uncorrelated samples. We therefore conclude that the low rank approach is the least computationally expensive, yet it maintains the attractive features of the original RMHMC. As such, it is the most suitable method for large-scale Bayesian inverse problems with costly PDE solves. \begin{figure}[h!t!b!]
\includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,height = 1.\columnwidth, width=1\columnwidth]{FullLowRankFisherTrace1024al60alpha1000noise1} \caption{MCMC results of three sRMHMC methods with i) the low rank Gauss-Newton Hessian (left column), ii) the exact Gauss-Newton Hessian (middle column), and iii) the full Hessian at the MAP point (right column). In the figure are the trace plots of the first two ($1$ and $2$) and last two ($1024$ and $1025$) parameters.} \figlab{UQmultiparameterTrace} \end{figure} \begin{figure}[h!t!b!] \includegraphics[trim=1cm 6.5cm 2cm 6.5cm,clip=true,height = 1.\columnwidth, width=1\columnwidth]{FullLowRankCorrelation1024al60alpha1000noise1} \caption{MCMC results of three sRMHMC methods with i) the low rank Gauss-Newton Hessian (left column), ii) the exact Gauss-Newton Hessian (middle column), and iii) the full Hessian at the MAP point (right column). In the figure are the autocorrelation function plots of the first two ($1$ and $2$) and last two ($1024$ and $1025$) parameters.} \figlab{UQmultiparameterCorrelation} \end{figure} \section{Conclusions and future work} \seclab{conclusions} We have proposed the adoption of a computationally inexpensive Riemann manifold Hamiltonian Monte Carlo method to explore the posterior of large-scale Bayesian inverse problems governed by PDEs in a highly efficient manner. We first adopt an infinite dimensional Bayesian framework to guarantee that the inverse formulation is well-defined. In particular, we postulate a Gaussian prior measure on the parameter space and assume regularity for the likelihood. This leads to a well-defined posterior distribution. Then, we discretize the posterior using the standard finite element method and a matrix transfer technique, and apply the RMHMC method to the resulting discretized posterior in finite dimensional parameter space.
We present an adjoint technique to efficiently compute the gradient, the Hessian, and the third derivative of the potential function that are required in the RMHMC context. This comes at the expense of solving a few extra PDEs: one for the gradient, two for a Hessian-vector product, and four for the product of the third order derivative with a matrix. For large-scale problems, repeatedly computing the action of the Hessian and of the third order derivative is too computationally expensive, and this motivates us to design a simplified RMHMC in which the Fisher information matrix is computed once at the MAP point. We further reduce the effort by constructing a low rank approximation of the Fisher information using a randomized singular value decomposition technique. The effectiveness of the proposed approach is demonstrated on a number of numerical examples with up to $1025$ parameters, in which the computational gain is about two orders of magnitude while maintaining the quality of the original RMHMC method in generating (almost) uncorrelated/independent samples. For more challenging inverse problems with significant changes of the metric tensor across the parameter space, we expect that sRMHMC with a constant metric tensor is inefficient. In that case, RMHMC seems to be a better option, but it is too computationally expensive for large-scale problems. Ongoing work is to explore approximations of the RMHMC method in which we approximate the trace and the third derivative in \eqnref{Hamilton} using adjoint and randomized techniques. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} In essence, conformal predictors output systems of p-values: to each potential label of a test object a conformal predictor assigns the corresponding p-value, and a low p-value is interpreted as the label being unlikely. It has been argued, especially by Bayesian statisticians, that p-values are more difficult to interpret than probabilities; besides, in decision problems probabilities can be easily combined with utilities to obtain decisions that are optimal from the point of view of Bayesian decision theory. In this paper we will apply the idea of transforming p-values into probabilities (used in a completely different context in, e.g., \cite{vovk:1993logic}, Sect.~9, and \cite{sellke/etal:2001}) to conformal prediction: the p-values produced by conformal predictors will be transformed into probabilities. The approach of this paper is as follows. It was observed in \cite{\OCMXI} that some criteria of efficiency for conformal prediction (called ``probabilistic criteria'') encourage using the conditional probability $Q(y\mid x)$ as the conformity score for an observation $(x,y)$, $Q$ being the data-generating distribution. In this paper we extend this observation to label-conditional predictors (Sect.~\ref{sec: criteria for p-values}). Next we imagine that we are given a conformal predictor $\Gamma$ that is nearly optimal with respect to a probabilistic criterion (such a conformal predictor might be an outcome of a thorough empirical study of various conformal predictors using a probabilistic criterion of efficiency). Essentially, this means that in the limit of a very large training set the p-value that $\Gamma$ outputs for an observation $(x,y)$ is a monotonic transformation of the conditional probability $Q(y\mid x)$ (Theorem~\ref{thm: CP} in Sect.~\ref{sec: optimal}). Finally, we transform the p-values back into conditional probabilities using the distribution of p-values in the test set (Sect.~\ref{sec: calibration}). 
Following \cite{vovk:1993logic} and \cite{sellke/etal:2001}, we will say that at this step we \emph{calibrate} the p-values into probabilities. In Sect.~\ref{sec: experiments} we give an example of a realistic situation where use of the techniques developed in this paper improves on a standard approach. The performance of the probabilistic predictors considered in that section is measured using standard loss functions, logarithmic and Brier (Sect.~\ref{sec: criteria for probabilities}). \subsection*{Comparisons with related work} It should be noted that in the process of transforming p-values into probabilities suggested in this paper we lose a valuable feature of conformal prediction, its automatic validity. Our hope, however, is that the advantages of conformal prediction will translate into accurate probabilistic predictions. There is another method of probabilistic prediction that is related to conformal prediction, Venn prediction (see, e.g., \cite{vovk/etal:2005book}, Chap.~6, or \cite{\OCMVII}). This method does have a guaranteed property of validity (perhaps the simplest being Theorem~1 in \cite{\OCMVII}); however, the price to pay is that it outputs multiprobabilistic predictions rather than sharp probabilistic predictions. There are natural ways of transforming multiprobabilistic predictions into sharp probabilistic predictions (see, e.g., \cite{\OCMVII}, Sect.~4), but such transformations, again, lead to the loss of the formal property of validity. As preparation, we study label-conditional conformal prediction. For a general discussion of conditionality in conformal prediction, see \cite{\OCMV}. Object-conditional conformal prediction has been studied in \cite{lei/wasserman:2013} (in the case of regression).
\section{Criteria of efficiency for label-conditional conformal predictors and transducers} \label{sec: criteria for p-values} Let $\mathbf{X}$ be a measurable space (the \emph{object space}) and $\mathbf{Y}$ be a finite set equipped with the discrete $\sigma$-algebra (the \emph{label space}); the \emph{observation space} is defined to be $\mathbf{Z}:=\mathbf{X}\times\mathbf{Y}$. A \emph{conformity measure} is a measurable function $A$ that assigns to every sequence $(z_1,\ldots,z_n)\in\mathbf{Z}^*$ of observations a same-length sequence $(\alpha_1,\ldots,\alpha_n)$ of real numbers and that is equivariant with respect to permutations: for any $n$ and any permutation $\pi$ of $\{1,\ldots,n\}$, $$ (\alpha_1,\ldots,\alpha_n) = A(z_1,\ldots,z_n) \Longrightarrow \left(\alpha_{\pi(1)},\ldots,\alpha_{\pi(n)}\right) = A\left(z_{\pi(1)},\ldots,z_{\pi(n)}\right). $$ The \emph{label-conditional conformal predictor} determined by $A$ is defined by \begin{equation}\label{eq: conformal predictor} \Gamma^{\epsilon}(z_1,\ldots,z_l,x) := \left\{ y \mid p^y>\epsilon \right\}, \end{equation} where $(z_1,\ldots,z_l)\in\mathbf{Z}^*$ is a training sequence, $x$ is a test object, $\epsilon\in(0,1)$ is a given \emph{significance level}, and for each $y\in\mathbf{Y}$ the corresponding \emph{label-conditional p-value} $p^y$ is defined by \begin{multline}\label{eq: p} p^y := \frac { \left|\left\{i=1,\ldots,l+1\mid y_i=y\And\alpha^y_i<\alpha^y_{l+1}\right\}\right| } { \left|\left\{i=1,\ldots,l+1\mid y_i=y\right\}\right| }\\ + \tau \frac { \left|\left\{i=1,\ldots,l+1\mid y_i=y\And\alpha^y_i=\alpha^y_{l+1}\right\}\right| } { \left|\left\{i=1,\ldots,l+1\mid y_i=y\right\}\right| }, \end{multline} where $\tau$ is a random number distributed uniformly on the interval $[0,1]$ and the corresponding sequence of \emph{conformity scores} is defined by \begin{equation*} (\alpha_1^y,\ldots,\alpha_l^y,\alpha_{l+1}^y) := A(z_1,\ldots,z_l,(x,y)). 
\end{equation*} It is clear that the system of \emph{prediction sets} (\ref{eq: conformal predictor}) output by a conformal predictor is nested, namely decreasing in $\epsilon$. The \emph{label-conditional conformal transducer} determined by $A$ outputs the system of p-values $(p^y\mid y\in\mathbf{Y})$ defined by (\ref{eq: p}) for each training sequence $(z_1,\ldots,z_l)$ of observations and each test object $x$. \subsection*{Four criteria of efficiency} Suppose that, besides the training sequence, we are also given a test sequence, and would like to measure on it the performance of a label-conditional conformal predictor or transducer. As usual, let us define the performance on the test set to be the average performance (or, equivalently, the sum of performances) on the individual test observations. Following \cite{\OCMXI}, we will discuss the following four criteria of efficiency for individual test observations; all the criteria will work in the same direction: the smaller the better. \begin{itemize} \item The sum $\sum_{y\in\mathbf{Y}}p^y$ of the p-values; referred to as the \emph{S criterion}. This is applicable to conformal transducers (i.e., the criterion is $\epsilon$-independent). \item The size $\left|\Gamma^{\epsilon}\right|$ of the prediction set at a significance level $\epsilon$; this is the \emph{N criterion}. It is applicable to conformal predictors ($\epsilon$-dependent). \item The sum of the p-values apart from that for the true label: the \emph{OF} (``observed fuzziness'') \emph{criterion}. \item The number of false labels included in the prediction set $\Gamma^{\epsilon}$ at a significance level $\epsilon$; this is the \emph{OE} (``observed excess'') \emph{criterion}. \end{itemize} The last two criteria are simple modifications of the first two (leading to smoother and more expressive pictures). 
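The smoothed label-conditional p-value \eqref{eq: p} underlying all four criteria is straightforward to compute once conformity scores are available. The following Python sketch is illustrative only (the scores and labels are arbitrary placeholders, not tied to any particular conformity measure); it makes the label-conditioning explicit: only training examples carrying the postulated label, plus the test example itself, enter the ranking.

```python
import random

def label_conditional_p_value(scores, labels, test_score, test_label, tau=None):
    """Smoothed label-conditional p-value, cf. eq. (2):
    only examples with the same label as the postulated test label
    (plus the test example itself) enter the ranking."""
    if tau is None:
        tau = random.random()  # the smoothing variable tau ~ U[0, 1]
    # conformity scores of same-label examples, including the test example
    same = [s for s, y in zip(scores, labels) if y == test_label] + [test_score]
    n = len(same)
    smaller = sum(1 for s in same if s < test_score)
    equal = sum(1 for s in same if s == test_score)  # counts the test example too
    return (smaller + tau * equal) / n

# Example: three training scores carry the postulated label 'a'
p = label_conditional_p_value([0.2, 0.5, 0.9, 0.4], ['a', 'b', 'a', 'a'],
                              test_score=0.5, test_label='a', tau=0.5)
```

With the placeholder scores above, two same-label scores are strictly smaller than the test score and one is equal (the test example itself), so with $\tau=0.5$ the p-value is $(2+0.5)/4=0.625$.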
\ifnotCONF\begin{remark}\label{rem: general}\fi Equivalently, the S criterion can be defined as the arithmetic mean $\frac{1}{\left|\mathbf{Y}\right|}\sum_{y\in\mathbf{Y}}p^y$ of the p-values; the proof of Theorem~\ref{thm: CP} below will show that, in fact, we can replace arithmetic mean by any mean (\cite{hardy/etal:1952}, Sect.~3.1), including geometric, harmonic, etc. \ifnotCONF\end{remark}\fi \section{Optimal idealized conformity measures for a known probability distribution} \label{sec: optimal} In this section we consider the idealized case where the probability distribution $Q$ generating independent observations $z_1,z_2,\ldots$ is known (as in \cite{\OCMXI}). The main result of this section, Theorem~\ref{thm: CP}, is the label-conditional counterpart of Theorem~1 in \cite{\OCMXI}; the proof of our Theorem~\ref{thm: CP} is also modelled on the proof of Theorem~1 in \cite{\OCMXI}. In this section we assume, for simplicity, that the set $\mathbf{Z}$ is finite and that $Q(\{z\})>0$ for all $z\in\mathbf{Z}$. An \emph{idealized conformity measure} is a function $A(z,Q)$ of $z\in\mathbf{Z}$ and $Q\in\PPP(\mathbf{Z})$ (where $\PPP(\mathbf{Z})$ is the set of all probability measures on $\mathbf{Z}$). We will sometimes write the corresponding conformity scores as $A(z)$, as $Q$ will be clear from the context. The \emph{idealized smoothed label-conditional conformal predictor} corresponding to $A$ outputs the following prediction set $\Gamma^{\epsilon}(x)$ for each object $x\in\mathbf{X}$ and each significance level $\epsilon\in(0,1)$. 
For each potential label $y\in\mathbf{Y}$ for $x$ define the corresponding \emph{label-conditional p-value} as \begin{multline}\label{eq: p-value} p^y = p(x,y) := \frac {Q(\{(x',y)\mid x'\in\mathbf{X} \And A((x',y),Q)<A((x,y),Q)\})} {Q_{\mathbf{Y}}(\{y\})}\\ + \tau \frac {Q(\{(x',y)\mid x'\in\mathbf{X} \And A((x',y),Q)=A((x,y),Q)\})} {Q_{\mathbf{Y}}(\{y\})} \end{multline} (this is the idealized analogue of (\ref{eq: p})), where $Q_{\mathbf{Y}}$ is the marginal distribution of $Q$ on $\mathbf{Y}$ and $\tau$ is a random number distributed uniformly on $[0,1]$. The prediction set is \begin{equation}\label{eq: prediction set} \Gamma^{\epsilon}(x) := \left\{ y\in\mathbf{Y} \mid p(x,y)>\epsilon \right\}. \end{equation} The \emph{idealized smoothed label-conditional conformal transducer} corresponding to $A$ outputs for each object $x\in\mathbf{X}$ the system of p-values $(p^y\mid y\in\mathbf{Y})$ defined by (\ref{eq: p-value}); in the idealized case we will usually use the alternative notation $p(x,y)$ for~$p^y$. \subsection*{Four idealized criteria of efficiency} In this subsection we will apply the four criteria of efficiency that we discussed in the previous section to the idealized case of infinite training and test sequences; since the sequences are infinite, they carry all information about the data-generating distribution $Q$. We will write $\Gamma^{\epsilon}_A(x)$ for the $\Gamma^{\epsilon}(x)$ in (\ref{eq: prediction set}) and $p_A(x,y)$ for the $p(x,y)$ in (\ref{eq: p-value}) to indicate the dependence on the choice of the conformity measure~$A$. Let $U$ be the uniform probability measure on the interval~$[0,1]$. 
An idealized conformity measure~$A$ is: \begin{itemize} \item \emph{S-optimal} if $ \Expect_{(x,\tau)\sim Q_{\mathbf{X}}\times U} \sum_yp_A(x,y) \le \Expect_{(x,\tau)\sim Q_{\mathbf{X}}\times U} \sum_yp_B(x,y) $ for any idealized conformity measure $B$, where $Q_{\mathbf{X}}$ is the marginal distribution of $Q$ on $\mathbf{X}$; \item \emph{N-optimal} if $ \Expect_{(x,\tau)\sim Q_{\mathbf{X}}\times U} \left|\Gamma^{\epsilon}_A(x)\right| \le \Expect_{(x,\tau)\sim Q_{\mathbf{X}}\times U} \left|\Gamma^{\epsilon}_B(x)\right| $ for any idealized conformity measure~$B$ and any significance level~$\epsilon$; \item \emph{OF-optimal} if \begin{equation*} \Expect_{((x,y),\tau)\sim Q\times U} \sum_{y'\ne y}p_A(x,y') \le \Expect_{((x,y),\tau)\sim Q\times U} \sum_{y'\ne y}p_B(x,y') \end{equation*} for any idealized conformity measure $B$; \item \emph{OE-optimal} if \begin{equation*} \Expect_{((x,y),\tau)\sim Q\times U} \left|\Gamma^{\epsilon}_A(x)\setminus\{y\}\right| \le \Expect_{((x,y),\tau)\sim Q\times U} \left|\Gamma^{\epsilon}_B(x)\setminus\{y\}\right| \end{equation*} for any idealized conformity measure~$B$ and any significance level~$\epsilon$. \end{itemize} The \emph{conditional probability (CP) idealized conformity measure} is $$ A((x,y),Q) := Q(y\mid x). $$ An idealized conformity measure $A$ is a (label-conditional) \emph{refinement} of an idealized conformity measure $B$ if \begin{equation} B((x_1,y))<B((x_2,y)) \Longrightarrow A((x_1,y))<A((x_2,y)) \end{equation} for all $x_1,x_2\in\mathbf{X}$ and all $y\in\mathbf{Y}$. (Notice that this definition, being label-conditional, is different from the one given in \cite{\OCMXI}.) Let $\RRR(\CP)$ be the set of all refinements of the CP idealized conformity measure. If $C$ is a criterion of efficiency (one of the four discussed above), we let $\OOO(C)$ stand for the set of all $C$-optimal idealized conformity measures. \begin{theorem}\label{thm: CP} $\OOO(\SSS)=\OOO(\OF)=\OOO(\NNN)=\OOO(\OE)=\RRR(\CP)$.
\end{theorem} \begin{proof} We start by proving $\RRR(\CP)=\OOO(\NNN)$. Fix a significance level $\epsilon$. A smoothed confidence predictor at level $\epsilon$ is defined as a random set of observations $(x,y)\in\mathbf{Z}$; in other words, to each observation $(x,y)$ is assigned the probability $P(x,y)$ that the observation will be outside the prediction set. Under the restriction that the sum of the probabilities $Q(x,y)$ of observations $(x,y)$ outside the prediction set (defined as $\sum_x Q(x,y)P(x,y)$ in the smoothed case) is bounded by $\epsilon Q_{\mathbf{Y}}(y)$ for a fixed $y$, the N criterion requires us to make the sum of $Q_{\mathbf{X}}(x)$ for $(x,y)$ outside the prediction set (defined as $\sum_x Q_{\mathbf{X}}(x)P(x,y)$ in the smoothed case) as large as possible. It is clear that the set should consist of the observations with the smallest $Q(y\mid x)$ (by the usual Neyman--Pearson argument: cf.\ \cite{lehmann:1986}, Sect.~3.2). \ifFULL\bluebegin This argument in fact also shows that $\OOO(\NNN)\subseteq\RRR(\CP)$. \blueend\fi Next we show that $\OOO(\NNN)\subseteq\OOO(\SSS)$. Let an idealized conformity measure $A$ be N-optimal. By definition, \begin{equation*} \Expect_{x,\tau} \left|\Gamma^\epsilon_A(x)\right| \le \Expect_{x,\tau} \left|\Gamma^\epsilon_B(x)\right| \end{equation*} for any idealized conformity measure $B$ and any significance level $\epsilon$. Integrating over $\epsilon\in(0,1)$ and swapping the order of integrals and expectations, \begin{equation}\label{eq: N-S} \Expect_{x,\tau} \int_0^1 \left|\Gamma^\epsilon_A(x)\right| \dd{\epsilon} \le \Expect_{x,\tau} \int_0^1 \left|\Gamma^\epsilon_B(x)\right| \dd{\epsilon}.
\end{equation} Since $$ \left|\Gamma^\epsilon(x)\right| = \sum_{y\in\mathbf{Y}}1_{\{p(x,y) > \epsilon\}}, $$ we can rewrite \eqref{eq: N-S}, after swapping the order of summation and integration, as \begin{equation*} \Expect_{x,\tau} \sum_{y \in \mathbf{Y}} \left( \int_0^1 1_{\{p_A(x,y) > \epsilon\}} \dd{\epsilon} \right) \le \Expect_{x,\tau} \sum_{y \in \mathbf{Y}} \left( \int_0^1 1_{\{p_B(x,y) > \epsilon\}} \dd{\epsilon} \right). \end{equation*} Since $$ \int_0^1 1_{\{p(x,y) > \epsilon\}} \dd{\epsilon} = p(x,y), $$ we finally obtain \begin{equation*} \Expect_{x,\tau} \sum_{y \in \mathbf{Y}} p_A(x,y) \le \Expect_{x,\tau} \sum_{y \in \mathbf{Y}} p_B(x,y). \end{equation*} Since this holds for any idealized conformity measure $B$, $A$ is S-optimal. The argument in the previous paragraph in fact shows that $\OOO(\SSS)=\OOO(\NNN)=\RRR(\CP)$. \ifnotCONF Indeed, that argument shows that \begin{equation*} \sum_{y \in \mathbf{Y}} p(x,y) = \int_0^1 \left|\Gamma^\epsilon(x)\right| \dd{\epsilon}, \end{equation*} and so to optimize a conformity measure in the sense of the S criterion it suffices to optimize it in the sense of the N criterion for all $\epsilon$ simultaneously (which can, and therefore should, be done). More generally, for any continuous increasing function $\phi$ we have \begin{multline*} \sum_{y \in \mathbf{Y}} \phi(p(x,y)) = \sum_{y \in \mathbf{Y}} \int_0^1 1_{\{\phi(p(x,y))>\epsilon\}} \dd{\epsilon} = \int_0^1 \sum_{y \in \mathbf{Y}} 1_{\{p(x,y)>\phi^{-1}(\epsilon)\}} \dd{\epsilon}\\ = \int_0^1 \left|\Gamma^{\phi^{-1}(\epsilon)}(x)\right| \dd{\epsilon} = \int \left|\Gamma^{\epsilon'}(x)\right| \phi'(\epsilon') \dd{\epsilon'}, \end{multline*} which proves Remark~\ref{rem: general}. 
\fi The equality $\OOO(\SSS)=\OOO(\OF)$ follows from $$ \Expect_{x,\tau} \sum_{y} p(x,y) = \Expect_{(x,y),\tau} \sum_{y'\ne y}p(x,y') + \frac12, $$ where we have used the fact that $p(x,y)$ is distributed uniformly on $[0,1]$ when $((x,y),\tau)\sim Q\times U$ (see \cite{vovk/etal:2005book} and \cite{\OCMXI}). Finally, we notice that $\OOO(\NNN)=\OOO(\OE)$. Indeed, for any significance level $\epsilon$, $$ \Expect_{x,\tau} |\Gamma^\epsilon(x)| = \Expect_{(x,y),\tau} |\Gamma^\epsilon(x) \setminus \{y\}| + (1-\epsilon), $$ again using the fact that $p(x,y)$ is distributed uniformly on $[0,1]$ and so $\Prob_{(x,y),\tau}(y\in\Gamma^\epsilon(x)) = 1 - \epsilon$. \end{proof} \section{Criteria of efficiency for probabilistic predictors} \label{sec: criteria for probabilities} Given a training set $(z_1,\ldots,z_l)$ and a test object $x$, a probabilistic predictor outputs a probability measure $P\in\PPP(\mathbf{Y})$, which is interpreted as its probabilistic prediction for the label $y$ of $x$; we let $\PPP(\mathbf{Y})$ stand for the set of all probability measures on $\mathbf{Y}$. The two standard ways of measuring the performance of $P$ on the actual label $y$ are the \emph{logarithmic} (or \emph{log}) \emph{loss} $-\ln P(\{y\})$ and the \emph{Brier loss} $$ \sum_{y'\in\mathbf{Y}} \Bigl(1_{\{y'=y\}}-P(\{y'\})\Bigr)^2, $$ where $1_E$ stands for the indicator of an event $E$: $1_E=1$ if $E$ happens and $1_E=0$ otherwise. The efficiency of probabilistic predictors will be measured by these two loss functions. \ifFULL\bluebegin Remember that probabilistic predictors do not possess any properties of automatic validity (unlike Venn predictors, which are, however, multiprobabilistic predictors). \blueend\fi \ifFULL\bluebegin These are proper loss functions.
\blueend\fi Suppose we have a test sequence $(z_{l+1},\ldots,z_{l+k})$, where $z_i=(x_i,y_i)$ for $i=l+1,\ldots,l+k$, and we want to evaluate the performance of a probabilistic predictor (trained on a training sequence $z_1,\ldots,z_l$) on it. In the next section we will use the \emph{average log loss} $$ -\frac1k \sum_{i=l+1}^{l+k} \ln P_i(\{y_i\}) $$ and the \emph{standardized Brier loss} $$ \sqrt { \frac{1}{k\left|\mathbf{Y}\right|} \sum_{i=l+1}^{l+k} \sum_{y'\in\mathbf{Y}} \Bigl(1_{\{y'=y_i\}}-P_i(\{y'\})\Bigr)^2 }, $$ where $P_i\in\PPP(\mathbf{Y})$ is the probabilistic prediction for $x_i$. Notice that in the binary case, $\left|\mathbf{Y}\right|=2$, the average log loss coincides with the mean log error (used in, e.g., \cite{\OCMVII}, (12)) and the standardized Brier loss coincides with the root mean square error (used in, e.g., \cite{\OCMVII}, (13)). \section{Calibration of p-values into conditional probabilities} \label{sec: calibration} \ifFULL\bluebegin We can use a hold-out set for calibration (say nonparametric, using monotonic regression, as in \cite{zadrozny/elkan:2002} in a related context). This might be too wasteful, but still we should run experiments. In this section we will discuss an alternative approach: how to calibrate p-values using the test set. \blueend\fi The argument of this section will be somewhat heuristic, and we will not try to formalize it in this paper. Fix $y\in\mathbf{Y}$. Suppose that $q:=Q(y\mid x)$ has an absolutely continuous distribution with density $f$ when $x\sim Q_{\mathbf{X}}$. (In other words, $f$ is the density of the image of $Q_{\mathbf{X}}$ under the mapping $x\mapsto Q(y\mid x)$.\ifFULL\bluebegin\ This assumption contradicts the assumption made earlier that $\mathbf{Z}$ is finite.\blueend\fi) For the CP idealized conformity measure, we can rewrite (\ref{eq: p-value}) as \begin{equation}\label{eq: ideal p-value} p(q) := \left.
\int_0^q q' f(q') dq' \middle/ D \right., \end{equation} where $D:=Q_{\mathbf{Y}}(\{y\})$; alternatively, we can set $D := \int_0^1 q' f(q') dq'$, the normalizing constant ensuring that $p(1)=1$. To see how \eqref{eq: ideal p-value} is a special case of \eqref{eq: p-value} for the CP idealized conformity measure, notice that the probability that $Y=y$ and $Q(Y\mid X)\in(q',q'+dq')$, where $(X,Y)\sim Q$, is $q'f(q')dq'$. In (\ref{eq: ideal p-value}) we write $p(q)$ rather than $p^y$ since $p^y$ depends on $y$ only via $q$. \begin{algorithm}[bt] \caption{Conformal-type probabilistic predictor} \label{alg: PP} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE training sequence $(z_1,\ldots,z_l)\in\mathbf{Z}^l$ \REQUIRE calibration sequence $(x_{l+1},\ldots,x_{l+k})\in\mathbf{X}^k$ \REQUIRE test object $x_0$ \ENSURE probabilistic prediction $P\in\PPP(\mathbf{Y})$ for the label of $x_0$ \FOR{$y\in\mathbf{Y}$} \STATE for each $x_i$ in the calibration sequence find the p-value $p_i^y$ by~\eqref{eq: p} \STATE \qquad (with $l+i$ in place of $l+1$) \STATE let $g_y$ be the antitonic density on $[0,1]$ fitted to $p_{l+1}^y,\ldots,p_{l+k}^y$ \STATE find the p-value $p_0^y$ by~\eqref{eq: p} (with $0$ in place of $l+1$) \STATE set $P'(\{y\}):=g_y(1)/g_y(p_0^y)$ \ENDFOR \STATE set $P(\{y\}):=P'(\{y\})/\sum_{y'}P'(\{y'\})$ for each $y\in\mathbf{Y}$ \end{algorithmic} \end{algorithm} We are more interested in the inverse function $q(p)$, which is defined by the condition $$ p = \left. \int_0^{q(p)} q' f(q') dq' \middle/ D \right.. $$ When $q\sim f$, we have $$ \Prob(p(q)\le a) = \Prob(q\le q(a)) = \int_0^{q(a)} f(q') dq'. $$ Therefore, when $q\sim f$, we have $$ \Prob(a\le p(q)\le a+da) = \int_{q(a)}^{q(a+da)} f(q') dq' \approx \frac{1}{q(a)} \int_{q(a)}^{q(a+da)} q' f(q') dq' = \frac{Dda}{q(a)}, $$ and so $$ q(c) \approx \left.
D \middle/\; \frac{\Prob(c\le p(q)\le c+dc)}{dc}. \right. $$ This gives rise to the algorithm given as Algorithm~\ref{alg: PP}, which uses real p-values \eqref{eq: p} instead of the ideal p-values \eqref{eq: p-value}. The algorithm is transductive in that it uses a training sequence of labelled observations and a calibration sequence of unlabelled objects (in the next section we use the test sequence as the calibration sequence); the latter is used for calibrating p-values into conditional probabilities. Given all the p-values for the calibration sequence with postulated label $y$, find the corresponding antitonic density $g(p)$ (remember that the function $q(p)$ is known to be monotonic, namely isotonic) using Grenander's estimator (see \cite{grenander:1956} or, e.g., \cite{devroye:1987}, Chap.~8). Use $D/g(p)$ as the calibration function, where $D:=g(1)$ is chosen in such a way that a p-value of $1$ is calibrated into a conditional probability of $1$. (Alternatively, we could set $D$ to the fraction of observations labelled as $y$ in the training sequence; this approximates setting $D:=Q_{\mathbf{Y}}(\{y\})$.) The probabilities produced by this procedure are not guaranteed to lead to a probability measure: the sum over $y$ can be different from 1 (and this phenomenon has been observed in our experiments). Therefore, in the last line of Algorithm~\ref{alg: PP} we normalize the calibrated p-values to obtain genuine probabilities. \ifFULL\bluebegin Grenander's estimator achieves MISE rate of convergence $n^{-1/3}$ (without any extra conditions): is this true? (It is true for $L_1$: see, e.g., \cite{devroye/gyorfi:1985}, Sect.~7.7.) If the density $g$ is assumed to have $m$ derivatives, the optimal rate of convergence without order conditions is $n^{-2m/(2m+1)}$, and further assuming antitonicity does not improve it (see \cite{efromovich:2001} and references therein). Kiefer \cite{kiefer:1982}: the rate of convergence is not affected when $m=1$. 
Efromovich \cite{efromovich:2001}: even the constant is not affected, for any $m\in\{1,2,\ldots\}$. \begin{remark} The topic of this paper is how to transform conformal predictors into probabilistic predictors. Moving in the opposite direction, from probabilistic to conformal predictors, seems to be much easier: given a probabilistic predictor, a natural conformity measure $\alpha_i$ for an observation $z_i=(x_i,y_i)$ in a sequence $z_1,\ldots,z_n$ is the probability $\alpha_i:=P(y_i)$, where $P$ is the probabilistic prediction for the label of $x_i$ found using that probabilistic predictor from $z_1,\ldots,z_n$ (or $z_1,\ldots,z_{i-1},z_{i+1},\ldots,z_n$) as the training set. \end{remark} \blueend\fi \section{Experiments} \label{sec: experiments} In our experiments we use the standard USPS data set of hand-written digits. The size of the training set is 7291, and the size of the test set is 2007; however, instead of using the original split of the data into the two parts, we randomly split all available data (the union of the original training and test sets) into a training set of size 7291 and test set of size 2007. (Therefore, our results somewhat depend on the seed used by the random number generator, but the dependence is minor and does not affect our conclusions at all; we always report results for seed 0.) A powerful algorithm for the USPS data set is the 1-Nearest Neighbour (1-NN) algorithm using tangent distance \cite{simard/etal:1993}. However, it is not obvious how this algorithm could be transformed into a probabilistic predictor. 
On the other hand, there is a very natural and standard way of extracting probabilities from support vector machines, which we will refer to as \emph{Platt's algorithm} in this paper: it is the combination of the method proposed by Platt \cite{platt:2000} with pairwise coupling \cite{wu:2004} (unlike our algorithm, which is applicable to multi-class problems directly, Platt's method is directly applicable only to binary problems). In this section we will apply our method to the 1-NN algorithm with tangent distance and compare the results to Platt's algorithm as implemented in the function \texttt{svm} from the \texttt{e1071} R package (for our multi-class problem this function calculates probabilities using the combination of Platt's binary method and pairwise coupling). There is a standard way of turning a distance into a conformal predictor (\cite{vovk/etal:2005book}, Sect.~3.1): namely, the conformity score $\alpha_i$ of the $i$th observation in a sequence of observations can be defined as \begin{equation}\label{eq: NN} \frac { \min_{j: y_j \ne y_i} d(x_i,x_j) } { \min_{j\ne i: y_j = y_i} d(x_i,x_j) }, \end{equation} where $d$ is the distance; the intuition is that an object is considered conforming if it is close to an object labelled in the same way and far from any object labelled in a different way.
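As a concrete illustration, the conformity score \eqref{eq: NN} can be computed as follows (a minimal sketch; the function name, the brute-force search, and the toy distance in the usage below are ours, whereas the experiments use tangent distance):

```python
def conformity_scores(X, y, dist):
    """Conformity score (eq. NN): for each observation i, the ratio of the
    distance to the nearest differently-labelled object to the distance to
    the nearest other same-labelled object (large score = conforming)."""
    n = len(y)
    scores = []
    for i in range(n):
        other = min(dist(X[i], X[j]) for j in range(n) if y[j] != y[i])
        same = min(dist(X[i], X[j]) for j in range(n) if j != i and y[j] == y[i])
        scores.append(other / same)
    return scores

# Toy usage: two well-separated classes on the real line.
scores = conformity_scores([0.0, 0.1, 5.0, 5.1], [0, 0, 1, 1],
                           lambda a, b: abs(a - b))
```

Every score in this toy example is large, since each object is much closer to its own class than to the other one; any symmetric distance can be plugged in for `dist`.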
\begin{table}[tb] \caption{The performance of the two algorithms, Platt's (with the optimal values of parameters) and the conformal-type probabilistic predictor based on 1-Nearest Neighbour with tangent distance} \label{tab: performance} \begin{center} \begin{tabular}{r|c|c} \hline algorithm & average log loss & standardized Brier loss\\ \hline\hline optimized Platt & 0.06431 & 0.05089\\ conformal-type 1-NN & 0.04958 & 0.04359\\ \end{tabular} \end{center} \end{table} \begin{table}[tb] \caption{The performance of Platt's algorithm with the polynomial kernels of various degrees for the cost parameter $C=10$} \label{tab: Platt} \begin{center} \begin{tabular}{r|c|c} \hline degree & average log loss & standardized Brier loss\\ \hline\hline 1 & 0.12681 & 0.07342\\ 2 & 0.09967 & 0.06109\\ 3 & 0.06855 & 0.05237\\ 4 & 0.11041 & 0.06227\\ 5 & 0.09794 & 0.06040 \end{tabular} \end{center} \end{table} Table~\ref{tab: performance} compares the performance of the conformal-type probabilistic predictor based on the 1-NN conformity measure \eqref{eq: NN}, where $d$ is tangent distance, with the performance of Platt's algorithm with the optimal values of its parameters. The conformal predictor is parameter-free but Platt's algorithm depends on the choice of the kernel. We chose the polynomial kernel of degree~3 (since it is known to produce the best results: see \cite{vapnik:1998}, Sect.~12.2) and the cost parameter $C:=2.9$ in the case of the average log loss and $C:=3.4$ in the case of the standardized Brier loss (the optimal values in our experiments). (Reporting the performance of Platt's algorithm with optimal parameter values may look like data snooping, but it is fine in this context since we are helping our competitor.) 
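The two losses reported in the tables can be computed directly from their definitions given earlier (a minimal sketch; the function names are ours, and each prediction is represented as a dictionary mapping labels to probabilities):

```python
import math

def average_log_loss(predictions, labels_true):
    """Average log loss: -(1/k) * sum_i ln P_i({y_i}) over the test sequence."""
    return -sum(math.log(P[y]) for P, y in zip(predictions, labels_true)) \
        / len(labels_true)

def standardized_brier_loss(predictions, labels_true, label_set):
    """Standardized Brier loss: square root of the average, over all i and
    all y' in the label set, of (1_{y'=y_i} - P_i({y'}))^2."""
    k = len(labels_true)
    total = sum(((1.0 if yp == y else 0.0) - P[yp]) ** 2
                for P, y in zip(predictions, labels_true)
                for yp in label_set)
    return math.sqrt(total / (k * len(label_set)))
```

In the binary case these reduce to the mean log error and the root mean square error, as noted in the evaluation section above.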
Table~\ref{tab: Platt} reports the performance of Platt's algorithm as a function of the degree of the polynomial kernel with the cost parameter set at $C:=10$ (the dependence on $C$ is relatively mild, and $C=10$ gives good performance for all degrees that we consider). \ifFULL\bluebegin We do the usual normalization; it is interesting that Khutsishvili's method (\cite{vovk/zhdanov:2009}, appendix), which works extremely well for normalizing bets, leads to much poorer predictions (as perhaps was to be expected: Khutsishvili's theory is specifically designed for extracting probabilities from bets). \blueend\fi \ifFULL\bluebegin \section{Conclusion} This paper has proposed a way to turn conformal predictors into probabilistic ones. But perhaps it is not very efficient, and it appears that the main source of inefficiency is a separate treatment of different classes at the stage of calibrating p-values. \blueend\fi \subsubsection*{Acknowledgments.} In our experiments we used the R package \texttt{e1071} (by David Meyer, Evgenia Dimitriadou, Kurt Hornik, Andreas Weingessel, Friedrich Leisch, Chih-Chung Chang, and Chih-Chen Lin) and the implementation of tangent distance by Daniel Keysers. This work was partially supported by EPSRC (grant EP/K033344/1, first author) and Royal Holloway, University of London (third author).
\part{Appendix} \label{appendix} \setcounter{section}{0} \section{Appendix A : Proof of Theorem~\ref{thm Compactness}} \label{appendix A} Assumption~A controls the number of zero $u_k$-blocks, whereas Assumption~B is used to control the geometry of the mesoscopic phase labels. The dependence of $k_0$ on $\delta$ can be described as follows: we choose $k_0$ so large that \begin{equation} \label{knull} \rho_k ~\leq ~\frac1{C(d)}\delta\qquad\text{for every}\ k\geq k_0, \end{equation} where $C(d)$ is a large enough fixed constant. The three terms on the left-hand side of \eqref{tightbound} correspond to three different exponential estimates: \vskip 0.2cm \noindent \subsection{Estimate on the volume of zero $u_k$-blocks.} The domination by the Bernoulli measure \eqref{A} implies that \begin{equation} \label{volume} {\ensuremath{\mathbb P}} _N \left( \#\{x\in\sTor{n-k}:u_k (x)=0\}\geq \delta\left(\frac{N}{2^k}\right)^d \right)~\leq~c_2 \; \text{exp}\left\{ -\delta\left(\frac{N}{2^k}\right)^d\log \frac{\delta}{\rho_k}\right\} . \end{equation} Each realization of the phase label function $u_k$ splits $\uTor$ into the disjoint union of three mesoscopic regions: $$ \uTor~=~\{x:u_k (x)=1\}\vee\{x:u_k (x)=-1\}\vee\{x:u_k (x)=0\} ~\stackrel{\gD}{=}~ {\bf A}_+\vee {\bf A}_-\vee {\bf A}_0 . $$ By the choice of the scale $k_0$ in \eqref{knull}, the estimate \eqref{volume} is non-trivial for every $k\geq k_0$, and, in view of the target claim \eqref{tightbound}, we can restrict attention to those realizations of $u_k$ for which \begin{equation} \label{A_0} \left| {\bf A}_0\right|~=~\int_{\uTor} 1_{\{u_k(x)=0\}}\text{d}x ~< ~\delta . \end{equation} This has the following important implication: if $u_k\in \ensuremath{\mathcal V}\left( K_a ,2\delta\right)^{\text{c}}$, the area of the boundary of any regular set $A$ such that ${\bf A}_+\subseteq A\subseteq\uTor\setminus {\bf A}_-$ is bounded below as \begin{equation} \label{partialA} \left|\partial A\right|~\geq ~a .
\end{equation} Using Assumption~B of the theorem, we are going to construct such sets $A$ on the finite $k_0$ scale, $A\in\ensuremath{\mathcal F}_{n-k_0}$, in such a fashion that all the boundary $k_0$-blocks of $A$ will necessarily have zero $u_{k_0}$-labels. This reduction enables a uniform treatment of all coarser scales $k\geq k_0$. So let $k\geq k_0$, and assume that \eqref{A_0} holds. We denote by $A_-$ (resp.\ $A_+$) the set of all boxes $\sBox{n-k_0}$ in ${\bf A}_-$ (resp.\ ${\bf A}_+$). We say that $x\in\sTor{n-k_0}$ is $-*$~connected to $A_-$; $x\stackrel{-*}{\longleftrightarrow}A_-$, if there exists a $*$-connected chain of ``$-$'' $u_{k_0}$ blocks leading from $\sBox{n-k_0}(x)$ (and including it) to $A_-$. Define now the complement $A^{\text{c}}$ as follows: $$ A^{\text{c}}~=~A_-\cup\bigcup_{x\stackrel{-*}{\longleftrightarrow}A_-}\sBox{n-k_0}(x) . $$ By virtue of Assumption~B, ${\bf A}_+\subseteq A$. Moreover, by construction all the $k_0$-blocks of $A$ attached to the boundary $\partial A^c$ have zero $u_{k_0}$-labels. With a slight abuse of notation we proceed to denote this collection of boundary $k_0$-blocks as $\partial A$. By \eqref{partialA} the number of $k_0$-blocks in $\partial A$ is bounded below by \begin{equation} \label{kpartialA} \#_{k_0}\left(\partial A\right)~\geq ~\frac{c(d)a}{2^{(d-1)k_0}}N^{d-1} . \end{equation} Since, however, the total number of $k_0$-blocks in the corresponding decomposition of $\uTor$ equals $N^d /2^{dk_0}$, the estimate \eqref{kpartialA} alone is not sufficient to give the desired upper bound on the probability $\Joint_N \left( u_k\in\ensuremath{\mathcal V}(K_a ,2\delta)^{\text{c}}\right)$. The required entropy cancellation stems from the fact that small connected contours of $\partial A$ cannot surround too much volume.
Let us decompose $A$ into the disjoint union of its maximal connected components: $$ A~=~\bigvee_{i=1}^l A_i\qquad\text{respectively}\qquad \partial A~=~\bigvee_{i=1}^l \partial A_i . $$ We shall classify the contours $\partial A_i$ according to the size (that is, the number of $k_0$-blocks) of $A_i$. Namely, the contour $\partial A_i$ is called small if \begin{equation} \label{Kd} \#_{k_0}\left( A_i\right)~\leq ~K(d)\log N\qquad\text{or} \qquad \left| A_i\right|~ \leq ~K(d)\frac{2^{dk_0}}{N^d}\log N, \end{equation} where $K(d)$ is a sufficiently large constant. Otherwise, the contour $\partial A_i$ is called large. We claim that under \eqref{A_0} the following inclusion is valid: \begin{equation} \label{inclusion} \left\{u_k\in\ensuremath{\mathcal V}(K_a ,2\delta)^{\text{c}}\right\}~ \subseteq~\left\{\sum_{\partial A_i-\text{small}}|A_i | >\delta\right\}\bigcup \left\{\sum_{\partial A_i-\text{large}} |\partial A_i | > a \right\} . \end{equation} Indeed, if the total volume inside small contours is less than $\delta$, then repainting all the small components $A_i$ into ``$-1$'' and all the large components $A_j$ into ``$+1$'' we produce a $\{\pm 1\}$-valued function which is within ${\ensuremath{\mathbb L}} _1$-distance $2\delta$ of $u_k$ and which, thereby, cannot belong to $K_a$. \vskip 0.2cm \noindent \subsection{Peierls estimate on the size of large contours.} \begin{equation} \label{peierls} \begin{split} \Joint_N \left( \sum_{\partial A_i-\text{large}}| \partial A_i | >a\right) ~&=~ \Joint_N \left( \sum_{\partial A_i-\text{large}}\#_{k_0}(\partial A_i)> \frac{c(d)a}{2^{(d-1)k_0}}N^{d-1} \right) \\ &\ \ \leq ~\text{exp}\left\{ -c_3 (d)\frac{a}{2^{(d-1)k_0}}N^{d-1}\right\} . \end{split} \end{equation} This immediately follows from Assumption~A, once the constant $K(d)$ in \eqref{Kd} has been properly chosen.
\vskip 0.2cm \noindent \subsection{Estimate on the volume inside small contours.} The volume of the small components $A_i$ is related to the total number of $k_0$-blocks in these components by $$ \sum_{\partial A_i-\text{small}}|A_i | ~=~\left(\frac{N}{2^{k_0}}\right)^{-d} \sum_{\partial A_i-\text{small}}\#_{k_0}(A_i ) . $$ On the other hand, for every $l\in [1,...,n-k_0 ]$; \begin{equation*} \begin{split} \sum_{\partial A_i-\text{small}}\#_{k_0}(A_i )~&=~ \sum_{x\in\sTor{n-k_0}}\sum_{\partial A_i-\text{small}} 1_{\{x\in A_i\}}\\ &=~\sum_{t\in [0,...,2^l)^d} \sum_{x\in\sTor{n-k_0-l}}\sum_{\partial A_i-\text{small}} 1_{\{\theta_{t\Delta_0}x\in A_i\}} , \end{split} \end{equation*} where $\Delta_0\df2^{k_0 -n}$ is the step size on the embedded torus $\sTor{n-k_0}$, and $\theta_{\bullet}$ is the shift on this torus. Consequently, \begin{equation} \label{lsplit} \Joint_N \left(\sum_{\partial A_i-\text{small}}|A_i | >\delta\right)~\leq~\max_{t\in [0,...,2^l)^d} \Joint_N \left(\sum_{x\in\sTor{n-k_0-l}} \sum_{\partial A_i-\text{small}} 1_{\{\theta_{t\Delta_0}x\in A_i\}} > \delta\left(\frac{N}{2^{k_0 +l}} \right)^d\right) . \end{equation} If, however, $2^l > K(d)\log N$, then no two distinct points on the torus $\sTor{n-k_0 -l}$ (or any shift of it) can belong to the same small component $A_i$. This, in view of the domination by independent Bernoulli site percolation (Assumption~A), suggests an application of the B-K inequality. Since, by the choice of the scale $k_0$ in \eqref{knull}, $$ \epsilon_{k_0}~\stackrel{\gD}{=} ~{\ensuremath{\mathbb P}} _{\text{perc}}^{\rho_{k_0}}\left(\exists~\text{a closed surface of zero $u_{k_0}$-blocks around $x$}\right)~< ~\delta , $$ for every $x\in\sTor{n-k_0}$, we readily obtain that the right-hand side of \eqref{lsplit} is bounded above by $$ c_4 (d)\text{exp}\left\{ -\delta\left(\frac{N}{2^{k_0 +l}}\right)^d \log\left(\frac{\delta}{\epsilon_{k_0}}\right)\right\} . $$ The proof of Theorem \ref{thm Compactness} is concluded.
\qed \section{Appendix B : Proof of the three-point lower bound Lemma~\ref{triple}} \label{appendix B} The proof of Lemma~\ref{triple} is based on the following positive stiffness property of the surface tension \cite{AkutsuAkutsu86}: \begin{equation} \label{B_stiff} \min_{\theta\in[0,2\pi]}\left\{ \frac{{\rm d}^2}{{\rm d}\theta^2} \tau_{\gb} \left(\vec{n}(\theta )\right)~+~\tau_{\gb} \left(\vec{n}(\theta )\right) \right\}\ =\ \min_{\theta\in[0,2\pi]}R_{\beta}\left( \vec{n}(\theta )\right)\ >\ 0, \end{equation} where the unit normal $\vec{n}(\theta )$ is defined via $\vec{n}(\theta ) = (\cos\theta ,\sin \theta )$, and $R_{\beta}\left( \vec{n}\right)$ is the radius of curvature of $\partial \ensuremath{\mathcal K} $ at the point supporting the tangent line orthogonal to $\vec{n}$. An integral version of \eqref{B_stiff} is the strong triangle inequality \cite{I1,Velenik97}: for any $u,v\in{\ensuremath{\mathbb R}} ^2$, \begin{equation} \label{B_strong} \tau_{\gb}\left( u\right)+ \tau_{\gb}\left( v\right)- \tau_{\gb}\left( u+v\right)~\geq ~ c_1(\beta )\left( \| u\|_2+ \| v\|_2 -\| u +v\|_2\right) . \end{equation} The latter inequality is used to control the fluctuations of the microscopic phase boundaries (in their random line representation of Section \ref{dima_skeletons}). Let now an $(s,\gep )$-compatible triple of points $(u,w,v)$ be given. Fix $K =K(\beta )$ large enough and define the ``oval'' neighborhood ${\bf N}_K (u,w )$ of $\{u,w\}$ as: $$ {\bf N}_K (u,w ) ~\stackrel{\gD}{=} ~\left\{ z\in{\ensuremath{\mathbb R}} ^2 :\ \tau_{\gb}\left(z- u\right)+ \tau_{\gb}\left( w-z\right)- \tau_{\gb}\left( w-u\right)\leq K\log s\right\} . $$ The oval neighborhood ${\bf N}_K (w,v )$ is defined exactly in the same fashion.
Relations \eqref{3.4.qasub} and \eqref{3.4.OZ} readily imply that the main contribution to $\langle\sigma_u\sigma_w\rangle_{f}^{\beta^*}$ (respectively to $\langle\sigma_w\sigma_v\rangle_{f}^{\beta^*}$) comes from the paths $\lambda_1$ (respectively $\lambda_2$) which stay in ${\bf N}_K (u,w )$ (respectively ${\bf N}_K (w,v )$). More precisely, \begin{equation}\label{eq_concentration} \sumtwo{\lambda_1 :u\to w}{\lambda\in{\bf N}_K (u,w )} q^{\beta^*}\left( \lambda_1\right)\geq \langle\sigma_u\sigma_w\rangle_{f}^{\beta^*}\left( 1+{\rm \small{o}}(1)\right) , \end{equation} uniformly in all $(s,\gep )$-compatible triples. Any such path $\lambda_1 =\left(\lambda_1 (0), ..., \lambda_1 (n_1)\right)$ can be decomposed as follows: Define $$ n_w~=~\max\left\{ k: \lambda_1 (k) \in {\bf N}_K (u,w )\setminus {\bf N}_K (w,v ) \right\} , $$ and set $\lambda_1^u = \left(\lambda_1 (0) ,...,\lambda_1 (n_w )\right)$, $\lambda_1^w = \left(\lambda_1 (n_w +1) ,...,\lambda_1 (n_1)\right)$; $\lambda_1 =\lambda_1^u\vee\lambda_1^w$. The decomposition $\lambda_2 =\lambda_2^v\vee\lambda_2^w$ is defined in a completely symmetric way. Notice that, by the construction, the paths $\lambda_1^u$ and $\lambda_2^v$ are disjoint and compatible, and, by \eqref{B_strong}, $$ \max\left\{ \| \lambda_1 (n_w )-w\|_2 , \| \lambda_2 (n_w )-w\|_2\right\}~\leq ~c_2 (\gep )\log s . $$ The claim of the lemma now follows from \eqref{3.4.qasuper} and \eqref{3.4.expbouns}.\qed \part{Boundary effects}\label{part_boundary} \setcounter{section}{0} In the previous parts, we explained how the thermodynamical variational problem describing the macroscopic geometry of coexisting phases can be derived in various lattice models of statistical physics. To simplify the analysis, we restricted our attention to periodic boundary conditions or to systems contained in a Wulff-shaped box, thus avoiding a discussion of the effect of a confining geometry on the behavior of the system.
In this part, we would like to explain what happens when we take such effects into account. Boundary conditions play a particularly important role in the kind of problems presented in this review, since these problems concern the asymptotic behavior of large but finite systems, and therefore the boundary cannot simply be ``sent to infinity'' as is usually done. We will see that taking care of boundary effects not only provides a complete description of the geometry of these constrained systems, thus allowing a rigorous description of the interaction between an equilibrium crystal and a substrate, but also makes it possible to study the effect of so-called {\em boundary phase transitions}. For simplicity, we only discuss the case of the Ising model with nearest-neighbor interaction. \section{Wall free energy} \setcounter{equation}{0} The vessel containing the system has not only the property of confining it, but can also act in an asymmetric way on the various phases inside, favoring some of them; indeed, this is typically what happens in real systems. In fact, this is precisely the reason one introduces boundary conditions in the first place: to impose the equilibrium phase realized by the system. It appears to be convenient to have a parameter allowing a fine-tuning of the asymmetry, interpolating between pure $+$ or $-$ boundary conditions. Let us now describe how this is done. \bigskip Let $\gS = \setof{i\in{\ensuremath{\mathbb Z}} ^d}{i(d) = 0}$ and ${\bbL^d} = \setof{i\in{\ensuremath{\mathbb Z}} ^d}{i(d)\geq 0}$. The vessel of our system is the box \begin{equation*} {\Boxx{N,M}} = \setof{i\in{\bbL^d}}{-N\leq i(n) \leq N,\, n=1,\dots,d-1,\, 0\leq i(d) \leq M}\,, \end{equation*} and the {\em wall} is ${\theplane_N} = {\Boxx{N,M}}\cap\gS$.
Let $\eta\in{\ensuremath{\mathbb R}} $; we consider the following Hamiltonian, \begin{equation*} \Ham_{\Boxx{N,M}}^\eta(\sigma) = - \sumtwo {\nnb ij\subset{\bbL^d}} {\nnb ij\cap{\Boxx{N,M}}\neq\eset} \sigma_i\sigma_j - \eta\sum_{i\in{\theplane_N}} \sigma_i\,. \end{equation*} Let $\overline\sigma\in\{-1,1\}^{\bbL^d}$; the Gibbs measure in ${\Boxx{N,M}}$ with boundary condition $\overline\sigma$ is the following probability measure on $\{-1,1\}^{\bbL^d}$~\footnote{\label{footnote}Note that we could equivalently consider $\Isbd{\overline\sigma}{{\Boxx{N,M}}}$ as a probability measure on $\{-1,1\}^{{\ensuremath{\mathbb Z}} ^d}$ by extending the b.c. $\overline\sigma$ by $\overline\sigma_i=1$ for all $i\in{\ensuremath{\mathbb Z}} ^d\setminus{\bbL^d}$; it is then possible to replace the boundary magnetic field $\eta$ by a coupling constant: $\eta\sum_{i\in{\theplane_N}} \sigma_i = \eta\sum_{\nnb ij:\,i\in{\theplane_N},\,j\not\in{\bbL^d}} \sigma_i\sigma_j$. This will be used when dealing with negative boundary field, see Subsection~\ref{ssec_2D}.}, \begin{equation*} \Isbd{\overline\sigma}{{\Boxx{N,M}}}(\sigma) = \begin{cases} (\PFbd{\overline\sigma}{{\Boxx{N,M}}})^{-1}\exp[-\beta\,\Ham_{\Boxx{N,M}}^\eta(\sigma)] & \text{if $\sigma_i=\overline\sigma_i$, $\forall i\not\in{\Boxx{N,M}}$,}\\ 0 & \text{otherwise.} \end{cases} \end{equation*} We'll usually use the short-hand notations $\Isbd{\overline\sigma}{N,M}$, $\PFbd{\overline\sigma}{N,M}$, .... As usual, we write $+$ for $\overline\sigma\equiv 1$ and $-$ for $\overline\sigma\equiv -1$. We therefore distinguish one of the sides of the box ${\Boxx{N,M}}$, ${\theplane_N}$, which we call the ``wall''. Notice that instead of usual boundary conditions, a {\em boundary magnetic field} $\eta$ is acting on ${\theplane_N}$; since setting $\eta=1$ produces $+$ b.c. on the wall, while setting $\eta=-1$ results in $-$ b.c., this provides the promised interpolation parameter. 
Of course, we could also consider more complicated situations, where (possibly inhomogeneous) boundary magnetic fields act on the whole boundary of the box. However, for simplicity, we restrict our attention to this particular case, which will turn out to be general enough that the basic phenomena induced by the use of boundary fields can already be analyzed. \medskip To quantify the preference of the wall toward one of the phases, it is convenient to introduce a new thermodynamic quantity, the {\em wall free energy}, \begin{equation}\label{eq_taubd} \tau_{\scriptscriptstyle\rm bd}(\beta,\eta) \stackrel{\gD}{=} \limtwo{N\rightarrow\infty}{M\rightarrow\infty} \frac 1{\abs{{\theplane_N}}}\log \frac{\PFbd{+}{N,M}}{\PFbd{-}{N,M}}\,. \end{equation} The existence of this quantity, and the remarkable fact that the two limits can be taken in any order, has been established in \cite{FroehlichPfister87a}; the proof relies on the simple identity \begin{equation}\label{eq_taubd2} \tau_{\scriptscriptstyle\rm bd}(\beta,\eta) = \limtwo{N\rightarrow\infty}{M\rightarrow\infty} \beta \int_{-\eta}^\eta \frac 1{\abs{{\theplane_N}}} \sum_{i\in{\theplane_N}} \Ebdf{+}{N,M}{\eta'}{\sigma_i}\,\dd \eta'\,. \end{equation} We'll return to this formula in the next section. 
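Heuristically, \eqref{eq_taubd2} can be obtained as follows (we only sketch the finite-volume computation; the interchange of the limits with the integration is the delicate point treated in \cite{FroehlichPfister87a}). Since the boundary field enters the Hamiltonian only through the term $-\eta\sum_{i\in{\theplane_N}}\sigma_i$, differentiation of the partition function gives \begin{equation*} \frac{\partial}{\partial\eta'}\log\PFbdf{+}{N,M}{\eta'} = \beta\sum_{i\in{\theplane_N}} \Ebdf{+}{N,M}{\eta'}{\sigma_i}\,, \end{equation*} while flipping all the spins shows that $\PFbdf{-}{N,M}{\eta} = \PFbdf{+}{N,M}{-\eta}$. Therefore \begin{equation*} \log\frac{\PFbd{+}{N,M}}{\PFbd{-}{N,M}} = \log\PFbdf{+}{N,M}{\eta} - \log\PFbdf{+}{N,M}{-\eta} = \beta\int_{-\eta}^{\eta} \sum_{i\in{\theplane_N}} \Ebdf{+}{N,M}{\eta'}{\sigma_i}\,\dd\eta'\,, \end{equation*} and dividing by $\abs{{\theplane_N}}$ and letting $N,M\rightarrow\infty$ yields \eqref{eq_taubd2}.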
The heuristics behind the definition of $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$ is that the free energy $\Fbd{+(-)}{N,M} = -\log \PFbd{+(-)}{N,M}$ of the $+$ ($-$) phase can be decomposed in the following way: \begin{align*} \Fbd{+}{N,M} &= f_{\scriptscriptstyle\rm b}(\beta) \,\abs{{\Boxx{N,M}}} + f^+_{\scriptscriptstyle\rm s}(\beta) \,\abs{\partial{\Boxx{N,M}}\setminus{\theplane_N}} + f^+_{\scriptscriptstyle\rm w}(\beta,\eta)\,\abs{{\theplane_N}} + o(\abs{\partial{\Boxx{N,M}}},\abs{{\theplane_N}})\,,\nonumber\\ \Fbd{-}{N,M} &= f_{\scriptscriptstyle\rm b}(\beta) \,\abs{{\Boxx{N,M}}} + f^-_{\scriptscriptstyle\rm s}(\beta) \,\abs{\partial{\Boxx{N,M}}\setminus{\theplane_N}} + f^-_{\scriptscriptstyle\rm w}(\beta,\eta)\,\abs{{\theplane_N}} + o(\abs{\partial{\Boxx{N,M}}},\abs{{\theplane_N}})\,, \end{align*} where \begin{align*} f_{\scriptscriptstyle\rm b}(\beta) &\stackrel{\gD}{=} -\lim_{N,M\rightarrow\infty} \abs{{\Boxx{N,M}}}^{-1}\, \log\PFbd{\overline\sigma}{N,M}\,, \\ f^+_{\scriptscriptstyle\rm s}(\beta)&\stackrel{\gD}{=} -\lim_{N,M\rightarrow\infty} \abs{\partial{\Boxx{N,M}}}^{-1}\, \bigl(\log \PFbdf{+}{N,M}{1} - f_{\scriptscriptstyle\rm b}(\beta) \abs{{\Boxx{N,M}}}\bigr)\,, \\ f^+_{\scriptscriptstyle\rm w}(\beta,\eta) &\stackrel{\gD}{=} -\lim_{N,M\rightarrow\infty} \abs{{\theplane_N}}^{-1}\, \bigl(\log\PFbd{+}{{\Boxx{N,M}}} - f_{\scriptscriptstyle\rm b}(\beta) \abs{{\Boxx{N,M}}}-f^+_{\scriptscriptstyle\rm s} (\beta) \abs{\partial{\Boxx{N,M}}\setminus{\theplane_N}}\bigr)\,, \end{align*} (and similarly for $f^-_{\scriptscriptstyle\rm s}(\beta)$ and $f^-_{\scriptscriptstyle\rm w}(\beta,\eta)$). As the notations suggest, $f_{\scriptscriptstyle\rm b}(\beta)$ is independent of $\eta$ and $\overline\sigma$, $f^+_{\scriptscriptstyle\rm s}(\beta)$ is independent of $\eta$ and by symmetry $f^+_{\scriptscriptstyle\rm s}(\beta) = f^-_{\scriptscriptstyle\rm s}(\beta)$.
Therefore, we see that $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta) = \limtwo{N\rightarrow\infty}{M\rightarrow\infty} \frac 1{\abs{{\theplane_N}}}\, (\Fbd{-}{N,M}-\Fbd{+}{N,M}) = f^-_{\scriptscriptstyle\rm w}(\beta,\eta) - f^+_{\scriptscriptstyle\rm w}(\beta,\eta)$ is nothing else than the leading order term of the difference in free energy between the two phases in the presence of the wall. The ultimate justification of \eqref{eq_taubd} however is that this quantity plays exactly the role of its thermodynamical analogue in the variational problem describing the macroscopic geometry of phase coexistence, see Theorems \ref{thm_sessile2D} and \ref{thm_sessile3D} below. The following Theorem states basic properties of $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$; since $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$ is obviously odd in $\eta$, we just state them for $\eta\geq 0$ (also $\tau_{\scriptscriptstyle\rm bd}(\beta,0)=0$). \begin{thm}\label{thm_taubdprop} \cite{FroehlichPfister87b} Let $\tau^*_\beta=\tau_\beta(\vec{e}_d)$ and suppose $\eta\geq 0$. Then \begin{itemize} \item $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$ is a non-negative, increasing function of $\beta$ and $\eta$, concave in $\eta$; moreover, if $\eta>0$, $$ \tau_{\scriptscriptstyle\rm bd}(\beta,\eta) >0 \Leftrightarrow \beta > {\gb_{\rm\scriptscriptstyle c}}\,. $$ \item For all $\beta$ and $\eta$, $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\leq \tau^*_\beta$. \item For all $\beta>{\gb_{\rm\scriptscriptstyle c}}$, there exists $1\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)>0$ such that $$ \tau_{\scriptscriptstyle\rm bd}(\beta,\eta)<\tau^*_\beta \Leftrightarrow \eta<{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,. $$ \end{itemize} \end{thm} In the case of the 2D Ising model, ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$ can be computed explicitly, see \cite{Abraham80, McCoyWu73} and Fig.~\ref{fig_droplets}. 
The following terminology is standard\footnote{This terminology only makes sense once we have chosen one of the equilibrium phases as the reference; here it is the $-$ phase.}: when $\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, we say that the system is in the {\em complete drying} regime; when $\abs{\eta}<{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, it is in the {\em partial wetting} regime; and when $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, it is in the {\em complete wetting} regime. The reason for this terminology should become clear later. \section{Surface phase transition} \setcounter{equation}{0} In this section, we will see that the boundary magnetic field can trigger {\em surface phase transitions}: The behavior of the system in the vicinity of the wall depends dramatically on $\abs\eta$ being greater or smaller than ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$. A more detailed discussion of these issues can be found in \cite{PfisterVelenik96}. \bigskip The state of the system in the middle of a big box ${\Boxx{N,M}}$ is entirely determined by the boundary conditions, and is independent of the value of the boundary field, so that the usual (infinite volume) Gibbs state simply doesn't provide any information on the behavior of the system close to the wall. To analyze the behavior of the system ``in the vicinity'' of the wall, it is therefore useful to introduce the notion of {\em surface Gibbs states}; these differ from the Gibbs states usually considered in these models in that one does not work with a sequence of boxes converging to ${\ensuremath{\mathbb Z}} ^d$, but instead with one converging only to the half-space ${\bbL^d}$. More precisely, the surface Gibbs states are the weak limits of the measures $\Isbd{\overline\sigma}{N,M}$ when $N,M\rightarrow\infty$ (observe that ${\Boxx{N,M}}\nearrow{\bbL^d}$).
Two of them are of particular importance for our discussion, $\Isbd{+}{\halfspace}$ and $\Isbd{-}{\halfspace}$, obtained respectively by taking weak limits of the measures with $+$ and $-$ boundary conditions. It is not difficult to show \cite{FroehlichPfister87a} that these two measures exist, are extremal, and are invariant under translations parallel to the wall; moreover, there is uniqueness of the surface Gibbs state if and only if $\Isbd{+}{\halfspace}=\Isbd{-}{\halfspace}$. There is a close relation between $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$ and the behavior of the system near the wall; this can be most easily seen from the following identity, consequence of \eqref{eq_taubd2} and symmetry \cite{FroehlichPfister87a}, \begin{equation}\label{eq_taubd3} \tau_{\scriptscriptstyle\rm bd}(\beta,\eta) = \int_{-\eta}^\eta \Ebdf{+}{{\bbL^d}}{\eta'}{\sigma_0}\;\dd \eta' = \int_0^\eta \bigl( \Ebdf{+}{{\bbL^d}}{\eta'}{\sigma_0} - \Ebdf{-}{{\bbL^d}}{\eta'}{\sigma_0} \bigr)\,\dd \eta'\,. \end{equation} Using \eqref{eq_taubd3}, it is possible to prove the following Theorem showing that a surface phase transition occurs at $\eta={\bdf_{\rm\scriptscriptstyle w}}(\beta)$; this is the so-called {\em wetting transition}. \begin{thm}\label{thm_wettingtransition} \cite{FroehlichPfister87b} There is a unique surface Gibbs state if and only if $\abs\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)$. \end{thm} Let us briefly discuss the heuristics behind this result. The $+$ and $-$ boundary conditions fix the phase present in the bulk (i.e. in the middle of a big box ${\Boxx{N,M}}$). However, Theorem \ref{thm_wettingtransition} shows that when $\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, the surface Gibbs state is unique, and therefore the state of the system near the wall is {\em independent} of the boundary conditions, i.e. of the phase present in the bulk. The mechanism responsible for this is the following. 
Suppose that $\eta<0$ and consider $+$-boundary conditions; then it is natural to regard the boundary field as a negative b.c., and therefore to introduce an open contour with boundary $\partial{\theplane_N}$ separating the $-$ phase favored by the wall from the $+$ phase present in the bulk (see Section \ref{sec_tools} for more details). As long as $\eta>-1$, there is a competition between two effects: On the one hand it is energetically favorable for the open contour to follow the wall, on the other hand this would lead to a loss in entropy, since there is less room for fluctuations. When $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, the entropy wins: The contour is repelled away from the wall, at a distance diverging with the size of the box; this is the phenomenon of {\em entropic repulsion}. The surface Gibbs state then describes the behavior of the system below this surface, i.e. a mesoscopic film of $-$ phase along the bottom wall. The fact that the contour is sent away from the wall explains why we recover the surface tension, $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)=\tau^*_\beta$. When $\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$ energy wins, and this modifies completely the behavior of the microscopic surface: it sticks to the wall, making only small excursions away from it; in this case, the phase in the bulk can reach the wall and the surface Gibbs state depends on the choice of boundary conditions. Part of these heuristics can be made quite precise in the 2D case. Consider $+$ boundary conditions. When $0>\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, one can prove that the probability that a connected piece $I$ of the wall is not touched by the open contour is bounded above by $K\exp[-(\tau^*_\beta-\tau_{\scriptscriptstyle\rm bd}(\beta,\eta))\,\abs{I}]$, showing that the phase separation line really sticks to the wall \cite{PfisterVelenik97}. 
The information available when $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$ is much less precise; the magnetization profile computed in \cite{Abraham80} shows that there is a film of width of order $\sqrt N$ along the wall. A related, much more precise result, which holds at sufficiently low temperature and for $\eta=-1$, is that the phase separation line, once suitably rescaled, converges weakly to the Brownian excursion \cite{Dobrushin93}; this should be true for any $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$. In higher dimensions, much less is known. When $\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, one can show that the probability that the open contour touches the middle of the wall is bounded away from $0$ uniformly in the size of the box \cite{FroehlichPfister87b}. When $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, very little is known, except in the simpler case of SOS models. Also, while it is known in dimension 2 that ${\bdf_{\rm\scriptscriptstyle w}}(\beta)<1$ (the exact expression for ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$ has been computed \cite{Abraham80}), this remains an open problem in higher dimensions. \medskip Theorem \ref{thm_wettingtransition} gives a first explanation of the terminology introduced above: when the system is in the complete drying regime, the equilibrium phase along the wall is the $+$ phase, whatever the phase in the bulk is; when there is complete wetting, it is the $-$ phase; only in the regime of partial wetting can both phases be present near the wall. The fact that the phase transition is determined by ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$ (i.e. the characterization of the partial wetting regime by $\tau^*_\beta>\abs{\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)}$) is known as {\em Cahn's criterion}.
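It is instructive to note how the saturation of the wall free energy follows, at least heuristically, from \eqref{eq_taubd3} and Theorem \ref{thm_wettingtransition}; the following is only a sketch. The integrand in the second integral of \eqref{eq_taubd3} vanishes as soon as the surface Gibbs state at boundary field $\eta'$ is unique, i.e. for $\eta'\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)$; consequently $\eta\mapsto\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$ is constant on $[{\bdf_{\rm\scriptscriptstyle w}}(\beta),\infty)$, and, using the same symmetry as in \eqref{eq_taubd3},
\begin{equation*}
\abs{\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)} = \abs{\tau_{\scriptscriptstyle\rm bd}(\beta,{\bdf_{\rm\scriptscriptstyle w}}(\beta))} = \tau^*_\beta \qquad\text{whenever } \abs\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,,
\end{equation*}
the common value $\tau^*_\beta$ being the content of the complete wetting and drying discussion above. In this form, Cahn's criterion simply states that the wetting transition takes place exactly when the wall free energy saturates at the surface tension.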
\section{Derivation of the Winterbottom construction} \setcounter{equation}{0} In this section, we show how the Winterbottom construction, describing the equilibrium shape of a crystal in the presence of an attractive substrate, can be recovered from a microscopic theory. To do this, we consider the measure $\Isbd{+}{N,rN}$, for some $r\in{\ensuremath{\mathbb R}} $, conditioned with some canonical constraint (exact or approximate, see below). Of course, the situation here is more complicated than the one described in the introduction, since instead of an infinite wall, the system is contained in a finite vessel. This makes the problem more difficult: When the solution of the Winterbottom variational problem does not fit inside the box ${\uBox_r} \stackrel{\gD}{=} \setof{x\in{\ensuremath{\mathbb R}} ^d}{\abs{x(n)}\leq 1,\, n=1,\dots, d-1,\, 0\leq x(d) \leq r}$, the solution of the constrained problem will differ from the Winterbottom shape. In fact, the general solution of the constrained problem is not known. As we state them below, however, the derivations of this variational problem from statistical mechanics still apply when the solution is not known. \begin{figure}[t] \centerline{\psfig{figure=phdiag.ps,height=40mm}\hspace{.8cm} \psfig{figure=droplets.ps,height=40mm}} \figtext{ \writefig 1.40 4.17 {\footnotesize a} \writefig 3.40 4.17 {\footnotesize b} \writefig 5.40 4.17 {\footnotesize c} \writefig 1.40 2.17 {\footnotesize d} \writefig 3.40 2.17 {\footnotesize e} \writefig 5.40 2.17 {\footnotesize f} \writefig -3.80 4.20 {\footnotesize $T$} \writefig -3.85 3.50 {\footnotesize $T_c$} \writefig 0.00 0.20 {\footnotesize $\eta$} \writefig -1.04 0.20 {\footnotesize $1$} \writefig -6.27 0.20 {\footnotesize $-1$} \writefig -4.78 1.85 {\footnotesize (non-uniqueness of} \writefig -4.80 1.50 {\footnotesize surface Gibbs state)} \writefig -4.50 2.20 {\footnotesize Partial wetting} } \caption{The case of the 2D Ising model.
Left: The phase diagram; the region of non-uniqueness of the surface Gibbs state is shaded. In the other region, there is a single surface Gibbs state. Right: A sequence of equilibrium shapes.} \label{fig_droplets} \end{figure} Before stating the main Theorems of this Part, we briefly describe how the wetting transition manifests itself in the macroscopic geometry of phase separation. To do this, let $\beta>{\gb_{\rm\scriptscriptstyle c}}$ be fixed, and choose a value $m$ for the canonical constraint so that the corresponding Wulff shape is small enough to be placed inside the box ${\uBox_r}$. If $\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, then $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)~=~\tau^*_\beta$, and the typical configurations will consist of a macroscopic droplet of $-$ phase, with Wulff shape, immersed in a background of $+$ phase; in particular, the shape of the droplet is independent of the value of the boundary field (Fig.~\ref{fig_droplets} a). This behavior persists up to the value $\eta={\bdf_{\rm\scriptscriptstyle w}}(\beta)$. Notice that as soon as $\eta<1$, it becomes {\em energetically} more favorable for the droplet to touch the wall. In dimension 2, however, since ${\bdf_{\rm\scriptscriptstyle w}}(\beta)<1$, the droplet stays away from the wall, because entropy loss is not compensated by energy gain until $\eta$ reaches the value ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$. It is an interesting open problem to decide whether ${\bdf_{\rm\scriptscriptstyle w}}(\beta)=1$ for $d>2$. When $\eta<{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, the typical configurations consist of a macroscopic droplet, with Winterbottom shape, tied to the wall. The shape of the droplet now depends on the value of $\eta$, and decreasing the boundary field amounts to letting the droplet spread more and more (Fig.~\ref{fig_droplets} b--e). For some value $\widetilde\eta$, the droplet covers for the first time the entire wall (Fig.~\ref{fig_droplets} e). 
From this point on, the shape of the droplet is left unchanged when $\eta$ is decreased (Fig.~\ref{fig_droplets} f; the dashed line represents part of a possible ``true'' equilibrium shape for the unconstrained problem). From this discussion, we see that the wetting transition at ${\bdf_{\rm\scriptscriptstyle w}}(\beta)$ has a macroscopic manifestation in the canonical ensemble. Because of the confined geometry, however, the second transition, at $\eta=-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, cannot be seen. To be able to detect it, one has to consider mesoscopic droplets (in the form of large moderate deviations, see the remark after Theorem~\ref{thm_sessile2D}). \medskip This also explains the terminology introduced previously: In the complete drying regime, the droplet stays away from the wall, and so the wall is completely dry w.r.t. the $-$ phase; in the partial wetting regime, the droplet touches the wall, and both the $+$ and $-$ phases are in contact with it (provided $\eta<\widetilde\eta$). The complete wetting regime cannot be distinguished from the partial wetting regime in this setting, but see the remark after Theorem~\ref{thm_sessile2D} for a discussion of this issue. \subsection{2D Ising model} Let $r\in{\ensuremath{\mathbb R}} $. The aim of this subsection is to describe the typical configurations under the measure \begin{equation*} \Isbd{+}{N,rN}\bigl(\,\cdot\,\big|\,M_N = m\,\abs{{\Boxx{N,rN}}}\bigr)\,, \end{equation*} where $m\in (-m^*, m^*)$ and $M_N = \sum_{i\in{\Boxx{N,rN}}}\sigma_i$; we further simplify the notation by writing simply $\Isbd{+}{N}$ ($r$ being kept fixed). As in Part~\ref{part_strongWulff}, it is possible to obtain precise asymptotics for the large deviations, in the form of the following generalization of the first part of Theorem~\ref{thm DKS}.
Let $\cW^\star_{\gb,\bdf}(m)$ be the infimum of the functional $\cW_{\gb,\bdf}$ on subsets of ${\widehat\bbD^2_r}$ with volume $\frac{m^* - m}{2m^* }\abs{{\widehat\bbD^2_r}}$. \begin{thm} \label{thm_largedevbd} Let the inverse temperature $\beta >\beta_c$ and the boundary magnetic field $\eta\in{\ensuremath{\mathbb R}} $ be fixed; let the sequence $\{a_N\}$, with $-m^*\abs{{\Boxx{N,rN}}}+a_N\in\text{\rm Range}(M_N)$, be such that the limit $$ a~=~\lim_{N\to\infty}\frac{a_N}{\abs{{\Boxx{N,rN}}}}~\in~(0,2m^*(\beta)) $$ exists. Then, $$ \log\Isbd{+}{N}\bigl(M_N = m^* \abs{{\Boxx{N,rN}}} - a_N \bigr) = -\cW^\star_{\gb,\bdf}\,N\, (1 + O(N^{-1/2}\log N)). $$ \end{thm} A version of this Theorem, in an approximate canonical ensemble (as in~\eqref{eq_approx_can}), has been proven in~\cite{PfisterVelenik97}; this stronger version can be obtained by combining the techniques of~\cite{PfisterVelenik97} and of~\cite{IS}, see Section~\ref{sec_tools}. In Theorem~\ref{thm_largedevbd}, we have made no statement about the asymptotic description of the typical configurations under the conditioned measure. The reason is the following: These strong concentration results require the knowledge of stability properties of the variational problem in the form, for example, of the Bonnesen inequality. However, in the present case, one does not always have that much information about the variational problem; in fact, even its solution is not always known. This prevents us from translating the energy estimates on the skeletons (see~\eqref{eq_energybd}, \eqref{eq_energybd2} and \eqref{eq_energybd3}) into strong concentration properties of the microscopic contours. Of course, in the situations when such stability properties are known (\cite{KoteckyPfister94} contains a simple derivation of such a result for many situations), it is possible to obtain statements of the same kind as those of Part~\ref{part_strongWulff}.
This illustrates the fact that although the probabilistic theory in the 2D case is complete, in the sense that all the relevant information on the {\em microscopic scale} is available, the sharpness of the statements one can make on the macroscopic scale still depends on {\em macroscopic} stability properties, which are logically separate from the probabilistic aspect of the analysis. However, even without information about the stability properties of the variational problem, it is still possible to derive weak concentration properties, in a ${\ensuremath{\mathbb L}} _1$ setting close to that of Part~\ref{part_weakWulff}. We present such a result as it is stated in~\cite{PfisterVelenik97}. In this paper, an approximate canonical ensemble was considered, i.e. the measure was $\Isbd{+}{N}(\,\cdot\,|\,\ensuremath{\mathcal A}(m;c))$, where \begin{equation} \label{eq_approx_can} \ensuremath{\mathcal A}(m;c) = \bigsetof{\sigma}{\bigabs{\abs{{\Boxx{N,rN}}}^{-1}M_N(\sigma) - m} \leq N^{-c}}\,, \end{equation} with $-m^* < m < m^*$, and $c$ some real number which is not too large (see Theorem \ref{thm_sessile2D} below). We are going to prove that the phases concentrate near macroscopic droplets belonging to the set $\ensuremath{\mathcal D}(m)$, \begin{equation*} \ensuremath{\mathcal D}(m)=\bigsetof{ V\subset {\widehat\bbD^2_r}}{\abs{ V}=\frac{m^* - m}{2m^* }\abs{{\widehat\bbD^2_r}}\,,\, \cW_{\gb,\bdf}(\partial V)=\cW^\star_{\gb,\bdf}(m)}\,. \end{equation*} Recall that to each $V\in\ensuremath{\mathcal D}(m)$, we associate the function $\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_V = 1_{V^c} - 1_V$.\\ To state this phase segregation Theorem, we use the mesoscopic notation introduced in Part~\ref{part_weakWulff}. Recall that $N = 2^n$.
For any $a < 1$, we define a magnetization profile $\ensuremath{\mathcal M}_{[an]} (\sigma,x)$ at the $2^{[an]}$-scale which is piecewise constant on boxes $\sBox{n - [an]}(x) $ with $x \in \sTor{n-[an]}$, \begin{equation} \ensuremath{\mathcal M}_{[an]}(\sigma,x) = 2^{- d [an]} \sum_{i \in \dBox{[an]}(2^n x)} \sigma_i \, . \end{equation} We then have the following. \begin{thm}\label{thm_sessile2D}\cite{PfisterVelenik97} Let $\beta>{\gb_{\rm\scriptscriptstyle c}}$, $\eta\in{\ensuremath{\mathbb R}} $, $-m^*<m<m^*$ and $1/4>c>0$. Then there exist a function $\delta(N)$ such that $\lim_{N\rightarrow\infty}\delta(N)=0$, a real number $\kappa>0$ and a coarse-graining parameter $1>a>0$ such that for $N$ large enough \begin{equation*} \Isbd{-}{N}\bigl(\frac{\ensuremath{\mathcal M}_{[an]}}{m^*}\in\union_{V\in\ensuremath{\mathcal D}(m)} \ensuremath{\mathcal V}(\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_V, \delta(N)) \, \big| \ \ensuremath{\mathcal A}(m;c)\,\bigr)\geq 1-\exp\{-O(N^{\kappa})\}\,. \end{equation*} \end{thm} \bigskip \noindent {\bf Remark:} In this case, it should also be possible to study the whole range of moderate deviations, combining the techniques of \cite{IS} and \cite{PfisterVelenik97}, although this has not been done explicitly. We briefly describe the results obtained for large deviations sufficiently close to volume order \cite{Velenik97}. As long as $\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, the results are similar to those obtained in the setting of Part~\ref{part_strongWulff}: The measure concentrates on configurations containing a single large droplet of $-$ phase, with Wulff or Winterbottom shape depending on $\eta$; in particular, the order of the large moderate deviations is still $\exp\{-O(\sqrt{a_N})\}$. There should not be any problem in extending this to the whole large deviations regime ($a_N\gg N^{4/3}$). More interesting is the case $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$.
For those values of the boundary field, the system is in the complete wetting regime ($\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)=-\tau^*_\beta$), and the solution of the unconstrained variational problem is degenerate. The solution of the constrained variational problem in ${\widehat\bbD^2_r}$ is, however, still well-defined for every $N$; it is obtained by extracting the cap of a Wulff shape and rescaling it so that the basis of the cap completely covers the wall and the rescaled cap has the required volume. When $N$ goes to infinity, this droplet spreads out to become a thin film (covering the entire wall, hence the terminology complete wetting), and the corresponding value of the surface free energy functional goes to zero. As a result, the scale of the large moderate deviations {\em is not the same as when $\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$}; indeed, the leading term of the asymptotics can again be computed explicitly, and is found to be of order $\exp\{-O((a_N)^2\,N^{-3})\}$. In particular, we see that the large moderate deviations cannot extend up to $a_N\sim N^{4/3}$, since $(a_N)^2\, N^{-3}$ is of order $1$ already when $a_N\sim N^{3/2}$. This should not be surprising since, in the complete wetting regime, the volume under the microscopic contour is expected to have typical fluctuations of order $N^{3/2}$ (this can be shown when $\eta=-1$ and $\beta$ is very large, using the convergence to the Brownian excursion stated in \cite{Dobrushin93}). Therefore, typical fluctuations of the magnetization in the complete wetting regime are not governed by bulk fluctuations anymore, but by fluctuations of the microscopic phase separation line.
To prove that this behavior is valid up to $a_N\sim N^{3/2}$ might be a non-trivial task.\qed \subsection{Ising model in $D\geq 3$} Let $r\in{\ensuremath{\mathbb R}} $ and let $\ensuremath{\mathcal D}(m)$ be the set of macroscopic droplets at equilibrium in ${\uBox_r}$, \begin{equation*} \ensuremath{\mathcal D}(m)=\bigsetof{ V\subset {\uBox_r}}{\abs{V}=\frac{m^*-m}{2m^*}\abs{{\uBox_r}}\,,\, \cW_{\gb,\bdf}(\partial V)=\cW^\star_{\gb,\bdf}(m)}\,. \end{equation*} The rest of the notation was introduced in Part \ref{part_weakWulff}. The main result is the following. \begin{thm}\cite{BodineauIoffeVelenik99} \label{thm_sessile3D} For any $\beta$ in $\ensuremath{\mathfrak B}_p$, any $\eta\in{\ensuremath{\mathbb R}} $, any $m$ in $(-m^*, m^* )$, the following holds: For any $\delta >0$, there is $k_0 = k_0 (\delta)$ such that for $\nu < \frac{1}{d}$ \begin{eqnarray*} \lim_{N \to \infty} \; \min_{k_0 \leq k \leq \nu n} \; \Isbd{+}{N} \Biggl( \frac{\ensuremath{\mathcal M}_k}{m^*} \in \union_{V\in\ensuremath{\mathcal D}(m)} \ensuremath{\mathcal V}(\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_V,\delta) \ \Big| \quad M_{N} \leq m\,\abs{{\Boxx{N,rN}}} \Biggr) = 1\,. \end{eqnarray*} \end{thm} \section{The tools} \label{sec_tools} \setcounter{equation}{0} In this Section, we explain how the procedures described in Parts \ref{part_weakWulff} and \ref{part_strongWulff} have to be modified to take into account the effect of the boundary. \subsection{2D Ising model} \label{ssec_2D} We describe the main modifications one needs to apply to the proofs of Part \ref{part_strongWulff} in order to get the results stated in Theorems~\ref{thm_largedevbd} and \ref{thm_sessile2D}. We split this Subsection into two parts, one dealing with the lower bound on $\Isbd{-}{N}(\ensuremath{\mathcal A}(m;c))$ or $\Isbd{-}{N}(M_N=-m^*\abs{{\Boxx{N,rN}}}+a_N)$, and the other with the upper bound.
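Before turning to the proofs, let us recall, schematically, the form of the surface free energy functional entering these variational problems; the formula below is only meant as a mnemonic, the precise definitions having been given in the previous Parts. For a droplet $V$ with rectifiable boundary inside the box, writing $W$ for the bottom wall and $n_s$ for the outward unit normal, one has
\begin{equation*}
\cW_{\gb,\bdf}(\partial V) = \int_{\partial V\setminus W} \tau_\beta(n_s)\,\dd s \;+\; \tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\,\abs{\partial V\cap W}\,,
\end{equation*}
and the constrained problem consists in minimizing $\cW_{\gb,\bdf}$ over all $V$ of prescribed volume inside the box. Since $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\leq\tau^*_\beta$, trading a piece of interface for a piece of wall can only lower the cost; this is the mechanism producing the Winterbottom shape.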
\bigskip\noindent \underline {The lower bound.} The constrained variational problem is more difficult than the usual one. In fact, as noted above, the solution (and {\it a fortiori} its stability) is not known in general, although it is in many cases. This prevents us from proceeding as in Part \ref{part_strongWulff}, where the lower bound follows from summing over large contours fluctuating around the Wulff shape. It would then appear necessary to give the same kind of proof, but for arbitrary configurations of droplets enclosing the right volume (all potential solutions of the variational problem). This, however, would be delicate; indeed, since we want our results to hold for large but {\em finite} boxes, it is essential to obtain estimates {\em uniform} over the droplets in the chosen set. Fortunately, properties of the surface tension and wall free energy allow us to restrict our analysis to a small class of well-behaved droplets: The solution of the variational problem is necessarily attained on a {\em single convex} droplet. This is a consequence of the convexity of $\tau_\beta$ (use Jensen's inequality) and the fact that $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\leq\tau_\beta^*$, which imply that replacing a droplet by its convex hull cannot increase the surface free energy; rescaling the resulting droplet decreases the energy even further. It is thus enough to prove the following \begin{pro}\cite{PfisterVelenik97} \label{pro_lowersessile2D} Let $\beta>{\gb_{\rm\scriptscriptstyle c}}$ and $\eta\in{\ensuremath{\mathbb R}} $. There exists $N_0=N_0(\beta,\eta,m,c,r)$ and a constant $C$ such that, for any simple closed rectifiable curve $\ensuremath{\mathcal C}$ which is the boundary of a convex body of volume $\abs{{\widehat\bbD^2_r}} (m^*(\beta)+m)/2m^*(\beta)$ contained in ${\widehat\bbD^2_r}$, and for all $N\geq N_0$, \begin{equation*} \Isbd{-}{N}(\ensuremath{\mathcal A}(m;c)) \geq \exp\{-\cW_{\gb,\bdf}(\ensuremath{\mathcal C})\; N - \beta\,C\,N^{1/2}\log N\}\,.
\end{equation*} A completely analogous statement holds in the case of the exact canonical ensemble. \end{pro} The proof of Proposition \ref{pro_lowersessile2D} is similar to the proof of Theorem \ref{lb}. We now sketch the main changes needed to deal with the boundary conditions. The case $\eta\leq 0$ requires a slightly more complicated proof than the case $\eta>0$, so we first consider the latter. \medskip\noindent {\it First case: $\eta>0$} \begin{figure}[t] \centerline{\psfig{file=pinning.ps,height=5cm}} \caption{When $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)<\tau^*_\beta$, the open contour connecting two sites close enough to the wall might not stay inside an elliptical set as in the bulk (dashed contour), but instead might get pinned by the wall (full contour). In such a case, the exponential decay-rate is in general not given by $\tau_\beta$ or $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$.} \label{fig_pinning} \end{figure} As in the usual case, we want to approximate $\ensuremath{\mathcal C}$ with some polygonal curve with vertices on the dual lattice, and then sum over all contours going through the latter; this would allow us to extract, for each piece of the contour, the surface tension of the corresponding part of the polygonal line. Here, however, we want to be able to extract the wall free energy when the curve $\ensuremath{\mathcal C}$ follows the wall.
There are some complications related to this: If two vertices are close to the wall, but do not belong to it\footnote{Consider, for example, a family of curves $\ensuremath{\mathcal C}$ getting closer and closer to the wall; since we need estimates uniform in all such curves, one has to be able to deal with such a situation.}, the sum over the corresponding piece of contour might not yield simply $\tau_\beta$ or $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$, but some complicated mixture, since typical such contours might first go down to the wall, then follow it on some length, and only then go up to the other vertex, see Fig.~\ref{fig_pinning}; this kind of behavior has been studied in detail in \cite{PfisterVelenik98}. It turns out that it is possible to construct a polygonal approximation to the curve $\ensuremath{\mathcal C}$ whose surface tension is not too large in comparison with that of $\ensuremath{\mathcal C}$, while removing these possible pathologies. The idea is the following. Let $\delta_N = N^{-1/2}\log N$, and set \begin{equation*} {\widehat\bbD^2_r}(N) = \setof{x\in{\widehat\bbD^2_r}}{\min_{y\not\in{\widehat\bbD^2_r}}\,\normI{y-x} > \delta_N}\,. \end{equation*} Let $V$ be the convex body with boundary $\ensuremath{\mathcal C}$ and set $\ensuremath{\mathcal C}_N = \partial(V\cap{\widehat\bbD^2_r}(N))$. We first construct a polygonal approximation for each of the components of $\ensuremath{\mathcal C}_N\cap{\widehat\bbD^2_r}(N)$ with segments of length $\delta_N$ (apart from at most 8 of them, which may be shorter). Set $[x,y] = \setof{z\in\ensuremath{\mathcal C}_N}{z(2)=\delta_N}$. If $[x,y]\neq\eset$, we connect the two corresponding pieces of polygonal lines by a broken line from $x$ to $(x(1),0)$, then to $(y(1),0)$, and finally to $y$; we divide the segment between $(x(1),0)$ and $(y(1),0)$ into segments of length $\delta_N/2$ (except possibly for the last one, which can be shorter). We repeat this construction for the three other sides of the box.
The resulting closed polygonal line is denoted by $\widehat\ensuremath{\mathcal P}_N$ (see Fig.~\ref{fig_bd_lb_cg}). Notice that by construction there exists an absolute constant $C$ such that \begin{align*} \cW_{\gb,\bdf}(\ensuremath{\mathcal C}) &\geq \cW_{\gb,\bdf}(\widehat\ensuremath{\mathcal P}_N) - C\beta\delta_N\,,\\ \abs{{\rm vol}(\ensuremath{\mathcal C})-{\rm vol}(\widehat\ensuremath{\mathcal P}_N)} &\leq C\,\abs{{\widehat\bbD^2_r}}\, \delta_N\,. \end{align*} \begin{figure}[t] \centerline{ \psfig{file=bd_lb_droplet.ps,height=6cm}\hspace{1cm} \psfig{file=bd_lb_polygon.ps,height=6cm} } \caption{Left: The curve $\ensuremath{\mathcal C}$; the shaded area represents the convex body whose boundary is $\ensuremath{\mathcal C}_N$ and the dashed line is the boundary of ${\widehat\bbD^2_r}(N)$. Right: The polygonal approximation $\widehat\ensuremath{\mathcal P}_N$, the dots representing its vertices.} \label{fig_bd_lb_cg} \end{figure} We then rescale the polygonal line $\widehat\ensuremath{\mathcal P}_N$ by a factor $N$ and, if necessary, slightly move the rescaled vertices so that they belong to the dual lattice; the rescaled polygon is denoted by $\ensuremath{\mathcal P}_N$. We then define a class $\ensuremath{\mathfrak G}$ of closed contours going through the vertices of $\ensuremath{\mathcal P}_N$ (in the right order), and staying in some small boxes along its edges. For all edges of length smaller than $N\delta_N$, as well as for the (up to 8) pieces we added above to join $\ensuremath{\mathcal C}_N$ to the boundary, we impose that the corresponding piece of the contour is a fixed length-minimizing path between the vertices. The rest of the argument proceeds in a similar way as in the standard case. The estimates in the phase of small contours carry over without any problem, since in that case the effect of the boundary field cannot propagate far away from the wall.
We still have to explain how one can extract the correct surface tension for $\widehat\ensuremath{\mathcal P}_N$ from the sum over contours in the class $\ensuremath{\mathfrak G}$ introduced above. To do this, we use several results about the {\em random-line representation}, proved in \cite{PfisterVelenik97,PfisterVelenik98}. To lighten the notation, we simply write $\weightB{\eta}$ instead of $q_{{\dBoxx{N,rN}}}^{\beta^*,\eta^*}$; $\beta^*$ and $\eta^*$ are the duals of $\beta$ and $\eta$, see \eqref{eq_dualitygeneral}. The first inequality is just the analogue of \eqref{3.4.qasuper} in our case, which turns out to be valid for arbitrary ferromagnetic coupling constants: The weight of any high-temperature contour $\gga\in\ensuremath{\mathfrak G}$ satisfies (\cite{PfisterVelenik97}, Lemma~5.4) \begin{equation*} \weightB{\eta}(\gga) \geq \prod_k \weightB{\eta}(\gga_k) \end{equation*} where $\gga_k$ denotes the piece of the contour $\gga$ between the $k$th and $(k+1)$th vertices of $\ensuremath{\mathcal P}_N$. The next step is to replace $\weightB{\eta}(\gga_k)$ by the corresponding infinite-volume quantity. First, for any $\gga_k$ joining vertices not belonging to ${\theplane_N^\star} \stackrel{\gD}{=} \setof{i\in{\Boxx{N,rN}}^*}{i(2)=-\tfrac12}$ (note that $\gga_k$ necessarily stays at a distance $O(N\delta_N)$ from ${\theplane_N^\star}$) \begin{equation*} \weightB{\eta}(\gga_k) \geq (1-e^{-O(N\delta_N)})\;q^{\gb^*}(\gga_k)\,; \end{equation*} second, for the pieces $\gga_k$ joining two sites of ${\theplane_N^\star}$, we use \begin{equation*} \weightB{\eta}(\gga_k) \geq \weightL{\eta}(\gga_k)\,, \end{equation*} where ${\bbL^{d}_\star} \stackrel{\gD}{=} \setof{i\in{\ensuremath{\mathbb Z}} ^d_\star}{i(2)\geq -\tfrac12}$ (both results are proved in \cite{PfisterVelenik97}, Lemma~5.3). Finally, the remaining pieces have a length at most $8N\delta_N$, so that their total weight is larger than $e^{-O(N\delta_N)}$. The last step is to extract the surface free energy.
The basic tools for this are, as in the proof of Theorem \ref{skeletonlb}, concentration properties for open contours between two fixed dual sites. For the pieces $\gga_k$ not touching the boundary, we can use the usual infinite volume results based on \eqref{eq_concentration}, setting $s=N\delta_N$. For the pieces along the boundary, one can use the following statement (\cite{PfisterVelenik98}, Lemma~6.10): \begin{equation}\label{eq_concwall} \sumtwo{\lambda:\,i\to j}{\lambda\subset{\bf N}_K (i,j)\cap{\bbL^{d}_\star}} \weightL{\eta}(\lambda) \geq \bk{\sigma_i\sigma_j}^{\beta^*,\eta^*}_{{\bbL^{d}_\star}}\;\left( 1+ o(1)\right) , \end{equation} where ${\bf N}_K (i,j)$ is defined in Appendix~B (with $s=N\delta_N$). (In fact, \eqref{eq_concwall} can be strengthened when $\eta<{\bdf_{\rm\scriptscriptstyle w}}(\beta)$: in this case, the set ${\bf N}_K (i,j)\cap{\bbL^{d}_\star}$ can be replaced by the set (\cite{PfisterVelenik98}, Lemma~6.13) $$ \setof{k\in{\bbL^{d}_\star}}{(i(1)\wedge j(1))-K\log \delta_N \leq k(1)\leq (i(1)\vee j(1))+K\log \delta_N,\, k(2) \leq K\log \delta_N}\,, $$ which is compatible with our picture of partial wetting.) The result then follows from lower bounds on the corresponding 2-point functions. The only new inputs are the following lower bounds on the boundary 2-point function, \begin{align} \bk{\sigma_i\sigma_j}^{\beta^*,\eta^*}_{{\bbL^d}^*} &\geq C\,\frac{\exp\{-\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\norm{j-i}\}} {\norm{j-i}^{3/2}} \quad\quad &\forall\eta\geq{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,, \label{eq_lower2ptfwall1}\\ \bk{\sigma_i\sigma_j}^{\beta^*,\eta^*}_{{\bbL^d}^*} &\geq C\,\exp\{-\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\norm{j-i}\} &\forall\eta<{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,, \label{eq_lower2ptfwall2} \end{align} for any $i,j\in\gS^*\stackrel{\gD}{=}\setof{k\in{\bbL^{d}_\star}}{k(2)=-\tfrac12}$.
\eqref{eq_lower2ptfwall2} is proved in \cite{PfisterVelenik97}, Prop.~7.1, while \eqref{eq_lower2ptfwall1} follows from exact computations in the case $\eta^*=1$ \cite{McCoyWu73}, and \cite{PfisterVelenik97}, Prop.~7.1, \begin{equation*} \bk{\sigma_i\sigma_j}^{\beta^*,\eta^*}_{{\bbL^d}^*} \geq (\tanh\beta^*)^2\, \bk{\sigma_i\sigma_j}^{\beta^*,1}_{{\bbL^d}^*},\quad\quad\forall\eta\geq 0\,. \end{equation*} \medskip\noindent {\it Second case: $\eta=0$} This is a somewhat marginal case. The apparent difficulty is that in this case $\eta^*=\infty$. However, this does not create any real complications. One just has to modify the construction of the first case as follows: We replace the polygonal line $\widehat\ensuremath{\mathcal P}_N$ by the (possibly open) polygonal line $\widehat\ensuremath{\mathcal P}_N\setminus\setof{u\in{\ensuremath{\mathbb R}} ^2}{u(2)=0}$; we then sum over contours going through the vertices of this polygonal line (contours which are open if the polygonal line is open). This does not give any contribution for the part of $\ensuremath{\mathcal C}$ along the wall, which is what we want, since $\tau_{\scriptscriptstyle\rm bd}(\beta,0)=0$. \medskip\noindent {\it Third case: $\eta<0$} \begin{figure}[t] \centerline{ \psfig{file=bd_lb_negh1.ps,height=6cm}\hspace{1cm} \psfig{file=bd_lb_negh2.ps,height=6cm} } \caption{The construction for $\eta<0$. Left: $I=\eset$ (two polygonal lines: one open and one closed). Right: $I\neq\eset$ (one open polygonal line).} \label{fig_bd_lb_cg_negh} \end{figure} This case is slightly more delicate. In this situation, one may be even more pessimistic, since the duality is simply not defined when non-ferromagnetic interactions are present. However, this difficulty turns out to be only apparent.
Indeed, we can use the following obvious identity to recover ferromagnetic interactions (see footnote \ref{footnote}, p.~\pageref{footnote}), \begin{equation*} \Isbdf{+}{N}{\eta} = \Isbdf{\pm}{N}{\abs\eta}\,, \end{equation*} where $\pm$ corresponds to the boundary condition $\overline{\sigma}_i = 1$ if $i(2)\geq 0$ and $\overline{\sigma}_i = -1$ otherwise. We then construct $\widehat\ensuremath{\mathcal P}_N$ as in the first step and set $I=\widehat\ensuremath{\mathcal P}_N\cap \setof{x\in{\widehat\bbD^2_r}}{x(2)=0}$. If $I=\eset$, then we subdivide the set $\setof{x\in{\widehat\bbD^2_r}}{x(2)=0}$ into segments of length $\delta_N/2$ (except possibly for the last one, which might be shorter); this defines a second (open) polygonal line $\widehat\ensuremath{\mathcal P}_N'$ (with all its vertices along the wall) (see Fig.~\ref{fig_bd_lb_cg_negh}). We then introduce a class of {\em pairs} of contours $(\gga,\gga')$, $\gga$ going through the vertices of $\ensuremath{\mathcal P}_N$ and defined as before, and $\gga'$ following the wall, going through the vertices of $\ensuremath{\mathcal P}_N'$ and staying inside small boxes along its edges, similarly to the first one ($\gga'$ is open). By construction, $\gga$ and $\gga'$ are disjoint. Duality then implies the following identity \begin{align} \Isbdf{\pm}{N}{\abs{\eta}}(\{\gga,\gga'\}\subset\boldsymbol\gga(\,\cdot\,)) &= (\PFbdf{\pm}{N}{\abs{\eta}})^{-1}\; w(\gga)w(\gga')\;\sumtwo{\underline\zeta:}{(\underline\zeta,\gga,\gga')\text{ $\Lambda^*$-comp.}} w(\underline\zeta) \nonumber\\ &= (1-e^{-O(N)})\;\frac {\PFbdf{+}{N}{\abs{\eta}}} {\PFbdf{\pm}{N}{\abs{\eta}}} \;\weightB{\abs{\eta}}(\gga,\gga')\,. \label{eq_dualityhneg} \end{align} The factor $(1-e^{-O(N)})$ comes from the fact that we can apply duality only to simply connected sets, and the exterior of $\gga$ is {\em not} simply connected.
We must therefore forbid families $\underline\zeta$ for which duality does not hold; since such families must contain at least one contour surrounding $\gga$, we get the above correction. We can now proceed as in the first case. The only additional work to do is to analyze the ratio of partition functions in \eqref{eq_dualityhneg}, but this is easy, since by duality \begin{equation}\label{eq_ratio} \frac {\PFbdf{+}{N}{\abs{\eta}}} {\PFbdf{\pm}{N}{\abs{\eta}}} = \bigl( \bk{\sigma_{t_{\rm l}}\sigma_{t_{\rm r}}}^{\beta^*,\abs{\eta}^*}_{{\dBoxx{N,rN}}} \bigr)^{-1} \geq e^{\tau_{\scriptscriptstyle\rm bd}(\beta,\abs{\eta})\,(2N+1)}\,, \end{equation} where $t_{\rm l}=(-N-\tfrac12,-\tfrac12)$ and $t_{\rm r}=(N+\tfrac12,-\tfrac12)$ are the two dual sites at the lower left and lower right corners of ${\dBoxx{N,rN}}$, and the last inequality follows from the upper bound (see \cite{PfisterVelenik97} for example) \begin{equation}\label{eq_upbd2ptfbd} \bk{\sigma_i\sigma_j}^{\beta^*,\abs{\eta}^*}_{\dBoxx{N,rN}} \leq e^{-\tau_{\scriptscriptstyle\rm bd}(\beta,\abs{\eta})\norm{j-i}}\,, \end{equation} valid for any $i,j\in{\theplane_N^\star}$. We then see that the ratio of partition functions cancels the contribution from the sum over the open contour $\gga'$, up to an error term $\exp\{\ensuremath{\mathcal O}(N\delta_N)\}$. \medskip If $I\neq\eset$, the situation is simpler. Let us write $I=[x,y]$; then we define a new polygonal line $\widehat\ensuremath{\mathcal P}_N^\pm$: it goes from the lower left corner of ${\widehat\bbD^2_r}$ to $x$ along the wall, then follows $\widehat\ensuremath{\mathcal P}_N\setminus \setof{u\in{\widehat\bbD^2_r}}{u(2)=0}$ up to $y$, and finally goes from $y$ to the lower right corner of ${\widehat\bbD^2_r}$ (see Fig.~\ref{fig_bd_lb_cg_negh}).
We subdivide as usual the part of $\widehat\ensuremath{\mathcal P}_N^\pm$ along the wall into segments of length $\delta_N/2$ and proceed as in the first case, with $\widehat\ensuremath{\mathcal P}_N^\pm$ replacing $\widehat\ensuremath{\mathcal P}_N$, using \eqref{eq_dualityhneg}. Summing over the open contour going through the vertices of $\ensuremath{\mathcal P}_N^\pm$ produces (up to the usual error term) a term $\exp\{-\intSTbdf{\abs{\eta}}(\widehat\ensuremath{\mathcal P}_N^\pm)\,N\}$. Combining this with \eqref{eq_ratio} and observing that \begin{equation*} \exp\{2\tau_{\scriptscriptstyle\rm bd}(\beta,\abs{\eta})\,N\}\; \exp\{-\intSTbdf{\abs{\eta}}(\widehat\ensuremath{\mathcal P}_N^\pm)\, N\} = \exp\{-\intSTbdf{\eta}(\widehat\ensuremath{\mathcal P}_N)\, N\}\,, \end{equation*} the conclusion follows as in the usual situation. \bigskip\noindent \underline {The upper bound.} Let us now turn our attention to the proof of the upper bound. The basic strategy is entirely analogous to that of the standard case, see Subsection~\ref{dima_structure_ub}. The only serious modification concerns the energy estimate, which must now relate the probability of skeletons to the functional $\cW_{\gb,\bdf}$. Again, the case $\eta\geq 0$ is somewhat simpler than the other, so we start with this one. \medskip\noindent {\it First case: $\eta\geq 0$.} The basic problem we encounter when trying to make the energy estimate is the same one we met in the proof of the lower bound. Summing over an open contour connecting two dual sites $i$ and $j$ might not yield a decay of order $\exp\{-\tau_\beta(j-i)\}$ or $\exp\{-\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\norm{j-i}\}$ if $i$ and $j$ are close enough to the wall but not on it (see \cite{PfisterVelenik98}).
However, the following bound, proven in \cite{PfisterVelenik97}, Lemma~5.1, is sufficient to derive the energy estimate, \begin{equation}\label{eq_upbddisj} \sumtwo{\lambda:\,i\rightarrow j}{\lambda\cap\ensuremath{\mathcal E}({\theplane_N^\star})=\eset}\weightB{\eta}(\lambda) \leq \exp\{-\tau_\beta(j-i)\}\,, \end{equation} for any $\eta\geq 0$, where $\ensuremath{\mathcal E}({\theplane_N^\star})=\{e^*\subset{\theplane_N^\star}\}$. The skeletons will be defined in such a way that the additional constraint $\lambda\cap\ensuremath{\mathcal E}({\theplane_N^\star})=\eset$ is automatically satisfied; see below. We also need to extract the wall free energy when summing over contours joining two dual sites belonging to ${\theplane_N^\star}$; this, however, is nothing other than \eqref{eq_upbd2ptfbd}. Let us now describe the construction of a skeleton $S=(u_1,\dots,u_n)$ of a closed contour $\gga$. Remember that we have to define the skeletons in such a way as to ensure that 1) the piece of the contour between two dual sites not both on the wall is edge-disjoint from the wall, and 2) the Hausdorff distance between the contour $\gga$ and the polygonal line $\text{Pol}(S)$ is smaller than the cutoff parameter $s(N)$. \begin{figure}[t] \centerline{ \psfig{file=bd_ub_v.ps,height=4.5cm}\hspace{3.7mm} \psfig{file=bd_ub_skel.ps,height=4.5cm} } \figtext{ \writefig 0.10 0.88 {\footnotesize $s(N)$} \writefig -6.77 0.83 {\footnotesize $v_1$} \writefig -4.70 0.83 {\footnotesize $v_2$} \writefig -1.22 0.83 {\footnotesize $v_{2m}$} } \caption{Left: A contour touching the wall and the family $(v_1,\dots,v_{2m})$. Right: An $s$-skeleton for the contour.} \label{fig_bd_ub_skel} \end{figure} For contours $\gga$ which do not touch the wall, the definition of skeletons is the same as in Part~\ref{part_strongWulff}. Suppose $\gga\cap\ensuremath{\mathcal E}({\theplane_N^\star})\neq\eset$.
Let us define $(v_1,\dots,v_{2m})$ as the {\em minimal} family of dual sites satisfying the following properties: \begin{enumerate} \item $v_k\in{\theplane_N^\star}\cap\gga$ for $k=1,\dots,2m$ and $v_k(1)< v_{k'}(1)$ if $k< k'$; \item $(v_1,\dots,v_{2m})$ split $\gga$ into pieces $\gga_1:v_1\rightarrow v_2,\dots,\gga_{2m}:v_{2m}\rightarrow v_1$, such that \begin{itemize} \item $\gga_{2k}\cap\ensuremath{\mathcal E}({\theplane_N^\star})=\eset$ for all $k=1,\dots,m$. \item $d_{\bbH}(\gga_{2k},\setof{x\in{\ensuremath{\mathbb R}} ^2}{x(2)=-1/2})>s(N)$ for all $k=1,\dots,m$. \item $d_{\bbH}(\gga_{2k+1},\setof{x\in{\ensuremath{\mathbb R}} ^2}{x(2)=-1/2})\leq s(N)$ for all $k=1,\dots,m$. \end{itemize} \end{enumerate} We then say that $S=(u_1,\dots,u_n)$ is an $s$-skeleton of $\gga$ if \begin{itemize} \item All vertices of $S$ belong to $\gga$. \item $v_1,\dots,v_{2m}$ are vertices of $S$. \item The only vertices of $S$ along $\gga_{2k+1}$ are $v_{2k+1}$ and $v_{2k+2}$, for all $k=1,\dots,m$. \item The distance between any successive pair of vertices $u_l,u_{l+1}$ of $S$ along $\gga_{2k}$ satisfies $s(N)/2\leq\normsup{u_l-u_{l+1}} \leq 2s(N)$, for all $k=1,\dots,m$. \item $d_{\bbH}(\gga,\text{Pol}(S))\leq s(N)$. \end{itemize} This definition has the nice property that either $u_l$ and $u_{l+1}$ both belong to ${\theplane_N^\star}$, or the part of $\gga$ between these two sites is edge-disjoint from ${\theplane_N^\star}$ (see Fig.~\ref{fig_bd_ub_skel}). This allows us to use the estimates \eqref{eq_upbd2ptfbd} and \eqref{eq_upbddisj}. This yields the following extension of \eqref{3.4.energy} \cite{PfisterVelenik97} \begin{equation}\label{eq_energybd} \Is^{\beta^*,\bdf^*}_{\dBoxxrN}(\ensuremath{\mathfrak S}) \leq \exp\{-\cW_{\gb,\bdf}(\ensuremath{\mathfrak S})\}\,.
\end{equation} The analogue of the energy estimate \eqref{3.4.energyge} then follows easily, since $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\geq 0$ when $\eta\geq 0$, and therefore it is still possible to control the number of vertices of $\ensuremath{\mathfrak S}$ in terms of $\cW_{\gb,\bdf}(\ensuremath{\mathfrak S})$. This gives \begin{equation}\label{eq_energygebd} \Is^{\beta^*,\bdf^*}_{\dBoxxrN}(\ensuremath{\mathfrak S} \geq r) \leq \exp\Bigl\{-r(1-\frac{C\log N}{s(N)})\Bigr\}\,. \end{equation} Using this and the estimates in the phase of small contours, which still hold in the presence of a boundary field, the upper bound follows easily. \medskip\noindent {\it Second case: $\eta< 0$.} As for the lower bound, we have to deal with the fact that, for $\eta<0$, the duality is not defined. The solution is the same as there: We just change boundary conditions, i.e.\ we look at the measure $\Isbdf{\pm}{N}{\abs\eta}$, which was defined when we dealt with the lower bound. Once we have done this, the main difference is that the family of low-temperature contours of any configuration compatible with these boundary conditions contains exactly one open contour, with endpoints $t_{\rm l} = (-N-\tfrac12, -\tfrac12)$ and $t_{\rm r} = (N+\tfrac12, -\tfrac12)$. It is straightforward to generalize the notion of skeleton introduced in the preceding case to the present situation. What we get by this procedure is a family of skeletons $\ensuremath{\mathfrak S}^\pm=(S_0,S_1,\dots,S_n)$ containing exactly one skeleton, $S_0$, with $\text{Pol}(S_0)$ open with endpoints $t_{\rm l}$ and $t_{\rm r}$. \begin{figure}[t] \centerline{ \psfig{file=bd_ub_Spm.ps,height=3.7cm}\hspace{1cm} \psfig{file=bd_ub_S.ps,height=3.7cm} } \caption{Left: The family of polygonal lines associated to $\ensuremath{\mathfrak S}^\pm$.
Right: The family of {\em closed} polygonal lines associated to $\ensuremath{\mathfrak S}$.} \label{fig_bd_ub_Spm} \end{figure} Since we want to compare the corresponding families of polygonal lines with the solution of the variational problem, i.e.\ with the boundary of a convex body in ${\widehat\bbD^2_r}$, it is convenient to introduce another family $\ensuremath{\mathfrak S}$ of skeletons whose associated polygonal lines are {\em closed}; $\ensuremath{\mathfrak S}$ possesses the same set of vertices (except for $t_{\rm l}$ and $t_{\rm r}$), but a different set of edges, chosen in such a way that its associated family of polygonal lines satisfies \begin{equation*} \text{Pol}(\ensuremath{\mathfrak S}) = \text{Pol}(\ensuremath{\mathfrak S}^\pm) {\scriptstyle\triangle} \setof{x\in{\ensuremath{\mathbb R}} ^2}{-N-\tfrac12\leq x(1) \leq N+\tfrac12,\, x(2)=-\tfrac12}\,, \end{equation*} where $\scriptstyle\triangle$ denotes the symmetric difference (see Fig.~\ref{fig_bd_ub_Spm}). One then has the following relation \begin{equation*} \cW_{\gb,\bdf}(\ensuremath{\mathfrak S}) = \intSTbdf{\abs\eta}(\ensuremath{\mathfrak S}^\pm) - (2N+1)\; \tau_{\scriptscriptstyle\rm bd}(\beta,\abs\eta)\,. \end{equation*} In particular, the following version of \eqref{eq_energybd} holds \cite{PfisterVelenik97} \begin{align} \Isbdf{\pm}{N}{\abs\eta}(\ensuremath{\mathfrak S}^\pm) &\leq K_1\; \exp\{-\cW_{\gb,\bdf}(\ensuremath{\mathfrak S})\}\,, &\eta&> -{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,, \label{eq_energybd2}\\ \Isbdf{\pm}{N}{\abs\eta}(\ensuremath{\mathfrak S}^\pm) &\leq K_2N^{3/2}\; \exp\{-\cW_{\gb,\bdf}(\ensuremath{\mathfrak S})\}\,, &\eta&\leq -{\bdf_{\rm\scriptscriptstyle w}}(\beta)\,. \label{eq_energybd3} \end{align} The energy estimate \eqref{eq_energygebd} is slightly more delicate now, since the wall free energy is negative.
It turns out, however, that in the partial wetting regime, $\eta>-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, it is easy to reduce the problem to a situation similar to the case $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\geq 0$. The case $\eta\leq -{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, i.e.\ complete wetting, is more subtle, but turns out not to cause too many problems as long as we consider volume-order large deviations (or, in fact, deviations close enough to volume order). Let us first consider the case of partial wetting; this regime is characterized by $\abs{\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)}<\tau^*_\beta$. Let us write $\cW_{\gb,\bdf}(\ensuremath{\mathfrak S}) = T^++T^-$, where $T^+$ ($T^-$) is the positive (negative) part of the functional. Then, since $T^+\geq (\tau^*_\beta/\tau_{\scriptscriptstyle\rm bd}(\beta,\eta))\,T^-$ and the number of vertices along the wall is at most two-thirds of the total number $\#(\ensuremath{\mathfrak S})$, we have \begin{equation*} \#(\ensuremath{\mathfrak S}) \leq \frac{K}{s(N)(\tau^*_\beta+\tau_{\scriptscriptstyle\rm bd}(\beta,\eta))}\;\cW_{\gb,\bdf}(\ensuremath{\mathfrak S})\,, \end{equation*} for some absolute constant $K$. This allows us to prove that \begin{equation}\label{eq_energygepw} \Isbd{-}{N}(\ensuremath{\mathfrak S} \geq r) \leq \exp\Bigl\{-r(1-\frac{C\log N}{s(N)})\Bigr\}\,. \end{equation} When $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$, one cannot establish such a good upper bound. The best we can do is to use the fact that $T^-\geq (2N+1)\,\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$, which turns out to be enough to prove the following, weaker, version of the energy estimate \begin{equation}\label{eq_energygecw} \Isbd{-}{N}(\ensuremath{\mathfrak S} \geq r) \leq \exp\Bigl\{-r(1-\frac{C\log N}{s(N)})+C'\frac{N\log N}{s(N)}\Bigr\}\,.
\end{equation} The reason why such an estimate is still sufficient to get the desired result is that the relevant values of $r$ are also of order $N$, so that the first term can always be made to dominate the second one. \bigskip Once we have \eqref{eq_energygepw} and \eqref{eq_energygecw}, the proof is concluded as usual, after observing that the estimate in the phase of small contours still applies in the presence of the boundary field $\abs\eta$. \subsection{Ising model in $D\geq 3$} The proof of Theorem~\ref{thm_sessile3D} is based on the ${\ensuremath{\mathbb L}} _1$-theory introduced in Part~\ref{part_weakWulff}. We simply explain how the main ingredients of the proof should be modified and refer to \cite{BodineauIoffeVelenik99} for details.\\ The arguments from geometric measure theory can be extended easily to this new setting. In particular, it is straightforward to check that the functional $\cW_{\gb,\bdf}$ is lower semi-continuous and that the approximation Theorems~\ref{thm approx} and \ref{theo ABCP} hold. The main problem is to define proper mesoscopic phase labels for the measures with a boundary magnetic field. If $\eta \geq 0$, then the mesoscopic phase labels introduced in Part~\ref{part_weakWulff} satisfy Assumptions A and B, as well as Conditions C1-C3, under the measure $\Isbd{+}{N}$. If instead $\eta < 0$, some problems occur, because the FK measure loses its ferromagnetic properties and the random coloring measures are more complicated to deal with. Nevertheless, it is still possible to define mesoscopic phase labels and to derive estimates as in Section~\ref{Coarse graining and mesoscopic phase labels}. Other difficulties have to be overcome in order to implement the general philosophy of the ${\ensuremath{\mathbb L}} _1$-theory. In the case of a negative boundary magnetic field, the interface induced by the field prevents us from directly applying the techniques developed to prove the exponential tightness Theorem \ref{prop 1}.
Therefore, an alternative approach, similar to the one described in Subsection~\ref{ssec_2D}, is required. The analysis of the surface tension also needs some care. We recall that the computation of the surface tension is based on a localization procedure along the boundary of functions of bounded variation. For a given test function, either its boundary is locally in the bulk, in which case we recover the usual surface tension term, or it intersects the wall, in which case arguments similar to those used in the bulk enable us to derive the wall free energy. In this way the complexity of the problem is reduced, because the difficult analysis of the fluctuations of the microscopic interface between the wall and the bulk is replaced by soft ${\ensuremath{\mathbb L}} _1$ estimates.\\ \section{Open problems} \setcounter{equation}{0} As in the previous parts, there are still many open problems. Most of those presented before have natural analogues in the present situation. In the following, we restrict ourselves to problems intrinsically related to the topics discussed in this part. \subsubsection{2D nearest-neighbor Ising model} The fact that one is still unable to analyze non-pertur\-ba\-tively the fluctuations of the phase separation line is felt even more strongly when one would like to study boundary effects. Indeed, a general analysis of typical open paths with endpoints at general positions with respect to the wall has not been done, even at low temperature. Problems related to this are the following: \begin{enumerate} \item Give a non-perturbative proof that the probability measure of a suitably rescaled version of an open contour with endpoints on the wall converges weakly to the measure of the Brownian excursion when $\eta\leq-{\bdf_{\rm\scriptscriptstyle w}}(\beta)$ (as was sketched in the low-temperature case for $\eta=-1$ in \cite{Dobrushin93}).
This would provide a way of analyzing the typical fluctuations of the magnetization in the complete wetting regime, and would complete the heuristic picture of the wetting transition in the Grand-Canonical Ensemble. \item Establish Ornstein-Zernike behavior for the boundary 2-point function without having recourse to explicit computations. Even weaker lower bounds, like those given in \cite{Al}, have not been proved in such a constrained geometry. \end{enumerate} Another open problem is to investigate the full range of moderate deviations. This may require an understanding of point~1 above. \subsubsection{Higher dimensional nearest-neighbor Ising models} If fluctuations of phase separation lines are not yet understood, the situation is even worse for their higher-dimensional counterparts; in fact, even perturbative results are not always available. Here is a far from exhaustive list of related open problems. \begin{enumerate} \item Give a microscopic description of the behavior of phase boundaries in the partial and complete wetting regimes in the Grand-Canonical Ensemble, to put some flesh on the heuristics given above. \item Decide whether ${\bdf_{\rm\scriptscriptstyle w}}(\beta)=1$ or not. The corresponding results for the SOS model \cite{Chalker} suggest that ${\bdf_{\rm\scriptscriptstyle w}}(\beta)<1$ in any dimension; numerical investigations confirm this in dimension~3 \cite{BinderLandau}. \end{enumerate} In fact, even much simpler problems related to the behavior of higher-dimensional interfaces are still open: the proof of the existence of a roughening transition in $d=3$, the proof of the instability of the $(1,1,1)$ interface, ... In some simpler models of the SOS type some (but not all!) of these problems can be solved, but this does not seem to help in solving the original ones. \subsubsection{The wall} Another type of problem concerns the properties of the wall. In particular, it might be interesting to answer the following questions.
\begin{enumerate} \item What happens if the interaction with the wall is more complicated (say, not nearest-neighbor)? \item What happens if the boundary field is not homogeneous? For example, is a ``random'' configuration of $\eta_1$ and $\eta_2$ macroscopically equivalent to some well-chosen homogeneous boundary field $\eta=\overline\eta$? \end{enumerate} \part{Introduction} \label{part_Introduction} \setcounter{section}{0} \section{Phenomenological Wulff construction} \subsection{Equilibrium crystal shapes} The phenomenological theory of equilibrated crystals dates back at least to the beginning of the century \cite{Wulff}. Suppose that two different thermodynamic phases (say, a crystal and its vapor) coexist at a certain temperature $T$. Assuming that the whole system is in equilibrium, in particular that the volume $v$ of the crystalline phase is well defined, what can be said about the region this phase occupies? Of course, the issue cannot be settled in the language of bulk free energies --- these depend neither on the shape nor even on the prescribed volume $v$ of the crystal. Instead, possible phase regions are quantified by the value of the free energy of the crystal-vapor interface, or by the total surface tension between the crystal and the vapor\footnote{In this review, our point of view is that of mathematical physics; for an exposition of the problem from the viewpoint of theoretical physics, we refer to \cite{RW} and references therein. }. Equilibrium shapes correspond, in this way, to the regions of minimal interfacial energy. This is an isoperimetric-type problem: The surface tension $\tau_{\gb}$ (where, throughout the article, $\beta$ denotes the inverse temperature, $\beta = 1/T$) is an anisotropic function of the local direction of the interface.
Thus, assuming that the crystal occupies a region $V\subset{\ensuremath{\mathbb R}} ^d$, the corresponding contribution $\ensuremath{\mathcal W}_{\beta}\left(V\right)$ to the free energy is equal to the integral of $\tau_{\gb}$ over the boundary $\partial V$ of $V$ (Fig.~\ref{wulff_tension}). \begin{figure}[t] \centerline{ \psfig{file=crystal.ps,height=5cm}\hspace{1cm} \raise 2.5 truecm \hbox{$ \ensuremath{\mathcal W}_{\beta}\left( V\right)~=~\int_{\partial V}\tau_{\gb} (\vec{n}_x )\,{\rm d}\ensuremath{\mathcal H}^{(d-1)}_x $} } \figtext{ \writefig -2.36 3.10 {\footnotesize $x$} \writefig -1.80 3.50 {\footnotesize $\vec{n}_x$} \writefig -5.35 2.00 {\footnotesize $\partial V$} \writefig -2.20 0.80 {\footnotesize \bf Vapor} \writefig -3.90 2.60 {\footnotesize \bf Crystal} } \caption{The free energy of the crystal-vapor interface is given by the integral of the anisotropic surface tension $\tau_{\gb}$ over $\partial V$. $\ensuremath{\mathcal H}^{(d-1)}$ is the $(d-1)$-dimensional Hausdorff measure.} \label{wulff_tension} \end{figure} The Wulff variational problem can then be formulated as follows: \vskip 0.2cm \noindent $ \left({\rm {\bf WP}}\right)_v$\qquad\qquad $\ensuremath{\mathcal W}_{\beta} \left( V\right)~\longrightarrow~{\rm min} \qquad\qquad{\rm Given}:\ {\rm vol}(V)~=~v $ \vskip 0.2cm \noindent As in the usual isoperimetric case, $\left({\rm WP}\right)_v$ is scale invariant, \begin{eqnarray*} \forall s >0 , \qquad \ensuremath{\mathcal W}_\beta \big( sV \big) = s^{d-1} \ensuremath{\mathcal W}_\beta \big( V \big). \end{eqnarray*} Consequently, any dilatation of an optimal solution is itself optimal, and one really speaks here in terms of optimal shapes.
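The scaling relation can be checked directly from the integral representation of $\ensuremath{\mathcal W}_\beta$ (a one-line computation, included here for the reader's convenience): under the dilatation $x=sy$, the outward normal is unchanged, while the $(d-1)$-dimensional Hausdorff measure picks up a factor $s^{d-1}$, so that \begin{equation*} \ensuremath{\mathcal W}_\beta\big( sV \big) = \int_{\partial (sV)}\tau_{\gb}(\vec{n}_x)\,{\rm d}\ensuremath{\mathcal H}^{(d-1)}_x = s^{d-1}\int_{\partial V}\tau_{\gb}(\vec{n}_y)\,{\rm d}\ensuremath{\mathcal H}^{(d-1)}_y = s^{d-1}\,\ensuremath{\mathcal W}_\beta\big( V \big)\,. \end{equation*}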
The canonical way to produce an optimal shape is given by the following Wulff construction (Fig.~\ref{fig_wulffconstruction}): Define \begin{eqnarray} \label{Wulff shape} \ensuremath{\mathcal K} = \bigcap_{\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}} \left\{ x \in {\ensuremath{\mathbb R}} ^d: \ x~\cdot ~\vec{n} \le \tau_{\gb} (\vec{n}) \right\}\ \stackrel{\gD}{=}\ \bigcap_{\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}} H_{\beta}\left(\vec{n}\right) . \end{eqnarray} \begin{figure}[t] \centerline{ \psfig{file=wulffconstruction.ps,height=5cm}} \figtext{ \writefig -1.30 1.00 {\footnotesize $H(\vec{n}_1)$} \writefig -4.7 3.60 {\footnotesize $H(\vec{n}_2)$} \writefig -4.0 1.00 {\footnotesize $H(\vec{n}_3)$} \writefig -0.30 3.20 {\footnotesize $\vec{n}_1$} \writefig -1.65 4.40 {\footnotesize $\vec{n}_2$} \writefig -3.60 3.30 {\footnotesize $\vec{n}_3$} } \caption{The function $\tau_\beta(\vec{n})$ (left) with three half-spaces $H(\vec{ n}_1 )$, $H(\vec{ n}_2 )$ and $H(\vec{ n}_3 )$ (for better visibility, only $H(\vec{ n}_1 )$ has been shaded). The intersection of {\bf all} such half-spaces gives rise to the corresponding Wulff shape (right).} \label{fig_wulffconstruction} \end{figure} It is convenient to normalize $\ensuremath{\mathcal K}$ as $$ \ensuremath{\mathcal K}_1\ \stackrel{\gD}{=}\ \sqrt[d]{\frac{1}{{\rm vol}(\ensuremath{\mathcal K} )}}\ensuremath{\mathcal K} . $$ We refer to $\ensuremath{\mathcal K}_1$ as the normalized, or unit-volume, Wulff shape. The variational theory of $\left({\rm WP}\right)_v$, which we briefly address in the subsequent subsection, states that any solution to $\left({\rm WP}\right)_v$ can be obtained by a shift of the corresponding dilatation $\ensuremath{\mathcal K}_v\stackrel{\gD}{=} \sqrt[d]{v}\ensuremath{\mathcal K}_1$ of $\ensuremath{\mathcal K}_1$.
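As a simple illustration (not one of the lattice models studied in this review), consider the isotropic case $\tau_{\gb}(\vec{n})\equiv\tau>0$. Then \begin{equation*} \ensuremath{\mathcal K} = \bigcap_{\vec{n}\in{\ensuremath{\mathbb S}} ^{d-1}} \setof{x\in{\ensuremath{\mathbb R}} ^d}{x\cdot\vec{n}\leq\tau} = \setof{x\in{\ensuremath{\mathbb R}} ^d}{\normII{x}\leq\tau}\,, \end{equation*} so that the Wulff shape is the Euclidean ball of radius $\tau$ and $\ensuremath{\mathcal K}_1$ is the ball of unit volume; in this case the Wulff variational problem reduces to the classical isoperimetric problem, whose solutions are indeed balls.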
\noindent \subsection{Variational methods} \label{Variational methods} The corresponding literature is rather rich and diverse; here we merely attempt to facilitate the orientation of the reader and to introduce some notation which will be useful in the sequel. Since the half-spaces $H_{\beta}\left(\vec{n}\right)$ in \eqref{Wulff shape} are convex, so is the Wulff shape $\ensuremath{\mathcal K}$. Furthermore, in all the problems we consider here, the surface tension $\tau_{\gb}$ is bounded above and below, \begin{equation} \label{st_bound} 0~<~\min_{\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}}\tau_{\gb} (\vec{n})~\leq~ \max_{\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}}\tau_{\gb} (\vec{n})~<~\infty . \end{equation} Accordingly, equilibrium crystal shapes are bounded and have non-empty interiors, $0\in {\rm int}\big(\ensuremath{\mathcal K}_v\big)$. The fact that $\ensuremath{\mathcal K}$ is optimal follows from the general Brunn-Minkowski theory: Let $\tau_\beta^{**}$ be the support function of $\ensuremath{\mathcal K}$, $\tau_\beta^{**}(x) = \sup \{ y \cdot x \ | \ y \in \ensuremath{\mathcal K} \}$. Of course, if the homogeneous extension of $\tau_{\gb}$ \begin{equation} \label{tau tilde} \tau_{\gb} (\vec{x})\stackrel{\gD}{=} \normII{ \vec{x}}\tau_{\gb} \left( \frac{\vec{x}}{\normII{\vec{x}}}\right), \end{equation} is convex, then $\tau_{\gb}$ and $\tau_\beta^{**}$ coincide. In general, $\tau_\beta^{**}$ is the convex lower-semicontinuous regularization of $\tau_{\gb}$; in particular, $\tau_\beta^{**}\leq\tau_{\gb}$. Nevertheless, for the Wulff shape $\ensuremath{\mathcal K}$, \begin{eqnarray*} \ensuremath{\mathcal W}_{\beta}^{**}\left (\ensuremath{\mathcal K}\right)~\stackrel{\gD}{=} ~ \int_{\partial \ensuremath{\mathcal K}} \tau_{\beta}^{**}(\vec{n}_x) \, d \ensuremath{\mathcal H}^{(d-1)}_x ~=~ \int_{\partial \ensuremath{\mathcal K}} \tau_{\gb} (\vec{n}_x) \, d \ensuremath{\mathcal H}^{(d-1)}_x .
\end{eqnarray*} where, as before, $\vec{n}_x$ is the outward normal at $x$ and $\ensuremath{\mathcal H}^{(d-1)}$ is the $(d-1)$-dimensional Hausdorff measure in ${\ensuremath{\mathbb R}} ^d$. On the other hand, the action of the regularized functional $\ensuremath{\mathcal W}_{\beta}^{**}$ can be extended to any compact set $V\subset{\ensuremath{\mathbb R}} ^d$ in terms of the mixed volume $$ \ensuremath{\mathcal W}_{\beta}^{**}\left( V\right)~=~\liminf_{\gep \to 0} \frac{1}{\gep} \left( {\rm vol}(V + \gep \ensuremath{\mathcal K}) - {\rm vol}(V ) \right) ; $$ the latter definition coincides with the integral definition of $\ensuremath{\mathcal W}_{\beta}^{**}$ for regular $V$. The Brunn-Minkowski inequality \cite{Schneider} \begin{eqnarray*} \label{Brunn-Minkowski} {\rm vol} ( A+B) \geq \left( {\rm vol}(A)^{\frac{1}{d}} + {\rm vol}(B)^{\frac{1}{d}} \right)^d \; , \end{eqnarray*} implies that for any regular $V$ with ${\rm vol}\left( V\right) = {\rm vol}\left(\ensuremath{\mathcal K}\right)$, $$ \ensuremath{\mathcal W}_{\beta}\left( V\right)~\geq ~ \ensuremath{\mathcal W}_{\beta}^{**}\left( V\right)~ \geq ~ d \, {\rm vol} (\ensuremath{\mathcal K}) = \ensuremath{\mathcal W}_\beta (\ensuremath{\mathcal K}) . $$ Indeed, applying Brunn-Minkowski with $A=V$ and $B=\gep\ensuremath{\mathcal K}$, and using ${\rm vol}(\gep\ensuremath{\mathcal K})^{1/d}=\gep\,{\rm vol}(\ensuremath{\mathcal K})^{1/d}$, one finds $\frac{1}{\gep}\left({\rm vol}(V+\gep\ensuremath{\mathcal K})-{\rm vol}(V)\right) \geq d\,{\rm vol}(V)^{(d-1)/d}\,{\rm vol}(\ensuremath{\mathcal K})^{1/d}+\ensuremath{\mathcal O}(\gep)$, which equals $d\,{\rm vol}(\ensuremath{\mathcal K})+\ensuremath{\mathcal O}(\gep)$ under the volume constraint. Of course, we have been rather sloppy above, and we refer the reader to the works \cite{Taylor},~\cite{Fonseca} and \cite{FonsecaMuller} for a comprehensive discussion and results, including the history of the variational Wulff problem. The language employed in the latter works is that of geometric measure theory, and we proceed with setting up some of the corresponding notation, which will also turn out to be useful for the ${\ensuremath{\mathbb L}} _1$-approach to the microscopic justification of the Wulff construction, as described in Part~2 of this review. In the latter case, the macroscopic state of the system will be determined by the value of an order parameter which specifies the phase of the system.
In the systems that we will consider, the pure phases are characterized by their average densities, which are encoded by two values $\rho_l(\beta)$ and $\rho_h(\beta)$, for example $\rho_h$ for the crystal and $\rho_l$ for the vapor. (In fact, we shall derive all the results in the symmetrized spin language, in which case the two values will be $\pm m^*(\beta)$, where $m^* (\beta )$ is the spontaneous magnetization (see Section~2) at sub-critical temperature, i.e.\ at inverse temperature $\beta > \beta_c$.) For a given temperature, it is convenient to replace this order parameter by a parameter with values $\pm 1$. We suppose that the macroscopic region of ${\ensuremath{\mathbb R}} ^d$ to which the system is confined is the unit torus $\uTor = \left( {\ensuremath{\mathbb R}} / {\ensuremath{\mathbb Z}} \right)^d$. The macroscopic system is described by a function $v$ taking values $\pm 1$, and the fact that $v_r = 1$ for some $r$ in $\uTor$ means that locally at $r$ the system is in equilibrium in the phase $m^*$.\\ For any measurable set $V$ in $\uTor$, the perimeter of $V$ is defined by \begin{eqnarray} \label{perimeter} \ensuremath{\mathcal P}(V) = \sup \left\{ \int_V {\rm div} \phi(x) \, dx \quad \big| \qquad \phi \in C^1( \uTor, {\ensuremath{\mathbb R}} ^d), \ \ |\phi| \leq 1 \right\} \; . \end{eqnarray} (For a set $V$ with $C^1$ boundary, the divergence theorem shows that $\ensuremath{\mathcal P}(V)=\ensuremath{\mathcal H}^{(d-1)}(\partial V)$.) A function $v$ with values $\pm 1$ is said to be of bounded variation in $\uTor$ if the perimeter of the set $\{ v = 1 \}$ is finite. We denote by $\BV$ the set of functions of bounded variation in $\uTor$ with values $\pm 1$ (see \cite{EG} for a review). For any $v$ in $\BV$, there exists a generalized notion of the boundary of $\{ v = 1 \}$, called the reduced boundary and denoted by $\partial^* v$. If $\{ v = 1 \}$ is a regular set, $\partial^* v$ coincides with the usual boundary $\partial v$. Furthermore, a blow-up theorem (see \cite{EG}, p.~199) ensures that for all $x$ in $\partial^* v$ an approximate tangent plane can be defined locally.
This implies the existence of a unit vector $\vec{n}_x$, called the measure-theoretic unit normal to $\{ v = 1 \}$ at $x$. For any $x$ in ${\ensuremath{\mathbb R}} ^d$ and any vector $\vec{n}$, we define the half-spaces \begin{eqnarray*} H^+(x, \vec{n}) & = & \{ y \in {\ensuremath{\mathbb R}} ^d \ | \qquad (y-x) \cdot \vec{n} \geq 0 \} \; , \\ H^-(x, \vec{n}) & = & \{ y \in {\ensuremath{\mathbb R}} ^d \ | \qquad (y-x) \cdot \vec{n} \leq 0 \} \; . \end{eqnarray*} Then for all $x$ in $\partial^* v$, there is a unit vector $\vec{n}_x$ such that \begin{eqnarray*} \lim_{r \to 0} \; \frac{1}{r^d} {\rm vol} \left( B(x,r) \inter \{ v = 1 \} \inter H^+(x, \vec{n}_x) \right) & = & 0 \; , \\ \lim_{r \to 0} \; \frac{1}{r^d} {\rm vol} \left( B(x,r) \inter \{ v = - 1 \} \inter H^-(x, \vec{n}_x) \right) & = & 0 \; , \end{eqnarray*} where $B(x,r)$ is the ball of radius $r$ centered at $x$. The previous property shows that the reduced boundary is not too wild (see Fig.~\ref{fig_normext}). In fact, it is possible to prove that a set of finite perimeter has ``measure theoretically a $C^1$ boundary''.
\\ \begin{figure}[t] \centerline{ \psfig{file=normext.ps,height=4cm}} \figtext{ \writefig -3.5 3.5 {\footnotesize $\{v=-1\}$} \writefig 0.22 2.00 {\footnotesize $\{v=1\}$} \writefig -0.3 3.5 {\footnotesize $\vec n$} \writefig 3.22 4.55 {\footnotesize $H^+(x,\vec n)$} \writefig -4.8 1.70 {\footnotesize $H^-(x,\vec n)$} \writefig 5.10 4.00 {\footnotesize $\partial^* v$} \writefig 0.23 3.07 {\footnotesize $x$} } \caption{Measure-theoretic unit normal to $\{ v = 1 \}$ at $x$.} \label{fig_normext} \end{figure} The functional $\ensuremath{\mathcal W}_\beta$ can be extended to ${\ensuremath{\mathbb L}} _1(\uTor ,[-\frac{1}{m^*},\frac{1}{m^*}])$ as follows: \begin{eqnarray} \label{functional F} \ensuremath{\mathcal W}_\beta (v) = \left\{ \begin{array}{l} \int_{\partial^* v} \tau_{\gb}(\vec{n}_x) \, d \ensuremath{\mathcal H}^{(d-1)}_x, \qquad {\rm if} \quad v \in \BV \; ,\\ \infty \; , \qquad \qquad \qquad \qquad {\rm otherwise}. \end{array} \right. \end{eqnarray} Under the assumption that the homogeneous extension \eqref{tau tilde} of $\tau_{\gb}$ is convex, a result by Ambrosio and Braides (see \cite{Ambrosio}, Theorem~2.1) ensures that $\ensuremath{\mathcal W}_\beta$ is lower semi-continuous with respect to ${\ensuremath{\mathbb L}} _1$ convergence. In certain cases (attractive interactions), the convexity of $\tau_{\gb}$ can be derived from the properties of the corresponding microscopic system, as will be explained later. To any measurable subset $A$ of $\uTor$, we associate the function $\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_A = 1_{A^c} - 1_A$ and simply write $\ensuremath{\mathcal W}_\beta (A)=\ensuremath{\mathcal W}_\beta (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_A)$.
In this new setting, the isoperimetric problem is to find the minimizers of \begin{eqnarray} \label{variational} \min \big\{ \ensuremath{\mathcal W}_\beta (v) \ \big| \ v \in \BV, \qquad \big| \, \int_{\uTor} m^* \, v_r \, dr \big| \le m \big\}, \end{eqnarray} where $m$ belongs to $]{\bar m}(\beta ) , m^*(\beta ) [$. The parameter ${\bar m}$ is chosen such that the minima of the variational problem above are translates of the set $\ensuremath{\mathcal K}_m$, deduced from the Wulff shape $\ensuremath{\mathcal K}$ by a dilatation chosen so as to satisfy the volume constraint. This restriction enables us to exclude pathological minimizers which arise from the periodicity. Nevertheless, notice that the precise shape or the uniqueness of the minimizers of the variational problem will be irrelevant for the microscopic derivation of the Wulff construction. \subsection{Stability properties} In two dimensions, Wulff solutions to ${\rm (WP)}_v$ are stable with respect to the Hausdorff distance: let $V$ be a connected and simply connected subset of ${\ensuremath{\mathbb R}} ^2$ with a rectifiable boundary $\partial V$. Assume that ${\rm Area} (V)\geq 1$. Then, \begin{equation} \label{stability} \min_{x}{\rm d}_{{\ensuremath{\mathbb H}} }\left( V, x+\ensuremath{\mathcal K}_1\right)~\leq ~c_1\sqrt{\cW_\gb (V) - \cW_\gb (\ensuremath{\mathcal K}_1 )} . \end{equation} This result has been established in \cite{DKS} as a generalization of the classical Bonnesen inequality.
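For comparison, the classical Bonnesen inequality (corresponding to the isotropic case, in which $\ensuremath{\mathcal K}$ is a disc) states that for any convex set $V\subset{\ensuremath{\mathbb R}} ^2$ with area $A$ and perimeter $P$, \begin{equation*} P^2 - 4\pi A~\geq~\pi^2\left( R_{\rm out}-R_{\rm in}\right)^2\,, \end{equation*} where $R_{\rm out}$ and $R_{\rm in}$ are the circumradius and the inradius of $V$; the isoperimetric deficit on the left-hand side thus controls the square of the deviation of $V$ from an optimal disc, in the same spirit as \eqref{stability}.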
If $V$ consists of several connected and simply connected components, $V=\vee_{i=1}^{n} V_i$, and the total surface tension of $V$ is close to the optimal, $$ \cW_\gb (V)~=~\sum_{i=1}^{n}\cW_\gb (V_i )~\leq ~\cW_\gb (\ensuremath{\mathcal K}_1 ) +\gep , $$ then, again assuming that ${\rm Area} (V) =\sum_{i=1}^{n}{\rm Area} (V_i) \geq 1$, an easy consequence of \eqref{stability} (see (2.9.7) and (2.9.8) in \cite{DKS}) is that actually all but one of the components of $V$ are small, and that the only large component, say $V_1$, is close to a shift of $\ensuremath{\mathcal K}_1$. Namely $$ \sum_{i=2}^n{\rm Area} (V_i)~\leq~c_2\gep^2\qquad{\rm and}\qquad \sum_{i=2}^n\cW_\gb (V_i)~\leq ~c_3\gep , $$ and $V_1$ satisfies \eqref{stability}. These stability properties are indispensable for a sharp justification of the phenomenological Wulff construction directly from the microscopic assumptions on the local inter-particle interactions (see Section~\ref{dima_structure} of Part~\ref{part_strongWulff}). As far as we understand, stability properties of higher dimensional isoperimetric problems are much less studied. Already in three dimensions the Hausdorff distance is, of course, not an adequate measure of stability. Trivial rate-free stability properties in ${\ensuremath{\mathbb L}} _1$ simply follow from the uniqueness of Wulff solutions and the compactness of BV-balls in ${\ensuremath{\mathbb L}} _1$. On a more quantitative side, there are well studied stability properties in the class of convex sets \cite{Schneider} and, also, for sets with a smooth boundary \cite{Hall}. We feel, however, that the statistical stability under the microscopic approximations in the problems we consider here might be better than the impartial stability of the corresponding variational problems. A result of this sort is supposed to appear in \cite{BodineauIoffeVelenik99}.
\subsection{Winterbottom problem} The Wulff variational problem provides a description of an equilibrium crystal shape deep inside a region filled with gas phase. If, however, the spatial extent of the system is finite, it may happen that the boundary of the surrounding vessel exhibits a preference toward the crystal phase. In such a situation, the equilibrium state may not be given by the Wulff shape anymore, but may have the crystal attached to the boundary. We discuss briefly the simplest model of such an interaction between an equilibrium crystal and an attractive substrate. Suppose, for simplicity, that our system is contained in the half-space $H = \setof{x\in{\ensuremath{\mathbb R}} ^d}{x(d)\geq 0}$; the boundary of this half-space, the hyperplane $\ensuremath{\mathfrak w} = \setof{x\in{\ensuremath{\mathbb R}} ^d}{x(d)= 0}$, represents the boundary of the vessel and is called the {\em wall}. To simplify the analysis, and because these assumptions will always be satisfied in the models we consider, we also suppose that $\tau_\beta(\vec n)=\tau_\beta(-\vec n)$ and that the homogeneous extension of $\tau_\beta$ is convex\footnote{In the models we consider in this paper, this is a consequence of FKG inequality.}. To model the degree of attractiveness of the wall, we introduce a new thermodynamical quantity, the {\em wall free energy} $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$, which depends on both the inverse temperature $\beta$ and the ``chemical structure'' of the wall $\eta$, and modify the free energy functional accordingly, \begin{equation*} \cW_{\gb,\bdf}(V) \stackrel{\gD}{=} \cW_\gb(V) + (\tau_{\scriptscriptstyle\rm bd}(\beta,\eta) - \tau^*_\beta)\,\ensuremath{\mathcal H}^{(d-1)}(\partial V\cap \ensuremath{\mathfrak w})\,, \end{equation*} where $\tau^*_\beta \stackrel{\gD}{=} \tau_\beta(\vec{e}_d)$, $\vec{e}_d\in{\ensuremath{\mathbb R}} ^d$ with $\vec{e}_d(k)=\delta_{kd}$. The wall free energy therefore replaces the surface tension $\tau_\beta$ along the wall.
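The role of the wall term can already be seen on crude test shapes. The sketch below evaluates $\cW_{\gb,\bdf}$ in the isotropic case $\tau_\beta\equiv 1$, $d=3$ (so $\tau^*_\beta=1$), for a unit cube floating in the bulk versus the same cube sitting on the wall; this only illustrates the functional, it says nothing about the actual minimizer:

```python
TAU_STAR = 1.0  # tau_beta(e_d) for the isotropic choice tau_beta == 1

def floating_cube():
    """W_{beta,bd} of a unit cube away from the wall: six faces at tau = 1."""
    return 6.0

def attached_cube(tau_bd):
    """Unit cube sitting on the wall: net accounting gives five bulk faces at
    tau = 1, while on the contact face tau* is replaced by the wall free energy."""
    return 5.0 + tau_bd

for tau_bd in (1.0, 0.5, 0.0, -0.5, -1.0):
    print(tau_bd, attached_cube(tau_bd), attached_cube(tau_bd) <= floating_cube())
```

The attached cube is favored exactly when $\tau_{\scriptscriptstyle\rm bd}\leq\tau^*_\beta$, with strict gain below equality, consistent with the thermodynamic bound discussed next.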
At equilibrium, a thermodynamical stability argument shows that $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\leq \tau^*_\beta$ (this can also be proved in some microscopic models, see Part~\ref{part_boundary}), so that this last term is always non-positive. The new variational problem is \vskip 0.2cm \noindent $\left({\rm {\bf WBP}}\right)_v$\qquad\qquad $\cW_{\gb,\bdf} ( V) \longrightarrow {\rm min}$\qquad Given: $V\subset H$, ${\rm vol}(V) = v$ . \vskip 0.2cm \noindent It was first studied in~\cite{Winterbottom67} and is called the Winterbottom variational problem. Let us now discuss what its solution looks like. It turns out that there are three cases to consider: \begin{enumerate} \item $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)= \tau^*_\beta$\hfill \vspace*{1mm} In this case, $\cW_{\gb,\bdf}(V) = \cW_\gb(V)$ and therefore the solution is the Wulff shape associated to $\tau_\beta$. The equilibrium crystal is not attached to the wall. This can happen even if {\it a priori} the chemical structure of the wall is such that it is energetically favorable for the crystal to lie on the wall, see Part~\ref{part_boundary} for a discussion from a microscopic point of view. \vspace*{1mm} \item $\abs{\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)}< \tau^*_\beta$\hfill \vspace*{1mm} \begin{figure}[t] \centerline{ \psfig{file=winterbottom.ps,height=5cm}} \figtext{ \writefig -0.40 3.20 {\footnotesize $0$} \writefig 1.00 1.60 {\footnotesize $\ensuremath{\mathcal K}^{\rm w}$} \writefig -5.00 1.93 {\footnotesize $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)$} \writefig -3.20 1.80 {\footnotesize $\tau^*_\beta$} } \caption{The Winterbottom shape is obtained by taking the intersection between the Wulff shape and the half-space $\{x(d)\geq-\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)\}$, and rescaling the obtained body.} \label{fig_winterbottom} \end{figure} Now the wall is really attractive for the crystal shape.
The solution of the variational problem is given by a suitably rescaled version of the following set (see~Fig.~\ref{fig_winterbottom}), \begin{equation*} \ensuremath{\mathcal K}^{\rm w} \stackrel{\gD}{=} \ensuremath{\mathcal K} \cap \setof{x\in{\ensuremath{\mathbb R}} ^d}{x(d)\geq -\tau_{\scriptscriptstyle\rm bd}(\beta,\eta)} \end{equation*} so that the volume constraint is satisfied (notice that this variational problem is still scale invariant); see~\cite{KoteckyPfister94} for a simple proof. \vspace*{1mm} \item $\tau_{\scriptscriptstyle\rm bd}(\beta,\eta) = -\tau^*_\beta$\hfill \vspace*{1mm} This is a somewhat pathological case. Indeed, the variational problem is completely degenerate: it admits no bounded solution. A minimizing sequence is, for example, \begin{equation*} R_n = \setof{x\in H}{\abs{x(k)}\leq n,\, k=1,\dots,d-1,\, x(d)\leq n^{1-d}\, v}\,. \end{equation*} As $n\rightarrow\infty$, $R_n$ covers the whole wall with a film of vanishingly small width; the limiting value of the surface free energy functional is $0$. This describes the regime of so-called {\em complete wetting}, where the wall so strongly prefers the crystal that it wants to prevent any contact with the gas phase. \end{enumerate} \subsection{Microscopic justification} \label{ss_intro_pheno_sloppy} The microscopic models we consider here are simple lattice-gas-type models (in the magnetic interpretation), which will be defined precisely in the next section. The prototypical situation, in which the Wulff construction is expected to be recovered as a law of large numbers as the size of the microscopic system tends to infinity, can be loosely described as follows: Suppose that the particles of a certain substance live on the vertices of the integer lattice ${\ensuremath{\mathbb Z}} ^d$, so that each vertex of ${\ensuremath{\mathbb Z}} ^d$ can be either occupied by a particle or remain vacant.
Thus, various particle configurations $n$ could be labeled by points of $\{ 0,1\}^{{\ensuremath{\mathbb Z}} ^d}$, where one puts $n_i =1$ if there is a particle at site $i\in{\ensuremath{\mathbb Z}} ^d$, and $n_i =0$, otherwise. These random configurations are sampled from a Gibbs distribution ${\ensuremath{\mathbb P}} $, which takes into account the assumptions on the microscopic interactions between the particles. The strength of the interaction is quantified by the value $\beta =1/T$ of the inverse temperature; the larger $\beta$ (respectively the smaller the temperature $T$) is, the stronger is the interaction. In many instances sufficiently low temperatures give rise to two stable phases - the low density phase (which we call vapor) with an average particle density per site $\rho_l$ and the high density phase (crystal) with a corresponding average density $\rho_h$, $0 <\rho_l <\rho_h <1$. Suppose now that all the particles are confined to a large finite volume vessel $\Lambda_N\subset{\ensuremath{\mathbb Z}} ^d$, where the subindex $N$ indicates the linear size of $\Lambda_N$; we put for simplicity $|\Lambda_N|=N^d$. Let us fix $\rho\in (\rho_l ,\rho_h )$ and ask what are the typical geometric properties of particle configurations $n$ under the conditional measure ${\ensuremath{\mathbb P}} \left(~\cdot ~\big| \sum_{i \in \Lambda_N}n_i =\rho N^d\right) $. In other words, we fix the total number of particles $\rho N^d$ in such a way that it falls in-between the two stable values $\rho_l N^d$ and $\rho_h N^d$. The prototype law of large numbers result we have in mind is schematically: \begin{equation*} {\ensuremath{\mathbb P}} \left(\; \raise -2.4 truecm \hbox{\psfig{file=typical.ps,height=5cm}}\quad \left| \;\;\sum_{i\in\Lambda_N}n_i =\rho N^d\right.\right)~\longrightarrow ~1\,. 
\end{equation*} Thus, with an overwhelming ${\ensuremath{\mathbb P}} \left(~\cdot ~\big| \sum_{i \in \Lambda_N}n_i =\rho N^d\right) $-probability particle configurations $n$ on $\Lambda_N$, $n\in \{0,1\}^{\Lambda_N}$, obey the following phase segregation pattern: $\Lambda_N$ splits into two regions, $\Lambda_N =\Lambda_N^h\vee \Lambda_N^l$, where $\Lambda_N^h$ is occupied by the high density phase, and, respectively, $\Lambda_N^l$ by the low density one. The relative volume of $\Lambda_N^h$ can be recovered from the canonical constraint $$ \rho_h\big|\Lambda_N^h\big|~+~\rho_l\big|\Lambda_N^l\big|~=~\rho N^d $$ and the shape of $\Lambda_N^h$ is asymptotically Wulff. There is a long way to go even towards making the above statement precise: we should define the microscopic models, quantify the notion of phases, in particular of phases over finite volumes, and explain how the surface tension is produced in the large $N$ limit. \section{Microscopic Models}\label{ssec_intro_models} \setcounter{equation}{0} \subsection{Models with finite-range ferromagnetic 2-body interactions} We want to introduce mathematically precise realizations of the models discussed in Subsection~\ref{ss_intro_pheno_sloppy}. As described there, our interest lies in models of lattice gases. For simplicity we restrict our attention to a particular subclass of such models, which enjoys several nice properties: the Ising models with finite-range ferromagnetic 2-body interactions. We consider a family of random variables $n_i$, $i\in{\ensuremath{\mathbb Z}} ^d$, taking values $0$ and $1$. Any site $i$ of the lattice ${\ensuremath{\mathbb Z}} ^d$ is either occupied by a particle, in which case $n_i=1$, or empty, in which case $n_i=0$. The random variables $n_i$ are called {\em occupation numbers} and they completely describe a configuration of the lattice gas.
We consider a formal Hamiltonian of the form \begin{equation*} \tfrac 12 \sum_{i,j} K_{ij}\,n_i n_j\,, \end{equation*} where the 2-body interactions are such that $K_{ij}=K_{\normI{j-i}}$, $K_{ij}\geq 0$ and $K_{ij}=0$ if $\normI{i-j}>r$, with $r$ the {\em range} of the interaction. We introduce two parameters, the {\em chemical potential} $\mu$ and the {\em inverse temperature} $\beta$, and fix a finite set $\Lambda\Subset{\ensuremath{\mathbb Z}} ^d$. The Gibbs measure in $\Lambda$ with boundary condition $\overline{n}\in\{0,1\}^{{\ensuremath{\mathbb Z}} ^d}$ is the probability measure on $(\{0,1\}^{{\ensuremath{\mathbb Z}} ^d}, \ensuremath{\mathcal A})$, with $\ensuremath{\mathcal A}$ the usual product $\sigma$-field, defined by \begin{equation*} \lgm{\mu}{\overline{n}}(n) = \begin{cases} \frac 1{\PF^{\gb}_{\gL,\mu,\overline{n}}} \exp\bigl( \beta\mu\sum_{i\in\Lambda} n_i + \beta\displaystyle\sum_{\{i,j\}\cap\Lambda\neq\eset}K_{ij}\, n_i n_j \bigr) & \text{if $n_i = \overline{n}_i$, for all $i\not\in\Lambda$,} \\ 0 & \text{otherwise,} \end{cases} \end{equation*} where \begin{equation*} \PF^{\gb}_{\gL,\mu,\overline{n}} = \sum_{N\geq 0} e^{\beta\mu N} \sumtwo{n\,:}{\sum_{i\in\Lambda}n_i=N} e^{\beta\sum_{\{i,j\}\cap\Lambda\neq\eset} K_{ij}\, n_i n_j}\,. \end{equation*} Two types of boundary conditions are particularly relevant for us, the $\mathbf 1$ b.c., corresponding to setting $n\equiv 1$, and the $\mathbf 0$ b.c., $n\equiv 0$. We also need a different kind of boundary condition: The Gibbs measure in $\Lambda$ with {\em free} boundary conditions is the probability measure on $(\{0,1\}^\Lambda, \ensuremath{\mathcal F}_\Lambda)$ defined by \begin{equation*} \lgmfree{\mu}(n) = \frac 1{\PF^{\gb}_{\gL,\mu}} \exp\bigl( \beta\mu\sum_{i\in\Lambda} n_i + \beta\sum_{\{i,j\}\subset\Lambda} K_{ij}\, n_i n_j \bigr)\,.
\end{equation*} These measures describe the lattice gas in the {\em Grand Canonical Ensemble}, in which the total number of particles, or equivalently the {\em density} $\rho(n)=\frac 1{\abs\Lambda} \sum_{i\in\Lambda} n_i$, is not fixed. The description of a gas in the {\em Canonical Ensemble} corresponds to the conditioned measure \begin{equation*} \lgm{\mu}{\overline{n}} (\,\cdot\,|\,\rho(n)=\widetilde\rho)\,, \end{equation*} with $\widetilde\rho\in {\rm Range}(\rho)$ (this measure is obviously independent of $\mu$). The existence of the Gibbs states $\lgmiv{\mu}{\overline{n}}=\lim_{\Lambda\nearrow{\ensuremath{\mathbb Z}} ^d} \lgm{\mu}{\overline{n}}$, for $\overline{n}={\boldsymbol 0}$, $\boldsymbol 1$ or free, can be easily proved using correlation inequalities; moreover, it is unique if $\mu\neq -\tfrac12 \sum_j K_{0j}$. Restricting the chemical potential to the particular line $\mu=-\tfrac12 \sum_j K_{0j}$, it can be proved that there exists a critical value $\infty>{\gb_{\rm\scriptscriptstyle c}}>0$ such that \begin{itemize} \item For all $\beta<{\gb_{\rm\scriptscriptstyle c}}$, there is a unique Gibbs state and $\lgmiv{\mu}{\overline{n}}(\rho) = 1/2$. \item For all $\beta>{\gb_{\rm\scriptscriptstyle c}}$, $\rho_h(\beta) \equiv \lgmiv{\mu}{\mathbf 1}(\rho) > 1/2 > \lgmiv{\mu}{\mathbf 0}(\rho) \equiv \rho_l(\beta)$. \end{itemize} \medskip It is rather convenient to work with another, equivalent, formulation of these models, in which the symmetries present when $\mu=-\tfrac12\sum_j K_{0j}$ are more transparent; this is the {\em magnetic interpretation}. To do this, we introduce a new family of random variables $\sigma_i$, $i\in{\ensuremath{\mathbb Z}} ^d$, defined by \begin{equation*} \sigma_i = 2n_i-1\,. \end{equation*} The random variables $\sigma_i$ therefore take values in $\{-1,1\}$; $\sigma_i$ is called the {\em spin} at the site $i$.
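Substituting $n_i = \tfrac12 (1+\sigma_i)$ into the lattice-gas weights makes this correspondence, and the origin of the special line $\mu = -\tfrac12 \sum_j K_{0j}$, explicit: up to a configuration-independent constant, \begin{align*} \beta\mu\sum_{i} n_i + \beta\sum_{\{i,j\}} K_{ij}\, n_i n_j &= \beta\sum_{i} h_i\, \sigma_i + \beta\sum_{\{i,j\}} J_{ij}\, \sigma_i \sigma_j + {\rm const}\,,\\ J_{ij} = \tfrac14\, K_{ij}\,, \qquad h_i &= \tfrac12\, \mu + \tfrac14 \sum_{j\neq i} K_{ij}\,, \end{align*} so that the uniform magnetic field vanishes precisely on this line.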
Expressed in these variables, the model is defined through the following Gibbs measure in $\Lambda$ with boundary conditions $\overline{\sigma} \in \{-1,1\}^{{\ensuremath{\mathbb Z}} ^d}$, \begin{equation*} \Ism{\boldsymbol h}{\overline{\sigma}}(\sigma) = \begin{cases} \frac 1{\PF^{\gb}_{\gL,\overline{\gs},h}} \exp\bigl( \beta \displaystyle\sum_{i\in\Lambda} h_i\, \sigma_i + \beta\displaystyle\sum_{\{i,j\}\cap\Lambda\neq\eset} J_{ij}\, \sigma_i \sigma_j \bigr) & \text{if $\sigma_i = \overline{\sigma}_i$, for all $i\not\in\Lambda$,} \\ 0 & \text{otherwise,} \end{cases} \end{equation*} where $h_i\in{\ensuremath{\mathbb R}} $ are called the {\em magnetic fields} and the {\em coupling constants} $J_{ij}=J_{\normI{i-j}}$ satisfy $J_{ij}\geq 0$ and $J_{ij}= 0$ if $\normI{i-j}>r$. A configuration $\sigma$ such that $\sigma_i = \overline{\sigma}_i$, for all $i\not\in\Lambda$, is said to be {\em compatible with b.c. $\overline\sigma$ in $\Lambda$}; the set of all such configurations is denoted by $\Omega_{\Lambda,\overline\sigma}$. We are particularly interested in the $+$ and $-$ b.c. corresponding respectively to $\overline{\sigma}\equiv 1$ and $\overline{\sigma}\equiv -1$. The Gibbs measure in $\Lambda$ with free b.c. is the probability measure on $(\{-1,1\}^\Lambda,\ensuremath{\mathcal F}_\Lambda)$ defined by \begin{equation*} \Ismfree{\boldsymbol h}(\sigma) = \frac 1{\PF^{\gb}_{\gL,h}} \exp\bigl( \beta\sum_{i\in\Lambda} h_i\, \sigma_i + \beta\sum_{\{i,j\}\subset\Lambda} J_{ij}\, \sigma_i \sigma_j \bigr)\,. \end{equation*} Expectations with respect to these measures are denoted using bracket notation, $\bk{\,\cdot\,}^{\beta}_{\Lambda,\overline\sigma,\boldsymbol h}$, ...
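On very small boxes these measures can be computed by exact enumeration. The sketch below builds the free-b.c. Gibbs measure on a $2\times 2$ block, with the hypothetical choice of nearest-neighbor couplings $J\equiv 1$ and a uniform non-negative field, and checks one instance of the second Griffiths inequality recalled below:

```python
import itertools, math

# 2x2 block, sites 0..3 laid out as (0 1 / 2 3); nearest-neighbor bonds only
BONDS = [(0, 1), (2, 3), (0, 2), (1, 3)]

def gibbs(beta, h):
    """Free-b.c. Gibbs measure on the 2x2 block by exact enumeration."""
    weights = {}
    for s in itertools.product((-1, 1), repeat=4):
        energy = sum(s[i] * s[j] for i, j in BONDS) + h * sum(s)
        weights[s] = math.exp(beta * energy)
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}

def expect(mu, f):
    return sum(p * f(s) for s, p in mu.items())

mu = gibbs(beta=0.6, h=0.2)
m1 = expect(mu, lambda s: s[0])
m2 = expect(mu, lambda s: s[3])
corr = expect(mu, lambda s: s[0] * s[3])
print(m1, corr, corr >= m1 * m2)   # second Griffiths inequality holds
```

Sites $0$ and $3$ are not even directly coupled, yet the inequality $\bk{\sigma_0\sigma_3}\geq\bk{\sigma_0}\bk{\sigma_3}$ still holds, as GKS guarantees for any non-negative field.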
In the magnetic formulation, the {\em Canonical Ensemble} corresponds to fixing the value of the {\em magnetization} (density) $m(\sigma) = \frac1{\abs\Lambda} \sum_{i\in\Lambda}\sigma_i$, \begin{equation*} \Ism{\boldsymbol h}{\overline{\sigma}} (\,\cdot\,|\,m(\sigma)=\widetilde m)\,, \end{equation*} where $\widetilde m\in {\rm Range}(m)$. If $h_i\equiv h$ for all $i$, then the (infinite-volume) Gibbs states $\Ismiv{h}{\overline\sigma}$ for $+$, $-$ and free b.c. can be shown to exist; the Gibbs state is always unique when $h\neq 0$. The phase transition statement now takes the following (simpler) form: There exists $\infty>{\gb_{\rm\scriptscriptstyle c}}>0$ such that \begin{itemize} \item For all $\beta<{\gb_{\rm\scriptscriptstyle c}}$, the Gibbs state is unique and $\bk{m}^\beta_{\overline{\sigma},0} = 0$. \item For all $\beta>{\gb_{\rm\scriptscriptstyle c}}$, $m^*(\beta) \equiv \bk{m}^\beta_{+,0} > 0 > \bk{m}^\beta_{-,0} = -m^*(\beta)$. \end{itemize} We will use the terminology {\em Ising models} to refer to the lattice gases in the magnetic formulation. When $h=0$, we will generally omit it from the notations. \medskip Ferromagnetic models are particularly well-suited for non-perturbative analyses. Indeed, they enjoy several very useful qualitative properties, most of which take the form of correlation inequalities.
Of particular importance for us are the following statements ($\sigma_A \stackrel{\gD}{=} \prod_{i\in A}\sigma_i$): \begin{align*} \bk{\sigma_A}^{\beta}_{\Lambda,\boldsymbol h} &\geq 0\,, \\ \bk{\sigma_A\sigma_B}^{\beta}_{\Lambda,\boldsymbol h} &\geq \bk{\sigma_A}^{\beta}_{\Lambda,\boldsymbol h} \bk{\sigma_B}^{\beta}_{\Lambda,\boldsymbol h}\,, \\ \intertext{provided $h_i\geq 0$ for all $i$ (1st and 2nd Griffiths', or GKS, inequalities~\cite{Griffiths72,KellySherman68}); also,} \frac{\partial^2}{\partial h_i\partial h_j}\,\bk{\sigma_k}^{\beta}_{\Lambda,\boldsymbol h} &\leq 0\,, \intertext{for all $i$, $j$ and $k$, provided $h_l\geq 0$ for all $l$ (GHS inequalities~\cite{GriffithsHurstSherman70}); finally} \bk{fg}^{\beta}_{\Lambda,\boldsymbol h} &\geq \bk{f}^{\beta}_{\Lambda,\boldsymbol h} \bk{g}^{\beta}_{\Lambda,\boldsymbol h}\,, \end{align*} for any increasing\footnote{A function $f:\{-1,1\}^{{\ensuremath{\mathbb Z}} ^d}\rightarrow{\ensuremath{\mathbb R}} $ is {\em increasing} if $f(\sigma)\geq f(\sigma')$ as soon as $\sigma_i\geq \sigma'_i$, for all $i$; it is called {\em decreasing} if $-f$ is increasing.} functions $f$ and $g$, and any ${\boldsymbol h}\in{\ensuremath{\mathbb R}} ^\Lambda$ (FKG inequality~\cite{FortuinKasteleynGinibre71}). Observe that any b.c. can be obtained starting with free b.c. and applying suitable magnetic fields on the spins on the inner boundary of $\Lambda$, where the {\em inner boundary} of a set $A\subset{\ensuremath{\mathbb Z}} ^d$ is defined as \begin{equation*} \partial_{\rm in} A \stackrel{\gD}{=} \setof{i \in A}{\exists j \not\in A,\, i \sim j}\,, \end{equation*} where $i \sim j$ means that $J_{i,j} \not = 0$. Similarly, we define the {\em (exterior) boundary} of $A$ by \begin{equation*} \partial A \stackrel{\gD}{=} \setof{i \not\in A}{\exists j \in A,\, i \sim j}\,. 
\end{equation*} \subsection{2D nearest-neighbors ferromagnetic Ising model} \label{ssec_2dIsing} A particularly simple member of the above-mentioned class of models is the two-dimensional nearest-neighbors Ising model, in which $J_{ij}=0$ if $i$ and $j$ are not nearest-neighbors, and $J_{ij}=1$ if they are. This model has additional remarkable features. First, even though this only plays a very marginal role in this review, it is the only one for which it is possible to compute explicitly various quantities (free energy, surface tension, correlations, ...). Of more importance for our purposes is the property of {\em self-duality}\footnote{The fact that this model is {\em self}-dual is very convenient, but is not required anywhere. What we need is to be able to control precisely the dual of the model; for example, the Ising model on the hexagonal lattice is not self-dual, but it would be possible to prove the same kind of statements for this model as for the one on the square lattice.} that it enjoys. The nearest-neighbors model admits a geometric description in terms of very simple objects, the contours. To define contours in the present context, it is useful to introduce the notion of the dual of the lattice ${\ensuremath{\mathbb Z}} ^2$. The {\em dual lattice} is the set of dual sites \begin{equation*} {\ensuremath{\mathbb Z}} ^2_\star = \setof{x\in{\ensuremath{\mathbb R}} ^2}{x+(\tfrac12,\tfrac12) \in {\ensuremath{\mathbb Z}} ^2}\,. \end{equation*} To each edge $e=\bk{x,y}$, $x,y\in{\ensuremath{\mathbb Z}} ^2$, we associate a dual edge $e^*$ connecting nearest-neighbor dual sites, which is the unique such edge intersecting $e$ (as subset of ${\ensuremath{\mathbb R}} ^2$). Now, if we consider the Ising model in $\Lambda\Subset{\ensuremath{\mathbb Z}} ^2$ with b.c.
$\overline\sigma$, a configuration $\sigma\in\Omega_{\Lambda,\overline\sigma}$ is entirely determined by giving the following set of dual edges, \begin{equation*} \setof{e^*}{e^* \text{ dual to } e=\bk{i,j},\, \{i,j\}\cap\Lambda\neq\eset,\,\sigma_i\sigma_j=-1}\,. \end{equation*} The maximal connected components of these dual edges, seen as closed line segments in ${\ensuremath{\mathbb R}} ^2$, are called {\em contours}. We denote by $\boldsymbol\gga(\sigma)$ the contours of the configuration $\sigma$. The {\em boundary} $\partial\gga$ of a contour $\gga$ is the set of all dual sites belonging to an odd number of the dual edges composing $\gga$. A contour is said to be {\em closed} if $\partial\gga=\eset$, otherwise it is {\em open}. A set $\Lambda\Subset{\ensuremath{\mathbb Z}} ^2$ is {\em simply connected} if $\union_{i\in\Lambda}\setof{x\in{\ensuremath{\mathbb R}} ^2}{\normsup{x-i}\leq 1/2}$ is a simply connected subset of ${\ensuremath{\mathbb R}} ^2$. Given $\Lambda\subset{\ensuremath{\mathbb Z}} ^2$, its {\em dual} is $\Lambda^*=\setof{i\in{\ensuremath{\mathbb Z}} ^2_\star}{\exists j\in\Lambda,\, \normsup{j-i}=1/2}$. A family of contours is said to be {\em $\Lambda^*$-compatible} if they are disjoint (as sets of bonds and sites) and are included in $\Lambda^*$. A family of contours $\boldsymbol\gga$ is said to be {\em $(\Lambda,\overline\sigma)$-compatible} if there exists a configuration $\sigma\in\Omega_{\Lambda,\overline\sigma}$ such that $\boldsymbol\gga(\sigma) = \boldsymbol\gga$. It is easy to show that for simply connected $\Lambda$, $\Lambda^*$-compatibility of a family of closed contours is equivalent to $(\Lambda,+)$-compatibility. 
The measure $\Is^{\beta}_{\Lambda,\overline\sigma}$ can be easily written in terms of these objects; for any $\sigma\in\Omega_{\Lambda,\overline\sigma}$, \begin{equation}\label{eq_contours} \Is^\beta_{\Lambda,\overline\sigma}(\sigma) = \frac 1{Z^\beta_{\overline\sigma}(\Lambda)} \exp\{ -2\beta\sum_{\gga\in\boldsymbol\gga(\sigma)} \abs\gga \}\,, \end{equation} where $\abs{\gga}$ is the number of edges in $\gga$ and \begin{equation}\label{eq_dualLT} Z^\beta_{\overline\sigma}(\Lambda) = \sum_{\boldsymbol\gga\text{ $(\Lambda,\overline\sigma)$-comp.}} \exp\{ -2\beta\sum_{\gga\in\boldsymbol\gga} \abs\gga \} \equiv \sum_{\boldsymbol\gga\text{ $(\Lambda,\overline\sigma)$-comp.}} \prod_{\gga\in\boldsymbol\gga}w(\gga;\beta) \,. \end{equation} \medskip We now discuss the property of self-duality. Let $\Lambda\Subset{\ensuremath{\mathbb Z}} ^2$ be simply connected. We consider the model at inverse temperature $\beta^*$ in the box $\Lambda^*\Subset{\ensuremath{\mathbb Z}} ^2_\star$, with free boundary conditions. There exists another graphical representation for this model, the {\em high-temperature} representation, which results from writing \begin{equation*} e^{\beta^*\sigma_i\sigma_j} = \cosh\beta^* (1+\sigma_i\sigma_j\tanh\beta^*)\,, \end{equation*} opening all the brackets and expanding. After a simple summation over $\sigma$, this yields \begin{align}\label{eq_dualHT} Z^{\beta^*}_{\Lambda^*} = C(\Lambda)\, \sum_{\boldsymbol\gga\text{ $\Lambda^*$-comp.}} (\tanh\beta^*)^{\sum_{\gga\in\boldsymbol\gga} \abs\gga} &\equiv C(\Lambda)\, \sum_{\boldsymbol\gga\text{ $\Lambda^*$-comp.}} \prod_{\gga\in\boldsymbol\gga} w^*(\gga;\beta^*)\nonumber\\ &\equiv C(\Lambda)\,Z^{\beta^*}(\Lambda^*) \,, \end{align} where $C(\Lambda)$ is some constant which only depends on the set $\Lambda$. Setting $\tanh\beta^*=e^{-2\beta}$, we see from \eqref{eq_dualLT} and \eqref{eq_dualHT} that $Z^\beta_+(\Lambda)=Z^{\beta^*}(\Lambda^*)$, since $\Lambda$ is simply connected. 
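The duality relation $\tanh\beta^*=e^{-2\beta}$ defines an involution $\beta\mapsto\beta^*$ with a unique fixed point, which solves $\sinh 2\beta=1$, i.e. $\beta=\tfrac12\log(1+\sqrt 2)$; for the nearest-neighbors model this self-dual point is known to coincide with the critical inverse temperature. A short numerical check:

```python
import math

def dual(beta):
    """beta* defined by tanh(beta*) = exp(-2*beta)."""
    return math.atanh(math.exp(-2.0 * beta))

# iterate beta -> (beta + dual(beta))/2 to locate the self-dual point;
# the damping is needed because dual'(beta) = -1 exactly at the fixed point
beta = 1.0
for _ in range(200):
    beta = 0.5 * (beta + dual(beta))

print(beta)                               # ~0.440687
print(0.5 * math.log(1 + math.sqrt(2)))   # the closed form, same value
```

The damped iteration converges quadratically, since the involution property forces the derivative of the duality map to be $-1$ at its fixed point.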
In the same way, we can expand the 2-point function, for example, and get the following very useful identity \begin{equation}\label{eq_randomline} \bk{\sigma_i\sigma_j}^{\beta^*}_{\Lambda,+} = \sum_{\lambda:i\rightarrow j}q^{\beta^*}_{\Lambda^*}(\lambda)\,, \end{equation} where the sum is over all open contours $\lambda$ such that $\partial\lambda=\{i,j\}$, and \begin{align*} q^{\beta^*}_{\Lambda^*}(\lambda) &= w^*(\lambda;\beta^*) \, \frac {Z^{\beta^*}(\Lambda^*\,|\,\lambda)} {Z^{\beta^*}(\Lambda^*)}\,,\\ Z^{\beta^*}(\Lambda^*\,|\,\lambda) &= \sumtwo{\boldsymbol\gga \text{ closed}}{(\boldsymbol\gga, \lambda) \text{ $\Lambda^*$-comp.}} \prod_{\gga\in\boldsymbol\gga} w^*(\gga;\beta^*)\,.\nonumber \end{align*} Identity \eqref{eq_randomline} is the so-called {\em random-line representation} for the 2-point function of the Ising model, and plays a basic role in the approach to the DKS theory of Part~\ref{part_strongWulff} (see \cite{PfisterVelenik97, PfisterVelenik98} for many more details on this topic). What is particularly useful is that the weights $q^{\beta^*}_{\Lambda^*}$, which we have defined for an open contour, can be immediately extended to any family of $\Lambda^*$-compatible contours (closed or open). In particular, if $\boldsymbol\gga$ is a family of $\Lambda^*$-compatible {\em closed} contours, then the following identity holds \begin{equation*} q^{\beta^*}_{\Lambda^*}(\boldsymbol\gga) = \Is^\beta_{\Lambda,+}(\boldsymbol\gga \subseteq \boldsymbol\gga(\,\cdot\,))\,. \end{equation*} Applications and further results about the random-line representation are given in Section~\ref{dima_skeletons} and in Part~\ref{part_boundary}. The results stated above also hold when the coupling constants are allowed to vary from edge to edge, provided they remain ferromagnetic; if we denote by $J(e)$ the coupling constant at edge $e$, then the duality relation takes the form \begin{equation}\label{eq_dualitygeneral} \tanh(\beta^*J^*(e^*)) = e^{-2\beta J(e)}\,.
\end{equation} \subsection{Kac models} In the original van der Waals Theory, the occurrence of phase transitions is due to long range attractive forces between molecules. In its statistical mechanics formulation, these forces are described by Kac potentials that depend on a positive scaling parameter $\gep$ which controls the strength and the range of the potential (see \cite{KUH}). The first probabilistic approach to this model was made in the celebrated paper of Lebowitz and Penrose \cite{LebPenrose}.\\ In dimension $d$, Ising systems with Kac potentials are defined by Gibbs measures with potentials depending on a scaling parameter $\gep>0$ $$ \forall \, i,j \in {\ensuremath{\mathbb Z}} ^d, \qquad J^\gep_{i,j} = \gep^d J(\gep \normII{i-j} ) \; , $$ where $J$ is a non-negative, smooth function supported in $[0,1]$, normalized so that $$ \int_{{\ensuremath{\mathbb R}} ^d} \! dr \, J(\normII{ r }) = 1. $$ The Gibbs measure on the domain $\Lambda$ is denoted by $\Is^\beta_{\gep,\Lambda}$. The parameter $\gep$ will be small, so that the system has a finite but long interaction range. It is convenient to consider interaction parameters of the form $\gep =2^{-m}$ ($m$ is typically assumed to be large but fixed). This model interpolates between the finite range models and the mean field models. In particular, if the range of the interaction, i.e. $\gep^{-1}$, is scaled proportionally to the number of spins, then the statistical properties of the system can be recovered from a mean field functional. In the true thermodynamic limit, when $\gep$ is kept fixed while the number of spins goes to infinity, the behavior of the system cannot be described by the mean field continuum limit. Nevertheless, by localizing in finite size regions it is possible to derive some information from the mean field functional. This strategy was used to recover the phase diagram of the model and to prove that it is arbitrarily close to that of the mean field model when $\gep$ goes to 0.
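The normalization of the Kac couplings can be checked numerically: $\sum_j J^\gep_{0j}$ is a Riemann sum for $\int_{{\ensuremath{\mathbb R}} ^d} J(\normII{r})\,dr=1$. A sketch in $d=1$, with the hypothetical smooth bump $J(t)=\tfrac{15}{16}(1-t^2)^2$ on $[0,1]$ (the constant is chosen so that the integral over the line equals one):

```python
def J(t):
    """Hypothetical smooth bump on [0,1]; the prefactor 15/16 makes
    the integral of J(|r|) over the real line equal to 1."""
    return (15.0 / 16.0) * (1.0 - t * t) ** 2 if t <= 1.0 else 0.0

def kac_coupling_sum(m, d=1):
    """sum over j of J^eps_{0j} with eps = 2**-m; tends to 1 as m grows."""
    eps = 2.0 ** (-m)
    R = int(1.0 / eps) + 1          # the coupling vanishes beyond range 1/eps
    return sum(eps ** d * J(eps * abs(i)) for i in range(-R, R + 1))

for m in (2, 4, 8):
    print(m, kac_coupling_sum(m))   # -> 1 as eps -> 0
```

Already for moderate $m$ the discrete sum is very close to $1$, since the Riemann-sum error vanishes rapidly for a smooth bump.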
More precisely, let us recall the following result which has been proven by Cassandro and Presutti \cite{CP} and by Bovier and Zahradn\'ik \cite{BZ} (see also \cite{BP}) \begin{thm} For any $\beta >1$, there is $\gep_0 >0$ such that for any $\gep$ smaller than $\gep_0$ a phase transition occurs and there are at least two distinct pure phases $\Is_{\gep}^+$ and $\Is_{\gep}^-$. \end{thm} \noindent If $\beta >1$, there is a breaking of symmetry and the spontaneous magnetization is denoted by $\Is_{\gep}^+ (\sigma_0) = m^*_\gep$. Define $m^* = \lim_{\gep \to 0} m^*_\gep$. This theorem was proven via a renormalization procedure which we shall describe in Subsection \ref{subsection Kac potentials}. \subsection{Surface tension} \label{subsection Surface tension} \begin{figure}[t] \centerline{ \psfig{file=surfacetension.ps,height=6cm}} \figtext{ \writefig -3.10 1.70 {\footnotesize $M$} \writefig 1.30 1.40 {\footnotesize $N$} \writefig 0.60 5.30 {\footnotesize $\Lambda(N,M)$} \writefig 0.36 3.50 {\footnotesize $\vec{e}_1$} \writefig -0.30 3.66 {\footnotesize $\vec{n}$} \writefig 1.50 3.65 {\footnotesize $\gga$} } \caption{Definition of the surface tension.} \label{surface_tension} \end{figure} We fix $\vec{n}$ a vector in ${\ensuremath{\mathbb S}} ^{d-1}$ and consider an orthonormal basis $(\vec{e}_1, \dots, \vec{e}_{d-1}, \vec{n})$. Let ${\widehat \Lambda}(N,M)$ be the parallelepiped of ${\ensuremath{\mathbb R}} ^d$ centered at 0 with side length $N$ for the sides parallel to $(\vec{e}_1, \dots, \vec{e}_{d-1})$ and side length $M$ for the sides parallel to $\vec{n}$. The microscopic counterpart of ${\widehat \Lambda}(N,M)$ is denoted by ${\Lambda}(N,M)$. The boundary $\partial \Lambda(N,M)$ is split into two sets \begin{eqnarray*} \partial^+_{\vec{n}} \Lambda(N,M) & = & \{ i \in \partial \Lambda(N,M) \; | \; \vec{i}.\vec{n} \geq 0\},\\ \partial^-_{\vec{n}} \Lambda(N,M) & = & \{ i \in \partial \Lambda(N,M) \; | \; \vec{i}.\vec{n} < 0\}.
\end{eqnarray*} We fix the boundary conditions outside $\Lambda(N,M)$ to be equal to 1 on $\partial^+_{\vec{n}} \Lambda(N,M)$ and to $-1$ on $\partial^-_{\vec{n}} \Lambda(N,M)$. The corresponding partition function on $\Lambda(N,M)$ is denoted by ${\bf Z}^\beta_{\Lambda(N,M),\vec{n}, \pm}$. Notice that any configuration $\sigma$ contributing to the partition function ${\bf Z}^\beta_{\Lambda(N,M),\vec{n}, \pm}$ contains a $\pm$-contour $\gamma$ which crosses $\Lambda(N,M)$ in the ``averaged'' direction orthogonal to $\vec{n}$ (Fig.~\ref{surface_tension}). Such a contour is absent in the configurations $\sigma$ contributing to partition functions ${\bf Z}^\beta_{\Lambda(N,M), + }$ with pure boundary conditions on $\partial \Lambda(N,M)$. This contour represents the microscopic $\pm$-interface corresponding to the direction $\vec{n}$. \medskip \noindent {\bf Definition :} The surface tension in the direction $\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}$ is defined\footnote{Notice that surface tension is sometimes defined with an extra multiplicative factor $\frac{1}{\beta}$.} by \begin{equation} \label{tau} \tau_\beta (\vec{n}) = \lim_{N \to \infty} \; \lim_{M \to \infty} \; - {1 \over N^{d-1}} \log { {\bf Z}^\beta_{\Lambda(N,M),\vec{n}, \pm } \over {\bf Z}^\beta_{\Lambda(N,M), + }}. \end{equation} \qed \noindent The proof of the existence of the surface tension can be found in many papers (\cite{Ab}, \cite{Pfister}, to mention a few). A general approach has been developed by Messager, Miracle-Sole and Ruiz \cite{miracle}. The core of their proof is the sub-additivity of the sequence of finite-volume approximations to $\tau_\beta (\vec{n})$, which is obtained by means of the FKG inequality. The proof is also valid for a wide range of models, such as Ising models with finite range interactions, Potts models and SOS models. Furthermore, they showed that the surface tension can be defined with parallelepipeds $\Lambda(N,M_N)$, where $M_N$ is a function of $N$ which diverges as $N$ goes to infinity.
More general domains can also be considered, provided they contain a parallelepiped of the type $\Lambda(N,M_N)$. The convexity of the homogeneous extension of $ \tau_\beta$ (see (\ref{tau tilde})) is a consequence of the pyramidal inequality proven in Theorem 3 of \cite{miracle}: Let $A_0, \dots, A_d$ be $d+1$ points of ${\ensuremath{\mathbb R}} ^d$ and denote by $(\Delta_i)_{i \le d}$ the faces of the simplex defined by these points, $\Delta_i$ being the face opposite to $A_i$. Let $\vec{n}_i$ be the unit normal to $\Delta_i$ and $|\Delta_i|$ its area. Then the pyramidal inequality states that \begin{eqnarray*} |\Delta_0| \, \tau_\beta (\vec{n}_0) \leq \sum_{i =1}^d |\Delta_i| \, \tau_\beta (\vec{n}_i). \end{eqnarray*} Note also that the homogeneous extension of $\tau_\beta$ is continuous because it is locally bounded and convex. Furthermore, $\tau_\beta$ is uniformly positive on ${\ensuremath{\mathbb S}} ^{d-1}$. This follows from the fact that the surface tension $\tau_\beta (\vec{n}_0)$ in the direction $\vec{n}_0 = (1,0, \dots ,0)$ is strictly positive as soon as $\beta$ is larger than $\beta_c$ (see Lebowitz and Pfister \cite{LebPfister}). \section{Scope of the theory} \setcounter{equation}{0} The key notion behind the attempts to give a rigorous meaning to the type of phase segregation phenomena which have been vaguely discussed in Subsection~\ref{ss_intro_pheno_sloppy} is that of {\bf renormalization} or {\bf coarse graining}. The energy (probability) competes with the entropy (number) of microscopic configurations in the corresponding energy shells. Macroscopic quantities like the surface tension are produced in the aftermath of the entropy/energy cancelation, which is to say that, in order to derive large-$N$ asymptotics ($N$ being the linear size of the system), one should renormalize appropriate microscopic objects. The appropriate objects here are, of course, microscopic phase boundaries, which decouple between different ``large'' microscopic phase regions.
These renormalization procedures can follow two different trends, depending on whether the renormalized (mesoscopic) structures keep track of the microscopic or macroscopic state of the system. \subsection{Dobrushin-Koteck\'{y}-Shlosman Theory} The coarse graining of the DKS theory closely follows microscopic phase segregation patterns. Basic tools comprise a fluctuation analysis of the microscopic phase boundaries and sharp uniform local limit estimates over domains encircled by such boundaries. Thus, the notion of finite volume phases is quantified by the rate of relaxation of the statistics of microscopic observables inside the microscopic phase regions towards the corresponding equilibrium values. The theory has been developed using the low-temperature cluster expansions in the seminal monograph \cite{DKS}. Our exposition in Part~3 is non-perturbative and follows the works \cite{Pfister}, \cite{I1}, \cite{I2}, \cite{PfisterVelenik97}, \cite{ScS2} and \cite{IS}. By and large, the existing results are confined to the simplest two-dimensional models (percolation and nearest neighbor Ising). \subsection{${\ensuremath{\mathbb L}} _1$-Theory} The renormalization approach of the ${\ensuremath{\mathbb L}} _1$-theory is, in a sense, opposite to that of DKS. In the latter case the principal coarse grained objects (skeletons, see Part~\ref{part_strongWulff}) are built upon underlying families of large {\bf microscopic} contours. Such information is washed out in the ${\ensuremath{\mathbb L}} _1$-approach, and the basic renormalization objects here are the local (mesoscopic) order parameters or, in the spin language, locally averaged magnetizations on various length scales. The idea is that on sufficiently large scales local averages of the magnetization are, with an overwhelming probability, close to one of the two equilibrium values $\pm m^*$. Thus, under the renormalization, configurations are characterized by their phase labels on different mesoscopic blocks.
The objective of the ${{\ensuremath{\mathbb L}} }_1$-theory is to describe typical mesoscopic magnetization profiles (or their phase labels) under a relaxed canonical constraint of shell type. Unlike in the DKS case, the mesoscopic phase labels are classified by their proximity to various {\bf macroscopic} states. The combinatorial complexity of this approximation is reduced by an exponential tightness property of the mesoscopic phase labels (for a general claim of this sort see Theorem~\ref{thm Compactness}), which enables one to restrict attention to ${\ensuremath{\mathbb L}} _1$-compact subsets of feasible macroscopic states, namely to the phase-sets of finite perimeter. The core of the compactness estimates is based on the renormalization decoupling techniques introduced in \cite{Pisztora1} and on the methods developed to control the phase of small contours by \cite{I2}, \cite{PfisterVelenik97}, \cite{ScS2} and \cite{IS}. These techniques are robust enough to be applied on a renormalized scale in any dimension in a non-perturbative setting. Our exposition in this review is based on the work of \cite{Bo} with, though, one exception -- we specifically stress that all the relevant estimates of the ${\ensuremath{\mathbb L}} _1$-theory are obtained on appropriate {\bf finite} scales. The validity of Lemma \ref{lem surface tension} up to the slab percolation threshold follows from the results of \cite{CePi}. \medskip \subsection{Boundary Phenomena} Parts~\ref{part_weakWulff} and \ref{part_strongWulff} provide a derivation of the Wulff construction from the basic principles of Equilibrium Statistical Mechanics. Part~\ref{part_boundary} is concerned with a study of the effect of the boundary conditions on the macroscopic geometry of the phase separation. In particular, it is shown how the interaction with the boundary of the vessel can be analyzed, and used to provide a derivation of the Winterbottom construction.
The relationship between the macroscopic geometry in this case and the wetting transition is also discussed. The presentation follows~\cite{PfisterVelenik97} for the 2D case, and~\cite{BodineauIoffeVelenik99} for the higher-dimensional ones. \medskip \subsection{Bibliographical review} The rigorous investigation of the macroscopic geometry of phase separation under a canonical constraint certainly started with two seminal papers of Minlos and Sinai in 1967-68~\cite{MinlosSinai67, MinlosSinai68}. In these papers, the authors considered nearest-neighbor very low temperature Ising models in arbitrary dimensions $d\geq 2$, even though they only wrote down the proof explicitly in the case $d=2$. Their results could be roughly stated in the following way: At sufficiently low temperatures, typical configurations of the Ising model in the exact canonical ensemble over finite vessels of linear size $N$ consist of a single large contour, whose shape is ``nearly a square'', whereas the rest of the contours are small, that is at most of the order $\log N$. This is the picture of low temperature excitations of canonical ground states, and it has been treated by the authors as such. In particular, the entropic factor has frequently been suppressed by the microscopic energy cost. However, exact asymptotic results on the level of a microscopic justification of the Wulff construction depend, even at very low but still non-zero temperatures, on a non-trivial entropy/energy competition, and, hence, could not be derived in this way. Then there followed 15-20 years of relative stagnation, the only contributions to the area being confined to generalizations of~\cite{MinlosSinai67, MinlosSinai68} to more complicated models~\cite{Kuroda82}. Popular interest in the problem was revived towards the mid-eighties, in the framework of an on-going interplay between probability and statistical mechanics \cite{Roberto}, \cite{FollmerOrt}, \cite{JoelRoberto}, \cite{CCSc}.
A breakthrough occurred around 1989, when Dobrushin, Koteck\'y and Shlosman found a way to derive the Wulff shape in a scaling limit of the low temperature 2D Ising model. They found much more: Essentially, the monograph \cite{DKS} sets up a comprehensive mathematical theory of phase segregation. This theory happened to be an intrinsically probabilistic one. The DKS approach is, above all, to quantify the phenomenon of phase separation in terms of probabilistic limit theorems and, accordingly, to study the probabilistic structures related to the canonical states. Thus, in sharp contrast with most of the preceding works, the ideology of \cite{DKS} has been from the start a very robust one and, actually, pertained to the whole of the phase transition region. It could be implemented, however, only at very low temperatures, since the authors used low temperature cluster expansions as the principal tool for proving the corresponding probabilistic theorems. The ideas of \cite{DKS} did not wait long to inspire a wave of investigations, even before the draft of the work started to circulate. Two subsequent works of fundamental importance are \cite{Pfister}, where an alternative simplified proof of parts of the DKS results has been given using techniques which are specific to the 2D Ising model, like self-duality, and \cite{ACC}, where the Wulff construction has been derived in the context of 2D Bernoulli percolation, but in a completely non-perturbative fashion, that is down to the percolation threshold $1/2$. In both instances the exact canonical setting has been replaced by shell-type integral constraints, and, respectively, softer integral-type limit results have been used instead of the local estimates of the original DKS theory.
The results and techniques of \cite{ACC} and \cite{Pfister} have been combined with the profound renormalization ideas of \cite{Pisztora1}, and led to an extension of this weak integral approach to the Wulff construction in the whole of the 2D Ising phase coexistence region \cite{I1}, \cite{I2}. Simpler proofs of some of the basic estimates of these two works (e.g.\ estimates in the phases of small contours or skeleton lower bounds) have been found in \cite{SS}, \cite{ScS1}, and the integral version of the two-dimensional DKS theory has been essentially completed in \cite{PfisterVelenik97}, the estimates of the latter work being already optimal along the lines of the integral approach. Furthermore, Pfister and Velenik~\cite{PfisterVelenik96, PfisterVelenik97} investigated the effect of boundary conditions, and in particular studied the effect of an arbitrary boundary magnetic field, thus providing a derivation of the Winterbottom construction. In spite of these successes, a non-perturbative treatment of the full DKS theory was still out of reach, because a key ingredient was missing: only rough estimates were available in the phase of small contours. By proving a local limit theorem in the phase of small contours, Ioffe and Schonmann were finally able to provide a non-perturbative version of the strong Wulff theory \cite{IS}. The techniques of \cite{IS} are based on improved versions of asymptotic expansions in metastable cutoff phases developed in \cite{ScS2}. In principle, the two-dimensional DKS theory should lead to exact expansions of canonical partition functions up to zero-order terms. This, however, requires superb control over the statistical behavior of microscopic phase boundaries, which is currently beyond reach for the Ising model at moderately low temperatures. Some progress, though, has been reported at very low temperatures \cite{DH}, \cite{H}, or in the case of simplified models \cite{HI}.
Finally, it should be noted that at moderately low temperatures the success of the DKS theory in two dimensions has been by and large confined to the Ising and percolation models, and that there are serious technical and possibly theoretical challenges to extend it to more general two-dimensional models (see Section~\ref{dima_problems} for more on this). On the other hand, as it has been communicated to us, an appropriate version of the low temperature DKS theory (as originally developed in \cite{DKS}) should apply to any 2-phase model in the realm of the Pirogov-Sinai theory~\cite{Senya}. There is a strong interplay between the dynamical properties of the Ising model and its behavior in equilibrium: in the absence of a phase transition, the correlations at equilibrium are related to the exponential relaxation of the system; whereas, when a phase transition occurs, the dynamics is driven by the evolution of droplets (nucleation, motion by mean curvature, etc.). We will not enter into details and simply refer to the seminal paper on metastability by Schonmann and Shlosman \cite{ScS2} and to the lecture notes by Martinelli \cite{Martinelli} (and references therein) for a survey of the recent works. Let us just mention that, as far as phase coexistence is concerned, many dynamical results are only valid in dimension 2, because of the absence of a precise description of the equilibrium properties in higher dimensions. \bigskip While the 2D case was subject to rapid progress, the best results for higher dimensions remained for a long time those of Minlos and Sinai. The turning point of the latest developments should be traced back to the seminal works by Pisztora~\cite{Pisztora1} and by Cassandro and Presutti~\cite{CP}, where crucial renormalization decoupling estimates have been established in the case of the nearest neighbour Ising and, respectively, Kac interactions.
The basic philosophy of the ${\ensuremath{\mathbb L}} _1$-approach has been originally developed in the works \cite{ABCP}, \cite{BCP}, \cite{BBBP}, \cite{BBP} in the context of Ising systems with Kac potentials, and, in a less explicit way, elements and ideas of the theory already appeared in \cite{ACC}, \cite{Pisztora1}, \cite{I2} and \cite{PfisterVelenik97}. Using an embedding of the renormalized observables into a continuum setting, Alberti, Bellettini, Cassandro and Presutti \cite{ABCP}, \cite{BCP} emphasized the appropriateness of the geometric measure theory setting, introduced relevant analytic approximation procedures (see Subsection~2.6.1) and proved large deviation bounds for the appearance of a droplet of the minority phase in a scaling limit when the size of the domain diverges not much faster than the range of the Kac potentials. In this scaling the system can be controlled by a continuum limit, via the $\Gamma$-convergence of functionals associated to the spin system \cite{ABCP} and by compactness arguments~\cite{BCP}. The approach of \cite{ABCP} and \cite{BCP} has been extended by Benois, Bodineau, Butta and Presutti \cite{BBBP}, \cite{BBP} to the case when the range of the interaction remains fixed and does not change with the size of the system. The latter works are already structured in a way very similar to the one we expose here. Thus the main steps of \cite{BBBP} and \cite{BBP} comprise the coarse-graining of the rescaled magnetization profiles by their ${{\ensuremath{\mathbb L}} _1}$-proximity to various continuum sets of finite perimeter, surgery procedures to confine interfaces to tubes around the boundaries of such sets, and exponential tightness arguments to reduce the combinatorial complexity of the rescaled problem. The essential model-related input has been provided by the decoupling estimates on the renormalized magnetization \cite{CP}, \cite{BZ} and by the result on the instanton structure of Kac interfaces \cite{DOPT1, DOPT2}.
The latter structure, however, yields only approximate bounds at each fixed finite interaction range. Consequently, the exact (van der Waals) surface tension could be recovered only when the range of the interaction tends to infinity, that is only in the Lebowitz-Penrose limit. Nevertheless, at long but finite range interactions one could say that the typical mesoscopic configurations concentrate on droplets with ${\ensuremath{\mathbb L}} _1$-almost spherical shapes. A complete picture of the higher-dimensional ${\ensuremath{\mathbb L}} _1$-Wulff construction has been, for the first time, grasped and worked out in a recent remarkable work \cite{Cerf}, where the corresponding results have been established in the context of super-critical 3-dimensional Bernoulli bond percolation. Using novel and unusual renormalization procedures based on the decoupling results of \cite{Pisztora1}, Cerf has essentially rediscovered all the main steps of the ${\ensuremath{\mathbb L}} _1$-approach as described above. The main turning point of \cite{Cerf} was the introduction of an alternative, ingenious definition of the surface tension, which happened to be compatible with the setup of ${\ensuremath{\mathbb L}} _1$-renormalization procedures\footnote{It should be noted, though, that despite the relative technical simplicity of this observation, the work \cite{Cerf} most certainly hastened the completion of the ${\ensuremath{\mathbb L}} _1$-theory by many years.}. The work \cite{Cerf} triggered a wave of new investigations. In \cite{Bo} his ideas on how to define and treat the surface tension have been combined with an appropriate adjustment of the renormalization approach of \cite{BBBP} and \cite{BBP}, which led to a relatively short proof of the ${\ensuremath{\mathbb L}} _1$-Wulff construction for the nearest neighbour Ising model in three and higher dimensions, at sufficiently low temperatures.
Most recently, a similar construction has been established up to the FK slab percolation threshold in \cite{CePi}. In the latter article, new and important techniques have been developed in order to go around mixed boundary conditions via bulk relaxation properties of the FK-measures. Although the techniques of the ${\ensuremath{\mathbb L}} _1$-theory might look ``soft'' when compared to the local limit setting of the DKS approach, one should bear in mind that there is always a ``hard'' step needed to initialize the ${\ensuremath{\mathbb L}} _1$-machinery: The renormalized mesoscopic phase labels have to possess sufficiently good decoupling properties. For the case of Kac models the corresponding estimates have been established in \cite{CP}, \cite{BZ}, \cite{BMP}, and in the case of percolation models (including FK for the nearest neighbor Ising model) in dimension $d\geq 3$ in \cite{Pisztora1}, on which \cite{Cerf}, \cite{CePi} and \cite{Bo} all rely in a fundamental way. \medskip Higher dimensional Winterbottom type shapes have been recovered in the context of effective interface models \cite{BolthausenIoffe}, \cite{BAD}, \cite{DGI}, \cite{DunlopMagnen98}, following the original two-dimensional model defined and studied in \cite{Dunlopetall}. The results of these works have also been formulated in terms of ${\ensuremath{\mathbb L}} _1$ concentration properties, but the corresponding approach is quite different from the one we expose here. Thus, the analysis of \cite{BolthausenIoffe} heavily relies on specific properties of Gaussian interactions. It should be noted, though, that, unlike in the nearest neighbour higher dimensional Ising case, there is better insight into the fluctuation and relaxation properties of higher dimensional microscopic interfaces \cite{FunakiSpohn}, \cite{DGI}.
On the other hand, the shapes produced by the effective interface models are much less ``physical''; in particular, the equilibrium shapes are not scale invariant, and the corresponding surface tension is not convex. \part{Dobrushin-Koteck\'{y}-Shlosman (DKS) theory in 2D} \label{part_strongWulff} \setcounter{section}{0} In this part we review and explain the results on phase separation in the two-dimensional nearest neighbor Ising model, as enforced by the canonical constraint on the magnetization \cite{DKS}, \cite{IS}. The theory is built upon sharp local estimates over finite volume vessels $\Lambda_N$ and on the probabilistic analysis of the random microscopic phase separation line. We focus here on the ``free'' spatial geometry of the phase segregation, that is, disregarding the boundary effects. These effects could enter the picture in two different ways: in terms of the boundary conditions on $\partial\Lambda_N$ and in terms of the geometry of $\Lambda_N$. In the former case the minority phase could be absorbed by part of the boundary $\partial\Lambda_N$. This and related phenomena are discussed in Part~\ref{part_boundary}. In the second case the finite vessel $\Lambda_N$ might not be able to accommodate the corresponding optimal crystal shape. Such a geometric constraint is, from the point of view of the microscopic theory, merely a technical nuisance, though, on the macroscopic level, it might lead to formidable variational problems. We go around this domain geometry issue by choosing $\Lambda_N$ to be of the Wulff shape itself, $$ \Lambda_N~=~N{\mathcal K}_1\cap{\ensuremath{\mathbb Z}} ^2 , $$ where ${\mathcal K}_1$ is the unit area Wulff shape. Thus, $\Lambda_N$ accommodates any optimal shape of area smaller than $N^2$.
The corresponding finite volume canonical Gibbs measure is then defined by \begin{equation} \label{3.1.muN} \Is_{\Lambda_N,-}^{\beta}\left(~\cdot~\big|~M_N (\sigma ) =-N^2m^* +a_N\right) , \end{equation} where $M_N \stackrel{\gD}{=}\sum_{i\in\Lambda_N}\sigma_i$ is the total spin, $m^* =m^* (\beta )$ is the spontaneous magnetization, and $a_N$ points inside the phase transition region, $a_N\in (0,2N^2m^* )$. In the sequel we shall use the shortcut $\Is_{N,-}^{\beta}$ for the finite volume measure $\Is_{\Lambda_N,-}^{\beta}$. \noindent {\bf Notation.} The values of the positive constants $c_1, c_2, ...$ are updated with each subsection. \section{Main Result} \label{dima_main} \setcounter{equation}{0} The DKS theory gives a comprehensive solution to the following problem of phase separation: \vskip 0.1cm \noindent {\bf Problem~1.} For $\beta >\beta_c$ and $a_N\in (0,2N^2 m^* )$, characterize typical spin configurations $\sigma$ under the canonical measure \eqref{3.1.muN}. \vskip 0.1cm An ostensibly simpler problem is \vskip 0.1cm \noindent {\bf Problem~2.} For $\beta >\beta_c$ and $a_N\in (0,2N^2 m^* )$, find sharp local asymptotics of $$ \Is_{N,-}^{\beta} \left( M_N~=~-m^*N^2 +a_N\right) . $$ In fact, both problems are equivalent. In particular, inside the phase transition region the phenomenon behind the shift of the magnetization is not a bulk one (and hence is not in the realm of the usual theory of large deviations), and the crucial role is played by the spatial geometry of symmetry breaking. \noindent \subsection{Heuristics} \label{dima_main_heuristics} Under the finite volume pure state $\Is_{N,-}^{\beta}$ the typical maximal size of $\pm$~contours is of order $\log N$. One could then visualize a typical microscopic configuration $\sigma$ on $\Lambda_N$ in terms of an archipelago of small~(that is, of maximal size $\sim\log N$) ``$+$'' islands which could contain still smaller ``$-$'' lakes, etc.
This archipelago spreads out uniformly over $\Lambda_N$, and the density of the plus ``soil'', which spells out in terms of the magnetization $M_N (\sigma )$ as $\left( |\Lambda_N |+M_N (\sigma )\right)/2|\Lambda_N |$, is close to its equilibrium value $$ \frac{|\Lambda_N |+\left\langle M_N\ran{\beta}{N, -}}{2|\Lambda_N |}\ \sim\ \frac{1-m^*}{2} . $$ Thus, one could think of two different competing patterns behind the $a_N$-shifts, $a_N\geq 0$, of the magnetization $M_N$ from its equilibrium value $\left\langle M_N\ran{\beta}{N,- }\sim -m^* |\Lambda_N |$: \vskip 0.1cm \noindent 1) The density of the archipelago increases in a spatially homogeneous fashion without, however, altering the typical sizes of the islands. \noindent 2) Spatial symmetry is broken, and an abnormally huge island of the ``$+$''~phase of excess area $\sim a_N/2m^*$ appears. \vskip 0.1cm \noindent Heuristically, the first scenario corresponds to Gaussian fluctuations, and its price, in terms of probability, should be of order $$ \exp\left( -c_1 (\beta )a_N^2/N^2\right) . $$ Phase segregation manifests itself in the second scenario, and the probabilistic price for creating such a huge island is proportional to the length of its boundary: $$ \exp\left( -c_2 (\beta )\sqrt{a_N}\right) . $$ A comparison between the two expressions above suggests that the first scenario should be preferred whenever $a_N\ll N^{4/3}$, whereas large shifts $a_N\gg N^{4/3}$ should result in the phase segregation picture described in the second scenario. This indeed happens to be the case, and we refer to \cite{DS} and \cite{IS} for a complete rigorous treatment\footnote{The critical case of $a_N\sim N^{4/3}$ is still an open problem.}. For the sake of the exposition, we shall stick here to what is possibly the most interesting case of $a_N\sim N^2$, which corresponds also to the macroscopic type of scaling discussed in Part~2.
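The crossover exponent $4/3$ can be read off heuristically by balancing the two probabilistic costs above (a back-of-the-envelope computation; the constants $c_1(\beta )$, $c_2 (\beta )$ do not affect the exponent):
\[
c_1 (\beta )\,\frac{a_N^2}{N^2}\ \sim\ c_2 (\beta )\,\sqrt{a_N}
\quad\Longleftrightarrow\quad
a_N^{3/2}\ \sim\ \frac{c_2 (\beta )}{c_1 (\beta )}\, N^2
\quad\Longleftrightarrow\quad
a_N\ \sim\ N^{4/3} .
\]
For $a_N\ll N^{4/3}$ the Gaussian cost is the smaller of the two, while for $a_N\gg N^{4/3}$ the creation of a single large droplet becomes the cheaper option.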
The DKS theory then gives the following sharp characterization of the phase segregation in the canonical ensemble: under $\Is_{N,-}^{\beta}\left(~\cdot~\big| M_N =-m^* N^2 +a_N\right)$ a typical spin configuration $\sigma$ contains exactly one abnormally large contour $\gamma$ which decouples between the ``$+$''~phase (inside $\gamma$) and the ``$-$''~phase (outside $\gamma$). In particular, the average magnetization inside (respectively outside) $\gamma$ is close to $m^*$ (respectively $-m^*$), and the area encircled by $\gamma$ can thus be recovered from the canonical constraint, $$ m^*\left|\text{int}\left(\gamma\right)\right| -m^*\left( N^2 -\left|\text{int}\left(\gamma\right)\right|\right) ~\approx~-m^*N^2+a_N\ \ \Longrightarrow\ \ \left|\text{int}\left(\gamma\right)\right|~\approx~ \frac{a_N}{2m^*} . $$ Under the scaling of $\Lambda_N$ by $1/N$, that is into the normalized continuous shape $\ensuremath{\mathcal K}\subset{\ensuremath{\mathbb R}} ^2$, the microscopic phase boundary $\gamma$ sharply concentrates around a shift of the Wulff shape of the corresponding scaled area $a_N/2m^*N^2$ (Fig.~\ref{fig_rescaling}). \begin{figure}[t] \centerline{ \psfig{file=rescaling1.ps,height=6cm}\hspace{3cm} \raise 1.5 truecm \hbox{\psfig{file=rescaling2.ps,height=3cm}} } \figtext{ \writefig 1.10 3.30 {\Large $\stackrel{1/N}{\longrightarrow}$} \writefig 4.22 2.50 {\small $-$ phase} \writefig 3.63 3.70 {\small $+$ phase} \writefig -2.27 2.90 {\small $\gga$} } \caption{DKS picture under the $1/N$ scaling: On the left, the microscopic box $\Lambda_N$ with the unique $K\log N$-large contour $\gamma$. On the right, the continuous box $\ensuremath{\mathcal K}_1$ with the scaled image of $\gamma$.} \label{fig_rescaling} \end{figure} \subsection{DKS theorem} \label{dima_main_thm} More precisely, for any $r\in{\ensuremath{\mathbb R}} _{+}$ let ${\mathcal K}_{r}$ denote the Wulff shape of area $r$.
Also, given a number $s\in{\ensuremath{\mathbb R}} _+$, let us say that a microscopic contour $\gamma$ is $s$-large if ${\rm diam}_{\infty}(\gamma )>s$. \begin{thm}[\cite{DKS}\footnote{},\cite{IS}]\footnotetext{In the original monograph [DKS] the corresponding results have been derived in the context of the Ising model with periodic boundary conditions.} \label{thm DKS} Let the inverse temperature $\beta >\beta_c$ be fixed, and let the sequence $\{a_N\}$, $-m^*N^2 +a_N\in\text{\rm Range}(M_N )$, be such that the limit $$ a~=~\lim_{N\to\infty}\frac{a_N}{N^2}~\in~(0,2m^* (\beta )) $$ exists. Then, $$ \log\Is^{\beta}_{N,-}\big(~M_N~=~-m^*N^2~+a_N~\big)\ =\ -\ensuremath{\mathcal W}_{\beta}\left({\partial\mathcal K}_{\frac{a_N}{2m^*}} \right)\big( 1~+~\mbox{O}\big( N^{-1/2}\log N\big)\big) . $$ Moreover, if $K=K(\beta )$ is large enough, with $\Is_{N,-}^{\gb}\left(~\cdot~|M_N=-N^2m^* + a_N\right)$-probability converging to $1$ as $N\to\infty$: \begin{enumerate} \item There is exactly one $K(\beta )\log N$-large contour $\gamma$. \item This $\gamma$ satisfies \begin{align} \label{thm3.1.1} &\min_{x}\frac1{N}d_{{\Bbb H}}\big(~\gamma,x+\partial {\mathcal K}_{\frac{a_N}{2m^*}} \big) \ \leq\ c_1(\beta )N^{-1/4}\sqrt{\log N}\\ \intertext{and} \label{thm3.1.2} &\min_{x}\frac1{N^2}\text{\rm Area}\Big(~\text{\rm int}\left(\gamma \right) \Delta \left( x+{\mathcal K}_{\frac{a_N}{2m^*}}\right)~\Big)\ \leq\ c_2 (\beta )N^{-3/4}\sqrt{\log N}. \end{align} \end{enumerate} \end{thm} \subsection{DKS theory} \label{dima_main_theory} The DKS theory views the production of the event $\{ M_N = -m^*N^2 +a_N\}$ in terms of a two-step procedure: In the first stage a length scale $s =s(N)$ is chosen, and {\bf all} the microscopic $s$-large contours $(\gamma_1 ,...,\gamma_n )$ are fixed.
If the total area inside these $s(N)$-large contours is smaller than $a_N/2m^*$, then the total magnetization $M_N$ still has to be steered towards the imposed value $M_N = -m^*N^2 +a_N$, but already under the constraint that all the $\pm$~contours different from $(\gamma_1 ,...,\gamma_n )$ are $s(N)$-small. The probability $\Is_{N,-}^{\gb}\left( M_N = -m^*N^2 +a_N\right)$ reflects the price of the optimal strategy along these lines. We record the two steps of the DKS theory as follows: \noindent 1) Study the statistics of $s(N)$-large contours under $\Is_{N,-}^{\gb}$. \noindent 2) Give local limit estimates on the magnetization in the $s(N)$-restricted phases. \noindent The introduction of $s(N)$-cutoffs leads to a separation of the length scales, which has a double impact on the problem: it sets up the stage for the renormalization analysis of microscopic phase boundaries, and it improves the control over the bulk magnetization inside the corresponding microscopic phase regions. Let us try to explain this in more detail: As far as the statistics of the $s(N)$-large contours is concerned, we are interested in giving sharp estimates on the $\Is_{N,-}^{\gb}$-probability of events of the type $$ \left\{~\text{$s(N)$-large contours of $\sigma$ encircle a certain prescribed area~}\right\} . $$ The point is that the contribution of any particular microscopic contour to the probability of such an event is negligible. In other words, one also has to take into account the entropy (number) of all the contributing contours. The required entropy cancelation (and hence the production of the relevant limiting thermodynamic quantity -- surface tension) is achieved by means of a certain coarse graining procedure, the so-called skeleton calculus, which we describe in Section~\ref{dima_skeletons}.
Roughly, instead of studying the probabilities of individual microscopic contours, one considers the packets of all contours passing through the vertices of a given ``$s(N)$-skeleton'' $S= (u_1, u_2,...,u_n)$ and staying within a distance of the order $s(N)$ from the closed polygonal line $\text{Pol}(S)$ (Fig.~\ref{fig_skeletonS}). \begin{figure}[t] \centerline{ \psfig{file=skel1.ps,height=5cm}\hspace{1cm} \psfig{file=skel2.ps,height=5cm} } \figtext{ \writefig -6.05 3.70 {\footnotesize $u_1$} \writefig -6.40 2.50 {\footnotesize $u_2$} \writefig -6.20 1.40 {\footnotesize $u_3$} \writefig -5.30 5.30 {\footnotesize $u_{n-1}$} \writefig -5.20 4.40 {\footnotesize $u_n$} \writefig -3.58 5.20 {\footnotesize $\gga_1$} \writefig 2.50 4.80 {\footnotesize $\gga_2$} } \caption{Two microscopic contours $\gamma_1$ and $\gamma_2$ are compatible with the same skeleton $S= (u_1,\dots,u_n)$.} \label{fig_skeletonS} \end{figure} The distance between successive vertices of $S$ complies with the length scale $s(N)$: $\normsup{ u_{i+1} -u_{i}}\sim s(N)$. Surface tension is produced on the level of skeletons. In fact, the probability of observing a $\pm$~contour compatible with a given skeleton $S$ admits, as $s(N)\nearrow\infty$, the asymptotic description \begin{equation} \label{3.1.skeleton} \Is_{N,-}^{\gb}\left( S\right)\ \asymp\ \exp\left\{ -\ensuremath{\mathcal W}_{\beta}\left( \text{Pol} (S)\right)\right\} . \end{equation} We quote the precise result in Section~\ref{dima_skeletons}, which we devote to a general exposition of the skeleton calculus. Since the vertices of $S$ are $s(N)$-apart, and the surface tension $\tau_{\gb}$ is strictly positive for all $\beta >\beta_c$, the energy $\ensuremath{\mathcal W}_{\beta}\left( \text{Pol} (S)\right)$ controls the number $\# (S)$ of vertices of $S$ as \begin{equation} \label{3.1.snumber} \# (S)~\leq ~c_3 (\beta )\frac{\ensuremath{\mathcal W}_{\beta}\left( \text{Pol} (S)\right)}{s(N)} .
\end{equation} When combined with \eqref{3.1.skeleton} this leads to a reduction of the combinatorial complexity of the problem: the number of different skeletons of a fixed energy $\widehat{\ensuremath{\mathcal W}}_N$ does not compete with the approximate probability $\exp\{-\widehat{\ensuremath{\mathcal W}}_N\}$ of observing any such skeleton. Thus, the study of $\{ M_N =-m^*N^2 +a_N\}$ reduces, in terms of skeletons, to a maximal-term estimation. It should be stressed, however, that unlike the coarse graining procedures of the ${\ensuremath{\mathbb L}} _1$ theory, the mesoscopic objects (skeletons) of the DKS theory closely follow the microscopic structure of phase boundaries. The local limit estimates in the $s(N)$-restricted phases are, therefore, required uniformly over finite lattice domains whose boundaries are carved with $s(N)$-large contours compatible with not too costly skeletons. This imposes a natural restriction on the length of these boundaries, and we shall describe the appropriate family of domains in Section~\ref{dima_estimates} along with the exposition of the corresponding uniform local limit results. Intuitively, long contours are responsible for long range dependencies between spins; therefore, the $s(N)$-cutoff constraint improves the mixing properties of the system and helps to extend the validity of classical (Gaussian) behavior of moderate deviations. In Section~\ref{dima_bulk} we quote the corresponding relaxation and decay properties which lie at the heart of the local limit estimates. In Section~\ref{dima_structure} we give an outline of the proof of the DKS theorem. Finally, the (long) list of open problems is briefly addressed in Section~\ref{dima_problems}.
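\noindent {\it Remark.} The origin of the bound \eqref{3.1.snumber} can be sketched in one line. Writing $\tau_{\min}(\beta )\stackrel{\gD}{=}\min_{\normsup{n} =1}\tau_{\gb}(n) >0$ for $\beta >\beta_c$, recalling that successive vertices of a skeleton are at distance at least $s(N)/2$ from each other (see Section~\ref{dima_skeletons}), and computing the energy of ${\rm Pol} (S)$ as the sum of the surface tensions of its segments, one gets $$ \ensuremath{\mathcal W}_{\beta}\left( \text{Pol} (S)\right)~=~\sum_{i=1}^{\# (S)}\tau_{\gb}\left( u_{i+1}-u_i\right)~\geq ~\# (S)\,\tau_{\min}(\beta )\,\frac{s(N)}{2} , $$ which is \eqref{3.1.snumber} with $c_3 (\beta ) = 2/\tau_{\min}(\beta )$.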
\section{Estimates in the phases of small contours} \label{dima_estimates} \setcounter{equation}{0} As mentioned above, the estimates in the phase of small contours should be derived uniformly over a family of lattice domains whose boundaries are composed of not too costly $s(N)$-large contours. \noindent {\bf Definition} Basic family $\ensuremath{\mathcal D}_N$ of subsets $A\subseteq\Lambda_N$: We fix two numbers $a$ (small) and $R$ (large). $$ A\in\ensuremath{\mathcal D}_N~\Longleftrightarrow~aN^2\leq |A| \quad\text{and}\quad | \partial A|\leq RN\log N. $$ \qed We fix a basic scale $s(N) =K\log N$ of large contours, where $K= K(\beta )$ is a sufficiently large number, so that $K\log N$-large contours are highly improbable under the pure state $\Is_{N,-}^{\beta}$. Of course, exactly the same number $K$ appears in the statement of Theorem~\ref{thm DKS}. The upper bound on $|\partial A|$ in the definition of the family $\ensuremath{\mathcal D}_N$ means that configurations in which the total length of $K\log N$-large contours exceeds $RN\log N$ are ruled out. This is explained in more detail in Section~\ref{dima_skeletons} (see the remark following Lemma~\ref{energy}). \medskip \noindent \subsection{Structure of local limit estimates} \label{dima_estimates_structure} Let us turn now to the structure of local limit estimates in the $s(N)$-restricted phases. First of all, given any $A\subset{\ensuremath{\mathbb Z}} ^2$, the $s$-restricted phase on $A$ is defined via $$ \Is_{A,-}^{\beta,s}\left(~\cdot ~\right)~\stackrel{\gD}{=} ~ \Is_{A,-}^{\beta}\left(~\cdot ~\Big|\text{All $\pm$ contours are $s$-small}\right). $$ We would like to study the probabilities of deviations $a_N\geq 0$ of the total magnetization $M_A$ from the corresponding averaged value $\langle M_A\rangle_{A,-}^{\beta ,s}$. Let us define the set of feasible values of such deviations as $$ {\bf M}_A^{+}~= ~\left\{ a_N\geq 0:\ \langle M_A\rangle_{A,-}^{\beta ,s} +a_N\in\text{\rm Range}( M_A)\right\} .
$$ Roughly, the cutoff $s$ extends the validity of Gaussian moderate deviations for the following reason: The price of shifting the magnetization by $a_N$ at the expense of $s(N)$-small contours is of the order $(a_N /s^2)s\sim a_N/s(N)$. This should be tested against the Gaussian moderate deviation exponent of the order $a_N^2 /N^2$. Thus the Gaussian behavior should prevail once $a_N\ll N^2/s(N)$. Of course, the latter constraint on $a_N$ becomes less stringent as $s(N)$ decreases. On the rigorous side, the classical approach to estimating $$ \Is_{A,-}^{\beta,s}\left( M_A =\langle M_A\rangle_{A,-}^{\beta ,s} +a_N\right), $$ amounts to first finding the value of the magnetic field $$ g = g(A, s(N),a_N ), $$ such that the expected magnetization under the $g$-tilted state is precisely what we want, \begin{equation} \label{3.2.sgexp} \langle M_A\rangle_{A,-,g}^{\beta ,s} \ =\ \langle M_A\rangle_{A,-}^{\beta,s} ~+~a_N, \end{equation} \noindent and, then, to rewrite the $\Is_{A,-}^{\beta ,s}$-probability in terms of the $\Is_{A,-,g}^{\beta ,s}$ one: \begin{equation} \label{3.2.transform} \begin{split} &\Is_{A,-}^{\beta ,s}\left( M_A=\langle M_A\rangle_{A,-}^{\beta ,s}+a_N\right)~\\ &\qquad =~\exp\big\{-(\langle M_A\rangle_{A,-}^{\beta ,s} +a_N)g~+~ \log\big<\text{e}^{g M_A}\big>_{A,-}^{\beta ,s}\big\}~ \Is_{A,-,g}^{\beta ,s}\big(~ M_A~=~\big< M_A\big>_{A,-,g}^{\beta ,s}~\big)\\ &\qquad=~ \exp\left\{-\int\limits_0^g\int\limits_r^g\big< M_A ; M_A\big>_{A,-,h}^{\beta ,s} \text{d}h\text{d}r\right\} ~\Is_{A,-,g}^{\beta ,s}\left(~ M_A~=~\big< M_A\big>_{A,-,g}^{\beta ,s}\right) . \end{split} \end{equation} One then tries to derive sufficiently precise estimates on the semi-invariants of $\Is_{A,-,h}^{\beta ,s}$ and to prove a local CLT under $\Is_{A,-,g}^{\beta ,s}$.
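\noindent {\it Remark.} The two expressions for the exponent in \eqref{3.2.transform} are reconciled by a standard computation: set $\psi (h)\stackrel{\gD}{=}\log\big<\text{e}^{h M_A}\big>_{A,-}^{\beta ,s}$, so that $\langle M_A\rangle_{A,-,h}^{\beta ,s} =\psi '(h)$ and $\big< M_A ; M_A\big>_{A,-,h}^{\beta ,s} =\psi ''(h)$. The tilting condition \eqref{3.2.sgexp} reads $\psi '(g) =\psi '(0) +a_N$, and hence the exponent equals $$ -\left(\langle M_A\rangle_{A,-}^{\beta ,s} +a_N\right) g~+~\psi (g)~ =~\psi (g)~-~g\,\psi '(g)~ =~-\int\limits_0^g\int\limits_r^g\psi ''(h)\,\text{d}h\,\text{d}r , $$ where the last equality follows from $\int_0^g\int_r^g\psi ''(h)\,\text{d}h\,\text{d}r =\int_0^g\left(\psi '(g)-\psi '(r)\right)\text{d}r = g\,\psi '(g)-\psi (g)$, using $\psi (0)=0$.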
Thus, it is extremely important to understand how the magnetization $\langle M_A\rangle_{A,-,g}^{\beta,s}$ and other semi-invariants of $\Is_{A,-,g}^{\beta,s}$ change with the magnetic field $g$ in the phase of $s(N)$-small contours. Breaking of the classical limit behavior in the $s(N)$-restricted phase manifests itself through a jump of the magnetization, which is related to the appearance of abnormally large $\pm$-contours. Without cutoffs this jump occurs for $g\sim 1/N$, and imposing the $s(N)$ constraint delays such a jump \cite{ScS2}. It is easy to guess the critical order of the magnetic field $g$ at which large contours start to be favored in the $s$-restricted phase: for a $\pm$~contour of linear size $s(N)$ one wins $\sim s^2g$ on the level of magnetization and loses $\sim s$ on the level of surface energy. These two terms become comparable when $sg\sim 1$. Therefore no particular deviation from the classical behavior should be expected as long as $gs(N)\ \ll\ 1$. We refer to \cite{IS}, where all these heuristic considerations have been made precise. \noindent \subsection{Basic local estimate on the $K\log N$ scale} \label{dima_estimates_basic} In fact~\cite{IS}, it is enough to consider only the basic $K\log N$-scale: \begin{lem}[\cite{IS}] \label{KlogN} Assume that a sequence of numbers $\{ b_N\}$ satisfies $$ \lim_{N\to\infty}\frac{b_N\log N}{N^2} ~= ~0 .
$$ Then, on the basic scale $s(N)= K\log N$, the estimate \begin{equation} \label{3.2.KlogN} \begin{split} &\Is_{A,-}^{\beta ,s}\left( M_A=\langle M_A\rangle_{A,-}^{\beta ,s}+a_N\right)\\ &\qquad =\ \frac1{\sqrt{2\pi\chi_{\beta} |A|}} \exp\big\{-\frac{a_N^2}{2\chi_{\beta} |A|}+\mbox{O} \bigl(\frac{a_N^2}{N^3}(\log N\vee\frac{a_N}{N})\bigr)\big\} \big(1~+~\text{\small{o}}(1) \big), \end{split} \end{equation} holds uniformly in domains $A\in\ensuremath{\mathcal D}_N$ and in $a_N\in{\bf M}_A^{+}\cap [0,b_N]$, where $\chi_{\beta}$ is the susceptibility under the pure state $\Is_{-}^{\beta}$. \end{lem} \noindent \subsection{Super-surface estimates in the restricted phases} \label{dima_estimates_supersurface} Moderate deviations on the intermediate scales $s(N)\gg\log N$ are, for the purposes of the theory, controlled by the following super-surface order estimate in the phase of small contours (cf.\ Lemma~2.5.1 in \cite{IS}) \begin{lem} \label{supersmall} Let the large contour parameter $s(N)\gg \log N$ be fixed. There exists a constant $c_1 =c_1 (\beta )>0$, such that for all $N>0$, $A\in\ensuremath{\mathcal D}_N$ and all $a_N\in{\bf M}_A^{+}$, \begin{equation} \label{supper} \Is_{A,-}^{\beta ,s} \big(~M_A = \langle M_A\rangle_{A,-}^{\beta ,s} +a_N~\big)\ \leq\ \exp\big(~-c_1\frac{a_N^2}{N^2}\wedge \frac{a_N}{s(N)}~\big) . \end{equation} \end{lem} The idea of the proof is simple: either an area of order $a_N /2m^*$ is exhausted by the $K\log N$-large contours, which, in the $\Is_{N,-}^{\beta ,s}$-restricted phase, should have a surface tension price with an exponent of the order $a_N /s(N)$, or the $K\log N$-large contours cover an area much smaller than $a_N /2m^*$, which means that the remaining deficit of the magnetization should be compensated in the basic $K\log N$-restricted phase, where we can use Lemma~\ref{KlogN}.
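\noindent {\it Remark.} The two terms in the exponent of \eqref{supper} correspond precisely to the two alternatives in the proof sketch above. Heuristically, in the restricted phase every contour $\gamma$ satisfies ${\rm diam}_{\infty}(\gamma )\leq s(N)$, and hence ${\rm Area}(\gamma )\leq s(N)\, |\gamma |$. Thus, if such contours are to exhaust an area of order $a_N /2m^*$, their total length is forced to be at least of the order $$ \sum_i |\gamma_i |~\geq ~\frac1{s(N)}\sum_i{\rm Area}(\gamma_i )~\sim~\frac{a_N}{2m^*s(N)} , $$ which, by the strict positivity of the surface tension, carries a probabilistic price with an exponent of order $a_N /s(N)$. In the opposite case the deficit of the magnetization has to be produced by the $K\log N$-small fluctuations, and Lemma~\ref{KlogN} yields the Gaussian exponent of order $a_N^2 /N^2$.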
\section{Bulk Relaxation in Pure Phases} \label{dima_bulk} \setcounter{equation}{0} The term relaxation is used here in the equilibrium setting in order to describe the approximation of local finite volume statistics by the infinite volume ones. We successively describe the relaxation properties of pure ``$-$'' states with non-positive and small positive magnetic fields and in the restricted phases of small contours. \noindent \subsection{Non-positive magnetic fields $h\le 0$.} \label{dima_bulk_nonpositive} The crucial property of low temperature pure phases can be stated as follows: Let us say that the sites $i$ and $j$ are $*$-neighbors if $\normsup{ i-j} =1$. Given a spin configuration $\sigma$ on $\{-1,+1\}^{{\ensuremath{\mathbb Z}} ^2}$, let us say that the sites $i$ and $j$ are $+*$-connected, if there exists a $*$-connected chain of sites $i_1,...,i_n$, $i_1=i$ and $i_n =j$, such that $\sigma (i_k)=1$ for every $k=1,...,n$. \begin{thm}[\cite{CCSc}] \label{ccs} For every $\beta >\beta_c$ there exists $c_{1} =c_{1} (\beta ) >0$, such that uniformly in subsets $A\subseteq {\ensuremath{\mathbb Z}} ^2$, $i,j\in A$ and in magnetic fields $h\le 0$, \begin{equation} \label{3.2.decay} \Is_{A,-,h}^{\beta}\left(~i\stackrel{+*}{\longleftrightarrow}j~\right)~ \le~{\rm e}^{-c_1 (\beta )\normsup{i-j}}. \end{equation} \end{thm} \noindent {\it Remark.} Of course, since $\left\{i\stackrel{+*}{\longleftrightarrow}j\right\}$ is a non-decreasing event, the uniformity follows from the FKG ordering, once \eqref{3.2.decay} is verified for the infinite volume zero-field measure $\Is_{-}^{\beta}$. \begin{cor}[Relaxation of local observables] Fix $k\in{\ensuremath{\mathbb N}} $.
Uniformly in $A\subseteq {\ensuremath{\mathbb Z}} ^2$, magnetic fields $h\le 0$ and local observables $f$ with $|{\rm supp}(f)|=k$, \begin{equation} \label{3.2.spin} \left|\langle f\rangle_{A,-,h}^{\beta}~-~\langle f\rangle_{-,h}^{\beta}\right|~\le~ c_2 (k){\rm e}^{-c_3(\beta ) {\rm dist}_{\infty}\big({\rm supp}(f),\partial A\big)} \end{equation} \end{cor} \noindent Furthermore, \begin{cor}[Relaxation and decay of semi-invariants] Fix $n\in{\ensuremath{\mathbb N}} $. Uniformly in $A\subseteq {\ensuremath{\mathbb Z}} ^2$, magnetic fields $h\le 0$ and sites $i_1 ,...,i_n\in A$, \begin{align} \label{3.2.semiA} &\left|\langle\sigma (i_1);...;\sigma(i_n )\rangle_{A,-,h}^{\beta}~-~\langle\sigma (i_1);...;\sigma(i_n ) \rangle_{-,h}^{\beta}\right|~\le~c_4 (n) {\rm e}^{-c_5 (\beta ){\rm dist}_{\infty}\big( \{i_1 ,...,i_n\},\partial A\big)}\\ \intertext{and} \label{3.2.semidecay} &\left|\langle\sigma (i_1);...;\sigma(i_n )\rangle_{A,-,h}^{\beta}\right|~\le ~c_6 (n){\rm exp}\left\{-c_7(\beta ) \frac{{\rm diam}_{\infty}\big( i_1, ...,i_n\big)}{n}\right\} . \end{align} \end{cor} \noindent Finally, \begin{cor}[Asymptotic expansions] Fix $n\in{\ensuremath{\mathbb N}} $. Uniformly in $A\subseteq{\ensuremath{\mathbb Z}} ^2$ and in $i\in A$, \begin{equation} \label{3.2.expansion1} \left|\langle\sigma (i)\rangle_{A,-,h}^{\beta}~-~\big( -m^*(\beta )+ \sum_{k=1}^{n}\ensuremath{\mathfrak s}_k \frac{h^k}{k!}\big) \right|~\le ~c_8 (n) |h|^{n+1} +c_9(n){\rm e}^{-c_{10} (\beta ) {\rm dist}_{\infty}\big( i,\partial A\big)} , \end{equation} where $\ensuremath{\mathfrak s}_k$ is the $k$-th semi-invariant of the zero-field infinite volume measure $\Is_{-}^{\beta}$, $$ \ensuremath{\mathfrak s}_k ~\stackrel{\gD}{=}~\sum_{i_1 ,...,i_k\in{\ensuremath{\mathbb Z}} ^2}\langle \sigma (0) ; \sigma (i_1);...;\sigma(i_k )\rangle_{-}^{\beta} .
$$ \end{cor} \noindent {\bf Remark} It is possible (and straightforward) to formulate \eqref{3.2.semiA}, \eqref{3.2.semidecay} and \eqref{3.2.expansion1} in the general case of $n$ local observables $f_1,...,f_n$.\qed \noindent \subsection{Positive magnetic fields $h > 0$.} \label{dima_bulk_positive} Modifying ``$-$'' states by negative magnetic fields $h<0$ amounts to moving away from the phase transition region. Relaxation properties of $\Is_{A,-,h}^{\beta}$ with $h>0$ are radically different: uniformity is lost, and the size of the domain $A$ starts to play a crucial role. Indeed, the unique infinite volume measure $\Is_{-,h}^{\beta}=\Is_{h}^{\beta}$ stochastically dominates $\Is_{+}^{\beta}$ however small $h>0$ is. Thus, for large domains $A$, the configuration in the bulk is flipped under $\Is_{A,-,h}^{\beta}$ into the ``$+$'' dominated state. It is easy to understand on heuristic grounds what should be the order of the critical size of $A$ for such a ``flip'' to occur: given $h>0$, the surface energy of a $\pm$-contour $\gamma$ is of the order $|\gamma |$, and it competes with the bulk gain inside the contour which, in its turn, is proportional to $h{\rm Area}(\gamma )$. The latter factor wins (loses), once the linear size of $\gamma$ is much larger (respectively much smaller) than $1/h$. Thus the sign of the dominant spin under $\Is_{A,-,h}^{\beta}$ should depend on whether $A$ can accommodate large enough contours, or, in other words, on how the linear size of $A$ relates to $1/h$. The important and remarkable fact is that exponential relaxation properties of finite volume ``$-$'' states are uniformly preserved for domains of sub-critical size. \begin{thm}[\cite{ScS2},~\cite{IS}] \label{thmh} There exists a constant $a=a(\beta )>0$ such that for any $h>0$ fixed, \begin{equation} \label{3.2.decayh} \Is_{A,-,h}^{\beta}\left(~i\stackrel{+*}{\longleftrightarrow}j~\right)~ \le~{\rm e}^{-c_1 (\beta )\normsup{i-j}}\,,
\end{equation} uniformly in domains $A\subset{\ensuremath{\mathbb Z}} ^2$ such that any connected component of $A$ has diameter bounded above by $a/h$. As a consequence, the exponential decay of semi-invariants \eqref{3.2.semidecay} and the asymptotic expansion estimate \eqref{3.2.expansion1} hold uniformly in such domains as well. \end{thm} \noindent \subsection{Phases of small contours} \label{dima_bulk_phases} Theorem~\ref{thmh} explains how the cutoff parameter $s(N)$ upgrades the regular behavior of ``$-$''-states with positive magnetic fields $h$: by the definition of the restricted phase $\Is_{A,-}^{\beta ,s}$ the diameter of any relevant microscopic domain is at most of the order $s(N)$. \begin{thm}[\cite{ScS2},~\cite{IS}] \label{thmhs} There exists a constant $a=a(\beta )>0$ such that for any $h>0$ and $s$ satisfying $hs\leq a(\beta )$, \begin{equation} \label{3.2.decayhs} \Is_{A,-,h}^{\beta ,s}\left(~i\stackrel{+*}{\longleftrightarrow}j~\right)~ \le~{\rm e}^{-c_1 (\beta )\normsup{i-j}}\,, \end{equation} uniformly in domains $A\subseteq{\ensuremath{\mathbb Z}} ^2$.\\ Furthermore, the expectations in the restricted phase are controlled as follows: for every $k\in{\ensuremath{\mathbb N}} $, \begin{equation} \label{3.2.spins} \left|\langle f\rangle_{A,-,h}^{\beta, s}~-~\langle f\rangle_{A\cap\Lambda_s (f),-,h}^{\beta}\right|~\le~ c_{2} (k){\rm e}^{-c_{3}(\beta )s}, \end{equation} uniformly in $A\subseteq {\ensuremath{\mathbb Z}} ^2$ and in local functions $f$ with $\left|{\rm supp}(f)\right| =k$, where we have used the following notation: $\Lambda_s (f)\stackrel{\gD}{=} \left\{i:{\rm d}_{\infty}\left( i,{\rm supp}(f)\right)\leq s\right\}$. Finally, the decay of the semi-invariants is controlled in the restricted phases as \begin{equation} \label{3.2.semidecays} \left|\langle\sigma (i_1);...;\sigma(i_n )\rangle_{A,-,h}^{\beta,s}\right|~\le ~c_4 (n){\rm exp}\left\{-c_5(\beta ) \frac{{\rm diam}_{\infty}\big( i_1, ...,i_n\big)}{n}\wedge s\right\} .
\end{equation} \end{thm} \section{Calculus of Skeletons} \label{dima_skeletons} \setcounter{equation}{0} The renormalization analysis of large $\pm$~contours is performed on various cutoff scales $s$, the appropriate choice of $s$ typically depending on the linear size $N$ of the system, $s = s(N)$. We shall state coarse graining estimates uniformly in finite domains $A\subset{\ensuremath{\mathbb Z}} ^2$ and in the cutoff scales $s$. \noindent \subsection{Definition} \label{dima_skeletons_definition} A $\pm$~contour $\gamma$ is said to be $s$-large if $\text{diam}_\infty(\gamma )\geq s$. Given a cutoff scale $s\in{\ensuremath{\mathbb N}} $ and an $s$-large $\pm$~contour $\gamma$ we say that $S =(u_1,...,u_n)$ is an $s$-skeleton of $\gamma$, $\gamma\sim S$, if \begin{enumerate} \item All vertices of $S$ lie on $\gamma$. \item $s/2 \leq \normsup{u_i - u_{i+1}}\leq 2s,\ \forall~i=1,...,n$, where we have identified $u_{n+1}\equiv u_1$. \item The Hausdorff distance $d_{{\mathbb H}}$ between $\gamma$ and the polygonal line ${\rm Pol} (S)$ through the vertices of $S$ satisfies $$ d_{{\mathbb H}}\big(\gamma ,{\rm Pol} (S)\big)\ \leq\ s . $$ \end{enumerate} Similarly, given the collection $\left( \gamma_1 ,...,\gamma_n\right)$ of all $s$-large contours of a configuration $\sigma\in\Omega_{A,-}$, let us say that a collection $\ensuremath{\mathfrak S} =(S_1 ,...,S_n) $ of $s$-large skeletons is compatible with $\sigma$, $\sigma\sim\ensuremath{\mathfrak S}$, if $\gamma_i\sim S_i$ for all $i=1,...,n$. Of course, a configuration $\sigma\in\Omega_{A,-}$ has, in general, many different compatible collections of $s$-skeletons. Nonetheless, for each particular $\ensuremath{\mathfrak S}$ the probability \begin{equation} \label{3.4.frS} \Is_{A,-}^{\beta}\left(\ensuremath{\mathfrak S}\right) ~\stackrel{\gD}{=} ~\Is_{A,-}^{\beta}\left(\sigma :~\sigma\sim \ensuremath{\mathfrak S}\right) \end{equation} is well defined.
\noindent \subsection{Energy estimate} \label{dima_skeleton_energy} As the renormalization scale $s$ grows, the probabilities \eqref{3.4.frS} start to admit a sharp characterization in terms of the energies $\ensuremath{\mathcal W}_{\beta}(\ensuremath{\mathfrak S})$, $$ \ensuremath{\mathcal W}_{\beta }\left(\ensuremath{\mathfrak S} \right)~\stackrel{\gD}{=} ~\sum_{i=1}^{n}\ensuremath{\mathcal W}_{\beta }\left( {\rm Pol}(S_i) \right) , $$ for a collection $\ensuremath{\mathfrak S} =\left( S_1 ,...,S_n \right)$. Below we give a precise version of this crucial statement in terms of upper and lower bounds on the corresponding probabilities. The first important renormalization energy estimate can be formulated \cite{Pfister} as follows \begin{lem}[\cite{Pfister}] \label{energy} On every skeleton scale $s$ and independently of $A\subset{\ensuremath{\mathbb Z}} ^2$, \begin{equation} \label{3.4.energy} \Is_{A,-}^{\beta}\big(~\ensuremath{\mathfrak S}~\big)\ \leq\ \exp\big\{~-\ensuremath{\mathcal W}_{\beta}(\ensuremath{\mathfrak S})~\big\} . \end{equation} Furthermore, uniformly in $A\subset{\ensuremath{\mathbb Z}} ^2$, $r >0$ and cutoff parameters $s$, \begin{equation} \label{3.4.energyge} \Is_{A,-}^{\beta}\left( \ensuremath{\mathcal W}_{\beta}(\ensuremath{\mathfrak S})\geq r\right)~\leq~{\rm exp}\left\{ -r\big( 1-\frac{c_1\log |A |}{s}\big)\right\} . \end{equation} \end{lem} The energy estimate \eqref{3.4.energy} provides an upper bound on the probability of observing $\pm$~contours in the vicinity of a skeleton. Before turning to a complementary lower bound, let us dwell on the sample path structure of the contours which is hidden behind these renormalization estimates. \noindent \subsection{Calculus of skeletons} \label{dima_skeleton_calculus} By definition a contour is a self-avoiding closed path of nearest neighbor bonds of ${\ensuremath{\mathbb Z}} ^2$.
For every set $A\subseteq {\ensuremath{\mathbb Z}} ^2$ the Ising measure $\Is_{A,-}^{\beta}$ induces a weight function $q_{A^*}^{\beta^*}$ on the space of such self-avoiding polygons (see Subsection~\ref{ssec_2dIsing}), $$ q_{A^*}^{\beta^*}\left( \gamma\right)~=~\Is_{A,-}^{\beta}\left( \sigma\in\Omega :~ \gamma\ \text{is a $\pm$~contour of }\sigma\right) . $$ In terms of these weights the probability of observing a certain skeleton $S =\left\{ u_1 ,...,u_n\right\}$ can be written as $$ \Is_{A,-}^{\beta}\left( S\right) ~=~\sum_{\gamma\sim S} q_{A^*}^{\beta^*}\left( \gamma\right) . $$ Each microscopic contour $\gamma$ compatible with $S$, $\gamma\sim S$, splits into the union of disjoint open self-avoiding lattice paths $\gamma_k: u_k\to u_{k+1}, \ k=1,...,n$. The analysis of limit properties of $\Is_{A,-}^{\beta}$ comprises two main steps which can be loosely described as follows: \noindent 1) As the renormalization scale $s$ grows, the statistical behavior of the different pieces $\gamma_k$ decouples under $q_{A^*}^{\beta^*}$, that is \begin{equation} \label{3.4.gasplit} \sum_{\gamma\sim S} q_{A^*}^{\beta^*}\left( \gamma\right)\ \approx\ \prod_{k=1}^{n}\left( \sum_{\gamma_k : u_k\to u_{k+1}} q_{A^*}^{\beta^*}\left( \gamma_k\right)\right) . \end{equation} \noindent 2) The $k$-th term ($k=1,...,n$) in the above product corresponds to a $\pm$~interface stretched in the direction of the vector $u_{k+1}-u_{k}\in{\ensuremath{\mathbb R}} ^2$, in other words \begin{equation} \label{3.4.taubk} \sum_{\gamma_k : u_k\to u_{k+1}} q_{A^*}^{\beta^*}\left( \gamma_k\right) \ \approx\ \text{e}^{-\tau_{\beta} (u_{k+1}-u_{k})} . \end{equation} Thus, the skeleton calculus resembles a refined version of the sample path large deviation principle for genuinely two-dimensional random curves. At very low temperatures, a very precise local analysis of the phase separation line has been developed in \cite{DKS},\cite{DS} using the method of cluster expansions.
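Combining the decoupling step \eqref{3.4.gasplit} with the directional asymptotics \eqref{3.4.taubk} one recovers, at least on the heuristic level, the skeleton asymptotics \eqref{3.1.skeleton}: $$ \Is_{A,-}^{\beta}\left( S\right)\ \approx\ \prod_{k=1}^{n}\text{e}^{-\tau_{\beta}\left( u_{k+1}-u_{k}\right)}\ =\ \exp\left\{ -\sum_{k=1}^{n}\tau_{\beta}\left( u_{k+1}-u_{k}\right)\right\}\ =\ \exp\left\{ -\ensuremath{\mathcal W}_{\beta}\left( {\rm Pol}(S)\right)\right\} , $$ the energy $\ensuremath{\mathcal W}_{\beta}\left( {\rm Pol}(S)\right)$ being understood as the sum of the surface tensions of the segments of ${\rm Pol}(S)$.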
Our approach here pertains to the whole of the phase transition region $\beta >\beta_c$, but is strongly linked to the very specific self-duality properties of the two-dimensional nearest neighbor Ising model. We refer to Subsection~\ref{ssec_2dIsing} and, eventually, to \cite{PfisterVelenik97,PfisterVelenik98} for a comprehensive description and study of the relevant properties of the duality transformation. The output of these techniques can be recorded in the following form \begin{lem}[Probabilistic Structure of the Phase Separation Line \cite{PfisterVelenik97}] \label{psline} Given any $A\subset{\ensuremath{\mathbb Z}} ^2$ and any two compatible self-avoiding paths $\lambda_1$ and $\lambda_2$, \begin{equation} \label{3.4.qasuper} q_{A^*}^{\beta^*}\left(\lambda_1\cup\lambda_2 \right)~\geq ~q_{A^*}^{\beta^*}\left(\lambda_1\right) q_{A^*}^{\beta^*}\left(\lambda_2\right) . \end{equation} Furthermore, \begin{equation} \label{3.4.expbouns} \text{e}^{-c_1 (\beta )|\lambda_2 |}~\leq ~\frac{q_{A^*}^{\beta^*}\left(\lambda_1\cup\lambda_2 \right)}{q_{A^*}^{\beta^*}\left(\lambda_1\right)}~ \leq~\text{e}^{-c_2 (\beta )|\lambda_2 |} . \end{equation} On the other hand, given any $A\subseteq {\ensuremath{\mathbb Z}} ^2$ and any three points $u,v,w\in A^*$, the $q^{\beta^*}_{A^*}$ weight of the paths going from $u$ to $v$ through $w$ is bounded above as \cite{PfisterVelenik97} \begin{equation} \label{3.4.qasub} \sumtwo{\lambda :u\to v}{w\in\lambda} q_{A^*}^{\beta^*}\left( \lambda\right) ~\leq~ \left(\sum_{\lambda_1 :u\to w}q_{A^*}^{\beta^*}\left( \lambda_1\right)\rb \left(\sum_{\lambda_2 :w\to v}q_{A^*}^{\beta^*}\left( \lambda_2\right)\rb . \end{equation} Finally, the weights $q_{A^*}^{\beta^*}$ are non-increasing in $A$, and are related to the dual connectivities as \begin{equation} \label{3.4.qarepr} \sum_{\lambda :~ u\to v}q_{A^*}^{\beta^*}\left(\lambda\right) ~=~\left\langle \sigma(u) \sigma (v)\ran{\beta^*}{A^*,f} .
\end{equation} \end{lem} Relation \eqref{3.4.qarepr} is the link to the surface tension: first of all, the impact of a particular set $A$ exponentially diminishes with the distance to $\partial A$ \cite{I1}, \begin{equation} \label{3.4.decay} \left\langle\sigma (u) \sigma (v)\ran{\beta^*}{f}-\exp\left\{ - c_2 (\beta )\text{d}\left(\{ u,v\}, \partial A\right)\right\}~\leq~ \left\langle\sigma (u) \sigma (v)\ran{\beta^*}{A^*,f}~\leq ~ \left\langle\sigma (u) \sigma (v)\ran{\beta^*}{f} , \end{equation} uniformly in $A^*\subseteq{\ensuremath{\mathbb Z}} ^2$ and any $u,v\in A^*$. Moreover the following Ornstein-Zernike type correction formula \cite{Al} holds uniformly in $u,v\in{\ensuremath{\mathbb Z}} ^2$: \begin{equation} \label{3.4.OZ} \exp\left\{-\tau_{\beta}\left( u-v\right) -c_3 (\beta )\log \normsup{u-v}\right\}~\leq ~ \left\langle\sigma (u) \sigma (v)\ran{\beta^*}{f}~\leq~ \exp\left\{-\tau_{\beta}\left( u-v\right)\right\} . \end{equation} \noindent \subsection{Skeleton lower bound} \label{dima_skeletons_lb} The energy estimate \eqref{3.4.energy} is an immediate consequence of the (iterated) sub-multiplicative property \eqref{3.4.qasub}, the representation formula \eqref{3.4.qarepr} and the right-most inequalities in \eqref{3.4.decay} and \eqref{3.4.OZ}. In order to prove a lower bound one essentially needs to reverse the inequality in \eqref{3.4.qasub}. An indirect way to do so is to use the FK representation (see \cite{ScS1} and \cite{IS}). We shall briefly present here a more direct approach which has been developed in \cite{I1} and \cite{PfisterVelenik97}. Qualitatively it gives the same order of corrections as the FK one, but has the clear advantage of being explicitly related to the statistics of the microscopic phase boundaries at different length scales.
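Schematically, the derivation of the upper bound \eqref{3.4.energy} for a single skeleton $S =(u_1 ,...,u_n )$ runs as follows: $$ \Is_{A,-}^{\beta}\left( S\right)~=~\sum_{\gamma\sim S} q_{A^*}^{\beta^*}\left( \gamma\right)~ \leq~\prod_{k=1}^{n}\,\sum_{\lambda_k :u_k\to u_{k+1}}q_{A^*}^{\beta^*}\left( \lambda_k\right)~ \leq~\prod_{k=1}^{n}\left\langle\sigma (u_k) \sigma (u_{k+1})\ran{\beta^*}{f}~ \leq~\text{e}^{-\ensuremath{\mathcal W}_{\beta}\left( {\rm Pol}(S)\right)} , $$ where the first inequality is the iterated sub-multiplicativity \eqref{3.4.qasub}, the second follows from the representation \eqref{3.4.qarepr} together with the right-most inequality in \eqref{3.4.decay}, and the last one is the right-most inequality in \eqref{3.4.OZ}.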
The basic idea is that the phase separation line has rather strong mixing properties; in particular, the paths $\lambda_1$ and $\lambda_2$ on the right hand side of \eqref{3.4.qasub} should interfere, in the case of $(u,v,w)$ being in general position, only in the vicinity of $w$. Thus, at the price of lower order corrections (as we shall see, these corrections are logarithmic in the skeleton scale $s$) the inequality \eqref{3.4.qasub} can be reversed using the super-multiplicativity property \eqref{3.4.qasuper}. The notion of ``general position'' simply means that $u,w$ and $v$ do not form too small an angle and live on the same length scale, and it is quantified by the following \noindent {\bf Definition.} Given a skeleton scale $s\in{\ensuremath{\mathbb N}} $ and a number $\gep >0$, let us say that a triple $(u,w,v)$ of ${\ensuremath{\mathbb Z}} ^2$-lattice points is $(s,\gep )$-compatible, if $$ \frac{s}2 ~\leq ~\min\left\{ \normsup{ w-u},\normsup{ v-w}\right\} ~\leq \max\left\{ \normsup{ w-u},\normsup{ v-w}\right\} ~\leq ~ 2s , $$ whereas $\cos\left( w-u ,v-w\right) \geq -1+\gep $.\qed We shall state the lower bound in terms of the limiting weights $q^{\beta^*}\left(\cdot\right)\stackrel{\gD}{=}\lim_{A^*\nearrow{\ensuremath{\mathbb Z}} ^2_\star}q^{\beta^*}_{A^*}$ (which exist by Lemma~\ref{psline}). \begin{lem} \label{triple} Fix $\gep >0$. Then there exists a scale $s= s(\gep )$, such that \begin{equation} \label{3.4.triple} \sumtwo{\lambda :u\to v}{w\in\lambda} q^{\beta^*}\left( \lambda\right)~\geq~\exp\left\{ -\left( \tau_{\gb} (w-u)+\tau_{\gb} (v-w )\right) -c_1 (\beta )\log s\right\} , \end{equation} uniformly in all skeleton scales $s\geq s(\gep )$ and in all $(s,\gep )$-compatible triples $(u,w,v)$. \end{lem} We sketch the proof of this lemma in Appendix~B.
Iterating \eqref{3.4.triple} we arrive at the following lower bound on the probability of observing a certain regular skeleton: \noindent {\bf Definition.} A skeleton $S =(u_1 ,...,u_n)$ is said to be $(s,\gep )$-regular, if any triple $(u_{i-1},u_i ,u_{i+1})$ of successive points of $S$ is $(s,\gep )$-compatible, and the distance between any two non-neighboring intervals $[u_i ,u_{i+1}]$ and $[u_j ,u_{j+1}]$ exceeds $\gep s$.\qed \begin{lem} \label{skeletonlb} For every $\gep >0$, there exists a number $c_2 =c_2 (\gep )<\infty$, such that uniformly in the skeleton scales $s$ and in all $(s, \gep )$-regular skeletons $S$, \begin{equation} \label{3.4.lb} \begin{split} \Is_{N, -}^{\beta}&\left( \exists~\text{a}~\pm~\text{contour}~\gamma :\ d_{{\ensuremath{\mathbb H}} }(\gamma , {\rm Pol}(S) )\le K(\beta )\sqrt{s}\log s \right) \\ &\ge~ {\rm exp}\left\{ -\ensuremath{\mathcal W}_{\beta} \left({\rm Pol}(S) \right) ~-~c_2 (\gep )\# (S) \log s\right\}\\ &\ge ~ {\rm exp}\left\{ -\ensuremath{\mathcal W}_{\beta} \left({\rm Pol}(S) \right)\left(1 ~+~c_3 (\gep,\beta )\frac{\log s}{s} \right)\right\} , \end{split} \end{equation} where $\# (S)$ denotes the number of vertices in $S$, and the last inequality follows from \eqref{3.1.snumber}. \end{lem} In fact we need lower bounds only for a very specific set of $s$-skeletons, namely those approximating the Wulff shape $\ensuremath{\mathcal K}_{a_N /2m^*}$. These skeletons always satisfy the conditions of the above lemma. An academic attempt to prove a lower bound for all possible shapes would lead to annoying, though solvable, technicalities, but would fail to contribute much to the microscopic theory of phase separation, as we see it.
\section{Structure of The Proof} \label{dima_structure} \setcounter{equation}{0} In order to give a probabilistic characterization of the microscopic canonical state $\Is_{N,-}^{\beta}\left( ~\cdot ~\big| M_N =-m^*N^2 +a_N\right)$ one first derives the sharpest possible lower bound on the probability $\Is_{N,-}^{\beta}\left(M_N =-m^*N^2 +a_N\right)$, and then rules out those geometric events (in terms of skeletons, but with an eventual translation to the language of microscopic spin configurations) which qualify as improbable when compared with this lower bound. \noindent \subsection{Lower bound} \label{dima_structure_lb} The best lower bound comes as an outcome of the optimal combination of the basic local limit Lemma~\ref{KlogN} and the skeleton lower bound \eqref{3.4.lb}. We choose a skeleton approximation of the corresponding Wulff shape $\ensuremath{\mathcal K}_{a_N/2m^*}$, and using local limit estimates steer the magnetization towards the desired value $-m^*N^2 +a_N$. Optimality reflects the choice of the best possible skeleton scale: notice that the estimate \eqref{3.4.lb} becomes sharper with the growth of the cutoff parameter $s(N)$. On the other hand, the area of the microscopic phase region is controlled, with respect to the area inside ${\rm Pol}(S)\sim a_N/2m^*$, only up to a $N\sqrt{s(N)}\log s(N)$ correction (see Appendix~B or \cite{IS}), which, of course, makes the local limit step more expensive for large values of $s(N)$. It turns out that the bounds are balanced on the skeleton scale $s(N)\sim \sqrt[4]{a_N}$. \begin{thm}[\cite{IS}] \label{lb} Uniformly in $a_N\in {\bf M}_N^{+}$, that is for all $a_N\geq 0$ such that $-m^*N^2+a_N\in{\rm Range}(M_N)$, \begin{equation} \label{3.5.lb} \Is_{N,-}^{\beta}\left(M_N =-m^*N^2 +a_N\right)~\geq ~\exp\left\{ -\sqrt{\frac{a_N}{2m^*}}\ensuremath{\mathcal W}_{\beta}\left(\partial\ensuremath{\mathcal K}_1\right) - c_1 (\beta )\sqrt[4]{a_N}\log N \right\} .
\end{equation} \end{thm} \noindent \subsection{Upper bounds} \label{dima_structure_ub} First of all, one derives an upper bound on the shift of the magnetization. On any skeleton scale, \begin{equation} \label{3.5.sdecomp} \Is_{N,-}^{\beta}\left(M_N =-m^*N^2 +a_N\right)~\leq ~\sum_{\ensuremath{\mathfrak S}} \Is_{N,-}^{\beta}\left(M_N =-m^*N^2 +a_N~;~\ensuremath{\mathfrak S} \right) . \end{equation} Due to the intrinsic entropy cancelation under the skeleton coarse graining, and in view of the lower bound \eqref{3.5.lb} and the energy estimate \eqref{3.4.energy}, one can, for example, aim for the maximal term in the above sum. If the phase volume (see \cite{DKS} for the precise definition) of $\ensuremath{\mathfrak S}$ is much less than $a_N/2m^*$, then the deficit of the magnetization should be compensated in the phase of $s(N)$-small contours, which, by Lemma~\ref{supersmall}, exacts a super-surface price in the exponent. On the other hand, if the phase volume of $\ensuremath{\mathfrak S}$ is close to $a_N/2m^*$, then by the isoperimetric inequality and by the energy estimate \eqref{3.4.energy}, the best possible price one should be prepared to pay is already close to ${\rm exp}\left\{ -\ensuremath{\mathcal W}_{\beta}\left(\ensuremath{\mathcal K}_{a_N/2m^*}\right)\right\}$. Again the resulting estimate is subject to an optimization via a careful choice of the skeleton scale $s(N)$. \begin{thm}[\cite{IS}] \label{ub} Uniformly in $a_N\sim N^2$, \begin{equation} \label{3.5.ub} \Is_{N,-}^{\beta}\left(M_N =-m^*N^2 +a_N\right)~\leq ~\exp\left\{ -\sqrt{\frac{a_N}{2m^*}}\ensuremath{\mathcal W}_{\beta}\left(\partial\ensuremath{\mathcal K}_1\right) +c_1 (\beta )\sqrt[4]{a_N}\log N \right\} . \end{equation} \end{thm} A more delicate study \cite{DKS},\cite{IS} of the typical sample properties of the microscopic configuration $\sigma$ under $\Is_{N,-}^{\beta}\left( ~\cdot ~\big| M_N =-m^*N^2 +a_N\right)$ is again based on the analysis of \eqref{3.5.sdecomp}.
At this point the stability Bonnesen-type estimates (see Subsection~1.3 of the Introduction) for the Wulff variational problem become important: they enable one to quantify the conclusion that only those collections $\ensuremath{\mathfrak S}$ which are close to shifts of the Wulff shape $\ensuremath{\mathcal K}_{a_N/2m^*}$ have a chance to survive a comparison with the lower bound \eqref{3.5.lb}. A step further, involving the local limit estimates of Lemma~\ref{KlogN}, is to conclude that all these collections actually contain exactly one large skeleton, which corresponds to the unique large contour asserted by the DKS theorem. \section{Open Problems} \label{dima_problems} \setcounter{equation}{0} There are still important open problems even in the nearest neighbor Ising case. Notably, one knows how to control the precise fluctuations of the phase separation line only at very low temperatures, that is, using the method of cluster expansions \cite{DH}. This is a serious gap in the theory, since large scale statistics of microscopic phase boundaries are ultimately responsible for exact (up to zero order terms) expansions of canonical partition functions \cite{H}. So far, qualitative probabilistic results have been obtained either for very low temperature models \cite{H}, or in the simplified settings of self-avoiding polygons \cite{I3}, \cite{HI} and Bernoulli bond percolation \cite{CI}. Another interesting and apparently important problem is to understand sample path properties of spin configurations when a canonical constraint is imposed in the restricted phase. Apart from giving rise to a potentially fascinating probabilistic structure, this question is closely related to the issue of dynamical spinodal decomposition. There is as yet no matching probabilistic study of phase separation in multiphase two-dimensional models, for example $q$-state Potts models. 
Some results in this direction are reported in \cite{Velenik97}, but this issue is almost entirely open even in the context of the ${\ensuremath{\mathbb L}} _1$-theory. In particular, the corresponding phenomena are still not worked out on the level of macroscopic variational problems; see, however, \cite{ABFH}, \cite{MoS} and the references therein. The key issue, however, which we feel is largely misunderstood, is that at moderately low temperatures the DKS theory of two-dimensional phase segregation, say, in the general context of finite range ferromagnetic models with pair interactions, is far from being complete. What currently exists is an example of how these ideas can be implemented in the nearest neighbor case. At least from the mathematical point of view, the nearest neighbor case is a degenerate one, in the sense that it enables a reduction to pure boundary conditions over decoupled microscopic regions even at temperatures only moderately below critical. This should not be the case for more general ranges of interaction. In this respect the assertion that low temperature expansions should go through for general interactions much along the same lines as they do for the nearest neighbor model seems to be rather beside the point: the real issue is not to kill mixed boundary conditions, but to understand how they should be incorporated into the DKS theory. \part{${\ensuremath{\mathbb L}} _1$-Theory} \label{part_weakWulff} \setcounter{section}{0} On the macroscopic level the phenomenon of phase segregation is studied in terms of concentration properties of the locally averaged magnetization. Statistical properties of the microscopic phase boundaries are washed out, and the backbone of the ${\ensuremath{\mathbb L}} _1$-theory consists of hard, model-oriented renormalization estimates, which enable a sharp surface order analysis of the mesoscopic magnetization profiles. 
Examples of such coarse graining procedures, in the case of Kac, percolation and Ising models, are given in Section~\ref{Examples of mesoscopic phase labels}. \noindent The averaging is performed on various mesoscopic scales: \vskip 0.2cm \noindent {\bf Mesoscopic Notation.} All the intermediate scales are of the form $2^k ,k\in{\ensuremath{\mathbb N}} $. For any fixed $M=2^k$ we split the unit torus $\uTor$ into the disjoint union of the corresponding mesoscopic boxes, \begin{equation} \label{ksplit} \uTor ~=~\bigvee_{x\in\sTor{k}}\sBox{k}(x) , \end{equation} where $\sTor{k}$ is the scaled embedding of the discrete torus $\Tor{M} = \{1, \dots, M\}^d$ into $\uTor$ as $$ \sTor{k} ~\stackrel{\gD}{=} ~ \uTor\cap\left(\frac1M\Tor{M}\right) , $$ and, given $x\in\uTor$, the box $\sBox{k}(x)\subset\uTor$ is defined via $$ \sBox{k}(x)~\stackrel{\gD}{=} ~ x+\Big[-\!\frac{1}{2^{k+1}}~,~\frac{1}{2^{k+1}}~\Big)^d . $$ Let us use $\ensuremath{\mathcal F}_k$ to denote the (finite) algebra of subsets of $\uTor$ generated by the partition \eqref{ksplit}. Given the size of the system $N = 2^n$, the local magnetization $\ensuremath{\mathcal M}_k$ on the scale $M=2^k\leq N$ is always an $\ensuremath{\mathcal F}_{n-k}$-measurable function. This notation should not be confusing: the subindex $k$ in $\ensuremath{\mathcal M}_k$ measures the ``coarseness'' of the mesoscopic magnetization profile. Thus, $\ensuremath{\mathcal M}_0$ corresponds to the microscopic configuration, and $\ensuremath{\mathcal M}_n$ is identically equal to the averaged total magnetization. In general the local magnetization $\ensuremath{\mathcal M}_k$ is a piecewise constant function on $\uTor$ defined as $$ \forall x \in \sTor{n-k}, \forall y \in \sBox{n-k} (x), \qquad \ensuremath{\mathcal M}_k(\sigma,y) ~=~\frac{1}{M^d}\sum_{j\in\dBox{M}(2^nx )}\sigma_j \, . 
$$ Notice that the microscopic counterpart of the box $\sBox{n-k}(x)$ is the box $\dBox{M}(2^nx)$ of side length $M$ centered at $2^n x$.\\ We formulate all the results of Section~\ref{Results and the strategy of the proof} for the nearest neighbor Ising model. Along with super-critical Bernoulli percolation, this is the only instance in which a relatively complete ${\ensuremath{\mathbb L}} _1$-theory has been developed. In both instances, the validity of the ${\ensuremath{\mathbb L}} _1$-theory hinges in a crucial way on the validity of Pisztora's coarse graining \cite{Pisztora1}, which is by far the most profound model related fact employed. Nevertheless, the approach itself is rather robust, and in the subsequent Subsections we shall try to distinguish between specific model dependent properties and more general results. In particular, compactness properties of local magnetization profiles are discussed in Section~\ref{Coarse graining and mesoscopic phase labels} without any reference to specific models. We also briefly indicate how the conditions of the corresponding general exponential tightness Theorem can be verified in several particular cases. \section{Results and the strategy of the proof} \label{Results and the strategy of the proof} \setcounter{equation}{0} \subsection{Main results} \label{subsection Main results} For simplicity, we restrict ourselves to the case of the torus $\Tor{N}$ and denote by $\Is_N$ the Gibbs measure with periodic boundary conditions. Define the total magnetization ${\bf M}_{\Tor{N}}$ as $$ {\bf M}_{\Tor{N}}~\stackrel{\gD}{=}~ \frac{1}{N^d} \sum_{i \in \Tor{N}} \sigma_i . $$ Let us also define the set $\ensuremath{\mathfrak B}_p$ as \begin{align*} \begin{split} \ensuremath{\mathfrak B}_p = \{ \beta \; : \; {\text{Pisztora's coarse graining holds for the Ising model at inverse temperature $\beta$}} \}. 
\end{split} \end{align*} We refer to the original article \cite{Pisztora1} and to \cite{CePi} for the precise relevant definitions (see also the remark at the end of Subsection \ref{subsection : Ising nearest neighbor}). It is known that $\ensuremath{\mathfrak B}_p$ contains all but at most countably many points of the interval $]\tilde \beta_c,\infty[$, where $\tilde \beta_c$ is the so-called slab percolation threshold, which is conjectured to coincide with $\beta_c$.\\ A compact way to state the main result of the ${\ensuremath{\mathbb L}} _1$-theory is: \begin{thm} \label{theo 1} For any $\beta \in \ensuremath{\mathfrak B}_p$ and $m$ in $] {\bar m}, m^*[$, \begin{eqnarray*} \lim_{N \to \infty} \; \frac{1}{N^{d-1}} \log \Is_{N} \big( \big| {\bf M}_{\Tor{N}} \big| \leq m \big) = - \ensuremath{\mathcal W}_\beta (\ensuremath{\mathcal K}_m ), \end{eqnarray*} where ${\bar m} ={\bar m}(\beta )$ and $\ensuremath{\mathcal K}_m$ were defined in Subsection \ref{Variational methods}. \end{thm} \noindent {\bf Remark.} The above Theorem has been established for $\beta \gg 1$ in \cite{Bo}. The only additional ingredient required for an extension of the results of the latter paper to the whole of the temperature range $]\tilde \beta_c,\infty[$ was the validity of Lemma \ref{lem surface tension}. Such a statement happens to be highly non-trivial, and it has been proven in \cite{CePi} along with an alternative derivation of the claim of Theorem \ref{theo 1}. \qed\\ \noindent Theorem~\ref{theo 1} looks like a surface order large deviation principle. Such an appellation, however, would not help to explain the structure of the underlying phenomena. 
In fact Theorem~\ref{theo 1} is essentially equivalent to a seemingly stronger statement on the macroscopic geometry of the phase segregation of local magnetization profiles under the conditional measure $\Is_N\left( ~\cdot~\Big|\big|{\bf M}_{\Tor{N}} \big| \leq m \right)$: For any function $v$ in ${\ensuremath{\mathbb L}} ^1( \uTor, [- \frac{1}{m^*}, \frac{1}{m^*}])$, the $\delta$-neighborhood of $v$ is denoted by $\ensuremath{\mathcal V}(v,\delta)$, \begin{eqnarray*} \ensuremath{\mathcal V}(v,\delta) \stackrel{\gD}{=} \left\{ v' \in {\ensuremath{\mathbb L}} ^1 \big( \uTor, [- \frac{1}{m^*}, \frac{1}{m^*}] \big) \ \big| \qquad \int_{\uTor} |v_x ' - v_x| \, dx \leq \delta \right\}. \end{eqnarray*} The ${\ensuremath{\mathbb L}} _1$-theorem on phase separation says that, for $\beta$ large enough, with $\Is_{N} \left( \, . \; \Big| \, \big| {\bf M}_{\Tor{N}}\big| \le m \right)$-probability converging to 1, the function $\ensuremath{\mathcal M}_k$ is close to some translate of the Wulff shape $m^* \ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\ensuremath{\mathcal K}_m}$. \noindent More precisely, fix a number $\nu <1/d$. \begin{thm} \label{theo 2} For any $\beta \in \ensuremath{\mathfrak B}_p$ and $m$ in $]{\bar m}, m^*[$ the following holds: \noindent For every $\delta >0$, one can choose a scale $k_0 =k_0 (\beta ,\delta )$ such that \begin{eqnarray*} \lim_{N \to \infty} \; \min_{k_0\leq k\leq \nu n} \; \Is_{N} \left( \frac{\ensuremath{\mathcal M}_k}{m^*} \in \bigcup_{x \in \uTor} \ensuremath{\mathcal V}(\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\ensuremath{\mathcal K}_m + x},\delta) \; \Big| \, \big| {\bf M}_{\Tor{N}} \big| \leq m \right) = 1, \end{eqnarray*} where ${\bar m}$ and $\ensuremath{\mathcal K}_m$ were defined in Subsection \ref{Variational methods}. \end{thm} The proofs of Theorems \ref{theo 1} and \ref{theo 2} are similar and are divided into two steps. 
The first step amounts to proving a compactness theorem, and the second to deriving precise logarithmic asymptotics. \subsection{Exponential tightness} Recall \cite{EG} that for any positive $a$ the set $$ K_a \stackrel{\gD}{=} \big\{ v \in \BV \; |\quad \ensuremath{\mathcal P} (\{v=1\}) \leq a \big\}, $$ is compact with respect to convergence in ${\ensuremath{\mathbb L}} ^1(\uTor)$. \begin{pro} \label{prop 1} Let $\beta$ be in $\ensuremath{\mathfrak B}_p$. Then there exists a constant $C(\beta)>0$ such that for all positive $\delta$ one can find $k_0(\delta)$ such that \begin{eqnarray*} \forall a >0, \qquad \limsup_{N \to \infty} \; \frac{1}{N^{d-1}} \max_{k_0(\delta) \leq k\leq \nu n} \log \Is_{N} \left( \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}(K_a,\delta)^c \right) \leq - C(\beta) \, a, \end{eqnarray*} where $\ensuremath{\mathcal V}(K_a,\delta)$ is the $\delta$-neighborhood of $K_a$ in ${\ensuremath{\mathbb L}} ^1( \uTor, [-\frac{1}{m^*},\frac{1}{m^*}])$. \end{pro} This proposition tells us that only the configurations close to the compact set $K_a$ give a contribution of surface order. This statement reduces the complexity of the problem: as $K_a$ is compact, it is enough to derive the leading terms in the logarithmic asymptotics for the probabilities of a finite number of events. In Section~\ref{Coarse graining and mesoscopic phase labels}, we prove that the analog of Proposition \ref{prop 1} holds for a broad class of models. \subsection{Precise logarithmic asymptotics} As the minimizers are known, it is sufficient to derive a lower bound for configurations concentrated close to $\ensuremath{\mathcal K}_m$. 
\begin{pro} \label{prop 2} Let $\beta$ be in $\ensuremath{\mathfrak B}_p$ and let $m$ be in $]{\bar m}, m^*[$. Then \begin{eqnarray*} \liminf_{N \to \infty} \; \frac{1}{N^{d-1}} \min_{k_0(\delta) \leq k \leq \nu n} \log \Is_{N} \left( \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}( \ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\ensuremath{\mathcal K}_m},\delta) \right) \geq - \ensuremath{\mathcal W}_\beta(\ensuremath{\mathcal K}_m ) - o(\delta) \, , \end{eqnarray*} where the function $o(\cdot)$ depends only on $\beta$ and vanishes as $\delta$ goes to 0. \end{pro} According to Proposition \ref{prop 1}, we need to prove the upper bound only for a restricted class of events. \begin{pro} \label{prop 3} Let $\beta$ be in $\ensuremath{\mathfrak B}_p$. Then for all $v$ in $\BV$ such that $\ensuremath{\mathcal W}_\beta(v)$ is finite, one can choose $\delta_0 = \delta_0 (v)$ such that, uniformly in $\delta < \delta_0$, \begin{eqnarray*} \limsup_{N \to \infty} \; \frac{1}{N^{d-1}} \max_{k_0(\delta) \leq k\leq \nu n} \log \Is_{N} \left( \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}(v,\delta) \right) \leq - \ensuremath{\mathcal W}_\beta(v) + o(\delta) \, , \end{eqnarray*} where the function $o(\cdot)$ depends only on $\beta$ and $v$ and vanishes as $\delta$ goes to 0. \end{pro} The Propositions above ensure that, given a precision $\delta$, there is a finite scale $k_0(\delta)$ beyond which the phases are uniformly segregated with this precision. \subsection{Scheme of the proof} The scheme of the proof is well known in the soft context of large deviations: one first proves an exponential tightness property and then a weak large deviation principle (Proposition \ref{prop 2} holds in fact for any bounded variation function with finite perimeter). 
To be sure, the proof itself has nothing to do with the theory of large deviations: the central tools here are the renormalization estimates leading to Peierls type bounds and estimates in the phase of small contours, and, of course, the identification methods used to produce the macroscopic surface tension in the precise logarithmic asymptotics. Thus, Proposition \ref{prop 1} tells us that, under the appropriate renormalization, the occurrence of many small contours or of very large contours is unlikely. It is a straightforward consequence of the general exponential tightness Theorem \ref{thm Compactness}, which we state in Section~\ref{Coarse graining and mesoscopic phase labels}. The statement is reminiscent of the results proven in \cite{BBP}, but the proof itself is based on the analysis of the phase of small contours developed in \cite{I2}, \cite{SS}, \cite{PfisterVelenik97}. To prove Propositions \ref{prop 2} and \ref{prop 3}, we first consider the macroscopic event $\big\{ \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}(v,\delta) \big\}$ and, by using several localization procedures, reduce the problem to computing the probabilities of microscopic events from which, adopting the procedure developed in \cite{Cerf}, we can derive the exact surface tension factor. This enables us to avoid computations related to the microscopic phase boundaries at, however, the principal cost of losing track of the latter. Since the most likely configurations in $\big\{ \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}(v,\delta) \big\}$ are those for which both phases coexist along $\partial^* v$, we would like to prove that a microscopic interface is localized close to this boundary. To derive the lower bound (Proposition \ref{prop 2}), one can enforce such a microscopic interface and then recover the surface tension factor. 
This is not the case for the upper bound (Proposition \ref{prop 3}), because the ${\ensuremath{\mathbb L}} _1$ constraint $\big\{ \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}(v,\delta) \big\}$ imposed on the magnetization is not strong enough to localize the interface close to $\partial^* v$: there might be mesoscopic fingers of one phase percolating into the other. To circumvent this problem, we follow an argument developed in \cite{BBBP} and first prove a weak localization on the mesoscopic level. This involves a surgery procedure called the minimal section argument. This procedure ensures that one can chop off the mesoscopic fingers without changing the probability of the event too much, and therefore localize the interface on the mesoscopic level. The renormalization is an essential feature of this proof. Once the interface is localized on the mesoscopic level, it remains to identify the surface tension. \medskip We now proceed by first defining a coarse graining and deducing the exponential tightness from Theorem \ref{thm Compactness}. Then we compute the logarithmic asymptotics. \section{Coarse graining and mesoscopic phase labels} \label{Coarse graining and mesoscopic phase labels} \setcounter{equation}{0} At every mesoscopic scale $M=2^k$ the local magnetization $\ensuremath{\mathcal M}_k$ gives a coarse grained representation of the system. Statistical properties of the microscopic configurations are washed out, and instead one keeps track only of the local order parameters over the corresponding mesoscopic blocks. These are quantified by the three values $\pm 1$ and $0$, according to whether the local magnetization is sufficiently close to one of the two equilibrium values $\pm m^*$ or not. The $0$-blocks play the role of the mesoscopic phase boundaries, and the $\pm 1$-blocks that of the corresponding mesoscopic phase regions. 
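The passage from a microscopic configuration through the local magnetization to the phase labels can be illustrated by a short Python sketch. The fragment below is a purely illustrative toy for $d=2$: \texttt{local\_magnetization} computes $\ensuremath{\mathcal M}_k$ by block averaging, and \texttt{phase\_labels} applies the naive thresholding rule with accuracy parameter $\zeta$. The function names and parameter values are ours; the actual label constructions used for the Kac, percolation and Ising models, described in Section~\ref{Examples of mesoscopic phase labels}, are more involved.

```python
import numpy as np

def local_magnetization(sigma, k):
    """Block-average a spin configuration on the dyadic scale M = 2**k.

    sigma is an (N, N) array of +-1 spins with N = 2**n; the result is an
    (N // M, N // M) array of averages over the M x M mesoscopic boxes.
    """
    N = sigma.shape[0]
    M = 2 ** k
    return sigma.reshape(N // M, M, N // M, M).mean(axis=(1, 3))

def phase_labels(sigma, k, m_star, zeta):
    """Naive mesoscopic phase labels: a box is labelled +-1 if its averaged
    magnetization is zeta-close to +-m_star, and 0 otherwise."""
    Mk = local_magnetization(sigma, k)
    labels = np.zeros(Mk.shape, dtype=int)
    labels[np.abs(Mk - m_star) < zeta] = 1
    labels[np.abs(Mk + m_star) < zeta] = -1
    return labels

# A pure minus configuration is labelled -1 on every mesoscopic box,
# whereas a box containing a balanced mixture of signs is labelled 0.
sigma = -np.ones((8, 8))
print(phase_labels(sigma, k=2, m_star=0.9, zeta=0.2))
```

Thresholding the block averages in this way is exactly the two-step diagram below, compressed into a dozen lines.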
Thus, the outcome of the renormalization can be schematically represented by the following two-step diagram: \begin{equation*} \left\{ \begin{split} &\text{Microscopic}\\ &\text{configurations} \end{split} \right\} \ \longrightarrow\ \left\{ \begin{split} &\text{Local}\\ &\text{magnetization} \end{split} \right\} \ \longrightarrow\ \left\{ \begin{split} &\text{Mesoscopic}\\ &\text{phase labels} \end{split} \right\} . \end{equation*} \noindent There are two principal results to be discussed in this Section: we show that the ${\ensuremath{\mathbb L}} _1$-difference between the local magnetization and the corresponding phase labels vanishes on the exponential scale, and we give a general exponential tightness criterion for families of $\{\pm 1, 0\}$-valued phase label functions. In Section \ref{Examples of mesoscopic phase labels}, we will indicate how to construct phase labels in the case of Kac, percolation and nearest neighbor Ising models.\\ \noindent {\bf Definition:} A $\{\pm 1,0\}$-valued function $u$ on $\uTor$ is called a mesoscopic phase label if there exists $k\in{\ensuremath{\mathbb N}} $ such that $u$ is an $\ensuremath{\mathcal F}_k$-measurable function. \medskip \subsection{Tightness theorem for mesoscopic phase labels} We now fix a sequence of non-negative numbers $\{\rho_k\}$ such that \begin{equation} \label{rhok} \lim_{k\to\infty} \rho_k ~=~0 . \end{equation} The following compactness result holds uniformly in the microscopic scales $N=2^n$. 
\begin{thm}[Tightness of Mesoscopic Phase Labels] \label{thm Compactness} Let $N=2^n$ and assume that $\{u_k (\omega ,x )\}$ is a sequence of random mesoscopic phase label functions defined on a common probability space $(\Omega_N ,\ensuremath{\mathcal A}_N ,{\ensuremath{\mathbb P}} _N )$, such that the realizations of $u_k$ are $\ensuremath{\mathcal F}_{n-k}$-measurable, $k=1,...,n$, and for every $k$ the following two conditions hold: \noindent {\bf A.} The distribution of the family of random variables $\{| u_k (\omega ,x)|\}_{x\in\sTor{n-k}}$ is stochastically dominated by the Bernoulli site percolation measure ${\ensuremath{\mathbb P}} _{\text{\rm perc}}^{\rho_k}$ on $\sTor{n-k}$. In particular, \begin{equation} \label{A} {\ensuremath{\mathbb P}} _N\left( u_k (x_1 )=0,...,u_k (x_\ell )=0\right)~\leq (\rho_k)^\ell. \end{equation} \noindent {\bf B.} If for two different points $x,y\in \sTor{n-k}$ the corresponding $u_k$-phase labels have opposite signs, that is, if $u_k (x)u_k (y)=-1$, then on any finer scale $k^{\prime} \leq k$ any $*$-connected chain of $\sBox{n-k^{\prime}}$ blocks joining $\sBox{n-k}(x)$ to $\sBox{n-k}(y)$ contains at least one block with zero $k^{\prime}$-label.\\ \noindent Then for every $a>0$ and $\delta >0$ there exists a finite scale $k_0 =k_0 (\delta )$ such that \begin{equation} \label{tightbound} \frac{1}{N^{d-1}}\log{\ensuremath{\mathbb P}} _N\left( u_k\in\ensuremath{\mathcal V} ( K_a ,2\delta)^{\text{c}}\right) \leq ~ -c_1 (d)\min\left\{ \delta 2^{n-dk}~ ,~\frac{a}{2^{(d-1)k_{0}}}~, \frac{\delta 2^{n-dk_0}}{n^d} \right\} ~, \end{equation} for all $k \geq k_0$. \end{thm} \noindent {\bf Remark.} The proof of this general theorem is given in Appendix~A. 
Notice that for $N$ sufficiently large we obtain a simpler surface order estimate which, for every fixed $\nu <1/d$, holds uniformly in all mesoscopic scales $k_0 (\delta) \leq k \leq \nu\log N$, \begin{equation} \label{surface} \frac{1}{N^{d-1}}\log{\ensuremath{\mathbb P}} _N \left( u_k\in\ensuremath{\mathcal V} ( K_a ,2\delta)^{\text{c}}\right)~\leq~ -c_1 (d)\frac{a}{2^{(d-1)k_{0}}} . \end{equation} Also, an inspection of the proof shows that the tightness of the phase labels on a certain scale $k$ does not depend on the validity of Assumptions~A and B on the successive scales $k^{\prime}>k$. In particular, the estimate \eqref{surface} is valid on fixed (large) finite scales $k=k_0$, once Assumption~A is satisfied, and once any $*$-connected sign changing chain of $k_0$-blocks necessarily contains a $0$-block. This simplified version of Theorem~\ref{thm Compactness} is used in the case of Kac potentials, which we discuss in Subsection~\ref{subsection Kac potentials}.\qed \\ \subsection{Relation to magnetization profiles} The original Gibbs measure is related to the above abstract setting in the following way: For every $N=2^n$, one constructs a (possibly enlarged) probability space $(\Omega_N ,\ensuremath{\mathcal A}_N ,{\ensuremath{\mathbb P}} _N )$ on which both the spin variables $\sigma\in \{ -1,+1 \}^{\Tor{N}}$ and various indexed families $\{ u_k^{\zeta}\}$ of mesoscopic phase labels are defined. Such a construction should enjoy the following set of properties: \medskip \noindent {\bf C1.} The marginal distribution of $\sigma$ under ${\ensuremath{\mathbb P}} _N$ is precisely $\Is_N$. \noindent {\bf C2.} For every $\zeta >0$ the family $\{ u_k^{\zeta}\}$ of mesoscopic phase labels satisfies Assumption~A of Theorem~\ref{thm Compactness} with the corresponding sequence $\{ \rho_{k,\zeta}\}$ of site percolation probabilities obeying \eqref{rhok}. 
\noindent {\bf C3.} For every $k\in \{0,...,n\}$ and $\zeta > 0$ the local magnetization profile $\ensuremath{\mathcal M}_k$ and the phase label $u_{k}^{\zeta}$ are related as follows: ${\ensuremath{\mathbb P}} _N$-a.s., \begin{equation} \label{Mkuk} \left| \ensuremath{\mathcal M}_k (x) -m^{*}u_{k}^{\zeta} (x)\right|~\leq~\zeta\qquad\text{whenever}\ | u_{k}^{\zeta}(x) |=1 . \end{equation} Notice that both functions above are $\ensuremath{\mathcal F}_{n-k}$-measurable, that is, \eqref{Mkuk} should be verified over the mesoscopic boxes indexed by the points $x\in\sTor{n-k}$.\\ Under conditions C1-C3, given any $\delta > 0$, one can choose the accuracy $\zeta$ of the coarse graining, a finite scale $k_0 =k_0 (\delta,\beta )$ and a sequence of mesoscopic phase labels $\{ u_k^\zeta \}$ such that, for every fixed $\nu <1/d$, \begin{equation} \label{Mkukbound} \frac{1}{N^{d-1}} \log{\ensuremath{\mathbb P}} _N \left( \max_{k_0\leq k\leq \nu n}\| \ensuremath{\mathcal M}_k -m^{*} u_{k}^\zeta \|_1 >\delta \right)~\leq ~ -c_2 \; 2^{(1 -d \nu) n} . \end{equation} Notice that \eqref{Mkukbound} holds uniformly in the size of the system $N=2^n$, once Assumptions C1-C3 do. Let us check \eqref{Mkukbound}. By the very construction, $$ \| \ensuremath{\mathcal M}_k -m^{*}u_{k}^{\zeta} \|_1~\leq ~ \zeta +\frac{2}{|\sTor{n-k} |} \sum_{x\in\sTor{n-k}} 1_{u_k^{\zeta} (x)=0} . 
$$ Consequently, using the domination by Bernoulli site percolation (Assumption~A), \begin{equation*} \begin{split} &{\ensuremath{\mathbb P}} _N\left( \| \ensuremath{\mathcal M}_k -m^{*}u_{k}^{\zeta} \|_1 >\delta\right)~\leq ~ {\ensuremath{\mathbb P}} _N\left(\frac{1}{|\sTor{n-k} |} \sum_{x\in\sTor{n-k}} 1_{u_k^{\zeta} (x)=0} >\frac{\delta -\zeta}{2}\right)\\ &\ \ \leq {\ensuremath{\mathbb P}} _{\text{perc}}^{\rho_{k,\zeta}}\left(\frac{1}{|\sTor{n-k} |} \sum_{x\in\sTor{n-k}} 1_{u_k^{\zeta} (x)=0} >\frac{\delta -\zeta}{2}\right)~\leq~ \text{exp}\left\{ -c_1 2^{d(n-k)}\log\frac{\delta -\zeta}{2\rho_{k,\zeta}}\right\} . \end{split} \end{equation*} The latter estimate is of super-surface order once $ \rho_{k,\zeta} \ll (\delta -\zeta )/2$ and $k<n/d$. \section{Examples of mesoscopic phase labels} \label{Examples of mesoscopic phase labels} \setcounter{equation}{0} We now show how mesoscopic phase labels can be constructed in the case of Kac, percolation and Ising models. \subsection{Kac potentials} \label{subsection Kac potentials} For this model mesoscopic phase labels are defined on the original space of spins $\sigma\in\{-1, +1\}^{\Tor{N}}$: the coarse graining is obtained by locally averaging the magnetization. Recall that we are using dyadic length scales $N=2^n$. Phase labels are constructed in three steps. First, for any integer $k$ and $\zeta>0$, we introduce the block spin variables $\bar u_k^\zeta$, which label the boxes $\sBox{n-k}$ according to the averaged magnetization over the boxes of linear size $M=2^k$. These $\bar u_k^\zeta$ are constant on each of the blocks $\sBox{n-k}(x)$ with $x \in \sTor{n-k}$: \begin{eqnarray*} \bar u_k^\zeta(\sigma, x)~ \stackrel{\gD}{=} ~ \left\{ \begin{array}{l} \pm 1 \qquad {\rm if} \ \quad | \frac{1}{M^d} \sum_{ i \in \dBox{M} (2^n x)} \sigma_i \mp m^* | < \zeta,\\ 0 \qquad {\rm otherwise}. \end{array} \right. 
\end{eqnarray*} In the Kac case we do not use Theorem~\ref{thm Compactness} in its full generality: the object of the coarse graining is to choose a finite scale $k_0$ such that the family of mesoscopic phase labels is exponentially tight in ${\ensuremath{\mathbb L}} _1$. Recall that the scaling parameter is chosen such that $\gep = 2^{-m}$ with $m$ large but fixed. Eventually the finite renormalization scales $k_0$ are going to satisfy $k_0 = m+ a_0$, where $a_0$ depends on $\beta$ and $\zeta$, but not on $m$. The sign of the $k_0$-label over a box $\sBox{n-k_0}(x)$ depends on more refined information on the fluctuations of the magnetization inside the box: we choose another scale $\ell_0 = m-b_0$, where, as in the case of $a_0$, the scale $b_0$ will eventually depend only on $\beta$ and $\zeta$, and define the family of modified block spins $\{\tilde u_{k_0}^\zeta\}$ on the $k_0$-scale as \begin{eqnarray*} \tilde u_{k_0}^\zeta (\sigma, x) \stackrel{\gD}{=} \left\{ \begin{array}{l} \pm 1 \qquad {\rm if} \quad \qquad \bar u_{\ell_0}^\zeta (\sigma, y) = \pm 1, \qquad \forall ~y\in \sTor{n - \ell_0} \cap \sBox{n-k_0}(x) \\ 0 \qquad {\rm otherwise}. \end{array} \right. \end{eqnarray*} Finally, we define the mesoscopic phase label functions $\{u^\zeta_{k_0}(\sigma,x)\}$. If $\tilde u^\zeta_{k_0} (\sigma,x) = 0$, we set $u^\zeta_{k_0} (\sigma,x) = 0$. If $x,y\in\sTor{n-k_0}$ are $*$-neighbors, but the corresponding modified block spins satisfy $\tilde u^\zeta_{k_0}(\sigma,x) \, \tilde u^\zeta_{k_0}(\sigma,y) < 0$, then $u^\zeta_{k_0} (\sigma,x) = u^\zeta_{k_0} (\sigma,y) = 0$. 
Otherwise, we set $u^\zeta_{k_0}(\sigma,x) = \tilde u^\zeta_{k_0} (\sigma,x)$.\\ A consequence of the Peierls estimate proven in \cite{CP} and \cite{BZ} is that Assumption~A is satisfied, namely: \begin{thm} For any $\beta >1$ there exists $\zeta_0=\zeta_0 (\beta) >0$ such that the following holds: For any $\zeta <\zeta_0$ one can choose $\gep_0 = \gep_0(\zeta)$, $a_0 =a_0 (\zeta )$ and $b_0 =b_0 (\zeta )$ such that, uniformly in the interaction parameters $\gep =2^{-m}<\gep_0$, \begin{eqnarray*} \Is_{\gep,N} \left( u^\zeta_{k_0} (x_1) =0, \dots , u^\zeta_{k_0} (x_r) =0 \right) \leq \exp \left( - \frac{c_0}{\gep^d} r \right), \end{eqnarray*} where, for every fixed $\gep =2^{-m}<\gep_0$, the mesoscopic phase labels $u^\zeta_{k_0}$ are constructed on the scales $k_0 = m+ a_0(\zeta )$ and $\ell_0 =m-b_0 (\zeta )$. \end{thm} \noindent {\bf Remark.} A more refined statement, implying exponential decay of correlations, was proven in \cite{BMP}. Notice that conditions C1-C3 of the previous Section are satisfied by the definition of the mesoscopic phase label functions. Notice also that Assumption~B of Theorem~\ref{thm Compactness} is automatically satisfied on the $k_0$-scale. Thus, the family $\{ u^\zeta_{k_0}\}$ is exponentially tight in ${\ensuremath{\mathbb L}} _1$.\qed \\ A similar renormalization procedure was carried out by Lebowitz, Mazel and Presutti \cite{LMP} for a system of point particles in ${\ensuremath{\mathbb R}} ^d$ interacting via Kac potentials. In this case the study of the phase transition in the continuum is much more involved. Beyond a proof of the liquid-vapor phase transition, their results provide an accurate description of the system in terms of mesoscopic phase labels which represent the liquid and the gaseous phases. 
Such a coarse graining should be helpful in obtaining further results on phase coexistence in the continuum.\\ \subsection{Bernoulli bond percolation} Bernoulli bond percolation exhibits features similar to those of the Ising model, such as a phase transition and surface order behavior in the regime of phase coexistence. Nevertheless, as the setting differs from that of the Ising model, we briefly recall some notation. The set of edges is ${\ensuremath{\mathbb E}} = \big\{ \{x,y\} \; | \; x \sim y \big\}$, where $x \sim y$ means that the vertices are nearest neighbors. An edge $b$ in ${\ensuremath{\mathbb E}} $ is open if $\omega_b =1$ and closed otherwise. To any subset $\Lambda\Subset{\ensuremath{\mathbb Z}} ^d$ we associate the set $[\Lambda]_e$ of edges in $\Lambda$. The space of bond configurations in $\Lambda$ is $\Omega_\Lambda = \{ 0, 1\}^{[\Lambda]_e}$. For a given $p$ in $[0,1]$, we define the Bernoulli bond percolation measure on $\Omega_\Lambda$ by \begin{eqnarray*} \Perc^{p}_{\Lambda} (\omega) = \prod_{b \in [\Lambda]_e} ( 1 - p)^{1 - \omega_b} p^{\omega_b} \, . \end{eqnarray*} For simplicity $\Perc^{p}_N$ denotes the measure on $\Omega_N = \Omega_{\Tor{N}}$. Let $\omega$ be a configuration in $\Omega$. An open path $(x_1, \dots ,x_n)$ is a finite sequence of distinct nearest neighbors $x_1, \dots ,x_n$ such that each edge satisfies $\omega_{\{x_i , x_{i+1} \}} = 1$. We write $\{ A \leftrightarrow B \}$ for the event that there exists an open path joining a site of $A$ to a site of $B$. The connected components of the set of open edges of $\omega$ are called $\omega$-clusters. A phase transition is characterized by the occurrence of an infinite cluster. 
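As an aside, the measure $\Perc^p_N$ is straightforward to sample, and the decomposition of a configuration into $\omega$-clusters is a standard union-find computation. The following Python sketch is our own toy illustration on the two-dimensional torus (recall that the coarse graining discussed below requires $d\geq 3$): it opens each nearest neighbor edge independently with probability $p$ and extracts the open clusters. Isolated vertices appear here as singleton clusters.

```python
import random

def sample_clusters(N, p, seed=0):
    """Bernoulli bond percolation on the N x N torus: open every
    nearest-neighbour edge independently with probability p and return
    the partition of the vertices into open clusters (via union-find);
    vertices touching no open edge come out as singletons."""
    rng = random.Random(seed)
    parent = {(x, y): (x, y) for x in range(N) for y in range(N)}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for x in range(N):
        for y in range(N):
            # two edges per vertex: to the right and upward (periodic)
            for nb in (((x + 1) % N, y), (x, (y + 1) % N)):
                if rng.random() < p:  # the edge {(x, y), nb} is open
                    parent[find((x, y))] = find(nb)

    clusters = {}
    for v in parent:
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())

# Well above p_c (= 1/2 for bond percolation on Z^2) a single cluster
# occupies most of the torus, in line with the phase picture below.
clusters = sample_clusters(N=32, p=0.7, seed=1)
print(max(len(c) for c in clusters) / 32 ** 2)
```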
Define $\Theta_p$ by \begin{eqnarray} \label{perc Theta} \Theta_p = \lim_{N \to \infty} \Perc_N^{p}(\{ 0 \leftrightarrow \partial \Tor{N} \}) \, . \end{eqnarray} Then there is a critical value $p_c$ in $]0,1[$ such that for any $p$ below $p_c$ there is no percolation and $\Theta_p=0$, while for any $p$ above $p_c$ the occurrence of an infinite cluster containing the origin has positive probability $\Theta_p$. In the thermodynamic limit there exists only one limiting Gibbs measure and, almost surely, a unique infinite cluster with local density $\Theta_p$. In order to mimic the coexistence of two phases in the finite domains $\Tor{N}$, we say that one phase is formed by the largest cluster and the other phase by the remaining clusters. For this model, Pisztora introduced a renormalization procedure \cite{Pisztora1}, \cite{DP}, \cite{Pisztora2} which holds as soon as $p>p_c$ and $d \geq 3$. The mesoscopic phase labels $\{ u^\zeta_k \}$ will be defined for any mesoscopic scale $M = 2^k$, where $k$ is an integer which eventually depends on $N$. This construction requires two steps. The first step is to retain only the main features of the typical configurations on finite size boxes $\dBox{M}$. Then we attribute a sign to the blocks $\sBox{n-k}$ according to the phase they represent. Set $M ' =2M$. For any $x$ in $\sTor{n-k}$, the following events depend only on the configuration in the box $\dBox{M'}(2^n x)$. \begin{eqnarray*} U_x = \left\{ \omega \in \Omega_{N} \; \big| \; \text{there is a unique crossing cluster $C^*$ in $\dBox{M'}(2^n x)$} \right\}. \end{eqnarray*} A crossing cluster is a cluster which intersects all the faces of the box. 
Let $\ell$ be an integer smaller than $k$, to be fixed later: \begin{eqnarray*} R_x & = & U_x \bigcap \left\{ \omega \in \Omega_{N} \; \big| \; \text{every open path in $\dBox{M'}(2^n x)$ with diameter larger than $2^\ell$ } \right.\\ & & \text{is contained in $C^*$ } \Big\}, \end{eqnarray*} where the diameter of a subset $A$ of ${\ensuremath{\mathbb Z}} ^d$ is $\sup_{x,y \in A} \|x - y\|_1$. Finally, we consider an event which imposes that the density of the crossing cluster in $\dBox{M}(2^n x)$ is close to $\Theta_p$ with accuracy $\zeta >0$ \begin{eqnarray*} V_x^\zeta = U_x \bigcap \big\{ \omega \in \Omega_{N} \; \big| \quad | C^* \cap \dBox{M}(2^n x)| \in [\Theta_p -\zeta, \Theta_p + \zeta] \, M^d \big\}, \end{eqnarray*} where $| \cdot |$ denotes the number of vertices in a set. Each box $\sBox{n-k}(x)$ is labeled by the variable ${\tilde u}^\zeta_{k} (\omega,x)$ \begin{eqnarray*} \forall x \in \sTor{n-k}, \qquad {\tilde u}^\zeta_{k} (\omega,x) \stackrel{\gD}{=} \left\{ \begin{array}{l} 1 \qquad \text{if} \qquad \omega \in R_x \cap V_x^\zeta,\\ 0 \qquad \text{otherwise}. \end{array} \right. \end{eqnarray*} Let $\{x_1,\dots,x_r \}$ be vertices in $\sTor{n-k}$ which are not $*$-neighbors of $x$; then \cite{Pisztora1} implies that for every $p>p_c$, there exist $k_0(p, \zeta)$ and $\ell_0 (p)$ such that for all $k \geq k_0$ and $k \geq \ell \geq \ell_0$ \begin{eqnarray*} \Perc^{p}_{N} \left({\tilde u}^\zeta_{k} (x) = 0 \ \big| \ {\tilde u}^\zeta_{k} (x_1 ), \dots,{\tilde u}^\zeta_{k} (x_r) \right) \leq \exp( - c_1 \, 2^\ell) + \exp (- c_2 (\zeta) 2^k). \end{eqnarray*} From \cite{LSS} (Theorem 1.3), we deduce that for $k$ and $\ell$ large enough, the random variables $\{ {\tilde u}^\zeta_{k} (x)\}$ are stochastically dominated by a Bernoulli site percolation measure ${\ensuremath{\mathbb P}} ^{\rho_k}_{\rm perc}$ with parameter \begin{eqnarray} \label{domination Y} \rho_k \leq \exp( - c(\zeta) \, 2^{\ell}).
\end{eqnarray} A straightforward way to recover the previous statement is to partition $\sTor{n-k}$ into $c(d)$ sub-lattices $\big( \sTor{n-k-1, i} \big)_{i \le c(d)}$ which are translates of $\sTor{n-k-1}$. Any collection of vertices $\{x_1,\dots,x_r \}$ in $\sTor{n-k}$ can be rearranged into $c(d)$ subsets $\{x_1^{(i)},\dots,x_{r_i}^{(i)} \}$ such that each $\{x_1^{(i)},\dots,x_{r_i}^{(i)} \}$ belongs to $\sTor{n-k-1, i}$. Applying H\"older's inequality, we get \begin{eqnarray*} \Perc^{p}_N \left( {\tilde u}^\zeta_{k} (x_1 ) = 0, \dots,{\tilde u}^\zeta_{k} (x_r) = 0 \right) \leq \prod_{i=1}^{c(d)} \Perc^{p}_N \left( {\tilde u}^\zeta_{k} (x^{(i)}_1 ) = 0, \dots, {\tilde u}^\zeta_{k} (x^{(i)}_{r_i}) = 0 \right)^{\frac{1}{c(d)}} \, . \end{eqnarray*} As the vertices in $\sTor{n-k-1, i}$ are not $*$-neighbors in $\sTor{n-k}$, the domination by a Bernoulli product measure follows. We say that a block $\sBox{n-k}(x)$ is regular if $\tilde u^\zeta_k(x) =1$. Finally, we define the mesoscopic phase labels $u^\zeta_k$ to be equal to 1 on the regular blocks connected to the largest cluster and to $-1$ on the regular blocks disjoint from the largest cluster. Otherwise, we set ${u}^\zeta_{k} (\omega,x) = {\tilde u}^\zeta_{k} (\omega,x) = 0$. From (\ref{domination Y}), the mesoscopic phase labels satisfy assumption A. Notice that if $x$ and $y$ are $*$-neighbors in $\sTor{n-k}$, the boxes $\dBox{M'}(2^n x)$ and $\dBox{M'}(2^n y)$ overlap. Choosing the parameter $\ell\leq k-3$, we ensure that if the boxes $\sBox{n-k}(x)$ and $\sBox{n-k}(y)$ are both regular, then the crossing clusters in these boxes are connected. This implies that assumption B is satisfied: two blocks with $k$-labels of different signs cannot be $*$-connected. The Bernoulli bond percolation model is precisely described by Pisztora's coarse graining: on a sufficiently large scale $2^k$, the typical configurations have a unique crossing cluster surrounded by small islands of size smaller than $2^\ell$.
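One concrete realization of the rearrangement into sub-lattices free of $*$-neighbors is by coordinate parities, which gives $c(d) = 2^d$ classes; the following quick check (our notation, not from the text) verifies that two distinct points of the same class are never $*$-neighbors, since all their coordinate differences are even.

```python
from itertools import product

def parity_class(x):
    """Sublattice of x: one of 2^d translates of the doubled lattice."""
    return tuple(c % 2 for c in x)

d, R = 3, 4
points = list(product(range(R), repeat=d))
classes = {parity_class(x) for x in points}
for u, v in product(points, points):
    if u != v and parity_class(u) == parity_class(v):
        # same class => every coordinate difference is even, so the
        # sup-distance is at least 2: u and v are not *-neighbors
        assert max(abs(a - b) for a, b in zip(u, v)) >= 2
```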
According to Theorem \ref{thm Compactness}, the family $\{ u^\zeta_k \}$ is exponentially tight in ${\ensuremath{\mathbb L}} ^1$.\\ \subsection{Ising nearest neighbor.} \label{subsection : Ising nearest neighbor} An extension of the preceding renormalization procedure applicable to the Ising model has also been introduced in \cite{Pisztora1}. Unlike the Ising model with Kac potentials, this coarse graining is defined on an enlarged phase space via the FK representation. For a review of FK measures, we refer the reader to \cite{Pisztora1}, \cite{ACCN} and \cite{Grimmett}. Let us recall the definition of the random cluster measures (or FK measures), which are a generalization of the Bernoulli bond percolation measures with a correlated bond distribution. To any subset $\Lambda$ of ${\ensuremath{\mathbb Z}} ^d$ and $\pi$ included in $\partial \Lambda$, we associate a set of edges \begin{eqnarray*} [\Lambda]_e^\pi = \big\{ \{x,y\} \; | \; x \sim y, \ x \in \Lambda, \ y \in \Lambda \cup \pi \big\}, \end{eqnarray*} and the space of configurations in $\Lambda$ is $\Omega_\Lambda^\pi = \{ 0, 1\}^{[\Lambda]^\pi_e}$. The first step is to introduce a measure on $\Omega_\Lambda^\pi$. A vertex $x$ of $\Lambda$ is called $\pi$-wired if it is connected by an open path to $\pi$. We call $\pi$-clusters the clusters defined with respect to the boundary condition $\pi$: a $\pi$-cluster is a connected set of open edges in $\Omega_\Lambda^\pi$, and all the clusters which are $\pi$-wired, i.e. connected to $\pi$, are identified as a single cluster. For a given $p$ in $[0,1]$, we define the FK measure on $\Omega_\Lambda^\pi$ with boundary conditions $\pi$ by \begin{eqnarray*} \Perc^{\pi,p}_{\Lambda} (\omega) = {1 \over {\bf Z}_{\Lambda}^{\pi,p}} \left( \prod_{b \in [\Lambda]_e^\pi} ( 1 - p)^{1 - \omega_b} p^{\omega_b} \right) 2^{c^\pi (\omega)}, \end{eqnarray*} where ${\bf Z}_{\Lambda}^{\pi,p}$ is a normalization factor and $c^\pi (\omega)$ is the number of clusters which are not $\pi$-wired.
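On a graph small enough to enumerate, the FK weights (here with free boundary conditions, $\pi = \emptyset$, so that every cluster counts) can be computed by brute force, together with the $\pm 1$ cluster coloring used in the coupling with the Ising model recalled next in the text; this is only an illustrative sketch, with helper names of our own.

```python
import math
import random
from itertools import product

def cluster_map(vertices, opened):
    """Map each vertex to its cluster root (union-find on open edges)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in opened:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return {v: find(v) for v in vertices}

def fk_weight(vertices, edges, omega, p, q=2):
    """Unnormalized FK weight: prod_b (1-p)^{1-w_b} p^{w_b} * q^{#clusters}."""
    opened = [e for e, b in zip(edges, omega) if b]
    n_clusters = len(set(cluster_map(vertices, opened).values()))
    bernoulli = math.prod(p if b else (1 - p) for b in omega)
    return bernoulli * q ** n_clusters

def color_clusters(roots, rng):
    """Coloring step: each cluster gets an independent +/-1 sign."""
    sign = {r: rng.choice((-1, 1)) for r in set(roots.values())}
    return {v: sign[r] for v, r in roots.items()}

V = ['a', 'b', 'c']
E = [('a', 'b'), ('b', 'c'), ('a', 'c')]        # a triangle
beta = 1.0
p_beta = 1 - math.exp(-2 * beta)                # the coupling parameter
weights = {om: fk_weight(V, E, om, p_beta) for om in product([0, 1], repeat=3)}
Z = sum(weights.values())                       # the normalization factor
rng = random.Random(3)
# spins induced by the configuration with edges ab, bc open: one cluster
sigma = color_clusters(cluster_map(V, [('a', 'b'), ('b', 'c')]), rng)
```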
If $\pi = \partial \Lambda$, the boundary conditions are said to be wired and the corresponding FK measure on $\Omega^{\rm w}_{\Lambda}$ is denoted by $\Perc^{\rm w,p}_{\Lambda}$. Finally, the periodic measure on the torus $\Tor{N}$ is denoted by $\Perc^{\rm per,p}_N$ and the phase space by $\Omega_N^{\rm per}$. In order to recover the Gibbs measure $\Is_{\Lambda}$, we fix the percolation parameter $p_\beta = 1 - \exp(-2 \beta)$ and generate the edge configuration $\omega$ in $\Omega_N^{\rm per}$ according to the measure $\Perc^{\rm per ,p_\beta}_N$. Given $\omega$, we randomly equip each $\omega$-cluster with a color $\pm 1$ with probability ${1 \over 2}$, independently of the others. This amounts to introducing the measure $P_N^\omega$ on $\{-1,1\}^{\Tor{N}}$ such that the spin $\sigma_i$ has the color of the cluster attached to $i$. The Gibbs measure $\Is_{N}$ can be viewed as the first marginal of the coupled measure $\Joint_N (\sigma,\omega) = P_N^\omega (\sigma) \Perc_N^{\rm per,p_\beta}(\omega)$ on the space $\{-1,1\}^{\Tor{N}} \otimes \Omega_N^{\rm per}$. In the case of $\pi$-wired boundary conditions, the spins attached to the $\pi$-wired cluster are equal to 1.\\ As a consequence of this representation, one has for any increasing sequence of sets $\Lambda_N$ \begin{eqnarray*} m^* = \lim_{N \to \infty} \Is^+_{\Lambda_N}(\sigma_0) = \lim_{N \to \infty} \Perc_{\Lambda_N}^{\rm w,p_\beta}(\{ 0 \leftrightarrow \partial \Lambda_N \} ) = \Theta_{p_\beta} . \end{eqnarray*} In the following, we use $m^*$ or $\Theta_{p_\beta}$ depending on the context. Furthermore, we suppose that \begin{eqnarray} \label{Theta} \lim_{N \to \infty} \Perc_{\Lambda_N}^{\rm f,p_\beta}(\{ 0 \leftrightarrow \partial \Lambda_N \} ) = \lim_{N \to \infty} \Perc_{\Lambda_N}^{\rm w,p_\beta}(\{ 0 \leftrightarrow \partial \Lambda_N \} ) = \Theta_{p_\beta}.
\end{eqnarray} This property is satisfied for all $\beta$ outside a subset of ${\ensuremath{\mathbb R}} $ which is at most countable (see Lebowitz \cite{Lebowitz} and Pfister \cite{Pf1}). On the scale $M= 2^k$, we define, in the same way as for Bernoulli bond percolation, the variables $\tilde u_k^\zeta (\omega,x)$ which are piecewise constant on each box $\sBox{n-k} (x)$ with $x$ in $\sTor{n-k}$. The mesoscopic phase labels depend on the averaged magnetization in regular blocks. Define the label of $\sBox{n-k} (x)$ by \begin{eqnarray*} u^\zeta_{k} (\sigma,\omega,x) \stackrel{\gD}{=} \left\{ \begin{array}{l} {\rm{sign}}(C^*) \qquad {\text{if}} \qquad {\tilde u}^\zeta_{k} (\omega,x) =1 \ \ {\rm{and}} \ \ | \ensuremath{\mathcal M}_k (\sigma, x) - {\rm{sign}}(C^*) \, m^* | < 2 \zeta,\\ 0 \qquad \qquad \quad \text{otherwise}, \end{array} \right. \end{eqnarray*} where $C^*$ is the crossing cluster in $\dBox{M} (2^n x)$.\\ In a regular box $\sBox{n-k} (x)$ (i.e. $\tilde u^\zeta_k (x) = 1$), the averaged magnetization is controlled by the random coloring of the small clusters included in $\dBox{M} (2^n x)$, so that it is independent of the configurations in the neighboring boxes. In the case of the Ising model, the additional parameter $\ell=\ell(k)$ is tuned in order to control the fluctuations of the magnetization over the small clusters. As a consequence, assumptions A, B and C1-C3 are satisfied for $p_{\beta}$ above a certain non-trivial slab percolation threshold $p_{\tilde \beta_c }$, which is conjectured to coincide with $p_{\beta_c}$ (see \cite{Pisztora1} for details), and Theorem \ref{thm Compactness} holds. \noindent {\bf Remark.} Using the notation of this Subsection, the set $\ensuremath{\mathfrak B}_p$ introduced in Subsection \ref{subsection Main results} could be defined as \begin{eqnarray*} \ensuremath{\mathfrak B}_p = \{ \beta \; : \; \beta > \tilde \beta_c \ \text{and \eqref{Theta} holds} \}.
\end{eqnarray*} \section{Surface tension} \label{section_Surface tension} \setcounter{equation}{0} We are going to derive Propositions \ref{prop 2} and \ref{prop 3} for the Ising model with nearest-neighbor interactions. As explained before, the philosophy of the proof is to start from the macroscopic level and to localize successively on finer scales with the help of a coarse graining. The approach itself is quite general. Nevertheless, the coarse graining is model dependent; therefore, we first need to state an alternative representation of the surface tension in terms of the FK representation in order to use the estimates which will be obtained from Pisztora's coarse graining. The idea of such definitions was introduced in \cite{Cerf}.\\ \subsection{FK representation} We fix a vector $\vec{n}$ in ${\ensuremath{\mathbb S}} ^{d-1}$ and study $\tau_\beta (\vec{n})$. Following the notation of Subsection~\ref{subsection Surface tension}, we consider, for any $\gep$ positive, the parallelepiped $\widehat \Lambda(N,\gep N)$ of ${\ensuremath{\mathbb R}} ^d$ oriented according to $\vec{n}$. Namely, the basis of $\widehat \Lambda(N,\gep N)$, with side lengths equal to $N$, is orthogonal to $\vec{n}$, and the other sides have lengths equal to $\gep N$. For simplicity, its microscopic counterpart $\widehat \Lambda(N,\gep N) \cap {\ensuremath{\mathbb Z}} ^d$ will be denoted by $\Lambda_N (\gep)$.\\ By using the correspondence between the Ising model and the FK representation, one can rewrite $\tau_\beta$ in terms of the bond model. Let $\{ \partial^+ \Lambda_N (\gep) \not \leftrightarrow \partial^- \Lambda_N (\gep) \}$ be the event that there is no open path inside $\Lambda_N (\gep)$ joining $\partial^+ \Lambda_N (\gep) $ to $\partial^- \Lambda_N (\gep) $.
Then, \begin{eqnarray} \label{ST 1} \tau_\beta (\vec{n}) = \lim_{N \to \infty} \, - {1 \over N^{d-1}} \log \Perc^{\rm{w}, p_\beta}_{\Lambda_N(\gep)} \big( \{ \partial^+ \Lambda_N (\gep) \not \leftrightarrow \partial^- \Lambda_N (\gep) \} \big). \end{eqnarray} Notice that the event $\{ \partial^+ \Lambda_N (\gep) \not \leftrightarrow \partial^- \Lambda_N (\gep) \}$ only takes into account the paths inside $\Lambda_N (\gep) $ and not the identification produced by wired boundary conditions. The relation above will be useful only in the proof of Proposition \ref{prop 2}.\\ We are now going to state an approximate expression of the surface tension which is only weakly dependent on the boundary conditions. It will be used in the derivation of Proposition \ref{prop 3}. Let $\Lambda_N ' (\gep)$ be the parallelepiped \begin{eqnarray} \label{ST slab} \Lambda_N ' (\gep) = \left\{ i \in \Lambda_N(\gep) \quad \big| \quad \vec{i} \cdot \vec{n} \in [- \frac{\gep}{4} N,\frac{\gep}{4} N] \right\}, \end{eqnarray} and denote by $\partial^{\rm top} \Lambda_N ' (\gep) $ (resp. $\partial^{\rm bot} \Lambda_N ' (\gep) $) the face of $\partial^+ \Lambda_N ' (\gep) $ (resp. $\partial^- \Lambda_N ' (\gep) $) orthogonal to $\vec{n}$. Let $\{ \partial^{\rm top} \Lambda_N ' (\gep) \not \leftrightarrow \partial^{\rm bot} \Lambda_N ' (\gep) \}$ be the event that there is no open path inside $\Lambda_N ' (\gep)$ connecting $\partial^{\rm top} \Lambda_N ' (\gep) $ to $\partial^{\rm bot} \Lambda_N ' (\gep) $.
One has \begin{lem}{\rm \bf [\cite{Bo} $\beta \gg 1$, \cite{CePi} $\beta \in \ensuremath{\mathfrak B}_p$]} \label{lem surface tension} For any $\beta \in \ensuremath{\mathfrak B}_p$ \begin{eqnarray} \label{ST 2} \tau_\beta (\vec{n}) = - {1 \over N^{d-1}} \log \Perc^{\pi,\rm p_\beta}_{\Lambda_N (\gep) } \left( \{ \partial^{\rm top} \Lambda_N ' (\gep) \not \leftrightarrow \partial^{\rm bot} \Lambda_N ' (\gep)\} \right) + c_{\gep,N} (\pi), \end{eqnarray} where the function $c_{\gep,N}$ goes to 0 as $N$ tends to infinity and $\gep$ goes to 0, uniformly over the boundary conditions $\pi$ and $\vec{n} \in {\ensuremath{\mathbb S}} ^{d-1}$. \end{lem} As will be explained in Part~\ref{part_boundary} on the wetting phenomenon, the system is in fact extremely sensitive to boundary conditions. Nevertheless, in the above Lemma the interface is constrained to lie in $\Lambda_N '(\gep)$, so that it does not feel the influence of the boundary: the boundary conditions are screened because the system relaxes to equilibrium in the region $\Lambda_N (\gep) \setminus \Lambda_N '(\gep)$. Let us first examine the influence of the boundary conditions $\pi$ on the faces of $\Lambda_N (\gep)$ orthogonal to $\vec{n}$. As $\{ \partial^{\rm top} \Lambda_N ' (\gep) \not \leftrightarrow \partial^{\rm bot} \Lambda_N ' (\gep)\}$ is a decreasing event, the FKG inequality implies that it is enough to check that \begin{eqnarray} \label{ST 3} \tau_\beta(\vec{n}) = \lim_{N \to \infty} \, - {1 \over N^{d-1}} \log \Perc^{\rm f, w, p_\beta}_{\Lambda_N (\gep)} \big( \{ \partial^+ \Lambda_N ' (\gep) \not \leftrightarrow \partial^- \Lambda_N ' (\gep) \} \big), \end{eqnarray} where $\Perc^{\rm f, w, p_\beta}_{\Lambda_N (\gep) }$ is the FK measure with free boundary conditions on the faces orthogonal to $\vec{n}$ and wired boundary conditions on the others.
This can be proved by means of a Peierls argument for $\beta$ large enough \cite{Bo}, or by an analysis of the relaxation of the cluster density for $\beta$ in $\ensuremath{\mathfrak B}_p$ \cite{CePi}. As already noticed in \cite{Cerf} in the context of percolation, the influence of the boundary conditions on the sides of $\Lambda_N (\gep)$ parallel to $\vec{n}$ is negligible as $\gep$ goes to 0. This explains why the factor $c_{\gep,N} ( \cdot)$ vanishes uniformly over the boundary conditions.\\ \subsection{Extended representation} We would like to stress that the previous treatment of the surface tension is not fully satisfactory; a more coherent approach would be to consider a more general, model-independent definition expressed only in terms of mesoscopic phase labels. In fact, a definition of surface tension valid in an abstract setting would be difficult to use, because the surgical procedure of the minimal section argument requires a precise knowledge of how the microscopic system is related to the mesoscopic phase labels. \section{Lower bound : Proposition \ref{prop 2}} \setcounter{equation}{0} The proof is divided into three steps. We first approximate the surface $\partial^* \ensuremath{\mathcal K}_m$ by a regular surface $\partial{\widehat K} $ and impose the condition that a mesoscopic interface exists close to $\partial {\widehat K}$. Then, using the definition of surface tension (\ref{ST 1}), we derive Proposition \ref{prop 2}.\\ \subsection{Step 1 : Approximation procedure.} A polyhedral set has a boundary included in the union of a finite number of hyperplanes. The surface $\partial^* \ensuremath{\mathcal K}_m$ can be approximated as follows (see Fig.
\ref{fig_approx_lowerbd}) \begin{thm} \label{thm approx} For any $\delta$ positive, there exists a polyhedral set ${\widehat K}$ such that $$ \ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K} \in \ensuremath{\mathcal V}( \ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\ensuremath{\mathcal K}_m}, \delta) \qquad {\rm and} \qquad \big| \ensuremath{\mathcal W}_\beta ({\widehat K})-\ensuremath{\mathcal W}_\beta (\ensuremath{\mathcal K}_m) \big| \leq \delta. $$ For any $h$ small enough, there are $\ell$ disjoint parallelepipeds $\widehat R^1, \dots, \widehat R^{\ell}$ with bases $\widehat B^1, \dots, \widehat B^{\ell}$ included in $\partial {\widehat K}$, of side length $h$ and height $\delta h$. Furthermore, the sets $\widehat B^1, \dots, \widehat B^{\ell}$ cover $\partial {\widehat K}$ up to a set of measure less than $\delta$, denoted by $\widehat U^\delta =\partial {\widehat K} \setminus \bigcup_{i =1}^\ell \widehat B^i$, and they satisfy \begin{eqnarray*} \Big| \sum_{i = 1}^{\ell} \int_{\widehat B^i} \tau_\beta (\vec{n}_i) \, d \ensuremath{\mathcal H}^{(d-1)}_x - \ensuremath{\mathcal W}_\beta (\ensuremath{\mathcal K}_m) \Big| \leq \delta, \end{eqnarray*} where the normal to $\widehat B^i$ is denoted by $\vec{n}_i$. \end{thm} \noindent The proof is a direct application of Reshetnyak's Theorem and can be found in the paper of Alberti and Bellettini \cite{AlBe}.
\begin{figure}[t] \centerline{ \psfig{file=approx_lowerbd.ps,height=5cm}} \caption{Polyhedral approximation.} \figtext{ \writefig 0.35 6.50 {\footnotesize $\widehat U^\delta$} \writefig -2.90 4.85 {\footnotesize $\vec n_i$} \writefig -4.60 2.65 {\footnotesize $\widehat B^i$} \writefig -1.70 3.60 {\footnotesize $\widehat R^i$} \writefig 0.00 2.30 {\footnotesize $\ensuremath{\mathcal K}_m$} } \label{fig_approx_lowerbd} \end{figure} Using Theorem \ref{thm approx}, we can reduce the proof of Proposition \ref{prop 2} to the computation of the probability of $\{ \frac{\ensuremath{\mathcal M}_k}{m^*} \in \ensuremath{\mathcal V}( \ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K}, \delta) \}$. According to (\ref{Mkukbound}), the estimates can be restated in terms of the mesoscopic phase labels. For any $\delta >0$, there exist $\zeta = \zeta(\delta)$ and $k_0 = k_0 (\delta)$ such that Proposition \ref{prop 2} is implied by \begin{eqnarray} \label{lower 1} \liminf_{N \to \infty} \; { 1 \over N^{d-1}} \min_{k_0 (\delta) \leq k \leq \nu n} \, \log \Joint_N \left( u^\zeta_k \in \ensuremath{\mathcal V} (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K},\delta) \right) \geq - \ensuremath{\mathcal W}_\beta ({\widehat K}) - o(\delta). \end{eqnarray} \subsection{Step 2 : Localization of the interface.} The images of ${\widehat K}$, $\widehat R^i$ and $\widehat U^\delta$ in $\Tor{N}$ will be denoted by $K_N$, $R^i_N$ and $U^\delta_N$. In order to enforce a mesoscopic interface which crosses each $R^i_N$, we define the event $$ \ensuremath{\mathcal A} = \inter_{i = 1}^{\ell} \{ \partial^+ R^i_N \not \leftrightarrow \partial^- R^i_N \} \; . $$ We also consider the event $\ensuremath{\mathcal B}$ of configurations such that the bonds at distance less than 10 from $U^\delta_N$ are closed. Notice that these events depend only on bond variables.
One has \begin{eqnarray} \label{lower 2} \Joint_N \left( u^\zeta_k \in \ensuremath{\mathcal V} (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K}, \delta) \right) \geq \Joint_N \left( \left\{ u^\zeta_k \in \ensuremath{\mathcal V} (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K}, \delta ) \right\} \cap \ensuremath{\mathcal A} \cap \ensuremath{\mathcal B} \right). \end{eqnarray} The interface imposed by the event $\ensuremath{\mathcal A} \cap \ensuremath{\mathcal B}$ decouples $K_N$ from its complement; therefore the system is in equilibrium in $K_N$ and $K_N^c$: a proof similar to that of Theorem \ref{thm Compactness} implies that one can choose $\zeta' = \zeta' (\delta)$ and $k_0 ' = k_0 '(\delta)$ such that \begin{eqnarray*} \lim_{N \to \infty} \; \max_{k_0 ' (\delta) \leq k \leq \nu n} \, \Joint_N \left( \int_\Lambda | u_k^{\zeta '} (x) - 1| \, dx \geq \frac{\delta}{2} \ {\rm or} \ \int_\Lambda | u_k^{\zeta '} (x) + 1| \, dx \geq \frac{\delta}{2} \ \Big| \ \ensuremath{\mathcal A} \cap \ensuremath{\mathcal B} \right) = 0 \; , \end{eqnarray*} where $\Lambda$ stands for $\widehat K$ or $\widehat K^c$. Thus, for $N$ large enough, (\ref{lower 2}) can be rewritten as \begin{eqnarray} \label{lower 3} \min_{k_0 (\delta) \leq k \leq \nu n} \, \Joint_N \left( u^\zeta_k \in \ensuremath{\mathcal V} (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K},\delta) \right) \geq \frac{1}{8} \, \Perc^{\rm per, p_\beta}_N \left( \ensuremath{\mathcal A} \cap \ensuremath{\mathcal B} \right).
\end{eqnarray} \subsection{Step 3 : Surface tension.} Combining the definition of surface tension (\ref{ST 1}), inequality (\ref{lower 3}) and Theorem \ref{thm approx}, we get \begin{eqnarray*} \liminf_{N \to \infty} \, \frac{1}{N^{d-1}} \, \min_{k_0 (\delta) \leq k \leq \nu n} \, \log \Joint_N \left( u^\zeta_k \in \ensuremath{\mathcal V} (\ifmmode {1\hskip -3pt \rm{I}} \else {\hbox {$1\hskip -3pt \rm{I}$}}\fi_{\widehat K},\delta) \right) \geq - \sum_{i =1}^\ell \int_{{\widehat B}^i} \, \tau_\beta (\vec{n}_i) \, d \ensuremath{\mathcal H}_x^{d-1} - o(\delta). \end{eqnarray*} We have also used the fact that the event $\ensuremath{\mathcal B}$ is supported by at most $c(d,\delta) N^{d-1}$ edges, where $c(d,\delta)$ vanishes as $\delta$ goes to 0. Therefore the probability of $\ensuremath{\mathcal B}$ is negligible at the surface order scale. \section{Upper bound : Proposition \ref{prop 3}} \setcounter{equation}{0} The proof is divided into three steps. First we decompose $\partial^* v$ in order to reduce the proof to local computations in small regions. Then, in each region, we localize the interface on the mesoscopic level via the minimal section argument. Finally, the last step is devoted to the computation of the surface tension factor. \subsection{Step 1 : Approximation procedure.} We approximate $\partial^* v$ by a finite number of parallelepipeds (see Fig. \ref{fig_approx_upperbd}). \begin{thm} \label{theo ABCP} For any $\delta$ positive, there exists $h$ positive such that there are $\ell$ disjoint parallelepipeds ${\widehat R^1}, \dots, \widehat R^{\ell}$ included in $\uTor$ with bases $\widehat B^1, \dots, \widehat B^\ell$ of size $h$ and height $\delta h$. The basis $\widehat B^i$ divides $\widehat R^i$ into two parallelepipeds $\widehat R^{i,+}$ and $\widehat R^{i,-}$, and we denote by $\vec{n}_i$ the normal to $\widehat B^i$.
Furthermore, the parallelepipeds satisfy the following properties \begin{eqnarray*} \int_{\widehat R^i} | \ensuremath{\mathcal X}_{\widehat R^i}(x) - v (x)| \, dx \leq \delta \, {\rm vol}(\widehat R^i) \quad {\rm{and}} \quad \Big| \sum_{i = 1}^{\ell} \int_{\widehat B^i} \tau_\beta (\vec{n}_i) \, d \ensuremath{\mathcal H}^{(d-1)}_x - \ensuremath{\mathcal W}_\beta (v) \Big| \leq \delta, \end{eqnarray*} where $\ensuremath{\mathcal X}_{\widehat R^i} = 1_{\widehat R^{i,+}} - 1_{\widehat R^{i,-}}$ and the volume of $\widehat R^i$ is ${{\rm vol}(\widehat R^i)} = \delta h^d$. \end{thm} \noindent This Theorem is a rather standard assertion of geometric measure theory. A variation of it has been formulated and applied in the context of the ${\ensuremath{\mathbb L}} _1$-theory of phase segregation in \cite{ABCP}, along with a sketch of the proof which, however, contained a gap (see \cite{Bo} for a detailed proof along the lines of \cite{ABCP}). A very clean alternative derivation of a similar result has been given by Cerf \cite{Cerf} using the Vitali covering Theorem. \begin{figure}[t] \centerline{ \psfig{file=approx_upperbd.ps,height=5cm}} \figtext{ \writefig -3.05 4.22 {\footnotesize $h$} \writefig -4.6 2.80 {\footnotesize $\tfrac12\delta h$} \writefig -1.3 4.9 {\footnotesize $\{v=-1\}$} \writefig 0.70 1.40 {\footnotesize $\{v=1\}$} \writefig 0.88 4.00 {\footnotesize $\vec n_i$} \writefig -0.35 2.30 {\footnotesize ${\widehat B}^i$} \writefig 2.35 4.05 {\footnotesize ${\widehat R}^{i,+}$} \writefig 2.14 5.00 {\footnotesize ${\widehat R}^{i,-}$} \writefig 1.60 2.50 {\footnotesize ${\widehat R}^i$} } \caption{Approximation by parallelepipeds.} \label{fig_approx_upperbd} \end{figure} Theorem~\ref{theo ABCP} enables us to decompose the boundary into regular sets (see Fig.
\ref{fig_approx_upperbd}) so that it will be enough to consider events of the type \begin{eqnarray*} \left\{ \frac{\ensuremath{\mathcal M}_k}{m^*} \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \right\} \, , \end{eqnarray*} where $\ensuremath{\mathcal V}( \widehat R^i , \gep)$ is the $\gep$-neighborhood of $\ensuremath{\mathcal X}_{\widehat R^i}$ \begin{eqnarray*} \ensuremath{\mathcal V} (\widehat R^i, \gep) = \left\{ v^\prime \in {\ensuremath{\mathbb L}} ^1 \big( \uTor \big) \ \big| \quad \int_{\widehat R^i} | v^\prime(x) - \ensuremath{\mathcal X}_{\widehat R^i}(x) | \, dx \leq \gep \right\}. \end{eqnarray*} Using (\ref{Mkukbound}), we see that to derive Proposition \ref{prop 3}, it is enough to prove the following statement for any $\delta$ positive, with $k_0 = k_0 (\delta)$ and $\zeta = \zeta (\delta)$ \begin{eqnarray*} \limsup_{N \to \infty} \frac{1}{N^{d-1}} \max_{k_0 (\delta) \leq k \leq \nu n} \, \log \Joint_N \big( u^\zeta_k \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \big) \leq - \ensuremath{\mathcal W}_\beta (v) + C(\beta,v) \delta. \end{eqnarray*} \subsection{Step 2 : Minimal section argument.} The microscopic domain associated with $\widehat R^i$ is $R^i_N = N \widehat R^i \cap \Tor{N}$. We also set $R^{i,+}_N = N \widehat R^{i,+} \cap \Tor{N}$ and $R^{i,-}_N = R^i_N \setminus R^{i,+}_N$. At the scale $M =2^k$, we associate with any configuration $(\sigma,\omega)$ the set of {\it bad} boxes, which are the boxes $\dBox{M}$ intersecting $R_N^i$ labeled by $0$ and the ones intersecting $R^{i,+}_N$ (resp. $R^{i,-}_N$) labeled by $-1$ (resp. $1$). For any integer $j$, we set $\widehat B^{i,j} = \widehat B^i + j \, c(d) 2^{n-k} \, \vec{n}_i$ and define \begin{eqnarray*} B^{i,j}_N = \big\{ j ' \in {R^i_N} \ | \ \exists x \in \widehat B^{i,j}, \qquad \|j' - Nx\|_1 \leq 10 \big\}.
\end{eqnarray*} Let $\ensuremath{\mathcal B}_i^j$ be the smallest connected set of boxes $\dBox{M}$ intersecting $B^{i,j}_N$. By construction, the $\ensuremath{\mathcal B}_i^j$ are disjoint surfaces of boxes. For $j$ positive, let $n_i^+(j)$ be the number of {\it bad} boxes in $\ensuremath{\mathcal B}_i^j$ and define \begin{eqnarray*} n^+_i = \min \big\{ n_i^+(j): \qquad 0< j < \frac{\delta h}{2c(d)} 2^{n-k} \big\}. \end{eqnarray*} Call $j^+$ the smallest location where the minimum is achieved and define the minimal section in $R^{i,+}_N$ as $\ensuremath{\mathcal B}_i^{j^+}$. For $j$ negative, we denote by $\ensuremath{\mathcal B}_i^{j^-}$ the minimal section in ${R^{i,-}_N}$ and by $n_i^-$ the number of {\it bad} boxes in $\ensuremath{\mathcal B}_i^{j^-}$ (see Fig.~\ref{fig_minsec}). \begin{figure}[t] \centerline{ \psfig{file=minsec.ps,height=7cm}} \figtext{ \writefig 0.60 .14 {\footnotesize bad blocks} \writefig -1.9 7.40 {\footnotesize bad blocks} \writefig 2.20 4.9 {\footnotesize $\{v=-1\}$} \writefig -3.20 1.00 {\footnotesize $\{v=1\}$} \writefig 4.10 1.60 {\footnotesize $\ensuremath{\mathcal B}_i^{j^-}$} \writefig 4.10 6.30 {\footnotesize $\ensuremath{\mathcal B}_i^{j^+}$} \writefig -4.65 2.30 {\footnotesize ${R_N^{i,+}}'$} \writefig -4.65 5.30 {\footnotesize ${R_N^{i,-}}'$} } \caption{Minimal sections.} \label{fig_minsec} \end{figure} For any configuration $(\sigma,\omega)$ such that $u^\zeta_k (\sigma,\omega)$ belongs to $\bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i))$, one can bound the number of {\it bad} boxes in the minimal sections by \begin{eqnarray} \label{upper 3} \sum_{i =1}^\ell n^+_i + n_i^- \leq \delta C_1(v) 2^{(d-1)(n-k)} \, . \end{eqnarray} Such an estimate implies that a mesoscopic interface is mainly located between the two minimal sections and that only some mesoscopic fingers attached to the interface may percolate.
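Schematically, the selection of the minimal section is a minimization over the candidate sections; the sketch below (with hypothetical data and helper names of our own) just makes the selection rule explicit.

```python
def minimal_section(bad_counts):
    """Given bad_counts[j], the number of bad boxes in the candidate section
    B_i^j, return (n_i^+, j^+): the minimal count and the smallest index
    attaining it (toy version of the minimal section selection)."""
    n_min = min(bad_counts)
    return n_min, bad_counts.index(n_min)

# hypothetical counts of bad boxes across five candidate sections
n_plus, j_plus = minimal_section([4, 2, 7, 2, 5])   # -> (2, 1)
```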
As these fingers will cross the minimal sections through {\it bad} boxes, the strategy is to modify the configuration $\omega$ on the {\it bad} boxes so that no fingers can percolate in the new configuration. More precisely, we introduce the set \begin{eqnarray*} \ensuremath{\mathcal A} = \big\{ \omega \in \Omega^{\rm per}_{N} \ \big| \quad \exists \sigma \ {\rm such \ that} \ u^\zeta_k (\sigma,\omega) \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \big\} \, , \end{eqnarray*} and for any $\omega$ in $\ensuremath{\mathcal A}$ we define $\bar \omega$ as the configuration with closed edges on the boundary of the {\it bad} blocks in the minimal sections and equal to $\omega$ otherwise. Inequality (\ref{upper 3}) implies that $\omega$ and $\bar \omega$ differ on at most $\delta C_2(v) N^{d-1}$ edges, so that we can control precisely the cost of the surgical procedure which consists in isolating the {\it bad} blocks in the minimal sections by closing the edges around them: \begin{eqnarray} \label{upper 4} \Joint_N \left( u^\zeta_k (\sigma,\omega) \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \right) & \leq & \Perc_N^{\rm per, p_\beta} \big( \ensuremath{\mathcal A} \big)\\ & \leq & \exp \big( \delta \, C_3 (v,\beta) N^{d-1} \big) \; \Perc_N^{\rm per, p_\beta} \big( \bar \ensuremath{\mathcal A} \big) \, , \nonumber \end{eqnarray} where $\bar \ensuremath{\mathcal A} = \{ \bar \omega \; |\; \omega \in \ensuremath{\mathcal A}\}$. \subsection{Step 3 : Surface tension estimates.} Let ${\widehat R^i} \, '$ be the parallelepiped included in $\widehat R^i$ with basis $\widehat B^i$ and height $\frac{\delta}{2} h$. Its microscopic counterpart is ${R^i_N} \, '$. We now check that $\bar \ensuremath{\mathcal A}$ is included in $\bigcap_{i =1}^\ell \{ \partial^{\rm top} {R^i_N} ' \not \leftrightarrow \partial^{\rm bot} {R^i_N} '\}$.
This amounts to saying that the minimal section argument not only enables us to find a mesoscopic interface in $R^i_N$, but that this interface in fact exists at the microscopic level. To see this, choose any configuration $\omega$ in $\ensuremath{\mathcal A}$ which contains an open path ${\bf C}$ joining $\partial^{\rm top} {R^i_N} '$ to $\partial^{\rm bot} {R^i_N} '$, and suppose that ${\bf C}$ crosses the minimal sections without intersecting a {\it bad} box. Then ${\bf C}$ intersects two regular boxes $\dBox{M}(2^n x^+)$ and $\dBox{M}(2^n x^-)$ in $\ensuremath{\mathcal B}_i^{j^+}$ and $\ensuremath{\mathcal B}_i^{j^-}$. According to the definition of the coarse graining, this would imply that the crossing clusters of $\dBox{M}(2^n x^+)$ and $\dBox{M}(2^n x^-)$ are connected to ${\bf C}$, so that ${\tilde u}^\zeta_k (x^+) = {\tilde u}^\zeta_k (x^-)$. Therefore one of these boxes has to be a {\it bad} box. From (\ref{upper 4}), we get \begin{eqnarray*} \label{upper 5} \Joint_{N} \left( u^\zeta_k \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \right) && \leq \exp \big( \delta \, C_3 (v,\beta) N^{d-1} \big) \\ && \qquad \Perc^{\rm per,p_\beta}_{N} \big( \bigcap_{i =1}^\ell \{ \partial^{\rm top} {R^i_N} ' \not \leftrightarrow \partial^{\rm bot} {R^i_N} ' \} \big). \end{eqnarray*} Conditioning outside each domain $R^i_N$ and using (\ref{ST 2}), we derive \begin{eqnarray*} \limsup_{N \to \infty} \; {1 \over N^{d-1}} \max_{k_0(\delta) \leq k \leq \nu n} \, & \log & \Joint_N \left( u^\zeta_k \in \bigcap_{i =1}^{\ell} \, \ensuremath{\mathcal V}(\widehat R^i , \delta {\rm vol}(\widehat R^i)) \right) \leq\\ && \qquad - \sum_{i =1}^\ell \int_{\widehat B^i} \tau_\beta (\vec{n}_i) \, d \ensuremath{\mathcal H}^{d-1}_x + C_4 (\beta,v) \delta. \end{eqnarray*} This concludes the proof of the Proposition.
\section{Open problems} We would like to mention some open questions related to the ${\ensuremath{\mathbb L}} _1$-theory: \begin{enumerate} \item Extension of the ${\ensuremath{\mathbb L}} _1$-theory to general finite range models and to the context of Pirogov-Sinai theory. \item Proof of the Wulff construction for continuum models in an ${\ensuremath{\mathbb L}} _1$-setting. \item Upgrade of the concentration properties to the Hausdorff distance, based on more delicate versions of the minimal section argument; some results of this sort should appear in \cite{BodineauIoffeVelenik99}. \item A more challenging problem would be to provide an accurate description of phase segregation \`a la DKS. In particular, one should understand how to control phase boundaries and prove local limit results with boundary conditions which are only statistically pure. \end{enumerate}
\section{Introduction} The OPERA experiment \cite{1}, located at the Gran Sasso laboratory (LNGS), aims at observing the $\nu_{\mu}\rightarrow\nu_{\tau}$ neutrino oscillation in the direct appearance mode in the CERN neutrino to Gran Sasso (CNGS) \cite{3a,3b} beam by detecting the decay of the $\tau$ produced in charged current (CC) interactions. A detailed description of the detector can be found in \cite{1,2a,2b,2c,2d,2e,2f}. The OPERA detector consists of two identical Super Modules (SM), each consisting of a target area and a muon spectrometer, as shown in Fig. \ref{strauss-fig1}. The target area consists of alternating layers of scintillator strip planes and target walls. The muon spectrometer is used to reconstruct and identify muons from $\nu_{\mu}$-CC interactions and to estimate their momentum and charge. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig01-eps-converted-to.pdf}}} \caption{Picture of the OPERA detector, with a view of a reconstructed neutrino interaction occurring in the 2nd Super Module.} \label{strauss-fig1} \end{myfigure} \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig14-eps-converted-to.pdf}}} \caption{The CNGS neutrino beamline. Figure from \cite{6}.} \label{author-fig4} \end{myfigure} The target walls are trays in which target units of $10\times12.5\,\mathrm{cm^2}$ and a depth of 10\,$X_0$ in lead (7.9\,cm), with a mass of around 10~kg each, are stored: they are also referred to as bricks. A brick is formed by alternating layers of lead plates and emulsion films (2 emulsion layers separated by a plastic base), building an emulsion cloud chamber (ECC). This provides high granularity and high mass, which is ideal for $\nu_{\tau}$ interaction detection. The left panel of Fig. \ref{strauss-fig3} shows an image of the unwrapped ECC brick. On the right, the arrangement of the scintillator strip planes (Target Tracker, TT) and the ECC is shown.
Note an extra pair of emulsion films in a removable box called a changeable sheet (CS), shown in blue. The total mass of each target area is about 625\,tons, leading to a target mass of 1.25\,ktons for 145'000 bricks. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig03-eps-converted-to.pdf}\includegraphics{thomasstrauss_2012_0n_fig05-eps-converted-to.pdf}}} \caption{$\tau$ detection principle in OPERA.} \label{strauss-fig3} \end{myfigure} \section{DAQ and analysis} The information from the reconstructed event, recorded by the electronic detectors, is used to predict the most probable ECC for the neutrino interaction vertex \cite{4}. A display of a reconstructed event is shown in Fig. \ref{strauss-fig1}. Fig. \ref{strauss-fig2} shows the procedure for localizing the event vertex in the ECC, by extrapolating the reconstructed tracks from the electronic detector to the CS emulsion films. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig02-eps-converted-to.pdf}}} \caption{Event detection principle in the OPERA experiment. The candidate interaction brick is determined from the prediction of the electronic detector (blue). The changeable sheet (CS) is used to confirm the prediction. From the CS result, the tracks are followed up to the interaction point inside the ECC.} \label{strauss-fig2} \end{myfigure} The signal recorded in the CS films either confirms the prediction from the electronic detector or acts as a veto, triggering a search in neighboring bricks to find the correct ECC in which the neutrino interaction is contained. After a positive CS result, the ECC is unpacked, and the emulsion films are developed and sent to one of the various scanning stations in Japan and Europe. Dedicated automatic scanning systems allow us to follow the tracks from the CS prediction up to their stopping point.
Around these stopping points, a volume of 1\,cm$^2$ times 15\,emulsion films is scanned to find the interaction vertex. As illustrated in Fig. \ref{strauss-fig2}, only track segments in the active emulsion volume are visible, and a reconstruction of the event is needed to find tracks and vertices. A dedicated procedure, called a ``decay search", is used to search for possibly interesting topologies, like the $\tau$-decay pictured in Fig. \ref{strauss-fig2}. The accuracy of the track reconstruction goes from cm in the electronic detector, down to mm for the CS analysis and to micrometric precision in the final vertex reconstruction (after aligning the ECC emulsion plates with passing-through cosmic-ray tracks). \subsection{Tau detection} $\tau$ detection is only possible due to the micrometric resolution of the emulsion films, as it allows us to separate the primary neutrino interaction vertex from the decay vertex of the $\tau$ particle. The most prominent backgrounds for the $\tau$ decay search are hadron scattering and charged charm decays. The background from hadron scattering can be controlled by cuts applied to the event kinematics. The background due to charm can be reduced by identifying the muon at the primary vertex, as charm will occur primarily in $\nu_{\mu}$-CC interactions (for further details see \cite{2e,4}). After topological and kinematical cuts are applied, the number of background events in the nominal event sample is anticipated to be 0.7 at the end of the experiment. In 2010, the first $\nu_{\tau}$ candidate was reconstructed inside the OPERA emulsion. It was recorded in an event classified as a neutral current, as no muon was identified in the electronic detector. To crosscheck the $\tau$ hypothesis, all tracks were followed downstream of the vertex until their stopping or their re-interaction point. They were all identified as hadrons, and no soft muon ($E<2$\,GeV) was found.
In 2010, the expected total number of $\tau$ candidates was 0.9, while the expected background was less than 0.1 events. More details on the analysis are presented in \cite{2e}. Fig. \ref{strauss-fig8} shows a picture of this event. Track number 4, labeled as the parent, is the $\tau$, decaying into one charged daughter track. The two showers are most likely connected to the decay vertex of the $\tau$, rather than activity from the primary interaction. Thus the decay is compatible with: $\tau^-\rightarrow\rho^-(\pi^-+\pi^0)\,\nu_{\tau}$. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig08-eps-converted-to.pdf}}} \caption{Display of the 2010 $\tau^-$ candidate event. Top left: view transverse to the neutrino direction. Top right: same view zoomed on the vertices. Bottom: longitudinal view. Figure from \cite{2e}.} \label{strauss-fig8} \end{myfigure} \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig09-eps-converted-to.pdf}}} \caption{MC distribution of: a) the kink angle for $\tau$ decays, b) the path length of the $\tau$, c) the momentum of the decay daughter, d) the total transverse momentum $P_T$ of the detected daughter particles of $\tau$ decays with respect to the parent track. The red band shows the 68.3\,\% of values allowed for the candidate event, and the dark red line indicates the most probable value. The dark shaded area represents the excluded region corresponding to the a priori $\tau$ selection cuts. Figure from \cite{2e}.} \label{strauss-fig9} \end{myfigure} Fig. \ref{strauss-fig9} shows the cuts used for the selection criteria defined at the time of the proposal and the kinematic variables of the $\tau$ decay observed by the OPERA experiment. At the time of this conference, the number of expected $\tau$ candidates was 1.7, with 0.5 events expected in the single-prong channel.
The expected background for the analyzed event sample corresponding to $4.9\times10^{19}$\,protons on target (pot) was $0.16\pm0.05$ events. At the time of writing these proceedings, a second $\tau$ candidate had appeared \cite{7a}. \subsection{Physics run performance and data analysis status} Since 2007, the OPERA experiment has collected a total of $18.5\times10^{19}$\,pot. This corresponds to about 15'000 interactions in the target areas of the experiment. Fig. \ref{strauss-fig6} shows, from top to bottom as a function of time, the integrated number of events: occurring in the target (showing the CNGS shutdown periods); with their vertex reconstructed by the electronic detectors; for which at least one brick has been extracted; for which at least one CS has been analyzed; for which this analysis has been positive (track stubs corresponding to the event have been found); for which the brick has been analyzed; for which the vertex has been located; and for which the decay search has been completed. The efficiency of the analysis of the most probable CS is rather low, about 65\,\%, and is significantly lower for NC events, among which $\nu_{\tau}$ interactions are the most likely to be found. To recover this loss, multiple brick extraction is performed; this brings the final efficiency of observing tracks of the event in the CS to about 74\,\%. The efficiency for locating an event seen on the CS in the brick is about 70\,\%. At the time of this conference, a total of 4611 events had been localized in the bricks and the decay search had been completed for 4126 of them.
\begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig07-eps-converted-to.pdf}}} \caption{Events recorded and analyzed in the OPERA experiment from 2008 until 2012.} \label{strauss-fig6} \end{myfigure} After an ECC has been identified as the most likely interaction brick, it is developed and sent to one of the scanning laboratories, where it is scanned within a short time. After the event has been located, a dedicated decay search is performed to obtain a data sample which can be compared to MC and which provides uniform data quality from all laboratories. This decay search includes the search for decay daughters and a reconstruction of the kinematics of the particles at the vertex. \subsection{Charm decay topologies in neutrino interactions} In about 5\,\% of the $\nu_{\mu}$-CC interactions, a charmed particle is produced at the primary vertex. Charmed particles have lifetimes similar to that of the $\tau$ and similar decay channels. Thus charm events provide a subsample of decay topologies similar to $\tau$ decays, for which the detection efficiency can be estimated based on MC simulation. A study of a high-purity selection of charm events in the 2008 and 2009 data shows agreement with MC expectations \cite{5}. One-prong charm decay candidates are retained if the charged daughter particle has a momentum larger than 1\,GeV/c. This leads to an efficiency of $\epsilon_{\mbox{short}} = 0.31 \pm 0.02 (\mbox{stat.})\pm 0.03 (\mbox{syst.})$ for short and $\epsilon_{\mbox{long}} = 0.61 \pm 0.05 (\mbox{stat.}) \pm 0.06 (\mbox{syst.})$ for long charm decays, where `long' means that the production and decay vertices are not located in the same lead plate.
The charm search is complete for 2167 CC interactions. In these, we expect $51\pm7.5$ charm candidates, with a background of $5.3\pm2.3$ events. The number of observed candidates is 49, which is in agreement with expectations. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig10-eps-converted-to.pdf}}} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig11-eps-converted-to.pdf}}} \caption{Display of one of the charm events. Top: View of the reconstructed event in the emulsion. Bottom: Zoom in the vertex region of the primary and secondary vertex. } \label{strauss-fig10} \end{myfigure} Fig. \ref{strauss-fig10} shows a charm decay detected in the OPERA experiment. In the electronic detector reconstruction, two muons were observed, one charged positively, the other negatively. The $\mu^-$ is attached to the primary vertex, while the $\mu^+$ is connected to the decay vertex. This topology corresponds to a charged charmed particle decaying into a muon; the measured kinematic parameters are a flight length of 1330\,$\mu$m and a kink angle of 209\,mrad. The impact parameter (IP) of the $\mu^+$ with respect to the primary vertex is 262\,$\mu$m, and its momentum is measured as 2.2\,GeV/c. This corresponds to a transverse momentum ($P_T$) of 0.46\,GeV/c. \section{$\nu$-velocity measurement} Due to the time structure of the CERN SPS beam, the OPERA experiment is able to trigger on the proton spill hitting the CNGS target. As a result, the electronic detector provides a timing signal for the recorded events, which can be used to measure the neutrino velocity in the CNGS beam. One needs to measure the time of flight (TOF) between CERN and LNGS with a precision of a few ns, as well as the distance between reference points in the detector and at the CNGS. The concept of the neutrino time of flight measurement is illustrated in Fig. \ref{strauss-fig15}.
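For orientation, the scale of such a measurement can be sketched from the numbers quoted in this section (the arithmetic below is an illustration added here, not part of the proceedings): over the CERN-LNGS baseline, light needs about 2.4\,ms, so a total timing accuracy of order 8\,ns probes $|v-c|/c$ at the $10^{-6}$ level.

```python
c = 299_792_458.0       # speed of light, m/s
baseline = 731_278.0    # m, effective CERN-LNGS baseline quoted in this section

tof = baseline / c      # light time of flight: about 2.44 ms
dt = 8e-9               # s, representative total timing uncertainty (~8 ns)

sensitivity = dt / tof  # achievable bound on |v - c| / c: a few parts in 1e6
print(f"TOF = {tof * 1e3:.3f} ms, |v-c|/c sensitivity ~ {sensitivity:.1e}")
```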
\begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig15-eps-converted-to.pdf}}} \caption{Scheme of the time of flight measurement. Figure from \cite{6}.} \label{strauss-fig15} \end{myfigure} The procedures are explained in great detail in \cite{6}. Since the time of the conference, an instrumental mistake has been identified that makes the results presented at this conference obsolete. Updated results taken from \cite{7b} are presented below. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig16-eps-converted-to.pdf}}} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig17-eps-converted-to.pdf}}} \caption{Top: Scheme of the timing system at CERN. Bottom: Scheme of the timing system at LNGS. Figures from \cite{6}.} \label{strauss-fig16} \end{myfigure} Fig. \ref{strauss-fig16} shows the timing systems at both CERN and LNGS, which allowed time calibration between the two sites to an accuracy of $\pm4$\,ns. The distance between CERN and the OPERA detector was measured via GPS geodesy and extrapolated down to the location of both the CNGS target and the OPERA detector with terrestrial traverse methods. The effective baseline is measured as $731278.0\pm0.2$\,m. The proton wave form for each SPS extraction was measured with a beam current transformer (BCT). The sum of the wave forms, restricted to those associated with a neutrino interaction in OPERA, was used as the PDF for the time distribution of the events within the extraction. The maximum likelihood method was used to extract the time shift between the two distributions, i.e., the neutrino time of flight. Internal NC and CC interactions in the OPERA target and external CC interactions occurring in the upstream rock from the 2009, 2010 and 2011 CNGS runs were used for this analysis. As shown in Fig.
\ref{strauss-fig20}, it is measured to be: $$\delta t = \mbox{TOF}_c - \mbox{TOF}_\nu = (6.5\pm7.4(\mbox{stat.})\,{}^{+8.3}_{-8.0}(\mbox{syst.}))\,\mbox{ns.}$$ Modifying the analysis by using each neutrino interaction waveform as the PDF, instead of their sum, gives a comparable result of $$\delta t = (3.5\pm5.6(\mbox{stat.})\,{}^{+9.4}_{-9.1}(\mbox{syst.}))\,\mbox{ns.}$$ No energy dependence was observed. \begin{myfigure} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig19-eps-converted-to.pdf}}} \centerline{\resizebox{70mm}{!}{\includegraphics{thomasstrauss_2012_0n_fig20-eps-converted-to.pdf}}} \caption{Top: Comparison of the measured neutrino interaction time distributions (data points) and the proton PDF (red and blue line) for the two SPS extractions resulting from the maximum likelihood analysis. Bottom: Blow-up of the leading edge (left plot) and the trailing edge (right plot) of the measured neutrino interaction time distributions (data points) and the proton PDF (red line) for the first SPS extraction after correcting for $\delta t=6.5$\,ns. Within errors, the second extraction is equal to the first one. Figures from \cite{6}.} \label{strauss-fig20} \end{myfigure} To cross-check for systematic effects, dedicated bunched beam runs were performed: in autumn 2011, the SPS proton delivery was split into 3\,ns long spills separated by 524\,ns, and in spring 2012 a similar mode with 3\,ns spills and 100\,ns separation was used. The value of $\delta t$ obtained in 2011 by using timing information provided by the target tracker is $1.9\pm3.7$\,ns; it is $0.8\pm3.5$\,ns when based on the spectrometer data \cite{6}. For the 2012 run, the corresponding value is $\delta t = (-1.6\pm1.1(\mbox{stat.})\,{}^{+6.1}_{-3.7}(\mbox{syst.}))$\,ns \cite{7b}. All results are in agreement with the measurement from standard CNGS beam operation. \section{Conclusions} The OPERA experiment detected two $\tau$ neutrino candidate events appearing in the CNGS beam.
Furthermore, we measured the neutrino velocity to be in agreement with the speed of light in vacuum at the level of $10^{-6}$. Other short decay topologies, like $\nu_e$ or charm decays, can also be detected and are in agreement with MC expectations, thus providing a benchmark for validating the $\tau$ efficiency expectations. \section{Acknowledgements} I thank the organizers of the workshop and the OPERA PTB for the possibility to join this workshop. The OPERA collaboration thanks CERN, INFN and LNGS for their support and work. In addition, OPERA is grateful for funding from the following national agencies: Fonds de la Recherche Scientifique - FNRS and Institut Interuniversitaire des Sciences Nucl\'eaires for Belgium; MoSES for Croatia; CNRS and IN2P3 for France; BMBF for Germany; INFN for Italy; JSPS (Japan Society for the Promotion of Science), MEXT (Ministry of Education, Culture, Sports, Science and Technology), QFPU (Global COE program of Nagoya University, ``Quest for Fundamental Principles in the Universe", supported by JSPS and MEXT) and Promotion and Mutual Aid Corporation for Private Schools of Japan for Japan; the Swiss National Science Foundation (SNF), the University of Bern and ETH Zurich for Switzerland; the Russian Foundation for Basic Research (grant 09-02-00300 a), the Programs of the Presidium of the Russian Academy of Sciences ``Neutrino Physics" and ``Experimental and theoretical researches of fundamental interactions connected with work on the accelerator of CERN", the support programs of leading schools (grant 3517.2010.2), and the Ministry of Education and Science of the Russian Federation for Russia; the Korea Research Foundation Grant (KRF-2008-313-C00201) for Korea; and TUBITAK, the Scientific and Technological Research Council of Turkey, for Turkey. In addition, the OPERA collaboration thanks the technical collaborators and the IN2P3 Computing Centre (CC-IN2P3).
\section{Introduction} Efficient quantum algorithms for determining the ground and excited states of many-body systems are of fundamental interest to chemistry, condensed matter physics, and materials science \cite{Lloyd:1996ch, aspuru2005simulated, kandala2017hardware, kandala2019error}. The ability of quantum devices to represent $N$-body states with qubits scaling linearly in $N$ makes them particularly appealing for representing highly entangled states, as is common in systems with strongly correlated electrons. Therefore, quantum (and hybrid quantum-classical) algorithms offer an alternative to methods such as the density matrix renormalization group \cite{White1992DensityMatrix} (DMRG), selected configuration interaction \cite{Huron1973IterativePerturbation, Buenker1974IndividualizedConfiguration} (SCI), determinant-based Monte-Carlo \cite{Booth2009FermionMonte}, variants of coupled-cluster (CC) theory \cite{coester1960short, vcivzek1966correlation} amenable to treating strong correlation \cite{Piecuch:1990wa,piecuch2005renormalized, Limacher2013NewMean, Bulik2015CanSingle}, and multireference CC (MRCC) methods \cite{vcivzek1969use, lindgren1978coupled, jeziorski1981coupled,Lyakh:2012cn,Kohn:2013cp,evangelista2018perspective}. Although these classical algorithms can accurately predict energies and properties of certain classes of strongly correlated systems, they still have high-order polynomial or exponential cost in the general case. Since Feynman's proposal to use a controlled quantum system to carry out simulations \cite{feynman1982simulating}, significant algorithmic and experimental advances have been made. The earliest demonstrations of quantum simulation for small molecules \cite{aspuru2005simulated} utilized the quantum phase estimation algorithm \cite{kitaev1995quantum, Abrams:1997ha, Abrams:1999ur} with Suzuki--Trotter decomposed time evolution \cite{trotter1959product, suzuki1993improved} of an adiabatically prepared trial state.
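The Suzuki--Trotter step underlying these early simulations can be illustrated with a minimal numerical sketch (the $2\times2$ anti-Hermitian matrices below are toy stand-ins for $-i\hat{H}_j t$, chosen purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting anti-Hermitian "Hamiltonian terms" (toy 2x2 matrices).
A = -1j * np.array([[1.0, 0.5], [0.5, -1.0]])
B = -1j * np.array([[0.0, 1.0], [1.0, 0.0]])

t = 1.0
exact = expm(t * (A + B))              # exact time-evolution operator

def trotter(n):
    """First-order Suzuki--Trotter approximation with n time slices."""
    step = expm(t * A / n) @ expm(t * B / n)
    return np.linalg.matrix_power(step, n)

# The error shrinks roughly like 1/n for the first-order formula.
err = [np.linalg.norm(trotter(n) - exact) for n in (1, 10, 100)]
assert err[0] > err[1] > err[2]
```

Higher-order product formulas reduce the error faster in $1/n$, at the cost of more exponential factors per step.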
It is believed that some combination of these techniques will permit the efficient simulation \cite{von2020quantum, lee2020even, babbush2018low, reiher2017elucidating} of certain classes of Hamiltonians \cite{kempe2006complexity}, but that they may require deep circuits with high fidelity, a requirement incompatible with current noisy intermediate-scale quantum (NISQ) devices \cite{preskill2018quantum}. Several low-depth quantum-classical hybrid algorithms have been developed for NISQ hardware. These algorithms prepare and measure properties of many-body states on a quantum device, but they store and optimize the parameters that define such states on a classical computer. The variational quantum eigensolver (VQE) approach \cite{Peruzzo:2014kca, yung2014transistor, McClean:2016bs, grimsley2019adaptive} has been used in several landmark experiments, demonstrating quantum calculations on non-trivial molecular systems \cite{OMalley:2016dc, kandala2017hardware, colless2018computation, shen2017quantum, hempel2018quantum, nam2020ground}. In VQE, the ground state is approximated by a normalized trial state $\ket{\tilde{\Psi}} = \hat{U}(\mathbf{t}) \ket{\Phi_0}$, in which the unitary operator $\hat{U}(\mathbf{t})$ depends on the parameter vector $\mathbf{t}$ and $\Phi_0$ is (usually) an unentangled reference state. The VQE energy ($E_{\rm{VQE}}$) is then obtained by minimization of the trial state energy expectation value as \begin{equation} \label{eq:vqe} E_{\rm{VQE}} = \min_\mathbf{t} \bra{\Phi_0} \hat{U}^\dagger(\mathbf{t}) \hat{H} \hat{U}(\mathbf{t}) \ket{\Phi_0}. \end{equation} The VQE scheme employs an optimization algorithm running on a classical computer to minimize the energy expectation value, with all inputs (energy/gradients) being evaluated with the help of a quantum computer. An important advantage of VQE over classical many-body methods is the ability to use trial states that cannot be represented efficiently on a classical computer. 
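The structure of the minimization above can be mimicked at the matrix level (everything in this sketch is illustrative: a random Hermitian matrix stands in for $\hat{H}$, simple single-``excitation'' rotations for the parameterized unitary, and a classical optimizer for the outer loop):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(7)
dim = 6

# Random Hermitian matrix standing in for the Hamiltonian.
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2

phi0 = np.zeros(dim)
phi0[0] = 1.0                          # "reference state" |Phi_0>

def kappa(mu):
    """Anti-Hermitian generator rotating the reference toward basis state mu."""
    k = np.zeros((dim, dim))
    k[mu, 0], k[0, mu] = 1.0, -1.0
    return k

pool = [kappa(mu) for mu in range(1, dim)]

def energy(t):
    """E(t) = <Phi_0|U(t)^dag H U(t)|Phi_0> with U(t) = exp(sum_mu t_mu kappa_mu)."""
    psi = expm(sum(tm * km for tm, km in zip(t, pool))) @ phi0
    return psi @ H @ psi

# The classical outer loop: minimize the trial-state energy expectation value.
res = minimize(energy, np.zeros(len(pool)), method="BFGS")

# The result is a variational upper bound to the lowest eigenvalue of H.
assert res.fun >= np.linalg.eigvalsh(H)[0] - 1e-9
```

On hardware, `energy(t)` would be replaced by repeated state preparation and measurement, while the classical minimizer is unchanged.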
VQE was initially implemented with an exponential operator ansatz inspired by unitary coupled-cluster (UCC) theory \cite{szalay1995alternative, taube2006new, cooper2010benchmark, evangelista2011alternative, harsha2018difference,Filip:2020ib,Chen:2021fa}, but has more recently been extended to hardware-efficient \cite{kandala2017hardware} and qubit-space \cite{Ryabinkin:2018jw} UCC variants as well. We exclusively use the abbreviation UCC to refer to unitary coupled-cluster theory, and not unrestricted formulations of conventional coupled-cluster methods \cite{knowles1993coupled}, which historically share this abbreviation. Other promising hybrid approaches include quantum imaginary time evolution \cite{motta2019determining,PRXQuantum.2.010317}, and quantum subspace diagonalization techniques \cite{mcclean2017hybrid, motta2019determining, Parrish:2019tc, Stair_2020, huggins2020non}. Despite the indisputable importance of VQE in the field of quantum simulation, there are a few drawbacks that make its practical application challenging to large-scale problems. One such issue is the slow convergence of VQE due to noise in the measured energy and gradients, and the large-scale nonlinear nature of the optimization problem. These issues are compounded by the sizable number of total measurements needed for operator averaging \cite{wecker2015progress}. Another challenge is the potentially large number of classical parameters and resulting circuit depth necessary to predict sufficiently accurate energies. These two problems are likely exacerbated in systems with strongly correlated electrons. Progress addressing these deficiencies of VQE has been made on several fronts. 
For example, grouping commuting Pauli operators \cite{McClean:2016bs, kandala2017hardware, gokhale2019minimizing, yen2020measuring, verteletskyi2020measurement}, utilizing integral factorization strategies \cite{huggins2021efficient}, and employing alternative bases~\cite{babbush2018low, mcclean2020discontinuous} have been shown to reduce the number of measurements needed for operator averaging. Concurrently, computationally feasible approaches for measuring analytical gradients with quantum devices using the parameter-shift rule \cite{schuld2019evaluating}, or its recent lower-cost variant \cite{kottmann2020feasible}, have allowed gradient-based VQE to become potentially realizable on NISQ hardware. Other advances, of particular importance to this work, include VQE ans\"{a}tze constructed iteratively, as done in the adaptive derivative-assembled pseudo-Trotterized VQE \cite{grimsley2019adaptive} (ADAPT-VQE) and the iterative qubit coupled-cluster \cite{ryabinkin2020iterative} (iQCC) methods. The primary advantage of ADAPT-VQE and iQCC is their ability to produce compact ans\"{a}tze that result in fewer classical parameters, and shallower quantum circuits than those from UCC truncated to a given particle-hole rank. However, these advantages come at the cost of a greater number of energy and gradient evaluations for optimizing and selecting new unitary operators. Investigating more efficient ways to select important operators is an ongoing area of research \cite{zhang2020mutual, liu2020efficient}. In this work, we present an alternative to VQE for optimizing the amplitudes of a factorized form of the UCC ansatz (often referred to as Trotterized \cite{McClean:2016bs} or quantum \cite{Barkoutsos:2018hm} UCC), given by a product of exponential operators rather than the exponential of a sum of operators. 
We refer to this ansatz as disentangled UCC (dUCC\xspace)---a terminology borrowed from the field of Lie theory---to reflect the fact that it is not an approximation of UCC \cite{evangelista2019exact}. Inspired by the projective approach used in classical coupled-cluster theory \cite{vcivzek1966correlation, vcivzek1969use}, we propose an alternative trial state optimization algorithm that we call the projective quantum eigensolver (PQE). PQE does not rely on variational minimization and therefore does not require evaluation of the energy gradients. Instead, PQE requires only the evaluation of residuals, that is, projections of the Schr\"{o}dinger equation onto a linearly independent basis. As shown in this paper, residuals may be easily measured on NISQ devices with similar or fewer measurements than analytical gradients, and require quantum circuits that contain only one additional exponential term. We also propose a new selection scheme for identifying important operators based on the residual vector. This selected variant of PQE (SPQE) requires no pre-defined operator pool and employs only a small number of measurements to identify important operators. To demonstrate the practical advantages of PQE, we perform a comparison of VQE and PQE using a fixed dUCC\xspace ansatz for several molecular systems in the regime of weak and strong correlation, also considering the effect of stochastic noise. Finally, we compare SPQE against the ADAPT-VQE approach, selected configuration interaction, and the density matrix renormalization group. \section{Theory} \label{sec:theory} \subsection{The Projective Quantum Eigensolver approach} \label{sec:pqe_theory} In this work, we propose to obtain the ground state of a general many-body system using a projective approach. As in VQE, we approximate the ground state using a trial state $\ket{\tilde{\Psi}(\mathbf{t})}= \hat{U}(\mathbf{t}) \ket{\Phi_0}$.
After inserting the definition of the trial state in the Schr\"{o}dinger equation and left-multiplying by $\hat{U}^\dagger(\mathbf{t})$, we obtain the condition \begin{equation} \hat{U}^\dagger(\mathbf{t}) \hat{H} \hat{U}(\mathbf{t}) \ket{\Phi_0} = E \ket{\Phi_0}. \end{equation} Projection onto the reference state $\Phi_0$ yields the PQE energy ($E_\text{PQE}$) \begin{equation} E_\text{PQE}(\mathbf{t}) = \bra{\Phi_0} \hat{U}^\dagger(\mathbf{t}) \hat{H} \hat{U}(\mathbf{t}) \ket{\Phi_0}, \label{eq:ucc1} \end{equation} a quantity that is still an upper bound to the exact ground state energy. Projections onto the complete set of orthonormal many-body basis functions complementary to $\Phi_0$, here denoted as $Q = \{\Phi_\mu \}$, yields a set of residual conditions \begin{equation} r_\mu(\mathbf{t}) \equiv \bra{\Phi_\mu} \hat{U}^\dagger(\mathbf{t}) \hat{H} \hat{U}(\mathbf{t}) \ket{\Phi_0} = 0 \quad \forall \Phi_\mu \in Q, \label{eq:ucc2} \end{equation} where $r_\mu$ is an element of the residual vector and $\mu$ runs over all elements of the many-body basis. Eqs.~\eqref{eq:ucc1} and \eqref{eq:ucc2} form a system of nonlinear equations in the parameter vector $\mathbf{t}$, that may be solved via a classical iterative solver. For an approximate ansatz with number of parameters less than the dimension of the $Q$ space, Eq.~\eqref{eq:ucc2} can be enforced only for a subset of the residuals. Then, the complete projection space $Q$ can be partitioned into two sets: i) $R$, the space of basis functions $\Phi_\mu$ for which $r_\mu = 0$ is enforced, and ii) $S = Q \setminus R$ the complementary space for which $r_\mu$ may not be null. Figure~\ref{fig:residual} illustrates the connection between the PQE residual condition and the uncertainty in the ground-state energy estimated via Eq.~\eqref{eq:ucc1}. 
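The projective conditions can be mimicked at the matrix level. Everything in the sketch below is illustrative: a toy Hermitian matrix stands in for the Hamiltonian, single-``excitation'' rotations for the disentangled ansatz, and the quasi-Newton update $t_\mu \leftarrow t_\mu + r_\mu/\Delta_\mu$ with diagonal denominators $\Delta_\mu$ is one plausible solver choice, not necessarily the implementation used here:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
dim = 5

# Toy Hermitian "Hamiltonian": well-separated diagonal plus a weak coupling.
A = 0.1 * rng.normal(size=(dim, dim))
H = np.diag(np.arange(dim, dtype=float)) + (A + A.T) / 2

def kappa(mu):
    """Anti-Hermitian 'excitation' generator |mu><0| - |0><mu| (real case)."""
    k = np.zeros((dim, dim))
    k[mu, 0], k[0, mu] = 1.0, -1.0
    return k

ops = [kappa(mu) for mu in range(1, dim)]

def residuals_and_energy(t):
    U = np.eye(dim)
    for tm, km in zip(t, ops):        # disentangled (product) form of the ansatz
        U = U @ expm(tm * km)
    Hbar = U.T @ H @ U                # U^dag H U (U is real orthogonal here)
    return Hbar[1:, 0], Hbar[0, 0]    # r_mu = <Phi_mu|Hbar|Phi_0>, E_PQE

# Quasi-Newton iteration t_mu <- t_mu + r_mu / Delta_mu, with the diagonal
# differences Delta_mu = H_00 - H_mumu playing the role of energy denominators.
t = np.zeros(len(ops))
delta = H[0, 0] - np.diag(H)[1:]
for _ in range(500):
    r, E = residuals_and_energy(t)
    if np.max(np.abs(r)) < 1e-10:
        break
    t = t + r / delta

# Once all residuals vanish, E_PQE coincides with an exact eigenvalue.
```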
By the Gershgorin circle theorem, the difference between the exact ($E$) and the PQE ($E_{\mathrm{PQE}}$) ground-state energy, $|E_{\mathrm{PQE}} - E|$, is bounded by the radius $\rho = \sum_{\mu \neq 0} |r_\mu|$, where $\mu$ runs over the entire many-body basis, excluding the reference determinant. Therefore, when the residual is null ($\rho = 0$), the PQE energy is exact. When the PQE equation is satisfied only by a subset of the many-body basis---as in the case of an approximate trial state---the error $|E_{\mathrm{PQE}} - E|$ is bounded by the sum of the absolute value of the residual elements $|r_\nu|$ with $\Phi_\nu \in S$, for which the PQE equation is not satisfied. Note that the residual condition [Eq.~\eqref{eq:ucc2}] is satisfied by any eigenstate, and the Gershgorin circle theorem error bound applies also to excited states. A potential disadvantage is that PQE could converge on an excited state (an issue we did not experience in this study). However, this feature could be used to formulate excited state algorithms based on PQE, which use the residual condition as a criterion for convergence and do not require costly measurement of the variance, as is commonly done in VQE \cite{McClean:2016bs}. \begin{figure}[ht!] \centering \includegraphics[width=3.375in]{figure_1.pdf} \caption{Connection between the norm of the PQE residual and the energy error via the Gershgorin circle theorem. (a) Structure of the transformed Hamiltonian in the basis of orthogonal states $\{\Phi_\mu\}$. (b) The residual vector is the first column of the transformed Hamiltonian matrix (first element excluded). (c) The difference between the approximate ground-state PQE energy ($E_{\mathrm{PQE}}$) and the exact eigenvalue ($E$) is bounded by the radius $\rho$, which is equal to the 1-norm of the residual vector.
The part of $r_\mu$ that corresponds to states in the projection manifold $R$ is null for the PQE solution, while elements involving projections onto the space $S = Q \setminus R$ are generally nonzero and determine the value of $\rho$.} \label{fig:residual} \end{figure} PQE is a general approach; in the following, however, we will focus on its applications to interacting fermions using a disentangled (factorized) form of the unitary coupled-cluster ansatz. We assume that our system is described by the general two-body Hamiltonian \begin{equation} \hat{H} = \sum_{pq} h_{pq} \cop{p} \aop{q} + \frac{1}{4} \sum_{pqrs} v_{pqrs} \cop{p} \cop{q} \aop{s} \aop{r}, \end{equation} where $\aop{p}$ ($\cop{q}$) is a fermionic annihilation (creation) operator, while $h_{pq}$ and $v_{pqrs}$ are one-electron and antisymmetrized two-electron integrals, respectively \cite{crawford2000introduction}. \subsection{Traditional and disentangled unitary coupled-cluster ans\"{a}tze} \label{sec:trad_dis_ucc} In UCC, the reference state is an easily prepared single determinant $\ket{\Phi_0} = \ket{\psi_1 \psi_2 \cdots}$ specified by the occupied spin orbitals $\{ \psi_i \}$. A UCC unitary is parameterized using a pool of anti-Hermitian operators $\mathcal{P} = \{ \hat{\kappa}_\mu : \mu =1 ,\ldots, N_\mathrm{op}^\mathrm{pool} \}$. A generic anti-Hermitian operator $\hat{\kappa}_\mu = \hat{\tau}_\mu - \hat{\tau}_\mu^\dagger$ is defined in terms of the particle-hole excitation operators $ \hat{\tau}_\mu \equiv \hat{\tau}_{ij\cdots}^{ab\cdots} = \cop{a} \cop{b} \cdots \aop{j} \aop{i}$. Note that we have re-interpreted $\mu$ as the multi-index $\mu \equiv ((i,j,..),(a,b,..))$ of unique excitations from hole/occupied ($\psi_i \psi_j \cdots$) to particle/unoccupied ($\psi_a \psi_b \cdots$) spin orbitals.
Using this parameterization, when a cluster operator $ \hat{\kappa}_\mu$ acts on the reference, it generates elements of the many-body basis (excited determinants) of the form \begin{equation} \ket{\Phi_\mu} = \hat{\kappa}_\mu \ket{\Phi_0} = \ket{\Phi_{ij\cdots}^{ab\cdots}}, \end{equation} and since in the case of a UCC (or dUCC) ansatz there is a 1-to-1 correspondence between operators and determinants, we may label them with the same index. Note that this operator basis satisfies the orthonormality condition $\bra{\Phi_0} \hat{\kappa}^\dagger_\mu \hat{\kappa}_\nu \ket{\Phi_0} = \braket{\Phi_\mu|\Phi_\nu} = \delta_{\mu\nu}$. In traditional UCC \cite{szalay1995alternative, taube2006new, cooper2010benchmark, evangelista2011alternative, harsha2018difference}, the wave function is generated by an exponential operator \begin{equation} \label{eq:trad_ucc} \hat{U}(\mathbf{t}) = e^{\hat{\sigma}} = e^{\sum_\mu t_\mu \hat{\kappa}_\mu}, \end{equation} assuming the cluster amplitudes $t_\mu$ are real. In principle it is possible to construct a circuit that exactly implements the action of the UCC operator defined in Eq.~\eqref{eq:trad_ucc}, but in practice it is common to use a unitary with a simpler, and shallower, circuit. This is frequently accomplished using a factorized (disentangled) form of the UCC ansatz \begin{equation} \label{eq:st_ucc} \hat{U}(\mathbf{t})= \prod_\mu e^{ t_\mu \hat{\kappa}_\mu}. \end{equation} Because the operators $\hat{\kappa}_\mu$ do not commute, an ansatz of the disentangled form is uniquely defined by an \textit{ordered} set (or subset) of operators $\mathcal{A} = ( \hat{\kappa}_{\mu_i}: i = 1, \ldots, N_\mathrm{op} )$ built from the pool $\mathcal{P}$. 
The operators in $\mathcal{A}$ are then used to form an ordered product of exponential unitaries \begin{equation} \label{eq:qucc} \hat{U}(\mathbf{t}) = e^{t_{\mu_1} \hat{\kappa}_{\mu_1}} e^{t_{\mu_2} \hat{\kappa}_{\mu_2}} \cdots e^{t_{\mu_{N_\mathrm{op}}} \hat{\kappa}_{\mu_{N_\mathrm{op}}}}, \end{equation} where $t_{\mu_i}$ is the amplitude corresponding to the operator $\hat{\kappa}_{\mu_i}$. The disentangled UCC ansatz may be viewed as a first-order Suzuki--Trotter approximation to UCC; however, it was recently shown \cite{evangelista2019exact} that dUCC is substantially different from conventional UCC \cite{szalay1995alternative, taube2006new, cooper2010benchmark, evangelista2011alternative, harsha2018difference}. An arbitrary quantum state can always be represented in the form of Eq.~\eqref{eq:qucc} using particle-hole excitation/de-excitation operators \cite{evangelista2019exact}. However, only certain operator orderings $\mathcal{A}$ of a complete operator pool (containing up to $N$-body particle/hole excitations) can represent any quantum state. Nevertheless, orderings that do not satisfy this condition can exactly represent states that are close to the reference. In this work we assume that all operators appear at most once in $\mathcal{A}$, but the more general case in which $\mathcal{A}$ contains repetitions has also been considered in other contexts \cite{grimsley2019adaptive, lee2018generalized}. For PQE formulated using a UCC or dUCC trial state, it is possible to show (see Appendix~\ref{sec:appD}) that for an exact ansatz the residual condition [Eq.~\eqref{eq:ucc2}] and the VQE energy stationarity condition ($\partial E_\mathrm{VQE} / \partial t_\mu = 0$) are equivalent. 
However, for an approximate ansatz we find that the gradient of the PQE energy contains a contribution due to the nonzero residual elements corresponding to the subspace $S$: \begin{equation} \label{eq:grad_residual_connection} \frac{\partial E_\mathrm{PQE}(\mathbf{t})}{\partial t_\mu} = 2 \, \mathrm{Re} \sum_{\Phi_\nu \in S} r_\nu^* \bra{\Phi_\nu} \hat{U}^\dagger(\mathbf{t}) \frac{\partial \hat{U}(\mathbf{t})}{\partial t_\mu} \ket{\Phi_0}, \end{equation} where $\mathbf{t}$ is a solution of the PQE equations in the projection space $R$. Suppose that $R$ is chosen to be the space of single and double excitations and $S$ its complement. Then this result shows that even if $r_\mu = 0$ for all singles and doubles, the gradient of $E_\mathrm{PQE}$ with respect to singles and doubles may not be zero because residuals $r_\nu$ corresponding to triple and higher excitations and the term $\bra{\Phi_0} \hat{U}^\dagger(\mathbf{t}) \partial \hat{U}(\mathbf{t})/\partial t_\mu \ket{\Phi_\nu}$ are generally not null. Therefore, PQE and VQE energies obtained from approximate ans\"{a}tze will be different. We note that the combination of PQE and dUCC produces energies that are additive for non-interacting fragments (size consistent) when using a localized basis. This property follows from the fact that dUCC excitation operators for non-interacting fragments act only on orbitals localized on each fragment, and therefore, commute. Consequently, the dUCC wave function is multiplicatively separable (as long as the order of the operators within a fragment is preserved). We have verified numerically that dUCC singles and doubles trial states optimized with PQE are size consistent by performing calculations on H$_4$ + H$_2$ separated by 1000~\AA{} and confirming to within numerical convergence ($10^{-10}$~$E_{\rm h}$\xspace) that the energy is additive in the fragments.
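The ordering dependence of the disentangled ansatz in Eq.~\eqref{eq:qucc} is easy to demonstrate numerically. The sketch below uses two small skew-symmetric matrices as toy stand-ins for the $\hat{\kappa}_\mu$ (the matrices and amplitudes are invented for illustration, and the series-based matrix exponential replaces a circuit implementation); it verifies that each ordered product is exactly unitary but changes when two non-commuting factors are swapped:

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n        # accumulates A^n / n!
        out = out + term
    return out

# Two toy skew-symmetric generators (kappa = tau - tau^T) that do not commute.
k1 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
k2 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])

t1, t2 = 0.3, -0.2
U12 = expm_series(t1 * k1) @ expm_series(t2 * k2)  # one operator ordering
U21 = expm_series(t2 * k2) @ expm_series(t1 * k1)  # the swapped ordering
```

Because exponentials of skew-symmetric (anti-Hermitian) generators are orthogonal (unitary), any ordering yields a valid trial state, but different orderings parameterize different states.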
\subsection{Numerical solution of the dUCC\xspace-PQE amplitude equation} To realize the PQE scheme on a quantum computer, the reference state, the Hamiltonian, and the unitary must be represented in a qubit basis via a fermionic mapping. After such a transformation, the Hamiltonian is a sum of the form $\hat{H} = \sum_\ell h_\ell \hat{O}_\ell,$ where $h_\ell$ is an electron integral multiplied by a coefficient and $\hat{O}_\ell = \prod_{i} \hat{\sigma}_{j_{\ell_i}}^{q_{\ell_i}}$ is a unique product of $\hat{\sigma}_{x}$, $\hat{\sigma}_{y}$, or $\hat{\sigma}_{z}$ Pauli operators acting on qubits $q_{\ell_i}$. Similarly, each term in the unitary $\exp(t_\mu \hat{\kappa}_\mu )$ may be implemented using a combination of one- and two-qubit operators following standard approaches \cite{Peruzzo:2014kca, yung2014transistor, McClean:2016bs}. To solve the PQE equations we measure the residuals corresponding to the operators contained in $\mathcal{A}$ on a quantum computer and update the parameter vector using a simple quasi-Newton iteration approach \begin{equation} \label{eq:fixed_point} t_\mu^{(n +1)} = t_\mu^{(n)} + \frac{r^{(n)}_\mu}{\Delta_\mu}, \end{equation} where the superscript ``$(n)$'' indicates the amplitude at iteration $n$. The quantities $\Delta_\mu$ are standard M{\o}ller--Plesset denominators $\Delta_\mu \equiv \Delta_{ij\cdots}^{ab\cdots} = \epsilon_i + \epsilon_j + \ldots -\epsilon_a -\epsilon_b \ldots$ where $\epsilon_i$ are Hartree--Fock orbital energies. This update equation is derived in Appendix~\ref{sec:appC} using Newton's method and taking the leading contributions to the Jacobian to be the diagonal elements of the Fock operator \cite{doi:https://doi.org/10.1002/9781119019572.ch13}. It is further assumed that the amplitudes are small, so that the Jacobian can be approximated by terms linear in the operators $\hat{\kappa}_{\mu}$ and issues with non-commuting operators are avoided. 
Therefore, convergence of this quasi-Newton scheme is not mathematically guaranteed if one or more amplitudes are large. To improve numerical stability and speed up convergence of the PQE equations, we found it useful to combine amplitude updates via Eq.~\eqref{eq:fixed_point} with the direct inversion of the iterative subspace (DIIS) convergence accelerator algorithm \cite{pulay1980convergence,SCUSERIA1986236}. It is important to note that the current formulation of PQE is compatible with any ansatz such that the metric matrix \begin{equation} S_{\mu_i \mu_j} = \bra{\Phi_0} \hat{\kappa}^\dagger_{\mu_i} \hat{\kappa}_{\mu_j} \ket{\Phi_0}, \quad \forall \hat{\kappa}_{\mu_i}, \hat{\kappa}_{\mu_j} \in \mathcal{A} \end{equation} is the identity. In more general cases (e.g., when $S_{\mu_i \mu_j}$ is non-diagonal or singular), the PQE formalism requires a generalization of the amplitude update equations or the use of residual norm minimization instead of Eq.~\eqref{eq:ucc2}. These variants of PQE would allow one to consider ans\"{a}tze that contain repeated operators in $\mathcal{A}$, employ general many-body operators, or use a basis of qubit operators. These extensions go beyond the scope of this work and will be considered in future studies. There are two advantages to the combination of PQE and dUCC\xspace described above. Firstly, as we will show in the following subsection, one element of the residual vector ($r_\mu$) can be evaluated with essentially the same resources required to measure the energy in VQE. Secondly, the magnitude of the residuals provides an indication of the importance of an excitation operator $\hat{\kappa}_\mu$, which in turn may be used to define a selection procedure to form the sequence of unitaries that enter $\hat{U}(\mathbf{t})$. The next two subsections describe these two points in detail.
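The quasi-Newton update of Eq.~\eqref{eq:fixed_point} can be sketched classically on a mock residual function whose Jacobian is approximately $-\mathrm{diag}(\Delta_\mu)$; the denominators and coefficients below are invented placeholders, not actual M{\o}ller--Plesset quantities, and DIIS acceleration is omitted for brevity:

```python
import numpy as np

Delta = np.array([2.0, 3.0, 5.0])   # stand-ins for Moller--Plesset denominators
c = np.array([0.3, -0.2, 0.1])

def residual(t):
    # Mock residual: leading linear term with Jacobian -diag(Delta),
    # plus a weak quadratic term mimicking the neglected off-diagonal pieces.
    return -Delta * t + c + 0.05 * t**2

t = np.zeros(3)
for _ in range(50):
    t = t + residual(t) / Delta     # Eq. (fixed_point): t_mu <- t_mu + r_mu / Delta_mu
```

Because the update map is strongly contracting near the fixed point, the residual norm decreases geometrically; when the quadratic term is large relative to the linear one, convergence of this simple iteration is no longer guaranteed, mirroring the caveat above.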
\subsection{Efficient measurement of the residual elements} For a trial state built from the ordered pool $\mathcal{A}$, the number of the residual elements that must be evaluated to solve the PQE equations is equal to the size of the pool $|\mathcal{A}|$. The PQE residuals can be expressed as the off-diagonal matrix elements of the operator $\bar{H} = \hat{U}^\dagger(\mathbf{t}) \hat{H} \hat{U}(\mathbf{t})$ as $r_\mu = \bra{\Phi_\mu} \bar{H} \ket{\Phi_0}$ (we use this notation throughout the paper, but we note that we never explicitly form the operator $\bar{H}$ on a classical computer). Then, in principle, the residuals can be measured on a quantum computer using a variant of the Hadamard test \cite{aharonov2009polynomial}, but we have found an ancilla-free procedure in which these matrix elements are computed by measuring diagonal quantities. Acting on the reference with the operator $e^{\theta \hat{\kappa}_\mu}$ yields the state \begin{equation} \ket{\Omega_\mu(\theta)} = e^{\theta \hat{\kappa}_\mu} \ket{\Phi_0} = \cos(\theta) \ket{\Phi_0} + \sin(\theta) \ket{\Phi_\mu}, \end{equation} noting that the above expression is valid because $\hat{\kappa}_\mu\ket{\Phi_0} = \ket{\Phi_\mu}$ and $\hat{\kappa}^2_\mu\ket{\Phi_0} = -\ket{\Phi_0}$ (see also \cite{Filip:2020ib,Chen:2021fa}). Taking the expectation value of the similarity transformed Hamiltonian with respect to $\Omega_\mu(\theta)$ using $\theta = \pi / 4$, and noting that the wave function is real, leads to the following equation for the residual elements \begin{equation} r_\mu = \bra{\Omega_\mu(\pi/4)} \bar{H} \ket{\Omega_\mu(\pi/4)} - \frac{1}{2}E_\mu - \frac{1}{2}E_0, \label{eq:res_measure} \end{equation} where $E_0 = \bra{\Phi_0} \bar{H} \ket{\Phi_0}$ and $E_\mu = \bra{\Phi_\mu} \bar{H} \ket{\Phi_\mu}$. All of these quantities are expectation values of $\bar{H}$ with respect to reference states that are easily generated with short quantum circuits. 
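The ancilla-free identity in Eq.~\eqref{eq:res_measure} follows from expanding $\bra{\Omega_\mu(\pi/4)}\bar{H}\ket{\Omega_\mu(\pi/4)}$ for a real wave function. The snippet below is a purely numerical check of that algebra using a random real symmetric matrix as a stand-in for $\bar{H}$; it is not a quantum measurement procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4))
Hbar = A + A.T                      # stand-in for U^dag H U (real symmetric)

mu = 2
e0 = np.eye(4)[0]                   # |Phi_0>
emu = np.eye(4)[mu]                 # |Phi_mu>
omega = (e0 + emu) / np.sqrt(2.0)   # |Omega_mu(pi/4)> = cos(pi/4)|Phi_0> + sin(pi/4)|Phi_mu>

E0 = e0 @ Hbar @ e0
Emu = emu @ Hbar @ emu
r_measured = omega @ Hbar @ omega - 0.5 * Emu - 0.5 * E0   # Eq. (res_measure)
r_direct = emu @ Hbar @ e0                                 # r_mu = <Phi_mu|Hbar|Phi_0>
```

Expanding the first term gives $(E_0 + E_\mu)/2 + \bra{\Phi_\mu}\bar{H}\ket{\Phi_0}$, so subtracting the two diagonal energies isolates the off-diagonal residual.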
The evaluation of the exact residual via Eq.~\eqref{eq:res_measure} has a cost similar to the evaluation of exact gradients in VQE via the shift rule \cite{schuld2019evaluating, kottmann2020feasible}. \subsection{Efficient operator selection} \label{sec:selection} In this section, we generalize the PQE method to utilize a flexible dUCC ansatz built iteratively using a full operator pool. As shown in the case of VQE \cite{grimsley2019adaptive, ryabinkin2020iterative}, significantly more compact and flexible approximations may be achieved if the operators that define the unitary are selected according to an importance criterion. To formulate a selected version of the PQE approach, we propose to combine information about the residual with a cumulative importance criterion. Since the residuals $r_\mu$ are zero for an eigenstate, we propose to estimate the importance of the operators $\hat{\kappa}_\mu$ using the magnitude of the residual ($|r_\mu|$). However, instead of evaluating the importance of all the operators in the pool via operator averaging (like in gradient-based selection schemes \cite{grimsley2019adaptive, ryabinkin2020iterative}), we propose to sample a quantum state whose probability amplitudes encode the importance of \textit{all} operators up to rank $N$. Suppose that we have determined a unitary $\hat{U}$ that satisfies the residual condition $r_\mu = 0$ for all $\hat{\kappa}_\mu$ in the current ordered set $\mathcal{A}$. In our approach we prepare a (normalized) quantum state of the form $ \ket{\tilde{r}} = \tilde{r}_0 \ket{\Phi_0} + \sum_\mu \tilde{r}_\mu \ket{\Phi_\mu}$, where the quantities $ \tilde{r}_\mu $ are approximately proportional to the residuals $r_\mu$. When $\ket{\tilde{r}}$ is represented in a qubit basis, there is a one-to-one mapping between elements of the computational basis and the states $\Phi_\mu$. 
Therefore, a measurement of the state $\ket{\tilde{r}}$ in the qubit basis will yield one of the states $\Phi_\mu$ with probability $P_\mu = |\tilde{r}_\mu|^2$. Repeated measurement of the state $\ket{\tilde{r}}$ provides a way to approximately determine the elements of the residual $r_\mu$ with the largest magnitude, and the corresponding operators $\hat{\kappa}_\mu$ that should be included in the unitary. When this strategy is combined with an efficient way to prepare the state $\ket{\tilde{r}}$, it is much more cost-effective than evaluating all the elements of $r_\mu$ not included in $\mathcal{A}$ via operator averaging [Eq.~\eqref{eq:res_measure}]. Construction of the state $\ket{\tilde{r}}$ would require one to apply the Hamiltonian, which is not a unitary operator. Therefore, we evaluate $\tilde{r}$ using the unitary operator $e^{i \Delta t \hat{H}} = 1 + i \Delta t \hat{H} + \mathcal{O}(\Delta t^2)$ instead of $\hat{H}$. By choosing a small time step, we can ensure that the nonlinear terms and errors due to the approximate implementation of $e^{i \Delta t \hat{H}}$ (e.g., via Trotterization) do not affect the residual to leading order in $\Delta t$. The residual state can then be defined as \begin{equation} \begin{split} \ket{\tilde{r}} &= \hat{U}^{\dagger} e^{i \Delta t \hat{H}} \hat{U} \ket{\Phi_0} \\ &= (1 + i\Delta t \hat{U}^{\dagger} \hat{H} \hat{U}) \ket{\Phi_0} + \mathcal{O}(\Delta t^2). \end{split} \end{equation} The time-evolution operator may be approximated via Trotterization \cite{trotter1959product, suzuki1993improved} in combination with low-rank representations of the Hamiltonian \cite{berry2019qubitization}. With a sufficiently large number of measurements $M$ of the state $\ket{\tilde{r}}$, we may approximate the values of the (normalized) squared residuals as \begin{equation} \label{eq:approx_res_sq} \norm{\tilde{r}_\mu} \approx \frac{N_\mu }{ M }, \end{equation} where $N_\mu$ is the number of times the state $\ket{\Phi_\mu}$ is measured.
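In a classical state-vector simulation, the measurement statistics behind Eq.~\eqref{eq:approx_res_sq} can be mimicked by sampling basis states from the squared amplitudes of a made-up residual state (the amplitudes and sample count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy residual-state amplitudes over {Phi_0, Phi_1, ..., Phi_4}, normalized.
amps = np.array([0.9, 0.3, 0.25, 0.15, 0.1])
amps = amps / np.linalg.norm(amps)
P = amps**2                          # exact measurement probabilities |r~_mu|^2

M = 200_000                          # number of simulated measurements
counts = np.bincount(rng.choice(len(P), size=M, p=P), minlength=len(P))
P_est = counts / M                   # Eq. (approx_res_sq): |r~_mu|^2 ~ N_mu / M
```

The statistical error of each estimated probability scales as $\sqrt{P_\mu(1-P_\mu)/M}$, which is why operator selection (discussed next in the text) tolerates modest values of $M$.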
Encoding the residual in a single quantum state allows us to efficiently sample the \textit{entire} operator pool without the need to generate and store individual elements of the residual vector in memory. This distinctive feature makes it possible to employ this selection procedure with an operator pool that includes particle-hole excitation/de-excitation operators of arbitrary order. To select important operators, we adopt a cumulative threshold approach that allows us to add a batch of operators at a time. Our goal is to iteratively construct a unitary that contains the fewest operators. This is realized with a selection procedure that adds the operators with the largest values of $\norm{\tilde{r}_\mu}$ to $\mathcal{A}$ (as motivated by the Gershgorin circle theorem bounds discussed in Sec.~\ref{sec:pqe_theory}), and excludes all other operators in such a way that the sum of their squared residuals is less than a threshold $\Omega^2$. Specifically, we enforce that \begin{equation} \sum_{\hat{\kappa}_\mu \notin \mathcal{A}}^\text{excluded} \norm{r_\mu} \approx \sum_{\hat{\kappa}_\mu \notin \mathcal{A}}^\text{excluded} \frac{\norm{\tilde{r}_\mu}}{\Delta t^2} \leq \Omega^2, \label{eq:op_thresh} \end{equation} where we have used the fact that $|\tilde{r}_\mu| \approx \Delta t | r_\mu|$. In practice, we sort the operators in ascending order according to $\norm{\tilde{r}_\mu}$, and starting from the first element, we discard operators for as long as Eq.~\eqref{eq:op_thresh} remains satisfied. The remaining operators are appended to the end of $\mathcal{A}$ in order of decreasing $\norm{\tilde{r}_\mu}$.
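Classically, this discard rule reduces to a sort followed by a prefix sum over the estimated squared residuals. The sketch below (with invented numbers, and assuming the squared residuals have already been estimated) returns the indices of the retained operators sorted by decreasing importance:

```python
import numpy as np

def select_operators(r2, Omega2):
    """Discard a low-importance set whose cumulative squared residual stays
    below Omega2; return the kept indices, largest squared residual first."""
    order = np.argsort(r2)            # ascending by squared residual
    cumsum = np.cumsum(r2[order])
    discarded = cumsum <= Omega2      # discard while the running sum stays below the threshold
    return order[~discarded][::-1]    # survivors, sorted by decreasing r^2

# Toy squared residuals for a five-operator pool.
r2 = np.array([0.002, 0.04, 0.0005, 0.09, 0.001])
sel = select_operators(r2, Omega2=0.005)   # keeps operators 3 and 1
```

Here the three smallest entries (summing to $0.0035 \leq \Omega^2$) are discarded, and the survivors are appended in order of decreasing squared residual, mirroring the prescription in the text.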
The resulting operator ordering is consistent with the following renormalization transformation of the Hamiltonian \begin{equation} \begin{split} \hat{H} & \rightarrow e^{-t_{\mu_{1}} \hat{\kappa}_{\mu_{1}}} \hat{H} e^{t_{\mu_{1}} \hat{\kappa}_{\mu_{1}}} = \bar{H}_1 \\ & \rightarrow e^{-t_{\mu_{2}} \hat{\kappa}_{\mu_{2}}} \bar{H}_1 e^{t_{\mu_{2}} \hat{\kappa}_{\mu_{2}}} = \bar{H}_2 \\ & \rightarrow\cdots \end{split} \end{equation} which begins with the largest many-body rotation and gradually continues with smaller ones. Note that this ordering is the reverse of the one used in ADAPT-VQE and iQCC, where operators with the largest gradient are applied first to the reference. In applications of this selection scheme, we found that our operator ordering is more numerically robust and accurate than its reverse. This selection procedure is easily integrated in the PQE algorithm by performing a series of computations with increasingly larger ordered sets. When no new operators are added to $\mathcal{A}$, the computation is considered converged, and the final operator set satisfies Eq.~\eqref{eq:op_thresh}. The details of the selected PQE algorithm are discussed in Sec.~\ref{sec:algorithm}. Throughout this work, we ignore errors that arise from finite measurement of the approximate residual $\ket{\tilde{r}}$. However, since operator selection only requires an approximate determination of the $|\tilde{r}_\mu|^2$ values, it is not necessary to perform a large number of measurements. Indeed, in Appendix~\ref{apdx:aa_reduce_cost}, we discuss a simple strategy based on sampling $\ket{\tilde{r}}$ a fixed number of times that performs as well as the exact scheme. \subsection{Outline of the selected PQE algorithm} \label{sec:algorithm} \begin{figure*}[ht!] \centering \includegraphics[width=5.5in]{figure_2.pdf} \caption{Outline of the adaptive PQE algorithm.
Steps labeled ``QPU'' indicate parts of the algorithm that run on a quantum processing unit.} \label{fig:algorithm} \end{figure*} The combination of the PQE approach with the selection procedure described in Sec.~\ref{sec:selection} leads to a very efficient flexible-ansatz quantum algorithm, which we refer to as selected PQE (SPQE). The selected PQE algorithm requires the simultaneous solution of the residual conditions and the selection of important excitation operators. To realize this scheme, we alternate micro-iterations to converge the residual equations (for the current ordered operator set) with macro-iterations that perform importance selection of new operators. The SPQE procedure is illustrated in Fig.~\ref{fig:algorithm} and consists of the following steps: \begin{enumerate} \item \textbf{Initialization}. The user provides the occupation numbers that define the reference state $\Phi_0$. Start at macro-iteration number $k = 0$ with an empty operator set ($\mathcal{A}^{(0)} = \{ \}$). \item \textbf{Importance selection}. At macro-iteration $k$, perform $M$ measurements in the computational basis for the state $\ket{\tilde{r}^{(k)}} = \hat{U}^{(k)\dagger} e^{i \Delta t \hat{H}} \hat{U}^{(k)} \ket{\Phi_0}$. The number of times the state $\Phi_\mu$ is measured is accumulated in the variable $N_\mu$. These numbers are used to estimate the squared residuals via Eq.~\eqref{eq:approx_res_sq}, which are in turn used to select important excitations not included in the current set $\mathcal{A}^{(k)}$. The current operator set $\mathcal{A}^{(k)}$ and all the newly selected operators are included in the new set $\mathcal{A}^{(k + 1)}$. When forming this new ordered set, we append the new operators---sorted in decreasing value of the approximate squared residual---to $\mathcal{A}^{(k)}$. If the sum in Eq.~\eqref{eq:op_thresh} over \textit{all} approximate residuals is less than the threshold $\Omega^2$, such that no new operators are added, then return the final energy.
\item \textbf{Solution of the PQE equations}. Using the new set $\mathcal{A}^{(k + 1)}$, solve the PQE equations via quasi-Newton micro-iterations. These micro-iterations alternate the evaluation of the residuals [Eq.~\eqref{eq:ucc2}] and the amplitude update [Eq.~\eqref{eq:fixed_point}]. The micro-iterations are considered converged when the norm of the residual vector $\normnorm{\mathbf{r}}$ is less than a user-specified threshold $\omega_r$. Once converged, this step produces a new set of amplitudes [$\mathbf{t}^{(k + 1)}$] and the corresponding unitary [$\hat{U}^{(k + 1)}$], as well as the updated energy [$E^{(k + 1)}$]. Increase the macro-iteration number by one ($k \leftarrow k + 1$) and go to Step 2. \end{enumerate} All VQE and PQE methodologies were implemented in a development branch of the open-source package \textsc{QForte} \cite{stairqforte}, and utilize its state-vector simulator. All VQE calculations use a micro-iteration convergence threshold $\omega_g = 10^{-5}$~$E_{\rm h}$\xspace for the gradient norm $\normnorm{\mathbf{g}}$ and all PQE calculations use a micro-iteration threshold $\omega_r = 10^{-5}$~$E_{\rm h}$\xspace for the residual norm $\normnorm{ \mathbf{r} }$. Note that when the $\hat{\kappa}_\mu$ operators are mapped to a qubit basis they can be expressed as a sum of Pauli operator strings that commute \cite{Romero:2019hk}, and therefore, a single operator $e^{t_\mu \hat{\kappa}_\mu}$ can be implemented exactly as a product of exponentials without invoking the Trotter approximation. \section{Results and Discussion} \label{sec:results} \subsection{Comparison of PQE and VQE with a disentangled UCC ansatz} \begin{figure*}[ht!] \centering \includegraphics[width=6.0in]{figure_3.pdf} \caption{dUCCSD energy convergence for linear \ce{H4}--\ce{H10} chains in a STO-6G basis at (a) $r_{\rm{H-H}} = 0.75$~{\AA}, and (b) $r_{\rm{H-H}} = 1.50$~{\AA}. $| E^{(n)} - E^{(n-1)}|$ is the absolute value of the energy change between subsequent iterations.
Both plots compare PQE~vs.~VQE convergence with respect to number of residual (for PQE) or gradient (for VQE) evaluations ($N_{\rm{res/grad}}$).} \label{fig:Hn_econv_both} \end{figure*} Our initial goal is to compare the performance of PQE and VQE using a unitary coupled-cluster trial state truncated at a given particle-hole excitation level. We test these two methods on a family of linear hydrogen chains (with identical nearest-neighbor distance) ranging from four to ten atoms, both near their equilibrium geometries ($r_\text{H-H} = 0.75$ \AA{}) and stretched geometries ($r_\text{H-H} = 1.5$ \AA{}). Hydrogen models such as these have been studied experimentally with VQE \cite{google2020hartree} and have recently been used as a benchmark for both quantum \cite{grimsley2019adaptive, Stair_2020} and classical \cite{Motta2017TowardsThe, motta2020ground, stair2020exploring} algorithms. Figure~\ref{fig:Hn_econv_both} shows the energy convergence of PQE and VQE using a disentangled UCC ansatz with singles and doubles (dUCCSD). All calculations employed restricted Hartree--Fock (RHF) orbitals from the quantum chemistry package \textsc{Psi4} \cite{smith2020psi4}, and Pauli-operator Hamiltonians obtained via the Jordan--Wigner transformation implemented in \textsc{QForte} \cite{stairqforte}. To achieve optimal performance for both VQE and PQE, we employ the Broyden--Fletcher--Goldfarb--Shanno (BFGS) algorithm \cite{broyden1970convergence, fletcher1970new, goldfarb1970family, shanno1970conditioning} (as implemented in the \textsc{SciPy} \cite{virtanen2020scipy} scientific computing library) with analytical gradients for VQE, and DIIS \cite{pulay1980convergence,SCUSERIA1986236} to accelerate amplitude convergence of PQE. These computations use the same operator ordering for both approaches, with all amplitudes initialized to zero.
The ordering of the operators $e^{t_{\mu_i} \hat{\kappa}_{\mu_i}}$ entering Eq.~\eqref{eq:qucc} is defined by the binary representation of the corresponding determinants $\ket{\Phi_{\mu_i}} = \hat{\tau}_{\mu_i} \ket{\Phi_0}$ in the occupation number representation. Because the disentangled UCCSD state cannot exactly parameterize an eigenstate of the Hamiltonian, the numerically converged PQE and VQE energies are not identical. Nevertheless, for all the cases we examined, the converged PQE and VQE energies differ by less than $10^{-6}$~$E_{\rm h}$\xspace. Near the equilibrium geometry, we find that the PQE energy converges significantly faster than the VQE energy as a function of the number of residual (PQE) or gradient (VQE) evaluations. For example, to converge the near-equilibrium \ce{H10} energy to $10^{-6}$~$E_{\rm h}$\xspace, PQE requires only seven residual evaluations, while VQE requires approximately 23 gradient evaluations. In the case of VQE, we also observe that the number of required gradient evaluations grows with system size, with \ce{H10} requiring twice as many gradient-vector evaluations as \ce{H4} to converge. In contrast, PQE computations converge with similar speed for all equilibrium hydrogen systems. Plots of the energy change vs. the norm of the residual/gradient vector show similar trends and are reported in Appendix~\ref{sec:add_num_vqe_compare}. At the stretched geometry, strong correlation effects cause the disentangled UCCSD trial state to perform more poorly, with both PQE and VQE yielding energy errors that range from 1.39~m$E_{\rm h}$\xspace (for \ce{H4}) to 13.59~m$E_{\rm h}$\xspace (for \ce{H10}). We see that PQE converges slightly more slowly in the stronger correlation regime, with stretched \ce{H10} requiring 13 residual evaluations (instead of seven) to converge the energy to $10^{-6}$~$E_{\rm h}$\xspace.
However, PQE always converges faster than VQE, with the latter requiring 19 gradient-vector evaluations to converge stretched \ce{H10} to the same accuracy level. We also find similar trends in PQE and VQE convergence for \ce{BeH2}, for which convergence data can be found in Appendix~\ref{sec:add_num_vqe_compare}. In summary, this initial set of results suggests that for a given trial state, optimization via PQE is faster than VQE and less dependent on the number of parameters to optimize. We expect this to be the case also for VQE based on numerical gradients or gradient-free optimization methods, since these two variants are known to be slower than the BFGS approach adopted here \cite{Romero:2019hk}. \subsection{Effect of stochastic errors on the convergence of PQE and VQE} \begin{figure*}[ht!] \centering \includegraphics[width=3.375in]{figure_4.pdf} \caption{Energy and residual/gradient norm ($\normnorm{\mathbf{r}}$)/($\normnorm{\mathbf{g}}$) convergence of dUCCSDTQ wave functions optimized with PQE/VQE with various amounts of stochastic noise added to the residuals/gradients. Energy error is relative to FCI. $\sigma$ controls the degree of noise and is the standard deviation of the normal distribution, centered at the exact residual/gradient value, from which all residuals/gradients used in the calculations are randomly sampled [see Eq.~\eqref{eq:res_noise}]. Values at each PQE/VQE iteration are averages over 50 runs on \ce{H4} at $r_{\mathrm{H-H}} = 1.0$~\AA. Error bars denote one standard deviation.} \label{fig:H4_w_noise} \end{figure*} The results presented in the previous section assumed error-free quantum gates and arbitrarily precise measurements. In practice, calculations performed on NISQ hardware are affected by decoherence errors, poor gate fidelity, readout errors, and loss of precision due to insufficient measurements. These sources of error will lead to incorrect gradients and residuals that are then passed to a classical optimizer.
Therefore, it is interesting to compare the resilience of the PQE and VQE procedures when the residuals and gradients are affected by stochastic errors. To model the presence of errors, we modify the PQE procedure by adding to the residual vector a stochastic error sampled from a Gaussian distribution with standard deviation $\sigma$ [$\mathcal{N}(0,\sigma^2)$] \begin{equation} \label{eq:res_noise} r_\mu^\text{measured} = r_\mu + \mathcal{N}(0,\sigma^2). \end{equation} For VQE, we similarly add stochastic noise to the exact energy gradients. Using Eq.~\eqref{eq:res_noise} as a noise model mainly emulates errors that arise from finite measurement. Because the inexact residuals $r_\mu^\text{measured}$ are used to update the cluster amplitudes via Eq.~\eqref{eq:fixed_point}, this noise model also gives rise to control errors, that is, errors arising from the difference between the unitaries for noiseless updated amplitudes $\hat{U}(\mathbf{t})$ and noisy updated amplitudes $\hat{U}(\mathbf{t}+\Delta\mathbf{t})$. Control errors due to finite measurement have been modeled this way in previous studies \cite{Romero:2019hk} and will always propagate through optimization on physical hardware. We note, however, that using Eq.~\eqref{eq:res_noise} exclusively as a noise model is insufficient to capture more nuanced or device-specific errors such as decoherence. We tested the performance of PQE and VQE under noise by performing a batch of 50 computations on the linear \ce{H4} molecule with nearest-neighbor distance set to 1.0~{\AA}. We use the disentangled UCC ansatz with up to quadruple excitations, which spans the full operator set for this system. Figure~\ref{fig:H4_w_noise} shows a comparison of PQE and VQE optimized with various levels of noise, controlled by the magnitude of $\sigma$ in Eq.~\eqref{eq:res_noise}.
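A toy version of this numerical experiment (a self-contained mock quasi-Newton iteration with invented denominators, with Gaussian noise injected as in Eq.~\eqref{eq:res_noise}) reproduces the qualitative behavior: the iteration stalls at a residual floor set by $\sigma$ rather than converging to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
Delta = np.array([2.0, 3.0, 5.0])    # stand-in denominators for a 3-amplitude mock problem
c = np.array([0.3, -0.2, 0.1])
sigma = 1e-3                         # noise level of Eq. (res_noise)

def residual(t):
    return -Delta * t + c + 0.05 * t**2

# Noisy iteration: each "measured" residual carries Gaussian noise.
t = np.zeros(3)
for _ in range(100):
    r_measured = residual(t) + rng.normal(0.0, sigma, size=3)  # Eq. (res_noise)
    t = t + r_measured / Delta
final_r_norm = np.linalg.norm(residual(t))

# Noiseless reference: the same iteration converges essentially exactly.
t_ref = np.zeros(3)
for _ in range(100):
    t_ref = t_ref + residual(t_ref) / Delta
noiseless_r_norm = np.linalg.norm(residual(t_ref))
```

The noisy run settles at a residual norm of order $\sigma$, while the noiseless run reaches machine precision, mirroring the plateaus in Fig.~\ref{fig:H4_w_noise}.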
We find that for all values of $\sigma > 0$, the energy convergence of PQE is essentially identical to that of noiseless PQE up to a point, after which the energy error hovers around a finite value. Similar behavior can be seen for convergence of the residual vector. We find that VQE has similar characteristics to PQE in the presence of noise, but that it is able to achieve slightly more accurate energies for a given $\sigma$ value. With the same noise level, however, VQE generally requires two to three times as many gradient evaluations as the number of residual evaluations required by PQE. An important aspect of this comparison is that, for a given $\sigma$, both the residual and gradient vectors yield comparable asymptotic errors in PQE and VQE, respectively. Conversely, PQE and VQE computations of comparable energy accuracy require similar precision in the measurement of the residual and gradient vectors. In Appendix~\ref{sec:formal_vqe_compare} we use this result to estimate the relative cost of PQE and VQE via a formal analysis. \subsection{Selected PQE based on a full dUCC operator pool} \begin{figure*}[ht!] \centering \includegraphics[width=6.5in]{figure_5.pdf} \caption{Ground state potential energy curve for the symmetric dissociation of (a) \ce{BeH2} and (b) \ce{H6} computed using a minimal (STO-6G) basis. The energy error relative to FCI (top), number of classical parameters used (middle), and number of individual elements of the gradient (for VQE) or residual (for PQE) evaluated (bottom) are given as a function of the \ce{Be}--\ce{H} and \ce{H}--\ce{H} bond length. Here, ADAPT-VQE uses a generalized singles and doubles operator pool and is optimized with the BFGS algorithm using gradient convergence thresholds $10^{-1}$ ($\epsilon_1$) and $10^{-3}$ ($\epsilon_3$). SPQE results use macro-iteration convergence thresholds $\Omega=10^{-1}$ and $10^{-2}$.
The top plots also show the energy error corresponding to chemical accuracy, here defined as 1 kcal/mol $\approx$ 1.59 m$E_{\rm h}$\xspace.} \label{fig:H6_BeH2_pes_compare} \end{figure*} Here we compare the results of selected PQE (SPQE) with an arbitrary order particle-hole operator pool, and ADAPT-VQE, which, unless otherwise noted, uses a generalized singles and doubles pool (operators of the form $\cop{p}\aop{q}$ and $\cop{p}\cop{q}\aop{s}\aop{r}$, where the indices $p,q,r,s$ run over all spin orbitals). To test these methods, we compute the energy as a function of bond distance for: 1) the symmetric dissociation of the linear \ce{BeH2} molecule and 2) the symmetric dissociation of a chain of six hydrogen atoms. In both cases, there is a build-up of strong correlation effects as the bond length increases. For each system, we report two sets of results for SPQE using the cumulative thresholds $\Omega = 10^{-1}$ and $10^{-2}$~$E_{\rm h}$\xspace, and two sets of ADAPT-VQE results with gradient thresholds $10^{-1}$ and $10^{-3}$~$E_{\rm h}$\xspace ($\epsilon_1$ and $\epsilon_3$ in the original notation used by the authors). Since we later compare these methods to classical approaches formulated in a determinant basis, we do not employ a pool of spin-adapted operators. The dissociation curve of \ce{BeH2} shown in Fig.~\ref{fig:H6_BeH2_pes_compare} (a) demonstrates that both SPQE and ADAPT-VQE are able to achieve significantly smaller energy errors than dUCCSD, while using only 10--20 more parameters. Although the two approaches employ different selection schemes, the ADAPT-VQE($\epsilon_3$) and SPQE($\Omega = 10^{-2}$) approaches produce compact trial states with a similar number of classical parameters and comparable errors. The same trends are seen in the symmetric dissociation curve of \ce{H6}, which is shown in Fig.~\ref{fig:H6_BeH2_pes_compare} (b).
However, for this system we find that achieving sub-m$E_{\rm h}$\xspace accuracy---particularly with the onset of strong correlation at $r_{\rm{H-H}}$ values greater than 1.5~\AA{}---requires a number of parameters that approaches the size of the full Hilbert space (200) for both ADAPT-VQE and SPQE. The need to saturate the Hilbert space to accurately describe \ce{H6} is likely due to how small this example is, and it speaks more to the ability of the trial states to produce compact representations rather than the performance of these algorithms in optimizing such ans\"{a}tze. The most noticeable difference between SPQE and ADAPT-VQE can be seen in the bottom panels of Fig.~\ref{fig:H6_BeH2_pes_compare} (a)-(b): for trial states with a comparable number of parameters, SPQE requires significantly fewer residual element evaluations than gradient element evaluations in ADAPT-VQE. For example, at a \ce{Be}--\ce{H} bond distance of 1.65~\AA{}, both methods produce very similar energy errors using almost the same number of parameters, but ADAPT-VQE($\epsilon_3$) requires the evaluation of 35155 elements of the gradient, whereas SPQE($\Omega = 10^{-2}$) requires only 1220 elements of the residual. Importantly, we note that the bottom panels of Fig.~\ref{fig:H6_BeH2_pes_compare} exclusively count the number of elements of the gradient or residual required by the optimization, and do not include the additional measurements required for operator selection. \begin{table*}[!ht] \centering \caption{Ground state of \ce{H6} computed using a minimal (STO-6G) basis with an RHF orbital convergence threshold of $10^{-10}$~$E_{\rm h}$\xspace. Comparison of SPQE with threshold $\Omega$ and ADAPT-VQE using the same number of parameters as SPQE. ADAPT-VQE results are computed for both a generalized singles and doubles operator pool (GSD) and a particle-hole singles and doubles pool (SD).
The properties reported are the energy error with respect to FCI [$\Delta E$, in $E_{\rm h}$\xspace], the number of classical parameters used [$N_{\rm{par}}$], the number of parameters corresponding to three-body or higher excitations [$N_{\rm{T+}}$], the number of CNOT gates used in the unitary [$N_{\rm{CNOT}}$] (not optimized), and the total number of residual or gradient element evaluations [$N_\mathrm{res}$ or $N_\mathrm{grad}$]. $r$ denotes the H-H nearest neighbor distance in \AA{}ngstrom.} \scriptsize \begin{tabular*}{\textwidth } {@{\extracolsep{\stretch{1.0}}}*{1}{c}*{13}{r}@{}} \hline \hline & \multicolumn{5}{c}{SPQE ($\Omega = 10^{-1}$ $E_{\rm h}$\xspace) } & \multicolumn{4}{c}{ADAPT-VQE-GSD } & \multicolumn{4}{c}{ADAPT-VQE-SD } \\ \cline{2-6} \cline{7-10} \cline{11-14} $r$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{T+}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{res}$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{grad}$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{grad}$ \\ \hline 0.50 & 0.002153 & 30 & 0 & 2400 & 339 & 0.002152 & 30 & 2400 & 8378 & 0.002152 & 30 & 2400 & 8378 \\ 1.00 & 0.006050 & 32 & 0 & 2720 & 503 & 0.005872 & 32 & 2720 & 8399 & 0.005872 & 32 & 2720 & 8399 \\ 1.50 & 0.012487 & 36 & 0 & 2944 & 1103 & 0.011176 & 36 & 3040 & 8046 & 0.011176 & 36 & 3040 & 8046 \\ 2.00 & 0.015066 & 43 & 8 & 20272 & 1087 & 0.011204 & 43 & 3560 & 15176 & 0.010350 & 43 & 3312 & 12592 \\[6pt] & \multicolumn{5}{c}{SPQE ($\Omega = 10^{-2}$ $E_{\rm h}$\xspace) } & \multicolumn{4}{c}{ADAPT-VQE-GSD } & \multicolumn{4}{c}{ADAPT-VQE-SD } \\ \cline{2-6} \cline{7-10} \cline{11-14} $r$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{T+}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{res}$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{grad}$ & $\Delta E$ & $N_{\rm{par}}$ & $N_{\rm{CNOT}}$ & $N_\mathrm{grad}$ \\ \hline 0.50 & 0.000013 & 79 & 24 & 15568 & 1127 & 0.000012 & 79 & 6608 & 87506 & 0.000085 & 79 & 4608 & 29707 \\ 1.00 & 0.000031 & 105 & 46 & 33232 & 
2076 & 0.000033 & 105 & 9244 & 166119 & 0.000064 & 105 & 8128 & 388182 \\ 1.50 & 0.000079 & 166 & 111 & 166032 & 6074 & 0.000004 & 166 & 14528 & 566143 & 0.000028 & 166 & 12576 & 1437917 \\ 2.00 & 0.000141 & 169 & 114 & 226768 & 9537 & 0.000018 & 169 & 14776 & 719390 & 0.000096 & 169 & 12944 & 1312127 \\[3pt] \hline \hline \end{tabular*} \label{tab:N_CNOT_comparison_2} \end{table*} Since ADAPT-VQE and SPQE select new operators from their pools using different importance criteria, it is not possible to perform a direct comparison of their performance using fixed thresholds. To facilitate this comparison, in Table \ref{tab:N_CNOT_comparison_2} we report SPQE results using two values of $\Omega$ ($10^{-1}$ and $10^{-2}$~$E_{\rm h}$\xspace) together with ADAPT-VQE results obtained using an ansatz with the same number of parameters as SPQE. We include ADAPT-VQE results obtained using both a generalized singles and doubles operator pool (GSD), and a particle-hole singles and doubles pool (SD). The results in Tab.~\ref{tab:N_CNOT_comparison_2} obtained with $\Omega$ = $10^{-1}$~$E_{\rm h}$\xspace show SPQE and the two variants of ADAPT-VQE to perform equally well at all bond distances. The second set of results obtained with a tighter threshold ($\Omega=10^{-2}$) shows similar performance of the methods at short bond lengths, with two notable exceptions. First, at $r_{\rm{H-H}} = 2.0$~\AA{}, ADAPT-VQE-GSD yields more accurate results than ADAPT-VQE-SD and SPQE. At this point, the SPQE ansatz contains 55 singles and doubles, and 114 operators of higher rank, yielding an error of about 0.14~m$E_{\rm h}$\xspace, while ADAPT-VQE-GSD is an order of magnitude more accurate. Second, also at $r_{\rm{H-H}} = 2.0$~\AA{}, SPQE uses only 9537 residual element evaluations, while ADAPT-VQE requires 719390 (GSD) and 1312127 (SD) gradient element evaluations.
The evaluation of fewer residual elements in SPQE will correspond to approximately the same savings in the total number of measurements (see Appendix~\ref{sec:formal_vqe_compare}). A final important aspect to compare between SPQE and ADAPT-VQE is the number of native CNOT gates, which we use as a proxy for circuit depth. Tab.~\ref{tab:N_CNOT_comparison_2} reports the number of CNOT gates for the converged trial states. These numbers overestimate the actual gate count since they ignore optimizations such as the cancellation of Jordan--Wigner strings \cite{Haystings2015Improving}, especially for three- and higher-body operators. We see that at the larger threshold values ($\Omega=10^{-1}$ for SPQE and $\epsilon_1=10^{-1}$ for ADAPT) the numbers of CNOTs for all three approaches are relatively similar, and are generally within a factor of 2 of one another. However, at tighter thresholds the SPQE circuit contains significantly more CNOT gates than the one for ADAPT-VQE. For example, at $r_{\rm{H-H}} = 2.0$~{\AA}, 114 of the 169 operators used in SPQE are three-body or higher, while ADAPT-SD and ADAPT-GSD contain only up to two-body operators. Consequently, the SPQE($\Omega=10^{-2}$) circuit contains more CNOT gates (226768) than ADAPT-VQE-GSD (14776). As discussed in Appendix~\ref{sec:formal_vqe_compare}, this large difference in CNOT count is due to the growth in the cost to implement the exponential of $n$-body second-quantized operators as a function of $n$ (and the lack of circuit optimization). Nevertheless, it is important to note that the systems studied here are not large enough to draw definitive conclusions about the relative performance of SPQE and ADAPT-VQE. For example, the pool of generalized singles and doubles (GSD) for \ce{H6} contains 870 operators, a number significantly larger than the size of the full Hilbert space (200).
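The growth of the CNOT count with operator rank can be made concrete with a back-of-the-envelope counting sketch. The assumptions here are ours, not the compilation actually used above: a Jordan--Wigner mapping with adjacent orbital indices (so each Pauli string has weight $2n$, ignoring the extra weight of $Z$ strings), an $n$-body anti-Hermitian term that expands into $2^{2n-1}$ Pauli strings, and each weight-$w$ string exponential compiled as a CNOT ladder costing $2(w-1)$ CNOTs with no cross-string optimization:

```python
def cnot_count(n_body):
    """Rough CNOT count for the exponential of one n-body dUCC term.

    Assumptions (ours, for illustration): 2**(2n - 1) Pauli strings of
    weight 2n per term, and 2*(w - 1) CNOTs per weight-w string, with
    no circuit optimization (e.g., no ladder cancellation).
    """
    w = 2 * n_body                       # Pauli-string weight
    n_strings = 2 ** (2 * n_body - 1)    # strings per n-body term
    return n_strings * 2 * (w - 1)

# Exponential growth with rank: 4, 48, 320, 1792 CNOTs for n = 1..4
counts = {n: cnot_count(n) for n in (1, 2, 3, 4)}
```

Under these assumptions a single quadruple costs roughly forty times as many CNOTs as a single double, which is qualitatively consistent with the large gap between the SPQE and ADAPT-VQE gate counts in Tab.~\ref{tab:N_CNOT_comparison_2}.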
This scenario is unlike most systems of interest, where the number of generalized singles and doubles is much less than the number of particle-hole operators. Unfortunately, our attempts to compare numbers for systems larger than \ce{BeH2} and \ce{H6} were unsuccessful due to the high computational cost of simulating ADAPT-VQE. \begin{figure}[h!] \centering \includegraphics[width=3.35in]{figure_6.pdf} \caption{Results for the 1D, 2D, and 3D \ce{H10} models using a STO-6G basis at $r_{\rm{H-H}}=1.50$~{\AA}. Singlet ground state energy errors relative to FCI as a function of the number of variational parameters $N_{\rm{par}}$ for dUCC-SPQE, ACI, and DMRG. Unsigned energy errors for methods with a fixed number of parameters (taken to be the number of cluster amplitudes) are shown by colored dots. Energy errors for ACI with a second-order perturbative correction (ACI+PT2) are also shown by orange dotted lines. For ACI+PT2, CCSD(T), CRCC(2,3), and Mk-MRCCSD(T) we only count the number of nonperturbative amplitudes. The accuracy volume threshold [0.1~m$E_{\rm h}$\xspace per electron] is plotted as a grey dotted line.} \label{fig:H10_123D_econv} \end{figure} A second aspect we investigate is the ability of the selected PQE approach to compactly represent wave functions for systems displaying strong correlation effects, where many-body methods commonly fail due to the breakdown of the mean-field approximation. We compare the performance of SPQE with the adaptive configuration interaction \cite{Schriber2016Adaptive} (ACI) and DMRG using data generated in our recent benchmark study of hydrogen systems \cite{stair2020exploring}. In this work, we characterize the resource requirements of a computational method $X$ with the \textit{accuracy volume} ($\mathcal{V}_X$), defined as the smallest number of parameters necessary to achieve a given energy error per electron (here taken to be $10^{-4}$ $E_{\rm h}$\xspace/electron).
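The accuracy volume can be computed from convergence data in a few lines. The helper below follows the definition above; the sample $(N_{\rm{par}}, \Delta E)$ data points are hypothetical, chosen only to be consistent with a 1D \ce{H10} accuracy volume of 3510 parameters:

```python
import numpy as np

def accuracy_volume(n_params, errors, n_electrons, tol_per_electron=1e-4):
    """Smallest parameter count achieving the target energy error per electron.

    n_params, errors : parallel sequences from a convergence study
    Returns None if no point reaches the target accuracy.
    """
    target = tol_per_electron * n_electrons
    ok = np.asarray(errors) <= target
    if not ok.any():
        return None
    return int(np.min(np.asarray(n_params)[ok]))

# Hypothetical (parameters, error in Hartree) pairs for a 10-electron system
npar = [100, 500, 2000, 3510, 5000]
errs = [5e-2, 8e-3, 2e-3, 9e-4, 3e-4]
vol = accuracy_volume(npar, errs, n_electrons=10)
```

Here the target error is $10^{-4}\times 10 = 10^{-3}$~$E_{\rm h}$\xspace, first met at 3510 parameters, so `vol` is 3510.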
We consider three \ce{H10} model systems: a one-dimensional (1D) linear chain, a two-dimensional (2D) triangular lattice, and a three-dimensional (3D) close-packed pyramid. Using a minimal STO-6G basis and a one-qubit-per-spin-orbital mapping, the 1D, 2D, and 3D \ce{H10} models are represented with $2^{20}$ computational basis states, but are restricted to 31752, 15912, and 15912 determinants, respectively, after accounting for spin and Abelian point-group symmetries. Figure~\ref{fig:H10_123D_econv} (a) displays the SPQE, ACI, and DMRG energy error as a function of the number of classical variational parameters for the 1D \ce{H10} system at a stretched bond length of $r_{\rm{H-H}}=1.50$~{\AA}. DMRG, which is ideally suited to simulate gapped 1D systems, affords the most compact wave function for the 1D chain, with an accuracy volume of only 176 variational parameters ($\mathcal{V}_{\rm{DMRG}}^{\rm{1D}} = 176$). The SPQE exponential ansatz is less compact than the DMRG matrix product state, with an accuracy volume of $\mathcal{V}_{\rm{SPQE}}^{\rm{1D}} = 3510$ parameters. We observe that the ACI wave function---a linear ansatz built by selecting the determinants with the largest energy contributions---gives the least compact representation, such that $\mathcal{V}_{\rm{ACI}}^{\rm{1D}} = 22989$ at the same level of accuracy. When the ACI energy is augmented with a second-order perturbative correction---which accounts for determinants excluded from the wave-function expansion---a more compact ansatz is sufficient to achieve an energy error of $10^{-3}$~$E_{\rm h}$\xspace. This suggests that it might be valuable to formulate a classical perturbative correction to the SPQE energy to help capture even more correlation energy. Results for the 2D \ce{H10} lattice [Fig.~\ref{fig:H10_123D_econv} (b)] show that SPQE yields a more compact representation than DMRG or ACI until an energy error of approximately 1~m$E_{\rm h}$\xspace.
For the 2D system the accuracy volume follows the order SPQE (4193) $\approx$ DMRG (4218) $<$ ACI (11122). At energy errors less than 1~m$E_{\rm h}$\xspace, DMRG still affords the most compact representation for the 2D system. Finally, for the 3D \ce{H10} pyramid lattice shown in Fig.~\ref{fig:H10_123D_econv} (c), we find that DMRG yields the most compact representation ($\mathcal{V}_{\rm{DMRG}}^{\rm{3D}} = 3549$), but not nearly by the same margin as for the 1D system. Again, SPQE has a significantly more compact wave function than ACI, such that $\mathcal{V}_{\rm{SPQE}}^{\rm{3D}} = 5544$ and $\mathcal{V}_{\rm{ACI}}^{\rm{3D}} = 10519$. While further investigation of larger strongly correlated systems will be necessary, it is encouraging to see that SPQE performs similarly to or better than two state-of-the-art classical methodologies when applied to the 2D and 3D \ce{H10} systems. We have also included in Fig.~\ref{fig:H10_123D_econv} energy errors for several classical coupled-cluster variants. Specifically, we have included results from Ref.~\cite{stair2020exploring} for CC with singles and doubles excitations \cite{Purvis1982FullCoupled} (CCSD), CCSD with perturbative triples \cite{Raghavachari1989AFifth} [CCSD(T)], and the completely renormalized CC with perturbative triples \cite{piecuch2005renormalized} [CRCC(2,3)]. These CC methods are (generally) more computationally affordable than the other methods. In the 2D and 3D systems, CCSD performs comparably to dUCC-SPQE (with the same number of parameters), while CCSD(T) and CRCC(2,3) produce more accurate energies. However, all coupled-cluster energies are more than 10 m$E_{\rm h}$\xspace away from the FCI energy.
Fig.~\ref{fig:H10_123D_econv} also reports results computed using Mukherjee MRCC with singles and doubles \cite{Mahapatra:1999tm,Evangelista:2007hz} (Mk-MRCCSD) and Mk-MRCCSD augmented with perturbative triples \cite{Evangelista:2010cq} [Mk-MRCCSD(T)] using an active space containing the highest occupied and lowest unoccupied Hartree--Fock orbitals. The Mk-MRCC results improve upon single-reference coupled-cluster methods at the cost of doubling the number of cluster amplitudes. The Mk-MRCC methods are particularly accurate for the 1D system, where they produce errors of the order of 2--3 m$E_{\rm h}$\xspace. Despite the improvement shown by the multireference CC methods, it is important to note that these methods have a computational cost that scales exponentially with the number of active space determinants. \section{Conclusions} \label{sec:conclusions} In this work, we present a new NISQ-friendly algorithm---the projective quantum eigensolver (PQE)---to compute the ground state of a many-body problem using disentangled (factorized) unitary coupled-cluster trial states. The PQE approach consists of a nonlinear optimization problem whose solution requires the evaluation of projections of the Schr\"{o}dinger equation onto a many-body basis (the residual vector), but still gives energies that are a variational upper bound to the ground state energy. We show how to efficiently evaluate the residual vector via measurement of simple expectation values, with a cost that is twice that of an energy evaluation (per element). For small molecular systems, we find that PQE and VQE with a fixed dUCCSD trial state converge to nearly identical energies; however, the number of residual evaluations required by PQE is smaller than the number of gradient evaluations needed by VQE. PQE shows resiliency to stochastic noise similar to that of VQE, while still converging more rapidly.
To treat strongly correlated electrons, we introduce a selected variant of PQE in which the trial state is constructed iteratively by adding batches of important operators. The resulting SPQE algorithm can construct efficient unitary circuits like ADAPT-VQE, but it requires orders of magnitude fewer residual element evaluations than the number of gradient element evaluations required by the latter. In SPQE, the selection of new operators is done according to the magnitude of the elements of the residual vector, and is performed by sampling a quantum state that directly encodes in its probability amplitudes the importance of the entire operator pool. Because the selection cost in SPQE is not affected by the size of the operator pool, the unitary can include operators of rank up to the total number of particles. Finally, we compare the energy convergence with the number of parameters for 1D, 2D, and 3D \ce{H10} lattices using SPQE and two classical methods well suited to treat strong electronic correlation: the adaptive configuration interaction and the density matrix renormalization group. Given a target accuracy of up to approximately 1~m$E_{\rm h}$\xspace, we find that SPQE produces significantly more compact trial states for the 2D system than ACI and is comparable to DMRG. However, DMRG affords the most compact wave function parameterization in 1D, for accuracies below 1~m$E_{\rm h}$\xspace in 2D, and, by a much smaller margin, in 3D. Taken together, PQE and SPQE are very promising tools for studying many-body systems in both the strong and weak correlation regimes using NISQ hardware. In summary, the PQE approach is a viable and more economical alternative to variational quantum algorithms. In its current formulation, PQE can be applied to any trial state generated by exponentiating a set of linearly independent operators with an identity metric matrix, as is the case for disentangled unitary coupled-cluster ans\"{a}tze.
For these trial states, methods to reduce the number of measurements and exploit symmetries \cite{yen2020measuring, verteletskyi2020measurement, setia2020reducing} developed for VQE can likewise be used to improve PQE. Interesting extensions of PQE include generalizations to unitaries that contain repeated operators, that use generalized excitation/de-excitation pools, and hardware-efficient ans\"{a}tze. In particular, a promising research direction is the formulation of a selected PQE using a basis of general one- and two-body operators, which could yield trial states with lower circuit depth than the current formulation. Within the greater ecosystem of quantum algorithms, PQE could be used to determine initial guess amplitudes for subsequent optimization via VQE. Additionally, similarly to VQE, PQE can be used as an alternative to adiabatic approaches to prepare initial states for quantum phase estimation. Although we only explore applications of PQE to quantum many-body simulation, the framework outlined by Eqs.~\eqref{eq:ucc1} and \eqref{eq:ucc2} can be used to solve a variety of eigenvalue problems. With appropriate modifications, for example, PQE could be employed to diagonalize covariance matrices (after quantum encoding \cite{giovannetti2008quantum, lloyd2014quantum}) for use in machine learning or data analysis. Moreover, because there is no requirement that the PQE (or SPQE) trial states have low entanglement, PQE could be used as an alternative to methods such as quantum principal component analysis \cite{lloyd2014quantum} or variational quantum state diagonalization \cite{larose2019variational}. Most importantly, PQE could have an immediate impact in speeding up quantum computations on current or near-term devices. \section*{Acknowledgements} This work was supported by the U.S. Department of Energy under Award No. DE-SC0019374 and the NSF under grant CHE-2038019. N.H.S.
was supported by a fellowship from The Molecular Sciences Software Institute under NSF grant ACI-1547580.
\section{Introduction} Blind gain and phase calibration (BGPC), the joint recovery of the unknown gains and phases in the sensing system and the unknown signal, is a bilinear inverse problem that arises in many applications: the joint estimation of albedo and lighting conditions in inverse rendering \cite{Nguyen2013}; the joint recovery of phase error and radar image in synthetic aperture radar (SAR) autofocus \cite{Morrison2009}; and auto-calibration of sensor gains and phases in array processing \cite{Paulraj1985}. There exists a long line of research regarding the solutions for each application. However, theoretical analysis of the problem and error bounds for its solutions have been established only recently \cite{Li2015,Li2015e,Ling2016,Wang2016}. In this paper, we reformulate the BGPC problem as an eigenvalue/eigenvector problem. In the subspace case, we use algorithms that find principal eigenvectors such as the power iteration algorithm (also known as the power method) \cite[Section 8.2.1]{Golub1996}, to find the concatenation of the gain and phase vector and the vectorized signal matrix in the form of the principal component of a structured matrix. In the sparsity case, the problem resembles sparse principal component analysis (sparse PCA) \cite{Moghaddam2006}. We then propose to solve the sparse eigenvector problem using truncated power iteration \cite{Yuan2013}. The main contribution of this paper is the theoretical analysis of the error bounds of power iteration and truncated power iteration for BGPC in the subspace and joint sparsity cases, respectively. When the measurement matrix is random, and the signals and the noise are adversarial, our algorithms stably recover the unknown gains and phases, and the unknown signals with high probability under near optimal sample complexities. 
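For reference, the basic power iteration that underlies the subspace-case algorithm can be sketched in a few lines of Python. The positive semidefinite test matrix below is a generic stand-in chosen for illustration, not the structured matrix constructed later in the paper:

```python
import numpy as np

def power_iteration(M, n_iter=200):
    """Plain power iteration: returns a unit-norm principal eigenvector of M.

    M is assumed symmetric positive semidefinite, so the eigenvalue of
    largest magnitude is also the largest eigenvalue.
    """
    x = np.ones(M.shape[0]) / np.sqrt(M.shape[0])  # generic starting vector
    for _ in range(n_iter):
        x = M @ x                 # power step
        x /= np.linalg.norm(x)    # renormalize
    return x

# Generic PSD test matrix with a known spectral gap (illustration only)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
M = Q @ np.diag([5.0, 2.0, 1.0, 0.5, 0.2, 0.1]) @ Q.T
x_hat = power_iteration(M)
```

The iterate converges (up to sign) to the eigenvector of the largest eigenvalue at a geometric rate set by the ratio of the two largest eigenvalues.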
Since truncated power iteration relies on a good initial estimate, we also propose a simple initialization algorithm, and prove that the output is sufficiently good under certain technical conditions. We complement the theoretical results with numerical experiments, which show that the algorithms can indeed solve BGPC in the optimal regime. We also demonstrate that the algorithms are robust against noise and an inaccurate initial estimate. Experiments with different initialization schemes show that our initialization algorithm significantly outperforms the baseline. Then we apply the power iteration algorithm to inverse rendering, and showcase its effectiveness in real-world applications. The rest of the paper is organized as follows. In the remainder of this section, we introduce the formulation of the BGPC problem and related work. We then introduce the power iteration algorithms and our main theoretical results in Sections \ref{sec:algorithm} and \ref{sec:main}, respectively. Sections \ref{sec:fun_est} and \ref{sec:proof} give some fundamental estimates regarding the structured matrix in our BGPC formulation, and the proofs for our main results. We conduct some numerical experiments in Section \ref{sec:experiment}, and conclude the paper with some discussion in Section \ref{sec:conclusion}. \subsection{Notations} \label{sec:notation} We use $A^\top$, $\overline{A}$, and $A^*$ to denote the transpose, the complex conjugate, and the conjugate transpose of a matrix $A$, respectively. The $k$-th entry of a vector $\lambda$ is denoted by $\lambda_k$. The $j$-th column, the $k$-th row (in a column vector form), and the $(k,j)$-th entry of a matrix $A$ are denoted by $a_{\cdot j}$, $a_{k \cdot}$, and $a_{k j}$, respectively. The superscript $(t)$ in a vector $\eta^{(t)}$ denotes the iteration number in an iterative algorithm.
We use $I_n$ to denote the identity matrix of size $n\times n$, and $\bm{1}_{n,m}$ and $\bm{0}_{n,m}$ to denote the matrices of all ones and all zeros of size $n\times m$, respectively. The $i$-th standard basis vector is denoted by $e_i$, whose ambient dimension is clear in the context. The $\ell_p$ norm and $\ell_0$ ``norm'' of a vector $x$ are denoted by $\norm{x}_p$ and $\norm{x}_0$, respectively. The Frobenius norm and the spectral norm of a matrix $A$ are denoted by $\norm{A}_\mathrm{F}$ and $\norm{A}$, respectively. The support of a sparse vector $x$ is denoted by $\mathrm{supp}(x)$. The vector $\mathrm{vec}(X)$ denotes the concatenation of the columns of $X=[x_{\cdot 1},x_{\cdot 2},\dots, x_{\cdot N}]$, i.e., $\mathrm{vec}(X) = [x_{\cdot 1}^\top,x_{\cdot 2}^\top,\dots,x_{\cdot N}^\top]^\top$. A diagonal matrix with the entries of vector $x$ on the diagonal is denoted by $\mathrm{diag}(x)$. The Kronecker product is denoted by $\otimes$. We use $\gtrsim$ to denote the relation greater than up to log factors. We use $[n]$ to denote the set $\{1,2,\dots,n\}$. For an index set $T$, the projection operator onto $T$ is denoted by $\Pi_T$, and the operator that restricts onto $T$ is denoted by $\Omega_T$. We use these operator notations for different spaces, and the ambient dimensions will be clarified in the context. \subsection{Problem Formulation} \label{sec:formulation} In this section, we introduce the BGPC problem with a subspace constraint or a sparsity constraint. Suppose $A\in\mathbb{C}^{n\times m}$ is the known measurement matrix, and $\lambda \in\mathbb{C}^{n}$ is the vector of unknown gains and phases, the $k$-th entry of which is $\lambda_k = |\lambda_k|e^{\sqrt{-1}\varphi_k}$. Here, $|\lambda_k|$ and $\varphi_k$ denote the gain and phase of the $k$-th sensor, respectively. 
The BGPC problem is the simultaneous recovery of $\lambda$ and the unknown signal matrix $X\in\mathbb{C}^{m\times N}$ from the following measurement: \begin{equation} \label{eq:bgpc} Y = \mathrm{diag}(\lambda) A X + W, \end{equation} where $W\in\mathbb{C}^{n\times N}$ is the measurement noise. The $(k,j)$-th entry in the measurement $y_{kj}$ has the following expression: \[ y_{kj} = \lambda_k \, a_{k \cdot}^\top \, x_{\cdot j} + w_{kj}. \] Clearly, BGPC is a bilinear inverse problem. The solution $(\lambda,X)$ suffers from scaling ambiguity, i.e., $(\lambda/\sigma,\sigma X)$ generates the same measurements as $(\lambda,X)$, and therefore cannot be distinguished from it. Despite the fact that the solution can have other ambiguity issues, in this paper, we consider the generic setting where the solution suffers only from scaling ambiguity \cite{Li2015e}.\footnote{An example of another ambiguity is a shift ambiguity when $A$ is the discrete Fourier transform matrix \cite{Li2015,Wang2016}. For a generic matrix $A$, the solution to BGPC does not suffer from shift ambiguity.} Even in this setting, the solution is not unique, unless we exploit the structure of the signals. In this paper, we solve the BGPC problem under two scenarios -- BGPC with a subspace structure, and BGPC with sparsity. 1) \textbf{Subspace case:} Suppose that the known matrix $A$ is tall ($n>m$) and has full column rank. Then the columns of $AX$ reside in the low-dimensional subspace spanned by the columns of $A$. The problem is effectively unconstrained with respect to $X$. 2) \textbf{Sparsity case:} Suppose that $A$ is a known dictionary with $m \geq n$, while the columns of $X$ are $s_0$-sparse, i.e., $\norm{x_{\cdot j}}_0 \leq s_0$ for all $j\in [N]$. A variation of this setting is that the columns of $X$ are jointly $s_0$-sparse, i.e., there are at most $s_0$ nonzero rows in $X$. 
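A minimal numerical sketch of the measurement model \eqref{eq:bgpc}, of the scaling ambiguity, and of the $\mathrm{vec}$/Kronecker notation defined above can be written as follows; the dimensions, distributions, and noise level are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 8, 3, 5  # sensors, subspace dimension, snapshots (arbitrary)

# Known measurement matrix, unknown gains/phases, unknown signals, noise
A = (rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))) / np.sqrt(2)
lam = rng.uniform(0.5, 2.0, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
X = rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N))
W = 1e-3 * (rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N)))

Y = np.diag(lam) @ A @ X + W  # the BGPC measurement model

# Scaling ambiguity: (lam/sigma, sigma*X) yields the same noiseless data
sigma = 1.7 * np.exp(0.3j)
assert np.allclose(np.diag(lam / sigma) @ A @ (sigma * X),
                   np.diag(lam) @ A @ X)

# The vec/Kronecker identity vec(M X) = (I_N kron M) vec(X), which is a
# standard tool for turning such bilinear measurements into linear ones
M = np.diag(lam) @ A
assert np.allclose((M @ X).flatten(order="F"),
                   np.kron(np.eye(N), M) @ X.flatten(order="F"))
```

Both assertions pass exactly on the noiseless parts of the model, illustrating why only the product $\mathrm{diag}(\lambda)AX$ (and not the individual factors) is identifiable without further structure.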
In this case, the subspace constraint on $AX$ no longer applies, and one must solve the problem with a sparsity (or joint sparsity) constraint. The BGPC problem arises in applications including inverse rendering, sensor array processing, multichannel blind deconvolution, and SAR autofocus. We refer the reader to our previous work \cite[Section II.C]{Li2015e} for a detailed account of applications of BGPC. For consistency, from now on, we use the convention in sensor array processing, and refer to $n$ and $N$ as the numbers of sensors and snapshots, respectively. \begin{table}% \renewcommand{\arraystretch}{1.2} \caption{Comparison of Sample Complexities with Prior Work} \label{tab:compare} \begin{tabular}{|c||c|c|c|} \hline & Subspace & Joint Sparsity & Sparsity \\ \hline \hline Unique Recovery \cite{Li2015e} & \begin{tabular}[c]{@{}c@{}}$n > m$\\ $N \geq \frac{n-1}{n-m}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$n > 2s_0$\\ $N \geq \frac{n-1}{n-2s_0}$\end{tabular} & -- \\ \hline Least Squares \cite{Ling2016} & \begin{tabular}[c]{@{}c@{}} $n\gtrsim m$ \\ $N \gtrsim 1$ \end{tabular} & -- & -- \\ \hline $\ell_1$ Minimization \cite{Wang2016} & -- & -- & \begin{tabular}[c]{@{}c@{}} $n\gtrsim s_0$ \\ $N \gtrsim n$ \end{tabular} \\ \hline This Paper & \begin{tabular}[c]{@{}c@{}} $n\gtrsim m$ \\ $N \gtrsim 1$ \end{tabular} & \begin{tabular}[c]{@{}c@{}} $n\gtrsim s_0$ \\ $N \gtrsim \sqrt{s_0}$ \end{tabular} & -- \\ \hline \end{tabular} \vspace{0.1in} {\scriptsize \textbf{Note:} $n$, $N$, $m$, and $s_0$ represent the number of sensors, the number of snapshots, the subspace dimension, and the sparsity level, respectively.} \end{table} \subsection{Our Contributions} \label{sec:contribution} We reformulate BGPC as the problem of finding the principal eigenvector of a matrix (or operator). In the subspace case, this can be solved using any eigen-solver, e.g., power iteration (Algorithm \ref{alg:pi}).
In the sparsity case, we propose to solve this problem using truncated power iteration (Algorithm \ref{alg:tpi}). Our main results can be summarized as follows: \begin{theorem} \label{thm:summary} Under certain assumptions on $A$, $\lambda$, $X$, and $W$, one can solve the BGPC problem with high probability using: 1) \textbf{Subspace case:} algorithms that find the principal eigenvector of a certain matrix, e.g., power iteration, if $n\gtrsim m$ and $N\gtrsim 1$. 2) \textbf{Joint sparsity case:} truncated power iteration with a good initialization, if $n\gtrsim s_0$ and $N\gtrsim \sqrt{s_0}$. \end{theorem} In Table \ref{tab:compare}, we compare the above results with the sample complexities for unique recovery in BGPC \cite{Li2015e}, and previous guaranteed algorithms for BGPC in the subspace and sparsity cases \cite{Ling2016,Wang2016}. In the subspace case, power iteration solves BGPC using optimal (up to log factors) numbers of sensors and snapshots. These sample complexities are comparable to those of the least squares method in \cite{Ling2016}. Moreover, we show that power iteration is empirically more robust against noise than least squares. Truncated power iteration solves BGPC with a joint sparsity structure, with an optimal (up to log factors) number of sensors, and a slightly suboptimal (within a factor of $\sqrt{s_0}$ and log factors) number of snapshots. In comparison, the $\ell_1$ minimization method for the sparsity case of BGPC uses a similar number of sensors, but a much larger number of snapshots. Numerical experiments show that truncated power iteration empirically succeeds, in both the joint sparsity case and the more general sparsity case, in the optimal regime. The success of truncated power iteration relies on a good initial estimate of $X$ and $\lambda$.
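A generic version of truncated power iteration, in the spirit of \cite{Yuan2013}, can be sketched as follows. The matrix here is a toy symmetric matrix with a planted sparse principal component, not the structured matrix arising from BGPC, and the truncation simply keeps the $s$ largest entries in magnitude after each power step:

```python
import numpy as np

def truncated_power_iteration(M, s, x0, n_iter=100):
    """Generic truncated power iteration for a sparse principal eigenvector.

    M  : symmetric matrix
    s  : number of entries kept after each truncation
    x0 : initial estimate (its quality matters, as discussed in the text)
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        y = M @ x                           # power step
        small = np.argsort(np.abs(y))[:-s]  # indices of all but the top s
        y[small] = 0.0                      # truncate to the top-s support
        x = y / np.linalg.norm(y)           # renormalize
    return x

# Toy example: planted 3-sparse principal component plus symmetric noise
rng = np.random.default_rng(3)
v = np.zeros(20)
v[[2, 7, 11]] = [0.6, -0.6, 0.5]
v /= np.linalg.norm(v)
G = rng.normal(size=(20, 20))
M = 5.0 * np.outer(v, v) + 0.05 * (G + G.T)

x_hat = truncated_power_iteration(M, s=3, x0=v + 0.2 * rng.normal(size=20))
```

With a sufficiently accurate initial estimate, the iterate locks onto the planted support and aligns with the sparse principal component; with a poor initial estimate, the truncation can lock onto the wrong support, which is precisely why the initialization algorithm matters.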
We propose a simple initialization algorithm (Algorithm \ref{alg:init}) with the following guarantee: \begin{theorem} \label{thm:summary_init} Under additional assumptions on the absolute values of the nonzero entries in $X$, our initialization algorithm produces a sufficiently good estimate of $\lambda$ and $X$ if $n\gtrsim s_0^2$. (We do not require any additional assumption on the number $N$ of snapshots.) \end{theorem} Despite the above scaling law predicted by theory, numerical experiments suggest that our initialization scheme is effective when $n\gtrsim s_0$. \subsection{Related Work} \label{sec:related} BGPC arises in many real-world scenarios, and previous solutions have mostly been tailored to specific applications such as sensor array processing \cite{Paulraj1985,Rockah1988,Weiss1990}, sensor network calibration \cite{Balzano2007,Lipor2014}, synthetic aperture radar autofocus \cite{Morrison2009}, and computational relighting \cite{Nguyen2013}. A special case of BGPC is multichannel blind deconvolution (MBD) with a circular convolution model. Most previous works on MBD consider linear convolution with a finite impulse response (FIR) filter model (see \cite{Tong1991,Moulines1995,Harikumar1998,Harikumar1999}, and a recent stabilized method \cite{Lee2017a,Lee2016}). In comparison, the BGPC problem discussed in this paper involves a more general subspace or sparsity model. The idea of solving BGPC by reformulating it into a linear inverse problem, which is a key idea in this paper, has been proposed by many prior works \cite{Balzano2007,Morrison2009,Nguyen2013}. In particular, Bilen et al. \cite{Bilen2014} provided a solution to BGPC with high-dimensional but sparse signals using $\ell_1$ minimization. However, such methods have not been carefully analyzed until recently. Ling and Strohmer \cite{Ling2016} derived an error bound for the least squares solution in the subspace case of BGPC.
In this paper, the power iteration method has sample complexities comparable to those of the least squares method \cite{Ling2016}, and is empirically more robust to noise than the latter. Wang and Chi \cite{Wang2016} gave a theoretical guarantee for $\ell_1$ minimization that solves BGPC in the sparsity case, where they assumed that $A$ is the discrete Fourier transform (DFT) matrix and $X$ is random following a Bernoulli-Subgaussian model. In this paper, we give a guarantee for truncated power iteration under the assumption that $A$ is a complex Gaussian random matrix, and $X$ is \emph{jointly} sparse, well-conditioned, and deterministic. In this sense, we consider an adversarial scenario for the signal $X$. Our sample complexity results require a near optimal number $n$ of sensors, and a much smaller number $N$ of snapshots. Moreover, truncated power iteration is more robust against noise and inaccurate initial estimates of the phases. Very recently, Eldar et al. \cite{Eldar2017} proposed new methods for BGPC with signals whose sparse components may lie off the grid. Similar to earlier work on blind calibration of sensor arrays \cite{Paulraj1985}, these methods rely on empirical covariance matrices of the measurements and therefore need a relatively large number of snapshots. To position BGPC in a broader context, it is a special bilinear inverse problem \cite{Li2015}, which in turn is a special case of low-rank matrix recovery from incomplete measurements \cite{Davenport2016,Li2016,Kech2016,Lee2013}. A resurgence of interest in bilinear inverse problems was pioneered by recent studies of single-channel blind deconvolution of signals with subspace or sparsity structures, where both the signal and the filter are structured \cite{Ahmed2014,Ling2015,Chi2015,Lee2015a,XLi2016}.
Another related bilinear inverse problem is blind calibration via repeated measurements from multiple sensing operators \cite{Bahmani2015a,Tang2014,Cambareri2016,Cambareri2017,Ahmed2016a,Cosse2017}. Since blind calibration with repeated measurements is in principle an easier problem than BGPC \cite{Ling2016}, we believe our methods for BGPC and our theoretical analysis can be extended to this scenario. Also related is the phase retrieval problem \cite{Fienup1982}, where uncertainty exists only in the phases (and not the gains) of the sensing system. An active line of work solves phase retrieval with guaranteed algorithms (see \cite{Candes2013a,Netrapalli2013,Candes2015,Cai2016,Bahmani2016,Goldstein2016} and \cite{Shechtman2015} for a recent review). The error bounds of power iteration and truncated power iteration have been analyzed in general settings, e.g., in \cite[Section 8.2.1]{Golub1996} and \cite{Yuan2013}. These previous results hinge on spectral properties of matrices such as gaps between eigenvalues, which do not translate directly to sample complexity requirements. This paper undertakes analysis specific to BGPC. We relate spectral properties in BGPC to some technical conditions on $\lambda$, $A$, $X$, and $W$, and derive recovery error bounds under near-optimal sample complexities. We also adapt the analysis of sparse PCA \cite{Yuan2013} to accommodate a structured sparsity constraint in BGPC. BGPC and our proposed methods are non-convex in nature. In particular, our truncated power iteration algorithm can be interpreted as projected gradient descent for a non-convex optimization problem. There have been rapid developments in guaranteed non-convex methods \cite{Sun2015} in a variety of domains such as matrix completion \cite{Keshavan2010,Jain2013,RSun2016}, dictionary learning \cite{Sun2017,Sun2017a}, blind deconvolution \cite{Lee2015a,XLi2016}, and phase retrieval \cite{Candes2015,Netrapalli2013,Sun2016}.
It is a common theme that carefully crafted non-convex methods have better theoretical guarantees in terms of sample complexity than their convex counterparts, and often have faster implementations and better empirical performance. This paper provides a new example of the superiority of non-convex methods. \section{Power Iteration Algorithms for BGPC} \label{sec:algorithm} Next, we describe the algorithms we use to solve BGPC. In Section \ref{sec:linear}, we introduce a simple trick that turns the bilinear inverse problem in BGPC into a linear inverse problem. In Sections \ref{sec:pi} and \ref{sec:tpi}, we introduce the power iteration algorithm we use to solve BGPC with a subspace structure, and the truncated (or sparse) power iteration algorithm we use to solve BGPC with sparsity, respectively. \subsection{From Bilinearity to Linearity}\label{sec:linear} We use a simple trick to turn BGPC into a linear inverse problem \cite{Balzano2007}. Without loss of generality, assume that $\lambda_k\neq 0$ for $k\in[n]$. Indeed, if any sensor has zero gain, then the corresponding row in $Y$ is all zero or contains only noise, and we can simply remove the corresponding row in \eqref{eq:bgpc}. Let $\gamma$ denote the entrywise inverse of $\lambda$, i.e., $\gamma_k = 1/\lambda_k$ for $k\in[n]$. We have \begin{equation} \label{eq:bgpc_linear} \mathrm{diag}(\gamma) Y_\mathrm{s} = AX, \end{equation} where $Y_\mathrm{s} = \mathrm{diag}(\lambda) AX$ is the noiseless measurement. Equation \eqref{eq:bgpc_linear} is linear in all the entries of $\gamma$ and $X$. The bilinear inverse problem in $(\lambda, X)$ now becomes a linear inverse problem in $(\gamma, X)$. In practice, since only the noisy measurement $Y$ is available, one can solve $\mathrm{diag}(\gamma) Y \approx AX$.
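As a sanity check, the identity $\mathrm{diag}(\gamma) Y_\mathrm{s} = AX$ can be verified numerically. The following is a minimal NumPy sketch; the dimensions and distributions are purely illustrative, not those assumed in our theorems:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 20, 5, 8  # sensors, subspace dimension, snapshots (illustrative)

# Sensing matrix, subspace signals, and unknown gains/phases.
A = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2 * n)
X = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))
lam = rng.uniform(0.9, 1.1, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))

Y_s = np.diag(lam) @ A @ X   # noiseless measurements, bilinear in (lam, X)
gamma = 1.0 / lam            # entrywise inverse of the gains

# diag(gamma) Y_s = A X is linear in the entries of (gamma, X):
residual = np.linalg.norm(np.diag(gamma) @ Y_s - A @ X)
print(residual)  # ~0 up to floating-point error
```

The residual vanishes up to rounding, confirming that the bilinear constraint becomes linear once $\lambda$ is replaced by its entrywise inverse $\gamma$.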
This technique was widely used to solve BGPC with a subspace structure, in applications such as sensor network calibration \cite{Balzano2007}, synthetic aperture radar autofocus \cite{Morrison2009}, and computational relighting \cite{Nguyen2013}. Recently, Ling and Strohmer \cite{Ling2016} analyzed the least squares solution to \eqref{eq:bgpc_linear}. Wang and Chi \cite{Wang2016} considered a special case where $A$ is the DFT matrix, and analyzed the solution of a sparse $X$ by minimizing the $\ell_1$ norm of $A^{-1}\mathrm{diag}(\gamma)Y$. We use the same trick in our algorithms. Define \begin{equation} \label{eq:D} D \coloneqq \begin{bmatrix} I_N \otimes a_{1\cdot}^\top \\ \vdots \\ I_N \otimes a_{n\cdot}^\top \end{bmatrix}, \end{equation} \begin{equation} \label{eq:E} E \coloneqq \begin{bmatrix} y_{1\cdot} & & \\ & \ddots & \\ & & y_{n\cdot} \end{bmatrix}. \end{equation} We can decompose $E$ into $E = E_\mathrm{s} + E_\mathrm{n}$, where \[ E_\mathrm{s} \coloneqq \begin{bmatrix} \lambda_1 X^\top a_{1\cdot} & & \\ & \ddots & \\ & & \lambda_n X^\top a_{n\cdot} \end{bmatrix}, \] \[ E_\mathrm{n} \coloneqq \begin{bmatrix} w_{1\cdot} & & \\ & \ddots & \\ & & w_{n\cdot} \end{bmatrix}. \] Define also \begin{equation} \label{eq:B} B \coloneqq \begin{bmatrix} D^*D & \alpha D^*E \\ \alpha E^*D & \alpha^2 E^*E \end{bmatrix}, \end{equation} \[ B_\mathrm{s} \coloneqq \begin{bmatrix} D^*D & \alpha D^*E_\mathrm{s} \\ \alpha E_\mathrm{s}^*D & \alpha^2 E_\mathrm{s}^*E_\mathrm{s} \end{bmatrix}, \] where $\alpha$ is a nonzero constant specified later. Clearly, \eqref{eq:bgpc_linear} can be rewritten as \[ D x - E_\mathrm{s} \gamma = 0, \] where $x=\mathrm{vec}(X)$. Equivalently, $\eta = [x^\top, -\gamma^\top/\alpha]^\top$ is a null vector of $B_\mathrm{s}$. When certain sufficient conditions are satisfied, $\eta$ is the unique null vector of $B_\mathrm{s}$. 
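To make the null-vector characterization concrete, the sketch below (NumPy, illustrative sizes) assembles $D$ and $E_\mathrm{s}$ explicitly from their definitions, with $\mathrm{vec}$ stacking columns, and checks that $\eta = [x^\top, -\gamma^\top/\alpha]^\top$ annihilates $[D, \alpha E_\mathrm{s}]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 12, 3, 4          # illustrative sizes
alpha = np.sqrt(n)

A = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2 * n)
X = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))
lam = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
Y_s = np.diag(lam) @ A @ X

# D stacks the blocks I_N (x) a_k^T; its k-th block maps vec(X) to X^T a_k.
D = np.vstack([np.kron(np.eye(N), A[k]) for k in range(n)])       # (nN) x (Nm)
# E_s is block diagonal with the length-N blocks lambda_k X^T a_k = Y_s[k].
E_s = np.zeros((n * N, n), dtype=complex)
for k in range(n):
    E_s[k * N:(k + 1) * N, k] = Y_s[k]

x = X.flatten(order='F')                          # vec(X), column-stacked
eta = np.concatenate([x, -(1.0 / lam) / alpha])   # [x; -gamma/alpha]

# [D, alpha*E_s] eta = D x - E_s gamma = 0, so eta is a null vector of B_s.
M = np.hstack([D, alpha * E_s])
print(np.linalg.norm(M @ eta))  # ~0
```

Since $B_\mathrm{s} = M^* M$ with $M = [D, \alpha E_\mathrm{s}]$, the vanishing product $M\eta$ shows that $\eta$ lies in the null space of $B_\mathrm{s}$.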
For example, if $\lambda$, $A$, and $X$ are in general positions in $\mathbb{C}^n$, $\mathbb{C}^{n\times m}$, and $\mathbb{C}^{m\times N}$, respectively, then $N \geq \frac{n-1}{n-m}$ snapshots are sufficient to guarantee uniqueness of the solution to BGPC in the subspace case. We refer readers to our work on the identifiability in BGPC for more details \cite{Li2015,Li2015e}. Since only the noisy matrix $B$ is accessible in practice, one can instead find the minor eigenvector, i.e., the eigenvector corresponding to the smallest eigenvalue of $B$. The rest of this section focuses on algorithms that find such an eigenvector of $B$, with no constraint (in the subspace case), or with a sparsity constraint (in the sparsity case). \subsection{Power Iteration for BGPC with a Subspace Structure}\label{sec:pi} In the subspace case ($n>m$), we solve for the minor eigenvector of the positive definite matrix $B$. In Section \ref{sec:main}, we derive an upper bound on the error between this eigenvector and the true solution $\eta$. The minor eigenvector of $B$ can be computed by a variety of methods. Here, we propose an algorithm that remains computationally efficient for large-scale problems. By eigenvalue decomposition, the null vector of $B$ is identical to the principal eigenvector of \begin{equation} \label{eq:G} G = \beta I_{mN+n}-B, \end{equation} for a large enough constant $\beta$. This eigenvector can be computed using the power iteration algorithm (see Algorithm \ref{alg:pi}). The size of $G$ is $(Nm+n)\times (Nm+n)$. An advantage of Algorithm \ref{alg:pi} over an eigen-solver that decomposes $G$ is that one does not need to explicitly compute the entries of $G$ to iteratively apply it to a vector. Furthermore, by the structure of $D$ and $E$, the per-iteration time complexity of applying the operator $G$ to a vector is only $O(mnN)$, rather than $O((Nm+n)^2)$.
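To illustrate the implicit application of $G$, here is a minimal NumPy sketch (illustrative sizes, noiseless data, hypothetical helper name \texttt{apply\_G}): the product $[D, \alpha E]\eta$ is formed as an $n \times N$ array in $O(mnN)$ flops, then hit with $D^*$ and $\alpha E^*$, and the result is checked against an explicitly assembled $G$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 15, 4, 6          # illustrative sizes
alpha, beta = np.sqrt(n), 3.0

A = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2 * n)
X = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))
lam = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
Y = np.diag(lam) @ A @ X    # noiseless measurements for illustration

def apply_G(eta):
    """Apply G = beta*I - B to eta in O(mnN) flops, without forming G."""
    x, g = eta[:N * m], eta[N * m:]
    Xm = x.reshape(m, N, order='F')            # un-vectorize the x block
    Z = A @ Xm + alpha * g[:, None] * Y        # [D, alpha*E] eta, as an n x N array
    Bx = (A.conj().T @ Z).flatten(order='F')   # D^* applied to Z
    Bg = alpha * (Y.conj() * Z).sum(axis=1)    # alpha * E^* applied to Z
    return beta * eta - np.concatenate([Bx, Bg])

# Consistency check against the explicit matrices (small sizes only).
D = np.vstack([np.kron(np.eye(N), A[k]) for k in range(n)])
E = np.zeros((n * N, n), dtype=complex)
for k in range(n):
    E[k * N:(k + 1) * N, k] = Y[k]
M = np.hstack([D, alpha * E])
G = beta * np.eye(N * m + n) - M.conj().T @ M

v = rng.standard_normal(N * m + n) + 1j * rng.standard_normal(N * m + n)
print(np.linalg.norm(apply_G(v) - G @ v))  # ~0
```

The implicit operator touches only $A$, $Y$, and $(m+N)$-sized reshapes, which is what makes the power iteration scalable when $Nm+n$ is large.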
This can be further reduced if $A$ and $A^*$ are linear operators with implementations faster than $O(mn)$. The rule of thumb for selecting parameter $\alpha$ is that the $\ell_2$ norms of the columns of $D$ be close to those of $\alpha E$ so that $G$ in \eqref{eq:G} exhibits good spectral properties for power iterations. A safe choice for $\beta$ is $\norm{B}$, which may be conservatively large in some cases, but works well in practice. In Section \ref{sec:main}, we discuss our choice of parameters $\alpha,\beta$ under certain normalization assumptions (see Remark \ref{rem:parameter}). Algorithm \ref{alg:pi} converges to the principal eigenvector of $G$, as long as the initial estimate $\eta^{(0)}$ is not orthogonal to that eigenvector. This insensitivity to initialization is a privilege not shared by the sparsity case (see Section \ref{sec:tpi}). \begin{algorithm} \DontPrintSemicolon \SetKwInput{KwPara}{Parameters} \caption{Power Iteration for BGPC} \label{alg:pi} \KwIn{$A\in\mathbb{C}^{n\times m}$, $Y\in\mathbb{C}^{n\times N}$, initial estimate $\eta^{(0)}\in\mathbb{C}^{Nm+n}$} \KwOut{$\eta^{(t)} \in \mathbb{C}^{Nm+n}$} \KwPara{$\alpha$, $\beta$} Compute operator $G: \mathbb{C}^{Nm+n} \rightarrow \mathbb{C}^{Nm+n}$ by \eqref{eq:D}, \eqref{eq:E}, \eqref{eq:B}, \eqref{eq:G} \; $t \leftarrow 1$ \; \Repeat{convergence criterion is met}{ Compute $\eta^{(t)} = G \eta^{(t-1)}/\norm{G \eta^{(t-1)}}_2$ \; $t \leftarrow t+1$ \; } \end{algorithm} \subsection{Truncated Power Iteration for BGPC with Sparsity}\label{sec:tpi} When $2\leq n\leq m$, $[D,\alpha E]\in\mathbb{C}^{Nn\times (Nm+n)}$ is a fat matrix, and the null space of $B$ has dimension at least $2$. Therefore, there exist at least two linearly independent eigenvectors corresponding to the largest eigenvalue of $G$. To overcome the ill-posedness, one can leverage the sparsity structure in $X$ to make the solution to the eigenvector problem unique. 
Let $\Pi_s(x)$ denote the projection of a vector $x$ onto the set of $s$-sparse vectors. It is computed by setting to zero all but the $s$ entries of $x$ of the largest absolute values. Let $\Pi'_s(X)$ denote the projection of a matrix $X$ onto the set of matrices whose columns are jointly $s$-sparse. This projection is computed by setting to zero all but the $s$ rows of $X$ of the largest $\ell_2$ norms. We define two projection operators on $\eta = [x^\top, -\gamma^\top/\alpha]^\top$ that will be used repeatedly in the rest of this paper: \begin{align*} & \widetilde{\Pi}_s(\eta) \coloneqq [\Pi_s(x_{\cdot 1})^\top,\Pi_s(x_{\cdot 2})^\top,\dots,\Pi_s(x_{\cdot N})^\top,-\gamma^\top/\alpha]^\top ,\\ & \widetilde{\Pi}'_s(\eta) \coloneqq [\mathrm{vec}\bigl(\Pi'_s(X) \bigr)^\top,-\gamma^\top/\alpha]^\top . \end{align*} For the sparsity case of BGPC, we adapt the eigenvector problem in Section \ref{sec:pi} by adding a sparsity constraint: \begin{align} \label{eq:sparse_pca} \begin{split} \max_{\eta} \quad & \eta^* G \eta\\ \mathrm{s.t.} \quad & \norm{\eta}_2 = 1,\\ & \widetilde{\Pi}_{s_0}(\eta) = \eta. \end{split} \end{align} This nonconvex optimization is very similar to the sparse PCA problem. The only difference lies in the structure of the sparsity constraint. In sparse PCA, the principal component is $s_0$-sparse. In \eqref{eq:sparse_pca}, the vector $\eta$ consists of $s_0$-sparse vectors $x_{\cdot 1},x_{\cdot 2},\dots,x_{\cdot N}$, and a dense vector $-\gamma/\alpha$. To solve \eqref{eq:sparse_pca}, we adopt a sparse PCA algorithm called truncated power iteration \cite{Yuan2013}, and revise it to adapt to the sparsity structure of BGPC (see Algorithm \ref{alg:tpi}). One can choose parameters $\alpha$ and $\beta$ using the same rules as in Section \ref{sec:pi}. 
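The projections $\Pi_s$, $\Pi'_s$, and $\widetilde{\Pi}_s$ are simple to implement; the following NumPy sketch (hypothetical function names, column-stacked $\mathrm{vec}$) mirrors their definitions:

```python
import numpy as np

def proj_sparse(x, s):
    """Pi_s: keep the s entries of x with largest absolute values, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def proj_joint_sparse(Xm, s):
    """Pi'_s: keep the s rows of X with largest l2 norms, zero the rest."""
    out = np.zeros_like(Xm)
    keep = np.argsort(np.linalg.norm(Xm, axis=1))[-s:]
    out[keep] = Xm[keep]
    return out

def proj_eta(eta, s, m, N):
    """tilde-Pi_s: sparsify each column block of the x part; gamma block stays dense."""
    x, g = eta[:N * m].copy(), eta[N * m:]
    for j in range(N):                  # column j of X occupies entries j*m..(j+1)*m
        x[j * m:(j + 1) * m] = proj_sparse(x[j * m:(j + 1) * m], s)
    return np.concatenate([x, g])

print(proj_sparse(np.array([0.1, -3.0, 0.5, 2.0]), 2))        # [ 0. -3.  0.  2.]
print(proj_joint_sparse(np.array([[1.0, 1.0],
                                  [0.1, 0.0],
                                  [2.0, -2.0]]), 2))          # middle row zeroed
```

Replacing the loop over columns by a single row-wise selection on the un-vectorized $X$ gives $\widetilde{\Pi}'_s$ for the joint sparsity case.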
Note that we use a sparsity level $s_1\geq s_0$ in this algorithm, for two reasons: (a) in practice, it is easier to obtain an upper bound on the sparsity level instead of the exact number of nonzero entries in the signal; and (b) the ratio $s_0/s_1$ is an important constant in the main results, controlling the trade-off between the number of measurements and the rate of convergence. For the joint sparsity case, we use essentially the same algorithm, with $\widetilde{\Pi}_{s_1}$ replaced by $\widetilde{\Pi}'_{s_1}$. Since \eqref{eq:sparse_pca} is a nonconvex optimization problem, a good initialization $\eta^{(0)}$ is crucial to the success of Algorithm \ref{alg:tpi}. Algorithm \ref{alg:init} outlines one such initialization. We denote by $\Pi_{T_x}$ the projection onto the support set $T_x$, which sets to zero all rows of $D^*E$ but the $s_1$ rows of the largest $\ell_2$ norms in each block. (The $j$-th block of $D^*E$ consists of $m$ contiguous rows indexed by $\{(j-1)m+\ell\}_{\ell \in [m]}$.) Then the normalized left and right singular vectors $u$ and $v$ of $\Pi_{T_x} D^*E$ are computed as initial estimates for $x$ and $\lambda$. We use $1./v$ to denote the entrywise inverse of $v$ except for zero entries, which are kept zero. In Section \ref{sec:main}, we further comment on how to choose a proper initial estimate $\eta^{(0)}$ (see Remark \ref{rem:initialize}). 
\begin{algorithm} \DontPrintSemicolon \SetKwInput{KwPara}{Parameters} \caption{Truncated Power Iteration for BGPC with Sparsity} \label{alg:tpi} \KwIn{$A\in\mathbb{C}^{n\times m}$, $Y\in\mathbb{C}^{n\times N}$, initial estimate $\eta^{(0)}\in\mathbb{C}^{Nm+n}$} \KwOut{$\eta^{(t)} \in\mathbb{C}^{Nm+n}$} \KwPara{$\alpha$, $\beta$, $s_1$} Compute operator $G: \mathbb{C}^{Nm+n}\rightarrow \mathbb{C}^{Nm+n}$ by \eqref{eq:D}, \eqref{eq:E}, \eqref{eq:B}, \eqref{eq:G} \; $t \leftarrow 1$ \; \Repeat{convergence criterion is met}{ Compute $\tilde{\eta}^{(t)} = G \eta^{(t-1)}/\norm{G \eta^{(t-1)}}_2$ \; Compute $\eta^{(t)} = \widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})/\norm{\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})}_2$ \; $t \leftarrow t+1$ \; } \end{algorithm} \begin{algorithm} \DontPrintSemicolon \SetKwInput{KwPara}{Parameters} \caption{Initialization for Truncated Power Iteration} \label{alg:init} \KwIn{$A\in\mathbb{C}^{n\times m}$, $Y\in\mathbb{C}^{n\times N}$} \KwOut{initial estimate $\eta^{(0)}\in\mathbb{C}^{Nm+n}$} \KwPara{$s_1$} Compute matrix $D^*E\in\mathbb{C}^{Nm \times n}$ by \eqref{eq:D}, \eqref{eq:E}\; $T_x \leftarrow \emptyset$ \; \For{$j\in [N]$}{ Compute the row norms $\norm{d_{\cdot((j-1)m+\ell)}^*E}_2$ for $\ell\in [m]$\; Find subset $T_j\subset [m]$ ($|T_j|=s_1$) s.t. for $\ell \in T_j$ and $\ell' \in [m]\backslash T_j$: \[ \norm{d_{\cdot((j-1)m+\ell)}^*E}_2 \geq \norm{d_{\cdot((j-1)m+\ell')}^*E}_2 \] \vspace{-0.1in} \; Merge support $T_x \leftarrow T_x\bigcup \left(T_j + \{(j-1)m\} \right)$ \; } Compute the principal left and right singular vectors $u$, $v$ of $\Pi_{T_x} D^*E$\; $\eta^{(0)} \leftarrow [u^\top, -(1./v^\top)/n]^\top$\; $\eta^{(0)} \leftarrow \eta^{(0)} / \norm{\eta^{(0)}}_2$\; \end{algorithm} \subsection{Alternative Interpretation as Projected Gradient Descent} Algorithms \ref{alg:pi} and \ref{alg:tpi} can be interpreted as gradient descent and projected gradient descent, respectively. 
Next, we explain such equivalence using the sparsity case as an example. Recall that BGPC is linearized as $\begin{bmatrix}D & \alpha E\end{bmatrix}\eta = 0$. Relaxing the sparsity level from $s_0$ to $s_1$, the optimization in \eqref{eq:sparse_pca} is equivalent to: \begin{align*} \min_{\eta} \quad & \frac{1}{2}\left\| \begin{bmatrix}D & \alpha E\end{bmatrix} \eta \right\|_2^2\\ \mathrm{s.t.} \quad & \norm{\eta}_2 = 1,\\ & \widetilde{\Pi}_{s_1}(\eta) = \eta. \end{align*} The gradient of the objective function at $\eta^{(t-1)}$ is \[ \begin{bmatrix} D^*\\ \alpha E^* \end{bmatrix} \begin{bmatrix} D & \alpha E \end{bmatrix} \eta^{(t-1)} = B \eta^{(t-1)}. \] Each iteration of projected gradient descent consists of two steps: \noindent(i) \textbf{Gradient descent} with a step size of $1/\beta$: \begin{align*} \tilde{\eta}^{(t)} = \eta^{(t-1)} - \frac{1}{\beta} B\, \eta^{(t-1)} = \frac{1}{\beta} G \, \eta^{(t-1)}. \end{align*} \noindent(ii) \textbf{Projection} onto the constraint set, i.e., the intersection of a cone ($\widetilde{\Pi}_{s_1}(\eta) = \eta$) and a sphere ($\norm{\eta}_2 = 1$): \[ \eta^{(t)} = \widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})/\norm{\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})}_2. \] Clearly, the two steps are identical to those in each truncated power iteration except for a different scaling in Step (i), which, due to the normalization in Step (ii), is insignificant. \section{Main Results} \label{sec:main} In this section, we give theoretical guarantees for Algorithms \ref{alg:pi} and \ref{alg:tpi} in the subspace case and in the joint sparsity case, respectively. We also give a guarantee for the initialization by Algorithm \ref{alg:init}. \subsection{Main Assumptions} \label{sec:assumption} We start by stating the assumptions on $A$, $\lambda$, $X$ and $W$, which we use throughout this section. \begin{assumption} \label{ass:A} $A$ is a complex Gaussian random matrix, whose entries are i.i.d. following $\mathcal{CN}(0, \frac{1}{n})$. 
Equivalently, the vectors $\{a_{k\cdot}\}_{k=1}^{n}$ are i.i.d. following $\mathcal{CN}(\bm{0}_{m,1}, \frac{1}{n}I_m)$. \end{assumption} \begin{assumption} \label{ass:lambda} The vector $\lambda$ has ``flat'' gains in the sense that $1-\delta \leq |\lambda_k|^2 \leq 1+\delta$ for some $\delta \in (0,1)$. \end{assumption} \begin{assumption} \label{ass:X} The matrix $X\in\mathbb{C}^{m\times N}$ is normalized and has good conditioning, i.e., $\norm{X}_\mathrm{F} = 1$, and for some $\theta \in (0,1)$ we have: \begin{itemize} \item \textbf{Subspace case:} \[ \min\{\norm{NX^*X-I_N}, \norm{mXX^*-I_m}\} \leq \theta, \] \item \textbf{Joint sparsity case:} \[ \min\{\norm{NX^*X-I_N}, \norm{s_0\Omega_{T_0}XX^*\Omega_{T_0}^*-I_{s_0}}\} \leq \theta, \] \end{itemize} where $\Omega_T$ denotes the operator that restricts a matrix to the row support $T$, and $T_0 \coloneqq \{i\in[m] | \norm{e_i^\top X}_2 > 0 \}$ ($|T_0|=s_0$) is the row support of $X$. \end{assumption} Assumptions \ref{ass:A} -- \ref{ass:X} can be relaxed in practice. \begin{itemize} \item The complex Gaussian distribution in Assumption \ref{ass:A} can be relaxed to $\mathcal{CN}(0,\sigma_A^2)$ for any $\sigma_A>0$. We choose the particular scaling $\sigma_A^2 = 1/n$, because then $A$ satisfies the restricted isometry property (RIP) \cite{Candes2005}, i.e., $(1-\delta_s)\norm{x}_2^2\leq \norm{Ax}_2^2 \leq (1+\delta_s)\norm{x}_2^2$ for some $\delta_s\in(0,1)$, when $n$ is large compared to the number $s$ of nonzero entries in $x$. \item The gains can center around any $\sigma >0$, i.e., $\sigma(1-\delta) \leq |\lambda_k|^2 \leq \sigma(1+\delta)$. Due to bilinearity, we may assume that $\lambda_k$'s center around $1$ without loss of generality by solving for $(\lambda/\sigma, \sigma X)$. \item The Frobenius norm $\norm{X}_\mathrm{F}$ of matrix $X$ can be any positive number. If $\norm{X}_\mathrm{F}$ is known, one can scale $X$ to have unit Frobenius norm before solving BGPC. 
In practice, the norm of $X$ is generally unknown. However, due to Assumptions \ref{ass:A} (RIP) and \ref{ass:lambda} (``flat'' gains), we have \begin{align*} \sqrt{(1-\delta_s)(1-\delta)}\leq \frac{\norm{\mathrm{diag}(\lambda)AX}_\mathrm{F}}{\norm{X}_\mathrm{F}} \\ \leq \sqrt{(1+\delta_s)(1+\delta)}. \end{align*} Hence $\norm{Y}_\mathrm{F}$ is a good surrogate for $\norm{X}_\mathrm{F}$ in noiseless or low noise settings, and one can scale $X$ by $1/\norm{Y}_\mathrm{F}$ to achieve the desired scaling. The slight deviation of $\norm{X}_\mathrm{F}/\norm{Y}_\mathrm{F}$ from $1$ does \emph{not} have any significant impact on our theoretical analysis. Therefore, we assume $\norm{X}_\mathrm{F} = 1$ to simplify the constants in our derivation. \item The conditioning of $X$ can also be relaxed. When $N$ is large, one can choose a subset of $N'<N$ columns in $Y$, such that the matrix formed from the corresponding columns of $X$ has good conditioning. When noise amplification is not of concern (noiseless or low noise settings), one can choose a preconditioning matrix $H\in\mathbb{C}^{N\times N}$ such that $X' = XH$ is well conditioned, and then solve BGPC with $Y' = YH$. \end{itemize} In summary, we can manipulate the BGPC problem and make it approximately satisfy our assumptions. For example, \eqref{eq:bgpc} can be rewritten as: \begin{align*} \frac{1}{\norm{YH}_\mathrm{F}} YH = &~ \mathrm{diag}\Bigl(\frac{\lambda}{\sigma}\Bigr) \Bigl(\frac{1}{\sqrt{n}\sigma_A}A \Bigr) \Bigl(\frac{\sqrt{n}\sigma\sigma_A}{\norm{YH}_\mathrm{F}}XH \Bigr) \\ & + \frac{1}{\norm{YH}_\mathrm{F}} WH. \end{align*} We can run Algorithms \ref{alg:pi} and \ref{alg:tpi} with input $\frac{1}{\sqrt{n}\sigma_A}A$ and $\frac{1}{\norm{YH}_\mathrm{F}} YH$, and solve for $\frac{\lambda}{\sigma}$ and $\frac{\sqrt{n}\sigma\sigma_A}{\norm{YH}_\mathrm{F}}XH$. The above manipulations do not have any significant impact on the solution, or on our theoretical analysis.
However, by making these assumptions, we eliminate some tedious and unnecessary discussions. We also need an assumption on the noise level. \begin{assumption} \label{ass:W} The noise term $W$ satisfies \begin{itemize} \item \textbf{Subspace case:} $\max_{k\in [n],j\in[N]} |w_{kj}| \leq \frac{C_W}{\sqrt{nN}}$ \item \textbf{Joint sparsity case:} $\max_{k\in [n],j\in[N]} |w_{kj}| \leq \frac{C_W}{\sqrt{n}N^2}$ \end{itemize} for an absolute constant $C_W > 0$. \end{assumption} In the subspace case, the assumption on the noise level is very mild. Because under Assumptions \ref{ass:A} -- \ref{ass:X}, $\norm{\mathrm{diag}(\lambda)AX}_\mathrm{F} \leq \sqrt{(1+\delta_s)(1+\delta)}$, the noise term $W$, which satisfies $\norm{W}_\mathrm{F} \leq C_W$, can be on the same order in terms of Frobenius norm as the clean signal $\mathrm{diag}(\lambda)AX$. Finally, the following assumption is required for a theoretical guarantee of the initialization. \begin{assumption} \label{ass:X_flat} For all $j\in [N]$, there exists $T_j'\subset \mathrm{supp}(x_{\cdot j}) \subset[m]$, such that for all $\ell \in T_j'$, \[ \frac{|x_{\ell j}|^2}{\norm{x_{\cdot j}}_2^2} \geq \frac{\omega}{s_0}, \] for some absolute constant $\omega$, and \[ \frac{\sum_{\ell'\in [m]\backslash T_j'}|x_{\ell' j}|^2}{\norm{x_{\cdot j}}_2^2} \leq \delta_X, \] for some small absolute constant $\delta_X \in (0,1)$. \end{assumption} Assumption \ref{ass:X_flat} says that the support of $x_{\cdot j}$ can be partitioned into two subsets. The absolute values of the entries in the first subset $T_j'$ are sufficiently large. Moreover, the total energy (sum of squares of the entries) in the second subset is small compared to the squared norm of $x_{\cdot j}$. 
For example, the assumption is satisfied in the following special case: $T_j'=\mathrm{supp}(x_{\cdot j})$ (therefore $x_{\ell' j} = 0$ for $\ell'\in [m]\backslash T_j'$), and the absolute values of the nonzero entries are all comparable, e.g., $x_{\ell j} = \pm \frac{\norm{x_{\cdot j}}}{\sqrt{s_0}}$. Before introducing our main results, we disclose the choice of parameters $\alpha$ and $\beta$ for our theoretical analysis of Algorithms \ref{alg:pi} and \ref{alg:tpi}. \begin{remark} \label{rem:parameter} When Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied, we choose $\alpha = \sqrt{n}$ and $\beta = 3/2$. \end{remark} \subsection{A Perturbation Bound for the Eigenvector Problem} Next, we introduce a key result, a perturbation bound for the eigenvector problem, which is used to derive error bounds for power iteration algorithms. Let $\{T_j\}_{j=1}^N$ denote subsets of $[m]$, such that $|T_j|= s$ and $\mathrm{supp}(x_{\cdot j}) \subset T_j$. We define $T_x \subset [Nm]$ and $T_\eta \subset [Nm + n]$ as follows: \begin{align} \label{eq:Tx} & T_x \coloneqq \bigcup_{j\in[N]} \bigl( T_j + \{(j-1)m\} \bigr), \\ \label{eq:Teta} & T_\eta \coloneqq T_x \bigcup \bigl( [n] + \{Nm\} \bigr). \end{align} Recall that $\Omega_{T}$ restricts a vector to the support $T$, and hence $\Omega_{T}^*\Omega_{T}$ is the projection operator onto the support $T$. Clearly, we have $x = \Omega_{T_x}^*\Omega_{T_x} x$, and $\eta = \Omega_{T_\eta}^*\Omega_{T_\eta}\eta$. In the subspace case discussed in Theorem \ref{thm:perturbation}, we have $s = m$, $T_j = [m]$, $T_x = [Nm]$, and $T_\eta = [Nm+n]$. In the \emph{joint} sparsity case, we have $T_1 = T_2 =\dots = T_N$. We set $|T_j| = s = s_0 + 2s_1$, which we justify later in the analysis of truncated power iteration. Let \[ \dot{\eta} \coloneqq \frac{\eta}{\norm{\eta}_2} \] denote the normalized version of $\eta$, which is the eigenvector of $B_\mathrm{s}$ and $\mathbb{E} B_\mathrm{s}$ corresponding to eigenvalue $0$. 
Let $\hat{\eta}$ denote the principal eigenvector of $G$. In the joint sparsity case, let $\hat{\eta}_{T_\eta}$ denote the principal eigenvector of $\Omega_{T_\eta} G \Omega_{T_\eta}^*$, where $T = T_1 = \dots = T_N$, $|T|=s$, and the support of $\eta$ is a subset of $T_\eta$ defined in \eqref{eq:Teta}. In Algorithms \ref{alg:pi} and \ref{alg:tpi} and in our analysis, vectors $\dot{\eta}$, $\hat{\eta}$, and $\eta^{(t)}$ are normalized to unit norm. However, multiplication by a scalar of unit modulus is a remaining ambiguity, i.e., the set $\{e^{\sqrt{-1}\varphi}\dot{\eta}: \varphi \in [0,2\pi)\}$ is an equivalence class for $\dot{\eta}$. Our main results use $d(\eta,\eta') \coloneqq \min_\varphi \norm{e^{\sqrt{-1}\varphi}\eta - \eta'}_2$ to denote the distance between $\eta$ and $\eta'$, which is a metric on the set of such equivalence classes. \begin{theorem}[\textbf{Subspace Case}] \label{thm:perturbation} Let $\alpha =\sqrt{n}$, and suppose Assumptions \ref{ass:A} -- \ref{ass:W} are satisfied with $\delta < 1/3$ and a sufficiently small absolute constant $C_W > 0$. Then there exist absolute constants $c,C,C' > 0$, such that if \begin{align} \max\Bigl\{\frac{m\log^2(Nm+n)}{n}, \frac{\log(Nm+n)}{N}, \nonumber\\ \frac{\log(Nm+n)}{m} \Bigr\} \leq C, \label{eq:size_subspace} \end{align} then with probability at least $1-2n^{-c} - e^{-cm}$, \[ d(\hat{\eta},~\dot{\eta}) \leq \Delta, \] where \begin{align} \Delta \coloneqq \frac{8C'}{1-3\delta} \max\{\nu,~\nu^2\}, \label{eq:Delta} \end{align} and \begin{align} \nu \coloneqq \sqrt{nN} \max_{k\in[n], j\in[N]}|w_{kj}|. \label{eq:nu} \end{align} \end{theorem} When $m$ is large (e.g., $m\geq n$), \eqref{eq:size_subspace} does not hold, hence the perturbation bound of the eigenvector $\hat{\eta}$ of $G$ in Theorem \ref{thm:perturbation} is no longer true. We can, however, bound the perturbation of the eigenvector of a submatrix of $G$. 
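The phase-invariant distance admits a closed form: at the minimizing $\varphi$, the inner product $e^{\sqrt{-1}\varphi}\langle \eta', \eta\rangle$ becomes real and nonnegative, giving $d(\eta,\eta')^2 = \norm{\eta}_2^2 + \norm{\eta'}_2^2 - 2|\langle \eta', \eta\rangle|$. A brief NumPy check (hypothetical helper name \texttt{dist}) compares the closed form with a brute-force minimization over $\varphi$:

```python
import numpy as np

def dist(eta, eta2):
    """d(eta, eta') = min_phi ||exp(i*phi)*eta - eta'||_2, in closed form."""
    val = (np.linalg.norm(eta) ** 2 + np.linalg.norm(eta2) ** 2
           - 2.0 * np.abs(np.vdot(eta2, eta)))   # vdot conjugates eta2
    return np.sqrt(max(val, 0.0))                # clip tiny negative rounding

rng = np.random.default_rng(3)
eta = rng.standard_normal(7) + 1j * rng.standard_normal(7)
eta /= np.linalg.norm(eta)
eta2 = rng.standard_normal(7) + 1j * rng.standard_normal(7)
eta2 /= np.linalg.norm(eta2)

# A global phase rotation is at distance zero from the original vector:
print(dist(np.exp(0.7j) * eta, eta))    # ~0

# Brute-force minimization over phi matches the closed form:
phis = np.linspace(0.0, 2.0 * np.pi, 20001)
brute = min(np.linalg.norm(np.exp(1j * p) * eta - eta2) for p in phis)
print(abs(brute - dist(eta, eta2)))     # ~0 (up to grid resolution)
```

For unit vectors this reduces to $d(\eta,\eta') = \sqrt{2 - 2|\eta'^*\eta|}$, which is the form used implicitly when relating the error bounds to the alignment $|\hat{\eta}^*\eta^{(0)}|$.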
\begin{theorem}[\textbf{Joint Sparsity Case}] \label{thm:perturbation2_alt} Let $\alpha =\sqrt{n}$ and $s=s_0+2s_1$, and suppose Assumptions \ref{ass:A} -- \ref{ass:W} are satisfied with $\delta < 1/3$ and a sufficiently small absolute constant $C_W > 0$. Then there exist absolute constants $c,C,C' > 0$, such that if \begin{align} \max\Bigl\{\frac{(s+N)\log^8 n \log^2(sN+m)}{n}, \nonumber\\ \frac{\sqrt{s}\log^2 n \log(sN+m)}{N}, \nonumber\\ \frac{\log^4 n \log^2(sN+m)}{s_0} \Bigr\} \leq C, \label{eq:size_sparsity_alt} \end{align} then with probability at least $1-2n^{-c} - m^{-cs}$, \[ d(\hat{\eta}_{T_\eta},~\Omega_{T_\eta} \dot{\eta}) \leq \widetilde{\Delta}, \] where \begin{align} \widetilde{\Delta} \coloneqq \frac{8C'}{1-3\delta} \max \{N^{3/2} \nu,~\nu^2 \}, \label{eq:Delta_alt} \end{align} and $\nu$ is defined in \eqref{eq:nu}. \end{theorem} The error bounds for Algorithms \ref{alg:pi} and \ref{alg:tpi} in the next section rely on Theorems \ref{thm:perturbation} and \ref{thm:perturbation2_alt}, and existing analysis of power iteration \cite{Golub1996} and truncated power iteration \cite{Yuan2013}. Additionally, the perturbation bounds in this section are of independent interest. In particular, Theorem \ref{thm:perturbation} shows that if the assumptions and the prescribed sample complexities in \eqref{eq:size_subspace} are satisfied, then with high probability the principal eigenvector $\hat{\eta}$ of $G$ is an accurate estimate of the vector $\dot{\eta}$ that concatenates the unknown variables. It gives an error bound for any algorithm that finds the principal eigenvector of $G$. Similarly, we believe that Theorem \ref{thm:perturbation2_alt} can be used to analyze other algorithms that find the sparse principal component of $G$. \subsection{Error Bounds for the Power Iteration Algorithms} In this section, we give performance guarantees for Algorithms \ref{alg:pi} and \ref{alg:tpi} under the assumptions stated in Section \ref{sec:assumption}. 
\begin{theorem}[\textbf{Subspace Case}] \label{thm:pi} Suppose Assumptions \ref{ass:A} -- \ref{ass:W} are satisfied with $\delta < 1/4$ and a sufficiently small absolute constant $C_W > 0$. Let $\alpha =\sqrt{n}$, and $\beta = 3/2$. Assume that $\xi \coloneqq |\hat{\eta}^*\eta^{(0)}| > 0$. Then there exist absolute constants $c,C,C' > 0$, such that if \eqref{eq:size_subspace} is satisfied, then with probability at least $1-2n^{-c} - e^{-cm}$, the iterates in Algorithm \ref{alg:pi} satisfy \begin{align*} d(\eta^{(t)},~\dot{\eta}) \leq \rho^t d(\eta^{(0)},~\dot{\eta}) + 2\Delta, \end{align*} where $\Delta$ is defined in \eqref{eq:Delta}, and \begin{equation} \label{eq:rho} \rho \coloneqq \Bigl\{ 1 - \frac{1}{2} \Bigl[1-\Bigl(\frac{1+6\delta}{3-2\delta}\Bigr)^2\Bigr]\xi(1 + \xi) \Bigr\}^{1/2}. \end{equation} \end{theorem} Theorem \ref{thm:pi} shows that the power iteration algorithm requires $n=O(m\log^2(Nm+n))$ sensors and $N=O(\log(Nm+n))$ snapshots to successfully recover $X$ and $\lambda$. This agrees, up to log factors, with the sample complexity required for the uniqueness of $(\lambda, X)$ in the subspace case, which is $n>m$ and $N\geq \frac{n-1}{n-m}$ \cite{Li2015e}. Next, we compare Theorem \ref{thm:pi} with a similar error bound for the least squares approach by Ling and Strohmer \cite[Theorem 3.5]{Ling2016}. The sample complexity in Theorem \ref{thm:pi} matches the numbers required by the least squares approach, $n=O(m\log^2(Nm+n))$ and $N=O(\log^2(Nm+n))$, up to one log factor. One caveat of the least squares approach is that, apart from the linear equation \eqref{eq:bgpc_linear}, it needs an extra linear constraint to avoid the trivial solution $\gamma=0$, $X=0$. Unfortunately, in the noisy setting, the recovery error by the least squares approach is sensitive to this extra linear constraint. Our numerical experiments (Section \ref{sec:experiment}) show that power iteration outperforms least squares in the noisy setting.
\begin{theorem}[\textbf{Joint Sparsity Case}] \label{thm:tpi_alt} Suppose Assumptions \ref{ass:A} -- \ref{ass:W} are satisfied with $\delta < 1/4$ and a sufficiently small absolute constant $C_W > 0$. Let $\alpha =\sqrt{n}$, $\beta = 3/2$, $s_1\geq s_0$ in Algorithm \ref{alg:tpi}, and define $s=s_0+2s_1$. Then there exist absolute constants $c,C,C' > 0$, such that if $|\dot{\eta}^*\eta^{(0)}|\geq \xi + \widetilde{\Delta}$ for some $\xi\in(0,1)$, and \eqref{eq:size_sparsity_alt} is satisfied, then with probability at least $1-2n^{-c} - m^{-cs}$, the iterates in Algorithm \ref{alg:tpi} for the \emph{joint} sparsity case satisfy \begin{align*} d(\eta^{(t)},~\dot{\eta}) \leq \tilde{\rho}^t d(\eta^{(0)},~\dot{\eta}) + \frac{2\sqrt{5}\widetilde{\Delta}}{1-\tilde{\rho}}, \end{align*} where $\widetilde{\Delta}$ is defined in \eqref{eq:Delta_alt}, and $\tilde{\rho} < 1$ has the following expression: \begin{align} \tilde{\rho} \coloneqq \rho \cdot \Bigl(1 + 2\sqrt{\frac{s_0}{s_1}} + \frac{2s_0}{s_1}\Bigr)^{1/2}, \label{eq:rho_tilde} \end{align} and $\rho$ is defined in \eqref{eq:rho}. \end{theorem} Theorem \ref{thm:tpi_alt} is only valid when $\tilde{\rho} < 1$. With the choice $s_1 = 2s_0$, as $\delta$ approaches $0$ and $\xi$ approaches $1$, the convergence rate $\tilde{\rho}$ approaches $\frac{1}{3}\sqrt{2 + \sqrt{2}} \approx 0.62$. We discuss a more realistic scenario next. \begin{remark} \label{rem:initialize} A good initialization for $\lambda$ alone is usually sufficient. Suppose one has a good initial estimate for the gains and phases, i.e., $\lambda$ satisfies $|\lambda_k - e^{\sqrt{-1}\varphi_k}| < \sqrt{1+\delta}-1$ for known phase estimates $\{\varphi_k\}_{k=1}^n$. One can initialize Algorithm \ref{alg:tpi} with $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, e^{-\sqrt{-1}\varphi_1},\dots, e^{-\sqrt{-1}\varphi_n}]^\top$; then, when $\Delta$ is negligible (noiseless or low noise settings), $\xi$ in Theorem \ref{thm:tpi_alt} can be set to $1/\sqrt{(1+\delta)(2+\delta)}$. 
For example, if $\delta = 0.05$ and $s_1 \geq 10 s_0$, then $\tilde{\rho}<1$. Since we do not attempt to optimize the constants in this paper, the constants in this exemplary scenario are conservative. \end{remark} Theorem \ref{thm:tpi_alt} states that for Algorithm \ref{alg:tpi} to recover $\lambda$ and a jointly sparse $X$, it is sufficient to have $n = O(s_0\log^8n\log^2(s_0N+m))$ sensors and $N=O(\sqrt{s_0}\log^2n \log(s_0N+m))$ snapshots. In comparison, the (up to a factor of $2$) optimal sample complexity for unique recovery in the joint sparsity case is $n > 2s_0$ and $N\geq \frac{n-1}{n-2s_0}$ \cite{Li2015e}. Hence, the number of sensors required in Theorem \ref{thm:tpi_alt} is (up to log factors) optimal, but the number of snapshots required is suboptimal. Another drawback is that these results apply only to the joint sparsity case, and not to the more general sparsity case. However, we believe these drawbacks are artifacts of our analysis. For both the joint sparsity case and the sparsity case, we have $Nn$ complex-valued measurements, and $Ns_0 + n -1$ complex-valued unknowns. One may expect successful recovery when $n$ and $N$ are (up to log factors) on the order of $s_0$ and $1$, respectively. In fact, the numerical experiments in Section \ref{sec:experiment} confirm that truncated power iteration successfully recovers $\lambda$ and $X$ in this regime for the more general sparsity case. Wang and Chi \cite{Wang2016} analyzed the performance of $\ell_1$ minimization for BGPC in the sparsity case, where they assumed that $A$ is the DFT matrix, and $X$ is a Bernoulli-Subgaussian random matrix. Their sample complexity for $\ell_1$ minimization is $n=O(s)$ and $N = O(n\log^4 n)$. The success of their algorithm relies on a restrictive assumption that $\lambda_k\approx 1$, which is analogous to the dependence of our algorithm on a good initialization of $\lambda_k$. 
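The constants in Remark \ref{rem:initialize} are easy to check numerically. The sketch below (a standalone illustration, not part of the algorithms) evaluates $\rho$ from \eqref{eq:rho} and $\tilde{\rho}$ from \eqref{eq:rho_tilde}, first in the idealized limit $\delta \to 0$, $\xi \to 1$, $s_1 = 2s_0$, and then for the scenario of the remark, $\delta = 0.05$, $\xi = 1/\sqrt{(1+\delta)(2+\delta)}$, $s_1 = 10 s_0$:

```python
import math

def rho(delta, xi):
    # Convergence rate of power iteration, cf. eq. (rho).
    r = (1 + 6 * delta) / (3 - 2 * delta)
    return math.sqrt(1 - 0.5 * (1 - r ** 2) * xi * (1 + xi))

def rho_tilde(delta, xi, s0_over_s1):
    # Rate of truncated power iteration, cf. eq. (rho_tilde); the extra factor
    # accounts for the error amplification of the truncation step.
    amplification = 1 + 2 * math.sqrt(s0_over_s1) + 2 * s0_over_s1
    return rho(delta, xi) * math.sqrt(amplification)

# Idealized limit: delta -> 0, xi -> 1, s1 = 2 * s0.
print(rho_tilde(0.0, 1.0, 0.5))  # about 0.62

# Scenario of the remark: delta = 0.05, xi = 1/sqrt(1.05 * 2.05), s1 = 10 * s0.
delta = 0.05
xi = 1 / math.sqrt((1 + delta) * (2 + delta))
print(rho_tilde(delta, xi, 0.1))  # slightly below 1, so the iteration still contracts
```

Increasing $s_1$ further only shrinks the amplification factor, so $\tilde{\rho} < 1$ continues to hold for all $s_1 \geq 10 s_0$ in this scenario.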
In the next section, we show that such dependence can be relaxed under some additional conditions using the initialization provided by Algorithm \ref{alg:init}. \subsection{A Theoretical Guarantee of the Initialization} The next theorem shows that, under certain conditions, Algorithm \ref{alg:init} recovers the locations of the large entries in $X$ correctly, and yields an initial estimate $\eta^{(0)}$ that satisfies $|\dot{\eta}^* \eta^{(0)}| > 1-2\delta$ (close to $1$). \begin{theorem}[Initialization] \label{thm:init} Suppose Assumptions \ref{ass:A} -- \ref{ass:X_flat} are satisfied. Then there exist absolute constants $C'', c''>0$, such that if \[ n > C'' s_0^2 \log^6 (nmN), \] then with probability at least $1-n^{-c''}$, for all $j\in [N]$ the set $T_j'$ in Assumption \ref{ass:X_flat} is a subset of $T_j$ in Algorithm \ref{alg:init}. Additionally, in the joint sparsity case, if the sample complexity \eqref{eq:size_sparsity_alt} is satisfied with a sufficiently large $C$, Assumption \ref{ass:W} is satisfied with a sufficiently small $C_W$, and Assumption \ref{ass:X_flat} is satisfied with a sufficiently small $\delta_X$, then the estimate $\eta^{(0)}$ produced by Algorithm \ref{alg:init} satisfies that $|\dot{\eta}^* \eta^{(0)}|$ can be made arbitrarily close to \[ \frac{n^{3/2}+\norm{\lambda}_2\norm{\gamma}_2^2}{\sqrt{n^2+\norm{\lambda}_2^2\norm{\gamma}_2^2}\sqrt{n+\norm{\gamma}_2^2}} > 1-2\delta. \] \end{theorem} By Theorem \ref{thm:init}, the constant $\xi$ in Theorem \ref{thm:tpi_alt} can be set to $1-2\delta$ in a low noise setting. For $\delta < 0.19$, this constant $\xi$ is larger than the one in Remark \ref{rem:initialize}, and allows $\tilde{\rho} < 1$ for more choices of $s_1$. Our guarantee for the initialization requires that the number $n$ of sensors scales quadratically (up to log factors) in the sparsity $s_0$, which seems suboptimal. 
Since similar suboptimal sample complexities show up in sparse PCA \cite{Berthet2013} and sparse phase retrieval \cite{Netrapalli2013,Cai2016,Jaganathan2017}, we conjecture that such a scaling law is intrinsic to the problem of sparse BGPC. In the joint sparsity case, instead of estimating the supports of $x_{\cdot 1},x_{\cdot 2},\dots,x_{\cdot N}$ separately, one can estimate the row support of $X$ directly by sorting $\sum_{j\in[N]} \norm{d^*_{\cdot((j-1)m+\ell)}E}_2^2$ for $\ell\in [m]$ and finding the $s_1$ largest. In this case, Assumption \ref{ass:X_flat} can be changed to the following: there exists a subset $T'$ of large rows (in terms of $\ell_2$ norm), such that for all $\ell \in T'$, \[ \frac{\sum_{j\in[N]} |x_{\ell j}|^2}{\norm{X}_\mathrm{F}^2} \geq \frac{\omega}{s_0}, \] and \[ \frac{\sum_{j\in [N], \ell' \in [m]\backslash T'} |x_{\ell' j}|^2}{\norm{X}_\mathrm{F}^2} \leq \delta_X. \] The subset $T'$ can then be identified and an initialization $\eta^{(0)}$ computed under the same conditions as in Theorem \ref{thm:init}, which can be proved using the same arguments. \section{Fundamental Estimates} \label{sec:fun_est} To prove the main results, we must first establish some fundamental estimates specific to BGPC. Proofs of some lemmas in this section can be found in the appendix. \subsection{A Gap in Eigenvalues} \label{sec:gap} A key component in establishing a perturbation bound for an eigenvector problem (e.g., Theorem \ref{thm:perturbation}) is bounding the gap between eigenvalues. Lemma \ref{lem:expB} gives us such a bound. \begin{lemma} \label{lem:expB} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied and $\alpha =\sqrt{n}$. Then the smallest eigenvalue of $\mathbb{E} \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^*$ is $0$, and the rest of the eigenvalues reside in the interval $[\frac{(1-\delta)^2}{1+\delta},\, 2(1+\delta)]$. 
\end{lemma} \subsection{Perturbation Due to Randomness in $A$} \label{sec:concentration} Next, we show that $\Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^*$, whose randomness comes from $A$, is close to its mean $\mathbb{E} \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^*$ under certain conditions. \begin{lemma} \label{lem:Bs} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied, and $\alpha =\sqrt{n}$. For any constant $\delta_B > 0$, there exist absolute constants $C,c>0$, such that: \begin{itemize} \item \textbf{Subspace case:} If \eqref{eq:size_subspace} is satisfied with $C$, then \[ \norm{B_\mathrm{s} -\mathbb{E} B_\mathrm{s} } \leq \delta_B \] with probability at least $1 - n^{-c} -e^{-cm}$. \item \textbf{Joint sparsity case:} If \eqref{eq:size_sparsity_alt} is satisfied with $C$, then \[ \norm{\Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^* -\mathbb{E} \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^*} \leq \delta_B \] for all $T_1=\dots = T_N$ and $T_\eta$ defined in \eqref{eq:Teta}, with probability at least $1 - n^{-c} -m^{-cs}$. \end{itemize} \end{lemma} \begin{IEEEproof}[Proof of Lemma \ref{lem:Bs}] Recall that \[ \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^* = \begin{bmatrix} \Omega_{T_x} D^*D \Omega_{T_x}^* & \sqrt{n}\Omega_{T_x}D^*E_\mathrm{s} \\ \sqrt{n}E_\mathrm{s}^*D\Omega_{T_x}^* & n E_\mathrm{s}^*E_\mathrm{s} \end{bmatrix} \] It follows that \begin{align} \nonumber & \norm{\Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^* -\mathbb{E} \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^*}\\ \label{eq:sn1} & \leq \norm{\Omega_{T_x} D^*D \Omega_{T_x}^* -\mathbb{E} \Omega_{T_x} D^*D \Omega_{T_x}^*}\\ \label{eq:sn2} & + n \norm{E_\mathrm{s}^*E_\mathrm{s} - \mathbb{E} E_\mathrm{s}^*E_\mathrm{s}} \\ \label{eq:sn3} & + 2\sqrt{n} \norm{\Omega_{T_x}D^*E_\mathrm{s} - \mathbb{E} \Omega_{T_x}D^*E_\mathrm{s}}. 
\end{align} Lemma \ref{lem:Bs} follows from the bounds on the spectral norms in \eqref{eq:sn1} -- \eqref{eq:sn3} in Lemmas \ref{lem:DstarD} -- \ref{lem:DstarEs_alt}, respectively. \end{IEEEproof} \begin{lemma} \label{lem:DstarD} Suppose Assumption \ref{ass:A} is satisfied. Then there exist absolute constants $C_1, c_1 >0$, such that: \begin{itemize} \item \textbf{Subspace case:} \[ \norm{D^*D - \mathbb{E} D^*D} \leq C_1\sqrt{\frac{m}{n}}, \] with probability at least $1-e^{-c_1 m}$. \item \textbf{Joint sparsity case:} For any $\{T_j\}_{j=1}^N$ and $T_x$ defined in \eqref{eq:Tx}, \[ \norm{\Omega_{T_x} D^*D \Omega_{T_x}^* - \mathbb{E} \Omega_{T_x} D^*D \Omega_{T_x}^*} \leq C_1\sqrt{\frac{s}{n}\log m}, \] with probability at least $1-m^{-c_1 s}$. \end{itemize} \end{lemma} \begin{lemma} \label{lem:EsstarEs} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied. Then there exist absolute constants $C_2, c_2>0$, such that \begin{itemize} \item \textbf{Subspace case:} \begin{align*} \norm{E_\mathrm{s}^*E_\mathrm{s} - \mathbb{E} E_\mathrm{s}^*E_\mathrm{s}} \leq \frac{C_2}{n}\max\Bigl\{\sqrt{\frac{\log n}{N}},\sqrt{\frac{\log n}{m}}, \\ \frac{\log n}{N},\frac{\log n}{m}\Bigr\} \end{align*} \item \textbf{Joint sparsity case:} \begin{align*} \norm{E_\mathrm{s}^*E_\mathrm{s} - \mathbb{E} E_\mathrm{s}^*E_\mathrm{s}} \leq \frac{C_2}{n}\max\Bigl\{\sqrt{\frac{\log n}{N}},\sqrt{\frac{\log n}{s_0}}, \\ \frac{\log n}{N},\frac{\log n}{s_0}\Bigr\} \end{align*} \end{itemize} with probability at least $1-n^{-c_2}$. \end{lemma} \begin{lemma}[\textbf{Subspace Case}] \label{lem:DstarEs} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied, and $\min\{N,m\}>\log n$. Then there exist absolute constants $C_3, c_3>0$, such that \begin{align*} \norm{D^*E_\mathrm{s}-\mathbb{E} D^*E_\mathrm{s}} \leq C_3 \max\Bigl\{ \sqrt{\frac{\log (Nm+n)}{nN}}, \\ \sqrt{\frac{\log(Nm+n)}{nm}},\frac{\sqrt{m}\log(Nm+n)}{n}\Bigr\} \end{align*} with probability at least $1-n^{-c_3}$. 
\end{lemma} \begin{lemma}[\textbf{Joint Sparsity Case}] \label{lem:DstarEs_alt} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied. Then there exist absolute constants $C_3, c_3>0$, such that for all $T_1 = \dots = T_N$, \begin{align*} & \norm{\Omega_{T_x}D^*E_\mathrm{s}-\mathbb{E} \Omega_{T_x}D^*E_\mathrm{s}} \\ & \leq \frac{C_3 s_0^{1/4} (s+N)^{1/4} (\sqrt{n}+\sqrt{s+N})^{1/2}}{n \min\{\sqrt{s_0}, \sqrt{N}\}} \\ &\quad \log^3 n \log(sN+m), \end{align*} with probability at least $1-n^{-c_3}$. \end{lemma} \subsection{Perturbation Due to Noise} \label{sec:noise} We established some fundamental estimates regarding $B_\mathrm{s}$ in Sections \ref{sec:gap} and \ref{sec:concentration}. In this section, we turn to the perturbation caused by noise. By the definitions of $B$, $B_\mathrm{s}$, $E$, $E_\mathrm{s}$, and $E_\mathrm{n}$, we have \[ B = B_\mathrm{s} + B_\mathrm{n}, \] where \[ B_\mathrm{n} \coloneqq \begin{bmatrix} 0 & \alpha D^* E_\mathrm{n} \\ \alpha E_\mathrm{n}^*D & \alpha^2 (E_\mathrm{s}^*E_\mathrm{n} + E_\mathrm{n}^* E_\mathrm{s} + E_\mathrm{n}^* E_\mathrm{n}) \end{bmatrix}. \] Therefore, \begin{align*} & \Omega_{T_\eta}B_\mathrm{n} \Omega_{T_\eta}^* \\ & = \begin{bmatrix} 0 & \alpha \Omega_{T_x} D^* E_\mathrm{n} \\ \alpha E_\mathrm{n}^*D \Omega_{T_x}^* & \alpha^2 (E_\mathrm{s}^*E_\mathrm{n} + E_\mathrm{n}^* E_\mathrm{s} + E_\mathrm{n}^* E_\mathrm{n}) \end{bmatrix}. \end{align*} Lemma \ref{lem:Bn} gives an upper bound on the spectral norm of the perturbation from noise. \begin{lemma} \label{lem:Bn} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied. Let $\alpha =\sqrt{n}$ and let $\nu$ be defined by \eqref{eq:nu}. Then there exist absolute constants $c,C,C'>0$ such that: \begin{itemize} \item \textbf{Subspace case:} If \eqref{eq:size_subspace} is satisfied, then with probability at least $1 - n^{-c}$ \begin{align*} \norm{B_\mathrm{n}} \leq C' \max \{\nu,~\nu^2 \}. 
\end{align*} Additionally, for any constant $\delta_W >0$, there exists an absolute constant $C_W>0$ such that if Assumption \ref{ass:W} is satisfied with $C_W$, then the above bound becomes \[ \norm{B_\mathrm{n}} \leq \delta_W. \] \item \textbf{Joint sparsity case:} If \eqref{eq:size_sparsity_alt} is satisfied, then with probability at least $1 - n^{-c}$ \begin{align*} \norm{\Omega_{T_\eta} B_\mathrm{n} \Omega_{T_\eta}^*} \leq C' \max \{N^{3/2}\nu,~\nu^2 \} \end{align*} for all $T_1=\dots = T_N$ and $T_\eta$ defined in \eqref{eq:Teta}. Additionally, for any constant $\delta_W >0$, there exists an absolute constant $C_W>0$ such that if Assumption \ref{ass:W} is satisfied with $C_W$, then the above bound becomes \[ \norm{\Omega_{T_\eta}B_\mathrm{n} \Omega_{T_\eta}^*} \leq \delta_W. \] \end{itemize} \end{lemma} \begin{IEEEproof} To complete the proof, we bound the spectral norms of $\Omega_{T_x} D^* E_\mathrm{n}$, $E_\mathrm{s}^*E_\mathrm{n}$, and $E_\mathrm{n}^* E_\mathrm{n}$ in Lemmas \ref{lem:DstarEn}, \ref{lem:EsstarEn}, and \ref{lem:EnstarEn}, respectively. \end{IEEEproof} \begin{lemma}[\textbf{Subspace Case}] \label{lem:DstarEn} Suppose Assumption \ref{ass:A} is satisfied, and $m >\log n$. Then there exist absolute constants $C_4, c_4>0$, such that \begin{align*} & \norm{D^*E_\mathrm{n}} \leq C_4 \max\Bigl\{\sqrt{\log (Nm+n)}, \\ & \qquad\qquad \sqrt{\frac{Nm}{n}} \log (Nm+n) \Bigr\} \max_{k\in[n], j\in[N]}|w_{kj}|, \end{align*} with probability at least $1-n^{-c_4}$. \end{lemma} \begin{lemma}[\textbf{Joint Sparsity Case}] \label{lem:DstarEn_alt} Suppose Assumption \ref{ass:A} is satisfied. Then there exist absolute constants $C_4, c_4>0$, such that for all $T_1 = \dots = T_N$, \begin{align*} \norm{\Omega_{T_x}D^*E_\mathrm{n}} \leq C_4 (\sqrt{s}N + \sqrt{sN\log m} + \sqrt{N\log^3 n}) \\ \times \sqrt{\log n}\max_{k\in[n], j\in[N]}|w_{kj}|, \end{align*} with probability at least $1-n^{-c_4}$. 
\end{lemma} \begin{lemma} \label{lem:EsstarEn} Suppose Assumptions \ref{ass:A} -- \ref{ass:X} are satisfied. Then there exist absolute constants $C_5, c_5>0$, such that \begin{itemize} \item \textbf{Subspace case:} \begin{align*} \norm{ E_\mathrm{s}^*E_\mathrm{n} } \leq C_5\sqrt{\frac{N}{n}} \max\Bigl\{1, \sqrt{\frac{\log n}{N}}, \sqrt{\frac{\log n}{m}}\Bigr\} \\ \times \max_{k\in[n], j\in[N]}|w_{kj}|, \end{align*} \item \textbf{Joint sparsity case:} \begin{align*} \norm{ E_\mathrm{s}^*E_\mathrm{n} } \leq C_5\sqrt{\frac{N}{n}} \max\Bigl\{1, \sqrt{\frac{\log n}{N}}, \sqrt{\frac{\log n}{s_0}}\Bigr\} \\ \times \max_{k\in[n], j\in[N]}|w_{kj}|, \end{align*} \end{itemize} with probability at least $1-n^{-c_5}$. \end{lemma} \begin{lemma} \label{lem:EnstarEn} \[ \norm{ E_\mathrm{n}^*E_\mathrm{n} } \leq N\max_{k\in[n], j\in[N]}|w_{kj}|^2. \] \end{lemma} \subsection{Scalar Concentration} \label{sec:scalar} We now introduce a few scalar concentration bounds that are useful in the proof of Theorem \ref{thm:init}. 
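Before stating the bounds, the $n^{-1/2}$ scaling that drives estimates of this type can be illustrated with a quick Monte Carlo experiment (a sketch under the assumption $a_{k\ell} \sim \mathcal{CN}(0, 1/n)$, with the noise magnitudes normalized to $|w_{kj}| = 1$; the trial counts and dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_fluctuation(n, trials=200):
    # Average of |sum_k (|a_k|^2 - E|a_k|^2)| over independent trials,
    # where a_k ~ CN(0, 1/n) and the noise magnitude |w| is fixed at 1.
    a = (rng.standard_normal((trials, n))
         + 1j * rng.standard_normal((trials, n))) / np.sqrt(2 * n)
    return np.mean(np.abs(np.sum(np.abs(a) ** 2 - 1.0 / n, axis=1)))

# Increasing n by a factor of 100 shrinks the deviation by roughly sqrt(100) = 10.
f_small, f_large = mean_fluctuation(100), mean_fluctuation(10_000)
print(f_small, f_large)
```

The observed decay matches the $n^{-1/2}$ rate (up to log factors) in the lemma below.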
\begin{lemma} \label{lem:ineq_square} Suppose Assumptions \ref{ass:A} -- \ref{ass:W} are satisfied. Then there exist absolute constants $C_6, c_6>0$, such that for all $j\in[N]$ and $\ell\in[m]$, we have \begin{align} & \left| \sum_{k\in[n]} \left( | \lambda_k \overline{a_{k\ell}} a_{k\cdot}^\top x_{\cdot j}|^2 - \mathbb{E} | \lambda_k \overline{a_{k\ell}} a_{k\cdot}^\top x_{\cdot j}|^2\right) \right| \nonumber \\ & \leq \frac{C_6 \norm{x_{\cdot j}}_2^2 \log^3 (nmN)}{n^{3/2}}, \label{eq:ineq1} \end{align} \begin{align} & \left| \sum_{k\in[n]} \lambda_k a_{k\ell} \overline{a_{k\ell}} a_{k\cdot}^\top x_{\cdot j} \overline{w_{kj}}\right| \nonumber \\ & \leq \frac{C_6 \norm{x_{\cdot j}}_2 \log^2 (nmN)}{n} \max_{k\in[n],j\in[N]} |w_{kj}| \nonumber \\ & \leq \frac{C_6 C_W \norm{x_{\cdot j}}_2^2 \log^2 (nmN)}{\sqrt{1-\theta} n^{3/2}}, \label{eq:ineq2} \end{align} and \begin{align} & \left| \sum_{k\in[n]} \left( | \overline{a_{k\ell}} w_{kj} |^2 - \mathbb{E} | \overline{a_{k\ell}} w_{kj} |^2 \right) \right| \nonumber \\ & \leq \frac{C_6 \log^2 (nmN)}{n^{1/2}} \max_{k\in[n],j\in[N]} |w_{kj}|^2 \nonumber\\ & \leq \frac{C_6 C_W^2 \norm{x_{\cdot j}}_2^2 \log^2 (nmN)}{(1-\theta) n^{3/2}}, \label{eq:ineq3} \end{align} with probability at least $1-n^{-c_6}$. \end{lemma} \section{Proofs of the Main Results} \label{sec:proof} \subsection{Proof of the Perturbation Bound for the Eigenvector Problem} In this section, we prove Theorem \ref{thm:perturbation}. Theorem \ref{thm:perturbation2_alt} can be proved similarly. \begin{IEEEproof}[Proof of Theorem \ref{thm:perturbation}] First, \begin{equation} \label{eq:Gsum} G = \beta I_{Nm+n} - B = (\beta I_{Nm+n} - \mathbb{E} B_\mathrm{s}) - (B_\mathrm{s} - \mathbb{E} B_\mathrm{s}) - B_\mathrm{n}. 
\end{equation} Lemma \ref{lem:expB} establishes a gap in the eigenvalues of the matrix $\mathbb{E} B_\mathrm{s}$ -- the smallest and the second smallest eigenvalues of $\mathbb{E} B_\mathrm{s}$ are separated by a gap of at least \[ \frac{(1-\delta)^2}{1+\delta} \geq 1 - 3\delta > 0. \] Therefore, the gap between the largest and the second largest eigenvalues of $\beta I_{Nm+n} - \mathbb{E} B_\mathrm{s}$ is at least $1-3\delta$. By Lemmas \ref{lem:Bs} and \ref{lem:Bn}, there exist absolute constants $c,C,C', C_W > 0$ such that if all the assumptions are satisfied, then with probability at least $1-2n^{-c} - e^{-cm}$, \begin{equation} \label{eq:perturb1} \norm{(B_\mathrm{s} - \mathbb{E} B_\mathrm{s}) + B_\mathrm{n}} \leq \norm{B_\mathrm{s} - \mathbb{E} B_\mathrm{s}} + \norm{B_\mathrm{n}} \leq \frac{1-3\delta}{4}, \end{equation} \begin{align} \norm{B_\mathrm{n}} \leq C' \max \{\nu,~\nu^2 \}. \label{eq:perturb2} \end{align} Recall that $\dot{\eta}$ is the principal eigenvector of $\beta I_{Nm+n} - \mathbb{E} B_\mathrm{s}$. By the Davis-Kahan $\sin\theta$ Theorem (\cite{Davis1970}; see also \cite[Theorem 8.1.12]{Golub1996}), \eqref{eq:perturb1} and \eqref{eq:perturb2} imply \begin{align*} & \sin\angle(\dot{\eta}, \hat{\eta}) \\ \leq & \frac{4}{1-3\delta} \norm{ (B_\mathrm{s} - \mathbb{E} B_\mathrm{s} + B_\mathrm{n}) \dot{\eta} }_2 \\ \leq & \frac{4}{1-3\delta} \norm{B_\mathrm{n}} \\ \leq & \frac{4C'}{1-3\delta} \max \{\nu,~\nu^2 \}, \end{align*} where the second inequality is due to $B_\mathrm{s} \dot{\eta} = \mathbb{E} B_\mathrm{s} \dot{\eta} = 0$. Theorem \ref{thm:perturbation} follows from the above bound, and the fact that \begin{align*} d(\dot{\eta}, \hat{\eta}) = \sqrt{2 - 2 \cos\angle(\dot{\eta},\hat{\eta})} = 2\sin\frac{\angle(\dot{\eta}, \hat{\eta})}{2} \\ \leq 2\sin\angle(\dot{\eta}, \hat{\eta}). 
\end{align*} \end{IEEEproof} One can prove Theorem \ref{thm:perturbation2_alt} using the same steps as in the proof of Theorem \ref{thm:perturbation}, by restricting rows and columns of matrices to the support $T_\eta$ and applying the corresponding bounds on submatrices. \subsection{Proof of the Error Bound for Algorithm \ref{alg:pi}} \begin{IEEEproof}[Proof of Theorem \ref{thm:pi}] Recall that the largest eigenvalue of $\beta I_{Nm+n} - \mathbb{E} B_\mathrm{s}$ is $\beta - 0 = \frac{3}{2}$, and all other eigenvalues reside in the interval $[\frac{3}{2} - 2(1+\delta), \frac{3}{2} - \frac{(1-\delta)^2}{1+\delta}]$. By Lemmas \ref{lem:Bs} and \ref{lem:Bn}, there exist constants $c,C,C_W>0$ such that \begin{align*} \norm{(B_\mathrm{s} - \mathbb{E} B_\mathrm{s}) + B_\mathrm{n}} \leq \norm{B_\mathrm{s} - \mathbb{E} B_\mathrm{s}} + \norm{B_\mathrm{n}} \\ \leq \min\Bigl\{\delta, \frac{(1-\delta)^2}{1+\delta} +3\delta -1\Bigr\}, \end{align*} with probability at least $1-2n^{-c}-e^{-cm}$. By \eqref{eq:Gsum}, the largest eigenvalue of $G$ is $\norm{G} \geq \frac{3}{2} - \delta$, the corresponding eigenvector is $\hat{\eta}$, and all the other eigenvalues of $G$ reside in the interval $[-\frac{1}{2}-3\delta, \frac{1}{2}+3\delta]$. In particular, since $\hat{\eta}$ is the principal eigenvector of $G$, \[ G\hat{\eta} =\norm{G} \hat{\eta}. \] By the eigenvalue decomposition of $G$ and the Pythagorean theorem, \begin{align*} &\norm{G\eta^{(t-1)}} \\ &\leq \sqrt{\norm{G}^2|\hat{\eta}^*\eta^{(t-1)}|^2+\Bigl(\frac{1}{2}+3\delta\Bigr)^2(1- |\hat{\eta}^*\eta^{(t-1)}|^2)}. 
\end{align*} Therefore, \begin{align*} & |\hat{\eta}^*\eta^{(t)}| \\ = & \frac{|\hat{\eta}^*G\eta^{(t-1)}|}{\norm{G\eta^{(t-1)}}_2} \\ \geq & \frac{\norm{G}|\hat{\eta}^*\eta^{(t-1)}|}{\sqrt{\norm{G}^2|\hat{\eta}^*\eta^{(t-1)}|^2+(\frac{1}{2}+3\delta)^2(1- |\hat{\eta}^*\eta^{(t-1)}|^2)}} \\ \geq & |\hat{\eta}^*\eta^{(t-1)}| \frac{1}{\sqrt{|\hat{\eta}^*\eta^{(t-1)}|^2+(\frac{1+6\delta}{3-2\delta})^2(1- |\hat{\eta}^*\eta^{(t-1)}|^2)}} \\ = & |\hat{\eta}^*\eta^{(t-1)}| \frac{1}{\sqrt{1-\bigl(1-(\frac{1+6\delta}{3-2\delta})^2\bigr)(1- |\hat{\eta}^*\eta^{(t-1)}|^2)}} \\ \geq & |\hat{\eta}^*\eta^{(t-1)}| \Big[ 1 + \frac{1}{2} \bigl(1-(\frac{1+6\delta}{3-2\delta})^2\bigr)(1- |\hat{\eta}^*\eta^{(t-1)}|^2) \Big], \end{align*} where the last inequality is due to $\frac{1}{\sqrt{1-z}} \geq 1 + \frac{1}{2}z$ for $z\in (0,1)$. It follows that \begin{align} & [1 - |\hat{\eta}^*\eta^{(t)}|] \leq [1 - |\hat{\eta}^*\eta^{(t-1)}|] \nonumber\\ & \times \Big[ 1 - \frac{1}{2} \bigl(1-(\frac{1+6\delta}{3-2\delta})^2\bigr)|\hat{\eta}^*\eta^{(t-1)}|(1 + |\hat{\eta}^*\eta^{(t-1)}|) \Big]. \label{eq:converge} \end{align} Clearly, $\{ |\hat{\eta}^*\eta^{(\tau)}| \}_{\tau = 0}^t$ is monotonically increasing unless $|\hat{\eta}^*\eta^{(0)}| = 0$. By the definition $\xi \coloneqq |\hat{\eta}^*\eta^{(0)}| $, the convergence rate in \eqref{eq:converge} is bounded by $\rho^2 < 1$. It follows that \begin{align*} [1 - |\hat{\eta}^*\eta^{(t)}|] & \leq \rho^2[1 - |\hat{\eta}^*\eta^{(t-1)}|] \\ & \leq \rho^{2t} [1 - |\hat{\eta}^*\eta^{(0)}|] \end{align*} \begin{align*} d(\hat{\eta}, \eta^{(t)}) \leq \rho^t d(\hat{\eta}, \eta^{(0)}) \end{align*} By Theorem \ref{thm:perturbation}, for $\tau = 0,\dots, t$ \[ d(\dot{\eta}, \hat{\eta}) \leq \Delta. \] It follows that \begin{align*} d(\dot{\eta}, \eta^{(t)}) \leq \rho^t d(\dot{\eta}, \eta^{(0)}) + 2\Delta. 
\end{align*} \end{IEEEproof} \subsection{Proof of the Error Bound for Algorithm \ref{alg:tpi}} \begin{IEEEproof}[Proof of Theorem \ref{thm:tpi_alt}] In the joint sparsity case, any iterate $\eta^{(\tau)} = [x^{(\tau)\top}, -\gamma^{(\tau)\top}/\alpha]^\top$ satisfies that $x^{(\tau)}$ is the concatenation of jointly sparse $\{x_{\cdot j}^{(\tau)}\}_{j=1}^N$. In the $t$-th iteration, we define a support set $T^{(t)}$ that has cardinality $s = s_0 + 2s_1$, and satisfies \[ \mathrm{supp}(x_{\cdot j}) \bigcup \mathrm{supp}(x_{\cdot j}^{(t-1)}) \bigcup \mathrm{supp}(x_{\cdot j}^{(t)}) \subset T^{(t)}, \] for all $j\in[N]$. Define $T_\eta^{(t)}$ using \eqref{eq:Tx} and \eqref{eq:Teta} with $T_1 = \dots = T_N = T^{(t)}$. Next, we focus on the submatrix $\Omega_{T_\eta^{(t)}} G \Omega_{T_\eta^{(t)}}^*$ and subvectors $\Omega_{T_\eta^{(t)}} \dot{\eta}$ and $\Omega_{T_\eta^{(t)}} \eta^{(t)}$, etc. Since the supports of $\eta^{(t)}$ and $\dot{\eta}$ are subsets of $T_\eta^{(t)}$, we have $|\dot{\eta}^*\Omega_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t)}| = |\dot{\eta}^*\eta^{(t)}|$. We prove by induction that $\{|\dot{\eta}^*\eta^{(\tau)}|\}_{\tau=0}^t$ is monotonically increasing (until it crosses a threshold specified later in the proof). Suppose $\{|\dot{\eta}^*\eta^{(\tau)}|\}_{\tau=0}^{t-1}$ is monotonically increasing. Next, we prove \[ |\dot{\eta}^*\eta^{(t)}| > |\dot{\eta}^*\eta^{(t-1)}|. \] By the assumption that $|\dot{\eta}^*\eta^{(0)}|\geq \xi + \widetilde{\Delta}$ and Theorem \ref{thm:perturbation2_alt}, we have \begin{align*} & |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}| \\ \geq & |\dot{\eta}^*\eta^{(t-1)}| - d(\Omega_{T_\eta^{(t)}}\dot{\eta},~\hat{\eta}_{T_\eta^{(t)}}) \\ \geq & \xi + \widetilde{\Delta} - \widetilde{\Delta} \\ = & \xi. 
\end{align*} Following the same steps in the proof of Theorem \ref{thm:pi}, we obtain a bound for $\tilde{\eta}^{(t)}$ similar to \eqref{eq:converge}: \begin{align*} & [1 - |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\tilde{\eta}^{(t)}|] \\ \leq & [1 - |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}|] \Big[ 1 - \frac{1}{2} \bigl(1-(\frac{1+6\delta}{3-2\delta})^2\bigr) \\ & \qquad\qquad |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}|(1 + |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}|) \Big] \\ \leq & [1 - |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}|] \Big[ 1 - \frac{1}{2} \bigl(1-(\frac{1+6\delta}{3-2\delta})^2\bigr)\xi(1 + \xi) \Big] \\ = & \rho^2 [1 - |\hat{\eta}_{T_\eta^{(t)}}^*\Omega_{T_\eta^{(t)}}\eta^{(t-1)}|], \end{align*} where $\rho$ is defined in \eqref{eq:rho}. It follows that \begin{align*} d(\hat{\eta}_{T_\eta^{(t)}},~ \Omega_{T_\eta^{(t)}}\tilde{\eta}^{(t)}) \leq \rho \cdot d(\hat{\eta}_{T_\eta^{(t)}},~ \Omega_{T_\eta^{(t)}}\eta^{(t-1)}) \end{align*} We use the perturbation bound in Theorem \ref{thm:perturbation2_alt} one more time: \begin{align*} d(\Omega_{T_\eta^{(t)}}\dot{\eta},~ \Omega_{T_\eta^{(t)}}\tilde{\eta}^{(t)}) \leq \rho \cdot d(\Omega_{T_\eta^{(t)}}\dot{\eta},~ \Omega_{T_\eta^{(t)}}\eta^{(t-1)}) + 2\widetilde{\Delta}. \end{align*} Equivalently, \begin{equation} \label{eq:before_proj} \sqrt{1 - | \dot{\eta}^*\tilde{\eta}^{(t)} |} \leq \rho \sqrt{1 - | \dot{\eta}^* \eta^{(t-1)} |} + \sqrt{2}\widetilde{\Delta}. \end{equation} Next, we show that the truncation step amplifies the error only by a small factor. The vector $\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})$ is the projection of $\tilde{\eta}^{(t)}$ onto the set of structured sparse vectors, and $\eta^{(t)}$ is the normalized version. 
We define three index sets \begin{align*} & T_a = \mathrm{supp}(\dot{\eta})\backslash \mathrm{supp}(\eta^{(t)}), \\ & T_b = \mathrm{supp}(\dot{\eta})\bigcap \mathrm{supp}(\eta^{(t)}), \\ & T_c = \mathrm{supp}(\eta^{(t)})\backslash \mathrm{supp}(\dot{\eta}). \end{align*} By the Cauchy-Schwarz inequality, \begin{align*} & |\dot{\eta}^*\tilde{\eta}^{(t)}|^2 \\ \leq & \norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2^2 + \norm{\Omega_{T_b} \tilde{\eta}^{(t)}}_2^2 \\ \leq & 1 - \norm{\Omega_{T_c} \tilde{\eta}^{(t)}}_2^2 \\ \leq & 1 - \frac{|T_c|}{|T_a|} \norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2^2, \end{align*} where the last inequality is due to projection rule, i.e., $\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})$ keeps the largest entries of $\tilde{\eta}^{(t)}$ (in the part corresponding to $x$). Since ${|T_c|}/{|T_a|}\geq s_1/s_0$, we have \begin{equation} \label{eq:Tat} \norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2 \leq \sqrt{\frac{s_0}{s_1} (1-|\dot{\eta}^*\tilde{\eta}^{(t)}|^2)}. \end{equation} Also by the Cauchy-Schwarz inequality, \begin{align*} & |\dot{\eta}^*\tilde{\eta}^{(t)}|^2 \\ & \leq (\norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2 \norm{\Omega_{T_a} \dot{\eta}}_2 + \norm{\Omega_{T_b} \tilde{\eta}^{(t)}}_2 \norm{\Omega_{T_b} \dot{\eta}}_2)^2 \\ & \leq \Bigl(\norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2 \norm{\Omega_{T_a} \dot{\eta}}_2 \\ & \qquad + \sqrt{1-\norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2^2} \sqrt{1-\norm{\Omega_{T_a} \dot{\eta}}_2^2} ~\Bigr)^2 \\ & \leq 1 - (\norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2 - \norm{\Omega_{T_a} \dot{\eta}}_2)^2. \end{align*} It follows that \begin{equation} \label{eq:Tad} \norm{\Omega_{T_a} \dot{\eta}}_2 \leq \norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2 + \sqrt{1-|\dot{\eta}^*\tilde{\eta}^{(t)}|^2}. 
\end{equation} By \eqref{eq:Tat} and \eqref{eq:Tad}, \begin{align} \nonumber & |\dot{\eta}^*\tilde{\eta}^{(t)}| - |\dot{\eta}^*\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})| \\ \nonumber \leq & |\dot{\eta}^*\bigl(\tilde{\eta}^{(t)}-\widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})\bigr)| \\ \nonumber = & \norm{\Omega_{T_a} \tilde{\eta}^{(t)}}_2\norm{\Omega_{T_a} \dot{\eta}}_2 \\ \label{eq:proj} \leq & \Big(\sqrt{\frac{s_0}{s_1}} + \frac{s_0}{s_1}\Big) (1-|\dot{\eta}^*\tilde{\eta}^{(t)}|^2). \end{align} By \eqref{eq:before_proj} and \eqref{eq:proj}, \begin{align*} & \sqrt{1-|\dot{\eta}^* \eta^{(t)}|} \\ \leq & \sqrt{1-|\dot{\eta}^* \widetilde{\Pi}_{s_1}(\tilde{\eta}^{(t)})|} \\ \leq & \sqrt{1-|\dot{\eta}^* \tilde{\eta}^{(t)}|} \sqrt{1 + \Big(\sqrt{\frac{s_0}{s_1}} + \frac{s_0}{s_1}\Big)(1 + |\dot{\eta}^* \tilde{\eta}^{(t)}|)} \\ \leq & \sqrt{1-|\dot{\eta}^* \tilde{\eta}^{(t)}|} \sqrt{1 + 2\Big(\sqrt{\frac{s_0}{s_1}} + \frac{s_0}{s_1}\Big)} \\ \leq & \rho \sqrt{1 + 2\sqrt{\frac{s_0}{s_1}} + \frac{2s_0}{s_1}} \sqrt{1 - | \dot{\eta}^* \eta^{(t-1)} |} + \sqrt{10}\widetilde{\Delta} \\ \leq & \tilde{\rho} \sqrt{1 - | \dot{\eta}^* \eta^{(t-1)} |} + \sqrt{10}\widetilde{\Delta}. \end{align*} Therefore, $\{|\dot{\eta}^* \eta^{(\tau)}|\}_{\tau=0}^t$ indeed monotonically increases unless $\sqrt{1-|\dot{\eta}^* \eta^{(\tau)}|}$ reaches $\sqrt{10}\widetilde{\Delta}/(1-\tilde{\rho})$ for some $\tau$. The proof by induction is complete. It follows that \[ \sqrt{1-|\dot{\eta}^* \eta^{(t)}|} \leq \tilde{\rho}^t \sqrt{1-|\dot{\eta}^* \eta^{(0)}|} + \frac{\sqrt{10}\widetilde{\Delta}}{1-\tilde{\rho}}, \] or equivalently \[ d(\dot{\eta}, \eta^{(t)}) \leq \tilde{\rho}^t d(\dot{\eta}, \eta^{(0)}) + \frac{2\sqrt{5}\widetilde{\Delta}}{1-\tilde{\rho}}. 
\] \end{IEEEproof} \subsection{Proof of the Guarantee for Algorithm \ref{alg:init}} \begin{IEEEproof}[Proof of Theorem \ref{thm:init}] We first show that, under the conditions in Theorem \ref{thm:init}, the support $T_j$ in Algorithm \ref{alg:init} contains $T_j'\subset \mathrm{supp}(x_{\cdot j})$ in Assumption \ref{ass:X_flat}. To this end, we prove that the norms of the rows of $D^*E$ indexed by $T_j'$ are larger than those outside $\mathrm{supp}(x_{\cdot j})$. For a fixed $j\in [N]$, the $j$-th block of $D^*E$ is indexed by the set $(j-1)m + [m]$. Therefore, the goal is to show that \begin{align*} & \min_{\ell \in T_j'}\norm{d^*_{\cdot ((j-1)m + \ell)}E}_2^2 \\ & > \max_{\ell' \in [m]\backslash \mathrm{supp}(x_{\cdot j})}\norm{d^*_{\cdot((j-1)m + \ell')}E}_2^2, \end{align*} or equivalently, \begin{align*} &\min_{\ell \in T_j'} \sum_{k\in[n]} |\overline{a_{k\ell}}y_{kj}|^2 \\ & > \max_{\ell' \in [m]\backslash \mathrm{supp}(x_{\cdot j})} \sum_{k\in[n]} |\overline{a_{k\ell'}}y_{kj}|^2. \end{align*} Since \[ \mathbb{E} |\overline{a_{k\ell}}y_{kj}|^2 = \frac{1}{n^2} |\lambda_k|^2 (\norm{x_{\cdot j}}_2^2 + |x_{\ell j}|^2) + \frac{1}{n} |w_{kj}|^2, \] it suffices to show that for all $\ell\in T_j'$ and $\ell''\in [m]$, \begin{align} &\frac{1}{n^2} \sum_{k\in[n]} |\lambda_k|^2 |x_{\ell j}|^2 \nonumber \\ & > 2 \left| \sum_{k\in[n]} \left( |\overline{a_{k\ell''}}y_{kj}|^2 - \mathbb{E} |\overline{a_{k\ell''}}y_{kj}|^2\right) \right|. \label{eq:suffice} \end{align} Recall that \[ y_{kj} = \lambda_k a_{k\cdot}^\top x_{\cdot j} + w_{kj}. 
\] By the triangle inequality and Lemma \ref{lem:ineq_square}, for all $j\in[N]$ and $\ell\in[m]$, \begin{align*} & \left| \sum_{k\in[n]} \left( |\overline{a_{k\ell}}y_{kj}|^2 - \mathbb{E} |\overline{a_{k\ell}}y_{kj}|^2\right) \right| \\ & \leq \left| \sum_{k\in[n]} \left( | \lambda_k \overline{a_{k\ell}}a_{k\cdot}^\top x_{\cdot j}|^2 - \mathbb{E} | \lambda_k \overline{a_{k\ell}}a_{k\cdot}^\top x_{\cdot j}|^2\right) \right| \\ & + 2 \left| \sum_{k\in[n]} \mathrm{Re}\left(\lambda_k a_{k\ell} \overline{a_{k\ell}} a_{k\cdot}^\top x_{\cdot j} \overline{w_{kj}} \right) \right| \\ & + \left| \sum_{k\in[n]} \left( | \overline{a_{k\ell}} w_{kj} |^2 - \mathbb{E} | \overline{a_{k\ell}} w_{kj} |^2 \right) \right| \\ & \leq C_6 \left(1+\frac{C_W}{\sqrt{1-\theta}}\right)^2\frac{\norm{x_{\cdot j}}_2^2 \log^3 (nmN)}{n^{3/2}}, \end{align*} with probability at least $1-n^{-c_6}$. By Assumptions \ref{ass:lambda} and \ref{ass:X_flat}, if we plug the above result into \eqref{eq:suffice}, then the following sample complexity is sufficient for Algorithm \ref{alg:init} to correctly identify the subsets $T_j'$ ($j\in[N]$) with probability at least $1-n^{-c_6}$: \[ n^{1/2} > \frac{2C_6}{\omega(1-\delta)} \left(1+\frac{C_W}{\sqrt{1-\theta}}\right)^2 s_0\log^3 (nmN). \] Thus the first half of Theorem \ref{thm:init} is proved. Given that the support $T_j$ covers the large entries indexed by $T_j'$, \begin{align} & \norm{\mathbb{E} \Pi_{T_x} D^* E - \frac{1}{n} x \lambda^\top} \nonumber\\ & = \norm{\frac{1}{n}\Pi_{T_x} x \lambda^\top - \frac{1}{n} x \lambda^\top} \nonumber\\ & \leq \sqrt{\frac{1+\delta}{n} \sum_{j\in [N],\ell'\in[m]\backslash T_j'} |x_{\ell' j}|^2} \nonumber\\ & \leq \sqrt{\frac{(1+\delta)\delta_X}{n}}. 
\label{eq:small_energy} \end{align} We also have \begin{align} & \norm{ \Pi_{T_x} D^* E - \mathbb{E} \Pi_{T_x} D^* E } \nonumber\\ & \leq \norm{ \Omega_{T_x} D^* E_\mathrm{s} - \mathbb{E} \Omega_{T_x} D^* E_\mathrm{s} } + \norm{ \Omega_{T_x} D^* E_\mathrm{n} } \nonumber\\ & \leq \frac{1}{\alpha} (\norm{ \Omega_{T_\eta} B_\mathrm{s} \Omega_{T_\eta}^* - \Omega_{T_\eta} \mathbb{E} B_\mathrm{s} \Omega_{T_\eta}^*} + \norm{ \Omega_{T_\eta} B_\mathrm{n} \Omega_{T_\eta}^* } ) \nonumber\\ & \leq \frac{1}{\sqrt{n}} (\delta_B + \delta_W), \label{eq:use_perturbation} \end{align} where the last inequality follows from Lemmas \ref{lem:Bs} and \ref{lem:Bn}, given that the conditions of Theorem \ref{thm:tpi_alt} are satisfied. By the triangle inequality, and \eqref{eq:small_energy} and \eqref{eq:use_perturbation}, \[ \norm{ \Pi_{T_x} D^* E - \frac{1}{n} x \lambda^\top } \leq \frac{1}{\sqrt{n}} (\delta_B + \delta_W + \sqrt{(1+\delta)\delta_X}), \] where $\delta_B$ can be made arbitrarily small by a sufficiently large $C$ in \eqref{eq:size_sparsity_alt}, $\delta_W$ can be made arbitrarily small by a sufficiently small $C_W$ in Assumption \ref{ass:W}, and the last term can be made arbitrarily small by a sufficiently small $\delta_X$ in Assumption \ref{ass:X_flat}. Therefore, the first left and right singular vectors $u$ and $v$ can become arbitrarily close to $x$ and to $\lambda/\norm{\lambda}_2$ (up to a global phase factor, i.e., a constant of unit modulus), respectively, and $|\dot{\eta}^* \eta^{(0)}|$ approaches \[ \frac{n^{3/2}+\norm{\lambda}_2\norm{\gamma}_2^2}{\sqrt{n^2+\norm{\lambda}_2^2\norm{\gamma}_2^2}\sqrt{n+\norm{\gamma}_2^2}} > 1-2\delta. \] The inequality follows from Assumption \ref{ass:lambda}, i.e., $\sqrt{1-\delta} \leq |\lambda_k| \leq \sqrt{1+\delta}$, and $1/\sqrt{1+\delta} \leq |\gamma_k| = 1 / |\lambda_k| \leq 1/\sqrt{1-\delta}$.
\end{IEEEproof} \section{Numerical Experiments} \label{sec:experiment} In this section, we test the empirical performance of Algorithm \ref{alg:pi} and Algorithm \ref{alg:tpi}. \subsection{Subspace Case: Power Iteration vs. Least Squares} \label{sec:exp_pi} In Algorithm \ref{alg:pi}, we choose $\alpha = \sqrt{n}$, and $\beta = \norm{B}$ (computed using another power iteration on $B$). We compare Algorithm \ref{alg:pi} with the least squares approach in \cite[Section 3.3]{Ling2016}, where $\gamma_1 = 1$ is used to avoid the trivial solution. We generate $A\in\mathbb{C}^{n\times m}$ as a complex Gaussian random matrix, whose entries are drawn independently from $\mathcal{CN}(0,\frac{1}{n})$, i.e., the real and imaginary parts are drawn independently from $\mathcal{N}(0, \frac{1}{2n})$. The unknown gains and phases $\lambda_k$ are generated as follows: \begin{equation} \label{eq:lambda_exp} \lambda_k = e^{\sqrt{-1}\varphi_k} \Big(1+(\sqrt{1+\delta}-1)e^{\sqrt{-1}\varphi_k'}\Big), \quad \forall k\in[n], \end{equation} such that $\lambda_k$ is on a small circle of radius $\sqrt{1+\delta}-1$ centered at a point on the unit circle, and $\varphi_k$ and $\varphi_k'$ are drawn independently from a uniform distribution on $[0,2\pi)$. Figure \ref{fig:phase} visualizes one such synthesized $\lambda_k$ in the complex plane. We set $\delta = 0.1$ in all the numerical experiments. \begin{figure}% \centering \includegraphics[width=0.5\columnwidth]{phase.png}% \caption{Illustration of $\lambda_k$ in the complex plane.}% \label{fig:phase}% \end{figure} The entries of $X\in\mathbb{C}^{m\times N}$ are drawn independently from $\mathcal{CN}(0,\frac{1}{Nm})$, so that the Frobenius norm of $X$ is approximately $1$. In the noisy setting, we generate complex white Gaussian noise $W\in\mathbb{C}^{n\times N}$, whose entries are drawn from $\mathcal{CN}(0,\frac{\sigma_W^2}{Nn})$.
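This synthetic setup is easy to reproduce. The following NumPy sketch generates one problem instance with the parameters of this section; the variable names are ours, and the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N, delta, sigma_w = 128, 16, 16, 0.1, 0.1

def cgauss(shape, var):
    # Complex Gaussian CN(0, var): real and imaginary parts N(0, var/2).
    return (rng.normal(0, np.sqrt(var / 2), shape)
            + 1j * rng.normal(0, np.sqrt(var / 2), shape))

A = cgauss((n, m), 1.0 / n)

# lambda_k on a small circle of radius sqrt(1+delta)-1 centred at a
# point on the unit circle, as in the construction above.
phi = rng.uniform(0, 2 * np.pi, n)
phi2 = rng.uniform(0, 2 * np.pi, n)
lam = np.exp(1j * phi) * (1 + (np.sqrt(1 + delta) - 1) * np.exp(1j * phi2))

X = cgauss((m, N), 1.0 / (N * m))          # Frobenius norm approximately 1
W = cgauss((n, N), sigma_w**2 / (N * n))   # complex white Gaussian noise

Y = np.diag(lam) @ A @ X + W               # the measurements
```

By construction, every $\lambda_k$ generated this way satisfies Assumption \ref{ass:lambda}, since $2-\sqrt{1+\delta} \geq \sqrt{1-\delta}$ for $\delta = 0.1$.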
We define measurement signal-to-noise ratio (MSNR) and recovery signal-to-noise ratio (RSNR) as: \[ \text{MSNR} \coloneqq 20\log_{10} \frac{\norm{\mathrm{diag}(\lambda)AX}_\mathrm{F}}{\norm{W}_\mathrm{F}}, \] \[ \text{RSNR} \coloneqq -10\log_{10}(2-2|\dot{\eta}^*\eta^{(t)}|). \] We test the two approaches at four noise levels: $\sigma_W=0$, $0.1$, $0.2$, and $0.5$, which roughly correspond to MSNR of $\infty$, $20$dB, $14$dB, and $6$dB. At these noise levels, we say the recovery is successful if the RSNR exceeds $30$dB, $20$dB, $14$dB, $6$dB, respectively. The success rates do not change dramatically as functions of these thresholds. In the experiments, we set $n=128$, $N=16$, and $m= 8, 16, 24, \dots, 64$. For each $m$, we repeat the experiments $100$ times and compute the empirical success rates, which are shown in Figure \ref{fig:subspace}. As seen in Figure \subref*{fig:subspace_a}, both power iteration and least squares achieve perfect recovery in the noiseless setting. However, as seen in Figures \subref*{fig:subspace_b} -- \subref*{fig:subspace_d}, power iteration is clearly more robust against noise than least squares, whose performance degrades more severely in the noisy settings. \begin{figure}[htbp]% \centering \subfloat[]{\input{pi_0.tex} \label{fig:subspace_a}} \subfloat[]{\input{pi_1.tex} \label{fig:subspace_b}} \\ \subfloat[]{\input{pi_2.tex} \label{fig:subspace_c}} \subfloat[]{\input{pi_5.tex} \label{fig:subspace_d}} \caption{Subspace case: The empirical success rates of power iteration (blue solid line) and least squares (red dashed line). The $x$-axis represents $m$, and the $y$-axis represents the empirical success rate. (a) -- (d) are the results with $\sigma_W=0$, $0.1$, $0.2$, and $0.5$, respectively.}% \label{fig:subspace}% \end{figure} The empirical phase transitions of power iteration are shown in Figure \ref{fig:pi_pt}. 
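Both figures of merit defined above follow directly from their formulas; a minimal sketch, assuming $\dot{\eta}$ and $\eta^{(t)}$ are stored as unit-norm complex vectors:

```python
import numpy as np

def msnr_db(lam, A, X, W):
    # MSNR = 20 log10( ||diag(lambda) A X||_F / ||W||_F )
    sig = np.linalg.norm(np.diag(lam) @ A @ X, 'fro')
    return 20 * np.log10(sig / np.linalg.norm(W, 'fro'))

def rsnr_db(eta_true, eta_est):
    # RSNR = -10 log10( 2 - 2 |eta_true^* eta_est| ), both inputs unit vectors.
    c = abs(np.vdot(eta_true, eta_est))
    return -10 * np.log10(2 - 2 * c)
```

The modulus of the inner product in the RSNR makes the metric invariant to the global phase ambiguity of the recovery.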
We fix $N=16$ and plot the phase transition with respect to $n$ and $m$ (Figure \subref*{fig:pi_pt_n}); we then fix $n=2m$ and plot the phase transition with respect to $N$ and $m$ (Figure \subref*{fig:pi_pt_N}). Clearly, to achieve successful recovery, $n$ must scale linearly with $m$, but $N$ can be small compared to $m$ and $n$. This confirms the sample complexity in Theorem \ref{thm:pi}, of $n\gtrsim m$ and $N\gtrsim 1$. Careful readers may notice in Figure \subref*{fig:pi_pt_N} that for $N = 5$ the success rates at $m < 16$ are worse than those at $m \geq 16$. This seemingly peculiar phenomenon is caused by the small value of $n = 2m$, which is not large enough for the high-probability guarantees to take effect. \begin{figure}[htbp]% \centering \subfloat[]{\input{pi_pt_n.tex} \label{fig:pi_pt_n}} \subfloat[]{\input{pi_pt_nn.tex} \label{fig:pi_pt_N}} \caption{The empirical phase transition of power iteration. Grayscale represents success rates, where white equals 1, and black equals 0. (a) The $x$-axis represents $m$, and the $y$-axis represents $n$. (b) The $x$-axis represents $m$, and the $y$-axis represents $N$.}% \label{fig:pi_pt}% \end{figure} \subsection{Sparsity Case: Truncated Power Iteration vs. $\ell_1$ Minimization} \label{sec:exp_tpi} In the sparsity case, we use the same setup described in the previous section, except for the signal $X$. The supports of the $s_0$-sparse columns of $X$ are chosen uniformly at random, and the nonzero entries follow $\mathcal{CN}(0,\frac{1}{Ns_0})$. This unstructured sparsity case is more challenging than the joint sparsity case in Theorem \ref{thm:tpi_alt}. In Algorithm \ref{alg:tpi}, we choose $\alpha = \sqrt{n}$, and $\beta = \norm{B}$. In all the experiments, we assume that the sparsity level $s_0$ is known, and set $s_1 = 2s_0$ for convenience. A more sophisticated scheme that decreases $s_1$ as the iteration number increases may lead to better empirical performance \cite{Yuan2013}.
For the experiment, we suppose that the phases $\{\varphi_k\}_{k=1}^n$ in \eqref{eq:lambda_exp} are available, and let \begin{equation} \label{eq:gamma0} \gamma^{(0)} \coloneqq [e^{-\sqrt{-1}\varphi_1},\dots, e^{-\sqrt{-1}\varphi_n}]^\top \end{equation} denote the initial estimate of $\gamma$, which is close to but different from the true $\gamma$, i.e., the entrywise inverse of $\lambda$ in \eqref{eq:lambda_exp}. See Figure \ref{fig:phase} for an illustration of $\lambda_k$, $\gamma_k$, and $\gamma_k^{(0)}$. Then we initialize Algorithm \ref{alg:tpi} with $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \gamma^{(0)\top}]^\top$. We compare Algorithm \ref{alg:tpi} with an $\ell_1$ minimization approach. Wang and Chi \cite{Wang2016} adopted an approach tailored for the case where $A$ is the DFT matrix and $\lambda_k \approx 1$. They use a linear constraint $\sum_{k\in[n]}\gamma_k = n$ to avoid the trivial solution of all zeros. For fair comparison, we revise their approach to accommodate arbitrary $A$ and $\lambda$. The revised approach uses the alternating direction method of multipliers (ADMM) \cite{Boyd2010} to solve the following convex optimization problem: \footnote{In the noisy setting, one could replace the linear constraint $\mathrm{diag}(\gamma) Y = AX$ with an ellipsoid constraint $\norm{\mathrm{diag}(\gamma) Y - AX}_\mathrm{F} \leq \epsilon$. However, the parameter $\epsilon$ needs to be adjusted with noise levels. For fair comparison of robustness to noise, we use the linearly constrained $\ell_1$ minimization in the noisy setting (similar to \cite{Wang2016}).} \begin{align*} \min_{\gamma,X} \quad & \norm{\mathrm{vec}(X)}_1 \\ \text{s.t.} \quad & \mathrm{diag}(\gamma) Y = AX, \\ & \gamma^{(0)*} \gamma = n. \end{align*} Here, $\gamma^{(0)}$ is the initial estimate of $\gamma$ defined in \eqref{eq:gamma0}, and is used as the initialization of our Algorithm \ref{alg:tpi} in this comparison.
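The phase-only initialization $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \gamma^{(0)\top}]^\top$ can be formed as follows (a sketch; `phi` holds whichever phases $\varphi_k$ are available):

```python
import numpy as np

def init_eta(phi, N, m):
    # gamma^(0)_k = e^{-i phi_k}; eta^(0) stacks a zero estimate of
    # vec(X) (length N*m) on top of gamma^(0) (length n).
    gamma0 = np.exp(-1j * np.asarray(phi, dtype=float))
    return np.concatenate([np.zeros(N * m, dtype=complex), gamma0])
```

Overwriting a fraction of `phi` with random values before the call reproduces the degraded initial estimates used later in this section.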
We conduct numerical experiments with the same four noise levels and criterion for successful recovery as in Section \ref{sec:exp_pi}. In the experiments, we set $n=128$, $m=256$, $N=16$, and $s_0 = 8, 16, 24, \dots, 64$. For each $s_0$, we repeat the experiments $100$ times and compute the empirical success rates, which are shown in Figure \ref{fig:sparsity}. In the noiseless case (Figure \subref*{fig:sparsity_a}), $\ell_1$ minimization achieves a slightly higher success rate near the phase transition. However, truncated power iteration is more robust against noise than $\ell_1$ minimization, which breaks down completely at the higher noise levels (Figures \subref*{fig:sparsity_b} -- \subref*{fig:sparsity_d}). Figure \subref*{fig:sparsity_a} clearly shows that truncated power iteration recovers $\eta$ successfully when $n=128$, $N=16$, and $s_0 = 32$. This suggests that truncated power iteration may succeed when $n$ and $N$ are (up to log factors) on the order of $s_0$ and $1$, respectively. However, while the scaling with the number of sensors $n$ agrees with Theorem \ref{thm:tpi_alt}, success with such a small number of snapshots $N$ is not guaranteed by our current theoretical analysis. \begin{figure}[htbp]% \centering \subfloat[]{\input{tpi_0.tex} \label{fig:sparsity_a}} \subfloat[]{\input{tpi_1.tex} \label{fig:sparsity_b}} \\ \subfloat[]{\input{tpi_2.tex} \label{fig:sparsity_c}} \subfloat[]{\input{tpi_5.tex} \label{fig:sparsity_d}} \caption{Sparsity case: The empirical success rates of truncated power iteration (blue solid line) and $\ell_1$ minimization (red dashed line). The $x$-axis represents $s_0$, and the $y$-axis represents the empirical success rate.
(a) -- (d) are the results with $\sigma_W=0$, $0.1$, $0.2$, and $0.5$, respectively.}% \label{fig:sparsity}% \end{figure} Next, we assume that only a subset of the phases $\{\varphi_k\}_{k=1}^n$ is available, and examine to what extent Algorithm \ref{alg:tpi} and $\ell_1$ minimization depend on a good initial estimate of $\gamma$. In the numerical results shown in Figure \ref{fig:wrong_phase}, we consider only the noiseless setting of BGPC with sparsity, and set $s_0 = 4, 8, 12, \dots, 32$. In Figures \subref*{fig:wp_a} and \subref*{fig:wp_b}, we replace $1/2$ and $3/4$ of $\{\varphi_k\}_{k=1}^n$ with random phases, respectively, and use the resulting bad estimate $\gamma^{(0)}$ in Algorithm \ref{alg:tpi} and $\ell_1$ minimization. As seen in Figure \ref{fig:wrong_phase}, truncated power iteration is less dependent on an accurate initial estimate of $\gamma$. \begin{figure}[htbp]% \centering \subfloat[]{\input{tpi_h.tex} \label{fig:wp_a}} \subfloat[]{\input{tpi_q.tex} \label{fig:wp_b}} \caption{Sparsity case: The empirical success rates of truncated power iteration (blue solid line) and $\ell_1$ minimization (red dashed line), with bad initial estimate of the phases. The $x$-axis represents $s_0$, and the $y$-axis represents the empirical success rate. (a) and (b) are the results for which $1/2$ and $3/4$ of $\{\varphi_k\}_{k=1}^n$ are initialized with random phases.}% \label{fig:wrong_phase}% \end{figure} We repeat the above experiments for the joint sparsity case, where we replace $\widetilde{\Pi}_{s_1}$ in Algorithm \ref{alg:tpi} with $\widetilde{\Pi}'_{s_1}$. We also replace the $\ell_1$ norm $\norm{\mathrm{vec}(X)}_1$ in the competing approach with a mixed norm: \[ \norm{X}_{2,1} = \sum_{\ell\in[m]} \Bigl( \sum_{j\in[N]} |x_{\ell j}|^2 \Bigr)^{1/2}, \] which is a well-known convex method for the recovery of jointly sparse signals.
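The mixed norm is computed row by row; a one-function sketch:

```python
import numpy as np

def mixed_norm_21(X):
    # ||X||_{2,1}: the l2 norm of each row, summed over rows.
    return float(np.sum(np.linalg.norm(X, axis=1)))
```

Like the $\ell_1$ norm on $\mathrm{vec}(X)$ in the unstructured case, $\norm{X}_{2,1}$ is convex, but summing the row-wise $\ell_2$ norms penalises the number of nonzero rows rather than the number of nonzero entries, which matches the joint sparsity structure.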
The results for different noise levels and for inaccurate $\gamma^{(0)}$ are shown in Figures \ref{fig:jsparsity} and \ref{fig:js_wrong_phase}, respectively. In the joint sparsity case, truncated power iteration is robust against noise, but seems less robust against errors in the initial phase estimate. We conjecture that the failure of Algorithm \ref{alg:tpi} in the joint sparsity case is due to the restriction of $\widetilde{\Pi}'_{s_1}$. By projecting onto jointly sparse supports, the algorithm is likely to converge prematurely to an incorrect support. When compared to the results in Figures \subref*{fig:jwp_a} and \subref*{fig:jwp_b}, Figures \subref*{fig:jwp_c} and \subref*{fig:jwp_d} show that using $\widetilde{\Pi}_{s_1}$ instead of $\widetilde{\Pi}'_{s_1}$ in the first half of the iterations indeed improves the performance of Algorithm \ref{alg:tpi} in the joint sparsity case. In the rest of the experiments, we use $\widetilde{\Pi}_{s_1}$ during the first half of the iterations in Algorithm \ref{alg:tpi} for the joint sparsity case. \begin{figure}[htbp]% \centering \subfloat[]{\input{js_tpi_0.tex} \label{fig:jsparsity_a}} \subfloat[]{\input{js_tpi_1.tex} \label{fig:jsparsity_b}} \\ \subfloat[]{\input{js_tpi_2.tex} \label{fig:jsparsity_c}} \subfloat[]{\input{js_tpi_5.tex} \label{fig:jsparsity_d}} \caption{Joint sparsity case: The empirical success rates of truncated power iteration (blue solid line) and mixed minimization (red dashed line). The $x$-axis represents $s_0$, and the $y$-axis represents the empirical success rate. 
(a) -- (d) are the results with $\sigma_W=0$, $0.1$, $0.2$, and $0.5$, respectively.}% \label{fig:jsparsity}% \end{figure} \begin{figure}[htbp]% \centering \subfloat[]{\input{js_tpi_h.tex} \label{fig:jwp_a}} \subfloat[]{\input{js_tpi_q.tex} \label{fig:jwp_b}} \\ \subfloat[]{\input{js_tpi_h_2.tex} \label{fig:jwp_c}} \subfloat[]{\input{js_tpi_q_2.tex} \label{fig:jwp_d}} \caption{Joint sparsity case: The empirical success rates of truncated power iteration with $\widetilde{\Pi}'_{s_1}$ (blue solid line) and mixed minimization (red dashed line), with bad initial estimate of the phases. The $x$-axis represents $s_0$, and the $y$-axis represents the empirical success rate. (a) and (b) are the results for which $1/2$ and $3/4$ of $\{\varphi_k\}_{k=1}^n$ are initialized with random phases. In (c) and (d), we repeat the experiments, but use $\widetilde{\Pi}_{s_1}$ instead of $\widetilde{\Pi}'_{s_1}$ in the first half of the iterations.}% \label{fig:js_wrong_phase}% \end{figure} Next, we plot the phase transitions for truncated power iteration. We fix $N=16$ and $m=2n$ and plot the empirical phase transition with respect to $n$ and $s_0$ (sparsity case in Figure \subref*{fig:tpi_pt_n}, and joint sparsity case in Figure \subref*{fig:js_tpi_pt_n}); we then fix $n=4s_0$ and $m=2n$ and plot the empirical phase transition with respect to $N$ and $s_0$ (sparsity case in Figure \subref*{fig:tpi_pt_N}, and joint sparsity case in Figure \subref*{fig:js_tpi_pt_N}). It is seen that, to achieve successful recovery, $n$ must scale linearly with $s_0$, but $N$ can be small compared to $s_0$ and $n$. On the one hand, the scaling law $n\gtrsim s_0$ in Theorem \ref{thm:tpi_alt} is confirmed by Figure \ref{fig:tpi_pt}; on the other hand, $N\gtrsim \sqrt{s_0}$ seems conservative and might be an artifact of our proof techniques. We have yet to come up with a theoretical guarantee that covers the more general sparsity case, or requires a less demanding sample complexity $N\gtrsim 1$. 
In Figures \subref*{fig:tpi_pt_N} and \subref*{fig:js_tpi_pt_N}, the success rates at smaller $s_0$ are lower than those at larger $s_0$, because the number of sensors $n = 4s_0$ is then too small for the high-probability guarantees to take effect. \begin{figure}[htbp]% \centering \subfloat[]{\input{tpi_pt_n.tex} \label{fig:tpi_pt_n}} \subfloat[]{\input{tpi_pt_nn.tex} \label{fig:tpi_pt_N}} \\ \subfloat[]{\input{js_tpi_pt_n.tex} \label{fig:js_tpi_pt_n}} \subfloat[]{\input{js_tpi_pt_nn.tex} \label{fig:js_tpi_pt_N}} \caption{The empirical phase transition of truncated power iteration. Grayscale represents success rates, where white equals 1, and black equals 0. (a) Sparsity case: The $x$-axis represents $s_0$, and the $y$-axis represents $n$. (b) Sparsity case: The $x$-axis represents $s_0$, and the $y$-axis represents $N$. (c) Joint sparsity case: The $x$-axis represents $s_0$, and the $y$-axis represents $n$. (d) Joint sparsity case: The $x$-axis represents $s_0$, and the $y$-axis represents $N$.}% \label{fig:tpi_pt}% \end{figure} \subsection{Sparsity Case: Initialization} \label{sec:exp_init} In this section, we examine the quality of the initialization produced by Algorithm \ref{alg:init} by comparing it with two different initializations: (i) the good initialization $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \gamma^{(0)\top}]^\top$ aided by side information on the phase in Section \ref{sec:exp_tpi}; and (ii) a baseline initialization $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \bm{1}_{n,1}^\top]^\top$. We use the same setting as in Section \ref{sec:exp_tpi}, except that $N=32$. We let $\sigma_W=0.1$, and declare the recovery successful if the RSNR exceeds $20$dB. In the experiment for the joint sparsity case, for the reason mentioned in Section \ref{sec:exp_tpi}, we ignore the joint sparsity structure and estimate the support of different columns of $X$ independently in the initialization and during the first half of the iterations.
Only in the second half of the iterations, we use the projection $\widetilde{\Pi}'_{s_1}$ onto jointly sparse supports. Figure \ref{fig:init} shows that, although the initialization provided by Algorithm \ref{alg:init} is not as good as the accurate initialization with side information, it is far better than the baseline. Figure \ref{fig:pt_init} shows the empirical phase transition with respect to $n$ and $s_0$, when Algorithm \ref{alg:init} is used to initialize truncated power iteration (sparsity case in Figure \subref*{fig:tpi_pt_init_s}, and joint sparsity case in Figure \subref*{fig:tpi_pt_init_js}). The results suggest that when $n$ scales linearly with $s_0$, Algorithm \ref{alg:init} can provide a sufficiently good initialization for truncated power iteration. For example, in \subref*{fig:tpi_pt_init_s}, the success rate is 1 when $n=256$ and $s_0=20$. Therefore, the sample complexity $n\gtrsim s_0^2$ in Theorem \ref{thm:init} could be overly conservative and an artifact of our analysis. \begin{figure}[htbp]% \centering \subfloat[]{\input{tpi_init.tex} \label{fig:init_s}} \subfloat[]{\input{js_tpi_init.tex} \label{fig:init_js}} \caption{The empirical success rates of truncated power iteration with the initialization in Algorithm \ref{alg:init} (blue solid line), with a baseline initialization $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \bm{1}_{n,1}^\top]^\top$ (red dashed line), and with the accurate initialization $\eta^{(0)} = [\bm{0}_{Nm,1}^\top, \gamma^{(0)\top}]^\top$ with side information in Section \ref{sec:exp_tpi} (black dash-dot line). The $x$-axis represents $s_0$, and the $y$-axis represents the empirical success rate. 
(a) is the result for the sparsity case, and (b) is the result for the joint sparsity case.}% \label{fig:init}% \end{figure} \begin{figure}[htbp]% \centering \subfloat[]{\input{tpi_pt_n_init.tex} \label{fig:tpi_pt_init_s}} \subfloat[]{\input{js_tpi_pt_n_init.tex} \label{fig:tpi_pt_init_js}} \caption{The empirical phase transition of truncated power iteration with the initialization in Algorithm \ref{alg:init}. The $x$-axis represents $s_0$, and the $y$-axis represents $n$. (a) is the result for the sparsity case, and (b) is the result for the joint sparsity case.}% \label{fig:pt_init}% \end{figure} \subsection{Application: Inverse Rendering} \label{sec:inverse_render} In this section, we apply the power iteration algorithm to the inverse rendering problem in computational relighting -- given images of an object under different lighting conditions (Figure \subref*{fig:cat_i}), and the surface normals of the object (Figure \subref*{fig:cat_n}), the goal is to recover the albedos (also known as reflection coefficients) of the object surface and the lighting conditions. In this problem, the columns of $Y=\mathrm{diag}(\lambda)AX \in \mathbb{R}^{n\times N}$ represent images under different lighting conditions, which are the products of the unknown albedo map $\lambda \in\mathbb{R}^n$ and the intensity maps of incident light under different conditions $AX$. For Lambertian surfaces, it is reasonable to assume that the intensity of incident light resides in a subspace spanned by the first $9$ spherical harmonics computed from the surface normals \cite{Nguyen2013}, which we denote by the columns of $A \in \mathbb{R}^{n\times 9}$. Then the columns of $X$ are the coordinates of the spherical harmonic expansion, which parameterize the lighting conditions. We can solve for $\lambda$ and $X$ using Algorithm \ref{alg:pi}. Our approach is similar to that of Nguyen et al. \cite{Nguyen2013}, which also formulates inverse rendering as an eigenvector problem. 
Despite the fact that the two approaches solve for the eigenvectors of different matrices, they yield identical solutions in the ideal scenario where the model is exact and the solution is unique. In our experiment, we obtain $N=12$ color images and the surface normals of an object under different lighting conditions,\footnote{The images are downloaded from \emph{https://courses.cs.washington.edu/courses/csep576/05wi/projects/project3/project3.htm} on September 16, 2017. The surface normals are computed using the method described in the same webpage.} and we compute the first $m=9$ spherical harmonics. We apply Algorithm \ref{alg:pi} to each of the three color channels, and the albedo map recovered using 200 power iterations is shown in Figure \subref*{fig:cat_a}. We also compute new images of the object under new lighting conditions (Figure \subref*{fig:cat_new}). \begin{figure}[htbp]% \centering \subfloat[]{\includegraphics[width=0.9\columnwidth]{cat_image.png} \label{fig:cat_i}} \\ \subfloat[]{\includegraphics[width=0.4\columnwidth]{cat_normal.png} \label{fig:cat_n}} \subfloat[]{\includegraphics[width=0.4\columnwidth]{cat_albedo.png} \label{fig:cat_a}} \\ \subfloat[]{\includegraphics[width=0.9\columnwidth]{cat_new.png} \label{fig:cat_new}} \caption{Inverse rendering and relighting. (a) We use 12 images of the object under different lighting conditions. (b) The surface normals. The three dimensions of the normal vectors are represented by the RGB channels of the color image. (c) The recovered albedo map. (d) Computed images of the object under new lighting conditions.}% \label{fig:cat}% \end{figure} \section{Conclusion} \label{sec:conclusion} We formulate the BGPC problem as an eigenvector problem, and propose to solve BGPC with power iteration, and solve BGPC with a sparsity structure with truncated power iteration.
We give theoretical guarantees for the subspace case with a near optimal sample complexity, and for the joint sparsity case with a suboptimal sample complexity. Numerical experiments show that both power iteration and truncated power iteration can recover the unknown gains and phases, and the unknown signals, using a near optimal number of samples. It remains an open problem to obtain theoretical guarantees with optimal sample complexities for truncated power iteration applied to BGPC with sparsity or joint sparsity constraints.
\section{Algorithms} \label{sec:appendixalgorithms} \input{algorithms/fftbased} \input{algorithms/fvcconstructshift.tex} \section{Concluding Remarks} In this paper, we proposed efficient algorithms for the FVC-matching and PVC-matching problems. The FVC-matching problem has been discussed by du Mouza \textit{et~al.}{}~\cite{ref:du2007ppatternqueries} as a generalization of the function matching problem, while the PVC-matching problem is newly introduced in this paper, which can be seen as a generalization of the parameterized pattern matching problem. We have fixed a flaw of the algorithm by du Mouza \textit{et~al.}{}\ for the FVC-matching problem. There can be further variants of matching problems. For example, one may think of a pattern with don't care symbols in addition to variables and constants. This is not interesting when don't care symbols appear only in a pattern in function matching, since don't care symbols can be assumed to be distinct variables. However, when imposing the injection condition on a matching function, don't care symbols play a different role from variables. This generalization was tackled in~\cite{ref:yigarashi2017}. We can consider an even more general problem by allowing texts to have variables, where two strings $P$ and $S$ are said to match if there is a function $\pi$ such that $\hat{\pi}(P)=\hat{\pi}(S)$. This is a special case of the \emph{word equation problem}, where a string instead of a symbol can be substituted, and word equations are very difficult to solve in general. Another interesting restriction of word equations allows different substitutions to be used on the compared strings, i.e., $P$ and $S$ match if there are functions $\pi$ and $\rho$ such that $\hat{\pi}(P)=\hat{\rho}(S)$. These are interesting directions for future work. \subsection{Extended KMP Algorithm} This subsection discusses an algorithm for the FVC-matching problem.
In the matching phase, our extended KMP algorithm compares the pattern and a substring of the text in the same manner as the classical KMP algorithm except that we must maintain a function by which prefixes of the pattern match some substrings of the text. That is, our extended KMP algorithm compares symbols $T[i]$ and $P[k]$ from $i = k = 1$ with the empty function $\pi$. If $P[k]$ is not in the domain $\mathrm{dom}(\hat{\pi})$ of $\hat{\pi}$, we expand $\pi$ by letting $\pi(P[k]) = T[i]$ and increment $i$ and $k$. If $\hat{\pi}(P[k])$ is defined to be $T[i]$, we increment $i$ and $k$. Otherwise, we say that \emph{a mismatch occurs at position $k$ with a function $\pi$}. Note that the mismatch position refers to that of $P$ rather than $T$. When we find a mismatch, we must calculate the appropriate position $j$ of $P$ and function $\pi'$ with which we resume comparison. If instances are variable-free, the position is solely determined by the longest border size of $P[1:k]$ and we have no function. In the case of FVC-matching, the resuming position depends on the function $\pi$ in addition to $k$. \begin{example}\label{ex:exkmp} Let us consider the pattern $P=\mathtt{AABaaCbC}$ where $\Pi = \{{\tt A},{\tt B},{\tt C}\}$ and $\Sigma = \{{\tt a},{\tt b}\}$ in Fig.~\ref{fig:kmpexkmp}. If the concerned substring of the text is $T'=\mathtt{bbbaaabb}$, a mismatch occurs at $k=8$ with a function $\pi$ such that $\pi(\mtt{A})=\pi(\mtt{B})=\mtt{b}$ and $\pi(\mtt{C})=\mtt{a}$. In this case, we can resume comparison with $P[7]$ and $T'[8]$, since we have $\hat{\pi}'(P[1:6])=T'[2:7]$ for $\pi'$ such that $\pi'(\mtt{A})=\pi'(\mtt{C})=\mtt{b}$ and $\pi'(\mtt{B})=\mtt{a}$. On the other hand, for $T''= \mtt{bbaaaabb}$, the first mismatch occurs again at $k=8$ with a function $\rho$ such that $\rho(\mtt{A})=\mtt{b}$ and $\rho(\mtt{B})=\rho(\mtt{C})=\mtt{a}$. 
In this case, one cannot resume comparison with $P[7]$ and $T''[8]$, since there is no $\rho'$ such that $\hat{\rho}'(P[1:6])=T''[2:7]$: indeed, $P[1] = P[2]$ but $T''[2] \neq T''[3]$. We should resume comparison between $P[4]$ and $T''[8]$ with $\rho'$ such that $\rho'(\mtt{A})=\mtt{a}$ and $\rho'(\mtt{B})=\mtt{b}$, for which we have $\hat{\rho}'(P[1:3])=T''[5:7]$. Note that $\rho'(\mtt{C})$ is undefined. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/exkmp} \caption{Examples of possible shifts in the Extended KMP algorithm} \label{fig:kmpexkmp} \end{figure} \end{example} The goal of the preprocessing phase is to prepare a data structure by which one can efficiently compute the \emph{failure function} in the matching phase: \begin{itemize} \item[] \textbf{Input:} the position $k+1$ (where a mismatch occurs) and a function $\pi$ whose domain is $\Pi_{P[1:k]}$, \item[] \textbf{Output:} the largest position $j+1 < k+1$ (at which we will resume comparison) and the function $\pi'$ with domain $\Pi_{P[1:j]}$ such that $\hat{\pi}'(P[1:j]) = \hat{\pi}(P[k-j+1:k])$. \end{itemize} We call such $\pi$ a \emph{preceding function}, $\pi'$ a \emph{succeeding function} and the pair $(\pi,\pi')$ a \emph{$(k,j)$-shifting function pair}. The substrings $P[1:j]$ and $P[k-j+1:k]$ may not form a border of $P[1:k]$, but under preceding and succeeding functions they play the same role as a border plays in the classical KMP algorithm. The succeeding function $\pi'$ is uniquely determined by a preceding function $\pi$ and positions $k,j$. The condition that functions $\pi$ and $\pi'$ form a $(k,j)$-shifting function pair can be expressed using the \emph{$(k,j)$-shifting graph} (on $P$), defined as follows. \begin{definition} Let $\Pi'$ be a copy of $\Pi$ and $P'$ be obtained from $P$ by replacing every variable in $\Pi$ with its copy in $\Pi'$.
For two numbers $k,j$ such that $0 \le j < k \le m$, the \emph{$(k,j)$-shifting graph} $G_{k,j} = (V_{k,j},E_{k,j})$ is defined by \begin{align*} V_{k,j}&=\Sigma_P \cup \Pi_{P[k-j+1:k]} \cup \Pi'_{P'[1:j]}, \\ E_{k,j} &=\{\, (P[k-j+i], P'[i]) \mid 1 \le i \le j < k \text{ and } P[k-j+i] \neq P'[i] \,\} \,. \end{align*} We say that $G_{k,j}$ is \emph{invalid} if there are distinct $p,q \in \Sigma_P$ that belong to the same connected component. Otherwise, it is \emph{valid}. \end{definition} Note that $G_{k,0} = (\Sigma_{P},\emptyset)$ is valid for any $k$. Figure~\ref{fig:shiftinggraph} shows the $(7,6)$-shifting and $(7,3)$-shifting graphs for $P=\mathtt{AABaaCbC}$ in Example~\ref{ex:exkmp}. \begin{figure}[h!] \centering \begin{minipage}[t]{0.36\hsize} \centering \includegraphics[width=\textwidth]{figures/tripartiteex}\\ \ \ \ \small{(a) $(7,6)$-shifting graph} \end{minipage}\quad\quad \begin{minipage}[t]{0.36\hsize} \centering \includegraphics[width=\textwidth]{figures/tripartiteexb}\\ \ \ \ \small{(b) $(7,3)$-shifting graph} \end{minipage} \caption{The $(7,6)$-shifting graph (a) and $(7,3)$-shifting graph (b) on $P = \texttt{AABaaCbC}$, which corresponds to Fig.~\ref{fig:kmpexkmp}(i) and (ii).} \label{fig:shiftinggraph} \end{figure} Using functions $\pi$ and $\pi'$ whose domains are $\textrm{dom}(\pi) = \Pi_{P[k-j+1:k]}$ and $\textrm{dom}(\pi') = \Pi_{P[1:j]}$, respectively, let us label each node $p \in \Sigma$, $x \in \Pi$, $x' \in \Pi'$ of $G_{k,j}$ with $p,\pi(x),\pi'(x)$, respectively. Then $(\pi,\pi')$ is a $(k,j)$-shifting pair if and only if every node in each connected component has the same label. Obviously $G_{k,j}$ is valid if and only if it admits a $(k,j)$-shifting function pair. 
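Since validity of $G_{k,j}$ only asks whether two distinct constants share a connected component, it can be decided with a single union--find pass over the at most $j$ edges. A sketch (our code, not the paper's data structure; positions are 1-based as in the definition, and primed copies of variables are modelled by tagging):

```python
def shifting_graph_valid(P, k, j, variables):
    # Nodes: constants (shared), variables of P[k-j+1:k], and primed
    # variables of P'[1:j].  G_{k,j} is valid iff no connected
    # component contains two distinct constants.
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def node(c, primed):
        # constants are never primed; variables from the P'[1:j] side are
        return (c, "'" if (c in variables and primed) else "")

    for i in range(1, j + 1):
        a = node(P[k - j + i - 1], False)   # P[k-j+i]
        b = node(P[i - 1], True)            # P'[i]
        if a != b:                          # edge only between distinct nodes
            parent[find(a)] = find(b)

    roots = {}
    for c in set(P) - set(variables):       # constants occurring in P
        r = find(node(c, False))
        if r in roots and roots[r] != c:
            return False                    # two distinct constants connected
        roots[r] = c
    return True
```

On $P = \mathtt{AABaaCbC}$ from Example~\ref{ex:exkmp}, both $G_{7,6}$ and $G_{7,3}$ are valid, in agreement with Fig.~\ref{fig:shiftinggraph}.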
Thus, the resuming position should be $j+1$ for a mismatch at $k+1$ with a preceding function $\pi$ if and only if $j$ is the largest such that $G_{k,j}$ is valid and \begin{itemize} \item[(a)] if $x \in \Pi$ and $p \in \Sigma$ are connected in $G_{k,j}$, then ${\pi}(x)=p$, \item[(b)] if $x \in \Pi$ and $y \in \Pi$ are connected in $G_{k,j}$, then ${\pi}(x)=\pi(y)$. \end{itemize} In that case, we have $\hat{\pi}'(P[1:j])=\hat{\pi}(P[k-j+1:k])$ for $\pi'$ determined by \begin{itemize} \item[(c)] $\pi'(x)=\hat{\pi}(y)$ if $x' \in \Pi'_{P[1:j]}$ and $y \in \Pi \cup \Sigma$ are connected. \end{itemize} We call the conditions (a) and (b) the \emph{$(k,j)$-preconditions} and (c) the \emph{$(k,j)$-postcondition}. Note that every element in $\Pi'_{P'[1:j]}$ is connected to some element in $\Pi_{P[k-j+1:k]} \cup \Sigma_P$ in $G_{k,j}$ and thus $\pi'$ is well-defined. \begin{remark} The algorithm by du Mouza \textit{et~al.}{}~\cite{ref:du2007ppatternqueries} does not treat the condition induced by two nodes of distance more than 1 correctly. For example, let us consider the pattern $P=\mathtt{AABaaCbC}$ in Example~\ref{ex:exkmp}. For a text $T = \mtt{bbaaaabbb}$, the first mismatch occurs at $k=8$, where $\hat{\rho}(P[1:7]) = \mtt{bbaaaab}$ for $\rho(\mtt{A})=\mtt{b}$ and $\rho(\mtt{B})=\rho(\mtt{C})=\mtt{a}$. To have $(\rho,\rho')$ a $(7,6)$-shifting pair for some $\rho'$, it must hold $\rho(\mathtt{A})=\rho(\mathtt{B})$. That is, one can resume the comparison at position 6 only when the preceding function assigns the same constant to $\mathtt{A}$ and $\mathtt{B}$. The preceding function $\rho$ in this case does not satisfy this constraint. However, their algorithm performs this shift and reports that $P$ matches $T$ at position 2. \end{remark} To efficiently compute the failure function, our algorithm constructs another data structure instead of shifting graphs. 
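For a single candidate $j$, the $(k,j)$-preconditions and postcondition can be checked in one $O(j)$ scan without materialising the graph: a constant labels itself, an unprimed variable $x$ carries the label $\pi(x)$, and the labels of primed variables are propagated across the edges, failing on any conflict. A sketch (our code; $\pi$ is a dict over the variables of $P[1:k]$, and the returned dict is the succeeding function $\pi'$, or None if the shift is inadmissible):

```python
def shift_pair(P, k, j, pi, variables):
    # Scan the aligned positions of P[k-j+1:k] and P'[1:j]; any label
    # conflict violates precondition (a) or (b), and the collected
    # labels of the primed variables realise postcondition (c).
    succ = {}                                  # labels of primed variables
    for i in range(1, j + 1):
        a, b = P[k - j + i - 1], P[i - 1]      # P[k-j+i] and P'[i]
        la = pi[a] if a in variables else a    # known label of the left node
        if b in variables:                     # b is primed: record its label
            if succ.setdefault(b, la) != la:
                return None                    # conflicting labels
        elif la != b:                          # mismatch against a constant
            return None
    return succ                                # the succeeding function pi'
```

On Example~\ref{ex:exkmp}, this accepts the shift to $j=6$ for $\pi$ and returns $\pi'$ with $\pi'(\mtt{A})=\pi'(\mtt{C})=\mtt{b}$, $\pi'(\mtt{B})=\mtt{a}$, but rejects $j=6$ for $\rho$ (the case mishandled by the algorithm of du Mouza \textit{et~al.}{}) and instead succeeds at $j=3$ with $\rho'(\mtt{A})=\mtt{a}$, $\rho'(\mtt{B})=\mtt{b}$.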
The \emph{shifting condition table} is a collection of functions $A_{k,j}: \Pi_{P[k-j+1:k]} \to \Pi_{P[k-j+1:k]} \cup \Sigma_{P}$ and $A'_{k,j}:\Pi'_{P'[1:j]} \to \Pi_{P[k-j+1:k]} \cup \Sigma_{P}$ for $1 \le j < k \le m$ such that $G_{k,j}$ is valid. The functions $A_{k,j}$ can be used to quickly check the $(k,j)$-preconditions (a) and (b), while $A'_{k,j}$ serves the $(k,j)$-postcondition (c). Those functions satisfy the following properties: for each connected component $\alpha \subseteq V_{k,j}$, there is a representative $u_\alpha \in \alpha$ such that \begin{itemize} \item if $\alpha \cap \Sigma \neq \emptyset$, then $u_\alpha \in \Sigma$, \item if $\alpha \cap \Sigma = \emptyset$, then $u_\alpha \in \Pi$, \item for all $x \in \alpha \cap \Pi$, $A_{k,j}(x)=u_\alpha$, \item for all $x' \in \alpha \cap \Pi'$, $A'_{k,j}(x') \in \alpha \cap (\Pi \cup \Sigma)$. \end{itemize} Note that $G_{k-1,j-1}$ is a subgraph of $G_{k,j}$, where the difference is at most two nodes and one edge. Hence, we can compute $A_{k,j}$ and $A'_{k,j}$ in $O(|\Pi|)$ time from $A_{k-1,j-1}$ and $A'_{k-1,j-1}$ by maintaining the inverse $U_{k,j}$ of $A_{k,j}$ whose domain is restricted to $\Pi$, i.e., $U_{k,j}(x) = \{\, y \in \Pi_{P[k-j+1:k]} \mid A_{k,j}(y) = x \,\}$ for $x \in \Pi_{P[k-j+1:k]}$. Each set $U_{k,j}(x)$ can be implemented as a linked list. The updating time $O(|\Pi|)$ is due to the size of $U_{k,j}$. Moreover, when computing $A_{k,j}$ and $A'_{k,j}$, we can verify the validity of $G_{k,j}$. Pseudocode for constructing the shifting condition table is shown as Algorithms~\ref{alg:fvctable} and \ref{alg:addconditionexf} in Appendix~\ref{sec:appendixalgorithms}. \begin{lemma} The shifting condition table can be calculated in $O(|\Pi|m^2)$ time. \end{lemma} Suppose that we have a mismatch at position $k+1$ with a preceding function $\pi$.
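The tables $A_{k,j}$ and $A'_{k,j}$ can be sketched as follows. For clarity this sketch recomputes the components from scratch per $(k,j)$ rather than updating $A_{k-1,j-1}$ incrementally in $O(|\Pi|)$ time as the paper does; the uppercase-variable convention and all names are illustrative assumptions.

```python
# Sketch: derive A_{k,j} (representatives of Pi nodes) and A'_{k,j}
# (anchors of Pi' nodes) from the components of the (k,j)-shifting
# graph of a 1-indexed pattern P. Uppercase = variable, lowercase =
# constant; constants appear as plain strings, Pi / Pi' nodes as tuples.
from collections import defaultdict

def components(P, k, j):
    """Connected components of G_{k,j}."""
    def node(c, primed):
        return c if c.islower() else (("P'", c) if primed else ("P", c))
    adj, nodes = defaultdict(set), set()
    for i in range(1, j + 1):
        u, v = P[k - j + i - 1], P[i - 1]      # P[k-j+i] and P'[i]
        if u == v and u.islower():
            continue                            # equal constants: no edge
        nu, nv = node(u, False), node(v, True)
        adj[nu].add(nv); adj[nv].add(nu)
        nodes |= {nu, nv}
    comps, seen = [], set()
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:                            # depth-first traversal
            u = stack.pop()
            if u not in comp:
                comp.add(u); seen.add(u); stack.extend(adj[u])
        comps.append(comp)
    return comps

def condition_tables(P, k, j):
    """Return (A, A') for a valid G_{k,j}, or None if it is invalid."""
    A, Ap = {}, {}
    for comp in components(P, k, j):
        consts = {u for u in comp if isinstance(u, str)}
        if len(consts) > 1:
            return None                         # two constants connected
        pis = [u[1] for u in comp if not isinstance(u, str) and u[0] == "P"]
        # prefer a Sigma representative; otherwise take a Pi node
        rep = next(iter(consts)) if consts else pis[0]
        for x in pis:
            A[x] = rep
        for u in comp:
            if not isinstance(u, str) and u[0] == "P'":
                Ap[u[1]] = rep
    return A, Ap
```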
By using the shifting condition table, a naive algorithm can compute the failure function in $O(k|\Pi|^2)$ time by finding the largest $j$ such that $\pi$ satisfies the $(k,j)$-preconditions and then computing a function $\pi'$ satisfying the $(k,j)$-postcondition, with which we resume comparison at $j+1$. The calculation of $\pi'$ can be done in $O(|\Pi|)$ time just by referring to the array $A'_{k,j}$. We next discuss how to reduce the computational cost of finding $j$ by preparing a more elaborate data structure in the preprocessing phase. Du Mouza et al.~\cite{ref:du2007ppatternqueries} introduced a bitmap data structure concerning the precondition (a), which can be constructed using $A_{k,j}$ in the shifting condition table as follows. Here we extend the domain of $A_{k,j}$ to $\Pi$ by defining $A_{k,j}(x)=x$ for each $x \in \Pi \setminus \Pi_{P[k-j+1:k]}$. \begin{definition}[\cite{ref:du2007ppatternqueries}] For every $0 \le j < k \le m$, $x \in \Pi$ and $p \in \Sigma_P$, we define \begin{eqnarray*} r_{x, p}^{k}[j] &=& \begin{cases} 0 & (\text{$G_{k,j}$ is invalid or $A_{k,j}(x) \in \Sigma \setminus \{p\}$}) \\ 1 & (\text{otherwise}) \end{cases} \end{eqnarray*} \end{definition} \begin{lemma}[\cite{ref:du2007ppatternqueries}]\label{lem:precona} A preceding function $\pi$ satisfies the $(k,j)$-precondition (a) if and only if $ \bigwedge_{x \in \Pi}{r_{x, \pi(x)}^{k}[j]} = 1$. \end{lemma} We define a data structure corresponding to the $(k,j)$-precondition (b) as follows. \begin{definition} For every $0 \le j < k \le m$ and $x,y \in \Pi$, define \begin{eqnarray*} s_{x, y}^{k}[j] &=& \begin{cases} 0 & (\text{$G_{k,j}$ is invalid or $A_{k,j}(x) = y$}) \\ 1 & (\text{otherwise}) \end{cases} \end{eqnarray*} \end{definition} \begin{lemma}\label{lem:preconb} A preceding function $\pi$ satisfies the $(k,j)$-precondition (b) if and only if $ \bigwedge_{\substack{x,y \in \Pi \\ \pi(x) \neq \pi(y)}} s_{x, y}^{k}[j] = 1$\,.
\end{lemma} Therefore, we should resume comparison at $j+1$ for the largest $j$ that satisfies the conditions of Lemmas~\ref{lem:precona} and~\ref{lem:preconb}. To calculate such $j$ quickly, the preprocessing phase computes the following bit sequences. For every $x \in \Pi$, $p \in \Sigma_P$ and $1 \le k \le m$, let $r_{x, p}^{k}$ be the concatenation of $r_{x, p}^{k}[j]$ in ascending order of $j$: \begin{eqnarray*} r_{x, p}^{k} = r_{x, p}^{k}[0] r_{x, p}^{k}[1] \cdots r_{x, p}^{k}[k-1] \,,\end{eqnarray*} and for every $x,y \in \Pi$ and $1 \le k \le m$, let \begin{eqnarray*} s_{x, y}^{k} = s_{x, y}^{k}[0] s_{x, y}^{k}[1] \cdots s_{x, y}^{k}[k-1] \,.\end{eqnarray*} Calculating $r_{x, p}^{k}$ and $s_{x, y}^{k}$ for all $x,y \in \Pi$, $p \in \Sigma_P$ and $1 \le k \le m$ in the preprocessing phase requires $O(|\Pi|(|\Sigma_{P}|+|\Pi|)m^2)$ time in total. When a mismatch occurs at $k+1$ with a preceding function $\pi$, we compute \[ J = \bigwedge_{x \in \Pi} r_{x, \pi(x)}^{k} \wedge \bigwedge_{\substack{x,y \in \Pi \\ \pi(x) \neq \pi(y)}} s_{x, y}^{k} \,.\] Then the desired $j$ is the right-most position of 1 in $J$. This operation can be done in $O(\lceil \frac{m}{w} \rceil|\Pi|^2)$ time, where $w$ denotes the word size of a machine. That is, with $O(|\Pi|(|\Sigma_{P}|+|\Pi|)m^2)$ preprocessing time, the failure function can be computed in $O(|\Pi|^2 \lceil \frac{m}{w} \rceil)$ time. For most applications, we can assume that $m$ is smaller than the word size $w$, i.e. $\lceil \frac{m}{w} \rceil = 1$. \begin{theorem} The FVC-matching{} problem can be solved in $O(|\Pi|^2 \lceil \frac{m}{w} \rceil n)$ time with $O(|\Pi|(|\Sigma_{P}|+|\Pi|)m^2)$ preprocessing time. \end{theorem} \section{Convolution-based Methods} In this section, we show that the FVC-matching{} problem can be solved in $O(|\Sigma_{P}|n\log{m})$ time by reducing the problem to the function matching problem and the wildcard matching problem, for which several efficient algorithms are known. 
The PVC-matching{} problem can also be solved using the same reduction technique with a slight modification. For strings $P$ of length $m$ over $\Sigma \cup \Pi$ and $T$ of length $n$ over $\Sigma$, we define $\Pi' = \Pi_{P} \cup \Sigma_{T}$. Let $P_{\!\!\dontcare} \in (\Sigma \cup \{{\tt \ast}\})^*$ be the string obtained from $P$ by replacing all variable symbols in $\Pi$ with the \emph{don't care symbol} ${\tt \ast}$. Let $P_{\!\Pi} \in \Pi'^*$ be the string obtained from $P$ by removing all constant symbols in $\Sigma$. Moreover, for $1 \le i < n-m$, let $\Tvar{i}$ be the string defined by $\Tvar{i} = \isvar{1} \isvar{2} \cdots \isvar{m}$, where $\isvar{j} = T[i+j-1]$ if $P[j] \in \Pi$ and $\isvar{j} = \epsilon$ otherwise. Note that the lengths of $\Tvar{i}$ and $P_{\!\Pi}$ are both equal to the total number of variable occurrences in $P$. \begin{example} For $T = {\tt aabcbc}$ and $P={{\tt A} {\tt a} {\tt B} {\tt B} {\tt b}}$ over $\Pi = \{{\tt A},{\tt B}\}$ and $\Sigma = \{{\tt a},{\tt b},{\tt c}\}$, we have $P_{\!\!\dontcare} = {\tt \ast} {\tt a} {\tt \ast} {\tt \ast} {\tt b}$, $P_{\!\Pi}={\tt A}{\tt B}\ttB$, $\Tvar1 = {\tt a} {\tt b} {\tt c}$, and $\Tvar2= {\tt a} {\tt c} {\tt b}$. \end{example} For both the FVC-matching{} and PVC-matching{} problems, the following lemma is useful for developing algorithms. \begin{lemma} \label{obs:fftreduction} $P$~\text{FVC-matches{} (resp.\ PVC-matches{})} $T[i:i+m-1]$ if and only if \begin{enumerate}[topsep=0pt] \item $P_{\!\!\dontcare}$ wildcard matches $T[i:i+m-1]$, and \item $P_{\!\Pi}$ function matches (resp.\ parameterized matches) $\Tvar{i}$. \end{enumerate} \end{lemma} Lemma~\ref{obs:fftreduction} shows that the FVC-matching{} problem is reducible to a combination of the wildcard matching problem and the function matching problem. The wildcard matching problem~(a.k.a.\ pattern matching with don't care symbols)~\cite{ref:fischer1974string} is one of the fundamental problems in pattern matching.
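A minimal sketch of the three reduction strings, assuming uppercase letters are variables and lowercase letters are constants (an illustrative encoding, not the paper's):

```python
# Sketch of the reduction strings P_*, P_Pi and T_var^i.

def split_pattern(P):
    p_star = "".join("*" if c.isupper() else c for c in P)  # wildcard part
    p_pi = "".join(c for c in P if c.isupper())             # variable part
    return p_star, p_pi

def t_var(T, P, i):
    """T_var^i: text symbols aligned with the variable positions of P
    (i is 1-indexed, as in the text)."""
    return "".join(T[i + j - 2] for j in range(1, len(P) + 1)
                   if P[j - 1].isupper())
```

On the example above ($T = \mathtt{aabcbc}$, $P = \mathtt{AaBBb}$) this reproduces $P_{\!\!\dontcare} = \mathtt{*a**b}$, $P_{\!\Pi} = \mathtt{ABB}$, $\Tvar{1} = \mathtt{abc}$ and $\Tvar{2} = \mathtt{acb}$.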
There are many algorithms for solving the wildcard matching problem. Fischer \textit{et~al.}~\cite{ref:fischer1974string} gave an algorithm for (a generalization of) this problem, which runs in $O({|\Sigma|}n\log{m})$ time. Cole and Hariharan~\cite{ref:cole2002verifying} improved it to $O(n\log{m})$ time by using convolution. On the other hand, Pinter~\cite{ref:pinter1985efficient} gave an $O(n + m + \alpha)$-time algorithm, where $\alpha$ is the total number of occurrences of the maximal consecutive constant substrings of the pattern in the text. This algorithm uses Aho-Corasick algorithm instead of convolution. Iliopoulos and Rahman~\cite{ref:iliopoulos2007pattern} proposed an algorithm which utilizes suffix arrays for text. The algorithm preprocesses a text in $O(n)$ time and runs in $O(m+\alpha)$ time. However, Lemma~\ref{obs:fftreduction} does \emph{not} imply the existence of a \emph{single} string $T'$ such that $P$ FVC-matches $T[i:i+m-1]$ if and only if $P_{\!\!\dontcare}$ wildcard matches $T[i:i+m-1]$ and $P_{\!\Pi}$ function matches $T'[i:i+m-1]$. A naive application of Lemma~\ref{obs:fftreduction} to compute $\Tvar{i}$ explicitly for each $i$ requires $O(mn)$ time in total. We will present an algorithm to check whether $P_{\!\Pi}$ function matches (parameterized matches) $\Tvar{i}$ for all $1 \le i < n-m$ in $O(\log{|\Sigma|}\,n\log{m})$ time in total. Without loss of generality, we assume that $\Sigma$ and $\Pi$ are disjoint finite sets of positive integers in this section, and for integers $a$ and $b$, the notation $a \cdot b$ represents the multiplication of $a$ and $b$ but not the concatenation. \begin{definition}\label{def:convolution} For integer arrays $A$ of length $n$ and $B$ of length $m$, we define an integer array $R$ by \( R[j] = \sum_{i=1}^{m}{A[i+j-1]\cdot B[i]} \) for $1 \le j \le n-m+1$. We denote $R$ as $A\otimes B$. 
\end{definition} In a computational model with word size $O(\log{m})$, the discrete convolution can be computed in $O(n\log{n})$ time by using the Fast Fourier Transform (FFT)~\cite{gormen1990introduction}. The array $R$ defined in Definition~\ref{def:convolution} can also be computed within the same time complexity by simply reversing the array $B$. Amir \textit{et~al.}{}~\cite{ref:amir2006function} proved the next lemma for function matching. \begin{lemma}[\cite{ref:amir2006function}]\label{lem:amirlemma} For any natural numbers $a_{1},\cdots,a_{k}$, the equation \\ $k\!\cdot\!\sum_{i = 1}^{k}{(a_i)^2} = ( \sum_{i=1}^{k}{a_i} )^2$ holds if and only if $a_i = a_j \mbox{ for any } 1 \le i, j \le k$. \end{lemma} Let ${\textit{\textbf{T}}}$ be the string of length $n$ such that ${\textit{\textbf{T}}}[i] = (T[i])^2$ for every $1 \leq i \leq n$. For a variable $x \in \Pi_{P}$, let $c_{x}$ denote the number of occurrences of $x$ in $P$, and let $P_x$ be the string of length $m$ such that $P_x[j] = 1$ if $P[j] = x$ and $P_x[j] = 0$ otherwise, for every $1 \le j \le m$. By Lemma~\ref{lem:amirlemma}, we can prove the following lemma. \begin{lemma} \label{lemma:uniqueSymbol} All the symbols (values) in $\Tvar{i}$ at every position $j$ satisfying $P_{\!\Pi}[j] = x$ are the same if and only if the equation $c_{x} \!\cdot\! (({\textit{\textbf{T}}} \otimes P_x)[i]) = ((T \otimes P_x)[i])^2$ holds. \end{lemma} Thus, $P_{\!\Pi}$ function matches $\Tvar{i}$ if and only if the equation in Lemma~\ref{lemma:uniqueSymbol} holds for all $x \in \Pi_{P}$. Both the convolutions ${\textit{\textbf{T}}} \otimes P_x$ and $T \otimes P_x$ can be calculated in $O(n\log{m})$ time by simply dividing $T$ into $2\times\frac{n}{2m}$ overlapping substrings of length $2m$. For the parameterized matching problem, we additionally have to check that the values ${(T \otimes P_x)[i]}/c_{x}$ are pairwise distinct over all $x \in \Pi_{P}$.
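The check of Lemma~\ref{lemma:uniqueSymbol} can be sketched as follows. Symbols are mapped to integers via their code points, and a direct $O(nm)$ correlation stands in for the FFT-based convolution; all names and the uppercase-variable convention are illustrative assumptions.

```python
# Sketch of the convolution-based function-matching check: for each
# variable x, all text symbols aligned with x's occurrences are equal
# iff  c_x * (T2 (x) P_x)[i] == ((T (x) P_x)[i])**2 .

def correlate(A, B):
    """R[i] = sum_j A[i+j] * B[j], 0-indexed (Definition of (x))."""
    n, m = len(A), len(B)
    return [sum(A[i + j] * B[j] for j in range(m)) for i in range(n - m + 1)]

def fvc_function_matches(T, P):
    """0-indexed positions i where P_Pi function matches T_var^i.
    Constants of P are ignored here (they are handled by the separate
    wildcard-matching check); uppercase letters are variables."""
    t = [ord(c) for c in T]
    t2 = [v * v for v in t]
    hits = set(range(len(T) - len(P) + 1))
    for x in {c for c in P if c.isupper()}:
        px = [1 if c == x else 0 for c in P]    # indicator string P_x
        cx = sum(px)                            # number of occurrences c_x
        conv1, conv2 = correlate(t, px), correlate(t2, px)
        hits = {i for i in hits if cx * conv2[i] == conv1[i] ** 2}
    return hits
```

For the parameterized variant one would additionally verify that the recovered values $(T \otimes P_x)[i]/c_x$ are pairwise distinct across the variables.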
Pseudocode for solving the PVC-matching{} problem using convolution is shown as Algorithm~\ref{alg:fftbased} in Appendix~\ref{sec:appendixalgorithms}. \begin{theorem} The FVC-matching{} problem and PVC-matching{} problem can be solved in $O(|\Sigma_{P}|\,n\log{m})$ time. \end{theorem} \section{Introduction} The \emph{parameterized pattern matching} problem was proposed by Baker~\cite{ref:baker1996parameterized} about a quarter of a century ago. Problem instances are two strings called a pattern and a text, which are sequences of two types of symbols called constants and variables. The problem is to find all occurrences of substrings of a given text that a given pattern matches by substituting a variable of the text for each variable of the pattern, under the important constraint that the substitution must be an injective map. She presented an algorithm for this problem that runs in $O(n\log{n})$ time using \emph{parameterized suffix trees}, where $n$ is the length of the text. By removing the injective constraint from the parameterized pattern matching problem, Amir \textit{et~al.}{}~\cite{ref:amir2006function} proposed the \emph{function matching} problem, where the same symbol may be substituted for different variables. Another, though inessential, difference between parameterized pattern matching and function matching lies in the alphabets. The function matching problem is defined to be constant-free in the sense that patterns and texts are strings over variables. However, this simplification is inessential, since it is known that the problem with variables and constants is linear-time reducible to the constant-free case~\cite{ref:amir1994alphabet}. This reduction technique works for parameterized pattern matching as well. The deterministic algorithm of Amir \textit{et~al.}{} solves the function matching problem in $O(|\Pi|n\log{m})$ time, where $n$ and $m$ are the lengths of the text and pattern, respectively, and $|\Pi|$ is the number of different symbols in the pattern.
After that, Amir and Nor~\cite{ref:amir2007generalized} introduced the \emph{generalized function matching} problem, where one can substitute a string of arbitrary length for a variable. In addition, both a pattern and a text may contain ``don't care'' symbols, which are supposed to match arbitrary strings. The parameterized pattern matching problem and its extensions have attracted great interest not only from the pattern matching community~\cite{ref:mendivelso2015parameterized} but also from the database community. Du Mouza \textit{et~al.}{}~\cite{ref:du2007ppatternqueries} proposed a variant of the function matching problem, where texts consist solely of constants and a substitution maps variables to constants, not necessarily injectively. Let us call their problem \emph{function matching with variables-to-constants mapping, FVC-matching} in short.\footnote{They called the problem \emph{parameterized pattern queries}. However, to avoid misunderstanding the problem to have the injective constraint, we refrain from using the original name in this paper.} The function matching problem is linear-time reducible to this problem by simply regarding the variables in a text as constants. Therefore, this problem can be seen as a generalization of the function matching problem. Unfortunately, as we will discuss in this paper, their algorithm is in error. In this paper, we introduce a new variant of the problem by du Mouza \textit{et~al.}{} with the injective constraint, which we call \emph{parameterized pattern matching with variables-to-constants mapping (PVC-matching)}. For each of the FVC-matching and PVC-matching problems, we propose two kinds of algorithms\footnote{Source codes for those algorithms are available at \\\texttt{https://github.com/igarashi/matchingwithvcmap}.}: a \emph{convolution-based method} and an \emph{extended KMP-based method}.
The convolution-based methods and extended KMP-based methods are inspired by the algorithm of Amir \textit{et~al.}{}~\cite{ref:amir2006function} for the function matching problem and the one by du Mouza \textit{et~al.}{}~\cite{ref:du2007ppatternqueries} for the FVC-matching problem, respectively. As a result, we fix the flaw of the algorithm by du Mouza \textit{et~al.}{} The convolution-based methods for both problems run in $O(|\Sigma_{P}|n\log{m})$ time, where $\Sigma_P$ is the set of constant symbols that occur in the pattern $P$. Our KMP-based methods solve the PVC-matching and FVC-matching problems with $O(|\Pi||\Sigma_{P}|m^2)$ and $O(|\Pi|(|\Sigma_{P}|+|\Pi|)m^2)$ preprocessing time and $O(|\Pi| \lceil \frac{m}{w} \rceil n)$ and $O(|\Pi|^2 \lceil \frac{m}{w} \rceil n)$ query time, respectively, where $\Pi$ is the set of variables and $w$ is the word size of a machine (Table~\ref{fig:tcomplexity}). \input{fig/complexity.tex} \section{KMP-based Methods} Du Mouza \textit{et~al.}{} proposed a KMP-based algorithm for the FVC-matching{} problem, which, however, is in error. In this section, we propose a correction of their algorithm, which runs in $O(|\Pi|^2 \lceil \frac{m}{w} \rceil n)$ query time with $O(|\Pi|(|\Sigma_{P}|+|\Pi|)m^2)$ preprocessing time, where $w$ denotes the word size of a machine. This algorithm will then be modified so that it solves the PVC-matching{} problem in $O(|\Pi| \lceil \frac{m}{w} \rceil n)$ query time with $O(|\Pi||\Sigma_{P}|m^2)$ preprocessing time. The KMP algorithm~\cite{ref:knuth1977fast} solves the standard pattern matching problem in $O(n)$ time with $O(m)$ preprocessing time. We say that a string $Y$ is a \emph{border} of $X$ if $Y$ is simultaneously a prefix and a suffix of $X$. A border $Y$ is \emph{nontrivial} if $Y$ is not $X$ itself.
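The classical KMP border array can be computed in $O(m)$ time; a standard sketch (of the exact-matching preprocessing only, not the extended variant of this paper):

```python
# Standard computation of the border array B for the exact-matching
# KMP algorithm: B[k] = length of the longest nontrivial border of
# the prefix P[1:k] (1-indexed), with B[0] = B[1] = 0.

def border_array(P):
    m = len(P)
    B = [0] * (m + 1)
    b = 0                                 # length of current border
    for k in range(2, m + 1):
        while b > 0 and P[k - 1] != P[b]:
            b = B[b]                      # fall back to a shorter border
        if P[k - 1] == P[b]:
            b += 1
        B[k] = b
    return B
```

On a mismatch at pattern position $k$, the comparison resumes at $k' = B[k-1]+1$, as described in the text.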
For the preprocessing of the KMP algorithm, we calculate the longest nontrivial border $b_{k}$ for each prefix $P[1:k]$ of the pattern $P$, and store the lengths as the \emph{border array} $B[k] = |b_{k}|$ for each $0 \le k \le m$. Note that $b_0 = b_1 = \epsilon$. In the matching phase, the KMP algorithm compares symbols $T[i]$ and $P[k]$ from $i = k = 1$. We increment $i$ and $k$ if $T[i] = P[k]$. Otherwise we reset the index for $P$ to $k' = B[k-1]+1$ and resume comparison from $T[i]$ and $P[k']$. \input{docs/exkmp.tex} \input{docs/paraexkmp.tex} \subsection{Extended KMP Algorithm for PVC-match{}} In this section, we consider the PVC-matching{} problem. We redefine the \emph{(mis)match} and \emph{failure function} in the same manner as described in the previous section, except that all the functions are restricted to be injective. We define $G_{k,j}$ exactly in the same manner as in the previous subsection. However, the condition represented by that graph must be strengthened in accordance with the injection constraint on matching functions. We say that $G_{k,j}$ is \emph{injectively valid} if for each $\Delta \in \{\Sigma , \Pi,\Pi'\}$, any distinct nodes from $\Delta$ are disconnected. Otherwise, it is \emph{injectively invalid}. There is a $(k,j)$-shifting injection pair if and only if $G_{k,j}$ is injectively valid. For $P=\mtt{AABaaCbC}$ in Example~\ref{ex:exkmp} (see Fig.~\ref{fig:shiftinggraph}), the $(7,6)$-shifting graph $G_{7,6}$ is valid but injectively invalid, since $\mtt{A}$ and $\mtt{B}$ are connected. On the other hand, $G_{7,3}$ is injectively valid. In PVC-matching, the condition for an injection pair $(\pi,\pi')$ to be $(k,j)$-shifting is described using the graph labeling by $(\pi,\pi')$ as follows: \begin{itemize} \item two nodes are assigned the same label if and only if they are connected.
\end{itemize} Under the assumption that $G_{k,j}$ is injectively valid, the $(k,j)$-precondition on a preceding function $\pi$ is given as \begin{itemize} \item[(a)] if $x \in \Pi$ and $p \in \Sigma$ are connected, then ${\pi}(x)=p$, \item[(b')] if $x \in \Pi$ and $x' \in \Pi'$ are connected and $y' \in \Pi' \setminus \{x'\}$ and $p \in \Sigma$ are connected, then ${\pi}(x)\neq p$. \end{itemize} Since each connected component of an injectively valid shifting graph $G_{k,j}$ has at most 3 nodes, it is cheap to compute the function $F_{k,j}:V \to 2^{V_{k,j}}$ such that $F_{k,j}(u) = \{\, v \in V_{k,j} \mid \text{$u$ and $v$ are connected in $G_{k,j}$}\,\}$. Note that $F_{k,j}(u) = \emptyset$ if $u \notin \Pi_{P[k-j+1:k]}$. Using $P[k],P[j]$, and $F_{k-1,j-1}$, one can decide whether $G_{k,j}$ is injectively valid and can compute $F_{k,j}$ (if $G_{k,j}$ is injectively valid) in constant time. Suppose that we have a preceding function $\pi$ at position $k$. By using the function $F_{k,j}$, a naive algorithm can compute the failure function in $O(k|\Pi|)$ time. We define a bitmap $t_{x, p}^{k}[j]$ to check if $\pi$ satisfies preconditions (a) and (b'). \begin{definition} For every $0 \le j < k \le m$, $x \in \Pi$ and $p \in \Sigma_P$, we define \begin{eqnarray*} t_{x, p}^{k}[j] &=& \begin{cases} 0 & (\text{$G_{k,j}$ is injectively invalid or }F_{k,j}(x) \cap \Sigma \nsubseteq \{p\} \\ & \text{ or } |F_{k,j}(x) \cap F_{k,j}(p) \cap \Pi'| = 2 ) \\ 1 & (\text{otherwise}) \end{cases} \end{eqnarray*} \end{definition} \begin{lemma} The preceding function $\pi$ satisfies the $(k,j)$-preconditions (a) and (b') if and only if $ \bigwedge_{x \in \Pi}{t_{x, \pi(x)}^{k}[j]} = 1$. \end{lemma} In the preprocessing phase, we calculate \begin{eqnarray*} t_{x, p}^{k} = t_{x, p}^{k}[0] t_{x, p}^{k}[1] \cdots t_{x, p}^{k}[k-1] \end{eqnarray*} for all $x \in \Pi$, $p \in \Sigma_P$ and $1 \le k \le m$, which requires $O(|\Pi||\Sigma_P|m^2)$ time. 
When a mismatch occurs at $k+1$ with a function $\pi$, we compute \[ J = \bigwedge_{x \in \Pi} t_{x, \pi(x)}^{k} \,\] where the desired $j$ is the right-most position of 1 in $J$. We resume comparison at $j+1$. The calculation of the failure function can be done in $O(|\Pi|\lceil \frac{m}{w} \rceil)$ time, where $w$ denotes the word size of a machine. \begin{theorem} The PVC-matching{} problem can be solved in $O(|\Pi|\lceil \frac{m}{w} \rceil n)$ time with $O(|\Pi||\Sigma_{P}|m^2)$ preprocessing time. \end{theorem} \section{Preliminaries} For any set $Z$, the cardinality of $Z$ is denoted by $|Z|$. Let $\Sigma$ be an alphabet. We denote by $\Sigma^*$ the set of strings over $\Sigma$. The empty string is denoted by $\epsilon$. The concatenation of two strings $X,Y \in \Sigma^*$ is denoted by $X Y$. For a string $X$, the length of $X = X[1] X[2] \cdots X[n]$ is denoted by $|X| = n$. The substring of $X$ beginning at $i$ and ending at $j$ is denoted by $X[i:j] = X[i] X[i+1] \cdots X[j-1] X[j]$. Any substrings of the form $X[1:j]$ and $X[i:n]$ are called a \emph{prefix} and a \emph{suffix} of $X$. For any number $ k $, we define $X[k:k-1]=\epsilon$. The set of symbols from a subset $\Delta$ of $\Sigma$ occurring in $X$ is denoted by $\Delta_X = \{\, X[i] \in \Delta \mid 1 \le i \le n \,\}$. This paper is concerned with matching problems, where strings consist of two kinds of symbols, called \emph{constants} and \emph{variables}. Throughout this paper, the sets of constants and variables are denoted by $\Sigma$ and $\Pi$, respectively. Variables are supposed to be replaced by another symbol, while constants are not. 
\begin{definition}\label{def:funcapplication} For a function $\pi: \Pi \to (\Sigma \cup \Pi)$, we extend it to $\hat{\pi}: (\Pi \cup \Sigma)^* \to (\Pi \cup \Sigma)^*$ by \begin{eqnarray*} \piapply{\pi}(X) = \hat{\pi}(X[1]) \hat{\pi}(X[2]) \cdots \hat{\pi}(X[n]) , \text{where\space} \piapply{\pi}(X[i]) = \begin{cases} \pi(X[i]) & (X[i] \in \Pi) \\ X[i] & {\rm (otherwise)} \end{cases} \end{eqnarray*} \end{definition} \section*{Appendix} \input{docs/appendix} \end{document}
\section{Introduction}\label{sec:intro} Every day and everywhere we are surrounded by objects. Objects augment our environment and serve as entities in space that provide information about its state. They are essential for interpreting the state of the environment and the situation through their presence, pose, composition, etc. One may say that objects discretize our environment and form building blocks that \emph{describe} it. Visual perception of objects is essential for a plethora of tasks in service and industrial robotics: robots are supposed to assemble parts on assembly lines, unload shipping containers, maintain stock shelves in supermarkets, guide visitors in museums, reason about objects in big data applications like cloud robotics, etc.~\cite{JonschkowskiEHM16,7553531,winkler16shopping,CloudRoboticsSurvey-IEEETASE15}. Consequently, robots are constantly confronted with object-related tasks like detection, recognition or categorization of objects as well as estimating or tracking their pose. In particular, robots are increasingly applied in manipulation, monitoring or surveying tasks where they face a significant amount of unknown objects, making object perception tasks more and more challenging. Therefore a perception system is required that is flexible and scalable enough to classify a large set of individual, even unknown, object instances into a significantly smaller set of groups, in which the instances within a group share commonalities such as shape appearance; such a group can be denoted as a category which underlies a shape concept. Besides object detection, where potential object candidates are localized in a scene~\cite{richtsfeld2014learning,MuellerBirkIcra2016}, the goal of instance recognition is to learn a particular model of a very specific known object instance (e.g. Heinz Tomato 57 Varieties Ketchup, 397\,g, plastic bottle), whereas in categorization the goal is to learn a generic model of a \emph{bottle}.
Therefore the challenging task in categorization, in contrast to recognition, is to learn a generic model from instance appearances that belong to the same \emph{type} or category, based on similarities or commonalities~\cite{Sloutsky2010}. In other words, in our example case of a \emph{bottle}, the goal is to learn the essence of what makes objects appear as a \emph{bottle}. Consequently, information about individual instance specifics is generally of negligible relevance in categorization. One can say that \emph{recognition} is the generalization task of identifying \emph{known} objects from \emph{unknown} viewpoints, whereas \emph{categorization} is the generalization task of classifying \emph{unknown} objects into \emph{known} categories, which also entails the classification of \emph{unknown} objects from \emph{unknown} viewpoints~\cite{Palmeri2004a}. Note that categorization of objects also bears the risk of uncertainty due to the absence of an explicit object model, in contrast to instance recognition, where an explicit model of the object to be recognized is given beforehand. Therefore the major goal in the categorization task is robustness towards inter-category and intra-category variability, i.e. the extraction of category-specific characteristics while considering the diversity of instance appearances within each category. Given raw 3D point cloud data, reasoning about object semantics such as a shape category requires an abstraction process: the data is processed towards an abstract level such that clusters of points are detected as single entities that can be labeled as part of a category (e.g., \emph{box}). One can identify three subproblems: \textbf{1) Robust extraction of regions} that abstract semantically meaningful segments in the scene from real-world (noisy and raw) sensor data.
So-called over-segmentation techniques~\cite{local_Comaniciu2002,Papon13CVPR} are often applied to segment a scene into uniformly sized (fine-grained) segments. A subsequent goal is to segment distinctive and homogeneous regions (e.g., Super Patch Segments~\cite{MuellerBirkIcra2016}), which ideally represent components or parts of objects and can be used as building blocks for object reasoning purposes. \textbf{2) Detection and localization of object candidates} in scenes, which reflect clusters or groups of regions. The objective is to detect groups of neighboring regions which comply with generic-object appearance characteristics or patterns~\cite{MuellerBirkIcra2016,Frintrop2010,richtsfeld2014learning}. Such appearance patterns can be derived from theories of cognitive science, e.g. saliency analysis~\cite{Rahtu2010}, or from rule-based approaches such as the \emph{minima rule}~\cite{Hoffman1997}. The latter infers objects based on the assumption that parts of objects predominantly appear in a convex alignment. By considering such geometric or texture features in the decision process, even unknown primitive-shaped objects like boxes or cylinders can be extracted in cluttered scenes~\cite{ContextSemanticLabelling-IJRR13,local_mueller2013a,richtsfeld2014learning}. A robust detection of more complex-shaped objects requires more sophisticated analyses of object appearance (e.g. so-called \emph{objectness}~\cite{Alexe:2012:MOI:2377349.2377551} or analyses based on compositions of regions~\cite{MuellerBirkIcra2016}) in order to identify appearance patterns of shape part compositions and the relationships among them, which can represent complex-shaped objects. \textbf{3) Classification of object candidates} into specific object categories, which is the focus of this work (see Fig.~\ref{fig:recognition_eval}), with two main shape-related aspects as further motivated in Sec.~\ref{sec:rw}: i) the surface description and ii) the representation of shape structure.
\begin{figure*}[t] \centering \includegraphics[width=0.99\linewidth]{demo_combined5_1_2_bg_scene_label_comp} \caption{Example categorization of two unstructured scenes A and B. Object candidates (randomly colored) are segmented using our previous work~\cite{MuellerBirkIcra2016}. The shape category labels of the classification results according to the work presented here are colored as in Fig.~\ref{fig:vw_dict_distri}. } \label{fig:recognition_eval} \end{figure*} \section{Motivation and Related Work}\label{sec:rw} Robust reasoning about the shape of an object based on single-shot point cloud data has to deal with several challenges~\cite{JonschkowskiEHM16} like sensor noise, object occlusions, variations of object appearances from different viewpoints, the quality of the object segmentations provided by an object candidate detector, etc. Under these challenging conditions, shape analysis with the goal of learning categories relies on a robust i) \emph{description} and ii) \emph{representation} of objects \cite{Biasotti:2008:DSG:1391729.1391731,dicarlo:tics_2007}. Concerning i), an initial low-level abstraction of point cloud surfaces of objects can be accomplished by descriptors (e.g., Fast Point Feature Histograms (FPFH)~\cite{5152473}), which encode the point cloud characteristics in the form of \emph{description vectors}. As shown by concepts like Bag-of-Words~\cite{DBLP:conf/iccv/JurieT05}, an abstraction to a symbolic representation of such vectors is beneficial for perception tasks. Therein, a crucial initial step is the quantization of the description space into a set of partitions, which are described as (visual) words of a dictionary or codebook. A word can be interpreted as an \emph{abstract symbol that represents a description vector}, which is placed at an approximately optimal position in description space -- identified by unsupervised classification techniques like \emph{k}-means or more advanced techniques~\cite{Jain:1999:DCR:331499.331504}.
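As an illustration of this dictionary-building step, a minimal Lloyd's $k$-means over toy descriptor vectors can serve as a sketch; the data, dimensionality and deterministic initialization below are illustrative assumptions, not the pipeline used in this work.

```python
# Minimal Lloyd's k-means sketch for building a visual dictionary:
# descriptor vectors (e.g. FPFH histograms) are quantized into k visual
# words, each word being a centroid in description space.

def kmeans(points, k, iters=20):
    # deterministic toy initialization: first k points as initial words
    centers = [list(p) for p in points[:k]]

    def nearest(p):
        # index of the closest visual word (squared Euclidean distance)
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, centers[c])))

    labels = [0] * len(points)
    for _ in range(iters):
        labels = [nearest(p) for p in points]          # assignment step
        for c in range(k):                             # update step
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                dim = len(members[0])
                centers[c] = [sum(p[d] for p in members) / len(members)
                              for d in range(dim)]
    return centers, labels
```

After training, a new descriptor is mapped to its visual word by the same nearest-centroid lookup, turning raw description vectors into the symbolic tokens discussed above.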
The granularity of the dictionary is a major parameter that corresponds to the variation of the object appearance. Inspired by~\cite{6942984}, the impact of this parameter can be alleviated with a hierarchical quantization which encodes surface characteristics in a coarse-to-fine manner -- see Sec.~\ref{sec:dict}. Concerning ii), the methodology of our work is inspired by theories on object recognition from cognitive science and psychology, which suggest a hierarchical~\cite{Riesenhuber:1999,Leonardis2011,DiCarlo2012} and component-based~\cite{ObjectRecognition_Biederman1987,CERELLA19801,Kirkpatrick-steger98effectsof} representation of object information. These hierarchical approaches combine the benefits of so-called \emph{local} and \emph{global-based approaches} as follows. Purely local classification approaches represent objects as compositions of features like segments or key points to encode object appearances. The actual constellation of these features is analyzed to infer patterns which are predominant for certain objects. Local-based approaches are successfully used in object recognition and also in categorization tasks. Many variations exist, which analyze not only the occurrence of feature constellations but also texture and geometry relationships among features~\cite{6942984,ContextSemanticLabelling-IJRR13,Leibe04combinedobject,Prasad11a,7139358}. Given a constellation model, the inference is often based on the analysis of local evidence, i.e., evidence within a constrained spatial range w.r.t.~the object (using e.g. Markov Networks~\cite{citeulike:8742196,10.1109/TPAMI.1984.4767596}). As a consequence, the global shape aspects of objects are insufficiently reflected in local methods. This leads to difficulties in distinguishing complex structures which are only apparent on a global scale. Global-based approaches hence represent and analyze objects as single entities, encoding their structural surface properties.
Accordingly, global approaches (e.g., template-based approaches) can handle complex structures, but partial object observations, e.g., caused by occlusions, may lead to distortions regarding the encoding of the structural properties. Sophisticated mechanisms~\cite{6751365} have to be applied to enhance the robustness under these circumstances. As a consequence, partial absence of features due to occlusions or even deformations can be more effectively dealt with using local-based approaches. Existing work of both approaches has in common that the applied descriptors are typically handcrafted, domain-dependent, and fine-tuned -- with the goal of transforming the point cloud space into a discrete lower-dimensional space, which often results in a loss of information. Consequently, sophisticated classification procedures are required to compensate for the loss and to identify distinctive patterns. Another concept -- artificial neural network-based Deep Learning -- fuses the description and classification process. In a hierarchical fashion, layers learn local statistical evidences (e.g. from RGB images) which are composed to more abstract evidences on higher layers~\cite{zhang2015fgs-struct,7487310}. Recently, so-called Geometric Deep Learning~\cite{DBLP:journals/spm/BronsteinBLSV17,DBLP:conf/cvpr/MontiBMRSB17} specifically focuses on learning geometry in non-Euclidean space from manifolds or graphs. While these deep neural networks are successful and do not require handcrafted descriptors, they suffer from the demand for large amounts of training data and expensive parameterization costs~\cite{7487310,DBLP:journals/spm/BronsteinBLSV17}. Another related research field focuses on compositional hierarchies~\cite{DBLP:conf/nips/Utans93,FidlerChapter09,DBLP:conf/iccv/OzayAWL15} in which geometric entities like edges or contours are hierarchically composed into unions of those.
Inspired by Bag-of-Words, composition and template-based models, and hierarchical abstraction methodology, the following \textbf{contributions} are presented in this article: \textbf{i)} A hierarchical quantization of the \emph{description space} (Sec.~\ref{sec:dict}) is introduced to alleviate quantization effects like over-fitting and under-fitting. Therein, visual words are learned, which describe surfaces in a coarse-to-fine manner -- from surface primitives to fine-grained individual object appearances. This hierarchical abstraction of raw sensor data facilitates the recognition of constellation patterns on a \emph{symbolic} level. \textbf{ii)} W.r.t.~the \emph{shape space}, a new shape constellation model is proposed (Sec.~\ref{sec:ch}) that hierarchically decomposes object shapes based on their surface characteristics. Shape decompositions are \emph{symbolically} expressed in \textbf{i)} and gradually encoded in a local-to-global bottom-up manner -- from single part appearances over part compositions to objects represented as single entities. This representation captures facets, which we denote as shape motifs, across multiple levels; these motifs are exploited as evidences for classification. \textbf{iii)} Our approach is data-driven; it is inherently capable of evolving continuously by integrating new shape information. The drawn shape category inference is non-invasive and based on single shots captured from noisy 2.5D point clouds. \section{Object Instance Representation}\label{sec:refine} The \emph{correspondence problem} between detected parts and previously observed parts that are associated with a certain meaning, such as a category, is a major challenge when dealing with noisy data. Therefore, a robust detection of parts is required, i.e., parts must be repeatably and stably detectable so that they can be used as building blocks which constitute objects.
For instance, given a set of various \emph{cans} from different perspectives in 2.5D, the goal is to extract similar parts from all \emph{cans}, e.g., one planar and one cylindrical part. Due to the repetitive appearance of planar and cylindrical parts in certain constellations, patterns can be learned, recognized and associated to the shape category \emph{can}. This inference has to be robust to noisy sensor data as well as partial and occluded object observations. The object detection process (see subproblems \textbf{1)} and \textbf{2)} in Sec.~\ref{sec:intro}) which segments object candidates often does not require this repetitive detection of parts. However to contribute to a robust segmentation that alleviates the correspondence problem, the (over-)~segmented candidate can be refined in order to facilitate a confident shape reasoning for categorization purposes. A graph representation is chosen for further analysis of the object topology and of its characteristics, i.e., a graph $g^{os}\mathrm{=}(V^{os},E^{os})$ represents the neighborhood of segments in object $o$ -- see Table~\ref{tab:nocl:inst_rep}. \begin{table}[tb] \centering \small \caption{Nomenclature -- object instance representation} \label{tab:nocl:inst_rep} \begin{tabular}{|ll|} \hline $P^o$ & Point cloud of object $o$ \\ $S^o\mathrm{=}\{s_1,s_2, ...\}$ & Point cloud segments of $P^o$\\ $g^{os}\mathrm{=}(V^{os},E^{os})$& Neighborhood graph of $S^o$ (see Fig.~\ref{fig:ref_can5}): \\ &vertices: $V^{os}\mathrm{=}\{v_1,v_2, ...\}$ \\ &\quad\quad\quad\quad with $v_{i}\mathrm{=}\langle s_i\in S^o\rangle$, \\ &edges: $E^{os}\mathrm{=}\{e_1,e_2, ...\}$ \\ &\quad\quad\quad with $\text{if } v_i \ue v_j$ (is adjacent) : $e_{k}\mathrm{=}\{v_i,v_j\}$\\ \hline \end{tabular} \end{table} Each vertex $v\in V^{os}$ represents an extracted segment. Spatially neighboring segments $v_i,v_j\in V^{os}$ are connected with an edge $e_k\in E^{os}$. 
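To make the representation concrete, the construction of $g^{os}$ can be sketched in a few lines of Python. The minimum point-to-point distance test used here as adjacency criterion is an assumption for illustration; in practice, segment adjacency is provided by the object candidate detector:

```python
import numpy as np

def build_neighborhood_graph(segments, threshold=0.01):
    """Build the neighborhood graph g^os = (V, E) over point cloud segments.

    segments: list of (N_i, 3) numpy arrays, one per extracted segment.
    Two segments are considered adjacent (connected by an edge) if their
    minimum point-to-point distance falls below `threshold` (hypothetical
    adjacency criterion; the actual test is left to the segmenter).
    """
    V = list(range(len(segments)))
    E = []
    for i in V:
        for j in V:
            if i < j:
                # pairwise distances between all points of the two segments
                d = np.linalg.norm(
                    segments[i][:, None, :] - segments[j][None, :, :], axis=-1)
                if d.min() < threshold:
                    E.append((i, j))
    return V, E
```

Each vertex index stands for one segment of $S^o$; the returned edge list corresponds to $E^{os}$.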
In Alg.~\ref{alg:seg_refinement}, the proposed refinement procedure is shown. % \begin{algorithm} \small \caption{\small Segment Refinement} \label{alg:seg_refinement} \begin{algorithmic}[1] \floatname{algorithm}{Procedure} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Object instance graph $g^{os}=(V^{os},E^{os})$ (e.g. as given in Fig.~\ref{fig:ref_can1}) and merging threshold $\theta$~(e.g. $\theta=0.3$) \REPEAT \STATE $is\_merged$ $\gets$ $false$ \STATE Create list $\Gamma$ of sorted edges in $E^{os}$ w.r.t. segment size in ascending order. Note that, given both connected vertices of an edge, the vertex with the smaller segment size is selected as reference for sorting. \FORALL{$e_k \in \Gamma$ \AND $is\_merged = false$} \STATE Compute mean surface normal $\mu_i$, $\mu_j$ in border region of segment $v_i$ and $v_j$ connected to edge $e_k\in\Gamma$ \STATE $\sigma$ $\gets$ compute bounded cosine similarity of $\mu_i$ and $\mu_j$ \IF {$\sigma$ $<$ $\theta$} \STATE Merge segments and update $g^{os}$ \STATE $is\_merged$ $\gets$ $true$ \ENDIF \ENDFOR \UNTIL{$is\_merged = false$} % \ENSURE Refined $g^{os}$ (e.g., result in Fig.~\ref{fig:ref_can5}, if instance in Fig.~\ref{fig:ref_can1} was given as input) \end{algorithmic} \end{algorithm} Considering that the signal-to-noise ratio (between segments and noise) is low for small-sized segments, our objective is to minimize the number of small segments by merging them with neighboring larger segments while also considering topological aspects such as surface similarities based on surface normals. In this merging process small segments are prioritized to facilitate an eventual extraction of segments (see Fig.~\ref{fig:refinement_samples}) which can be semantically meaningful and function as building blocks for further analyses. 
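The core merge loop of Alg.~\ref{alg:seg_refinement} can be sketched as follows. Two simplifications are assumed for illustration: mean normals are taken over whole segments instead of only the border regions, and the bounded cosine similarity is interpreted as $\sigma\mathrm{=}(1-\cos)/2\in[0,1]$, so that $\sigma<\theta$ merges segments with similar surface orientation:

```python
import numpy as np

def refine_segments(segments, normals, edges, theta=0.3):
    """Sketch of the segment refinement of Alg. 1.

    segments: dict {vertex id: set of point indices}
    normals:  dict {vertex id: mean unit surface normal of the segment}
              (simplification: Alg. 1 uses only the border region normals)
    edges:    set of frozenset({i, j}) adjacency edges of g^os
    """
    merged = True
    while merged:                     # REPEAT ... UNTIL no merge happened
        merged = False
        # sort edges by the size of their smaller incident segment
        order = sorted(edges, key=lambda e: min(len(segments[v]) for v in e))
        for e in order:
            i, j = sorted(e, key=lambda v: len(segments[v]))
            # bounded cosine (dis)similarity in [0, 1]; assumption, see above
            sigma = (1.0 - float(np.dot(normals[i], normals[j]))) / 2.0
            if sigma < theta:
                # merge the smaller segment i into the larger segment j
                segments[j] |= segments.pop(i)
                n = normals[j] + normals.pop(i)
                normals[j] = n / np.linalg.norm(n)
                # relabel i -> j in the remaining edges, drop self-loops
                edges = {frozenset({j if v == i else v for v in e2})
                         for e2 in edges if e2 != e}
                edges = {e2 for e2 in edges if len(e2) == 2}
                merged = True
                break
    return segments, edges
```

Prioritizing small segments in the sorted edge list mirrors the low signal-to-noise argument above: noisy fragments are absorbed first, leaving larger, semantically meaningful building blocks.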
\begin{figure}[tb] \centering \subfigure[$i\mathrm{=}1$]{\label{fig:ref_can1}\includegraphics[width=0.12\linewidth]{can_iter1}} \subfigure[$i\mathrm{=}2$]{\label{fig:ref_can2}\includegraphics[width=0.12\linewidth]{can_iter2}} \subfigure[$i\mathrm{=}3$]{\label{fig:ref_can3}\includegraphics[width=0.12\linewidth]{can_iter3}} \subfigure[$i\mathrm{=}4$]{\label{fig:ref_can4}\includegraphics[width=0.12\linewidth]{can_iter4}} \subfigure[$i\mathrm{=}5$]{\label{fig:ref_can5}\includegraphics[width=0.12\linewidth]{can_iter5}} \caption{Segment refinement over 5 iterations ($i$) of an over-segmented $can$.} \label{fig:refinement_samples} \end{figure} \section{Hierarchical Description of Object Segments}\label{sec:dict} Given a refined object segmentation and the corresponding graph, point cloud segments are projected to a symbol space. Therein, a segment, represented as a vertex of an object instance graph, is labeled with a symbol. The process can also be interpreted as a quantization of point cloud data to a constrained set of symbols (a.k.a.~visual words) of a dictionary. It facilitates reasoning about shape categories on the basis of the analysis of symbol patterns which can be associated with particular shape appearances that may form categories (see Sec.~\ref{sec:ch}). For the dictionary generation process, surface-structural properties of segments are initially described on the basis of the local pose-invariant descriptor FPFH\footnote{We use the well-known FPFH descriptor as a baseline to illustrate the effectiveness of the proposed shape representation approach.}\cite{5152473}, which relies on the analysis of angle differences among surface normals of 3D points. In order to obtain a global description of an entire segment, a mean histogram is typically created over all local histograms of the point cloud~\cite{5152473}. This mean histogram forms the \emph{description vector} of a segment point cloud.
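A minimal sketch of forming this description vector as the normalized mean over the local per-point histograms; the histograms themselves would come from an FPFH implementation (e.g., PCL), which is not reproduced here:

```python
import numpy as np

def segment_description(local_histograms):
    """Form the global description vector of a segment as the mean over
    its local per-point histograms (e.g., 33-bin FPFH signatures).

    local_histograms: (N, B) array-like, one B-bin histogram per point.
    Returns a single normalized B-bin mean histogram.
    """
    h = np.asarray(local_histograms, dtype=float)
    mean = h.mean(axis=0)
    s = mean.sum()
    return mean / s if s > 0 else mean
```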
The quantization procedure is crucial for the assignment of a visual word to a description vector. Therein, a description vector is assigned to the word with the shortest distance to it under the $L^2$-\textit{norm}. The objective is to assign similar description vectors to the same visual word. However, a set of words that is too small is not able to express information variations due to under-fitting of the description space, whereas a set of words that is too large over-fits the space, and hence small description variations can lead to assignments to different words. In order to reduce these quantization effects, the feature space is decomposed here in a hierarchical top-down manner. Similar to divisive clustering methods~\cite{Jain:1999:DCR:331499.331504}, a set of training description vectors is initially clustered into two child clusters by applying $k$-means, i.e., $k\mathrm{=}2$. The description vectors that are assigned to the child clusters are in turn each clustered into two for the next level. Each cluster center represents a visual word and is assigned to the current level, which we denote as \emph{description level}. As a result, a tree-like structure is obtained that leads to a dictionary $\mathcal{D}\mathrm{=}\{d_1, d_2,...,d_n\}$ of $n$ description levels, in which each $d_f$ consists of a set of $2^f$ words ($f$ indicates the description level in $\mathcal{D}$) -- see Table \ref{tab:nocl:dict}; an illustration is shown in Fig.~\ref{fig:dict_illustration}.
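The hierarchical quantization and the word assignment $\omega_f$ can be sketched as follows; a minimal $k\mathrm{=}2$ $k$-means is used as a stand-in, and the function names are illustrative only:

```python
import numpy as np

def two_means(X, iters=20, seed=0):
    """Minimal k-means with k=2 on description vectors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center (L2 norm)
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

def build_dictionary(X, n_levels):
    """Top-down hierarchical quantization: each cluster is recursively
    split into two, yielding 2^f words on description level f."""
    D = []
    clusters = [X]
    for f in range(1, n_levels + 1):
        words, next_clusters = [], []
        for C in clusters:
            if len(C) >= 2:
                centers, labels = two_means(C)
                words.extend(centers)
                next_clusters.extend([C[labels == 0], C[labels == 1]])
            else:  # degenerate cluster: duplicate its single word
                words.extend([C[0], C[0]])
                next_clusters.extend([C, C])
        D.append(np.array(words))
        clusters = next_clusters
    return D  # D[f-1] holds the 2^f words of level d_f

def assign_word(D, f, p):
    """omega_f: index of the nearest word on level d_f for description p."""
    words = D[f - 1]
    return int(np.argmin(np.linalg.norm(words - p, axis=1)))
```

In practice, more advanced clustering techniques~\cite{Jain:1999:DCR:331499.331504} may replace the plain 2-means split without changing the tree-like structure of $\mathcal{D}$.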
\begin{table}[tb] \centering \small \caption{Nomenclature -- Hierarchical Dictionary} \label{tab:nocl:dict} \begin{tabular}{|ll|} \hline $\mathcal{D}\mathrm{=}\{d_1,..., d_n\}$ & dictionary with $n$ description levels, $d_i\mathrm{=}\{w_1,..., w_{2^i}\}$,\\ &see Fig.~\ref{fig:dict_illustration} \\ $\kappa(v)\mathrm{=}p$ & point cloud description (FPFH) $p$ of vertex $v\in V^{os}$\\ $\omega_f(\kappa(v))\mathrm{=}w$ & visual word assignment $w \in d_f$ to\\ &point cloud segment of $v\in V^{os}$\\ \hline \end{tabular} \end{table} \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth]{dictionary_sketch3} \caption{An example hierarchical dictionary $\mathcal{D}$ in which the first three \emph{description levels}~$\{d_1, d_2, d_3,...\}$ are depicted. For illustration, each visual word is depicted as a circle containing a colored polygon.} \label{fig:dict_illustration} \end{figure} Similar to our previous work~\cite{6942984}, this quantization of the description space allows distinguishing surface-structural properties in a coarse-to-fine manner. Words on lower levels reflect primitive surfaces such as planar and curved shapes, whereas words on higher levels gradually distinguish individual object facets. \section{Shape Motif Hierarchy}\label{sec:ch} This section deals with the analysis of object segment constellations for shape categorization. The Shape Motif Hierarchy approach is proposed, which hierarchically decomposes shape into different granularity levels. The constellations of decompositions are encoded in a bottom-up manner: from fine-grained segments to groups of segments, which gradually cover object parts up to entire objects. The objective is to exploit these hierarchical decompositions of an object candidate to detect different topological aspects that function as evidences for certain categories.
Given an object instance $o$ represented as a graph $g^o$, each vertex in $g^o$ corresponds to a segment in $o$; subsequently, the vertex is labeled with a word $w \in d_f$ ($d_f \in \mathcal{D}$, $f$ indicates the description level in $\mathcal{D}$) according to the corresponding appearance of the segment (see Sec. \ref{sec:dict}). Our goal is to detect distinctive relations, ranging from relations among particular vertices to subgraphs in $g^o$, which allow inferring the respective shape category $y$ from a set of categories $\mathcal{Y}$. The shape motif hierarchy allows encoding these relations and provides the capability to analyze the appearance in a hierarchical way. A shape motif hierarchy model $\mathcal{H}$, as illustrated in Fig.~\ref{fig:ch_illustration} (see Table~\ref{tab:nocl:ch}), consists of multiple hierarchical levels in which the observed vertices from an object instance graph are propagated in a bottom-up fashion, beginning from particular segments, over compositions of segments, until the object instance is represented as a single composition.
\begin{table}[] \centering \small \caption{Nomenclature -- Shape Motif Hierarchy} \label{tab:nocl:ch} \begin{tabular}{|ll|} \hline $g^o\mathrm{=}\{V^o, E^o\}$ & augmented object $g^{os}\mathrm{=}(V^{os},E^{os})$\\ &$V^o\mathrm{=}\{v^o_1,v^o_2...\}$ where $v^o_i\mathrm{=}\langle s_i\in S^o,p^y_i\mathrm{=}\kappa(s_i),w_i\mathrm{=}\omega_f(p_i)\rangle$\\ &(for description level $f$, given a label $y$),\\ &$E^o\mathrm{=}\{e^o_1,e^o_2...\}$ where $\text{if } v^o_j \ue v^o_k$ (is adjacent): $e_{i}\mathrm{=}\{v^o_j,v^o_k\}$ \\ $\tau^s(v^o_i)\mathrm{=}s_i$& returns $s_i$ of $v^o_i$\\ $\tau^p(v^o_i)\mathrm{=}p^y_i$& returns $p^y_i$ of $v^o_i$ given a label $y$\\ $\tau^w(v^o_i)\mathrm{=}w_i$& returns $w_i$ of $v^o_i$\\ $\mathcal{Y}\mathrm{=}\{y_1, y_2,...\}$ & set of category labels \\ $\mathcal{P}\mathrm{=}\{\{\mathcal{P}_y\}_{\forall y \in \mathcal{Y}} \}$& set of labeled motif prototype descriptions, see Fig.~\ref{fig:ch_legend}\\ $\mathcal{P}_y\mathrm{=}\{p_1^y,p_2^y,...\}$ & motif prototype descriptions of label $y$\\ $\mathcal{H}\mathrm{=}\{h^1,h^2...\}$ & motif hierarchy, see Fig.~\ref{fig:ch_illustration} \\ $h^l\mathrm{=}(V^{h^l},E^{h^l})$ &motif graph, $h^l\in \mathcal{H}$ at motif level $l$, see Fig.~\ref{fig:ch_legend}\\ & $V^{h^l}\mathrm{=}\{v^{h^l}_1,v^{h^l}_2...\}$ \\ & $E^{h^l}\mathrm{=}\{e^{h^l}_1,e^{h^l}_2...\}$ where $\text{if } v_m^{h^l} \ue v_n^{h^l}$: $e_{k}^{h^l}\mathrm{=}\{v_m^{h^l},v_n^{h^l}\}$ \\ $v^{h^l}_j$& motif vertex $j$ of motif graph $h$ \\ &at motif level $l$ (see Fig.~\ref{fig:dict_illustration_and_ch}, \inlineimagetable{sample_clique_vertex_a.pdf})\\ $v^{h^{l+1}}_j\mathrm{=}\pi(e^{h^l})$ & propagation step for an edge in $h^l$ where \\ &$\pi(e^{h^l})\mathrm{=}\pi(v^{h^l}_m\ue v^{h^l}_n) $\\ &\quad\quad \ \ $= \langle \ s^{l+1}_j\mathrm{=}\langle s^{l}_m\cup s^{l}_n\rangle,$ \\ &\hspace{1.43cm}$p^{l+1}_j\mathrm{=}\kappa(s^{l+1}_j),$\\ &\hspace{1.35cm}$w^{l+1}_j\mathrm{=}\langle w^{l}_m\cup w^{l}_n \rangle \ \rangle$\\ $\langle 
s^{l}_m\mathrm{\cup}s^{l}_n\rangle\mathrm{=}s^{l+1}_j$ & propagation step of segments. Merging of two \\ & point cloud segments. \\ $\langle w^{l}_m\mathrm{\cup}w^{l}_n\rangle\mathrm{=}w^{l+1}_j$ & propagation step of words. Union that considers the word\\ & constellations in $w^{l}_m$ and $w^{l}_n$.\\ & E.g.,$\langle w^{l}_m\mathrm{=}$ \sampleWordCliqueB $\cup$\ $w^{l}_n\mathrm{=}$ \sampleWordCliqueA $\rangle \ = \ w^{l+1}_j\mathrm{=}$\sampleWordCliqueC\\ &where $w$ can be interpreted as a word motif (see Fig.\ref{fig:ch_legend})\\ & -- initially represented, as a single word, e.g. \protect\sampleWordC, \sampleWordA, \sampleWordB, etc.\\ $\mathds{1}_{v^{h^l}_j}(g^o)$ & motif vertex Indicator function, where \\ & $v^{h^l}_j\in V^{h^l},v^o_k \subseteq V^o$ \\ & {\footnotesize $\begin{cases} 1, & \text{\footnotesize if }(\tau^w(v^{h^l}_j) \mathrm{=}w_j)\mathrm{\cap}(\tau^w(v^o_k)\mathrm{=}w_k)\mathrm{\neq}\emptyset\\ & \text{\footnotesize i.e. word motif } w_j \text{\footnotesize\ exists in object } g^o\text{\footnotesize.} \\ &\text{\footnotesize E.g., }w_j \text{\footnotesize\ in \sampleWordCliqueCompleteA and } w_k \text{\footnotesize \ in \sampleObjectGraphA match}\\ &\text{\footnotesize as shown in see Fig.~\ref{fig:ch_illustration}, motif level 3.} \\ 0, & \text{\footnotesize otherwise}\\ \end{cases}$ }\\ \hline \end{tabular} \end{table} \begin{figure}[tb] \centering \subfigure[Shape motif hierarchy]{\label{fig:ch_illustration}\includegraphics[width=0.35\linewidth]{clique_hierarchy_sketch6_2_color}} \subfigure[Motif level components]{\label{fig:ch_legend}\includegraphics[width=0.33\linewidth]{clique_hierarchy_sketch_legend}} \caption{A shape motif hierarchy example is shown in \usubref{fig:ch_illustration}, consisting of multiple \emph{motif levels}. 
Each node \inlineimage{sample_clique_vertex_a} represents a specific \emph{motif vertex}, whereas each smaller linked node represents a \emph{comparator} \protect \resizebox{.03\linewidth}{!}{\inlineimage{sample_comparatorA}} of a specific shape category $y$ consisting of several \emph{motif prototypes} (\protect \samplePattern). A sample propagation of a box \inlineimage{sample_object_a} through (\protect \inlineimage{sample_ch_prop_object}) the hierarchy is shown that consists of three segments (\protect\resizebox{.02\linewidth}{!}{\inlineimage{box_super_part_a_single}}, \inlineimage{box_super_part_c_single}, \inlineimage{box_super_part_b_single}) with corresponding words (\protect\sampleWordC, \protect\sampleWordB, \protect\sampleWordA). Feasible propagations which have been previously encoded in the hierarchy during the training phase but are not affected by the \emph{box} are depicted as \protect \inlineimage{sample_ch_prop}. Components of a \emph{motif level} are illustrated in \usubref{fig:ch_legend}. } \label{fig:dict_illustration_and_ch} \end{figure} These compositions are denoted as \emph{motifs}. Motifs are described and propagated in a symbolic manner using the respective words, which are assigned to the object segments. As a result, multiple evidences are extracted and analyzed over multiple motif levels, which contribute to a more confident prediction of a shape category compared to purely local-based or purely global-based approaches. Hence, objects are analyzed locally and fine-grained on lower motif levels; with increasing motif level, more global semantic properties affect the analysis, until the analysis is based on a single entity representing the entire object. Each motif vertex \inlineimage{sample_clique_vertex_a} in $\mathcal{H}$ represents a unique word constellation which we denote as word motif (e.g. $w\mathrm{=}$\sampleWordCliqueC), see Fig.~\ref{fig:ch_legend}.
A word motif can be interpreted as a component that can function as a \emph{building block} of an (\emph{unknown}) observed object during the inference phase. Basically, words can appear in various constellations. Due to the fact that word constellations which are encoded in $\mathcal{H}$ are only observed from objects, these constellations can be inherently denoted as motifs, respectively, shape motifs in this context that describe objects. \subsection{Training Phase}\label{sec:ch:tr} The shape motif hierarchy $\mathcal{H}$ generation is data-driven, i.e., each object instance graph from a given labeled training set of categories $\mathcal{Y}\mathrm{=}\{y_1, y_2,...\}$ is propagated through motif levels in a bottom-up fashion as shown in Alg.~\ref{alg:clique_hiearchy_generation}. \begin{algorithm} \small \def \scalecol{0.7} \caption{\small Shape Motif Hierarchy -- Training Phase \newline For explanatory purposes, illustrations are provided for a \emph{box} object sample with three segments that represents a graph of three vertices with their corresponding words: \newline \protect \resizebox{.5\textwidth}{!}{ $\quad\quad\quad\quad\quad$ \inlineimage{sample_complete_box}} } \label{alg:clique_hiearchy_generation} \begin{algorithmic}[1] \floatname{algorithm}{Procedure} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Set of object instance graphs $\mathcal{G}\mathrm{=}\{g^o_1 \dots g^o_n \}$ and empty $\mathcal{H}$ model \FORALL{$g^o \in \mathcal{G}$} \STATE $y\gets$ label assigned to $g^o$, given by supervision \STATE Level $l \gets 1 $ \FORALL{$v \in V^o$ of $g^o$} \STATE \begin{minipage}[t]{\scalecol\linewidth} Introduce word $w\mathrm{=}\tau^w(v)$ to $\mathcal{H}$ by augmenting the corresponding motif vertex $v_m^{h^{l}} \in h^{l}$ with $s\mathrm{=}\tau^s(v)$, $p^y\mathrm{=}\tau^p(v)$ and $w\mathrm{=}\tau^w(v_m^{h^{l}})$, see Table~\ref{tab:nocl:ch}. 
For instance, in the illustration on the right, word \sampleWordA \ representing vertex of segment \inlineimage{box_super_part_b_single} is assigned to motif vertex \inlineimage{sample_clique_vertex_a_w_word2}. The prototype description $p^y$ is added to the corresponding comparator \resizebox{.05\linewidth}{!}{\inlineimage{sample_comparatorB}}, where $y$ represents the label \emph{box}. \end{minipage} $\quad\hspace{0.5cm}$ \includegraphics[width=0.19\linewidth,valign=t]{single_object_intro_vertices} \STATE \begin{minipage}[t]{.64\textwidth} Create motif edge (\inlineimage{sample_clique_edge}) $e^{h^{l}}$ between $v_m^{h^{l}}$ and a neighboring vertex $v_n^{h^{l}}$, only if corresponding segments of $v_m^{h^{l}}$ and $v_n^{h^{l}}$ are neighbors in $g^o$. Subsequently, in the illustration on the right, both segments (\resizebox{.045\linewidth}{!}{\inlineimage{box_super_part_a_single}} and \inlineimage{box_super_part_b_single} ) are neighbors, thus an edge is created between \inlineimage{sample_clique_vertex_a_w_word1} and \inlineimage{sample_clique_vertex_a_w_word2} \end{minipage} \quad\hspace{0.5cm} \includegraphics[width=0.15\textwidth,valign=t]{single_object_intro_edges} \ENDFOR \STATE $E^{h^l} \gets get\_edges(h^l)$ \REPEAT \FORALL{$e^{h^{l}}\in E^{h^l}$} \STATE \begin{minipage}[t]{.6\textwidth} Propagate \inlineimage{sample_ch_prop_object} the edge $e$ according to $\pi(e^{h^{l}})$ to receive $v_m^{h^{l+1}}\in h^{l+1}$, see Table~\ref{tab:nocl:ch}. A propagation of the sample \emph{box} object is illustrated as \inlineimage{sample_ch_prop_object} as well as shown in Fig.~\ref{fig:ch_illustration}. \end{minipage} \quad \includegraphics[width=0.22\textwidth,valign=t]{single_propagation2} \STATE \begin{minipage}[t]{.6\textwidth} Create motif edge (\inlineimage{sample_clique_edge}) $e^{h^{l+1}}$ between $v_m^{h^{l+1}}$ and a neighboring vertex $v_n^{h^{l+1}}$, only if $v_m^{h^{l+1}}$ and $v_n^{h^{l+1}}$ contain word(s) representing the same segment(s) of $g^o$. 
E.g., in the illustration on the right, a motif edge is created, since both motif vertices \inlineimage{sample_clique_vertex_a} contain \sampleWordCliqueA which represent the same segments \inlineimage{box_super_part_b} in \inlineimage{box_super}. \end{minipage} \hspace{0.35cm} \includegraphics[width=0.21\textwidth,valign=t]{single_propagation_edge_generation} \ENDFOR \STATE $l\gets l+1$ \STATE $E^{h^l} \gets get\_edges(h^l)$ \UNTIL{$E^{h^l} = \emptyset$} \ENDFOR \ENSURE $\mathcal{H}$ model augmented with instances $\mathcal{G}$ \end{algorithmic} \end{algorithm} Note that, as Fig.~\ref{fig:ch_illustration} suggests, given $m$ motif levels, the motif vertices and edges form a separate motif graph $h^l$ in motif level $l$ of $\mathcal{H}\mathrm{=}\{h^1,h^2,...,h^m\}$. Each motif vertex in $\mathcal{H}$ represents a word motif (see Fig.~\ref{fig:ch_legend}) which has been observed in instance graphs of the training set. Edges among the motif vertices are created if two motif vertices contain words which correspond to the same segment of the propagated instance (general case: step 12 in Alg.~\ref{alg:clique_hiearchy_generation}). Given the set of vertices and edges in $\mathcal{H}$ at level $l$, the edges are propagated as vertices to the next higher level $l+1$ (see \inlineimage{sample_ch_prop_object} in Fig.~\ref{fig:ch_illustration} and step 11 in Alg.~\ref{alg:clique_hiearchy_generation}) such that the word motifs of the connected vertices of a motif edge (see \inlineimage{sample_clique_edge} in Fig.~\ref{fig:ch_illustration}) are merged to generate a larger motif, which is used to represent the motif vertex of the higher $l+1$ motif level. As a result, the higher the motif level, the greater the word motif order within each motif vertex, until on higher motif levels a single motif vertex is generated that can encode a word motif representing entire objects.
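A single propagation step $\pi$ from motif level $l$ to $l+1$ can be sketched as follows. For illustration, descriptions $\kappa(\cdot)$ and prototypes are omitted, and word motifs are represented as sets of (segment, word) pairs:

```python
def propagate_level(vertices, edges):
    """One bottom-up propagation step pi from motif level l to l+1.

    vertices: dict {vid: frozenset of (segment id, word) pairs}, the word
              motif of each motif vertex (segment ids stand in for the
              merged point clouds; descriptions kappa(...) are omitted)
    edges:    set of frozenset({vid, vid}) motif edges on level l
    Returns (vertices, edges) of level l+1: every edge is propagated to a
    new vertex holding the union of both word motifs, and two propagated
    vertices are connected if they share at least one segment.
    """
    new_vertices = {}
    for new_id, e in enumerate(sorted(edges, key=sorted)):
        m, n = sorted(e)
        new_vertices[new_id] = vertices[m] | vertices[n]
    new_edges = set()
    for a in new_vertices:
        for b in new_vertices:
            if a < b:
                segs_a = {s for s, _ in new_vertices[a]}
                segs_b = {s for s, _ in new_vertices[b]}
                if segs_a & segs_b:
                    new_edges.add(frozenset({a, b}))
    return new_vertices, new_edges
```

Applied twice to the three-segment box example of Fig.~\ref{fig:ch_illustration}, two motif vertices are obtained on level 2 and a single motif vertex covering the entire object on level 3, after which no edges remain and the propagation terminates.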
Only unique motif vertices are created and propagated, i.e., each vertex contains a unique word motif (see Fig.~\ref{fig:ch_legend}). Consequently, $\mathcal{H}$ evolves and reflects different facets of the respective category with the propagation of multiple instance graphs. The rationale behind encoding object-related information in $\mathcal{H}$ is that (unknown) similarly-shaped objects lead to similar propagations through $\mathcal{H}$ across motif levels. Given an object $o$ represented as a graph $g^o$, two aspects of similarity are considered, which will be utilized during the inference phase. First, we consider the \emph{activation} of a motif vertex, i.e., a subgraph of words in object $g^o$ matches a word motif contained in a motif vertex $v$ in $\mathcal{H}$. In other words, the word motif contained in $v$ is a subgraph in $g^o$. This matched word motif occurrence, denoted as an \emph{activation}, is represented by the indicator function $\mathds{1}_{v}(g^o)$, which returns $1$ if a match is found and $0$ otherwise, see Table \ref{tab:nocl:ch}. Second, the spatial appearance similarity of the matched motif is considered, where a motif vertex serves as a comparator, which returns a \emph{stimulus} that reflects the similarity. Therein, descriptions $\mathcal{P}_y\mathrm{=}\{p_1^y,p_2^y,...\}$, which are observed during the propagation of training instances, are memorized with the corresponding category label $y\in \mathcal{Y}$ in a motif vertex $v$, i.e., $\{\{\mathcal{P}_y\}_{\forall y \in \mathcal{Y}} \}$ descriptions are associated with $v$. Each description $p_i^y$ represents a description vector of the corresponding point cloud consisting of segments that are observed in instances during the training phase and subsequently propagated through $\mathcal{H}$. Spatial variations of these constituted point clouds are naturally encoded by the description in the motif vertex.
Inspired by the Prototype Theory~\cite{Rosch1973}, these descriptions can be regarded as \emph{motif prototypes} of shape appearances associated with the corresponding motif vertex w.r.t.~$y$. Subsequently, the concept of Probabilistic Neural Networks~\cite{Huang:2004:APN:1011980.1011984}, applied as a comparator, utilizes the prototype descriptions in $v$ to compute the stimulus -- see Fig.~\ref{fig:ch_illustration} and Fig.~\ref{fig:ch_legend}. This template-based network does not abstract or generalize the descriptions, consequently avoiding the loss of hidden category-related properties at this stage. Note that the prototype descriptions describing the shape of a motif vertex are pose-invariant, which facilitates the correct classification of an object from new viewpoints and their reusability as building blocks to deal with new shape information of unknown objects. Similar to the concept of stimuli in visual-perceptual processes~\cite{Edelman1998-EDERIR,dicarlo:tics_2007,Kriegeskorte2013a}, the computed stimuli of an unknown object can be exploited for category inference purposes by the analysis of stimuli patterns of motif vertices across the motif levels (see Sec.~\ref{sec:inference_phase}), since similar object appearances can lead to similar stimuli patterns and vice versa. \subsection{Inference Phase} \label{sec:inference_phase} Given a graph $g^o$ of an unknown query object $o$ which is augmented with the corresponding word for each segment of object $o$, $g^o$ is propagated through $\mathcal{H}$ as described in the training phase. However, motif vertices and edges are not modified at this stage. The inference is based on the evaluation of \emph{activations} of motif vertices and the corresponding \emph{stimuli}. The evaluation strategy is two-fold: intra-level inference focuses on a particular motif level, whereas inter-level inference focuses on the fusion of motif level results.
Concerning the intra-level inference, probabilities are computed during training, which allow associating motif vertices with certain shape categories, i.e., $P(y|v)$, where $y$ is a specific shape category that is associated with a given motif vertex $v$. If an activation of motif vertex $v$ is found, the stimulus is computed using the Jensen-Shannon divergence~($JS$) and an adapted Gaussian kernel combination -- see Eq.~\ref{eq:ch_js}, where $v$ denotes a motif vertex, $q$ the respective description of object segments in $g^o$ which match with the word motif of $v$, $\mathcal{P}_y$ the prototype descriptions associated with $v$ of label $y$, and $\sigma$ (e.g. $\sigma\mathrm{=}0.025$) denotes the bandwidth. \begin{equation} \label{eq:ch_js} \alpha(v,g^o,y)\mathrm{=} \begin{cases} \frac{1}{|\mathcal{P}_y|} \cdot \sum^{|\mathcal{P}_y|}_{i=1}e^{\tfrac{JS(p_i^y \in \mathcal{P}_y,q)^2}{-2\sigma^2}},&\text{\footnotesize if $\mathds{1}_{v}(g^o)\mathrm{=}1$}\\ 0, & \text{\footnotesize otherwise} \end{cases} \end{equation} The stimulus is normalized by $|\mathcal{P}_y|$ in order to account for an unbalanced distribution of prototype descriptions from particular categories observed during the training phase. This procedure is applied to all activated motif vertices within a motif level as shown in Eq.~\ref{eq:ch_fusion_intra_level}. \begin{equation} \label{eq:ch_fusion_intra_level} \begin{split} \beta(g^o,y,l) = \frac{\sum^{|V^l|}_{i=1} \alpha(v^{l}_i,g^o,y) \cdot P(y|v^l_i)}{\sum^{|\mathcal{Y}|}_{j=1} \sum^{|V^l|}_{i=1} \alpha(v^{l}_i,g^o,\mathrm{\emph{y}}_j) \cdot P(\mathrm{\emph{y}}_j|v^l_i)} \end{split} \end{equation} Consequently, given object graph $g^o$, a normalized response w.r.t.~label $y$ at a motif level $l$ is returned by $\beta(g^o,y,l)$. In a similar manner, the inter-level inference is drawn from the accumulated responses over all motif levels as shown in Eq.~\ref{eq:ch_fusion_inter_level}, where $m$ is the number of motif levels.
\begin{equation} \label{eq:ch_fusion_inter_level} \begin{split} \gamma(g^o, y) = \frac{ \sum^{m}_{l=1} \beta(g^o,y,l) \cdot P(y|l)}{\sum^{|\mathcal{Y}|}_{j=1}\sum^{m}_{l=1} \beta(g^o,\mathrm{\emph{y}}_j,l) \cdot P(\mathrm{\emph{y}}_j|l)} \end{split} \end{equation} Furthermore, a probability $P(y|l)$ for a shape category $y$ is computed in this case for a given motif level $l$ that accounts for the shape complexity through the distribution of motif orders observed for object instances of particular categories in the training. \section{Shape Motif Hierarchy Ensemble} Our main goal is to generate various perspectives on an object point cloud that lead to evidences revealing distinctive patterns, which can be used for shape inference purposes. A hierarchical dictionary $\mathcal{D}\mathrm{=}\{d_1, d_2,...,d_n\}$ consisting of $n$ description levels was introduced in Sec.~\ref{sec:dict}. It allows generating multiple evidences, e.g., in the case of $n\mathrm{=}3$ description levels, the segment \inlineimage{box_super_part_c_single} of the box instance shown in Fig.~\ref{fig:dict_illustration} is represented in level $1$ with the word \sampleWordD, level $2$ with \sampleWordB\ and level $3$ with \sampleWordE. As a result, for the decomposed shape of the box instance, shown in Fig.~\ref{fig:dict_illustration} and Fig.~\ref{fig:hch_illustration}, three instance graphs can be generated that are augmented with words of the respective description level. Note that, with increasing description level, more words quantize the description space, i.e., words at higher levels describe more and more specific appearance variations of segments, whereas words at lower levels rather describe general appearances like flat or round segments.
One may interpret this as words at lower levels under-fitting the space (see Fig.~\ref{fig:hch_illustration}: two segments (\inlineimage{box_super_part_c_single} and \inlineimage{box_super_part_b_single}) at $d_1$ are assigned to the same word \sampleWordD), whereas words at higher levels over-fit, i.e., different words are assigned to minor appearance variations of segments. \begin{figure}[tb] \centering \includegraphics[width=0.65\linewidth]{overall_level3} \caption{Illustration of an example shape motif hierarchy ensemble $\mathcal{HE}$ based on three shape motif hierarchies $\{\mathcal{H}_1,...,\mathcal{H}_3\}$ (see Fig.~\ref{fig:ch_illustration}) using the respective description levels $\{d_1,...,d_3\}$ of $\mathcal{D}$ (see Fig.~\ref{fig:dict_illustration}).} \label{fig:hch_illustration} \end{figure} Based on the symbolic representation of point cloud segments with a hierarchical dictionary, a shape motif hierarchy $\mathcal{H}_i$ encodes the shape decompositions and word assignments of object instances in the form of word motif occurrences described by the \emph{particular} set of words from description level $d_i\in\mathcal{D}$. Subsequently, a Shape Motif Hierarchy Ensemble $\mathcal{HE}\mathrm{=}\{\mathcal{H}_1,...,\mathcal{H}_n\}$ given $n$ description levels is created: one shape motif hierarchy per description level, where object segments which are assigned to words of the respective description level $d_i$ are propagated through the respective shape motif hierarchy $\mathcal{H}_i$, as illustrated in Fig.~\ref{fig:hch_illustration}. In the training phase, as described in Sec.~\ref{sec:ch:tr}, each $\mathcal{H}_i$ evolves and generates evidence in the form of word motifs according to the word occurrences observed at description level $i$ of the training samples.
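As a purely illustrative sketch of this data-driven construction (with strong simplifying assumptions: motif vertices are reduced to per-label counts of word co-occurrences, while prototype descriptions and hierarchy edges are omitted), one hierarchy per description level could be accumulated as follows:

```python
from collections import defaultdict
from itertools import combinations

def train_ensemble(training_instances, n_levels, max_motif_order=3):
    """Toy accumulation of one motif 'hierarchy' per description level.

    training_instances: list of (words_per_level, label), where
    words_per_level[i] is the set of level-(i+1) words assigned to one
    object's segments. Each hierarchy counts, per word motif (a frozenset
    of words of a given order), how often it occurs with each label --
    a stand-in for the data-driven evolution of H_i.
    """
    ensemble = [defaultdict(lambda: defaultdict(int)) for _ in range(n_levels)]
    for words_per_level, label in training_instances:
        for i in range(n_levels):
            words = sorted(words_per_level[i])
            # motif order = number of co-occurring words (motif level)
            for order in range(1, max_motif_order + 1):
                for motif in combinations(words, order):
                    ensemble[i][frozenset(motif)][label] += 1
    return ensemble
```

Because the word vocabularies differ across description levels, the per-level count tables evolve differently, which is the property exploited by the ensemble.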
Note that, in this data-driven fashion, shape motif hierarchies evolve differently due to the different variety of words in each description level; consequently, word motifs vary among shape motif hierarchies and can be exploited as evidence. As a result, an ensemble of multiple motif hierarchies $\mathcal{HE}\mathrm{=}\{\mathcal{H}_1,...,\mathcal{H}_n\}$ (see Table~\ref{tab:nocl:che}) spans the space of evidence while considering the quantization of the description space over $n$ description levels and the hierarchical shape decomposition within each motif hierarchy. \begin{table}[] \centering \small \caption{Nomenclature -- Shape Motif Hierarchy Ensemble} \label{tab:nocl:che} \begin{tabular}{|ll|} \hline $\mathcal{HE}\mathrm{=}\{\mathcal{H}_1,\mathcal{H}_2,\dots\}$ & shape motif hierarchy ensemble, where $\mathcal{H}_i\mathrm{=}\{h^1,h^2, ...\}$ \\ &using description level $d_i\in\mathcal{D}$\\ \hline \end{tabular} \end{table} In the inference phase, segments of an object instance graph $g^o$ are assigned to words of the respective description level and accordingly propagated through the corresponding shape motif hierarchy, as illustrated in Fig.~\ref{fig:hch_illustration}. Based on Eq.~\ref{eq:ch_fusion_inter_level}, each shape motif hierarchy $\mathcal{H}_i$ returns an individual shape category $y\in\mathcal{Y}$ response ($\gamma^{\mathcal{H}_i}(g^o, y)$) w.r.t.~the observed object instance $g^o$. As shown in Alg.~\ref{alg:hierarchical_clique_hiearchy_prediction}, these responses are accumulated and fused using majority voting (see Fig.~\ref{fig:hch_illustration}, \inlineimage{sample_majority_voting}) to identify the final category label $y^*$ for $g^o$.
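A minimal Python sketch of this inference chain -- Eqs.~\ref{eq:ch_js}, \ref{eq:ch_fusion_intra_level}, \ref{eq:ch_fusion_inter_level} and the final fusion of Eq.~\ref{eq:hierarchical_clique_hiearchy_prediction_mv} -- is given below (not the actual implementation; vertex activations, prototype descriptions and the probabilities $P(y|v)$ and $P(y|l)$ are assumed to be given):

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence between two discrete distributions
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def alpha(prototypes, q, sigma=0.025):
    # Eq. (ch_js): mean Gaussian-kernel stimulus of one activated motif vertex
    return sum(math.exp(-js_divergence(p, q) ** 2 / (2 * sigma ** 2))
               for p in prototypes) / len(prototypes)

def normalize(scores):
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()} if z else scores

def beta(stimuli, p_y_given_v, labels):
    # Eq. (ch_fusion_intra_level): fuse vertex stimuli within one motif level
    return normalize({y: sum(stimuli[v] * p_y_given_v[v][y] for v in stimuli)
                      for y in labels})

def gamma(betas_per_level, p_y_given_l, labels):
    # Eq. (ch_fusion_inter_level): accumulate responses over all motif levels
    return normalize({y: sum(b[y] * p_y_given_l[l][y]
                             for l, b in enumerate(betas_per_level))
                      for y in labels})

def ensemble_predict(gammas, labels):
    # Eq. (hierarchical_clique_hiearchy_prediction_mv): fuse the gamma
    # responses of all hierarchies and return the strongest category label
    fused = normalize({y: sum(g[y] for g in gammas) for y in labels})
    return max(fused, key=fused.get)
```

Note that a description $q$ identical to a stored prototype yields $JS\mathrm{=}0$ and hence the maximal stimulus of $1$; all intermediate responses are normalized over $\mathcal{Y}$ exactly as in the equations above.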
\begin{algorithm} \small \caption{\small Shape Motif Hierarchy Ensemble -- Inference Phase} \label{alg:hierarchical_clique_hiearchy_prediction} \begin{algorithmic}[1] \floatname{algorithm}{Procedure} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Object graph $g^o$, shape motif hierarchy ensemble $\mathcal{HE}$ \STATE Propagate $g^o$ through each shape motif hierarchy $\mathcal{H}_i\in\mathcal{HE} $ \STATE Find the final label $y^{*}$ for $g^o$ based on Eq.~\ref{eq:hierarchical_clique_hiearchy_prediction_mv} in which the responses $\gamma^{\mathcal{H}_i}(g^o, y)$ are computed for the respective $\mathcal{H}_i$ for all $y \in \mathcal{Y}$ (Eq.~\ref{eq:ch_fusion_inter_level}) \ENSURE Corresponding label $y^{*}$ of instance $g^o$ \end{algorithmic} \end{algorithm} \begin{equation} \label{eq:hierarchical_clique_hiearchy_prediction_mv} \begin{split} y^{*} = \argmax_{y \in \mathcal{Y}} \frac{ \sum^{|\mathcal{D}|}_{i=1} \gamma^{\mathcal{H}_i}(g^o, y)}{ \sum^{|\mathcal{Y}|}_{j=1} \sum^{|\mathcal{D}|}_{i=1} \gamma^{\mathcal{H}_i}(g^o, \mathrm{\emph{y}}_j)} \end{split} \end{equation} \section{Experimental Evaluation} \label{sec:experiment} For experimental evaluation, objects from seven different shape categories (\emph{sack}, \emph{can}, \emph{box}, \emph{teddy}, \emph{ball}, \emph{amphora}, \emph{plate}) are used, which show little, partial or strong similarity: e.g., \emph{plates}, \emph{cans}, \emph{boxes} contain flat parts, \emph{balls}, \emph{amphoras}, \emph{teddies}~(head) contain spherical parts, and \emph{cans}, \emph{sacks}, \emph{teddies}~(limbs) contain bulging surfaces. For this purpose, we created a publicly available dataset, \emph{Object Shape Category Dataset}\footnote{\textbf{http://www.robotics.jacobs-university.de/datasets/2017-object-shape-category-dataset-v01/index.php}} (OSCD), that consists of about $66$ scans per category where each category contains multiple object instances. 
The scans are randomly split into a training/testing set with an average ratio of $75\%$/$25\%$ per category. A scan is a 2.5D object point cloud (see Fig.~\ref{fig:trainexamples}) which is captured from a random viewpoint with an RGB-D (Kinect-style) camera. % \begin{figure}[tb] \centering \def \scaleImg{1.15} \subfigure[]{\label{fig:trainsack}\scalebox{\scaleImg}{\includegraphics[width=0.16\textwidth]{sack_train}}} \subfigure[]{\label{fig:trainbarrel}\scalebox{\scaleImg}{\includegraphics[width=0.1\textwidth]{barrel_train}}} \subfigure[]{\label{fig:trainparcel}\scalebox{\scaleImg}{\includegraphics[width=0.16\textwidth]{parcel_train}}} \subfigure[]{\label{fig:trainteddy}\scalebox{\scaleImg}{\includegraphics[width=0.09\textwidth]{teddy}}} \subfigure[]{\label{fig:trainball}\scalebox{\scaleImg}{\includegraphics[width=0.1\textwidth]{ball2}}} \subfigure[]{\label{fig:trainamphora}\scalebox{\scaleImg}{\includegraphics[width=0.08\textwidth]{amphora}}} \subfigure[]{\label{fig:trainplate}\scalebox{\scaleImg}{\includegraphics[width=0.12\textwidth]{plate}}} \caption{2.5D point cloud sample scan of one random object instance per category of the dataset: \emph{sack}~\usubref{fig:trainsack}, \emph{can}~\usubref{fig:trainbarrel}, \emph{box}~\usubref{fig:trainparcel}, \emph{teddy}~\usubref{fig:trainteddy}, \emph{ball}~\usubref{fig:trainball}, \emph{amphora}~\usubref{fig:trainamphora} and \emph{plate}~\usubref{fig:trainplate}.} \label{fig:trainexamples} \end{figure} Fig.~\ref{fig:thumbnail_db_overview} shows a preview of a random set of scans captured from dataset objects, illustrating the variety of appearances regarding scale, deformability and spatial dimensionality.
\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{thumbnail_overview_comp} \caption{A preview of a random subset of unique sample scans (50\% of all scans from each category) of the OSCD dataset.} \label{fig:thumbnail_db_overview} \end{figure} Fig.~\ref{fig:recognition_eval} illustrates qualitative results of our approach based on two example scenes, A and B. The object candidates are segmented based on our previous work~\cite{MuellerBirkIcra2016}. As these are real-world unstructured scenes, the segmented objects tend to be noisy and occluded, and distinctive shape features can be hidden due to the viewpoint. Candidates representing the background, like the wall or the ground, are successfully rejected due to low classification confidences ($<30\%$). In scene A on the left, one can observe classifications with high confidences, even for the noisily perceived point cloud of the purple \emph{can} or the partially visible blue \emph{teddy}. In scene B on the right, visible parts of the shelf are classified as \emph{box} due to strong shape similarity, but with lower confidences ($40\%$ and $66\%$). In the same scene, the occluded \emph{sack}, the non-visible top of the \emph{can} or the missing left handle of the detected \emph{amphora} candidate still lead to a correct classification of the category. The computationally non-optimized, single-threaded implementation of our method on an \emph{Intel Core i7-3770} machine leads to a mean runtime of $2.3$\emph{s} ($\pm$ $2.2$\emph{s}, median $1.4$\emph{s}) per object w.r.t.~the testing set. $44.2$\% of the runtime is dedicated to the generation of segment descriptions, whereas $55.8$\% is used for the classification~($\mathcal{HE}$). \subsection{Hierarchical Description of Object Segments} \label{sec:exp:dict} With increasing description level, an improved discrimination of the segments w.r.t.~their surface-structural appearance is expected.
In Fig.~\ref{fig:vw_dict_distri}, the first three description levels of a trained dictionary are shown, which consists of seven levels in total. \begin{figure}[tb] \centering \subfigure[Visual word assignment distribution regarding category labels of the first three description levels $\mathcal{D}\mathrm{=}\{d_1, d_2,d_3,...\}$. ]{\label{fig:vw_dict_distri}\includegraphics[width=0.45\linewidth]{dictionary_eval_sketch2}} \hspace{0.2cm} \subfigure[Motif level size evolution of shape motif hierarchies. ]{\label{fig:ch_vertices_vs_edges}\includegraphics[width=0.45\linewidth]{ch_vertices_vs_edges_new}} \caption{In \protect\usubref{fig:vw_dict_distri}\ \ the first three levels of a hierarchical dictionary are shown, which is trained with $50$ randomly selected segments per category (in total 350 segments). Each node represents a visual word showing the assignment distribution of the segments w.r.t.~the categories. In \usubref{fig:ch_vertices_vs_edges}\ \ for each description level (d-levels 1 to 4) the respective motif hierarchy size (number of vertices and edges per motif level) of the ensemble is shown. } \label{fig:vw_dict_distri_and_ch_vertices_vs_edges} \end{figure} Each node represents a visual word and shows the assignment distribution of object segments of the respective shape category. A coarse distinction among segments can already be observed at description level $d_1$: $65.1\%$ of the visual word representing the left branch is assigned to segments of objects associated with shape categories that feature planar surfaces like \emph{box}~($30.1\%$) and \emph{plate}~($35\%$), whereas the right branch is mostly assigned to curved instances ($62.8\%$): \emph{sack}~($19.1\%$), \emph{teddy}~($20.8\%$) or \emph{ball}~($22.9\%$).
In further levels, a more fine-grained distinction is observable, i.e., certain words are mainly assigned to a particular category or to a small group of categories, e.g., in level $d_2$, \emph{plate}~($67.6\%$, first word from left), \emph{box}~($44\%$, second word) or \emph{sack}~($37.5\%$, fourth word). In level $d_3$, a clear separation is observable for \emph{plate}~(second word from left), \emph{box}~(third word) and \emph{ball}~(fifth word). We can conclude that shape characteristics are separable in a top-down and coarse-to-fine manner. This separation can be interpreted as an initial (unsupervised) classification step. \subsection{Shape Motif Hierarchy} \label{sec:exp:ch} Given an object instance, the proposed shape motif hierarchy representation aims to discover, from the group of object instance segments, facets which serve as evidence to infer a category. In the training phase, the training set is used to augment $\mathcal{H}$ with shape category-related information. This process is data-driven (see Sec.~\ref{sec:ch}), i.e., $\mathcal{H}$ can continuously evolve and adapt to the appearances of training instances. In Fig.~\ref{fig:ch_vertices_vs_edges}, the evolution of the motif hierarchies according to the description levels is illustrated. It is observable that the more fine-grained the description space is (increasing number of words), the more motif vertices and edges are generated to encode the individual shape appearances. However, a decrease in the number of vertices and edges can be observed at higher motif levels, which can be attributed to the fact that at higher motif levels the variety of word motifs decreases. Consequently, fewer vertices at higher motif levels sufficiently represent the instances observed in the training phase (see Sec.~\ref{sec:ch:tr}).
Furthermore, a trained $\mathcal{H}$ is expected to encode distinctive mappings of motif vertices to certain shape categories, which facilitates a confident classification. In Fig.~\ref{fig:bull_ch_assignement}, the assignment distribution is shown for segments of the shape categories w.r.t.~the motif hierarchies of the first four description levels. \begin{figure}[tb] \centering \subfigure[$\mathcal{H}_1$]{\label{fig:bull_ch_assignement_d1}\includegraphics[width=0.425\linewidth]{bullseye_desc_level1_annotated}}\hspace{0.2cm}% \subfigure[$\mathcal{H}_2$]{\label{fig:bull_ch_assignement_d2}\includegraphics[trim=0 -15 0 0 ,width=0.32\linewidth]{bullseye_desc_level2}}\hspace{0.2cm}% \subfigure[$\mathcal{H}_3$]{\label{fig:bull_ch_assignement_d3}\includegraphics[width=0.14\linewidth]{labels_legend}\includegraphics[width=0.32\linewidth]{bullseye_desc_level3}}\hspace{0.2cm}% \subfigure[$\mathcal{H}_4$]{\label{fig:bull_ch_assignement_d4}\includegraphics[width=0.32\linewidth]{bullseye_desc_level4}}\hspace{0.2cm}% \caption{Category assignment distribution of $\mathcal{H}_{\{1,2,3,4\}}$ for the first four motif levels. Each motif level is represented as a ring, ordered from the inside to the outside. The distribution of a motif vertex is represented as a partition within a ring, separated by black bars. The size of the colored segments within a partition corresponds to the proportion of a category in the distribution. } \label{fig:bull_ch_assignement} \end{figure} The inner ring reflects the distributions of motif level $1$, which corresponds to single segment occurrences (see Fig.~\ref{fig:ch_illustration}, motif level $1$). It is observable that in the first motif level, curved and planar segments are separated by the motif vertices in $\mathcal{H}_{\{1,2,3,4\}}$.
For instance, at this level in $\mathcal{H}_{1}$, segments of \emph{cans}, \emph{boxes} and \emph{plates} share a high proportion (see the motif vertex represented by the lower half of the inner ring) due to the similar appearance of segments -- e.g., the top of a \emph{can} and the side of a \emph{box}. Considering motif vertices reflecting two segments (second ring from the center) in $\mathcal{H}_{1}$, \emph{boxes} are clearly separated from instances of other categories which feature non-planar segments, such as \emph{teddies} and \emph{amphoras}. Note that the higher the motif level, the higher the correlation of vertices with particular categories. This can be especially observed in $\mathcal{H}_{4}$ for \emph{boxes}, \emph{teddies} and \emph{amphoras}. The main observation is that word motifs as part of motif vertices show a repetitive activation/occurrence for similarly shaped instances. Consequently, motif vertices show a potential for \emph{reusability} and can be used as \emph{building blocks} for \emph{unknown} objects, which represents a key condition for a well-performing classification. In Fig.~\ref{fig:eval:hch_dict_vs_clique_confidence}, the mean confidences of testing set classifications are shown for the motif levels of each motif hierarchy $\mathcal{H}_{\{1,2,3,4\}}$, i.e., for each description level. \begin{figure}[tb] \centering \subfigure[Confidence result]{\label{fig:eval:hch_dict_vs_clique_confidence}\includegraphics[width=0.45\linewidth]{hch_cv_desc_clique_confidences_w}} \subfigure[Classification result]{\label{fig:eval:hch_dict_vs_clique_error}\includegraphics[width=0.45\linewidth]{hch_cv_desc_clique_w}} \subfigure[Ranked classification result]{\label{fig:eval:hch:ranked_class}\includegraphics[width=0.49\linewidth]{hch_top_k_per_label_w}} \subfigure[Classification rate]{\label{fig:hch:classification}\includegraphics[width=0.49\linewidth]{hch_misclas_per_label_w}} \caption{Mean classification results (of 5 repetitions) of the description and motif levels w.r.t.
the testing set (gray marked cell = no result is evident).} \label{fig:recognition_error} \end{figure} The classification confidence of a motif level is the response of the direct comparison between a query object and the prototype descriptions of the motif vertices of the respective motif hierarchy, as shown in Eq.~\ref{eq:ch_fusion_intra_level}, for the given label $y$ of a testing instance. High confidences can be interpreted as a high similarity to the model encoded in the respective motif hierarchy. With increasing motif level, the confidence generally increases, i.e., the more evidence is observed in the form of larger word motifs, which cover a greater extent of an object, the more confident the motif hierarchy responses are. However, at high motif levels, for instance at level $4$, the confidence drops in the case of description levels $1$ and $4$, i.e., for $\mathcal{H}_1$ and $\mathcal{H}_4$. A low confidence can mainly be explained by the dissimilarity to the prototype descriptions encoded in the motif hierarchy, which leads to low stimuli. These observations are expected: a motif hierarchy encodes the description in a general-to-specific manner w.r.t.~the motif levels, i.e., lower levels can represent generic building blocks whereas higher levels reflect specific appearances of objects; if such specific appearances lead to a high confidence, we can interpret the result as an instance recognition rather than as a categorization result. \subsection{Shape Motif Hierarchy Ensemble} \label{sec:exp:hch} Given a set of $n$ description levels $\mathcal{D}\mathrm{=}\{d_1,...,d_n\}$, a motif hierarchy is trained for each description level and integrated into the ensemble $\mathcal{HE}\mathrm{=}\{\mathcal{H}_1,...,\mathcal{H}_n\}$. The dictionary and motif hierarchies are closely interrelated. In Fig.~\ref{fig:eval:hch_dict_vs_clique_error}, the effects of motif level and description level on the classification accuracy are shown.
Considering each $\mathcal{H}$ independently, the accuracy generally improves with increasing motif level. For $\mathcal{H}_1$, only a low increase in accuracy can be found due to under-fitting symptoms, as description level $1$ consists of only two words -- see Fig.~\ref{fig:vw_dict_distri}. Although motifs in $\mathcal{H}_1$ consist of vertices that are represented by two words, a classification error of only $14.8\%$ is already achieved. Similarly, for $\mathcal{H}_4$, over-fitting symptoms are found on higher motif levels due to the larger word variety, which leads to a larger motif vertex variety compared to hierarchies that are based on lower description levels. The integrated results in the form of $\mathcal{HE}$ show that it outperforms the individual motif hierarchies. Furthermore, excluding description level $4$, which is affected by over-fitting symptoms, the error gradually decreases from $14.8\%$ to $9.5\%$ over the first three description levels w.r.t.~the testing set. This shows that the hierarchical shape decomposition of object instances with multiple description levels allows evidence to be collected that facilitates a confident category inference. Shape categories can share similar surface-structural properties that do not allow a clear distinction. Considering also the uncertainty caused by viewpoint variations, instances of different categories can appear similar, e.g., a \emph{sack} and a \emph{teddy}. In Fig.~\ref{fig:eval:hch:ranked_class}, the ranked classification results are shown. While \emph{plates} and \emph{balls} are clearly distinctive, since they are classified as first choice (by rank), \emph{cans}, \emph{boxes} and \emph{teddies} are correctly classified within the first two ranks, whereas \emph{sacks} and \emph{amphoras} are not always classified correctly -- $8.3\%$ of \emph{sacks} and $7.2\%$ of \emph{amphoras} are misclassified.
This observation is also reflected in the first-rank classification rates shown in Fig.~\ref{fig:hch:classification}. It shows that instances can be misclassified as categories which share similar shape structures, like planar, cylindrical or bulging surfaces. It is expected that, using only shape information, such misclassifications due to shape similarity are encountered under certain conditions like viewpoint and self-occlusions. \subsection{Comparison to Alternative Methods} \label{sec:comparison} For a comparison of the categorization results, three alternative methods (Vocabulary Tree, FPFH, and Deep Learning) are evaluated using our training and testing set. The vocabulary tree~\cite{shen2016graph} is trained with extracted description vectors (see Sec.~\ref{sec:dict}) of the dataset using a tree depth of $6$ and a branch number of $8$. A $13.8$\% testing set error is achieved (Fig.~\ref{fig:eval:comat_voc}). \begin{figure}[tb] \centering % \subfigure[Vocab. tree~\cite{shen2016graph}]{\label{fig:eval:comat_voc}\includegraphics[height=0.3185\linewidth]{voc_misclas_per_label_wo_legend2}} \subfigure[FPFH~\cite{5152473}]{\label{fig:eval:comat_fpfh}\includegraphics[height=0.3185\linewidth]{fpfh_misclas_per_label_wo_legend2}} \subfigure[Deep convolutional neural network~\cite{NIPS2012_4824}]{\label{fig:eval:comat_dl}\includegraphics[height=0.3225\linewidth]{dl_misclas_per_label_wo_legend}} \caption{Mean classification rates (of 5 repetitions) of other approaches w.r.t. the testing set (gray marked cell = no result is evident).
Mean errors of $13.8$\% for the vocabulary tree, $29.1$\% for the FPFH and $16.7$\% for the deep convolutional neural network approach are achieved.} \label{fig:eval:comat} \end{figure} In Fig.~\ref{fig:eval:comat_fpfh}, a category discrimination is shown that is based solely on the FPFH descriptor: for each description of a query object, a nearest-neighbor search on the descriptions of the labeled training instances is performed. A mean distance for each category label is computed based on the nearest-neighbor distances using the $L^2$-\textit{norm}. The inferred label of the query is the category with the smallest mean distance. This simplistic comparison of descriptions does not lead to satisfying results ($29.1$\% testing set error) due to description ambiguities across locally similarly shaped instances from different categories. In comparison (Fig.~\ref{fig:eval:comat_dl}), a deep convolutional neural network (CNN)~\cite{jia2014caffe} with the point cloud instances as input in the form of range images~\cite{7487310} reaches a $16.7$\% testing set error, which saturated within a maximum of 1000 training iterations of the CNN. The AlexNet~\cite{NIPS2012_4824} architecture is chosen for the CNN and trained solely with our dataset. It consists of $5$ convolutional layers and $3$ succeeding fully connected layers. The final softmax layer is reduced to a size of $7$, corresponding to the $7$ categories of our dataset. The relatively low number of training samples is likely to have contributed to the error (as discussed in \cite{7487310}); in contrast, our approach can deal with such conditions and provides a lower testing set error of $9.5$\% (see Sec.~\ref{sec:exp:hch}) in the experiment. \subsection{Comparison to Alternative Datasets} \label{sec:alter_db} When dealing with supervisedly generated datasets, several aspects have to be considered.
\textbf{1) Generalization}: conventional experimental evaluation, in particular in the case of instance recognition, is generally performed with a dataset in which samples are drawn from a particular \emph{distribution}, i.e., the conditions of gathering samples are kept the same regarding the capturing procedure, including the sensor used, viewing perspective, type of clutter or occlusion, etc. An illustration of dataset distribution variations is shown in Fig.~\ref{fig:eval:instance_variety}; these instances are labeled as \emph{can} or \emph{cylindrical}, respectively. \begin{figure}[tb] \centering \def \scaleImg{0.9} \subfigure[]{\label{fig:eval:instance_variety0}\scalebox{\scaleImg}{\includegraphics[height=0.2\linewidth]{can0}}} \subfigure[]{\label{fig:eval:instance_variety1}\scalebox{\scaleImg}{\includegraphics[height=0.17\linewidth]{can56}}} \subfigure[]{\label{fig:eval:instance_variety2}\scalebox{\scaleImg}{\includegraphics[height=0.06\linewidth]{food_can_1_1_1}}} \subfigure[]{\label{fig:eval:instance_variety3}\scalebox{\scaleImg}{\includegraphics[height=0.09\linewidth]{food_can_14_1_1}}} \subfigure[]{\label{fig:eval:instance_variety4}\scalebox{\scaleImg}{\includegraphics[height=0.11\linewidth]{osd_cyl_sample_learn34_3}}} \subfigure[]{\label{fig:eval:instance_variety5}\scalebox{\scaleImg}{\includegraphics[height=0.12\linewidth]{osd_cyl_sample_test42_4}}} \caption{Illustration of appearance variations of sample point clouds from different distributions (datasets): \usubref{fig:eval:instance_variety0}, \usubref{fig:eval:instance_variety1} show \emph{can}~\emph{0} and \emph{56} of the OSCD-training set, whereas \usubref{fig:eval:instance_variety2} and \usubref{fig:eval:instance_variety3} show \emph{food\_can\_1\_1\_1} and \emph{food\_can\_14\_1\_1} of the \emph{Washington RGB-D Dataset}~\cite{5980382} and \usubref{fig:eval:instance_variety4}, \usubref{fig:eval:instance_variety5} show cylindrical instances from scenes \emph{learn 34} and \emph{test 42} of the \emph{Object
Segmentation Database}~\cite{6385661}.} \label{fig:eval:instance_variety} \end{figure} Therefore, the evaluation outcome of a method applied to a dataset is mainly correlated with specific properties and conditions of the dataset. Consequently, methods have to be tuned for specific datasets in order to comply and to achieve competitive results. If not tuned, such evaluations will erroneously lead to disadvantageous results, even though the design and concept of the untuned method may be superior to those of tuned methods that provide better results for the specific dataset. Note that such tuning contradicts the major objective of creating a method that is capable of coping with various conditions that have not even been observed during the training phase. This generalization capability is essential in many real-world scenarios~\cite{JonschkowskiEHM16,7553531}. \textbf{2) Supervision}: samples of datasets are generally sampled and labeled in a supervised manner by humans. Human-based supervised labeling bears the risk of incorporating additional abstraction and model knowledge gained from experience. Due to human individuality, such a labeling process does not guarantee a unique association of samples with a specific category~\cite{McCloskey1978}, particularly in the case of ambiguous object observations caused by viewpoint, occlusions, etc. Furthermore, humans perceive objects through the integration of a composition of various modalities~\cite{Seward97a,Palmeri2004a,ZimgrodHommel2013}: visual, auditory or tactual sensations, or semantic knowledge, which, fused together, may allow a \emph{mug} to be labeled not only by shape but also by material and functional knowledge, e.g., that a concave object represents a container for liquids, which can be attributed to \emph{mugs}. However, this knowledge is not reflected in datasets with a unary classification label (e.g.
instance or category label) of the sample: for instance, Fig.~\ref{fig:eval:instance_variety3} shows only a half cylinder without a top or bottom part; nevertheless, it is labeled as \emph{can}, but it could also be part of a \emph{mug}, a \emph{bottle} or the limb of a \emph{teddy}. As a consequence, one may suggest that a unary label does not sufficiently describe a sample -- it is not expressive enough. Consequently, datasets whose purpose is the evaluation of a specific cue (e.g. visual reasoning about geometry or shape) are, during the labeling process, highly vulnerable to incorporating knowledge which is not inferable from the given data (e.g. point clouds). Especially in the case of 2.5D datasets~\cite{5980382,6385661}, ambiguities in the classification occur where a \emph{hard-classification}, i.e., labeling a sample with a unique instance, is not feasible due to viewpoint, occlusions, etc. \textbf{3) Data-driven learning}: supervised approaches are generally trained in a data-driven manner, i.e., samples pre-grouped w.r.t.~specific labels are exposed to the method. Consequently, approaches only encode the observations made in the training phase (often in the form of RGB images or point clouds), irrespective of semantic meaning or correlations among labels. Therefore, the generated prediction model may only partially cover the variety of possible sample appearances regarding particular labels. This aspect plays an even more important role in categorization tasks, where an explicit object model-to-predict is absent. As a result, a unique object-to-label assignment is often not feasible due to shape ambiguities, in contrast to the instance recognition task, where a specific object model-to-predict is given beforehand. \medskip \noindent Under these circumstances, the following experiment focuses on the generalization ability of the proposed shape motif hierarchy ensemble method.
We aim to decouple the semantic meaning of the dataset labels our method is trained with and to investigate the ensemble response behavior on other datasets. Initially, the shape motif hierarchy ensemble $\mathcal{HE}$ is trained \textbf{once} with the training set of our OSCD dataset containing the seven categories. Instances from other datasets, the \emph{Washington RGB-D Object Dataset}\footnote{This dataset contains 51 categories which can be understood as \emph{semantic} categories; e.g., instances of the \emph{calculator}, \emph{food box} or \emph{cereal box} categories all share a very similar shape, namely the shape of a \emph{box}, i.e., a shape categorization approach should not differentiate these instances; rather, it should overcome these shape variations and classify them to their common shape category \emph{box}.}~\cite{5980382} (WD) and the \emph{Object Segmentation Database}~\cite{6385661} (SD), are propagated through $\mathcal{HE}$ without retraining; note that all three datasets are sampled from different distributions, as explained above and illustrated in Fig.~\ref{fig:eval:instance_variety}. In order to account for ambiguous classifications due to sample appearance variations in different datasets, and to allow the spectrum of responses for an unknown instance to be analyzed, an instance $g^o$ is not only classified by the predicted unary label according to Eq.~\ref{eq:hierarchical_clique_hiearchy_prediction_mv}; in this experiment, we also represent $g^o$ as the set of category responses $g^o\mathrm{=}\{r_1, r_2,...\}$ (see Eq.~\ref{eq:stimuli_response_space}), where $r_i$ is the response of label $y_i$ ($y_i\in \mathcal{Y} \wedge |\mathcal{Y}|\mathrm{=}7$, given seven categories).
\begin{equation} \label{eq:stimuli_response_space} \begin{split} r_i = \delta(g^o,y_i) = \frac{\sum^{|\mathcal{D}|}_{j=1} \gamma^{\mathcal{H}_j}(g^o, y_i)}{|\mathcal{D}|} \end{split} \end{equation} To infer the actual degree of response w.r.t.~$y$, we do not normalize over $\mathcal{Y}$. Each label response $r_i$ can be interpreted as a \emph{stimulus} which is learned in a data-driven manner from the supervisedly grouped set of samples, i.e., the training set of our OSCD dataset. As a result, each instance is represented in a $|\mathcal{Y}|$-dimensional space encompassing stimuli. This procedure can be denoted as \emph{soft-classification}. In this way, the generalization capability can be investigated in the created $|\mathcal{Y}|$-dimensional continuous space, which allows similarities among instances to be observed, in contrast to a hard-classification, where only a unary label is associated with an instance. In Fig.~\ref{fig:eval:interCatDist}, the mean distances (cell color) and standard deviations (annotation within cell) are shown among the stimuli of dataset instances of the respective category. \begin{figure}[tb] \centering % \subfigure[OSCD dataset]{\label{fig:eval:interCatDistOSCD}\includegraphics[height=0.34\linewidth]{oscd_std_only}} \subfigure[WD dataset]{\label{fig:eval:interCatDistWD}\includegraphics[height=0.31\linewidth]{wd_std_only}} \subfigure[SD dataset]{\label{fig:eval:interCatDistSD}\includegraphics[height=0.32\linewidth]{sd_std_only}} \subfigure[OSCD $\cup$ WD $\cup$ SD datasets combined]{\label{fig:eval:interCatDistCombined}\includegraphics[height=0.34\linewidth]{oscd_wd_sd_std_only}} \caption{Inter-category distances based on $\mathcal{HE}$ stimuli. $\mathcal{HE}$ is trained once with the OSCD training set. For each category, the mean Manhattan distances are computed among all labeled instances. Distances among instances of the respective dataset(s) are scaled ($[0,1]$).
Each cell is colored according to the mean distance and annotated with the \emph{standard deviation} of the distances.} \label{fig:eval:interCatDist} \end{figure} It provides a general indication of the coherency among stimuli of different datasets. In general, the categories are distinctive: as expected, the categories of the OSCD testing set are the most discriminative in comparison to WD and SD. Nevertheless, instances or parts thereof may appear similar; for instance, in Fig.~\ref{fig:eval:interCatDistCombined} it can be observed that \emph{balls} and \emph{teddies} share similarity, which can be explained by the fact that the head or torso of a teddy (OSCD) and a ball, including a (prolate spheroidal-shaped) football (WD), may appear similar. Note that complex-shaped instances (e.g., \emph{teddy}) may represent compositions of primitive-shaped instances (e.g., \emph{cylinder} or \emph{sphere}). In order to visualize the $|\mathcal{Y}|$-dimensional space of stimuli, we use the t-SNE~\cite{ictdbid:2777} embedding method to reduce the dimensionality to two. We denote this two-dimensional space as the stimuli-response space $\mathcal{SR}$. The embedding is performed in an unsupervised, i.e., label-agnostic, manner. Given $\mathcal{SR}$, instances from WD, SD and the testing set of our OSCD dataset are projected into this space. Fig.~\ref{fig:oscd_lai_osd_tsne} shows the space combining instances from the OSCD, WD and SD datasets, whereas, for the sake of readability and illustration purposes, Fig.~\ref{fig:lai_tsne} and Fig.~\ref{fig:osd_tsne} focus on the respective datasets. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{oscd_lai_oscd_tsne_bg_shaded_plus_density_annotation} \caption{A two-dimensional embedded space $\mathcal{SR}$ showing instances from the OSCD testing set, the \emph{Washington RGB-D Object Dataset} (WD) and the \emph{Object Segmentation Database}~(SD).
In total 603 instance scans are extracted (WD: 335, SD: 154, OSCD testing set: 114); further details about the compilation of instances from WD can be found in Fig.~\ref{fig:lai_tsne} and from SD in Fig.~\ref{fig:osd_tsne}, respectively. Exemplary instances are annotated with their respective scan and category response.} \label{fig:oscd_lai_osd_tsne} \end{figure} \begin{figure}[t] \centering \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[width=1.0\linewidth]{lai4_tsne_bg_shaded_plus_density_annotation3} \end{minipage} \begin{minipage}[b]{0.39\linewidth} \centering \scriptsize \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1.0} \begin{tabular}[b]{ l|| l |l|l |l ||l } \multicolumn{6}{c}{} \\ &\multicolumn{2}{c|}{WD instance} &\multicolumn{2}{c||}{OSCD testing}& \\ &\multicolumn{2}{c|}{associations }&\multicolumn{2}{c||}{set instances}& \ $\Sigma$ \\ Cat. & Scans &\ \#& Scans& \ \#\\ \hline \hline \emph{sack} & \emph{food bag} 1-8 & 40&\emph{sack} 0-17&18&58\\ \hline \emph{can} & \emph{food can} 1-14 &70&\emph{can} 0-18&19&\multirow{2}{*}{119}\\ & \emph{soda can} 1-6 & 30&&&\\ \hline \emph{box} & \emph{cereal box} 1-5 & 25&\emph{box} 0-18&19&\multirow{2}{*}{104}\\ & \emph{food box} 1-12 & 60&&&\\ \hline \emph{ball} & \emph{ball} 1-7 & 35&\emph{ball} 0-9&10&\multirow{3}{*}{85}\\ & \emph{lime} 1-4 & 20&&&\\ & \emph{orange} 1-4 & 20&&&\\ \hline \emph{plate}& \emph{plate} 1-7 &35&\emph{plate} 0-19&20&55 \\ \hline \hline $\Sigma$& - & 335 & - & 86 & 421 \end{tabular} \includegraphics[height=0.8\linewidth]{lai4_tsne_hist} \end{minipage} \caption{A two-dimensional embedded space $\mathcal{SR}$ showing instances (large circles) from the OSCD testing set and instances (small circles) from the \emph{Washington RGB-D Object Dataset}~\cite{5980382} (WD).
Remarks: teddies and amphoras are excluded in this visualization only due to the non-availability of corresponding instances in WD; for each instance the $1^{st}$ to $5^{th}$ point cloud scans of the first video sequence are selected; for visualization purposes only, instances are colored according to the best-fit OSCD category. In total 335 instance scans are extracted from WD. Exemplary instances are annotated with their respective scan and category response. The bar chart presents the sample distribution according to their supervisedly given label within each region.} \label{fig:lai_tsne} \end{figure} \begin{figure}[t] \centering \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[width=1.0\linewidth]{osd_tsne_bg_shaded_plus_density_annotation5} \end{minipage} \begin{minipage}[b]{0.39\linewidth} \centering \scriptsize \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1.0} \begin{tabular}[b]{ l|| l |l|l |l ||l } \multicolumn{6}{c}{} \\ &\multicolumn{2}{c|}{SD instance} &\multicolumn{2}{c||}{OSCD testing}& \\ &\multicolumn{2}{c|}{associations }&\multicolumn{2}{c||}{set instances}& \ $\Sigma$ \\ Cat. & Scans &\ \#&Scans&\ \#\\ \hline \hline \emph{can} & \emph{learn} 33-44 &38&\emph{can} 0-18&19&\multirow{2}{*}{99}\\ & \emph{test} 31-42 & 42&&&\\ \hline \emph{box} & \emph{learn} 0-16 & 38&\emph{box} 0-18&19&\multirow{2}{*}{93}\\ & \emph{test} 0-15 & 36&&&\\ \hline \hline $\Sigma$& - & 154 & - & 38 & 192 \end{tabular} \includegraphics[height=0.8\linewidth]{osd_tsne_hist} \end{minipage} \caption{A two-dimensional embedded space $\mathcal{SR}$ showing instances (large circles) from the OSCD testing set and instances (small circles) from the \emph{Object Segmentation Database}~\cite{6385661}~(SD). Remarks: for visualization purposes, instances are colored according to the best-fit OSCD category. In total 154 instance scans are extracted from scenes of SD tagged as ``Boxes'' and ``Cylindric Objects''.
Exemplary instances are annotated with their respective scan and category response. The bar chart presents the sample distribution according to their supervisedly given label within each region.} \label{fig:osd_tsne} \end{figure} Since the actual unary, supervisedly given instance labels are not considered when creating $\mathcal{SR}$, the space allows an objective analysis of the similarity among instances as well as of the characteristics of locations and regions in $\mathcal{SR}$. Each projected instance in $\mathcal{SR}$ can be interpreted in this context as a \emph{prototype} which contributes to spanning the $\mathcal{SR}$ space. For visual illustration, regions in $\mathcal{SR}$ are colored according to their dedication to a certain label by exploiting the projected instances as anchor points in space. To this end, a uniform fine grid is created within the 2D $\mathcal{SR}$ space and for each cell in the grid the $k$-nearest instances are determined (e.g., $k\mathrm{=}$5\% of the total number of instances used in the respective experiment); the label of the majority of the $k$ determined instances determines the label of the cell; each cell is weighted ($[0,1]$) and the weight is visually depicted in the form of cell opacity. The weight represents the observed proportion of instances associated with the majority label compared to the other labels; the opacity ranges from low to high proportion $[$transparent (white)$\mathrm{=}0$, opaque (solid majority label color)$\mathrm{=}1$$]$. In the context of \emph{Cognitive Science}, specifically in the field of representation architectures, $\mathcal{SR}$ can be interpreted as a \emph{Conceptual Space}~\cite{2000:CSG:518647,zenker2015} where points (prototypes) in space represent multidimensional vectors of \emph{stimuli} and regions in space represent \emph{concepts}. These stimuli are often denoted as \emph{Quality Dimensions} and can be interpreted as the stimuli responses of $\mathcal{HE}$.
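The region-coloring procedure just described can be sketched as follows (an illustrative Python sketch on synthetic $\mathcal{SR}$ coordinates; the Euclidean metric, the grid resolution and all data are assumptions for illustration, since the text does not fix them):

```python
import numpy as np

# Synthetic 2-D SR coordinates and labels of the projected instances;
# in the paper these come from the t-SNE embedding of the HE stimuli.
rng = np.random.default_rng(1)
points = rng.random((200, 2))
labels = rng.integers(0, 7, size=200)

k = max(1, int(0.05 * len(points)))        # k = 5% of the instances

# Uniform fine grid over the unit square; for each cell take the majority
# label of the k nearest instances and use the observed proportion of that
# label among the k neighbours as the cell weight (rendered as opacity).
res = 50
centers = (np.arange(res) + 0.5) / res
cell_label = np.empty((res, res), dtype=int)
cell_weight = np.empty((res, res))
for ix, x in enumerate(centers):
    for iy, y in enumerate(centers):
        d = np.hypot(points[:, 0] - x, points[:, 1] - y)
        nearest = labels[np.argsort(d)[:k]]
        counts = np.bincount(nearest, minlength=7)
        cell_label[ix, iy] = counts.argmax()
        cell_weight[ix, iy] = counts.max() / k
```

A weight of 1 then corresponds to a cell whose $k$ nearest neighbours all carry the same label (solid color), while contested border cells fade towards transparency.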
A further property can be observed: $\mathcal{HE}$ stimuli of similar instances appear close together in $\mathcal{SR}$ in comparison to dissimilar ones; the majority of instances of the respective label are closest (see Fig.~\ref{fig:eval:interCatDist}) or lie within the same region and form groups, see Fig.~\ref{fig:oscd_lai_osd_tsne}. In contrast to a conventional evaluation analyzing discrete and unary classifications to specific labels (\emph{hard-classification}), the continuous space $\mathcal{SR}$ shown in Fig.~\ref{fig:lai_tsne} and Fig.~\ref{fig:osd_tsne} allows us to learn about the regional characteristics and the relations among locations in $\mathcal{SR}$ and instances of the three datasets. A main observation is that when instances from different datasets are propagated through $\mathcal{HE}$, the resulting $\mathcal{HE}$ responses show \emph{coherency}. Instances of all evaluated datasets together form interrelated and coherent groups -- see the uniformly colored regions in Fig.~\ref{fig:lai_tsne} and Fig.~\ref{fig:osd_tsne} as well as the variety of instances within each region in the form of the bar chart. In the case of the WD dataset, the bar chart in Fig.~\ref{fig:lai_tsne} and the confusion matrix in Fig.~\ref{fig:eval:lai_cfmat:dist} show that $76\%$ of the instances within the \emph{sack} region are \emph{sacks} ($80\%$ in the case of \emph{cans}, $100\%$ for \emph{boxes}, $78\%$ for \emph{balls} and $100\%$ for \emph{plates}).
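The soft-classification of Eq.~\ref{eq:stimuli_response_space} and the distance statistics of Fig.~\ref{fig:eval:interCatDist} can be sketched together as follows (an illustrative Python sketch on synthetic placeholder responses; the array shapes, the number of hierarchies and the grouping are assumptions):

```python
import numpy as np

# Synthetic per-hierarchy responses gamma^{H_j}(g^o, y_i): 105 instances,
# |D| = 25 hierarchies, |Y| = 7 labels; 15 instances per category.
rng = np.random.default_rng(0)
gamma = rng.random((105, 25, 7))
labels = np.repeat(np.arange(7), 15)

# Stimuli vector per instance (r_i): average over the |D| hierarchies;
# no normalization over the label set Y.
stimuli = gamma.mean(axis=1)                      # shape (105, 7)

# Mean/std of pairwise Manhattan (L1) distances per category pair.
mean_d = np.zeros((7, 7))
std_d = np.zeros((7, 7))
for a in range(7):
    for b in range(7):
        sa, sb = stimuli[labels == a], stimuli[labels == b]
        d = np.abs(sa[:, None, :] - sb[None, :, :]).sum(axis=2)
        if a == b:                                # drop zero self-distances
            d = d[~np.eye(len(sa), dtype=bool)]
        mean_d[a, b], std_d[a, b] = d.mean(), d.std()

# Scale mean distances to [0, 1] (cell color); std is the cell annotation.
mean_scaled = (mean_d - mean_d.min()) / (mean_d.max() - mean_d.min())
```

Small diagonal entries of `mean_scaled` together with large off-diagonal entries then indicate compact, well-separated category groups.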
\begin{figure}[t] \centering \subfigure[]{\label{fig:eval:lai_cfmat:dist}\includegraphics[width=0.39\linewidth]{lai4_tsne_region_to_instance}} \subfigure[]{\label{fig:eval:lai_cfmat:assigm}\includegraphics[width=0.38\linewidth]{lai4_tsne_instance_to_region_r}} \caption{Confusion matrices regarding the WD dataset showing the distribution of instances within a region~\usubref{fig:eval:lai_cfmat:dist} and the assignment of instances to particular regions~\usubref{fig:eval:lai_cfmat:assigm} (gray marked cell = no result is evident).} \label{fig:eval:lai_cfmat} \end{figure} In Fig.~\ref{fig:eval:lai_cfmat:assigm} the distribution of instances according to their supervisedly given labels is shown: \emph{ball} instances are found in the regions of \emph{cans} and \emph{sacks}, which can be explained by the fact that \emph{balls} share similar properties with \emph{cans} or \emph{sacks} in 2.5D -- all three shape types may contain bulging and roundish surfaces. A similar observation is made for the SD dataset in Fig.~\ref{fig:osd_tsne} and Fig.~\ref{fig:eval:osd_cfmat}.
\begin{figure}[t] \centering \subfigure[]{\label{fig:eval:osd_cfmat:dist}\includegraphics[width=0.39\linewidth]{osd_tsne_region_to_instance}} \subfigure[]{\label{fig:eval:osd_cfmat:assigm}\includegraphics[width=0.38\linewidth]{osd_tsne_instance_to_region_r}} \caption{Confusion matrices regarding the SD dataset showing the distribution of instances within a region~\protect\usubref{fig:eval:osd_cfmat:dist} and the assignment of instances to particular regions~\protect\usubref{fig:eval:osd_cfmat:assigm}.} \label{fig:eval:osd_cfmat} \end{figure} OSCD instances and instances from other datasets are interrelated and form groups in $\mathcal{SR}$, but they show deviations in their positions: an extreme case can be found for \emph{ball}, see Fig.~\ref{fig:lai_tsne}: OSCD instances are predominantly accumulated in the region ($x \mathrm{\approx} 0.2|y \mathrm{\approx} 0.15$), whereas WD instances are partially scattered in the vicinity of this accumulation. This can be explained by the fact that OSCD instances do not necessarily cover the entire region of the respective shape given the $\mathcal{HE}$ response -- note that $\mathcal{HE}$ is trained in a data-driven manner with a training set that represents a subset of the possible shape appearances of a particular type or label; it thus provides a specific perspective on shape appearances in space. Therefore, instances of other datasets may cover or extend the related regions dedicated to the respective shape. Boundaries among (labeled) regions are continuous, as shapes undergo deformations in space and a supervisedly given label may change at border regions; keep in mind that objects may not be uniquely assignable to the given set of labels due to shape ambiguities, especially at border regions. To illustrate the shape variation within $\mathcal{SR}$, sample locations are annotated and depicted in Fig.~\ref{fig:lai_tsne} and Fig.~\ref{fig:osd_tsne} with the respective point cloud and response.
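The label-agnostic construction of $\mathcal{SR}$ used throughout this section can be sketched as follows (illustrative Python on synthetic stimuli; to keep the sketch dependency-free, a plain PCA projection stands in for the t-SNE embedding actually used in the paper):

```python
import numpy as np

# Synthetic 7-D stimuli vectors for 603 instances (OSCD + WD + SD).
rng = np.random.default_rng(1)
stimuli = rng.random((603, 7))

# Label-agnostic 2-D embedding of the stimuli. The paper uses t-SNE
# (e.g. sklearn.manifold.TSNE); here we project onto the first two
# principal components via SVD as a simple stand-in.
centered = stimuli - stimuli.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
sr = centered @ vt[:2].T                          # (603, 2) SR coordinates

# Rescale to the unit square, matching the normalized axes of the figures.
sr = (sr - sr.min(axis=0)) / (sr.max(axis=0) - sr.min(axis=0))
```

Note that t-SNE preserves local neighbourhoods rather than global distances, which is why groups of similar instances in the figures appear as compact clusters.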
Samples at border regions are obviously not uniquely assignable to a particular label due to the higher shape ambiguity encountered there, which is reflected by a lower response of the supervisedly given label or by responses of multiple labels, as depicted for SD dataset instances in Fig.~\ref{fig:eval:osd_gallery}. \begin{figure}[!tb] \centering \subfigure[Cylindrical instance scans (total: 80) from \emph{learn} 33-44 and \emph{test} 31-42 scenes of the SD dataset with $\mathcal{HE}$ responses (bar chart) and below the scalar \emph{can} response.]{\label{fig:eval:osd_can_gallery} \includegraphics[width=0.47\linewidth]{thumbnail_ranked_gallery_label_2}} \hspace{0.1cm} \subfigure[Box scan instances (total: 74) from \emph{learn} 0-16 and \emph{test} 0-15 scenes of the SD dataset with $\mathcal{HE}$ responses (bar chart) and below the scalar \emph{box} response.]{\label{fig:eval:osd_box_gallery}\includegraphics[width=0.47\linewidth]{thumbnail_ranked_gallery_label_3}} \caption{Instances from the SD dataset ranked in descending order regarding the $\mathcal{HE}$ response of the respective labels \emph{can} and \emph{box}. These instances are extracted from scenes of the SD dataset and can therefore appear partially due to occlusions.} \label{fig:eval:osd_gallery} \end{figure} In Fig.~\ref{fig:eval:osd_gallery} instances of the SD dataset are ranked in descending order according to their response of the supervisedly given label. This ranking reveals the increasing shape degradation; for instance, the first 15 highest-ranked cylindric objects ($18.75\%$ of all cylindric objects in Fig.~\ref{fig:eval:osd_can_gallery}) are represented with a single \emph{can} response (avg. $0.87$ response).
At lower-ranked instances, higher ambiguities can be observed: among others, \emph{bowls} and \emph{pots} are found, where a bowl can be interpreted as a mixture of cylinder and cone but was labeled as a cylindric object in the SD dataset (in Fig.~\ref{fig:osd_tsne} bowls are located at the border regions of the \emph{can} group). The box ranking in Fig.~\ref{fig:eval:osd_box_gallery} shows a similar pattern. Notably, boxes experience substantial occlusions but still show a high \emph{box} response, which can be explained by the fact that box-related features (a partial but distinctive box constellation of segments, such as flat surfaces in $\approx90\ensuremath{^\circ}$ alignment) are mostly unaffected by the occlusion; $93.2\%$ of boxes responded with box as their strongest stimulus. At the lower end, \emph{small} boxes with \emph{curved edges} are observed, which can lead to ambiguity; it is also worth mentioning that sensor noise has a higher impact on smaller objects (signal-to-noise ratio) and may lead to unreliable responses. The previous experimental results in Fig.~\ref{fig:oscd_lai_osd_tsne}, \ref{fig:lai_tsne} and \ref{fig:osd_tsne} have shown coherent responses in the respectively generated $\mathcal{SR}$ spaces for different datasets. Given the combined instances from the OSCD testing set, the \emph{Washington RGB-D Object Dataset} (WD) and the \emph{Object Segmentation Database}~(SD), results are shown in Fig.~\ref{fig:eval:combined_db_tsne} to illustrate the effect of the number of \emph{description} and \emph{motif levels} applied in $\mathcal{HE}$ on the $\mathcal{SR}$ space.
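The ranking and the strongest-stimulus statistic used above can be sketched as follows (illustrative Python on synthetic response vectors; the label order and all numbers are assumptions, not the paper's data):

```python
import numpy as np

# Synthetic response vectors for 74 box instances over the seven labels;
# the column order [sack, can, box, ball, plate, teddy, amphora] is assumed.
rng = np.random.default_rng(3)
responses = rng.random((74, 7))
BOX = 2

# Rank instances in descending order of their 'box' response, as in the
# gallery figures, and average the responses at the top of the ranking.
order = np.argsort(-responses[:, BOX])
ranked = responses[order, BOX]
top_avg = ranked[:15].mean()

# Fraction of instances whose strongest stimulus equals the supervised
# label (the paper reports 93.2% for real SD boxes; here it is synthetic).
frac_strongest = (responses.argmax(axis=1) == BOX).mean()
```

The same two quantities reproduce, on real responses, both the gallery ordering and the reported strongest-stimulus percentage.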
\begin{figure}[!tb] \centering \subfigure[Embedded $\mathcal{SR}$ space]{\label{fig:eval:complete_tsne_cf} \includegraphics[width=0.45\linewidth]{dict_vs_clique_responses_tsne} \includegraphics[trim=0 -10 0 0,width=0.1\linewidth]{labels_legend} } \subfigure[Distribution of instances within a region.]{\label{fig:eval:complete_tsne_cf_reg_in} \includegraphics[width=0.46\linewidth]{dict_vs_clique_responses_tsne_region_instance} } \subfigure[Assignment of instances to particular regions.]{\label{fig:eval:complete_tsne_cf_in_reg} \includegraphics[width=0.46\linewidth]{dict_vs_clique_responses_tsne_instance_region} } \caption{ A two-dimensional embedded space $\mathcal{SR}$ \usubref{fig:eval:complete_tsne_cf} generated from the OSCD testing set, \emph{Washington RGB-D Object Dataset} (WD) and \emph{Object Segmentation Database}~(SD) instances for particular description level and motif level $\mathcal{HE}$ configurations. Given $\mathcal{SR}$, the respective distribution of instances within a region \usubref{fig:eval:complete_tsne_cf_reg_in} and the assignment of instances to a region \usubref{fig:eval:complete_tsne_cf_in_reg} are shown -- specifically focusing on alternative dataset (WD, SD) labels. Each cell (description vs. motif level) within \usubref{fig:eval:complete_tsne_cf}, \usubref{fig:eval:complete_tsne_cf_reg_in} and \usubref{fig:eval:complete_tsne_cf_in_reg} is analogously generated according to the description and motif level configuration: for \usubref{fig:eval:complete_tsne_cf} as Fig.~\ref{fig:oscd_lai_osd_tsne}, for \usubref{fig:eval:complete_tsne_cf_reg_in} as Fig.~\ref{fig:eval:lai_cfmat:dist} (including order of labels on axes) and for \usubref{fig:eval:complete_tsne_cf_in_reg} as Fig.~\ref{fig:eval:lai_cfmat:assigm} (including order of labels on axes). For \usubref{fig:eval:complete_tsne_cf_reg_in} and \usubref{fig:eval:complete_tsne_cf_in_reg}, the axis label within a cell is shown below the main axis label in parentheses.
} \label{fig:eval:combined_db_tsne} \end{figure} Confusion matrices are generated for the description and motif level configurations. It is observable that the discrimination, which can be interpreted as distinguishable borders among homogeneous regions, increases with the number of description and motif levels until a saturation is reached. This tendency was similarly observed in the previous results shown in Fig.~\ref{fig:eval:hch_dict_vs_clique_error}; however, Fig.~\ref{fig:eval:complete_tsne_cf} provides a further insight into the distribution of the dataset instances. For instance, few description or motif levels lead to a distorted definition of regions, which may be caused by under-fitting, whereas higher levels provide more distinctive separations. The results in Fig.~\ref{fig:eval:complete_tsne_cf_reg_in} and Fig.~\ref{fig:eval:complete_tsne_cf_in_reg} are based on the generated $\mathcal{SR}$ spaces shown in Fig.~\ref{fig:eval:complete_tsne_cf}. A blue (100\%) diagonal in each cell of a particular description and motif level configuration represents a perfect discrimination -- note that $\mathcal{SR}$ is generated unsupervisedly (label-agnostic) and purely based on object shape appearances. Following the diagonal from lower to higher levels, one can observe an increasing discrimination expressed by a polarization of the rates to 100\%~(blue) at the diagonals and 0\%~(red) at the non-diagonals, or even no distortion, marked as gray cells. For instance, at description level 1 and motif level 1, $70.8\%$ \emph{boxes} are present within the \emph{box} region, whereas at description level 4 and motif level 5, $97.1\%$ \emph{boxes}, $2.4\%$ \emph{cans} and $0.6\%$ \emph{plates} are present (see Fig.~\ref{fig:eval:complete_tsne_cf_reg_in}).
On the other hand, at description level 1 and motif level 1, $67.1\%$ of the \emph{ball} instances are assigned to the \emph{ball} region, whereas at description level 4 and motif level 5, $88.2\%$ are assigned to the ball region (see Fig.~\ref{fig:eval:complete_tsne_cf_in_reg}). \subsection{Evaluation Summary} Our conducted evaluation can be divided into three parts. In the first part, Sec.~\ref{sec:exp:dict} to Sec.~\ref{sec:exp:hch}, experiments were conducted to provide insights into the process of the propagation of objects through the hierarchical levels of $\mathcal{HE}$. Thereby, intermediate steps of the internal $\mathcal{HE}$ components were analyzed. In the second part, Sec.~\ref{sec:comparison}, a comparison with other approaches was conducted. Each approach was \emph{separately} trained and tested with our OSCD dataset. In this experiment a unary classification task was performed, which allows comparing the final classification capability of the respective approaches under the same conditions. Such an experiment compares the classification accuracy under supervision but neglects the analysis of an approach's generalization capability. In the last evaluation part, Sec.~\ref{sec:alter_db}, other datasets were applied to our approach. In this experiment we did not fine-tune $\mathcal{HE}$ for each dataset, nor did we conduct separate experiments for each dataset with a retrained $\mathcal{HE}$. The proposed approach was confronted with instances from all evaluated datasets (OSCD, WD and SD) in the same experiment, and $\mathcal{HE}$ was initially trained once with the training set of our OSCD dataset. This provides valuable feedback about the actual generalization capability of the proposed approach, which is a crucial property for applications in real-world scenarios.
For instance, in the field of robotic logistics \cite{JonschkowskiEHM16,7553531}, instance appearances may encounter strong variations (as if drawn from different distributions) due to unstructured and confined spaces, clutter, or even the limited maneuverability or kinematic constraints of the robot platform, see Fig.~\ref{fig:robot_scene}. \begin{figure}[!tb] \centering \subfigure[Powerball-Husky platform]{\label{fig:robot1Anno}\includegraphics[height=0.28\textwidth]{husky_short_anno_comp}} \subfigure[Platform located in front of scene.]{\label{fig:robot1}\includegraphics[height=0.28\textwidth]{robot_comp}} \subfigure[RGB scene observed from platform's perspective.]{\label{fig:robot2}\includegraphics[height=0.28\textwidth]{RGB_comp}} \subfigure[Point cloud scene observed from platform's perspective.]{\label{fig:robot3}\includegraphics[height=0.38\textwidth]{RGBD_comp}} \subfigure[Object candidate segmentation~\cite{MuellerBirkIcra2016} and classification results ($\mathcal{HE}$)]{\label{fig:robot4}\includegraphics[height=0.3\textwidth]{Result_comp}} \caption{Illustration of a visual perception task with a Powerball-Husky platform of the Robotics Group (Jacobs University Bremen) in an unstructured scene where instances are noisy and partially perceived due to viewpoint and maneuverability limitations caused by the platform's kinematic constraints and setup conditions. Object candidates (randomly colored) are segmented using our previous work~\cite{MuellerBirkIcra2016}. The shape category labels of the classification results of the work presented here ($\mathcal{HE}$) are colored as in Fig.~\ref{fig:vw_dict_distri}.} \label{fig:robot_scene} \end{figure} Therefore, the performed experiment, in which our approach is confronted with instances drawn from different distributions in the form of different datasets (see Fig.~\ref{fig:eval:instance_variety}), provides insights into the generalization capability beyond single and independent dataset evaluations.
\section{Conclusion} \label{sec:conclusion} We presented a part-based object shape categorization approach that focuses on two aspects. \textbf{i)} A quantization of the \emph{description space} with a \emph{Hierarchical Dictionary} allows point cloud segments to be classified in an unsupervised way into symbols in a coarse-to-fine manner, i.e., from basic surface primitives to fine-grained facets of individual object instances. Consequently, this makes it possible to abstract and analyze unstructured point cloud data in a symbolic manner. \textbf{ii)} Our hierarchical representation of object shape decompositions with a \emph{Shape Motif Hierarchy} reveals, w.r.t.~the \emph{shape space}, topological shape patterns in the form of motifs by gradually encoding the decompositions in a local-to-global manner, i.e., from single segments over compositions of segments to a single composition that represents an entire object. Furthermore, part-based shape reasoning approaches can only perform as well as the coherency and stability of the extracted parts permit. Parts have to be coherently and stably extractable from unstructured and noisy scenes considering various objects of different shape complexity. The shape motif hierarchy can alleviate this segmentation problem since the propagation of fine-granular object segments through the hierarchy allows segment compositions to be observed at different granularity levels. These segments composed at different levels can be interpreted as object segmentation results at different topological levels. Consequently, the shape motif hierarchy encompasses a beneficial representation within the context of segmentation and shape reasoning. The combination of both, the abstraction from point cloud data to symbols on multiple levels and the hierarchical decomposition of shapes on multiple levels, leads to a representation in the form of a \emph{Shape Motif Hierarchy Ensemble}, which discriminatively encodes shape properties of object categories.
The effectiveness of the proposed representation of shape information is reflected in our experiments, in which a classification error of $9.5\%$ was achieved. We can interpret the classification of objects as a discretization process from the continuous point cloud space, in which objects are represented, to a supervisedly determined set of labels which are supposedly uniquely assigned to objects. Keeping in mind that the categorization of shapes bears uncertainty due to the absence of an explicit object model, a unique object-to-label assignment is often not feasible due to shape ambiguities, in comparison to instance recognition tasks where an explicit model-to-predict is given beforehand. Thus the generalization in shape categorization is even more challenging. We believe that the meaning of an unknown object appearance can be inferred by projecting the unknown object into a similarity space where known shape prototype appearances serve as anchors, which subsequently provide a perspective on the unknown object. The stimuli generated by a trained $\mathcal{HE}$ from various object observations can be exploited to create this space ($\mathcal{SR}$). Consequently, $\mathcal{HE}$ can function as a \emph{descriptor} in the form of stimuli retrieved from unknown object instances that can be projected into $\mathcal{SR}$ for reasoning purposes. Going beyond the shape categorization task, reasoning about shapes, where \emph{commonalities} in the form of similar $\mathcal{HE}$-\emph{descriptions} lead to similar behavior, finds its application in many robotic areas ranging from household to industry, such as the generation of grasping primitives for similar object appearances in manipulation~\cite{6696928}, finding substitutes for currently absent objects~\cite{DBLP:conf/icra/AbelhaGS16}, etc. Further on in the evaluation, an a-priori trained $\mathcal{HE}$ was confronted with object instances from alternative datasets which were not known at the training phase.
As a result, the effectiveness of the proposed approach was demonstrated: a distinction of instances w.r.t.\ shape categories was observed, which supports the generalization capability of the proposed shape motif hierarchy ensemble under the heterogeneous conditions present in the different datasets. \section*{Acknowledgement} The research leading to the results presented here has received funding from the European Community's Horizon 2020 Framework Programme (H2020-EU.3.2.) within the project (ref.: 635491) ``Effective dexterous ROV operations in presence of communication latencies (DexROV)''. \bibliographystyle{spmpsci}
\section{Introduction}\label{sec intro} This paper is devoted to Hermite interpolation with surfaces possessing Pythagorean normal vector fields (PN surfaces). These surfaces were introduced by \cite{Po95}. We can understand them as surface counterparts to the Pythagorean hodograph (PH) curves first studied by \cite{FaSa90}. PN surfaces have rational offsets and thus provide an elegant solution to many offset-based problems occurring in various practical applications. In particular, in the context of computer-aided manufacturing, the tool path does not have to be approximated and can be described exactly in the NURBS form, which is nowadays a standard format of CAD/CAM applications. For a survey of the theory and applications of PH/PN objects, see \cite{Fa08} and references therein. Many interesting theoretical questions related to this subject have been studied in the past years. Let us mention in particular the analysis of the geometric and algebraic properties of the offsets, such as the determination of the number and type of their components and the construction of their suitable rational parameterizations \citep{ArSeSe97,ArSeSe99,Ma99,SeSe00,VrLa10}. Despite natural similarities between the PH curves and the PN surfaces, the two classes of Pythagorean objects also exhibit some important differences. For example, the set of all polynomial PH curves within the set of all rational PH/PN curves was exactly identified in \citep{FaPo96}. On the other hand, for PN surfaces only the rational ones are described explicitly, using a dual representation, and the subset of polynomial ones has not been characterized yet. Polynomial solutions of the Pythagorean condition in the surface case were first studied in \citep{LaVr11} for cubic parameterizations, and recently an approach based on bivariate polynomials with quaternion coefficients was presented by \cite{KoKrVi16}.
A survey discussing rational surfaces with rational offsets and their modelling applications can be found in \citep{KrPe10}. The previous problem is also strongly related to construction techniques for PN surfaces, in particular to Hermite interpolation, which is the main topic of this paper. There exist many Hermite interpolation results for polynomial and rational PH curves yielding piecewise curves of various continuity, see \citep{Fa08,KoLa14}. Concerning direct algorithms for interpolation with PN surfaces, the situation is different. By `direct' we mean in this context the construction of the object together with its PN parameterization. Indeed, some constructions for special surfaces, which become PN only after a suitable reparameterization, have been designed. For instance, in \cite{BaJuKoLa08} a method was designed for the construction of the exact offsets of quadratic triangular B\'{e}zier surface patches, which are in fact PN surfaces. However, their PN parameterizations were obtained via a certain reparameterization. A similar approach based on reparameterization was also used in the paper \citep{JuSa00} devoted to surfaces with linear normals \citep{Ju98}, which generally admit non-proper PN parameterizations, see \cite{VrLa14a} for more explanations. First, we emphasize that our method requires that the given points being interpolated are arranged in a rectangular grid; for further details about quadrilateral mesh generation and processing, including surface analysis and mesh quality, simplification, adaptive refinement, etc., we refer to the survey paper \citep{BoLePiPuSiTaZo13} and references therein. Next, unlike the approaches presented in the papers cited in the previous paragraph, we plan to interpolate a set of given points $\f p_{ij}$ with the associated normal vectors $\f n_{ij}$ by a rationally parameterized PN surface in a direct way, i.e., without a subsequent reparameterization.
The advantage of such direct PN interpolation techniques is obvious -- no complicated trimming procedure in the parameter space is necessary. As far as we are aware, a similar method was discussed only in \citep{PePo96}, where a surface design scheme with triangular patches on parabolic Dupin cyclides was proposed. In addition, in \citep{Gravesen2007} the interpolation of triangular data using the support function is studied. The Gauss image is first constructed and then the support function interpolating the values and gradients at certain points (the given normals) is determined. Our approach interpolates the normals and the support function simultaneously in the isotropic space. This way we are able to produce local patches with global $G^1$ continuity. At the beginning of this paper, we also briefly address the related open problem of Hermite interpolation with {\em polynomial} PN surfaces. Rather than solving this problem, we demonstrate its complexity. Indeed, by simply counting the required number of free parameters we show that this problem is much harder than in the curve case. We use this consideration as a certain justification for using a dual technique, which leads to rational solutions. Even so, we consider direct Hermite interpolation with polynomial PN surfaces a promising and challenging direction for our future research. The remainder of this paper is organized as follows. Section~2 recalls some basic facts concerning surfaces with Pythagorean normal vector fields. We also briefly sketch how to satisfy the PN condition in the polynomial case, i.e., how to find polynomial PN parameterizations. In Section~3 the representation of PN surfaces in the Blaschke and in the isotropic model is presented. We also discuss the usefulness of these representations for the solution of the interpolation problem. Section~4 is devoted to bicubic Coons patches in the isotropic model and their usage in the construction of smooth PN surfaces.
The method is described, discussed and demonstrated on a particular example in Section~5. Finally, we conclude the paper in Section~6. \section{Surfaces with Pythagorean normals}\label{sec prelim} In this section we recall some fundamental facts about surfaces with rational offsets. \begin{definition}% Let $\av{X}$ be a real algebraic surface in $\mathbb{R}^3$, let $\av{X}^r$ denote the set of regular points of $\av{X}$, and let us denote by $\f n_{\f p}\in \av{S}^2$ a unit normal vector at a point $\f p\in\av{X}^r$. Then the \emph{$d$-offset} $\oo{X}$ of $\cal X$ is defined as the closure of the set $\{\f p\pm d\f n_{\f p}\mid\, \f p\in\av{X}^r\}$. \end{definition} If $\cal X$ is rational and $\f x:\mathbb{R}^2\rightarrow \mathbb{R}^3$ is its parameterization, we may write down a parameterization of the offset explicitly in the form \begin{equation}\label{eq param offset} \f x (u,v)\pm d \f n_{\f x}(u,v), \end{equation} where $\f n_{\f x}(u,v)$ is the unit normal vector field associated to the parameterization $\f x (u,v)$. It turns out that \eqref{eq param offset} is rational if and only if $\f n_{\f x}(u,v)$ is. This is equivalent to the existence of a rational function $\sigma(u,v)$ such that \begin{equation}\label{PNcondition} \|\f{x}_u\times\f{x}_v\|^2=\sigma^2, \end{equation} where $\f{x}_u$ and $\f{x}_v$ are partial derivatives with respect to $u$ and $v$, respectively. \begin{definition}% $C^1$ regular parametric surfaces fulfilling condition \eqref{PNcondition} are called {\em surfaces with Pythagorean normal vector fields} (or {\em PN surfaces}, in short) and condition \eqref{PNcondition} is referred to as the \emph{PN condition} or \emph{PN property}. \end{definition} \medskip PN surfaces were defined by \cite{Po95} as surface analogues of Pythagorean hodograph (PH) curves, which are distinguished by the PH condition $\|\f{x}'(t)\|^2=\sigma(t)^2$. These curves were introduced as planar polynomial objects.
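As a quick numerical illustration of the PN condition \eqref{PNcondition}, consider the standard rational parameterization of the unit sphere (it reappears in Section~\ref{PN_isotrop}); here one can take $\sigma=4/(1+u^2+v^2)^2$. The following pure-Python sketch (the helper names are ours, not from the text) approximates the partial derivatives by central differences:

```python
from math import isclose

def sphere(u, v):
    # rational (stereographic) parameterization of the unit sphere
    d = 1 + u*u + v*v
    return (2*u/d, 2*v/d, (1 - u*u - v*v)/d)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def partial(f, u, v, i, h=1e-6):
    # central-difference approximation of f_u (i=0) or f_v (i=1)
    p, m = (f(u + h, v), f(u - h, v)) if i == 0 else (f(u, v + h), f(u, v - h))
    return tuple((pk - mk) / (2*h) for pk, mk in zip(p, m))

def normal_norm2(u, v):
    n = cross(partial(sphere, u, v, 0), partial(sphere, u, v, 1))
    return sum(c*c for c in n)

# PN condition: ||x_u x x_v||^2 = sigma^2 with sigma = 4/(1+u^2+v^2)^2
for u, v in [(0.0, 0.0), (0.3, -0.7), (1.2, 2.5)]:
    sigma = 4 / (1 + u*u + v*v)**2
    assert isclose(normal_norm2(u, v), sigma**2, rel_tol=1e-5)
```

Of course, for a sphere the rationality of the unit normal field is obvious; the point is only that condition \eqref{PNcondition} can be tested directly on a parameterization.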
Later, the concept was generalized also to rational PH curves, see \citep{Po95}. The interplay between the different approaches to polynomial and rational curves with Pythagorean hodographs was studied by \cite{FaPo96}, and the former were established as a proper subset of the latter by presenting simple algebraic constraints. Unfortunately, more than 20 years after their introduction, the situation is still completely different for PN surfaces. This is also reflected when solving interpolation problems in which points and normal vectors are prescribed as input data. The most natural (and expected) way of handling PN surfaces would probably be similar to the one used for PH curves, see e.g. \cite{Fa08}. Let us illustrate this in the polynomial case. All polynomials satisfying the PH condition $x'(t)^2+y'(t)^2=\sigma(t)^2$ can be described explicitly using polynomial Pythagorean triples. The corresponding PH curve $\f x(t)=(x(t),y(t))$ is then obtained simply by integration. In the surface case, however, we cannot reproduce this approach. It is possible to describe explicitly all polynomial Pythagorean normal fields $\f N(u,v)$ of degree $k$ with polynomial length, i.e., such that $\|\f N(u,v)\|^2$ is a perfect square; cf. \citep{DiHoJu93}.
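For instance (a toy example of ours, not taken from \citep{DiHoJu93}), the quadratic field $\f N(u,v)=(u^2+v^2-1,\,2v,\,-2u)$ has polynomial length, since $\|\f N(u,v)\|^2=(u^2+v^2+1)^2$; this is easily spot-checked with exact rational arithmetic:

```python
from fractions import Fraction as Fr

def N(u, v):
    # a degree-2 Pythagorean normal field (polynomial length)
    return (u*u + v*v - 1, 2*v, -2*u)

def len2(u, v):
    return sum(c*c for c in N(u, v))

# ||N||^2 equals (u^2 + v^2 + 1)^2 identically; check exactly at samples
for u, v in [(Fr(0), Fr(0)), (Fr(1, 2), Fr(-3, 4)), (Fr(7, 3), Fr(2))]:
    assert len2(u, v) == (u*u + v*v + 1)**2
```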
To determine an associated PN parameterization of degree $\ell+1$ in a direct way, we have to find suitable polynomial vector fields \begin{equation} \begin{array}{c} \displaystyle \f P(u,v)= \left( \sum_{i+j\leq\ell} p_{1ij}{u^iv^j}, \sum_{i+j\leq\ell} p_{2ij}{u^iv^j}, \sum_{i+j\leq\ell} p_{3ij}{u^iv^j} \right),\\[4ex] \displaystyle \f Q(u,v)= \left( \sum_{i+j\leq\ell} q_{1ij}{u^iv^j}, \sum_{i+j\leq\ell} q_{2ij}{u^iv^j}, \sum_{i+j\leq\ell} q_{3ij}{u^iv^j} \right), \end{array} \end{equation} which will play the roles of $\f x_u$ and $\f x_v$, respectively, and which satisfy the conditions \begin{equation}\label{eq PN soustava} \begin{array}{rcl} \f P \cdot \f N & = & 0,\\ \f Q \cdot \f N & = & 0,\\[1ex] \displaystyle \frac{\partial\f P}{\partial v} - \displaystyle \frac{\partial\f Q}{\partial u} & = & 0, \end{array} \end{equation} where the third equation expresses the integrability condition. Since a polynomial of degree $n$ in two variables possesses $\binom{n+2}{2}$ coefficients, the problem is now transformed to solving a system of $2\binom{k+\ell+2}{2}+3\binom{\ell+1}{2}$ homogeneous linear equations with $6\binom{\ell+2}{2}$ unknowns $p_{1ij}, p_{2ij}, p_{3ij},q_{1ij},q_{2ij},q_{3ij}$. The corresponding PN parameterization is then obtained as \begin{equation} \f x(u,v)=\int\f {P}(u,v)\,\mathrm{d}u+\f {c}(v), \mbox{ where} \qquad \f{c}(v)=\left[\int \f{Q}(u,v)\,\mathrm{d}v-\int \f{P}(u,v)\,\mathrm{d}u\right]_{u=0}. \end{equation} However, we must stress that not every polynomial Pythagorean normal field $\f N(u,v)$ admits a corresponding polynomial surface $\mathbf x(u,v)$ with $\mathbf x_u \times \mathbf x_v=\f N(u,v)$. For this to hold we need $\ell=k/2$. Nevertheless, in this case the number of unknowns is smaller than the number of equations, so one cannot expect a solution in general.
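The dimension count above is easy to tabulate. The following sketch (the helper names are ours) compares the numbers of equations and unknowns; e.g. for $k=2$ the exact case $\ell=k/2=1$ is overdetermined, and the unknowns outnumber the equations only from $\ell=4$ on:

```python
from math import comb

def equations(k, l):
    # 2 * C(k+l+2, 2): coefficients of P.N = 0 and Q.N = 0 (degree k+l)
    # 3 * C(l+1, 2):   coefficients of the integrability equation (degree l-1)
    return 2*comb(k + l + 2, 2) + 3*comb(l + 1, 2)

def unknowns(l):
    # six bivariate polynomials of degree l
    return 6*comb(l + 2, 2)

assert equations(2, 1) == 23 and unknowns(1) == 18   # overdetermined
assert equations(2, 3) == 60 and unknowns(3) == 60   # balanced
assert unknowns(4) > equations(2, 4)                 # underdetermined
```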
On the other hand, for $\ell$ large enough the system of equations \eqref{eq PN soustava} is solvable. In this case we obviously arrive at a PN parameterization such that $\mathbf x_u \times \mathbf x_v=f(u,v)\f N(u,v)$, where $f(u,v)$ is some non-constant polynomial. \section{PN surfaces in the isotropic model of the dual space}\label{PN_isotrop} As the offsets have a considerably simpler description if we apply the dual approach, we recall in this section the representation of PN surfaces in the Blaschke and isotropic models of the dual space. Moreover, this concept is later used for formulating our Hermite interpolation algorithm. For the sake of brevity, we exclude developable surfaces from our considerations and assume a non-degenerate Gauss image $\gamma(\av{X})$ of all studied surfaces $\av{X}$ in what follows. This means that the {\em duality} $\delta$ maps a surface $\av{X}$ to its {\em dual surface $\av{X}^*$}. Recall that a non-developable surface ${\cal X}: f(\f x)=0$ has the {\em dual representation} \begin{equation}\label{dual} {\cal X}^*:\ F^*(\f n,h)=0, \end{equation} where $F^*$ is a homogeneous polynomial in $\f n=(n_1,n_2,n_3)$ and $h$. If $F^*(\f n,h)=0$ then the set of all planes \begin{equation}\label{tangents} T_{\f n,h}: \f n\cdot \f x=h \end{equation} forms a~system of {\em tangent planes} of $\cal X$ with the normal vectors $\f n$ (i.e., $\av{X}^*$ is considered as the set of tangent planes of $\av{X}$). Furthermore, if we assume $\|\f n\|=1$ then the value of $h$ is the oriented distance of the tangent plane from the origin. Moreover, if the partial derivative $\partial F^*/\partial h$ does not vanish at $(\f n_0,h_0)\in {\cal X}^*$, then \eqref{dual} implicitly defines a function \begin{equation} \f n\mapsto h(\f n) \end{equation} in a certain neighborhood of $(\f n_0,h_0)$.
The restriction of this function to the unit sphere $\av{S}^{2}$ is called the {\em support function} of the primal surface, see \cite{Gravesen2007,AiJuGVSc09,GrJuSi08,LaBaSi10,SiGrJu08} for more details. Let us stress that the dual representation \eqref{dual} does not require the normal vectors $\f n$ to be unit vectors. However, whenever we use the support function, its argument $\f n$ will be assumed to be a unit vector. Conversely, from any smooth real function on (a subset of) $\av{S}^2$ we can reconstruct the corresponding primal surface by the parameterization $\f x_h:\,\av{S}^2\rightarrow\mathbb{R}^3$ \begin{equation}\label{envelope} \f x_h(\f{n})=h(\f{n})\f{n}+\nabla_{\!\av{S}^2}h(\f n), \end{equation} where the vector $\nabla_{\!\av{S}^2}h$ is obtained by embedding the intrinsic gradient of $h$ with respect to $\av{S}^2$ into the space $\mathbb{R}^3$, see \cite{GrJuSi08} for more details. The vector-valued function $\f x_h$ gives a parameterization of the envelope of the set of tangent planes \eqref{tangents}. Hence, all surfaces with a rational support function are rational: it is enough to substitute into \eqref{envelope} any rational parameterization of $\av{S}^2$, for instance ${\f{n}(u,v)=(2u/(1+u^2+v^2),2v/(1+u^2+v^2),(1-u^2-v^2)/(1+u^2+v^2))}$. Furthermore, several important geometric operations correspond to suitable modifications of the support function, see \cite{SiGrJu08}. In particular, the one-sided offset of a surface at distance $d$ is obtained by adding the constant $d$ to the support function $h$. For the use of the support function in computing convolutions (i.e., general offsets) of two hypersurfaces see e.g. \cite{SiGrJu07}. \bigskip In \cite{PoPe98}, rational surfaces with rational offsets were studied in the so-called Blaschke model. Consider in ${\mathbb{R}^4}$ the quadric $\av{B}=\av{S}^2\times\mathbb{R}:\, \|\f n\|^2-1=0$. This quadratic cylinder is called the {\em Blaschke cylinder}.
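As a small sanity check of the envelope formula \eqref{envelope} before passing to the Blaschke model: $h(\f n)=\f c\cdot\f n+d$ is the support function of the sphere of radius $d$ centred at $\f c$, and \eqref{envelope} must reproduce exactly that sphere. In the sketch below (our own illustration) the intrinsic gradient $\nabla_{\!\av{S}^2}h$ is computed as the tangential part of the ambient gradient of a smooth extension of $h$:

```python
from math import sqrt, isclose

def envelope_point(h, grad_h, n):
    # x_h(n) = h(n) n + tangential part of the ambient gradient of h,
    # which equals the intrinsic gradient on the unit sphere
    g = grad_h(n)
    gn = sum(gi*ni for gi, ni in zip(g, n))
    return tuple(h(n)*ni + (gi - gn*ni) for ni, gi in zip(n, g))

# support function of the sphere of radius d centred at c
c, d = (1.0, -2.0, 0.5), 3.0
h = lambda x: sum(ci*xi for ci, xi in zip(c, x)) + d
grad_h = lambda x: c

for n in [(1.0, 0.0, 0.0), (0.0, 0.6, 0.8), (2/3, 2/3, 1/3)]:  # unit vectors
    x = envelope_point(h, grad_h, n)
    assert isclose(sqrt(sum((xi - ci)**2 for xi, ci in zip(x, c))), d)
```

Indeed, for this $h$ the formula collapses to $\f x_h(\f n)=\f c+d\,\f n$, and adding a constant to $h$ shifts the whole surface by $d$ along the normals, i.e., produces the one-sided offset.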
Parallel tangent planes are then represented as points lying on the same generator (a line parallel to the $x_4$-axis) of $\av{B}$. In what follows, the map that sends a point of $\av{B}$ to the corresponding tangent plane, i.e., to a point of $\av{X}^*$, is called the {\em Blaschke mapping} and is denoted $\beta$. \begin{proposition} Any non-developable PN surface is the image of a rational surface on the Blaschke cylinder $\mathcal{B}$ via the mapping $\phi=\delta^{-1}\circ\beta$. \end{proposition} Next, consider the generator line $w$ of $\mathcal{B}$ containing the point $\f w= (0,0,1,0)$. Let $\av{I}$ be the hyperplane $x_3=0$ in ${\mathbb{R}^4}$, which is parallel to $w$. We use the new coordinate functions $y_1=x_1$, $y_2=x_2$, $y_3=x_4$ and define the {\em isotropic mapping} \begin{equation} \iota:\quad \av{B}\setminus w\rightarrow \av{I},\, (x_1,x_2,x_3,x_4) \mapsto (y_1,y_2,y_3)=\frac{1}{1-x_3}(x_1,x_2,x_4). \end{equation} $\av{I}$ is called the {\em isotropic model} of the dual space, see Fig.~\ref{blaschke_isotrop}. Clearly, the tangent planes with the unit normal $(0,0,1)$ do not have an image point in $\av{I}$. Other parallel tangent planes are represented as points on the same line parallel to the $y_3$-axis; these lines are called the {\em isotropic lines}. By a direct computation one obtains \begin{equation} \iota^{-1}:\quad \av{I}\rightarrow \av{B}\setminus w,\, (y_1,y_2,y_3) \mapsto (x_1,x_2,x_3,x_4)=\frac{1}{1+y_1^2+y_2^2}(2y_1,2y_2,y_1^2+y_2^2-1,2y_3).
\end{equation} \begin{figure}[t] \begin{center} \psfrag{x3}{$x_3$} \psfrag{x4}{$x_4$} \psfrag{x1}{$x_1,x_2$} \psfrag{y3}{$y_3$} \psfrag{y1}{$y_1,y_2$} \psfrag{w}{$w$} \psfrag{W}{$\f w=(0,0,1,0)$} \psfrag{B}{$\av{B}$} \psfrag{I}{$\av{I}$} \psfrag{p}{\hspace*{-1ex}$\f p = (\f n,h)$} \psfrag{i}{$\iota(\f p)$} \psfrag{n}{$\f n$} \includegraphics[width=0.45\textwidth]{blaschke2.eps}\hfill \begin{minipage}{0.9\textwidth} \caption{The Blaschke cylinder $\av{B}$ and the isotropic model $\av{I}$ of the dual space. \label{blaschke_isotrop}} \end{minipage} \end{center} \end{figure} All the above mentioned properties and mappings are summarized in the following proposition and diagram, which are essential for our method, see \eqref{eq diagram}. \begin{equation}\label{eq diagram} \begin{array}{c} \scalebox{1.2}{ \xymatrix{ & \av{B} \ar[r]_-{\iota} \ar[d]^{\beta} \ar[ld]_{\phi} & \ar@/^{-3pc}/[lld]_\xi \av{I} \ar[ld]^{\theta} \\ \av{X} \ar[r]^{\delta} & \av{X}^* \vphantom{\bigr)} } } \end{array} \end{equation} \begin{corollary}\label{PNisotrop} Any non-developable PN surface is the image of a rational surface in $\av{I}$ via the mapping $\xi=\phi\circ\iota^{-1}$. \end{corollary} \medskip The effectiveness of the construction presented later is guaranteed by the following continuity result. \begin{proposition}\label{G1primar} Let $\mathbf y$ be a piecewise rational $C^1$ surface in $\av{I}$. If $\mathbf x=\xi(\mathbf y)$ is regular then it is a $G^1$ piecewise rational surface with Pythagorean normals. \end{proposition} \begin{proof} By the regularity of $\mathbf x$ we mean that at every point there is a suitable tangent plane so that the projection of the surface to this plane is a homeomorphism on some neighborhood of this point. This essentially means that we exclude the sharp edges (ridges). Conditions for the regularity are discussed in more detail in the paragraph following this proof. We have $\xi=\phi\circ\iota^{-1}$. 
The mapping $\iota$ (and its inverse) is a diffeomorphism and hence does not change the continuity, so the patch $\iota^{-1}(\mathbf y)$ on the Blaschke cylinder is clearly also $C^1$. The mapping $\phi$ is given by formula \eqref{envelope}, which contains a first-order differentiation. For this reason the surface $\mathbf x=\xi(\mathbf y)$ is in general only~$C^0$. However, the continuity of $\iota^{-1}(\mathbf y)$ in fact describes a continuous variation of a certain well-defined tangent plane. It is shown in \citep{Gravesen2007} that if $\mathbf x$ is regular then the inversion of the projection to this plane is locally $C^1$, which shows the global $G^1$ continuity of $\mathbf x$. \end{proof} As has been noticed earlier \citep{PePo96,Gravesen2007,SiGrJu08,Blazkova2014358}, despite the fact that the variation of the planes in the previous proposition is continuous, the resulting surface may sometimes exhibit sharp edges (ridges). To understand this phenomenon let us first investigate, for the sake of simplicity, two examples of planar curves, see Fig.~\ref{ridges}. In this case $h(\mathbf n)$ is univariate and the function $h+h''$ gives the oriented radius of curvature \citep{SiGrJu08}. If this expression vanishes, the curve exhibits a cusp at which the curvature goes to infinity. The first example is a part of a hypocycloid (Fig.~\ref{ridges}, left), where the support function $h(\mathbf n)$ is perfectly smooth but a cusp still occurs. The second example (Fig.~\ref{ridges}, right) shows two circle segments connected tangentially and producing a sharp jump in the signed curvature. \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[height=0.18\textwidth]{ridges1.eps}& \includegraphics[height=0.18\textwidth]{ridges2.eps}\\ \includegraphics[width=0.33\textwidth]{h1.eps}& \includegraphics[width=0.33\textwidth]{h2.eps}\\ \end{tabular} \begin{minipage}{0.9\textwidth} \caption{Curves with cusps and continuous tangent line.
The support function (blue) is $C^\infty$ on the left and $C^1$ on the right. The value of $h+h''$ (radius of curvature) is displayed in red.\label{ridges}} \end{minipage} \end{center} \end{figure} A similar behavior can be described for surfaces. In this case the critical expression (corresponding to $h+h''$) is the matrix function ${\mathrm{Hess}}_{\mathcal{S}^2}h+hI$, where ${\mathrm{Hess}}_{\mathcal{S}^2}$ denotes the intrinsic Hessian with respect to the unit sphere $\mathcal{S}^2$ (the base of the Blaschke cylinder $\cal B$) and $I$ is the identity. In fact, as shown in \citep{SiGrJu08}, it holds that $d \mathbf x_h={\mathrm{Hess}}_{\mathcal{S}^2}h+hI$, so this quantity allows us to control the features of the resulting surface. Vanishing of $\det\left({\mathrm{Hess}}_{\mathcal{S}^2}h+hI\right)$ or a sign change of one of its eigenvalues indicates the occurrence of a sharp edge. For practical modeling purposes let us remark that a sharp edge typically occurs when data from a surface with parabolic curves are interpolated. In other cases this phenomenon will disappear under subdivision. Furthermore, as our method is based on the construction of Coons patches with boundaries being Ferguson cubics determined by suitably chosen tangent vectors at given points in the isotropic space (see Section~\ref{sec isotr}), it is theoretically also possible to avoid ridges by optimizing the lengths of the tangent vectors (which can serve as free modelling shape parameters, see Section~\ref{sec alg}) with a suitable objective function. One can for example use the function $\int_{\Omega} \det\left({\mathrm{Hess}}_{\mathcal{S}^2}h+hI\right)^{-2}\mathrm{d}A_{\mathcal{S}^2}$, where $\mathrm{d}A_{\mathcal{S}^2}$ is the area element on the sphere and ${\Omega}\subset \mathcal{S}^2$ is the Gauss image of the constructed surface.
In fact $\det\left({\mathrm{Hess}}_{\mathcal{S}^2}h+hI\right)^{-2}=K^2$, and when minimizing its integral we can avoid the ridges at which the Gauss curvature $K$ tends to infinity, see also \citep{Gravesen2007}. \section{Coons patches in the isotropic model and PN patches in the primal space}\label{sec isotr} We will use rectangular patches throughout this paper. In order to construct a piecewise PN interpolation surface $\mathbf x$ in the primal space, we will consider rational patches in the isotropic model. \begin{figure}[t] \begin{center} \psfrag{1}{$\f a_{00}$} \psfrag{2}{$\f c_0(u)$} \psfrag{3}{$\f a_{10}$} \psfrag{4}{$\f d_0(v)$} \psfrag{5}{$\f y(u,v)$} \psfrag{6}{$\f d_1(v)$} \psfrag{7}{$\f a_{01}$} \psfrag{8}{$\f c_1(u)$} \psfrag{9}{$\f a_{11}$} \includegraphics[width=0.45\textwidth]{coons.eps}\hfill \begin{minipage}{0.9\textwidth} \caption{The Coons patch $\f y(u,v)$ determined by \eqref{krivky}. \label{fig coons}} \end{minipage} \end{center} \end{figure} Suppose we are given four $C^1$ continuous boundary curves $\f c_0(u)$, $\f c_1(u)$, $\f d_0(v)$, $\f d_1(v)$ in the isotropic space $\av{I}$ which meet at the four corners \begin{equation}\label{krivky} \f c_0(0)=\f d_0(0)=\f a_{00},\quad \f c_0(1)=\f d_1(0)=\f a_{10},\quad \f c_1(0)=\f d_0(1)=\f a_{01},\quad \f c_1(1)=\f d_1(1)=\f a_{11}, \end{equation} see Fig.~\ref{fig coons}. Then we can apply the construction of the so-called {\em bicubic Coons patch}, see e.g. \cite{Farin1988}. It is a parametric surface $\f y(u,v):\, [0,1]\times[0,1]\rightarrow \mathbb{R}^k$ ($k=3$ in our case) determined by the identity \begin{equation}\label{Coons_bicubic} \big(F_0(u),-1,F_1(u)\big)\cdot \begin{pmatrix} \f a_{00} & \f d_0(v) & \f a_{01}\\ \f c_0(u) & \f y(u,v) & \f c_1(u)\\ \f a_{10} & \f d_1(v) & \f a_{11} \end{pmatrix} \cdot\big(F_0(v),-1,F_1(v)\big)^T =0, \end{equation} where the blending functions $F_0,F_1$ are two of the basic cubic Hermite polynomials used in the construction of the Ferguson cubic, i.e., $F_0(t)=2t^3-3t^2+1$ and $F_1(t)=-2t^3+3t^2$.
We recall that the matrix in \eqref{Coons_bicubic} directly reflects the scheme in Fig.~\ref{fig coons}. Formula \eqref{Coons_bicubic} ensures that the constructed patch interpolates all the given boundary curves $\f c_0(u),\f c_1(u),\f d_0(v),\f d_1(v)$. Note that if only two tangent vectors at every point $\f a_{ij}$ are given instead of whole boundary curves, then one first has to construct boundary curves interpolating these points and vectors by suitable $C^1$ Hermite interpolation curves. \medskip By a direct computation \citep{Farin1988} one can prove the following fundamental property of bicubically blended Coons patches. \begin{lemma}\label{C1Coons} Two bicubic Coons patches sharing the same boundary curve and the same tangent vectors at the end points of the adjacent transversal boundary curves are connected with $C^1$ continuity. \end{lemma} From this lemma follows one of the nicest properties of the bicubic Coons construction: given a network of curves, the interpolating surface that one obtains using the bicubic Coons construction is globally a $C^1$ surface. Combined with Proposition~\ref{G1primar}, we obtain the fundamental theoretical result justifying our Hermite PN construction. \begin{proposition}\label{G1coons} Let $\f{y}$ be a globally $C^1$ continuous network of piecewise rational Coons patches in the space $\av{I}$. Then $\f{x}=\xi(\f{y})$ is a piecewise $G^1$ surface with Pythagorean normals. \end{proposition} The observations and results above allow us to design a simple construction algorithm which is essentially local. More precisely, for a given network of position data (points) and first-order data (normals) we will construct a family of PN patches yielding a piecewise surface which is globally $G^1$ continuous. In particular, a modification of some of these data will modify only the adjacent patches.
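The identity \eqref{Coons_bicubic} can be solved explicitly for $\f y(u,v)$: the patch is the sum of the two ruled interpolants minus the blended corner correction. A minimal sketch (the saddle-like test curves are hypothetical data of ours), verifying that all four boundary curves are reproduced:

```python
from math import isclose

def F0(t): return 2*t**3 - 3*t**2 + 1
def F1(t): return -2*t**3 + 3*t**2

def coons(c0, c1, d0, d1, u, v):
    # bicubically blended Coons patch, solved from the defining identity
    a00, a10, a01, a11 = c0(0.0), c0(1.0), c1(0.0), c1(1.0)
    terms = ((F0(v), c0(u)), (F1(v), c1(u)),       # ruled interpolant in v
             (F0(u), d0(v)), (F1(u), d1(v)),       # ruled interpolant in u
             (-F0(u)*F0(v), a00), (-F0(u)*F1(v), a01),
             (-F1(u)*F0(v), a10), (-F1(u)*F1(v), a11))
    return tuple(sum(w*p[i] for w, p in terms) for i in range(3))

def close(p, q):
    return all(isclose(a, b, abs_tol=1e-12) for a, b in zip(p, q))

# compatible boundary curves of a saddle-like test patch
c0 = lambda u: (u, 0.0, u*(1 - u))
c1 = lambda u: (u, 1.0, -u*(1 - u))
d0 = lambda v: (0.0, v, 0.0)
d1 = lambda v: (1.0, v, 0.0)

for k in range(5):           # the patch reproduces its boundary curves
    t = k / 4
    assert close(coons(c0, c1, d0, d1, t, 0.0), c0(t))
    assert close(coons(c0, c1, d0, d1, t, 1.0), c1(t))
    assert close(coons(c0, c1, d0, d1, 0.0, t), d0(t))
    assert close(coons(c0, c1, d0, d1, 1.0, t), d1(t))
```

The boundary reproduction rests only on $F_0(0)=1$, $F_1(0)=0$, $F_0(1)=0$, $F_1(1)=1$ and the corner compatibility \eqref{krivky}.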
\bigskip Suppose we are given a network of points $\f p_{i,j}$ with associated unit normal vectors $\f n_{i,j}$ in the primal space, where $i\in \{0,1,\ldots, m \}$ and $j\in \{0,1,\ldots, n \}$. Our goal is to construct a set of rational PN patches $\f x_{i,j}(u,v)$ for $i\in \{1,\ldots, m \}$, $j\in \{1,\ldots, n \}$. Each patch will be defined on the interval $[0,1]\times [0,1]$ and will interpolate the corner points $\f p_{i-1,j-1}$, $\f p_{i,j-1}$, $\f p_{i-1,j}$, $\f p_{i,j}$ together with the corresponding normals. Moreover, the union of these patches $\f x=\bigcup_{i,j}\f x_{i,j}$ is required to be globally $G^1$ continuous. Based on the theoretical results from the previous sections, we will construct the patches $\f x_{i,j}(u,v)$ as the images of rational patches $\f y_{i,j}(u,v)$ in the isotropic space $\av{I}$, i.e., \begin{equation} \f x_{i,j}(u,v)=\xi(\f y_{i,j}(u,v)). \end{equation} First, for each point we evaluate the support function $h_{i,j}= \f p_{i,j} \cdot \f n_{i,j}$, cf. \eqref{tangents}. Next we obtain the corresponding network of points in the isotropic space $\av{I}$ as \begin{equation}\label{transfbody} \f a_{i,j}=\iota(\f n_{i,j},h_{i,j}). \end{equation} \medskip In order to apply the bicubic Coons patch construction, we need to construct boundary curves between the points $\f a_{i,j}$. From the identity \begin{equation} \f n(u,v) \cdot \f x (u,v) - h (u,v) =0, \end{equation} it follows that \begin{equation}\label{der_support} \begin{array}{rcl} (\f n_u,h_u)\cdot (\f p,-1) &=& 0,\\ (\f n_v,h_v) \cdot (\f p,-1) &=& 0. \end{array} \end{equation} Hence, any curve $\f c(t)$ lying on the piecewise surface $\f{y}$ such that $\f c(t_0)=\mathbf a_{i,j}$ must satisfy \begin{equation} [J (\iota^{-1}) \f c' (t_0)]\cdot (\f p_{i,j},-1)=0, \end{equation} where $J (\iota^{-1})$ denotes the Jacobi matrix of the mapping $\iota^{-1}$.
It means that the patch possessing $\f a_{i,j}$ as its corner point (in the isotropic space) must be tangent at this point to the 2-plane $\tau_{i,j}$ given as \begin{equation} \tau_{i,j}=\{ \f v:\, [J (\iota^{-1}) \f v]\cdot (\f p_{i,j},-1)=0\}. \end{equation} Let us stress that the original PN interpolation problem (prescribed points and normal vectors, i.e., tangent planes) in the primal space was difficult to solve. Using the methods presented above we have transformed it to the same kind of interpolation problem (prescribed points and normal vectors, i.e., tangent planes), now in the isotropic space~$\av{I}$. However, after the transformation we do not have to care about the PN property -- this property is now obtained for free. \begin{remark}\rm One limitation of the presented method should be noted. As the north pole $\f w$, see Fig.~\ref{blaschke_isotrop}, is the center of the stereographic projection, the points on the unit sphere $\mathcal{S}^2$ (the unit normals $\f n_{i,j}$) must be suitably distributed. In other words, the Gauss image of the interpolating surface cannot contain the point $(0,0,1)$. This means that in some cases a preliminary coordinate transformation is needed. Let us also remark that one can alternatively interpolate the Gauss image (given data $\f n_{i,j}$) and the support function (data $h_{i,j}$ computed from the given data $\f n_{i,j}$ and $\f p_{i,j}$) separately, cf. \citep{Gravesen2007}. Firstly, one interpolates the data $\f n_{i,j}$ by a piecewise rational $C^1$ surface on $\mathcal{S}^2$, see e.g. \citep{AlNeSchu96}. Then using \eqref{der_support} we arrive at the values of the partial derivatives $h_u, h_v$ at the points $\f p_{i,j}$, and computing e.g. one-dimensional Coons patches we arrive at the piecewise $C^1$ function $h(u,v)$. The sought PN parameterization is obtained just by switching from the dual to the primal space.
For the sake of clarity we prefer to apply the isotropic model, as this approach is more illustrative and requires fewer steps. \end{remark} \section{PN patches interpolating given data}\label{sec alg} To start the Coons construction in $\av{I}$, we must first construct curves $\f c_{i,j}(u)$ connecting the points $\f a_{i,j}$ and $\f a_{i+1,j}$ and curves $\f d_{i,j}(v)$ connecting the points $\f a_{i,j}$ and $\f a_{i,j+1}$, which are tangent to the planes $\tau_{i,j}$ at each of their two boundary points. Clearly, any curve fulfilling these constraints may be considered as one of the input boundary curves for scheme \eqref{Coons_bicubic}. For the sake of simplicity we can, for instance, take the Ferguson cubics interpolating with $C^1$ continuity the given points and some suitably chosen associated boundary vectors. Other polynomial curves of low parameterization degree, e.g. parabolic biarcs, can also be easily used. \smallskip We have considered the following boundary vectors at the points of the network in $\av{I}$, which represent a natural choice for the tangent vectors of the boundary curves: \begin{itemize} \item for an inner point $\f a_{i,j}$ (see Fig.~\ref{points_in_net}, green) we have taken the projections of the difference vectors $\f a_{i+1,j}-\f a_{i-1,j}$ and $\f a_{i,j+1}-\f a_{i,j-1}$ into the tangent plane $\tau_{i,j}$; \item for a non-corner point $\f a_{i,0}$, or $\f a_{i,n}$ on the $u$-boundary (see Fig.~\ref{points_in_net}, blue) we have taken the projections of the difference vectors $\f a_{i+1,0}-\f a_{i-1,0}$ and $2(\f a_{i,1}-\f a_{i,0})$, or $\f a_{i+1,n}-\f a_{i-1,n}$ and $2(\f a_{i,n}-\f a_{i,n-1})$, respectively, into the tangent plane $\tau_{i,0}$, or $\tau_{i,n}$, respectively; \item in a similar way, for a non-corner point $\f a_{0,j}$, or $\f a_{n,j}$ on the $v$-boundary (see Fig.~\ref{points_in_net}, blue) we have taken the projections of the difference
vectors $2(\f a_{1,j}-\f a_{0,j})$ and $\f a_{0,j+1}-\f a_{0,j-1}$, or $2(\f a_{n,j}-\f a_{n-1,j})$ and $\f a_{n,j+1}-\f a_{n,j-1}$, respectively, into the tangent plane $\tau_{0,j}$, or $\tau_{n,j}$, respectively; \item for the corner point $\f a_{0,0}$ (see Fig.~\ref{points_in_net}, red) we have taken the projections of the difference vectors $2(\f a_{1,0}-\f a_{0,0})$ and $2(\f a_{0,1}-\f a_{0,0})$ into the tangent plane $\tau_{0,0}$; for the corner point $\f a_{n,0}$ the projections of $2(\f a_{n,0}-\f a_{n-1,0})$ and $2(\f a_{n,1}-\f a_{n,0})$ into the tangent plane $\tau_{n,0}$; for the corner point $\f a_{0,n}$ the projections of $2(\f a_{1,n}-\f a_{0,n})$ and $2(\f a_{0,n}-\f a_{0,n-1})$ into the tangent plane $\tau_{0,n}$; and for the corner point $\f a_{n,n}$ the projections of $2(\f a_{n,n}-\f a_{n-1,n})$ and $2(\f a_{n,n}-\f a_{n,n-1})$ into the tangent plane $\tau_{n,n}$. \end{itemize} Of course, the lengths of the chosen vectors can be easily modified and serve as possible modelling shape parameters. This is useful, for instance, when we want to avoid ridges by optimizing these lengths with respect to a suitable objective function, cf. the final paragraph in Section~\ref{PN_isotrop}. Subsequently, we construct the rational patches $\f y_{i,j}$ using formula \eqref{Coons_bicubic}, and by applying $\xi$ we obtain the patches $\f x_{i,j}$ and thus the sought piecewise smooth PN surface $\f{x}$. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{net.eps}\hfill \begin{minipage}{0.9\textwidth} \caption{A net of points in $\av{I}$ -- corner points (red), non-corner boundary points (blue) and inner points (green). \label{points_in_net}} \end{minipage} \end{center} \end{figure} \medskip In what follows we will show the functionality of the designed algorithm on a particular example.
We will demonstrate the whole technique on one macro-element consisting of nine ordered points with associated normals, i.e., a smooth surface consisting of four PN patches is constructed. For a bigger network the process is the same, as the designed method is strictly local. \begin{example}\rm Let a network of points $\f p_{i,j}$ be given: \begin{equation} (\f p_{i,j})=\left( \begin{array}{ccc} \displaystyle (0,0,0) & \displaystyle \left(0,-\frac{11}{72},-\frac{1}{12}\right) & \displaystyle \left(0,-\frac{2}{9},-\frac{1}{3}\right) \\[2ex] \displaystyle \left(\frac{11}{72},0,\frac{1}{12}\right) & \displaystyle \left(\frac{7}{36},-\frac{7}{36},0\right) & \displaystyle \left(\frac{23}{72},-\frac{11}{36},-\frac{1}{4}\right) \\[2ex] \displaystyle \left(\frac{2}{9},0,\frac{1}{3}\right) & \displaystyle \left(\frac{11}{36},-\frac{23}{72},\frac{1}{4}\right) & \displaystyle \left(\frac{5}{9},-\frac{5}{9},0\right) \end{array} \right) \end{equation} with the associated (non-unit) normal vectors $\f n_{i,j}$ \begin{equation} (\f n_{i,j})=\left( \begin{array}{ccc} (0,0,-1) & (0,4,-3) & (0,1,0) \\[1ex] (4,0,-3) & (2,2,-1) & (4,8,1) \\[1ex] (1,0,0) & (8,4,1) & (2,2,1) \end{array} \right), \end{equation} where $i,j=0,1,2$, see Fig.~\ref{zadani}. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth]{zadani.eps} \begin{minipage}{0.9\textwidth} \caption{A network of given points $\f p_{i,j}$ with the associated normal directions $\f n_{i,j}$. \label{zadani}} \end{minipage} \end{center} \end{figure} \smallskip After normalizing the vectors $\f n_{i,j}$ and using \eqref{transfbody}, we find the nine points $\f a_{i,j}$ (4 corner points, 4 non-corner boundary points, 1 inner point) in $\av{I}$, see Fig.~\ref{isotropic}, with the associated tangent vectors of the boundary curves obtained by the approach from the beginning of this section. Next, we construct 12 Ferguson cubics, see Fig.~\ref{isotropic}, as the input boundary curves for the bicubic Coons construction.
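As a sanity check of \eqref{transfbody} on the data above (a sketch of ours using exact rational arithmetic; all normals of this example happen to have integer length), two of the nine isotropic points are computed below, together with a round trip through $\iota^{-1}$:

```python
from fractions import Fraction as Fr
from math import isqrt

def iota(n, h):
    # isotropic image of the tangent plane n.x = h (n a unit vector)
    d = 1 - n[2]
    return (n[0]/d, n[1]/d, h/d)

def iota_inv(y):
    # inverse of iota; the third component y1^2+y2^2-1 makes
    # iota_inv(iota(n, h)) == (n, h) and keeps the image off the line w
    den = 1 + y[0]*y[0] + y[1]*y[1]
    return (2*y[0]/den, 2*y[1]/den,
            (y[0]*y[0] + y[1]*y[1] - 1)/den, 2*y[2]/den)

def isotropic_point(p, n):
    # normalize n, evaluate the support value h = p.n, map to the model I
    l = isqrt(n[0]**2 + n[1]**2 + n[2]**2)
    nu = tuple(Fr(c, l) for c in n)
    h = sum(pc*nc for pc, nc in zip(p, nu))
    return iota(nu, h)

p00, n00 = (Fr(0), Fr(0), Fr(0)), (0, 0, -1)
p22, n22 = (Fr(5, 9), Fr(-5, 9), Fr(0)), (2, 2, 1)

assert isotropic_point(p00, n00) == (0, 0, 0)
assert isotropic_point(p22, n22) == (1, 1, 0)
assert iota_inv(isotropic_point(p22, n22)) == (Fr(2, 3), Fr(2, 3), Fr(1, 3), 0)
```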
\begin{figure}[H] \begin{center} \includegraphics[width=0.7\textwidth]{isotropic.eps} \begin{minipage}{0.9\textwidth} \caption{4 corner points (red), 4 non-corner boundary points (blue), 1 inner point (green) in $\av{I}$ with the associated tangent vectors of the boundary curves spanning the tangent planes $\tau_{ij}$, and the constructed Ferguson cubics (orange). \label{isotropic}} \end{minipage} \end{center} \end{figure} After computing the four bicubic Coons patches and applying the mapping $\xi$ to each of them, we obtain a smooth piecewise PN surface (given by a PN parameterization of each patch) interpolating the given Hermite data, see Fig.~\ref{reseni}. Finally, computations show that $\det\left(\mathrm{Hess}_{\mathcal{S}^2}h+hI\right)\neq 0$ at all points, so no sharp edges occur for the given data, cf.~Section~\ref{PN_isotrop}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{reseni.eps} \begin{minipage}{0.9\textwidth} \caption{A smooth piecewise interpolation surface consisting of four PN patches. \label{reseni}} \end{minipage} \end{center} \end{figure} \end{example} \begin{remark}\rm One of the advantages of the designed method (based on exploiting the Coons patches) is the possibility to use the lengths of the tangent vectors in the isotropic space as free construction parameters (as already mentioned at~the end of Section~\ref{PN_isotrop}). Figure \ref{Sridges} shows how a suitable choice of these vectors can improve the resulting patch and help to avoid ridges. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.45\textwidth]{ridge.eps}\qquad \includegraphics[width=0.45\textwidth]{smooth.eps} \begin{minipage}{0.9\textwidth} \caption{PN patches interpolating the same boundary data.
A suitable choice of the tangent vectors leads to a smooth patch (right), while some choices may produce ridges (left).\label{Sridges}} \end{minipage} \end{center} \end{figure} \end{remark} \begin{remark} \rm A natural question is why not use the Coons (or some other boundary-curve) construction directly in the primal space. A possible way could be, for instance, to prescribe boundary curves satisfying the given data, construct a patch with this boundary, and then suitably modify the obtained patch (while preserving the conditions at the boundary) to obtain a new patch which is PN. However, this construction requires that the prescribed curves be PSN, i.e., curves on the surface along which the surface admits Pythagorean normals, cf. \cite{VrLa14a}. Using the dual approach and the isotropic model is considerably simpler. \end{remark} \section{Conclusion}\label{Concl} The main goal of this paper was to present a simple functional algorithm for computing piecewise Hermite interpolation surfaces with rational offsets. The obtained PN surface interpolates a set of given points with associated normal directions. The isotropic model of the dual space was used for formulating the algorithm. This setup enables us to apply the standard bicubic Coons construction in the dual space to obtain the interpolating PN surface in the primal space. The presented method is completely local and yields a surface with $G^1$~continuity. Moreover, the method solves the PN interpolation problem directly, i.e., without any subsequent reparameterization, which would always have to be followed by trimming of the parameter domain. Together with its simplicity, this is the main advantage of the designed technique. It can be used by designers whenever surfaces with rational offsets are required for modelling purposes.
\section*{Acknowledgments} The authors Miroslav L\'{a}vi\v{c}ka and Jan Vr\v{s}ek were supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports. We thank all the referees for their valuable comments, which helped us to improve the paper.
\section{Preliminaries} During the last few decades it has become clear that heavy-tailed random variables are needed in realistic mathematical models. Consequently, heavy-tailed analysis has seen an explosive growth in the number of publications, making it an active research field of high current interest. A cornerstone of heavy-tailed thinking is the \emph{principle of a single big jump}. Unfortunately, there does not seem to exist consensus about the exact definition of this principle. Nevertheless, the principle always consists of the idea that the most likely way for a sum to be large is that one of the summands is large. Some authors refer to this principle whenever there exists a dominating random variable \cite{embrechts:1997,foss:2007}, whereas others reserve the expression for subexponential distributions \cite{armendariz:2011,asmussen:1996,denisov:2008} or their generalisations \cite{beck:2015}. Some properties are also studied in the case of dependent variables \cite{albrecher:2006}. The aim of this note is to study the principle of a single big jump in a rigorous setting. In \cite{foss:2013}, the behaviour of the process $(Z_d):=(Z_d)_{d>0}$ is used to illustrate the phenomenon of a single big jump. Our plan is to study the process $(Z_d)$ further and to present general results whose applicability can be verified using the density function $f$. In order to do this, we define two convergence types for the process $(Z_d)$: \begin{enumerate}[I)] \item $\mathcal{L}(Z_{d})\to\frac{1}{2} \delta_0+\frac{1}{2} \delta_1$ \label{a} and \item $\mathcal{L}(Z_{d})\to \delta_{\frac{1}{2}}$.\label{b} \end{enumerate} In \ref{a} and \ref{b} the notation $\mathcal{L}(Z_{d})$ refers to the law of $Z_d$ and the convergence is understood as convergence in distribution in the limit $d\to \infty$. In Types \ref{a} and \ref{b}, $\delta_x$ signifies a distribution concentrated at the point $x\in \{0,1/2,1\}$.
Behaviour \ref{a} resembles the way many heavy-tailed variables are known to behave: if the sum $X_1+X_2$ is large then one of the variables is large. Behaviour \ref{b} is related to a phenomenon encountered within the class of light-tailed distributions: both of the variables $X_1$ and $X_2$ contribute equally. Recall that a random variable $X$ is called \emph{heavy-tailed} if $E(e^{sX})=\infty$ for all $s>0$ and light-tailed otherwise. We will show that, in the sense of Behaviour \ref{a}, the principle can occur outside the class of heavy-tailed distributions. Traditionally the idea of the principle of a single big jump is almost exclusively associated with a subclass of heavy-tailed distributions called subexponential distributions. The subexponential class and its extensions are further discussed in Section \ref{discussion} below. \subsection{Assumptions}\label{assumptions} The non-negative random variables $X_1$ and $X_2$ are independent and identically distributed. The variable $X_1$ has an unbounded support and a density function $f$. Set $F(x):=P(X_1\leq x)$ and $\overline{F}(x):=1-F(x)$. The function $f$ is assumed to be twice differentiable in the set $[0,\infty)$ and eventually decreasing. A property is said to hold \emph{eventually} if there exists $y_0\in \mathbb{R}$ such that the property is valid in the set $[y_0,\infty)$. \subsection{Basic Properties} The density function $f_{Z_d}$ of the variable $Z_d$ can be directly obtained from the conditional distribution of $X_1$ given $X_1+X_2$; that is, $Z_d$ is distributed as $X_1/d$ conditioned on the event $X_1+X_2=d$. Its density is concentrated in the interval $[0,1]$ and given by the formula \begin{equation}\label{f1} f_{Z_d}(x)=\frac{f(d x)f(d(1-x))}{\displaystyle\int_{0}^1 f(d y)f(d(1-y)) \, dy},\quad x\in [0,1]. \end{equation} The function $f_{Z_d}$ can be viewed as a function of two variables as $$g(x,d):=f_{Z_d}(x)\colon [0,1] \times (0,\infty)\to [0,\infty).$$ For a fixed $d>0$ the function $f_{Z_d}(x)$ is symmetric with respect to the point $x=1/2$.
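Formula \eqref{f1} is straightforward to evaluate numerically, which is convenient for exploring concrete densities. The following Python sketch (an illustration, not part of the formal development) approximates $f_{Z_d}$ with a midpoint rule for the normalising integral; the gamma density $f(t)=te^{-t}$ is an assumed test case.

```python
import math

def f_Zd(x, d, f, N=2000):
    """Density of Z_d from Eq. (f1): f(dx) f(d(1-x)) normalised over [0,1],
    with the normalising integral computed by a composite midpoint rule."""
    Z = sum(f(d * (j + 0.5) / N) * f(d * (1.0 - (j + 0.5) / N)) for j in range(N)) / N
    return f(d * x) * f(d * (1.0 - x)) / Z

# A gamma density f(t) = t e^{-t} (shape parameter 2, up to normalisation)
gamma2 = lambda t: t * math.exp(-t)
```

For the gamma input the dependence on $d$ cancels and $f_{Z_d}$ coincides with the Beta$(2,2)$ density $6x(1-x)$, in line with the example below; the symmetry $f_{Z_d}(x)=f_{Z_d}(1-x)$ holds for any input density.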
Hence, it suffices to formulate the results only for $x\in[0,1/2]$. Conditions implying Behaviours \ref{a} or \ref{b} typically involve estimation of decay rates of integrals. What is more, neither of the behaviours needs to occur; the distributional limit may exist without any concentration of probability mass. To see this, consider the following example. \begin{example}\label{ex1} Suppose $f$ is a gamma density function $f(x)=Cx^{a-1}e^{-x}$, where $x>0$, $a>0$ and $C>0$ is an integration constant. Then $f_{Z_d}$ of \eqref{f1} reduces to $$f_{Z_d}(x)=\frac{x^{a-1}(1-x)^{a-1}}{\int_0^1 y^{a-1}(1-y)^{a-1} \, dy},$$ for all $d>0$. So, $\mathcal{L}(Z_d)$ does not depend on $d$ and belongs to the family of Beta distributions. \end{example} In order to understand the behaviour of the process $(Z_d)$ one needs additional assumptions to those made in Section \ref{assumptions}. One way to proceed is to demand that the function $f_{Z_d}$ should eventually stay convex or concave at the midpoint of $[0,1]$. This leads to the following characterisation. \begin{lemma}\label{lchar}Suppose \begin{equation}\label{L} L:=\lim_{x \to \infty} \textnormal{sign}\left( \frac{d^2}{dx^2} \log f(x) \right) \end{equation} exists, where \begin{displaymath} \textnormal{sign}(x) := \left\{ \begin{array}{rl} 1 & : x>0 \\ 0 & : x =0 \\ -1 & :x<0. \end{array} \right. \end{displaymath} Then the function $f_{Z_d}$ of Formula \eqref{f1} is eventually, in $d$, strictly convex with respect to the variable $x$ at point $x=1/2$ if and only if $L=1$. Similarly, $f_{Z_d}$ is eventually, in $d$, strictly concave with respect to the variable $x$ at point $x=1/2$ if and only if $L=-1$. \begin{proof} Consider the eventually convex case; the eventually concave case is analogous. Let $d>0$. For any $x\in (0,1)$, \begin{eqnarray*} f_{Z_d}''(x)&=&\frac{d^2}{\int_0^1 f(dy)f(d(1-y)) \, dy} [ f''(dx)f(d(1-x))-f'(dx)f'(d(1-x)) \\ &-&f'(dx)f'(d(1-x))+f(dx)f''(d(1-x))]. 
\end{eqnarray*} The requirement $f_{Z_d}''(1/2)>0$ simplifies to $f''(d/2)f(d/2)-f'(d/2)^2>0$. This is equivalent to the statement \begin{equation}\label{limi} \left( \frac{d^2}{dx^2} \log f(x)\right)_{|x=d/2}>0. \end{equation} The claim follows upon noticing that $L=1$ holds if and only if \eqref{limi} holds eventually in $d$. \end{proof} \end{lemma} \begin{remark} If $L=0$ in \eqref{L}, then $f''(x)f(x)-f'(x)^2=0$ eventually. The function $f(x)=C_1e^{C_2 x}$ solves this differential equation. Here, $C_1,C_2\in \mathbb{R}$ are suitable constants. Direct application of \eqref{f1} shows that $Z_d$ is eventually uniformly distributed. \end{remark} \subsection{Relation of Log-convexity and Log-concavity to Failure Rates} The condition \eqref{L} implies eventual convexity or concavity of $\log f$. \begin{definition} A twice differentiable function $g$ is said to be \emph{eventually strictly convex} if there exists a number $x_0>0$ such that $g''(x)>0$ for all $x>x_0$. \emph{Eventually strictly concave} functions are defined similarly. \end{definition} If $L=1$ ($L=-1$) in Equation \eqref{L}, the function $f$ is eventually strictly log-convex (log-concave). This is equivalent to the function $f'(x)/f(x)$ being eventually strictly increasing (decreasing). Proceeding as in Lemma 4 of \cite{bagnoli:2005} one obtains for eventually strictly log-convex $f$ and for any $x>x_0$ that \begin{equation}\label{barv1} \frac{f'(x)}{f(x)}\int_x^\infty f(y) \, dy<\int_x^\infty \frac{f'(y)}{f(y)}f(y) \, dy. \end{equation} Since the right-hand side equals $-f(x)$, a straightforward calculation reveals that Equation \eqref{barv1} is equivalent to \begin{equation}\label{dfr} \frac{d}{dx} \left( \frac{f(x)}{\overline{F}(x)} \right)<0. \end{equation} Equation \eqref{dfr} implies that the \emph{failure rate} $f(x)/\overline{F}(x)$ is an eventually strictly decreasing function and that $\overline{F}$ is an eventually strictly log-convex function. It can be shown similarly that eventually strictly log-concave densities lead to eventually strictly increasing failure rates and eventual strict log-concavity of the function $\overline{F}$. Log-concavity and log-convexity are known to be determining properties in several economic, statistical, probabilistic and operations research related concepts. These classical properties are closely linked, as shown above, to the variables whose failure rate is increasing or decreasing. For additional properties the reader is referred to \cite{hansen:1988,wang:1986,lariviere:2006,banciu:2013}. In the current note a new phenomenon where log-convexity or log-concavity plays a central role is discovered. It is the deciding factor that determines the eventual shape of the density of $Z_d$. \section{The Main Result and Applications} As mentioned earlier, additional conditions need to be imposed in order to obtain Behaviour \ref{a} or \ref{b}. The first result, Proposition \ref{typechar}, does exactly this, but it requires that $f_{Z_d}(x)\to 0$ for all $x\in(0,1/2)$. This may be tedious to check unless the density is extremely simple. However, the latter result, Theorem \ref{ekalause}, provides a sufficient condition which guarantees the validity of the required property. \subsection{Theoretical Results}\label{tresults} \begin{proposition}\label{typechar} Suppose the limit $L$ of Equation \eqref{L} exists. Assume further that $f_{Z_d}(x)\to 0$ for all $x\in (0,1/2)$, as $d\to \infty$. If $L=1$, then \ref{a} holds. If $L=-1$, then \ref{b} holds. \begin{proof} Suppose $L=-1$. Now, there exists a number $x_0$ such that for all $x>x_0$ it holds that \begin{equation}\label{derkaava1} \frac{d}{dx} \left( \frac{f'(x)}{f(x)} \right)<0. \end{equation} Equation \eqref{derkaava1} implies that the function $f'(x)/f(x)$ is strictly decreasing for $x>x_0$. Suppose $d>2 x_0$.
Direct calculation reveals that $f_{Z_d}'(x)=0$ if and only if \begin{equation}\label{critical} \frac{f'(dx)}{f(dx)}=\frac{f'(d(1-x))}{f(d(1-x))}. \end{equation} Therefore, the point $x=1/2$ is always a critical point. In addition, based on Equation \eqref{derkaava1} and symmetry, there are no other critical points in the interval $[x_0/d,1/2]$, i.e., the function $f_{Z_d}$ is monotone in the interval $[x_0/d,1/2]$. The critical point at $x=1/2$ must be a maximum, because $f_{Z_d}''(1/2)<0$. Hence the function $f_{Z_d}$ is increasing in the interval $[x_0/d,1/2]$. Next, it is shown that $f_{Z_d}\to 0$ uniformly in the set $[0,c]$, where $c\in (0,1/2)$. It suffices to note the following. Set $$M:= \frac{1}{f(x_0)}\max_{z \in [0,x_0]} f(z). $$ Recall that $f$ is continuous and eventually decreasing. Let $y_0\in \mathbb{R}$ be chosen so that $f(x)$ is decreasing when $x>y_0$. Now, for any $x\in[0,x_0/d]$ and all $d$ satisfying $d(1-x_0/d)>y_0$ it holds that \begin{eqnarray} f_{Z_d}(x)&=&\frac{f(d x)f(d(1-x))}{\int_{0}^1 f(d y)f(d(1-y)) \, dy} \nonumber\\ &\leq& \frac{\left( \max_{z \in [0,x_0]} f(z) \right)f(d(1-x))}{\int_{0}^1 f(d y)f(d(1-y)) \, dy} \nonumber\\ &\leq& \frac{\left( \max_{z \in [0,x_0]} f(z) \right)f(d(1-x_0/d))}{\int_{0}^1 f(d y)f(d(1-y)) \, dy} \nonumber\\ &=& M f_{Z_d}(x_0/d) \nonumber\\ &\leq& M f_{Z_d}(c).\label{vikaper} \end{eqnarray} The right hand side of \eqref{vikaper} converges to $0$, as $d\to \infty$. This implies the desired uniform convergence. The uniform convergence and symmetry of the function $f_{Z_d}$ with respect to the point $x=1/2$ imply that for any open set $A\subset [0,1]$ one has $$\liminf_{d \to \infty} P(Z_d\in A)\geq \delta_\frac{1}{2}(A).$$ This is precisely the Portmanteau characterisation of the distributional convergence and the proof of the case $L=-1$ is complete. If $L=1$, the proof is simpler.
In this case the monotonicity, together with the assumption that $f_{Z_d}(x)\to 0$ for all $x\in (0,1/2)$ as $d\to \infty$, implies uniform convergence in every set $A$ with a positive distance from the points $0$ and $1$. This means that for any open set $A\subset [0,1]$ one has $$\liminf_{d \to \infty} P(Z_d\in A)\geq \left(\frac{1}{2} \delta_0+\frac{1}{2} \delta_1 \right)(A)$$ and the proof is complete. \end{proof} \end{proposition} The proof of Theorem \ref{ekalause} requires the following purely analytic lemma. \begin{lemma}\label{ekalemma} Let $(g_d)_{d>0}$ be a family of increasing or decreasing functions defined on the interval $[a,b],$ where $-\infty<a<b<\infty$. Assume further that for every $d>0$ the function $g_d$ is continuously differentiable on the whole interval $[a,b]$. Finally, assume that $0<|g_d(x)|<M$ holds for every $d>0$ and every $x\in [a,b]$. If \begin{equation}\label{oletus} \lim_{d\to \infty} \left| \frac{g_d'(x)}{g_d(x)}\right|=\infty \end{equation} for every $x\in (a,b)$, then, for all $x\in (a,b)$, $$g_d(x)\to0,$$ as $d\to \infty$. \begin{proof} Without loss of generality we may assume that $(g_d)_{d>0}$ is a family of increasing and positive functions. Suppose, on the contrary, that there exists a number $\eta\in(a,b)$ such that \begin{equation}\label{ehto1} \limsup_{d\to \infty} g_d(\eta)>0. \end{equation} Equation \eqref{ehto1} implies that there exists a sequence of functions $(g_{d_k})_{k=1}^\infty$, where $d_k\uparrow \infty$ as $k\to \infty$, such that $ C:=\liminf_{k\to \infty} g_{d_k}(\eta)>0$. The fact that $g_d$ is increasing for any $d>0$ implies the inequality \begin{equation}\label{ehto2} \inf_{x\in [\eta,b)} \{\liminf_{k\to \infty} g_{d_k}(x)\}\geq C. \end{equation} Hence, for large enough $d_k$ and all $x\in [\eta,b)$ it holds that \begin{equation}\label{contradiction} \log (C/2)< \log g_{d_k}(x)<\log M.
\end{equation} Rewriting Assumption \eqref{oletus} as \begin{equation}\label{oletusver2} \lim_{d\to \infty} \left| \frac{d}{dx} \log g_d(x)\right|=\infty \end{equation} gives $$\lim_{k\to \infty} \frac{d}{dx} \log g_{d_k}(x)=\infty$$ for all $x\in [\eta,b)$. Set $h_{d_k}(x):=\log g_{d_k}(x)$. Now, using the fundamental theorem of calculus and the lower limit of \eqref{contradiction}, one obtains $$h_{d_k}(b)\geq \log(C/2)+\int_{\eta}^b h_{d_k}'(y) \, dy.$$ This yields a contradiction: The function $h_{d_k}'$ is non-negative because $g_{d_k}$, and thus $\log g_{d_k}$, is increasing. Therefore, application of Fatou's lemma implies $$\int_{\eta}^b h_{d_k}'(y) \, dy\to \infty, $$ as $k\to \infty$, contradicting the upper bound of \eqref{contradiction}. \end{proof} \end{lemma} \begin{theorem}\label{ekalause} Suppose $f$ is eventually strictly log-convex or log-concave. Assume further that \begin{equation}\label{hyvaol} \lim_{d \to \infty} d \left| \frac{f'(dx)}{f(dx)}-\frac{f'(d(1-x))}{f(d(1-x))} \right|=\infty \end{equation} for every $x\in(0,1/2)$. Then $f_{Z_d}(x)\to 0$ for every $x\in (0,1/2)$. \begin{proof} Let $x\in (0,1/2)$. Based on the proof of Proposition \ref{typechar} it is possible to choose a number $d_0$ such that the function $f_{Z_d}$ is monotone in the interval $(x-\epsilon,x+\epsilon)\subset (0,1/2)$, when $d>d_0$, and $\epsilon>0$ is a small enough number. We plan to apply Lemma \ref{ekalemma} to the family $(f_{Z_d})_{d>d_0}$ and the interval $(a,b):=(x-\epsilon,x+\epsilon)$. To do this, note that the derivative of \eqref{f1} may be written as \begin{equation}\label{der1} f_{Z_d}'(x)=d f_{Z_d}(x) \left[ \frac{f'(dx)}{f(dx)}-\frac{f'(d(1-x))}{f(d(1-x))} \right]. \end{equation} Thus, Assumption \eqref{hyvaol} corresponds to Assumption \eqref{oletus} of Lemma \ref{ekalemma}. The remaining assumptions are clearly valid.
\end{proof} \end{theorem} \subsection{Main Corollary and Examples} Theoretical results of Section \ref{tresults} imply the following surprising corollary. It is based on Proposition \ref{typechar} and Theorem \ref{ekalause}. \begin{corollary} There exist non-negative random variables $X$ and $Y$ such that: \begin{enumerate} \item \label{part1} The variable $Y$ is asymptotically dominated by $X$, i.e. \begin{equation}\label{dom1} \lim_{x \to \infty} \frac{P(Y>x)}{P(X>x)}=0, \end{equation} yet $Y$ is of Type \ref{a} while $X$ is of Type \ref{b}. \item \label{part2} There exists a light-tailed random variable of Type \ref{a}. \end{enumerate} \begin{proof} Define the densities $f_X$ and $f_Y$ of variables $X$ and $Y$ by $$f_X(x):=C_Xe^{-x+\sqrt{x}}$$ and $$f_Y(x):=C_Ye^{-x-\sqrt{x}}$$ for $x>0$, where $C_X^{-1}=\int_0^\infty e^{-y+\sqrt{y}} \, dy$ and $C_Y^{-1}=\int_0^\infty e^{-y-\sqrt{y}} \, dy$. Application of L'H\^{o}pital's rule shows \eqref{dom1}. For any $x>0$, $$\frac{d^2}{dx^2}\log f_X(x)=-\frac{1}{4} x^{-\frac{3}{2}}\, \textnormal{ and }\, \frac{d^2}{dx^2}\log f_Y(x)=\frac{1}{4} x^{-\frac{3}{2}}. $$ Furthermore, for $x\in(0,1/2)$, we obtain $$d \left( \frac{f_X'(dx)}{f_X(dx)}-\frac{f_X'(d(1-x))}{f_X(d(1-x))} \right)=\frac{1}{2}\sqrt{d} ( x^{-1/2} - (1-x)^{-1/2} )\stackrel{d\to \infty}{\to} \infty$$ and $$d \left( \frac{f_Y'(dx)}{f_Y(dx)}-\frac{f_Y'(d(1-x))}{f_Y(d(1-x))} \right)=\frac{1}{2}\sqrt{d} (- x^{-1/2} + (1-x)^{-1/2} )\stackrel{d\to \infty}{\to} -\infty.$$ Hence, Theorem \ref{ekalause} combined with Proposition \ref{typechar} gives the result of Part \ref{part1}. The statement of Part \ref{part2} is clear because for $0<s<1$ it holds that $$E(e^{sY})=\int_0^\infty e^{sy}f_Y(y) \, dy<\infty $$ and thus $Y$ is a light-tailed random variable. \end{proof} \end{corollary} The condition $f_{Z_d}(x)\to0$ for all $x\in (0,1/2)$ of Proposition \ref{typechar} can be difficult to verify directly. 
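The contrast between the two densities in the corollary can also be seen numerically. The sketch below (illustrative only, not part of the proof) evaluates the probability mass that \eqref{f1} assigns near the midpoint, working with log-weights for numerical stability: for $f_X\propto e^{-x+\sqrt{x}}$ essentially all mass sits near $x=1/2$ (Type \ref{b}), while for $f_Y\propto e^{-x-\sqrt{x}}$ the mass escapes to the endpoints (Type \ref{a}).

```python
import math

def mass_near_half(log_f, d, lo=0.4, hi=0.6, N=4000):
    """P(Z_d in (lo,hi)) for the density (f1), given log f; midpoint rule,
    with the maximal log-weight subtracted before exponentiating."""
    xs = [(j + 0.5) / N for j in range(N)]
    lw = [log_f(d * x) + log_f(d * (1.0 - x)) for x in xs]
    top = max(lw)
    w = [math.exp(v - top) for v in lw]
    inside = sum(wi for wi, x in zip(w, xs) if lo < x < hi)
    return inside / sum(w)

log_fX = lambda t: -t + math.sqrt(t)   # Type (b): mass concentrates at 1/2
log_fY = lambda t: -t - math.sqrt(t)   # Type (a): mass escapes to 0 and 1
```

Note that the concentration sets in on the scale $\sqrt{d}$, so rather large values of $d$ are needed before the dichotomy becomes pronounced.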
However, the sufficient condition of Theorem \ref{ekalause} seems to cover the most common situations. The class of power densities forms a notable exception. These densities are simple enough to be handled directly via Proposition \ref{typechar}. This is demonstrated in Example \ref{ex2} below. \begin{example}\label{ex11} We check Condition \eqref{hyvaol} for certain distribution types. In all cases $x\in (0,1/2)$ and $C$ is an integration constant. \begin{enumerate}[a)] \item \label{weibull}Suppose $f(t)=Ce^{-t^\alpha}$, where $t>0$ and $\alpha>0$. Then \begin{eqnarray*} && d \left( \frac{f'(dx)}{f(dx)}-\frac{f'(d(1-x))}{f(d(1-x))} \right) \\ &=&\alpha d^\alpha ((1-x)^{\alpha-1}-x^{\alpha-1})\stackrel{d\to \infty}{\to}\begin{cases} \infty & \alpha>1 \\ -\infty & 0<\alpha<1\\ 0 & \alpha=1. \end{cases} \end{eqnarray*} \item \label{lognormal} Suppose $f(t)=Ct^{-1}e^{-(\log t)^2}$, where $t>t_0>0$ for some $t_0$. Then \begin{eqnarray*} && d \left( \frac{f'(dx)}{f(dx)}-\frac{f'(d(1-x))}{f(d(1-x))} \right) \\ &=&2 \log d\left( \frac{1}{1-x}-\frac{1}{x}\right)+2\left( \frac{\log(1-x)+1/2}{1-x}-\frac{\log(x)+1/2}{x}\right)\stackrel{d\to \infty}{\to}-\infty. \end{eqnarray*} \end{enumerate} \end{example} Example \ref{ex11} shows that Condition \eqref{hyvaol} is satisfied by Weibull and Lognormal type densities. The next example illustrates a situation where \eqref{hyvaol} does not apply, but where Proposition \ref{typechar} can instead be applied directly. \begin{example}\label{ex2} Suppose $f(t)=t^{-\alpha}$ for some $\alpha>1$ and all $t>t_0>0$ for some $t_0$. Then for $d>t_0/x$, where $x\in (0,1/2)$, it holds that $$d \left( \frac{f'(dx)}{f(dx)}-\frac{f'(d(1-x))}{f(d(1-x))} \right)=\alpha\left( \frac{1}{1-x}-\frac{1}{x}\right),$$ i.e. \eqref{hyvaol} is not valid.
However, a direct calculation using \eqref{f1} reveals that $$f_{Z_d}(x)=\frac{f(d x)f(d(1-x))}{\int_{0}^1 f(d y)f(d(1-y)) \, dy}\leq \frac{x^{-\alpha}(1-x)^{-\alpha}}{\int_{t_0/d}^{1-t_0/d} y^{-\alpha}(1-y)^{-\alpha} \, dy} \to 0,$$ as $d\to \infty$. \end{example} \section{Discussion}\label{discussion} Recall that $X_1$ and $X_2$ are i.i.d.\ non-negative variables. The class of subexponential distributions $\mathcal{S}$ consists of those distributions for which \begin{equation}\label{subexp} \lim_{x \to \infty} \frac{P(X_1+X_2>x)}{P(X_1>x)}=2 \end{equation} or equivalently \begin{equation}\label{subexp2} \lim_{x \to \infty} P(X_1>x|X_1+X_2>x)=\frac{1}{2}. \end{equation} In addition, the class of locally subexponential or $\Delta$-subexponential distributions $\mathcal{S}_\Delta$ can be determined by demanding that for some $\Delta>0$: \begin{equation}\label{locsubexp} \lim_{x \to \infty} \frac{P(X_1+X_2\in (x,x+\Delta])}{P(X_1\in (x,x+\Delta])}=2 \end{equation} and that for any $y>0$ \begin{equation} \lim_{x \to \infty} \frac{P(X_1\in (x,x+y+\Delta])}{P(X_1\in (x,x+\Delta])}=1. \end{equation} These distributions and their connections to the principle of a single big jump have been extensively studied in \cite{watanabe:2010,teugels:1975,foss:2013,asmussen:2003,embrechts:1997,borovkov:2008}. It is important to note that the requirement of subexponentiality or local subexponentiality does not impose detailed requirements on the distribution of $X_1$ given $X_1+X_2$. Hence, it can be argued that the process $(Z_d)$ is more suitable for describing the phenomenon of a single big jump than membership of these distribution classes. Furthermore, it is known that $\mathcal{S}_\Delta\subset \mathcal{S}$ and that all subexponential distributions are heavy-tailed. In conclusion, the transition between the different asymptotic Behaviours \ref{a} and \ref{b} seems to be connected to the eventual convexity or concavity of the function $\log f$.
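For intuition, the defining ratio in \eqref{subexp} can be checked numerically for a concrete member of $\mathcal{S}$. The sketch below is only an illustration; the Pareto tail $\overline{F}(t)=t^{-\alpha}$ on $[1,\infty)$ is an assumed example, not a distribution used elsewhere in this note.

```python
def pareto_tail_ratio(x, alpha=2.0, N=20000):
    """P(X1 + X2 > x) / P(X1 > x) for i.i.d. Pareto variables with
    survival function Fbar(t) = t^(-alpha) on [1, infinity)."""
    Fbar = lambda t: 1.0 if t < 1.0 else t ** (-alpha)
    f = lambda t: alpha * t ** (-alpha - 1.0)            # density on [1, inf)
    a, b = 1.0, x - 1.0
    h = (b - a) / N
    # P(S > x) = Fbar(x) + int_1^{x-1} Fbar(x - y) f(y) dy + mass of (x-1, x]
    conv = sum(Fbar(x - (a + (j + 0.5) * h)) * f(a + (j + 0.5) * h)
               for j in range(N)) * h
    return (Fbar(x) + conv + (Fbar(x - 1.0) - Fbar(x))) / Fbar(x)
```

As $x$ grows, the computed ratio approaches the subexponential limit $2$, in accordance with \eqref{subexp}.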
In this sense, heavy-tailedness or membership of a subexponential class has perhaps less impact on the asymptotic behaviour of $(Z_d)$ than previously anticipated. \section*{Acknowledgements} The deepest gratitude is expressed to the Finnish Doctoral Programme in Stochastics and Statistics (FDPSS) and the Centre of Excellence in Computational Inference (COIN) for financial support (Academy of Finland grant number 251170). Special thanks are due to Harri Nyrhinen for his diligent guidance throughout the writing of the paper. \begin{footnotesize}
\section{Introduction} Topological properties of defects and textures of order parameters and gauge fields in real space are generally classified by homotopy groups~\cite{Thouless,Volovik,Shifman}. The homotopy analysis can also be applied for topological classification of quasi-particle band structures in reciprocal space, with the Bloch Hamiltonian and the Berry connection respectively assuming the roles of order parameter and gauge fields~\cite{Thouless,Volovik,FuKaneMele2007,FuKane,Moore2007,qi2008topological,RyuLudwigPRB,Roy20093D}. For three-dimensional (3D) insulators preserving parity ($\mathcal{P}$) and time-reversal ($\mathcal{T}$) symmetries, tunneling configurations of the $SU(2)$ Berry connection $\bs{A}_i(\bs{k})$ of the $i$-th Kramers-degenerate band can be classified by the Chern-Simons invariant~\cite{qi2008topological,RyuLudwigPRB,ryu2010topological} \begin{eqnarray}\label{A11} \mathcal{CS}_i &=&\frac{1}{8\pi^2} \; \int d^{3}k \; \epsilon^{abc} \; \text{Tr}[A_{a,i}\partial_{b}A_{c,i}+\frac{2i}{3}A_{a,i}A_{b,i}A_{c,i}], \nn \\ &=&\frac{n_i}{2}, \; \text{with} \; n_i \in \mathbb{Z}. \end{eqnarray} When a proper Abelian gauge-fixing procedure is implemented, the winding number $n_i$ describes the third homotopy class of the $i$-th Kramers-degenerate band, and the diagonal matrix $n_{ij} =n_i \; \delta_{ij}$ identifies the third homotopy class of the $2N \times 2N $ Bloch Hamiltonian~\cite{tyner2021symmetry}. Similarly, the third homotopy class of magnetic systems can be described by a diagonal matrix of Abelian Chern-Simons invariants of non-degenerate bands~\cite{moore2008topological,Lapierre2021,tyner2021symmetry}.
Field theory calculations show that the \emph{adiabatic, electrodynamic response} of $\mathcal{T}$-symmetric topological insulators (TIs) is described by the effective action~\cite{qi2008topological} \begin{equation}\label{eq:1} S_{eff}= \int dt d^3x \left[\frac{1}{2} (\epsilon \bs{E}^2 - \frac{1}{\mu} \bs{B}^2) + \frac{\theta e^2}{4 \pi^2 \hbar} \mathbf{E}\cdot \mathbf{B} \right], \end{equation} where $\epsilon$ ($\mu$) is the dielectric permittivity (magnetic permeability) of the medium, and \begin{equation}\label{gentheta} \theta= 2\pi \sum_{i=1}^{l} \mathcal{CS}_i = \pi \sum_{i=1}^{l} n_i = n \pi, \end{equation} is the quantized magneto-electric (ME) coefficient or axion angle~\cite{PhysRevLett.38.1440,Wilczek1987}, and $l$ is the number of occupied bands. Depending on the detailed topological properties of the occupied bands, one can realize (i) $\mathbb{Z}_2$ TIs with ME coefficient $\theta=(2s+1) \pi$, (ii) $\mathbb{Z}_2$-trivial TIs with ME coefficient $\theta= 2s \pi$, and (iii) $\mathbb{Z}_2$-trivial and magneto-electrically trivial TIs with $\theta=0$, but $n_i \neq 0$~\cite{tyner2021symmetry}. An insulating ground state with trivial third homotopy class must have $n_i=0$ for all occupied bands. Under periodic boundary conditions (PBC) in (3+1)-D space-time, due to the quantization of electric and magnetic flux, the electromagnetic Berry phase \begin{eqnarray} e^{i \frac{S_\theta}{\hbar}} = e^{i \frac{\theta e^2}{h^2} \int dt d^3x \mathbf{E}\cdot \mathbf{B}} =e^{i n \pi N_E N_B } \end{eqnarray} can distinguish between $n=(2s+1)$ and $n=2s$, when the instanton number of electromagnetic fields $N_E N_B$ is an odd integer, with $N_E $ ($N_B $) being the number of electric (magnetic) flux quanta~\cite{QiZhangmonopole,Karch2009,Vazifeh2010,Chen2011}. No conclusive statements can be made when $N_E N_B$ is an even integer.
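The bookkeeping entailed by Eq.~\eqref{gentheta} and the Berry-phase factor is simple enough to tabulate directly. The short Python sketch below is illustrative bookkeeping only (not a band-structure computation): it encodes $\theta=\pi\sum_i n_i$ and $e^{iS_\theta/\hbar}=e^{in\pi N_E N_B}$, and reproduces the three cases (i)-(iii) as well as the odd/even instanton-number distinction.

```python
import cmath, math

def axion_angle(windings):
    """theta = pi * sum_i n_i, from the per-band winding numbers n_i."""
    return math.pi * sum(windings)

def em_berry_phase(windings, NE, NB):
    """exp(i n pi N_E N_B): sensitive to the parity of n only when the
    instanton number N_E * N_B is odd."""
    return cmath.exp(1j * sum(windings) * math.pi * NE * NB)
```

For instance, a single band with $n_1=1$ gives case (i), $\theta=\pi$; two bands with $n_1=n_2=1$ give case (ii), $\theta=2\pi$; and $n_1=-n_2=1$ gives case (iii), $\theta=0$ with $n_i\neq 0$.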
In the past fourteen years, many intriguing consequences of axion electrodynamics have been studied for weakly correlated, non-magnetic $\mathbb{Z}_2$ TIs~\cite{EssinMagnetoelectric,Essin2010,malashevich2010theory,QiZhangmonopole,Karch2009,Vazifeh2010,Chen2011,Maciejko1,TseMacDonald,rosenberg2010witten,RosenbergWormhole,Coh2011,zhao2012magnetic,wu2016quantized,zirnstein2017time,zirnstein2020topological}, anti-ferromagnetic TIs~\cite{li2010dynamical,Mong,Oshikawa,dziom2017observation,Zhang2019,zhu2021tunable}, chiral TIs~\cite{RyuMooreLudwig,NeupertRyuPRB}, and fractionalized TIs~\cite{Maciejko2,Swingle2011,Metlitski2013}. However, the direct calculation of $\theta$ for realistic band structures has remained a formidable task, and limited progress has been made for $\mathbb{Z}_2$ TIs~\cite{EssinMagnetoelectric,Essin2010,malashevich2010theory,Coh2011}. Recently, we have shown how to determine $n_{i}$ from tight-binding models and $|n_{i} |$ from \emph{ab initio} band structures by avoiding direct calculation of the Chern-Simons invariant~\cite{tyner2021symmetry}. This development compelled us to raise the following questions. (i) Can $n=0$ and $n=2s$ be distinguished with electromagnetic probes? (ii) Can different odd integers be distinguished? (iii) How does a topological quantum phase transition (TQPT) affect $\theta$? To answer these questions, we will go beyond the adiabatic theory of ME response and perform thought experiments by embedding a test Dirac monopole of strength $g=\frac{\hbar m}{2e}$ inside TIs, with $m \in \mathbb{Z}$~\cite{dirac1931quantised}. Both magnetic flux-tubes~\cite{Laughlin1,Laughlin2} and monopoles~\cite{Haldane} are extensively used as singular probes of the Chern number, which describes the second homotopy class of 2D quantum Hall systems~\cite{TKNN}. In this work, we will establish monopoles as singular probes of the third homotopy class of 3D insulators.
While the electric charge is odd under charge-conjugation ($\mathcal{C}$) and even under $\mathcal{P}$ and $\mathcal{T}$, the magnetic charge $g$ is odd under $\mathcal{C}$, $\mathcal{P}$, and $\mathcal{T}$. Therefore, $eg=\hbar m/2$ is a topologically non-trivial pseudo-scalar that directly couples to $\theta$, allowing us to determine the dc ME coefficient. For an infinite system, described by Eq.~\ref{eq:1}, with a constant $\theta \neq 0$, monopoles can bind electric charge and turn into dyons~\cite{witten1979dyons}. This phenomenon is known as the Witten effect (WE). The $\theta$-term leads to the modified constitutive relations \begin{eqnarray} \bs{D}=\epsilon \bs{E} + \frac{\theta e^2}{4\pi^2 \hbar} \bs{B}, \; \bs{H}= \frac{\bs{B}}{\mu} - \frac{\theta e^2}{4\pi^2 \hbar} \bs{E}. \end{eqnarray} In the absence of external electric fields, static monopoles give rise to the displacement field \begin{equation} \bs{D}(\bs{r})=\bs{P}_{ME}(\bs{r}) = \frac{\theta e^2}{4\pi^2 \hbar} \bs{B}(\bs{r})= \frac{e m \theta}{8 \pi^2} \frac{\hat{\bs{r}}}{r^2}, \end{equation} where $\bs{P}_{ME}(\bs{r})$ is the ME polarization, and the induced or bound electric charge density is \begin{equation} \delta \rho(\bs{r})=-\bs{\nabla} \cdot \bs{P}_{ME} = - \frac{em \theta}{ 2 \pi } \delta^3(\bs{r}). \label{rhoe} \end{equation} Due to the oversimplified nature of $S_{eff}$, the induced electric charge density is sharply localized on monopoles, and the total induced electric charge \begin{equation} \delta Q= \int d^3 r^\prime \delta \rho(\bs{r}^\prime) = -\frac{e m \theta}{2 \pi} \end{equation} does not depend on the radius of the Gaussian surface. For half-filled insulators preserving $\mathcal{C}$, $\mathcal{P}$, and $\mathcal{T}$ symmetries, \begin{equation}\label{Wittencont} \delta Q= - \frac{e m n }{2} \end{equation} can only acquire integer and half-integer values.
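The $R$-independence of the enclosed charge in the effective theory is easy to confirm by brute force. The following Python sketch is only an illustration of the effective-theory result (with $e=1$ assumed): it integrates the flux of $\bs{P}_{ME}$ through a Gaussian sphere of radius $R$ and recovers $\delta Q=-emn/2$ for $\theta=n\pi$, independent of $R$.

```python
import math

def induced_charge(n, m, R, N=200):
    """delta Q = -(flux of P_ME through a sphere of radius R), with e = 1,
    theta = n*pi, and P_ME = (e m theta / 8 pi^2) rhat / r^2."""
    theta = n * math.pi
    coeff = m * theta / (8.0 * math.pi ** 2)
    flux = 0.0
    for i in range(N):                        # polar angle, midpoint rule
        th = (i + 0.5) * math.pi / N
        for j in range(N):                    # azimuthal angle
            dA = R ** 2 * math.sin(th) * (math.pi / N) * (2.0 * math.pi / N)
            flux += (coeff / R ** 2) * dA     # P_ME . rhat on the sphere
    return -flux
```

For $m=1$ and $n=1$ this gives $\delta Q=-1/2$ (in units of $e$) for any radius, matching Eq.~\eqref{Wittencont}.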
\emph{Therefore, $\delta Q/e$ for a unit monopole $m=+1$ can track the winding number $n$ or the $\mathbb{Z}$-classification of the third homotopy class}. Furthermore, two insulators with distinct winding numbers $n_1$ and $n_2$ can show identical WE, if $m_1 n_1 = m_2 n_2$. Non-perturbative numerical calculations of the WE for TIs were reported by Rosenberg and Franz~\cite{rosenberg2010witten}. They showed that a minimal Dirac monopole ($m=+1$), embedded in half-filled, strong $\mathbb{Z}_2$ TIs, could support $\delta Q=\pm e/2$, both in the presence and absence of Zeeman coupling between the fermion spin and the spatially varying magnetic field of the monopole. No connection with the third homotopy classification was made. The validity of their results for half-filled systems, in the absence of Zeeman coupling, was questioned in Ref.~\cite{zhao2012magnetic}. In this work, we will perform a comprehensive analysis of the fermion spectrum and the WE for general $m$ and $n$, without introducing spatially varying Zeeman coupling between the fermion spin and monopoles. Our manuscript is organized as follows. In Sec.~\ref{Continuum}, we describe analytical and numerical calculations for a continuum model of spherical, first-order TIs (FOTIs), which \emph{support gapless surface-states under all orientations}. In Sec.~\ref{FOTI}, we describe thought experiments on tight-binding models of FOTIs, exhibiting \emph{gapless surface-states along all high-symmetry axes}. In Sec.~\ref{SOTI}, Sec.~\ref{AppA}, and Sec.~\ref{TOTI1}, we respectively describe our analysis for magneto-electric/chiral higher-order topological insulators (HOTIs), magnetic HOTIs, and octupolar HOTIs, which can possess \emph{gapless or gapped surface-states, depending on the orientation of the surface}. The effects of spatially varying Zeeman coupling are briefly discussed in Appendix~\ref{AppC}. For conceptual clarity, we work with minimal four-band models in Secs.~\ref{Continuum}-\ref{AppA}. In Sec.~\ref{TOTI1} we deal with an eight-band model.
Our main results are as follows. \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.25]{Presentation1.pdf} \label{fig:presentation1}} \subfigure[]{ \includegraphics[scale=0.25]{Presentation2.pdf} \label{fig:presentation2}} \caption{Schematics of spin-charge separation and $\mathbb{N}$-classification of bulk topology of $\mathcal{C}$-, $\mathcal{P}$-, and $\mathcal{T}$-symmetric first-order topological insulators under open boundary conditions. In the absence of a pseudo-scalar training field ($M^\prime=0$), the degenerate half-filled state cannot exhibit the Witten effect. However, with electron or hole doping we can probe the induced electric charge on the monopole for charged $SU(2|nm|)$-multiplets. The occupation numbers of fermion zero-modes are labeled by $0$ and $1$, and $N_e$ corresponds to the number of doped electrons. The induced charge on the monopole (surface) is denoted by $\delta Q$ ($\delta Q_s$). Topological insulators with winding numbers $n= \pm 1$ also exhibit identical $SU(4)$-multiplets in the presence of double monopoles $m=\pm 2$. A trivial insulator does not support spin-charge separation. } \label{Presentation} \end{figure*} \begin{enumerate} \item In Secs.~\ref{Continuum} and ~\ref{FOTI}, we show that a static monopole, embedded in an \emph{infinite}, $\mathcal{C}$-, $\mathcal{P}$-, and $\mathcal{T}$-symmetric FOTI, supports $|n m|$ exponentially localized fermion zero-modes (FZMs). The FZMs give rise to the \emph{vacuum expectation value of the pseudo-scalar mass operator}, which is proportional to the chirality of the FZMs or $\text{sgn}(n m)$. Therefore, infinite FOTIs in the presence of an isolated, static monopole break $\mathcal{P}$, $\mathcal{T}$, $\mathcal{CP}$, and $\mathcal{CT}$ symmetries. Consequently, $\text{sgn}(n)$ and the $\mathbb{Z}$-classification can be determined either by computing the expectation value of the pseudo-scalar mass or via the WE.
\item However, under open boundary conditions (OBC), the gapless surface-states of \emph{finite-size} FOTIs also give rise to $|n m|$ FZMs of opposite chirality. The hybridization between monopole-localized and surface-localized FZMs leads to $2 |n m|$ exponentially split, near-zero-modes (NZMs). As the chirality of FZMs is obscured by hybridization, \emph{finite-size, half-filled FOTIs cannot exhibit WE}. \item Therefore, finite-size effects must be subdued with \emph{a pseudo-scalar training field ($M^\prime$)} to observe WE. We will calculate the induced charge enclosed by a Gaussian sphere of radius $R$, as a scaling function $\delta Q(L, R,\xi, M^\prime) \neq 0$, where $L$ is the system size, the correlation length $\xi \sim |M|^{-1}$, and $2|M|$ is the bulk band gap, when $M^\prime=0$. We show that the maximum value of $\delta Q(L, R,\xi, M^\prime)$ in the adiabatic scaling regime $\xi \ll |M^\prime|^{-1} \ll L$ approaches the quantized results of Eq.~\ref{Wittencont} (see Subsec.~\ref{WEHF}). As the overall half-filled system is charge neutral, the compensating charge $\delta Q_s=-\delta Q$ would be localized around the boundary. \item \emph{In the absence of a pseudo-scalar training field}, we can only detect the $\mathbb{N}$-classification of FOTIs by studying ``spin-charge separation" [see Fig.~\ref{Presentation} ]. In the thermodynamic limit ($L \to \infty$), the energy splitting due to hybridization can be ignored, and the charge-neutral, half-filled state follows the self-conjugate representation of the $\mathfrak{su}(2|nm|)$ algebra, with degeneracy \begin{equation}\label{grounddeg} N_G=\frac{(2|nm|)!}{[(|nm|)!]^2}. \end{equation} By doping one electron (or hole) at a time, we will probe $\mathcal{C}$-odd, charged multiplets that follow non-self-conjugate representations of $\mathfrak{su}(2|nm|)$ (see Subsec.~\ref{WED}).
When the number of doped electrons ($N_e$) is varied between $-|n m|$ and $|n m|$, the maximum value of the monopole-bound charge $\delta Q(L, R,\xi, M^\prime=0)$ saturates to $-e N_e/2$, and the surface-bound charge also shows saturation $\delta Q_s \to -e N_e/2$. Thus, the maximum value of $\delta Q$ oscillates between half-integer and integer values, until the NZMs become completely empty or occupied. The completely occupied and empty NZMs describe non-degenerate, many-body states that follow the singlet representation of $\mathfrak{su}(2|nm|)$. This method is useful for identifying $|n|$ with unit monopoles $m=\pm 1$, and for establishing the equivalence of insulators with $|n_1 m_1|=|n_2 m_2|$. Moreover, TQPTs can be addressed by studying the finite-size scaling of $\delta Q (L , R, \xi, M^\prime=0)$, as a function of $L/\xi$. \item It is often assumed that the induced electric charge $\delta Q= s e$, with $s \in \mathbb{Z}$, cannot be measured for crystalline systems, as the total charge inside a unit cell can be changed by an integer multiple of $e$. Such arguments do not apply to the results of thought experiments with monopoles, as the induced electric charge density is not described by a Dirac delta function [Eq.~\ref{rhoe}]. The maximum value of $\delta Q$ occurs for $R=R^\ast$, such that $\delta Q$ remains distributed over multiple unit cells. The compensating charge of opposite sign also stays distributed over multiple unit cells, close to the boundary for TIs, and over the entire sample for topological quantum critical points (TQCPs). \item In Sec.~\ref{SOTI}, we identify the third homotopy class of $\mathcal{C}$-preserving, but $\mathcal{P}$- and $\mathcal{T}$-breaking, chiral HOTIs~\cite{Schindlereaat0346}. This is a special class of $\mathcal{CP}$-odd, $\mathcal{PT}$-even ME insulators that can exhibit $\theta=\pm \pi$, protected by the combined $C^z_4\mathcal{T}$ symmetry, where $C^z_4$ denotes four-fold rotations about the $z$-axis.
Akin to FOTIs, chiral HOTIs also support FZMs on monopole and surface under OBC, and the half-filled state must be trained with a pseudo-scalar field ($M^\prime$) to observe WE. In the absence of training field, the spin-charge separation of chiral HOTI under OBC follows Fig.~\ref{fig:presentation1}. As the system supports direction-sensitive gapless surface states, the surface-localized FZMs can be entirely eliminated, with suitable boundary conditions. The non-degenerate ground state, under \emph{mixed boundary conditions} (MBC) can display WE, without any pseudo-scalar training field. To gain physical insights into tunneling configurations of $SU(2)$ Berry connection, we probe the second homotopy class of $C^z_4$-symmetric planes, with a magnetic $\pi$-flux tube. We show these planes, which are described by 2D quadrupolar HOTIs~\cite{Benalcazar61,BBHPrb} support quantized $SU(2)$ Berry flux~\cite{tyner2020topology,sur2022mixed}. The presence (absence) of non-trivial third homotopy class is precisely related to the presence (absence) of tunneling of $SU(2)$ Berry flux~\cite{tyner2021symmetry}. \item In Sec.~\ref{AppA}, we consider an example of $C^z_4$-symmetric, magnetic TIs, breaking $\mathcal{C}$, $\mathcal{P}$, $\mathcal{T}$, and the combined discrete symmetries $\mathcal{CP}$, $\mathcal{PT}$, and $\mathcal{CT}$. Even under OBC, the states localized on monopole and surface are not degenerate. \emph{Hence, the third homotopy class of such insulators can be detected by computing WE without any pseudo-scalar training field}. With a magnetic $\pi$-flux tube, we demonstrate the existence of $C^z_4$-symmetry-protected, tunneling configuration of Berry connection. \item In Sec.~\ref{TOTI1}, we consider octupolar HOTIs that preserve $\mathcal{C}$, $\mathcal{P}$, and $\mathcal{T}$ symmetries, and display gapped surface-states, and corner-localized zero-energy-states, under cubic-symmetry-preserving OBC~\cite{Benalcazar61,BBHPrb}. 
This system combines the exotic physics of FZMs, $\mathcal{CP}$-violation, and $SU(2)$ flavor-symmetry-breaking. In Refs.~\onlinecite{Benalcazar61,BBHPrb}, Benalcazar \emph{et al.} have claimed that the octupolar HOTIs do not support Chern-Simons coefficient and dipolar ME response. \emph{Contrary to such claims, we show that the octupolar HOTIs can exhibit quantized ME effect, with $\theta= \pm 2 \pi$, in the presence of infinitesimal, pseudo-scalar training field}. This result is independent of the existence of corner-localized states, which we substantiate by studying fermion spectrum and WE under OBC and MBC. We also address spin-charge separation and TQPT by doping electrons and holes. Moreover, with magnetic $\pi$-flux tube, we demonstrate the $C_3$-symmetry-protected, tunneling configuration of $SO(5)$ Berry connection, along $[111]$ direction~\cite{sur2022mixed}. \item Throughout this work, we emphasize that only the monopole-localized FZMs precisely describe universal aspects of bulk topology. The surface- and corner- localized modes are additional contributions, which depend on the details of model Hamiltonians (FOTIs vs. HOTIs), and non-universal aspects of boundary conditions. Indiscriminate use of bulk-boundary correspondence usually leads to incorrect conclusions about bulk topology. \end{enumerate} \begin{figure} \centering \subfigure[]{ \includegraphics[scale=0.8]{Figures/LinkingV.png} \label{fig:linkingV}} \subfigure[]{ \includegraphics[scale=0.32]{Figures/MutuallyUnlinked.pdf} \label{fig:unlinked}} \subfigure[]{ \includegraphics[scale=0.32]{Figures/MutuallyCritical.pdf} \label{fig:criticallinked}} \subfigure[]{ \includegraphics[scale=0.32]{Figures/MutuallyLinked.pdf} \label{fig:Hopflinked}} \caption{(a) Phase diagram of continuum model [Eq.~\ref{cont1} ]. 
The winding number of $O(4)$ unit vector [Eq.~\ref{winding1} ] controls the third homotopy class of $SU(2)$ Berry connection, which causes self-linking of lines of Berry curvatures for all three color components or spin projections. They are unlinked in the trivial (NI) phase, touching at the quantum critical point, and Hopf-linked in the topologically non-trivial (TI) phase. Mutual linking of three color components for (b) NI, (c) critical point, and (d) TI.} \label{fig:MutualLink} \end{figure} \section{Continuum theory}\label{Continuum} To gain analytical and numerical insights, we first work with the following continuum model of spherical TIs \begin{eqnarray}\label{cont1} H_{sph}(\bs{k})&=& \bs{N}(\bs{k}) \cdot \boldsymbol \Gamma =\hbar v \sum_{j=1}^{3} \; k_j \Gamma_j + (M+B \bs{k}^2) \Gamma_{5}, \nn \\ \end{eqnarray} which operates on a four-component spinor $\Psi(\bs{k})$, and $\Gamma_{j=1,2,3}=\tau_{1}\otimes \sigma_{j}$, $\Gamma_{4}=\tau_{2}\otimes \sigma_{0}$, and $\Gamma_5=\tau_3 \otimes \sigma_0$ are five, mutually anti-commuting $4 \times 4$ matrices. The $2\times 2$ identity and Pauli matrices $\tau_{i=0,1,2,3}(\sigma_{i=0,1,2,3})$ act on orbital/parity (spin) index. The $\mathcal{P}$, $\mathcal{T}$, and $\mathcal{C}$ symmetries are described by the operations $\Gamma_5 H (-\bs{k}) \Gamma_5= H (\bs{k})$, $ \Gamma_{31} H^\ast(-\bs{k}) \Gamma_{31}=H(\bs{k})$, and $ \Gamma_{25}H^\ast(-\bs{k})\Gamma_{25}=-H(\bs{k})$, respectively, and $\Gamma_{ab}=[\Gamma_a, \Gamma_b]/(2i)$. The $O(4)$ unit vector $\hat{\bs{N}}(\bs{k})$, describing maps from $\mathbb{R}^3 \to S^3$ can be written as \begin{eqnarray} \hat{\bs{N}}&=&[\mathrm{sgn}(v) \; \sin (\alpha(\bs{k})) \; \hat{\bs{k}}, \cos (\alpha(\bs{k}))], \\ \cos (\alpha(\bs{k})) &=& \frac{M+B \bs{k}^2}{\sqrt{\hbar^2 v^2 \bs{k}^2 + (M+B \bs{k}^2)^2}}. \end{eqnarray} where $0 \leq \alpha \leq \pi$ is the polar angle of $S^3$. 
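The algebraic content of this representation is straightforward to check numerically. The following sketch is ours and is not part of the analysis in the text; it assumes $\hbar=1$ and illustrative parameter values $v=B=1$, $M=-0.5$. It builds the five Gamma matrices as Kronecker products and verifies the Clifford algebra $\{\Gamma_a,\Gamma_b\}=2\delta_{ab}$ together with the parity operation $\Gamma_5 H_{sph}(-\bs{k}) \Gamma_5 = H_{sph}(\bs{k})$.

```python
import numpy as np

# Pauli matrices; tau acts on the orbital/parity index, sigma on the spin index
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Gamma_{1,2,3} = tau_1 x sigma_{1,2,3}, Gamma_4 = tau_2 x sigma_0, Gamma_5 = tau_3 x sigma_0
G = [np.kron(s1, g) for g in (s1, s2, s3)] + [np.kron(s2, s0), np.kron(s3, s0)]

# mutual anticommutation: {Gamma_a, Gamma_b} = 2 delta_ab
for a in range(5):
    for b in range(5):
        anti = G[a] @ G[b] + G[b] @ G[a]
        target = 2 * np.eye(4) if a == b else np.zeros((4, 4))
        assert np.allclose(anti, target)

def H_sph(k, M=-0.5, B=1.0, v=1.0):
    """Continuum Bloch Hamiltonian of Eq. (cont1), with hbar = 1 (illustrative parameters)."""
    k = np.asarray(k, dtype=float)
    return v * sum(k[j] * G[j] for j in range(3)) + (M + B * k @ k) * G[4]

# parity: Gamma_5 H(-k) Gamma_5 = H(k)
k = np.array([0.3, -0.7, 0.2])
assert np.allclose(G[4] @ H_sph(-k) @ G[4], H_sph(k))
```

Analogous one-line conjugations with $\Gamma_{31}$ and $\Gamma_{25}$ confirm the $\mathcal{T}$ and $\mathcal{C}$ operations quoted above.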
The topologically non-trivial, instanton configuration of $\hat{N}(\bs{k})$ is classified by the third spherical homotopy group $\pi_3(S^3)=\mathbb{Z}$ and the winding number is given by \begin{eqnarray} \mathcal{N}_3 &=& \frac{1}{2\pi^2} \; \int d^3k \; \epsilon^{\mu \nu \rho \lambda} \hat{N}_{\mu}\partial_1 \hat{N}_{\nu} \partial_2 \hat{N}_{\rho} \partial_3 \hat{N}_{\lambda}, \label{winding} \\ &=&\frac{ 1}{2} \text{sgn}(vB) \; \left[1- (1- \delta_{M,0}) \; \mathrm{sgn}(M B) \right], \label{winding1} \end{eqnarray} where $\epsilon^{\mu \nu \rho \lambda}$ is the four-dimensional, Levi-Civita symbol, and the Greek indices take values from $1$ through $4$. At the TQCP with $M=0$, $\alpha(\bs{k}) $ can only scan the northern or southern hemisphere of $S^3$, leading to the \emph{sphaleron configuration, in reciprocal space, possessing half-integer winding number} $\mathcal{N}_3 = \mathrm{sgn}(vB)/2$. In Fig.~\ref{fig:MutualLink}, we illustrate topological properties of $SU(2)$ Berry connection. \subsection{Monopoles and fermion zero-modes}\label{AppB} Next we carry out analytical calculations of FZMs in the presence of a Dirac monopole at the origin $\bs{r}=(0,0,0)$, for an infinite system. 
For convenience, we first perform a global unitary rotation $\psi(\bs{r}) \to e^{i \frac{\pi}{4} \Gamma_{45} }\chi(\bs{r})$, such that the mass operators transform as $\psi^\dagger \Gamma_5 \psi \to - \chi^\dagger \Gamma_4 \chi$ and $\psi^\dagger \Gamma_4 \psi \to \chi^\dagger \Gamma_5 \chi$, and obtain the rotated, gauged Hamiltonian operator \begin{eqnarray}\label{eq:contGauge} H_{sph}[\bs{a}]&=&-i \hbar v \sum_{j=1}^{3} D_j \Gamma_j - (M - B \bs{D}^2) \Gamma_4 + M^\prime \Gamma_5, \nn \\ &=& \begin{bmatrix} M^\prime \sigma_0 & \mathcal{D}^\dagger \\ \mathcal{D} & -M^\prime \sigma_0 \end{bmatrix} , \end{eqnarray} where $D_j=(\partial_j - i \frac{e}{\hbar} a_j)$ is the covariant derivative, $\bs{a}$ is the vector potential, and \begin{eqnarray} \mathcal{D}=-i\hbar v \sum_{j=1}^{3} D_j \sigma_j - i(M - B \bs{D}^2)\sigma_0. \end{eqnarray} In the absence of the monopole, $\mathcal{D} \to - i |\bs{N}(k)| u(\bs{k})$, and the third homotopy class of the $SU(2)$ matrix \begin{equation} u(\bs{k})=\hat{N}_5(\bs{k}) \sigma_0 + i \hat{N}_j(\bs{k}) \sigma_j \end{equation} is given by $\mathcal{N}_3$. The third homotopy class of $u^\dagger(\bs{k})$ is given by $-\mathcal{N}_3$. Following Ref.~\onlinecite{wu1976dirac}, we work with the non-singular gauge, \begin{equation}\label{eq:7} \bs{a}(\bs{r})=\begin{cases} \frac{g}{r\sin\theta}(1-\cos\theta) \; \hat{\phi},\;0\leq\theta<\frac{\pi}{2}+\delta\\ \frac{-g}{r\sin\theta}(1+\cos\theta) \; \hat{\phi},\;\frac{\pi}{2}-\delta<\theta\leq\pi \end{cases}. \end{equation} The wave sections for the northern and southern hemispheres are related in the overlap region by the gauge transformation $\psi_S = e^{-i m \phi} \psi_N$, with $m=2eg/\hbar$. When $M^\prime=0$, the $\mathbb{Z}_2$ particle-hole symmetry follows from $\{H[\bs{a}], \Gamma_5 \}=0$. As a consequence of this symmetry, if $\chi_{n}(\bs{x})$ is an eigenstate of $H[\bs{a}]$ with energy $E_{n}$, $\chi^\prime_{n} = \Gamma_5 \chi_{n} (\bs{x})$ will be an eigenstate with energy $-E_{n} $.
Hence, the finite-energy eigenstates $\chi_n$ and $\Gamma_5 \chi_n$ are orthogonal, and $\langle \chi^\dagger_n \Gamma_5 \chi_n \rangle =0$. If normalizable FZMs exist, they must be eigenstates of $\Gamma_5$ with eigenvalue $s=+1$ (positive chirality) or $s=-1$ (negative chirality). Furthermore, these solutions must satisfy $\mathcal{D} \chi_{0,+} =\mathcal{D}_+ \chi_{0,+}=0$, and $\mathcal{D}^\dagger \chi_{0,-} =\mathcal{D}_- \chi_{0,-}=0$, where $\mathcal{D}_+=\mathcal{D}^\dagger\mathcal{D}$, and $\mathcal{D}_-=\mathcal{D} \mathcal{D}^\dagger$. If there are $n_s$ FZMs with chirality $s$, \begin{equation} \langle \chi^\dagger \Gamma_5 \chi \rangle=\sum_s \langle \chi^\dagger_{0,s} \Gamma_5 \chi_{0,s} \rangle = (n_+ -n_-). \end{equation} The goal of the index theorem is to find a precise relationship between $(n_+ - n_-)$, the bulk invariant $\mathcal{N}_3$, and the monopole strength $m$. Due to spherical symmetry, $H[\bs{a}]$, $\bs{J}^2$, and $J_z$ can be simultaneously diagonalized~\cite{Kazama1977,shnir2006magnetic}, where the total and orbital angular momentum operators are given by \begin{eqnarray} \bs{J}&=&\bs{L} + \frac{\hbar}{2} \tau_0 \otimes \boldsymbol \sigma, \\ \bs{L}&=&-i \hbar \; \bs{r} \times \bs{D} - \frac{\hbar}{2} m \hat{\bs{r}}. \end{eqnarray} The FZMs can be labeled by the following quantum numbers: (i) the total angular momentum $j=(|m|-1)/2$, (ii) the projection of total angular momentum $-j \leq j_z \leq +j$, and (iii) the orbital angular momentum $l = j+1/2= |m|/2$. Without the higher gradient term $-B \bs{D}^2 \Gamma_4$, the radial differential operator of the linearized Dirac theory exhibits pathological short-distance behavior for the $j=(|m|-1)/2$ channel, due to the vanishing of the centrifugal barrier~\cite{Kazama1977,shnir2006magnetic}.
This is circumvented by defining a self-adjoint extension of the Hamiltonian that modifies the boundary conditions with \emph{an external parameter} $\vartheta$~\cite{Goldhaber1977,Callias1977,yamagishi1983fermion,Grossman1983}. Only $\vartheta=0, \pi$ can preserve $\mathcal{CP}$ symmetry. The result for the Witten effect is obtained by identifying $\vartheta$ with the axion angle $\theta$~\cite{yamagishi1983fermion,Grossman1983}. The parameter $\vartheta$ can only be determined from a detailed analysis of UV-complete, microscopic models. Alternatively, Kazama \emph{et al.}~\cite{Kazama1977} have used an infinitesimal, anomalous magnetic moment or Zeeman coupling \begin{equation}\label{AZ} H_{AZ}= \frac{\kappa \;\hbar e }{2 m_e} \; g \; \sum_{j=1}^{3} \frac{\hat{\bs{r}}_j}{r^2} \psi^\dagger \Gamma_{j4} \psi = \frac{\kappa \; \hbar e }{2 m_e} \; g \; \sum_{j=1}^{3} \frac{\hat{\bs{r}}_j}{r^2} \chi^\dagger \Gamma_{j5} \chi, \end{equation} which maintains a centrifugal barrier during intermediate steps of the calculations, and the $\kappa \to 0^+$ limit is taken at the end of the calculations. The anomalous Zeeman coupling term obeys $\mathbb{Z}_2$ particle-hole symmetry, as $\{ H_{AZ}, \Gamma_5 \}=0$. While these UV regulators allow normalizable FZMs, the number of FZMs cannot be clearly related to any unambiguous, bulk topological invariant. When $B \neq 0$, we have a natural short-distance regulator. As $r \to 0$, $H[\bs{a}] \to -B \bs{D}^2 \Gamma_4$ describes decoupled, non-relativistic Schr\"{o}dinger Hamiltonians, supporting non-vanishing centrifugal barriers. Consequently, the radial wave functions for all angular momentum channels vanish as $r \to 0$. Only in the large-distance limit $r \to \infty$ can the asymptotic behavior be approximated by the linearized Dirac theory.
For $j=(|m|-1)/2$, $l=j+1/2=|m|/2$ channel, we substitute the ansatz \begin{eqnarray} && \chi^T_{m,j,j_z}(\mathbf{r})=[ F_+(r)\eta^{T}_{m,j,j_z}, F_-(r)\eta^{T}_{m,j,j_z}], \\ && \eta^T_{m,j,j_z}(\theta,\phi)=\bigg[ -\bigg(\frac{j-j_z+1}{2j+2}\bigg)^{\frac{1}{2}}Y_{m/2,j+1/2,j_z-1/2}(\theta,\phi), \nn \\ && \bigg(\frac{j+j_z+1}{2j+2}\bigg)^{\frac{1}{2}}Y_{m/2,j+1/2,j_z+1/2}(\theta,\phi) \bigg], \end{eqnarray} and obtain coupled radial equations \begin{eqnarray}\label{eq:conteigenstates} && \Big[M^\prime \tau_{3}-i \; \hbar v \; \mathrm{sgn}(m) \; \left(\partial_{r} +\frac{1}{r} \right)\tau_{1} + \Big \{ M-B \Big (\partial_{r}^2 \nn \\ && +\frac{2}{r} \partial_{r}- \frac{|m|}{2r^2} \Big ) \Big \}\tau_2 \Big]\begin{pmatrix} F_+(r)\\ F_-(r) \end{pmatrix}=E \begin{pmatrix} F_+(r)\\ F_-(r) \end{pmatrix}. \end{eqnarray} Here, $F_\pm(r)$ are two independent radial functions, $\eta_{m,j,j_z}$ is the spinor monopole harmonics of the third type, and $Y_{m/2,j+1/2, j_z \pm 1/2}$ are spherical monopole harmonics~\cite{Kazama1977}. When $M^\prime=0$, only TIs can support normalizable FZMs, with \begin{eqnarray} F_s (r)&=& [1 -s \; \mathrm{sgn}(m \mathcal{N}_3) ] \; \frac{e^{-\frac{r\Lambda }{2} }}{2\sqrt{r}} \; [c_1 \; J_{\nu}(\tilde{r}) \; \Theta(4-\xi \Lambda) \nn \\ &&+ \; c_2 \; I_{\nu}(\tilde{r}) \; \Theta(\xi \Lambda-4)+c_3 \; (r\Lambda)^{\nu} \; \delta_{4,\xi \Lambda}] \end{eqnarray} where $c_i$'s are normalization constants, $J_{\nu}(\tilde{r})$ is the Bessel function of first kind, $\nu=\frac{1}{2} \sqrt{1+2 |m|}$, and $\Theta(x)$ is the Heaviside step function. We have defined dimensionless radial variable $\tilde{r}=- \frac{i}{2} \; \Lambda r f(\xi\Lambda)$, and $f(\xi \Lambda)= \sqrt{|1- 4 (\xi \Lambda)^{-1}|}$, where $\xi=\hbar |v|/|M|$ is the correlation length, and $\Lambda^{-1}= |B|/(\hbar |v|)$ is a short-distance scale. When $r \to 0$, wave functions vanish as $r^{\nu -1/2}$. 
At large distances ($r \to \infty$), the decay of the wave functions follows $F_s \sim e^{-r/\xi}/r$, when $\xi \Lambda > 4$, which also describes the long-wavelength behavior in the vicinity of the TQCP ($\xi \Lambda \to \infty$). Far from the TQCP ($\xi \Lambda \leq 4$), the exponential decay is controlled by $\Lambda$, and $F_s \sim e^{-r \Lambda/2}$. The scaling behavior of the normalization constants is given by \begin{eqnarray} &&|c_j| = \Big[\frac{\Lambda^2 \; 2^{2\nu-1} \; \Gamma(1+\nu) \; \Gamma(\frac{1}{2}) \; f^{-2\nu} \; |m|^{-1}}{ \Gamma (\nu +\frac{3}{2}) \; _2F_1(\nu+\frac{1}{2}, \nu+\frac{3}{2}, 2\nu+1, (-1)^jf^{2})} \Big]^{1/2}, \nn \\ \end{eqnarray} with $j=1,2$, and \begin{eqnarray} |c_3|=\frac{\Lambda^{3/2}}{\sqrt{4 |m| \Gamma(2+2\nu)}}. \end{eqnarray} With explicit solutions of normalizable FZMs, we can formulate the following index theorem \begin{eqnarray} \langle \psi^\dagger \Gamma_4 \psi \rangle = \langle \chi^\dagger \Gamma_5 \chi \rangle = - m \mathcal{N}_3. \label{index1} \end{eqnarray} When $M^\prime \neq 0$, the positive (negative) chirality zero-modes move to $E= M^\prime$ ($-M^\prime$). We also note that the infinitesimal, anomalous Zeeman coupling term of Eq.~\ref{AZ} leaves the structure of FZMs unaffected. Only the index of the Bessel function is modified: $\nu \to \frac{1}{2} \sqrt{1+2 |\tilde{m}|}$, where $\tilde{m}=m (1+ \frac{\kappa \hbar^2}{2 m_e B} ) $. \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=0.44]{Figures/PseudoExpectation.PNG} \label{fig:conta}} \subfigure[]{ \includegraphics[scale=0.43]{Figures/dQContinuum.PNG} \label{fig:contb}} \subfigure[]{ \includegraphics[scale=0.75]{Figures/Continuum_Critical_DeltaQ.PNG} \label{fig:contc}} \caption{Numerical results for spin-charge separation for a spherically-symmetric, first-order topological insulator [Eq.~\ref{cont1} ], with winding number $\mathcal{N}_3=-1$, in the presence of a unit monopole $m=+1$, under open boundary conditions.
All lengths are measured in units of short-distance scale $\Lambda^{-1}=|B|/(\hbar |v|)$. (a) The system-size dependence of chirality of near-zero-modes, obtained from Eq.~\ref{eq:conteigenstates}, in the presence of a small pseudo-scalar mass $M^\prime>0$. Here, $L$ is the radius of the system and $\xi=\hbar v/|M|$ is the correlation length. When $L/\xi \gg 1$, the chirality (acting as the third component of spin) is clearly resolved. (b) The induced electric charge on monopole $\delta Q(\tilde{R})$ (in units of $-e$), enclosed by a Gaussian sphere of radius $R$ [see Eq.~\ref{dopedcont} ], when $M^\prime=0$, and the system is doped with one electron [$N_e=+1$ case of Fig.~\ref{fig:presentation1} ]. The dimensionless ratios $\tilde{R}=R \Lambda$ and $\tilde{M}= \text{sgn}(MB) /(\xi \Lambda)$. For a topological insulator ($\tilde{M}<0$), the maximum induced charge saturates to $-e/2$. (c) At the topological quantum critical point ($\tilde{M}=0$), the maximum induced charge saturates to $-e/4$, when $L \geq 40 \Lambda^{-1}$, confirming the half-integer winding number of quantum critical, Dirac semimetal. The quantized values of maximum induced charge have been calculated up to a numerical accuracy $10^{-4}$. Notice that the maximum values of induced charge occur for $R^\ast \approx L/2$. } \label{fig:ContMono} \end{figure*} Intriguingly, Eq.~\ref{index1} describes a topological mechanism for breaking $\mathcal{P}$, $\mathcal{T}$, $\mathcal{CP}$, and $\mathcal{CT}$ symmetries. Using the exact eigenstates of gauged Hamiltonian, fermion field operators can be expanded as \begin{eqnarray} \Psi(\bs{r},t)= \sum_n \; c_n \psi_n(\bs{r}) e^{- i E_n t/\hbar}, \\ \Psi^\dagger(\bs{r},t)= \sum_n \; c^\ast_n \psi^\dagger_n(\bs{r}) e^{ i E_n t/\hbar}, \end{eqnarray} where $c_n$ ($c^\ast_n$) is the fermion annihilation (creation) operator, and $\{ c_n, c^\ast_l \} =\delta_{n,l}$. 
Therefore, the vacuum expectation value of the pseudo-scalar mass operator is given by \begin{eqnarray} \langle \Psi^\dagger \Gamma_4 \Psi \rangle = \int d^3r \; \Psi^\dagger(\bs{r},t) \Gamma_4 \Psi(\bs{r},t)=-\frac{m \mathcal{N}_3}{2} , \end{eqnarray} and the factor of $1/2$ arises from the half-filling of FZMs. The calculation of the induced charge is more involved, as all occupied states must be taken into account. Following Refs.~\cite{yamagishi1983fermion,Grossman1983}, we can show that $\delta Q=-\frac{m e \mathcal{N}_3}{2} $, when the radius of the Gaussian surface is sent to $\infty$. \subsection{Flavor-symmetry} If we consider $N_f$ copies of spherical topological insulators, the model will support $SU(N_f)$ flavor symmetry. In the presence of monopoles, each flavor will give rise to FZMs, and the flavor-singlet, topological response will be determined by $\sum_{i=1}^{N_f} \langle \Psi_i^\dagger \Gamma_4 \Psi_i \rangle = -\frac{ N_f m \mathcal{N}_3}{2}$, and the total induced electric charge $\delta Q=-\frac{N_f m e \mathcal{N}_3}{2}$. By introducing hybridization between different species, the flavor symmetry will be reduced to a subgroup of $SU(N_f)$. The generic form of particle-hole-symmetric, flavor-symmetry-breaking is described by \begin{eqnarray} &&H=\mathbb{1}_{N_f \times N_f} \otimes H_{sph}(\bs{k}) + H_{FSB}, \nn \\ &&H_{FSB}= \sum_{i=1}^{N_f^2-1} \; M_i (\bs{k}) \; \hat{\lambda}_i \otimes \Gamma_4, \end{eqnarray} where $\hat{\lambda}_i$'s are the Hermitian generators of the $\mathfrak{su}(N_f)$ algebra, and $M_i(\bs{k})$ are momentum-dependent hybridization parameters. When $M_i$'s are momentum-independent constants, $H_{FSB}$ can be diagonalized in flavor-space with $SU(N_f)$ rotations, to obtain \begin{equation} H_{FSB} \to \text{diag} [m_{11},\ldots,m_{N_f N_f}] \otimes \Gamma_4, \end{equation} where $m_{jj}$'s are the $N_f$ eigenvalues of the matrix $\sum_{i} M_i \; \hat{\lambda}_i$.
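For momentum-independent $M_i$, this block-diagonalization can be verified directly. The sketch below is our illustration, not a calculation from the text: it takes $N_f=2$, with the $\hat{\lambda}_i$ given by the Pauli matrices and illustrative values of $M_i$, $M$, $B$, $v$ ($\hbar=1$), and checks that the spectrum of $\mathbb{1}\otimes H_{sph}+H_{FSB}$ is the union of the spectra of the decoupled blocks $H_{sph} + m_{jj}\Gamma_4$.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
# Gamma_{1,2,3} = tau_1 x sigma_{1,2,3}, Gamma_4 = tau_2 x sigma_0, Gamma_5 = tau_3 x sigma_0
G = [np.kron(sx, g) for g in (sx, sy, sz)] + [np.kron(sy, s0), np.kron(sz, s0)]

def H_sph(k, M=-0.5, B=1.0, v=1.0):
    """Single-flavor continuum Hamiltonian of Eq. (cont1), hbar = 1 (illustrative values)."""
    k = np.asarray(k, dtype=float)
    return v * sum(k[j] * G[j] for j in range(3)) + (M + B * k @ k) * G[4]

# momentum-independent hybridization sum_i M_i lambda_i for N_f = 2 (illustrative M_i)
M_i = [0.2, -0.1, 0.3]
H_f = M_i[0] * sx + M_i[1] * sy + M_i[2] * sz
m_jj = np.linalg.eigvalsh(H_f)               # the two eigenvalues +-|M_i|

k = np.array([0.4, 0.1, -0.3])
H_tot = np.kron(np.eye(2), H_sph(k)) + np.kron(H_f, G[3])   # G[3] is Gamma_4

# spectrum of the coupled problem = union of the decoupled ME-insulator spectra
E_blocks = np.concatenate([np.linalg.eigvalsh(H_sph(k) + m * G[3]) for m in m_jj])
assert np.allclose(np.sort(np.linalg.eigvalsh(H_tot)), np.sort(E_blocks))
```

The check holds at every $\bs{k}$, since the unitary $U\otimes \mathbb{1}$ diagonalizing $H_f$ commutes with $\mathbb{1}\otimes H_{sph}$.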
If all eigenvalues are distinct, the $SU(N_f)$ symmetry is reduced to $U(1) \times U(1) \times...\times U(1) = [U(1)]^{N_f-1}$. Therefore, the resulting theory is made out of $N_f$ species of decoupled ME insulators, with different strengths of the pseudo-scalar mass $M^\prime_j=m_{jj}$, and non-quantized ME coefficients $\theta_j$. When probed with monopoles, FZMs will be split by $H_{FSB}$ and appear at reference energies $m_{jj} \text{sgn} (m \mathcal{N}_3)$. Naturally, such theories give rise to a non-quantized ME response. An important question is whether \emph{$k$-dependent hybridization terms can qualitatively alter such conclusions and display quantized, flavor-singlet, ME response}. There is no generic answer to this question and one must perform explicit calculations. By inserting monopoles and measuring WE, one can avoid all technical difficulties associated with the direct computation of the Chern-Simons coefficient. In Sec.~\ref{TOTI1}, we will demonstrate that the octupolar HOTI breaks the $SU(2)$ flavor symmetry with momentum-dependent hybridization terms. But the hybridization terms obey the underlying cubic symmetry, and monopoles bind $N_f |m \mathcal{N}_3|$ FZMs, and $\delta Q=-\frac{N_f m e \mathcal{N}_3}{2}$, with $N_f=2$. Next, we address finite-size effects for spherical FOTIs, with $N_f=1$. \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.18]{Figures/CS_Phase_Diagram_2.pdf} \label{fig:1a}} \subfigure[]{ \includegraphics[scale=0.47]{Figures/TI_Planes.pdf} \label{fig:1b}} \caption{(a) Phase diagram for the lattice model of first-order topological insulators [see Eq.~\ref{eq:tbmodel} ], where $\Delta$ is a dimensionless, tuning parameter, and $\mathcal{N}_3$ is the bulk topological invariant [see Eq.~\ref{winding} ]. The topological quantum critical points support half-integer winding numbers $\mathcal{N}_3=\pm 1/2$.
(b) Illustration of tunneling configurations of $SU(2)$ Berry connection of valence bands in terms of mirror Chern numbers of four-fold symmetric mirror planes, as $\mathcal{N}_3=C_M(k_j=\pi) -C_M(k_j=0)$, with $j=1,2,3$. At $\Delta=-3 $, all three mirror planes, passing through the zone center exhibit quantum Hall plateau transitions, with $C_M(k_j=0)=1/2$. The TI$_2$ phase supports $C_M(k_j=\pi)=+1$ and $C_M(k_j=0)=-1$, leading to $\mathcal{N}_3=+2$, which cannot be deduced from weak $\mathbb{Z}_2$ indices. } \label{fig:Berry} \end{figure*} \subsection{Finite-size effects, spin-charge separation, and quantum criticality} In the absence of a monopole, the surface-states of a large finite-size system of radius $L$ ($L \Lambda \gg 1$) under spherical-symmetry-preserving OBC are described by two-component, Dirac fermions living on $S^2$. Employing the eigenstates of $\hat{\bs{r}} \cdot \boldsymbol \sigma$, the surface Hamiltonian can be written as \begin{eqnarray} H_{surface}(\theta, \phi)= -i \frac{\hbar v}{L} [\sigma_2 (\partial_\theta + \frac{1}{2} \cot \theta) - \sigma_1 \frac{1}{\sin \theta} \partial_\phi ], \nn \\ \end{eqnarray} leading to the minimal spectral gap $2 \hbar v /L$~\cite{SphericalTI1}. In the presence of a monopole, $H_{surface}[\bs{a}]$ supports FZMs~\cite{PhysRevB.78.195426,JELLAL2008361}, with degeneracy $|m \mathcal{N}_3|$ and chirality $\text{sgn}(m \mathcal{N}_3)$. With numerical solutions of Eq.~\ref{eq:conteigenstates}, we can study hybridization between FZMs of opposite chirality and spin-charge separation. The energy splitting due to hybridization is given by \begin{equation} \label{xi0} E_h \approx \frac{\hbar v}{L} \exp \left[-\frac{L}{\xi_0} \right], \end{equation} where $\xi_0 \sim \max \{ \xi, \Lambda^{-1} \}$ is the localization length of the FZMs. In Fig.~\ref{fig:conta}, we show the chirality of NZMs for a minimal monopole $m=+1$, as a function of $L/\xi=L|M|/(\hbar v)$.
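The competition between a training field and the splitting $E_h$ of Eq.~\ref{xi0} can be caricatured by projecting onto the pair of near-zero-modes. The two-level sketch below is our illustration, not a calculation from the text: in the chirality basis, the hybridization enters off-diagonally and the pseudo-scalar training field diagonally, so the chirality of the filled mode is resolved only when the training field dominates $E_h$.

```python
import numpy as np

def chirality_of_filled_nzm(M_prime, E_h):
    """<Gamma_5> in the filled (negative-energy) member of the hybridized pair of
    near-zero-modes: monopole- and surface-localized chiral modes coupled by E_h."""
    # basis (positive chirality, negative chirality); Gamma_5 -> diag(+1, -1)
    H = np.array([[M_prime, E_h], [E_h, -M_prime]])
    E, V = np.linalg.eigh(H)       # eigenvalues in ascending order
    psi = V[:, 0]                  # lower of the two exponentially split modes
    return psi[0]**2 - psi[1]**2

# chirality is resolved for M' >> E_h and obscured for M' <= E_h
assert chirality_of_filled_nzm(M_prime=1.0, E_h=1e-3) < -0.99
assert abs(chirality_of_filled_nzm(M_prime=1e-3, E_h=1.0)) < 0.01
```

Analytically, $\langle \Gamma_5\rangle = -M^\prime/\sqrt{M^{\prime 2}+E_h^2}$ for the filled mode, which interpolates between the two limits.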
We have introduced a small pseudo-scalar mass $M^\prime \neq 0$ to control hybridization effects. In the thermodynamic limit, when $M^\prime > E_h$, the chirality of NZMs can be clearly resolved. For small system sizes, $M^\prime \leq E_h$, and the chirality is suppressed. When $M^\prime=0$, \emph{the half-filled state is two-fold degenerate} [see Fig.~\ref{fig:presentation1} ]. Due to the obscured chirality of NZMs, monopoles cannot exhibit WE, i.e., $\delta Q(L, R, \xi, M^\prime=0)=0$. This happens due to a precise cancellation between the contributions of the occupied NZM and the Dirac sea, which will be described explicitly for lattice models. To address the induced charge for doped $SU(2)$ singlets, we calculate the contributions of NZMs to the regulated charge density operator: \begin{equation} \delta \rho(\bs{r},L,\xi,M^\prime=0)=\rho_m(\bs{r}) -\rho_0(\bs{r}), \end{equation} where $\rho_m(\bs{r}) $ [$\rho_0(\bs{r})$] denotes the charge density operator in the presence [absence] of monopole. By integrating $\delta \rho(\bs{r}^\prime)$ over a Gaussian sphere of radius $R$, we obtain the induced charge \begin{equation}\label{dopedcont} \delta Q(L,R,\xi,M^\prime=0)= \int_{|\bs{r}^\prime | < R} \; d^3r^\prime \delta \rho(\bs{r}^\prime,L,\xi,M^\prime=0). \end{equation} When the ratio $L/\xi \gg 1$, the maximum induced charge for a TI saturates to $\delta Q_{max}= \frac{-e}{2} \; \text{sgn}(N_e)$ [see Fig.~\ref{fig:contb}]. Since, by doping, we have introduced the electric charge $-e \; \text{sgn}(N_e)$, the surface-states will support $\delta Q_s = -\frac{e}{2} \; \text{sgn}(N_e)$. If the quantum critical, Dirac semimetal is doped by a single electron or hole, $\delta Q_{max}$ saturates to $-\frac{e}{4} \; \text{sgn}(N_e)$. As the critical system does not support any surface-states, the rest of the doped charge $-\frac{3e}{4} \text{sgn}(N_e)$ will be distributed over the entire sample. In Fig.
~\ref{fig:contc}, we show the system-size dependence of $\delta Q(L,R,\xi=\infty,M^\prime=0)$ at TQCP. The saturation of maximum induced charge to $-\frac{e}{4} \; \text{sgn}(N_e)$ corroborates half-integer winding number for TQCP. With analytical and numerical insights gained from continuum theory, we now consider lattice models of FOTIs that can exhibit odd and even integer winding numbers. \section{Lattice model of first order topological insulators}\label{FOTI} We will work with the following model on a simple cubic lattice, \begin{eqnarray}\label{eq:tbmodel} H_1(\bs{k})=\bs{N}(\bs{k}) \cdot \boldsymbol \Gamma= t \sum_{j=1}^{3} \; \sin k_j \Gamma_j+ t^\prime[\Delta + \sum_{j=1}^{3} \cos k_j ] \Gamma_5. \nn \\ \end{eqnarray} The Bloch Hamiltonian $H_1$ still operates on a four-component spinor $\Psi(\bs{k})$, and the definitions of gamma matrices, and discrete symmetry operations $\mathcal{P}$, $\mathcal{T}$, and $\mathcal{C}$ are unchanged. Here, $t$ and $t^\prime$ are two independent hopping parameters, and the dimensionless tuning parameter $\Delta$ controls TQPTs. For convenience, we have set the lattice constant $a$ to be one. The $O(4)$ unit vector $\hat{N}(\bs{k})$ now describes maps from Brillouin zone three-torus $T^3$ to $S^3$, and instanton configurations are again classified by the third spherical homotopy group $\pi_3(S^3)=\mathbb{Z}$. The winding number $\mathcal{N}_3$ now counts the number of times $T^3$ can wrap around $S^3$. The phase diagram is shown in Fig.~\ref{fig:1a}. The TI$_2$ phase is a crystalline-symmetry-protected insulator, possessing higher winding number. Thus, many physical properties of TI$_2$ cannot be properly described by continuum theories. At TQCPs $\Delta= \pm 3 \; (\pm 1)$, the bulk band gap vanishes at one (three) time-reversal invariant momentum points $\bs{k}=\pi(n_1,n_2,n_3)$, where $n_j=0,1$. This gives rise to three-dimensional, massless Dirac fermions as quantum critical excitations. 
For $\Delta=-3 \; (+3)$, the Dirac point is located at the center (corner) of the Brillouin zone. In contrast to this, the inequivalent Dirac points for $\Delta= -1 \; (+1)$ are located at three $X$ [$M$] points, with $\bs{k}=(\pi,0,0), \; (0,\pi, 0), \; (0, 0, \pi)$ [$\bs{k}=(\pi,\pi,0), \; (\pi,0,\pi), \; (0, \pi, \pi)$]. Notice that the number of inequivalent massless Dirac fermions and the winding number at a TQCP are given by the difference and the average of the winding numbers of the adjacent insulating states, respectively. Hence, TQCPs of lattice models also correspond to \emph{sphaleron configurations of $\hat{N}(\bs{k})$, possessing half-integer winding numbers}. As illustrated in Fig.~\ref{fig:1b}, $\mathcal{N}_3$ is directly related to the tunneling of mirror Chern numbers (or second homotopy class) of four-fold symmetric planes. We note that the mirror symmetry operations $H_1(k_a, k_b, k_c) = \Gamma_{c4} H_1(k_a, k_b, -k_c) \Gamma_{c4}$ are implemented by non-commuting matrices $\Gamma_{c4}=\frac{1}{2} \epsilon_{cab} \Gamma_5 \Gamma_{ab}$, reflecting the non-Abelian nature of the physical problem. At $\Delta = 3$ ($-3$), all mirror planes, passing through the zone corner (center) simultaneously display quantum Hall plateau transitions. In the literature, the TI$_2$ phase is often referred to as a weak TI~\cite{FuKane,FuKaneMele2007}, with strong $\mathbb{Z}_2$ index $\nu_0=0$, and weak $\mathbb{Z}_2$ indices $[1,1,1]$. The weak indices only describe the parity (even vs. odd integer) of mirror Chern numbers for $k_j=\pi$ planes. Being insensitive to the sign of mirror Chern numbers, the weak indices cannot address the 3D tunneling configuration of the TI$_2$ phase. This is generally true for any state whose third homotopy class is described by an even integer winding number~\cite{tyner2021symmetry}.
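The invariant of Eq.~\ref{winding} can be evaluated for the lattice model by discretizing the Brillouin-zone integral with central differences. The sketch below is ours, not from the text; it assumes $t=t^\prime=1$ and checks only $|\mathcal{N}_3|$, since the overall sign additionally depends on $\mathrm{sgn}(t t^\prime)$ and the orientation convention for $\epsilon^{\mu\nu\rho\lambda}$.

```python
import numpy as np
from itertools import permutations

def levi_civita_sign(p):
    """Parity of a permutation via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def winding_number(delta, n_k=40, t=1.0, tp=1.0):
    """N_3 = (1/2 pi^2) int d^3k eps^{mu nu rho la} N_mu d1 N_nu d2 N_rho d3 N_la
    for the O(4) unit vector of Eq. (tbmodel), on an n_k^3 grid with central differences."""
    k = 2 * np.pi * np.arange(n_k) / n_k
    k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
    N = np.stack([t * np.sin(k1), t * np.sin(k2), t * np.sin(k3),
                  tp * (delta + np.cos(k1) + np.cos(k2) + np.cos(k3))])
    N /= np.linalg.norm(N, axis=0)            # normalize to the O(4) unit vector
    h = 2 * np.pi / n_k
    dN = [(np.roll(N, -1, axis=a) - np.roll(N, 1, axis=a)) / (2 * h) for a in (1, 2, 3)]
    density = np.zeros_like(k1)
    for p in permutations(range(4)):
        density += levi_civita_sign(p) * N[p[0]] * dN[0][p[1]] * dN[1][p[2]] * dN[2][p[3]]
    return density.sum() * h**3 / (2 * np.pi**2)
```

On a $40^3$ grid this reproduces $|\mathcal{N}_3| = 2, 1, 0$ in the gapped TI$_2$, TI$_1$, and NI regimes of Fig.~\ref{fig:1a}; the gapless TQCPs themselves must be avoided, since $|\bs{N}(\bs{k})|$ vanishes there at isolated grid points.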
Since $\{H(\bs{k}), \Gamma_4 \} =0$, the model of Eq.~\ref{eq:tbmodel} exhibits $\mathbb{Z}_2$ particle-hole symmetry, which can introduce large gauge ambiguities for the Chern-Simons invariant. We avoid such ambiguities by regulating our model Hamiltonian with a pseudo-scalar mass: $H(\bs{k}) \to H(\bs{k}) + M^\prime \Gamma_4$. After obtaining $ \mathcal{CS} (M^\prime)$, using Eq.~\ref{A11}, we can show that \begin{eqnarray} \mathcal{CS}_\pm=\mathcal{CS}(M^\prime \to 0^\pm)-\mathcal{CS}(M^\prime \to \pm \infty)= \pm \frac{\mathcal{N}_3}{2}. \label{ChernSimonscont} \end{eqnarray} \emph{In our numerical calculations, we will consistently implement $M^\prime >0$, and show that the measured induced charge for the half-filled system tracks} $\theta_+ = 2 \pi \mathcal{CS}_+ = \pi \mathcal{N}_3$. \subsection{Fermion spectrum with monopoles} To diagnose the bulk topology of the lattice model with magnetic monopoles, we first obtain a tight-binding Hamiltonian in real space, by performing a Fourier transformation of $H(\bs{k})$. Subsequently, all hopping parameters ($t_{ij}$'s), connecting two different sites $\bs{r}_i$ and $\bs{r}_j$, are modified by Peierls phase factors $e^{i\nu_{ij}}$, where $\nu_{ij}=(e/\hbar)\int_{\bs{r}_i}^{\bs{r}_j}\mathbf{a}\cdot d\mathbf{l}$, and $\mathbf{a}$ is the vector potential. The monopole will be placed at the center of the system $\bs{r}=(0,0,0)$, such that the sites of the cubic lattice are labeled by $\bs{r}_i=\frac{a}{2}(n^x_i,n^y_i,n^z_i)$, and $n^a_i \in \mathbb{Z}$. All calculations will be performed under OBC, by employing the singular, north-pole gauge \begin{eqnarray} \bs{a}^N(\bs{r}_i)= \frac{g}{r_i} \tan \frac{\theta_i}{2} \; \hat{\phi}_i=g \; \frac{-y_i \hat{x} + x_i \hat{y}}{r_i(r_i+z_i)}, \end{eqnarray} with the Dirac string oriented along the negative $z$-axis.
Since the Dirac quantization condition $2eg/\hbar=m \in \mathbb{Z}$ makes the string unobservable, the fermion spectrum remains unchanged for (i) the south-pole gauge \begin{equation} \bs{a}^S(\bs{r}_i)= -\frac{g}{r_i} \cot \frac{\theta_i}{2} \; \hat{\phi}_i=g \; \frac{y_i \hat{x} - x_i \hat{y}}{r_i(r_i-z_i)}, \end{equation} with the Dirac string along the positive $z$-axis, and (ii) the patch-wise, smooth gauge of Eq.~\ref{eq:7} that avoids Dirac strings. To reduce finite-size effects, we have considered system sizes up to $(L/a)^3=30^3$, yielding $(4\times 30^3)=108000$ eigenstates. Our results for the fermion spectrum are shown in Fig.~\ref{fig:monopolespectra} for $m=+1$. With $M^\prime >0$, the degeneracy of the bound states with $E \approx \pm |M^\prime|$ is determined by $|\mathcal{N}_3|$, as shown in Figs.~\ref{fig:2a} and \ref{fig:2b}. In Figs.~\ref{fig:2c} and \ref{fig:2d}, we plot the localization patterns of these states for the TI$_1$ phase, with $\mathcal{N}_3=-1$. Since $-\text{sgn}(m \mathcal{N}_3) =+1$, the positive (negative) energy state remains bound to the monopole (surface), in agreement with Eq.~\ref{index1}. As we set $M^\prime=0$, these finite-energy bound states transform into topologically protected NZMs. As long as $|M^\prime| > E_h \approx 2 t a/L \exp(-L/\xi_0)$, the identity or chirality of the bound states can be clearly revealed. The topologically trivial insulator (NI) does not support NZMs. By studying non-minimal monopoles with $|m|>1$, we have found that the total number of NZMs is given by $2|m \mathcal{N}_3|$.
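The unobservability of the Dirac string can also be verified directly at the level of the Peierls phases: the north- and south-pole gauges differ by the pure gauge transformation $\bs{a}^N - \bs{a}^S = \nabla(2g\phi)$, so the two bond phases differ by $2g\,\Delta\phi$ along any bond avoiding both strings. A minimal numerical check (in units $e=\hbar=1$, so $g=m/2$):

```python
import numpy as np

def a_north(r, g):
    """North-pole gauge; Dirac string along the negative z-axis."""
    x, y, z = r
    rr = np.linalg.norm(r)
    return g*np.array([-y, x, 0.0])/(rr*(rr + z))

def a_south(r, g):
    """South-pole gauge; Dirac string along the positive z-axis."""
    x, y, z = r
    rr = np.linalg.norm(r)
    return g*np.array([y, -x, 0.0])/(rr*(rr - z))

def peierls_phase(a_func, ri, rj, g, npts=2000):
    """nu_ij = int_{ri}^{rj} a . dl along the straight bond (midpoint rule)."""
    dl = (rj - ri)/npts
    return sum(np.dot(a_func(ri + (s + 0.5)*dl, g), dl) for s in range(npts))

g = 0.5                                     # minimal monopole, m = 2eg/hbar = 1
ri, rj = np.array([1.0, 0.2, 0.5]), np.array([0.8, 0.6, 0.4])
dphi = np.arctan2(rj[1], rj[0]) - np.arctan2(ri[1], ri[0])
gauge_diff = (peierls_phase(a_north, ri, rj, g)
              - peierls_phase(a_south, ri, rj, g))
```

For a bond whose azimuthal angle winds by an extra $2\pi$, the phase difference instead jumps by $2g\cdot 2\pi = 2\pi m$, which is invisible in the phase factors $e^{i\nu_{ij}}$ precisely because of the Dirac quantization condition.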
\begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.55]{Figures/OBCStateVsEnergyTIM.pdf} \label{fig:2a}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/OBCStateVsEnergyMiddlewMp.pdf} \label{fig:2b}} \subfigure[]{ \includegraphics[scale=0.45]{Figures/Monopole_Bound_State_Loc.pdf} \label{fig:2c}} \subfigure[]{ \includegraphics[scale=0.45]{Figures/Surface_Bound_State_Loc.pdf} \label{fig:2d}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/OBCStateVsEnergyTIPhase.pdf} \label{fig:2e}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/OBCStateVsEnergyMiddle.pdf} \label{fig:2f}} \caption{Fermion spectrum for the lattice model of first-order topological insulators [see Eq.~\ref{eq:tbmodel} ] in the presence of unit monopole $m=+1$. We have used band parameters $t=t^\prime$, and $t>0$. Only the fifty low-lying states near zero energy are shown. (a) Energy vs. number of states plot for the TI$_1$ phase, with tuning parameter $\Delta=2$, perturbed by a small, pseudo-scalar mass, $M^\prime=+0.05 t$. (b) Energy vs. number of states plot for the TI$_2$ phase, with $\Delta=0$, $M^\prime=+0.05 t$. The degeneracy of the states closest to $E=0$, with energy $E \approx \pm M^\prime$, is $|\mathcal{N}_3|$. (c)-(d) For the TI$_1$ phase, these states are localized on the monopole and the boundary of the sample, respectively. The maximum probability density of the boundary-localized state around the twelve hinges of the cubic sample is not related to higher-order topology. (e)-(f) When $M^\prime=0$, these bound states transform into near-zero-energy modes, with an exponentially small hybridization gap due to finite-size effects.
} \label{fig:monopolespectra} \end{figure*} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.45]{Figures/AxionAngleFan.png} \label{fig:3a}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/Lattice_deltaQvsR_Mp_Update.pdf} \label{fig:3b}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/Lattice_deltaQvsR_Mp_Limit.pdf} \label{fig:3c}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/InducedChargeN3vsm} \label{fig:3d}} \caption{Witten effect for half-filled systems, in the presence of unit monopole $m=+1$, and a pseudo-scalar mass $M^\prime>0$ (in units of hopping parameter $t$). Numerical calculations of induced charge have been performed for system-size $(L/a)^3=30^3$. (a) The band theory results for $\theta$ of magneto-electric insulators, in the thermodynamic limit ($L \to \infty$) [see Eq.~\ref{MEhalffilled} ]. The dotted line tracks $\theta=-\pi/2$, and it terminates at the quantum critical point ($\Delta=3$, $M^\prime=0$) . (b) The induced electric charge $\delta Q(\tilde{R})$ (in units of $-e$), enclosed by a Gaussian sphere of radius $R$, which is centered at the monopole [see Eq.~\ref{eqdeltaq} ], and $\tilde{R}=R/a$. We have used $t>0$, $t^\prime=t$ and $\Delta=2$. When $M^\prime$ is bigger than the hybridization scale $E_h$ of bound states, the maximum values of $\delta Q$ agree with the predictions of band theory (dashed lines). For $M^\prime=0$, the ground state is two-fold degenerate, and $\delta Q=0$ implies the absence of Witten effect. (c) The induced electric charge (in units of $-e$) for $M^\prime =+0.05 t$, and different values of $2<\Delta<4$. Far from the quantum critical point, $\xi \approx a |3-\Delta|^{-1}$ is of the order of lattice spacing, and $M=t|(3-\Delta)|>M^\prime>E_h$. In this adiabatic regime, the maximum value of $\delta Q/(-e)$ approaches $\mathcal{N}_3/2 =-1/2$. For larger system-size, $M^\prime$ can be gradually reduced to zero. 
(d) Witten effect for the TI$_2$ phase with unit monopole $m=+1$, and the TI$_1$ phase with double monopole $m=+2$. The maximum induced charge follows Eq.~\ref{gen1}. Notice that the maximum value of the induced charge is found for $4 a \leq R^\ast \leq 6 a$.} \label{fig:Witten_Num1} \end{figure*} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.4]{Figures/Witten_Effect_TI_Localization_Quadrant_1.pdf} \label{fig:5a}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/Lattice_deltaQvsR.pdf} \label{fig:5b}} \subfigure[]{ \includegraphics[scale=0.2]{Figures/MaxQvsDeltaUpdated.pdf} \label{fig:5c}} \caption{Spin-charge separation for the $\mathcal{P}$ and $\mathcal{T}$ symmetric first-order topological insulator TI$_1$, without a pseudo-scalar training field, in the presence of unit monopole $m=+ 1$, under fully open boundary conditions. The system is doped with one electron [$N_e=+1$ case in Fig.~\ref{fig:presentation1} ]. All numerical calculations have been performed for a system-size $(L/a)^3=30^3$. (a) Intensity plot of the induced charge density around the monopole for the TI$_1$ phase, with $\Delta=2.5$. For clarity, only the first octant is shown. (b) The total induced charge (in units of $-e$), localized within a Gaussian sphere of radius $R$, which is centered at the monopole [described by Eq.~\ref{eqdeltaq} with $M^\prime=0$], for different values of $\Delta$, and $\tilde{R}=R/a$. The data clearly identify the maximum induced charge $-e/2$ ($-e/4$) for the TI$_1$ phase (quantum critical point). (c) The finite-size scaling of the maximum induced charge (in units of $-e$). In the vicinity of the critical point and inside the TI$_1$ phase, the data (blue circles) can be well approximated by $f(L,\Delta)= \frac{1}{2} \left \{1+ \exp \left[(\Delta -3)\frac{L}{2a}\right]\right \}^{-1}$ (solid, red line).
Notice that the maximum values of the induced charge for the topological insulator and the quantum critical point are found for $R^\ast \approx 6 a$.} \label{fig:Witten_Num2} \end{figure*} \subsection{Witten effect at half-filling} \label{WEHF}To determine the induced charge on the monopole, we closely follow the procedure of Ref.~\onlinecite{rosenberg2010witten}. The exact diagonalization of the Hamiltonian is performed with and without the monopole. Using normalized eigenstates $\psi_{n, m}(\bs{r}_i)$ and energy eigenvalues $\epsilon_{n,m}$, we compute the charge density \begin{equation} \rho_{m}(\bs{r}_i,L,\xi,M^\prime)=-e \sum_n |\psi_{n, m}(\bs{r}_i)|^2 \Theta(E_{F,m}-\epsilon_{n,m}), \end{equation} in the presence of the monopole, where $E_{F,m}$ is the Fermi energy. Similarly, we define the charge density \begin{equation} \rho_{0}(\bs{r}_i,L,\xi,M^\prime)= -e \sum_n |\psi_{n, 0}(\bs{r}_i)|^2 \Theta(E_{F,0}-\epsilon_{n,0}), \end{equation} in the absence of the monopole. The Fermi energies are obtained by keeping the total number of particles fixed, and in general $E_{F,m} \neq E_{F,0}$. As we are implementing the particle-hole symmetric regulator $M^\prime \to 0^+$, the half-filled condition is enforced by $E_{F,m}=E_{F,0}=0$. The induced charge density on the monopole is found from \begin{equation} \delta \rho(\bs{r}_i,L,\xi,M^\prime)= \rho_m(\bs{r}_i,L,\xi,M^\prime) - \rho_0(\bs{r}_i,L,\xi,M^\prime), \end{equation} and the total induced charge inside a spherical Gaussian surface of radius $R$, centered around the monopole, is obtained from \begin{equation}\label{eqdeltaq} \delta Q(R,L,\xi,M^\prime)=\sum_{|\bs{r}_{i}|<R} \delta \rho(\bs{r}_{i},L,\xi,M^\prime). \end{equation} All lengths will be measured in units of the lattice spacing. To benchmark our results for the non-degenerate half-filled state, we first consider $\mathcal{P}$ and $\mathcal{T}$ breaking ME insulators, with $M^\prime > 0 $, and $2<\Delta<4$.
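The charge-difference procedure above can be sketched in a few lines. Since the particle number is held fixed, the site-summed induced charge vanishes identically, which provides a useful sanity check on any implementation. The sketch below uses random Hermitian matrices as stand-ins for the actual monopole and monopole-free tight-binding Hamiltonians:

```python
import numpy as np

def charge_density(H, n_occ):
    """rho(i) = -e sum_n |psi_n(i)|^2 Theta(E_F - eps_n), with e = 1 and the
    Fermi level fixed by filling the n_occ lowest eigenstates."""
    _, psi = np.linalg.eigh(H)               # columns: normalized eigenstates
    return -(np.abs(psi[:, :n_occ])**2).sum(axis=1)

def induced_charge(H_m, H_0, n_occ):
    """delta rho(i) = rho_m(i) - rho_0(i) at fixed particle number."""
    return charge_density(H_m, n_occ) - charge_density(H_0, n_occ)

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 40)) + 1j*rng.standard_normal((40, 40))
H0 = (A + A.conj().T)/2                      # stand-in for H without monopole
B = rng.standard_normal((40, 40)) + 1j*rng.standard_normal((40, 40))
Hm = H0 + 0.1*(B + B.conj().T)/2             # stand-in for H with monopole
drho = induced_charge(Hm, H0, n_occ=20)      # "half filling"
```

In the actual calculation, $\delta Q(R)$ then follows by restricting the sum over sites to $|\bs{r}_i|<R$.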
The regularized ME coefficient \begin{equation} \label{MEhalffilled} \theta_+(M^\prime)=2\pi [\mathcal{CS}_+(M^\prime)-\mathcal{CS}(M^\prime=+\infty)] \end{equation} from band theory calculations is shown in Fig.~\ref{fig:3a}. Notice that the TQCP at $\Delta=3$, $M^\prime=0$ is a multi-critical point, where the ME insulator with $\theta_+(M^\prime \neq 0) =-\pi/2$, the TI$_1$ phase with $\theta_+(M^\prime \to 0^+)=-\pi$, and the NI with $\theta_+(M^\prime \to 0^+)=0$ meet. The results for $\delta Q(\tilde{R})$ for $\Delta=2$ and various $M^\prime$ are shown in Fig.~\ref{fig:3b}. Since $\Delta=2$ is far from the TQCP, $\xi_0 \sim a$, and finite-size effects are very weak when $L/a=30$. Consequently, the maximum values of the induced electric charge $\delta Q_{max} /(-e)$ agree with $\theta_+(M^\prime)/(2\pi)$, obtained in the thermodynamic limit $L \to \infty$. \emph{When $M^\prime=0$, the induced charge for the two-fold-degenerate, half-filled ground state vanishes, for any $\Delta$}, implying a precise cancellation between the contributions from the occupied NZMs and the Dirac sea. In Fig.~\ref{fig:3c}, we show the dependence of $\delta Q(\tilde{R})$ on $\Delta$ for a fixed $M^\prime=+0.05 t$. Deep inside the TI$_1$ phase, we find $\delta Q_{max}/(-e) \to \mathcal{N}_3/2=-1/2$. When the TQCP is approached, $\delta Q_{max}$ can significantly deviate from $-e \mathcal{N}_3/2$ due to strong finite-size corrections. By varying the winding number $\mathcal{N}_3$ and the monopole strength $m$, we have confirmed that the maximum induced charge for thermodynamically large systems is given by [see Fig.~\ref{fig:3d} ] \begin{equation}\label{gen1} \delta Q_{max, TI}= - e \; m \; \mathcal{CS}_+ = -\frac{e}{2} \; m \; \mathcal{N}_3. \end{equation} As the overall half-filled system is charge-neutral, a compensating charge \begin{equation} \delta Q_s = -\delta Q_{max, TI}=+\frac{e}{2} \; m \; \mathcal{N}_3 \end{equation} appears around the sample boundary.
The localization pattern of the surface-localized NZMs in Fig.~\ref{fig:2d} implies that the highest intensity of the surface-bound charge density under OBC occurs around the $12$ hinges. By changing boundary conditions, the pattern of the surface-bound charge density can be altered. But the monopole-bound charge density and $\delta Q_{max, TI}$ will remain unaffected. We have also found that the TQCP between two distinct insulators leads to \begin{equation}\label{gen2} \delta Q_{max, TQCP}=- \frac{e}{4} m \; (\mathcal{N}_{3,1}+ \mathcal{N}_{3,2}). \end{equation} This is a three-dimensional analog of the law of corresponding states for quantum Hall plateau transitions~\cite{Kivelson1}. Usually, the quantum critical phase does not support normalizable surface-states, and the compensating charge $-\delta Q_{max, TQCP}$ stays distributed over the entire sample. \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=0.55]{Figures/DopingMiddlePhase.pdf} \label{fig:6a}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/TIPhase_2Charge.pdf} \label{fig:6b}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/Critical_Charge.pdf} \label{fig:6c}} \caption{Spin-charge separation of first-order topological insulators for different winding numbers and monopole strengths, in the absence of a pseudo-scalar mass. All numerical calculations have been performed for a system-size $(L/a)^3=30^3$. As outlined in Fig.~\ref{fig:presentation2}, (a) the TI$_2$ phase with winding number $\mathcal{N}_3=+2$, and (b) the TI$_1$ phase with winding number $\mathcal{N}_3=-1$ show identical structures of spin-charge separation, in the presence of unit monopole $m=+1$, and double monopole $m=+2$, respectively. We are accessing all charged $SU(4)$ multiplets by doping $-2 \leq N_e \leq +2$ electrons. The maximum induced charge oscillates between half-integer and integer values, following the general rule of Eq.~\ref{gen3}.
(c) For the topological quantum critical point at $\Delta=+3$, varying the monopole strength between $m=+1,+2,+3,+4$, and adding $N_{e}=m$ electrons, we observe four distinct values of the maximum induced charge that follow Eq.~\ref{scTQCP}. } \label{fig:Witten_Num_22} \end{figure*} \subsection{Doped system and spin-charge separation}\label{WED} In the absence of a pseudo-scalar training field, we can study spin-charge separation for the TI$_1$ and TI$_2$ phases. We first consider the simplest case of the TI$_1$ phase in the presence of unit monopoles $m= \pm1$. The results for one doped electron are shown in Fig.~\ref{fig:Witten_Num2}, which follows the schematic of Fig.~\ref{fig:presentation1} and the analysis of the continuum theory. At the TQCP, $\delta Q_{max}=-e/4$ ($+e/4$) for single electron (hole) doping again corroborates our prediction $|\theta_c|=\pi/2$. In Fig.~\ref{fig:5c}, we elucidate the quantum critical scaling behavior of the induced charge by plotting $\delta Q_{max}/(-e)$ as a function of $\Delta$. We have also performed such calculations for $|\mathcal{N}_3|>1$ and $|m|>1$. The spin-charge separation is controlled by $SU(2|m \mathcal{N}_3|)$-multiplets. As we dope $N_e$ electrons, the maximum induced charge on the monopoles will oscillate between half-integer and integer multiples of $e/2$, according to the formula \begin{eqnarray}\label{gen3} \delta Q_{max}=-\frac{e}{2} N_e, \; \text{with} \; -|m \mathcal{N}_3| \leq N_e \leq +|m \mathcal{N}_3|. \nn \\ \end{eqnarray} The rest of the doped charge, $\delta Q_s=-\frac{e}{2} N_e$, will be localized around the boundary. In Figs.~\ref{fig:6a} and \ref{fig:6b}, we show that (i) $\mathcal{N}_3=-1$, $m=+2$ and (ii) $\mathcal{N}_3=+2$, $m=+1$ support identical structures of spin-charge separation, as outlined in Fig.~\ref{fig:presentation2}.
Furthermore, by doping with $N_e=\pm |m|$ electrons at the TQCP, we have observed that the monopole supports the maximum induced charge \begin{equation}\label{scTQCP} \delta Q_{max,TQCP}=- \frac{e}{4} N_e \end{equation} for $m= 1, 2, 3, 4$ [see Fig.~\ref{fig:6c} ]. For the quantum critical system, the rest of the doped charge $-(3/4) e N_e$ stays distributed over the entire sample. Next, we address the stability of the third homotopy class when the number of discrete symmetries is reduced. \section{Chiral higher-order topological insulators}\label{SOTI} In this regard, we consider 3D, second-order TIs~\cite{Schindlereaat0346}. A four-band model for such systems is given by \begin{equation}\label{eq:CH} H_2(\mathbf{k})=H_1(\bs{k})+t^{\prime \prime}(\cos k_{1}-\cos k_{2})\Gamma_{4}=\sum_{j=1}^{5} N_j(\bs{k}) \Gamma_j, \end{equation} which describes $\mathcal{P}$-, $\mathcal{T}$-, and cubic-symmetry-breaking, but $\mathcal{C}$-preserving, ME insulators. While the $C^z_4$ symmetry is broken for generic values of $k_z$, it remains unbroken for the high-symmetry planes, with $k_z=0, \pi$. The generator of $C^z_4$-symmetry for these planes is $\Gamma_{12}$. On these planes, $H_2$ reduces to the two-dimensional Benalcazar-Bernevig-Hughes model of second-order insulators~\cite{Benalcazar61,BBHPrb}. In Ref.~\onlinecite{Schindlereaat0346}, the perturbed TI$_1$ phase ($1<|\Delta|<3$) was shown to possess the quantized ME coefficient $\theta=\pi$.
\begin{table}[t] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|} \hline \begin{tabular}{c}Tuning \\ parameter\end{tabular} & \begin{tabular}{c} FOTI ($t^{\prime \prime}=0$)\\ $(\mathfrak{C}^{\pi}_{12}, \mathfrak{C}^0_{12},n_3)$ \end{tabular} & \begin{tabular}{c} ch-HOTI ($t^{\prime \prime} \neq 0$)\\ $(\mathfrak{C}^{\pi}_{12}, \mathfrak{C}^0_{12},n_3)$ \end{tabular} \\ \hline $1<\Delta<3$ & $(-1,0,-1)$ & $(-1,0,-1)$ \\ \hline $-1<\Delta<1$ & $(+1,-1,+2)$ & $(-1,-1,0)$ \\ \hline $-3<\Delta<-1$ & $(0,+1,-1)$ & $(0,+1,-1)$\\ \hline \end{tabular} \caption{Classification of tunneling configurations of the $SU(2)$ Berry connection for the first order topological insulator (FOTI) and the second order, chiral topological insulator (ch-HOTI). Here, $\mathfrak{C}^\pi_{12}$ and $\mathfrak{C}^0_{12}$ are the relative Chern numbers of the $k_z=\pi$ and $k_z=0$ planes, respectively [see Eq.~\ref{relChern} ]. The third homotopy class is identified by $n_3=\mathfrak{C}^\pi_{12}-\mathfrak{C}^0_{12}$, which leads to the regularized Chern-Simons coefficient $\mathcal{CS}_+=n_3/2$.} \label{fig:CHTable} \end{table} \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=0.2]{Figures/Chiral_HOTI_Single_Monopole.pdf} \label{fig:CHState1}} \subfigure[]{ \includegraphics[scale=0.2]{Figures/ChiralHOTIStatevsEnergyMiddlePhase.pdf} \label{fig:CHState2}} \subfigure[]{ \includegraphics[scale=0.2]{Figures/Chiral_Hoti_Mixed.pdf} \label{fig:CHLoc}} \subfigure[]{ \includegraphics[scale=0.53]{Figures/InducedChargeChiralHOTI.pdf} \label{fig:CHWitten}} \caption{Thought experiments on chiral higher-order topological insulators [see Eq.~\ref{eq:CH} ] with unit monopole $m=+1$. Fifty low-lying states are shown for a system-size $(L/a)^3=10^3$. (a) Under fully open boundary conditions, the perturbed TI$_1$ phase (with tuning parameter $\Delta=2$) supports two near-zero-modes, as the three-dimensional winding number is $n_3=-1$ (see Table~\ref{fig:CHTable} ).
While one mode is localized on the monopole, the other one is localized on the boundary. (b) The perturbed TI$_2$ state (with tuning parameter $\Delta=0$) does not support any near-zero-modes, as this phase has $n_3=0$ (see Table~\ref{fig:CHTable} ). (c) By implementing mixed boundary conditions, the surface-localized fermion zero-mode can be eliminated. As the total number of states is an even integer, for a half-filled system, the monopole-localized zero-mode remains unoccupied. (d) Under open boundary conditions, the Witten effect can only be found with a pseudo-scalar training field ($M^\prime \to 0^+$). In contrast to this, the Witten effect can be observed under mixed boundary conditions, without any pseudo-scalar training field ($M^\prime=0$). Both systems show identical maximum induced charge on the monopole $-\frac{e}{2} n_3$. By changing $t \to -t$ in the tight-binding Hamiltonian, while holding all other parameters fixed, we can change the sign of the winding number $n_3$ and the induced charge. } \label{fig:Witten_Num_CHHOTI} \end{figure*} \par Witten's original result $\delta Q= -\frac{m n e }{2}$ was obtained for the isotropic $\mathcal{P}$ and $\mathcal{T}$ symmetric vacuum~\cite{witten1979dyons} by combining the constraints of $\mathcal{CP}$ symmetry and the Schwinger-Zwanziger quantization condition for electric and magnetic charges of dyons~\cite{Schwinger1,Schwinger2,Zwanziger1,Zwanziger2}. When the $\mathcal{CP}$-symmetry is broken, the induced electric charge can be arbitrary. The topological quantization of $\theta$ for the chiral HOTI was shown to be a consequence of the combined $C^z_4 \mathcal{T}$ symmetry in Ref.~\onlinecite{Schindlereaat0346}. Furthermore, the perturbed TI$_2$ ($|\Delta|<1$) and NI ($|\Delta|>3$) states were identified as trivial insulators. A comprehensive topological analysis of the entire phase diagram can be performed by following Refs.~\onlinecite{tyner2021symmetry,sur2022mixed}.
The Bloch Hamiltonian describes a map from $T^3$ to the coset space $SO(5)/SO(4)=USp(4)/[SU(2) \times SU(2)]$. The main idea is to consider (i) the first homotopy class of high-symmetry lines, and (ii) the second homotopy class or quantized Berry flux of high-symmetry planes. Such axes and planes correspond to topological defects of the map. (iii) Finally, the third homotopy class of the Bloch Hamiltonian (or texture) can be identified from the tunneling of the quantized Berry flux for high-symmetry planes. A convenient form of the intra-band $SU(2) \times SU(2)$ Berry connection is given by \begin{eqnarray}\label{Berryconnection} &&A_{i}(\bs{k})=\sum_{a<b} \; A_{i,ab} \Gamma_{ab} \nn \\ &=&\sum_{a<b} \; \frac{[N_a (\bs{k})\partial_i N_b (\bs{k})- N_b(\bs{k})\partial_i N_a(\bs{k})]\Gamma_{ab}}{2[|\bs{N}(\bs{k})|+ N_5(\bs{k})]}, \end{eqnarray} where $\partial_i = \frac{\partial}{\partial k_i}$, and the indices $a$ and $b$ take values in $\{1,2,3,4\}$. By projecting with $P_\pm=\frac{1}{2}(\mathbb{1} \pm \Gamma_5)$, we obtain the $SU(2)$ connections of the conduction ($+$) and valence ($-$) bands, respectively. Following Ref.~\cite{tyner2020topology}, we can calculate the relative Chern numbers, \begin{equation}\label{relChern} \mathfrak{C}_{ab}(k_z) = \frac{1}{2\pi} \; \int dk_x dk_y \; (\partial_x A_{y,ab} -\partial_y A_{x,ab}), \end{equation} measuring the flux of various non-Abelian color components through the $k_x-k_y$ planes, as a function of $k_z$. Only the $\Gamma_{12}$ component supports quantized flux $\pm 2\pi$ for the $C^z_4$-symmetric planes, with $k_z=0, \pi$. The tunneling configuration is classified by \begin{equation}\label{tunnelingrelChern} n_3=\mathfrak{C}_{12}(k_z=\pi) - \mathfrak{C}_{12}(k_z=0), \end{equation} leading to $\mathcal{CS}_+=n_3/2$. The results are summarized in Table~\ref{fig:CHTable}. Notice that the third homotopy class of the perturbed TI$_1$ (TI$_2$) phase remains unaffected (becomes trivial).
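For the FOTI column of Table~\ref{fig:CHTable} ($t^{\prime\prime}=0$), the quantized Berry flux of a $C^z_4$-symmetric plane can be evaluated without the non-Abelian machinery: at $k_z=0,\pi$ the component $N_3=\sin k_z$ vanishes, so the plane Hamiltonian reduces to a three-component field $\bs{d}=(\sin k_x, \sin k_y, u+\cos k_x+\cos k_y)$ with $u=\Delta\pm1$, whose skyrmion number gives the plane's second homotopy class. A sketch (assuming $t=t^\prime=1$; overall signs are convention-dependent, so only relative values are meaningful):

```python
import numpy as np

def skyrmion_number(u, n=48):
    """C = (1/4pi) * integral of dhat . (dx dhat x dy dhat) over the plane,
    for d = (sin kx, sin ky, u + cos kx + cos ky)."""
    k = 2*np.pi*np.arange(n)/n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    zero = np.zeros_like(kx)
    d = np.stack([np.sin(kx), np.sin(ky), u + np.cos(kx) + np.cos(ky)])
    ddx = np.stack([np.cos(kx), zero, -np.sin(kx)])   # analytic d(d)/dkx
    ddy = np.stack([zero, np.cos(ky), -np.sin(ky)])   # analytic d(d)/dky
    norm = np.sqrt((d**2).sum(axis=0))
    nh = d/norm
    # derivative of the unit vector: (dd - dhat (dhat . dd))/|d|
    nx = (ddx - nh*(nh*ddx).sum(axis=0))/norm
    ny = (ddy - nh*(nh*ddy).sum(axis=0))/norm
    dens = (nh*np.cross(nx, ny, axis=0)).sum(axis=0)
    return dens.sum()*(2*np.pi/n)**2/(4*np.pi)

def plane_fluxes(delta):
    """(C_pi, C_0): skyrmion numbers of the kz = pi and kz = 0 planes."""
    return skyrmion_number(delta - 1.0), skyrmion_number(delta + 1.0)
```

The tunneling index then follows as the difference of the two plane invariants, mirroring Eq.~\ref{tunnelingrelChern}.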
\subsection{Thought experiments} The results of Table~\ref{fig:CHTable} can be verified by thought experiments with a unit magnetic monopole $m=+1$ [see Fig.~\ref{fig:Witten_Num_CHHOTI} ]. The results under OBC for the perturbed TI$_1$ phase are similar to FOTIs, as the $[001]$-surfaces can support gapless states. Only in the presence of a pseudo-scalar training field can we observe the WE. As the chiral HOTI exhibits a gapped spectrum for the $[100]$ and $[010]$ surfaces, we can eliminate the surface-bound zero-mode with MBC [see Fig.~\ref{fig:CHLoc} ]. We implemented the following MBC: (i) OBC along the $[100]$-direction, with the Dirac string oriented along the $x$-axis, and (ii) PBC along the $y$ and $z$-axes. As the monopole-bound FZM becomes non-degenerate, the WE can be observed without any pseudo-scalar training field. As the monopole unambiguously detects the bulk winding number, $\delta Q_{max}=-\frac{e n_3}{2}$ remains unchanged under different boundary conditions. Only the details of the compensating charge density and the bulk-boundary correspondence are affected by different types of boundary conditions. \begin{figure} \subfigure[]{ \includegraphics[scale=0.65]{Figures/FOTIChargeFluxPBC.png} \label{fig:FDOS}} \subfigure[]{ \includegraphics[scale=0.65]{Figures/ChiralHOTIChargeFluxPBC.png} \label{fig:FStates}} \caption{Diagnosis of the second homotopy class of $xy$ planes with a magnetic $\pi$-flux tube. (a) Local density of states on the flux tube for the first-order TI$_2$ phase, with $\Delta=t^{\prime \prime}=0$, as a function of $k_{z}$. Both $k_z=0, \pi$ planes support two-fold-degenerate zero-energy bound states. As a function of $k_{z}$, they evolve as finite-energy mid-gap states, and merge with the bulk states for $k_{z} \approx \pm \pi/2$. This is consistent with a non-trivial stacking of two-dimensional, first-order insulators (see Table~\ref{fig:CHTable}).
(b) For the perturbed TI$_2$ phase, with $\Delta=0$, $t^{\prime \prime}=t^\prime=t$, the mid-gap states remain isolated, which is consistent with the trivial stacking of two-dimensional, second-order topological insulators, and $n_3=0$ (see Table~\ref{fig:CHTable}). However, the merger of mid-gap states with the continuum is not an essential criterion for the existence of a non-trivial third homotopy class, which will be demonstrated with the example of octupolar topological insulators. Only monopoles can unambiguously detect the third homotopy class.} \label{fig:FxInsert} \end{figure} \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale=0.55]{Figures/OBCStateVsEnergyZeemanALLTI1.pdf} \label{fig:8a}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/InducedChargeAllZeemanTI1.pdf} \label{fig:8b}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/T_Broken_Charge_Flux.pdf} \label{fig:8c}} \caption{Fermion spectrum and Witten effect for $C^z_4$-symmetric, magnetic topological insulators [see Eq.~\ref{eq:MagneticTI} ] that break all fundamental discrete symmetries. The calculations are performed for a system-size $(L/a)^3=(15)^3$, as it produces a numerical accuracy of $10^{-4}$ for the quantized value of the maximum induced electric charge. We are showing the data for $m_{12}=m_{34}=m_{45}=0.1 t$, $t=t^\prime$, and $\Delta=2$. (a) Fifty lowest energy states, in the presence of unit monopole $m=+1$. Due to the lack of particle-hole symmetry, the bound states move away from $E=0$. For magnetic topological insulators, the monopole- and surface-localized modes are not energetically degenerate. Thus, the half-filled system can exhibit the Witten effect without any pseudo-scalar training field. (b) The maximum induced electric charge (in units of $-e$) for the half-filled system is given by $-1/2$, which identifies $Tr[\theta_{ij}]=-3\pi$ [see Eq.~\ref{tracetheta} ].
(c) In the presence of a magnetic flux tube along the $z$-axis, only the $k_z=\pi$ plane shows two-fold-degenerate fermion zero-modes, when $\phi=\phi_0/2$. This identifies the non-trivial (trivial) second homotopy class of the $k_z=\pi$ ($0$) plane, arising from $C^z_4$-symmetry protected Berry flux. The degeneracy of the zero-modes and their subsequent splitting for $\phi \neq \phi_0/2$ show that the net Berry flux of the two occupied valence bands for the $k_z=\pi$ plane is zero (an example of a 2D, non-quantum-Hall insulator). Therefore, the Berry flux is carried by a traceless generator other than $\Gamma_5$. The non-trivial third homotopy class arises from the tunneling configuration of the $C^z_4$-symmetry protected Berry flux. } \label{fig:MagneticTI} \end{figure*} Table~\ref{fig:CHTable} also implies that the perturbed TI$_2$ is not a topologically trivial phase. It is a HOTI, which is distinguished from the trivial phase by the non-trivial second homotopy class of high-symmetry planes. The vanishing of $n_3$ indicates that the perturbed TI$_2$ can be considered as 2D, second-order TIs, trivially stacked along the $z$-direction. To confirm this assertion, we have probed the second homotopy class of the $C^z_4$-symmetric planes with a magnetic $\pi$-flux tube~\cite{SpinChargeSCZ,VishwanathPiFlux,Juricic2012,mesaros2013zero}. After performing a Fourier transformation of $H_2$ along the $x$ and $y$ directions, we insert a magnetic $\pi$-flux tube at $(x,y)=(0,0)$, oriented along the $z$ direction. Thus, the hopping matrix elements are modified according to $t_{ij} \to t_{ij} e^{i\nu_{ij}}$, where $\nu_{ij}=(e/\hbar)\int_{\bs{r}_i}^{\bs{r}_j}\mathbf{a}_t\cdot d\mathbf{l}$, and the vector potential \begin{equation} \bs{a}_t(\bs{r}_i)=\frac{\hbar}{2e} \frac{-y_i \hat{x} + x_i \hat{y}}{x^2_i + y^2_i}. \end{equation} As $k_z$ is a good quantum number, we have simulated a system of transverse-size $L_x/a=L_y/a=20$, under PBC along all three principal axes.
The local density of states on the flux tube for the first-order TI$_2$ phase and the perturbed TI$_2$ phase are shown in Fig.~\ref{fig:FxInsert}. The flux tube binds two-fold-degenerate zero-modes (Kramers pair) at the $k_{z}=0, \; \pi$ planes, identifying these planes as generalized quantum spin Hall insulators. The results on the chiral HOTI do not imply that ME HOTIs with $C^z_4 \mathcal{T}$-symmetry can only possess $n_3 = 0, \pm 1$. Such results are specific to the model of Eq.~\ref{eq:CH}. Instead of the $B_{1g}$ perturbation $t^{\prime \prime} [\cos (k_1) -\cos (k_2)]\Gamma_4$, we can consider (i) $t^{\prime \prime} [\cos (2k_1) -\cos (2k_2)]\Gamma_4$, or (ii) the $B_{2g}$-type perturbation $t^{\prime \prime} \sin k_{1} \sin k_{2} \Gamma_{4}$ to arrive at different models of ME HOTIs. By employing Eqs.~\ref{relChern}-\ref{tunnelingrelChern} and thought experiments for both models, we can show that the non-trivial third homotopy class of the perturbed TI$_2$ phase is described by $n_3=\mathcal{N}_3=+2$. Therefore, $C^z_4 \mathcal{T}$-symmetric ME HOTIs can support $\theta = n \pi$, with $n \in \mathbb{Z}$. \section{Magnetic topological insulators} \label{AppA} Next, we discuss the third homotopy class and quantized ME response of magnetic TIs that require minimal protection of global discrete symmetries. In this regard, we consider the following four-band model \begin{eqnarray} H_{mag}=H_1(\bs{k})+m_{12}\Gamma_{12} + m_{34} \Gamma_{34} + m_{45} \Gamma_{45}. \label{eq:MagneticTI} \end{eqnarray} The fermion bilinear $\Psi^\dagger \Gamma_{12} \Psi$ is odd under $\mathcal{T}$ and even under $\mathcal{P}$ and $\mathcal{C}$. The bilinear $\Psi^\dagger \Gamma_{34} \Psi$ is odd under $\mathcal{T}$ and $\mathcal{C}$, but even under $\mathcal{P}$. Finally, the bilinear $\Psi^\dagger \Gamma_{45} \Psi$ is odd under $\mathcal{P}$, but even under $\mathcal{T}$ and $\mathcal{C}$. Therefore, Eq.~\ref{eq:MagneticTI} describes a generic member of the unitary Altland-Zirnbauer class A.
Such systems break all fundamental discrete symmetries $\mathcal{P}$, $\mathcal{T}$, $\mathcal{C}$, and the combined discrete symmetries $\mathcal{PT}$, $\mathcal{CP}$, $\mathcal{CT}$, and $\mathcal{CPT}$. For the present model, only the $U(1)$ total number conservation law and the discrete four-fold rotational symmetry about the $z$ axis ($C^z_4$) remain intact. Within the Altland-Zirnbauer classification scheme, one flattens all occupied (empty) states to have identical dispersion. This flattening introduces a fictitious degeneracy among the non-degenerate valence (conduction) bands. For the 4-band model of Eq.~\ref{eq:MagneticTI}, the Altland-Zirnbauer scheme identifies the coset space to be $\frac{U(4)}{U(2) \times U(2)}$. Since the third homotopy group $\pi_3(\frac{U(4)}{U(2) \times U(2)})$ is trivial, one would conclude that the model cannot support topologically non-trivial, 3D insulators~\cite{RyuLudwigPRB,ryu2010topological}. An exception to this rule was considered in Ref.~\cite{moore2008topological}, while addressing two-band models of Hopf insulators, as $\pi_3(\frac{U(2)}{U(1) \times U(1)})=\mathbb{Z}$. Very recently, Ref.~\cite{Lapierre2021} has considered the correct coset space of $N$-band models of Hopf insulators, $\frac{U(N)}{[U(1)]^N}$, which can support a non-trivial third homotopy classification, as $\pi_3(\frac{U(N)}{[U(1)]^N})=\mathbb{Z}$. We emphasize that the correct coset space of non-degenerate bands is always described by $\frac{U(N)}{[U(1)]^N}$, and magnetic TIs (both Hopf and non-Hopf insulators) can possess a non-trivial third homotopy class. Arbitrary deformations of Lie groups or coset spaces change the homotopy groups, leading to incorrect results. This issue has been described for tight-binding models and \emph{ab initio} band structures in Ref.~\onlinecite{tyner2021symmetry}.
In general, an $N$-band model of class A will possess a diagonal matrix of topological invariants, $n_{ij} = n_i \delta_{ij}$, where $n_i$ is the Abelian Chern-Simons coefficient of the $i$-th band. Even though $H_{mag}$ is a relatively simple 4-band model, one cannot analytically obtain closed-form expressions for the band dispersions and eigenfunctions. Therefore, the bulk invariant must be computed numerically. Furthermore, numerical calculations show that the model exhibits direction-dependent gapped and gapless surface-states. The [001] ([100] or [010]) surface harbors gapped (gapless) surface-states. Hence, $H_{mag}$ describes magnetic HOTIs. Since $\mathcal{T}$-breaking operators reduce spatial rotational symmetries, the effective action for adiabatic electrodynamic response \begin{eqnarray} S_{eff}&=&\int d^3x dt \bigg [\frac{1}{2} \epsilon_{ij} E_i E_j + \frac{1}{2} \mu^{-1}_{ij} B_i B_j + \frac{\theta_{ij} e^2}{4 \pi^2 \hbar} E_i B_j \nn \\ &&+....\bigg], \end{eqnarray} will involve anisotropic permittivity ($\epsilon_{ij}$), permeability ($ \mu_{ij} $), and ME ($\theta_{ij}$) tensors; higher-order, anisotropic couplings are indicated by the ellipsis. A static magnetic monopole gives rise to the displacement field \begin{equation} D_i (\bs{r})= P_{ME,i}(\bs{r})=\frac{\theta_{ij} e m}{8 \pi^2 \hbar} \frac{\hat{r}_j }{r^2}, \end{equation} and an anisotropic volume charge density $\rho(\bs{r})= -\nabla \cdot \bs{P}_{ME}$. Since the total induced electric charge enclosed by a spherical Gaussian surface equals \begin{equation}\label{tracetheta} Q=- \frac{e m}{2 \pi} \; \frac{ \text{Tr}[\theta_{ij} ]}{3}, \end{equation} the Chern-Simons invariant for the ground state determines $\frac{ \text{Tr}[\theta_{ij} ]}{6 \pi}$. Can it be quantized for a system breaking all fundamental discrete symmetries and $C^z_4 \mathcal{T}$? To answer this question, we performed thought experiments with magnetic monopoles. The results are shown in Fig.~\ref{fig:MagneticTI}.
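As a numerical aside (not part of the original analysis), the trace formula above can be checked directly: when $\bs{P}_{ME}\cdot\hat{\bs{n}}$ is integrated over a Gaussian sphere, only the isotropic part $\text{Tr}[\theta_{ij}]/3$ survives the angular average, while traceless and off-diagonal pieces integrate to zero. A minimal sketch in units with $e=m=\hbar=1$ (the $\theta_{ij}$ entries and grid resolution are arbitrary illustrative choices):

```python
import math

def charge_from_trace(theta, e=1.0, m=1.0):
    """Q = -(e*m / 2 pi) * Tr[theta]/3, as in the trace formula."""
    tr = theta[0][0] + theta[1][1] + theta[2][2]
    return -(e * m / (2.0 * math.pi)) * (tr / 3.0)

def charge_from_gauss_law(theta, e=1.0, m=1.0, n_th=160, n_ph=160):
    """Q = -(surface integral of P_ME . n) over a sphere (hbar = 1).
    P_i = (theta_ij e m / 8 pi^2) rhat_j / r^2; on a sphere dA = r^2 dOmega,
    so the r-dependence cancels and only the angular average survives."""
    total = 0.0
    dth = math.pi / n_th
    dph = 2.0 * math.pi / n_ph
    for i in range(n_th):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph   # solid-angle weight (midpoint rule)
        st, ct = math.sin(th), math.cos(th)
        for j in range(n_ph):
            ph = (j + 0.5) * dph
            n = (st * math.cos(ph), st * math.sin(ph), ct)
            quad = sum(n[a] * theta[a][b] * n[b]
                       for a in range(3) for b in range(3))
            total += quad * w
    return -(e * m / (8.0 * math.pi ** 2)) * total

# An anisotropic theta tensor with off-diagonal entries (illustrative numbers)
theta = [[math.pi, 0.3, 0.0],
         [0.3, 2.0 * math.pi, 0.1],
         [0.0, 0.1, 3.0 * math.pi]]
```

With this sample tensor $\text{Tr}[\theta_{ij}]=6\pi$, so both routines return $Q=-1$ (in units of $em$), and the off-diagonal entries drop out of the surface integral, as expected.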
As the system lacks $\mathcal{C}$, $\mathcal{P}$, and $\mathcal{T}$ symmetries, the monopole- and surface-localized states are no longer degenerate. Furthermore, when $m_{12} \neq 0$, the Hamiltonian does not possess $\mathbb{Z}_2$ particle-hole symmetry, as $\{H_{mag},\Gamma_4\} \neq 0$. Therefore, monopole- and surface-localized bound states move away from $E=0$. For sufficiently small $m_{12}$, $m_{34}$, and $m_{45}$, such that the band gap is not closed, the half-filled system exhibits a quantized ME effect, without any pseudo-scalar training mass $M^\prime$. Thus, generic magnetic insulators with $n$-fold rotational symmetry can exhibit non-trivial third homotopy classification and a topologically quantized ME response. By tuning the band parameter $\Delta$, we also confirmed that magnetic TIs allow both odd and even integer invariants (i.e., $\mathbb{Z}$-classification). \section{Octupolar topological insulators}\label{TOTI1} Finally, we consider the third homotopy class of $\mathcal{C}$-, $\mathcal{P}$-, and $\mathcal{T}$-symmetric octupolar TIs, which exhibit gapped surface-states, and corner-localized, zero-energy, mid-gap states, under cubic-symmetry-preserving OBC~\cite{Benalcazar61}. There are different but equivalent ways of writing the $8 \times 8$ Bloch Hamiltonian of third-order TIs. Ref.~\onlinecite{Benalcazar61} considered spin-less fermions to describe octupolar TIs. But the assumption of spin-less fermions is not essential for describing the topological universality class of octupolar HOTIs.
The general form of the Bloch Hamiltonian \begin{eqnarray}\label{TOTI} && H_3(\mathbf{k})=t\sum^{3}_{j=1}\sin k_{j}\gamma_{j}+t^\prime(\Delta +\sum^{3}_{j=1}\cos k_{j})\gamma_{5} + \nn \\ && t^{\prime \prime}(\cos k_{x}-\cos k_{y})\gamma_{4}+\frac{t^{\prime \prime}}{\sqrt{3}}(2\cos k_{z}-\cos k_{x}-\cos k_{y})\gamma_{6}, \nn \\ \end{eqnarray} has been described in Ref.~\onlinecite{sur2022mixed}, where we have demonstrated that the octupolar HOTI can display gapless surface-states (four-component, 2D massless Dirac fermions) for the $[111]$ surface. On physical grounds, we can think of hybridization between two four-band FOTIs, with $E_g$-type d-wave harmonics $(\cos k_x -\cos k_y)$ and $(2\cos k_z -\cos k_x -\cos k_y)$. The specific assignment of hopping constants $t^{\prime \prime}$ and $t^{\prime \prime}/\sqrt{3}$ guarantees that the cubic symmetry is preserved. Here the $\gamma_j$'s are six mutually anti-commuting $8 \times 8$ matrices, and $\{H_3, \gamma_7 \}=0$ describes the $\mathbb{Z}_2$ particle-hole symmetry with respect to the absent matrix $\gamma_7$. The explicit form of the $\gamma_j$'s depends on the choice of basis. When $t^{\prime \prime}=0$, the decoupled model displays an $SU(2)$ flavor symmetry, generated by $\gamma_{46}$, $\gamma_{67}$, and $\gamma_{47}$, with $\gamma_{ij}=[\gamma_i, \gamma_j]/(2i)$. Such non-Abelian flavor symmetry is broken by the hybridization terms. Consequently, the topological invariants and response of the octupolar TI will require a clear understanding of the important and intertwined roles of FZMs, $\mathcal{CP}$-violation, and flavor-symmetry-breaking.
\begin{figure*} \centering \subfigure[]{ \includegraphics[scale=0.195]{Figures/Octupole_Open_Single_Monopole.pdf} \label{fig:OHState}} \subfigure[]{ \includegraphics[scale=0.45]{Figures/Octupolar_State_Loc.pdf} \label{fig:OHLoc}} \subfigure[]{ \includegraphics[scale=0.2]{Figures/Octupole_Mixed_Single_Monopole.pdf} \label{fig:OHPBC}} \subfigure[]{ \includegraphics[scale=0.45]{Figures/Oct_State_Loc_PBC.pdf} \label{fig:OHLocPBC}} \subfigure[]{ \includegraphics[scale=0.195]{Figures/Octupole_Open_Double_Monopole.pdf} \label{fig:OHStatedoubleopen}} \subfigure[]{ \includegraphics[scale=0.195]{Figures/Octupole_Mixed_Double_Monopole.pdf} \label{fig:OHStatedoublemixed}} \caption{Fermion spectrum for octupolar topological insulators [see Eq.~\ref{TOTI} ] in the presence of monopoles. (a) Fifty low-lying states near $E=0$, under cubic-symmetry-preserving, open boundary conditions, in the presence of a unit monopole $m=+1$. We used band parameters $t=t^\prime=t^{\prime \prime}$, and $\Delta=2$. (b) Localization pattern of the ten-fold degenerate fermion-zero-modes, shown in (a). Eight (two) modes are bound to the corners (monopole). (c) Low-lying states under mixed boundary conditions. Corner-states are removed by implementing periodic (open) boundary conditions along the $x,y$ ($z$) directions. (d) Two-fold degenerate fermion-zero-modes, shown in (c), are bound to the monopole. For a general monopole $m$, the number of zero-modes is given by $2|m| +8$ and $2|m|$ for open and mixed boundary conditions, respectively. This is confirmed for a double monopole $m=+2$, under (e) open boundary conditions, and (f) mixed boundary conditions.
Clearly, the three-dimensional bulk invariant is tracked by the number of zero-modes localized on the monopole, irrespective of boundary conditions.} \end{figure*} As the model only allows anti-commuting $8 \times 8$ gamma matrices, the conduction and valence bands are four-fold degenerate, and $H_3(\mathbf{k})$ describes a map from $T^3$ to the coset space $SO(6)/SO(5)$. The gauge group of the intra-band Berry connection is $SO(5)=USp(4)/\mathbb{Z}_2$, and the calculation of bulk topological invariants is complicated by the four-fold degeneracy. This model supports one topologically non-trivial phase, when $|\Delta| < 3$. Without any loss of generality, we will consider the following representation of gamma matrices: \begin{eqnarray} && \gamma_{j}=\eta_0 \otimes \Gamma_{j}, \: \text{with} \; j=1,2,3,5, \; \nn \\ && \gamma_4=\eta_1 \otimes \Gamma_4, \; \gamma_6=\eta_2 \otimes \Gamma_4, \; \gamma_7=\eta_3 \otimes \Gamma_4, \end{eqnarray} where the $2 \times 2$ identity matrix $\eta_0$ and the Pauli matrices $\eta_j$ operate on the flavor index. The $\mathcal{C}$, $\mathcal{P}$, $\mathcal{T}$, and the mirror symmetries are implemented as \begin{eqnarray} && \mathcal{C}^\dagger H^\ast_3(-\bs{k}) \mathcal{C} = - H_3(\bs{k}) , \; \mathcal{C}=i \gamma_3 \gamma_1 \gamma_4=\eta_1 \otimes \Gamma_{25}, \\ && \mathcal{P}^\dagger H_3(-\bs{k}) \mathcal{P} = H_3(\bs{k}), \; \mathcal{P}=i\gamma_{4}\gamma_{5} \gamma_{6}=\eta_3 \otimes \Gamma_5, \\ && \mathcal{T}^\dagger H^\ast_3(-\bs{k}) \mathcal{T} = H_3(\bs{k}), \; \mathcal{T}= i \gamma_2 \gamma_5 \gamma_6= \eta_2 \otimes \Gamma_{31}, \\ &&M^\dagger_j H_3(-k_j) M_j = H_3(k_j), \; M_j= \gamma_{j7}=\eta_3 \otimes \Gamma_{j4}, \nn \\ && \text{with} \; j=1,2,3. \end{eqnarray} Consequently, the fermion bilinear $\Psi^\dagger \gamma_7 \Psi$ is odd under $\mathcal{C}$, $\mathcal{P}$, and the three mirror operations, and even under $\mathcal{T}$, like a $\mathcal{CP}$-symmetric, $xyz$-type, electric octupole moment.
In contrast to this, $\Psi^\dagger (-i \gamma_4 \gamma_6 \gamma_7) \Psi $ is the $\mathcal{C}$-even, but $\mathcal{P}$-, $\mathcal{T}$-, and $M_j$-odd, pseudo-scalar operator. \subsection{Monopoles and fermion spectrum} To unambiguously probe the three-dimensional invariant and topological response, we have performed thought experiments with Dirac monopoles. Without a monopole, the octupolar HOTI supports eight, corner-localized NZMs, under \emph{cubic-symmetry-preserving} OBC. In the presence of a minimal monopole $m=+1$, two monopole-localized NZMs are introduced, raising the total number of NZMs to ten [see Fig.~\ref{fig:OHState}]. The localization pattern of the NZMs is shown in Fig.~\ref{fig:OHLoc}. Note that under OBC, the $SU(2)$-flavor-symmetric, decoupled model (with $t^{\prime \prime}=0$) would lead to four NZMs (two on the monopole and two on the surface). As the hybridization terms ($t^{\prime \prime} \neq 0$) gap out all surface- and hinge-localized states, only two monopole-localized NZMs can survive. Thus, the number of NZMs bound to the monopole agrees with the assignment of a bulk winding number $|n_{3}|=2$. To separate the physics of bulk and corner-localized states, we can impose MBC: (i) PBC in the $xy$ plane, and (ii) OBC along the $z$ direction to maintain the invisibility of the Dirac string. The resulting spectrum for a system-size $L/a=10$ is shown in Fig.~\ref{fig:OHPBC}. As MBC does not obey cubic symmetry, all corner-localized modes are eliminated. But the monopole-localized NZMs remain unaffected, as shown in Fig.~\ref{fig:OHLocPBC}. To further demonstrate the separation between the bulk topological invariant and geometry-dependent corner-states, we have considered higher monopole strengths. The results for $m=+2$ are shown in Figs.~\ref{fig:OHStatedoubleopen} and \ref{fig:OHStatedoublemixed}. While the number of corner-states ($8$ vs.
$0$) is fixed by the boundary conditions, the number of zero-modes localized on the monopole is given by $2 |m|$, irrespective of boundary conditions. Thus, the number of monopole-localized zero-modes identifies $|n_{3}|=2$ and performs the $\mathbb{N}$-classification of the third homotopy class of the octupolar HOTI. \subsection{Witten effect and spin-charge separation} To establish the $\mathbb{Z}$-classification of the quantized ME response, we have studied the WE in the presence of a pseudo-scalar training field. Therefore, the Bloch Hamiltonian is modified as \begin{equation}\label{octpseudo1} H_3 \to H^\prime_3 = H_3 + M^\prime (-i \gamma_4 \gamma_6 \gamma_7), \end{equation} and we calculate the monopole-bound electric charge $\delta Q(L,R,\xi,M^\prime)$ by taking the $M^\prime \to 0^+$ limit. The results for the $m=+1$ monopole are shown in Fig.~\ref{fig:OctIndtrainingCharge}. The half-filled state exhibits the WE, and the maximum induced electric charge saturates to $\delta Q_{max} = -\frac{e m \theta}{2\pi}$, with \begin{equation}\label{octpseudo2} \theta= n_3 \pi, \; \text{and} \; n_3=-2 \; \text{sgn}(t) \; \Theta(3-|\Delta|) , \end{equation} under OBC and MBC. This implies that the Chern-Simons coefficient of the ground state is given by $\mathcal{CS}_+ = \lim_{M^\prime \to 0^+} \mathcal{CS}(M^\prime) = \frac{n_3}{2} \neq 0$. To maintain the overall charge-neutrality of the half-filled state, a compensating bound charge $-\delta Q_{max}$ will be localized in the vicinity of the eight corners under OBC. Hence, each corner will support \begin{equation} \delta Q_{corner}=+\frac{1}{8} \; e \; m \; \text{sgn}(n_3). \end{equation} In contrast to this, under MBC, $-\delta Q_{max}$ will be shared by the two gapped $[001]$-surfaces. \begin{figure}[t] \centering \includegraphics[scale=0.57]{Figures/InducedChargeTrainingOctupole_Two_Signs.pdf} \caption{Witten effect for the half-filled, higher-order octupolar topological insulator in the presence of a minimal monopole $m=+1$.
The system has been trained with a small $\mathcal{CP}$-violating, pseudo-scalar field $M^\prime=+0.1 t$ (see Eq.~\ref{octpseudo1}). The Witten effect identifies the $\mathbb{Z}$-classification of the magneto-electric coefficient (see Eq.~\ref{octpseudo2}). Identical quantization of the induced electric charge is observed for open and mixed boundary conditions, which shows that the bulk response is insensitive to the presence of corner-states. For numerical calculations we have used $t^{\prime}=t=t^{\prime \prime}$, $\Delta=2$, and system-size $(L/a)^3=10^3$. For larger system-sizes, we can gradually reduce the strength of $M^\prime$ toward zero.} \label{fig:OctIndtrainingCharge} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.55]{Figures/InducedChargeOctupole.pdf} \caption{Spin-charge separation of the octupolar topological insulator in the presence of a minimal monopole $m=+1$, under mixed boundary conditions, without any pseudo-scalar training field. When the system is doped with one electron (hole), the monopole ($m=+1$) binds a maximum induced charge $-e$ ($+e$). The maximum induced charge at the topological quantum critical point equals $\mp e/2$ for electron and hole doping, respectively. All numerical calculations of induced charge have been performed for a system-size $(L/a)^3=30^3$ to achieve a numerical accuracy of $10^{-4}$ in the quantized values.} \label{fig:OctIndCharge} \end{figure} Without any pseudo-scalar training field, $\delta Q(L,R,\xi,M^\prime=0)=0$. By maintaining the degeneracy of the monopole-bound zero-modes, we can study spin-charge separation. Under MBC with $m=+1$, we can dope one electron (hole) to completely occupy (empty) the two NZMs. We have found $\delta Q_{max}= -e $ ($+e$) [see Fig.~\ref{fig:OctIndCharge}], which completely accounts for the charge of the doped electron (hole). At the TQCP, we obtained $\delta Q_{max,TQCP}= -e/2 $ ($+e/2$). Similarly, under OBC, we can dope five electrons (or holes), to completely occupy (empty) the ten NZMs.
We have found $\delta Q_{max} = - e$ ($+e$) to be localized on the monopole. The rest of the doped charge, $-4e$ ($+4e$), will be distributed among the eight corners, and each corner will exhibit $-e/2$ ($+e/2$). \begin{figure}[t] \centering \subfigure[]{ \includegraphics[scale=0.55]{Figures/Oct_111_Flux.pdf} \label{fig:OHStatevortex1}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/HOTI_111_StateVsEnergy.pdf} \label{fig:OHStatelDOS}} \caption{Diagnosis of the non-trivial second homotopy class of $C_3$-symmetric planes of octupolar topological insulators with a magnetic $\pi$-flux tube, oriented along the $[111]$ axis. (a) The local density of states on the flux tube as a function of $k_3=(k_x+k_y+k_z)/\sqrt{3}$ shows two pairs of two-fold-degenerate, dispersive, mid-gap states that lead to four-fold-degenerate zero-modes for the $k_3=\sqrt{3} \pi$ plane. This confirms our assertion that only the $k_3=\sqrt{3} \pi$ plane carries quantized flux of the $SO(5)$ Berry connection. (b) The number of states vs.\ energy plot for the $k_3=\sqrt{3}\pi$ plane. In contrast to this, the $k_3=0$ plane only exhibits finite-energy mid-gap states, indicating the trivial second homotopy class of this plane. The change of second homotopy class along $[111]$ corroborates the existence of a non-trivial third homotopy class, even though the mid-gap states never merge with the bulk continuum. } \label{fig:OctFxInsert} \end{figure} What is the origin of the three-dimensional winding number? In Ref.~\onlinecite{sur2022mixed}, we have shown that $n_3$ arises from $C_3$-symmetry-protected, tunneling configurations of the $SO(5)$ Berry connection along the $[111]$-direction. The quantized Berry flux for $C_3$-symmetric planes is carried by the matrix $\gamma_{111}=(\gamma_{12} + \gamma_{23} + \gamma_{31})/\sqrt{3}$, and the strength of tunneling is quantified by the difference of relative Chern numbers \begin{eqnarray} n_3=\mathfrak{C}_{111}(k_3=\sqrt{3} \pi) -\mathfrak{C}_{111}(k_3=0), \nn \\ \end{eqnarray} with $k_{3}=(k_x + k_y +k_z)/\sqrt{3}$.
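The plane-resolved Chern numbers entering the difference above can be computed with the standard lattice field-strength (Fukui-Hatsugai-Suzuki) construction. As an illustration, the sketch below applies it to a generic gapped two-band model $H(\mathbf{k})=\mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma}$; the $\mathbf{d}$-vector and parameters are illustrative only and are not the eight-band model of Eq.~\ref{TOTI}, whose degenerate bands carry a non-Abelian $SO(5)$ connection:

```python
import cmath
import math

def lower_state(dx, dy, dz):
    """Normalized lower-band eigenvector of H = d . sigma (eigenvalue -|d|),
    choosing a gauge that is regular at the given momentum."""
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dz <= 0:
        v = (d - dz, -(dx + 1j * dy))
    else:
        v = (dx - 1j * dy, -(dz + d))
    norm = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / norm, v[1] / norm)

def chern_number(d_of_k, N=48):
    """Lattice field-strength Chern number of the lower band on an
    N x N Brillouin-zone grid; gauge dependence cancels around plaquettes."""
    u = [[lower_state(*d_of_k(2 * math.pi * i / N, 2 * math.pi * j / N))
          for j in range(N)] for i in range(N)]

    def link(a, b):  # U(1) link variable <a|b>/|<a|b>|
        z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
        return z / abs(z)

    total = 0.0
    for i in range(N):
        for j in range(N):
            p = (link(u[i][j], u[(i + 1) % N][j])
                 * link(u[(i + 1) % N][j], u[(i + 1) % N][(j + 1) % N])
                 * link(u[(i + 1) % N][(j + 1) % N], u[i][(j + 1) % N])
                 * link(u[i][(j + 1) % N], u[i][j]))
            total += cmath.phase(p)
    return round(total / (2 * math.pi))

def qwz(m):
    """Illustrative two-band d-vector: d = (sin kx, sin ky, m + cos kx + cos ky)."""
    return lambda kx, ky: (math.sin(kx), math.sin(ky),
                           m + math.cos(kx) + math.cos(ky))
```

For $m=1$ the lower band carries a unit Chern number, while $m=3$ is trivial; once the grid resolves the gap, the construction returns exactly quantized integers, which is what makes differences of plane-resolved Chern numbers a robust diagnostic.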
For the current model, $\mathfrak{C}_{111}(k_3=0)=0$ and $\mathfrak{C}_{111}(k_3=\sqrt{3} \pi)=-2 \, \text{sgn}(t) \, \Theta(3-|\Delta|)$. To confirm such properties of the Berry connection, we have probed the second homotopy class of $C_3$-symmetric planes with a magnetic $\pi$-flux tube. We maintained PBC along the $[111]$, $[1,-1,0]$, and $[1,1,-2]$ directions, and the flux tube was oriented along the $[111]$ direction. The results are shown in Fig.~\ref{fig:OctFxInsert}. \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=0.21]{Figures/Zeeman_g2_States.pdf} \label{fig:7a}} \subfigure[]{ \includegraphics[scale=0.12]{Figures/Zeeman_Edge_Loc.pdf} \label{fig:7b}} \subfigure[]{ \includegraphics[scale=0.12]{Figures/Zeeman_Mon_Loc.pdf} \label{fig:7c}} \subfigure[]{ \includegraphics[scale=0.55]{Figures/Zeeman_g2_Delta.pdf} \label{fig:7d}} \caption{Effects of singular, pseudo-scalar, Zeeman coupling [see Eq.~\ref{Zeeman1}] on the Witten effect. (a) Low-lying states in the presence of a unit-strength monopole $m=+1$ for the TI$_{1}$ phase (with tuning parameter $\Delta=2$), and g-factor $\tilde{g}=2$. Due to the spatial dependence of $H_Z$, the surface-bound mode is only minimally shifted in energy. But the monopole-bound state is significantly shifted in energy, to $E \approx +0.5 t$. The localization patterns of the (b) surface-bound and (c) monopole-bound states in the $z=0$ plane. (d) Charge induced on the monopole (in units of $-e$) for a half-filled system of size $L/a=10$ as a function of $\Delta$. Thus, the observed Witten effect with pseudo-scalar Zeeman coupling confirms that the TI$_1$ phase has the bulk winding number $\mathcal{N}_3=-1$. } \label{fig:zeeman} \end{figure*} \section{Conclusions}\label{conclusions} Our comprehensive analysis of the fermion spectrum, spin-charge separation with carrier doping, and the Witten effect at half-filling clearly shows that monopoles can unambiguously detect the $\mathbb{N}$- and $\mathbb{Z}$-classification of $\theta$.
We have also shown that topological quantum critical points describe $\mathcal{C}$-, $\mathcal{P}$-, and $\mathcal{T}$-symmetric Dirac semimetals, with Dirac points located at time-reversal-invariant momenta. Such symmetry-protected gapless states are associated with half-integer winding numbers and fractional axion angles $\theta_c = \pm \pi/2$. The analysis has been generalized to reveal the non-trivial third homotopy class and quantized axion response of chiral, magnetic, and octupolar higher-order topological insulators. Therefore, we have demonstrated that both first-order and higher-order topological crystalline insulators can support a quantized magneto-electric response with $\theta=(2n+1) \pi$ and $\theta=2n \pi$. Hence, $\theta=2n \pi$ is not equivalent to $\theta=0$. In our recent work~\cite{tyner2021symmetry}, we have identified $n_i$ and $|n_i|$ for various Kramers-degenerate bands of elemental bismuth, from a tight-binding model~\cite{schindler2018higherbismuth} and from the \emph{ab initio} band structure, respectively. The analysis avoids direct calculation of the Chern-Simons coefficient and identifies tunneling configurations of the $SU(2)$ Berry connection of Kramers-degenerate bands by simultaneously computing in-plane Wilson loops for high-symmetry planes and Polyakov loops along high-symmetry axes. Both the tight-binding model and the \emph{ab initio} band structure lead to $\theta = 2 n \pi$. Specifically, the tight-binding model~\cite{schindler2018higherbismuth} of bismuth leads to $\theta=+4\pi$. Elemental bismuth is a semimetal due to an indirect band gap between the conduction and valence bands. But an adiabatic deformation of the band structure of bismuth describes $\mathbb{Z}_2$-trivial higher-order topological insulators. Our detailed analysis of first- and higher-order topological insulators shows that the actual bulk invariants of such phases can be identified by magneto-electric coefficients.
So far, magneto-electric coefficients have been measured via Faraday rotation in two non-magnetic, $\mathbb{Z}_2$ topological insulators, Bi$_2$Se$_3$~\cite{wu2016quantized} and strained HgTe~\cite{dziom2017observation}. These experiments apply uniform magnetic fields. Therefore, the magnetic topological insulators of Sec.~\ref{AppA}, with the inversion-symmetry-breaking coupling set to $m_{45}=0$, provide an appropriate description of the experimental systems. By doping indium into Bi$_2$Se$_3$, and by controlling strain in HgTe, one can drive phase transitions between a $\mathbb{Z}_2$ topological insulator and a $\mathbb{Z}_2$-trivial insulator (not necessarily a topologically trivial insulator). Future measurements of the magneto-electric coefficient across such phase transitions will help us understand the topological properties of quantum critical points and $\mathbb{Z}_2$-trivial insulators.
\section{Introduction} In the last two decades, the importance of considering the chemistry occurring on grains or with grains has become clear \citep{Vidali2013,Linnartz2015}, thanks to an ever increasing number of laboratory experiments required to explain new observational data on ices coating dust grains \citep{Boogert2015}. Simulations of ISM environments now make use of gas-grain chemical networks, whether they employ rate equations \citep{Garrod2008}, kinetic Monte Carlo \citep{Vasyunin2009}, or other stochastic methods \citep{Biham2001}. One key parameter that influences ISM chemistry is the binding energy of a particle on a dust grain. The residence time of a particle on a grain is directly related to its binding energy to the grain through $t=\tau \exp(E_{\rm D}/k_{\rm B}T_{\rm s})$, where $E_{\rm D}$ is the binding energy, $T_{\rm s}$ is the temperature of the surface, and $\tau$ is a characteristic time associated, in the simplest case, with the inverse vibrational frequency of the particle in the adsorption well. Furthermore, in models the energy barrier for thermal diffusion is estimated by an empirical relation, $R=E_{\rm d}/E_{\rm D}$, where $E_{\rm d}$ is the energy barrier to diffusion. For weakly held particles on surfaces---the ones of interest here---$R\sim0.3$ for crystalline surfaces \citep{Antczak2005,Bruch2007}, but higher (0.5--0.8) for disordered/amorphous surfaces \citep{Katz1999,Garrod2008}. Modern-day astrochemical models use a fixed binding energy for desorption; however, in reality the binding energy is a function of coverage. Recently, we found an empirical correlation between the sticking coefficient (or the probability for a particle to stick to a surface) and the binding energy for simple molecules on non-porous amorphous solid water (np-ASW) \citep{He2016a}.
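To illustrate the exponential sensitivity hidden in $t=\tau \exp(E_{\rm D}/k_{\rm B}T_{\rm s})$, the short sketch below evaluates the residence time with $E_{\rm D}$ expressed in kelvin (i.e., $E_{\rm D}/k_{\rm B}$) and $\tau^{-1}=10^{12}$~s$^{-1}$, the pre-exponential value adopted later in the paper; the 1000~K binding energy is an illustrative number of the order found for CO on water ice:

```python
import math

def residence_time(E_D, T_s, nu=1e12):
    """Residence time t = (1/nu) * exp(E_D / T_s), with the binding
    energy E_D expressed in kelvin (E_D/k_B) and nu in s^-1."""
    return math.exp(E_D / T_s) / nu

# Illustrative: a 1000 K binding energy at three surface temperatures
for T in (10.0, 15.0, 20.0):
    print(f"T_s = {T:4.0f} K  ->  t = {residence_time(1000.0, T):9.3e} s")
```

For this illustrative choice, halving the surface temperature from 20~K to 10~K lengthens the residence time by a factor $e^{50}\sim10^{21}$, from roughly a century to far beyond the age of the Universe, which is why the submonolayer binding energy controls when a species desorbs.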
To know when and how molecules desorb from ices is obviously very important, especially in hot-core scenarios, where photons from the new star warm up the surrounding ice-coated grains, causing an increase in the detectable abundance of molecules in the gas phase \citep{Ikeda2001,Bisschop2007}. To obtain the binding energy of atoms or molecules on a surface, the technique of Temperature Programmed Desorption (TPD) is often used; see \citet{Kolasinski2008} and the next section for details. Although TPD measures the desorption energy, or the energy necessary to desorb a particle, this energy is essentially the binding energy if there are no activated processes, as should be the case for simple molecules on ices. While there have been numerous laboratory experiments for determining the binding energy of molecules on ices \citep{Collings2004,Burke2010, Noble2012,Martin2014,Collings2015,Smith2016}, most of them were done at monolayer or multilayer coverage. However, when disordered or amorphous surfaces are concerned, which is the case for ices in the ISM, there is a notable variation of the binding energy with coverage, as in the case of D$_2$ on porous amorphous solid water (p-ASW) \citep{Amiaud2006}. The smaller the fraction of molecules on a disordered surface, the higher the binding energy. Except in a few cases, in the ISM the abundance of molecules in water ices is very low \citep{Boogert2015}. Thus, in order to interpret observational data correctly and to predict abundances using simulations, it is important to have the appropriate value of the binding energy of molecules on ices. In this contribution, we report the careful measurement of the binding energies of simple molecules (N$_2$, O$_2$, CO, CH$_4$, and CO$_2$) on np-ASW at submonolayer (subML) and monolayer (ML) coverage, from $\sim$ 0.001 to 1 ML\@. We also studied the desorption of N$_2$ and CO from p-ASW, and of NH$_3$ from crystalline water ice.
We derived a simple formula to estimate the binding energy change vs.\ submonolayer coverage; we then used this formula in a gas-grain chemical network simulation of a dense cloud and subsequent warm-up mimicking the hot-core/hot-corino phase. The paper is structured as follows. In the next section we give a description of the apparatus and of the experimental methods. In Section~\ref{sec:result} we present the data and their analysis. From this analysis we obtain a simple function of the binding energy vs.\ coverage which, in Section~\ref{sec:astro}, is used in a simulation of the chemical evolution of a dense cloud. In Section~\ref{sec:summary} we summarize the effect of the new binding energy determinations on the abundances in the evolution of a dense cloud. \section{Experimental} \label{sec:exp} A detailed description of the apparatus can be found elsewhere \citep{Jing2013,He2015b,He2016a}; here we briefly summarize the main features that are most relevant for the measurement of the binding energy of molecules to water ice. The experiments were carried out in a 10 inch diameter ultra-high vacuum chamber (``main chamber''). A pressure as low as $1.5\times10^{-10}$ torr is reached routinely after bake-out. During molecule deposition, the operating pressure is $\sim 3\times 10^{-10}$ torr. At this low pressure, background deposition is negligible. A 1~cm$^{2}$ gold-coated copper disk substrate is located at the center of the chamber. It can be cooled down to 8~K by liquid helium or heated to 450~K using a cartridge heater. A Lakeshore 336 temperature controller with a calibrated silicon-diode sensor (Lakeshore DT670) is used to measure and control the substrate temperature with an uncertainty of less than 0.5~K. A Hiden Analytical triple-pass quadrupole mass spectrometer (QMS) mounted on a rotary platform records the desorbed species from the sample or measures the composition of the incoming beam.
A Teflon cone is attached to the entrance of the QMS detector in order to maximize the collection of molecules desorbed or scattered from the surface. It also serves to reject molecules desorbing from other parts of the sample holder. The QMS is placed at $42\degree$ from the surface normal, while the incident angle of the molecular beam is $8\degree$ and on the opposite side---with respect to the surface normal---of the QMS detector. At the back of the sample, there is a gas capillary array for water vapor deposition. The capillary array is not directly facing the sample holder, in order to obtain a deposition of water vapor from the background. Distilled water underwent at least three freeze-pump-thaw cycles to remove dissolved air. A leak valve is used to control the water vapor flow into the main chamber. Three types of ice are used in this study: porous amorphous solid water (p-ASW), non-porous amorphous solid water (np-ASW), and crystalline water ice (CI). During water vapor deposition, we regulated the vapor pressure to be $\sim 5\times 10^{-7}$ torr, which corresponds to a deposition rate of 0.5~ML/s. This rate is close to the slowest deposition rate used by \citet{Bossa2015}. The ice thickness is calculated by integration of the chamber pressure with time, assuming that $1\times10^{-6}$ torr$\cdot$s of exposure corresponds to 1 monolayer (ML). For the experiments carried out in this study, the ice thickness is 100 ML\@. Porous amorphous ice is prepared by background deposition of water vapor when the substrate is at 10~K, followed by annealing at 70~K for 30 minutes and by cooling back down to $\sim$15~K before the molecules are deposited on the surface using the molecular beam. Then, in the TPD experiment, the temperature of the surface is raised linearly with time and the molecules coming from the surface are detected by the QMS\@.
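The exposure-to-thickness bookkeeping described above reduces to integrating the chamber pressure over time with the stated calibration of $1\times10^{-6}$~torr$\cdot$s per ML; a trivial sketch (constant pressure assumed for simplicity):

```python
def thickness_in_ML(pressure_torr, duration_s, ml_per_torr_s=1.0e-6):
    """Ice thickness from a constant-pressure exposure, using the
    calibration 1e-6 torr*s = 1 ML stated in the text."""
    return pressure_torr * duration_s / ml_per_torr_s

rate_ML_per_s = thickness_in_ML(5.0e-7, 1.0)   # deposition rate at 5e-7 torr
total_ML = thickness_in_ML(5.0e-7, 200.0)      # 200 s of deposition
```

At $5\times10^{-7}$~torr this reproduces the quoted 0.5~ML/s, so the 100~ML films used here correspond to roughly 200~s of water vapor deposition.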
In the TPD experiments, the highest surface temperature for the p-ASW was kept below 70~K in order to maintain approximately the same surface morphology. Np-ASW is prepared with the substrate temperature at 130~K, followed by annealing at 130~K for 30 minutes. TPD experiments on np-ASW were performed at temperatures always below 130~K. CI was prepared by annealing a np-ASW sample at $\sim$145~K for 10 minutes. A Fourier Transform Infrared spectrometer (FTIR) in the RAIRS (Reflection Absorption InfraRed Spectroscopy) configuration is used to check the ice structure. Connected to the main chamber are two highly collimated three-stage molecular beam lines. Gas flow into the beam was controlled by an Alicat MCS-5 mass flow controller. In the third stage of the beam line there is a flag controlled by a stepper motor to accurately control the beam opening time. An exposure time as short as 0.5 second can be programmed with an uncertainty of 0.05 second. The list of TPD experiments performed is shown in Table~\ref{tab:list_exp}. The surface temperature at which the beam deposition takes place (deposition temperature) was chosen so that the sticking is unity \citep{He2016a}. The NH$_3$ TPD was performed on crystalline ice because NH$_3$ desorbs at temperatures where water ice becomes crystalline. In the TPD the highest temperature was set to 145~K, at which the desorption of NH$_3$ is complete and water ice evaporation is still slow. \begin{table} \centering \caption{List of temperature programmed desorption (TPD) experiments performed. The deposition dose ranges from below 1\% ML to above 1 ML\@.
The temperature ramp rate during TPD is 0.5~K/s; np-ASW = non-porous amorphous solid water; p-ASW = porous amorphous solid water.} \label{tab:list_exp} \begin{tabular*}{0.45\textwidth}{@{\extracolsep{\fill} }ccc} \toprule adsorbate & ice morphology & deposition temperature (K) \\ \midrule CO & p-ASW & 15 \\ CO & np-ASW & 15 \\ N$_2$ & p-ASW & 15 \\ N$_2$ & np-ASW & 15 \\ O$_2$ & np-ASW & 20 \\ CH$_4$ & np-ASW & 20 \\ CO$_2$ & np-ASW & 60 \\ NH$_3$ & crystalline & 50 \end{tabular*} \end{table} \section{Results and Discussion} \label{sec:result} \subsection{CO Desorption from Porous Amorphous Solid Water} The TPD spectra of CO from p-ASW are shown in Figure~\ref{fig:CO_TPD_porous}. Compared with TPD spectra in the literature, this work focuses on the submonolayer regime, especially on very low coverages. In the figure, the CO coverage varies from 0.56\% ML to 1.33 ML\@. The coverage is determined by identifying the transition from submonolayer to multilayer, similar to the method in \citet{Smith2016}. In the submonolayer regime, the TPD peak temperature decreases with coverage. At more than 1 ML, the TPD peak temperature increases with coverage, and shows zeroth-order desorption with overlapping leading edges. A shape change is also seen at this transition. For the experiments performed in this work, the absolute uncertainty of the coverage determination is about 30\%. The coverage of the other traces is obtained accordingly, based on the deposition time. For p-ASW, it takes 30 minutes of CO exposure to fully cover the surface area, while for np-ASW, only 3 minutes of exposure are needed. This indicates that the surface area of a 100 ML p-ASW film, as prepared under our experimental conditions, is 10 times that of np-ASW\@. During molecule deposition, the beam intensity was stable, varying by less than 1\%. The sticking coefficient may also affect the amount of molecules on the surface.
To eliminate the uncertainty induced by sticking, we set the surface temperature of deposition to be low enough so that the sticking coefficient during deposition is always unity~\citep{He2016a}. \begin{figure} \epsscale{1.1} \plotone{CO_TPD_porous.eps} \caption{TPD spectra of CO from 100~ML of p-ASW\@. The temperature at which CO is deposited is 15~K. The TPD ramp rate is 0.5~K/s. The coverage, in units of ML, is given in the inset.} \label{fig:CO_TPD_porous} \end{figure} \subsection{Direct Inversion Method} The TPD traces in Figure~\ref{fig:CO_TPD_porous} share a common trailing edge, which indicates that the diffusion rate of CO is high and that an equilibrium diffusion state is reached during the desorption \citep{He2014a}. Molecules tend to occupy the deep adsorption sites (with higher binding energy) before they fill shallow sites (with lower binding energy). In this scenario, the direct inversion method, which is based on particles diffusing and preferentially occupying deep adsorption sites, was used previously \citep[e.g.][]{He2014a,Smith2016} and works well for extracting the coverage dependent binding energy distribution. A detailed discussion of the direct inversion method and of the diffusion process on the surface is available in \citet{He2014a}. We start from the Polanyi-Wigner rate equation; assuming first order desorption, the desorption rate can be written as: \begin{equation} \frac{\dif \theta(E_{\mathrm{D}},t)}{\dif t}= -\nu \theta (E_{\mathrm{D}},t)\exp \left(-\frac{E_{\mathrm{D}}}{k_{\mathrm{B}}T(t)}\right) \label{eq:p-w} \end{equation} where $\nu$ is the desorption pre-exponential factor, which depends on the substrate and adsorbate; hereafter we use the widely accepted value of $10^{12}$ s$^{-1}$; $\theta (t)$ is the coverage, defined as a fraction of 1 ML, i.e., the number of adsorbate particles divided by the number of adsorption sites on the surface; $E_{\mathrm{D}}$ is the binding energy; $k_{\mathrm{B}}$ is the Boltzmann constant; and $T(t)$ is the temperature of the surface.
The coverage dependent $E_{\rm D}(\theta)$ can be calculated for each TPD trace as follows: \begin{equation} E_{\mathrm{D}}(\theta)=-k_{\mathrm{B}}T\ln\left(-\frac{1}{\nu \theta}\frac{\dif \theta}{\dif t}\right) \label{eq:invert} \end{equation} This inversion is applied to each TPD trace in Figure~\ref{fig:CO_TPD_porous}. The resulting $E_{\mathrm{D}}(\theta)$ distribution is shown in Figure~\ref{fig:CO_porous_Edes}. It can be seen that the $E_{\mathrm{D}}(\theta)$ obtained from different TPD traces overlap well; this suggests that direct inversion is a reliable method to obtain the binding energy distribution, given that the trailing edges of the TPD traces overlap. For easier incorporation of $E_{\rm D}$ into gas-grain models, we fit the $E_{\mathrm{D}}(\theta)$ distribution with the function: \begin{equation} E_{\mathrm{D}}(\theta)= E_1 + E_2 \exp \left(-\frac{a}{\max(b-\lg (\theta), 0.001)}\right) \label{eq:fit} \end{equation} where $E_1$, $E_2$, $a$, and $b$ are fitting parameters. $E_1$ is the binding energy for $\theta > 1$ ML, while $E_1+E_2$ is the binding energy as $\theta$ approaches zero. This function can fit the binding energy at both submonolayer and multilayer coverage by forcing the multilayer $E_{\mathrm{D}}(\theta)$ value to the constant $E_1$ via the $\max$ function. The calculated binding energy distributions, together with the fits using Equation~\ref{eq:fit}, are shown in Figure~\ref{fig:CO_porous_Edes}. The fitting parameters are listed in Table~\ref{tab:fitting_para}. \begin{figure} \epsscale{1.1} \plotone{CO_porous_Edes.eps} \caption{Surface coverage dependent binding energy distribution of CO on p-ASW obtained from the direct inversion method.
The dashed lines are the inverted TPD traces of Figure~\ref{fig:CO_TPD_porous} using Eq.~\ref{eq:invert}; the solid line is the fit using the formula in Eq.~\ref{eq:fit} with the parameters in Table~\ref{tab:fitting_para}.} \label{fig:CO_porous_Edes} \end{figure} So far we have used $10^{12}$ s$^{-1}$ as the prefactor. However, in other works (e.g., \citet{Smith2016}), different values have been either assumed or obtained from fits. It is useful to analyze how different prefactors affect the simulation of interstellar chemistry. For simplicity, we consider an ideal case where the binding energy has a single value and the TPD peak is sharp. Suppose that in a TPD experiment with a heating ramp rate of 0.5~K/s, a desorption peak is at 35~K. From this one can calculate the binding energy $E_{\rm D}$ using the Polanyi-Wigner equation and assuming a certain prefactor $\nu$. These values of $E_{\rm D}$ and $\nu$ can then be used in models to predict at what temperature molecules desorb from the interstellar grain surface. Since the heating up time scale of a dense cloud is of the order of $10^5$ years (see below), molecules desorb from the grain surface at a lower temperature than under laboratory conditions. In Figure~\ref{fig:changing_nu} we show the simulated desorption as a function of temperature using different prefactors and heating ramp rates. It can be seen that when the prefactor is changed by a factor of 100 (i.e., from $10^{12}$ to $10^{14}$), even at the slowest ramp rate of $10^{-10}$~K/s the difference in the simulated TPD peak temperature is less than 1.4~K, which amounts to a 7\% error. This error is usually smaller than the uncertainty in determining the binding energy. Therefore, the use of different prefactors does not significantly affect the simulation of ISM environments, provided that the same prefactor is used both in the laboratory data analysis and in the gas-grain models.
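To make the inversion procedure concrete, the following minimal Python sketch (not the analysis code used in this work) generates a synthetic first-order TPD trace for an assumed single-valued binding energy of 1200~K, using the prefactor $\nu = 10^{12}$~s$^{-1}$ and the 0.5~K/s ramp adopted above, and then recovers the input energy with Eq.~\ref{eq:invert}.

```python
import numpy as np

# Illustrative sketch only (not the analysis code of this work).
# Generate a synthetic first-order TPD trace via the Polanyi-Wigner
# equation and invert it with E_D = -T ln(-(1/(nu theta)) dtheta/dt).
# Energies are expressed as E_D/k_B in kelvin, as in the text.

NU = 1e12        # prefactor (s^-1), value adopted in the text
BETA = 0.5       # heating ramp rate (K/s)
E_TRUE = 1200.0  # assumed single-valued binding energy (K), hypothetical

t = np.arange(0.0, 120.0, 1e-3)   # time grid (s)
T = 20.0 + BETA * t               # linear temperature ramp (K)

# Forward-Euler integration of dtheta/dt = -nu theta exp(-E_D / T)
theta = np.empty_like(t)
theta[0] = 1.0                    # start at 1 ML coverage
dt = t[1] - t[0]
for i in range(1, len(t)):
    rate = NU * theta[i - 1] * np.exp(-E_TRUE / T[i - 1])
    theta[i] = max(theta[i - 1] - rate * dt, 0.0)  # clamp at zero

# Direct inversion (Eq. 2), restricted to where desorption is significant
dtheta_dt = np.gradient(theta, t)
mask = (theta > 1e-3) & (-dtheta_dt > 1e-6)
E_inv = -T[mask] * np.log(-dtheta_dt[mask] / (NU * theta[mask]))

print(f"recovered E_D = {E_inv.mean():.0f} K (input {E_TRUE:.0f} K)")
```

With the small time step used here, the recovered energy agrees with the input to within a few kelvin over the masked range; in the laboratory case the same inversion applied to measured traces yields the coverage-dependent distribution instead of a single value.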
\begin{figure} \epsscale{1.1} \plotone{changing_nu.eps} \caption{Effect of using different values of the prefactor $\nu$ in the analysis of experimental data and in the gas-grain model. See text.} \label{fig:changing_nu} \end{figure} The TPDs of CO and N$_2$ from p-ASW, as well as of CO, N$_2$, O$_2$, and CH$_4$ from np-ASW, are shown in Figure~\ref{fig:6molecules}. We again apply the direct inversion method to each TPD trace from low coverage up to $\sim 1$ ML\@. Higher coverage curves are not included in the inversion because the direct inversion method is not applicable at multilayer coverage. The binding energy distributions and the fits are shown in the right panels. For the TPDs of O$_2$ and CH$_4$ on np-ASW in the submonolayer regime, the trailing edges do not overlap well. Consequently, significant gaps are present in the inverted curves. The coverage dependent binding energy distribution of H$_2$ on water ice is very important, since molecular hydrogen, as the most abundant molecule in the universe, plays a fundamental role in the physical and chemical evolution of interstellar clouds. However, it is very challenging to measure H$_2$ TPDs, especially at low coverage, because of the high background of H$_2$ even in a vacuum chamber under ultra-high vacuum conditions (P $\sim 10^{-10}$ torr). D$_2$ TPDs are typically measured instead. Here we use the D$_2$ TPDs on np-ASW by \citet{Amiaud2006}, as shown in Figure~\ref{fig:D2_TPD_compact}. We follow the same direct inversion procedure as above to obtain the binding energy distribution, shown in Figure~\ref{fig:D2_compact_Edes}. The same group recently performed TPDs of H$_2$, D$_2$, and HD on p-ASW, and compared the peak temperature differences between these three molecules \citep{Amiaud2015}. Based on Figure 2 in \citet{Amiaud2015}, the D$_2$ and H$_2$ desorption peaks are at 16 and 14~K, respectively. A calculation of the binding energies yields the ratio $E_{\rm D,H_2}/E_{\rm D,D_2} = 0.87$.
We then assume that this ratio also applies to np-ASW and multiply the D$_2$ binding energy distribution by 0.87 to represent the binding energy distribution of H$_2$. \begin{table} \centering \caption{Fitting parameters (in K for $E_1$ and $E_2$) for the $E_{\rm D}(\theta)$ distribution in Eq.~\ref{eq:fit}.} \label{tab:fitting_para} \begin{tabular*}{0.42\textwidth}{@{\extracolsep{\fill} }ccccc} \toprule & E$_1$ & E$_2$ & a & b \\ \midrule CO (np-ASW) & 870 & 730 & 0.6 & 0.3 \\ N$_2$ (np-ASW) & 790 & 530 & 0.45 & 0.3 \\ O$_2$ (np-ASW) & 920 & 600 & 1.2 & 0.2 \\ CH$_4$ (np-ASW) & 1100 & 500 & 1 & 0.3 \\ D$_2$ (np-ASW) & 370 & 210 & 0.7 & 0.2 \\ H$_2$ (np-ASW) & 322 & 183 & 0.7 & 0.2 \\ \midrule CO (p-ASW) & 980 & 960 & 0.9 & 0.3 \\ N$_2$ (p-ASW) & 900 & 900 & 0.9 & 0.3 \\ \bottomrule \end{tabular*} \end{table} \begin{figure*} \epsscale{1} \plotone{subplots.eps} \caption{TPDs of CO and N$_2$ from p-ASW, and of CO, N$_2$, O$_2$, CH$_4$, and D$_2$ from np-ASW (left panel) with the corresponding binding energy distributions in K (right panel). The coverage in ML is in the middle panel.} \label{fig:6molecules} \end{figure*} Figure~\ref{fig:CO2_TPD_compact} shows the TPDs of CO$_2$ from np-ASW\@. Zeroth-order-like traces are seen even for very short depositions, indicating that CO$_2$ forms clusters. The surface coverage cannot be estimated by identifying the change in TPD shape. Instead, we assume that it takes the same amount of CO$_2$ as of CO (beam flux times deposition time) to cover 1 ML of surface area. The beam flux is accurately controlled by the Alicat mass flow controller, and the correction factor for the specific gas is already taken into account by the controller. This is a fair assumption, considering that for all the other species we measured it takes 3 minutes of exposure to fully cover the first monolayer of the np-ASW surface. The binding energy can be calculated using the Polanyi-Wigner equation to be 2320~K, and it is insensitive to coverage.
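Such a single-valued binding energy can be estimated from the peak temperature alone; the sketch below (with a hypothetical 80~K peak and first-order kinetics assumed for simplicity, although CO$_2$ in fact shows zeroth-order-like traces) solves the Polanyi-Wigner peak condition for $E_{\rm D}$.

```python
import math

# Sketch: binding energy from a TPD peak temperature via the first-order
# Polanyi-Wigner peak condition E/T_p^2 = (nu/beta) exp(-E/T_p), with E
# expressed as E/k_B in kelvin. The 80 K peak below is a hypothetical
# example value, not a measurement from this work.

def e_from_peak(T_peak, nu=1e12, beta=0.5):
    """Solve E = T_p ln(nu T_p^2 / (beta E)) by fixed-point iteration."""
    E = 30.0 * T_peak          # rough starting guess
    for _ in range(100):       # log dependence makes this a contraction
        E = T_peak * math.log(nu * T_peak ** 2 / (beta * E))
    return E

E_est = e_from_peak(80.0)
print(f"E_D(peak at 80 K) ~ {E_est:.0f} K")
```

For a hypothetical peak near 80~K and $\nu = 10^{12}$ s$^{-1}$ this yields about 2350~K, consistent in magnitude with the 2320~K quoted above.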
The clustering behavior of CO$_2$ may help to explain the observed segregation of CO$_2$ in ices \citep{Oberg2009}. This issue is still open. \citet{Karssemeijer2014} used a new CO$_2$-H$_2$O potential to calculate the adsorption properties of CO$_2$ on amorphous solid water and crystalline ice. They found no evidence of clustering, in contradiction with the experiments of \citet{Edridge2013} on CO$_2$ adsorption on ASW and graphite. The segregation of molecules in water ice will be the focus of a forthcoming paper. \begin{figure} \epsscale{1} \plotone{CO2_TPD_compact.eps} \caption{CO$_2$ TPDs from np-ASW\@. The surface coverage is shown in the inset. CO$_2$ is deposited when the ice is at 60~K. During the TPD the heating ramp rate is 0.5~K/s.} \label{fig:CO2_TPD_compact} \end{figure} Figure~\ref{fig:NH3_TPD_crys} shows the TPDs of NH$_3$ from CI\@. Three distinct peaks are seen in the TPDs, at $\sim 140$~K, $\sim 110$~K, and $\sim 100$~K, respectively. The 100~K peak is due to zeroth-order desorption at multilayer coverage. The origin of the other two peaks is unknown, and their explanation is beyond the scope of this work. A direct inversion is applied to the TPD traces, and the binding energy distribution so obtained is shown in Figure~\ref{fig:NH3_crys_Edes}. This distribution cannot be fitted with the expression in Eq.~\ref{eq:fit}. When the coverage of NH$_3$ is low, ammonia binds strongly to water ice and desorbs at a temperature slightly lower than the water desorption temperature. A more relevant measurement would be the desorption of NH$_3$ from the surface of a bare silicate or a carbonaceous material, but this is outside the focus of this work. \begin{figure} \epsscale{1.1} \plotone{NH3_TPD_crys.eps} \caption{NH$_3$ TPDs from crystalline water ice (CI). The surface coverage is shown in the inset. NH$_3$ is deposited when the ice is at 50~K. During the TPD the heating ramp rate is 0.5~K/s.
The ice is heated only to 145~K because at a higher temperature water begins to desorb significantly. } \label{fig:NH3_TPD_crys} \end{figure} \begin{figure} \epsscale{1.1} \plotone{NH3_crys_Edes.eps} \caption{Surface coverage dependent binding energy distribution of NH$_3$ on crystalline water ice (CI) obtained from a direct inversion of the TPD traces in Figure~\ref{fig:NH3_TPD_crys}.} \label{fig:NH3_crys_Edes} \end{figure} \begin{figure} \epsscale{1.1} \plotone{D2_TPD_compact.eps} \caption{D$_2$ TPDs from np-ASW\@. The surface coverage is shown in the inset. Data taken from \citet{Amiaud2006}. } \label{fig:D2_TPD_compact} \end{figure} \begin{figure} \epsscale{1} \plotone{D2_compact_Edes.eps} \caption{Surface coverage dependent binding energy distribution of D$_2$ on np-ASW obtained from a direct inversion of the TPD traces in Figure~\ref{fig:D2_TPD_compact}. Data taken from \citet{Amiaud2006}. } \label{fig:D2_compact_Edes} \end{figure} \section{Astrophysical Implications} \label{sec:astro} Grain surface chemistry plays a crucial role in the formation of a wide range of molecules, including the simplest and most abundant molecule, H$_2$. Once a gas species adsorbs on the grain surface, it will hop on the surface due to its thermal energy. Reactions occur via the Langmuir-Hinshelwood (L-H), Eley-Rideal (E-R), or hot-atom (H-A) mechanism \citep{Vidali2013}. Here we only consider the L-H mechanism, i.e., reactants thermally migrate on the grain surface over the diffusion energy barrier $E_{\rm d}$ between sites until they meet at a binding site. Migration may also occur via quantum mechanical tunneling. Initially it was postulated that the mobility of light atoms on a grain surface at low temperature was due to quantum tunneling, but laboratory experiments suggest that thermal hopping plays a bigger role \citep{Pirronello1997b,Vidali2013,Hama2013}. The diffusion energy barriers define the rates at which reactions take place.
The rate of hopping at a given temperature $T$ is given by: \begin{equation} A_i= \nu_i\exp(-E_{\rm d}(i)/k_{\rm B}T), \end{equation} where $k_{\rm B}$ is the Boltzmann constant, $E_{\rm d}(i)$ is the diffusion energy barrier, and $\nu_i$ is the typical vibrational frequency, given by \begin{equation} \nu_i= \sqrt{\frac{2 g_{\rm s} E_{\rm D}(i)}{\pi^2 m_i}}, \end{equation} where $g_{\rm s}$ is the surface density of sites on a grain, $m_i$ is the mass of the $i$-th species, and $E_{\rm D}(i)$ is the binding energy for desorption. Typical values of the vibrational frequencies are in the range of $10^{12}$--$10^{13}$ s$^{-1}$. It should be noted that in the direct inversion of the TPD traces the $\nu$ value was taken to be $10^{12}$ s$^{-1}$ for all species; the small difference in $\nu$ values does not affect the result significantly. If one reactant finds another during hopping, they will recombine and form a molecule. However, sometimes adsorbed species may desorb back into the gas phase without reacting, due to a variety of desorption mechanisms. The thermal desorption rate is given by: \begin{equation} W_i= \nu_i\exp(-E_{\rm D}(i)/k_{\rm B}T). \end{equation} Thus, $E_{\rm D}(i)$ and $E_{\rm d}(i)$ are two crucial parameters that control grain surface chemistry once a gas phase species is adsorbed on the surface. Present-day astrochemical models assume fixed values of $E_{\rm D}$ and $E_{\rm d}$ for the study of the formation of grain surface species. From the experiments discussed in the previous sections, it is clear that these parameters are functions of coverage. In the submonolayer regime the binding energy is significantly higher than in the monolayer regime; the consequence is that, while molecules are kept on the grain surface longer, the formation rate is reduced due to slower hopping. We employed a gas-grain simulation to examine the impact of the experimental results discussed here. The details of the simulation and of the chemical networks used are given in \citet{He2016a,Acharyya2015a,Acharyya2015b}.
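For orientation, the characteristic frequency and thermal desorption rate above can be evaluated numerically; in the sketch below the site density $g_{\rm s}$ and the 1150~K CO binding energy are assumed, illustrative values rather than fit results from this work.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant (J/K)
AMU = 1.66053906660e-27   # atomic mass unit (kg)
G_S = 1.5e19              # site density (m^-2), ~1.5e15 cm^-2 (assumed)

def nu_i(E_D_kelvin, mass_amu, g_s=G_S):
    """Vibrational frequency nu_i = sqrt(2 g_s E_D / (pi^2 m_i)), s^-1."""
    E = E_D_kelvin * K_B  # E_D quoted as E_D/k_B in K; convert to joules
    return math.sqrt(2.0 * g_s * E / (math.pi ** 2 * mass_amu * AMU))

def desorption_rate(E_D_kelvin, mass_amu, T):
    """Thermal desorption rate W_i = nu_i exp(-E_D / k_B T), s^-1."""
    return nu_i(E_D_kelvin, mass_amu) * math.exp(-E_D_kelvin / T)

nu_CO = nu_i(1150.0, 28.0)
print(f"nu(CO) ~ {nu_CO:.2e} s^-1")  # falls in the 1e12-1e13 range
print(f"W(CO, 10 K) = {desorption_rate(1150.0, 28.0, 10.0):.2e} s^-1")
print(f"W(CO, 30 K) = {desorption_rate(1150.0, 28.0, 30.0):.2e} s^-1")
```

With these assumed inputs the thermal desorption rate of CO is negligible at 10~K but becomes appreciable by 30~K, consistent with the warm-up behavior discussed below.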
\begin{table} \centering \caption{Model Names and Parameters} \label{Table_Model} \begin{tabular*}{0.45\textwidth}{@{\extracolsep{\fill} }ccccc} \toprule Name & T$_{\rm Grain}$ & Heating Rate & Density & Binding \\ & (K) & (K/year) & (cm$^{-3}$) & Energy \\ \midrule C1N & 10 & No & 2 $\times$ $10^{4}$ & Eq. 3 \\ C1O & 10 & No & 2 $\times$ $10^{4}$ & Old \\ C2N & 10 & No & 1 $\times$ $10^{5}$ & Eq. 3 \\ C2O & 10 & No & 1 $\times$ $10^{5}$ & Old \\ \midrule W1FN & 10--200 & 190/5 $\times$ $10^{4}$ & 2 $\times$ $10^{4}$ & Eq. 3 \\ W1FO & 10--200 & 190/5 $\times$ $10^{4}$ & 2 $\times$ $10^{4}$ & Old \\ W2FN & 10--200 & 190/5 $\times$ $10^{4}$ & 1 $\times$ $10^{5}$ & Eq. 3 \\ W2FO & 10--200 & 190/5 $\times$ $10^{4}$ & 1 $\times$ $10^{5}$ & Old \\ \midrule W1SN & 10--200 & 190/1 $\times$ $10^{6}$ & 2 $\times$ $10^{4}$ & Eq. 3 \\ W1SO & 10--200 & 190/1 $\times$ $10^{6}$ & 2 $\times$ $10^{4}$ & Old \\ W2SN & 10--200 & 190/1 $\times$ $10^{6}$ & 1 $\times$ $10^{5}$ & Eq. 3 \\ W2SO & 10--200 & 190/1 $\times$ $10^{6}$ & 1 $\times$ $10^{5}$ & Old \\ \bottomrule \end{tabular*} \end{table} \subsection{Model Parameters} We ran two classes of models: in one we considered the old fixed binding energies for CO, N$_2$, O$_2$, CH$_4$, H$_2$, and atomic oxygen, and in the other we used Equation~\ref{eq:fit} for CO, N$_2$, O$_2$, CH$_4$, and H$_2$, and 1850~K for the atomic oxygen binding energy \citep{He2015b}. The sticking was treated in the same way as in \citet{He2016a}. We ran six models for each class by varying the density and the gas and grain temperature. Two models for each class were run by keeping the gas and grain temperature fixed at 10~K. They differ by the initial density: in one we took the hydrogen number density $n_{\rm H} = 2\times 10^4$ cm$^{-3}$ and in the other $10^5$ cm$^{-3}$. We designate these two models as C1O and C2O (``O'' for old energies) and C1N and C2N (``N'' for new energies and Equation~\ref{eq:fit}).
Then we ran another four models (``W'' for warm-up) for each class, in which we warmed up the gas and grain and increased the temperature from 10~K to 200~K at a linear rate. We considered two different densities, $n_{\rm H}=2\times 10^4$ (W1) and $10^5$ cm$^{-3}$ (W2), and two different heating rates for increasing the gas and grain temperature from 10~K to 200~K:\@ fast heating (``F'') in $5\times 10^4$ years and slow heating (``S'') in $10^6$ years. Thus we have the W1FO and W2FO (fast) and W1SO and W2SO (slow) models with the old binding energies, and the W1FN and W2FN (fast) and W1SN and W2SN (slow) models for which Equation~\ref{eq:fit} is used. \begin{figure} \epsscale{1} \plotone{ED_vs_Time.eps} \caption{Time variation of the binding energy as calculated during the model run for: the C1N (solid lines) and C1O (dashed lines) models (top panel); the W1FN (solid lines) and W1FO (dashed lines) models (bottom panel).} \label{fig-astro1} \end{figure} \subsection{Results} In Figure~\ref{fig-astro1} (top), solid curves show the variation of the binding energy as a function of time during the calculation for the C1N model, and dashed lines show the fixed binding energies used for the C1O model. Since both O$_2$ and N$_2$ have the same fixed binding energy (1000~K) in the C1O model, only N$_2$ is shown. For CO, the binding energy in the C1N model (solid red line) is higher up to $3\times 10^4$ years, and after that it is lower than the fixed binding energy of 1150~K (red dashed line). For O$_2$ (solid cyan line) and N$_2$ (solid blue line), at all times the binding energy in the C1N model is higher than in the C1O model (dashed blue line), which essentially means that these two species always desorb in the sub-monolayer regime. The time variation of the CH$_4$ binding energy shows that for times greater than $2\times 10^4$ years, the binding energy (solid magenta line) in model C1N is lower than in C1O (dashed magenta line).
Figure~\ref{fig-astro1} (bottom) shows the binding energy variation in the W1FN model (solid curves). Up to $10^6$ years, the binding energy profiles of all the species are the same as in Figure~\ref{fig-astro1} (top). However, once the grains begin to warm up and the surface abundance decreases below the monolayer regime, the binding energy starts to increase and attains the maximum level, which is the sum of $E_1$ and $E_2$. \subsubsection{CO} Figure~\ref{fig-astro2} shows the time variation of the CO abundance for the various models. In the top panel the CO abundance for models C1O, C2O, C1N, and C2N is shown. For these models the gas and grain temperatures are kept constant at 10~K (cyan dashed line). In Figure~\ref{fig-astro2}a, the red curve shows the gas-phase CO abundance. When Equation~\ref{eq:fit} is used for $n_{\rm H}$ = $2\times 10^4$ cm$^{-3}$, the CO gas-phase abundance is significantly higher than in the model with the old, fixed value of the energy (dashed red curve). This is mainly due to the fact that in the astronomical literature the binding energy for CO is taken as 1150~K (shown as the red dashed line in Figure~\ref{fig-astro1}, top), whereas when Equation~\ref{eq:fit} is used the CO binding energy remains close to 840~K for times greater than $4\times 10^4$ years; see Figure~\ref{fig-astro1} (red curve). Thus, most of the time the CO binding energy is lower than the one used in the literature. A lower binding energy of CO makes the cosmic-ray-induced desorption rate \citep{Hasegawa1993} greater than the accretion rate of CO\@. The net effect is very little CO depletion. Consequently, we can see that the grain surface CO abundance in the C1N model (solid black line) is lower than in the C1O model (dashed black line). To investigate this matter further we ran a model with $n_{\rm H}$ = $10^5$ cm$^{-3}$.
The solid green line shows the CO gas-phase abundance; after $10^6$ years it starts to go down much more rapidly than in the previous model, but it is still significantly lower than in the C2O model (dashed green line). The solid blue line shows the grain surface CO abundance for $n_{\rm H} = 10^5$ cm$^{-3}$; it is still much lower than in the C2O model (dashed blue line). Figure~\ref{fig-astro2}b shows the time variation of the CO abundance for the warm-up model with the fast heating rate. For these models, the gas and grain temperature is kept constant at 10~K up to $10^6$ years and is then linearly increased to 200~K in $5\times 10^4$ years. The solid red line shows the CO gas-phase abundance for the W1FN model; it is clear that, compared with the W1FO model, there is hardly any change from the warm-up phase to the post warm-up phase. Once the warm-up process begins and the grain temperature goes above 20~K, thermal desorption takes over and becomes the dominant desorption process; by 30~K, grain surface CO has completely evaporated back into the gas phase. A similar trend is seen for surface CO\@. In Figure~\ref{fig-astro1} (bottom) the solid red curve (dashed red line) shows the binding energy variation of CO with time for this model (for the old model with the fixed 1150~K binding energy). It is clear that just above $10^6$ years CO attains the maximum binding energy of 1600~K in the warm-up models, since there is very little CO left on the grain surface. To understand CO desorption from the sub-monolayer regime during warm-up, we ran models with a slow heating rate; the time variation of the CO abundance is shown in Figure~\ref{fig-astro3}. Since the pre-collapse behavior is the same as in Figure~\ref{fig-astro2}, in this plot we only highlight the portion of the curve in which CO is desorbing due to the warm-up.
It can be seen that the gas-phase abundance of CO, i.e., the solid red line ($n_{\rm H}=2\times 10^4$ cm$^{-3}$) and the solid green line ($n_{\rm H}=1 \times 10^5$ cm$^{-3}$) for the W1SN and W2SN models, starts to increase much earlier than in the W1SO and W2SO models (dashed red and green lines), due to the lower binding energy in the monolayer regime compared to the fixed binding energy. But once CO has desorbed back into the gas phase, the abundance profiles are very similar. It is clear that the grain surface CO abundance for both the W1SN and W2SN models starts to decrease earlier than in the W1SO and W2SO models, but for the W1SN and W2SN models the decrease is much slower and we see a knee-like decrease. This means that a small fraction of CO is retained for a much longer time and up to a higher temperature ($\sim$40~K) when the binding energy as a function of coverage is used. However, when the heating rate is fast the effect is small (Figure~\ref{fig-astro2}b). \begin{figure} \epsscale{1.1} \plotone{CO_TPD_PLOT_ALL_MODELS.eps} \caption{The time variation of the CO abundance is shown for different models. Solid and dashed lines are for models with Eq.~3 and with the old binding energies, respectively. Red (gas) and black (grain) lines are for $n_{\rm H}=2\times 10^4$ cm$^{-3}$, and green (gas) and blue (grain) lines for $n_{\rm H}=1\times 10^5$ cm$^{-3}$. (a) is for a cloud with fixed gas and grain temperatures of 10~K, and (b) is for the warm-up model in which the grain temperature is raised from 10 to 200~K in $5\times 10^4$ years. } \label{fig-astro2} \end{figure} \begin{figure} \epsscale{1.1} \plotone{ZOOM_CO_TPD_PLOT_ALL_MODELS.eps} \caption{Same as Figure~\ref{fig-astro2}b but with the grain temperature raised to 200~K in $10^6$ years; the early warm-up phase is enlarged to show the sub-monolayer regime desorption of CO.} \label{fig-astro3} \end{figure} \subsubsection{\texorpdfstring{N$_2$}{N2}} Figure~\ref{fig-astro4}a shows the time variation of N$_2$ for the C2N and C2O models.
The gas-phase N$_2$ abundance for the C2N model (solid green line) remains almost unchanged after $10^5$ years, similarly to the CO abundance profile, whereas for the C2O model the abundance goes down slowly at first and then sharply after $10^5$ years. If we now look at Figure~\ref{fig-astro1} and compare the binding energies used during the model run, it is clear that at all times N$_2$ has a higher binding energy in the C2N model (solid blue line) than in the C2O model (dashed blue line). The question, then, is why gas-phase N$_2$ is more strongly enhanced in the new model. Looking at the gas-phase formation pathways of N$_2$, we found that at late times the dominant pathway for N$_2$ formation in the C2N model is via CO + N$_2$H$^+$, whereas in the C2O model the dominant pathway is N$_2$H$^+$ + e. Thus, the significantly higher abundance of CO in the C2N model causes an enhanced abundance of N$_2$. Surface N$_2$ in the C2N model is also almost constant, due to steady accretion from the gas phase. Figure~\ref{fig-astro5} shows that the gas-phase N$_2$ abundance for the W2SN (solid green line) and W2SO (dashed green line) models is very similar. However, for surface N$_2$, the desorption starts a little later and takes a longer time in model W2SN (solid blue line), due to the higher binding energy compared to the W2SO model (dashed blue line). \begin{figure} \epsscale{1.1} \plotone{ALL_SPEC_TPD_PLOT_ALL_MODELS.eps} \caption{Similar to Figure~\ref{fig-astro2}a but for N$_2$, O$_2$ and CH$_4$ and for $n_{\rm H} = 10^5$ cm$^{-3}$.} \label{fig-astro4} \end{figure} \begin{figure} \epsscale{1.1} \plotone{ZWARM_1D5_1D6_ALL.eps} \caption{Similar to Figure~\ref{fig-astro3} but for N$_2$, O$_2$ and CH$_4$.} \label{fig-astro5} \end{figure} \subsubsection{\texorpdfstring{O$_2$}{O2}} Figure~\ref{fig-astro4}b shows the time variation of O$_2$ for the C2N and C2O models. Up to $10^5$ years, in both the C2N and C2O models the gas-phase abundance of O$_2$ is almost the same.
At later times, the abundance of O$_2$ in the C2N model starts to decrease relative to the C2O model, and above $10^6$ years nearly an order of magnitude difference in abundance can be seen between the two models. It is clear from Figure~\ref{fig-astro1} that the coverage-dependent O$_2$ binding energy is significantly higher than the fixed binding energy of 1000~K. Therefore we see more O$_2$ in the C2O model than in the C2N model, due to the higher non-thermal desorption rate resulting from the lower binding energy. The surface O$_2$ abundance before $2\times 10^4$ years for the C2O model is significantly higher than in the C2N model. During this time the major source of O$_2$ on the grain surface is the O+O reaction. Due to the very high binding energy of oxygen in model C2N, there is no O$_2$ formation via this route. After $10^5$ years the major source of O$_2$ on the grain surface is accretion from the gas phase, so during this time the O$_2$ surface abundance in the C2N and C2O models is very similar. Once again we see a difference at times greater than $10^6$ years; during this time the higher surface O$_2$ abundance in the C2O model is due to the relatively higher accretion rate caused by the higher gas-phase abundance of O$_2$ in this model. We can see the same effect for the W2SO and W2SN models in Figure~\ref{fig-astro5}. \subsubsection{\texorpdfstring{CH$_4$}{CH4}} Figure~\ref{fig-astro4}c shows the time variation of the CH$_4$ abundance for the C2N (solid green line) and C2O (dashed green line) models. Up to $10^4$ years the gas-phase abundance remains very similar in both models. After that, while the C2N model abundance remains almost constant, the C2O model abundance decreases up to $10^5$ years and then begins to increase again after $3\times 10^5$ years. At very late times the abundances of the two models are similar. The reasons for this deviation are twofold.
First, for the C2N model, the binding energy used during the model calculation is lower than in the C2O model between $10^4$ and $10^6$ years; therefore, the net depletion in the C2N model is much lower than in the C2O model. Second, the dominant gas-phase formation pathway for CH$_4$ is via the reaction between CO and CH$_5^+$; thus more CO in the C2N model causes more CH$_4$ formation in this model. Similarly, due to the higher binding energy of CH$_4$ in the C2O model there is less non-thermal desorption in the C2O model; this causes an increase in the surface CH$_4$ abundance at late times, as evident in Figure~\ref{fig-astro4}c (dashed blue line). Figure~\ref{fig-astro5} shows the CH$_4$ abundance for the W2SN and W2SO models. We can see clearly that in the W2SN model, as the temperature is increased, CH$_4$ starts to desorb earlier than in the W2SO model, due to its lower binding energy in the monolayer regime. On the other hand, although surface CH$_4$ in the W2SN model also starts to decrease due to thermal desorption before the W2SO model, owing to its higher binding energy in the sub-monolayer regime the desorption slows down and takes a longer time to complete. \section{Summary} In this work we measured the binding energy distributions of N$_2$, CO, O$_2$, CH$_4$, and CO$_2$ on non-porous amorphous solid water (np-ASW), of N$_2$ and CO on porous amorphous solid water (p-ASW), and of NH$_3$ on crystalline water ice, down to a fraction of 1\% of a monolayer. We found that CO$_2$ forms clusters on the np-ASW surface even at very low coverage. This may help to explain the observed CO$_2$ segregation in ices. The binding energies of N$_2$, CO, O$_2$, and CH$_4$ decrease with coverage. The energy values in the low coverage limit are much higher than those commonly used in gas-grain astrochemical models. We found a simple empirical formula that gives the binding energy of a molecule as a function of the coverage.
We then used this formula in a simulation of the time evolution of a dense cloud followed by a warm-up, as appropriate for a hot core or hot corino. We found that for O$_2$ and N$_2$, desorption takes place from the sub-monolayer regime for all the models we considered; therefore, the effective desorption and hopping energies are higher than the single energy values used in current-day astrochemical models. For CO and CH$_4$ in the cold cloud models, the effective binding energy initially remains significantly higher; it then gradually decreases to the monolayer value as the coverage is increased, and it remains at this value until the end of the simulation. In the warm-up model, the binding energy increases with grain temperature and attains the maximum value of $E_1+E_2$; see Table~\ref{tab:fitting_para}. Another important outcome is that during the slow warm-up, although the desorption process starts earlier for species like CO and CH$_4$ due to the lower value of the binding energy in the monolayer regime, it takes a longer time to complete compared to the single-value binding energy case, due to the significantly higher sub-monolayer binding energy. Thus, a fraction of all these ices stays much longer on the grain surface compared to the case of using a single value of the binding energy, as is currently done in astrochemical models. \label{sec:summary} \section{Acknowledgments} We would like to thank S M Emtiaz and Xixin Liang for technical assistance. This work was supported in part by a grant to GV from the NSF Astronomy \& Astrophysics Division (\#1311958). K. A. acknowledges the support of local funds from the Physical Research Laboratory.
\section{Introduction and Motivation} The quantum harmonic oscillator with a time-dependent frequency has been widely studied in numerous contexts, because it rears its head in practically every situation involving the study of a quantum field in a non-trivial time-dependent background (for a review, see, e.g.,~\cite{txts1,txts2}). The dynamics of such a system is encoded in a propagator kernel $K(q_2,t_2;q_1,t_1)$ which allows us to determine the quantum state $\psi(q_2,t_2)$ at $t=t_2$ if the quantum state $\psi(q_1,t_1)$ at $t=t_1$ is given. Since the Lagrangian of the system is quadratic in $q$, the kernel can be expressed in terms of the solutions to the classical equation of motion (see, e.g.,~\cite{dr94}). So, if the classical solution for a harmonic oscillator with a particular time dependence is known, the quantum theory should be trivial. Or so one would have thought. The reason one continues to investigate this problem is not so much because the equations cannot be solved but because the interpretation of the solutions is non-trivial. (In this paper, for example, the solution appears within the first ten equations and the rest of the paper is devoted to interpreting it!) Hence, it is worthwhile to raise some questions which need to be addressed in reasonably precise terms, before we proceed further. This will provide the motivation for contributing yet another paper on this topic to the already extensive literature! (1) In virtually every context we are interested in, there will be a classical degree of freedom symbolically denoted by a variable $C$ (cosmological background, external electromagnetic field, \ldots) interacting with a quantized degree of freedom $q$ (usually a scalar field) with the total Lagrangian for the combined system written as $L(q,C)=L_0(C)+L_I(C,q)$. We will be interested in quantizing $q$ in the background provided by $C$ and studying the effect of $q$ on $C$ at the semiclassical level. 
To the lowest order, the configuration of $C$ will be determined by the equations of motion arising from $L_0$, ignoring $q$ completely. If this configuration is nontrivial (say, time dependent) then the quantum theory of $q$ will be based on a time dependent Hamiltonian and the $q$-particles will be generated by the interaction. The first conceptual question is: how can one define a notion of such particles and their production when the time dependence is nontrivial? If the classical system reaches a time-independent state asymptotically, it is straightforward to define the notion of particles in the \emph{asymptotic} \emph{in} and \emph{out} states and also obtain an expression for the \emph{total} number of particles produced. But many physically relevant problems (like cosmological particle production) will not give us this luxury of well defined in and out states. (2) The question of defining the notion of a particle in a time dependent background is not one of idle curiosity. If $C$ is producing the quanta of $q$, it has to supply the energy for the process, and obviously this will modify the evolution of $C$. It will be important to obtain the equations of motion for $C$ with this backreaction incorporated as the particle production progresses. This is certainly not an asymptotic notion and one would like to imagine that -- in any causal theory -- the backreaction on $C$ at time $t=t_1$ should not depend on how the system will evolve at times $t>t_1$. So, purely conceptually, we need a notion of backreaction which does not use the concept of an out vacuum state, etc. (3) Such a backreaction --- and in fact, the notion of semiclassical evolution --- can be meaningful only if we have at least an approximate notion of `classicality' for the $q$ mode. Our intuitive idea of a particle which is produced (that has drained away energy from $C$) is classical and one assumes that a particle which is produced `stays produced'. 
This is of course not true in general and no sensible, time dependent definition of a particle exists which will obey this criterion. This is related to the fact that particle production is stochastic and what is usually computed is the {\it mean number} of particles, which is the mean of a stochastic process; if the variance is not small compared to the mean, one needs to review the entire philosophy. On the other hand, we expect the notion of particles to be reasonably well defined if the quantum state is `quasi-classical'. This brings us to the question of defining the notion of classicality of the state in some precise sense and relating it to particle production. The standard approach to understanding the evolution of the $C-q$ system involves starting with the path integral for the full system and integrating over the $q$ degree of freedom to obtain an {\it effective Lagrangian} $L_{eff}=L_0(C)+L_{corr}(C)$ in terms of $C$, that determines the transition amplitude between the asymptotic in and out vacuum states of the quantum subsystem \textit{when they are definable}. This effective Lagrangian, in general, can have an imaginary part, which can be related to the mean number of $q$-particles produced out of the vacuum by the $C$-field over the entire evolutionary history of the system. (Roughly speaking, it specifies the out-particle content of the in-vacuum state.) In order to compute the {\it backreaction} of the quantum subsystem on the classical degree of freedom, one needs to vary $L_{eff}(C)$. However, since $L_{eff}$ can be complex, one would end up with complex equations (and solutions), which are difficult to interpret (see, e.g.,~\cite{brown}). What is usually done is to consider only the real part of the effective action in the calculation of the backreaction and work with $L_{0}(C)+\textrm{Re}L_{corr}(C)$. 
This term $\textrm{Re}L_{corr}(C)$ is normally associated with vacuum polarization, and gives a contribution to the backreaction even in the absence of particle creation. On the other hand, the vacuum-persistence probability is directly related to the total number of particles produced and is determined by $\textrm{Im}L_{corr}(C)$. In such an approach it is not clear whether the backreaction due to the production of particles is incorporated in $\textrm{Re}L_{corr}(C)$. The dropping of the imaginary part, which {\it also} carries information about the quantum state of $q$, needs to be justified, and one would like to have a formalism which provides a unified picture of the evolution of the quantum system in the background of $C$. We will now briefly connect the above comments, presented abstractly in terms of a $C$ and $q$, to two specific contexts widely studied in the literature. The first is pair production in a constant electric field, and the second is quantum field theory in an inflationary universe. In the case of the so-called {\it Schwinger effect} in QED~\cite{schwinger}, we study the interaction of a quantized, charged scalar field $\phi$ with a constant background classical electric field {\bf E} (see refs.~\cite{itz,efield} for a small sample of recent work and reviews). Usually one computes the in-out transition amplitude by integrating over the $\phi$ variable in the path integral [say, using Schwinger's proper time formalism] to obtain the one-loop effective action (known as the Euler-Heisenberg effective action) for {\bf E}: \begin{equation} \langle 0_{+} | 0_{-} \rangle_{\bf E} = \int {\cal D} \phi~ e^{ i \int {\cal L}(\phi, {\bf A}) d^{4} x } ~\equiv~ e^{ i W_{eff} } ~\equiv~ e^{ i \int {\cal L}_{eff} d^{4} x } . 
\end{equation} The effective Lagrangian ${\cal L}_{eff}$ turns out to be complex, and its (renormalized) real and imaginary parts are given by \begin{equation} \textrm{Re} {\cal L}_{eff} = -\frac{1}{(4 \pi)^{2}}\int_{0}^{\infty} \frac{ds}{s^3} \cos m^{2} s \left[ \frac{qEs}{\sinh qEs} -1 + \frac{1}{6}q^{2}E^{2}s^{2} \r] \end{equation} and \begin{equation} \textrm{Im} {\cal L}_{eff} = \frac{(qE)^{2}}{16 \pi^{3}} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^{2}} e^{- n \frac{\pi m^{2}}{qE}}. \end{equation} The real part possesses a Taylor expansion in $q^2$ and the lowest order correction to the classical Maxwell Lagrangian can be easily computed. The standard approach to computing the corrections to Maxwell's equations proceeds using {\it only} this real part; since it is analytic in $q^2$ one can obtain a modified action incorporating quantum corrections order-by-order. On the other hand, the density of pairs produced by the electric field can be obtained directly from the imaginary part of the effective Lagrangian but is {\it non-analytic} in the coupling constant $q$. One would have expected the production of these particle pairs to drain the energy of the electric field and lead to a backreaction. It is not clear how an approach based on $\textrm{Re}{\cal L}_{eff}$ encodes this effect contained in $\textrm{Im}{\cal L}_{eff}$, and this needs to be clarified. As a second example, consider standard inflationary cosmology. The quantity usually computed in this context is not the particle number density but the power spectrum~\cite{ps}. Since one uses it as a source in the perturbed Einstein equations, we are clearly moving into the domain of backreaction. 
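As a quick numerical aside (our own sketch, with illustrative parameter values in units $\hbar=c=1$), the alternating series for $\textrm{Im}{\cal L}_{eff}$ converges rapidly and makes the non-analyticity explicit: every term carries the factor $e^{-\pi m^{2}/qE}$, so the result vanishes faster than any power of the coupling as $qE/m^{2}\to 0$.

```python
import math

def schwinger_im_L(qE, m, nmax=100):
    """Sum the alternating series for Im L_eff in scalar QED
    (illustrative evaluation; units with hbar = c = 1)."""
    prefactor = qE**2 / (16 * math.pi**3)
    series = sum((-1)**(n + 1) / n**2 * math.exp(-n * math.pi * m**2 / qE)
                 for n in range(1, nmax + 1))
    return prefactor * series

# The pair-creation probability is non-analytic in the coupling:
# it is exponentially suppressed for weak fields, qE << m^2.
strong = schwinger_im_L(qE=1.0, m=1.0)
weak = schwinger_im_L(qE=0.2, m=1.0)
```

The $n=1$ term dominates, so halving $qE$ suppresses the result by roughly an extra factor of $e^{-\pi m^{2}/qE}$, rather than the power-law behaviour one would get from any term in a Taylor expansion in $q^2$.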
One normally computes the two-point correlation function of a quantized scalar field evolving in the inflationary background, and the power spectrum [the Fourier transform in $k$-space] of the two-point function of the {\it field} is then used in calculating the fluctuations in density~\cite{infl1,infl2,cosmo,paddy03}. This is different from computing an effective Lagrangian or from computing the number density of particles produced by the exponentially expanding universe (which is almost never attempted in the study of inflationary cosmology). The two approaches are thus technically different, and one needs to ascertain how the two-point function for the field is related to the particle content, and the extent to which the vacuum polarization part contributes to it. An added complication to this issue is that, although the early time evolution of the fluctuations [when the wavelength of a given perturbation is much smaller than the Hubble radius] is adiabatic and a choice of well-motivated initial conditions based on a natural definition of the vacuum state is possible, the late time phase [following Hubble exit, when the modes turn super-Hubble] is highly non-adiabatic and one cannot identify a unique asymptotic out vacuum state. The notion of particles is ill-defined in such a dynamic background, and one needs to come up with appropriate variables that can help quantify the content of the quantum state of the perturbations in this background. Once again we need a generalization of our concepts from an out vacuum state to a time dependent situation. Another closely related issue stems from the fact that although the perturbations are assumed to be generated as vacuum fluctuations of a quantized field, their late time evolution at super-Hubble scales is treated using classical notions, with the only reference to their quantum origin being in the choice of initial conditions. 
Clearly, one needs to understand the validity of this assumption, and figure out how a quantum state can turn effectively classical. This has generally been regarded as an interpretational issue, and the usually adopted viewpoint in the literature hinges on the concept of {\it decoherence}~\cite{decoh}, which basically provides a mechanism for the suppression of quantum interference terms between various states of the system (a generic quantum property) by taking into account its unavoidable interaction with a suitably defined environment. Mathematically, one proceeds by splitting the system under consideration into a part that is built out of the set of physical variables one is interested in, and an `environment' that consists of the collection of those degrees of freedom that are in some sense inaccessible; the properties of the subsystem of interest can then be adequately described by a reduced density matrix that is obtained by starting with the full density matrix and tracing over the inaccessible degrees of freedom corresponding to the environment. Under suitable conditions, the reduced density matrix can be shown to reduce to a diagonal form (in a suitable basis) with the off-diagonal elements, representing quantum interference effects, getting suppressed. Although the formalism of decoherence can be invoked under fairly general conditions, as real quantum systems seldom exist in isolation, ways have been suggested to do away with the quantum environment, and interpret classicality in terms of alternative notions. In the context of inflation, phase space correlations of the quantum variables have been analyzed using the Wigner distribution~\cite{infl2,wig,wig2}, and classicality interpreted in terms of peaking on the corresponding classical trajectory. The emergence of classicality has also been tied to the phenomenon of squeezing of the quantum state~\cite{infl2,class}. 
Because of the potential implications of such approaches, which rely only on the intrinsic properties of the system for an understanding of quantum-to-classical transitions, there is a need to make them unambiguous. Keeping these questions in mind, we will confine our attention to addressing two specific issues: (1) quantifying the physical content of a quantum state evolving in a time dependent background; in particular, identifying suitable variables that encode various aspects of the information contained in the quantum state, including the notion of particles, and (2) making the idea of interpreting classicality of the quantum state, as well as its relation with the concept of [appropriately defined] particle production, more precise. This will be done in a sequence of two papers. In the present paper, we will concern ourselves with studying the quantum mechanics of a single oscillator. This, of course, has general applicability. If one is studying a quantum field in an external time dependent background, like an FRW cosmological model~\cite{parker} characterized by a scale factor $a(t)$ or an electric field expressed in a time dependent gauge with vector potential ${\bf A}(t)$, the field can be decomposed by taking a spatial Fourier transform into a set of uncoupled oscillators with time dependent parameters, each corresponding to a particular Fourier mode {\bf k}. This allows for a mode-by-mode analysis of the field evolution. Furthermore if, say, the oscillator labeled by the wave vector {\bf k} is in the $n$th excited state, this is interpreted, in the field picture, as the presence of $n$ particles with momenta {\bf k} each. Presuming such a close correspondence with the field picture, we will freely make use of field theory language in our dealings with the quantized oscillator, with the understanding that the various definitions are to be looked at in the broader context of fields. 
Having said that, it may also be mentioned that when one considers quantum fields, some additional ideas can emerge. Firstly, the presence of an infinite number of degrees of freedom brings in issues related to regularization, which need to be dealt with in a suitable manner. Secondly, one has the liberty of shifting from the oscillator picture by taking an inverse Fourier transform back to coordinate space, and this opens up an alternative way of trying to understand the physical content of the system. The case of quantum fields will be taken up in Paper II~\cite{gm}. This paper is organized as follows. In section~\ref{sec:formalism}, we outline a well-motivated and straightforward formalism, in the Schrodinger picture, to analyze the dynamics of a general quantized time-dependent oscillator, which proves to be a convenient tool for understanding the content of the evolving quantum state, and the relationships between the various physically sensible quantities built out of it. Then, in section~\ref{sec:analysis}, we first carry out a detailed analysis of a simple toy model, with particular emphasis on the question of relating particle content to classicality, and demonstrate that peaking of the Wigner function on the classical trajectory \emph{alone} is \emph{not} tantamount to having classical behavior; one needs to {\it also} look at some alternative measure of correlations to get rid of such ambiguities. We define such a reasonable alternative (calling it the `classicality parameter') and, based on an approximate asymptotic analysis of a class of models with general time dependent frequency $\omega(t)$, show that {\it whenever} there is [appropriately defined] particle creation at late times, it is {\it always} accompanied by growth of the classicality parameter, which can be quite unambiguously interpreted as an emergence of classicality. 
This is followed up with an analysis of a few additional toy examples, which serve to validate our analytic approximations and further illustrate the general features associated with the time evolution. Finally, we conclude in section~\ref{sec:diss} with a discussion. In what follows, we shall set $\hbar=c=1$. \section{Evolution of the quantum state:~General formalism} \label{sec:formalism} The starting point of our analysis is a general one-dimensional harmonic oscillator with the following action, where we treat the mass and the frequency of the oscillator to be time dependent: \begin{equation} {\cal A}[q] = \frac{1}{2} \int{m(t)\left[\dot q^2 - \omega ^2 (t)\, q^2\r]} dt \label{action} \end{equation} with the conditions that $m(t)>0,\omega^2(t)>0$. (If $m(t)$ is also monotonic, then using a time coordinate $\tau$ with $d\tau=dt/m(t)$ will get rid of $m(t)$ and change $\omega^2$ to $\omega^2 m^2$. But we will see that it is convenient not to do this and hence we will retain this form.) The classical trajectory of the oscillator can be obtained by solving the following equation of motion: \begin{equation} \frac{d}{dt} \left( m(t) \frac{dq}{dt} \right)+m(t) \omega^{2} (t) q = 0. \label{cleq} \end{equation} As mentioned earlier, the action in \eq{action} can describe a particular fourier mode of a quantum field in the presence of a time-dependent classical background. In the standard procedure of canonical quantization, the variable $q$ describing the coordinate of the oscillator is replaced by the corresponding hermitian operator $\hat{q}$ satisfying the commutation relation $[\hat{q},\hat{p}]=i$. 
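Since everything that follows is built on solutions of \eq{cleq}, it is useful to have a numerical integrator for it at hand. Below is a minimal sketch of ours (the function names are illustrative, not from the text), written in first-order form with $p \equiv m(t)\dot q$:

```python
import math

def evolve_classical(q0, p0, m, omega, t0, t1, dt=1e-3):
    """RK4 integration of d/dt(m qdot) + m omega^2 q = 0,
    written in first-order form with p = m(t) * qdot."""
    def rhs(t, q, p):
        return p / m(t), -m(t) * omega(t)**2 * q

    t, q, p = t0, q0, p0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1q, k1p = rhs(t, q, p)
        k2q, k2p = rhs(t + h/2, q + h/2*k1q, p + h/2*k1p)
        k3q, k3p = rhs(t + h/2, q + h/2*k2q, p + h/2*k2p)
        k4q, k4p = rhs(t + h, q + h*k3q, p + h*k3p)
        q += h/6 * (k1q + 2*k2q + 2*k3q + k4q)
        p += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        t += h
    return q, p

# Constant m = omega = 1: one full period returns (q, p) to (1, 0).
q, p = evolve_classical(1.0, 0.0, lambda t: 1.0, lambda t: 1.0, 0.0, 2*math.pi)
```

For constant $m=\omega=1$ the trajectory returns to its initial data after one period $2\pi$, which provides a simple check of the stepper before any time-dependent profile is used.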
In the Schrodinger picture, which we shall work in, the evolution of the quantized oscillator can be described by a wave function $\psi (q,t)$ satisfying the time-dependent Schrodinger equation: \begin{equation} i\, \frac{{\partial} \psi(q,t)}{{\partial} t} =-\frac{1}{2\, m(t)}\, \frac{{\partial}^2\psi(q,t)}{{\partial} q^2} +\frac{1}{2}\, m(t) \omega^{2} (t) q^{2}\, \psi(q,t).\label{seq} \end{equation} We are interested in the solution to this equation which could have been interpreted as the ground state of the oscillator at some time $t=t_0$. This suggests looking for a particular class of solutions of \eq{seq} that has been studied in various contexts~\cite{gauss,pad,wig,paddy03,tpgswfn,ss}, the {\it form invariant} Gaussian state, which is an exponential of a quadratic function of $q$. If one restricts oneself to states which have a vanishing mean (which happen to be the relevant ones in the examples we will consider), then the wave function assumes the form \begin{equation} \psi\left(q,t\r) = N(t)\, \exp\left[- R(t) q^2 \r] \label{gswfn} \end{equation} where $N(t)$ and $R(t)$ are complex quantities. Substituting the expression in \eq{gswfn} into the Schrodinger equation \eq{seq}, the following equations for $R$ and $N$ are obtained: \begin{equation} i\frac{\dot N}{N} = \frac{R}{m} \quad;\quad i \dot{R} = \frac{2 R^2}{m} -\frac{m \omega^{2}}{2}. \label{nr} \end{equation} From the first equation and its complex conjugate, it is easy to show that \begin{equation} \vert N\vert^2=\left(\frac{R+R^\ast}{\pi}\r)^{1/2} \label{eq:NkRk} \end{equation} which can also be obtained from the normalization condition on the wave function. Therefore, other than the overall phase of $N$, the only non-trivial aspect of the quantum state is encoded in the time dependence of the function $R(t)$. Before we present the exact solution to this system, it is worthwhile to obtain the adiabatic limit of the system. 
If $\omega,m$ are slowly varying functions of time, then \eq{nr} can be solved to give \begin{equation} R(t)\approx m(t)\omega(t)/2;\qquad N(t)=N_0\exp \left(-\frac{i}{2}\int_{t_0}^t\omega(t')dt'\r). \label{adiaN} \end{equation} This determines the evolution of our state in the adiabatic approximation. Since $R(t)\approx m(t)\omega(t)/2$ in the adiabatic limit, it is convenient to introduce a new complex function $z(t)$ in place of $R(t)$ by the relation \begin{equation} R(t)\equiv\frac{m \omega}{2} \left(\frac{1-z}{1+z} \r). \end{equation} Clearly, $z$ measures the deviation of $R$ from the adiabatic value and we will call it the \emph{excitation parameter}. The wave function is completely determined by $z(t)$ as \begin{equation} \psi (q,t) = N \exp[- R q^{2}] = N \exp \left[ - \frac{m \omega}{2} \left(\frac{1-z}{1+z} \r) q^{2} \r] = N \exp \left[ - \frac{m \omega}{2} \left(\frac{1-|z|^{2} - 2 i\, \textrm{Im}\, z}{1+|z|^{2} + 2\, \textrm{Re}\, z} \r) q^{2} \r] \label{gswfn_z} \end{equation} where \begin{equation} |N|^{2} = \sqrt{\frac{m \omega}{\pi} \frac{\left(1-|z|^{2}\r)}{|1+z|^{2}}}. \end{equation} From the equation satisfied by $R$ in \eq{nr} one can obtain the equation satisfied by $z$; one finds that $z$ satisfies a rather simple first order differential equation: \begin{equation} \dot z + 2 i \omega z + \frac{1}{2} \left(\frac{\dot \omega}{\omega} + \frac{\dot m}{m} \r) (z^{2} - 1) = 0. \label{eq:z_gen} \end{equation} Once this equation is solved for $z$ with some appropriate initial condition ($z=0$ set at some instant $t_{0}$), we have completely solved the problem. Our analysis will be based on the time evolution of $z$. The evolution equation for $z$ can be written in a slightly different form by using $\omega$, instead of the time coordinate $t$, as the independent variable; given any monotonic range of $\omega(t)$, one can in principle invert this relation to obtain time as a function of the frequency, $t \equiv t(\omega)$. 
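Equation \eq{eq:z_gen} is also easy to integrate numerically. The following sketch (our own illustration; the profile $\omega(t)=1.5+0.5\tanh(t/T)$, constant $m$, and all parameter values are arbitrary choices, not taken from the text) verifies the two limits one expects: a slow transition leaves $z$ close to zero (adiabatic evolution), while a fast one drives $|z|$ towards the sudden-approximation value $(\omega_{2}-\omega_{1})/(\omega_{2}+\omega_{1})=1/3$, with $|z|<1$ throughout, as required for a normalizable state.

```python
import math

def evolve_z(T, tmax, dt):
    """RK4 integration of zdot = -2i w z - (1/2)(wdot/w)(z^2 - 1),
    i.e. eq. (eq:z_gen) with m = const and omega(t) = 1.5 + 0.5 tanh(t/T)."""
    def w(t):
        return 1.5 + 0.5 * math.tanh(t / T)

    def dlogw(t):                     # wdot / w for this profile
        return 0.5 / (T * math.cosh(t / T)**2) / w(t)

    def rhs(t, z):
        return -2j * w(t) * z - 0.5 * dlogw(t) * (z * z - 1)

    t, z = -tmax, 0j                  # start in the instantaneous ground state
    while t < tmax - 1e-12:
        h = min(dt, tmax - t)
        k1 = rhs(t, z)
        k2 = rhs(t + h/2, z + h/2*k1)
        k3 = rhs(t + h/2, z + h/2*k2)
        k4 = rhs(t + h, z + h*k3)
        z += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return z

z_adiabatic = evolve_z(T=20.0, tmax=80.0, dt=5e-3)   # slow transition: stays ~0
z_sudden = evolve_z(T=0.1, tmax=5.0, dt=1e-3)        # fast transition: |z| near 1/3
```

The sudden-limit estimate follows from freezing $R$ at $m\omega_1/2$ while $\omega$ jumps to $\omega_2$ in the defining relation for $z$.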
Using $\omega$ as the independent variable, \eq{eq:z_gen} can be recast in the following form: \begin{equation} \omega \frac{d z(\omega)}{d \omega} + \frac{2 i}{\epsilon(\omega)}z(\omega) + \frac{1}{2}\left( z^2(\omega) - 1 \r) = 0 \label{eq:z_omega} \end{equation} with $\epsilon \equiv ((\dot{\omega}/\omega) + (\dot{m}/m))/\omega$, which will henceforth be called the \emph{adiabaticity parameter} (and which too is expressible as a function of $\omega$). This equation determines the function $z(\omega)$, and this conversion will be of particular relevance in our discussion of the effective Lagrangian in section~\ref{sec:L_eff}. It may be mentioned in passing that the adiabaticity parameter as defined above has a simple physical interpretation in the context of quantum fields in a cosmological background characterized by the scale factor $a(t)$. For a massless minimally coupled scalar field, any given Fourier mode (describable by an oscillator) has a time dependent mass $a^3(t)$ and frequency $|{\bf k}|/a(t)$, so the adiabaticity parameter is $\epsilon_{\bf k}(t)= 2 \dot{a}(t)/ |{\bf k}|$, which (apart from a numerical factor) is just the ratio of the physical wavelength of the mode, $\lambda_p = 2 \pi a|{\bf k}|^{-1}$, to the Hubble radius $R_H=(\dot{a}/a)^{-1}$. The equations for $R,z$ are first order but nonlinear. Since they are of the generalized Riccati type, they can be transformed into second order linear equations. For example, if we set $R=-\left(i\, m/2\r)(\dot \mu/ \mu)$ where $\mu(t)$ is a new function, then \eq{nr} implies that $\mu$ satisfies the following differential equation: \begin{equation} \ddot \mu+ \frac{\dot m}{m}\, \dot \mu +\omega ^2 \mu=0 \label{mueq} \end{equation} which is the same as the classical equation of motion, \eq{cleq}, satisfied by the oscillator variable $q$. The solutions of \eq{mueq} obey the Wronskian condition \begin{equation} \mu \dot \mu^{*} - \mu^{*} \dot \mu = -i\frac{W}{m(t)} \end{equation} where $W$ is independent of time. 
The variable $z$ can now be expressed as \begin{equation} z(t) = \left( \frac{\omega + \frac{i}{\mu}\frac{d \mu }{dt}}{\omega -\frac{i}{\mu}\frac{d\mu}{dt}}\r). \label{z} \end{equation} As mentioned in the introduction, we have now completely solved the problem. One can, in principle, solve \eq{mueq} to obtain the two linearly independent solutions for $\mu$, say $s$ and $s^{*}$. The general solution is a linear superposition of the form $\mu =\left[{\cal A}\, s + {\cal B}\, s^*\r]$, with ${\cal A}$ and ${\cal B}$ determined by one's choice of initial conditions. Once $\mu$ has been found, the function $R$ can then be computed, which completely fixes the quantum state of the system. The wave function $\psi(q,t)$ depends only on the ratio ${\cal R} = {\cal B} / {\cal A}$, since it is independent of the overall scaling of $\mu$. The real difficulty is not in doing this but in understanding what the physical content of the quantum state is. We shall now address this question. \subsection{Particle content of the quantum state} \label{sec:pc} The quantum state given by \eq{gswfn} is more generally known in the literature as a {\it squeezed} quantum state and has been quite extensively studied~\cite{squeeze}, especially in the context of quantum optics. In the squeezed state formalism in the Heisenberg picture, one usually introduces the ``squeeze'' operator, defined as $\hat{S}(\xi) = \exp \left[(1/2)(\xi^{*} a^2 - \xi a^{\dagger 2})\r]$ with $\xi = r e^{i\theta}$, and where $r$ is known as the squeeze parameter ($0\leq r \leq \infty$). The function $z(t)$ that we defined earlier happens to be indirectly related to the squeeze parameters, as $z = - e^{i \theta - 2 i \rho} \tanh r$ (where $\dot{\rho}=\omega$). Unfortunately, the squeezed state formalism does not help much with the tasks at hand; so we shall follow a different approach here. 
A physically motivated and reasonable way of quantifying the time-dependent content of the state would be to compare it with the \textit{instantaneous} energy eigenstates at any given moment. Since the oscillator parameters are time-dependent, one cannot define stationary states in the usual manner; the alternative would be to define a set of instantaneous eigenstates at {\it every} instant. A wave function that starts off in the instantaneous ground state might, at a later time, be in a superposition of instantaneous eigenstates defined at {\it that} moment; this can be thought of as excitation of quanta. We will define these instantaneous eigenstates, at a given moment $t$, as a set of states that have been obtained by {\it adiabatically} evolving the eigenstates at some initial instant $t_0$. This amounts to arranging matters in such a way that \textit{if} the oscillator evolves adiabatically, a wave function that has been set in the instantaneous ground state at $t=t_0$ will evolve to coincide, at every subsequent moment $t$, with the instantaneous ground state defined at $t$. This set of instantaneous eigenstates at time $t$ has the form \begin{equation} \phi_{n}(q,t) = \left(\frac{m \omega}{\pi}\r)^{1/4}\frac{1}{\sqrt{2^{n} n!}} H_{n}(\sqrt{m \omega}q) \exp\left[-\frac{m \omega}{2} q^{2} - i\int_{t_0}^{t}\left( n + \frac{1}{2} \r) \omega(t) dt\right] \label{inststate} \end{equation} where $H_n$ are the Hermite polynomials~\cite{grad} and $n=0, 1, 2,...$. Except for the phase factor, these are essentially the harmonic oscillator wave functions, constructed using the instantaneous values of $\omega(t),m(t)$ at time $t$. They form a complete set of orthonormal functions at time $t$ and any other wave function can be expanded in terms of them. The phase in \eq{inststate} is so chosen that it coincides with the phase of the wave function in the adiabatic limit obtained in \eq{adiaN}. 
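As a sanity check of \eq{inststate}, the following sketch (ours; the overall time-dependent phase is dropped, since it cancels in the inner products) verifies numerically that the $\phi_n$ at fixed $m\omega$ are orthonormal:

```python
import math

def phi(n, q, momega=1.0):
    """Instantaneous eigenfunction at fixed m*omega (phase factor omitted)."""
    x = math.sqrt(momega) * q
    # Hermite polynomials via the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}
    hk, hk1 = 1.0, 2.0 * x            # H_0, H_1
    if n == 0:
        H = hk
    elif n == 1:
        H = hk1
    else:
        for k in range(1, n):
            hk, hk1 = hk1, 2 * x * hk1 - 2 * k * hk
        H = hk1
    norm = (momega / math.pi)**0.25 / math.sqrt(2**n * math.factorial(n))
    return norm * H * math.exp(-momega * q * q / 2)

def overlap(n1, n2, qmax=10.0, dq=1e-3):
    """Midpoint-rule approximation to the inner product <phi_n1|phi_n2>."""
    steps = int(2 * qmax / dq)
    return sum(phi(n1, -qmax + (i + 0.5) * dq) * phi(n2, -qmax + (i + 0.5) * dq)
               for i in range(steps)) * dq
```

One finds `overlap(0, 0)` and `overlap(2, 2)` equal to unity and `overlap(0, 2)` vanishing to within the quadrature error, as expected for orthonormal harmonic oscillator wave functions.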
We can now expand our exact solution $\psi$ in terms of the instantaneous eigenfunctions. Since $\psi$ is an even function, the amplitude for the oscillator to be in the $n$th instantaneous eigenstate $\phi_{n}(t)$ at time $t$ is non-zero only for even $n$, and is given by \begin{equation} C_{n}(t) = \int_{-\infty}^{\infty} {dq} \phi_{n}^{*}(q,t) \psi (q,t) = N \left(\frac{m \omega}{\pi}\r)^{1/4}\frac{1}{\sqrt{2^{n} n!}}\int_{-\infty}^{\infty} {dq} H_{n}(\sqrt{m \omega}q) e^{-\left(R+\frac{m \omega}{2}\r)q^{2} + i\int_{t_0}^{t}\left(n + \frac{1}{2} \r)\omega(t) dt}. \label{amplitude} \end{equation} The above integral can be evaluated by standard methods (see Appendix A for details). We can then compute the probability associated with the amplitude $C_{n}(t)$ to obtain \begin{equation} P_{2n}(t) = |C_{2n}|^{2} = P \frac{(2n)!}{(n!)^2} \frac{|z|^{2n}}{2^{2n}}, \label{prob0} \end{equation} with the \emph{time-dependent} vacuum-persistence probability $P_{0}(t)$ given by: \begin{equation} P_{0}(t)=P = \sqrt{1-|z|^2}. \label{defp0} \end{equation} Clearly, the probability for occupying the excited states is controlled by $z$, justifying the name \emph{excitation parameter}. The generating function for the probability distribution in \eq{prob0}, defined as $G(x) = \sum_{n=0}^{\infty} P_{2n} x^{n}$, is given by \begin{equation} G(x) = \frac{P}{\sqrt{1-x|z|^2}} = \sqrt{\frac{1-|z|^2}{1-x|z|^2}}. \label{genfn1} \end{equation} Once the probability distribution is known, it is trivial to compute the mean number of quanta in the state at any time $t$; this is given by \begin{equation} \langle n \rangle = \sum_{n=0}^{\infty} 2n P_{2n} = 2\,G'(1) = \frac{P|z|^{2}}{(1-|z|^2)^{3/2}} = \frac{|z|^2}{1-|z|^2}. \label{n} \end{equation} We shall take this quantity, $\langle n \rangle$, as describing the time dependent `particle' content of the quantum state. We do not claim that our definition is \emph{unique} in any sense; only that it will be \emph{useful} and physically reasonable. 
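The distribution \eq{prob0}, the generating function \eq{genfn1}, and the mean \eq{n} are simple to check numerically; the sketch below (ours, with the arbitrary value $|z|=0.5$) confirms that the $P_{2n}$ sum to unity, that $\langle n \rangle = |z|^{2}/(1-|z|^{2})$, and that $P_{0}=(1+\langle n \rangle)^{-1/2}$.

```python
import math

def excitation_probs(zabs, nmax=200):
    """P_{2n} = P0 * (2n)!/(n!)^2 * |z|^{2n}/2^{2n}, with P0 = sqrt(1-|z|^2)."""
    P0 = math.sqrt(1 - zabs**2)
    return [P0 * math.comb(2 * n, n) * (zabs**2 / 4)**n for n in range(nmax + 1)]

zabs = 0.5
probs = excitation_probs(zabs)
total = sum(probs)                                   # should be 1
mean_n = sum(2 * n * P for n, P in enumerate(probs)) # should be |z|^2/(1-|z|^2)
```

The terms decay roughly as $|z|^{2n}/\sqrt{\pi n}$, so truncating at a couple of hundred terms is far more than enough for $|z|$ well below unity.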
This interpretation will be borne out by different considerations in what follows, and the first of these is the following. The system evolves under the action of a time-dependent Hamiltonian $H$ and one can compute the mean value of the energy at any time $t$ by the expectation value of the Hamiltonian, $E(t)=\langle \psi|H|\psi \rangle$. Direct computation shows that \begin{equation} E(t) =\left(\frac{m}{2\, W}\r)\left(\vert \dot \mu \vert^2 +\omega ^2\, \vert \mu\vert^2\r) =\left(\langle n \rangle +\frac{1}{2}\r)\, \omega (t). \label{E-n} \end{equation} This clearly strengthens the motivation to think of $\langle n \rangle$ as defined above as the number of quanta present at time $t$. Let us rewrite the expression for the vacuum-persistence probability, \eq{defp0}, in terms of the mean particle number: \begin{equation} P_{0}(t) = \sqrt{1-|z|^{2}} = \left( 1 + \langle n \rangle \r)^{-1/2} = \exp \left[ -\frac{1}{2}\ln \left( 1 + \langle n \rangle \r) \r]. \label{p0_n} \end{equation} It is obvious from this expression (as well as, of course, from \eq{prob0}) that the excitation probability for different levels is \textit{not} Poissonian. When the excitation to level $2m$ is interpreted as the creation of $m$ pairs of particles in quantum field theory, this implies that pair production is not --- in general --- a Poisson process and is non-trivially correlated. In the limit of $\langle n \rangle \ll 1$, however, we have $P_{0}\sim\exp(-\langle n \rangle/2)$, which allows one to identify $\langle n \rangle/2$ as the mean number of pairs produced from the vacuum, which is a standard result. It is clear from the expression for the generating function, \eq{genfn1}, that the mean particle number $\langle n \rangle$, as well as the higher order moments, are functions of only the magnitude of $z$. On the other hand, the wave function in (\ref{gswfn_z}) is built out of not only the magnitude, but also the phase $\theta$ of $z$. 
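The chain \eq{z} $\to$ \eq{n} $\to$ \eq{E-n} can be checked end-to-end in the simplest setting: for constant $\omega$ and $m=1$ the mode functions are $e^{\pm i\omega t}$, so $\mu={\cal A}e^{i\omega t}+{\cal B}e^{-i\omega t}$. The sketch below (ours, with arbitrary values ${\cal A}=1$, ${\cal B}=0.5$, $\omega=2$) confirms $E(t)=(\langle n \rangle + 1/2)\,\omega$ at an arbitrary time:

```python
import cmath

def check_energy(A, B, omega, t):
    """Mode function mu = A e^{i w t} + B e^{-i w t} for constant omega, m = 1."""
    e, einv = cmath.exp(1j * omega * t), cmath.exp(-1j * omega * t)
    mu = A * e + B * einv
    mudot = 1j * omega * (A * e - B * einv)

    # Excitation parameter, eq. (z), and mean particle number, eq. (n)
    z = (omega + 1j * mudot / mu) / (omega - 1j * mudot / mu)
    n_mean = abs(z)**2 / (1 - abs(z)**2)

    # Wronskian: mu mudot* - mu* mudot = -i W / m  (time independent)
    W = ((mu * mudot.conjugate() - mu.conjugate() * mudot) / (-1j)).real

    # Mean energy, eq. (E-n), with m = 1
    E = (abs(mudot)**2 + omega**2 * abs(mu)**2) / (2 * W)
    return E, (n_mean + 0.5) * omega

E, target = check_energy(A=1.0, B=0.5, omega=2.0, t=0.7)
```

For these values $|z|=|{\cal B}/{\cal A}|=0.5$, so $\langle n\rangle = 1/3$ and both sides of \eq{E-n} equal $5/3$, independently of $t$.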
In fact, writing $z=|z|e^{i\theta}$ and using \eq{n} to determine $|z|$, we find that the wave function can be expressed in the form: \begin{equation} \psi(t,q)= \frab{m\omega}{\pi}^{1/4}\left(1+2\langle n \rangle+2\sqrt{\langle n \rangle(\langle n \rangle+1)}\cos\theta\right)^{-1/4} \exp\left[-\frac{1}{2}m\omega q^2\left( \frac{1-2i\sqrt{\langle n \rangle(\langle n \rangle+1)}\sin\theta}{1+2\langle n \rangle+2\sqrt{\langle n \rangle(\langle n \rangle+1)}\cos\theta} \right)\right]. \end{equation} It is obvious that the particle number only contains partial information about the quantum state; given the particle number (and even all the higher moments) at a given moment of time, one can {\it not} have complete knowledge of the state of the oscillator at that moment. While on this issue, it is worth noting that $\langle n \rangle\to\infty$ when $|z|\to1$. In this limit, the width of the wave function scales as $\langle n \rangle$ and the gaussian spreads all over the coordinate space. In the same limit, the width of the wave function in momentum space goes to zero, ``squeezing'' the wave function to the $p=0$ axis. One can also show that the dynamical equation for $z$ is equivalent to the following pair of equations for $\langle n \rangle$ and $\theta$: \begin{equation} \dot{\langle n \rangle}=\left(\frac{\dot\omega}{\omega} + \frac{\dot m}{m} \r) \sqrt{\langle n \rangle(\langle n \rangle+1)}\cos\theta;\quad \dot\theta= -2\omega-\frac{1}{2} \left(\frac{\dot\omega}{\omega} + \frac{\dot m}{m} \r) \frac{2\langle n \rangle +1}{\sqrt{\langle n \rangle(\langle n \rangle+1)}} \sin\theta. \end{equation} It is obvious from these equations that even when $\dot\omega$ and $\dot m$ have a fixed sign, the sign of $\dot{\langle n \rangle}$ depends on the phase $\theta$, and hence $\langle n \rangle$ need not be monotonic in general.
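As a consistency check on these equations, one can integrate the evolution equation for $z$ (the constant-$m$ case, $\dot z + 2i\omega z + (\dot\omega/2\omega)(z^2-1)=0$) side by side with the $(\langle n \rangle, \theta)$ system above and confirm that the two descriptions stay in agreement. A sketch in plain Python with a fixed-step RK4 integrator (the frequency profile and the initial state are arbitrary illustrative choices):

```python
import cmath, math

omega  = lambda t: 1.0 + 0.1*math.sin(t)    # illustrative smooth frequency profile (m held constant)
domega = lambda t: 0.1*math.cos(t)

def rk4(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6*(a + 2*b + 2*c + d) for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# dz/dt = -2 i w z - (w'/2w)(z^2 - 1), with m = const
fz = lambda t, y: [-2j*omega(t)*y[0] - domega(t)/(2*omega(t))*(y[0]**2 - 1)]

# the (<n>, theta) system quoted in the text, again with m = const
def fn(t, y):
    n, th = y
    s = math.sqrt(n*(n + 1))
    r = domega(t)/omega(t)
    return [r*s*math.cos(th), -2*omega(t) - 0.5*r*(2*n + 1)/s*math.sin(th)]

z = [0.6*cmath.exp(0.5j)]          # a non-vacuum initial state (n = 0 is a singular point of fn)
y = [0.36/0.64, 0.5]               # the same state expressed as (<n>, theta)

t, dt = 0.0, 1e-3
for _ in range(5000):              # evolve to t = 5
    z = rk4(fz, z, t, dt)
    y = rk4(fn, y, t, dt)
    t += dt

n_from_z = abs(z[0])**2/(1 - abs(z[0])**2)
phase_mismatch = abs(z[0]/abs(z[0]) - cmath.exp(1j*y[1]))
print(n_from_z, y[0])              # the two particle numbers agree
print(phase_mismatch)              # ~ 0: the phases agree modulo 2 pi
```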
Moreover, it may be noted that if $\langle n \rangle$ is specified as a function of time, one can, using the above equations, determine the form of $\theta(t)$, and this allows one to reconstruct the complete time evolution of the wave function. In this sense, it is possible to fix the state of the system completely given the time-variation of the mean particle number alone. Further, if $\langle n \rangle$ is a monotonic function of time, then one can trade off the $t$ dependence for dependence on $\langle n \rangle$, and thus express $z$ (as well as the wave function) explicitly in terms of just the mean number of particles. Such a transformation will be well-defined only when there is a one-to-one relation between $t$ and $\langle n \rangle$. (If the mean particle number is oscillatory, then there would in general be several values of time for the same $\langle n \rangle$, and consequently the expression for the wave function in terms of $\langle n \rangle$ will be multiple-valued as well, and not uniquely specifiable.) Getting back to our quantum state, it is clear that an additional physical variable needs to be specified that can provide information about the phase $\theta$. A suitable choice for this is the spread in the wave function, given by the expectation value of $q^{2}$. It has the form \begin{equation} \langle q^{2} \rangle = \int_{-\infty}^{\infty} q^{2} | \psi(q,t) |^{2} d q = \frac{1}{2 ( R + R^{*} )} = \frac{|\mu|^{2}}{W}. \end{equation} In the context of quantum fields in a cosmological setting, this dispersion can be directly related to the logarithmic power spectrum~\cite{infl1,infl2,pad,wig,cosmo}, which is the Fourier transform of the two-point correlation function for the field evaluated in a particular state: \begin{equation} k^{3} P_{\phi} (k) = \frac{k^{3}}{2 \pi ^{2}} \langle q_{k}^{2} \rangle.
\label{pow} \end{equation} The quantity $\langle q^{2} \rangle$ can also be re-expressed in terms of $z$ as follows: \begin{equation} \langle q^{2} \rangle = \frac{\langle n \rangle}{2 m \omega} \left| 1+\frac{1}{z} \r| ^{2} = \frac{\left ( 2 \langle n \rangle + 1 \r)}{2 m \omega} + \frac{(\langle n \rangle +1)}{ m \omega} \textrm{Re}(z). \label{q2_n} \end{equation} This expression, again, shows that the power in the $q$-mode is {\it not} completely expressible in terms of the instantaneous mean particle number [which involves just the magnitude of $z$], because it carries additional information encoded in the phase of $z$. It follows that, in general, the computation of the power spectrum in \eq{pow} (in the cosmological context, for example) is different from computing the mean number of particles produced. However, it can be seen from \eq{q2_n} that in the limit when the real part of $z$ becomes a constant, $\langle q^{2} \rangle$ {\it can} be written purely as a function of the mean particle number. Our treatment is completely equivalent to the standard analysis that is generally done in the Heisenberg picture, in which the time dependence of the system is encoded in the Bogolyubov coefficients $\alpha(t)$ and $\beta(t)$ satisfying the relation $|\alpha(t)|^{2} - |\beta(t)|^{2} = 1$. The function $z$ that we have defined is related in a simple manner to these Bogolyubov coefficients, as \begin{equation} z(t) = \frac{\beta^*(t)}{\alpha^*(t)}~e^{- 2 i \rho(t)} \label{H_pic} \end{equation} with $\dot{\rho}(t) \equiv \omega(t)$ (see Appendix C).
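The chain of equalities in \eq{q2_n} involves only the algebraic relations between $R$, $z$ and $\langle n \rangle$, and can be checked directly; recalling that $R=(m\omega/2)(1-z)/(1+z)$, a short numerical verification (the values of $m$, $\omega$ and $z$ are illustrative) reads:

```python
import cmath

m, w = 1.3, 0.8                  # illustrative mass and frequency
z = 0.4*cmath.exp(1.2j)          # arbitrary point inside the unit disc

R = (m*w/2)*(1 - z)/(1 + z)      # psi ~ exp(-R q^2)
n = abs(z)**2/(1 - abs(z)**2)    # mean particle number

q2_R = 1.0/(2*(R + R.conjugate()).real)             # <q^2> = 1/(2(R + R*))
q2_a = (n/(2*m*w))*abs(1 + 1/z)**2                  # first form in eq. (q2_n)
q2_b = (2*n + 1)/(2*m*w) + (n + 1)/(m*w)*z.real     # second form in eq. (q2_n)

print(q2_R, q2_a, q2_b)          # all three coincide
```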
In fact, it can be shown quite trivially that if one sets the oscillator in the vacuum state at time $t_0$, then the above quantity is directly proportional to the [normalized] amplitude for the vacuum state defined at time $t_0$ to be a 2-particle state with respect to the vacuum defined at time $t$: \begin{equation} \frac{\langle 2, t ~|~ 0, t_0 \rangle}{\langle 0, t | 0, t_0 \rangle} = \frac{1}{\sqrt 2}\frac{\langle 0, t ~|\hat{a}(t)\hat{a}(t)|~ 0, t_0 \rangle}{\langle 0, t | 0, t_0 \rangle} = \frac{1}{\sqrt 2} \frac{\beta^*(t)}{\alpha^*(t)} \equiv \frac{z(t)}{\sqrt 2} e^{2 i \rho(t)} \end{equation} where $\hat{a}(t)$ is the annihilation operator defined at time $t$, with $\hat{a}(t) |0,t \rangle = 0$. It follows from the relation in \eq{H_pic} that all the physical quantities are alternatively expressible in terms of the Bogolyubov coefficients. In particular, \eq{n} now reduces to $\langle n \rangle=|\beta|^2$, which is the standard result. We shall, however, not be using them in the analysis that follows, but will stick to understanding the evolution of $z(t)$. (In field theory, each Fourier mode labeled by a wave vector ${\bf k}$ will have a corresponding $z_{\bf k}$ and one can also obtain the spatial Fourier transform of this quantity; we will see in Paper II~\cite{gm} that it contains valuable information about the classicality of the state.) \subsection{Effective Lagrangian} \label{sec:L_eff} Another useful way to quantify the effects induced by the time-dependent background on the quantum system, especially in the semiclassical context, is to look at the {\it vacuum persistence amplitude}, which measures the amplitude for a state to be a vacuum at late times, if it started out as a vacuum state at early times. Normally, this quantity is defined using asymptotic in and out vacuum states~\cite{txts1,txts2,schwinger}.
It is directly related to the effective action, the imaginary part of which specifies the asymptotic particle content in the quantum state, while the real part is used in analyzing backreaction. This idea needs generalization when asymptotically adiabatic vacuum states cannot be defined. One would like to have some sort of a generalized time-dependent analogue of the effective action that is amenable to a suitable interpretation. Based on the formalism outlined in the previous section, one can define such a quantity in a fairly natural manner. Consider a situation in which the oscillator has been set in the instantaneous ground state at some instant, say, $t=t_{0}$. This fixes the form of $z(t)$ and hence of $R(t)$. One can then compute the amplitude for this oscillator to evolve and be in the \textit{instantaneous} ground state at some moment $t>t_0$ in the future: this is just the amplitude $C_{0}$ evaluated in \eq{amplitude}, and one can write it in terms of a time dependent `effective action' as follows: \begin{equation} C_{0}(t) = \frac{ N(t) \left( m \omega \pi \r)^{1/4} }{ \sqrt{\left( R(t) + \frac{m \omega}{2} \r)} } \exp\left[i\int_{t_0}^{t}\frac{\omega(t)}{2}dt\right] \equiv \exp\{i A_{eff} (t)\} \equiv \exp\left[i \int_{t_{0}}^{t} L_{eff}(t) dt\right]. \end{equation} Since the amplitude is known explicitly, the effective Lagrangian can be easily computed, and it turns out to be a rather simple expression: \begin{equation} L_{eff}(t) = \frac{i}{4} \left( \frac{\dot \omega}{\omega} + \frac{\dot m}{m} \r) z. \label{L_eff} \end{equation} To place this result in context, recall that the time dependence of $\omega$, for example, arises because the classical degree of freedom (background metric, electric field, etc.) is described by a time dependent solution. This makes quantities like $\dot\omega=(\partial\omega/\partial C)\dot C$ explicit functionals of $C$.
Further, the effective Lagrangian also depends on $C(t)$ implicitly through $z$ which is determined in terms of the background variables through the differential equation \eq{eq:z_omega}. While the effective Lagrangian looks rather simple when expressed in the above form, this simplicity is deceptive (and of little use for the purpose of calculating backreaction, etc.). The effective Lagrangian needs to be thought of as a \emph{functional} of background variables and this is often nontrivial. Nevertheless, we can make significant progress using the above expressions, keeping the relevant caveats in mind. The real and imaginary parts of the effective Lagrangian are directly related to the function $z$. The real part can be written as \begin{equation} \textrm{Re} L_{eff}(t) = - \frac{1}{4} \left( \frac{\dot \omega}{\omega} + \frac{\dot m}{m} \r) \textrm{Im} z = -\frac{1}{4}~\epsilon~\omega~\textrm{Im} z. \label{RL_eff} \end{equation} Note that we have defined the particle content and the vacuum state using the instantaneous eigenstates obtained by adiabatic evolution. Therefore, the usual adiabatic term (integral of $\omega$ over time) does not appear in this expression and $\textrm{Re}\, L_{eff}$ vanishes in the adiabatic limit ($\epsilon\to0$). The imaginary part, on the other hand, is given by \begin{equation} \textrm{Im} L_{eff}(t) = \frac{1}{4} \left( \frac{\dot \omega}{\omega} + \frac{\dot m}{m} \r) \textrm{Re} z. \label{IL_eff1} \end{equation} We would have expected it to be related to the particle content, and indeed it is.
Using the equation for $\dot{\langle n \rangle}$, one can show that this is equal to \begin{equation} \textrm{Im} L_{eff}(t) = \frac{1}{4} \frac{d}{dt} \ln ( 1 + \langle n \rangle ); \quad \textrm{Im} A_{eff}=\frac{1}{4}\ln ( 1 + \langle n \rangle ) \label{IL_eff} \end{equation} This is in accordance with the standard interpretation of $A_{eff}$ if the system has a late time adiabatic regime, where $\textrm{Im} A_{eff}(t)$ saturates to a constant value specifying the asymptotic particle number. If $\langle n \rangle\ll1$ then the imaginary part of the effective action becomes $\textrm{Im} A_{eff}(t)\approx (1/4)\langle n \rangle$ so that the vacuum persistence probability is \begin{equation} |\langle 0,t|0,t_0 \rangle |^2\approx \exp(-2\textrm{Im} A_{eff}(t))\approx 1-\frac{1}{2}\langle n \rangle \end{equation} which matches with the result in \eq{defp0} for $|z|^2\ll 1$. (The factor $(1/2)$ is due to the fact that $\langle n \rangle$ is the mean number of \textit{particles}, while in quantum field theory one usually quotes the result for the mean number of \textit{pairs}.) We will see that our definitions make sense in all other cases too, when we look at the behavior of $L_{eff}$ in different limits in the study of toy models in the next section. To recapitulate, we have defined the quantum state in terms of a parameter $z$ and find that the particle content of the state determines (and is determined by) $|z|^2$. To fix the state uniquely, we also need to know the phase of $z$, which can be obtained from some other suitably defined quantity like the dispersion $\langle q^2 \rangle$. Together they completely specify the state of the system at any moment. The next question we want to address is the `classicality' of this state, and its approach to classicality. This may be understood, in one possible way, by shifting attention to the system's phase space.
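Before moving on, we note that the identity \eq{IL_eff} can be confirmed numerically: integrating the evolution equation for $z$ (with $m=1$, $\dot z + 2i\omega z + (\dot\omega/2\omega)(z^2-1)=0$) from the vacuum and accumulating $\int \textrm{Im}\, L_{eff}\, dt$ reproduces $(1/4)\ln(1+\langle n \rangle)$ at every instant. A sketch with an arbitrary illustrative frequency profile:

```python
import math

omega  = lambda t: 1.0 + 0.2*math.sin(t)   # illustrative frequency profile, m = 1
domega = lambda t: 0.2*math.cos(t)

def f(t, z):                               # dz/dt for the excitation parameter
    return -2j*omega(t)*z - domega(t)/(2*omega(t))*(z*z - 1)

z, t, dt, ImA = 0j, 0.0, 1e-4, 0.0         # start in the vacuum, z = 0
for _ in range(40000):                     # RK4 evolution to t = 4
    k1 = f(t, z); k2 = f(t + dt/2, z + dt/2*k1)
    k3 = f(t + dt/2, z + dt/2*k2); k4 = f(t + dt, z + dt*k3)
    # accumulate Im A_eff = int (1/4)(w'/w) Re z dt, with a midpoint estimate of z
    zm = z + dt/2*k1
    ImA += dt*0.25*(domega(t + dt/2)/omega(t + dt/2))*zm.real
    z += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

n = abs(z)**2/(1 - abs(z)**2)
print(ImA, 0.25*math.log(1 + n))           # the two agree to integration accuracy
```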
\subsection{Wigner function} We will now attempt to quantify the level of classicality of the quantum state in terms of its phase space correlations. A suitable tool for this purpose is the Wigner distribution function \cite{wig,wig2,pad}, which is defined, for a wave function $\psi(q,t)$, as \begin{equation} {\cal W}\left(q, p,t\r) =\frac{1}{2\pi}\, \int\limits_{-\infty}^{\infty}du\; \psi^*\left(q+\frac{u}{2},t\r)\; \psi\left(q-\frac{u}{2},t\r)\; e^{ipu}. \end{equation} The Wigner function can be regarded as a quantum analogue of the classical distribution function, and satisfies the following identities: \begin{equation} \int_{-\infty}^{\infty} {\cal W}(q,p) dp = |\psi(q)|^{2}, \end{equation} \begin{equation} \int_{-\infty}^{\infty} {\cal W}(q,p) dq = |\varphi(p)|^{2} \end{equation} where $\varphi(p)$ is the wave function in Fourier space (i.e., the Fourier transform of $\psi$). These two relations suggest that $ {\cal W}(q,p)$ can be thought of as a probability distribution in phase space, provided it is positive. In general, the Wigner function satisfies the evolution equation: \begin{equation} \frac{{\partial} {\cal W}}{{\partial} t} + \frac{p}{m}\frac{{\partial} {\cal W}}{{\partial} q}-\frac{dV}{dq}\frac{{\partial} {\cal W}}{{\partial} p} = \frac{\hbar ^2}{24}\frac{d^{3}V}{dq^{3}}\frac{d^{3}{\cal W}}{dp^{3}}+ ... \label{wfee} \end{equation} where $V(q)$ denotes the potential, and dots indicate terms with higher powers of $\hbar$ and higher derivatives of $\cal W$ and $V$. For quadratic potentials, the right hand side of eq.(\ref{wfee}) vanishes and we recover the classical continuity equation. (This is true for any potential up to order $\hbar^{2}$.) Although the above relations suggest that the Wigner function might be interpreted as a `joint probability distribution', it can take on negative values in some cases~\cite{squeeze}. Gaussian states, however, turn out to have a positive-definite Wigner function.
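For a gaussian state the defining integral can be carried out numerically and the marginal property checked directly. A sketch (the values in $R=a+ib$ are illustrative; the $u$ and $p$ ranges are chosen wide enough that the gaussian tails are negligible, so a plain Riemann sum is effectively exact):

```python
import numpy as np

# illustrative gaussian state psi = N exp(-R q^2) with R = a + i b, a > 0
a, b = 0.7, 0.4
N = (2*a/np.pi)**0.25                       # normalizes |psi|^2 to unit integral
psi = lambda x: N*np.exp(-(a + 1j*b)*x**2)

q = np.linspace(-4, 4, 81)
p, dp = np.linspace(-10, 10, 401, retstep=True)
u, du = np.linspace(-12, 12, 1201, retstep=True)

# W(q,p) = (1/2pi) int du psi*(q + u/2) psi(q - u/2) e^{ipu}
phase = np.exp(1j*np.outer(p, u))
W = np.empty((q.size, p.size))
for i, qi in enumerate(q):
    g = np.conj(psi(qi + u/2))*psi(qi - u/2)
    W[i] = (phase @ g).real*du/(2*np.pi)

marginal = W.sum(axis=1)*dp                 # integrating out the momentum ...
err = np.max(np.abs(marginal - np.abs(psi(q))**2))
print(err)                                  # ... recovers |psi(q)|^2
print(W.min())                              # non-negative: a gaussian Wigner function
```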
The Wigner function is useful in studying quantum to classical transitions in a system in terms of correlations between the phase space variables $(q,p)$; a pure quantum system is represented by a completely uncorrelated Wigner function like ${\cal W}(q,p)=A(q)B(p)$ so that the probability for the system to have a momentum $p$ is independent of its position $q$. The ground state of the harmonic oscillator, for example, has such a Wigner function, showing that it is very non-classical. For a classical system, we will find ${\cal W}$ to be peaked in a limited region in phase space with the totally classical state being ${\cal W}(q,p) \propto \delta (p-f(q))$, where $p=f(q)$ represents the classical phase-space trajectory. Hence the classical system is expected to show a strong correlation between $q$ and $p$. A useful measure of the `classicality' of a state can thus be provided by the degree of correlation between $q$ and $p$. For this purpose, we consider the following quantity: \begin{equation} {\cal S} = \frac{\langle pq \rangle_{{\cal W}}}{\sqrt{\langle p^2 \rangle_{{\cal W}} \langle q^2 \rangle_{{\cal W}} }} \end{equation} with the average for any function $F(q,p)$ of the phase space variables being calculated using the Wigner function: \begin{equation} \langle F(q,p) \rangle_{{\cal W}} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(q,p) {\cal W}(q,p) dq dp. \end{equation} We shall henceforth refer to the object ${\cal S}$ as the {\it classicality parameter}; for a pure quantum state, like the ground state of the oscillator, we get ${\cal S} = 0$. For a highly classical state, we expect ${\cal S}\to 1$.
The Wigner function corresponding to the gaussian wave function in \eq{gswfn} can be expressed as \begin{equation} {\cal W}\left(q, p, t\r) =\frac{1}{\pi}\; \exp\left[-\frac{q^2}{\sigma^2(t)} -\sigma ^2(t)\, \left(p- {\cal J}(t)\, q\r)^2\r] \label{gswifn} \end{equation} where $\sigma $ and ${\cal J}$ are given by \begin{equation} \sigma ^2=\left(R+R^{\ast}\r)^{-1} \qquad{\rm and}\qquad {\cal J}= i\,\left(R-R^{\ast}\r). \end{equation} The above expressions can be written in terms of the function $z$ (and $\langle n \rangle$) as follows: \begin{equation} \sigma ^2 = \frac{2\vert \mu \vert ^{2}}{W} = \frac{|1+z|^2}{m \omega (1-|z|^2)} = \frac{\langle n \rangle}{m \omega} \left |1+\frac{1}{z} \r|^2 \label{sigma} \end{equation} and \begin{equation} {\cal J} = \frac{m}{2} \frac{d (\ln\vert \mu \vert ^{2})}{dt} = \frac{2 m \omega Im(z)}{|1+z|^2}. \label{J} \end{equation} For this Wigner distribution, the classicality parameter is expressible in terms of $\sigma ^2$ and ${\cal J}$: \begin{equation} {\cal S} = \frac{{\cal J} \sigma ^2 }{\sqrt{1 + ({\cal J} \sigma ^2 )^{2}} }. \end{equation} It can be seen from eq.(\ref{gswifn}) that when ${\cal J}=0$, the Wigner function represents an uncorrelated product of gaussians in $q$ and $p$, and ${\cal S}=0$; this happens to occur when the gaussian state coincides with an instantaneous ground state. On the other hand, when both $\sigma ^2$ and ${\cal J}$ take on non-zero values, ${\cal W}(q,p)$ becomes correlated and ${\cal S}$ would be appreciably different from zero. [It may be noted that $|{\cal S}|\leq 1$, so the maximum possible value it can have is unity.] The choice of ${\cal S}$ as a measure of classicality is thus a reasonable one. The idea of associating particle creation with the approach to classicality of a quantum state has been around in the literature for quite some time, particularly in the context of cosmology~\cite{class,wig}.
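Since $\sigma^2$ and ${\cal J}$ are both determined by $z$, the classicality parameter is a function of $z$ alone; in fact ${\cal J}\sigma^2 = 2\,\textrm{Im}(z)/(1-|z|^2)$. A quick check, using the second moments of the gaussian \eq{gswifn} ($\langle q^2\rangle_{\cal W}=\sigma^2/2$, $\langle p^2\rangle_{\cal W}=1/(2\sigma^2)+{\cal J}^2\sigma^2/2$, $\langle pq\rangle_{\cal W}={\cal J}\sigma^2/2$, which follow from gaussian integration) with an arbitrary illustrative $z$:

```python
import cmath, math

m, w = 1.0, 1.0                  # illustrative values
z = 0.5*cmath.exp(0.9j)          # arbitrary excitation parameter, |z| < 1

sigma2 = abs(1 + z)**2/(m*w*(1 - abs(z)**2))        # eq. (sigma)
J = 2*m*w*z.imag/abs(1 + z)**2                      # eq. (J)

# second moments of the gaussian Wigner function (gswifn)
q2 = sigma2/2
p2 = 1/(2*sigma2) + J**2*sigma2/2
pq = J*sigma2/2

S = pq/math.sqrt(q2*p2)
print(S, J*sigma2/math.sqrt(1 + (J*sigma2)**2))     # the closed form for S
print(J*sigma2, 2*z.imag/(1 - abs(z)**2))           # J sigma^2 = 2 Im z/(1-|z|^2)
```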
It has been demonstrated, in the case of perturbations evolving during an inflationary epoch in the early universe, that the Wigner function gets peaked on the corresponding classical trajectory as the state undergoes large squeezing at super-Hubble scales~\cite{pad,wig}. (The argument in this case hinges on the fact that the frequency of the oscillator turns imaginary at late times, and results on peaking of the Wigner function for the inverted oscillator are then applied to draw the relevant conclusions.) Here, we are interested in exploring the generality of this connection. In particular, we would like to know how production of particles (identified, in our case, with a growth in the quantity $\langle n \rangle$) is related to the spreading of the Wigner function, as well as to the variation in the sharpness of the $q$-$p$ correlation, i.e., the classicality parameter. We note that for the time dependent oscillator (with completely general $m(t)$ and $\omega(t)$), both the particle number and the $q$-$p$ correlation can be built out of the complex quantity $z$; but there is no simple relation \emph{directly} connecting the two. The classicality parameter involves phase information of $z$ as well, and this makes drawing conclusions about its behavior solely on the basis of one's ideas about $\langle n \rangle$ (which depends only on the magnitude of $z$) not so straightforward. This means that one needs to actually determine $z$ to address the question of relating the two variables, to which we now turn. \section{Analytic approximations and asymptotic analysis} \label{sec:analysis} We move on to explicit analysis of several concrete examples based on the ideas of section~\ref{sec:formalism}. For convenience, we set $m=1$, so that the time dependence enters only through $\omega(t)$. (As we explained earlier, this entails no loss of generality.)
The appropriate measure to characterize the behavior of the oscillator is, of course, the adiabaticity parameter given by $\epsilon(t)=\dot{\omega}/\omega^{2}$, whose magnitude determines the nature of its dynamics. We will assume the late time limit to be given by the time variable $t$ going to infinity, and consider the two extreme cases: one in which the late time evolution is adiabatic ($\epsilon \ll$ 1) and the other in which the adiabaticity condition is strongly violated ($\epsilon \gg 1$). For the case of an oscillator with a time-dependent frequency $\omega(t)$, the equation (\ref{eq:z_gen}) for the function $z$, simplifies to \begin{equation} \dot z + 2 i \omega z + \frac{\dot \omega}{2 \omega} (z^{2} - 1) = 0. \label{eq:z} \end{equation} As mentioned before, once $z$ is known, all other variables of interest can be trivially obtained. The table given below encapsulates the expressions for the various physical quantities in terms of $z$ for ready reference: \vspace{0.1cm} \begin{center} {\scriptsize \begin{tabular}{|c|c|} \hline & \\ & $\psi (q,t) = \exp[- R q^{2}]$ \\ The wave function & \\ & = $\exp \left[ - \frac{\omega}{2} \left(\frac{1-z}{1+z} \r) q^{2} \r] $ \\ & \\ \hline & \\ ~~~~~~~~Evolution equation for z~~~~~~~~&~~~~$\dot z + 2 i \omega z + \frac{\dot \omega}{2 \omega} (z^{2} - 1) = 0$~~~~\\ & \\ \hline & \\ Mean particle number & $\langle n \rangle = \frac{|z|^2}{1-|z|^2}$ \\ & \\ \hline & \\ Wigner function ${\cal W}\left(q, p, t\r)$ & $\sigma ^2 = \frac{|1+z|^2}{\omega (1-|z|^2)}$ \\ & \\ ~~~$=\frac{1}{\pi}\; \exp\left[-\frac{q^2}{\sigma^2(t)} -\sigma ^2(t)\, \left(p- {\cal J}(t)\, q\r)^2\r] $~~~ & ${\cal J} = \frac{2 \omega Im(z)}{|1+z|^2}$\\ & \\ \hline & \\ Classicality parameter & $ {\cal J} \sigma ^{2} = 2 \langle pq \rangle_{{\cal W}} $ \\ & \\ ${\cal S} = \frac{{\cal J} \sigma ^2 }{\sqrt{1 + ({\cal J} \sigma ^2 )^{2}} }$ & $= \frac{2 Im(z)}{(1-|z|^{2})}$\\ & \\ \hline & \\ Spread in the wave function & $\langle q^{2} \rangle = 
\frac{\langle n \rangle}{2 \omega} \left| 1+\frac{1}{z} \r| ^{2}$ \\ & \\ (related to the power spectrum) & = $\frac{\left ( 2 \langle n \rangle + 1 \r)}{2 \omega} + \frac{(\langle n \rangle +1)}{ \omega}Re z$ \\ & \\ \hline & \\ & $Re L_{eff} = -\frac{1}{4} \frac{\dot \omega}{\omega} Im z$ \\ Effective Lagrangian & \\ & ~~$Im L_{eff} = \frac{1}{4} \frac{\dot \omega}{\omega} Re z = \frac{1}{4} \frac{d}{dt}\ln(1+\langle n \rangle)$~~ \\ & \\ \hline \end{tabular} } \end{center} \vspace{0.3cm} We are particularly interested in how an oscillator that starts off in the instantaneous ground state at some moment $t=t_{0}$ would evolve at late times [the $t \to \infty$ limit]. With this in mind, we will specifically consider a scenario in which the system enters an adiabatic phase at late times and, alternatively, one in which the asymptotic evolution deviates from adiabaticity. All the useful choices for the frequency $\omega (t)$ fall within one of these categories. Before we move on to carry out an approximate analysis for such functions, we will take up a simple toy model first. \subsection{The case of constant adiabaticity parameter} \label{sec:ce} We will begin our analysis by considering the case of {\it constant} adiabaticity parameter $\epsilon$. (In the cosmological context, this model would describe a massless scalar field evolving in a background characterized by the scale factor $a(t) \propto t$, which corresponds to having a matter source with equation of state $p=-\rho/3$. This is the borderline case between accelerating and decelerating models.) This model is exactly solvable, and can describe adiabatic as well as non-adiabatic evolution through appropriate choices for the value of $\epsilon$. This flexibility allows one to see how the nature of the evolution changes with the transition from adiabaticity to non-adiabaticity. This example is also expected to shed some light on the general features one may expect to find in the two extreme limits.
For the case of constant $\epsilon$, the frequency function is given by \begin{equation} \omega(t) = \frac{\lambda}{1 + \nu (\lambda t)} \qquad(0<t<\infty) \end{equation} with $\nu > 0$, $\lambda$ having the dimension of frequency and the adiabaticity parameter being equal to $-\nu$. The combination $(\nu \lambda)^{-1}$ sets the time scale for variation; for $t \gg (\nu \lambda)^{-1}$, the frequency falls as $1/(\nu t)$. For this $\omega(t)$, equation (\ref{eq:z}) for $z$ can be solved analytically to obtain the following result (where $\tau \equiv \lambda t$ and we have set $z=0$ at $t=0$): \begin{equation} z(\tau) = \frac{1 - (1 + \nu \tau)^{\frac{z_{1}-z_{2}}{2}}}{z_{1}(1 + \nu \tau)^{\frac{z_{1}-z_{2}}{2}} - z_{2}} \end{equation} with $z_{1,2} = (2i \pm \sqrt{\nu^{2} - 4})/\nu $. It is convenient to analyze the behavior of this model by splitting the range of $\nu$ into two parts: one corresponding to $\nu^2>4$ and the other to $\nu^2<4$. Let us consider the former case first. For $\nu^2>4$, the expression for the particle number is given by \begin{equation} \langle n \rangle = \frac{\nu^2}{(\nu^2 -4 )} \sinh^2 \left(\frac{1}{2} \sqrt{1-\frac{4}{\nu^{2}}} \ln(1+\nu \tau) \r). \end{equation} This is an exact expression valid at all times. It is clear that the particle number increases monotonically with time in this case. Since we are particularly interested in the scenario of highly non-adiabatic evolution, it is appropriate to consider large values of $\nu$ (in comparison with unity). For $\nu \gg 1$, we have the following approximate expression for the particle number: \begin{equation} \langle n \rangle \approx \frac{1}{4} \frac{\nu^2 \tau^2}{1+ \nu \tau}. \end{equation} Let us consider the early and late time limits of the above expression. For the case of $\nu \tau \ll 1$, \begin{equation} \langle n \rangle \approx \frac{\nu^2 \tau^2}{4} + O(\nu^3 \tau^3). 
\label{n_Leps_early} \end{equation} On the other hand, for $\nu \tau \gg 1$, \begin{equation} \langle n \rangle \approx \frac{ \nu \tau}{4} \approx \frac{1}{4 \omega}. \label{n_Leps_late} \end{equation} For strongly non-adiabatic evolution, starting from a vacuum state, the particle number increases without bound at late times. But \eq{n_Leps_late} also shows that the energy content of the created quanta, $\langle n \rangle \omega$, saturates at a constant value (1/4) at late times. Our approximation is borne out by the plots of the particle number and the quantity $ \langle n \rangle \omega$, shown in figures~\ref{clen} and \ref{cleno}. The particle number grows without bound, while the latter quantity has a constant limiting value. \begin{figure} \begin{center} \subfigure[ ]{\label{clen} \includegraphics[width=6cm,angle=0.0]{con_Leps_n.png}} \subfigure[ ]{\label{cleS}\includegraphics[width=6cm,angle=0.0]{con_Leps_S.png}} \subfigure[ ]{\label{clez}\includegraphics[width=5cm,angle=0.0]{con_Leps_z.png}} \end{center} \caption{Variation of the mean particle number $\langle n \rangle$, classicality parameter ${\cal S}$ and the excitation parameter $z$ with $\tau$ for constant $|\epsilon|= 20$. $\langle n \rangle$ is a monotonically increasing function of time. ${\cal S}$ increases sharply and saturates at unity at large times. The trajectory of $z$ remains confined within a quadrant and ends up at a point with $|z|=1$ in the $\tau \to \infty$ limit.} \label{cle} \end{figure} The classicality parameter ${\cal S}$ for this case, describing the strength of phase space correlations, is plotted in figure~\ref{cleS}. It is evident that ${\cal S}$ starts from zero [corresponding to the initial vacuum state], and quickly grows to unity as the particle number increases with the progress of time. This clearly presents an example where the phase space correlations grow in accompaniment to particle creation due to violation of adiabaticity.
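These closed forms are easy to cross-check numerically: the exact $z(\tau)$ should satisfy \eq{eq:z}, and the resulting $\langle n \rangle$ should match both the $\sinh^2$ expression and the large-$\nu$ approximation. A sketch with $\nu = 20$ (the value used in the figures) and $\lambda = 1$:

```python
import cmath, math

nu = 20.0                                   # strongly non-adiabatic case, lambda = 1 units
s = cmath.sqrt(complex(nu**2 - 4))
z1, z2 = (2j + s)/nu, (2j - s)/nu
omega = lambda tau: 1.0/(1 + nu*tau)

def z_exact(tau):
    f = (1 + nu*tau)**((z1 - z2)/2)         # complex power of a positive real number
    return (1 - f)/(z1*f - z2)

# the solution satisfies z' + 2 i w z + (w'/2w)(z^2 - 1) = 0, with w'/2w = -(nu/2) w here
tau0, h = 0.7, 1e-6
zp = (z_exact(tau0 + h) - z_exact(tau0 - h))/(2*h)
resid = zp + 2j*omega(tau0)*z_exact(tau0) - (nu/2)*omega(tau0)*(z_exact(tau0)**2 - 1)
print(abs(resid))                           # ~ 0 (finite-difference accuracy)

# exact <n> versus the closed sinh^2 form and the large-nu approximation
k = math.sqrt(1 - 4/nu**2)
checks = []
for tau in (0.05, 1.0, 50.0):
    z = z_exact(tau)
    n = abs(z)**2/(1 - abs(z)**2)
    n_closed = nu**2/(nu**2 - 4)*math.sinh(0.5*k*math.log(1 + nu*tau))**2
    n_approx = 0.25*nu**2*tau**2/(1 + nu*tau)
    checks.append((n, n_closed, n_approx))
    print(tau, n, n_closed, n_approx)
```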
One can also directly visualize the trajectory of the state in the complex $z$ plane, which has been plotted in figure~\ref{clez}. $z$ starts from the origin, corresponding to the initial vacuum state, stays within a quadrant, and ends up at a limiting point corresponding to $|z|=1$ at late times. \begin{figure} \includegraphics[width=6cm,angle=0.0]{con_Leps_no.png} \caption{Variation of $\langle n \rangle \omega$ with time for the case of $|\epsilon|= 20$. This quantity saturates asymptotically, so the energy remains finite at late times.} \label{cleno} \end{figure} One would also like to know how the Wigner function evolves with time. This can be understood by computing the functions $\sigma^2$ and ${\cal J}$ using the relations (\ref{sigma}) and (\ref{J}). These functions have been plotted in figure~\ref{cleW}. The value of $\sigma^2$ starts from $1/\omega(t=0)$ and monotonically increases with time, while ${\cal J}$ starts from zero, rises to a maximum, and then falls off again to zero at late times. This variation corresponds to a Wigner function that ends up peaking on the $q-$axis at late times. Let us compare this with the way the corresponding classical trajectory behaves with time. The classical trajectory is given by \begin{equation} q_c (\tau) = \sqrt{1 + \nu \tau} \left( c_1 e^{\frac{1}{2}\sqrt{1-\frac{4}{\nu^2}} \ln(1 + \nu \tau)} + c_2 e^{-\frac{1}{2}\sqrt{1-\frac{4}{\nu^2}} \ln(1 + \nu \tau)} \r) \end{equation} and \begin{equation} p_c(\tau) = q'_c(\tau) = \frac{\nu}{2} \left[ \left(1+\sqrt{1-\frac{4}{\nu^2}}\r)c_1 \left( 1 + \nu \tau \r)^{\frac{1}{2}\left(\sqrt{1-\frac{4}{\nu^2}}-1\r)} + \left(1-\sqrt{1-\frac{4}{\nu^2}}\r)c_2 \left( 1 + \nu \tau \r)^{-\frac{1}{2}\left(\sqrt{1-\frac{4}{\nu^2}}+1\r)} \r] \end{equation} where $c_1,c_2$ are real constants. 
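One can verify numerically that these expressions solve the classical equation of motion $\ddot q_c + \omega^2 q_c = 0$, with $\omega = 1/(1+\nu\tau)$ in $\lambda=1$ units, and that $p_c$ is indeed the derivative of $q_c$. A sketch (the constants $c_1$, $c_2$ are illustrative):

```python
import math

nu, c1, c2 = 20.0, 1.0, 0.5     # illustrative constants
k = math.sqrt(1 - 4/nu**2)
omega = lambda tau: 1.0/(1 + nu*tau)

def qc(tau):
    x = 1 + nu*tau
    return math.sqrt(x)*(c1*x**(k/2) + c2*x**(-k/2))

def pc(tau):                    # the closed form for p_c quoted above
    x = 1 + nu*tau
    return (nu/2)*((1 + k)*c1*x**((k - 1)/2) + (1 - k)*c2*x**(-(k + 1)/2))

tau0, h = 2.0, 1e-4
qdd = (qc(tau0 + h) - 2*qc(tau0) + qc(tau0 - h))/h**2
eom_resid = abs(qdd + omega(tau0)**2*qc(tau0))
pc_resid = abs((qc(tau0 + h) - qc(tau0 - h))/(2*h) - pc(tau0))
print(eom_resid)                # ~ 0: q_c solves q'' + w^2 q = 0
print(pc_resid)                 # ~ 0: p_c = q_c'
```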
It is clear that in the late time limit, $q_c \to \infty$ while the momentum $p_c \to 0$, so every trajectory at sufficiently late times ends up on the $q-$axis, coinciding with the peaking of the Wigner function. This feature, together with the behavior of the classicality parameter, provides strong indication of a quantum-to-classical transition taking place at late times. \begin{figure} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{con_Leps_sigma.png}} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{con_Leps_J.png}} \caption{Variation of the functions $\sigma^2$ and ${\cal J}$ for $|\epsilon|= 20$. $\sigma^2$ grows monotonically while ${\cal J}$ falls to zero at late times, indicative of a Wigner function that peaks on the $q-$axis.} \label{cleW} \end{figure} The numerical value of the effective Lagrangian for this case can be evaluated using \eq{L_eff}. (As we mentioned before, such an expression has limited validity since the effective action should be treated as a functional of $\omega$ rather than as a function of $t$ for a particular $\omega(t)$. Nevertheless, it illustrates certain interesting features.) In the highly non-adiabatic limit of $\nu \gg 1$, it has the approximate form \begin{equation} L_{eff} \approx \frac{\lambda}{1 + \nu \tau}\left[ \frac{\nu^2 \tau^2}{2 (\nu \tau + 2 )^2} + \frac{i \nu}{4} \frac{\nu \tau}{ (\nu \tau + 2 )} \r]. \end{equation} This reduces to the following limiting form at early times [corresponding to $\nu \tau \ll 1$]: \begin{equation} L_{eff} \approx \omega \left[ \frac{\nu^2 \tau^2}{8} + \frac{i \nu}{8} \nu \tau \r].
\label{RL_Leps} \end{equation} Interestingly, the real part of the effective Lagrangian can be re-expressed in terms of the particle number using \eq{n_Leps_early}: \begin{equation} \textrm{Re} L_{eff} \approx \langle n \rangle \omega / 2. \end{equation} This shows that the real part of the effective Lagrangian does contain information about the particle production (and hence can encode it in the backreaction calculation) in the non-adiabatic, but $\langle n \rangle \ll 1$, limit. We will find later that this is indeed a general feature. On the other hand, in the late time limit, \begin{equation} L_{eff} \approx \lambda \left[ \frac{1}{2 \nu \tau} + \frac{i \nu}{4} \frac{1}{\nu \tau} \r] \approx \frac{\lambda}{8} \left( \frac{1}{\langle n \rangle} + \frac{i}{2}\frac{\nu}{\langle n \rangle} \r)~~\text{for}~ \nu \tau \gg 1. \end{equation} From the above expression, it is clear that although $L_{eff}(\tau) \to 0$ in the late time limit, the effective {\it action} is logarithmically divergent, since $L_{eff} \propto \tau^{-1}$. This is related to particle production proceeding without bound at late times. We next consider the other regime, covering values of the adiabaticity parameter lying in the range $\nu^{2} < 4$. For this case, $z_1,z_2$ are purely imaginary and the expression for $z$ may be rewritten as \begin{equation} z = \frac{1 - e^{i \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau)}}{z_{1}e^{i \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau)} - z_{2}}. \end{equation} The particle number has the exact form \begin{equation} \langle n \rangle = \frac{\nu^2}{(4-\nu^2)} \sin^2 \left[\frac{1}{2} \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r]. \end{equation} This quantity keeps oscillating between the values 0 and $\nu^2 / (4 - \nu^{2})$, representing a continuous interplay of creation and annihilation of quanta.
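The oscillatory regime can be checked in the same way: the exact $z(\tau)$, with $z_{1,2}$ now purely imaginary, reproduces the $\sin^2$ expression for $\langle n \rangle$, which never exceeds $\nu^2/(4-\nu^2)$. A sketch with $\nu = 0.2$ (the value used in the figures below):

```python
import cmath, math

nu = 0.2
s = cmath.sqrt(complex(nu**2 - 4))          # purely imaginary for nu^2 < 4
z1, z2 = (2j + s)/nu, (2j - s)/nu
r = math.sqrt(4/nu**2 - 1)

def z_exact(tau):
    f = cmath.exp((z1 - z2)/2*math.log(1 + nu*tau))   # unimodular phase factor
    return (1 - f)/(z1*f - z2)

n_max = nu**2/(4 - nu**2)
worst = 0.0
for j in range(2000):
    tau = 0.05*j
    z = z_exact(tau)
    n = abs(z)**2/(1 - abs(z)**2)
    n_closed = n_max*math.sin(0.5*r*math.log(1 + nu*tau))**2
    worst = max(worst, abs(n - n_closed))
    assert n <= n_max + 1e-12               # bounded oscillation, no net particle creation
print(n_max, worst)                         # amplitude nu^2/(4-nu^2); worst deviation ~ 0
```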
This situation thus presents a case where the oscillations persist with constant amplitude and the particle number never settles down to a constant value [or enters a phase of monotonic variation]. We have plotted the variation of the particle number as well as the classicality parameter for this case [with $\nu$ chosen to be 0.2] in figures~\ref{csen} and \ref{cseS}. The classicality parameter is also oscillatory in the absence of a monotonic variation in $\langle n \rangle$, and this may be contrasted with the non-adiabatic case considered earlier, where ${\cal S}$ grows and saturates at unity. This is suggestive of the possible close connection between particle creation and growth of phase space correlations, which will be explored in greater depth in the examples that will follow. \begin{figure} \subfigure[ ]{\label{csen}\includegraphics[width=6cm,angle=0.0]{con_Seps_n.png}} \subfigure[ ]{\label{cseS}\includegraphics[width=6cm,angle=0.0]{con_Seps_S.png}} \subfigure[ ]{\label{csez}\includegraphics[width=5cm,angle=0.0]{con_Seps_z.png}} \caption{Variation of the mean particle number, classicality parameter and $z$ for constant $|\epsilon|= \nu = 0.2$. Both $\langle n \rangle$ and ${\cal S}$ are oscillatory at all times with constant finite amplitudes. The complex trajectory of $z$ is a circle in the upper half-plane, reflected in the oscillatory nature of $\langle n \rangle$ and ${\cal S}$.} \label{cse} \end{figure} The complex trajectory of $z(t)$ is also depicted in figure~\ref{csez}. The values of the real and imaginary parts of $z$ oscillate with time, and the trajectory is a circle with the center displaced from the origin. The magnitude of $z$, too, oscillates and this is reflected in the variation of the particle number. This evolution differs sharply from that for large adiabaticity parameter, where the number of particles as well as the $q$-$p$ correlation display a monotonic increase with time.
As for the Wigner function, its evolution is described by the functions $\sigma^2$ and ${\cal J}$, shown in figure~\ref{cseW}. While ${\cal J}$ goes to zero at late times, $\sigma^2$ is an increasing function of time, driving the Wigner function to get concentrated on the $q-$axis at late times. In comparison, the classical trajectory has the general form \begin{equation} q_c(\tau) = c_1 \sqrt{1 + \nu \tau } \cos \left( \frac{1}{2} \sqrt{\frac{4}{\nu^{2}}-1} \ln( 1 + \nu \tau) + \phi \r) \end{equation} where $c_1$ is real and $\phi$ is a constant phase factor, and \begin{equation} p_c(\tau) = q'_c(\tau) = \frac{\nu c_1}{2 \sqrt{1 + \nu \tau }} \left[ \cos \left( \frac{1}{2} \sqrt{\frac{4}{\nu^{2}}-1} \ln( 1 + \nu \tau) + \phi \r) - \sqrt{\frac{4}{\nu^{2}}-1}~ \sin \left( \frac{1}{2} \sqrt{\frac{4}{\nu^{2}}-1} \ln( 1 + \nu \tau) + \phi \r) \r]. \end{equation} The amplitude of the momentum steadily decreases, while that of $q_c$ grows with time. The trajectory, consequently, ends up on the $q-$axis asymptotically. This matches the late time behavior of the Wigner function. Thus the Wigner function peaks on the classical trajectory at late times, similar to what was found in the non-adiabatic case earlier, in spite of the fact that here there is no genuine particle creation. It may also be noted that the correlation measure ${\cal S}$ does not grow monotonically at large times either. This example thus brings out a potential problem with regarding peaking of the Wigner distribution on the classical trajectory as the sole indicator of classicality. At the same time, the behavior of the classicality parameter, which evidently tracks particle creation, and is markedly different in the two cases, suggests that this measure may be able to provide a useful means of resolving such ambiguities regarding the interpretation of classicality.
\begin{figure} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{con_Seps_sigma.png}} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{con_Seps_J.png}} \caption{Variation of the quantities $\sigma^2$ and ${\cal J}$ for $|\epsilon|= 0.2$. $\sigma^2$ is an increasing function of time while ${\cal J} \to 0$ asymptotically, so the Wigner function ends up peaking on the $q-$axis.} \label{cseW} \end{figure} In the strongly adiabatic limit, we have $\nu \ll 1$, and the particle number is approximately given by \begin{equation} \langle n \rangle ~\approx~ \frac{\nu^2}{4} \sin^2 \left(\frac{1}{\nu}\ln(1+\nu \tau) \r) + O(\nu^4). \label{n_approx_Seps} \end{equation} For small times ($\tau \ll 1$), $\ln(1+\nu\tau)\approx \nu \tau$ and $\sin \tau \approx \tau$, so that \begin{equation} \langle n \rangle ~\approx~ \frac{\nu^2 \sin^2{\tau}}{4} ~\sim~ \frac{\nu^2 \tau^2}{4}. \end{equation} This behavior matches with the early time variation of $\langle n \rangle$ in the extreme non-adiabatic case (corresponding to $\nu \gg 1$). The effective Lagrangian for $\nu^2 < 4$ takes the following form: \begin{equation} \textrm{Im} L_{eff} = \frac{1}{4}\frac{\dot{\langle n \rangle}}{\left(\langle n \rangle + 1 \r)} = \frac{\omega}{2} \sqrt{1-\frac{\nu^2}{4}} \left[ \frac{\sin \left( \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r) }{ \frac{8}{\nu^{2}}- 1 - \cos \left( \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r) } \r] \end{equation} and \begin{equation} \textrm{Re} L_{eff} = \frac{\omega \nu^2}{8} \left[ \frac{\sin^{2} \left(\frac{1}{2}\sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r) }{ 1-\frac{\nu^2}{8}\left( 1 + \cos \left( \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r) \r) } \r]. 
\end{equation} In the adiabatic limit of $\nu \ll 1$, the real part is approximately given by \begin{equation} \textrm{Re} L_{eff} ~\approx~ \frac{\omega \nu^2}{8} \frac{4 \langle n \rangle }{\nu^2} \left[ 1 + \frac{\nu^2}{8}\left( 1 + \cos \left( \sqrt{\frac{4}{\nu^{2}}-1} \ln(1+\nu \tau) \r) \r) \r] ~\equiv~ \frac{\omega \langle n \rangle}{2} \left[ 1 + O(\nu^2) \r] \label{RL_Seps} \end{equation} and this is, of course, valid at all times. The identical form for $\textrm{Re}L_{eff}$ in both the limits of small as well as large values of the adiabaticity parameter, which may be noted from Eqs.~(\ref{RL_Leps}) and~(\ref{RL_Seps}), suggests that when the particle number is small, the real part of the effective Lagrangian is directly related to the particle content in the quantum state; this is of relevance to the issue of accounting for back-reaction due to particle creation using $\textrm{Re}L_{eff}$, a point we shall return to later. To summarize, the model of constant adiabaticity parameter, for different choices for the magnitude of $\epsilon$, can exemplify both adiabatic and non-adiabatic evolution. In the adiabatic limit of small $|\epsilon|$, particle creation is suppressed with the mean particle number and the classicality parameter (both starting from zero corresponding to an initial vacuum state) remaining finite and oscillatory at all times. This behavior sharply differs from that in the extreme non-adiabatic case of large $|\epsilon|$, where the mean number of particles as well as the $q$-$p$ correlation diverge with time, although the energy saturates at a finite limiting value. In both cases, however, the Wigner function ends up peaking on the corresponding classical phase space trajectory at late times, in marked contrast to the behavior of the classicality parameter.
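As a numerical aside, the agreement between the exact expression for $\langle n \rangle$ and its small-$\nu$ (adiabatic) limit quoted above is easy to check directly. The following Python sketch is purely illustrative, in units with $\lambda = 1$ so that $\tau$ is dimensionless:

```python
import math

def n_exact(nu, tau):
    # Exact mean particle number for constant adiabaticity parameter (nu^2 < 4):
    # <n> = nu^2/(4 - nu^2) * sin^2[(1/2) sqrt(4/nu^2 - 1) ln(1 + nu*tau)]
    phase = 0.5 * math.sqrt(4.0 / nu**2 - 1.0) * math.log(1.0 + nu * tau)
    return nu**2 / (4.0 - nu**2) * math.sin(phase)**2

def n_adiabatic(nu, tau):
    # Leading small-nu approximation: <n> ~ (nu^2/4) sin^2[(1/nu) ln(1 + nu*tau)]
    phase = math.log(1.0 + nu * tau) / nu
    return 0.25 * nu**2 * math.sin(phase)**2
```

For $\nu = 0.05$ the two expressions differ by less than $10^{-5}$ in absolute terms over $\nu\tau \lesssim 1$, consistent with the stated higher-order accuracy of the adiabatic limit.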
Having gained some idea of the kind of features which may be expected in adiabatic as well as non-adiabatic evolution, we now move on to consider the case of a general adiabaticity parameter $\epsilon(t)$, and examine its two possible late time limits. \subsection{ Adiabatic evolution at late times } \label{sec:adiab} For a general $\omega(t)$ we can physically distinguish the following situations. Depending on the numerical value of $\epsilon(t)$, one can describe the evolution as adiabatic or non-adiabatic. Further, in the case of adiabatic evolution, one can distinguish between two cases. The first case corresponds to the system evolving in an arbitrary fashion till about $t=T$, say, and evolving adiabatically for $t>T$. In this case the quantum state will not be an instantaneous vacuum state at $t=T$, and a certain amount of particle production would have taken place by then. The subsequent adiabatic evolution should preserve this particle content, with any further particle production being a small, higher-order contribution. The second case corresponds to an adiabatic evolution all the way from $t_0$ to $t$, with the system starting in the vacuum state at time $t=t_0$. In this case, the particle production will be a higher order effect and we will be interested in the lowest nontrivial order contribution. We will first analyze the case of the evolution proceeding adiabatically at late times. This covers all possible choices for the function $\omega(t)$ which fall off slower than $1/t$ in the $t \to \infty$ limit, and correspondingly, $\epsilon(t) \to 0$. A perturbation theory approach can be adopted to find an approximate solution in the adiabatic regime.
We start by writing the function $f(t)$ in terms of two new functions ${\cal A}(t)$ and ${\cal B}(t)$ as \begin{equation} f(t) = \frac{{\cal A}(t)}{\sqrt{2 \omega}} e^{i\theta(t)} + \frac{{\cal B}(t)}{\sqrt{2 \omega}}e^{-i\theta(t)} \qquad (\dot{\theta} \equiv \omega) \end{equation} and imposing the additional constraint (to take into account our introduction of {\it two} new functions in place of the original one) that \begin{equation} \dot{f}(t) = i \omega \frac{{\cal A}(t)}{\sqrt{2 \omega}} e^{i\theta(t)} - i \omega \frac{{\cal B}(t)}{\sqrt{2 \omega}}e^{-i\theta(t)}. \end{equation} This gives (details can be found in Appendix C) \begin{equation} \frac{\dot{f}}{f} = i \omega \left( \frac{{\cal A}(t)-{\cal B}(t)e^{-2i\theta(t)}}{{\cal A}(t)+{\cal B}(t)e^{-2i\theta(t)}} \r) \end{equation} and yields the following equations for ${\cal A},{\cal B}$: \begin{equation} \dot{\cal A} = \frac{\dot \omega}{2 \omega} {\cal B} e^{-2i\theta(t)} \quad,\quad \dot{\cal B} = \frac{\dot \omega}{2 \omega} {\cal A} e^{2i\theta(t)}. \label{AB_eqs} \end{equation} The first order adiabatic approximation for the functions ${\cal A}(t),{\cal B}(t)$ can be found by assuming initial conditions ${\cal A}(t_0)=a,{\cal B}(t_0)=b$ (with $a,b$ being arbitrary complex constants) at some initial time $t_0$ within the adiabatic regime. To a first approximation, the coupled equations then give \begin{eqnarray} {\cal A}(t) ~&\approx&~ a + i\frac{b}{4} \left(\epsilon(t) e^{-2i\theta(t)} - \epsilon(t_0) \r) - \frac{ib}{4}\int_{t_0}^{t} d t' \dot{\epsilon}(t') e^{-2i\theta(t')}, \nonumber \\ {\cal B}(t) ~&\approx&~ b - i\frac{a}{4} \left( \epsilon(t) e^{2i\theta(t)} - \epsilon(t_0) \r) + \frac{i a}{4}\int_{t_0}^{t} d t' \dot{\epsilon}(t') e^{2i\theta(t')}. \end{eqnarray} At the lowest order, one can ignore the integrals over $\dot{\epsilon}$ on the assumption of adiabaticity. (This neglect is therefore exact for the $\epsilon =$ constant case discussed earlier.)
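The coupled system \eq{AB_eqs} conserves $|{\cal A}|^2 - |{\cal B}|^2$ (the Wronskian condition), and the first-order solution can be tested against a direct numerical integration. The sketch below (Python, illustrative only) assumes the constant-$\epsilon$ frequency in units with $\lambda = 1$, i.e.\ $\omega(\tau) = 1/(1+\nu\tau)$, $\theta(\tau) = (1/\nu)\ln(1+\nu\tau)$ and $\dot{\omega}/2\omega = -\nu\omega/2$; the step size and tolerances are rough choices, not tuned values:

```python
import cmath
import math

def evolve_AB(nu, a, b, tau_max, dtau=1e-3):
    """RK4 integration of dA/dtau = (eps*w/2) B e^{-2i theta} and
    dB/dtau = (eps*w/2) A e^{+2i theta}, with w = 1/(1 + nu*tau), eps = -nu."""
    def rhs(tau, A, B):
        w = 1.0 / (1.0 + nu * tau)
        theta = math.log(1.0 + nu * tau) / nu
        pre = -0.5 * nu * w  # \dot{omega}/(2 omega) = -nu*omega/2
        return (pre * B * cmath.exp(-2j * theta),
                pre * A * cmath.exp(2j * theta))
    A, B = complex(a), complex(b)
    steps = int(round(tau_max / dtau))
    for i in range(steps):
        t = i * dtau
        k1 = rhs(t, A, B)
        k2 = rhs(t + 0.5 * dtau, A + 0.5 * dtau * k1[0], B + 0.5 * dtau * k1[1])
        k3 = rhs(t + 0.5 * dtau, A + 0.5 * dtau * k2[0], B + 0.5 * dtau * k2[1])
        k4 = rhs(t + dtau, A + dtau * k3[0], B + dtau * k3[1])
        A += dtau / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        B += dtau / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return A, B

def A_first_order(nu, a, b, tau):
    # First-order formula A ~ a + (i b/4)(eps(t) e^{-2i theta} - eps(t0)),
    # with eps = -nu constant and theta(0) = 0
    theta = math.log(1.0 + nu * tau) / nu
    return a + 0.25j * b * (nu - nu * cmath.exp(-2j * theta))
```

For small $\nu$ the integrated ${\cal A}$ tracks the first-order formula to the expected higher-order accuracy, while $|{\cal A}|^2 - |{\cal B}|^2$ is conserved to machine-level precision.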
So, at the first order of approximation, we have \begin{eqnarray} z(t) ~=~ \frac{{\cal B}(t)}{{\cal A}(t)}e^{-2i\theta(t)} ~&\approx&~ \frac{b}{a}e^{-2i\theta(t)} + \frac{i}{4} \left( \epsilon(t_0) \left( 1 + \frac{b^2}{a^2} \r) e^{-2i\theta(t)} - \epsilon(t) \left( 1 + \frac{b^2}{a^2} e^{-4i\theta(t)} \r) \r) + O(\epsilon^2) \nonumber \\ &\equiv& \frac{b}{a} e^{-2i\theta(t)} + {\cal R}(t) + O(\epsilon^2), \label{z_approx} \end{eqnarray} where the function ${\cal R}(t)$ is of order $\epsilon$. This solution is valid in the period when the evolution is adiabatic, irrespective of the past history of the system. For example, one can envisage an evolution in which the system starts in the ground state, evolves very non-adiabatically for a period of time, $t_i<t<T$ say, and is adiabatic for $t>T$. In that case, the system will be in a highly excited state (with some amount of particle production already having taken place during $t_i<t<T$) at $t=T$ and during $t>T$, this state will evolve adiabatically. In such a case we cannot say anything about the magnitude of $a,b$ and we will keep them arbitrary for the moment. (This situation should be contrasted with another one frequently discussed in the literature in which it is assumed that the evolution is adiabatic \emph{throughout} the period. We will consider this case later on.) The corresponding $|z|^2$, to order $\epsilon$, is given by \begin{equation} |z|^{2} ~\approx~ \left|\frac{b}{a} \r|^2 - \frac{1}{2} Im \left[ \frac{b^*}{a^*} \left( \epsilon(t_0) \left( 1 + \frac{b^2}{a^2} \r) - \epsilon(t) e^{2 i \theta(t)} \left( 1 + \frac{b^2}{a^2} e^{-4i\theta(t)} \r) \r) \r] + O(\epsilon^2).
\label{z2_approx} \end{equation} Using this expression in \eq{n}, one can compute an approximation for the mean particle number, which, again to order $\epsilon$, is given by \begin{equation} \langle n \rangle ~\approx~ \frac{|b|^2}{|a|^2 - |b|^2} - \frac{|a|^4}{2( |a|^2 - |b|^2 )^2} Im \left[ \frac{b^*}{a^*} \left( \epsilon(t_0) \left( 1 + \frac{b^2}{a^2} \r) - \epsilon(t) e^{2 i \theta(t)} \left( 1 + \frac{b^2}{a^2} e^{-4i\theta(t)} \r) \r) \r] + O(\epsilon^2). \end{equation} Let us now consider various limits of this expression. If the system evolves very non-adiabatically for a period of time $t_i<t<T$ and is adiabatic for $t>T$ then we can take the $\epsilon(t)\to0$ limit at $t>T$. Then, for any non-zero value of $b/a$, the terms dependent on $\epsilon(t)$ in the above expression get progressively suppressed. This implies that the mean particle number has a finite limiting value given by \begin{equation} \lim_{t \to \infty} \langle n \rangle ~\approx~ \frac{|b|^2}{|a|^2 - |b|^2} - \frac{\epsilon(t_0)}{2} \frac{|a|^4}{( |a|^2 - |b|^2 )^2} Im \left[ \frac{b^*}{a^*} \left( 1 + \frac{b^2}{a^2} \r) \r] + O(\epsilon^2 (t_0)). \end{equation} This result is completely understandable. In this case, all the particle production takes place during $t_i<t<T$ and the constants ${\cal A},{\cal B}$ play the role of Bogolyubov coefficients for the evolution in Heisenberg picture; the mean number of particles is indeed $|{\cal B}|^2$. If one compares the expression for $z$ in \eq{z_approx} with \eq{H_pic} relating $z$ to the functions $\alpha$ and $\beta$ in the Heisenberg picture, it can be seen that in the $\epsilon(t) \to 0$ limit, the ratio of the Bogolyubov coefficients becomes a constant: $\beta^* / \alpha^* \approx (b/a) + (i \epsilon(t_0)/4) ( 1 + (b^2/a^2) ) $. Using the Wronskian condition $|\alpha|^{2} - |\beta|^{2} = 1$, the limiting expression for $\langle n \rangle$ derived above can then be shown to be equal to $|\beta|^{2}$. 
This is exactly the expression one expects to obtain for the asymptotic particle number in an adiabatic `out' vacuum state in the Heisenberg picture. This demonstrates the close correspondence between our approach and the conventional one. An expression for the $q$-$p$ correlation can also be worked out, using the approximations in \eq{z_approx} and \eq{z2_approx}, to first order in $\epsilon$: \begin{equation} {\cal J}\sigma^{2} ~=~ \frac{2 Im(z)}{(1-|z|^{2})} ~\approx~ 2 \left[\frac{ Im (a^* b e^{-2 i \theta})}{|a|^2 - |b|^2} + \frac{|a|^2 Im{\cal R}}{|a|^2 - |b|^2} + \frac{ 2 Im \left( a^* b e^{-2 i \theta} \r) Re\left( a b^* {\cal R}e^{2 i \theta}\r)}{(|a|^2 - |b|^2)^2}\r] + O(\epsilon^2). \end{equation} In the late time limit (as $\epsilon(t) \to 0$), the function ${\cal R}$ has a constant amplitude, and since $\theta$ is an increasing function of time [owing to our choice for $\omega(t)$], the above expression is oscillatory with a finite amplitude. Thus, in the late time adiabatic regime, once the particle number has saturated, the phase space correlation $\langle qp \rangle_{{\cal W}}$ remains bounded and oscillates around zero, again indicative of the necessity of \textit{continuing} particle excitation for driving the system towards classicality. One can also look at the effective Lagrangian, defined in \eq{L_eff}, in this limit. The imaginary part of the action will become a constant at late times with its value related to the asymptotic number of particles produced, in conformity with the usual interpretation.
The real part, using \eq{RL_eff} and \eq{z_approx}, is given by \begin{eqnarray} \textrm{Re} L_{eff} &=& -\frac{1}{4}\frac{\dot \omega}{\omega}Im z \nonumber \\ &\approx& -\frac{\omega(t) \epsilon(t)}{4} Im\left(\frac{b}{a}e^{-2i\theta(t)}\r) - \frac{\omega(t) \epsilon(t)}{16} Re\left(\epsilon(t_0) \left( 1 + \frac{b^2}{a^2} \r) e^{-2i\theta(t)} - \epsilon(t) - \epsilon(t) \frac{b^2}{a^2} e^{-4i\theta(t)} \r) \label{RL_adia} \end{eqnarray} to second order in the adiabaticity parameter. Written in this form, it is evident that the first O($\epsilon$) term is a rapidly oscillating function. The O($\epsilon^{2}$) term also contains two oscillatory functions. These average out to zero, and the first non-trivial contribution to the real part of the effective action comes from the non-oscillatory term with magnitude $\omega \epsilon^{2}/16$ in \eq{RL_adia}. Let us next consider the case in which the entire evolution up to some time $t$ is adiabatic, the oscillator being started off in the instantaneous vacuum state at some time $t_0<t$ in the adiabatic regime. The initial conditions for an instantaneous vacuum correspond to setting $a=1,b=0$ and $\dot{f}/f=i\omega(t_0)$. For these initial conditions, because of \eq{AB_eqs}, even at the lowest order of approximation in the adiabatic regime ${\cal B}$ cannot be assumed to remain constant. To a first approximation, we now have: \begin{equation} {\cal A}(t) \approx 1 ~,~\dot{\cal B}(t) \approx \frac{\dot \omega}{2 \omega}e^{2i\theta(t)} \end{equation} so that \begin{equation} {\cal B}(t) = -\frac{i}{4} \left( \epsilon(t) e^{2i\theta(t)} - \epsilon(t_0) \r) + \frac{i}{4}\int_{t_0}^{t} d t' \dot{\epsilon}(t') e^{2i\theta(t')}. \end{equation} The integral over $\dot{\epsilon}$ will again be ignored under the assumption that $\dot\epsilon(t)\approx0$.
So, at the lowest order of approximation, \begin{eqnarray} z(t) = \frac{{\cal B}(t)}{{\cal A}(t)} e^{-2i\theta(t)} = -\frac{i}{4} \left( \epsilon(t) - \epsilon(t_0) e^{-2i\theta(t)} \r), \nonumber \\ |z(t)|^{2} = \frac{1}{16} \left[ \epsilon^{2}(t) + \epsilon^{2}(t_0) - 2 \epsilon(t)\epsilon(t_0)\cos \left( 2 \theta(t) \r) \r]. \label{z_adia_vac} \end{eqnarray} This can also be derived from the equation for $z$: \begin{equation} \dot z + 2 i \omega z = \frac{\dot \omega}{2 \omega} (1 - z^{2}). \end{equation} Starting with $z(t_0)=0$ and assuming that to the lowest order of approximation $(1-z^2) \approx 1$, the above equation can be solved to obtain the same expression for $z(t)$ as found above. The approximate expression for $z$ gives \begin{equation} \frac{1}{1-|z(t)|^{2}} \approx 1 + \frac{1}{16} \left[ \epsilon^{2}(t) + \epsilon^{2}(t_0) - 2 \epsilon(t)\epsilon(t_0)\cos \left( 2 \theta(t) \r) \r] + {\cal O}(\epsilon^{4}), \end{equation} so that the particle number to order $\epsilon^2$ is given by \begin{equation} \langle n \rangle \approx \frac{1}{16} \left[ \epsilon^{2}(t) + \epsilon^{2}(t_0) - 2 \epsilon(t)\epsilon(t_0)\cos \left( 2 \theta(t) \r) \r]. \label{n_ad_vac} \end{equation} If $\epsilon(t_0)=0$, then the particle number, to the lowest order in $\epsilon$, is just $\epsilon^{2}(t)/16$. In order to verify the validity of our approximation, let us reconsider, as an illustration, the example of constant adiabaticity parameter we analyzed earlier. For this case, $\epsilon(t) = -\nu$ and $\theta(t) = (1/\nu)\ln(1+ \nu \tau)$. [The $\dot{\epsilon}$ dependent integrals are exactly zero here.] This gives the approximate particle number from \eq{n_ad_vac} to order $\epsilon^2$ as \begin{equation} \langle n \rangle \approx \frac{\nu^2}{8} \left[ 1 - \cos \left( \frac{2}{\nu} \ln(1+\nu \tau) \r) \r]. \label{con_eps_approx} \end{equation} This is clearly in agreement with the approximate expression derived in \eq{n_approx_Seps} from the exact analytic formula.
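This agreement can also be confirmed by integrating the equation for $z$ numerically. The following sketch (Python, illustrative rather than production code) works in units with $\lambda = 1$, so that $\omega(\tau) = 1/(1+\nu\tau)$ and $\dot{\omega}/2\omega = -\nu\omega/2$, and uses the relation $\langle n \rangle = |z|^2/(1-|z|^2)$ between $z$ and the mean particle number, as implied by the expansion above:

```python
import math

def n_exact(nu, tau):
    # Exact <n> for constant epsilon = -nu (nu^2 < 4), as quoted earlier
    phase = 0.5 * math.sqrt(4.0 / nu**2 - 1.0) * math.log(1.0 + nu * tau)
    return nu**2 / (4.0 - nu**2) * math.sin(phase)**2

def n_numeric(nu, tau_max, dtau=1e-3):
    """RK4 for dz/dtau = -2 i w z + (dw/dtau / 2w)(1 - z^2), with
    w(tau) = 1/(1 + nu*tau), i.e. dw/dtau / 2w = -nu*w/2, and z(0) = 0."""
    def rhs(tau, z):
        w = 1.0 / (1.0 + nu * tau)
        return -2j * w * z - 0.5 * nu * w * (1.0 - z * z)
    z = 0j
    steps = int(round(tau_max / dtau))
    for i in range(steps):
        t = i * dtau
        k1 = rhs(t, z)
        k2 = rhs(t + 0.5 * dtau, z + 0.5 * dtau * k1)
        k3 = rhs(t + 0.5 * dtau, z + 0.5 * dtau * k2)
        k4 = rhs(t + dtau, z + dtau * k3)
        z += dtau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    zsq = abs(z) ** 2
    return zsq / (1.0 - zsq)  # <n> = |z|^2 / (1 - |z|^2)
```

For $\nu = 0.2$, the integrated particle number reproduces the exact analytic formula to within the integration error, and stays bounded by $\nu^2/(4-\nu^2)$ as expected.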
It is also straightforward to compute the functions $\sigma^2$ and ${\cal J}$ using the approximation for $z$ in \eq{z_adia_vac}; to ${\cal O}(\epsilon)$, they are given by \begin{equation} \sigma^2 = \frac{1}{\omega(t)}\left( 1 + \frac{\epsilon(t_0)}{2}\sin(2\theta(t)) \r) \end{equation} and \begin{equation} {\cal J} = -\frac{\omega(t)}{2} \left( \epsilon(t) - \epsilon(t_0) \cos(2\theta(t)) \r). \end{equation} The expressions for $\sigma^2$ and ${\cal J}$ make it clear that their late time behavior depends on the asymptotic limit of the frequency function; in particular, if $\omega(t) \to 0$ at large times, we have $\sigma^2 \to \infty$ and ${\cal J} \to 0$, so the Wigner function will peak on the $q$-axis (this happens for example in the constant $\epsilon$ case for $\nu \ll 1$). On the other hand, if $\omega(t)\to \infty$ (with $\epsilon(t) \to 0$), then $\sigma^2 \to 0 $ while ${\cal J}$ is oscillatory with an increasing amplitude at late times, representing a Wigner function that ends up peaking on the $p$-axis. In contrast, the $q$-$p$ correlation, which has the form \begin{equation} \langle q p \rangle_{\cal W} = {\cal J}\sigma^{2} \approx -\frac{1}{2} \left( \epsilon(t) - \epsilon(t_0) \cos(2\theta(t)) \r) + {\cal O}(\epsilon^2) \end{equation} is oscillatory in general and remains finite, so the classicality parameter, too, is bounded at late times. For the adiabatically evolved vacuum state, the imaginary part of the effective Lagrangian again leads to the standard result and gives the effective number of particles produced. As for the real part, making use of \eq{z_adia_vac} one has, up to order $\epsilon^2$: \begin{equation} \textrm{Re} L_{eff} ~\approx~ -\frac{1}{4}\frac{\dot \omega}{\omega} Im \left[ -\frac{i}{4} \left( \epsilon(t) - \epsilon(t_0) e^{-2i\theta(t)} \r) \r] ~=~ \frac{\omega}{16} \left[ \epsilon^2 (t) - \epsilon(t) \epsilon(t_0) \cos (2 \theta(t)) \r].
\end{equation} Using \eq{n_ad_vac}, one can rewrite the above expression in terms of the mean particle number, to order $\epsilon^{2}$, as \begin{equation} \textrm{Re} L_{eff} ~\approx~ \omega \langle n \rangle - \frac{\omega}{16} \epsilon^2 (t_0) + \frac{\omega}{16} \epsilon(t) \epsilon(t_0) \cos (2 \theta(t)). \end{equation} The first term in this expression has a simple interpretation. The energy drained from the classical system due to particle production, $-\omega \langle n \rangle$, matches with this term. (Recall that the effective Lagrangian and the effective Hamiltonian differ by a sign in this case.) So, to lowest order in the adiabaticity parameter, one can indeed incorporate the back reaction due to the production of particles using the first term in $\textrm{Re} L_{eff}$. The other two terms will be subdominant when $\epsilon(t_0)$ is sufficiently small and will represent transient effects. In this context, to the lowest non-vanishing order in $\epsilon$, the real part of the effective Lagrangian incorporates information about particle creation. \subsection{Non-adiabatic evolution at late times} \label{sec:nonadiab} We would like to compare the features suggested by the above analysis with the alternative scenario of adiabaticity being {\it violated} in the late time limit. All choices for $\omega(t)$ which monotonically fall off faster than $1/t$ for large $t$ would pass into such a phase, and for such functions, the adiabaticity parameter grows without bound at late times: $\lim_{t \to \infty} \epsilon(t) = \infty$. Since our focus is on understanding the influence of particle creation on the evolution towards classicality, the case of increasing particle number $\langle n \rangle$ is considered.
The approximate {\it asymptotic} form of the solution which has this feature, is given (see Appendix D for an outline of the derivation) to the lowest order by \begin{equation} z = ( 1 + \Delta ) e^{i \theta},\nonumber \end{equation} with \begin{equation} \Delta ~\sim~ c_{1} \omega \nonumber ~,\qquad \theta ~\sim~ \pi - c_{0} \omega - 2 \omega t \label{delta_theta} \end{equation} in which $c_{0},c_{1}$ are real constants. With these analytic expressions, one can determine the asymptotic form of the mean particle number as well as the correlation: \begin{equation} \langle n \rangle \approx - \frac{1}{ 2 c_{1} \omega} \label{approx_n} \end{equation} and \begin{equation} 2 \langle qp \rangle_{\cal W} ~=~ {\cal J} \sigma^{2} ~\simeq~ - \frac{\sin \theta}{\Delta}~ \approx~ -\frac{c_{0}}{c_{1}} - \frac{2 t }{c_{1}}. \label{approx_S} \end{equation} Since it has been assumed that $\omega(t) \to 0$ at late times, the particle number clearly diverges. Further, $\langle qp \rangle_{\cal W}$ is an increasing function of time in the $t \to \infty$ limit. Thus, steady particle creation (i.e. growth in the particle number $\langle n \rangle$) at late times is accompanied by a monotonic growth of the magnitude of the $q$-$p$ correlation, and so the classicality parameter ${\cal S}$ ends up saturating at unity. For this case, the late time approximations for the imaginary and real parts of the effective Lagrangian, making use of \eq{delta_theta}, are given by \begin{equation} \textrm{Im} L_{eff} \approx \frac{1}{4} \frac{\dot{\langle n \rangle}}{n} \sim -\frac{1}{4} \frac{\dot \omega}{\omega}\qquad \textrm{Re} L_{eff} \approx - \frac{1}{2} \dot{\omega} t \end{equation} from which it is evident that the imaginary part of the effective {\it action} blows up as rapid particle production occurs. In comparison, the real part of $L_{eff}$ falls to zero because of our constraint on the asymptotic form of $\omega(t)$, and $\textrm{Re} A_{eff}(t)$ has a finite asymptotic limit. 
\newline Based on the features revealed by the above two cases, one can begin to delineate a relationship between the production of particles (increasing $\langle n \rangle$) and the emergence of classical correlations in terms of the classicality parameter. We would also like to see how the corresponding Wigner function behaves in these two limits. In order to proceed with this, as well as to set the above ideas on a firmer footing, we shall next analyze two toy examples which will serve to demonstrate the key aspects. We will resort to a numerical approach for tracking the time evolution since it suffices for our purpose, which is to compare the two alternative late time scenarios and verify the validity of our approximations. These examples will also serve as prototypes for more realistic models, like those appearing in field theory in cosmological or electromagnetic backgrounds, to be discussed in Paper II~\cite{gm}. Let us begin with an example that provides an illustration of the case of section~\ref{sec:adiab} considered above. \subsection*{Example 1: Adiabatic evolution at late times} We choose for the time dependent frequency the form $\omega_{1}(t) = \lambda \sqrt{1+\lambda^{2} t^{2}}$ ($-\infty < t < \infty$). This frequency function is symmetric in time $t$. It is physically relevant too, as it appears in the study of a complex scalar field evolving in the background of a constant, classical electric field (see, e.g., Refs.~\cite{itz,efield}). For this function, the adiabaticity parameter, given by $\epsilon(t)= \lambda t/(1+\lambda^{2} t^{2})^{3/2}$, vanishes in the asymptotic limits ($t \to \pm \infty$), so in- and out- vacuum states can be defined in the adiabatic regions. If one starts out in the in- vacuum state (defined by $\langle n \rangle \rightarrow 0$ as $t \to -\infty$), the non-equivalence of the two asymptotically defined vacua implies that the state will appear populated with quanta measured with respect to the out- vacuum at late times.
The evolution of the system can be conveniently followed in our Schr\"{o}dinger picture formalism. The solution to the equation for $z$, \eq{eq:z}, determines the complete time evolution of the quantum state (with the initial condition $z=0$ set well within the early time adiabatic phase at some moment $|t_{0}| \gg \lambda^{-1}$ with $t_{0} < 0$). The numerical solutions for the mean particle number and the classicality parameter as functions of the dimensionless variable $\tau = \lambda t$ are plotted in figures~\ref{o1n} and \ref{o1S}. As anticipated, the particle number $\langle n \rangle$ starts from zero in the distant past, and saturates at a constant value in the $\tau \to \infty$ limit. However, in the intermediate region (where the adiabaticity parameter $\epsilon$ is appreciably non-zero), $\langle n \rangle$ is characterized by large oscillations superimposed on a steadily increasing mean, which can be interpreted as a continual interplay of creation and annihilation of quanta. The amplitude of these oscillations gets progressively diminished as the evolution proceeds into the late time adiabatic regime, and the mean settles at a constant value. The oscillations in $\langle n \rangle$ conform to the results we have obtained in earlier sections using analytic approximations. The classicality parameter ${\cal S}$ stays very close to zero at early times, but in the late time adiabatic regime, ends up oscillating about ${\cal S}=0$, as our approximate analysis in section~\ref{sec:adiab} suggested. As is evident from the plot, the time-averaged mean of this oscillatory variation is zero, but the {\it variance} of ${\cal S}$ has a non-zero finite value that stays nearly constant in the late time limit. One can possibly interpret this as an emergence of correlations in comparison with the state at early times. One can also directly visualize the evolution of $z$ in the complex plane. The plot for the complex trajectory of $z$ is shown in figure~\ref{o1z}.
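The saturation just described can be reproduced with a short script. The sketch below (Python, illustrative only) works in units with $\lambda = 1$, starts from the instantaneous vacuum $z = 0$ at $\tau_0 = -40$ (a good proxy for the adiabatic in-vacuum, since $\epsilon(\tau_0)$ is already tiny there), and uses $\langle n \rangle = |z|^2/(1-|z|^2)$. As an outside cross-check not quoted in the text, the exactly solvable parabolic profile $\omega^2 = m^2 + (qEt)^2$ with $m^2/qE = 1$, which is what $\omega_1^2 = \lambda^2 + \lambda^4 t^2$ amounts to, has the standard asymptotic Bogolyubov result $|\beta|^2 = e^{-\pi}$:

```python
import math

def n_late(tau0=-40.0, tau1=40.0, dtau=1e-3):
    """RK4 for dz/dtau = -2 i w z + (w'/2w)(1 - z^2), with
    w = sqrt(1 + tau^2), so w'/2w = tau / (2 (1 + tau^2));
    initial condition z(tau0) = 0 (instantaneous vacuum)."""
    def rhs(tau, z):
        w = math.sqrt(1.0 + tau * tau)
        half_dlnw = tau / (2.0 * (1.0 + tau * tau))
        return -2j * w * z + half_dlnw * (1.0 - z * z)
    z = 0j
    steps = int(round((tau1 - tau0) / dtau))
    for i in range(steps):
        t = tau0 + i * dtau
        k1 = rhs(t, z)
        k2 = rhs(t + 0.5 * dtau, z + 0.5 * dtau * k1)
        k3 = rhs(t + 0.5 * dtau, z + 0.5 * dtau * k2)
        k4 = rhs(t + dtau, z + dtau * k3)
        z += dtau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    zsq = abs(z) ** 2
    return zsq / (1.0 - zsq)  # <n> = |z|^2 / (1 - |z|^2)
```

The late-time value agrees with $e^{-\pi} \approx 0.043$ up to small residual oscillations and the finite-time cutoff, in line with the saturation seen in figure~\ref{o1n}.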
$z$ starts from zero and after meandering around for a while in the intermediate phase, ultimately ends up, for large $\tau$, on the trajectory $|z|=$ constant, corresponding to a constant mean particle number. The oscillation of the imaginary part of $z$, as it circles around, is reflected in the late-time variation of the classicality parameter; this can be understood from the relation between $z$ and ${\cal S}$, \eq{J}. \begin{figure}[h] \begin{center} \subfigure[ ]{\label{o1n}\includegraphics[width=6cm,angle=0.0]{n_3.png}} \subfigure[ ]{\label{o1S}\includegraphics[width=6cm,angle=0.0]{S_3.png}} \subfigure[ ]{\label{o1z}\includegraphics[scale=0.5]{z_3.png}} \end{center} \caption{Plots of the particle number $\langle n \rangle$, classicality parameter ${\cal S}$ and the complex trajectory of $z$ with time for $\omega_{1}(t)$ ($\epsilon \to 0$ at late times). $\langle n \rangle$ saturates at a finite value at late times, and ${\cal S}$ ends up oscillating about zero with a finite amplitude. The variation of ${\cal S}$ reflects the complex trajectory of $z$, which at late times is a circle centered on the origin of the complex plane.} \label{o1} \end{figure} \begin{figure}[h] \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{sigma_3.png}} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{J_3.png}} \caption{Variation of the functions $\sigma ^2$ and ${\cal J}$ for $\omega_{1}(t)$. The Wigner function starts uncorrelated and peaked on the $p-$axis, and again at late times ends up peaking on the $p-$axis. 
} \label{o1W} \end{figure} \begin{figure}[h] \begin{center} \subfigure[ ]{\includegraphics[width=5cm,angle=0.0]{n_2.png}} \subfigure[ ]{\includegraphics[width=5cm,angle=0.0]{S_2.png}} \subfigure[ ]{\includegraphics[width=4cm,angle=0.0]{z_2.png}} \subfigure[ ]{\includegraphics[width=5cm,angle=0.0]{sigma_2.png}} \subfigure[ ]{\includegraphics[width=5cm,angle=0.0]{J_2.png}} \end{center} \caption{Evolution of the particle number $\langle n \rangle$, classicality parameter ${\cal S}$, complex trajectory of $z$ and the Wigner function variables $\sigma ^2$ and ${\cal J}$ with time for the frequency function $\omega_{2}(t) = \lambda [a + b~ \tanh (\lambda t)]$ ($\epsilon \to 0$ at late times) for the choice $a=2$ and $b=1$. The variation of $\langle n \rangle$, ${\cal S}$ and $z$ is qualitatively similar to that for the frequency function $\omega_1(t)$ considered earlier.} \label{o2} \end{figure} Let us look at the spreading of the Wigner function in the asymptotic limits. As is clear from the plots for $\sigma^2$ and ${\cal J}$ in figure~\ref{o1W}, the Wigner function starts uncorrelated and sharply peaked on the vertical $p$ axis, and again ends up getting concentrated on the $p$ axis at late times. This may be compared with the corresponding classical trajectory, which, in both the asymptotic regions, has the approximate form \begin{equation} q_c ~\sim~ \frac{1}{(\lambda + \tau^{2})^{1/4}} \textrm{Re} \left[ {\cal C} e^{i \int^{\tau} \sqrt{\lambda+\tau'^{2}} d \tau'} \r]~\approx~\frac{1}{\sqrt{|\tau|}}\left(1 - \frac{\lambda}{4 \tau^2}\r) \textrm{Re} \left[ {\cal C} e^{i \left(\frac{\tau^{2}}{2} + \frac{\lambda}{2} \ln|\tau| \r)} \r] \end{equation} with the momentum being \begin{equation} p_c = \dot{q}_c \approx - \frac{q_c}{2 |\tau|} - \left( \sqrt{|\tau|} + \frac{\lambda}{4 |\tau|^{3/2}} \r) \textrm{Re} \left( i~ {\cal C} e^{i \int^{\tau} \sqrt{\lambda+\tau'^{2}} d \tau'} \r).
\end{equation} It is clear that in both limits, $q_c \to 0$ and the trajectory ends up on the $p$ axis, coinciding with the Wigner function's behavior. The tracking of the classical trajectory by the Wigner function, thus, {\it is not} quite the same as the evolution of the classicality parameter, which, in contrast, displays an asymmetry in time and more closely tracks the particle number. This example also demonstrates that interpreting classicality merely in terms of peaking on the classical trajectory may be suspect, since this can happen even when the oscillator is in a near-vacuum state [which is the case at early times]. But the variation of ${\cal S}$ suggests that the state gets appreciably more correlated in the far future [though remaining oscillatorily zero] {\it in relation} to early times [when it stays very nearly zero]. \newline To verify the numerical work, we have repeated the above analysis for another frequency function, $\omega_{2}(t) = \lambda [a + b~ \tanh (\lambda t)]$ ($a > b > 0$), which has similar early/late time limits. The analysis of this model proceeds exactly as in the above example. The time evolution is almost identical too, particularly with regard to the particle number and the $q$-$p$ correlation, as is clear from the plots in figure~\ref{o2}. In this case, too, the mean particle number exhibits oscillations that get suppressed as the evolution turns adiabatic for large $\tau$. This shows that our conclusions are fairly generic. \newline We now move on to consider a third toy example that is expected to illustrate the {\it alternative} scenario, that of strongly non-adiabatic evolution in the late time limit. \subsection*{Example 2: Non-adiabatic evolution at late times} Consider the time dependent frequency having the form $\omega_{3}(t) = \lambda / (1+\lambda^{2} t^{2})$. This frequency function presents a sharp contrast to our previous examples.
For this case, the adiabaticity parameter $\epsilon(t) = -2 \lambda t$ is small for $|t| \leq \lambda^{-1}$, allowing a reasonable definition of an adiabatic vacuum in the region around $t = 0$. But the magnitude of $\epsilon$ continues to grow linearly with time, and for large $t$, an adiabatic vacuum clearly cannot be defined. Thus, although the usual definition of particles based on an asymptotically adiabatic out-vacuum is not possible here, our choice of the particle number $\langle n \rangle$ provides a very reasonable alternative to quantify the state's particle content in such a situation. We set the oscillator in the instantaneous ground state at $t=0$ and track its evolution for $t>0$. The plots corresponding to this solution for $z$ are depicted in figures~\ref{o3n}--\ref{o3z}. Our approximations given by Eqs.~(\ref{approx_n}) and (\ref{approx_S}) are borne out, at least qualitatively, by the numerical plots, which show that the particle number $\langle n \rangle$ as well as the phase space correlation $\langle qp \rangle_{{\cal W}}$ grow rapidly without bound at late times. The state can thus be regarded as being driven to become strongly correlated with particle creation. This is also in contrast to the [asymptotically] adiabatic evolution for the frequency function $\omega_{1}(t)$ [or of $\omega_{2}(t)$] which was taken up earlier, where ${\cal S}$ stays oscillatorily zero in the absence of particle creation at late times. \begin{figure}[h] \subfigure[ ]{\label{o3n}\includegraphics[width=6cm,angle=0.0]{n_1.png}} \subfigure[ ]{\label{o3S}\includegraphics[width=6cm,angle=0.0]{S_1.png}} \subfigure[ ]{\label{o3z}\includegraphics[scale=0.6]{z_1.png}} \caption{Variation of $\langle n \rangle$, the classicality parameter ${\cal S}$ and $z$ with time for $\omega_{3}(t)$ ($\epsilon \to \infty$ as $t \to \infty$).
$\langle n \rangle$ monotonically grows with time and is unbounded; ${\cal S}$ also grows sharply with increasing particle number and saturates at unity at late times. The trajectory of $z$ is confined to a quadrant, and the state ends up at the point $z=-1$ in the $t \to \infty$ limit. } \label{o3} \end{figure} The variation of $z$ in the complex plane, shown in figure~\ref{o3z}, brings out the contrast with the previous case even more vividly. Now $z$ starts from the origin, stays within a quadrant, and ends up going to the limiting value of $z=-1$. The imaginary part of $z$ does not show oscillations, but rises to a maximum before falling off in the large $t$ limit. Let us also look at the spreading of the Wigner function at late times. From the plots for $\sigma^{2}$ and ${\cal J}$ in figure~\ref{o3W}, it is clear that the Wigner function, starting from being uncorrelated but fairly spread out in phase space at $t=0$, ends up sharply peaking on the $q$ axis as $t \to \infty$. The classical trajectory, in comparison, is given in terms of the exact analytical solution to \eq{mueq}, which is expressible in terms of standard functions~\cite{grad}: \begin{equation} q_c = \textrm{Re} \left[ {\cal A}~ e^{i \sqrt{2}~\tan^{-1} \tau} \sqrt{1 + \tau^{2}} \r] \end{equation} with \begin{equation} p_c = \dot{q}_c = \textrm{Re} \left[ {\cal A}~ \frac{(i \sqrt{2} + \tau)}{\sqrt{1+\tau^{2}}} e^{i \sqrt{2}~\tan^{-1} \tau} \r] \end{equation} so that at late times, \begin{equation} \frac{p_c}{q_c} = \frac{\tau}{1+\tau^{2}} - \frac{\sqrt{2}}{1+\tau^{2}} \tan \left( \phi + \sqrt{2}~\tan^{-1} \tau \r) \stackrel{\tau \to \infty} {\longrightarrow} 0, \end{equation} where $\phi$ is the phase of ${\cal A}$. Thus, the classical trajectory too ends up on the $q$ axis, being tracked by the behavior of the Wigner function. In this instance, the interpretations of the state approaching classicality based on {\it both} peaking on the classical trajectory {\it and} the late time growth of $\langle q p \rangle_{{\cal W}}$ clearly match.
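As an illustrative cross-check of this late-time behavior (our own numerical sketch, not part of the paper's computations), the following Python snippet integrates the classical mode equation \eq{mueq} for $\omega_3$ with a fourth-order Runge--Kutta step. It assumes $\lambda = 1$, uses the appendix relation $z = (\omega + i\dot{\mu}/\mu)/(\omega - i\dot{\mu}/\mu)$, the instantaneous-ground-state initial condition $\mu(0)=1$, $\dot{\mu}(0) = i\,\omega(0)$ (so that $z(0)=0$, as in the text), and $\langle n \rangle = |z|^{2}/(1-|z|^{2})$, which follows from $\langle n \rangle = (e^{b}-1)^{-1}$ with $|z|^{2} = e^{-b}$:

```python
def omega3(t):
    # frequency function omega_3(t) = lam/(1 + lam^2 t^2), with lam = 1 here
    return 1.0 / (1.0 + t * t)

def evolve_z(t_end, dt=2e-3):
    """Integrate mu'' + omega^2 mu = 0 by RK4 from the instantaneous ground
    state at t = 0 (mu = 1, mu' = i*omega(0), so that z(0) = 0), and return
    z = (omega + i mu'/mu)/(omega - i mu'/mu) and <n> = |z|^2/(1 - |z|^2)."""
    mu, v, t = 1.0 + 0.0j, 1j * omega3(0.0), 0.0

    def rhs(t, m, vv):
        return vv, -omega3(t) ** 2 * m

    for _ in range(int(round(t_end / dt))):
        k1m, k1v = rhs(t, mu, v)
        k2m, k2v = rhs(t + dt / 2, mu + dt / 2 * k1m, v + dt / 2 * k1v)
        k3m, k3v = rhs(t + dt / 2, mu + dt / 2 * k2m, v + dt / 2 * k2v)
        k4m, k4v = rhs(t + dt, mu + dt * k3m, v + dt * k3v)
        mu += dt / 6 * (k1m + 2 * k2m + 2 * k3m + k4m)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    w = omega3(t)
    r = 1j * v / mu
    z = (w + r) / (w - r)
    return z, abs(z) ** 2 / (1.0 - abs(z) ** 2)
```

Run out to $\tau \sim 200$, this reproduces the qualitative late-time behavior quoted above: $z$ drifts toward $-1$ (staying inside the unit circle) while $\langle n \rangle$ grows without bound.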
\begin{figure} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{sigma_1.png}} \subfigure[ ]{\includegraphics[width=6cm,angle=0.0]{J_1.png}} \caption{Variation of the functions $\sigma^{2}$ and ${\cal J}$ for $\omega_{3}(t)$. The Wigner function starts uncorrelated in $q$ and $p$ but spread out, and ends up peaking on the $q$-axis at late times. } \label{o3W} \end{figure} \begin{figure} \subfigure[ ]{\label{o3_no}\includegraphics[width=6cm,angle=0.0]{n_omega_1.png}} \subfigure[ ]{\label{o3_s/t}\includegraphics[width=6cm,angle=0.0]{S_by_t_1.png}} \caption{Variation of the functions $\langle n \rangle \omega $ and ${\cal S} / \tau$ for the non-adiabatic case. These quantities saturate at constant values in the $t \to \infty$ limit, in agreement with our late time analytic approximations in \eq{approx_n} and \eq{approx_S}.} \label{o3_no_s/t} \end{figure} The plots in figure~\ref{o3} follow our expectations based on the approximate analysis carried out in section~\ref{sec:nonadiab} at least at the qualitative level, but we would also like to check if our late time analytic approximations given by Eqs.~(\ref{approx_n}) and (\ref{approx_S}) hold. From the expressions for $\langle n \rangle$ and ${\cal S}$, it can be seen that at late times ($t \to \infty$), the variables $\langle n \rangle \omega $ and ${\cal J} \sigma^{2}/ t$ approach constant limiting values. We have plotted the numerical solutions for these quantities as functions of time ($\tau$) in figures~\ref{o3_no} and \ref{o3_s/t}. Indeed, they reach steady-state values for large $\tau$, in clear agreement with our asymptotic analysis. \section{Discussion} \label{sec:diss} In this paper, we have outlined a general formalism to analyze the dynamics of a time dependent oscillator, which provides a reasonable means of quantifying the physical content of the evolving quantum state and of addressing the issue of quantum-to-classical transitions.
Among other things, our definition of the time-dependent particle number $\langle n \rangle$ based on instantaneous eigenstates is a very reasonable one to adopt in the absence of adiabatically definable in and out vacua. (The instantaneous particle concept has been considered in field theory in the literature before~\cite{txts1}, and some `difficulties' have been pointed out, e.g., the prediction of vastly more particle creation than expected on physical grounds in certain situations; nevertheless, we believe that in the present context of an oscillator with \emph{general} time dependence, its suitability outweighs its disadvantages.) An interesting feature suggested by our approximate analysis, in particular of adiabatic evolution, is the possible incorporation of information about particle production in the \emph{real} part of the effective Lagrangian, which has implications for the study of back-reaction in semiclassical theory, where one normally considers only the real part of $L_{eff}$. We next took up a detailed study of various illustrative toy examples, in an attempt to understand more clearly the connection between particle creation and the approach to classicality based on the Wigner function.
We take stock by summarizing the main results of our analysis (in particular the late time behavior of various quantities constructed out of the quantum state, starting from an instantaneous vacuum at the initial moment) of the different examples in tabular form below: \vspace{0.5cm} \begin{center} {\scriptsize \begin{tabular}{|c|c|c|c|c|c|} \hline & & & & & \\ & Constant adiabatic & Constant adiabatic & Adiabatic at late times & Adiabatic throughout& Non-adiabatic at late\\ & parameter $|\epsilon| \ll 1$ & parameter $|\epsilon| \gg 1$ & ($\epsilon(t) \to 0$ as $t \to \infty$) & (starting from vacuum) & times ($\epsilon(t) \to \infty$ as $t \to \infty$) \\ & & & & &\\ \hline & & & & &\\ $\omega(t)$ & $\frac{\lambda}{1+\lambda \nu t}$ \quad ($\epsilon=-\nu$) & $\frac{\lambda}{1+\lambda \nu t}$ \quad ($\epsilon=-\nu$) & $\lambda \sqrt{1+\lambda^2 t^2}$ & $|\epsilon(t)|\ll1$ at all times & $\frac{\lambda}{(1+\lambda^2 t^2)}$\\ & & & & & \\ \hline & & & & & \\ $ \langle n \rangle $ & Oscillatory at all times & Monotonically & Saturates at a finite & Remains finite and of & Monotonically increases\\ & with constant amplitude & increases with time & value asymptotically & order $\epsilon^2$ at all times & with time \\ & & & & & \\ \hline & & & & & \\ $z(t)$ & A circular trajectory in& Stays within a quadrant & Ends up on a circular & Remains confined to a & Stays within a quadrant\\ &the complex plane with & and ends up at a point & trajectory centered on & finite region around the & and ends up at the \\ & $Im z \geq 0$ at all times& with $|z|=1$ &the origin & origin for general $\epsilon(t)$ & point $z=-1$ \\ & & & & & \\ \hline & & & & & \\ ${\cal S}(t)$ & Oscillatory at all times & Grows and saturates & Oscillatorily zero with & Remains finite and of & Grows and saturates\\ & with nearly constant& at unity at late times & finite amplitude at & order $\epsilon$ at all times & at unity at late times\\ & finite amplitude& & & &\\ & & & & & \\ \hline & & & & & \\ ${\cal 
W} (q,p)$ & Starts spread out & Starts spread out and & Starts peaked on the $p$- & Ends up peaking on either & Starts spread out and \\ & and ends up peaking & ends up peaking on & axis and again ends up & $q$- or $p$-axis depending on & ends up peaking on the \\ &on the $q$-axis &the $q$-axis & peaking on the $p-$axis & late time limit of $\omega(t)$ & $q$-axis \\ & & & & &\\ \hline \end{tabular} } \end{center} \vspace{0.5cm} We began by examining an oscillator with constant adiabaticity parameter, a model that is capable of providing a prototype for adiabatic as well as non-adiabatic evolution. Going from small to large values of $|\epsilon|$, there is a clear variation in the features exhibited by the evolving quantum state. In the adiabatic regime, particle creation is suppressed and the mean particle number, starting from an instantaneous vacuum, maintains an oscillatory profile. The classicality parameter, taken as a measure of phase space correlations, also starts from zero (corresponding to the instantaneous vacuum) and remains bounded and oscillatory. This behavior is sharply contrasted by that in the extreme non-adiabatic case, where the mean number of particles as well as the $q$-$p$ correlation diverge at late times (although the energy saturates at a finite limiting value). We subsequently analyzed three toy models that provide examples of frequency functions which vary adiabatically ($|\epsilon| \to 0$) or non-adiabatically ($|\epsilon| \gg 1$) at late times. These models provide adequate confirmation of the validity and generality of the features revealed by the constant $\epsilon$ case. Our examples also demonstrate that peaking of the Wigner distribution on the classical trajectory is a rather general feature and occurs whenever the frequency goes to zero or infinity, and is quite independent of the particle production (this is particularly clear in the cases of constant $|\epsilon| \ll 1$ and the frequency function $\omega_3(t)$). 
The question of interpreting classical behavior based on the Wigner function involves considering two possible requirements: peaking on the corresponding classical phase space trajectory and emergence of correlation between $q$ and $p$. To repeat what we have stated before, our toy examples provide indications that an interpretation based solely on the former requirement can lead to ambiguities. (Another example of such behavior is found in the cosmological context to be discussed in Paper II~\cite{gm}: when the evolution of scalar field modes in a de Sitter universe is considered, the Wigner function can be shown to get peaked on the classical trajectory not just at late times, but also in the asymptotic past when the modes are started off in the Bunch-Davies vacuum state.) In light of this fact, we believe that the correlation function ${\cal S}$ comes in as a useful additional variable to quantify classicality. The Wigner function for a vacuum state is expressible as an uncorrelated product of the form $f(q)g(p)$, and for this state, ${\cal S}$ is indeed zero. Once one chooses to concentrate on the variation of the $q$-$p$ correlation, two possible asymptotic limits can be identified. When the system evolves adiabatically, the correlation function maintains an oscillatory profile. So suppression of particle creation keeps the correlation from varying monotonically (and in fact it can even average out to zero in some cases, like in the example exhibiting late time adiabaticity). If, on the other hand, the system violates adiabaticity at late times (with $\omega,\omega t \to 0$ as $t \to \infty$), a large amount of particle production occurs, and this is accompanied by sharp growth in the classicality parameter. This is in reasonable agreement with, and to some extent a generalization of, previous analyses in the literature~\cite{infl2,squeeze,class} associating the notion of classicality with particle creation in the context of inflationary cosmology.
We would here like to clarify our point of view regarding the interpretation of classicality in relation to the other approaches found in the literature that try to explain emergence of classical properties in terms of the behavior of Wigner functions. It goes without saying that the real world is fundamentally quantum mechanical, and the idea of classical behavior is essentially a matter of interpretation; as a consequence, there is no reason to expect a single criterion to determine classicality to be applicable in all situations. Different criteria are expected to be appropriate depending upon the context. For a normal oscillator, a WKB state is an energy eigenstate, being a solution of the time-independent Schrodinger equation. For the WKB state, it can be shown that the corresponding Wigner function is peaked on the curve $P=f(Q)$, which represents the classical phase space trajectory~\cite{wig2}, and thus shows a precise correlation between position and momentum. The Wigner function in this case does not involve explicit time dependence, and the above-mentioned feature is also naturally independent of time. The other state which has been analyzed in this context is a coherent state which is explicitly time-dependent and for which $\vert \psi \vert ^2$ is actually peaked about a classical trajectory. For such a state, the Wigner function is expressible as an uncorrelated product of gaussians of the form ${\cal W}(q,p,t)=A(q,t)B(p,t)$ and is time-dependent, but the \emph{peak} follows a classical trajectory in phase space~\cite{infl2,wig2}. So, although at any given moment, the Wigner function is not peaked on a classical trajectory (as happens in the case of the WKB stationary state mentioned above), the peak traces out a classical trajectory over time. 
The gaussian state we are working with is, firstly, explicitly time dependent (and so is the corresponding Wigner function) and the `peak', corresponding to the maximum value of ${\cal W}$, remains fixed at the origin ($q=p=0$), since we have chosen a quantum state having zero mean. It is therefore more appropriate to consider the way the Wigner function is \emph{concentrated} in some region of phase space (by, say, tracking the behavior of a Wigner function ellipse, i.e. a contour corresponding to a chosen value of ${\cal W}$). The Wigner distribution is \emph{not} concentrated on the classical trajectory at all times in the manner a stationary WKB state is, and does not track the classical path (unlike the coherent state). But in some particular limit, like at very late times, one \emph{can} make a clear-cut correspondence between the concentration of the Wigner function, based on the orientation and shape of an ellipse, and the behavior of a general classical phase space trajectory in that limit. (Based on a careful analysis of the graphs for the evolution of the Wigner function ellipses and the classical trajectory, such a correspondence does not appear to hold at intermediate times.) It is only in this sense that the Wigner function gets `peaked' on the classical trajectory in some limit, and our interpretations in the various examples studied have been based on making this kind of correspondence. The essential features revealed by our analysis of the toy models are expected to hold more generally, at least at the qualitative level. In Paper II~\cite{gm}, we shall consider examples from field theory in cosmological and electric field backgrounds. \section*{ACKNOWLEDGEMENTS} G.M. is supported by the Council of Scientific $\&$ Industrial Research, India. 
\section*{APPENDIX} \subsection{Derivation of the generating function for $\psi (q,t)$} \label{app:gfn} The amplitude for the oscillator to be in the $n$th instantaneous eigenstate $\phi_{n}(t)$ at time $t$ is non-zero only for even $n$, since $\psi$ is an even function, and is given by \begin{equation} C_{n} = \int_{-\infty}^{\infty} {dq} \phi_{n}^{*}(q,t) \psi(q,t) = N \left(\frac{m \omega}{\pi}\r)^{1/4}\frac{1}{\sqrt{2^{n} n!}}\int_{-\infty}^{\infty} {dq} H_{n}(\sqrt{m \omega}q) e^{-\left(R+\frac{m \omega}{2}\r)q^{2} + i\int_{t_0}^{t}\left(n + \frac{1}{2} \r) \omega(t')dt' }. \end{equation} This can be evaluated by making use of the following generating function for Hermite polynomials~\cite{arf}: \begin{equation} e^{-t^{2} + 2tx} = \sum_{n=0}^{\infty} \frac{H_{n}(x) t^{n}}{n!}. \end{equation} Substituting $x=\sqrt{m \omega}q$, multiplying both sides by $\exp[-(R+ m \omega/2)q^{2}]$, and then integrating w.r.t. $q$ finally gives \begin{equation} \sqrt{\frac{\pi}{(R+\frac{m \omega}{2})}} e^{t^2\left(\frac{m \omega}{R+m \omega/2} - 1\r)} = \sum_{n=0}^{\infty} \frac{t^n I_{n}}{n!}, \end{equation} where $I_{n} \equiv \int_{-\infty}^{\infty} dq\, H_{n}(\sqrt{m \omega}q)\, e^{-\left(R+\frac{m \omega}{2}\r)q^{2}}$. Equating coefficients of equal powers of $t$ on both sides gives \begin{equation} I_{2n} = (2n)!\sqrt{\frac{\pi}{(R+\frac{m \omega}{2})}} \frac{\left( \frac{m \omega}{R+m \omega/2} - 1 \r)^n}{n!} \qquad (n = 0, 1, 2,...) \end{equation} and \begin{equation} C_{2n} = N \left(\frac{m \omega}{\pi}\r)^{1/4}\frac{I_{2n}}{\sqrt{2^{2n} (2n)!}}~e^{i\int_{t_0}^{t}\left(2n + \frac{1}{2} \r) \omega(t')dt'} \label{C_2n_Spic} \end{equation} while $I_{n}$ vanishes for odd $n$.
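As a quick numerical sanity check of the closed form for $I_{2n}$ (our own check, not from the paper; it uses the illustrative real values $m=\omega=1$ and $R=0.7$, whereas $R$ is in general complex), one can compare Simpson quadrature of the Hermite--Gaussian integral against the formula above:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomials via H_{k+1} = 2x H_k - 2k H_{k-1}
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def I_numeric(n, a, m_omega=1.0, L=12.0, steps=24000):
    # Simpson quadrature of H_n(sqrt(m*omega) q) exp(-a q^2) over [-L, L]
    h = 2.0 * L / steps
    total = 0.0
    for i in range(steps + 1):
        q = -L + i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * hermite(n, math.sqrt(m_omega) * q) * math.exp(-a * q * q)
    return total * h / 3.0

def I_closed(two_n, a, m_omega=1.0):
    # I_{2n} = (2n)! sqrt(pi/a) (m*omega/a - 1)^n / n!, with a = R + m*omega/2
    n = two_n // 2
    return (math.factorial(2 * n) * math.sqrt(math.pi / a)
            * (m_omega / a - 1.0) ** n / math.factorial(n))
```

The quadrature also confirms that $I_{n}$ vanishes for odd $n$, as the integrand is then odd in $q$.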
Therefore, the probability for the oscillator to be in the eigenstate $\phi_{2n}$ at time $t$ is simply \begin{equation} P_{2n} = |C_{2n}|^{2} = P \frac{(2n)!}{(n!)^2} \frac{|z|^{2n}}{2^{2n}}, \end{equation} where \begin{equation} z = \left( \frac{\omega + \frac{i}{\mu}\frac{d \mu }{dt}}{\omega -\frac{i}{\mu}\frac{d\mu}{dt}}\r) \end{equation} using \eq{z}, and \begin{equation} P = \frac{|N|^2 \sqrt{\pi m \omega}}{|R+\frac{m \omega}{2}|}. \end{equation} The generating function for this probability distribution is defined as \begin{equation} G(x) \equiv \sum_{n=0}^{\infty} P_{2n} x^{n} = P \sum_{n=0}^{\infty} \frac{(2n)!}{(n!)^2} \frac{|z|^{2n}}{2^{2n}} x^{n}. \end{equation} It can be evaluated using the following relation~\cite{arf}: \begin{equation} \sum_{n=0}^{\infty} \frac{(2n)!}{(n!)^2}\frac{X^n}{2^{2n}} = \left( 1 - X \r)^{-1/2} \end{equation} which gives (setting $x \vert z \vert ^2 = X$): \begin{equation} G(x) = P \left( 1-x|z|^2 \r)^{-1/2}. \end{equation} And finally, since the total probability given by $G(1)$ is unity, we have \begin{equation} P = \sqrt{1-|z|^2}. \label{prob1} \end{equation} \subsection{Some properties of the probability distribution for the gaussian state} \label{app:p_d} Consider the probability distribution of the particle number for our gaussian quantum state, given by \eq{genfn1}. Setting $|z|^2 = \exp{(-b)}$ and $x=\exp{(-\lambda)}$, the generating function in \eq{genfn1} can be written as \begin{equation} G(\lambda) = \sqrt{\frac{1-e^{-b}}{1-e^{-b - \lambda}}} . \label{genfn2} \end{equation} (It may be remembered that this is a generating function for {\it pairs} of particles, i.e., the $k$th term in the series expansion in powers of $\exp{(-\lambda)}$ gives the probability for $k$ {\it pairs}.) In terms of $b$, the mean particle number has the form $\langle n \rangle = (\exp{(b)}-1)^{-1}$, which is the same as for a Planckian distribution specified by a temperature $b^{-1}$.
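The normalization and the moments of this distribution are easy to verify numerically. The short Python check below (ours, not from the paper; it builds $P_{2k}$ through the term ratio $P_{2k}/P_{2k-2} = \frac{(2k)(2k-1)}{4k^{2}}|z|^{2}$, which follows from the closed form above) confirms that the $P_{2k}$ sum to unity and that $\langle n \rangle = 2\sum_{k} k\, P_{2k} = |z|^{2}/(1-|z|^{2})$:

```python
from math import sqrt

def pair_probabilities(z_abs2, kmax=400):
    """P_{2k} = sqrt(1-|z|^2) (2k)!/(k!)^2 (|z|^2/4)^k, built via term ratios
    to avoid huge factorials; kmax = 400 is ample for |z|^2 well below 1."""
    term = sqrt(1.0 - z_abs2)          # P_0 = P = sqrt(1 - |z|^2)
    probs = [term]
    for k in range(1, kmax):
        term *= (2 * k) * (2 * k - 1) / (k * k) * (z_abs2 / 4.0)
        probs.append(term)
    return probs

z2 = 0.6                               # sample value of |z|^2 < 1
probs = pair_probabilities(z2)
total = sum(probs)                                 # should be 1, since G(1) = 1
n_mean = 2.0 * sum(k * p for k, p in enumerate(probs))    # <n> = 2 sum_k k P_2k
n_sq = 4.0 * sum(k * k * p for k, p in enumerate(probs))  # <n^2> = 4 sum_k k^2 P_2k
```

The same sums also reproduce the dispersion $(\Delta n)^2 = 2\langle n \rangle \left( \langle n \rangle + 1 \r)$ quoted below.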
It is also straightforward to show that \begin{equation} \langle n^2 \rangle = 4 \sum_{k=0}^{\infty} k^2 P_{2k} = 4 \left. \frac{d^2 G(\lambda)}{d \lambda^2} \right|_{\lambda=0} = 3 \langle n \rangle ^2 + 2 \langle n \rangle \end{equation} and so the dispersion in the particle number is given by \begin{equation} (\Delta n )^2 = \langle n^2 \rangle - \langle n \rangle^{2} = 2 \langle n \rangle \left( \langle n \rangle + 1 \r). \end{equation} Let us also look at a thermal state in comparison, which is given by the following distribution: \begin{equation} \bar{P}(n) = \mathcal{N} e^{- b n} \qquad (n = 0, 1, 2,...) \end{equation} with the normalization $\mathcal{N}=(1-e^{-b})$ fixed by requiring the total probability to be unity. This distribution is described by the generating function \begin{equation} \bar{G}(\lambda) = \sum_{n=0}^{\infty} \bar{P}(n) e^{-\lambda n} = \left( \frac{1-e^{-b}}{1-e^{-b - \lambda}} \r), \label{genfn3} \end{equation} from which it is clear that the generating functions of the two distributions are related in a simple manner as follows: \begin{equation} G(\lambda) = \sqrt{\bar{G}(\lambda)}. \end{equation} For the thermal distribution, the first two moments are given by \begin{equation} \langle n \rangle = \frac{1}{e^{b}-1} \quad,\quad \langle n^2 \rangle = \frac{1}{e^{b}-1} + \frac{2}{\left( e^{b}-1 \r) ^2} \end{equation} and hence \begin{equation} (\Delta n )^2 = \langle n \rangle \left( \langle n \rangle + 1 \r) \end{equation} which is the standard result~\cite{squeeze}. The distribution in \eq{genfn2}, thus, has a dispersion that differs by only a factor of 2 from that for the thermal state. This fact has been pointed out in the literature before (see, e.g.,~\cite{grishchuk}), but its significance, if any, is unclear. \subsection{Connecting with the Heisenberg picture} \label{app:hpic} For completeness and for reference, we give below an outline of the analysis of the time-dependent oscillator in the Heisenberg picture.
The starting point is the Hamiltonian, defined as \begin{equation} H(q,p)~=~\frac{1}{2} \left(p^2 + \omega^2(t) q^2 \r) \quad \textrm{with} \quad p = \dot{q} \end{equation} and $q$ satisfies the equation of motion \begin{equation} \ddot{q}(t) + \omega^{2}(t) q(t) = 0. \end{equation} $q$ is elevated to the status of an operator, and we write it as \begin{equation} q(t) = a f(t) + a^{\dagger} f^* (t) \label{q_h_pic} \end{equation} where $a,a^{\dagger}$ are the time-independent creation and annihilation operators, and $f(t)$ satisfies the same equation as $q(t)$. Further, requiring $[q,p]=i$ and the ladder operators to satisfy the commutation relation $[a,a^{\dagger}]=1$, one can show that \begin{equation} {\dot f}^{*}f - {\dot f} f^{*} = i. \end{equation} Next, the function $f$ is written in terms of two new functions $A(t)$ and $B(t)$ in the following form: \begin{equation} f(t) = A(t) e^{-i\rho(t)} + B(t) e^{i\rho(t)} \qquad (\dot{\rho} = \omega). \label{f_h_pic} \end{equation} Since two functions have been introduced, an additional constraint needs to be specified to fix the form of these functions. We impose the requirement that $\dot{f}(t)$ be of the form \begin{equation} \dot{f}(t) = -i \omega A(t) e^{-i\rho(t)} + i \omega B(t) e^{i\rho(t)} \label{dot_f_h_pic} \end{equation} i.e., that the derivative of $f$ has the same form as it would if $A,B$ were independent of time. This gives the following constraint on $A,B$: \begin{equation} \dot{A} e^{-i \rho} + \dot{B} e^{i \rho} = 0.
\label{eq:1} \end{equation} Further, substituting the expression in \eq{f_h_pic} in the equation for $f$ and using \eq{dot_f_h_pic} gives \begin{equation} \dot{A} e^{-i \rho} - \dot{B} e^{i \rho} = \frac{\dot \omega}{\omega} \left( B e^{i \rho} - A e^{-i \rho} \r). \label{eq:2} \end{equation} Using \eq{eq:1} and \eq{eq:2}, one ends up with the following pair of coupled equations for $A$ and $B$: \begin{eqnarray} \dot{B} + \frac{\dot{\omega}}{2 \omega}B &=& \frac{\dot{\omega}}{2 \omega} A e^{-2 i \rho}, \nonumber \\ \dot{A} + \frac{\dot{\omega}}{2 \omega}A &=& \frac{\dot{\omega}}{2 \omega} B e^{2 i \rho}. \label{eq:3} \end{eqnarray} Let $A(t)= \alpha(t)/\sqrt{2 \omega}$ and $B(t)= \beta(t)/\sqrt{2 \omega}$. Then \eq{eq:3} reduce to \begin{eqnarray} \dot{\alpha} &=& \frac{\dot{\omega}}{2 \omega} \beta e^{2 i \rho}~~\text{and} \nonumber \\ \dot{\beta} &=& \frac{\dot{\omega}}{2 \omega} \alpha e^{-2 i \rho} \label{h_pic_eqs}. \end{eqnarray} Given the functional form of $\omega(t)$, these coupled equations specify the complete time evolution of the system starting from any initial condition on $\alpha$ and $\beta$. \eq{q_h_pic} is next rewritten in terms of $\alpha,\beta$: \begin{eqnarray} q(t) = a f(t) + a^{\dagger} f^{*}(t) &=& \left( \frac{\alpha(t)}{\sqrt{2 \omega}} a + \frac{\beta^*(t)}{\sqrt{2 \omega}} a^{\dagger} \r) e^{-i\rho} + \left( \frac{\beta(t)}{\sqrt{2 \omega}} a + \frac{\alpha^*(t)}{\sqrt{2 \omega}} a^{\dagger} \r) e^{i\rho} \nonumber \\ &\equiv& \frac{a(t) e^{-i\rho} + a^{\dagger}(t) e^{i\rho}}{\sqrt{2 \omega}}. \end{eqnarray} The last line above defines for us the time-dependent creation and annihilation operators as \begin{equation} a(t) = \alpha(t) a + \beta^{*}(t) a^{\dagger}~~,~~a^{\dagger}(t) = \alpha^{*}(t) a^{\dagger} + \beta(t) a \label{def_a(t)} \end{equation} and these imply \begin{equation} a = \alpha^{*}(t) a(t) - \beta^{*}(t) a^{\dagger}(t)~~,~~a^{\dagger} = \alpha(t) a^{\dagger}(t) - \beta(t) a(t).
\label{def_a} \end{equation} ($a(t),a^{\dagger}(t)$ are defined in such a way that for the constant frequency case, they remain independent of time.) Let $|n,t \rangle$ stand for the $n$-particle state at time $t$. The vacuum at time $t$ is defined by the condition $a(t) |0,t \rangle = 0$. Let $\alpha(0)=1$ and $\beta(0)=0$ so that $a$ and $a^{\dagger}$ coincide with the ladder operators at $t=0$. One can then expand the vacuum state at $t=0$ (the {\it in} vacuum) in terms of the basis of states defined at $t$: \begin{equation} |0,0 \rangle = \sum_{n=0}^{\infty} C_n |n,t \rangle. \end{equation} Operating with $a$ on both sides and using the relation in \eq{def_a} gives \begin{equation} C_1 = 0~;~C_{2m+1} = 0 \qquad (m = 0, 1, 2,...) \end{equation} and \begin{equation} C_{2m} = \frac{\beta^*}{\alpha^*}\sqrt{\frac{2m-1}{2m}}C_{2m-2} \qquad (m = 1, 2,...). \end{equation} The recurrence relation above allows one to express $C_{2m}$ in terms of $C_0$, and thus \begin{equation} |0,0 \rangle = C_{0} \sum_{m=0}^{\infty} \left(\frac{\beta^*}{\alpha^*}\r)^{m} \frac{\sqrt{(2m)!}}{2^{m}m!} |2m,t \rangle. \end{equation} Using the relation $\left(a^{\dagger}(t)\r)^{2m}|0,t\rangle = \sqrt{(2m)!}|2m,t\rangle$, the expression above can be rewritten in the following form: \begin{equation} |0,0 \rangle = C_{0} \sum_{m=0}^{\infty} \frac{1}{m!} \left(\frac{\beta^*}{2\alpha^*}a^{\dagger 2}(t)\r)^{m}|0 , t \rangle \equiv C_0 \exp \left( \frac{\beta^*}{2\alpha^*}a^{\dagger 2}(t) \r) |0,t \rangle. \label{C_2m_Hpic} \end{equation} The magnitude of $C_0$ can be set by normalization: \begin{equation} \langle 0,0 | 0,0 \rangle \equiv \sum_{m=0}^{\infty} |C_{2m}|^{2} = 1 ~\Longrightarrow~ \frac{|C_0|^{2}}{\sqrt{1-\left|\frac{\beta}{\alpha}\r|^2}} = 1 ~\Longrightarrow~ |C_0|^{2}|\alpha| = 1 \end{equation} so $|C_0|=|\alpha|^{-1/2}$.
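As an independent check of \eq{h_pic_eqs} (our own numerical sketch, not from the paper; it takes $\lambda = 1$ with $a = 2$, $b = 1$, as in the $\omega_2$ example, and an arbitrary phase origin $\rho(t_0)=0$), one can integrate the $(\alpha, \beta)$ system and verify that the Bogoliubov combination $|\alpha|^2 - |\beta|^2$, which the normalization step above requires to equal unity, is indeed conserved:

```python
import cmath
import math

def evolve_bogoliubov(a=2.0, b=1.0, t0=-15.0, t1=15.0, dt=1e-3):
    """RK4 integration of alpha' = (w'/2w) beta e^{2 i rho},
    beta' = (w'/2w) alpha e^{-2 i rho}, rho' = w, for
    omega_2(t) = a + b tanh(t), from alpha = 1, beta = 0 (the in-vacuum)."""
    w = lambda t: a + b * math.tanh(t)
    wd = lambda t: b / math.cosh(t) ** 2

    def rhs(t, al, be, rho):
        f, ph = wd(t) / (2.0 * w(t)), cmath.exp(2j * rho)
        return f * be * ph, f * al / ph, w(t)

    al, be, rho, t = 1.0 + 0.0j, 0.0 + 0.0j, 0.0, t0
    for _ in range(int(round((t1 - t0) / dt))):
        k1 = rhs(t, al, be, rho)
        k2 = rhs(t + dt / 2, al + dt / 2 * k1[0], be + dt / 2 * k1[1], rho + dt / 2 * k1[2])
        k3 = rhs(t + dt / 2, al + dt / 2 * k2[0], be + dt / 2 * k2[1], rho + dt / 2 * k2[2])
        k4 = rhs(t + dt, al + dt * k3[0], be + dt * k3[1], rho + dt * k3[2])
        al += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        be += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        rho += dt / 6 * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2])
        t += dt
    return al, be
```

For this smoothly varying frequency the late-time $|\beta|^2$ comes out small but non-zero, consistent with the suppressed particle creation of the (nearly) adiabatic $\omega_2$ evolution.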
One can compare the coefficient in \eq{C_2m_Hpic} with the expression for $C_{2m}$ derived earlier in the Schrodinger picture, \eq{C_2n_Spic}, to make the correspondence $z \exp(2 i \rho) = \beta^{*}/\alpha^{*}$ with $\dot{\rho}=\omega$. This relation can also be deduced by working out the equation satisfied by $(\beta^* / \alpha^*)\exp(-2 i \rho)$ using \eq{h_pic_eqs}, which turns out to be the same as the $z$ equation in (\ref{eq:z}). \subsection{Late time analytic approximation for $z$ when $\epsilon(t) \to \infty$} \label{app:a_a} Making the substitution $z=(1+\Delta)\exp(i\theta)$~(with $\Delta$ real) in the equation for $z$, eq.~(\ref{eq:z}), the following coupled equations for $\Delta$ and $\theta$ are obtained: \begin{eqnarray} \omega \theta ' + \left[ 1 + \frac{\Delta^2}{2 (\Delta + 1)} \r] \sin \theta + \frac{2}{\epsilon} = 0, \nonumber \\ \omega \Delta ' + \left( \Delta + \frac{\Delta^{2}}{2} \r) \cos \theta = 0 \label{eq:d_t_1} \end{eqnarray} where the prime ($'$) denotes differentiation w.r.t. $\omega$. We are seeking an approximate late time solution with $\Delta \to 0$ as $\omega \to 0$ (i.e., $t \to \infty$). Assuming that in this limit, the $O(\Delta^{2})$ terms in eq.~(\ref{eq:d_t_1}) can be dropped, we are left with \begin{eqnarray} \omega \theta ' + \sin \theta + \frac{2}{\epsilon} \approx 0, \nonumber \\ \omega \Delta ' + \Delta \cos \theta \approx 0. \label{eq:d_t_2} \end{eqnarray} Let $\sin \theta \to 0$ as $\omega \to 0$, i.e., $\theta \to n \pi$ with integer $n$. This would give $\cos \theta \to \pm 1$. Consider the upper sign first, i.e., $\theta \to n \pi$ with even $n$. This gives \begin{eqnarray} \omega \Delta ' + \Delta \approx 0 \nonumber \\ \Longrightarrow~~\Delta \sim \frac{c_{1}}{\omega} \end{eqnarray} which, in the $\omega \to 0$ limit, blows up, and we run into an inconsistency with the dropping of the $O(\Delta^{2})$ terms earlier. Consider therefore the lower sign, i.e., $\cos \theta \to -1$.
Let $\theta = n \pi - \phi$ ($n$ is odd). Eqs.~(\ref{eq:d_t_2}) reduce to \begin{eqnarray} -\omega \phi ' + \phi \approx -\frac{2}{\epsilon}~, \nonumber \\ \omega \Delta ' - \Delta \approx 0. \end{eqnarray} The second equation above can be integrated to give $\Delta \sim c_{1} \omega$, which yields the correct late-time limit. The first equation can also be solved, and gives \begin{eqnarray} \frac{\phi}{\omega} ~\approx~ c_{0} + \int^{\omega} \frac{2}{\epsilon~\omega^{2}} d \omega ~\equiv~ c_{0} + 2 t \end{eqnarray} using the relation $\epsilon = \dot{\omega}/\omega^{2}$.
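This late-time solution can also be verified directly (our own numerical sketch; it specializes to $\omega_3(t)=1/(1+t^2)$, i.e., $\lambda=1$ and $\epsilon=-2t$, and rewrites Eqs.~(\ref{eq:d_t_2}) in terms of $t$ using $d/d\omega = (1/(\epsilon\omega^{2}))\, d/dt$, giving $d\theta/dt = 2t\omega\sin\theta - 2\omega$ and $d\Delta/dt = 2t\omega\Delta\cos\theta$). Integrating this reduced system forward in $t$, the combinations $\Delta/\omega$ and $\phi/\omega - 2t$ should approach the constants $c_1$ and $c_0$:

```python
import math

def omega3(t):
    return 1.0 / (1.0 + t * t)   # omega_3 with lam = 1, so epsilon = -2t

def integrate_reduced(t0=5.0, t1=200.0, theta0=math.pi - 0.1, dt=5e-3,
                      checkpoints=(100.0, 200.0)):
    """RK4 for dtheta/dt = 2 t w sin(theta) - 2 w, ddelta/dt = 2 t w delta cos(theta),
    the t-form of Eqs. (d_t_2) with epsilon = -2t.  Returns the pairs
    (Delta/omega, phi/omega - 2t), phi = pi - theta, at the checkpoints."""
    theta, delta, t = theta0, 0.05 * omega3(t0), t0
    out = []

    def rhs(t, th, de):
        w = omega3(t)
        return 2.0 * t * w * math.sin(th) - 2.0 * w, 2.0 * t * w * de * math.cos(th)

    targets = iter(sorted(checkpoints))
    next_t = next(targets, None)
    while t < t1 - 1e-12:
        k1 = rhs(t, theta, delta)
        k2 = rhs(t + dt / 2, theta + dt / 2 * k1[0], delta + dt / 2 * k1[1])
        k3 = rhs(t + dt / 2, theta + dt / 2 * k2[0], delta + dt / 2 * k2[1])
        k4 = rhs(t + dt, theta + dt * k3[0], delta + dt * k3[1])
        theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        delta += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        if next_t is not None and t >= next_t - 1e-9:
            w, phi = omega3(t), math.pi - theta
            out.append((delta / w, phi / w - 2.0 * t))
            next_t = next(targets, None)
    return out
```

The initial values $\theta_0$, $\Delta_0$ here are illustrative; any choice with small $\phi$ and $\Delta$ is simply absorbed into the constants $c_0$, $c_1$.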
\section{INTRODUCTION} By $z$\s1, when the Universe was about half its current age, many of the properties of the present-day galaxy population were already in place. Although the rate of star formation in the Universe as a whole was an order of magnitude higher at $z$$\sim$1 than it is today (Lilly et al.\ 1996), there already existed a well-developed luminosity function of quiescent galaxies (Lilly et al.\ 1995) as well as an established population of galactic bulges and disks (Schade et al.\ 1995, Brinchmann et al.\ 1998) with normal-looking Tully-Fisher rotation curves (Vogt et al.\ 1997). Thus, while clearly much remains to be learned about galaxies and galaxy evolution at $z$$<$1, we must look to higher redshifts, $z$$>$1, to witness many of the key events in the story of galaxy formation. To efficiently reach beyond $z$$>$1 requires techniques that let us robustly select high-$z$ galaxies, preferably with some --- even crude --- redshift information, while minimizing contamination by the far more numerous foreground objects. Such selection is possible using multi-color broadband photometry that is sensitive to the imprint on galaxy spectra of coarse spectral features such as the Lyman break at rest-912\AA, the Balmer and 4000\AA\ breaks at 3648--4000\AA, and --- in the rest-frame infrared --- the H$^-$ opacity bump at 1.6$\mu$m\ (e.g., Sawicki 2002). One such multicolor approach is the technique of photometric redshifts in which the most likely redshift of an object is estimated by comparing its observed and predicted broadband spectral energy distributions (e.g., Connolly et al.\ 1995; Sawicki, Lin, \& Yee 1997; Bolzonella et al.\ 2000; Sawicki 2002; Csabai et al.\ 2003).
Another, even simpler, technique --- popularized and shown to be very effective through extensive spectroscopic follow-up by Steidel and collaborators (e.g., Steidel et al.\ 1996, 1999, 2003) --- straightforwardly selects high-$z$ star-forming galaxies by their distinctive colors in an optical color-color diagram. Extensive spectroscopic follow-up of such color-color selected Lyman Break Galaxies (LBGs) has allowed Steidel and his collaborators to amass very large samples of $\sim$10$^3$ {\it spectroscopically confirmed} galaxies at $z$$\sim$3 (selection in the $U_n-G$\ vs.\ $G-{\cal R}$\ color space) and $z$$\sim$4 (selection in the $G-{\cal R}$\ vs.\ ${\cal R}-I$\ space) while a recent extension of the technique to lower redshifts ($z$$\sim$1.7, $z$$\sim$2.2; Erb et al.\ 2003, Steidel et al.\ 2004) has also started to yield promising results. The imaging samples assembled by the Steidel team are designed for efficient spectroscopic confirmation and, therefore, are limited to relatively bright objects ($\cal R$\lap25.5 or $I$\lap25). Consequently, they do not probe significantly fainter than the characteristic luminosity, $L^*$, at which the galaxy luminosity function changes slope. The transition at $L^*$\ in the galaxy luminosity function is likely an imprint of how galaxies form and evolve and the comparison of galaxies above and below $L^*$\ may tell us much about what different processes are responsible for this evolution. The mass function of dark matter halos, predicted from simulations of dark-matter clustering (e.g., Jenkins et al.\ 2001), is essentially a power law on mass scales that encompass the range of galaxy masses; in contrast, the observed luminosity function of galaxies, both at low redshift and high, exhibits different behaviours above and below a characteristic luminosity, $L^*$. 
This different shape in the luminosity function suggests that different mechanisms dominate the evolution of galaxies above and below $L^*$ and that therefore our understanding of galaxy formation and evolution may profit from studying the evolution of not just the bright end but also the faint component of the galaxy population at high redshift. Similarly, because the strength of the clustering of dark matter halos depends on the halo mass, a careful study of galaxy clustering properties as a function of both epoch {\it and} galaxy luminosity may inform us about how the properties of dark matter halos affect the star formation that occurs in the galaxies that they host. Motivated by the twin goals of studying the dependence of the galaxy luminosity function and of galaxy clustering on epoch {\it and} luminosity, we have carried out a very deep ($\cal R$$_{lim}$\s27), wide-area (169 arcmin$^2$), multicolor ($U_n G {\cal R} I$) {\it imaging} survey of galaxies that are significantly fainter than those reached in the well-known studies by the Steidel team. Our survey uses the {\it same} $U_n G {\cal R} I$\ filter system that is used by Steidel and his collaborators for selecting their \zs4, \zs3, \zs2.2, and \zs1.7 samples, but probes significantly fainter --- up to 1.5 magnitudes, or a factor of 4 in luminosity. Spectroscopic follow-up of their color-color selected samples has allowed Steidel et al.\ (1999, 2003, 2004) to precisely characterize important quantities, including selection volumes and fractions of low-$z$ interlopers. Our use of identical filters and color-color selection permits us to apply this knowledge in a relatively straightforward way to our samples. Thus, even without what --- at the magnitudes of the objects in our survey --- would have been {\it very} expensive spectroscopy, we can understand and correct for the selection effects that are at play.
In the present paper --- which serves as the introduction to our survey --- we describe our observations (\S~\ref{observations}) and data reductions (\S~\ref{datareductions}), including an assessment of the photometric completeness and surface brightness selection. We then outline the photometric selection of our \zs4, \zs3, \zs2.2, and \zs1.7 galaxy samples (\S~\ref{LBGselection}) and compare and contrast our survey with other deep imaging surveys (\S~\ref{summarydiscussion}). Subsequent papers in this series will study the faint end of the high-$z$ galaxy luminosity function and the clustering of faint galaxies at high redshift and will extend this work into the near-IR. Unless specifically stated otherwise, throughout this series of papers we normalize fluxes on the AB magnitude system (Oke 1974) and adopt $\Omega_M$=0.3, $\Omega_{\Lambda}$ = 0.7, and $H_0$=70 km~s$^{-1}$~Mpc$^{-1}$. \section{SURVEY STRATEGY AND OBSERVATIONS}\label{observations} The design of the survey reflects our goal of robustly studying the evolution of the {\it faint} (sub-$L^*$) component of the star-forming galaxy population at high redshift. Briefly: \begin{itemize} \item {To reach {\it deep} into the sub-$L^*$\ population we carried out very deep imaging using the LRIS imaging spectrograph on the 10m Keck~I telescope. } \item {To {\it robustly} identify high-$z$ star-forming galaxies and to ensure a smooth joining with the Steidel et al.\ work at brighter magnitudes, we used the very same filter set and selection techniques as are used in their larger, but shallower, spectroscopically-calibrated surveys.} \item {To avoid being dominated by small number statistics {\it and} by cosmic variance the survey covers a large area (169 arcmin$^2$) that is split into three spatially-independent patches. } \end{itemize} We call our survey the Keck Deep Fields (KDF). 
The $G {\cal R} I$\ color composite images shown in Figure~\ref{color-images.fig} give a visual overview of the KDF, while Table~\ref{field-details.tab} summarizes key information about our survey fields, including field coordinates and sizes, foreground extinction, exposure times per filter, image quality, and limiting depth. As shown in Fig.~\ref{color-images.fig}, the survey consists of five LRIS fields, two pairs of which are abutting along their long edges. The survey area is thus split into two larger ``patches'' of two LRIS fields each and a third, smaller patch that consists of a single LRIS field. Our field-naming convention is based on the right ascensions of the patches and we call the five fields 02A, 03A and 03B (the latter two together comprising the 03 patch), and 09A and 09B (the 09 patch). The three patches are widely separated on the sky, helping to ensure that the effects of large-scale structures are averaged out. The purpose of abutting pairs of fields into the larger patches is to give us improved ability to study galaxy clustering (Sawicki \& Thompson, in preparation), where the larger {\it contiguous} area is important as it gives us many more baselines in general and, particularly, many long baselines, few of which are available in a single LRIS field. Our 09 patch partially overlaps one of the $U_n G {\cal R} I$\ fields (field Q0933+289) from the survey of Steidel et al.\ (2003); this patch contains a $z$=3.43 quasar (the brightest object in the 09A field --- see Figure~\ref{color-images.fig}), but we do not expect its presence to bias our science as there is no evidence of a galaxy overdensity at the QSO redshift in the extensive Steidel et al.\ (2003) spectroscopy of this field. Our survey was carried out in dark-time over two three-night observing runs: 2001 December 18--20 UT (hereafter Run~1) and 2002 December 2--4 UT (hereafter Run~2).
All data were taken using the LRIS imaging spectrograph (Oke et al.\ 1995; McCarthy et al.\ 1998; Steidel et al.\ 2003) mounted on the Keck I telescope. LRIS provides two independent channels, fed through a dichroic beamsplitter, which {\it simultaneously} observe two separate wavelength ranges. We used the D560 dichroic, which allowed us to observe simultaneously in $U_n$\ (or $G$) on the blue side and $\cal R$\ (or $I$, or $Z$) on the red side. The red channel uses a SITe/Tektronix 2048$\times$2048 pixel, backside-illuminated CCD with 0.215\arcsec\ pixels and a quantum efficiency that peaks at 70\% at 6000\AA\ and remains above 50\% to 8500\AA. The detectors in the blue channel changed half-way through the project: during Run 1 we had to use a fairly average SITe 2048$\times$2048 pixel CCD (QE$\sim$35\% at $U_n$\ and $\sim$65\% at $G$), but during Run 2 we benefited from an excellent mosaic of two UV-optimized EEV (Marconi) 2048$\times$4096 pixel CCDs with 0.135\arcsec\ pixels and very high UV and blue quantum efficiency (QE$>$60\% at $U_n$\ and $>$85\% at $G$). In all these detectors dark current was negligible. We lost the bulk of the first night of Run 1 to weather and instrument problems, but the remaining two nights of Run 1, as well as all three nights of Run 2, were both photometric and trouble-free. $\cal R$-band seeing ranged over 0.7--1.4\arcsec, although we rejected the frames with the poorest image quality when making our stacked images (\S~\ref{datareductions}). During part of one night of Run 1 the seeing was particularly poor, affecting all the $G$-band images of one of the fields (field 03A). Consequently, all the Run~1 $G$-band images of this field were rejected and the $G$-band imaging was redone in Run~2, while some additional $I$-band imaging of that field was simultaneously obtained using the red channel of the instrument. The data were acquired as a series of short, dithered exposures.
The $U_n$-band frames were always acquired simultaneously with the $\cal R$-band frames, and the $G$-band frames with the $I$-band ones. Additionally, a small amount of $Z$-band data, taken through the long-pass RG850 filter, were taken in parallel with the $U_n$-band observations; these $Z$-band data will be presented elsewhere. Individual exposure times were 1200s in $U_n$-band, 1200s in $G$-band, 525s (or 585s) in $\cal R$-band, and 325s (or 360s) in $I$-band. Exposure times in the red channel ($\cal R$\ and $I$\ bands) --- where the night-time sky is quite bright even during darktime --- were short to avoid straying into the non-linear regime of the CCD response. We wished to dither, but did not want to waste time with one channel sitting idle while waiting for the other channel to finish. Consequently, we set the red channel exposure times so that the end of the last read-out of a set of red exposures coincided with the end of the read out of a blue exposure: \begin{equation} N_{exp}^{red}\times(t_{exp}^{red}+t_{read}^{red})= (t_{exp}^{blue}+t_{read}^{blue}), \end{equation} where $t_{exp}$ are the exposure times, $t_{read}$ are the read-out times, and $N_{exp}^{red}$ is the number of red channel exposures taken during one blue-channel exposure ($N_{exp}^{red}$=2 for $\cal R$-band and 3 for $I$-band). After each such exposure sequence consisting of one blue-channel and $N^{red}_{exp}$ red-channel exposures, the telescope was offset following a quasi-random pattern on a nine-point square grid with 24\arcsec or 30\arcsec edges. This dithering was done to allow the subsequent exclusion of bad pixels from the stacked frames and to permit the masking-out of sources in the construction of flat-field images and fringe frames from the imaging data. In total, our good $U_n G {\cal R} I$\ images of the five fields consumed 71.1 hours of {\it exposure time}.
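The timing constraint above is simple to invert for the red-channel exposure time. A minimal sketch follows; the read-out times used below are illustrative placeholders (not the actual LRIS values), chosen so that the formula reproduces the quoted 525~s $\cal R$-band exposures:

```python
def matched_red_exposure(t_exp_blue, t_read_blue, t_read_red, n_exp_red):
    """Red-channel exposure time (seconds) such that n_exp_red red
    exposure + read-out cycles end together with one blue cycle:
    n_exp_red * (t_exp_red + t_read_red) = t_exp_blue + t_read_blue."""
    return (t_exp_blue + t_read_blue) / n_exp_red - t_read_red

# With illustrative read-out times of 60 s (blue) and 105 s (red),
# two red exposures per 1200 s blue exposure come out to 525 s each.
print(matched_red_exposure(1200.0, 60.0, 105.0, 2))
```

With $N_{exp}^{red}$=3 and shorter red exposures the same relation yields the $I$-band cadence.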
This number does not include overheads, images that were excluded from the final stacks because of poor image quality etc., nor the $Z$-band data. Once processed and stacked, these images comprise some of the deepest multiwavelength imaging taken over this wide an area, particularly in the $U_n$ band. \section{DATA REDUCTION}\label{datareductions} \subsection{Pre-processing and frame stacking} The data were processed in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} using a fairly standard algorithm for processing CCD data. The primary deviation is embodied in the iterative determination of the fringe-correction image and the corrected domeflat. We reduced each dataset from a different CCD separately and, where possible, we also independently processed data from separate nights. In particular, the initial processing of the data consisted of first subtracting the bias signal from each image using the overscan regions. Separate bias frames were then stacked and subtracted from the science data to correct for any residual bias structure. We then constructed normalized domeflats in each filter, rescaling the images to a constant mean value and stacking them with sigma-clipping to remove cosmic rays. We constructed a bad pixel mask from the bias and domeflat images, marking as bad any hot or dead pixels and bad columns or charge traps. After this initial preparation, we flatfielded the science data with the domeflats, then rescaled and stacked the images to reject real sources on the sky. We used these stacked images to construct a fringe frame for the $G$, $\cal R$, and $I$\ data (the $U_n$\ data did not need fringe correction). Any residual gradients were used to correct the domeflats, and these few steps were repeated until we converged on a good fringe frame and corrected domeflat.
The final reduction of the science data produced individual flat-fielded, fringe-corrected images with zero mean sky values. We then shifted the images (using integer pixel shifts) to correct for dithering offsets and stacked all of the data for each field and filter combination to produce preliminary deep images. The deep images were chopped, scaled, and convolved as necessary and subtracted from the individual images to aid in identifying and masking satellite or meteor trails, asteroids, and cosmic rays. Instrumental magnitudes for a set of isolated, unsaturated sources in common to all frames were determined and used to multiplicatively rescale the images to constant photometry. We measured the sky noise ($\sigma_i$) and full-width at half-maximum ($FWHM_i$) of the seeing in each of these images, $i$, and then produced the final deep image with a modified variance weighting, which is corrected for the seeing: \begin{equation}\label{weights.eq} weight_i \propto 1 / (\sigma_i^2 FWHM_i). \end{equation} This scheme gives higher weight to the images with the best seeing and/or the lowest noise, resulting in a gain in the depth of the final, stacked images. \subsection{Image registration, trimming, and photometric calibration} \label{match-calibrate} Next, the stacked $U_n$, $G$, and $I$\ images were geometrically transformed to match their corresponding $\cal R$-band images using the positions of multiple bright point sources. These aligned $U_n G {\cal R} I$\ images of each field were then trimmed to exclude areas of low S/N around the edges that were a byproduct of spatial dithering; this trimming was done so that in each field the trimmed images in each of the four bands covered only a common area, with all low-S/N edge areas trimmed out. Even after this trimming, sufficient overlap remained between the 09A and 09B images (5.0 arcmin$^2$) and the 03A and 03B images (1.8 arcmin$^2$) to allow us to later accurately tie together the two fields of each patch.
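In code, the seeing-corrected variance weighting of Eq.~(\ref{weights.eq}) amounts to the following sketch (NumPy assumed; the function and argument names are ours, not from any actual pipeline):

```python
import numpy as np

def stack_weighted(images, sigmas, fwhms):
    """Co-add registered frames with weight_i proportional to
    1 / (sigma_i**2 * FWHM_i), so that low-noise, good-seeing
    frames dominate the final stacked image."""
    weights = 1.0 / (np.asarray(sigmas, float) ** 2 * np.asarray(fwhms, float))
    weights /= weights.sum()  # normalize so the weights sum to one
    # Weighted sum over the frame axis.
    return np.tensordot(weights, np.asarray(images, float), axes=1)
```

At equal seeing, a frame with twice the sky noise of another thus receives a quarter of its weight in the stack.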
The images were then photometrically calibrated. We tied our photometric system to photometrically calibrated deep $U_n G {\cal R} I$\ images that were kindly provided to us for that purpose by Chuck Steidel. Although the filter sets used in Steidel's images and in ours were nearly identical, we nevertheless computed the invariably small color terms in our photometric transformations since we wanted to be able to replicate without bias the Steidel et al.\ (1998, 2003, 2004) color-color selection of high-$z$ galaxies. The use of these color terms, derived from the comparison of images obtained using our apparatus with those taken by Steidel et al.\ in their surveys, ensures that any wavelength-dependent differences due to detector QE, mirror reflectivity, etc., are calibrated out, and that we are working on the same photometric system as Steidel et al.\ are. As the final step in our photometric calibration, we have corrected for the very small effect of foreground Galactic dust extinction as determined from the Schlegel, Finkbeiner, \& Davis (1998) dust maps. The $E(B-V)$\ values of these dust corrections are given in Table~\ref{field-details.tab}. In addition to the natural-seeing images, we also produced seeing-matched $U_n G {\cal R} I$\ images for each field. These seeing-matched images were made for use in measuring object colors (\S~\ref{objectfinding}) and were made by smoothing the three images of the field that have better seeing to match the seeing in the fourth, poorest-quality image. The smoothing was done using a Gaussian kernel whose width was determined for each image based on the sizes of multiple bright but unsaturated point sources. The FWHM sizes of the seeing in the resultant smoothed images are given as ``common smoothed seeing'' in Table~\ref{field-details.tab}. They are typically FWHM $\sim$ 1.0--1.1\arcsec, except for the 03A field for which the final image quality was significantly worse at 1.4\arcsec.
Our $\sim$~1\arcsec\ seeing is very comparable to the seeing in the shallower $U_n G {\cal R} I$\ surveys of Steidel et al.\ (1999, 2003, 2004). \subsection{Object detection and photometry}\label{objectfinding} Overall, our photometric approach is very similar to that used by Steidel and collaborators, including the use of the same $U_n G {\cal R} I$\ filter set, $\cal R$-band detection of objects, and color measurement through circular apertures. The specifics are discussed below and any differences in approach are noted. Ideally, we would have wished to {\it detect} galaxies at a constant {\it rest-frame} wavelength irrespective of redshift. Given our data and redshift ranges of interest this could have been done at $\lambda$$\approx$1700\AA\ which corresponds to observed $I$-band at \zs4, $\cal R$-band at \zs3, and $G$-band at \zs2. However, because our $\cal R$-band images are extremely deep, we chose to do object detection in the $\cal R$-band images only. This approach is completely analogous to the procedure used by the Steidel team and has the virtue of simplicity in that it results in only one source catalog per field. Given the depth of our $\cal R$-band images and the relatively mild ${\cal R}-I$\ and $G-{\cal R}$\ colors of \zs4 and \zs2 galaxies, respectively (see Figs.~\ref{GRIcolorcolor.fig} and \ref{UGRcolorcolor.fig}, and also Steidel et al.\ 1998, 2003, 2004), our $\cal R$-band object detection should not bias our samples in any significant way. We used the SExtractor package (Bertin \& Arnouts, 1996) for object detection and photometry. To detect objects, we ran SExtractor on the {\it unsmoothed} $\cal R$-band images. Total $\cal R$-band magnitudes are measured in SExtractor's MAG\_AUTO apertures, which are Kron-like (Kron 1980) elliptical apertures in which fluxes are corrected for any contaminating close companions through masking and image symmetrization.
Colors were measured in SExtractor's dual-image mode, using the unsmoothed images for object detection and the smoothed, seeing-matched images for color photometry. In keeping with the approach of the Steidel team, we used 2.0\arcsec-diameter circular apertures (Steidel et al.\ 2003) for color measurement. Finally, the instrumental magnitudes were transformed onto the AB system (Oke 1974) using the zeropoints and color terms determined in \S~\ref{match-calibrate}. The total magnitudes in the $U_n$, $G$, and $I$\ bands were then straightforwardly computed from the $\cal R$-band total magnitudes and aperture colors via \begin{equation} m_{tot}={\cal R}_{tot}-({\cal R}_{ap}-m_{ap}), \end{equation} where $\cal R$\ is the $\cal R$-band magnitude, $m$ represents the magnitude in one of the other bands ($U_n$, $G$, or $I$) and the subscripts $ap$ and $tot$ denote aperture and total magnitudes, respectively. Our treatment of ``drop-out'' objects also follows closely the approach of Steidel et al.\ (2003). If an object's reported flux in the $U_n$\ (or $G$) band is greater than the 1$\sigma$ fluctuation in the sky background over the size of our color aperture then the object is considered detected in that band and is assigned an aperture magnitude that corresponds to that measured flux. If, on the other hand, the reported flux is below the 1$\sigma$ threshold, the object is considered a drop-out and is assigned an upper flux limit that is defined as the magnitude that corresponds to the 1$\sigma$ sky fluctuation. The main difference between our procedure and that of Steidel and his team is that whereas they use an in-house modified version of FOCAS for object detection and photometry, we use SExtractor. The use of these two different programs has two potential consequences.
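The aperture-to-total conversion above is trivial but worth stating in code; a minimal sketch (the function name is ours):

```python
def total_magnitude(R_tot, R_ap, m_ap):
    """Total magnitude in band m (Un, G, or I) from the R-band total
    magnitude and the aperture color: m_tot = R_tot - (R_ap - m_ap)."""
    return R_tot - (R_ap - m_ap)
```

For example, an object with ${\cal R}_{tot}$=24.0 and an aperture color ${\cal R}-I$=$-$1.0 (say ${\cal R}_{ap}$=24.5, $I_{ap}$=25.5) is assigned $I_{tot}$=25.0; the aperture color is simply carried over to the total magnitude.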
\begin{enumerate} \item {The first of these potential consequences is that the number of objects detected by SExtractor and FOCAS may be different as the two programs use different object-detection algorithms. If this were the case then we might expect a different number density of galaxies in the two surveys. However, this is {\it not} the case here: Table~\ref{field-details.tab} lists the number densities of objects with $\cal R$=22.5--25.0 in our fields. These number densities are entirely consistent with the distribution of identically defined object densities in the 17 fields of Steidel et al.\ (2003) and the average of our five fields, $<$N$>$=27.7$\pm$2.9, is fully consistent with their 17-field average of $<$N$>$=25.9$\pm$1.9. We therefore conclude that any differences in object-finding between FOCAS and SExtractor are likely not significant for our purposes and result in variations that are certainly smaller than field-to-field scatter due to cosmic variance.} \item {The second potential consequence of using SExtractor instead of FOCAS is that total fluxes measured by SExtractor may differ from those measured by FOCAS. Such differences may arise because the two programs use different methods of sky estimation and different definitions of the total aperture (SExtractor uses Kron-like apertures corrected for near-neighbor contamination, whereas FOCAS uses simple padded isophotal magnitudes). However, given the small angular sizes of distant galaxies, the differences between FOCAS and SExtractor should be small --- of order a few percent. Moreover, any strong differences would also be reflected in a discrepancy in the number density of 22.5$\leq$$\cal R$$\leq$25 objects, and --- as we discussed above --- no such discrepancy is seen. Finally, in the most sensitive respect --- that of determining object {\it colors} --- we follow a recipe identical to that of Steidel and his team, measuring colors through fixed, 2\arcsec-diameter apertures.
} \end{enumerate} Overall, our object selection procedure and photometry are thus very similar to those used by Steidel et al.\ and we expect there to be no significant systematic differences between the two approaches. The spatial distribution of 23$\leq$$\cal R$$\leq$27 objects in our survey is shown in Fig.~\ref{xypos.all.fig}. Our catalog is clearly missing objects in the vicinity of very bright stars (c.f.\ Fig.~\ref{color-images.fig}), where SExtractor has trouble finding faint sources in the bright glow from the stars. However, these areas of low sensitivity are small and we will account for them as needed using simulations (\S~\ref{completeness}; also Sawicki \& Thompson 2004). In all, our catalog contains 14579 objects with 23$\leq$$\cal R$$\leq$27 in the 169~arcmin$^2$\ of our survey. \subsection{Depth, Completeness, and Surface Brightness Selection Effects} \label{completeness} We assess the depth of the Keck Deep Fields in two ways: by measuring the sky noise of our images and by conducting Monte Carlo simulations that seek to detect artificial objects implanted into the images. Table~\ref{field-details.tab} lists the sky surface brightness limits measured from pixel-to-pixel RMS fluctuation in several representative, object-free areas of each image. In the redder bands the image depth correlates closely with total exposure time, while in the bluer bands it has a pronounced dependence on run-to-run changes in detector sensitivity and on the sky brightness changes in the individual exposures that were coadded into the stacked images. In the four fields that received close to the full intended exposure time (fields 03A, 03B, 09A, and 09B) characteristic 1$\sigma$ sky surface brightness limits are $\mu_{lim}$ $\sim$ 30.7, 30.8, 30.0, and 29.4 mag/arcsec$^2$\ in $U_n G {\cal R} I$. They are $\sim$0.5 mag shallower in the 02A field.
In contrast, the typical sky surface brightness limits of the imaging used by the Steidel team are 28.7, 29.0, 28.4, and 28.0 in $U_n G {\cal R} I$\ (Steidel et al.\ 2003, 1999). Thus, over the bulk of our survey, we reach $\sim$1.5--2 mag deeper in sky noise than do Steidel et al. Even in our relatively shallow 02A field we reach sky surface brightness limits that are 0.7--1.8 mag deeper than those in the Steidel et al.\ surveys. Sky surface brightness limits are of course only part of what affects the sensitivity of faint object imaging. A second key ingredient is image quality, which is primarily influenced by seeing and is parametrized by the amplitude of the stellar FWHM. With the exception of the 03A field, typical seeing of our stacked images is $\sim$1\arcsec, which is very comparable to the seeing in the data used by the Steidel team (Steidel et al.\ 2003). We use Monte Carlo simulations to assess the combined impact of both seeing {\it and} sky surface brightness on our data. By their nature these simulations also take into account other effects that impact the detection rate, such as the poorer detection sensitivity around bright stars and the confusion by close companions. At this stage, we are only interested in simply determining the {\it detection} efficiency of faint objects in the $\cal R$-band KDF images and do not attempt here to ascertain the incompleteness of our LBG sample due to scatter out of color-color selection regions by photometric errors in object colors. Such more comprehensive simulations are part of our study of the LBG luminosity functions (Sawicki \& Thompson, in prep.). To gauge our detection efficiency, we made simulations that implanted artificial objects with a range of fluxes and sizes at random positions into our images and then attempted to recover them using the same procedures as those we used in \S~\ref{objectfinding} to construct our source catalogs.
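The Monte Carlo procedure can be illustrated with a toy version: implant a Gaussian source into pure sky noise and count how often it is recovered. The simple peak-threshold ``detector'' below stands in for the SExtractor runs used on the real images, and all parameters are illustrative rather than taken from our actual simulations:

```python
import numpy as np

def recovery_fraction(n_trials, flux, fwhm_pix, sky_sigma,
                      npix=64, nsigma=5.0, seed=0):
    """Fraction of trials in which an implanted Gaussian source of the
    given total flux and FWHM (in pixels) is 'detected', i.e. the
    brightest pixel exceeds nsigma times the sky noise."""
    rng = np.random.default_rng(seed)
    sig = fwhm_pix / 2.3548  # convert FWHM to Gaussian sigma
    y, x = np.mgrid[:npix, :npix] - npix // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sig**2))
    psf /= psf.sum()  # normalize so 'flux' is the total source flux
    found = 0
    for _ in range(n_trials):
        img = rng.normal(0.0, sky_sigma, (npix, npix)) + flux * psf
        if img.max() > nsigma * sky_sigma:
            found += 1
    return found / n_trials
```

As expected, the recovered fraction drops from near unity for bright sources toward zero as the peak of the implanted source sinks below the detection threshold, which is the qualitative behaviour traced by the contours in Fig.~\ref{detection-efficiency.fig}.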
To assess the sensitivity of our catalog to object surface brightness, we simulated a range of Gaussian-shaped sources with FWHM=0.5--2\arcsec. However, we note that HST imaging of LBGs shows them to be very compact, with half-light radii $r_{1/2}$$\sim$0.1--0.3\arcsec\ over a range of epochs: \zs5, \zs3, and \zs2 (Bremer et al.\ 2004; Giavalisco, Steidel, \& Macchetto 1996; Erb et al.\ 2003). Consequently, given that our catalog is based on ground-based $\cal R$-band images with FWHM$\geq$0.8\arcsec, our target high-$z$ galaxies are essentially unresolved point sources with FWHM that corresponds to the seeing. The contours in Fig.~\ref{detection-efficiency.fig} show the results of our detection-rate simulations. The fraction of objects recovered as a function of object FWHM and $\cal R$-band magnitude is shown for each of the five fields. As expected, the detection efficiency drops with increasing total magnitude and FWHM. The horizontal line in each panel shows the stellar FWHM as measured from several bright but unsaturated stars in the images and corresponds to the ``seeing ($\cal R$)'' values in Table~\ref{field-details.tab}. As noted above, because of the very small sizes ($r_{1/2}$$\sim$0.2\arcsec) of distant star-forming galaxies, the horizontal line also reasonably represents the expected FWHM of our targets. As a measure of the detection depth of our images we adopt the magnitude at which 50\% of unresolved objects are detected (hereafter, $\cal R$$_{lim}$). These limiting detection magnitudes range over $\cal R$$_{lim}$=26.7--27.3 and are listed in Table~\ref{field-details.tab} and shown as vertical lines in Fig.~\ref{detection-efficiency.fig}. Overall, three of our five fields (03B, 09A, and 09B) reach object detection limits $\cal R$$_{lim}$\s27.2. The other two fields, 02A and 03A, reach $\cal R$$_{lim}$\s26.7.
The \s0.5 mag difference in depth arises because the two shallower fields either had a shorter exposure time (02A) or poorer seeing (03A) than the three deeper fields. In summary, the detection limits, $\cal R$$_{lim}$, tell us to what depth we can reasonably push our source list before encountering significant detection incompleteness. We find that $\cal R$$_{lim}$\s27.0 in the KDF, which is \s1.5 magnitudes deeper than the work of the Steidel team. The sky surface brightness limits that were discussed earlier tell us how deep we can push our matched-aperture color measurements without incurring photometric errors that are larger than those in the surveys of Steidel et al. Here, we also found that we can do so to \s1.5 magnitudes deeper than is the case in the data of the Steidel team. Thus, overall, our $U_n G {\cal R} I$\ survey can select galaxies in a manner that is identical to that used for the spectroscopically-calibrated selection of high-$z$ star-forming galaxies by the Steidel group, but can do so with confidence for objects that are up to \s1.5 magnitudes fainter, namely to $\cal R$\s27.0. It is to the selection of high-$z$ star-forming galaxies that we now turn. \section{PHOTOMETRIC SELECTION OF HIGH-$z$ GALAXIES}\label{LBGselection} \subsection{Color-color selection criteria}\label{selection-criteria} Steidel et al.\ (1999; 2003; 2004) have developed and extensively tested color-color selection criteria that efficiently and robustly select galaxies at \zs4, \zs3, \zs2.2, and \zs1.7. These selection criteria have evolved somewhat over time (c.f., e.g., Steidel et al.\ 1996, Erb et al.\ 2003) and in our work we use the most recent published selection criteria, which are as follows.
To select \zs4 objects we use (Steidel et al.\ 1999) \begin{eqnarray}\label{z4sel.eq} G-{\cal R} & \geq & 2.0, \nonumber \\ G-{\cal R} & \geq & 2({\cal R}-I)+1.5, \\ {\cal R} - I & \leq & 0.6, \nonumber \end{eqnarray} for \zs3 objects we use (Steidel et al.\ 2003) \begin{eqnarray}\label{z3sel.eq} G-{\cal R} & \leq & 1.2, \nonumber \\ U_n - G & \geq & G - {\cal R} + 1.0, \\ G-{\cal R} & \geq & -0.1, \nonumber \end{eqnarray} for \zs2.2 objects we use (see Steidel et al.\ 2004) \begin{eqnarray}\label{z22sel.eq} G-{\cal R} & \geq & -0.2, \nonumber\\ U_n-G & \geq & G-{\cal R}+0.2, \\ G-{\cal R} & \leq & 0.2(U_n-G)+0.4, \nonumber\\ U_n-G & \leq & G-{\cal R}+1.0,\nonumber \end{eqnarray} and for \zs1.7 objects we use (Steidel et al.\ 2004) \begin{eqnarray}\label{z17sel.eq} G-{\cal R} & \geq & -0.2, \nonumber\\ U_n-G & \geq & G-{\cal R}-0.1, \\ G-{\cal R} & \leq & 0.2(U_n-G)+0.4, \nonumber\\ U_n-G & \leq & G-{\cal R}+0.2.\nonumber \end{eqnarray} Additionally, we impose a faint magnitude limit of $\cal R$$\leq$27.0 motivated by the depth of our images. To guard against bright foreground interlopers we also impose a bright-end cut of $\cal R$$\geq$23.0. The color-color selection criteria of Equations~\ref{z4sel.eq}--\ref{z17sel.eq} are illustrated in the left-hand panels of Figures~\ref{GRIcolorcolor.fig} and \ref{UGRcolorcolor.fig}. The left panel of Figure~\ref{GRIcolorcolor.fig} shows the region in $G-{\cal R}$\ vs. ${\cal R}-I$\ color space used to select galaxies at \zs4 (Eq.~\ref{z4sel.eq}). The left panel of Figure~\ref{UGRcolorcolor.fig} shows in green, blue, and magenta the regions of $U_n-G$\ vs.\ $G-{\cal R}$\ color-color space defined by Equations~\ref{z3sel.eq}, \ref{z22sel.eq}, and \ref{z17sel.eq} (\zs3, 2.2, and 1.7), respectively. 
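For concreteness, the cuts of Equations~\ref{z4sel.eq}--\ref{z17sel.eq}, together with the 23.0$\leq$$\cal R$$\leq$27.0 magnitude window, translate directly into code. A sketch follows (the function names are ours; arguments are the measured aperture colors):

```python
def selects_z4(GR, RI):
    """G-RI color cuts for the z~4 sample (Eq. z4sel.eq)."""
    return GR >= 2.0 and GR >= 2.0 * RI + 1.5 and RI <= 0.6

def selects_z3(UnG, GR):
    """Un-G / G-R cuts for the z~3 sample (Eq. z3sel.eq)."""
    return -0.1 <= GR <= 1.2 and UnG >= GR + 1.0

def selects_z22(UnG, GR):
    """Un-G / G-R cuts for the z~2.2 sample (Eq. z22sel.eq)."""
    return (GR >= -0.2 and UnG >= GR + 0.2
            and GR <= 0.2 * UnG + 0.4 and UnG <= GR + 1.0)

def selects_z17(UnG, GR):
    """Un-G / G-R cuts for the z~1.7 sample (Eq. z17sel.eq)."""
    return (GR >= -0.2 and UnG >= GR - 0.1
            and GR <= 0.2 * UnG + 0.4 and UnG <= GR + 0.2)

def in_magnitude_window(R):
    """Bright- and faint-end magnitude cuts, 23.0 <= R <= 27.0."""
    return 23.0 <= R <= 27.0
```

Note that the \zs2.2 and \zs1.7 windows abut in the $U_n-G$\ vs.\ $G-{\cal R}$\ plane, so a given object satisfies at most one of the two.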
The criteria of Eq.~\ref{z3sel.eq} correspond exactly to the union of LBG types C, D, M, and MD of Steidel et al.\ (2003); those of Equations~\ref{z22sel.eq} and \ref{z17sel.eq}, respectively, to what Steidel et al.\ (2004) call types BX and BM. In our work we do not use this nomenclature of Steidel et al., but --- motivated by the observed redshift distributions (see below) of objects selected by Equations~\ref{z4sel.eq}--\ref{z17sel.eq} --- refer to them as the ``\zs4'', ``\zs3'', ``\zs2.2'', and ``\zs1.7'' criteria. Extensive spectroscopy of hundreds of objects (Steidel et al.\ 1999, 2003, 2004) has shown that the redshift distributions of objects selected by the criteria of Equations~\ref{z4sel.eq}--\ref{z17sel.eq} are --- at least for their shallower samples --- roughly Gaussian-shaped with $<$$z$$>$=4.13$\pm$0.26, $<$$z$$>$=2.96$\pm$0.26, $<$$z$$>$=2.20$\pm$0.32, and $<$$z$$>$=1.70$\pm$0.34. Spectroscopy has also shown that there is only small contamination by Galactic stars, low-$z$ galaxies, or high-$z$ AGN. At intermediate magnitudes, $\cal R$\s24--25.5, the contamination by foreground interlopers --- defined as objects with $z$$<$1 --- is less than \s5\% in all three $U_n G {\cal R}$-selected redshift bins (\zs1.7, 2.2, and 3; Steidel et al.\ 2003, 2004) and is likely to be even smaller in our samples because the ratio of galaxies to Galactic stars increases towards fainter magnitudes. The AGN fraction is put at $\sim$3\% by Steidel et al.\ (2003, 2004). In the $G {\cal R} I$-selected \zs4 sample, the foreground contamination is somewhat higher, \s20\%, although the statistics are fairly poor due to the small numbers of $G {\cal R} I$-selected objects with spectroscopy (Steidel et al.\ 1999). In summary, the color-color selection criteria of Equations~\ref{z4sel.eq}--\ref{z17sel.eq} select distinct populations with fairly narrow spreads in redshift of $\delta$$z$$\sim$$\pm$0.3 and with very little contamination by interlopers. 
We now apply these well-understood selection criteria to our data. \subsection{Our high-$z$ galaxy sample} The right-hand panel of Fig.~\ref{GRIcolorcolor.fig} shows the $G-{\cal R}$\ vs.\ ${\cal R}-I$\ colors of the 23$\leq$$\cal R$$\leq$27 objects in the KDF. The right-hand panel of Figure~\ref{UGRcolorcolor.fig} shows the $U_n-G$\ vs.\ $G-{\cal R}$\ colors of 23$\leq$$\cal R$$\leq$27 objects, although --- for clarity --- only one in three objects is plotted. The color distributions of objects in these two figures are very similar to those of the corresponding brighter samples of the Steidel et al.\ surveys (see Fig.~2 of Steidel et al.\ 1999 and Fig.~1 of Steidel et al.\ 2003). This close similarity is not surprising given the similarity of the KDF image depths at $\cal R$\s27 to theirs at $\cal R$\s25.5. However, it does give us strong reassurance that we are selecting identical populations, with similar photometric scatter, though at substantially fainter luminosities. To $\cal R$=27.0, the color-color selection gives us 427 \zs4, 1481 \zs3, 2417 \zs2.2, and 2043 \zs1.7 star-forming galaxies in the 169 arcmin$^2$\ of the KDF. This corresponds to surface densities of 2.5, 8.8, 14.3, and 12.1~arcmin$^{-2}$ at \zs4, \zs3, \zs2.2, and \zs1.7, respectively. These densities are significantly higher than the surface densities of identically-selected but brighter objects in the $\cal R$$\leq$25.5 samples of Steidel et al.\ (1999, 2003, 2004), which is not surprising in light of the fact that the KDF probe considerably deeper into the faint end of the luminosity function at these redshifts. We discuss in detail the shape and evolution of the high-$z$ galaxy luminosity function in a separate paper (Sawicki \& Thompson, in prep.). Figures~\ref{xypos.z4.fig}--\ref{xypos.z17.fig} show the spatial positions of the color-color selected objects in our survey overplotted on the positions of all $\cal R$-selected objects.
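The surface densities quoted above follow from simple arithmetic on the raw counts and the survey area; as a quick sanity check (variable names ours):

```python
# Counts to R = 27.0 over the 169 arcmin^2 of the KDF
counts = {"z~4": 427, "z~3": 1481, "z~2.2": 2417, "z~1.7": 2043}
area_arcmin2 = 169.0

# Surface density = raw count / survey area, in arcmin^-2
densities = {zbin: n / area_arcmin2 for zbin, n in counts.items()}
```

Rounded to one decimal place, this reproduces the 2.5, 8.8, 14.3, and 12.1~arcmin$^{-2}$ quoted in the text.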
As with Fig.~\ref{xypos.all.fig}, there are detection ``voids'' in the vicinity of bright stars (c.f.\ Fig.~\ref{color-images.fig}). Additionally, however, the high-$z$ galaxies shown in these four redshift slices do show significant real clustering: numerous voids and overdensities can be seen in all four slices, and there are also hints of filaments, best seen in the \zs4 sample in Fig.~\ref{xypos.z4.fig}. High-$z$ galaxies are, of course, well known to cluster (e.g., Adelberger et al.\ 1998; Giavalisco et al.\ 1998; Ouchi et al.\ 2001), and their clustering can, for example, be used to constrain the properties of the dark matter halos that they inhabit. Because of its depth, area, and large redshift span, our KDF sample is uniquely well suited to the study of clustering evolution and its dependence on luminosity. We will study these issues in detail in Sawicki \& Thompson (in prep.). \subsection{Contamination and completeness of the high-$z$ samples} \label{contamination-completeness} As is the case in any color-color selection of high-$z$ galaxies, our high-$z$ samples may suffer both from selection bias and from contamination by foreground interlopers. Four effects can be at play: some high-$z$ galaxies may be missed because they have {\it intrinsic} colors that are outside the color-color selection regions defined by Equations~\ref{z4sel.eq}--\ref{z17sel.eq}; low-$z$ interlopers may be included because they have intrinsic colors inside the color-color selection regions; high-$z$ galaxies with intrinsic colors that are inside the selection regions may scatter out of them due to photometric errors; and, finally, foreground interlopers may scatter into the selection regions due to photometric errors. Together, these four effects reduce our completeness by making us miss some fraction of high-$z$ galaxies, and they contaminate our sample with foreground interlopers.
We discuss the importance of these four effects in turn, making particular use of the {\it spectroscopically constrained} contamination fractions of the Steidel et al.\ surveys. \begin{enumerate} \item {\it Low-$z$ objects with intrinsic colors that place them in the high-$z$ samples.} While the color-color selection criteria of Equations~\ref{z4sel.eq}--\ref{z17sel.eq} are very effective at selecting high-$z$ galaxies from the much more numerous foreground objects, they are not immune against low-$z$ objects whose {\it intrinsic} colors place them in the high-$z$ color-color selection boxes. The colors of certain types of Galactic stars, for example, put them into the regions of Equations~\ref{z3sel.eq}--\ref{z17sel.eq}, and the colors of $z$\s1 red galaxies come dangerously close to the $z$$\sim$4 selection box defined by Equation~\ref{z4sel.eq} (see Steidel et al.\ 1999). Ordinarily, we would have no robust way of determining interloper fractions without recourse to {\it very} expensive spectroscopy. However, because we use the very same $U_n G {\cal R} I$\ filters and color-color selection criteria as the Steidel group, we can use their spectroscopically-determined contamination fractions to constrain the fraction of such interlopers in our samples. At $\cal R$\s25, the interloper fractions are $\lesssim$5\% for \zs1.7, 2.2, and 3, and \s20\% at \zs4 (Steidel et al.\ 1999, 2003, 2004; see also \S~\ref{selection-criteria}). Most of these interlopers are Galactic stars and intermediate-redshift ($z$\s1) red galaxies. However, at the magnitudes of our survey, the interloper fractions should be lower than in the surveys of Steidel et al.\ because the ratio of Galactic stars to galaxies decreases at fainter magnitudes as one ``punches'' out of the Galaxy, and --- similarly --- the fraction of intermediate-$z$ red galaxies decreases as one probes past the peak of their luminosity function.
Thus, we can expect that the interloper fractions measured by Steidel et al.\ at $\cal R$\s25 are {\it higher} than the interloper fractions at the fainter magnitudes of our survey. We therefore conservatively conclude that the interloper fractions in our \zs1.7, 2.2, and 3 samples are $\lesssim$5\%, and are $\lesssim$20\% at \zs4. \item {\it Low-$z$ objects scattered into the high-$z$ samples by photometric errors.} In addition to low-$z$ interlopers whose {\it intrinsic} colors lie in the high-$z$ color-color selection regions (effect \#1 above), low-$z$ interlopers with intrinsic colors {\it outside} the high-$z$ selection criteria may get scattered into the selection regions by random photometric errors. The importance of such scatter could be crudely gauged using simulations. However, a more direct and robust approach is to note that the photometric errors of $\cal R$\s27 objects in our imaging are similar to those of $\cal R$\s25.5 objects in the Steidel et al.\ surveys. Because of this similarity we can expect similar interloper fractions in our survey as in theirs. The interloper fractions measured spectroscopically by Steidel et al.\ (1999, 2003, 2004) include {\it both} the photometrically-scattered interlopers being discussed here {\it and} the interlopers with intrinsic colors that place them in the high-$z$ color-color selection regions (effect \#1 above). We can therefore conclude that the sum of {\it both} classes of interlopers in our survey amounts to $\lesssim$5\% at \zs1.7, 2.2, and 3, and $\lesssim$20\% at \zs4. \item {\it High-$z$ galaxies scattered out of the high-$z$ samples due to photometric errors.} In addition to low-$z$ objects being scattered by photometric errors into our color-color-selected samples (item \#2 above), true high-$z$ objects with intrinsic colors that should place them in these samples may be scattered out of the selection regions because of random photometric errors.
To first order such scatter should be no larger than the scatter in the opposite direction (\#2 above), given that high-$z$ galaxies are less numerous at a given apparent magnitude than low-$z$ ones. However, the amount of such scatter can be gauged more accurately using Monte Carlo simulations and we will use such simulations as needed --- for example when we use these data to study the high-$z$ galaxy luminosity functions (Sawicki \& Thompson 2004). \item {\it High-$z$ galaxies with intrinsic colors that place them outside our color-color selection criteria.} Finally, there exist high-$z$ galaxies whose {\it intrinsic} colors lie outside the color-color selection regions defined by Equations~\ref{z4sel.eq}--\ref{z17sel.eq}. For example, sufficient amounts of interstellar dust will redden high-$z$ galaxies out of our samples, moving them to the upper right in Figures~\ref{GRIcolorcolor.fig} and \ref{UGRcolorcolor.fig}. We have no way here to directly assess the size of such a missed population. We note, however, that our $U_n G {\cal R} I$\ selection ensures that our fainter high-$z$ samples miss {\it exactly the same} classes of high-$z$ galaxies as are missed in the brighter work by the Steidel group. Therefore --- unlike other optical LBG surveys, which have to combine bright with faint samples selected using different filter sets and color-color selection criteria --- we are free of {\it differential} bias in our sample selection. If we are biasing ourselves against certain classes of objects, we are doing so in the same way as the Steidel et al.\ samples, with no luminosity-dependent differences between our work and theirs. \end{enumerate} Above we have discussed the ways in which objects may be scattered in and out of our high-$z$ samples by photometric errors, and the ways in which our samples may be systematically contaminated and biased.
We are greatly aided in determining our interloper fractions by our $U_n G {\cal R} I$\ selection that is identical to that of the Steidel et al.\ work. This identical selection is a key feature of our survey and confers upon us a great advantage over other deep surveys that use different selection criteria that have not been extensively tested and calibrated with spectroscopy. \section{SUMMARY AND DISCUSSION}\label{summarydiscussion} In this paper we have introduced the Keck Deep Fields, a very deep $U_n G {\cal R} I$\ imaging survey which we use to construct samples of very faint star-forming galaxies at \zs4, \zs3, \zs2.2, and \zs1.7. The key features of this survey are: \begin{enumerate} \item The KDF survey uses the very same $U_n G {\cal R} I$\ filter set and spectroscopically-confirmed and -optimized color-color selection techniques developed by Steidel et al.\ (1999, 2003, 2004), thus obviating the need for expensive spectroscopic characterization of the sample and allowing us to confidently select {\it faint} star-forming galaxies at \zs4, \zs3, \zs2.2, and \zs1.7. \item The completeness limit of the KDF is $\cal R$$_{lim}$\s27 (with small field-to-field variations), where $\cal R$$_{lim}$ is the magnitude at which 50\% of point sources are detected. Because optically-selected high-$z$ galaxies are unresolved in our ground-based images, this magnitude limit is also the 50\% completeness limit for high-$z$ galaxies in our survey. \item The KDF survey reaches up to \s1.5 magnitudes deeper than the wider-area, but shallower, imaging used by Steidel and collaborators, allowing us to select samples of much fainter, substantially sub-$L^*$\ objects at \zs4, \zs3, \zs2.2, and \zs1.7 than are possible in the Steidel et al.\ surveys. \item To $\cal R$=27, the KDF survey contains 427, 1481, 2417, and 2043 $U_n G {\cal R} I$-selected star-forming galaxies at \zs4, \zs3, \zs2.2, and \zs1.7, respectively.
\item The KDF survey covers 169 arcmin$^2$\ and is split into three widely-separated, spatially-independent patches on the sky. It thereby provides a large sample of high-$z$ star-forming galaxies whose statistics are dominated neither by Poisson noise nor by cosmic variance. \end{enumerate} Our survey complements directly the wider but shallower surveys by the Steidel team by extending their well-understood selection techniques to galaxies that are up to four times fainter than the limit of the Steidel et al.\ work. The depth and efficiency of the KDF stem from three factors: (1) In obvious contrast to the Steidel et al.\ surveys, which are typically carried out on 4m-class telescopes, our survey was undertaken on a much larger, 10m-aperture telescope; to first order, this simple increase in collecting area allows us to reach 6.25 times deeper per unit exposure time, making our survey practical. (2) Additionally, we used a two-channel instrument that allowed us to observe the same field in two wavebands simultaneously, thereby greatly decreasing the amount of total telescope time required. (3) Finally, in our second observing run --- the run that yielded the bulk of our data --- we benefited significantly from a very efficient, UV-optimized detector mosaic that substantially reduced the necessary exposure times in the $G$- and --- especially --- $U_n$-bands. In a great many respects, the KDF survey is the best currently available for the study of {\it faint} star-forming galaxies at $z$\s2, 3 and 4. It holds very significant advantages over other very deep imaging surveys. For example, our survey is comparable in depth to the FORS Deep Field (FDF; Heidt et al.\ 2003), but covers \s3.5 times more area and is distributed over three spatially-independent patches compared to the FDF's single 48 arcmin$^2$\ field.
Our area is \s7 times smaller than the Subaru Deep Survey's 1200 arcmin$^2$\ (SDS; Ouchi et al.\ 2004), but whereas the SDS's $BVRi'$ filters allow the selection of only \zs4 Lyman Break galaxies, our $U_n G {\cal R} I$\ filter set lets us probe the \s2.2~Gyr time-span from \zs4 to \zs1.7; given our redshift coverage, we are in a position to study not just the properties but also the time evolution of the LBG population. The HST $UBVI$ imaging of the Hubble Deep Fields (HDFs; Williams et al.\ 1996, Casertano et al.\ 2002) matches ours in wavelength coverage and surpasses it in depth. However, the two HDFs combined cover a total of only \s10 arcmin$^2$, or a mere \s6\% of the area of our survey. Consequently, the HDFs are limited by Poisson noise in the number of objects and are more susceptible to the effects of large-scale structure. The Hubble Ultra Deep Field (UDF), deeper still than the HDFs, is similarly restricted to only a single small pointing, and, moreover, lacks deep $U$-band coverage, meaning that it is restricted to higher redshifts only. In addition to our advantages of area and/or wavelength coverage over {\it all} these surveys, one must also add the extremely important advantage that our survey gains from its use of the well-tested $U_n G {\cal R} I$\ filter system. Our use of this filter system and its associated color-color selections allows us to confidently select high-$z$ galaxies and tie them {\it directly} to the work of the Steidel group at brighter fluxes. To our knowledge, the only real competitor for the KDF survey is the GOODS project. The HST $BVIz$ imaging of the GOODS fields (Giavalisco et al.\ 2004), which covers twice the area of our survey and to a somewhat greater depth, provides an excellent dataset at $z$$\gtrsim$4.
However, at \zs3 and below, the lack of uniform $U$-band coverage of the GOODS fields makes them less useful since only the northern GOODS field has been imaged in $U$-band to a depth approaching that of our survey ($\sim$40 hrs in \s1.25$\arcsec$ seeing with the KPNO 4m+Mosaic; Mauro Giavalisco, private communication). Thus, the GOODS $U$-band imaging covers an area nearly identical to that of our survey (160 arcmin$^2$\ vs.\ our 169 arcmin$^2$) to a similar depth. However, this single GOODS field is potentially more affected by cosmic variance than is the sum of our three spatially independent patches. Moreover, as always, our KDF $U_n G {\cal R} I$\ data hold the very significant advantage at all redshifts $z$$\approx$2--4 of being a {\it direct and straightforward} extension of the spectroscopically-calibrated Steidel et al.\ selection technique. We therefore conclude that while the GOODS HST data dominate the field above $z$=4, the KDF are better suited for work at $z$$\lesssim$4. In the present paper --- the first in a series --- we have introduced our survey, described our observations and data reductions, and have presented our selection criteria for high-$z$ star-forming galaxies. As we have argued above, in many ways ours is the best survey for studying the population of {\it faint} star-forming galaxies from \zs4 to \zs1.7. The key features of our survey are its combination of depth with the well-understood $U_n G {\cal R}$\ and $G {\cal R} I$\ color-color selection. With the survey introduced and the data described in the present paper, subsequent papers in this series will address in detail the properties and evolution of the population of very faint star-forming galaxies as the Universe ages by a factor of 2.5 over the \s2.2~Gyr from \zs4 to \zs1.7. \vspace{5mm} We dedicate this work to the memory of Bev Oke, one of whose great many contributions to astronomy was the LRIS imaging spectrograph (Oke et al.\ 1995) without which this work would not have happened.
We also thank the Caltech time allocation committee for a generous time allocation that made this project possible and the staff of the W.M.\ Keck Observatory for their help in obtaining these data. We are also grateful to Chuck Steidel for his encouragement and support of this project and to Jerzy Sawicki for a thorough reading of the manuscript and many useful comments. Finally, we wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community; we are most fortunate to have the opportunity to conduct observations from this mountain. \newpage
\section{Introduction} Ultracold atomic gases provide an ideal platform for the study of quantum many-body physics \cite{cold_atom_rev}. Ever since the realization of Bose–Einstein condensates of weakly interacting gases \cite{BEC_1,BEC_2}, milestone achievements have been reported in cold-atom experiments. Prominent examples are the observation of the superfluid to Mott insulator phase transition of bosons in optical lattices \cite{BH_exp_1,BH_exp_2,BH_exp_3} and the BCS-BEC crossover for degenerate Fermi gases \cite{mixture_exp_ff_1, mixture_exp_ff_2}. Among them, the trapping of bosonic atoms in a double-well potential constitutes a prototype system for investigations of tunneling dynamics \cite{DW_exp_1,DW_exp_2, DW_exp_3}. Such a system represents a bosonic Josephson junction (BJJ), an atomic analog of the Josephson effect initially predicted for a pair of electrons (a Cooper pair) tunneling through two weakly linked superconductors \cite{BJJ_1,BJJ_2}. Owing to the unprecedented controllability of the trapping geometries as well as the atomic interaction strengths \cite{cold_atom_rev}, studies of the BJJ unveil various intriguing phenomena which are not accessible in conventional superconducting systems \cite{BJJ_Rabi_1, BJJ_Rabi_2, BJJ_Rabi_3, BJJ_Frag_1,BJJ_Frag_2, BJJ_Squeeze_1, BJJ_Squeeze_2, BJJ_Few_1, BJJ_Few_2, BJJ_Few_3, BJJ_Few_4, BJJ_Few_5}. Examples are Josephson oscillations \cite{BJJ_Rabi_1, BJJ_Rabi_2, BJJ_Rabi_3}, fragmentation \cite{BJJ_Frag_1, BJJ_Frag_2}, macroscopic quantum self-trapping \cite{DW_exp_3, BJJ_Rabi_1, BJJ_Rabi_2}, collapse and revival sequences \cite{BJJ_Rabi_3}, atomic squeezed states \cite{BJJ_Squeeze_1, BJJ_Squeeze_2}, as well as strongly correlated tunneling dynamics in few-body systems \cite{BJJ_Few_1, BJJ_Few_2, BJJ_Few_3, BJJ_Few_4, BJJ_Few_5}. On the other hand, systems driven out of equilibrium by time-dependent forces have attracted growing interest in recent years.
The time-dependent variation of control parameters can trigger non-trivial responses, allowing the system to exhibit novel properties which are absent in the static counterpart \cite{Driven_rev_1, Driven_rev_2, Driven_rev_3}. It has been shown that external driving can lead to a variety of phenomena in ultracold atomic ensembles \cite{Driven_rev_3}, for instance, the emergence of the superfluid--Mott insulator transition upon periodically shaking the optical lattice \cite{Driven_BEC_Mott_1,Driven_BEC_Mott_2,Driven_BEC_Mott_3} and the single-particle and many-body coherent destruction of tunneling in a driven double-well potential \cite{CDT_1,CDT_2}. A phenomenon of particular interest in driven cold atomic ensembles is the `ratchet effect', which can lead to unidirectional transport of the atoms in a fluctuating environment even in the absence of a net force bias \cite{Ratchet_1, Ratchet_2, Ratchet_3, Ratchet_4}. In order to realize such directed transport, the system must necessarily break certain spatio-temporal symmetries \cite{Ratchet_rev, Ratchet_5, Ratchet_6, Ratchet_7, Ratchet_8, Ratchet_9, Ratchet_10, Ratchet_11, Ratchet_12}. This not only provides a useful method for controlling the transport of atomic ensembles but also enables applications such as particle separation based on physical properties \cite{Ratchet_13, Ratchet_14, Ratchet_15} and the design of efficient velocity filters \cite{Ratchet_16, Ratchet_17}. In the present work, we explore the ratchet effect for a many-body bosonic ensemble confined in a 1D double-well potential whose depth is periodically modulated. Unlike most previous studies, which focus either on the non-interacting regime \cite{CDT_1} or on the transient dynamics \cite{BJJ_Driven_1, BJJ_Driven_2}, we investigate the transport properties of interacting particles in the asymptotic limit $t \rightarrow \infty$.
Specifically, we start with an equal number of particles in both wells and explore the emergence of an asymptotic population imbalance (API) between the two wells. For this, the spatial parity and time-reversal symmetries need to be broken \cite{Ratchet_rev, Ratchet_5}, which is achieved by a suitable bi-harmonic driving force. We show that the value of the API can be flexibly controlled by changing the driving phase. Most importantly, we demonstrate that, for the same driving force, the API shows an individually characteristic behavior for different particle numbers. While the API is highly sensitive to the initial state in the few-particle regime, this dependence on the initial state is lost as the number of particles is increased, thus approaching the classical limit of large particle numbers. We explain the behavior of the API in the few-particle limit in terms of the underlying Floquet modes. In the many-particle regime, we show that the API can be interpreted in terms of the well-established classical non-rigid driven pendulum \cite{BJJ_Rabi_1, BJJ_Rabi_2, BJJ_Rabi_3, BJJ_Driven_1}, providing deeper insight into the connections between classical and quantum physics. Although the obtained API values agree very well with the ones in the classical limit, we show that there exists a significant disagreement in the corresponding real-time population imbalance due to the presence of quantum correlations. This paper is organized as follows. In Sec.~\ref{Hamiltonian_setup}, we introduce our setup and the quantities of interest. In Sec.~\ref{symmetries}, we investigate the relevant symmetries controlling the API in both the quantum and classical limits. In Secs.~\ref{Results_analysis} and \ref{discussion}, we present a comprehensive study of the behavior of the API as we go from the few-particle regime to the many-particle regime. Finally, our conclusions and outlook are provided in Sec.~\ref{Conclusions}.
\section{Setup} \label{Hamiltonian_setup} We consider an ultracold many-body ensemble consisting of $N$ interacting bosons confined within a one-dimensional (1D) symmetric double-well potential $V_{DW}(x)$, whose depth is modulated periodically via a driving force $f(t)$. The Hamiltonian of the system is given by \begin{align} \hat{H}(t) &=\int dx~\hat{\psi}^{\dagger}(x) \textit{h}_{0}(x,t) \hat{\psi}(x) \nonumber\\ &+\frac{g_{b}}{2}\int dx~\hat{\psi}^{\dagger}(x)\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\hat{\psi}(x), \end{align} where $ \hat{\psi}^{\dagger}(x)$ [$\hat{\psi}(x)$] is the field operator that creates (annihilates) a boson at position $x$. The single-particle Hamiltonian is $\textit{h}_{0}(x,t) = -\frac{\hbar^{2}}{2 m}\frac{\partial^{2}}{\partial x^{2}}+ V_{DW} (x) + x f(t)$, where $f(t) = E_{1}\cos(\omega t) + E_{2}\cos(2\omega t + \phi)$ is a bi-harmonic periodic driving force. $E_{1}$ and $E_{2}$ denote the driving amplitudes, $\omega$ is the driving frequency and $\phi$ is a temporal phase shift. The interaction among the bosons is assumed to be of zero range and is modeled by a contact potential of strength \cite{Feshbach_0,Feshbach_1} \begin{equation} g_{b} = \frac{4 \hbar^{2}a_{b}}{m a_{\bot, b}^{2}}\left[1-C \frac{a_{b}}{a_{\bot, b}}\right]^{-1}. \end{equation} Here $a_{b}$ is the 3D Bose-Bose $s$-wave scattering length and $C \approx 1.4603$ is a constant. The parameter $a_{\bot, b} = \sqrt{2 \hbar/(m \omega_{\bot})}$ is the transverse confinement length scale. In this work, we focus on the repulsive interaction regime, i.e., $g_{b} \geqslant 0$, which can be controlled experimentally by tuning the $s$-wave scattering length via Feshbach or confinement-induced resonances \cite{Feshbach_1,Feshbach_2,Feshbach_3}.
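For orientation, the expression for $g_{b}$ above can be evaluated numerically; the short sketch below (our own, in units with $\hbar = m = 1$) also makes the confinement-induced resonance at $a_{b}/a_{\bot, b} = 1/C$ explicit, where $g_{b}$ diverges:

```python
def g_1d(a_b, a_perp, hbar=1.0, m=1.0, C=1.4603):
    """Effective 1D contact coupling g_b = [4 hbar^2 a_b / (m a_perp^2)]
    * [1 - C a_b / a_perp]^(-1); diverges at the confinement-induced
    resonance a_b / a_perp = 1/C."""
    return 4.0 * hbar**2 * a_b / (m * a_perp**2) / (1.0 - C * a_b / a_perp)
```

On the repulsive side, $0 < a_{b}/a_{\bot,b} < 1/C$, the coupling grows monotonically as the resonance is approached.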
For sufficiently weak interactions and tight enough confinement, excitations to higher bands are severely suppressed and, as a result, the bosons mainly populate the lowest two eigenstates $u_{\pm}(x)$ of the single-particle Hamiltonian $\hat{h}_{s}(x) = -\frac{\hbar^{2}}{2 m}\frac{\partial^{2}}{\partial x^{2}}+ V_{DW} (x) $. We therefore adopt the single-band approximation by expanding the field operator as \begin{equation} \hat{\psi}(x) = u_{L}(x) \hat{a}_{L} + u_{R}(x) \hat{a}_{R}, \label{2_mode_psi} \end{equation} with $u_{L,R}(x)$ being the Wannier-like states localized in the left and right well, respectively. This leads to the Hamiltonian \begin{align} \hat{H}_{BH}(t) &= -J_{BH} (\hat{a}^{\dagger}_{L}\hat{a}_{R} + \hat{a}^{\dagger}_{R} \hat{a}_{L} ) + \frac{U_{BH}}{2} \sum_{i = L,R} \hat{a}^{\dagger}_{i}\hat{a}^{\dagger}_{i}\hat{a}_{i} \hat{a}_{i} \nonumber \\ &+ f(t) (\hat{a}^{\dagger}_{L} \hat{a}_{L} - \hat{a}^{\dagger}_{R} \hat{a}_{R}), \label{BH_model} \end{align} which corresponds to the two-site Bose-Hubbard (BH) model, with $\hat{a}^{\dagger}_{L/R}$ ($ \hat{a}_{L/R}$) being the creation (annihilation) operator with respect to the $u_{L/R}(x)$ state; the dipole matrix elements arising from projecting the driving term $x f(t)$ onto the two modes have been absorbed into the driving amplitudes $E_{1,2}$. The coefficients \begin{align} J_{BH} &= -\int dx~ u_{L}^{\ast}(x)\, \hat{h}_{s}(x)\, u_{R}(x), \nonumber \\ U_{BH} &= g_{b} \int u_{i}^{4}(x) dx, ~~~ (i = L,R) \end{align} represent the hopping amplitude and the on-site repulsion energy, respectively, and \begin{equation} f(t) = E_{1}\cos(\omega t) + E_{2}\cos(2\omega t + \phi) \label{f_t} \end{equation} denotes the bi-harmonic driving force. We choose the units of energy and time as $\eta = \epsilon_2 - \epsilon_1$ and $\xi = 2\pi \hbar / \eta$, with $\epsilon_1$ ($\epsilon_2$) being the energy of the ground (first excited) state of the single-particle Hamiltonian $\hat{h}_{s}(x)$. With this choice, the hopping amplitude in $\hat{H}_{BH}(t)$ takes the constant value $J_{BH} = 1/2$.
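Since the two-site Hamiltonian \eqref{BH_model} acts on the $(N+1)$-dimensional Fock space spanned by $\{|N_{L}, N-N_{L}\rangle\}$, its matrix at a fixed instant is straightforward to write down. The following Python sketch (function name ours; units as above, so $J_{BH}=1/2$) is a minimal implementation:

```python
import numpy as np

def bh_matrix(N, J, U, f_t):
    """Matrix of the two-site Bose-Hubbard Hamiltonian, Eq. (BH_model),
    in the Fock basis {|N_L, N - N_L>, N_L = 0..N}, for drive value f_t."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for NL in range(dim):
        NR = N - NL
        # on-site repulsion + tilt f(t) (n_L - n_R)
        H[NL, NL] = 0.5 * U * (NL * (NL - 1) + NR * (NR - 1)) + f_t * (NL - NR)
        # hopping -J (a_L^dag a_R + h.c.) couples |N_L> to |N_L + 1>
        if NL < N:
            H[NL + 1, NL] = H[NL, NL + 1] = -J * np.sqrt((NL + 1) * NR)
    return H
```

For $N=1$, $U=0$ and $f(t)=0$ this reduces to a two-level system with splitting $2J_{BH} = 1$, i.e., exactly the unit of energy $\eta$ chosen above.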
In this work, we explore the asymptotic particle transport in the setup due to the time-dependent driving of the spatial potential. Since our system is spatially bounded, such a particle transport eventually results in an \textit{asymptotic population imbalance} (API) between the two wells. We characterize the API as \begin{equation} \overline{\Delta \rho } = \lim_{ \tau, \tau^{\prime} \rightarrow \infty} ~\frac{1}{ \tau^{\prime}} \int_{\tau}^{\tau + \tau^{\prime}} dt~ \langle \Delta \hat{\rho}\rangle(t), \label{API} \end{equation} with $ \Delta \hat{\rho} = (\hat{n}_L - \hat{n}_R)/N$ being the normalized particle occupation difference for a fixed total particle number $N$. The average $\langle \Delta \hat{\rho}\rangle(t)$ is computed with respect to the many-body wavefunction $| \Psi(t) \rangle$, which evolves according to the Schr\"odinger equation $i\hbar \partial / \partial t | \Psi(t) \rangle = \hat{H}_{BH}(t) |\Psi(t) \rangle $. Throughout this work, we consider the initial population of the two wells to be equal such that $\langle \Delta \hat{\rho}\rangle(0)= 0 $ (see below), and we explore the possibilities for the appearance of a non-vanishing $\overline{\Delta \rho } $ in the limit $ \tau, \tau^{\prime} \rightarrow \infty$. Since the Hamiltonian \eqref{BH_model} is periodic in time, i.e., $\hat{H}_{BH}(t) = \hat{H}_{BH}(t+T)$, with period $T = 2\pi/ \omega$, we can write the above wavefunction as \cite{Driven_rev_1,Floquet_1} \begin{equation} |\Psi(t) \rangle = \sum_{\alpha} A_{\alpha} e^{-i \epsilon_{\alpha}t} | \Phi_{\alpha}(t) \rangle, \end{equation} with $| \Phi_{\alpha}(t) \rangle$ being the Floquet mode (FM) with the temporal period $T$, i.e., $| \Phi_{\alpha}(t) \rangle = | \Phi_{\alpha}(t+ T) \rangle$. The quasi-energy (QE) $\epsilon_{\alpha}$ can always be chosen within the interval $[-\omega/2, \omega/2]$ \cite{Driven_rev_1,Floquet_1}.
According to the Floquet theorem, the FMs fulfill the eigenvalue equation \begin{equation} \hat{H}_{F}(t) | \Phi_{\alpha}(t) \rangle \rangle = \epsilon_{\alpha} |\Phi_{\alpha}(t) \rangle \rangle. \label{Floquet_eigen} \end{equation} Here $\hat{H}_{F}(t) = \hat{H}_{BH}(t) - i \partial / \partial t$ is the Floquet Hamiltonian, which is defined in the composite Hilbert space $\mathcal{R} \otimes \mathcal{T}$, with $\mathcal{R}$ being the Hilbert space of square-integrable functions and $\mathcal{T}$ the space of time-periodic functions with period $T = 2\pi/ \omega$. The FM $|\Phi_{\alpha}(t) \rangle \rangle$ can thus be expressed as a linear superposition of the composite states \begin{equation} |\Phi_{\alpha}(t) \rangle \rangle = \sum_{N_{L}, n} \mathcal{D}^{\alpha}_{N_{L}, n} |N_{L}, N_{R}\rangle \otimes e^{i n \omega t}, \label{FM_number_state} \end{equation} where $\{|N_{L}, N_{R}\rangle \}$ denote the number states with $N_{L} + N_{R} = N$ and $n =0, \pm 1, \pm 2, \dots$ is an integer.
Correspondingly, the orthonormality condition for the FMs reads \begin{align} \langle \langle \Phi_{\alpha}(t) | \Phi_{\beta}(t) \rangle \rangle &= \sum_{N_{L}, n} \sum_{N_{L}^{\prime}, n^{\prime}} [\mathcal{D}^{\alpha}_{N_{L}, n}]^{*} \mathcal{D}^{\beta}_{N_{L}^{\prime}, n^{\prime}} \nonumber \\ &\times \frac{1}{T} \int_{0}^{T} dt\, e^{i(n^{\prime}-n) \omega t} \langle N_{L}, N_{R} |N_{L}^{\prime}, N_{R}^{\prime} \rangle = \delta_{\alpha, \beta}. \end{align} In terms of the Floquet modes, the API defined in Eq.~\eqref{API} simplifies to \begin{align} \overline{\Delta \rho} & = \lim_{\tau, \tau^{\prime} \rightarrow \infty} ~\frac{1}{ \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}} dt~ \sum_{\alpha, \beta} A_{\alpha}^{*} A_{\beta} ~e^{i(\epsilon_{\alpha} - \epsilon_{\beta})t} \langle \langle\Phi_{\alpha}(t)| \Delta \hat{\rho} | \Phi_{\beta}(t)\rangle \rangle \nonumber \\ & = \sum_{\alpha} P_{\alpha} \overline{\Delta \rho_{\alpha}}, \label{API_FM} \end{align} where $P_{\alpha} = A_{\alpha}^{*} A_{\alpha} $ denotes the weight of the $\alpha$-th FM and is obtained as the overlap of the initial state with $| \Phi_{\alpha}(t=0) \rangle \rangle$, and $ \overline{\Delta \rho_{\alpha}} = \langle \langle \Phi_{\alpha}(t)| \Delta \hat{\rho} | \Phi_{\alpha}(t)\rangle \rangle $ denotes the API corresponding to the $\alpha$-th FM $| \Phi_{\alpha}(t) \rangle \rangle$. It is important to emphasize that the validity of Eq.~\eqref{API_FM} relies on the assumption that the Floquet Hamiltonian $\hat{H}_{F}(t)$ is non-degenerate, i.e., $\epsilon_{\alpha} \neq \epsilon_{\beta}$ for $\alpha \neq \beta$, which is well-justified by the extension of the von Neumann-Wigner theorem \cite{Ratchet_rev, Non_degenerate}.
\section{Symmetry analysis}\label{symmetries} In order to achieve a non-vanishing asymptotic population imbalance between the two wells, one needs to break certain symmetries of the underlying system, specifically the \textit{generalized parity symmetry} and the \textit{generalized time-reversal symmetry}. In this section, we discuss how these symmetries are violated in our system, both in the quantum and in the classical case. We begin with the quantum limit, where we show how these symmetries affect both the FMs $|\Phi_{\alpha}(t) \rangle \rangle$ and the operator $ \Delta \hat{\rho}$, thereby controlling the value of the API. In contrast, the dynamics of the particles in the classical limit is fully characterized by the classical phase space. The appearance of a nonzero API in this case, as we will show, is due to a desymmetrization of the chaotic manifold of the phase space caused by the breaking of the symmetries. \subsection{Quantum limit} \subsubsection{Angular-momentum representation} We first introduce the three angular-momentum operators \cite{BJJ_Rabi_3, BJJ_Driven_1} \begin{align} \hat{J}_{x} &= \frac{1}{2} (\hat{a}_{L}^{\dagger} \hat{a}_{R} + \hat{a}_{R}^{\dagger} \hat{a}_{L} ) ,~~~\hat{J}_{y} = -\frac{i}{2} (\hat{a}_{L}^{\dagger} \hat{a}_{R} - \hat{a}_{R}^{\dagger} \hat{a}_{L} ), \nonumber\\ \hat{J}_{z} &= \frac{1}{2} (\hat{a}_{L}^{\dagger} \hat{a}_{L} - \hat{a}_{R}^{\dagger} \hat{a}_{R} ), \label{spin_operators} \end{align} obeying the SU(2) commutation relations $ [\hat{J}_{\alpha}, \hat{J}_{\beta}] = i \epsilon_{\alpha \beta \gamma} \hat{J}_{\gamma}$. In this representation, the many-particle Hamiltonian \eqref{BH_model} can be rewritten as \begin{equation} \hat{H}_{S}(t) = - \hat{J}_{x} + U_{BH} \hat{J}_{z}^{2} -2f(t) \hat{J}_{z}, \label{spin_BH_model} \end{equation} and the Floquet Hamiltonian in Eq.~\eqref{Floquet_eigen} becomes $\hat{H}_{F}(t) = \hat{H}_{S}(t) - i \partial / \partial t$.
The Casimir invariant $\hat{J}^{2}$ can be expressed in terms of the total number of particles $N$ as \begin{equation} \hat{J}^{2} = \hat{J}_{x}^{2} + \hat{J}_{y}^{2} + \hat{J}_{z}^{2} = \frac{N}{2}(\frac{N}{2}+1), \end{equation} denoting the conservation of the total angular momentum with magnitude $l = N/2$. Consequently, the common eigenstates $\{|l,m \rangle \}$ of $\hat{J}^{2}$ and $\hat{J}_{z}$ correspond precisely to the $N + 1$ basis states $\{|N_{L}, N_{R}\rangle \}$ of the $N$-particle Hilbert space. In this way, the original many-particle Hamiltonian in \eqref{BH_model} is completely mapped onto the single-particle Hamiltonian in \eqref{spin_BH_model}. The hopping of the particles between the two wells now corresponds to an angular-momentum precession about the $x$-axis, and the driving potential $f(t) (\hat{N}_{L} - \hat{N}_{R})$ can be interpreted as a periodic modulation of a Zeeman field applied in the $z$-direction. The FMs in Eq.~\eqref{FM_number_state} can now be expressed as \begin{equation} | \Phi_{\alpha}(t) \rangle \rangle = \sum_{m,n} C_{m,n}^{\alpha} | l,m \rangle \otimes e^{i n \omega t}, \label{cof_floquet} \end{equation} in terms of the angular momentum basis $\{| l,m \rangle \} $. The API can hence be interpreted as the asymptotic magnetization along the $z$-direction \begin{align} \overline{\Delta \rho } &= \overline{J_{z} } = \lim_{\tau, \tau^{\prime} \rightarrow \infty} ~\frac{2}{N \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}}dt~ \langle \hat{J}_{z}\rangle(t) \nonumber \\ &= \frac{2}{N} \sum_{\alpha} P_{\alpha} \langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} | \Phi_{\alpha}(t)\rangle \rangle = \sum_{\alpha} P_{\alpha} \overline{J_{z}^{\alpha} }, \label{API_sum} \end{align} with $\overline{J_{z}^{\alpha} } = \frac{2}{N}\langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} | \Phi_{\alpha}(t)\rangle \rangle$ being the API corresponding to the $\alpha$-th FM $| \Phi_{\alpha}(t)\rangle \rangle$.
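This mapping makes the Floquet problem amenable to direct numerical treatment: the QEs and the per-mode averages $\overline{J_{z}^{\alpha}}$ follow from diagonalizing the one-period propagator generated by $\hat{H}_{S}(t)$. The Python sketch below illustrates this; the bi-harmonic driving form $f(t) = E_{1}\cos(\omega t) + E_{2}\cos(2\omega t + \phi)$ and the unit hopping amplitude are assumptions made here for illustration, since Eqs.~\eqref{BH_model} and \eqref{f_t} are defined elsewhere in the text.

```python
import numpy as np
from scipy.linalg import expm

def floquet_api(N, U_BH, E1, E2, om, phi, n_steps=2000):
    """Quasi-energies and per-mode APIs from the one-period propagator."""
    l = N / 2
    m = np.arange(-l, l + 1)
    jp = np.diag(np.sqrt(l * (l + 1) - m[:-1] * (m[:-1] + 1)), -1)  # J_+
    Jx, Jz = (jp + jp.T) / 2, np.diag(m)
    T = 2 * np.pi / om
    dt = T / n_steps
    f = lambda t: E1 * np.cos(om * t) + E2 * np.cos(2 * om * t + phi)
    # piecewise-constant (midpoint-sampled) propagators over one driving period
    steps = [expm(-1j * (-Jx + U_BH * Jz @ Jz - 2 * f((k + 0.5) * dt) * Jz) * dt)
             for k in range(n_steps)]
    U = np.eye(N + 1, dtype=complex)
    for s in steps:
        U = s @ U
    evals, modes = np.linalg.eig(U)     # U |Phi_a(0)> = exp(-i eps_a T) |Phi_a(0)>
    eps = -np.angle(evals) / T          # quasi-energies folded into (-om/2, om/2]
    apis = []
    for a in range(N + 1):
        psi, acc = modes[:, a].copy(), 0.0
        for s in steps:                 # average <J_z> over one period
            acc += (psi.conj() @ Jz @ psi).real * dt
            psi = s @ psi
        apis.append(2 / N * acc / T)
    return eps, np.array(apis)
```

For $\phi = \pi/2$, the generalized time-reversal symmetry discussed below forces every $\overline{J_{z}^{\alpha}}$ to vanish, which provides a useful consistency check of such an implementation.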
In order to obtain a non-zero API, it is important that the system breaks the symmetries which transform $\hat{J}_{z} \rightarrow -\hat{J}_{z}$ and hence render $\overline{J_{z}^{\alpha} }=0$ \cite{Ratchet_rev}. In the following, we discuss the general form of these symmetry operations and how they can be broken. \subsubsection{Generalized parity symmetry} \label{Sp_symmetry} In the absence of any driving force, i.e., $ E_1= E_2 = 0$, the Hamiltonian in \eqref{spin_BH_model} is time-independent. A natural choice of symmetry transformation which keeps this time-independent Hamiltonian invariant while changing the sign of $\hat{J}_{z}$ is a rotation by an angle $\pi$ about the $x$-axis, denoted by the operator $\hat{R}_{x} (\pi)= e^{-i \pi \hat{J}_{x}}$. This is no longer true in the time-dependent case, since $\hat{R}_{x}(\pi) \hat{H}_{F}(t) \hat{R}_{x} ^{-1}(\pi) \neq \hat{H}_{F}(t) $ in general. However, if $ E_2 = 0$, the driving force changes sign under a translation by half a period, i.e., $f(t) = -f(t + T/2)$ [cf. Eq.~\eqref{f_t}], and the Hamiltonian $\hat{H}_{F}(t)$ is symmetric with respect to the transformation \cite{kicked_top} \begin{equation} S_{p}: (J_{x}, J_{z}, t) \rightarrow (J_{x}, -J_{z}, t+T/2 ), \end{equation} generated by the symmetry operator \begin{equation} \hat{S}_{p} = \hat{R}_{x}(\pi) \otimes \hat{Q}(T/2). \end{equation} Here $\hat{Q}(T/2)$ is the time-shift operator, which shifts $t$ by $T/2$ and thereby takes $f(t) \rightarrow -f(t)$. $\hat{S}_{p}$ is the most general transformation which keeps $\hat{H}_{F}(t)$ invariant but changes the sign of $\hat{J}_{z}$ in the presence of our periodic driving force $f(t)$. In view of the interpretation of $\hat{J}_{z}$ [cf. Eq.~\eqref{spin_operators}] in terms of the particle numbers in the left and right well of our double-well potential, we regard the symmetry transformation $S_{p}$ as the \textit{generalized parity symmetry}.
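The rotation part of $\hat{S}_{p}$ can be verified directly at the matrix level. The short Python check below (using the standard matrix representation of the angular-momentum algebra, not taken from the text) confirms that $\hat{R}_{x}(\pi)$ leaves $\hat{J}_{x}$ invariant, reverses $\hat{J}_{z}$, and squares to the identity for integer $l = N/2$.

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(N):
    """J_x, J_y, J_z for l = N/2 in the basis {|l, m>, m = -l, ..., l}."""
    l = N / 2
    m = np.arange(-l, l + 1)
    jp = np.diag(np.sqrt(l * (l + 1) - m[:-1] * (m[:-1] + 1)), -1)  # J_+
    return (jp + jp.T) / 2, (jp - jp.T) / 2j, np.diag(m + 0j)

N = 2                                  # two particles, i.e. integer l = 1
Jx, Jy, Jz = spin_matrices(N)
Rx = expm(-1j * np.pi * Jx)            # rotation by pi about the x-axis
Jx_rot = Rx @ Jx @ Rx.conj().T         # R_x(pi) J_x R_x(pi)^(-1) = J_x
Jz_rot = Rx @ Jz @ Rx.conj().T         # R_x(pi) J_z R_x(pi)^(-1) = -J_z
```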
Since $\hat{S}_{p} \hat{H}_{F}(t) \hat{S}_{p} ^{-1} = \hat{H}_{F}(t) $ and $\hat{S}_{p}$ is a unitary operator, all the eigenstates of $\hat{H}_{F}(t)$ can be characterized as either symmetric or anti-symmetric with respect to $\hat{S}_{p}$, i.e., $\hat{S}_{p} | \Phi_{\alpha}(t) \rangle \rangle = \pm \sigma | \Phi_{\alpha}(t) \rangle \rangle$, with $\sigma = 1$ for integer $l = N/2 $ and $\sigma = i$ for half-integer $l$. Together with the relation $\hat{S}_{p} \hat{J}_{z} \hat{S}_{p}^{-1} = -\hat{J}_{z}$, this implies \begin{align} \overline{J_{z}^{\alpha} } &= \frac{2}{N} \langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} | \Phi_{\alpha}(t)\rangle \rangle \nonumber \\ &= \frac{2}{N} \langle \langle \Phi_{\alpha}(t) |\hat{S}^{-1}_{p} \hat{S}_{p} \hat{J}_{z} \hat{S}^{-1}_{p} \hat{S}_{p} |\Phi_{\alpha}(t)\rangle \rangle \nonumber\\ &= - \frac{2}{N} \langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} | \Phi_{\alpha}(t)\rangle \rangle =- \overline{J_{z}^{\alpha} }=0. \label{zero_API_parity} \end{align} Here we have employed the fact that $\hat{S}_{p}$ is a unitary operator, which leads to $\langle \langle \Phi_{\alpha}(t) |\hat{S}^{-1}_{p} = \langle \langle \Phi_{\alpha}(t)| \hat{S}^{\dagger}_{p} = \pm \sigma^{\ast} \langle \langle \Phi_{\alpha}(t)| $. Since the contribution $\overline{J_{z}^{\alpha} }$ of each FM $| \Phi_{\alpha}(t) \rangle \rangle$ to the API $\overline{J_{z} }$ vanishes, one concludes that $\overline{J_{z} } = \sum_{\alpha} P_{\alpha} \overline{J_{z}^{\alpha} } = 0$ for any initial condition. Since a single-harmonic driving force (i.e., $E_{2} = 0$) satisfies $f(t) = -f(t + T/2)$, the corresponding API is always zero. Hence, in order to achieve a non-zero API, we must have $E_{2} \neq 0$. \subsubsection{Generalized time-reversal symmetry} \label{St_symmetry} Apart from $S_{p}$, the time-reversal operation can also flip the sign of $\hat{J}_{z}$ \cite{QM_Sakurai}.
In fact, for our bi-harmonic driving force with temporal phase shift $\phi=\pi/2$ or $\phi = 3\pi/2 $, $f(t)$ satisfies $f(t) = -f(-t + T/2)$ [cf. Eq.~\eqref{f_t}], and one can define the \textit{generalized time-reversal symmetry} transformation \cite{kicked_top} \begin{equation} S_{t}: (J_{x}, J_{z}, t) \rightarrow (J_{x}, -J_{z}, -t+T/2) \end{equation} generated by the symmetry operator \begin{equation} \hat{S}_{t} = \hat{R}_{z}(\pi) \otimes \hat{\Theta} \otimes \hat{Q}(T/2) \label{S_t} \end{equation} as the most general form of the time-reversal operation which transforms $\hat{J}_{z} \rightarrow - \hat{J}_{z}$ and keeps the Hamiltonian $\hat{H}_{F}(t)$ invariant. Here $\hat{R}_{z}(\pi)$ represents the operator inducing a rotation by an angle $\pi$ about the $z$-axis, $\hat{\Theta}$ is the anti-unitary time-reversal operator, and $\hat{Q}(T/2)$ is the time-shift operator. Although the time-reversal operator $\hat{\Theta}$ does not commute with the time-shift operator $\hat{Q}(T/2)$ in general, we note that, when acting on the FMs, their relative order does not affect the physics (see Appendix \ref{Appendix_1}). Due to the anti-unitary operator $\hat{\Theta}$ in $\hat{S}_{t}$, one cannot classify the FMs as even or odd, in contrast to our previous discussion of the parity transformation. However, we note that \cite{QM_Sakurai} \begin{align} \hat{R}_{z}(\pi) | l,m \rangle &= e^{-im\pi} |l,m \rangle , \nonumber \\ \hat{\Theta} | l,m \rangle & = i^{2m} |l,-m \rangle. \end{align} These relations, together with the fact that the transformation $\hat{S}_{t}$ preserves the modulus of the inner product of two FMs, provide a useful relation among the expansion coefficients, \begin{equation} |C_{m,n}^{\alpha}|^{2} = |C_{-m,n}^{\alpha}|^{2}.
\label{cof_TR_FM} \end{equation} We note that the API $\overline{J_{z}^{\alpha} }$ corresponding to the FM $| \Phi_{\alpha}(t)\rangle \rangle$ can be expressed in terms of the coefficients $C_{m,n}^{\alpha}$ as \begin{equation} \overline{J_{z}^{\alpha} } = \frac{2}{N} \langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} | \Phi_{\alpha}(t)\rangle \rangle = \frac{2}{N} \sum_{m,n} |C_{m,n}^{\alpha}|^{2} m. \label{cof_TR_FM2} \end{equation} Alternatively, by applying the symmetry transformation $\hat{S}_{t}$, $\overline{J_{z}^{\alpha} }$ can also be expressed as \cite{QM_Sakurai} \begin{align} \overline{J_{z}^{\alpha} } &= \frac{2}{N} \langle \langle \Phi_{\alpha}(t)| \hat{J}_{z} |\Phi_{\alpha}(t)\rangle \rangle \nonumber \\ & = \frac{2}{N} \langle \langle \widetilde{\Phi}_{\alpha}(t)| \hat{S}_{t} \hat{J}_{z} \hat{S}^{-1}_{t} |\widetilde{\Phi}_{\alpha}(t)\rangle \rangle \nonumber \\ &= - \frac{2}{N} \sum_{m,n} |C_{-m,n}^{\alpha}|^{2} m = -\overline{J_{z}^{\alpha} } = 0, \end{align} where $|\widetilde{\Phi}_{\alpha}(t)\rangle \rangle = \hat{S}_{t} |\Phi_{\alpha}(t)\rangle \rangle$ and we have used the fact that $\hat{S}_{t} \hat{J}_{z} \hat{S}_{t}^{-1} = - \hat{J}_{z}$, along with Eqs.~\eqref{cof_TR_FM} and \eqref{cof_TR_FM2}. Hence, for the driving phases $\phi=\pi/2$ or $\phi = 3\pi/2 $, the API $\overline{J_{z} } = \sum_{\alpha} P_{\alpha} \overline{J_{z}^{\alpha} } = 0$ for any initial condition. In order to achieve a non-zero API, we must therefore not only have $E_{2} \neq 0$ but also $\phi \neq \pi/2 $ and $\phi \neq 3\pi/2 $. \subsubsection{Dependence of API on the driving phase} \label{API_two_phi} Having investigated the symmetries of the Floquet Hamiltonian and ways to break them, let us now discuss how the value of $\overline{J_{z}^{\alpha} }$ depends on the driving phase $\phi$. First, we note that two Floquet Hamiltonians which are related by a symmetry transformation have the same quasi-energy (QE) spectrum.
We consider two Floquet Hamiltonians $\hat{H}_{F1}(t)$ and $\hat{H}_{F2}(t)$ satisfying \begin{align} \hat{H}_{F1}(t) | \Phi_{\alpha}^{(1)}(t) \rangle \rangle &= \epsilon_{\alpha}^{(1)} |\Phi_{\alpha}^{(1)}(t) \rangle \rangle, \nonumber \\ \hat{H}_{F2}(t) | \Phi_{\alpha}^{(2)}(t) \rangle \rangle &= \epsilon_{\alpha}^{(2)} |\Phi_{\alpha}^{(2)}(t) \rangle \rangle, \end{align} with $|\Phi_{\alpha}^{(i)}(t) \rangle \rangle$ and $\epsilon_{\alpha}^{(i)}$ ($i = 1,2$) being the associated FMs and QEs. We assume that $\hat{H}_{F1}(t)$ and $\hat{H}_{F2}(t)$ are connected via a symmetry transformation $\hat{H}_{F1}(t) = \hat{S}\hat{H}_{F2}(t) \hat{S}^{-1}$, with $\hat{S}$ being the corresponding symmetry operator. This gives rise to \begin{align} \hat{S}\hat{H}_{F2}(t) \hat{S}^{-1} \hat{S} | \Phi_{\alpha}^{(2)}(t) \rangle \rangle &= \hat{H}_{F1}(t) \hat{S} | \Phi_{\alpha}^{(2)}(t) \rangle \rangle \nonumber \\ &= \epsilon_{\alpha}^{(2)} \hat{S} |\Phi_{\alpha}^{(2)}(t) \rangle \rangle, \label{FMs_two} \end{align} which implies that $\epsilon_{\alpha}^{(2)}$ and $\hat{S} |\Phi_{\alpha}^{(2)}(t) \rangle \rangle$ are an eigenvalue and eigenstate of $\hat{H}_{F1}(t)$ as well. In this way, we see that $\hat{H}_{F1}(t)$ and $\hat{H}_{F2}(t)$ share the same QE spectrum. Moreover, for non-degenerate Hamiltonians $\hat{H}_{F1}(t)$ and $\hat{H}_{F2}(t)$, this further implies that $\hat{S} | \Phi_{\alpha}^{(2)}(t) \rangle \rangle$ can differ from $|\Phi_{\alpha}^{(1)}(t) \rangle \rangle$ by at most a phase factor. If we now consider $\hat{H}_{F1}(t) = \hat{H}_{F}(t, \phi)$ and $\hat{H}_{F2}(t) = \hat{H}_{F}(t, -\phi)$ for the two driving phases $\phi$ and $-\phi$, these two Hamiltonians are related by the symmetry operator \begin{equation} \hat{S}_{\phi}^{I} = \hat{R}_{y}(\pi) \otimes \hat{\Theta} \end{equation} which yields the transformation \begin{equation} S_{\phi}^{I}: (J_{x}, J_{z}, t) \rightarrow (J_{x}, J_{z}, -t).
\label{S_P_I} \end{equation} Hence, it immediately follows that \cite{QM_Sakurai} \begin{align} \overline{J_{z}^{\alpha} } (\phi) &= \frac{2}{N} \langle \langle \Phi_{\alpha}^{(1)}(t) |\hat{J}_{z} | \Phi_{\alpha}^{(1)}(t) \rangle \rangle \nonumber \\ &= \frac{2}{N} \langle \langle \widetilde{\Phi}_{\alpha}^{(1)}(t) | \hat{S}_{\phi}^{I} \hat{J}_{z} (\hat{S}_{\phi}^{I} )^{-1} | \widetilde{\Phi}_{\alpha}^{(1)}(t) \rangle \rangle \nonumber \\ & = \frac{2}{N} \langle \langle \Phi_{\alpha}^{(2)}(t) | \hat{J}_{z} | \Phi_{\alpha}^{(2)}(t)\rangle \rangle = \overline{J_{z}^{\alpha} } (- \phi), \end{align} where $| \widetilde{\Phi}_{\alpha}^{(1)}(t) \rangle \rangle = \hat{S}_{\phi}^{I} | \Phi_{\alpha}^{(1)}(t) \rangle \rangle = \eta | \Phi_{\alpha}^{(2)}(t) \rangle \rangle$, with $\eta$ being an arbitrary phase factor, and we have employed the relation $\hat{S}_{\phi}^{I} \hat{J}_{z} (\hat{S}_{\phi}^{I} )^{-1} = \hat{J}_{z} $. This shows that the contribution $\overline{J_{z}^{\alpha} } (\phi)$ of each FM to the API possesses a \textit{mirror symmetry} around $\phi = 0$ and $\phi = \pi$, noting that $\overline{J_{z}^{\alpha} } (\phi)$ is periodic in $\phi$ with period $2\pi$. It is also important to emphasize once again that the above conclusion relies on the assumption that $\hat{H}_{F}(t)$ is non-degenerate, which, as previously mentioned, is well-justified by the extension of the von Neumann-Wigner theorem \cite{Ratchet_rev, Non_degenerate}. Similarly, if we consider $\hat{H}_{F1}(t) = \hat{H}_{F}(t, \phi)$ and $\hat{H}_{F2}(t) = \hat{H}_{F}(t, \phi + \pi)$, the symmetry operation \begin{equation} S_{\phi}^{II}: (J_{x}, J_{z}, t) \rightarrow (J_{x}, -J_{z}, t + T/2), \end{equation} transforms $\hat{H}_{F1}(t)$ into $\hat{H}_{F2}(t)$, with the symmetry operator being \begin{equation} \hat{S}_{\phi}^{II}= \hat{R}_{x}(\pi) \otimes \hat{Q}(T/2).
\end{equation} Since $\hat{S}_{\phi}^{II}$ reverses $J_{z}$, i.e., $\hat{S}_{\phi}^{II} \hat{J}_{z} (\hat{S}_{\phi}^{II} )^{-1} = -\hat{J}_{z} $, it follows that $\overline{J_{z}^{\alpha} } (\phi) = - \overline{J_{z}^{\alpha} } (\phi + \pi)$. Hence, in addition to the mirror symmetry, $\overline{J_{z}^{\alpha} } (\phi)$ also possesses a \textit{shift anti-symmetry}. \subsection{Classical limit} \label{classical_limit} In the limit of infinite particle number $N \rightarrow \infty$ and vanishing interaction energy $U_{BH} \rightarrow 0$, such that $\Lambda = N U_{BH}$ is fixed, the dynamics of the particles is well described by that of a classical non-rigid pendulum \cite{BJJ_Rabi_1, BJJ_Rabi_2, BJJ_Rabi_3, BJJ_Driven_1}. In order to explore the behavior of the API in this classical limit, we adopt the mean-field approximation $\hat{a}_{j} = a_{j}$ $( j = L,R)$, with $a_{j} $ being a $c$-number \cite{GPE_1}. Since the total particle number $N_{L} + N_{R} = N$ is conserved, it is convenient to express $a_{j}$ in the phase-density representation $a_{j} = \sqrt{N_{j}} e^{i \theta_{j}}$, where the particle numbers $N_{j}$ and the phases $\theta_{j}$ are in general time-dependent. We further introduce the two conjugate variables \begin{align} Z(t) &= (N_{L} - N_{R}) /N, &Z \in [-1,1] \nonumber \\ \varphi(t) &= \theta_{R} - \theta_{L} \label{cl_z_phi}, \end{align} representing the relative population imbalance between the two wells and the relative phase difference, respectively.
Substituting $Z$ and $\varphi$ into Eq.~\eqref{BH_model} and replacing all the operators $\hat{a}_{j}$ ($\hat{a}_{j}^{\dagger}$) by $a_{j}$ ($a_{j}^{*}$), we obtain the classical Hamiltonian \begin{equation} H_{cl}(t) = \frac{\Lambda}{2} Z^{2} - \sqrt{1-Z^{2}} \cos\varphi + 2f(t) Z, \label{BH_classical} \end{equation} which describes a driven non-rigid pendulum with angular momentum $Z$ and length proportional to $\sqrt{1-Z^{2}} $ \cite{BJJ_Rabi_1, BJJ_Rabi_2, BJJ_Rabi_3, BJJ_Driven_1}. $\Lambda = NU_{BH}$ is the coupling strength, which is inversely proportional to the effective mass of the pendulum. The corresponding equations of motion are thus \begin{align} \dot{Z} &= - \sqrt{1-Z^{2}} ~\sin\varphi, \nonumber \\ \dot{\varphi} &= \Lambda Z + \frac{Z}{\sqrt{1-Z^{2}}}~\cos\varphi + 2f(t). \label{eom_classical} \end{align} Such a classical reformulation allows us to interpret the API as the average angular momentum of the pendulum, \begin{equation} \overline{\Delta \rho} = \overline{Z} = \lim_{\tau, \tau^{\prime} \rightarrow \infty} ~\frac{1}{ \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}} Z(t) dt. \label{API_cl} \end{equation} The particle dynamics in the classical limit can be well understood through an analysis of the three-dimensional (3D) phase space characterized by $(Z,\varphi,t)$ underlying the equations of motion \eqref{eom_classical}. The stroboscopic Poincar\'e surfaces of section (PSOS) (see Fig.~\ref{ps_cl}) of the particle dynamics reveal that, depending on the choice of the system parameters, the system has a mixed phase space with both chaotic and regular components separated by Kolmogorov-Arnold-Moser (KAM) tori \cite{Ratchet_rev, PS}. Due to ergodicity, a trajectory initialized anywhere in the chaotic layer explores the entire chaotic layer in the course of its dynamics.
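The equations of motion \eqref{eom_classical} are straightforward to integrate numerically. The Python sketch below propagates a trajectory and evaluates a finite-time proxy for the average in Eq.~\eqref{API_cl}; the bi-harmonic form of $f(t)$ is an assumption for illustration, since Eq.~\eqref{f_t} is defined elsewhere.

```python
import numpy as np
from scipy.integrate import solve_ivp

def eom(t, y, Lam, E1, E2, om, phi):
    """Right-hand side of the driven-pendulum equations of motion."""
    Z, ph = y
    ft = E1 * np.cos(om * t) + E2 * np.cos(2 * om * t + phi)  # assumed f(t)
    dZ = -np.sqrt(1.0 - Z**2) * np.sin(ph)
    dph = Lam * Z + Z / np.sqrt(1.0 - Z**2) * np.cos(ph) + 2.0 * ft
    return [dZ, dph]

def H_cl(Z, ph, Lam, ft=0.0):
    """Classical Hamiltonian of the (driven) non-rigid pendulum."""
    return Lam / 2 * Z**2 - np.sqrt(1.0 - Z**2) * np.cos(ph) + 2.0 * ft * Z

def finite_time_average_Z(Z0, ph0, Lam, E1, E2, om, phi,
                          t_wait=200.0, t_avg=2000.0):
    """Finite-time approximation to the asymptotic average of Z(t)."""
    sol = solve_ivp(eom, (0.0, t_wait + t_avg), [Z0, ph0],
                    args=(Lam, E1, E2, om, phi),
                    dense_output=True, rtol=1e-10, atol=1e-10)
    ts = np.linspace(t_wait, t_wait + t_avg, 20001)
    return sol.sol(ts)[0].mean()
```

In practice, the limit $\tau, \tau^{\prime} \rightarrow \infty$ is approximated by long but finite waiting and averaging windows; convergence in the chaotic layer is slow, so averaging times far exceeding the driving period are required.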
The average $\overline{Z}$ for such a trajectory, corresponding to the value of the API for the chosen initial condition, can thus be non-zero only if we break all the symmetries of the equations of motion \eqref{eom_classical} that transform $Z \rightarrow -Z$. From Eq.~\eqref{eom_classical}, it can be seen that the system is invariant with respect to the generalized parity transformation \begin{equation} S_{p}: (Z, \varphi, t) \rightarrow (-Z, -\varphi, t+\tau) \end{equation} if the driving law has the symmetry $f(t) = -f(t + \tau)$ for some time shift $\tau$. On the other hand, if the driving law satisfies $f(t) = -f(-t + \tau)$, the generalized parity and time-reversal operation \begin{equation} S_{pt}: (Z, \varphi, t) \rightarrow (-Z, \varphi, -t+\tau) \end{equation} keeps the system invariant. Since these are the only two possible symmetry transformations of the system which flip the sign of $Z$, one needs to break both of them in order to achieve a non-zero API. A bi-harmonic driving force with $E_{1}, E_{2} \neq 0$ and $\phi \neq n\pi/2$ (with $n$ an odd integer) breaks both symmetries $S_{p}$ and $S_{pt}$, thus allowing for a non-vanishing API. Furthermore, we can also predict the dependence of the API $\overline{Z}$ on the driving phase $\phi$ by a similar symmetry analysis. We note that Eq.~\eqref{eom_classical} is invariant under the joint transformation \begin{align} &\phi \rightarrow -\phi, &(Z, \varphi, t) \rightarrow (Z, -\varphi, -t). \label{mirror_z} \end{align} Hence it follows that $\overline{Z}$ possesses a mirror symmetry with respect to $\phi$, i.e., $\overline{Z}(\phi) = \overline{Z}( - \phi)$. The joint transformation \begin{align} &\phi \rightarrow \phi+ \pi, &(Z, \varphi, t) \rightarrow (-Z, -\varphi, t+T/2) \label{shift_z} \end{align} also keeps the equations of motion invariant, hence $\overline{Z}$ has a shift anti-symmetry $\overline{Z}(\phi) = -\overline{Z}(\phi + \pi)$.
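These driving-law symmetries can be checked directly on $f(t)$ over a time grid. The snippet below does so for an assumed bi-harmonic form $f(t) = E_{1}\cos(\omega t) + E_{2}\cos(2\omega t + \phi)$; the concrete form of Eq.~\eqref{f_t} is given elsewhere in the text, but this standard choice reproduces all the symmetry properties quoted here.

```python
import numpy as np

def f(t, E1, E2, om, phi):
    # assumed bi-harmonic driving law
    return E1 * np.cos(om * t) + E2 * np.cos(2 * om * t + phi)

om = 0.5
T = 2 * np.pi / om
t = np.linspace(0.0, T, 257)

# E2 = 0: f(t) = -f(t + T/2), so S_p survives and the API vanishes
r_parity = f(t, 0.4, 0.0, om, 0.3) + f(t + T / 2, 0.4, 0.0, om, 0.3)

# phi = pi/2: f(t) = -f(-t + T/2), so S_pt survives and the API vanishes
r_time = f(t, 0.4, 0.2, om, np.pi / 2) + f(-t + T / 2, 0.4, 0.2, om, np.pi / 2)

# generic bi-harmonic driving (E2 != 0, phi away from pi/2, 3pi/2) breaks both
b_parity = f(t, 0.4, 0.2, om, 0.3) + f(t + T / 2, 0.4, 0.2, om, 0.3)
b_time = f(t, 0.4, 0.2, om, 0.3) + f(-t + T / 2, 0.4, 0.2, om, 0.3)
```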
Before closing this section, we note that the above mean-field approximation $\hat{a}_{j} = a_{j} = \sqrt{N_{j}} e^{i \theta_{j}}$ is equivalent to expressing the many-body wavefunction as \cite{GPE_1} \begin{equation} \Psi(x_1, x_2, ..., x_N,t) = \prod_{i=1}^{N} \phi(x_{i},t), \label{GP_wf} \end{equation} with the single-particle state \begin{equation} \phi(x_{i},t) = c_{L}(t) u_{L}(x_{i}) + c_{R}(t) u_{R}(x_{i}), \label{GP_single} \end{equation} being the linear superposition of the localized states $u_{L}(x)$ and $u_{R}(x)$. The time-dependent coefficients $c_{L/R}(t) $ are in general complex, fulfilling the normalization condition $|c_{L}(t)|^{2} + |c_{R}(t)|^{2} = 1$. The conjugate variables $Z(t)$ and $\varphi(t)$ can thus be expressed in terms of $ c_{L}(t)$ and $ c_{R}(t)$ as \begin{align} Z(t) &= |c_{L}(t)|^{2} - |c_{R}(t)|^{2}, \nonumber \\ \varphi(t) &= \arg(c_{L}(t))- \arg(c_{R}(t)) \label{cl_z_phi_wannier}. \end{align} This relates the dynamics of $Z(t)$ and $\varphi(t)$ to that of $ c_{L}(t)$ and $ c_{R}(t)$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{cl_ps.pdf}\hfill \caption{(Color online) Poincar\'e surfaces of section (PSOS) for $\Lambda = 5$; the parameters of the driving force $f(t)$ are $E_{1} = 0.4$, $E_{2} = 0.2$, $\omega = 0.5$ and $\phi = 0$. The three red dots from left to right denote the phase-space points $(Z = 0, \varphi = 9\pi/10)$, $(Z = 0, \varphi = \pi)$ and $(Z = 0, \varphi = 11\pi/10)$, respectively, which will be used as the initial conditions for the classical simulations (see discussion below).} \label{ps_cl} \end{figure} \section{Results} \label{Results_analysis} \subsection{Initial state and numerical setup} The initial condition in the classical limit is provided by a specific point ($Z$, $\varphi$) in the phase space, which determines the initial population and phase difference.
In order to find its equivalent counterpart in the quantum limit, we employ the relations in Eqs.~\eqref{GP_wf} and \eqref{GP_single}, and express the many-body state as \begin{align} |\theta, \varphi \rangle &= \frac{1}{\sqrt{N!}}\left[\cos(\tfrac{\theta}{2}) \hat{a}^{\dagger}_{L} + \sin(\tfrac{\theta}{2}) e^{i \varphi}\hat{a}^{\dagger}_{R}\right]^{N} ~|\mathrm{vac} \rangle \nonumber \\ &= \sum_{N_L=0} ^{N} \left(\begin{array}{c} N \\ N_L \end{array} \right)^{1/2} \cos^{N_L}(\theta/2)~\sin^{N_R}(\theta/2) ~ e^{i N_R \varphi} ~|N_L,N_R \rangle, \end{align} which is a linear superposition of all the number states $\{|N_L, N_R \rangle\}$. The state $|\theta, \varphi \rangle$ is referred to as the atomic coherent state (ACS) \cite{ACS_1,ACS_2}, fulfilling the completeness relation \begin{equation} (N+1) \int \frac{d \Omega}{4 \pi} |\theta, \varphi \rangle \langle \theta, \varphi | = 1, \label{completeness_ACS} \end{equation} with $d \Omega = \sin\theta \, d \theta d \varphi$ being the volume element. The ACS relates to the mean-field wavefunction $\Psi$ [cf. Eqs.~\eqref{GP_wf} and \eqref{GP_single}] via \begin{align} \theta &= \cos^{-1}(|c_{L}|^{2} - |c_{R}|^{2}), \nonumber\\ \varphi &= \arg(c_{L})- \arg(c_{R}), \label{qm_z_phi_wannier} \end{align} where $\theta$ and $\varphi$ control the initial population difference $\cos\theta = (N_{L} - N_{R})/N $ and the initial phase difference, respectively. Comparing Eq.~\eqref{qm_z_phi_wannier} to Eq.~\eqref{cl_z_phi_wannier}, we find a one-to-one correspondence between $|\theta, \varphi \rangle$ and $(Z, \varphi)$, which allows us to compare the quantum and the classical dynamics. Correspondingly, the ACS can be expressed as \begin{equation} |\theta, \varphi \rangle = \sum_{m=-l} ^{l} \left(\begin{array}{c} 2l \\ m+l \end{array} \right)^{1/2} \cos^{l+m}(\theta/2)~\sin^{l-m}(\theta/2) ~ e^{i (l-m) \varphi} ~|l,m \rangle.
\label{ACS_spin} \end{equation} in the angular momentum basis. In recent ultracold-atom experiments, such an ACS can be implemented in a controllable manner: tuning a two-photon transition between two hyperfine states of ${}^{87}\textrm{Rb}$ atoms allows one to prepare an ACS with arbitrary $\theta$ and $\varphi$ \cite{ACS_3,ACS_4}. In this work, we aim to explore how the asymptotic population imbalance behaves when going from the few-particle to the many-particle regime. To this end, we fix the coupling strength $\Lambda = NU_{BH} = 5.0$ for all our simulations and vary the interaction energy $U_{BH}$ and particle number $N$ accordingly. For all our quantum simulations, we choose the initial ACS $|\theta = \pi/2, \varphi \rangle$, which corresponds to $Z(0) = 0$ in the classical limit, signifying an initially balanced particle population between the two wells. The phase difference $\varphi$ is carefully chosen such that the ACS $|\theta, \varphi \rangle$ is always located within the chaotic layer of the classical PSOS [see the three red dots in Fig.~\ref{ps_cl}]. We also simulate the classical limit by numerically integrating Eq.~\eqref{eom_classical}. Finally, we compare the behavior of the API obtained from the quantum ($\overline{J}_{z}$) and classical ($\overline{Z}$) simulations. \subsection{Variation of API with particle number and driving phase} In Fig.~\ref{APIs}, we present the asymptotic population imbalance $\overline{J}_{z}$ as a function of the driving phase $\phi$ for different particle numbers $N = 2, 20, 500$ and different initial states $|\Psi(0) \rangle = |\pi/2, \pi \rangle$ [Fig.~\ref{APIs} (a)], $|\Psi(0) \rangle = |\pi/2, 9\pi/10 \rangle$ [Fig.~\ref{APIs} (b)] and $|\Psi(0) \rangle = |\pi/2, 11 \pi/10 \rangle$ [Fig.~\ref{APIs} (c)].
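The expansion \eqref{ACS_spin} is easy to implement numerically. The Python sketch below builds the coefficient vector of an ACS in the $|l,m\rangle$ basis (a standard construction, not specific to this work) and can be used, e.g., to verify the normalization and the initial magnetization $\langle \hat{J}_{z} \rangle = (N/2)\cos\theta$.

```python
import numpy as np
from math import comb

def acs_coefficients(N, theta, varphi):
    """Coefficients of the atomic coherent state |theta, varphi>
    in the basis {|l, m>, m = -l, ..., l} with l = N/2."""
    l = N / 2
    m = np.arange(-l, l + 1)
    mag = np.array([np.sqrt(comb(N, int(round(mm + l))))
                    * np.cos(theta / 2)**(l + mm)
                    * np.sin(theta / 2)**(l - mm) for mm in m])
    return mag * np.exp(1j * (l - m) * varphi)

# initial state used in the simulations: theta = pi/2 (balanced wells)
c0 = acs_coefficients(20, np.pi / 2, 9 * np.pi / 10)
```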
The API $\overline{Z}$ obtained from the classical simulations for the same initial conditions [see the three red dots in Fig.~\ref{ps_cl}] is depicted as well [blue dashed lines in Fig.~\ref{APIs}]. We first discuss the results obtained in the classical limit. Since, due to ergodicity, a trajectory initialized anywhere in the chaotic layer explores the entire chaotic layer in the course of the dynamics, the obtained API value is guaranteed to be independent of the initial conditions. Hence, the observed behavior of $\overline{Z}$ is the same for all three initial conditions. Upon varying the driving phase $\phi$, $\overline{Z}(\phi)$ shows an oscillatory behavior, with maxima (minima) at $\phi=\pi$ ($\phi=0, 2\pi$), and vanishes at $\phi=n\pi/2$ for all odd integers $n$. Most importantly, it preserves both the mirror symmetry $\overline{Z}(\phi) = \overline{Z}( - \phi)$ [see Eq.~\eqref{mirror_z}] and the shift anti-symmetry $\overline{Z}(\phi) = -\overline{Z}(\phi +\pi)$ [see Eq.~\eqref{shift_z}], thus verifying our symmetry analysis in Sec.~\ref{classical_limit}. In the quantum limit, the behavior of the API $\overline{J}_{z}$ is much more complicated. For a large number of particles, $N =500$, the behavior of $\overline{J}_{z}$ upon varying $\phi$ agrees very well with that of the API $\overline{Z}$ in the classical limit, independent of the initial quantum state [see the red solid lines in Fig.~\ref{APIs}]. As a result, $\overline{J}_{z}$ exhibits the corresponding mirror symmetry $\overline{J}_{z}(\phi) = \overline{J}_{z}( - \phi)$ and shift anti-symmetry $\overline{J}_{z}(\phi) = -\overline{J}_{z}(\phi +\pi)$ as well. By contrast, the API in the few-particle regime depends strongly on the initial state. Most importantly, the symmetries of $\overline{J}_{z}(\phi)$ observed in the large-particle limit are broken.
For the initial state $|\Psi(0) \rangle = |\pi/2, \pi \rangle$, only the mirror symmetry is preserved [see, e.g., the black solid and the orange solid lines in Fig.~\ref{APIs} (a)], while for $|\Psi(0) \rangle = |\pi/2, 9\pi/10 \rangle$ or $|\pi/2, 11\pi/10 \rangle$ both the mirror symmetry and the shift anti-symmetry are explicitly broken [cf. Fig.~\ref{APIs} (b,c)]. Instead, a new symmetry, which relates the values of $\overline{J}_{z}(\phi)$ for two different initial states, is now observed in the few-particle regime. Specifically, the dependence of $\overline{J}_{z}$ on $\phi$ for the initial state $|\pi/2, 9\pi/10 \rangle$ [cf. Fig.~\ref{APIs} (b)] can be obtained by a reflection of $\overline{J}_{z}(\phi)$ for the initial state $|\pi/2, 11\pi/10 \rangle$ [cf. Fig.~\ref{APIs} (c)] about either $\phi=0$ or $\phi=\pi$. Since $|\pi/2, \varphi \rangle = |\pi/2, \varphi-2\pi \rangle $, we can represent this symmetry by \begin{equation} [\overline{J}_{z}(\phi)]_{\varphi} = [\overline{J}_{z}(-\phi)]_{-\varphi}, \label{symmetry_two_initial_states} \end{equation} where $[\overline{J}_{z}(\phi)]_{\varphi}$ ($[\overline{J}_{z}(\phi)]_{-\varphi}$) denotes the obtained $\overline{J}_{z}$ value for the initial state $|\theta, \varphi \rangle$ ($|\theta, -\varphi \rangle$) at a given driving phase $\phi$. Lastly, we note that the API vanishes at $\phi = n\pi/2$ for all odd integers $n$ [see the green dots in Fig.~\ref{APIs}], in both the classical and the quantum limit, in accordance with our symmetry analysis in Sec.~\ref{St_symmetry} and Sec.~\ref{classical_limit}. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{APIs.pdf}\hfill \caption{(Color online) Asymptotic population imbalance (API) as a function of the driving phase $\phi$ for the three initial states: (a) $|\Psi(0) \rangle = |\pi/2, \pi \rangle$, (b) $|\Psi(0) \rangle = |\pi/2, 9\pi/10 \rangle$, (c) $|\Psi(0) \rangle = |\pi/2, 11\pi/10 \rangle$.
The solid black, orange and red lines correspond to particle numbers $N=2$, $N=20$ and $N=500$, respectively. The API in the classical limit is depicted as blue dashed lines for the corresponding initial conditions: (a) $ (Z=0, \varphi = \pi)$, (b) $(Z=0, \varphi = 9\pi/10)$ and (c) $(Z=0, \varphi = 11\pi/10)$ in the classical PSOS (see Fig.~\ref{ps_cl}). The green solid dots indicate that the API vanishes at $\phi = \pi/2 $ and $\phi = 3\pi/2 $. The remaining parameters are $E_{1} = 0.4$, $E_{2} = 0.2$, $\omega = 0.5$, $\Lambda=5$. } \label{APIs} \end{figure*} \section{Discussion}\label{discussion} \subsection{API in the few-particle regime} \label{API_few} In order to explain the broken symmetries, as well as the emergence of the new symmetry [see Eq.~\eqref{symmetry_two_initial_states}], observed in the few-particle regime, we analyze the contribution of each Floquet mode to the value of the API. Specifically, since $\overline{J}_{z} (\phi) = \sum_{\alpha} P_{\alpha} (\phi) \overline{J_{z}^{\alpha} } (\phi) $ [cf. Eq.~\eqref{API_sum}], we inspect how $P_{\alpha} $ and $\overline{J_{z}^{\alpha} } $ depend on the driving phase $\phi$. We note that while $\overline{J_{z}^{\alpha} } (\phi) $ is solely determined by the Floquet Hamiltonian, $P_{\alpha} (\phi) $ depends on both the Floquet Hamiltonian and the initial state. To illustrate this, we consider the case $N=2$. Fig.~\ref{floquet_N2}(a) shows how the contributions $\overline{J_{z}^{\alpha} } $ of the three FMs depend on the driving phase $\phi$. As can be seen, their dependences on $\phi$ differ significantly from each other; however, all of them vanish at $\phi=n\pi/2$ for all odd integers $n$. Additionally, all three $\overline{J_{z}^{\alpha} } (\phi) $ preserve both the mirror symmetry and the shift anti-symmetry, i.e., $\overline{J_{z}^{\alpha} } (\phi) = \overline{J_{z}^{\alpha} } (-\phi)$ and $\overline{J_{z}^{\alpha} } (\phi) = - \overline{J_{z}^{\alpha} } (\phi + \pi)$.
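Such a decomposition can be reproduced numerically: the weights follow from the overlap of the initial ACS with the Floquet modes at $t=0$, $P_{\alpha} = |\langle \Phi_{\alpha}(0)|\Psi(0)\rangle|^{2}$. The Python sketch below (with the same assumed bi-harmonic driving and unit hopping as before, since the exact forms are defined elsewhere) computes them for $N=2$.

```python
import numpy as np
from math import comb
from scipy.linalg import expm

def floquet_weights(N, U_BH, E1, E2, om, phi, theta, varphi, n_steps=1000):
    """Weights P_alpha of the Floquet modes for an initial atomic coherent state."""
    l = N / 2
    m = np.arange(-l, l + 1)
    jp = np.diag(np.sqrt(l * (l + 1) - m[:-1] * (m[:-1] + 1)), -1)
    Jx, Jz = (jp + jp.T) / 2, np.diag(m)
    T = 2 * np.pi / om
    dt = T / n_steps
    U = np.eye(N + 1, dtype=complex)
    for k in range(n_steps):                  # one-period propagator
        t = (k + 0.5) * dt
        ft = E1 * np.cos(om * t) + E2 * np.cos(2 * om * t + phi)  # assumed f(t)
        U = expm(-1j * (-Jx + U_BH * Jz @ Jz - 2 * ft * Jz) * dt) @ U
    _, modes = np.linalg.eig(U)               # columns are |Phi_alpha(0)>
    psi0 = np.array([np.sqrt(comb(N, int(round(mm + l))))
                     * np.cos(theta / 2)**(l + mm)
                     * np.sin(theta / 2)**(l - mm)
                     * np.exp(1j * (l - mm) * varphi) for mm in m])
    return np.abs(modes.conj().T @ psi0)**2

P = floquet_weights(2, 2.5, 0.4, 0.2, 0.5, 0.3, np.pi / 2, 9 * np.pi / 10)
```

Scanning $\phi$ and the two initial phases $\pm\varphi$ with such a routine reproduces the relation between the weights for mirrored initial states discussed in the text.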
Hence, the broken symmetries of $\overline{J}_{z}$ in the few-particle regime are not due to the contributions from $\overline{J_{z}^{\alpha} } (\phi)$, as already verified by our previous symmetry analysis, but stem from the weights $P_{\alpha} (\phi) $. In Fig.~\ref{floquet_N2}(b-d), we show the behavior of $P_{\alpha} (\phi) $ corresponding to the three initial states $|\pi/2, \pi \rangle$, $ |\pi/2, 9\pi/10 \rangle$ and $ |\pi/2, 11\pi/10 \rangle$, respectively. Indeed, as one can see, a symmetric (asymmetric) structure of $P_{\alpha}(\phi)$ results in the mirror symmetry (its breaking) in the corresponding $\overline{J}_{z}(\phi)$. For instance, $P_{\alpha}(\phi) = P_{\alpha}(-\phi) $ for the initial state $|\pi/2, \pi \rangle$, hence $\overline{J}_{z}$ fulfills $\overline{J}_{z}(\phi) = \overline{J}_{z}(-\phi)$. By contrast, $P_{\alpha}(\phi) \neq P_{\alpha}(-\phi) $ for the initial states $|\pi/2, \pi \pm \pi/10 \rangle$, which results in $\overline{J}_{z}(\phi) \neq \overline{J}_{z}(-\phi)$. Moreover, since $P_{\alpha}(\phi)$ does not obey the property $P_{\alpha}(\phi) = P_{\alpha}(\phi + \pi)$ in general, this explains the broken shift anti-symmetry of all the $\overline{J}_{z}(\phi)$ in the few-particle regime. The emergence of the new symmetry in Eq.\eqref{symmetry_two_initial_states} can also be understood from the behavior of $P_{\alpha}(\phi)$. For two different initial states $|\theta, \varphi \rangle$ and $|\theta, -\varphi \rangle$, the corresponding $P_{\alpha}(\phi)$ satisfy $[P_{\alpha}(\phi)]_{\varphi} = [P_{\alpha}(-\phi)]_{-\varphi}$ [see Fig.~\ref{floquet_N2}(c,d) and Appendix \ref{Appendix_2}], and hence \begin{align} [\overline{J}_{z}(\phi)]_{\varphi} &= \sum_{\alpha} [P_{\alpha}(\phi)]_{\varphi} \overline{J_{z}^{\alpha} } (\phi) \nonumber \\ &= \sum_{\alpha} [P_{\alpha}(-\phi)]_{-\varphi} \overline{J_{z}^{\alpha} } (-\phi) = [\overline{J}_{z}(-\phi)]_{-\varphi}.
\label{symmetry_Jz_two_initial} \end{align} Here, we have employed the mirror-symmetry property of $ \overline{J_{z}^{\alpha} } $, along with the fact that $ \overline{J_{z}^{\alpha} } $ is independent of the choice of the initial state. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{floquet_N2.pdf}\hfill \caption{(Color online) Decomposition of $\overline{J}_{z} (\phi)$ with respect to the three FMs for the case $N = 2$, in which (a) shows $\overline{J_{z}^{\alpha} } (\phi)$, and (b-d) show $P_{\alpha}(\phi)$ for $|\Psi(0) \rangle = |\pi/2, \pi \rangle$, $|\pi/2, 9\pi/10 \rangle$ and $|\pi/2, 11 \pi/10 \rangle$, respectively. The blue dashed, red solid and black dash-dotted lines correspond to the FMs $| \Phi_{1}(t)\rangle \rangle $, $| \Phi_{2}(t)\rangle \rangle $ and $| \Phi_{3}(t)\rangle \rangle $, respectively. The corresponding driving parameters are $E_{1} = 0.4$, $E_{2} = 0.2$ and $\omega = 0.5$.} \label{floquet_N2} \end{figure} \subsection{API in the many-body regime} We now discuss the behavior of the API in the many-particle regime in detail. Although the dependence of the API $\overline{J}_{z}$ on the driving phase $\phi$ for $N=500$ agrees very well with that of $\overline{Z}$ in the classical limit (see Fig.~\ref{APIs}), we now show that there exists a significant disagreement in the corresponding real-time population imbalance due to quantum correlations. \subsubsection{Quantum correlations} In Fig.~\ref{gpop_mx_cl} (a), we show the time evolution of $J_{z}(t)$ corresponding to $N=500$ for the initial state $|\Psi(0) \rangle = |\pi/2, \pi \rangle$ along with that of $Z(t)$ for the initial condition $(Z(0)=0, \varphi(0) = \pi)$. Note that for $N=500$, the system is already in the weak-interaction regime, with $J_{BH}/U_{BH} = 50 \gg 1$, which, as one may anticipate, should render the mean-field approximation accurate \cite{GPE_1,GPE_2}.
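The Floquet-mode decomposition discussed in the previous subsection can be sketched numerically. The following minimal Python example diagonalizes the one-period propagator of a driven two-site Bose-Hubbard system with $N=2$ bosons to obtain the three Floquet modes and the weights $P_{\alpha}$; the drive shape, the parameter values and the initial state are illustrative assumptions, not the exact Hamiltonian of the text.

```python
import numpy as np

# Minimal driven two-site Bose-Hubbard system for N = 2 bosons, Fock basis
# |n1, N - n1>.  All parameter values below are illustrative assumptions.
N = 2
J, U = 1.0, 0.5                                  # hopping and on-site interaction
E1, E2, omega, phi = 0.4, 0.2, 0.5, 0.3          # two-frequency drive (assumed form)
T = 2 * np.pi / omega                            # driving period

n1 = np.arange(N + 1)
Jz = np.diag(n1 - N / 2.0)                       # (n1 - n2)/2
hop = np.diag(np.sqrt((n1[:-1] + 1) * (N - n1[:-1])), k=-1)   # a1^dag a2
H0 = -J * (hop + hop.T) + 0.5 * U * np.diag(n1 * (n1 - 1) + (N - n1) * (N - n1 - 1))

def H(t):
    eps = E1 * np.sin(omega * t) + E2 * np.sin(2 * omega * t + phi)
    return H0 + eps * Jz

# One-period propagator U(T), built from exact exponentials of the
# (Hermitian) instantaneous Hamiltonian over small time steps.
steps = 2000
dt = T / steps
U_T = np.eye(N + 1, dtype=complex)
for k in range(steps):
    w, V = np.linalg.eigh(H((k + 0.5) * dt))
    U_T = (V * np.exp(-1j * w * dt)) @ V.conj().T @ U_T

# Floquet modes |Phi_alpha(0)> are eigenvectors of U(T); the weights
# P_alpha = |<Phi_alpha(0)|Psi(0)>|^2 are fixed by the initial state.
evals, modes = np.linalg.eig(U_T)
psi0 = np.array([0.0, 1.0, 0.0], dtype=complex)  # balanced initial Fock state
P = np.abs(modes.conj().T @ psi0) ** 2
```

Unitarity of the propagator, $|e^{-i\epsilon_{\alpha}T}| = 1$, and completeness of the weights, $\sum_{\alpha} P_{\alpha} = 1$, provide basic consistency checks of such a decomposition.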
We observe that although the two quantities agree very well for very short timescales ($t \leq 5$), they evolve very differently at longer timescales. In order to understand why such a deviation occurs, we perform a spectral decomposition of the reduced one-body density operator \cite{dma1_1,dma1_2} \begin{equation} \hat{\rho}_{1}(t) = \sum_{i=1}^{2} n_{i}(t) | \phi_{i} (t)\rangle \langle \phi_{i}(t)|, \end{equation} and monitor the evolution of the quantum depletion defined as $\lambda(t) = 1 - n_{1}(t)$. Here $\{n_{i}(t)\}$ are the normalized time-dependent natural populations sorted in descending order of their values such that $n_{1}(t) \geqslant n_{2}(t)$. $\{| \phi_{i} (t)\rangle\} $ denote the natural orbitals that form a time-dependent single-particle basis for the description of the dynamical system. Note that the two-mode expansion of the field operator $\hat{\psi}(x)$ in Eq.\eqref{2_mode_psi} restricts the single-particle Hamiltonian to a two-dimensional Hilbert space and thus gives rise to only two natural populations (natural orbitals) in the spectral decomposition. Physically, the natural population $n_{i}(t)$ denotes the probability of finding a single particle occupying the state $ | \phi_{i}(t) \rangle$ at time $t$, after tracing out all other $(N-1)$ particles. When $\lambda(t) = 0$, all the bosons reside in the single-particle state $\phi(x_{i},t)$ [c.f. Eq.\eqref{GP_single}]. Hence the corresponding many-body wavefunction can be expressed in a mean-field product form [c.f. Eq.\eqref{GP_wf}]. According to our discussion in Sec. \ref{classical_limit}, this implies that the time evolution of the quantum dynamics $J_{z}(t)$ is completely equivalent to that of the classical dynamics $Z(t)$. In contrast, for $\lambda(t) > 0$ quantum correlations come into play, resulting in different dynamics of $J_{z}(t)$ and $Z(t)$.
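For a two-mode system, the spectral decomposition of $\hat{\rho}_{1}$ reduces to the diagonalization of a $2\times 2$ matrix of one-body correlators. The sketch below (state and parameter values are illustrative) computes the depletion $\lambda = 1 - n_{1}$ from the Fock-space amplitudes and verifies that it vanishes for an atomic coherent state, i.e., a mean-field product state.

```python
import numpy as np
from math import comb

# Quantum depletion lambda = 1 - n_1 from the 2x2 reduced one-body density
# matrix of a two-mode N-boson state, written in the Fock basis |n, N-n>
# with amplitudes c[n] (n bosons in the first well).
def depletion(c, N):
    n = np.arange(N + 1)
    a11 = np.sum(n * np.abs(c) ** 2)                       # <a1^dag a1>
    a22 = np.sum((N - n) * np.abs(c) ** 2)                 # <a2^dag a2>
    # a1^dag a2 |n, N-n> = sqrt((n+1)(N-n)) |n+1, N-n-1>
    a12 = np.sum(np.conj(c[1:]) * np.sqrt((n[:-1] + 1.0) * (N - n[:-1])) * c[:-1])
    rho1 = np.array([[a11, a12], [np.conj(a12), a22]]) / N
    pops = np.linalg.eigvalsh(rho1)                        # ascending order
    return 1.0 - pops[-1], rho1

# Sanity check: an atomic coherent state |theta, varphi> is a mean-field
# product state, so its depletion must vanish (theta, varphi arbitrary).
N = 20
theta, varphi = 0.4 * np.pi, 0.7 * np.pi
u1, u2 = np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * varphi)
n = np.arange(N + 1)
c = np.array([np.sqrt(comb(N, k)) for k in n]) * u1 ** n * u2 ** (N - n)

lam, rho1 = depletion(c, N)
```

A nonzero value of $\lambda$ obtained in this way directly signals the buildup of quantum correlations beyond the mean-field description.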
This is indeed seen in the evolution of $\lambda(t)$ shown in Fig.~\ref{gpop_mx_cl} (b). For short timescales $t \leq 5$, $\lambda \approx 0$, as a result of which $J_{z}(t)$ and $Z(t)$ evolve in the same manner. However, for $t > 5$, the value of $\lambda (t)$ increases rapidly, resulting in the different time evolution of the $J_{z}(t)$ and $Z(t)$ dynamics. Hence the existing quantum correlations in the system lead to significant quantitative differences between the quantum and classical dynamics, although the time-averaged asymptotic particle imbalance is the same in both cases. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{gpop_mx_cl.pdf}\hfill \caption{(Color online) Upper panel: Real-time dynamics of $J_{z}(t)$ (blue solid line) and $Z(t)$ (red dashed line) for the initial condition $|\Psi(0) \rangle = |\pi/2, \pi \rangle$ in the quantum limit and $(Z(0)=0, \varphi(0) = \pi)$ in the classical limit. Lower panel: quantum depletion $\lambda(t)$ for the case examined in the upper panel; the inset shows the transient dynamics of $\lambda(t)$ for $t<50$. Both correspond to the particle number $N=500$, and the values of the driving parameters are $E_{1} = 0.4$, $E_{2} = 0.2$, $\omega = 0.5$ and $\phi = 0$. } \label{gpop_mx_cl} \end{figure} \subsubsection{Time-averaged Husimi distribution} This leads to the interesting but non-trivial question: while large discrepancies persist between the dynamics of $Z(t)$ and $J_{z}(t)$, how do they eventually result in the same value of the time-averaged quantities $ \overline{Z}$ and $\overline{J}_{z}$? In order to answer this question, we first explore how our classical state initialized at $(Z = 0, \varphi = \pi)$ evolves over time in phase space up to $t=10^{7}$. Since the initial state belongs to the chaotic layer in the PSOS (Fig.~\ref{ps_cl}), it explores the entire chaotic sea ergodically in the course of its dynamics.
In Fig.~\ref{husimi}(a), we show the probability density function (PDF) $\overline{P}_{C} (Z, \varphi)$ of this trajectory over the entire course of the dynamics. Note that the PDF $\overline{P}_{C} (Z, \varphi)$ unsurprisingly bears a striking resemblance to the corresponding PSOS in Fig.~\ref{ps_cl}. Since the system ergodically visits all the possible states (phase-space points) that belong to the chaotic layer, the distribution $\overline{P}_{C} (Z, \varphi)$ is uniform for all $(Z, \varphi)$ belonging to the chaotic sea. The regions with $\overline{P}_{C} (Z, \varphi) = 0$ correspond to the regular islands which the system cannot enter. The visualization of the $Z(t)$ dynamics in terms of the PDF $\overline{P}_{C} (Z, \varphi)$ allows us to reformulate the time-averaged population imbalance $\overline{Z}$ as \begin{equation} \overline{Z} = \lim_{\tau, \tau^{\prime} \rightarrow \infty} ~\frac{1}{ \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}} Z(t) dt = \int d \sigma \overline{P}_{C} (Z, \varphi) Z, \label{API_PDF} \end{equation} where $d \sigma $ is the volume element of the phase space. Averaged over the whole dynamics, $\overline{P}_{C} (Z, \varphi) d \sigma $ thus indicates the probability for the system to be located at the state $(Z, \varphi )$.
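The identity underlying Eq.\eqref{API_PDF} can be illustrated with a one-dimensional toy example: for any long, bounded signal, the time average coincides (up to binning errors) with the average over the probability density of the visited values. The quasi-periodic signal below is an arbitrary stand-in for the actual $Z(t)$ dynamics.

```python
import numpy as np

# One-dimensional toy version of Eq. (API_PDF): the time average of Z(t)
# equals the average of Z weighted by the probability density of visited
# values, estimated here by a histogram.
t = np.linspace(0.0, 2000.0, 400000)
Z = 0.5 * np.sin(t) + 0.3 * np.sin(np.sqrt(2.0) * t)   # stand-in for Z(t)

Z_bar_time = Z.mean()                                  # direct time average

P, edges = np.histogram(Z, bins=200, density=True)     # estimate of P(Z)
centers = 0.5 * (edges[:-1] + edges[1:])
dZ = np.diff(edges)
Z_bar_pdf = np.sum(P * dZ * centers)                   # ~ integral of P(Z) Z dZ
```

In the paper's setting the same construction is applied in the two-dimensional phase space $(Z, \varphi)$, with the histogram replaced by $\overline{P}_{C}(Z, \varphi)$.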
In the quantum limit, the evolution of the initial state $|\Psi(0) \rangle = |\pi/2, \pi \rangle$ for $N=500$ can be visualized, analogously to $\overline{P}_{C} (Z, \varphi)$, by the time-averaged Husimi distribution (TAHD) defined as \cite{TAHD_1,TAHD_2} \begin{equation} \overline{Q}_{H}(\theta, \varphi) = \lim_{\tau, \tau^{\prime} \rightarrow \infty} ~\frac{1}{ \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}} Q_{H} (\theta, \varphi,t) dt, \end{equation} where \begin{equation} Q_{H} (\theta, \varphi,t) = \frac{N+1}{4\pi} \langle \theta, \varphi | \hat{\rho}(t) | \theta, \varphi \rangle, \label{husimi_t} \end{equation} with $ \hat{\rho}(t) = | \Psi(t) \rangle \langle \Psi(t)| $ being the system's density matrix. $Q_{H} (\theta, \varphi,t) $ thus satisfies the normalization condition $\int Q_{H} (\theta, \varphi,t) d \Omega = 1$. The TAHD represents the probability for our quantum system to be located at the ACS $|\theta, \varphi\rangle$, averaged over the entire dynamics. As can be seen from Fig.~\ref{husimi}(b), the TAHD matches very well the distribution $\overline{P}_{C} (Z, \varphi)$ in Fig.~\ref{husimi}(a). Note that the $\theta$-axis in Fig.~\ref{husimi}(b) has been rescaled to $\cos\theta$ since $\cos\theta = Z$ [c.f. Eq.\eqref{cl_z_phi_wannier} and Eq.\eqref{qm_z_phi_wannier}]. This suggests that the system evolves in an ergodic manner such that it has an equal probability of occupying all the ACSs located in the corresponding classical chaotic sea in the course of the dynamics. Analogous to the classical case [c.f.
Eq.~\eqref{API_PDF}], the API in the quantum limit can be reformulated in terms of the TAHD $\overline{Q}_{H} (\theta, \varphi)$ as (see Appendix \ref{Appendix_3}) \begin{align} \overline{J_{z} } &= \lim_{ \tau, \tau^{\prime} \rightarrow \infty} ~\frac{2}{N \tau^{\prime} } \int_{\tau}^{\tau + \tau^{\prime}}dt~ \langle \hat{J}_{z}\rangle(t) \nonumber \\ &= \int d \Omega ~\overline{Q}_{H} (\theta, \varphi) J_{z}(\theta, \varphi), \label{API_husimi} \end{align} with $J_{z}(\theta, \varphi) = \frac{2}{N}\langle \theta, \varphi | \hat{J}_{z} | \theta, \varphi \rangle$. Since the TAHD $\overline{Q}_{H} (\theta, \varphi)$ has a distribution similar to the classical PDF $\overline{P}_{C} (Z, \varphi)$, and since $J_{z}(\theta, \varphi) = \cos\theta = Z$, the value of the API agrees in the quantum and the classical limit. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{husimi.png}\hfill \caption{(Color online) Upper panel: Phase-space probability density function (PDF) for a classical trajectory initialized at the point ($Z = 0, \varphi = \pi$). Lower panel: The time-averaged Husimi distribution for $N=500$ and the initial state $|\Psi(0) \rangle = |\pi/2, \pi \rangle$. Note that the $\theta$-axis has been rescaled to $\cos\theta$ in accordance with the correspondence $\cos\theta = Z$. The values of the driving parameters are $E_{1} = 0.4$, $E_{2} = 0.2$, $\omega = 0.5$ and $\phi = 0$. }\label{husimi} \end{figure} \section{Conclusions and Outlook} \label{Conclusions} We have investigated a driven many-body bosonic ensemble confined in a 1D double-well potential and shown how an asymptotic population imbalance of particles between the two wells emerges from an initially symmetric particle population in both the quantum and classical limits. The asymptotic population imbalance can be controlled by changing the phase of the driving force as well as the total number of particles in the setup.
The variation of the API in the few-particle quantum regime has been explained in terms of the symmetries of the underlying Floquet modes. In the many-particle regime, the API can be interpreted in terms of an equivalent classical driven non-rigid pendulum. However, we have shown that quantum correlations still exist in the many-body system, resulting in significant differences in the real-time evolution of the particle population imbalance as compared to the corresponding classical description. Possible future investigations include the study of the API for an atomic mixture consisting of two atomic species with different masses and interactions. The effect of the higher bands of the double-well potential, beyond the single-band approximation employed here, is also an interesting perspective. \begin{acknowledgments} The authors acknowledge fruitful discussions with Kevin Keiler. J.C. and P.S. gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the SFB 925 ``Light induced dynamics and control of correlated quantum systems''. The excellence cluster ``The Hamburg Centre for Ultrafast Imaging-Structure: Dynamics and Control of Matter at the Atomic Scale'' is acknowledged for financial support. A.K.M. acknowledges a doctoral research grant (Funding ID: 57129429) by the Deutscher Akademischer Austauschdienst (DAAD). \end{acknowledgments}
\section*{Abstract} {\bf We demonstrate the emergence of classical features in electronic quantum transport for the scanning gate microscopy response in a cavity defined by a quantum point contact and a micron-sized circular reflector. The branches in electronic flow characteristic of a quantum point contact opening on a two-dimensional electron gas with weak disorder are folded by the reflector, yielding a complex spatial pattern. Considering the deflection of classical trajectories by the scanning gate tip allows us to establish simple relationships for the scanning pattern, which are in agreement with recent experimental findings. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:introduction} Electronic quantum transport in high-mobility two-dimensional electron gases (2DEGs) has been investigated in great detail during the last decades. An important tool for studying the transport properties is scanning gate microscopy (SGM) \cite{topinka2000imaging,topinka2001nature,topinka2003imaging,sellier2011review}, which measures the conductance change of a two-dimensional structure that is induced by a local scatterer (usually a charged atomic force microscope tip above the sample). The dependence of such an SGM response on position and strength of the scatterer yields a rich amount of data that contain additional information with respect to that of a standard transport measurement. In particular, branches that appear in the SGM data at a certain distance from a quantum point contact (QPC) \cite{topinka2001nature,topinka2003imaging,kola,braem18a} have been interpreted in terms of inhomogeneous branched electron flow in the sample, consistent with quantum simulations and classical trajectory-density theoretical approaches in weakly disordered potentials \cite{topinka2001nature,topinka2003imaging}.
While recent works have focused on the stability of these branches when the Fermi energy is varied \cite{braem18a,fratu19a}, and very recently branches have been observed in light propagation through soap films \cite{patsyk20}, confining gates have been found to lead to more complex SGM patterns \cite{crook03b,burke10a,aoki12a,steinacher2016,stein18a,toussaint18a}, which are often difficult to interpret. When a channel defined by gates beyond the QPC is progressively turned on, three experimental regimes are observed: one in which branches spread unrestrictedly, one in which branches are confined, and one where the branches have disappeared \cite{steinacher2016}. The case of a circular mirror facing a QPC has been explored experimentally and theoretically \cite{stein18a} as a function of the invasiveness of the tip and the degree of confinement. The length scale of the fluctuations in the SGM response attains a maximum value, which is related to the spatial extension of the tip potential, for the weakest tip strength where the tip can be treated perturbatively \cite{jalabert2010measured,gorin13a}. As a consequence, the resolution of the SGM response is rather poor in the particularly interesting weakly invasive limit, which allows one to draw conclusions about the sample under study that are unaffected by the presence of the tip. When going from the weakly to the strongly confined regime, the gradual loss of a clear branch structure is associated with a folding of the branches, where the SGM pattern resembles a thicket. It is therefore important to determine the information that can be extracted from the effect of an SGM tip in a confined cavity. This is the goal of our work, where we investigate the effect of a micron-sized circular mirror gate facing the QPC on the SGM response as in Ref.\ \cite{stein18a}.
In a recent experiment \cite{carolin_cavity}, the SGM response of such a system has been measured as a function of the tip position on a line parallel to the QPC gates and close to the mirror gate as in Ref.\ \cite{stein18a}, and the tip voltage was varied at each position. Interestingly, curved lines appear in a plot of the conductance as a function of tip strength and position. On those lines, the conductance assumes particularly high or low values. The present paper explains the origin of those structures, and shows that they are related to branches in electron flow through the cavity. Comparing quantum simulations and classical trajectory approaches, we find that the semiclassical ballistic conductance \cite{jalabert_prl90,baranger1991} describes well the qualitative features of the transport properties of the system. In particular, we show that the SGM response is related in a subtle way to branches that appear in the electron flow, and which can be deflected by the effect of the tip potential. Such a tip-induced modification of a branch can direct it back into the QPC and reduce the conductance or perturb a branch that is reflected by the mirror into the QPC and thereby increase the conductance. Such an effect on a branch depends on the tip strength and its position with respect to the branch. Features in the SGM response that appear due to this mechanism persist when the tip strength and the tip position vary simultaneously in a way that keeps the deflection of the branch constant. Following these features to the limit of weak tip strength allows one, in principle, to determine the branch position in the absence of the SGM tip. Our paper is organized as follows: In Sec.~\ref{sec:branches}, we present quantum and classical approaches to the branching phenomenon in transport through a cavity.
The conductance through the cavity and the effect of an SGM tip are addressed in Sec.~\ref{sec:conductance}, with particular attention to the relation between the presence of branches and the SGM response. A comprehensive analytical approach to the dependence of the conductance on tip strength and position is presented in Sec.~\ref{sec:analytics}, confirming the existence of branch-induced features in the SGM response and the possibility of detecting branches in an unperturbed cavity by SGM experiments. Conclusions are presented in Sec.~\ref{sec:conclusions}. \section{Branches and partial local density of states in open space and in a cavity} \label{sec:branches} \begin{figure}[tb] \centerline{\includegraphics[width=.5\linewidth]{figure1}} \caption{\label{fig:sketch}Sketch of the system geometry considered. Red areas represent high potential regions that define the QPC of width $W$. The gray area indicates the mirror potential with a circular edge of radius $R$ that is intended to reflect electrons back into the QPC, and which is present in most of our calculations. In our transport calculations, we consider the conductance between the electrodes 1, 2, and 3 shown in blue, and in particular from electrode 1 to 2 or 1 to 3. The dashed line indicates the possible tip positions that are assumed in our simulations of SGM.} \end{figure} In this section, we discuss the effect of a cavity (see the sketch in Fig.~\ref{fig:sketch}) on the branches that appear in the electronic scattering wave functions and in the corresponding partial local density of states (PLDOS). The appearance of branches in SGM of electronic transport in a high-mobility 2DEG behind a constriction like a QPC is well known \cite{topinka2001nature,topinka2003imaging,kola,braem18a}.
Their appearance is due to the unavoidable weak disorder seen by the electrons; the precise branch pattern depends on the details of the disorder realization in the sample, but the branches are quite stable with respect to changes of the Fermi energy \cite{braem18a,fratu19a}. The branches in the SGM response to an invasive tip have been related \cite{ly17a} to the PLDOS for electrons entering from the QPC at the Fermi energy. In general, the PLDOS for electrons with energy $E$ from electrode $l$ with $N_l$ open channels can be defined as \cite{buettiker1996,gramespacher1999} \begin{equation} \label{eq:PLDOS} \rho_{lE} (\vec{r}) = 2 \pi \sum_{a=1}^{N_l} \left|\psi_{lEa}(\vec{r})\right|^2 \end{equation} through the scattering wave functions $\psi_{lEa}$ injected from the channel $a$ of electrode $l$. It has been shown that the branching structure is well reproduced by the density of classical trajectories starting in the QPC \cite{topinka2001nature,topinka2003imaging,braem18a}. Here we are interested in the consequences of imposing a circular gate facing the QPC, as they pertain to the branch structure and branch folding, as in the sample measured in Ref.~\cite{stein18a}. We thus start our analysis by considering the effect of confinement on the PLDOS and on the classical trajectory density. \subsection{Branches and scattering wave functions} \begin{figure}[tb] \centerline{\includegraphics[width=\linewidth]{figure2}} \caption{\label{fig:pldos}Partial local density of states from the QPC $\rho_{1E}(x,y)$ at an energy $E=\unit[5.3]{meV}$ [see Eq.\ \eqref{eq:PLDOS}] for a weakly disordered 2DEG without (left panel) and with (right panel) a cavity mirror.} \end{figure} We consider a 2DEG with weak disorder.
The disorder potential is generated following the approach of Ref.\ \cite{ihn2010semiconductor} with realistic parameters ($150\,000$ impurities in a square of size $\unit[10]{\mu m}\times \unit[10]{\mu m}$, in the doping layer situated at a distance of $\unit[70]{nm}$) as described in Appendix A of Ref.~\cite{fratu19a}. We first calculate the PLDOS corresponding to scattering states entering the 2DEG at an energy $E=\unit[5.3]{meV}$ from a QPC and in the absence of the tip, which corresponds to a de Broglie wavelength of $\lambda=\unit[65.5]{nm}$. The width of the QPC is $W=\unit[100]{nm}$, such that three channels are open. The PLDOS obtained without the confining mirror (gray region in Fig.~\ref{fig:sketch}) using the fully coherent quantum transport approach implemented through the Kwant package \cite{groth14a} is shown in the left panel of Fig.~\ref{fig:pldos}. Branch-like structures clearly appear in this quantity. In a second calculation, we include the mirror gate (gray region in Fig.~\ref{fig:sketch}) at a distance of $R=\unit[2]{\mu m}$ from the QPC by adding hard-wall boundaries. The resulting PLDOS is shown in the right panel of Fig.~\ref{fig:pldos}. The first obvious effect of the mirror is that the scattering wave function does not extend beyond the mirror boundary, and only the values inside the cavity are calculated and plotted. Furthermore, one can still notice branches, and some branch folding, but the structure has become richer and more complex. In addition, oscillations appear on the scale of $\lambda/2$ due to interference between the injected waves and those reflected from the boundary. This interference leads to measurable effects when the radius of the mirror cavity is varied \cite{roessler15,ferguson17,stein18a}.
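A heavily simplified sketch of such a remote-impurity disorder landscape can be generated as follows. Screening, physical units, and the full impurity number of the actual calculation are ignored here; only the structural features, a zero-mean potential that is smooth on the scale of the doping-layer distance $d$, are retained.

```python
import numpy as np

# Toy remote-impurity disorder: point charges in a doping layer at height d
# above the 2DEG plane produce a smooth potential
#   V(r) ~ sum_i [ |r - r_i|^2 + d^2 ]^(-1/2).
# Units and screening are omitted; the impurity number is heavily downscaled.
rng = np.random.default_rng(1)
L, d = 4000.0, 70.0                 # sample size and doping-layer distance (nm)
n_imp = 1000                        # downscaled impurity number
imp = rng.uniform(0.0, L, size=(n_imp, 2))

x = np.linspace(0.0, L, 150)
X, Y = np.meshgrid(x, x)
V = np.zeros_like(X)
for xi, yi in imp:
    V += 1.0 / np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + d ** 2)
V -= V.mean()                       # keep only the fluctuating part
```

Because each impurity sits a distance $d$ away from the 2DEG plane, the resulting landscape varies smoothly on scales below $d$, which is precisely the property responsible for the small-angle scattering that seeds the branches.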
\subsection{Description by classical trajectory density} \begin{figure}[tb] \centerline{\includegraphics[width=\linewidth]{figure3}} \caption{\label{fig:trajectories}Density of classical trajectories for the same systems and disorder configuration as in Fig.~\ref{fig:pldos}. Left (right) panel: without (with) the cavity mirror.} \end{figure} As the main features of the branching phenomenon \cite{topinka2001nature,topinka2003imaging,braem18a,fratu19a} and the conductance through micron-sized cavities \cite{poltl16} are well described by classical trajectory approaches, we compute the classical trajectories from the QPC and their density as described in Ref.\ \cite{fratu19a}. Figure \ref{fig:trajectories} shows the result for the classical trajectory density in the same system and for the same parameters and disorder configuration as the quantum calculation of the scattering states discussed above. Interestingly, most of the structures in Fig.~\ref{fig:pldos} are reproduced by the trajectory density shown in Fig.~\ref{fig:trajectories}, except for the interferences described at the end of the previous section. Comparing the branch structures without and with the confinement due to the mirror, shown in the left and right panel of Fig.~\ref{fig:trajectories}, respectively, the branches in the confined case include the ones seen in the unconfined situation. After all, the trajectories starting from the QPC are exactly the same until they reach the mirror gate, and therefore the appearing branches are also the same. In the confined case, the trajectories are reflected by the circular mirror, and thus additional structures due to the folded branches appear in the right panel of Fig.~\ref{fig:trajectories}. The trajectories are typically reflected several times by the mirror and the QPC-forming potential walls until they leave the cavity either through the QPC or on one of the sides. 
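The elementary building block of this trajectory folding, specular reflection at the circular mirror, can be sketched as follows (the hit point and ray direction below are illustrative).

```python
import numpy as np

# Specular reflection at the circular mirror: with n the outward normal of
# the mirror at the hit point, the velocity transforms as v' = v - 2 (v.n) n,
# so the angle of incidence equals the angle of reflection and |v| is kept.
R = 2000.0                                   # mirror radius (nm), as in the text

def reflect(pos, v, center=np.zeros(2)):
    n = (pos - center) / np.linalg.norm(pos - center)   # outward normal
    return v - 2.0 * np.dot(v, n) * n

# Example: a ray launched along +y from the QPC hits the mirror 45 degrees
# off the y axis and is reflected into the -x direction.
hit = R * np.array([np.sin(np.pi / 4), np.cos(np.pi / 4)])
v_in = np.array([0.0, 1.0])
v_out = reflect(hit, v_in)
```

Iterating this rule between mirror hits, with free flight perturbed by the weak disorder force in between, reproduces the multiply folded branch pattern.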
As a consequence, the average trajectory density in the cavity is increased, very much like the PLDOS in Fig.~\ref{fig:pldos}. This leads to much richer structures, and it increases the SGM response such that the weakly invasive regime can be investigated experimentally without suffering from too strong a reduction of the signal \cite{stein18a}. \section{Electronic conductance through a cavity} \label{sec:conductance} After having verified that the correspondence between the PLDOS and the trajectory density is approximately maintained upon imposing the confinement, we now turn to the electronic conductance through the device in the presence of the mirror, which is the physical quantity to which we have access experimentally. We consider the conductance between the electrode 1 below the QPC and electrodes 2 and 3 of the cavity (cf.\ Fig.~\ref{fig:sketch}). The latter electrodes are treated as one large reservoir at a common chemical potential, and the conductance we are interested in is given by the sum $G=G_{12}+G_{13}$ of the conductances towards those electrodes. \subsection{Semiclassical approach to the conductance of an open cavity} The classical conductance of a ballistic system can be derived using a semiclassical approach \cite{jalabert_prl90,baranger1991}, yielding a weighted average over the contributions associated with transmitted classical trajectories with different initial conditions in the injection lead.
In our case, we take the positions $x \in \left[-W/2, +W/2\right]$ and $y=0$ at the interface between electrode 1 and the cavity, and initial angles $\phi \in \left[-\pi/2,+\pi/2\right]$ with the $y$ axis (see Fig.~\ref{fig:sketch}), to get \begin{equation}\label{eq:conductance-classical} G =\frac{2e^2m v_\mathrm{F}}{h^2} \int_{-\pi/2}^{+\pi/2} \mathrm{d}\phi\, \cos{\phi} \int_{-W/2}^{+W/2} \mathrm{d}x\, f(x,\phi), \end{equation} where $e$ is the elementary charge, $m$ the (effective) electron mass, $v_\mathrm{F}$ the Fermi velocity, and $h$ is Planck's constant. In the expression above, $f(x,\phi)=1$ $(0)$ for transmitted (reflected) trajectories as a function of the initial conditions $(x,\phi)$. When a trajectory enters an electrode, it is assumed that it ends there and does not come back into the cavity, i.e., a trajectory that ends in regions 2 or 3 of Fig.~\ref{fig:sketch} is counted as transmitted when it reaches one of the two electrodes, and as reflected when it returns to the injection electrode 1. The conductance $G$ obtained using a numerical implementation \cite{poltl16} of Eq.\ \eqref{eq:conductance-classical} for the same disorder realization as in Sec.~\ref{sec:branches}, and with a hard-wall mirror cavity, is about $G \simeq 2.6 \times 2e^2/h$, thus not far from the transmission $T=3$ of a QPC on the third conductance plateau. Such a value indicates that in spite of the perfect reflection of the mirror, most of the electrons injected into the cavity from the QPC are not directed back into the QPC. This phenomenon illustrates the crucial role that small-angle scattering due to weak and smooth disorder might play \cite{stein18a}.
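Once the transmission indicator $f(x,\phi)$ is known, Eq.~\eqref{eq:conductance-classical} can be evaluated by direct quadrature. The sketch below uses $m v_\mathrm{F}/h = 1/\lambda_\mathrm{F}$ and checks that $f\equiv 1$ reproduces the number of open QPC channels $N = 2W/\lambda_\mathrm{F} \approx 3$ for the parameters quoted above; the simple angular-blocking model for $f$ is purely illustrative and not the ray-traced transmission of the actual cavity.

```python
import numpy as np

# Direct quadrature of Eq. (eq:conductance-classical) in units of 2e^2/h,
# using m v_F / h = 1 / lambda_F.  With f = 1 (every trajectory transmitted)
# the integral must give the channel number N = 2 W / lambda_F.
lam_F = 65.5          # de Broglie wavelength (nm), from the text
W = 100.0             # QPC width (nm), from the text

def conductance(f, n_phi=4000, n_x=400):
    """Midpoint-rule evaluation of (1/lam_F) * int dphi cos(phi) int dx f."""
    dphi, dx = np.pi / n_phi, W / n_x
    phi = -np.pi / 2 + (np.arange(n_phi) + 0.5) * dphi
    x = -W / 2 + (np.arange(n_x) + 0.5) * dx
    PHI, X = np.meshgrid(phi, x, indexing="ij")
    return np.sum(np.cos(PHI) * f(X, PHI)) * dphi * dx / lam_F

G_open = conductance(lambda x, phi: np.ones_like(phi))        # f = 1 everywhere
# Illustrative blocking model: |phi| > pi/4 counted as reflected.
G_blocked = conductance(lambda x, phi: (np.abs(phi) < np.pi / 4).astype(float))
```

Any realistic $f(x,\phi)$ obtained from ray tracing through the disordered cavity can be plugged into the same quadrature.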
\subsection{Conductance in the presence of an SGM tip} \begin{figure}[tb] \centerline{\includegraphics[width=0.65\linewidth]{figure4}} \caption{\label{fig:conductance-classical}Classical ballistic conductance $G$ from Eq.~\eqref{eq:conductance-classical} (in units of the conductance quantum $2e^2/h$) as a function of position $x_\mathrm{T}$ (while $y_\mathrm{T}=\unit[1.414]{\mu m}$ is kept fixed along the dashed line in Fig.~\ref{fig:sketch}) and strength $V_\mathrm{T}$ of the tip. The various lines represent plots of Eq.~\eqref{eq:parabola-lorentzian}.} \end{figure} We now focus on the SGM response (the conductance change induced by a local potential) as a function of tip strength and position, and the relation of its features to the branches in the cavity. In the presence of an SGM tip potential of Lorentzian shape \begin{equation} V(\vec{r})=V_\mathrm{T}\, \frac{\sigma^2}{|\vec{r}-\vec{r}_\mathrm{T}|^2+\sigma^2} \end{equation} with an amplitude $V_\mathrm{T}$ that determines the tip strength and a realistic \cite{stein18a} width $\sigma = \unit[175]{nm}$, the conductance can be increased or reduced, depending on the details of the tip parameters and its position $\vec{r}_\mathrm{T}$ relative to the electron flow in the cavity. The calculated conductance as a function of the position $x_\mathrm{T}$ (on the dashed line in Fig.~\ref{fig:sketch} with $y_\mathrm{T}=\unit[1414]{nm}$ fixed) and strength $V_\mathrm{T}$ of the tip is presented in Fig.~\ref{fig:conductance-classical}. Figure \ref{fig:conductance-classical} exhibits different kinds of structures in the SGM response. We have checked that they are similar to the ones seen in the quantum conductance (see Appendix \ref{sec:kwant_parabolas}), indicating that the semiclassical approach represents a rather good approximation. The observations are also consistent with the experimental data of Ref.\ \cite{carolin_cavity}.
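For reference, the Lorentzian tip potential above is easily tabulated; in a tight-binding (e.g., Kwant-style) calculation it simply adds to the onsite energies of the lattice sites. Note the half maximum at $|\vec{r}-\vec{r}_\mathrm{T}| = \sigma$; the tip position and grid usage below are illustrative.

```python
import numpy as np

# Lorentzian tip potential V(r) = V_T sigma^2 / (|r - r_T|^2 + sigma^2),
# with the realistic half-width sigma = 175 nm quoted in the text.
sigma = 175.0

def V_tip(r, r_T, V_T, sigma=sigma):
    """Tip potential at point(s) r for tip position r_T and strength V_T."""
    d2 = np.sum((np.asarray(r) - np.asarray(r_T)) ** 2, axis=-1)
    return V_T * sigma ** 2 / (d2 + sigma ** 2)

r_T = np.array([0.0, 1414.0])                      # illustrative tip position (nm)
V0 = V_tip(r_T, r_T, V_T=1.0)                      # value at the tip center -> V_T
Vhalf = V_tip(r_T + [sigma, 0.0], r_T, V_T=1.0)    # half maximum at |r - r_T| = sigma
```

The slow $1/|\vec{r}-\vec{r}_\mathrm{T}|^2$ decay of the Lorentzian is what makes fairly distant tips still deflect a branch appreciably.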
Among them, a strong repulsive ($V_\mathrm{T}>0$) or attractive ($V_\mathrm{T}<0$) tip close to the center of the cavity increases the scattering of the electrons towards the side electrodes 2 and 3, and thereby increases the conductance to values that approach the transmission $T=3$ of the QPC. More intriguingly, there are lines with approximately constant conductance that appear in Fig.~\ref{fig:conductance-classical} between regions of strong and weak tips, and which have a characteristic parabola-like shape. Such structures exist in the upper part ($V_\mathrm{T}<0$) where the tip potential is attractive as well as in the lower part ($V_\mathrm{T}>0$) where it is repulsive. Several of those structures seem to be superposed. We will argue in the sequel that a classical mechanism of bending of the branches by the effect of the tip is at the origin of these structures in the conductance plots. In Appendix \ref{sec:Analytics_clean} we tackle the semiclassical computation of the conductance for a cavity without disorder (i.e., the clean case) in the presence of an SGM tip, where an analytical treatment is possible for particular shapes of the tip potential. Contrary to the previously discussed case of a disordered cavity, the redirection into the QPC by the mirror is very efficient, and the conductance for weak values of the tip strength is very low. Despite this important difference, the parabolic-like features present in the experiment \cite{carolin_cavity} and reproduced by the semiclassical approach leading to Fig.~\ref{fig:conductance-classical} are also suggested in the clean case (see Fig.~\ref{fig:classical-conductance-clean}). \subsection{Signature of branches in the SGM response} \begin{figure}[tb] \centerline{\includegraphics[width=.6\linewidth]{figure5}} \caption{\label{fig:conductance-classical-zoom}Zoom into the lower-left part of Fig.~\ref{fig:conductance-classical}.
The three marked points on the prominent edge between high and low conductance represent the parameter values for which the classical trajectory density is shown in Fig.~\ref{fig:classical-trajectories-points}.} \end{figure} \begin{figure} \centerline{\includegraphics[width=\linewidth]{figure6}} \caption{\label{fig:classical-trajectories-points}Density of classical trajectories in the weakly disordered sample of Fig.~\ref{fig:pldos}, modified by a Lorentzian bump at three different positions and strengths on the feature shown in Fig.~\ref{fig:conductance-classical-zoom}. The position of the tip is indicated by the colored symbols in the panels, allowing the parameters to be identified with those of the marked points in Fig.~\ref{fig:conductance-classical-zoom}.} \end{figure} To illustrate the above-mentioned mechanism, we focus here on one of the prominent curved equiconductance lines in the position-amplitude space of Fig.~\ref{fig:conductance-classical}, and calculate the tip effect on the classical trajectory density for three points on that feature. A zoom of the selected region with three colored markers that represent the chosen points in a prominent structure is shown in Fig.~\ref{fig:conductance-classical-zoom}. The classical trajectory density in the presence of the tip for the three parameter sets is shown in Fig.~\ref{fig:classical-trajectories-points}. It can be seen that the structure of the branches in the cavity does not change drastically under the effect of the tip, and that the branches close to the tip are deflected by the tip potential in a way that keeps their contribution to the conductance essentially unchanged. For the most significant and closest branch, the change in tip position is compensated by a change in tip strength: A strong tip that is far away may have a similar effect to a tip that is weak and close.
Based on such a qualitative understanding of the equiconductance lines, we propose in the next section a model calculation that leads to a detailed quantitative description of the shape of these lines. \section{Analytical model describing equiconductance features} \label{sec:analytics} In this section, we present a model calculation to predict the form of the equiconductance lines in SGM measurements as a function of tip position $x_\mathrm{T}$ and strength $V_\mathrm{T}$. The model is based on the deflection of classical trajectories by the tip potential, and the assumption that the tip effect on the contribution of a branch to the conductance is determined by the deflection angle of the trajectories in the branch. Since the classical conductance \eqref{eq:conductance-classical} measured in the SGM setup crucially depends on whether trajectories are transmitted or not, we expect that the SGM response is dominated by a number of branches that form due to weak disorder \cite{fratu19a}, and whose transmission is modified by the effect of the tip potential. Branches are bundles of trajectories, and thus the impact of the tip can be approximately described by the deflection of a trajectory that is part of the branch. For the quantitative description of such a deflection, we assume that the tip-induced potential bump/dip has circular symmetry around the tip center $\vec{r}_\mathrm{T}$ with the general expression \begin{equation}\label{eq:tip-general} V(\vec{r})=V_\mathrm{T}\, {v}(|\vec{r}-\vec{r}_\mathrm{T}|), \end{equation} where $V_\mathrm{T}$ is the potential strength, and ${v}(r)$ the shape of the potential with normalized maximum value ${v}(0)=1$ at the tip center.
In the limit of weak scattering, the deflection angle $\alpha$ due to the presence of the potential \eqref{eq:tip-general} for a trajectory at energy $E$, starting at $x=x_\mathrm{T}+\Delta x$, $y=-\infty$ in the $y$ direction (see the sketch in Fig.~\ref{fig:deflection-sketch}), can be calculated \cite{fratu19a} as \begin{equation}\label{eq:deflection} \alpha=-\frac{V_\mathrm{T}}{E}\Delta x \int_1^\infty \mathrm{d}s \, \frac{{v}'(|\Delta x| s)}{\sqrt{s^2-1}}, \end{equation} where $v'$ is the derivative of $v$ with respect to $r$. \begin{figure}[tb] \centerline{\includegraphics[width=0.15\linewidth]{figure7}} \caption{\label{fig:deflection-sketch} Sketch of the considered situation in the calculation of the deflection of a classical trajectory.} \end{figure} The effect of the tip described by the deflection angle \eqref{eq:deflection} depends on the energy, tip shape, tip position, and tip strength. For the experimentally realized situation of fixed energy and tip shape, the condition of a constant (small) deflection angle for a given branch leads to a relationship between $\Delta x$ and the corresponding $V^{*}_\mathrm{T}$, by recasting Eq.\ \eqref{eq:deflection} as \begin{equation} \label{eq:parabola-shape} V^{*}_\mathrm{T}=-\alpha E \left[\Delta x \int_1^\infty \mathrm{d}s \, \frac{{v}'(|\Delta x| s)}{\sqrt{s^2-1}}\right]^{-1}. \end{equation} As the contribution of a tip-deflected branch (represented by a single trajectory) to the SGM response remains approximately unchanged along lines described by the relation \eqref{eq:parabola-shape}, prominent features in the SGM response can be expected to appear on such lines for particular values of $\alpha$, for example the ones for which an otherwise transmitted branch is reflected back into the QPC. 
Evaluating Eq.\ \eqref{eq:parabola-shape} for a Lorentzian potential bump/dip ${v}(r)={\sigma^2}/({r^2+\sigma^2})$ of width $\sigma$, we obtain the result \begin{equation}\label{eq:parabola-lorentzian} \frac{V^{*}_\mathrm{T}}{E \alpha}=-\frac{2}{\pi} \frac{\left[(\Delta x/\sigma)^2+1\right]^{3/2}}{\Delta x/\sigma} \end{equation} for the expected shape of SGM features in the $x_\mathrm{T}$--$V_\mathrm{T}$ plane. The result \eqref{eq:parabola-lorentzian} is depicted by the solid lines in Fig.~\ref{fig:parabolas-analytics}. Interestingly, the behavior for tip positions far from the main branch when $\Delta x \gg \sigma$ given by \begin{equation} \frac{V^{*}_\mathrm{T}}{E \alpha}\simeq -\frac{2}{\pi} \left(\frac{\Delta x}{\sigma}\right)^2 \end{equation} is approximately parabolic, consistent with the experiment \cite{carolin_cavity}. The lines in Fig.~\ref{fig:conductance-classical} represent plots of Eq.~\eqref{eq:parabola-lorentzian}, showing that they are perfectly consistent with the structures appearing in the classically calculated conductance. \begin{figure}[tb] \centerline{\includegraphics[width=0.6\linewidth]{figure8}} \caption{\label{fig:parabolas-analytics} Plot of the tip strength needed to maintain constant deflection as a function of distance from the branch for Lorentzian [solid lines, Eq.~\eqref{eq:parabola-lorentzian}] and Gaussian [dashed lines, Eq.~\eqref{eq:parabola-gaussian}] tip shapes.} \end{figure} In order to investigate the influence of the tip shape on the expected shape of equiconductance lines in the SGM response, we also present the case of a Gaussian potential bump/dip shape ${v}(r)=\mathrm{e}^{-r^2/2\sigma^2}$, for which one obtains from Eq.\ \eqref{eq:parabola-shape} the behavior \begin{equation}\label{eq:parabola-gaussian} \frac{V^{*}_\mathrm{T}}{E \alpha}=-\sqrt{\frac{2}{\pi}}\frac{\exp{([\Delta x/\sigma]^2/2)}}{\Delta x/\sigma}. 
\end{equation} Equation \eqref{eq:parabola-gaussian} is shown through the dashed lines in Fig.~\ref{fig:parabolas-analytics}. Their shape agrees with the features found in the quantum transport simulations of App.~\ref{sec:kwant_parabolas}, where a Gaussian tip shape has been used. A common feature of both, Lorentzian and Gaussian tip shapes, is the divergence $V^{*}_\mathrm{T}\propto {\sigma}/{\Delta x}$ in the limit $\Delta x/\sigma \to 0$, when the tip center approaches the trajectory. This divergence is due to the flat summits of the tip potentials considered, and the resulting need of stronger and stronger tip potentials to maintain a constant deflection angle. However, in this limit, trajectories are blocked when $V^{*}_\mathrm{T}>E$, and we do not expect strong features at very large tip strength in the experiment. Anyway, due to the symmetry of the potential, no deflection is possible when $\Delta x=0$, and it is clear that the lines of continuous deflection in Fig.~\ref{fig:parabolas-analytics} must have two non-connected parts. One of them corresponds to a repulsive tip on one side of the branch, the other to an attractive tip on the other side. The two are related by the odd symmetry of the features $V^{*}_\mathrm{T}(-\Delta x)=-V^{*}_\mathrm{T}(\Delta x)$, reflecting the fact that a fixed deflection can be due to pushing from one side (positive $V_\mathrm{T}$) or pulling from the other side (negative $V_\mathrm{T}$). While the sketch of Fig.~\ref{fig:deflection-sketch} and the relationship \eqref{eq:deflection} seem to apply to a trajectory belonging to a bundle that emerges from the QPC, the same kind of reasoning can be used in the case of a trajectory that has bounced off the reflector, and thus belongs to a folded branch. It is worth to mention that Eq.~\eqref{eq:deflection} is valid for any kind of weakly deflecting potential bump or dip. 
In this work we have applied it to the case of the deflection induced by the tip on a classical trajectory in the absence of disorder, while in Ref.~\cite{fratu19a} it was used to understand the caustic formation due to weak disorder. In the realistic case of a disordered cavity, $\alpha$ would describe the additional deflection induced by the tip on an almost straight trajectory, and the effect of a weak disorder does not modify the simple picture arising from the analytical model. \section{Conclusions} \label{sec:conclusions} Motivated by recent experimental results, we have performed quantum and classical numerical simulations of transport through a QPC that is facing a circular reflecting mirror gate, taking into account the effect of a perturbing tip voltage. The numerics confirmed that an approach based on classical trajectories is able to capture the main features of the experimentally observed SGM patterns, and in particular it reproduces parabola-shaped features in the conductance plots as a function of tip position and strength. From analytical model calculations, we reach an understanding of the underlying mechanism, which is based on the branching of electron flow in a weak disorder potential, and considers the effect of the tip potential on nearby branches. A simple model describing the scattering of classical trajectories allows one to predict features of the SGM response in the $x_\mathrm{T}$--$V_\mathrm{T}$ plane where the deflection angle of a branch due to the effect of the tip remains constant. When the tip is moved away from the branch, its effect on the corresponding electron trajectories gets weaker, while with a simultaneous increase of tip strength, the influence on the trajectories can remain almost unchanged, giving rise to the observed ``parabolic'' features.
All these predictions are in good agreement with experiments \cite{carolin_cavity} and simulations of the SGM response, including the fact that the features exhibit a gap between the regimes of repulsive and attractive tips. It follows that the analysis of features of the SGM response as a function of tip position and strength can be used to extract the positions of branches in the unperturbed system (without the tip). In addition, the feature geometry depends on the shape of the tip-induced potential bump or dip, with almost parabolic features for a Lorentzian tip and large $|\Delta x|/\sigma$. Notably, such a dependence on tip shape could allow the shape of the tip potential to be determined experimentally from the features observed in the SGM response. The classical limit of the semiclassical approach has been found to be powerful enough to explain the experimental data for 2DEGs and the quantum simulations, unlike the case of open microwave billiards, where the influence of diffraction has been evidenced \cite{hersch99,hersch00}. We have demonstrated that, in the regime of strong confinement, important information can be obtained by analyzing the features of the folded-branch thicket-like SGM pattern. While weak disorder is crucial for the establishment of the branches and for certain observables like the conductance of a mirror-confined cavity, it does not prevent the emergence of classical trajectory features underlying the SGM pattern. \section*{Acknowledgements} We are indebted to Carolin Gold, Klaus Ensslin, and Thomas Ihn for sharing with us unpublished experimental results which motivated the present work. We are grateful to Nicolas Cartier for useful discussions. \paragraph{Funding information} Financial support from the French National Research Agency ANR through projects ANR-11-LABX-0058\_NIE (Labex NIE) and ANR-14-CE36-0007-01 (SGM-Bal) is gratefully acknowledged. This research project has been supported by the University of Strasbourg IdEx program.
\begin{appendix} \section{Quantum simulation of SGM in a cavity} \label{sec:kwant_parabolas} \begin{figure}[tb] \centerline{\includegraphics[width=.55\linewidth]{figure9}} \caption{\label{fig:cavity-potential}Confinement potential $V_\mathrm{conf}$ (in units of the Fermi energy $E_\mathrm{F}$) with smooth walls defining the geometry of Fig.~\ref{fig:sketch} used for the quantum conductance calculations. White areas are not part of the system, equivalent to regions of infinite potential.} \end{figure} \begin{figure}[tb] \centerline{\includegraphics[width=0.6\linewidth]{figure10}} \caption{\label{fig:conductance-clean}Quantum conductance (in colorscale) through the model cavity in the absence of disorder, as a function of tip position at fixed $y_\mathrm{T}=\unit[1625]{nm}$, and (dimensionless) tip strength.} \end{figure} We have performed quantum transport simulations using the Kwant package \cite{groth14a}. In these simulations, we calculate the electronic transport properties assuming that the electrons move in an effective one-body potential that determines the cavity as shown in Fig.~\ref{fig:cavity-potential}, and that electron-electron interaction effects beyond mean field can be neglected. The non-abrupt character of the chosen confining potential diminishes the importance of diffraction effects \cite{hersch99,hersch00} appearing in this kind of geometry. The electron dynamics is assumed to be perfectly coherent, and the transport properties are calculated at zero temperature. The parameters of the cavity shape and its size as well as the Fermi wavelength are close to the experimental situation of Ref.\ \cite{carolin_cavity}. For the tip potential, a Gaussian shape with strength $U_\mathrm{T}$ similar to the one used in Ref.\ \cite{stein18a} is used. Roughly, $U_\mathrm{T}=1$ corresponds to an experimental tip voltage of $\unit[-1]{V}$. 
In order to have a setup that is as close as possible to the experiment, the tip positions are chosen to vary along a line at $y_\mathrm{T}=\unit[1625]{nm}$. The numerical result for the quantum conductance in a clean model is shown in Fig.~\ref{fig:conductance-clean} as a function of tip position and strength. In the absence of disorder, and without the tip, the cavity reflects most of the electrons back into the QPC, such that the conductance at $U_\mathrm{T}=0$ is quite low. The main consequence of the tip is to perturb this mirror effect and to enhance the conductance. However, as in the experiment, the conductance change due to the tip exhibits the characteristic features evident in the plot of Fig.~\ref{fig:conductance-clean}. \begin{figure} \centerline{ \includegraphics[width=\linewidth]{figure11}} \caption{\label{fig:conductance-disorder}Quantum conductance through three samples with different disorder realizations.} \end{figure} The situation changes in the presence of weak disorder, where already in the absence of the tip, most electrons are transmitted, such that the conductance in this case is strongly enhanced with respect to the clean case \cite{stein18a}. We use a realistic disorder strength as in Ref.\ \cite{stein18a}, implemented along the lines described in Ref.\ \cite{ihn2010semiconductor} and detailed in Appendix A of Ref.\ \cite{fratu19a}. The results for the conductance as a function of tip position and strength are shown for three different disorder realizations in Fig.~\ref{fig:conductance-disorder}. It can be observed that there are curved features in all plots, some of them where the conductance is enhanced and some with reduced conductance. Their positions depend on the disorder configuration. 
\section{Classical ballistic conductance through a clean cavity} \label{sec:Analytics_clean} \begin{figure} \centerline{\hfill\includegraphics[width=0.43\linewidth]{figure12left} \hfill \includegraphics[width=0.43\linewidth]{figure12right}\hfill} \caption{\label{fig:classical-conductance-clean}Classical conductance through a clean cavity calculated from exact trajectories in the potential of a $1/r^2$-shaped tip. The right panel shows a zoom around the origin of the left panel using a more sensitive colorscale in order to highlight the details. The circle and the square indicate parameter choices for which the reflected trajectories are shown in Fig.~\ref{fig:reflected-trajectories}.} \end{figure} In this Appendix, we present an analytically solvable model that describes the conductance through a clean cavity sample in the presence of a perturbing tip potential. As in Ref.\ \cite{poltl16}, we evaluate the conductance \eqref{eq:conductance-classical} by counting transmitted and reflected trajectories from an ensemble of initial conditions in the injecting lead \cite{jalabert_prl90,baranger1991}. We consider the case of a disorder-free system for which we can analytically calculate the electron trajectories at energy $E$. We also assume that the sample confinement is due to hard-wall potentials with specular reflection of the trajectories, and that the tip potential reads \begin{equation}\label{eq:tip-analytical} V(\vec{r})= u_\mathrm{T}E_\mathrm{F}\left(\frac{R}{|\vec{r}-\vec{r}_\mathrm{T}|}\right)^2 \end{equation} with $u_\mathrm{T}$ some dimensionless tip strength and $E_\mathrm{F}$ the Fermi energy. The tip center is placed on a line parallel to the $x$ axis as in the experiment. It is then possible to calculate analytically the classical trajectories for the case of a repulsive tip, and thus the conductance based on those trajectories using Eq.\ \eqref{eq:conductance-classical}. 
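While the trajectories in the potential \eqref{eq:tip-analytical} can be obtained in closed form, a direct numerical integration offers a quick consistency check. The sketch below uses hypothetical units (unit mass, $E=E_\mathrm{F}=1$, $R=1$) and verifies energy conservation along the orbit as well as the fact that a repulsive trajectory never enters the depletion disk where $V>E$.

```python
import numpy as np
from scipy.integrate import solve_ivp

u_T = 0.1                      # dimensionless tip strength (illustrative)

def rhs(t, s):
    # Newton's equations in V(r) = u_T / r^2 (E_F = R = 1, unit mass)
    x, y, vx, vy = s
    r2 = x**2 + y**2
    pref = 2.0 * u_T / r2**2   # force = -grad V
    return [vx, vy, pref * x, pref * y]

b = 0.5                        # impact parameter w.r.t. the tip center
sol = solve_ivp(rhs, [0.0, 60.0], [b, -40.0, 0.0, np.sqrt(2.0)],
                rtol=1e-10, atol=1e-12)

r = np.hypot(sol.y[0], sol.y[1])
energy = 0.5 * (sol.y[2] ** 2 + sol.y[3] ** 2) + u_T / r**2
print(r.min(), np.sqrt(u_T))   # closest approach vs forbidden-disk radius
```

The minimal distance of closest approach stays outside the classically forbidden disk of radius $R\sqrt{u_\mathrm{T}E_\mathrm{F}/E}$, consistent with the discussion below.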
The unrealistic divergence of the potential \eqref{eq:tip-analytical} $\propto 1/|\vec{r}-\vec{r}_\mathrm{T}|^2$ arising when $\vec{r}\to\vec{r}_\mathrm{T}$ is irrelevant in the repulsive case since electron trajectories at a fixed energy $E$ can never reach regions where the potential exceeds $E$. Moreover, the difference between a Lorentzian potential and the potential \eqref{eq:tip-analytical} becomes negligible when the potential maximum is much higher than $E$. In the case of an attractive potential however, the negative divergence of the potential leads to artifacts, namely trajectories that get caught in the attractive potential, getting closer and closer to the singularity while accelerating to higher and higher velocity. For this reason, the tip shape can only be approximated by the form \eqref{eq:tip-analytical} when the tip potential is repulsive. We therefore do not include the parameter region of attractive tip potentials in this Appendix. \begin{figure} \centerline{\hfill\includegraphics[width=0.4\linewidth]{figure13left} \hfill \includegraphics[width=0.4\linewidth]{figure13right}\hfill} \caption{\label{fig:reflected-trajectories} Reflected trajectories for the two marked points of enhanced conductance in Fig.~\ref{fig:classical-conductance-clean}. Gray areas: Classically forbidden depletion disks of the tip potential.} \end{figure} Figure \ref{fig:classical-conductance-clean} shows the conductance computed from the analytically calculated trajectories when perturbing the system with a tip potential given by Eq.\ \eqref{eq:tip-analytical}. In the absence of the tip potential, the electron trajectories are straight lines reflected at the confining walls, therefore almost all trajectories that hit the mirror gate are reflected back into the QPC.
This leads to a very low conductance $G$ at zero tip strength $u_\mathrm{T}=0$, reminiscent of the behavior found in the quantum simulation of the conductance in the absence of disorder presented in Fig.~\ref{fig:conductance-clean}. The tip deflects the electron trajectories such that most of the reflected ones are transformed into transmitted trajectories. Hence, the effect of the tip is to strongly enhance the conductance, and the classical picture directly explains this aspect found in the quantum simulation of Fig.~\ref{fig:conductance-clean}. However, as can be observed in Fig.~\ref{fig:classical-conductance-clean}, in the parameter space where the transmission is enhanced (light blue) by the repulsive tip, there are ``parabolic'' features where significant reflection is still present (dark blue). To get a better understanding of the mechanism, we show in Fig.~\ref{fig:reflected-trajectories} the reflected trajectories for a few parameter values that correspond to the marked points in the right panel of Fig.~\ref{fig:classical-conductance-clean}. Two different parameter sets have been chosen on a prominent ``parabolic'' feature. It can be seen in Fig.~\ref{fig:reflected-trajectories} that the reflected trajectories for the two cases (one for a tip that is closer to the center and weaker than the other) are very similar and correspond to a family of trajectories that is focused back into the QPC after several reflections at the sample walls. For parameter values placed on other ``parabolic'' features, the trajectories look quite different, but they also remain similar when one changes parameters along the feature. In contrast to the disordered case, trajectories with only one reflection at the mirror do not play an important role for the equiconductance features in the clean case. \end{appendix}
\section{Introduction} The direct numerical simulation (DNS) of turbulence is impractical for most applications due to its multiscale nature. Engineering decisions requiring a short turnaround time continue to rely on the Reynolds-averaged Navier-Stokes (RANS) equations for steady-state analysis of turbulent flows. Within that context, developments that lead to improved RANS results from the point of view of speed or accuracy have the potential to affect design workflows significantly. To that end, there have been several recent investigations into the augmentation of RANS using data-driven methods that attempt to improve its accuracy through the use of fidelity porting strategies \cite{zhang2015machine,ling2015evaluation,xiao2016quantifying,weatheritt2016novel,ling2016reynolds,wu2018physics,wang2017physics,sotgiu2019towards,wu2019physics,cruz2019use,geneva2019quantifying,layton2020diagnostics} or the utilization of experimental data \cite{parish2016paradigm,singh2017machine}. Closure optimization strategies have also looked at zonal coefficient refinement or model selection for improved accuracy \cite{matai2019zonal,maulik2019sub}. Most attempts at infusing information from higher fidelity models such as DNS into reduced-order representations have resulted in improved accuracy for the quantities of interest and represent the bulk of physics-informed machine learning investigations of turbulence closure modeling \cite{gamahara2017searching,maulik2017neural,vollant2017subgrid,maulik2018data,beck2019deep}. Recently, researchers have also advocated for CFD-driven machine learning wherein epochs of the machine learning optimization are coupled with a forward solve of the RANS equations \cite{zhao2019turbulence,taghizadeh2020turbulence} and have also investigated ideas of spatial and spatio-temporal super-resolution for reconstructing sub-grid information \cite{fukami2019super,fukami2020machine,maulik2020probabilistic}.
The numerous opportunities for utilizing machine learning in turbulence modeling have also been summarized in recent reviews such as \cite{duraisamy2019turbulence} and \cite{zhang2019recent}. We are motivated by the seminal work of Tracey, Duraisamy and Alonso \cite{tracey2015machine}. This study assessed the feasibility of using neural networks to predict turbulent eddy viscosities obtained from training data generated by the Spalart-Allmaras model. We also note a recent investigation by Zhu et al.\ \cite{zhu2019machine}, which looked at the surrogate modeling of subsonic flow around an airfoil with a radial-basis function network. The unique feature of this study was a section devoted to the improvements in compute time due to the preclusion of a separate partial differential equation (PDE) for the turbulent eddy viscosity computation. In contrast to the above investigations, this article pursues an alternate approach for eddy viscosity surrogate modeling through the \emph{direct prediction} of steady-state turbulent eddy viscosity fields with the use of a deep learning framework. We leverage the initial conditions of our test case (given by a low fidelity solution such as potential flow) across a range of boundary conditions as inputs and harvest steady-state turbulent eddy viscosities (from a suitable choice of a turbulence closure model based on the linear eddy-viscosity hypothesis) as outputs to train a relationship that needs to be deployed \emph{once} at the start of a RANS simulation. In addition, we may also incorporate direct knowledge of the control parameter to improve the accuracy of this surrogate map. The velocity and pressure equations are solved to steady-state while utilizing this \emph{fixed} solution. It is observed that the resultant RANS solutions are able to approximately replicate the trends of the solutions of the PDE model but with the added advantage of significant reductions in the total time-to-solution.
\textcolor{black}{The aforementioned study is also performed using training data for different geometries to test the viability of the framework for predicting steady-state turbulent eddy-viscosities in the case of varying parameters that control the computational mesh.} The machine learning (ML) framework presented here consists of two phases, which we refer to as {\it offline} and {\it online}. The offline phase consists of data generation at strategically selected points in parameter space (corresponding to a design of experiments component) as well as the training of a neural network to predict a turbulent eddy-viscosity. This trained network is the ML surrogate model. The online phase corresponds to the deployment of the trained ML surrogate to predict a spatial field of eddy-viscosities, which is then held fixed while the velocity and pressure equations are solved to convergence. The overall procedure with online and offline phases for training and deploying the ML surrogate is denoted ``the ML framework''. Our test case is given by the two-dimensional backward-facing step, which exhibits flow separation and considerable mesh anisotropy and thus represents a challenging test for the construction of surrogate closures for real-world applications. We note that the mesh utilized in this study, which will be introduced in greater detail in Section \ref{Problem}, is generated with information from the NASA LARC extended turbulence validation database. \section{Methodology} This section describes the proposed data-driven workflow and test case. In subsections \ref{Problem} and \ref{subsection:RANS} we define the problem statement and provide a brief background of RANS modeling. In subsection \ref{subsection:ml-rans} a description of the proposed ML and computational fluid dynamics (ML-CFD) integration procedure is provided.
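The two phases can be illustrated schematically. The sketch below trains a small multilayer perceptron on synthetic stand-in data and then performs the single online inference call; it is not the actual solver coupling, and all numbers as well as the toy target function are hypothetical. In the actual framework, the inputs come from the potential-flow initialization and the targets from converged eddy-viscosity fields.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# --- Offline phase (synthetic stand-in data; all values hypothetical) ---
# Each row maps (x, y, U) to a steady-state nu_t value at that cell.
n_cells, n_runs = 500, 10
X, y = [], []
for U in np.linspace(40.0, 49.0, n_runs):           # freestream sweep
    xy = rng.uniform(0.0, 1.0, size=(n_cells, 2))
    nut = U * xy[:, 1] * (1.0 - xy[:, 1])           # toy target (arb. units)
    X.append(np.column_stack([xy, np.full(n_cells, U)]))
    y.append(nut)
X, y = np.vstack(X), np.concatenate(y)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=800, random_state=0),
).fit(X, y)

# --- Online phase: a single inference call; the predicted field is then
# held fixed while velocity and pressure are iterated to convergence. ---
U_new = 44.2
grid = np.column_stack([rng.uniform(size=(n_cells, 2)),
                        np.full(n_cells, U_new)])
nut_frozen = surrogate.predict(grid)                # deployed once per run
```

The key design point is that the surrogate is evaluated exactly once per configuration, so its cost is negligible compared with the iterative solution of the transport equation it replaces.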
\subsection{Problem Definition}\label{Problem} In the current work, we undertake the problem of simulating flow past the backward-facing step as shown in figure \ref{fig:mesh_Geo}(a). The experimental description is provided in \cite{driver1985features}. For our first experiment, we simulate the airflow for a Reynolds number based on the fixed step height of $h = 1.27$ cm. The Reynolds number is defined as $Re_{h}= \frac{U h}{\nu}$ where $U$ is the freestream velocity (and also a control parameter for generating training and testing data in our machine learning problem definition) and $\nu = 1.5 \times 10^{-5} \frac{m^2}{s}$ is the kinematic viscosity. The problem is widely used for the validation of novel turbulence models. More details about the case can be seen on the NASA turbulence web page\footnote{https://turbmodels.larc.nasa.gov/backstep\_val.html}. \begin{figure} \centering \includegraphics[scale=0.25,width=\textwidth]{./Figures/Combine_Mesh_Geo_v2.png} \caption{The backward step used for a representative CFD simulation in this investigation. (a) shows the geometry and the step defined at (0,0). Red dashed lines show locations where data has been probed to assess our ML surrogate model. (b) shows the mesh used for the computation and a zoomed view of the refined mesh.} \label{fig:mesh_Geo} \end{figure} \\ \\ \indent The training data set for the first experiment is generated from ten simulations, each with distinct free-stream velocities resulting in Reynolds numbers that range from $Re_h = 34,000 \text{-} 41,500$. The computational domain is composed of a non-uniform grid as shown in figure \ref{fig:mesh_Geo}. To capture the separation occurring near the step, additional refinement is performed on the grid, which can be seen in the zoomed view shown in Figure \ref{fig:mesh_Geo}(b). The total number of cells used in this experiment was fixed at $20,540$. 
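For reference, the freestream velocities implied by the quoted Reynolds-number range follow directly from $Re_{h}=Uh/\nu$; the equispaced sampling below is an illustrative assumption, as the text specifies only that ten distinct velocities were used.

```python
# Freestream velocities implied by Re_h = U h / nu for the stated range;
# equispaced sampling is an assumption made here for illustration only.
nu = 1.5e-5   # kinematic viscosity, m^2/s
h = 1.27e-2   # step height, m
Re = [34_000 + i * (41_500 - 34_000) / 9 for i in range(10)]
U = [r * nu / h for r in Re]
print(f"U range: {U[0]:.2f} - {U[-1]:.2f} m/s")   # ~40.16 - 49.02 m/s
```

The resulting range of roughly $40$--$49$ m/s brackets the fixed inlet velocity $U_b=44.2$ m/s used in the second experiment.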
The mesh was generated using OpenFOAM's native grid generator \texttt{blockMesh}, with a maximum aspect ratio of $7,601.12$. A second experiment is designed by utilizing a fixed inlet velocity $U_b=44.2$ m/s while varying the step height. We select $0.50h$, $0.75h$, $1.25h$, $1.5h$, $2.0h$ as our different data generation geometries and re-mesh the entire domain for each option. Table \ref{TableCell} provides a summary of the various mesh configurations. \begin{table} \centering \begin{tabular} {l*{2}{c}} Step height & Cell count \\ \hline 0.5h & 130,184 \\ 0.75h & 134,648 \\ 1.25h & 128,808 \\ 1.50h & 130,824 \\ 1.75h & 152,470 \\ 1.90h & 131,374 \\ 2.00h & 131,948 \\ \end{tabular} \caption{Mesh configurations showing different step heights and associated cell count} \label{TableCell} \end{table} This experiment would thus correspond to a numerical simulation campaign that scans across a family of geometries, with the ML surrogate for $\nu_t$ aiding in faster times to solution for each configuration. All CFD simulations are performed with the OpenFOAM solver (version 5.0) \cite{weller1998tensorial}, which has seen extensive use in practical CFD applications \cite{robertson2015validation}. \subsection{RANS}\label{subsection:RANS} The RANS equations are a time-averaged form of the Navier-Stokes equations of motion for fluid flow. The equations are derived using the Reynolds decomposition principle shown in \cite{reynolds1895iv}, whereby the instantaneous quantity is decomposed into its time-averaged and fluctuating quantities. These equations are used for describing the steady-state behavior of turbulent flows.
In Cartesian coordinates for a stationary flow of an incompressible Newtonian fluid, the RANS equations can be written as follows: \begin{subequations} \begin{equation} \frac{\partial \bar{u_i} }{\partial x_i}= 0, \end{equation} \begin{equation} \centering \rho \bar{u_j}\frac{\partial \bar{u_i} }{\partial x_j} = \rho \bar{f_i} + \frac{\partial}{\partial x_j} \left[ - \bar{p}\delta_{ij} + 2\mu \bar{S_{ij}} - \rho \overline{u_i^\prime u_j^\prime} \right] \label{Eq:RANS} \end{equation} \end{subequations} where $\bar{\cdot}$ denotes time-averaged quantities, ${u}$ is the velocity component, $u^\prime$ is the fluctuating component, $p$ is the pressure, $\rho$ is the density of the fluid, $f_{i}$ is a vector representing external forces, $\delta_{ij}$ is the Kronecker delta function, $\mu$ is the dynamic viscosity, and $\bar{S_{ij}} = \frac{1}{2}\left( \frac{\partial \bar{u_i}}{\partial x_j} + \frac{\partial \bar{u_j}}{\partial x_i} \right)$ is the mean rate-of-strain tensor. The left-hand side of this equation describes the change of the mean momentum of a fluid element due to convection by the mean flow. This change is balanced by the mean body force, the isotropic stress resulting from the mean pressure field, the viscous stresses, and the stresses due to the fluctuating velocity field ${\displaystyle \left(-\rho {\overline {u_{i}^{\prime }u_{j}^{\prime }}}\right)}$, generally denoted the \textit{Reynolds stress}. This nonlinear term requires an additional model specification to close the RANS equations. Most methods deal with an explicit model for this tensor through the utilization of additional algebraic or differential equations. Some of these are based on the time evolution of the Reynolds stress equation as presented in \cite{chou1945velocity}. The bulk of our experiments in this investigation utilizes the one-equation Spalart-Allmaras \cite{spalart1992one} model (SA) for the generation of reference data sets.
\\ \indent The SA model closure is written in the following form: \begin{equation} \begin{aligned} {\frac {\partial {\tilde {\nu }}}{\partial t}}+u_{j}{\frac {\partial {\tilde {\nu }}}{\partial x_{j}}} &= C_{b1}[1-f_{t2}]{\tilde {S}}{\tilde {\nu }}+{\frac {1}{\sigma }}\{\nabla \cdot [(\nu +{\tilde {\nu }}) \nabla {\tilde {\nu }}]+C_{b2}|\nabla {\tilde {\nu }}|^{2}\}\\ &-\left[C_{w1}f_{w}-{\frac {C_{b1}}{\kappa ^{2}}}f_{t2}\right]\left({\frac {\tilde {\nu }}{d}}\right)^{2}+f_{t1}\Delta U^{2} \end{aligned} \end{equation} \begin{align*} &\nu _{t}={\tilde {\nu }}f_{{v1}}, f_{{v1}}={\frac {\chi ^{3}}{\chi ^{3}+C_{{v1}}^{3}}}, \chi :={\frac {{\tilde {\nu }}}{\nu }},{\tilde {S}}\equiv S+{\frac {{\tilde {\nu }}}{\kappa ^{2}d^{2}}}f_{{v2}}, f_{v2} = 1 -{\frac{\chi }{1+\chi f_{v1}}},\\ &f_{w}=g\left[{\frac{1+C_{{w3}}^{6}}{g^{6}+C_{{w3}}^{6}}}\right]^{{1/6}},g=r+C_{{w2}}(r^{6}-r), r\equiv {\frac {{\tilde {\nu }}}{{\tilde{S}}\kappa ^{2}d^{2}}},S={\sqrt {2\Omega _{{ij}}\Omega _{{ij}}}},\\ & f_{{t1}}=C_{{t1}}g_{t}\exp \left(-C_{{t2}}{\frac {\omega _{t}^{2}}{\Delta U^{2}}}[d^{2}+g_{t}^{2}d_{t}^{2}]\right), f_{{t2}}=C_{{t3}}\exp \left(-C_{{t4}}\chi ^{2}\right),\\ & \Omega _{{ij}}={\frac {1}{2}}(\partial u_{i}/\partial x_{j}-\partial u_{j}/\partial x_{i}),\sigma=2/3,C_{{b1}}=0.1355,C_{{b2}}=0.622,\kappa=0.41,\\ & C_{{w1}}=C_{{b1}}/\kappa^{2}+(1+C_{{b2}})/\sigma, C_{{w2}}=0.3,C_{{w3}}=2,C_{{v1}}=7.1,\\ & C_{{t1}}=1,C_{{t2}}=2,C_{{t3}}=1.1,C_{{t4}}=2 \end{align*} where $\nu_{t}$ is the turbulent eddy viscosity, $S$ is the magnitude of the vorticity, with $\Omega_{ij}$ being the rotation tensor, $d$ is the distance from the closest surface, and $\Delta U^{2}$ is the squared norm of the velocity difference relative to the trip point. The model defines a transport equation for a new viscosity-like variable ${\tilde {\nu}}$.
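The algebraic part of this closure, $\nu_t = \tilde{\nu} f_{v1}$, is simple enough to illustrate directly. The following is a minimal sketch (not the OpenFOAM implementation, which evaluates this internally) using the $f_{v1}$ definition and $C_{v1} = 7.1$ quoted above:

```python
# Sketch of the SA algebraic relation nu_t = nu_tilde * f_v1, with the model
# constant C_v1 = 7.1 quoted above. Illustrative only; the solver computes
# this internally.

C_V1 = 7.1  # SA model constant

def eddy_viscosity(nu_tilde, nu):
    """Return nu_t given the SA working variable nu_tilde and molecular nu."""
    if nu_tilde <= 0.0:
        return 0.0  # non-positive working variable gives no eddy viscosity
    chi = nu_tilde / nu                      # chi := nu_tilde / nu
    f_v1 = chi**3 / (chi**3 + C_V1**3)       # wall damping function
    return nu_tilde * f_v1

# Near the freestream condition nu_tilde = 3*nu, chi = 3 and f_v1 is small,
# so nu_t << nu_tilde; far from walls (chi >> 1), f_v1 -> 1 and nu_t -> nu_tilde.
```

This also makes the role of $f_{v1}$ apparent: it damps the eddy viscosity where $\tilde{\nu}$ is comparable to $\nu$ (near walls) and becomes inactive in the fully turbulent region.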
To solve the SA equation for the turbulence closure of RANS, the following boundary conditions are prescribed: $\tilde{\nu} = 0$ at the wall, $\tilde{\nu} = 3 \nu$ at the freestream/inlet, and a zero-gradient (Neumann) condition at the outlets. In the present work we use the SA closure to generate the steady-state field of $\nu_{t}$ by solving the RANS equations, which is used as training data for the machine learning model under development. In addition, we also utilize the two-equation RNG $k-\epsilon$ \cite{yakhot1992development} and $k-\omega$ SST \cite{menter1994two} models for demonstrating the generality of the proposed framework. These model equations are widely used for a variety of practical turbulence modeling applications for simulating complex flows. We also generate the steady-state $\nu_{t}$ field using these models. The equations for these models are omitted here for brevity but correspond to Eqs. (4.35), (4.42) and (5.10) for RNG $k-\epsilon$ in \cite{yakhot1992development} and Eqs. A1-2, A13-15 for $k-\omega$ SST in \cite{menter1994two}. Using OpenFOAM-based nomenclature, Table \ref{TableA} provides a summary of the boundary conditions for each closure model employed during the data-generation phase of our workflow.\\ Once our governing equations are formulated, they must be solved on a discrete grid. We utilize the finite-volume method to ensure local conservation of our governing laws. This method requires the numerical calculation of spatial gradients, for which we use a second-order accurate discretization. In particular, our viscous terms use a purely central discretization, whereas the advective terms use the LUST (blended linear and linear upwind) scheme to suppress any spurious Gibbs oscillations. We note that these discretization methods are commonly used in practical CFD applications. Our problem utilizes a steady-state solver given by the SIMPLE algorithm.
We built our finite volume solver and machine learning deployment framework by integrating the C-backend of Tensorflow 1.14 \cite{tensorflow2015-whitepaper} into OpenFOAM 5.0. \vspace{0.5cm} \begin{table} \centering \footnotesize \resizebox{\textwidth}{!}{\begin{tabular}{llllllllll} \multicolumn{10}{c}{OpenFOAM Boundary Conditions} \\ \cline{1-10} \\ \multicolumn{10}{c}{SA} \\ \cline{1-10} \multicolumn{4}{l|}{Boundary Type} & \multicolumn{3}{c}{$\nu_{t}$} & \multicolumn{3}{c}{$\tilde{\nu}$} \\ \cline{1-10} \multicolumn{4}{l|}{Inlet} & \multicolumn{3}{c}{\texttt{calculated}} & \multicolumn{3}{c}{$3\nu$} \\ \multicolumn{4}{l|}{Outlet} & \multicolumn{3}{c}{\texttt{calculated}} & \multicolumn{3}{c}{\texttt{zeroGradient}} \\ \multicolumn{4}{l|}{Wall} & \multicolumn{3}{c}{\texttt{fixedValue}$=0.0$} & \multicolumn{3}{c}{0.0} \\ \cline{1-10} \\ \multicolumn{10}{c}{$k-\omega$ SST} \\ \cline{1-10} \multicolumn{4}{l|}{Boundary Type} & \multicolumn{2}{c}{$k$} & \multicolumn{2}{c}{$\omega$} & \multicolumn{2}{c}{$\nu_t$} \\ \cline{1-10} \multicolumn{4}{l|}{Inlet} & \multicolumn{2}{c}{\texttt{fixedValue}} & \multicolumn{2}{c}{\texttt{fixedValue}} & \multicolumn{2}{c}{\texttt{calculated}} \\ \multicolumn{4}{l|}{Outlet} & \multicolumn{2}{c}{\texttt{zeroGradient}} & \multicolumn{2}{c}{\texttt{zeroGradient}} & \multicolumn{2}{c}{\texttt{calculated}} \\ \multicolumn{4}{l|}{Wall} & \multicolumn{2}{c}{\texttt{kqRwallFunction}} & \multicolumn{2}{c}{\texttt{omegaWallFunction}} & \multicolumn{2}{c}{\texttt{nutUSpaldingWallFunction}} \\ \cline{1-10} \\ \multicolumn{10}{c}{$k-\epsilon$ RNG} \\ \cline{1-10} \multicolumn{4}{l|}{Boundary Type} & \multicolumn{2}{c}{$k$} & \multicolumn{2}{c}{$\epsilon$} & \multicolumn{2}{c}{$\nu_t$} \\ \cline{1-10} \multicolumn{4}{l|}{Inlet} & \multicolumn{2}{c}{\texttt{fixedValue}} & \multicolumn{2}{c}{\texttt{fixedValue}} & \multicolumn{2}{c}{\texttt{calculated}} \\ \multicolumn{4}{l|}{Outlet} & \multicolumn{2}{c}{\texttt{zeroGradient}} & 
\multicolumn{2}{c}{\texttt{zeroGradient}} & \multicolumn{2}{c}{\texttt{calculated}} \\ \multicolumn{4}{l|}{Wall} & \multicolumn{2}{c}{\texttt{kqRwallFunction}} & \multicolumn{2}{c}{\texttt{epsilonWallFunction}} & \multicolumn{2}{c}{\texttt{fixedValue=0.0}} \\ \cline{1-10} \end{tabular}}\\ \caption{OpenFOAM-based nomenclature to describe the boundary conditions for each turbulence model employed in this study.} \label{TableA} \end{table} \subsection{ML-RANS Integration}\label{subsection:ml-rans} The core idea behind this investigation is the development of a surrogate turbulent eddy-viscosity model that bypasses the solution of any extra closure equations. Through this we aim to obtain a computational speed-up by reducing the number of PDEs that must be iterated to steady state. While the previous section has introduced the SA equation as our reference model, we note that models of any fidelity can be used to generate the data needed by the framework. This is also demonstrated below for two examples of two-equation models. The overall procedure is as follows: \begin{enumerate} \item \underline{Data generation phase} \begin{itemize} \item Select numerical experiment locations in the design space. In our study, this corresponds to different boundary conditions for inlet velocity and different step heights. \item Generate initial conditions for the numerical experiments using low-fidelity approximations (such as potential flow). \item Generate steady-state turbulent eddy-viscosity profiles using an \emph{a priori} specified closure model (in this case SA). \item Augment initial condition information through feature preprocessing or geometric information embedding (for instance, distance from the wall). \item Save training data in the form of pointwise input-output pairs of initial conditions and steady-state turbulent eddy viscosities.
\end{itemize} \vspace{0.2cm} \item \underline{Training phase} \begin{itemize} \item Train an ML surrogate to predict the pointwise steady-state turbulent eddy viscosity field given low-fidelity initial conditions as input. Optionally, the ML surrogate may also be given explicit information about the control parameter. \item Train the framework using 90\% of the total data set, referred to as the training data, keeping the rest (10\%) as validation data for model selection and to detect any potential overfitting of the model. {Training and validation data are both comprised of samples from all control parameters.} \item Save the trained model for deployment. \end{itemize} \vspace{0.2cm} \item \underline{Testing phase} \begin{itemize} \item Choose a new design point in experiment space, for instance, a new value for inlet velocity or step height of the backward facing step {which was not present in the training data}. \item Generate initial conditions for this new point using the low-fidelity approximation; these will be the \emph{test} inputs to the trained framework. Augment inputs with control parameter information if needed. \item Deploy the previously trained framework to predict an approximation for the steady-state turbulent eddy viscosity and fix this quantity permanently. \item Solve the Reynolds-averaged pressure and velocity equations to steady-state while utilizing the fixed steady-state turbulent eddy viscosity. \item Assess the impact of the surrogate eddy viscosity through quantitative metrics. \end{itemize} \end{enumerate} We now proceed with a tabulation of our input and output features in this study. Our inputs identify the region in space for model deployment through the finite-volume cell-centered coordinates and the initial conditions. We note that our inputs and outputs were scaled to zero mean and unit variance for each feature to enable easier training. In this particular study we undertake two experiments.
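The three-phase procedure enumerated above can be sketched in a few lines; the routines below (\texttt{potential\_flow}, \texttt{rans\_solve}) are hypothetical stubs standing in for the OpenFOAM components, and the cell count is illustrative:

```python
import random

# Hedged sketch of the data-generation and training-data-assembly phases
# described above. potential_flow and rans_solve are hypothetical stubs
# standing in for the low-fidelity initializer and the converged SA solve;
# the 100-cell mesh is illustrative only.

N_CELLS = 100  # placeholder mesh size

def potential_flow(design_point):
    # Low-fidelity initial condition: one feature record per mesh cell.
    return [(design_point, cell_id) for cell_id in range(N_CELLS)]

def rans_solve(design_point):
    # Stand-in for a converged RANS/SA solve: steady-state nu_t per cell.
    return [random.random() for _ in range(N_CELLS)]

def build_dataset(design_points):
    # Pointwise input-output pairs of initial conditions and steady-state nu_t.
    samples = []
    for p in design_points:
        samples.extend(zip(potential_flow(p), rans_solve(p)))
    return samples

def split(samples, train_fraction=0.9):
    # 90% training / 10% validation, drawn from all control parameters.
    random.shuffle(samples)
    n_train = int(train_fraction * len(samples))
    return samples[:n_train], samples[n_train:]

design_points = [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]  # inlet velocities (m/s)
train, val = split(build_dataset(design_points))
```

The testing phase then replaces \texttt{rans\_solve} by the trained surrogate evaluated at an unseen design point, with the predicted $\nu_t$ held fixed while the pressure and velocity equations are converged.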
In our first study, we train an ML surrogate given by \begin{align} \mathbb{M}_1 : u^p (\textbf{x}), v^p (\textbf{x}), x^c (\textbf{x}), y^c (\textbf{x}) \rightarrow \nu_t (\textbf{x}) \end{align} and interpolate between inlet velocities. Here $u^p$ and $v^p$ indicate the horizontal and vertical velocities resulting from the initial conditions, and $x^c$, $y^c$ are the cell-centered coordinates of the grid in the domain. The output, $\nu_t(\textbf{x})$, is the steady-state turbulent eddy viscosity obtained from the chosen RANS closure strategy. Our second study interpolates between different geometries (by varying the step height). For this purpose, we establish a map that is augmented with control parameter information as follows \begin{align} \mathbb{M}_2 : u^p (\textbf{x}), v^p (\textbf{x}), x^c (\textbf{x}), y^c (\textbf{x}), h \rightarrow \nu_t (\textbf{x}) \end{align} where our original set of input features has been augmented by the step height $h$. \begin{remark} {The primary motivation for using potential flow solution quantities as input features stems from a desire to have low-fidelity features (that are easily computable) for training the ML map. This aligns with several ML-based emulation studies where low-fidelity models are used to reconstruct the influence of an unavailable fidelity (for example in RANS \cite{ling2016reynolds,wang2017physics,wu2018physics,matai2019zonal,sotgiu2019towards,taghizadeh2020turbulence} and large eddy simulation \cite{gamahara2017searching,maulik2017neural,maulik2019sub} subgrid modeling). The use of potential flow is one such approach for constructing a map, but it is not the sole approach we espouse. Any underlying information about the system may be utilized for training the $\nu_t$ emulator provided the learning is accurate and generalizable.} \end{remark} Both our surrogate maps $\mathbb{M}_1$ and $\mathbb{M}_2$ are given by a neural network with 6 hidden layers and 40 neurons in each layer.
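As an illustrative sketch (not the TensorFlow implementation used in this work), the forward map of such a network can be written as follows, assuming the tanh hidden activations, linear output, and deployment-time truncation of negative predictions described in the text; the weights here are random placeholders:

```python
import numpy as np

# Minimal NumPy sketch of the surrogate forward pass: 6 hidden layers of
# 40 tanh units and a linear scalar output. Weights are random placeholders;
# the paper trains the equivalent TensorFlow network with ADAM.

rng = np.random.default_rng(0)

def init_mlp(n_inputs, n_hidden=40, n_layers=6):
    sizes = [n_inputs] + [n_hidden] * n_layers + [1]
    return [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def predict_nu_t(params, x):
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)       # tanh hidden activations
    W, b = params[-1]
    out = h @ W + b                  # linear output layer
    return np.maximum(out, 0.0)      # truncate negative predictions at deployment

params = init_mlp(n_inputs=4)        # (u^p, v^p, x^c, y^c) for M1; 5 inputs for M2
features = rng.normal(size=(8, 4))   # a batch of 8 cell-centred feature vectors
nu_t = predict_nu_t(params, features)
```

Note that the map is evaluated pointwise, cell by cell, so a single trained network serves every cell of every mesh in the design space.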
We use a {tangent sigmoidal} activation for the hidden layers while the output layer uses a linear activation. In our training procedure we use the ADAM optimizer \cite{kingma2014adam} with a learning rate of 0.001. For $\mathbb{M}_1$, the fully trained network achieves a coefficient of determination\footnote{$R^2 \equiv 1 - \frac{\sum_i (y_i - y_i^{\rm{pred}})^2}{\sum_i(y_i - \bar{y})^2} $ for data $y_i$, predictions $y_i^{\rm{pred}}$ and mean $\bar{y}$. } of $R^2=0.998$ for both training and validation data sets, indicating a successful parameterization. For $\mathbb{M}_2$, we obtain a similarly high accuracy of $R^2 = 0.997$. The training for both networks was terminated using an early-stopping criterion: if the validation error did not improve for 10 epochs, training would exit with the best model (corresponding to the lowest validation loss until then). We note that, in deployment, we truncated negative value predictions, akin to the built-in limiters of the SA model itself. We note that the hyperparameters of our network and its architecture were hand-tuned for accuracy due to its relative simplicity. Further analysis of this training will be performed in Section \ref{ML_Section}. \begin{remark} {The utilization of a tangent sigmoidal (tanh) activation ensures that $\nu_t$ predictions are differentiable with respect to the input mesh coordinates. While deep learning methods have demonstrated success with the rectified linear (ReLU) activation in addressing the vanishing gradient problem \cite{hochreiter1998vanishing}, the smooth transformation obtained via tanh aids us in avoiding $\nu_t$ profiles with discontinuous gradients, as would arise from the piecewise-linear reconstructions of ReLU.} \end{remark} \section{Results} In the following section, we outline the effectiveness of our machine learning approach through \emph{a priori} analyses, followed by a discussion of the results from its deployment as a surrogate model.
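For reference, the coefficient of determination defined in the footnote above is straightforward to evaluate; a minimal sketch:

```python
# Sketch of the R^2 metric defined in the footnote above, used throughout
# to assess the surrogate fits (standard library only).

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot

# A perfect prediction gives R^2 = 1; always predicting the mean gives R^2 = 0.
```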
\subsection{Machine learning} \label{ML_Section} We proceed by outlining the effectiveness of the machine learning through statistical estimates of its performance. Figure \ref{Fig_ML_Training} shows the progress to convergence for our learning frameworks. It was observed that approximately 40 epochs were sufficient for an accurate parameterization of the input-output relationship, after which the early-stopping criterion terminated training. The training data set for $\mathbb{M}_1$ consisted of 205,390 samples whereas $\mathbb{M}_2$ consisted of 808,882 samples. The reader may note here that, while $\mathbb{M}_2$ was sampled across only 6 different step heights (compared to 10 different boundary velocities for $\mathbb{M}_1$), the varying geometries for each step height contributed to a greater number of training points in total. Of the total number of samples, 90\% were retained for the purpose of training while the rest were utilized for validation. \begin{figure} \centering \mbox{ \subfigure[$\mathbb{M}_1$ Convergence]{\includegraphics[width=\textwidth]{M1_Convergence.png}} } \\ \mbox{ \subfigure[$\mathbb{M}_2$ Convergence]{\includegraphics[width=\textwidth]{M2_Convergence.png}} } \caption{Convergence while training the machine learning surrogates $\mathbb{M}_1$ (top) and $\mathbb{M}_2$ (bottom).} \label{Fig_ML_Training} \end{figure} A statistical assessment of the learning is shown in Figure \ref{Fig_ML_Assessment}. We note that these plots utilize training and validation data for all simulations for the purpose of \emph{a priori} assessment. The scatter plots show a good agreement between modeled and true magnitudes of $\nu_t$, particularly for the higher magnitudes. Some deviation is observed at the lower magnitudes, potentially due to slight inaccuracies in the characterization of the transition from the separation zone to the free-stream.
Notice that the probability density for $\mathbb{M}_1$ has a much smaller maximum value than that for $\mathbb{M}_2$, indicating a wider spread of data; this may be explained by differences in the range of $\nu_t$ sampled in the two training data sets. The probability distribution plots for the predicted and true models also show good agreement. We note that the $\nu_t$ magnitudes are captured accurately by the predictions. As a further exploration of the benefits of using a deep learning framework, the \emph{a priori} performance of the surrogate $\mathbb{M}_1$ is assessed against linear and polynomial regression using the same inputs and output. To account for on-node memory limitations for this study, we truncated the polynomial order to 10. The $R^2$ values (for validation data) for these assessments are shown in Table \ref{Table2}. The proposed framework can be seen to outperform the linear and polynomial regression methods. {At this point, it is important to note that the number of trainable parameters of the polynomial methods is considerably lower than that of the DNN, particularly at lower orders of approximation. While this might imply computational competitiveness for this particular test case, polynomial models often suffer from the bias-variance trade-off, where increasing the order leads to very poor generalization. This can be observed with a gradual increase in the polynomial order, where the validation $R^2$ saturates (at a value less than that obtained by the DNN) and drops suddenly once overfitting has started.
In addition, while DNNs may be over-parameterized, they may readily be implemented by using freely available optimized machine learning libraries which take advantage of specialized computer hardware such as graphical and tensor processing units for acceleration.} \begin{figure} \centering \mbox { \subfigure[$\mathbb{M}_1$ Scatter ]{\includegraphics[width=0.5\textwidth]{M1_Scatter.png}} \subfigure[$\mathbb{M}_1$ Density]{\includegraphics[width=0.5\textwidth]{M1_Density.png}} } \\ \mbox { \subfigure[$\mathbb{M}_2$ Scatter ]{\includegraphics[width=0.5\textwidth]{M2_Scatter.png}} \subfigure[$\mathbb{M}_2$ Density]{\includegraphics[width=0.5\textwidth]{M2_Density.png}} } \\ \caption{An \emph{a priori} statistical assessment of the trained surrogate models $\mathbb{M}_1$ (top) and $\mathbb{M}_2$ (bottom) for learning $\nu_t$.} \label{Fig_ML_Assessment} \end{figure} \begin{table} \scriptsize \centering \begin{tabular}[b]{cccccccccccccc} \multicolumn{12}{c}{Validation-$R^2$} \\ \hline Order & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & DNN\\ \hline $\mathbb{M}_1$ & 0.146 & 0.249 & 0.421 & 0.745 & 0.861 & 0.937 & 0.959 & 0.972 & 0.981 & 0.967 & 0.998 \\ \hline $\mathbb{M}_2$ & 0.266 & 0.419 & 0.548 & 0.805 & 0.915 & 0.949 & 0.969 & 0.959 & 0.915 & 0.031 & 0.998 \\ \hline \vspace{0.2cm} \end{tabular} \begin{tabular}[b]{cccccccccccccc} \multicolumn{12}{c}{Number of parameters} \\ \hline Order & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & DNN\\ \hline $\mathbb{M}_1$ & 5 & 15 & 35 & 70 & 126 & 210 & 330 & 495 & 715 & 1001 & 10,081\\ \hline $\mathbb{M}_2$ & 6 & 21 & 56 & 126 & 252 & 462 & 792 & 1287 & 2002 & 3003 & 10,121\\ \hline \end{tabular} \caption{Validation $R^2$ values (top) and number of trainable parameters (bottom) for different order polynomial fits for the proposed experiments. The proposed deep learning framework (DNN) is seen to outperform the polynomial regressions. 
It was also observed that RANS deployments did not converge to the preset solver tolerances when using the linear and polynomial regressions.} \label{Table2} \end{table} \subsection{Generalization across boundary conditions} In the following sections, we outline the results from our proposed formulation for an interpolation task within a range of control parameters sampled for training data by the SA turbulence model. We remind the reader that the machine learning framework predicts the steady-state viscosity when the simulation is initialized and the solver then utilizes this fixed viscosity for its progress to convergence. We first outline results from the evaluation of surrogate $\mathbb{M}_1$. We shall be deploying our proposed framework on two \emph{testing} situations with inlet velocity conditions given by 44.2 m/s (denoted S1) and 49.5 m/s (denoted S2). Both these inlet velocity conditions constitute data that is unseen by the network during training. Our training data set consists of steady-state $\nu_t$ magnitudes obtained from using Spalart-Allmaras on inlet velocities of 40, 41, 42, 43, 44, 45, 46, 47, 48 and 49 m/s. Figure \ref{Fig_Lineplot_1} shows a line plot for the velocity magnitudes ($|U|$) (for both S1 and S2) at probe location 1 (previously defined in Section \ref{Problem}). A close agreement is observed between the converged simulation using the machine learning surrogate and the standard deployment of SA. A strength of this proposed approach is that the velocity and pressure solvers were untouched during our surrogate modeling and the converged fields preserve their respective symmetries. Figure \ref{Fig_Lineplot_2} shows the performance of the network in predicting the output quantity (i.e., $\nu_t$) at probe location 1. 
{The data-driven map is seen to capture the $\nu_t$ profiles accurately.} \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_2.png} \caption{$|U|$ predictions (in m/s) at probe location 1 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_1.png} \caption{Turbulent eddy-viscosity predictions (in m\textsuperscript{2}/s) at probe location 1 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_2} \end{figure} Further assessments of the surrogate are performed for both our testing simulations at different probe locations for $|U|$ and $\nu_t$. As shown in Figure \ref{Fig_Lineplot_3} and Figure \ref{Fig_Lineplot_4}, a good agreement between the one-equation model and the surrogate is once again observed downstream of the step. Our third probe location (located before the flow reaches the step) also shows that the deep learning framework approximates the behavior of SA appropriately, as shown in Figures \ref{Fig_Lineplot_5} and \ref{Fig_Lineplot_6}. In addition, Figure \ref{Fig_Contour_2} shows $L_1$ errors between truth and prediction, where it is observed that errors are an order of magnitude lower than the quantities of interest. Finally, we assess the accuracy of the proposed framework in terms of skin friction prediction in Figure \ref{Fig_CF_1}. The skin-friction coefficient is defined as \begin{align} C_f = \frac{\tau_w}{\frac{1}{2} \rho U^2}, \end{align} where $\tau_w = \rho \nu_{eff} \frac{\partial \bar{u}_1}{\partial y}$ is the wall shear stress, $\nu_{eff} = \nu_t + \nu$ is the effective kinematic viscosity and $U$ is the free-stream inlet velocity (as introduced previously). The skin-friction coefficient may be used to determine the reattachment point of the flow after it has separated over the step.
This is manifested by $C_f$ crossing zero at a certain distance downstream of the step, indicating that the flow has reattached to the lower wall. {Our proposed framework with $\mathbb{M}_1$ is seen to predict the reattachment location well. However, slight deviations are observed downstream of the reattachment before the skin-friction profile matches that of the PDE-based model.} This may be explained by the point-wise nature of the predictions for $\nu_t$, which may leave residual noise in the directly predicted `steady-state' turbulent eddy-viscosity. \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_4.png} \caption{$|U|$ predictions (in m/s) at probe location 2 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_3} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_3.png} \caption{Turbulent eddy-viscosity predictions (in m\textsuperscript{2}/s) at probe location 2 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_4} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_6.png} \caption{$|U|$ predictions (in m/s) at probe location 3 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_5} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_5.png} \caption{Turbulent eddy-viscosity predictions (in m\textsuperscript{2}/s) at probe location 3 for surrogate deployment S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_6} \end{figure} \begin{figure} \centering \mbox{ \subfigure[$\nu_t$ $L_1$-error]{\includegraphics[trim={0 6cm 0 0},clip,width=0.48\textwidth]{nut_Err.png}} \subfigure[$|U|$ $L_1$-error]{\includegraphics[trim={0 6cm 0 0},clip,width=0.48\textwidth]{Vel_Err.png}} } \caption{$L_1$ errors for deploying the ML framework on test-case S1 using data-driven map
$\mathbb{M}_1$. Errors are concentrated near the region of the step, due to mesh anisotropy as well as complicated physics. In addition, error spikes are seen on the upper boundary. Note that error magnitudes are an order of magnitude less than the quantities of interest.} \label{Fig_Contour_2} \end{figure} \begin{figure} \centering \mbox{ \includegraphics[width=0.48\textwidth]{Figures/CF_S1.png} \includegraphics[width=0.48\textwidth]{Figures/CF_S2.png} } \caption{Skin-friction coefficient predictions downstream of the step for deploying the ML framework on test-cases S1 (left) and S2 (right) using data-driven map $\mathbb{M}_1$. The ML model is seen to over-estimate flow reattachment length by a small amount.} \label{Fig_CF_1} \end{figure} \subsubsection{Speedup from the proposed approach} One of the advantages of the proposed formulation is the removal of the additional PDE for $\tilde{\nu}$ that is used to specify $\nu_t$. While one might expect a speedup due to the reduced dimensionality of the coupled PDE system, it is not clear if this would manifest in a reduced time to solution, due to the complicated interplay between deep-learning prediction errors and the steady-state solvers for velocity and pressure. Through an empirical assessment, we determined that the proposed methodology allows for considerably \emph{larger} relaxation factors for the steady-state velocity and pressure solvers. In the two test cases considered above, relaxation factors of 0.9 for both pressure and velocity led to a converged solution in 687 iterations of the steady-state solver for S1 and 716 iterations for S2. The SA implementations, however, needed 3696 and 3646 iterations for convergence while utilizing relaxation factors of 0.5 for pressure, 0.9 for velocity and 0.3 for $\tilde{\nu}$. We note that the relaxation factors for this deployment were hand-tuned to obtain convergence and that speedup factors are relative.
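The reattachment criterion discussed above (the first zero crossing of $C_f$ downstream of the step) can be located numerically from sampled wall data; a minimal sketch on synthetic values:

```python
# Sketch of locating the reattachment point as the first negative-to-positive
# zero crossing of the skin-friction coefficient downstream of the step, per
# the criterion discussed above. The data here are synthetic; in practice C_f
# is sampled along the lower wall.

def reattachment_point(x, cf):
    for i in range(1, len(cf)):
        if cf[i - 1] < 0.0 <= cf[i]:
            # linear interpolation between the bracketing samples
            t = -cf[i - 1] / (cf[i] - cf[i - 1])
            return x[i - 1] + t * (x[i] - x[i - 1])
    return None  # flow has not reattached within the sampled region

# Synthetic example: C_f is negative inside the recirculation bubble and
# crosses zero six step heights downstream.
x = [0, 2, 4, 6, 8, 10]                              # distance in step heights
cf = [-0.004, -0.003, -0.001, 0.0, 0.002, 0.003]     # sampled C_f values
```

Comparing this estimate between the surrogate and PDE-based $C_f$ profiles gives a single scalar measure of the over-estimation of reattachment length noted in the caption of Figure \ref{Fig_CF_1}.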
A graphical representation of the speedup is provided in Figure \ref{Fig_Lineplot_7}. In terms of time to solution for experiment S1, the proposed framework required 14.49 seconds for convergence whereas the SA model required 102.76 seconds. For experiment S2, the corresponding times-to-solution were 15.08 and 112.60 seconds. We note that all experiments were performed using a serial execution of OpenFOAM on an Intel Core i7 processor with a clock speed of 1.90 GHz. \begin{figure} \centering \includegraphics[width=\textwidth]{Figure_11.png} \caption{Residual plots for deploying the ML framework on test-cases S1 (left) and S2 (right) exhibiting the speedup from the proposed approach using data-driven map $\mathbb{M}_1$.} \label{Fig_Lineplot_7} \end{figure} \begin{remark} {It must be noted that the current framework accelerates the solution of the system to a different end-result, as observed in Figure \ref{Fig_CF_1}. An accurate data-driven map for the turbulent eddy viscosity may lead to a small deviation of quantities of interest between the true and reduced-order solutions. This is different from methods that accelerate the solution of the system to the same end-result (for example by using preconditioners, multigrid methods or Krylov solvers).} \end{remark} \subsection{Generalization across geometries} In the following section, we assess the utility of the proposed surrogate model, $\mathbb{M}_2$, for predicting the steady-state turbulent eddy viscosity across mesh configurations \textcolor{black}{instead of across boundary conditions}. A series of tests is performed to assess the viability of the methodology for generalization across geometries. These are performed by generating training data for different step heights \textcolor{black}{of 0.5$h$, 0.75$h$, 1.25$h$, 1.5$h$, 1.75$h$, 2.0$h$ (and therefore different meshes)} and assessing interpolation between them.
\textcolor{black}{We fix the inlet velocity at 44.2 m/s to assess generalization across geometries within computational constraints. A parameter sweep across both varying geometries and boundary conditions would require many more full-order solves, but the conclusions from this study would hold regardless.} Recall that to set up a surrogate modeling framework for $\nu_t$ in this context, our machine learning framework is adapted to allow for the step height as an explicit feature input in addition to the initial conditions and mesh coordinate points. {We then test the trained model for predicting the eddy viscosity profile for a value of step height ($1.9h$) held out from the training data set.} The results for the $\nu_t$ and $|U|$ profiles at three locations are shown in Figure \ref{fig:dg_pred_2}. For $\nu_t$ magnitudes, the ML framework is able to predict accurately near the top wall but is prone to overestimating the magnitude on the lower wall. However, we draw attention to the fact that the trained network is able to successfully predict a larger magnitude of $\nu_t$ (associated with true predictions from SA for a larger step height). {Also, the framework successfully reproduces the qualitative trends in the $\nu_t$ profiles on the lower wall with increasing distance from the step.} For velocity magnitudes, the resulting steady-state solution using the ML model leads to a mismatch in the behavior for location 1 (within the recirculation region). However, accurate results are observed at locations 2 and 3. For instance, successfully learning $\mathbb{M}_2$ allows the framework to accurately predict that location 3 will still have recirculation for this step height, unlike with height $h$. {Subsequently, we assess skin-friction predictions for the test step-height in Figure \ref{Fig_CF_2} (a). The performance is seen to be similar to that observed for the previous set of assessments.
The reattachment location is captured well, and slight deviations are observed downstream of the reattachment before the skin-friction profile matches that of the PDE-based model. We also perform a test assessing the deployment of trained models at varying mesh densities. This test retains the geometry of height $1.9h$ but utilizes a mesh with 292,214 degrees of freedom. We remind the reader that the previous testing geometry with the same step height possessed 131,374 cells. Therefore, our trained ML model is tasked with recreating $\nu_t$ profiles at a mesh density unseen during training. We observe similar results for the coefficient of friction obtained from these fine-mesh assessments, as shown in Figure \ref{Fig_CF_2}(b).} {Finally, the slight errors of the ML framework can be contrasted with the computational benefits of the proposed workflow, which led to a converged solution in only 2132 iterations (as opposed to 14,340 iterations for the PDE-based model) for the test step height of 1.9$h$ with a standard mesh. The refined mesh required 3299 iterations (as opposed to 16,584 iterations for the PDE-based model).} This is shown in Figure \ref{fig:dg_iter}. Contour plots for the standard mesh $1.9h$ test case are shown in Figure \ref{Fig_Contour_3}, where one can observe relatively low velocity magnitude errors in regions far away from the step. The larger separation region is also recreated appropriately, with the recirculation zones for step height 1.9$h$ being predicted by the proposed methodology. The recirculation region shows a few hot spots where the error is relatively higher. We note that improved results are expected if greedy sampling of the parameter space is incorporated into this workflow.
\begin{figure} \centering \mbox{ \subfigure[Location 1]{\includegraphics[width=0.33\textwidth]{DG_nut_4.png}} \subfigure[Location 2]{\includegraphics[width=0.33\textwidth]{DG_nut_5.png}} \subfigure[Location 3]{\includegraphics[width=0.33\textwidth]{DG_nut_6.png}} } \mbox{ \subfigure[Location 1]{\includegraphics[width=0.33\textwidth]{DG_U_4.png}} \subfigure[Location 2]{\includegraphics[width=0.33\textwidth]{DG_U_5.png}} \subfigure[Location 3]{\includegraphics[width=0.33\textwidth]{DG_U_6.png}} } \caption{Predictive capability of the ML framework for unseen geometries (i.e., data-driven map $\mathbb{M}_2$). The assessments in these figures are for a case where the backward facing step with height $1.9h$ was \emph{not} a part of the training data set. Instead, training data was generated from step heights of 0.5$h$, 0.75$h$, 1.25$h$, 1.5$h$, 1.75$h$, 2.0$h$. The figure shows line plots of $\nu_t$ (top) and velocity magnitude (bottom) for three different probe locations.} \label{fig:dg_pred_2} \end{figure} \begin{figure} \centering \mbox{ \subfigure[Standard Mesh]{\includegraphics[width=0.48\textwidth]{Figures/CF_DG2.png}} \subfigure[Fine Mesh]{\includegraphics[width=0.48\textwidth]{Figures/CF_DG2_Fine.png}} } \caption{Skin-friction coefficient predictions downstream of the step for deploying the ML framework on a test step height of $1.9h$ using data-driven map $\mathbb{M}_2$. {The finer mesh possessed approximately twice as many degrees of freedom as the standard mesh.}} \label{Fig_CF_2} \end{figure} \begin{figure} \centering \subfigure[Standard Mesh]{\includegraphics[width=0.48\textwidth]{DG_iter_2.png}} \subfigure[Fine Mesh]{\includegraphics[width=0.46\textwidth]{DG_iter_2_Fine.png}} \caption{Progress to convergence for the experiment assessing ML framework generalization across geometries (i.e., data-driven map $\mathbb{M}_2$). Results are shown for a step height of $1.9h$. The ML framework offers approximately 5 times faster convergence than the PDE-based model.
This computational speedup is similar to the speedup for the boundary condition interpolation case in Figure \ref{Fig_Lineplot_7}. } \label{fig:dg_iter} \end{figure} \begin{figure} \centering \mbox{ \subfigure[$\nu_t$ $L_1$-error]{\includegraphics[trim={0 6cm 0 0},clip,width=0.48\textwidth]{nut_Err_DG.png}} \subfigure[$|U|$ $L_1$-error]{\includegraphics[trim={0 6cm 0 0},clip,width=0.48\textwidth]{Vel_Err_DG.png}} } \caption{$L_1$-error contour plots for a backward facing step of height $1.9h$ for the data-driven map $\mathbb{M}_2$ and using a standard mesh. Note that the training set for the ML surrogate did not include this control parameter configuration. Also, errors are one order of magnitude lower than their corresponding field quantities.} \label{Fig_Contour_3} \end{figure} \subsection{Surrogate for two-equation models} To demonstrate the utility of the current machine learning surrogate for turbulent eddy-viscosity models that utilize higher fidelity approximations, we also train $\mathbb{M}_1$ on training data generated from two different two-equation models, namely, the $k-\omega$ SST approximation for $\nu_t$ and the RNG $k-\epsilon$ model. We preface our results for this assessment by stating that these demonstrations are performed for training and testing solely on one boundary condition (i.e., inlet velocity) for proof-of-concept. We note that the data for this assessment was generated using first-order upwind methods whereas the SA deployments utilized second-order accurate discretizations. These discretizations were kept consistent for both the training data generation and the surrogate deployments. Figure \ref{Fig_KO_RNG} shows the results of the surrogate modeling framework for the two additional turbulence modeling strategies. 
{While the ML surrogate captures the near-wall trends of the eddy-viscosity profile well for both two-equation models, slight inaccuracies are observed for the transition to free-stream velocities.} We note that despite the use of lower-order methods in the training data generation, which led to a reduced number of iterations for convergence (871 and 990 iterations respectively for $k-\omega$ SST and $k-\epsilon$ RNG), the ML framework required 424 and 451 iterations respectively. This indicates a considerable acceleration as well, {without compromising on accuracy}. \begin{figure} \centering \mbox{ \subfigure[$|U|$ $k-\omega$ SST (m/s)]{\includegraphics[width=0.5\textwidth]{Figure_8.png}} \subfigure[$\nu_t$ $k-\omega$ SST (m\textsuperscript{2}/s)]{\includegraphics[width=0.5\textwidth]{Figure_7.png}} } \\ \mbox{ \subfigure[$|U|$ $k-\epsilon$ RNG (m/s)]{\includegraphics[width=0.5\textwidth]{Figure_10.png}} \subfigure[$\nu_t$ $k-\epsilon$ RNG (m\textsuperscript{2}/s)]{\includegraphics[width=0.5\textwidth]{Figure_9.png}} } \caption{Surrogate modeling results for the training data generated from different closures with $k-\omega$ SST (top) and $k-\epsilon$ RNG (bottom) showing the ability of the proposed framework to reconstruct higher fidelity fields at probe location 1. This experiment learns $\mathbb{M}_1$ for two-equation eddy-viscosity models.} \label{Fig_KO_RNG} \end{figure} \section{Conclusion \& Future work} This investigation outlines results from the development of a machine learning framework that converts the RANS closure problem into an interpolation task. Our proposed formulation utilizes a neural network surrogate model that is trained on data generated by sampling a region in parameter space and which is deployed in the vicinity of this space for instant prediction of steady-state turbulent eddy-viscosity profiles.
The inputs to the deep learning framework are given by geometry-specific initial conditions leveraging the potential flow framework. The framework is tested by providing surrogates to the SA, $k-\omega$ SST, and $k-\epsilon$ RNG turbulence closures for a two-dimensional backward-facing step problem and leads to an accurate reconstruction of steady-state solutions. The workflow is also extended to a situation where different meshes (belonging to the same class of geometries) are used to train an interpolation framework spanning the range of a physical dimension given by the height of the backward-facing step. {Tests on a finer mesh for the same step height indicate that the proposed framework can generalize to some variation in mesh fidelity.} The preclusion of an additional set of equations for turbulent eddy-viscosity calculation leads to a significant reduction in the time to solution for the RANS problem using a surrogate closure. The ML surrogate trained on the SA equation to predict steady-state turbulent eddy-viscosities for an unseen initial condition is 5-8 times faster than a full RANS simulation. The framework is also tested for interpolation within a range of different step heights to determine the workflow's viability for training data coming from different geometries, and similar speed-up and accuracy are obtained. A speed-up factor of approximately 2 is obtained for additional assessments on learning from two-equation models that use first-order upwinding for the advective term. We note that the formulation avoids any modification to the velocity and pressure solvers, thereby preserving the symmetries of the steady-state RANS equations. The conclusions from this investigation suggest the feasibility of building surrogates for sub-grid quantities for statistically steady-state problems using high-fidelity methods such as LES.
Further investigation into physics-regularized optimization of the steady-state mapping procedure (for instance, to ensure alignment with our understanding of sub-grid mathematical properties) is also underway for greater confidence in the neural network predictions. {The current study utilizes uniformly spaced samples for the different inlet velocities or step heights for generating the training data set. The generation of a representative machine learning training data set within a limited offline computational budget is an open problem. Some promising strategies for addressing this issue include the utilization of greedy sampling strategies for sampling from a higher-dimensional parameter space. This is vital for guarding against extrapolation, where machine learning predictions deteriorate significantly. We also envision the coupling of surrogate models and training into an online feedback process where a workflow incorporates test assessments at regular intervals (or quantifies uncertainty) for the generation of a more representative data set. These costs and strategies for their mitigation will be increasingly relevant for three-dimensional simulations.} \textcolor{black}{An open area of this research is how we may be able to add interpretable outcomes from machine learning deployments; it is therefore necessary to couple deep learning deployments with input-importance assessments \cite{lundberg2017unified} for understanding the causal mechanism of the model being deployed.} The current results suggest that effective surrogate building of turbulence closure quantities using an ML framework can shift a large proportion of the online cost of parameter space exploration for assessing design quantities of interest to an offline sampling of the parameter space.
\textcolor{black}{This also has applications for multifidelity frameworks where lower fidelity simulations may be generated from the proposed surrogate model to identify useful regions of parameter space for high fidelity forward model solves.} \section*{Acknowledgments} {We would like to thank the anonymous referee whose suggestions improved this paper considerably.} This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. This research was funded in part and used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. RM acknowledges support from the Margaret Butler Fellowship at the Argonne Leadership Computing Facility. HS acknowledges support from the ALCF Director's Discretionary (DD) program for CFDML project. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests - None. \bibliographystyle{elsarticle-num-names}
\section{Introduction} \vspace{.4cm} String Theory \cite{acv}, certain other approaches to Quantum Gravity, as well as Black Hole Physics \cite{maggiore} suggest a modification of Heisenberg's Uncertainty Principle near the Planck scale to a so-called Generalized Uncertainty Principle (GUP) of the form \begin{equation} \Delta p~\Delta x \geq \frac{\hbar}{2} \left[ 1 + \beta_0 \frac{\ell_{Pl}^2}{\hbar^2} \Delta p^2 \right] \label{gup1} \end{equation} where $\ell_{Pl} = \sqrt{\frac{G\hbar}{c^3}} \approx 10^{-35}~m$ is the Planck length and $\beta_0$ is a constant, normally assumed to be of order unity. Evidently, the new second term on the RHS of (\ref{gup1}) is important only when $x,\Delta x\approx \ell_{Pl}$ or $p, \Delta p \approx p_{Pl} \approx 10^{16}~TeV/c $ (the Planck momentum), i.e. at very high energies/small length scales. Inverting Eq.(\ref{gup1}), we get $ \Delta p \leq \frac{\hbar}{\beta_0\ell_{Pl}^2} \left[ \Delta x \pm \sqrt{\Delta x^2 - \beta_0\ell_{Pl}^2} \right]$, implying the existence of a \emph{minimum} measurable length $\Delta x \geq \Delta x_{min} \equiv \sqrt{\beta_0} \ell_{Pl}$. It can be shown that the above GUP can be derived from a modified Heisenberg algebra \cite{kmm} \begin{equation} [x_i, p_j] = i \hbar [ \delta_{ij} + { \frac{\beta_0 \ell_{Pl}^2}{\hbar^2} (p^2 \delta_{ij} + 2 p_i p_j)}]~. \label{algebra1} \end{equation} On the other hand, Doubly Special Relativity (DSR) theories \cite{dsr} suggest yet another modified algebra between position and momenta \cite{cg} \begin{equation} [x_i,p_j] = i \hbar [ (1-\ell_{Pl} |\vec p|)\delta_{ij} + \ell_{Pl}^2 p_i p_j ] \label{algebra2} \end{equation} as well as the existence of a \emph{maximum} observable momentum $\Delta p \leq \Delta p_{max} \approx M_{Pl}c$.
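The minimum-length bound can be illustrated numerically. The sketch below (illustrative, not part of the original derivation) inverts Eq.~(\ref{gup1}) at saturation with $\beta_0 = 1$ and checks that no real $\Delta p$ exists below $\Delta x_{min} = \sqrt{\beta_0}\,\ell_{Pl}$:

```python
import math

# Sketch: the saturation curve Δp = (ħ/(β0 ℓPl²)) [Δx ± sqrt(Δx² − β0 ℓPl²)]
# has real branches only when Δx² ≥ β0 ℓPl², i.e. Δx ≥ sqrt(β0) ℓPl.
l_pl = 1.0e-35   # Planck length in metres (order of magnitude used in the text)
beta0 = 1.0

def delta_p_branches(dx):
    """Return the two saturation branches of Δp, or None below the minimum length."""
    disc = dx**2 - beta0 * l_pl**2
    if disc < 0:
        return None  # no real Δp: Δx below the minimal measurable length
    hbar = 1.054571817e-34
    pref = hbar / (beta0 * l_pl**2)
    return pref * (dx - math.sqrt(disc)), pref * (dx + math.sqrt(disc))

dx_min = math.sqrt(beta0) * l_pl
assert delta_p_branches(0.5 * dx_min) is None        # below Δx_min: excluded
assert delta_p_branches(2.0 * dx_min) is not None    # above Δx_min: allowed
```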
Using the Jacobi identity $ \left[ [x_i,x_j],p_k \right] + \left[ [x_j,p_k],x_i \right] + \left[ [p_k,x_i],x_j \right] = 0 $ and the assumption that space commutes with space and momenta with momenta, algebras (\ref{algebra1}) and (\ref{algebra2}) can be reconciled as limits of a single algebra of the form \cite{dv3} \footnote{We also cite reference [6] for more references to earlier works.} \begin{equation} [x_i,p_j] = i\hbar \left[ \delta_{ij} - {\alpha \left( p \delta_{ij} + \frac{p_i p_j}{p} \right) + \alpha^2 (p^2 \delta_{ij} + 3 p_i p_j )} \right]~. \end{equation} Here $\alpha = \frac{\alpha_0}{M_{Pl} c} = \frac{\alpha_0 \ell_{Pl}}{\hbar}$. Again, $\alpha_0$ is normally assumed to be of order unity. The above algebra predicts both a $\Delta x_{min}$ and a $\Delta p_{max}$. It also implies the following representation of the momentum operator in position space, $p_j = p_{0j} \left( 1 {- \alpha p_0 + 2\alpha^2 p_0^2 }\right)$, where $p_{0j} = -i\hbar\frac{\partial}{\partial x_j}$ is canonical (but unphysical) and satisfies the usual commutator $[x_i,p_{0j}]=i\hbar \delta_{ij}$.
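As a sanity check (a numerical sketch in illustrative units with $\hbar = 1$, not part of the original text), one can verify to $\mathcal{O}(\alpha^2)$ that the representation $p = p_0(1 - \alpha p_0 + 2\alpha^2 p_0^2)$ reproduces the one-dimensional form of the algebra, $[x,p] = i\hbar(1 - 2\alpha p + 4\alpha^2 p^2)$, via $[x,p] = i\hbar\, \partial p/\partial p_0$:

```python
# Check, to O(α²), that p = p0 (1 − α p0 + 2 α² p0²) reproduces the 1D algebra
# [x, p] = iħ (1 − 2αp + 4α²p²), using [x, p] = iħ dp/dp0.  Illustrative units, ħ = 1.
alpha = 1.0e-3

def p_of_p0(p0):
    return p0 * (1.0 - alpha * p0 + 2.0 * alpha**2 * p0**2)

def dp_dp0(p0):
    return 1.0 - 2.0 * alpha * p0 + 6.0 * alpha**2 * p0**2

for p0 in (0.5, 1.0, 2.0):
    p = p_of_p0(p0)
    lhs = dp_dp0(p0)                                       # derivative of the representation
    rhs = 1.0 - 2.0 * alpha * p + 4.0 * alpha**2 * p**2    # algebra, expressed in p
    assert abs(lhs - rhs) < 100.0 * alpha**3 * max(1.0, p0**3)  # agreement up to O(α³)
```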
Correspondingly, a non-relativistic Hamiltonian takes the form $ H = \frac{p^2}{2m} + V(\vec r) = \frac{p_0^2}{2m} + V(\vec r) - \frac{i\hbar^3\alpha}{m}\frac{d^3}{d x^3} $, where the last term can be considered as a Quantum Gravity induced perturbation in the time-dependent Schr\"odinger Equation \begin{eqnarray} \left[ H_0 {+ H_1} \right]\hspace{-0.3ex}\psi = \left[-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} +V(x) - {i \frac{\alpha\hbar^3}{m} \frac{d^3}{dx^3}} \right]\hspace{-0.3ex}\psi = i\hbar \frac{\partial \psi}{\partial t}~.\nonumber \label{se1} \end{eqnarray} The above equation admits a new conserved current $ J = \frac{\hbar}{2mi} \left( \psi^\star \frac{d\psi}{dx} - \psi \frac{d\psi^\star}{dx} \right) + { \frac{\alpha \hbar^2}{m} \left( \frac{d^2|\psi|^2}{dx^2} - 3 \frac{d\psi}{dx} \frac{d\psi^\star}{dx} \right)} $ and charge density $\rho = |\psi|^2~$, such that $\frac{\partial J}{\partial x} + \frac{\partial \rho}{\partial t} = 0 $. The effect of the perturbation can be illustrated, for example, with a simple harmonic oscillator, with $V=m\omega^2x^2/2$, for which the shift in the ground-state energy is, using second-order perturbation theory, $ \frac{\Delta E_{GUP(0)}}{E_0} \sim {\hbar \omega m \alpha^2}~. $ Concerning Landau Levels, for a particle of mass $m$ and charge $e$ in a constant magnetic field ${\vec B} = B {\hat z}$ with $B \approx 10~T$, ${\vec A}=Bx {\hat y}$ and cyclotron frequency $\omega_c=eB/m$, the Hamiltonian is $ H = \frac{1}{2m}\left( \vec p_0 - e \vec A\right)^2 {- \frac{\alpha}{m}\left( \vec p_0 - e \vec A\right)^3} = H_0 { - \sqrt{8 m } \alpha H_0^{\frac{3}{2}} } $ and the energy shifts are $ \frac{\Delta E_{n(GUP)}}{E_n} = { -\sqrt{8 m} \alpha (\hbar \omega_c )^{\frac{1}{2}} (n +\frac{1}{2})^{\frac{1}{2}} } \approx { - 10^{-27} \alpha_0 }~, $ from which we conclude that if $\alpha_0 \sim 1$, then $\frac{\Delta E_{n(GUP)}}{E_n}$ is too small to measure.
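The quoted order of magnitude for the Landau-level shift is easy to reproduce numerically; the sketch below assumes an electron (the text does not specify the particle) and SI values, with $M_{Pl}c \approx 6.5~\mathrm{kg\,m/s}$:

```python
import math

# Order-of-magnitude check of ΔE/E = −sqrt(8m) α (ħ ω_c)^{1/2} (n + 1/2)^{1/2}
# for an electron (assumed) in B ≈ 10 T, with α0 = 1 so that α = 1/(M_Pl c).
hbar = 1.054571817e-34      # J s
m_e = 9.1093837e-31         # kg
e = 1.602176634e-19         # C
B = 10.0                    # T
m_pl_c = 6.5                # kg m/s, Planck momentum M_Pl c (approximate)

omega_c = e * B / m_e
alpha = 1.0 / m_pl_c        # α0 = 1
n = 0
ratio = -math.sqrt(8 * m_e) * alpha * math.sqrt(hbar * omega_c) * math.sqrt(n + 0.5)
print(f"ΔE/E ≈ {ratio:.1e}")   # of order −10⁻²⁷, far below current accuracy
```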
On the other hand, with current measurement accuracy of $1$ in $10^3$, one obtains the following upper bound on the GUP parameter: $\alpha_0 < 10^{24}$. Similarly, for a Hydrogen atom with standard Hamiltonian $H_0 = \frac{p_0^2}{2m} - \frac{k}{r}~$ and perturbing Hamiltonian $H_1 = -\frac{\alpha}{m} p_0^3$, it can be shown that the GUP effect on the Lamb Shift is $ \frac{\Delta E_{n(GUP)}}{\Delta E_n} = 2 \frac{\Delta |\psi_{nlm}(0)| }{\psi_{nlm}(0)} \approx \alpha_0 \frac{4.2 \times 10^4 E_0}{27 M_{Pl} c^2} \approx 10^{-24}~\alpha_0 $~. Again, if $\alpha_0 \sim 1$, then $\frac{\Delta E_{n(GUP)}}{E_n}$ is too small, whereas with current measurement accuracy of $1$ in $10^{12}$, we infer $\alpha_0 < 10^{12}$. For some other examples, we refer the reader to our earlier papers \cite{dv1,dv2}~. Finally, we consider the free-particle Schr\"odinger equation for a particle in a box of length $L$ \cite{dv3}, with the solution $\psi(x) = {A} e^{ik'x} + B e^{-ik''x} + { C e^{\frac{ix}{2\alpha\hbar}}}$. Note the appearance of a \emph{new} oscillatory term. Here $k' = k(1 {+k\alpha\hbar})~,~k'' = k(1{-k\alpha\hbar})$ (to leading order in $\alpha$). The boundary condition $\psi(0)=0$ implies $A + B + C = 0$, and in addition to the boundary condition $\psi(L)=0$, this yields \begin{eqnarray} 2iA \sin(kL) &=& {\left|C\right| \left[ e^{-i(kL+\theta_C)} -e^{i(L/2\alpha\hbar - \theta_C)} \right]} + {\cal O}(\alpha^2)~, \end{eqnarray} where $ C = |C|e^{-i\theta_C} $. Taking real parts of both sides (assuming $A$ is real, without loss of generality), we get $ {\cos\!\left(\frac{L}{2\alpha\hbar} -\theta_C \right) = \cos(kL+\theta_C) = \cos(n\pi + \theta_C +\epsilon)} $ which has the solutions \begin{eqnarray} \frac{L}{2\alpha\hbar} = \frac{L}{2\alpha_0 \ell_{Pl}} = n\pi + 2q \pi + 2\theta_C ~\mbox{or}~ = - n\pi + 2q\pi ~~~[n,q \in \mathbb{N}]~.
\end{eqnarray} From the above we conclude that a particle can be confined only in boxes of certain discrete lengths, and we further speculate that this might indicate that all measurable lengths are quantized, since the measurement of lengths requires at least one particle, possibly many. We think that this result can be generalized to relativistic particles, as well as to the quantization of areas and volumes \cite{dv4}~. In summary, in this article we have shown that a single GUP exists, which is consistent with the predictions of Black Hole Physics, String Theory, DSR etc., and that this induces perturbations to all Hamiltonians. Applying this to a few concrete examples such as the Harmonic Oscillator, Landau Levels and the Lamb Shift, we have computed corrections due to this perturbation. From these, we concluded that if the GUP parameter $\alpha_0$ is of order unity, these corrections are probably too small to be measured at present. On the other hand, current experimental accuracies impose upper bounds on the GUP parameter. Finally, by solving the GUP-corrected Schr\"odinger equation for a particle in a box, we have shown that boundary conditions require the box length to be quantized, suggesting quantization of measurable lengths, and possibly of surfaces and volumes as well. We hope to report further on these elsewhere. This work is supported by the Natural Sciences and Engineering Research Council of Canada and the Perimeter Institute for Theoretical Physics. \vspace{-.1cm}
\section{Introduction} Determining the thermodynamic properties of strongly interacting matter is a crucial goal of high energy nuclear physics, and strong efforts are currently in place from both theory and experiment to expand the knowledge of the different phases of QCD matter. The reach of such knowledge would range from giving a better understanding of the first microseconds of evolution of the universe, to representing a major step in the study of cold and dense astrophysical objects such as neutron stars. Over the past decades, countless models have been produced in order to study and describe the behavior of strongly interacting matter in its different phases and regimes. In the last 15 years, first-principle Lattice calculations have given precise quantitative results in the baryon-antibaryon symmetric regime -- the one of the early universe -- over a broad range of temperatures, confirming the presence of a continuous phase transition at vanishing baryon density at a temperature of $T \simeq 155 \, \textrm{MeV}$ \cite{Aoki:2006we, Aoki:2009sc, Borsanyi:2010bp, Bazavov:2011nk}. It is now strongly believed \cite{Asakawa:1989bq,Halasz:1998qr,Berges:1998rc,Stephanov:2004wx,Stephanov:2007fk} that at some higher baryonic densities, the chiral/deconfinement transition would be of the first order, thus implying the presence of a critical point. Past studies \cite{Halasz:1998qr,Berges:1998rc,Pisarski:1983ms} have shown that such a critical point would be in the same universality class as the three-dimensional Ising model. The experimental search for this critical point, as well as the continuous theoretical effort to interpret experimental results in this search, have currently reached a peak of productivity in light of the BES-II program, which will take place at the Relativistic Heavy Ion Collider in the next couple of years, exploring the higher density region of the QCD phase diagram.
The main intent of the program is to locate the critical point, or alternatively rule out its presence in a broad range of densities. Hydrodynamic simulations play a major role in the study of heavy-ion collisions, and therefore in the theoretical interpretation of experimental data. The main ingredient needed in order to perform a hydrodynamic simulation is an equation of state of QCD matter that would drive the evolution of the system. Therefore, in studying the potential presence of a critical point in the range of densities accessible to the BES-II program, the need for an equation of state that includes critical behavior is indisputable. Whereas models have been used to produce such an equation of state, the purpose of this work is to generate an equation of state in a parametric form, which contains critical behavior \textit{in the right universality class}, and on the other hand \textit{matches the known first principle Lattice QCD results at vanishing baryon density}. Current knowledge of the QCD equation of state (EoS) from first principle calculations is in the form of a Taylor expansion of the pressure in the baryonic chemical potential around $\mu_B=0$. This is due to the well-known sign problem of Lattice simulations of QCD at finite density. Hence, the EoS can be given as \cite{Bazavov:2017dus,Bellwied:2015rza}: \begin{equation} \frac{P \left( T, \mu_B \right)}{T^4} = \sum_n c_{2n} (T) \left( \frac{\mu_B}{T} \right)^{2n} \, \, , \qquad \qquad c_n (T) = \left. \frac{1}{n!} \frac{\partial^n P/T^4}{\partial (\mu_B/T)^n} \right|_{\mu_B=0} \end{equation} There have been studies \cite{Gavai:2004sd,Karsch:2011yq,DElia:2016jqh,Bazavov:2017dus} aimed at determining whether, and under what requirements, it would be possible to extract information about a possible critical point just from Lattice QCD data; however, the number of coefficients in the expansion currently allows one to only partially constrain the location of such a point.
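The structure of this truncated expansion is simple to evaluate; the sketch below uses placeholder coefficient values (illustrative only, \emph{not} lattice data) and checks the built-in $\mu_B \to -\mu_B$ symmetry:

```python
# Pressure from the truncated Taylor series P/T⁴ = Σ c_{2n}(T) (μB/T)^{2n}.
# The coefficient values below are placeholders, NOT lattice results.
def pressure_over_T4(T, muB, coeffs):
    """coeffs maps the even power 2n to the coefficient c_{2n}(T) at this T."""
    return sum(c * (muB / T) ** k for k, c in coeffs.items())

c_example = {0: 1.2, 2: 0.15, 4: 0.004}   # hypothetical c0, c2, c4 at a fixed T
T, muB = 160.0, 200.0                      # MeV, illustrative
p_plus = pressure_over_T4(T, muB, c_example)
p_minus = pressure_over_T4(T, -muB, c_example)
assert p_plus == p_minus                   # only even powers of μB appear
```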
Moreover, the extent of validity of the Taylor series can never reach beyond the baryonic chemical potential at the critical point. On the other hand, the fact that the universality class to which the critical point belongs is known allows one to impose the behavior of the EoS in some region of a certain size around the critical point. \\ The strategy pursued in this work can be summarized as follows: \begin{enumerate}[i)] \item Make use of a suitable parametrization to describe the universal scaling behavior of the EoS in the 3D Ising model near the critical point; \item Map the 3D Ising model phase diagram onto the one of QCD via a parametric, non-universal change of variables; \item Use the thermodynamics of the Ising model EoS to estimate the critical contribution to the expansion coefficients from Lattice QCD; \item Reconstruct the full pressure, matching Lattice QCD at $\mu_B=0$ and including the correct critical behavior. \end{enumerate} Note that the parametric nature of an EoS constructed with the described strategy has the advantage of allowing one to study the influence of the presence of a critical point on the thermodynamics itself and on hydrodynamic simulations, as well as the disadvantage of relying on a number of parameters. However, the choice of parameters in the Ising $\longmapsto$ QCD map is not free. Current knowledge from Lattice QCD results already puts constraints on the location of the critical point as well as other parameters (more details will follow in the next section). Moreover, thermodynamic consistency requirements will have to be met by the produced EoS, thus further reducing the allowed choices of parameters.
\section{Scaling EoS in 3D Ising model and map to QCD} For the scaling EoS of the 3D Ising model, in a neighborhood of the critical point, one can use the following parametrization for the magnetization $M$, the magnetic field $h$ and the reduced temperature $r=\left( T- T_C \right)/T_C$ \cite{Nonaka:2004pg,Guida:1996ep,Schofield:1969zz,Bluhm:2006av}: \begin{align} M &= M_0 R^\beta \theta \, \, , \\ h &= h_0 R^{\beta \delta} \tilde{h}(\theta) \, \, , \\ r &= R (1- \theta^2) \, \, . \end{align} where $M_0$, $h_0$ are normalization constants, $\tilde{h}(\theta) = \theta (1 + a \theta^2 + b \theta^4)$ with $a=-0.76201$, $b=0.00804$, $\beta$ and $\delta$ are 3D Ising critical exponents \cite{Guida:1996ep}, and the parameters take on the values $R \geq 0$, $\left| \theta \right| \leq \theta_0 \simeq 1.154$, $\theta_0$ being the first non-trivial zero of $\tilde{h}(\theta)$. \begin{figure}[h] \includegraphics[width=\textwidth]{Ising_QCD.png} \caption{The 3D Ising model phase diagram is mapped onto the QCD one by means of a linear transformation.} \label{fig:IsQCD} \end{figure} Following \cite{Nonaka:2004pg, Guida:1996ep}, one can write the Gibbs free energy density as a function of the parameters $(R,\theta)$ and, using the thermodynamic relation $G/V = g = - P$ between Gibbs free energy density and pressure, write down the latter in the Ising model scaling EoS as: \begin{equation} P_{\text{Ising}}(R, \theta) = - h_0 M_0 R^{2 - \alpha} \left[ g(\theta) - \theta \tilde{h}(\theta) \right] \, \, . \end{equation} In order to transfer the critical thermodynamics to QCD, a non-universal mapping is needed between Ising variables $(h,r)$ and QCD coordinates $(T,\mu_B)$ (see Fig. \ref{fig:IsQCD}). 
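A quick numerical check of the parametrization (an illustrative sketch using the constants $a$ and $b$ quoted above) confirms that the first non-trivial zero of $\tilde h(\theta)$ lies at $\theta_0 \simeq 1.154$:

```python
# h̃(θ) = θ (1 + a θ² + b θ⁴) with the 3D Ising values quoted in the text.
a, b = -0.76201, 0.00804

def h_tilde_over_theta(theta):
    return 1.0 + a * theta**2 + b * theta**4

# Bisection for the first non-trivial zero θ0 of h̃ (h̃(θ)/θ changes sign
# between θ = 1.0 and θ = 1.3).
lo, hi = 1.0, 1.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h_tilde_over_theta(lo) * h_tilde_over_theta(mid) <= 0:
        hi = mid
    else:
        lo = mid
theta0 = 0.5 * (lo + hi)
print(f"theta0 ≈ {theta0:.3f}")   # ≈ 1.154, as quoted above
```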
The most general linear transformation allowing this makes use of six parameters: \begin{align} \frac{T - T_C}{T_C} &= w \left( r \rho \, \sin \alpha_1 + h \, \sin \alpha_2 \right) \, \, , \\ \frac{\mu_B - \mu_{BC}}{T_C} &= w \left( - r \rho \, \cos \alpha_1 - h \, \cos \alpha_2 \right) \, \, . \end{align} where $(T_C,\mu_{BC})$ give the location of the critical point, $\alpha_1$ and $\alpha_2$ indicate the relative angle between the $r$ and $h$ axes and the lines of $T= \textit{const.}$, and the parameters $w$ and $\rho$ correspond to global and relative rescaling of $r$ and $h$. \\ Thanks to this transformation, it is possible to have the following map: \begin{equation} \left( R, \theta \right) \longmapsto \left( h, r \right) \longleftrightarrow \left( T, \mu_B \right) \end{equation} where the second step is globally invertible. The critical contribution to the pressure in QCD can then simply be built from: \begin{equation} P^{\text{QCD}}_{\text{crit}}(T, \mu_B) = f(T, \mu_B) P_{\text{Ising}} (R(T, \mu_B) ,\theta (T, \mu_B)) \end{equation} for some regular function $f(T, \mu_B)$ with energy dimension four. \\ Assuming it is possible to separate the critical contribution coming from the high temperature critical point in the Taylor coefficients calculated from Lattice QCD, one can write: \begin{equation} \label{eq:coeff} c_n^{\text{LAT}}(T) = c_n^{\text{reg}}(T) + c_n^{\text{crit}}(T) \, \, , \end{equation} where on the left hand side are the coefficients calculated from Lattice QCD, hence enforcing agreement with Lattice EoS at $\mu_B=0$, and this equation should be read as a definition for the ``regular'' coefficients $c^{\text{reg}}(T)$, which will include all contributions not coming from the Ising critical point. Thus, one can reconstruct the full pressure as: \begin{equation} \label{eq:Pfull} P (T, \mu_B) = T^4 \sum_n c^{\text{reg}}_{2n} (T) \left( \frac{\mu_B}{T} \right)^{2n} + P^{\text{QCD}}_{\text{crit}}(T, \mu_B) \, \, . 
\end{equation} which will, by construction, match Lattice results at $\mu_B=0$ and contain critical behavior in the correct universality class. \section{Results} \subsection{The choice of parameters} \label{sec:param} Although the most general linear map between Ising variables and QCD coordinates requires the use of six parameters, it is possible to constrain the choice by making use of additional arguments for the location of the critical point. For example, there have been works \cite{Bellwied:2015rza,Cea:2015bxa,Bonati:2015bha,Bazavov:2017dus} that have calculated the curvature of the crossover line of the chiral transition at $\mu_B=0$, approximating the shape of the transition line with a parabola: \begin{equation} T = T_0 + \kappa \, T_0 \left(\frac{\mu_B}{T_0}\right)^2 + {\cal O} (\mu_B^4) \end{equation} where $T_0 \simeq 155 \, \text{MeV}$ and $\kappa \simeq -0.0149$ (the values are from \cite{Bellwied:2015rza}) are the transition temperature and curvature of the transition line at $\mu_B=0$, respectively. The number of parameters is thus reduced to four, the angle $\alpha_1$ being fixed by: \begin{equation} \alpha_1 = \tan^{-1} \left( 2 \frac{\kappa}{T_0} \mu_{BC} \right) \, \, . \end{equation} Since the aim is for the EoS to be employed in hydrodynamic simulations of heavy-ion collisions in the BES-II program, we choose a value of the baryonic chemical potential that is accessible within such a program; hence, in the following we set $\mu_{BC} = 350 \, \textrm{MeV}$, resulting in: \begin{align} T_C \simeq 143.2 \, \textrm{MeV} \, , \qquad \qquad \alpha_1 \simeq 3.85 \, ^\circ \, \, . \end{align} In addition, the axes are chosen to be orthogonal, so that $\alpha_2 \simeq 93.85 \, ^\circ$, and the scaling parameters are: \begin{align} w = 1 \, \, , & \qquad \qquad \rho = 2 \, \, .
\end{align} \subsection{Parametrization of Lattice data} For the purpose of being used in hydrodynamic simulations, our EoS needs to cover the region of the phase diagram at low temperature, which is not available from lattice simulations (typically $T \simeq 100 \, \textrm{MeV}$ is the lower bound for Lattice results). In order to solve this issue, we extend the Lattice data downwards in temperature by calculating the baryon susceptibilities that appear as coefficients in the Taylor expansion using the ideal Hadron Resonance Gas (HRG) model, which is commonly accepted as a good approximation of QCD in this regime. In addition, the resulting data from Lattice/HRG are parametrized in order to obtain a dependence on the temperature which is smooth enough to yield tractable results for the entropy density and baryon density, which are first derivatives of the pressure. The parametrization is performed in the range $T = 5 - 500 \, \textrm{MeV}$ via a ratio of $5^{\text{th}}$ order polynomials in the inverse temperature (see Fig.\ref{fig:param}). \begin{figure}[h] \center \includegraphics[width=.32\textwidth]{Chi0_parametrization_005500.png} \includegraphics[width=.32\textwidth]{Chi2_parametrization_005500.png} \includegraphics[width=.32\textwidth]{Chi4_parametrization_005500.png} \caption{Parametrization of baryon susceptibilities from Lattice QCD \cite{Bellwied:2015rza} and HRG model calculations.} \label{fig:param} \end{figure} The smooth curves obtained with the parametrization will be the $c_n^{\text{LAT}} (T)$ coefficients in Eq. (\ref{eq:coeff}), thus defining the $c_n^{\text{reg}} (T)$ coefficients that will be used for the Taylor expansion. In Fig.\ref{fig:chis} we can see the comparison of the critical and ``regular'' contributions with the parametrized Lattice/HRG model results.
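As a consistency check, the non-universal change of variables with the parameter choices of Section \ref{sec:param} is a $2\times 2$ linear map whose global invertibility is easy to verify numerically (an illustrative sketch):

```python
import math

# Linear Ising → QCD map with the parameter choices of Sec. "The choice of
# parameters": T_C ≈ 143.2 MeV, μ_BC = 350 MeV, α1 ≈ 3.85°, α2 ≈ 93.85°, w = 1, ρ = 2.
TC, muBC = 143.2, 350.0
a1, a2 = math.radians(3.85), math.radians(93.85)
w, rho = 1.0, 2.0

def ising_to_qcd(r, h):
    T = TC * (1.0 + w * (r * rho * math.sin(a1) + h * math.sin(a2)))
    muB = muBC + TC * w * (-r * rho * math.cos(a1) - h * math.cos(a2))
    return T, muB

def qcd_to_ising(T, muB):
    # Invert the 2x2 linear system; det = ρ sin(α2 − α1) ≠ 0 for orthogonal axes.
    x = (T - TC) / (TC * w)
    y = (muB - muBC) / (TC * w)
    det = rho * (math.sin(a1) * (-math.cos(a2)) - math.sin(a2) * (-math.cos(a1)))
    r = (x * (-math.cos(a2)) - y * math.sin(a2)) / det
    h = (rho * math.cos(a1) * x + rho * math.sin(a1) * y) / det
    return r, h

# Round trip: the critical point maps to (r, h) = (0, 0), and the map inverts exactly.
r0, h0 = 0.3, -0.1
r1, h1 = qcd_to_ising(*ising_to_qcd(r0, h0))
assert abs(r1 - r0) < 1e-12 and abs(h1 - h0) < 1e-12
```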
\begin{figure}[h] \center \includegraphics[width=.32\textwidth]{chi0_LAT_reg_crit.png} \includegraphics[width=.32\textwidth]{chi2_LAT_reg_crit.png} \includegraphics[width=.32\textwidth]{chi4_LAT_reg_crit.png} \\ \caption{Comparison of critical (blue) and non-critical (red) contributions to baryon susceptibilities up to ${\cal O}(\mu_B^4)$ with parametrized Lattice data from the Wuppertal-Budapest collaboration \cite{Bellwied:2015rza}.} \label{fig:chis} \end{figure} The reconstruction of the full pressure is now straightforward, and can be carried out as in Eq.(\ref{eq:Pfull}): \begin{equation} P (T, \mu_B) = T^4 \sum^2_{n=0} c^{\text{reg}}_{2n} (T) \left( \frac{\mu_B}{T} \right)^{2n} + T_C^4 \, P^{\text{QCD}}_{\text{Ising}}(T, \mu_B) \, \, , \end{equation} where the critical term corresponds to the choice $f(T, \mu_B) = T_C^4$ for the regular prefactor. The result is shown, with the current choice of parameters and up to order ${\cal O} (\mu_B^4)$, in Fig.\ref{fig:Pfull}, for $T = 50 - 500 \, \textrm{MeV}$ and $\mu_B = 0 - 450 \, \textrm{MeV}$. The entropy density, defined from the pressure as: \begin{equation} S(T,\mu_B) = \left( \frac{\partial P(T,\mu_B)}{\partial T} \right)_{\mu_B} \, \, , \end{equation} is shown in Fig. \ref{fig:Entrfull}, where the discontinuity due to the first-order phase transition for $\mu_B > \mu_{BC}$ is visible. \begin{figure}[h] \center \includegraphics[width=.8\textwidth]{PressGRPAR350143286PARAM050500ORDER4.png} \caption{Full pressure for the choice of parameters in Section \ref{sec:param}} \label{fig:Pfull} \end{figure} \begin{figure}[h] \center \includegraphics[width=.8\textwidth]{EntrGRPAR350143286PARAM050500ORDER4.png} \caption{Entropy density for the choice of parameters in Section \ref{sec:param}} \label{fig:Entrfull} \end{figure} \section{Discussion} A parametrized equation of state for QCD that matches Lattice results \textit{exactly} at vanishing baryochemical potential and contains critical behavior in the expected universality class of the theory is presented in this work.
By means of a parametrization for the scaling EoS in the vicinity of the Ising-like critical point and of a non-universal map from the Ising model variables to QCD coordinates, it was possible to calculate the critical contribution to thermodynamic quantities (i.e. the pressure) in QCD at $\mu_B=0$. The reconstructed pressure in Fig.\ref{fig:Pfull}, along with the entropy density in Fig. \ref{fig:Entrfull}, can be readily used, together with other thermodynamic quantities (baryon density, energy density, speed of sound, etc.), in hydrodynamic simulations of heavy-ion collisions. When experimental data from the BES-II program become available, the comparison of such data with predictions (e.g. baryon number fluctuation observables) from hydrodynamical simulations that make use of the presented EoS will constrain the values of the parameters employed, thus possibly providing an indication of the location of the critical point. \section*{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grants No. PHY-1513864, PHY-1654219 and OAC-1531814 and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration. An award of computer time was provided by the INCITE program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract No. DE-AC02-06CH11357. The authors gratefully acknowledge the use of the Maxwell Cluster and the advanced support from the Center of Advanced Computing and Data Systems at the University of Houston.
\section{Introduction}\label{sec:Intro} Predictive modeling is at the core of many scientific disciplines, including business, engineering, finance, and public health. A natural way to gauge the predictive capability of a statistical model is to estimate its predictive risk. The systematic study of the risk of a statistical procedure traces back to at least~\cite{stein1981}. Since then, the concept of risk has become an integral part of applied statistical modeling: predictive risk is routinely used to assess the complexity of statistical modeling procedures~\citep[e.g.][]{akaike1973, mallows1973, foster1994}, to compare different statistical models~\citep[e.g.][]{hastie1990, ye1998}, and to choose tuning parameters that control bias-variance trade-offs~\citep[e.g.][]{donoho1995, kou2002}. In several special cases,~\possessivecite{stein1981} theory of unbiased risk estimation provides simple estimates for the risk of a statistical model. However, in general, there does not exist a unified approach to estimating the predictive risk of a statistical model or procedure. In this paper, we focus on the predictive risk of possibly misspecified quantile regression models. In addition to its role in applied statistical modeling as outlined above, the predictive risk from quantile regression models has also garnered significant interest in finance and risk management to assess the value-at-risk and expected shortfall of investments or to solve portfolio choice problems~\citep[e.g.][]{xiao2015,cahuich2013, gaglianone2011,hexd2011,bassett2004, engle2004, chernozhukov2001}. We contribute to the theory on the predictive risk of quantile regression models by deriving two asymptotic characterizations of the bias of the in-sample risk (when used to estimate the predictive risk) and proposing a uniformly consistent, de-biased estimator for the predictive risk.
Following the terminology introduced by~\cite{efron1983}, we call the bias of the in-sample risk the ``expected optimism''. Our first characterization of the expected optimism is comparable to~\possessivecite{efron2004} covariance penalty and~\possessivecite{tibshirani1999} covariance inflation criterion. The second characterization relates to robust and generalized Akaike-type information criteria~\citep[e.g.][]{lv2014, portnoy1997, burman1995}. Both characterizations show that a large part of the expected optimism can be attributed to a nonlinear function of the quantile level, the conditional density of the response variable given the predictors, and the (weighted) covariance matrix of the predictors. Specializing to location models, we glean additional insight into the expected optimism and its functional dependence on the conditional density and the number of predictors. As a consequence, the commonly used notion of effective degrees of freedom for a statistical model has a richer content for misspecified models. The second characterization of the expected optimism lends itself to a simple plug-in estimator. We establish its uniform consistency over a class of candidate models and, based on this result, propose a uniformly consistent, de-biased estimator of the predictive risk. Our theoretical analysis indicates that the de-biased estimator is particularly relevant in the case in which the dimension of candidate models grows at least in the order of the square root of the sample size. Empirical evidence suggests that the de-biasing procedure is practically relevant even when the model size is fixed and relatively small compared to the sample size. A comparison of our de-biased estimate against the popular method of cross-validation is favorable for our procedure. To allow broad applicability of our results, we develop our theory in a triangular array of row-wise independent random vectors whose dimension may grow with the sample size.
We only require minimal assumptions on the joint distribution of the response and predictor variables. Notably, the response and the predictor variables can both be unbounded, their marginal distributions can be non-Gaussian, and their relationship (i.e. the conditional quantile functions) can be linear, nonlinear or nonparametric. Thus, our framework for quantile regression generalizes the frameworks of~\cite{lee2015, noh2013, angrist2006, kim2003}, who consider misspecified quantile regression models with a fixed number of parameters. Unlike the recent literature on quantile regression based on series, semi- and nonparametric estimators, we do not assume that the misspecification error vanishes as more predictors are included in the regression function~\citep{belloni2016, chao2016}. Naturally, our results continue to hold if the model is (asymptotically) correctly specified. We organize this article as follows: In Section~\ref{sec:MisspecifiedQRandPredRisk} we lay out a general framework for misspecified quantile regression models. We introduce necessary terminology and discuss how to define the predictive risk of potentially misspecified quantile regression models. In Section~\ref{sec:ApproxExpectOpt} we derive two asymptotic characterizations of the expected optimism of the in-sample risk and discuss insights that we gain from these characterizations. In Section~\ref{sec:EstimationPredRisk} we propose a nonparametric plug-in estimator for one of the asymptotic characterizations of the expected optimism and use it to construct a de-biased estimate of the predictive risk. We establish uniform consistency of both estimators. In Section~\ref{sec:NumericalEvidence} we report numerical evidence that our estimates of the expected optimism and the predictive risk are on target, and that the predictive risk estimate can be better than the commonly-used cross-validation approach.
We conclude in Section~\ref{sec:Conclusion} with additional remarks, and present all proofs in~\ref{appendix:Proofs}. We provide further theoretical results and technical lemmata in the Supplementary Materials~\ref{Supplement:Angrist},~\ref{Supplement:ModelSelection}, and~\ref{Supplement:TechnicalLemmata}. Additional simulation results are relegated to the Supplementary Materials~\ref{Supplement:NumericalEvidence}. \section{Misspecified quantile regression and predictive risk}\label{sec:MisspecifiedQRandPredRisk} \subsection{Notation and framework}\label{subsec:NotationFramework} The setting of interest is a high-dimensional triangular array $\mathcal{D}_n = \{(Y_{ni},X_{ni})\}_{i=1}^n$, where $(Y_{ni},X_{ni}) \in \mathbb{R} \times \mathcal{X}$ are row-wise independent random vectors with distribution $F_n$ which may change with the sample size $n$. As per convention, the scalar variable $Y_{ni}$ denotes the response variable and the vector $X_{ni} \in \mathcal{X}$ denotes a vector of covariates. We denote by $F_{Y_n|X_n}$ the conditional distribution of $Y_{ni}$ given $X_{ni}$. We use subscripts on the expectation operator $\mathbb{E}$ to specify to which random variable the operator is applied, i.e. $\mathbb{E}_{(Y_{n1},X_{n1})}$ means that expectation is only taken over $(Y_{n1},X_{n1})$, whereas $\mathbb{E}_{\mathcal{D}_n}$ means that expectation is taken over the entire triangular array $\mathcal{D}_n$. We let \begin{align}\label{eq:XZMap} x \mapsto Z(x) = \big(Z_1(x), \ldots, Z_d(x)\big) \end{align} denote a mapping from $\mathcal{X}$ into $\mathbb{R}^d$ and call the transformed covariates $Z(X_{n1}), \ldots, Z(X_{nn})$ predictor variables. We consider the case where the dimension $d$ of the predictor variables grows with the sample size $n$ and may be much larger than $n$. We call a subset $S \subseteq \{1, \ldots, d\}$ of predictors $Z(X_{ni})$ a model and write \begin{align} Z_S(X_{ni}) = \big(Z_j(X_{ni})\big)_{j \in S}.
\end{align} We denote the collection of models under consideration by $M$. We allow $M$ to be as large as the power set of $\{1, \ldots, d\}$ and to grow with the sample size $n$. We write $|S|$ for the cardinality of a model $S$ and denote the largest cardinality of models in $M$ by $m$. Clearly, we have $m \leq d$. The purpose of linear quantile regression is to approximate the true conditional quantile function (CQF) of $Y_{ni}$ given $X_{ni}$, \begin{align}\label{CQF} Q_{Y_n}(\tau|X_{ni}) = \inf\left\{y: F_{Y_{n}|X_{n}}(y|X_{ni}) \geq \tau \right\}, \end{align} by a linear function of the predictor variables $Z(X_{ni})$. To this end, we assume that the vectors of predictor variables $Z(X_{ni})$ consist of series functions with reasonably good approximation properties such as indicators, B-splines, regression splines, polynomials, Fourier series, or wavelets~\citep[e.g.][]{belloni2016, chao2016}. However, unlike them, we do not require that the approximation error vanishes as the number of predictors $m$ increases, i.e. we allow for persistent misspecification. We define the vector of regression coefficients $\theta^\tau_{n,S} = (\theta_{n1}^\tau, \ldots, \theta_{n|S|}^\tau)'$ associated with model $S$ as the solution to the quantile regression problem \begin{align}\label{eq:PopulationQRProblem} \min_{\theta \in \mathbb{R}^{|S|}} \mathbb{E}_{\mathcal{D}_n} \left[\rho_\tau\big(Y_{n1} - Z_S(X_{n1})'\theta\big) - \rho_\tau\big(Y_{n1} - Q_{Y_n}(\tau|X_{n1})\big) \right], \end{align} and the vector of estimated regression coefficients $\hat{\theta}_{n,S}^\tau = (\hat{\theta}_{n1}^\tau, \ldots, \hat{\theta}_{n|S|}^\tau)'$ as the solution to the sample quantile regression problem \begin{align}\label{eq:SampleQRProblem} \min_{\theta \in \mathbb{R}^{|S|}} \frac{1}{n} \sum_{i=1}^n \rho_\tau\big(Y_{ni} - Z_S(X_{ni})'\theta\big), \end{align} where $\rho_\tau(u) = u\big(\tau - 1\{u \leq 0\}\big)$ is the check loss~\citep{koenker2005}.
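As an illustrative aside (not part of the formal development): for an intercept-only model $S$, the sample problem~\eqref{eq:SampleQRProblem} reduces to computing an empirical $\tau$-quantile, since any constant minimizing the average check loss is a sample $\tau$-quantile. A minimal pure-Python sketch, in which the function names and the toy data are our own:

```python
# Illustrative sketch: the sample quantile-regression problem for an
# intercept-only model reduces to finding an empirical tau-quantile,
# because rho_tau(u) = u * (tau - 1{u <= 0}) is minimised there.

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss rho_tau(u)."""
    return u * (tau - (1.0 if u <= 0 else 0.0))

def fit_intercept_quantile(y, tau):
    """Minimise (1/n) sum_i rho_tau(y_i - theta) over a constant theta.
    The minimum is attained at a sample tau-quantile, so it suffices
    to scan the data points themselves as candidates."""
    def objective(theta):
        return sum(check_loss(yi - theta, tau) for yi in y) / len(y)
    return min(y, key=objective)

y = [1.0, 2.0, 3.0, 4.0, 10.0]
print(fit_intercept_quantile(y, 0.5))  # -> 3.0 (the sample median)
```

Note how the check loss penalizes under- and over-prediction asymmetrically (weights $\tau$ and $1-\tau$), which is what steers the fit toward the $\tau$-quantile rather than the mean.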
The estimate of the true CQF of $Y_{n}$ given $X_{n}$ based on model $S$ is given as \begin{align} \widehat{Q}_{Y_n}(\tau|X_{n}, S) = Z_S(X_{n})'\hat{\theta}_{n,S}^\tau. \end{align} \cite{koenker1978} show that under mild conditions $\widehat{Q}_{Y_n}(\tau|X_n,S)$ is a consistent estimate of $Q_{Y_n}(\tau|X_n)$ if the true CQF is indeed linear in $Z_S(X_{ni})$ and the dimension of the predictor variables is fixed.~\cite{angrist2006} establish corresponding consistency and asymptotic normality results for misspecified quantile regression models. The results on general M-estimators~\citep[][]{he2000} (Theorem 1), semi-parametric quantile regression~\citep[][]{chao2016} and quantile series estimators~\citep[][]{belloni2016} extend these results to cases in which the dimension of the predictors $m$ increases with the sample size $n$. \subsection{Predictive risk and expected optimism}\label{subsec:PredRiskandExpectOptGENERAL} Two statistical theories have been developed to estimate the predictive risk: cross-validation~\citep[e.g.][]{stone1974, stone1977, allen1974, golub1979, wahba1990, efron1983, efron1986, efron2004, efron1997} and covariance penalties, which include techniques such as~\possessivecite{mallows1973} $C_p$, \possessivecite{akaike1969} information criterion (AIC) and final prediction error (FPE), \possessivecite{takeuchi1976} information criterion (TIC), and~\possessivecite{stein1981} unbiased risk estimate (SURE). Our approach to estimating the predictive risk of potentially misspecified quantile regression models falls into the category of covariance penalties. In this section we therefore introduce necessary terminology and the rationale behind covariance penalties. Suppose that a model $f$ is fitted to some data $\mathcal{Z}_n = \{Z_1, \ldots, Z_n\}$ producing an estimate $\hat{\mu}_n = f(\mathcal{Z}_n)$ for target $\mu$.
Predictive risk evaluation tries to assess how well $\hat{\mu}_n$ predicts $\mu$ at a future data point $Z^0$ independently generated from the same mechanism that produced $\mathcal{Z}_n$. To measure the error between $\hat{\mu}_n$ and $\mu$ one chooses a loss function $L$ and defines the predictive risk as the average loss over current and future data, i.e. \begin{align} \mathbb{E}_{\mathcal{Z}_n, Z^0}\left[L\Big(\mu(Z^0), \hat{\mu}_n(Z^0)\Big)\right]. \end{align} Covariance penalties provide, as an intermediate result, an estimate of the bias of the in-sample risk when used as estimate of the predictive risk. Following the terminology introduced by~\cite{efron1983}, we call the negative bias the ``expected optimism'' of the in-sample risk, \begin{align} b_n(L, \mu) = \mathbb{E}_{\mathcal{Z}_n, Z^0}\left[L\Big(\mu(Z^0), \hat{\mu}_n(Z^0)\Big)\right] - \mathbb{E}_{\mathcal{Z}_n}\left[\frac{1}{n}\sum_{i=1}^nL\Big(\mu(Z_i), \hat{\mu}_n(Z_i)\Big)\right]. \end{align} Given a consistent estimate $\hat{b}_n(L, \mu)$ of $b_n(L, \mu)$, one obtains a consistent and de-biased estimate of the predictive risk via \begin{align} \frac{1}{n}\sum_{i=1}^nL\Big(\mu(Z_i), \hat{\mu}_n(Z_i)\Big) + \hat{b}_n(L, \mu). \end{align} Even though covariance penalties are conceptually straightforward, so far, they have only been derived for a limited number of loss functions, namely the square loss and the ``q class of error functions''~\citep[][]{efron1986}. This is likely because the expected optimism of most other loss functions is a highly non-linear function for which it is difficult to construct estimators. Yet, in principle, covariance penalties have two advantages over cross-validation techniques: First, cross-validation techniques tend to produce estimates of the predictive risk that have a higher variance than covariance penalties, since they split the sample into test and training sets and thereby reduce the number of samples from which $\hat{\mu}_n$ is estimated~\citep[e.g.][]{efron2004}.
Second, cross-validation techniques are known to produce biased estimates of the predictive risk. Several heuristic adjustments to (the vanilla) cross-validation techniques have been proposed, but they lack rigorous proofs~\citep[e.g.][]{burman1989, tibshirani2009}. \subsection{Predictive risk and expected optimism in quantile regression}\label{subsec:PredRiskandExpectOptQR} We discuss the choice of the loss function to measure the predictive risk of a potentially misspecified quantile regression model $S$ and define the associated expected optimism. Let $(Y_n^0, X_n^0)$ be a pair of data points drawn from $F_n$ and independent of sample $\mathcal{D}_n=\{(Y_{ni}, X_{ni})\}_{i=1}^n$. Fix a model $S \subseteq \{1, \ldots, d\}$ and consider the estimate of the CQF of $Y_n^0$ given $X_n^0$ based on model $S$ and sample $\mathcal{D}_n$, i.e. \begin{align} \widehat{Q}_{Y_n^0}(\tau|X_n^0, S) = Z_S(X_n^0)'\hat{\theta}_{n,S}^\tau. \end{align} Since the true CQF of $Y^0_n$ given $X^0_n$, $Q_{Y_n^0}(\tau|X_n^0)$, is not an observable statistic given the data $\mathcal{D}_n$ and $(Y^0_n, X^0_n)$, risk measures which assess directly the difference between estimate $\widehat{Q}_{Y_n^0}(\tau|X_n^0, S)$ and target $Q_{Y_n^0}(\tau|X_n^0)$, such as the mean squared prediction error or the mean absolute prediction error, do not have (simple) sample analogues. We therefore propose the following risk measure which depends only on observables. \begin{definition}[Predictive risk]\label{def:PredRisk} The predictive risk of quantile regression model $S$ is \begin{align*} \mathrm{PR}^\tau_n(S) &= \mathbb{E}_{\mathcal{D}_n, (Y_n^0, X_n^0)}\left[\rho_\tau\big(Y^0_n - \widehat{Q}_{Y_n^0}(\tau|X_n^0, S)\big) - \rho_\tau(Y^0_n)\right], \end{align*} where $(Y_n^0, X_n^0)$ is a pair of data points drawn from $F_n$ and independent of sample $\mathcal{D}_n$. 
\end{definition} The associated expected optimism of using the in-sample risk $\frac{1}{n} \sum_{i=1}^n \Big(\rho_\tau\big(Y_{ni} - \widehat{Q}_{Y_{n}}(\tau|X_{ni}, S)\big)- \rho_\tau(Y_{ni})\Big)$ as an estimate of the predictive risk is defined as follows. \begin{definition}[Expected Optimism]\label{def:ExpectedOptimism} The expected optimism of quantile regression model $S$ is \begin{align*} b_n^\tau(S) &= \mathrm{PR}^\tau_n(S) - \mathbb{E}_{\mathcal{D}_n}\left[\frac{1}{n} \sum_{i=1}^n \Big(\rho_\tau\big(Y_{ni} - \widehat{Q}_{Y_{n}}(\tau|X_{ni}, S)\big) - \rho_\tau(Y_{ni})\Big)\right]. \end{align*} \end{definition} Several comments are in order with regard to these two definitions. First, the reason for subtracting $\rho_\tau(Y^0_n)$ in Definition~\ref{def:PredRisk} (and $\rho_\tau(Y_{ni})$ in Definition~\ref{def:ExpectedOptimism}) is purely technical: it allows us to dispense with moment conditions on the response variable $Y^0_n$. To see this, note that the check loss $\rho_\tau$ is Lipschitz continuous and hence the predictive risk $\mathrm{PR}^\tau_n(S)$ is upper bounded by $\mathbb{E}_{\mathcal{D}_n, (Y_n^0, X_n^0)}\big|Z_S(X^0_n)'\hat{\theta}^\tau_{n,S} \big|$. For this expected value to be finite it suffices that the CQF of $Y^0_n$ given $X^0_n$ has finite second moments~\citep[e.g.][]{angrist2006}. Second, the predictive risk of model $S$ can be shown to be an (almost) affine transformation of the unconditional mean squared prediction error (MSPE) $\mathbb{E}_{\mathcal{D}_n, X_n^0} \big[\big(Q_{Y_n^0}(\tau|X_n^0) - \widehat{Q}_{Y_n^0}(\tau|X_n^0, S)\big)^2\big]$, which cannot be estimated directly since it depends on $Q_{Y_n^0}(\tau|X_n^0)$, the unobserved true CQF. Since the MSPE is itself an important quantity to assess model fit, its connection with our notion of predictive risk may be of independent interest. We relegate the precise statement of this technical result to the Supplementary Materials~\ref{Supplement:Angrist}.
Third, the predictive risk based on the check loss $\rho_\tau$ has garnered significant interest in finance and risk management. For example, it is used in the context of value-at-risk~\citep[e.g.][]{xiao2015, gaglianone2011}, conditional value-at-risk and expected shortfall~\citep[e.g.][]{engle2004, chernozhukov2001} and portfolio choice problems with Choquet expectation~\cite[e.g.][]{cahuich2013, hexd2011, bassett2004, tversky1992}. Fourth, the predictive risk and the expected optimism play an important role in model selection criteria~\citep[e.g.][]{akaike1973, ronchetti1985, foster1994, burman1995, portnoy1997, ye1998, bozdogan2000, lv2014}, model comparison~\citep[e.g.][]{hastie1990, tibshirani1999, kou2002}, and computation of generalized degrees of freedom~\citep[e.g.][]{ye1998}. \subsection{Technical assumptions}\label{sec:Assumptions} For the theoretical investigations of the predictive risk and the expected optimism of potentially misspecified quantile regression models we require several assumptions, which we discuss in this section. Since the quantile level $\tau$ is always pre-specified, we suppress the dependence on $\tau$ in some notation. Recall that $S \subseteq \{1, \ldots, d\}$, $|S| \leq m$, and that $M$ is a subset of the power set of $\{1, \ldots, d\}$. Throughout, we assume that $M$ contains at least two models, i.e. $|M| \geq 2$, and that $n \geq 16$, i.e. $\log \log n \geq 1$. \textit{ \begin{itemize} \item[(A1)] The data $(Y_{ni}, X_{ni}) \in \mathbb{R} \times \mathcal{X}$ are row-wise independent random vectors with distribution $F_n$, where $F_n$ may change with the sample size $n$. \item[(A2)] The conditional density $f_{Y_n|X_n}$ of $Y_n$ given $X_{n}$ is uniformly bounded from above, i.e. there exists $\nu_{+} < \infty$ such that \begin{align*} \limsup_{n \rightarrow \infty}\sup_{a \in \mathbb{R}}\sup_{x \in \mathbb{R}^d}\left| f_{Y_n|X_n}(a|x)\right| \leq \nu_{+}.
\end{align*} \item[(A3)] The conditional density $f_{Y_n|X_n}$ of $Y_n$ given $X_n$ is $\alpha$-H{\"o}lder continuous for $\alpha \in \left[\frac{1}{2}, 1\right]$, i.e. there exists a constant $\nu_H > 0$ such that for any $a, b \in \mathbb{R}$, \begin{align*} \limsup_{n \rightarrow \infty}\sup_{x \in \mathbb{R}^d}\Big| f_{Y_n|X_n}(a|x) - f_{Y_n|X_n}(b|x)\Big| \leq \nu_{H} |a - b|^{\alpha}. \end{align*} \item[(A4)] The maximum eigenvalue of the matrix of second moments is uniformly bounded from above, i.e. there exists $\lambda_{+} < \infty$ such that \begin{align*} \limsup_{n \rightarrow \infty} \max_{S \in M}\lambda_{\max } \left(\mathbb{E}_{X_n}\left[Z_S(X_{n})Z_S(X_{n})'\right] \right) \leq \lambda_+, \end{align*} and the minimum eigenvalue of the weighted second moment matrix is bounded from below by $\lambda_{n} > 0$, \begin{align*} \min_{S \in M}\lambda_{\min}\left(\mathbb{E}_{X_n}\left[f_{Y_n|X_n}\big(Z_S(X_{n})'\theta^\tau_{n,S}|Z_S(X_n)\big)Z_S(X_{n})Z_S(X_{n})'\right]\right) > \lambda_{n}. \end{align*} \end{itemize} } In the above assumptions the uniformity in $n$ is necessary since we consider triangular arrays. Assumptions (A1), (A2), and (A3) with $\alpha =1$ are fairly standard in the quantile regression literature~\cite[e.g.][]{angrist2006,belloni2016,chao2016}. It is possible to relax the (implicit) assumption that the random variables are identically distributed within each row; in fact, independence suffices for our results. However, we do not pursue these refinements in the present paper. The stringency of Assumption (A4) depends on how fast $\lambda_n$ is allowed to go to zero. We require the following technical rate condition on $\lambda_n$: \textit{ \begin{itemize} \item[(A5)] The minimum eigenvalue of the matrix of second moments, $\lambda_n$, is bounded below asymptotically in the following way: \begin{align*} \lambda_n \gtrsim \left(\frac{m \log |M| \: \log \log n}{n}\right)^{1/2 - 1/(4\alpha)}.
\end{align*} \end{itemize} } This rate condition is purely technical and difficult to motivate. Clearly, the condition is less stringent the larger $\alpha$, i.e. the smoother the conditional density $f_{Y_n|X_n}$ of $Y_n$ given $X_n$. In particular, if $\alpha = 1/2$, we require $\lambda_n$ to be bounded away from zero; whereas in the case of a Lipschitz continuous conditional density ($\alpha = 1$), we allow $\lambda_n$ to decay as fast as $(m \log |M| \: \log \log n)^{1/4}n^{-1/4}$. The rate condition relaxes the stronger boundedness assumptions on the largest and smallest eigenvalue of the weighted second moment matrix that prevail in the literature on quantile regression~\cite[][]{koenker2017}. Together with the upper bound on the largest eigenvalue of the expected value of the Gram matrix, the rate condition implies that $m \lesssim n$. This is a much weaker condition on the growth rate of the number of predictors than has been proposed in recent work on (misspecified) quantile regression with an increasing number of predictors. For example,~\cite{belloni2016} and~\cite{chao2016} require that $\zeta_m \equiv \sup_{x \in \mathcal{X}}\|Z(x)\|_2 < \infty$ satisfies $m \zeta_m^2 (\log n)^2 = o(n)$. If the predictors are element-wise bounded, this amounts to the condition $m^2 (\log n)^2 = o(n)$. We shall see that our relaxed assumption on the growth rate is important in the theoretical analysis of the proposed estimate for the predictive risk in Section~\ref{sec:EstimationPredRisk}. Lastly, we introduce the following moment condition on the predictors: \textit{ \begin{itemize} \item[(A6)] The vector $Z(X_n) = \big(Z_1(X_{n}), \ldots, Z_d(X_{n})\big)$ is a vector of random variables with finite $(8 + \delta)$-th moments, for some $\delta > 0$. In particular, for $1 \leq k \leq 8$, there exist constants $\mu_k > 0$ such that \begin{align*} \limsup_{n \rightarrow \infty} \max_{j =1, \ldots, d}\left(\mathbb{E}_{X_n}\left[\left|Z_j(X_{n}) \right|^{k + \delta} \right]\right)^{1/(k + \delta)} \leq \mu_k.
\end{align*} \end{itemize} } This condition is significantly weaker than the uniform boundedness assumption on the map $Z$ imposed in~\cite{belloni2016} and~\cite{chao2016} (i.e. $\zeta_m \equiv \sup_{x \in \mathcal{X}}\|Z(x)\|_2 < \infty$). Again, uniformity in $n$ is necessary since we consider triangular arrays. \section{Two asymptotic characterizations of the expected optimism}\label{sec:ApproxExpectOpt} \subsection{The covariance form of the expected optimism}\label{subsec:CovExpectOpt} In the case of ordinary least squares, the expected optimism can be evaluated via~\possessivecitewos{mallows1973} $C_p$. In the case of nonlinear least squares with Gaussian errors, the expected optimism can be estimated via~\possessivecite{stein1981} divergence formula. And for loss functions that belong to~\possessivecite{efron2004} ``q class of error measures'', the expected optimism can be expressed as a function of the covariance of two observable quantities. Since the expected optimism $b^\tau_n(S)$ from Definition~\ref{def:ExpectedOptimism} is based on the check loss $\rho_\tau$, none of the above three results applies. Instead we have the following result. \begin{theorem}[Covariance Form of the Expected Optimism]\label{theorem:ExpectOptCovForm} Suppose that Assumptions (A1) -- (A6) from Section~\ref{sec:Assumptions} hold. Then, \begin{align*} b^\tau_n(S) = tr \left(Cov\left( \frac{1}{n}\sum_{i=1}^n Z_S(X_{ni})\varphi_\tau\big(Y_{ni} - Z_S(X_{ni})'\theta^\tau_{n,S}\big), \: \hat{\theta}^\tau_{n,S} - \theta^\tau_{n,S} \right) \right) \:\: + \:\: r_{n,1}(S), \end{align*} where $\varphi_\tau(u) = \tau - 1\{u< 0\}$ and \begin{align*} \sup_{S \in M} \left|r_{n,1}(S)\right| = O\left(\frac{1}{\lambda_{n}^{3/2}} \left(\frac{m \log |M| \: \log \log n}{n}\right)^{5/4} \right). \end{align*} \end{theorem} We postpone a discussion of the rate of the remainder term to the next section. 
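Before doing so, it is instructive to check the sign and magnitude of the expected optimism numerically in the simplest possible setting. The sketch below is a hedged Monte Carlo illustration (not the estimator proposed later): intercept-only median regression with i.i.d. standard Gaussian responses, with all names and simulation parameters chosen by us for the example. It approximates $b_n^\tau(S)$ of Definition~\ref{def:ExpectedOptimism} by simulating fresh out-of-sample points:

```python
# Hedged Monte Carlo illustration: simulate the expected optimism
# b_n^tau(S) for intercept-only median regression (tau = 0.5, |S| = 1)
# with i.i.d. standard Gaussian responses. The rho_tau(Y) terms in the
# definitions cancel in the difference, so we omit them.
import random
import statistics

def rho(u, tau=0.5):
    # check loss rho_tau(u) = u * (tau - 1{u <= 0})
    return u * (tau - (1.0 if u <= 0 else 0.0))

random.seed(0)
n, reps, fresh = 20, 4000, 10
gaps = []
for _ in range(reps):
    y = [random.gauss(0.0, 1.0) for _ in range(n)]
    theta_hat = statistics.median(y)  # check-loss minimiser at tau = 0.5
    in_risk = sum(rho(yi - theta_hat) for yi in y) / n
    out_risk = sum(rho(random.gauss(0.0, 1.0) - theta_hat)
                   for _ in range(fresh)) / fresh
    gaps.append(out_risk - in_risk)

b_hat = sum(gaps) / reps
# Location-model benchmark tau(1-tau)/f(F^{-1}(tau)) * |S|/n
#   = 0.25 * sqrt(2*pi) / 20, roughly 0.031 for n = 20.
print(b_hat > 0.0)
```

The simulated optimism is positive and of the order predicted by the trace-form benchmark, in line with the convexity argument discussed next.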
Focusing instead on the leading term of the above approximation, we observe the following: If the true CQF is indeed linear in $Z_S(X_n)$, i.e. $Q_{Y_n}(\tau|X_n) = Z_S(X_n)'\theta^\tau_{n,S}$, then the leading term of the optimism $b^\tau_n(S)$ can be re-formulated as \begin{align} \frac{1}{n}\sum_{i=1}^n Cov\left(- 1\big\{Y_{ni} < Q_{Y_n}(\tau|X_{ni})\big\}, \: \widehat{Q}_{Y_n}(\tau|X_{ni}, S) \right). \end{align} Thus, in this case the expected optimism is essentially the covariance between the estimates $\widehat{Q}_{Y_n}(\tau|X_{ni}, S)$ and a simple function of the targets $Q_{Y_n}(\tau|X_{ni})$, $i=1, \ldots, n$. This is reminiscent of~\possessivecite{efron2004} results for the ``q class of error measures'': For this specific class of error measures the expected optimism is equal to the covariance between estimate $\hat{\mu}_n$ and target $\mu$, i.e. $\frac{1}{n}\sum_{i=1}^n Cov\big(\mu(Z_i), \: \hat{\mu}_n(Z_i) \big)$. Theorem~\ref{theorem:ExpectOptCovForm} states that for the check loss $\rho_\tau$ a similar relation holds up to the deterministic error $r_{n,1}(S)$. Re-writing the leading term of the optimism $b^\tau_n(S)$ as the expected value of the gradient of the check loss and the centered regression vector, \begin{align}\label{eq:CovarianceFormGradientVersion} \mathbb{E}_{\mathcal{D}_n}\left[\frac{1}{n}\sum_{i=1}^n\varphi_\tau\big(Y_{ni} - Z_S(X_{ni})'\theta^\tau_{n,S}\big) Z_S(X_{ni})'(\hat{\theta}^\tau_{n,S}- \theta^\tau_{n,S})\right], \end{align} we gain two more insights: First, the covariance form of the expected optimism can be viewed as a first order linearization of the check loss. In particular, the covariance form is the expected value of the directional derivative of the check loss in the direction $\hat{\theta}^\tau_{n,S}- \theta^\tau_{n,S}$, evaluated at the vector of regression coefficients $\theta^\tau_{n,S}$. Since the check loss is convex, this directional derivative is always non-negative, i.e.
the leading term of the expected optimism is non-negative. This confirms our statistical intuition that the bias of the in-sample risk as an estimate of the predictive risk is negative. Second, using the naive sample analogue $\frac{1}{n}\sum_{i=1}^n \varphi_\tau\big(Y_{ni} - Z_S(X_{ni})'\hat{\theta}^\tau_{n,S}\big) Z_S(X_{ni})'\hat{\theta}^\tau_{n,S}$ to estimate the expected optimism will inevitably result in a poor estimate because the gradient evaluated at its sample minimizer $\hat{\theta}^\tau_{n,S}$ is close to zero. Thus, even though the approximate covariance form does not depend on the future (unattainable) data point $(Y^0_n, X^0_n)$, it does not allow us to entirely bypass the computation of the expected value with respect to the unknown distribution $F_n$. A similar observation was first made by~\cite{efron1986} about his covariance penalties. To overcome this difficulty, he proposes a parametric bootstrap approach; below we show a different approach which does not rely on re-sampling. \subsection{The trace form of the expected optimism}\label{subsec:TraceFormExpectOpt} As noted in Section~\ref{subsec:PredRiskandExpectOptQR}, the predictive risk under check loss $\rho_\tau$ is an almost affine transformation of the unconditional mean squared prediction error. We might therefore expect that the expected optimism can be approximated by an expression similar to the penalty term in~\possessivecitewos{mallows1973} $C_p$ or~\possessivecite{takeuchi1976} TIC. The following theorem shows that this intuition is correct. \begin{theorem}[Trace Form of the Expected Optimism]\label{theorem:ExpectOptTraceForm} Suppose that Assumptions (A1) -- (A6) from Section~\ref{sec:Assumptions} hold.
Then, \begin{align*} b_n^\tau(S) = \frac{1}{n}tr\left(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S)\right) + r_{n,2}(S), \end{align*} where \begin{align*} D_{n,0}^\tau(S) &=\mathbb{E}_{X_{n1}}\Big[f_{Y_n|X_n}\big(Z_S(X_{n1})'\theta^\tau_{n,S}|X_{n1}\big)Z_S(X_{n1})Z_S(X_{n1})'\Big],\\ D_{n,1}^\tau(S) &=\mathbb{E}_{(Y_{n1}, X_{n1})}\Big[\varphi_\tau^2\big(Y_{n1}- Z_S(X_{n1})'\theta^\tau_{n,S}\big) Z_S(X_{n1})Z_S(X_{n1})'\Big], \end{align*} with $\varphi_\tau(u) = \tau - 1\{u< 0\}$ and \begin{align*} \sup_{S \in M} \left|r_{n,2}(S)\right| = O \left(\frac{1}{\lambda_{n}^2}\left(\frac{m \log |M| \: \log\log n}{n}\right)^{5/4} \right). \end{align*} \end{theorem} We observe the following: First, under Assumptions (A1) -- (A6) the trace form is roughly of order $O\left(\lambda_n^{-1}n^{-1}|S|\right)$ and hence dominates the remainder term $r_{n,2}(S)$. Therefore, the trace form is a meaningful approximation of the expected optimism. We also conclude that the same is true for the covariance form and the remainder term $r_{n,1}(S)$ from Theorem~\ref{theorem:ExpectOptCovForm}. Second, in the literature on robust estimation the trace form is also known as ``expected self-influence'', i.e. the average influence that an observation has on its own fitted value~\citep[e.g.][p. 317]{hampel2005}. While in hindsight the connection between expected optimism and ``expected self-influence'' appears intuitive, it has not been made in the past, to the best of our knowledge. Third, the trace form clearly resembles the complexity penalties of AIC-type model selection criteria for misspecified (linear) regression models~\citep[e.g.][]{takeuchi1976, bozdogan2000} and misspecified robust and generalized linear models~\citep[e.g.][]{ronchetti1985, lv2014}. This similarity is expected since complexity penalties of AIC-type model selection criteria aim at estimating the expected optimism of the in-sample risk based on a loss function equal to the negative (pseudo) log-likelihood.
Lastly, by Theorem~\ref{theorem:ExpectOptTraceForm} the expected optimism is a nonlinear function of the conditional density $f_{Y_n|X_n}$, the quantile level $\tau$, the (weighted) covariance of the predictors $Z_S(X_{n})$, and the size $|S|$ of model $S$. This property becomes more salient in the following two special cases: \begin{corollary}[Location Model]\label{corollary:TraceFormLocationModel} Let $Y_{ni} = X_{ni}'\theta_{S_0} + \epsilon_{ni}$, with i.i.d. covariates $X_{ni}$ and i.i.d. errors $\epsilon_{ni} \sim F_\epsilon$ with density $f_\epsilon$. Suppose that the $X_{ni}$ and $\epsilon_{ni}$ are mutually independent for $i=1, \ldots, n$. Let the map $Z$ be the identity map so that $Z(X_{ni}) = X_{ni}$. Suppose that the conditions of Theorem~\ref{theorem:ExpectOptTraceForm} hold and that the fitted model $S$ contains the true model $S_0$, i.e. $S_0 \subseteq S$. Then, \begin{align*} \frac{1}{n}tr\left(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S)\right) = \frac{\tau (1- \tau)}{f_{\epsilon}\left(F^{-1}_\epsilon(\tau)\right)}\frac{|S|}{n}. \end{align*} \end{corollary} \begin{corollary}[Nested Quantile Regression Location Models]\label{corollary:TraceFormNestedLocationQR} Suppose that the data generating process is a (potentially nonlinear) location model. Let $S_1$ and $S_2$ be two models such that $S_1 \subseteq S_2$. The trace form of the larger model $S_2$ can be written in terms of the conditional density of $Y_n$ given the predictors $Z_{S_1}(X_{n})$ of the smaller model, i.e. \begin{align*} \frac{1}{n}tr\left(D_0^\tau(S_2)^{-1} D_1^\tau(S_2)\right) = \frac{\tau (1- \tau)}{n}tr\left(D_0(S_1, S_2)^{-1} D_1(S_2) \right), \end{align*} where \begin{align*} D_0(S_1, S_2) &=\mathbb{E}_{X_n}\left[f_{Y_n|Z_{S_1}(X_{n})}\big(Z_{S_2}(X_{n})'\theta^\tau_{S_2}|Z_{S_1}(X_{n})\big)Z_{S_2}(X_{n})Z_{S_2}(X_{n})'\right],\\ D_1(S_2) &=\mathbb{E}_{X_n}\left[Z_{S_2}(X_{n})Z_{S_2}(X_{n})'\right].
\end{align*} \end{corollary} Both corollaries are immediate consequences of Theorem~\ref{theorem:ExpectOptTraceForm} and~\possessivecite{angrist2006} characterization of the misspecified quantile regression problem as a weighted least squares problem. We omit their proofs. We will return to these two corollaries in Section~\ref{sec:NumericalEvidence} and use them as benchmarks in our numerical experiments. \section{Consistent estimators for expected optimism and predictive risk}\label{sec:EstimationPredRisk} \subsection{A plug-in estimator for the expected optimism}\label{subsec:EstimationExpectOpt} The trace form of Theorem~\ref{theorem:ExpectOptTraceForm} lends itself to a simple plug-in estimator for the expected optimism since the two matrices $D_{n,0}^\tau(S)$ and $D_{n,1}^\tau(S)$ are well-studied in the context of the (asymptotic) covariance matrix of the quantile regression vector~\citep[e.g.][]{koenker2005}. In the case of incorrectly specified quantile regression models, the following estimates for $D_{n,0}^\tau(S)$ and $D_{n,1}^\tau(S)$ have been proposed \begin{align} \widehat{D}^{\tau}_{0,h}(S) &= \frac{1}{2nh} \sum_{i=1}^{n}1\big\{\big|Y_{ni} - \widehat{Q}_{Y_n}(\tau|X_{ni}, S)\big|\leq h\big\}Z_S(X_{ni})Z_S(X_{ni})',\\ \widehat{D}^\tau_{n,1}(S) &= \frac{1}{n} \sum_{i=1}^{n}\varphi_\tau^2\big(Y_{ni} - \widehat{Q}_{Y_n}(\tau|X_{ni},S)\big) Z_S(X_{ni})Z_S(X_{ni})', \end{align} where $h$ is a bandwidth parameter and $\varphi_\tau(u) = \tau - 1\{u < 0\}$~\citep[e.g.][]{angrist2006, belloni2016}. We therefore propose the following plug-in estimate for the expected optimism $b^\tau_n(S)$, \begin{align}\label{eq:EstimateExpectOpt} \hat{b}_{n, h}^\tau(S) = \frac{1}{n}tr\left(\widehat{D}^{\tau}_{0,h}(S)^{-1} \widehat{D}^\tau_{n,1}(S)\right). \end{align} Since our regularity conditions are slightly more general than those in~\cite{belloni2016}, the following consistency theorem does not follow from their Lemma 30.
In particular, our Assumption (A5) on the growth rate of the number of predictors is less stringent than theirs. We shall see that this relaxation is important in the context of predictive risk estimation in Section~\ref{subsec:EstimationPredRisk}. \begin{proposition}[Uniform Consistency of the Estimated Trace Form]\label{proposition:ConsistencyEstTraceForm} Suppose that Assumptions (A1) -- (A6) from Section~\ref{sec:Assumptions} hold, let $h > 0$ be the bandwidth parameter, and $r_{n} = \frac{1}{\lambda_n}\left(\frac{m \log |M|\: \log \log n}{n}\right)^{1/2}$. Then, \begin{align*} \sup_{S \in M} \left|n \cdot \hat{b}_{n, h}^\tau(S) - tr\Big(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S)\Big) \right| &= O_p \left( \frac{m \: h^\alpha}{\lambda_n^2} + \frac{m\:r_{n}}{h \lambda_n} + \frac{m\: r_{n}^\alpha}{\lambda_n^2} \right). \end{align*} \end{proposition} The first and second terms on the right hand side capture the bias and variance of the estimator with bandwidth $h$. They are standard in nonparametric smoothing. The third term controls the bias induced by $\big\{\big(Y_{ni} - \widehat{Q}_{Y_n}(\tau|X_{ni}, S)\big)\big\}_{i=1}^n$ at model $S$, which serve as proxies for $\big\{\big(Y_{ni} - Z_S(X_{ni})'\theta_{n,S}^\tau\big)\big\}_{i=1}^n$. Specializing to the common case of a continuous conditional density $f_{Y_n|X_n}$, i.e. $\alpha = 1$, we observe the following: The optimal, bias-variance-balancing bandwidth is $h^* = (c_1/c_0)^{1/2}(\lambda_n r_{n})^{1/2}$ with constants $c_0, c_1 > 0$ given in eq.~\eqref{eq:BoundAA1} and~\eqref{eq:BoundAA2}, respectively. In principle, these constants can be estimated from the data; however, in practice, we find that the specific choice of the bandwidth has no significant effect. With bandwidth $h^*$ the estimate $\hat{b}_{n, h}^\tau(S)$ is consistent at rate $O_p\big(m \: r_{n}^{1/2} \lambda_n^{-3/2} + m \: r_{n} \lambda_n^{-2} \big) = O_p\big(m \: r_{n}^{1/2} \lambda_n^{-3/2}\big)$.
That is, $\hat{b}_{n, h}^\tau(S)$ is consistent at a rate that is the same as if the true errors $\big\{\big(Y_{ni} - Z_S(X_{ni})'\theta_{n,S}^\tau\big)\big\}_{i=1}^n$ at model $S$ were known. Combining Theorem~\ref{theorem:ExpectOptTraceForm} and Proposition~\ref{proposition:ConsistencyEstTraceForm} we obtain the following consistency result. \begin{theorem}[Uniform Consistency of the Estimated Expected Optimism]\label{theorem:ConsistencyEstExpectOpt} Let $r_{n} = \frac{1}{\lambda_n}\left(\frac{m \log |M|\: \log \log n}{n}\right)^{1/2}$. Under the conditions of Proposition~\ref{proposition:ConsistencyEstTraceForm}, \begin{align*} \sup_{S \in M}\left| \frac{\hat{b}_{n, h}^\tau(S)}{b_n^\tau(S)} -1 \right| &= O_p\left( n \lambda_n^{3/2} r_{n}^{5/2} + \frac{m\: h^\alpha}{\lambda_n^2} + \frac{m\: r_{n}}{h\lambda_n} + \frac{m \: r_{n}^\alpha}{\lambda_n^2}\right). \end{align*} \end{theorem} Since $\hat{b}_{n, h}^\tau(S)$ is the plug-in estimator for the trace form approximation, it is a biased estimate of the actual expected optimism $b_n^\tau(S)$. This deterministic bias is captured in the first term; the remaining three terms are already familiar from Proposition~\ref{proposition:ConsistencyEstTraceForm}. Specializing once again to the case of a continuous conditional density, i.e. $\alpha = 1$, we have under the optimal bandwidth $h^*$ a rate of $O_p\big(n \lambda_n^{3/2} r_{n}^{5/2} + m \: r_{n}^{1/2} \lambda_n^{-3/2}\big) = O_p\big(n \lambda_n^{3/2} r_{n}^{5/2}\big)$. Thus, the deterministic error of using the trace form $tr\big(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S)\big)$ to approximate the expected optimism $b_n^\tau(S)$ dominates the stochastic estimation error. In other words, as a point estimate, $\hat{b}_{n, h}^\tau(S)$ is as good at estimating the expected optimism $b_n^\tau(S)$ as the unattainable trace form $tr\big(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S)\big)$.
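To make the plug-in recipe concrete, the following sketch implements $\hat{b}_{n,h}^\tau(S)$ with the uniform-kernel estimate $\widehat{D}^{\tau}_{0,h}(S)$. It is our own illustration (function names and all simulation settings are ours, not from the paper), and for simplicity it evaluates the residuals at the population coefficient vector rather than at a quantile regression fit. In the Gaussian location model of Corollary~\ref{corollary:TraceFormLocationModel} with $\tau = 1/2$, $n\,\hat{b}_{n,h}^\tau(S)$ should then be close to $\tau(1-\tau)|S|/\phi(0) \approx 0.63\,|S|$.

```python
import numpy as np

def trace_form_estimate(Y, Z, theta_hat, tau, h):
    """Plug-in estimate b_hat = tr(D0_hat^{-1} D1_hat) / n with a uniform kernel."""
    n = len(Y)
    u = Y - Z @ theta_hat                      # quantile regression residuals
    psi = tau - (u < 0)                        # varphi_tau(u)
    kern = (np.abs(u) <= h) / (2.0 * h)        # uniform kernel weights for D0_hat
    D0 = (Z * kern[:, None]).T @ Z / n         # estimate of D_{n,0}^tau(S)
    D1 = (Z * (psi ** 2)[:, None]).T @ Z / n   # estimate of D_{n,1}^tau(S)
    return np.trace(np.linalg.solve(D0, D1)) / n

# Sanity check in the Gaussian location model at the median (|S| = 3):
rng = np.random.default_rng(0)
n, p, tau = 20000, 3, 0.5
Z = rng.standard_normal((n, p))
theta = np.array([1.0, -0.5, 2.0])
Y = Z @ theta + rng.standard_normal(n)         # errors N(0, 1), median zero
b_hat = trace_form_estimate(Y, Z, theta, tau, h=0.3)
```

For $|S| = 3$ the rescaled value $n\,\hat{b}$ should come out near $1.9$, matching the corollary up to kernel smoothing and sampling error.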
\subsection{A de-biased estimator of the predictive risk}\label{subsec:EstimationPredRisk} As outlined in Section~\ref{subsec:PredRiskandExpectOptGENERAL}, given the consistent estimate of the expected optimism~\eqref{eq:EstimateExpectOpt} we can construct the following de-biased estimate of the predictive risk, \begin{align}\label{eq:EstimatePredRisk} \widehat{PR}^{\tau}_{n,h}(S) = \frac{1}{n} \sum_{i=1}^n \Big(\rho_\tau\big(Y_{ni} - \widehat{Q}_{Y_{n}}(\tau|X_{ni}, S)\big) - \rho_\tau(Y_{ni})\Big) + \hat{b}_{n, h}^\tau(S). \end{align} We call this estimate ``de-biased'' because the in-sample risk $\frac{1}{n} \sum_{i=1}^n \big(\rho_\tau\big(Y_{ni} - \widehat{Q}_{Y_{n}}(\tau|X_{ni}, S)\big) - \rho_\tau(Y_{ni})\big)$ is itself already a consistent estimate for $\mathrm{PR}^{\tau}_{n}(S)$ in the sense that for any $S \in M$ with fixed model size $|S|$, \begin{align}\label{eq:PredRiskNaive} \left|\mathrm{PR}^{\tau}_{n}(S) - \frac{1}{n} \sum_{i=1}^n \Big(\rho_\tau\big(Y_{ni} - \widehat{Q}_{Y_{n}}(\tau|X_{ni}, S)\big) - \rho_\tau(Y_{ni})\Big)\right| = O_p\left(n^{-1/2}\right). \end{align} We strengthen this fact in several ways: First, we show that under appropriate conditions our proposed estimator $\widehat{PR}^{\tau}_{n,h}(S)$ is consistent uniformly over all $S \in M$ and for models whose size $|S|$ grows with the sample size $n$. Second, we will see that for large models with size $|S| \gtrsim n^{1/2}$ the in-sample risk is no longer $n^{1/2}$-consistent for the predictive risk and that under certain conditions de-biasing the in-sample risk with $\hat{b}_{n, h}^\tau(S)$ restores the $n^{1/2}$-consistency. We deduce these claims from the following general result. \begin{theorem}[Uniform Consistency of the De-biased Predictive Risk Estimate]\label{theorem:PredRiskConsistency} Suppose that Assumptions (A1) -- (A6) from Section~\ref{sec:Assumptions} hold.
In addition, assume that $f_{Y_{n}|X_{n}}$ is uniformly bounded away from 0 for all $n$ and that $\limsup_{n \rightarrow \infty}\mathbb{E}_{X_n}\left[Q_{Y_n}^2(\tau|X_n)\right] < \infty$. Let $h > 0$ be a bandwidth and $r_n = \frac{1}{\lambda_n}\left(\frac{m \log|M| \: \log\log n}{n}\right)^{1/2}$. Then, \begin{align*} & \sup_{S \in M}\left| \widehat{PR}^{\tau}_{n,h}(S) - \mathrm{PR}^\tau_n(S) \right| = O_p \left(\left( \frac{\log |M|}{n}\right)^{1/2} + \frac{r_n}{n^{1/2}} + \lambda_n^{3/2} r_{n}^{5/2} + \frac{m \: h^\alpha}{\lambda_n^2 n} + \frac{m\:r_{n}}{h \lambda_n n} + \frac{m\: r_{n}^\alpha}{\lambda_n^2 n}\right). \end{align*} \end{theorem} The last four terms on the right hand side are familiar from the uniform consistency result of the trace form estimate for the expected optimism (i.e. Theorem~\ref{theorem:ConsistencyEstExpectOpt}), while the first two terms are related to the in-sample risk. Clearly, if $m= o( \lambda_n^{-2} n \log |M| \: \log \log n)$ and the bandwidth $h$ satisfies \begin{align} \frac{1}{\lambda_n} \frac{m}{n} \left(\frac{m \log|M| \: \log\log n}{n}\right)^{1/2} \lesssim h \lesssim \frac{1}{\lambda_n^{2/\alpha}} \left(\frac{n}{m}\right)^{1/\alpha}, \end{align} then $\widehat{PR}^{\tau}_{n,h}(S)$ is consistent for $\mathrm{PR}^\tau_n(S)$ uniformly for all $S \in M$. However, we can learn more by considering special cases. To simplify this discussion, we consider the case in which the conditional density $f_{Y_n|X_n}$ is continuous and the bandwidth is chosen to balance the nonparametric estimation bias and variance (see discussion in Section~\ref{subsec:EstimationExpectOpt}). Then, Theorem~\ref{theorem:PredRiskConsistency} implies the following. \begin{corollary}\label{corollary:PredRiskConsistencyContinuous} Suppose that the conditions of Theorem~\ref{theorem:PredRiskConsistency} hold, that the conditional density $f_{Y_n|X_n}$ is continuous, and that $ \lambda_n^2 m = o\left(n \log |M| \: \log \log n\right)$.
If $ n^{1/4} h \sim \left(m \log|M| \: \log\log n\right)^{1/4}$, then \begin{align*} & \sup_{S \in M}\left| \widehat{PR}^{\tau}_{n,h}(S) - \mathrm{PR}^\tau_n(S) \right| = O_p \left(\left( \frac{\log |M|}{n}\right)^{1/2} + \frac{1}{\lambda_n^2} \left(\frac{m \log|M| \: \log\log n}{n}\right)^{5/4} \right). \end{align*} \end{corollary} These rates have an intuitive explanation: The first term $O\big(n^{-1/2}(\log |M|)^{1/2}\big)$ is related to the stochastic variability of the in-sample risk, while the second term $O(\lambda_n^{-2} n^{-5/4} (m \log|M| \log \log n)^{5/4})$ is known from Theorem~\ref{theorem:ExpectOptTraceForm} to be the deterministic error of using the trace form $tr(D_{n,0}^\tau(S)^{-1} D_{n,1}^\tau(S))$ to approximate the expected optimism $b_n^\tau(S)$. Thus, contrary to what one might have suspected, it is not the nonparametric estimate of the expected optimism but the deterministic approximation of the expected optimism and the stochastic variability of the in-sample risk which limit the accuracy of our predictive risk estimate. It is easy to verify that under the stated assumptions $\widehat{PR}^{\tau}_{n,h}(S)$ is consistent for $\mathrm{PR}^\tau_n(S)$ uniformly over all $S \in M$. It is instructive to consider the implications of Corollary~\ref{corollary:PredRiskConsistencyContinuous} under different growth regimes of the number of predictor variables. To this end, recall that the estimated trace form, $\hat{b}_{n, h}^\tau(S)$, is of order $O(\lambda_n^{-1}n^{-1}|S|)$. Hence, if $n^{1/2} \lesssim |S| \lesssim n$ the estimated trace form, $\hat{b}_{n, h}^\tau(S)$, dominates (rate-wise) the stochastic error and also the deterministic error (provided that we sharpen the condition on $m$ and $n$ to $m = o(n \: \lambda_n^4 (\log |M| \: \log \log n)^{-5})$). Thus, in this regime the in-sample risk alone is not $n^{1/2}$-consistent for the predictive risk; de-biasing the in-sample risk is necessary to retain $n^{1/2}$-consistency.
However, if $|S| \lesssim n^{1/2}$ the stochastic error of the in-sample risk dominates (rate-wise) the estimate of the trace form. Thus, from the perspective of first order asymptotics the correction provided by $\hat{b}^\tau_{n,h}(S)$ is not necessary in this regime. Nevertheless, in Section~\ref{sec:NumericalEvidence} we report numerical evidence showing that even in this regime the de-biasing effect of $\hat{b}^\tau_{n,h}(S)$ is practically relevant. As an aside, this discussion provides another explanation for the well-known fact that Akaike-type model selection criteria are not model selection consistent: Akaike-type penalties (based on estimates of the expected optimism) are too small to effectively discriminate between models of size $|S| \lesssim n^{1/2}$ since the stochastic variability of the in-sample risk is relatively large. For correctly specified (linear least squares regression) models with a fixed number of parameters this has already been recognized~\citep[e.g.][]{shao1997, yang2005}. It is natural to consider using the de-biased predictive risk estimator for model selection purposes. Indeed, the uniform consistency result from Theorem~\ref{theorem:PredRiskConsistency} implies that the model(s) minimizing the de-biased predictive risk estimate $\widehat{PR}^{\tau}_{n,h}$ are consistent for the model(s) minimizing the predictive risk $\mathrm{PR}^\tau_n$. For a precise statement and proof of this claim, we refer to the Supplementary Materials~\ref{Supplement:ModelSelection}. In contrast, neither Akaike-type Information Criteria~\citep[e.g.][]{burman1995, koenker2005}, nor Schwarz/Bayesian Information Criteria~\citep[e.g.][]{machado1993, koenker1994, lee2014}, nor the Asymptotically defined Information Criterion~\citep{portnoy1997} are known to satisfy such a model selection consistency property if the candidate models are (possibly) misspecified.
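To illustrate how the de-biased risk estimate can drive model selection, here is a compact end-to-end sketch (our own, with arbitrary simulation settings): quantile regression is fit through its standard linear programming formulation, the trace-form penalty is added to the in-sample risk, and the candidate model with the smallest de-biased risk is selected. The $\rho_\tau(Y_{ni})$ centering term in \eqref{eq:EstimatePredRisk} is constant across models and is therefore dropped.

```python
import numpy as np
from scipy.optimize import linprog

def fit_qr(Y, Z, tau):
    """Quantile regression via its LP formulation:
    min tau*1'u + (1-tau)*1'v  s.t.  Z theta + u - v = Y, u, v >= 0."""
    n, p = Z.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([Z, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=Y,
                  bounds=[(None, None)] * p + [(0, None)] * (2 * n))
    return res.x[:p]

def debiased_risk(Y, Z, tau, h):
    """In-sample risk (without the constant rho_tau(Y) centering) plus the
    trace-form de-biasing term b_hat."""
    theta = fit_qr(Y, Z, tau)
    u = Y - Z @ theta
    rho = u * (tau - (u < 0))                  # check function rho_tau
    n = len(Y)
    kern = (np.abs(u) <= h) / (2.0 * h)
    D0 = (Z * kern[:, None]).T @ Z / n
    D1 = (Z * ((tau - (u < 0)) ** 2)[:, None]).T @ Z / n
    b_hat = np.trace(np.linalg.solve(D0, D1)) / n
    return rho.mean() + b_hat

# Nested candidate models; the data contain two relevant predictors (0 and 1).
rng = np.random.default_rng(1)
n, tau = 300, 0.5
X = rng.standard_normal((n, 6))
Y = X[:, 0] + X[:, 1] + rng.standard_normal(n)
candidates = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]
risks = [debiased_risk(Y, X[:, S], tau, h=0.4) for S in candidates]
best = candidates[int(np.argmin(risks))]
```

Because the penalty is of Akaike type, nearly tied over-fitted models can be selected either way (consistent with the discussion above); what the sketch reliably shows is that the under-fitted model is heavily penalized through its in-sample risk.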
\section{Empirical evidence}\label{sec:NumericalEvidence} \subsection{Set-up of the simulation study} We conduct Monte Carlo experiments to evaluate empirically the trace form approximation of the expected optimism and to corroborate the theoretical results from Sections~\ref{sec:ApproxExpectOpt} and~\ref{sec:EstimationPredRisk}. We also compare the empirical performance of the trace form approximation to the commonly used cross-validated estimate of the expected optimism. Our Monte Carlo study uses four designs as the data generating processes (DGP), but only the results from DGP1 are given in the paper. The results from the other DGPs are qualitatively similar and details are given in the Supplementary Materials in~\ref{Supplement:NumericalEvidence}. \begin{itemize} \item Independent Gaussian design (DGP1): $y_i = x_{i1} + x_{i2} + x_{i3} + x_{i4} + \epsilon_i$, with $x_{i} \sim_{i.i.d.} N(0, I_p)$ independent of the errors $\epsilon_i \sim_{i.i.d.} N(0, 4)$. We use this process to illustrate the elementary properties of the predictive risk and the expected optimism from Corollaries~\ref{corollary:TraceFormLocationModel} and~\ref{corollary:TraceFormNestedLocationQR}. The joint Gaussianity of predictors and errors allows us to compute the exact value of the trace form with which we can assess the accuracy of our estimates. The variance of the error distribution is chosen such that the signal-to-noise ratio equals one. \item Correlated Gaussian design (DGP2): $y_i = x_{i1} + x_{i2} + x_{i3} + x_{i4} + \epsilon_i$, with $\epsilon_i \sim_{i.i.d.} N(0, 12.384)$ independent of $x_{i} \sim_{i.i.d.} N(0, \Sigma)$ and $\Sigma_{ij}= 0.8^{|i-j|}$ for all $i, j = 1, \ldots, p$. The variance of the error distribution is chosen such that the signal-to-noise ratio equals one.
\item Heteroscedastic noise (DGP3): $y_i = x_{i1} + x_{i2} + x_{i3} + (1 + 1.5 x_{i4})\epsilon_i$, where $x_{ij} \sim_{i.i.d.} U\left([0,2]\right)$ for $j=1, \ldots, 4$ independent of the errors $\epsilon_{i} \sim_{i.i.d.} N(0, 1)$. In this DGP the covariate $x_4$ is active for the conditional quantile functions except at the median. \item Single interaction term with heavy-tailed noise (DGP4): $y_i = x_{i1} + x_{i2} + x_{i3} + 4x_{i3}x_{i4} + \epsilon_i$, where the $\epsilon_i$ follow the t-distribution with 2 degrees of freedom independent of the predictors $x_{i} \sim_{i.i.d.} N(0, I_p)$. In this DGP all quantiles are non-linear functions of the covariates. \end{itemize} We set the dimension of the space of covariates $\mathcal{X}$ equal to 50, and let $Z$ be the identity map, so that the predictors are simply the covariates $X_1, \ldots, X_{50}$. We consider a collection of 176 candidate models with model sizes ranging from 0 to 50. This implies that the size of the largest model under consideration is $m=50$. We explain the choice of those candidate models in Section~\ref{subsec:TraceFormIS}. Throughout the numerical experiments we keep the sample size fixed at $n=500$. All reported estimates are averages over 10,000 independent realizations of the corresponding DGPs. To estimate the matrix $D_0(S)$ at quantile $\tau$ we use~\possessivecite{powell1986} nonparametric estimator with uniform kernel function and bandwidth \begin{align*} c_{n,S} = \kappa_{n,S} \left(\Phi^{-1}(\tau + h_n) - \Phi^{-1}(\tau - h_n) \right), \end{align*} where $\Phi$ denotes the c.d.f. of the standard normal distribution, $\kappa_{n,S}$ is the minimum of the standard error and the inter-quartile-range of the estimated quantile regression residuals of model $S$, and \begin{align*} h_n = \frac{1}{n^{1/5}}\left(\frac{4.5\phi\big(\Phi^{-1}(\tau)\big)^4}{(2\Phi^{-1}(\tau)^2 + 1)^2}\right)^{1/5}, \end{align*} where $\phi$ denotes the p.d.f. of the standard normal distribution.
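In code, the bandwidth rule above amounts to a few lines. The sketch below is our own transcription of the two displays (we read ``standard error'' as the standard deviation of the estimated residuals; the function names are ours):

```python
import numpy as np
from scipy.stats import norm

def hall_sheather_h(n, tau):
    """h_n from the display above (a Hall--Sheather-type n^{-1/5} rate)."""
    q = norm.ppf(tau)
    return n ** (-0.2) * (4.5 * norm.pdf(q) ** 4 / (2 * q ** 2 + 1) ** 2) ** 0.2

def powell_bandwidth(residuals, tau):
    """c_{n,S} = kappa * (Phi^{-1}(tau + h_n) - Phi^{-1}(tau - h_n)), where
    kappa is the min of the residual standard deviation and the IQR."""
    h = hall_sheather_h(len(residuals), tau)
    kappa = min(np.std(residuals),
                np.subtract(*np.percentile(residuals, [75, 25])))
    return kappa * (norm.ppf(tau + h) - norm.ppf(tau - h))

# Example: bandwidth for n = 500 standard-normal residuals at the median.
rng = np.random.default_rng(0)
resid = rng.standard_normal(500)
c = powell_bandwidth(resid, tau=0.5)
```

At $\tau = 0.5$ and $n = 500$, $h_n \approx 0.19$, so the resulting $c_{n,S}$ is roughly the residual scale itself.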
Thus, $c_{n,S}$ satisfies the conditions of Theorems~\ref{theorem:ConsistencyEstExpectOpt} and~\ref{theorem:PredRiskConsistency} which guarantee (uniform) consistency of the estimates of the expected optimism and the predictive risk; see~\cite{koenker2005} for a detailed discussion of this choice of bandwidth. Recall from Definitions~\ref{def:PredRisk} and~\ref{def:ExpectedOptimism} that the predictive risk and the expected optimism require the evaluation of a double expectation. Since the quantile regression vector is only implicitly defined, this double expectation cannot be evaluated analytically. Instead, we use Monte Carlo estimates based on 50,000 samples to obtain values for the predictive risk and the expected optimism. \subsection{Estimation of the expected optimism}\label{subsec:TraceFormIS} In Theorem~\ref{theorem:ConsistencyEstExpectOpt} we establish uniform consistency of the estimated trace form for the expected optimism. In Figure~\ref{fig:DGP1-IS}, under DGP1, we plot the bias of 176 models (subsets of the 50 predictors) against their model sizes. We only consider 176 models because it is computationally expensive to evaluate the predictive risk and the expected optimism on all possible subsets of the 50 predictors. However, the special structure of the DGP together with Corollary~\ref{corollary:TraceFormNestedLocationQR} guarantees that this collection constitutes a representative subset of all possible models: The true DGP contains only four relevant predictors 1, 2, 3, and 4; those predictors are independent and identically distributed and contribute equally to the model (i.e. have the same regression coefficients). We can therefore stratify the collection of all possible subsets of the 50 predictors according to how many relevant predictors are included in a specific subset. This results in five collections of nested models, indexed by the number of relevant predictors they contain: 0, 1, \ldots, 4.
By Corollary~\ref{corollary:TraceFormNestedLocationQR} the expected optimism of all nested models with $j$ relevant predictors lies (approximately) on a ray emanating from the in-sample bias of the smallest model with $j$ relevant predictors. Moreover, the slope of the ray is given by $\frac{\tau(1-\tau)}{500 \phi_j}$, where $\phi_j$ denotes the value of the density of a centered normal random variable with variance $j^2 + 1$ evaluated at 0. The 176 models comprise the model that contains only the intercept and 35 models from each of the five stratified collections. In Figure~\ref{fig:DGP1-IS} the top gray line corresponds to the theoretical values of the trace form of models that have four relevant predictors and additional, irrelevant, predictors. The second line from the top corresponds to the theoretical values of the trace form of models that contain three relevant predictors and additional, irrelevant, predictors, and so forth. The last line (fifth from the top) corresponds to models that do not contain any relevant predictors. We observe that the estimates of the trace form (in red) lie on (or very close to) the theoretical values of the trace form uniformly for all 176 models. This confirms the fast uniform convergence rates obtained in Theorem~\ref{theorem:ConsistencyEstExpectOpt}. Note that the plot shows only 50 red dots and not, as one might expect, 176 dots. This is due to the fact that for DGP1 the value of the estimated trace form does not depend on the specific subset of predictors (i.e. $S$) but only on the size of the model (i.e. $|S|$); e.g. the two models with predictors $\{1,2, 5\}$ and $\{3,4, 10\}$ have the same trace form, which is fully determined by the fact that they contain two relevant and one irrelevant predictor. The expected optimism (in blue) does not follow the dashed gray lines of the theoretical values of the trace form as closely as the estimates do.
This reflects the fact that the trace form is only an approximation to the expected optimism (see Theorem~\ref{theorem:ExpectOptTraceForm}). The difference between the values of the trace form and the expected optimism appears to be negligible for models of size up to $20 \approx \sqrt{n}$ (recall that $n= 500$). The vertical red lines indicate the standard deviations of the estimated trace forms. The standard deviation increases with the model size and, holding the number of nuisance predictor variables fixed, decreases with the number of relevant predictor variables that are included in the model. The latter effect is rather weak and can be best observed in the plot for the 80\% quantile. \begin{figure}[!htbp] \centering \minipage{0.85\textwidth} \includegraphics[width=\linewidth]{expected-optimism-lines-50} \endminipage\hfill\\ \minipage{0.85\textwidth} \includegraphics[width=\linewidth]{expected-optimism-lines-80} \endminipage \caption{Red: estimates of the trace form and standard errors. Blue: expected optimism. Dashed gray lines: exact evaluation of the trace form. Top: DGP1 with $\tau = 0.5$. Bottom: DGP1 with $\tau =0.8$. }\label{fig:DGP1-IS} \end{figure} \subsection{Comparison with cross-validated expected optimism} Cross-validation is a commonly-used method for estimating the predictive risk and the expected optimism. In this subsection we compare the trace form estimate with a 10-fold cross-validation estimate of the expected optimism. Figure~\ref{fig:DGP1-IS-Bias} shows the results of 10-fold cross-validation and the trace form for DGP1 at the median. We consider four representative models: Model I is the correct model (with predictors 1 to 4), Model II is an over-fitted model (with predictors 1 to 10), Model III is an under-fitted model (with predictors 1 and 2) and Model IV is the model that comprises the relevant predictors 1 and 2 and the irrelevant predictors 5 to 15. The vertical red line indicates the expected optimism. 
The white histograms show the empirical distribution of 10,000 cross-validation estimates of the expected optimism and the dark gray histograms show the empirical distribution of 10,000 trace form estimates of the expected optimism. Both histograms are centered around the expected optimism; however, the estimate of the trace form concentrates significantly more tightly around the target. As mentioned in Section~\ref{subsec:PredRiskandExpectOptGENERAL}, the reason for this is that the cross-validation estimate is based on a smaller sample size both for model estimation and for risk estimation. \begin{figure}[!htbp] \minipage{0.5\textwidth} \includegraphics[width=\linewidth]{hist-bias-CV-est-50-true-model-new} \endminipage\hfill \minipage{0.5\textwidth} \includegraphics[width=\linewidth]{hist-bias-CV-est-50-over-fitted-new} \endminipage\hfill \minipage{0.5\textwidth} \includegraphics[width=\linewidth]{hist-bias-CV-est-50-under-fitted-new} \endminipage\hfill \minipage{0.5\textwidth} \includegraphics[width=\linewidth]{hist-bias-CV-est-50-1-2-5-15-new} \endminipage \caption{Histograms of the 10-fold CV estimate of the expected optimism and the trace form estimate for DGP1 and $\tau = 0.5$. Red line: expected optimism. White histogram: 10-fold CV. Gray histogram: trace form estimate. Model I: the correct model (with predictors 1 to 4), Model II: an over-fitted model (with predictors 1 to 10), Model III: an under-fitted model (with predictors 1 and 2), and Model IV: the model that comprises the relevant predictors 1 and 2 and the irrelevant predictors 5 to 15.}\label{fig:DGP1-IS-Bias} \end{figure} \section{Conclusion}\label{sec:Conclusion} In the present paper, we have derived two asymptotic approximations of the expected optimism, or the bias of the in-sample risk when used as an estimate of the predictive risk, and have proposed consistent estimates of the expected optimism and the predictive risk of potentially misspecified quantile regression models.
The asymptotic approximations based on two explicit forms help us understand how the expected optimism depends on several factors, including the quantile level, the model misspecification bias, the model size, and sampling variability. In some simpler cases, the expected optimism is asymptotically linear in the model size, but for under-fitted or misspecified models in general, the relationship is far more complicated. The results show that commonly used AIC-type model selection criteria for quantile regression are poor proxies for the predictive risk. We propose a bias-corrected estimate of the predictive risk and establish its uniform consistency under weak assumptions. Our theoretical results indicate that de-biasing the in-sample risk with an estimate of the expected optimism is necessary when considering models whose dimension grows at least as fast as $n^{1/2}$. Empirical evidence suggests that even in the case of models with fixed dimension, estimates of the predictive risk can be significantly improved by de-biasing the in-sample risk. The asymptotic approximations derived in the present paper are uniform in a class of candidate models, but those models are not data-dependent. An interesting question that relates more to model selection criteria is how well the bias approximations, and thus the predictive risk estimation, hold up for data-dependent models. Clearly, additional research is needed to address this question. \section*{Acknowledgements} The research is partly supported by NSF Award DMS-1607840 (USA) and Natural Science Foundation of China Grant 11690012. We thank the Editors and two Referees for their comments and suggestions that have led to significant improvements in the presentation and scope of the paper.
\section{Introduction} \label{sec:intro} This paper is concerned with the fundamental question of how well one can estimate a sparse vector from noisy linear measurements in the general situation where one has the flexibility to design those measurements at will (in the language of statistics, one would say that there is nearly complete freedom in designing the experiment). This question is of importance in a variety of sparse signal estimation or sparse regression scenarios, but perhaps arises most naturally in the context of compressive sensing (CS) \cite{MR2236170,MR2300700,donoho-CS}. In a nutshell, CS asserts that it is possible to reliably acquire sparse signals from just a few linear measurements selected a priori. More specifically, suppose we wish to acquire a sparse signal $\mathbf{x} \in \mathbb{R}^n$. A possible CS acquisition protocol would proceed as follows. $(i)$ Pick an $m \times n$ random projection matrix $\mathbf{A}$ (the first $m$ rows of a random unitary matrix) in advance, and collect data of the form \begin{equation} \label{measure} \mathbf{y} = \mathbf{A} \mathbf{x} + \mathbf{z}, \end{equation} where $\mathbf{z}$ is a vector of errors modeling the fact that any real world measurement is subject to at least a small amount of noise. $(ii)$ Recover the signal by solving an $\ell_1$ minimization problem such as the Dantzig selector~\cite{dantzig} or the LASSO~\cite{Tibshirani96}. As is now well known, theoretical results guarantee that such convex programs yield accurate solutions. In particular, when $\mathbf{z} = {\boldsymbol 0}$, the recovery is exact, and the error degrades gracefully as the noise level increases. A remarkable feature of the CS acquisition protocol is that the sensing is completely nonadaptive; that is to say, no effort whatsoever is made to understand the signal. 
One simply selects a collection $\{\mathbf{a}_i\}$ of sensing vectors a priori (the rows of the matrix $\mathbf{A}$), and measures correlations between the signal and these vectors. One then uses numerical optimization---e.g., linear programming \cite{dantzig}---to tease out the sparse signal $\mathbf{x}$ from the data vector $\mathbf{y}$. While this may make sense when there is no noise, this protocol might draw some severe skepticism in a noisy environment. To see why, note that in the scenario above, most of the power is actually spent measuring the signal at locations where there is no information content, i.e., where the signal vanishes. Specifically, let $\mathbf{a}$ be a row of the matrix $\mathbf{A}$ which, in the scheme discussed above, has uniform distribution on the unit sphere. The dot product is \[ \<\mathbf{a}, \mathbf{x}\> = \sum_{j=1}^n a_j x_j, \] and since most of the coordinates $x_j$ are zero, one might think that most of the power is wasted. Another way to express all of this is that by design, the sensing vectors are approximately orthogonal to the signal, yielding measurements with low signal power or a poor signal-to-noise ratio (SNR). The idea behind adaptive sensing is that one should localize the sensing vectors around locations where the signal is nonzero in order to increase the SNR, or equivalently, not waste sensing power. In other words, one should try to ``learn'' as much as possible about the signal while acquiring it in order to design more effective subsequent measurements. Roughly speaking, one would $(i)$ detect those entries which are nonzero or significant, $(ii)$ progressively localize the sensing vectors on those entries, and $(iii)$ estimate the signal from such localized linear functionals. This is akin to the game of 20 questions in which the search is narrowed by formulating the next question in a way that depends upon the answers to the previous ones. 
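For readers who want to experiment with the nonadaptive protocol just described, the following self-contained sketch simulates it: unit-norm random sensing vectors, noisy measurements, and sparse recovery by iterative soft thresholding (a basic solver for the LASSO; the Dantzig selector of \cite{dantzig} would behave similarly here). All parameter values are illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, sigma = 200, 80, 4, 0.01

# k-sparse signal and unit-norm random sensing vectors (the rows of A).
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 3.0
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
y = A @ x + sigma * rng.standard_normal(m)          # y = A x + z

# LASSO via iterative soft thresholding (ISTA).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(2000):
    z_step = x_hat - A.T @ (A @ x_hat - y) / L      # gradient step
    x_hat = np.sign(z_step) * np.maximum(np.abs(z_step) - lam / L, 0.0)
```

With these settings the $k$ largest entries of `x_hat` should sit on the true support, illustrating the exact-recovery behavior described above (up to a small noise-induced error).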
Note that in some applications, such as in the acquisition of wideband radio frequency signals, aggressive adaptive sensing mechanisms may not be practical because they would require near instantaneous feedback. However, there do exist applications where adaptive sensing is practical and where the potential benefits of adaptivity are too tantalizing to ignore. The formidable possibilities offered by adaptive sensing give rise to the following natural ``folk theorem.'' \begin{quote} {\bf Folk Theorem}. {\em The estimation error one can get by using a clever adaptive sensing scheme is far better than what is achievable by a nonadaptive scheme. } \end{quote} In other words, learning about the signal along the way and adapting the questions (the next sensing vectors) to what has been learned to date is bound to help. In stark contrast, the main result of this paper is this: \begin{quote} {\bf Surprise}. {\em The folk theorem is wrong in general. No matter how clever the adaptive sensing mechanism, no matter how intractable the estimation procedure, in general it is not possible to achieve a fundamentally better mean-squared error (MSE) of estimation than that offered by a na\"{i}ve random projection followed by $\ell_1$ minimization.} \end{quote} The rest of this article is mostly devoted to making this claim precise. In doing so, we shall also show that adaptivity does not help in obtaining a fundamentally better estimate of the signal support, which is of independent interest. \subsection{Main result} To formalize matters, we assume that the error vector $\mathbf{z}$ in \eqref{measure} has i.i.d.\ $\mathcal{N}(0,\sigma^2)$ entries. 
Then if $\mathbf{A}$ is a random projection with unit-norm rows as discussed above, \cite{dantzig} shows that the Dantzig selector estimate $ \widehat{\mathbf{x}}^{\rm DS}$ (obtained by solving a simple linear program) achieves an MSE obeying \begin{equation} \label{nonadapt} \frac1n \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}}^{\rm DS} - \mathbf{x} \|_2^2 \leq C \, \frac{k}{m} \, \log(n) \, \sigma^2, \end{equation} where $C$ is some numerical constant. The bound holds {\em universally} over all $k$-sparse signals\footnote{A signal is said to be $k$-{\em sparse} if it has at most $k$ nonzero components. We also occasionally use the notation $\|\mathbf{x}\|_0$ to denote the number of nonzero components of $\mathbf{x}$.} provided that the number of measurements $m$ is sufficiently large (on the order of at least $k \log (n/k)$). Moreover, one can show that this result is essentially optimal in the sense that {\em any} possible nonadaptive choice of $\mathbf{A}$ (with unit-norm rows) and {\em any} possible estimation procedure $\widehat{\mathbf{x}}$ will satisfy \begin{equation} \label{nonadapt-LB} \frac1n \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \geq C' \, \frac{k}{m} \, \log(n/k) \, \sigma^2, \end{equation} where $C'$ is a numerical constant~\cite{candes-davenport}. The fundamental question is thus: {\em how much lower can the MSE be} when $(i)$ we are allowed to sense the signal adaptively and $(ii)$ we can use any estimation algorithm we like to recover $\mathbf{x}$. The distinction between adaptive and nonadaptive sensing can be expressed in the following manner. 
Begin by rewriting the statistical model \eqref{measure} as \begin{equation} \label{measure2} y_i = \<\mathbf{a}_i, \mathbf{x}\> + z_i, \quad i = 1, \dots, m, \end{equation} in which a power constraint imposes that each $\mathbf{a}_i$ is of norm at most 1, i.e., $\|\mathbf{a}_i\|_2 \leq 1$; then in a nonadaptive sensing scheme the vectors $\mathbf{a}_1, \dots, \mathbf{a}_m$ are chosen in advance and do not depend on $\mathbf{x}$ or $\mathbf{z}$, whereas in an adaptive setting, the measurement vectors may be chosen depending on the history of the sensing process, i.e., $\mathbf{a}_i$ is a (possibly random) function of $(\mathbf{a}_1, y_1, \dots, \mathbf{a}_{i-1}, y_{i-1})$. If we follow the principle that ``you cannot get something for nothing,'' one might argue that giving up the freedom to adaptively select the sensing vectors would result in a far worse MSE. Our main contribution is to show that this is not the case. \begin{thm} \label{teo:main-minmax} Suppose that $k < n/2$ and let $m$ be an arbitrary number of measurements. Assume that $\mathbf{x}$ is sampled with i.i.d.\ coordinates such that $x_j = 0$ with probability $1-k/n$ and $x_j = \mu$ with probability $k/n$ (so that we have $k$ nonzero entries on the average). Then for $\mu = \frac{4}{3} \, \sigma \sqrt{\frac{n}{m}}$, any sensing strategy and any estimate $\widehat{\mathbf{x}}$ obey \begin{equation} \label{main-minmax} \frac1n \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \ge \frac{4}{27} \, \frac{k}{m} \, \sigma^2 > \frac{1}{7} \, \frac{k}{m} \, \sigma^2. \end{equation} \end{thm} For any $n$ and $k$, the number of nonzero entries in a random vector drawn from the Bernoulli prior is between $k \pm 3\sqrt{k}$ with probability at least 99\%. With some additional arguments, and when $n$ and $k$ are sufficiently large, we can actually show that the last inequality in \eqref{main-minmax} holds true in a minimax sense when $\mathbf{x}$ is known to have a support size in that range. 
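The choice of amplitude in the theorem can be put in perspective with a quick simulation (parameters illustrative; $\sigma = 1$): at this amplitude, even the trivial estimate $\widehat{\mathbf{x}} = 0$ has per-coordinate risk $\frac{16}{9} \frac{k}{m} \sigma^2$, so the theorem says that no scheme, however adaptive, can beat ``estimate zero'' by more than a constant factor ($12\times$).

```python
import random

random.seed(1)

n, m, k, sigma = 1000, 250, 10, 1.0              # illustrative sizes
mu = (4.0 / 3.0) * (n / m) ** 0.5 * sigma        # the critical amplitude (sigma = 1 here)

def sample_prior():
    """Bernoulli prior: each coordinate equals mu with probability k/n, else 0."""
    return [mu if random.random() < k / n else 0.0 for _ in range(n)]

# Per-coordinate risk of the trivial estimate x_hat = 0 under this prior.
trials = 2000
risk = 0.0
for _ in range(trials):
    x = sample_prior()
    risk += sum(v * v for v in x) / n
risk /= trials

exact = mu ** 2 * k / n                          # = (16/9)(k/m) sigma^2
lower_bound = (4.0 / 27.0) * (k / m) * sigma ** 2
print(risk, exact, lower_bound)                  # the lower bound equals exact / 12
```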
In order to avoid unnecessary technicalities, we prove a simpler result. \begin{thm} \label{thm:second-minmax} For any $n \ge 2$ and $k < n/2$, and any $m$, \begin{equation} \label{at-most-k} \inf_{\widehat{\mathbf{x}}} \, \sup_{\|\mathbf{x}\|_0 \le k} \ \frac1n \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \ge C_k \, \frac{k}{m} \, \sigma^2, \end{equation} in which $\inf_{k \ge 1} C_k \ge 1/33$. For $k \ge 10$ we can take $C_k \ge 1/15$, and when $k$ is sufficiently large we can take $C_k = 1/7$. \end{thm} In short, Theorems~\ref{teo:main-minmax} and~\ref{thm:second-minmax} say that if one ignores a logarithmic factor, then {\em adaptive measurement schemes cannot (substantially) outperform nonadaptive strategies.} While seemingly counterintuitive, we find that precisely the same sparse vectors which determine the minimax rate in the nonadaptive setting are essentially so difficult to estimate that by the time we have identified the support, we will have already exhausted our measurement budget (i.e., we will have acquired all $m$ measurements). Before moving on, we should clarify precisely what we mean by a {\em substantial} improvement. After all, the lower bound in Theorem~\ref{thm:second-minmax} does improve upon the nonadaptive bound in~\eqref{nonadapt-LB} by a factor of $\log(n/k)$. Indeed, we will see in Section~\ref{sec:discussion} that at least in some very special cases (e.g., when $k=1$), this log factor can in fact be eliminated. However, this is a relatively modest improvement compared to what one might hope to gain by exploiting adaptivity. Specifically, consider a simple adaptive procedure that uses $m/2$ measurements to identify the support of $\mathbf{x}$ and uses the remaining $m/2$ measurements to estimate the values of the nonzeros. 
If such a scheme identifies the correct support, then it is easy to show that this procedure will yield an estimate satisfying $$ \frac1n \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 = \frac{2k}{n} \, \frac{k}{m} \, \sigma^2 . $$ Thus, there seems to be room for reducing the error by a factor of $k/n$ beyond the $\log(n/k)$ factor. Theorem~\ref{thm:second-minmax}, however, shows that this gain is not possible in general. On the one hand, our main result states that one cannot universally improve on bounds achievable via nonadaptive sensing strategies. Indeed, we will see that there are natural classes of sparse signals for which, even after applying the most clever sensing scheme and the most subtle testing procedure, one would still not be sure about where the nonzeros lie. This remains true even after having used up the entirety of our measurement budget. On the other hand, our result does not say that adaptive sensing {\em never} helps. In fact, there are many instances in which it will. For example, when some or most of the nonzero entries in $\mathbf{x}$ are sufficiently large, they may be detected sufficiently early so that one can ultimately get a far better MSE than what would be obtained via a nonadaptive scheme, see Section \ref{sec:numerics} for simple experiments in this direction and \secref{discussion} for further discussion. \subsection{Connections with testing problems} The arguments we develop to reach our conclusions are quite intuitive, simple, and yet they seem different from the classical Fano-type arguments for obtaining information-theoretic lower bounds (see Section \ref{sec:fano} for a discussion of the latter methods). Our approach involves proving a lower bound for the Bayes risk under the prior from Theorem~\ref{teo:main-minmax}. To obtain such a lower bound, we make a detour through testing---multiple testing to be exact. 
Our argument proceeds through two main steps: \begin{itemize} \item {\em Support recovery in Hamming distance.} We consider the multiple testing problem of deciding which components of the signal are zero and which are not. We show that no matter which adaptive strategy and tests are used, the Hamming distance between the estimated and true supports is large. Put differently, the multiple testing problem is shown to be difficult. In passing, this establishes that adaptive schemes are not substantially better than nonadaptive schemes for support recovery. \item {\em Estimation with mean-squared loss.} Any estimator with a low MSE can be converted into an effective support estimator simply by selecting the largest coordinates or those above a certain threshold. Hence, a lower bound on the Hamming distance immediately gives a lower bound on the MSE. \end{itemize} The crux of our argument is thus to show that it is not possible to choose sensing vectors adaptively in such a way that the support of the signal may be estimated accurately. \subsection{Differential entropies and Fano-type arguments} \label{sec:fano} Our approach is significantly different from classical methods for getting lower bounds in decision and information theory. Such methods typically rely on Fano's inequality~\cite{MR2239987}, and are all intimately related to methods in statistical decision theory (see \cite{MR2724359,MR1462963}). Before continuing, we would like to point out that Fano-type arguments have been used successfully to obtain (often sharp) lower bounds for some adaptive methods. For example, the work \cite{4494677} uses results from \cite{MR2724359} to establish a bound on the minimax rate for binary classification (see the references therein for additional literature on active learning). 
Other examples include the recent paper \cite{rigollet2010nonparametric}, which derives lower bounds for bandit problems, and \cite{5394945}, which develops an information-theoretic approach suitable for stochastic optimization, a form of online learning, and gives bounds on the rate at which iterative convex optimization schemes approach a solution. Following the standard approaches in our setting leads to major obstacles that we would like to briefly describe. Our hope is that this will help the reader to better appreciate our easy itinerary. As usual, we start by choosing a prior for $\mathbf{x}$, which we take to have zero mean. Coming from information theory, one would want to bound the mutual information between $\mathbf{x}$ (what we want to learn about) and $\mathbf{y}$ (the information we have), for any measurement scheme $\mathbf{a}_1, \dots, \mathbf{a}_m$. Assuming a deterministic measurement scheme, by the chain rule, we have \begin{equation} \label{diff-entropy} I(\mathbf{x},\mathbf{y}) = h(\mathbf{y}) - h(\mathbf{y} \, | \, \mathbf{x}) = \sum_{i = 1}^m h(y_i \, | y_{[i-1]}) - h(y_i \, | y_{[i-1]}, \mathbf{x}), \end{equation} where $y_{[i]} := (y_1, \ldots, y_{i})$. Since the history up to time $i-1$ determines $\mathbf{a}_i$, the conditional distribution of $y_i$ given $y_{[i-1]}$ and $\mathbf{x}$ is then normal with mean $\<\mathbf{a}_i, \mathbf{x}\>$ and variance $\sigma^2$. Hence, $h(y_i \, |\, y_{[i-1]}, \mathbf{x}) = \frac12 \log(2\pi e \sigma^2)$. This is the easy term to handle---the challenging term is $h(y_i \, |\, y_{[i-1]})$ and it is not clear how one should go about finding a good upper bound. To see this, observe that \[ \text{Var}(y_i \, | \, y_{[i-1]}) = \text{Var}(\<\mathbf{a}_i, \mathbf{x}\> \, | \, y_{[i-1]}) + \sigma^2. 
\] A standard approach to bound $h(y_i \, |\, y_{[i-1]})$ is to write \[ h(y_i \, | y_{[i-1]}) \le \frac12 \operatorname{\mathbb{E}} \log \bigl(2\pi e \text{Var}(\<\mathbf{a}_i, \mathbf{x}\> \, | \, y_{[i-1]}) + 2\pi e \sigma^2\bigr), \] using the fact that the Gaussian distribution maximizes the entropy among distributions with a given variance. If we simplify the problem by applying Jensen's inequality, we obtain \begin{equation} \label{I} I(\mathbf{x},\mathbf{y}) \leq \sum_{i=1}^m \frac12 \log \bigl(\operatorname{\mathbb{E}} \<\mathbf{a}_i, \mathbf{x}\>^2 / \sigma^2+ 1\bigr). \end{equation} The RHS needs to be bounded uniformly over all choices of measurement schemes, which is a daunting task given that $\mathbf{a}_i$ is a function of $y_{[i-1]}$ which is in turn a function of $\mathbf{x}$. We note however that the RHS {\em can} be bounded in the nonadaptive setting, which is the approach taken in~\cite{candes-davenport} to establish~\eqref{nonadapt-LB}. See also~\cite{raskutti2009minimax,5571873,verzelen2010minimax} for other asymptotic results in this direction. We have presented the problem in this form to help information theorists see the analogy with the problem of understanding the role of feedback in a Gaussian channel \cite{MR2239987}. Specifically, we can view the inner products $\<\mathbf{a}_i, \mathbf{x}\>$ as inputs to a Gaussian channel where we observe the output of the channel via feedback. It is well-known that feedback does not substantially increase the capacity of a Gaussian channel, so one might expect this argument to be relevant to our problem as well. Crucially, however, in the case of a Gaussian channel the user has full control over the channel input---whereas in the absence of a priori knowledge of $\mathbf{x}$, in our problem we are much more restricted in our control over the ``channel input'' $\< \mathbf{a}_i, \mathbf{x} \>$. 
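To illustrate the remark that the RHS of \eqref{I} {\em can} be bounded in the nonadaptive setting, the sketch below computes $\operatorname{\mathbb{E}} \<\mathbf{a}, \mathbf{x}\>^2$ in closed form for a fixed unit-norm $\mathbf{a}$ under the Bernoulli prior of Theorem~\ref{teo:main-minmax} (independence across coordinates gives $\operatorname{\mathbb{E}} \<\mathbf{a},\mathbf{x}\>^2 = \text{Var}(x_1) \sum_j a_j^2 + (\operatorname{\mathbb{E}} x_1 \sum_j a_j)^2$) and checks it by Monte Carlo; parameters are illustrative and only the standard library is used.

```python
import math
import random

random.seed(2)

n, m, k, sigma = 100, 25, 5, 1.0     # illustrative sizes
mu = (n / m) ** 0.5
p = k / n

# A fixed (nonadaptive) unit-norm sensing vector.
g = [random.gauss(0.0, 1.0) for _ in range(n)]
norm = math.sqrt(sum(v * v for v in g))
a = [v / norm for v in g]

# Closed form: E<a,x>^2 = Var(x_1) * sum_j a_j^2 + (E x_1 * sum_j a_j)^2, with sum_j a_j^2 = 1.
exact = mu ** 2 * p * (1 - p) + (mu * p * sum(a)) ** 2

# Monte Carlo check over draws of x from the Bernoulli prior.
trials = 8000
acc = 0.0
for _ in range(trials):
    x = [mu if random.random() < p else 0.0 for _ in range(n)]
    dot = sum(aj * xj for aj, xj in zip(a, x))
    acc += dot * dot
mc = acc / trials

# Plugging into the nonadaptive version of the mutual-information bound (I):
info_bound = (m / 2.0) * math.log(exact / sigma ** 2 + 1.0)
print(mc, exact, info_bound)
```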
\subsection{Connections with other works} \label{sec:connections} A number of papers have studied the advantages (or sometimes the lack thereof) offered by adaptive sensing in the setting where one has {\em noiseless} data, see for example \cite{donoho-CS,NovakPower,indyk} and references therein. Of course, it is well known that one can uniquely determine a $k$-sparse vector from $2k$ linear nonadaptive noise-free measurements and, therefore, there is not much to dwell on. The aforementioned works of course do not study such a trivial problem. Rather, the point of view is that the signal is not exactly sparse, only approximately sparse, and the question is thus whether one can get a lower approximation error by employing an adaptive scheme. Whereas we study a statistical problem, this is a question in approximation theory. Consequently, the techniques and results of this line of research have no bearing on our problem. There is much research suggesting intelligent adaptive sensing strategies in the presence of noise and we mention a few of these works. In a setting closely related to ours---that of detecting the locations of the nonzeros of a sparse signal from noisy point samples (so that $m>n$)---\cite{haupt2009distilled} shows that by adaptively allocating sensing resources one can significantly improve upon the best nonadaptive schemes~\cite{dj04}. Lower bounds for nonadaptive and adaptive methods in this context were recently established in~\cite{malloy2011}, with the adaptive lower bounds established through the sequential probability ratio test (SPRT)~\cite{MR799155}. Closer to home,~\cite{haupt-adaptive,haupt-compressive} consider CS schemes (with $m < n$) which perform sequential subset selection via the random projections typical of CS, but which focus in on promising areas of the signal. 
When the signal is $(i)$ very sparse, $(ii)$ has sufficiently large entries, and $(iii)$ has constant dynamic range, the method in~\cite{haupt-compressive} is able to remove a logarithmic factor from the MSE achieved by the Dantzig selector with (nonadaptive) i.i.d.\ Gaussian measurements. In a different direction, \cite{4518814,4524050} suggest Bayesian approaches where the measurement vectors are sequentially chosen so as to maximize the conditional differential entropy of $y_i$ given $y_{[i-1]}$. Finally, another approach in~\cite{iwen} suggests a bisection method based on repeated measurements for the detection of 1-sparse vectors, subsequently extended to $k$-sparse vectors via hashing. None of these works, however, establishes a lower bound on the MSE of the recovered signal. \subsection{Content} We prove all of our results in Section \ref{sec:main}, trying to give as much insight as possible as to why adaptive methods are not much more powerful than nonadaptive ones for detecting the support of a sparse signal. We will also attempt to describe the regime in which adaptivity might be helpful via simple numerical simulations in Section \ref{sec:numerics}. These simulations show that adaptive algorithms are subject to a fundamental phase transition phenomenon. Finally, we comment on open problems and future research in Section \ref{sec:discussion}. \section{Limits of Adaptive Sensing Strategies} \label{sec:main} This section establishes nonasymptotic lower bounds for the estimation of a sparse vector from adaptively selected noisy linear measurements. To begin with, we remind ourselves that we collect possibly adaptive measurements of the form \eqref{measure2} of an $n$-dimensional signal $\mathbf{x}$ where $\|\mathbf{a}_i\|_2 \le 1$; from now on, we assume for simplicity and without loss of generality that $\sigma = 1$. 
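The measurement protocol just described is easy to phrase as a generic loop. The sketch below (standard-library Python; the selection rule is a placeholder standing in for any adaptive strategy) makes the power constraint and the role of the history explicit.

```python
import math
import random

random.seed(3)

n, m = 100, 30   # illustrative sizes; sigma = 1 as in this section

def choose_vector(history):
    """Placeholder selection rule: any function of the history
    ((a_1, y_1), ..., (a_{i-1}, y_{i-1})) returning a vector with ||a||_2 <= 1.
    Here it ignores the history (a nonadaptive rule, uniform on the sphere)."""
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

def sense(x):
    """Collect m possibly adaptive measurements y_i = <a_i, x> + z_i with z_i ~ N(0, 1)."""
    history = []
    for _ in range(m):
        a = choose_vector(history)
        assert sum(v * v for v in a) <= 1.0 + 1e-9     # the power constraint
        y = sum(aj * xj for aj, xj in zip(a, x)) + random.gauss(0.0, 1.0)
        history.append((a, y))
    return history

x = [0.0] * n
x[7] = 2.0                     # a 1-sparse test signal
data = sense(x)
print(len(data))               # m pairs (a_i, y_i)
```

An adaptive scheme is obtained simply by letting `choose_vector` actually inspect `history`.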
In our analysis below, we denote the total-variation metric between any two probability distributions $\mathbb{P}$ and $\mathbb{Q}$ by $\|\mathbb{P} - \mathbb{Q}\|_{\text{TV}}$, and their KL divergence by $K(\mathbb{P}, \mathbb{Q})$~\cite{MR2319879}. Our arguments will make use of Pinsker's inequality, which relates these two quantities via \begin{equation} \label{pinsker} \|\mathbb{P} - \mathbb{Q}\|_{\rm TV} \leq \sqrt{K(\mathbb{Q}, \mathbb{P})/2}. \end{equation} We shall also use the convexity of the KL divergence, which states that for $\lambda_i \ge 0$ and $\sum_i \lambda_i = 1$, we have \begin{equation} \label{eq:KLconvex} K\Bigl(\sum_i \lambda_i \mathbb{P}_i, \sum_i \lambda_i \mathbb{Q}_i\Bigr) \leq \sum_i \lambda_i K(\mathbb{P}_i, \mathbb{Q}_i) \end{equation} in which $\{\mathbb{P}_i\}$ and $\{\mathbb{Q}_i\}$ are families of probability distributions. Before proceeding, we argue that when we are given a prior $\pi(\mathbf{x})$, we can restrict ourselves to deterministic measurement schemes in the sense that $\mathbf{a}_1$ is a deterministic vector and, for $i \geq 2$, $\mathbf{a}_i$ is a deterministic function of $y_{[i-1]} = (y_1, \dots, y_{i-1})$. In the general case we have $\mathbf{a}_i = F_i(y_{[i-1]}, U_i)$, where $F_i$ is a deterministic function and $U_i$ is random and independent of $y_{[i-1]}$ and $z_i$. With $\mathbf{U} = (U_1, \ldots, U_m)$, it follows from the law of iterated expectation \[ \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|^2 = \operatorname{\mathbb{E}} \Bigl[ \operatorname{\mathbb{E}}[\|\widehat{\mathbf{x}} - \mathbf{x}\|^2 | \mathbf{U}] \Bigr] \] (the expectation on the left-hand side is taken over $\mathbf{x}, \mathbf{y}$ and $\mathbf{U}$) that there exists a fixed realization $\mathbf{u} = (u_1, \ldots, u_m)$ obeying \[ \operatorname{\mathbb{E}}[\|\widehat{\mathbf{x}} - \mathbf{x}\|^2 | \mathbf{U} = \mathbf{u}] \le \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|^2. 
\] Hence, we can construct an estimator based on a deterministic measurement scheme which is as good as any based on a randomized measurement scheme. Note that in a deterministic scheme, letting $\mathbb{P}_\mathbf{x}$ be the distribution of $y_{[m]}$ when the target vector is $\mathbf{x}$ and using the fact that $y_i$ is conditionally independent of $y_{[i-1]}$ given $\mathbf{a}_i$, we see that the likelihood factorizes as \begin{equation} \label{Px} \P_\mathbf{x}(y_{[m]}) = \prod_{i=1}^m \P_\mathbf{x}(y_i | \mathbf{a}_i), \end{equation} which will be of use in our analysis below. \subsection{The Bernoulli prior} \label{sec:bernoulli} We begin by studying the model in Theorem~\ref{teo:main-minmax} which makes our argument most transparent. The proof of Theorem~\ref{thm:second-minmax} essentially reduces to that of Theorem~\ref{teo:main-minmax}. In this model, we suppose that $\mathbf{x} \in \mathbb{R}^n$ is sampled from a product prior: for each $j \in \{1, \ldots, n\}$, \begin{equation} \label{bernoulli} x_j = \begin{cases} 0 & \text{w.p. } 1-k/n,\\ \mu & \text{w.p. } k/n, \end{cases} \end{equation} and the $x_j$'s are independent. In this model, $\mathbf{x}$ has on average $k$ nonzero entries, all with known positive amplitudes equal to $\mu$. This model is easier to study than the related model in which one selects $k$ coordinates uniformly at random and sets those to $\mu$. The reason is that in this Bernoulli model, the independence between the coordinates of $\mathbf{x}$ brings welcome simplifications, as we shall see. Our goal here is to establish a lower bound on the MSE when $\mathbf{x}$ is drawn from this prior. We do this in two steps. First, we look at recovering the support of $\mathbf{x}$, which is done via a reduction to multiple testing. Second, we show that a lower bound on the error for support recovery implies a lower bound on the MSE, leading to Theorem~\ref{teo:main-minmax}. 
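Before turning to the formal argument, a quick simulation conveys how error-prone support recovery is at amplitude $\mu = \sqrt{n/m}$. The scheme below is one particular nonadaptive choice (approximately unit-norm Gaussian rows and a matched-filter estimate thresholded at $\mu/2$); Theorem~\ref{thm:bernoulli-support} guarantees that {\em every} scheme, adaptive or not, incurs at least $k/2$ errors on average at this amplitude, and this naive scheme does far worse. Parameters are illustrative.

```python
import math
import random

random.seed(4)

n, m, k = 128, 32, 4
mu = (n / m) ** 0.5            # amplitude at which support recovery provably fails

def sample_prior():
    return [mu if random.random() < k / n else 0.0 for _ in range(n)]

trials = 100
total_err = 0
scale = 1.0 / math.sqrt(n)     # i.i.d. N(0, 1/n) entries: rows have norm ~ 1
for _ in range(trials):
    x = sample_prior()
    xhat = [0.0] * n
    for _ in range(m):
        a = [random.gauss(0.0, scale) for _ in range(n)]
        y = sum(aj * xj for aj, xj in zip(a, x)) + random.gauss(0.0, 1.0)
        for j in range(n):
            xhat[j] += (n / m) * a[j] * y          # matched-filter estimate
    S = {j for j in range(n) if x[j] != 0}
    Shat = {j for j in range(n) if xhat[j] >= mu / 2}
    total_err += len(S ^ Shat)                     # Hamming distance |S_hat Δ S|
avg_err = total_err / trials

print(avg_err, k / 2)          # far above the k/2 lower bound for this naive scheme
```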
\subsubsection{Support recovery in Hamming distance} We would like to understand how well we can estimate the support $S = \{j : x_j \neq 0\}$ of $\mathbf{x}$ from the data \eqref{measure2}, and shall measure performance by means of the expected Hamming distance. Here, the error of a procedure $\widehat{S}$ for estimating the support $S$ is defined as \[ \operatorname{\mathbb{E}} |\widehat{S} \Delta S| = \sum_{j=1}^n \P( \widehat{S}_j \neq S_j ) \] where $\Delta$ denotes the symmetric difference, $S_j = 1$ if $j \in S$ and equals zero otherwise, and similarly for $\widehat{S}_j$. As we can see, this reduces our problem to a sequence of $n$ independent hypothesis tests. We will obtain a lower bound on the number of errors among these tests by exploiting the following lemma. \begin{lem} \label{lem:Bayes} Consider the testing problem of deciding between $H_0 : \mathbf{x} \sim \P_0$ and $H_1 : \mathbf{x} \sim \P_1$, where $H_0$ and $H_1$ occur with prior probabilities $\pi_0$ and $\pi_1$ respectively. Under the 0-1 loss, the Bayes risk $B$ obeys $$ B \ge \min(\pi_0, \pi_1) \left(1 - \| \P_1 - \P_0 \|_{\mathrm{TV}} \right). $$ \end{lem} \begin{proof} Assume without loss of generality that $\pi_1 \le \pi_0$. The test with minimum risk is the Bayes test rejecting $H_0$ if and only if $$ \Lambda = \frac{\pi_1 \, \mathbb{P}_{1}(\mathbf{x})}{\pi_0 \, \P_{0}(\mathbf{x})} > 1; $$ that is, if the adjusted likelihood ratio exceeds one; see~\cite[Pbm. 3.10]{TSH}. A simple calculation shows that the Bayes risk obeys $$ B = \pi_0 \operatorname{\mathbb{E}}_0 \left( \min(1,\Lambda) \right), $$ where $\operatorname{\mathbb{E}}_0$ denotes expectation under $\P_{0}$. Using the fact that $\operatorname{\mathbb{E}}_0 \Lambda = \pi_1/\pi_0$ together with $$ \min(1,\Lambda) = \frac{1+\Lambda}{2} - \frac{|\Lambda-1|}{2}, $$ we obtain \begin{equation} \label{eq:Bayes1} B = \frac{1}{2} - \frac{\pi_0}{2} \operatorname{\mathbb{E}}_0 | \Lambda-1 |. 
\end{equation} Finally, \begin{align*} \pi_0 \operatorname{\mathbb{E}}_0 |\Lambda-1| = \int | \pi_1 \text{d}\P_1 - \pi_0 \text{d}\P_0 | & \le \pi_1 \int |\text{d}\P_1 - \text{d}\P_0| + \pi_0 - \pi_1 \\ & = 2\pi_1 \| \P_1 - \P_0 \|_{\mathrm{TV}} + \pi_0 - \pi_1, \end{align*} which when combined with~\eqref{eq:Bayes1} establishes the lemma. \end{proof} \begin{thm} \label{thm:bernoulli-support} Suppose that $\mathbf{x}$ is sampled according to the Bernoulli prior with $k \le n/2$. Then any estimate $\widehat{S}$ obeys \begin{equation} \label{eq:supportH} \operatorname{\mathbb{E}} |\widehat{S} \Delta S| \geq k \Bigl(1 - \frac{\mu}{2} \sqrt{\frac{m}{n}}\Bigr). \end{equation} \end{thm} Hence, if the amplitude of the signal is below $\sqrt{n/m}$, we expect a large number of errors; indeed, if $\mu = \sqrt{n/m}$, then $\operatorname{\mathbb{E}} |\widehat{S} \Delta S| \ge k/2$. \begin{proof}\footnote{The main ideas of our proof are similar to those in that of Assouad's Lemma, see \cite{MR777600,MR2724359} for instance. Note, however, that our approach yields a sharper constant.} Let $\pi_1 = k/n$ and $\pi_0 = 1 - \pi_1$. For any $j$, set $\mathbb{P}_{0,j} = \P(\cdot | x_j = 0)$ and $\mathbb{P}_{1,j} = \P(\cdot | x_j \neq 0)$. Let $B_j$ denote the Bayes risk of the decision problem $H_{0,j} : x_j = 0$ versus $H_{1,j} : x_j = \mu$. From Lemma~\ref{lem:Bayes} we have that $$ \operatorname{\mathbb{E}} |\widehat{S} \Delta S| = \sum_{j=1}^n \P(\widehat{S}_j \neq S_j) \ge \sum_{j=1}^n B_j \ge \pi_1 \sum_{j=1}^n \Bigl(1-\|\mathbb{P}_{1,j} - \mathbb{P}_{0,j}\|_{\text{TV}}\Bigr). $$ Applying the Cauchy--Schwarz inequality, we obtain \begin{equation} \label{eq:intermediate_a} \operatorname{\mathbb{E}} |\widehat{S} \Delta S| \ge k \Bigl(1- \frac{1}{\sqrt{n}} \sqrt{\sum_{j=1}^n \|\mathbb{P}_{1,j} - \mathbb{P}_{0,j}\|_{\text{TV}}^2}\Bigr). 
\end{equation} The theorem is a consequence of~\eqref{eq:intermediate_a} combined with \begin{equation} \label{eq:intermediate} \sum_{j=1}^n \|\mathbb{P}_{1,j} - \mathbb{P}_{0,j}\|_{\text{TV}}^2 \le \frac{\mu^2}{4} \, m. \end{equation} To establish~\eqref{eq:intermediate}, we apply Pinsker's inequality twice to obtain \begin{equation} \label{XLi} \|\mathbb{P}_{1,j} - \mathbb{P}_{0,j}\|^2_{\text{TV}} \le \frac{\pi_0}{2} K(\mathbb{P}_{0,j},\mathbb{P}_{1,j}) + \frac{\pi_1}{2} K(\mathbb{P}_{1,j},\mathbb{P}_{0,j}) \end{equation} so that it remains to find an upper bound on the KL divergence between $\mathbb{P}_{0,j}$ and $\mathbb{P}_{1,j}$. Write $\mathbb{P}_0 = \mathbb{P}_{0,j}$ for short and likewise for $\mathbb{P}_{1,j}$. Then \[ \P_0(y_{[m]}) = \sum_{\mathbf{x}'} \P(\mathbf{x}') \P(y_{[m]} | x_j = 0, \mathbf{x}') := \sum_{\mathbf{x}'} \P(\mathbf{x}') \mathbb{P}_{0,\mathbf{x}'}, \] where $\mathbf{x}' = (x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n)$ and $\mathbb{P}_{0,\mathbf{x}'}$ is the conditional probability distribution of $y_{[m]}$ given $\mathbf{x}'$ and $x_j = 0$; $\P_1(y_{[m]})$ is defined similarly. The convexity of the KL divergence \eqref{eq:KLconvex} gives \begin{equation} \label{KL} K(\mathbb{P}_0,\mathbb{P}_1) \le \sum_{\mathbf{x}'} \P(\mathbf{x}') K(\mathbb{P}_{0,\mathbf{x}'}, \mathbb{P}_{1,\mathbf{x}'}). \end{equation} We now calculate this divergence. In order to do this, observe that we have $y_i = \<\mathbf{a}_i,\mathbf{x}\> + z_i = c_i + z_i$ under $\mathbb{P}_{0,\mathbf{x}'}$ while $y_i = a_{i,j} \mu + c_i + z_i$ under $\mathbb{P}_{1,\mathbf{x}'}$. 
This yields \begin{align*} K(\mathbb{P}_{0,\mathbf{x}'}, \mathbb{P}_{1,\mathbf{x}'}) & = \operatorname{\mathbb{E}}_{0,\mathbf{x}'} \log \frac{\mathbb{P}_{0,\mathbf{x}'}}{\mathbb{P}_{1,\mathbf{x}'}}\\ &= \sum_{i=1}^m \operatorname{\mathbb{E}}_{0,\mathbf{x}'} \left(\frac12 (y_i - \mu a_{i,j} - c_i )^2 - \frac12 (y_i - c_i )^2\right) \\ &= \sum_{i=1}^m \operatorname{\mathbb{E}}_{0,\mathbf{x}'} \left(- z_i \mu a_{i,j} + (\mu a_{i,j})^2/2\right)\\ &= \frac{\mu^2}2 \sum_{i=1}^m \operatorname{\mathbb{E}}_{0,\mathbf{x}'} (a_{i,j}^2). \end{align*} The first equality holds by definition, the second follows from \eqref{Px}, the third from $y_i = c_i + z_i $ under $\mathbb{P}_{0,\mathbf{x}'}$ and the last holds since $z_i$ is independent of $a_{i,j}$ and has zero mean. Using \eqref{KL}, we obtain \[ K(\mathbb{P}_{0},\mathbb{P}_{1}) \le \frac{\mu^2}2 \sum_{i=1}^m \operatorname{\mathbb{E}}[ a_{i,j}^2 | x_j = 0]. \] Similarly, \[ K(\mathbb{P}_{1},\mathbb{P}_{0}) \le \frac{\mu^2}2 \sum_{i=1}^m \operatorname{\mathbb{E}}[ a_{i,j}^2 | x_j = \mu] \] and, therefore, \eqref{XLi} shows that \[ \|\mathbb{P}_{1,j} - \mathbb{P}_{0,j}\|^2_{\text{TV}} \le \frac{\mu^2}{4} \Bigl(\sum_{i=1}^m \pi_0 \operatorname{\mathbb{E}}[ a_{i,j}^2 | x_j = 0] + \pi_1 \operatorname{\mathbb{E}}[ a_{i,j}^2 | x_j = \mu] \Bigr) = \frac{\mu^2}{4} \sum_{i=1}^m \operatorname{\mathbb{E}}[ a_{i,j}^2]. \] For any particular pair $(i,j)$ with $i>1$, we can say very little about $\operatorname{\mathbb{E}}[ a_{i,j}^2]$ since it can depend on all the previous measurements in a potentially very complicated manner. However, by summing this inequality over $j$ we can obtain~\eqref{eq:intermediate} by using the only constraint we have imposed on the $\mathbf{a}_i$, namely, $\|\mathbf{a}_i\|_2 \le 1$, so that $\sum_{i,j} a^2_{i,j} \le m$. This establishes the theorem. 
\end{proof} \subsubsection{Estimation in mean-squared error} \label{sec:estimation} It is now straightforward to obtain a lower bound on the MSE from Theorem \ref{thm:bernoulli-support}. \begin{proof}[Proof of Theorem~\ref{teo:main-minmax}] Let $S$ be the support of $\mathbf{x}$ and set $\widehat{S} := \{j: |\widehat{x}_j| \geq \mu/2\}$. We have \[ \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 = \sum_{j \in S} (\widehat{x}_j - x_j)^2 + \sum_{j \notin S} \widehat{x}_j^2 \geq \frac{\mu^2}{4} |S \setminus \widehat{S}| + \frac{\mu^2}{4} |\widehat{S} \setminus S| = \frac{\mu^2}{4} |\widehat{S} \Delta S| \] and, therefore, \[ \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \geq \frac{\mu^2}{4} \operatorname{\mathbb{E}} |\widehat{S} \Delta S| \geq \frac{\mu^2}{4} k \Bigl(1 - \frac{\mu}{2} \sqrt{\frac{m}{n}}\Bigr), \] where the last inequality is from \thmref{bernoulli-support}. We then plug in $\mu = \frac{4}{3}\, \sqrt{\frac{n}{m}}$ and simplify to conclude. \end{proof} \subsection{The conditional Bernoulli prior and minimax bound} \label{sec:minmax} To establish Theorem~\ref{thm:second-minmax}, we choose as distribution on $\mathbf{x}$ the prior $\nu_{n,k}$ defined as follows: we start with the Bernoulli prior $\pi_{n, \alpha k}$ \eqref{bernoulli} with mean $\alpha k$ (instead of $k$) for some fixed $\alpha \in (0,1)$, and then condition that distribution to realizations with at most $k$ nonzero entries. \begin{prp} \label{prp:support2} Suppose that $\mathbf{x}$ is sampled according to $\nu_{n,k}$ with $k\le n/2$, then any estimate $\widehat{S}$ obeys \begin{equation} \label{eq:supportH2} \operatorname{\mathbb{E}} |\widehat{S} \Delta S| \geq \alpha k \Bigl(1-\gamma_{n,k}(\alpha) - \frac{\mu}2 \sqrt{\frac{m}{n}}\Bigr), \end{equation} where \[ \gamma_{n,k}(\alpha) := \frac1\alpha \sum_{j = k+1}^n (2 + (j-1)/k) \P(\operatorname{Bin}(n, \alpha k/n) = j). 
\] \end{prp} \begin{proof} We begin by arguing that we can restrict attention to estimates $\widehat{S}$ with cardinality at most $2k-1$. To see why, consider an arbitrary estimate $\widehat{S}$ and set \[ \widehat{S}_k = \begin{cases} \widehat{S}, & |\widehat{S}| \le 2 k -1, \\ \emptyset, & |\widehat{S}| \ge 2 k. \end{cases} \] Now if $|\widehat{S}| \ge 2 k$, then for any $S$ with $|S| \le k$, we have \[ |\widehat{S} \Delta S| \ge |\widehat{S} \setminus S| \ge |\widehat{S}| - |S| \ge k \ge |\emptyset \Delta S|. \] Since $|S| \le k$ under $\nu_{n,k}$, it follows that $\operatorname{\mathbb{E}} |\widehat{S} \Delta S| \ge \operatorname{\mathbb{E}} |\widehat{S}_k \Delta S|$, which proves the claim. From now on, we assume that $|\widehat{S}| < 2k$. Set $\pi_{n, \alpha k}(k) = \P_{\mathbf{x} \sim \pi_{n,\alpha k}}(|S| \le k)$ and observe the identity \[ \operatorname{\mathbb{E}}_{\mathbf{x} \sim \nu_{n,k}} |\widehat{S} \Delta S| = \operatorname{\mathbb{E}}_{\mathbf{x} \sim \pi_{n,\alpha k}} \left[ |\widehat{S} \Delta S| \, \vert \, |S| \le k \right] = \frac{1}{\pi_{n, \alpha k}(k)} \, \operatorname{\mathbb{E}}_{\mathbf{x} \sim \pi_{n,\alpha k}} \left[ |\widehat{S} \Delta S| \, {\bf 1}_{\{|S| \le k\}} \right]. \] To conclude, \thmref{bernoulli-support} together with $|\widehat{S} \Delta S| \le |\widehat{S}| + |S| \le 2k-1 + |S|$ give \begin{align*} \pi_{n, \alpha k}(k) \, \operatorname{\mathbb{E}}_{\mathbf{x} \sim \nu_{n,k}} |\widehat{S} \Delta S| & = \operatorname{\mathbb{E}}_{\mathbf{x} \sim \pi_{n,\alpha k}} |\widehat{S} \Delta S| - \operatorname{\mathbb{E}}_{\mathbf{x} \sim \pi_{n,\alpha k}} |\widehat{S} \Delta S| \ {\bf 1}_{\{|S| \ge k+1\}} \\ & \ge \alpha k \left(1 - \frac\mu2 \sqrt{\frac{m}n}\right) - \sum_{j = k+1}^n (2k -1 + j) \P(\operatorname{Bin}(n, \alpha k/n) = j). \end{align*} \end{proof} We do as in \secref{estimation} to conclude the proof of Theorem~\ref{thm:second-minmax}. Let $\gamma$ be shorthand for $\gamma_{n,k}(\alpha)$. 
We find that the optimal choice is $\mu = \frac43 (1 - \gamma) \sqrt{n/m}$, yielding the lower bound \begin{equation} \label{MSE1} \operatorname{\mathbb{E}} \|\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \ge \alpha (1 - \gamma) \frac{4 k}{27} \, \frac{n}{m}. \end{equation} To obtain a bound on $\gamma$, note that we can write \[ \alpha k \, \gamma_{n,k}(\alpha) = 3k \P(\operatorname{Bin}(n, \alpha k/n) \ge k+1) + \sum_{j \ge k+2} \P(\operatorname{Bin}(n, \alpha k/n) \ge j). \] Bennett's inequality, applied to the binomial distribution, gives \begin{equation} \label{bennett} \P(\operatorname{Bin}(m, p) \ge j) \le \exp\big[- j \log(j/(mp)) +j - mp \big]. \end{equation} Therefore, if $j \ge k$, we have \[ \P(\operatorname{Bin}(n, \alpha k/n) \ge j) \le \exp\big[- j \log(j/(\alpha k)) +j - \alpha k \big] \le \exp[- \beta j], \quad \beta := \alpha - 1 - \log \alpha, \] where the last inequality follows from the fact that the exponent is increasing in $k$ over the range $(0, j/\alpha)$. Note that $\beta > 0$ for any $\alpha < 1$. Applying this inequality, we get \begin{equation} \label{gamma} \alpha k \, \gamma_{n,k}(\alpha) \le 3 k e^{- (k+1) \beta} + \frac{e^{- (k+2) \beta}}{1 - e^{-\beta}} \le (3 k+1) e^{- (k+1) \beta}, \end{equation} when $\beta \ge \log 2$. This bound yields $\gamma < 1$ for all $k \ge 1$ when $\alpha \le 0.03$, in which case \eqref{eq:supportH2} and \eqref{MSE1} become meaningful. This bound is quite conservative, however. Using the definition of $\gamma$, we can numerically show that by choosing $\alpha$ appropriately we can obtain $\alpha (1 - \gamma) \ge 2 e^{-1/2}-1 \ge 0.21$ for any $n \ge 2$ and all $k \le n/2$. Thus we can always write $\alpha (1 - \gamma) \frac4{27} \ge \frac1{33}$. 
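The quantity $\gamma_{n,k}(\alpha)$ is straightforward to evaluate numerically from its definition, which is how the numerical claims in this paragraph can be checked. A sketch (standard-library Python, a numerically stable pmf recursion, illustrative parameters) that also verifies the bound in \eqref{gamma} for a value of $\alpha$ with $\beta \ge \log 2$:

```python
import math

def binom_pmf(n, p):
    """Binomial(n, p) pmf via the stable recursion pmf[j+1] = pmf[j]*((n-j)/(j+1))*(p/(1-p))."""
    pmf = [0.0] * (n + 1)
    pmf[0] = (1.0 - p) ** n
    for j in range(n):
        pmf[j + 1] = pmf[j] * (n - j) / (j + 1) * p / (1.0 - p)
    return pmf

def gamma_nk(n, k, alpha):
    """gamma_{n,k}(alpha) = (1/alpha) sum_{j=k+1}^n (2 + (j-1)/k) P(Bin(n, alpha*k/n) = j)."""
    pmf = binom_pmf(n, alpha * k / n)
    return sum((2.0 + (j - 1) / k) * pmf[j] for j in range(k + 1, n + 1)) / alpha

n, k, alpha = 1000, 10, 0.2            # illustrative values; alpha chosen so that beta >= log 2
beta = alpha - 1.0 - math.log(alpha)
g = gamma_nk(n, k, alpha)
# The display (gamma) bounds alpha*k*gamma by (3k+1)exp(-(k+1)beta) when beta >= log 2:
bound = (3 * k + 1) * math.exp(-(k + 1) * beta) / (alpha * k)
print(g, bound, beta)
```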
While setting $C_k = \frac1{33}$ ensures that Theorem~\ref{thm:second-minmax} holds for any possible choice of $k$, it is somewhat pessimistic in the sense that it is entirely dictated by the special case of $k=1$ (which could be handled more efficiently by alternative means~\cite{adaptiveISIT}). For larger values of $k$, it is possible to obtain an improved constant. For example, when $k \ge 10$ numerical calculations show that we can take $C_k = \frac1{15}$. Moreover, in view of the first inequality in~\eqref{gamma}, and the fact that $\beta > 0$ for all $\alpha < 1$, we have $\gamma_{n,k}(\alpha) \to 0$ as $k \to \infty$ and $\alpha$ is held fixed. Thus, for $\alpha$ sufficiently close to $1$ we will have that $\alpha(1-\gamma) \frac4{27} \ge \frac17$ for $k$ sufficiently large. We have also verified this numerically. Hence, the numerical constant $\frac17$ of Theorem~\ref{teo:main-minmax} is also valid in Theorem~\ref{thm:second-minmax} provided $k$ is sufficiently large. \section{Numerical Experiments} \label{sec:numerics} In order to briefly illustrate the implications of the lower bounds in~\secref{main} and the potential limitations and benefits of adaptivity in general, we include a few simple numerical experiments. To simplify our discussion, we limit ourselves to existing adaptive procedures that aim at consistent support recovery: the adaptive procedure from~\cite{4518814} and the recursive bisection algorithm of~\cite{iwen}. We emphasize that in the case of a generic $k$-sparse signal, there are many possibilities for adaptively estimating the support of the signal. For example, the approach in~\cite{haupt-compressive} iteratively rules out indices and could, in principle, proceed until only $k$ candidate indices remain. In contrast, the approaches in~\cite{4518814} and~\cite{iwen} are built upon algorithms for estimating the support of $1$-sparse signals. 
An algorithm for a $1$-sparse signal could then be run $k$ times to estimate a $k$-sparse signal as in~\cite{4518814}, or used in conjunction with a hashing scheme as in~\cite{iwen}. Since our goal is not to provide a thorough evaluation of the merits of all the different possibilities, but merely to illustrate the general limits of adaptivity, we simplify our discussion and focus exclusively on the simple case of one-sparse signals, i.e., where $k=1$. Specifically, in our experiments we will consider the uniform prior on the set of vectors with a single nonzero entry equal to $\mu > 0$ as in \secref{main}. Since we are focusing only on the case of $k=1$, the algorithms in~\cite{4518814} and~\cite{iwen} are extremely simple and are shown in \algref{1} and \algref{2} respectively. Note that in \algref{1} the step of updating the posterior distribution $\mathbf{p}$ consists of an iterative update rule given in~\cite{4518814} and does not require any a priori knowledge of the signal $\mathbf{x}$ or $\mu$. In \algref{2}, we simplify the recursive bisection algorithm of~\cite{iwen} using the knowledge that $\mu > 0$, which allows us to eliminate the second stage of the algorithm aimed at detecting negative coefficients. Note that this algorithm proceeds through $s_{\mathrm{max}} = \log_2 n$ stages and we must allocate a certain number of measurements to each stage. In our experiments we set $m_s = \lceil \beta 2^{-s} \rceil$, where $\beta$ is selected to ensure that $\sum_{s=1}^{\log_2 n} m_s \le m$. \begin{algorithm}[t] \caption{Adaptive algorithm from~\cite{4518814}} \label{alg:1} \begin{algorithmic} \STATE \textbf{input:} $m \times n$ random matrix $\mathbf{B}$ with i.i.d.\ Rademacher ($\pm 1$ with equal probability) entries. \STATE \textbf{initialize:} $\mathbf{p} = \frac{1}{n}(1, \ldots, 1)^T$. \FOR{$i=1$ to $i = m$} \STATE Compute $\mathbf{a}_i = (b_{i,1} \sqrt{p_1}, \ldots, b_{i,n} \sqrt{p_n})^T$. \STATE Observe $y_i = \inner{\mathbf{a}_i}{\mathbf{x}} + z_i$. 
\STATE Update posterior distribution $\mathbf{p}$ of $\mathbf{x}$ given $(\mathbf{a}_1,y_1), \ldots, (\mathbf{a}_i,y_i)$ using the rule in~\cite{4518814}. \ENDFOR \STATE \textbf{output:} Estimate for $\mathrm{support}(\mathbf{x})$ is the index where $\mathbf{p}$ attains its maximum value. \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Recursive bisection algorithm of~\cite{iwen}} \label{alg:2} \begin{algorithmic} \STATE \textbf{input:} $m_1$, \ldots, $m_{s_{\mathrm{max}}}$. \STATE \textbf{initialize:} $J_1^{(1)} = \{1, \ldots, \frac{n}{2}\}$, $J_2^{(1)} = \{ \frac{n}{2}+1, \ldots, n\}$. \FOR{$s=1$ to $s = s_{\mathrm{max}}$} \STATE Construct the $m_s \times n$ matrix $\mathbf{A}^{(s)}$ with rows $|J_1^{(s)}|^{-\frac12}{\bf 1}_{J_1^{(s)}} - |J_2^{(s)}|^{-\frac12}{\bf 1}_{J_2^{(s)}}$. \STATE Observe $\mathbf{y}^{(s)} = \mathbf{A}^{(s)} \mathbf{x} + \mathbf{z}^{(s)}$. \STATE Compute $w^{(s)} = \sum_{i=1}^{m_s} y_i^{(s)}$. \STATE Subdivide: Update $J_1^{(s+1)}$ and $J_2^{(s+1)}$ by partitioning $J_1^{(s)}$ if $w^{(s)} \ge 0$ or $J_2^{(s)}$ if $w^{(s)} < 0$. \ENDFOR \STATE \textbf{output:} Estimate for $\mathrm{support}(\mathbf{x})$ is $J_1^{(s_{\mathrm{max}})}$ if $w^{(s_{\mathrm{max}})} \ge 0$, $J_2^{(s_{\mathrm{max}})}$ if $w^{(s_{\mathrm{max}})} < 0$. \end{algorithmic} \end{algorithm} \subsection{Evolution of the posterior} \begin{figure}[t] \centering \begin{tabular}{ccc} \hspace{-3mm} \includegraphics[width=.33\linewidth]{matlab/posterior4} & \hspace{-3mm} \includegraphics[width=.33\linewidth]{matlab/posterior3} & \hspace{-3mm} \includegraphics[width=.33\linewidth]{matlab/posterior1} \\ \hspace{-2mm} {\small \sl (a)} & \hspace{-2mm} {\small \sl (b)} & \hspace{-2mm} {\small \sl (c)} \end{tabular} \caption{\small \sl Behavior of the posterior distribution as a function of $\mu$ for several values of $m$. (a) shows the results for nonadaptive measurements. (b) shows the results for~\algref{1}. (c) shows the results for~\algref{2}. 
We see that~\algref{2} is able to detect somewhat weaker signals than~\algref{1}. However, for both cases we observe that once $\mu$ exceeds a certain threshold proportional to $\sqrt{n/m}$, the ratio $\lambda$ of $p_{j^*}$ to the second largest posterior probability grows exponentially fast, but that this does not differ substantially from the behavior observed in (a) when using nonadaptive measurements. \label{fig:figure1}} \end{figure} We begin by showing the results of a simple simulation that illustrates the behavior of the posterior distribution of $\mathbf{x}$ as a function of $\mu$ for both adaptive schemes. Specifically, we assume that $m$ is fixed and collect $m$ measurements using each approach. Given the measurements $\mathbf{y}$, we then compute the posterior distribution $\mathbf{p}$ using the true prior used to generate the signal, which can be computed using the fact that \begin{equation} \label{eq:postCompute} p_j \propto \exp \left( - \frac{1}{2\sigma^2} \| \mathbf{y} - \mu \mathbf{A} \mathbf{e}_j \|_2^2 \right), \end{equation} where $\sigma^2$ is the noise variance and $\mathbf{e}_j$ denotes the $j$th element of the standard basis. What we expect is that once $\mu$ exceeds a certain threshold (which depends on $m$), the posterior will become highly concentrated on the true support of $\mathbf{x}$. To quantify this, we consider the case where $j^*$ denotes the true location of the nonzero element of $\mathbf{x}$ and define \[ \lambda = \frac{p_{j^*}}{ \max_{j \neq j^*} p_j }. \] Note that when $\lambda \le 1$, we cannot reliably detect the nonzero, but when $\lambda \gg 1$ we can. In~\figref{figure1} we show the results for a few representative values of $m$ (a) when using nonadaptive measurements, i.e., a (normalized) i.i.d.\ Rademacher random matrix $\mathbf{A}$, compared to the results of (b) \algref{1}, and (c) \algref{2}. 
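As an aside, for $k=1$ the recursive bisection of \algref{2} admits a very compact rendering. The sketch below is our own illustrative code, not the reference implementation: each stage measures the normalized difference between the two halves of the current candidate set $m_s$ times and keeps the half indicated by the sign of the summed responses $w$.

```python
import math
import random

def bisection_support(x, m_per_stage, sigma=1.0, rng=random):
    """Recursive bisection for a 1-sparse x with a positive nonzero entry.
    Returns the surviving candidate set (a singleton after log2(n) stages)."""
    J = list(range(len(x)))
    for m_s in m_per_stage:
        if len(J) == 1:
            break
        half = len(J) // 2
        J1, J2 = J[:half], J[half:]
        a1 = 1.0 / math.sqrt(len(J1))
        a2 = 1.0 / math.sqrt(len(J2))
        s1 = sum(x[j] for j in J1)
        s2 = sum(x[j] for j in J2)
        # m_s noisy measurements of <a, x>, a = a1*1_{J1} - a2*1_{J2}, summed
        w = sum(a1 * s1 - a2 * s2 + rng.gauss(0.0, sigma) for _ in range(m_s))
        J = J1 if w >= 0 else J2
    return J
```

With a strong spike, a handful of measurements per stage suffices to locate the nonzero entry.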
For each value of $m$ and for each value of $\mu$, we acquire $m$ measurements using each approach and compute the posterior $\mathbf{p}$ according to~\eqref{eq:postCompute}. We then compute the value of $\lambda$. We repeat this for 10,000 iterations and plot the median value of $\lambda$ for each value of $\mu$ for all three approaches. In our experiments we set $n = 512$ and $\sigma^2 = 1$. We truncate the vertical axis at $10^4$ to ensure that all curves are comparable. We observe that in each case, once $\mu$ exceeds a certain threshold proportional to $\sqrt{n/m}$, the ratio $\lambda$ of $p_{j^*}$ to the second largest posterior probability grows exponentially fast. As expected, this occurs for both the nonadaptive and adaptive strategies, with no substantial difference in terms of how large $\mu$ must be before support recovery is assured (although~\algref{2} seems to improve upon the nonadaptive strategy by a small constant). \subsection{MSE performance} \begin{figure}[t] \centering \includegraphics[width=.5\linewidth]{matlab/oneSparse} \caption{\small \sl The performance of \algref{1} and \algref{2} in the context of a two-stage procedure that first uses $m_d = \frac{m}{2}$ adaptive measurements to detect the location of the nonzero and then uses $m_e = \frac{m}{2}$ measurements to directly estimate the value of the identified coefficient. We show the resulting MSE as a function of the amplitude $\mu$ of the nonzero entry, and compare this to a nonadaptive procedure which uses a (normalized) i.i.d.\ Rademacher matrix followed by OMP. In the worst case, the MSE of the adaptive algorithms is comparable to the MSE obtained by the nonadaptive algorithm and exceeds the lower bound in Theorem~\ref{thm:second-minmax} by only a small constant factor. 
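The posterior computation of \eqref{eq:postCompute} and the statistic $\lambda$ are straightforward to reproduce; the sketch below (our own illustrative code) works in log space for numerical stability.

```python
import math
import random

def spike_posterior(A, y, mu, sigma2):
    """p_j proportional to exp(-||y - mu * A e_j||^2 / (2 sigma^2)),
    computed in log space and normalized to sum to one."""
    m, n = len(A), len(A[0])
    logp = [-sum((y[i] - mu * A[i][j]) ** 2 for i in range(m)) / (2 * sigma2)
            for j in range(n)]
    mx = max(logp)                       # subtract max before exponentiating
    p = [math.exp(l - mx) for l in logp]
    s = sum(p)
    return [v / s for v in p]

def lam(p, jstar):
    """lambda = p_{j*} / max_{j != j*} p_j."""
    return p[jstar] / max(v for j, v in enumerate(p) if j != jstar)
```

With a normalized Rademacher matrix and a sufficiently large amplitude $\mu$, the posterior concentrates on the true index and $\lambda$ exceeds one, as in the experiments above.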
When $\mu$ begins to exceed this critical threshold, the MSE of the adaptive algorithms rapidly decays below that of the nonadaptive algorithm and approaches $\frac{1}{m_e n}$, which is the MSE one would obtain given $m_e$ measurements and a priori knowledge of the support. \label{fig:figure2}} \end{figure} We have just observed that for a given number of measurements $m$, there is a critical value of $\mu$ below which we cannot reliably detect the support. In this section we examine the impact of this phenomenon on the resulting MSE of a two-stage procedure that first uses $m_d = pm$ adaptive measurements to detect the location of the nonzero with either~\algref{1} or~\algref{2} and then reserves $m_e = (1-p)m$ measurements to directly estimate the value of the identified coefficient. It is not hard to show that if we correctly identify the location of the nonzero, then this will result in an MSE of $(m_e n)^{-1} = ((1-p)mn)^{-1}$. As a point of comparison, if an oracle provided us with the location of the nonzero a priori, we could devote all $m$ measurements to estimating its value, with the best possible MSE being $\frac{1}{m n}$. Thus, if we can correctly detect the nonzero, this procedure will perform within a constant factor of the oracle. We illustrate the performance of~\algref{1} and~\algref{2} in terms of the resulting MSE as a function of the amplitude $\mu$ of the nonzero in~\figref{figure2}. In this experiment we set $n = 512$ and $m = 128$ with $p = \frac12$ so that $m_d = 64$ and $m_e = 64$. We then compute the average MSE over 100,000 iterations for each value of $\mu$ and for both algorithms. We compare this to a nonadaptive procedure which uses a (normalized) i.i.d.\ Rademacher matrix followed by orthogonal matching pursuit (OMP). 
Note that in the worst case the MSE of the adaptive algorithms is comparable to the MSE obtained by the nonadaptive algorithm and exceeds the lower bound in Theorem~\ref{thm:second-minmax} by only a small constant factor. However, when $\mu$ begins to exceed a critical threshold, the MSE rapidly decays and approaches the optimal value of $\frac{1}{m_e n}$. Note that when $\mu$ is large we can take $m_e \rightarrow m$ and hence can actually get arbitrarily close to $\frac{1}{m n}$ in the asymptotic regime. \section{Discussion} \label{sec:discussion} The contribution of this paper is to show that if one has the freedom to choose any adaptive sensing strategy and any estimation procedure, no matter how complicated or computationally intractable, one would not be able to universally improve over a simple nonadaptive strategy that projects the signal onto a lower-dimensional space and performs recovery via $\ell_1$ minimization. This ``negative'' result should not conceal the fact that adaptivity may help tremendously if the SNR is sufficiently large, as illustrated in Section~\ref{sec:numerics}. Hence, we regard the design and analysis of effective adaptive schemes as an important subject of future research. At the methodological level, it seems important to develop adaptive strategies and algorithms for support estimation that are as accurate and as robust as possible. Further, a transition towards practical applications would need to involve engineering hardware that can effectively implement this sort of feedback, an issue which poses all kinds of very concrete challenges. Finally, at the theoretical level, it would be of interest to analyze the phase transition phenomenon we expect to occur in simple Bayesian signal models. For instance, a central question would be how many measurements are required to transition from a nearly flat posterior to one mostly concentrated on the true support.
In closing, we note that after the submission of this paper, a variant of Algorithm~\ref{alg:2} was shown to recover the correct support of a $1$-sparse vector with high probability provided that the amplitude $\mu$ of the nonzero entry obeys $\mu \ge C \sqrt{n/m}$ for some positive numerical constant $C$ \cite{adaptiveISIT,malloynowak}. This implies that for $k=1$, the lower bound in Theorem~\ref{thm:second-minmax} is tight up to constant factors. Thus, adaptive methods have the potential to remove the $\log(n/k)$ factor required in the nonadaptive setting. \small \subsection*{Acknowledgements} The authors would like to thank the reviewers as well as Rui Castro, Jarvis Haupt, and Alexander Tsybakov for their insightful feedback. They are grateful to Xiaodong Li for suggesting an improvement in the proof of Theorem~\ref{thm:bernoulli-support} and to Adam Bull for pointing out a technical error. E.~A-C.~is partially supported by ONR grant N00014-09-1-0258. E.~C.~is partially supported by NSF via grant CCF-0963835 and the 2006 Waterman Award, by AFOSR under grant FA9550-09-1-0643 and by ONR under grant N00014-09-1-0258. M.~D.~is supported by NSF grant DMS-1004718. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} \gls{slam} is a popular field in robotics, and after roughly three decades of research, effective solutions are available. As many sectors, such as autonomous driving, augmented reality and space exploration, rely on SLAM, it still receives much attention in academia and industry. The advent of robust machine learning systems allowed the community to enhance purely geometric maps with semantic information or replace hard-coded heuristics with data-driven ones. Within the computer vision community, we have seen photometric (or direct) approaches used to tackle the \gls{slam} (or Structure from Motion) problem. The direct techniques address pairwise registration by minimizing the pixel-wise error between image pairs. By not relying on specific features and having the potential of operating at subpixel resolution on the entire image, direct approaches do not require explicit data association and offer the possibility to boost registration accuracy \cite{schops2019bad}. Whereas these methods have been successfully used on monocular, stereo, or RGB-D~images, their use on 3D LiDAR{} data is less prominent---probably due to the comparably limited vertical resolution in relation to cameras. Della Corte \emph{et al.} \cite{della2018general} presented a multi-cue photometric registration methodology for RGB-D~cameras. It is a system that extends photometric approaches to different projective models and enhances the robustness by considering additional cues such as normals and depth or range in the cost function. Recently released 3D LiDAR{} sensors offer up to 128 beams, making direct approaches also more attractive for LiDAR~data. In addition, most LiDARs{} provide intensity or reflectivity information besides range data. This intensity can be used to sense a light reflectivity cue from the objects in the environment.
Being able to assemble an intensity-like image out of a LiDAR{} scan has unleashed the possibility of using well-known computer vision appearance-based methods for place recognition~\cite{di2021visual}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{our_stamperia.png} \vspace{0.1cm}\\ \includegraphics[width=0.99\linewidth]{cloister.png} \caption{Scenes reconstructed using our pipeline. Top: results of a self-recorded dataset using Intel Realsense 455 RGB-D. Bottom: using LiDAR~OS0-128 of the \textit{cloister} sequence from the Newer College Dataset \cite{zhang2021multicamera}.} \label{fig:lidar-data} \end{figure} The dominant paradigm for modern SLAM systems today is graph-based SLAM. A graph-based SLAM system works by constructing a SLAM graph where each node represents the sensor position or a landmark, while edges encode a relative displacement between nodes. Pose-graphs are a particular case in which only poses are stored in the graph. These local transformations stored in the edges are commonly inferred by comparing and matching sensor readings. This paper investigates the fusion of multi-cue direct registration with graph-based SLAM. The main contribution of this paper is a flexible, direct SLAM pipeline for 3D data. To the best of our knowledge, our approach is the only open-source SLAM system that can deal with RGB-D~and LiDAR~in a unified manner. We realized a revised version of MPR \cite{della2018general} for computing the incremental motion of the sensor operating on RGB-D~as well as LiDAR~data. We detect loop closures by an appearance-based algorithm that uses a \gls{bst} structure proposed by Schlegel \emph{et al.}~\cite{schlegel2018hbst}, populated with binary feature descriptors \cite{rublee2011orb}. All components that require the solution of an optimization problem rely on the same framework~\cite{grisetti2020least}, resulting in a compact implementation. 
It is designed for flexibility, hence it is not optimized for a specific sensor. Our system has been tested on both RGB-D{} and LiDAR{} data, using benchmark datasets. The accuracy is competitive with other sensor-specific SLAM systems, while it outperforms them if some assumptions about the structure of the environment are violated. An open-source C++ implementation complements this work\footnote{https://github.com/digiamm/md\_slam}. \figref{fig:lidar-data} illustrates example maps built with our system using RGB-D~(top) and LiDAR~(bottom). \section{Related Work} \label{sec:related} 3D \gls{slam} has been widely addressed by the computer vision and robotics community, and a large number of valid SLAM systems are available. Whereas many deserve mention, we can focus only on some seminal works due to limited space in this section. The available computational resources limited early approaches to operate offline \cite{nuchter2005heuristic} in fairly limited environments~\cite{williams2007real}. After the Kinect sensor became available about 15 years ago, we observed a revamped interest in RGB-D~SLAM. Newcombe \emph{et al.}~\cite{newcombe2011kinectfusion} were the first to leverage dense tracking on a Truncated Signed Distance Function (TSDF) stored in the GPU, while using a massively parallel implementation to render the surface of the local scene perceived by the sensor. Meanwhile, Segal~\emph{et al.}~\cite{segal2009generalized} proposed a robust variant of \gls{icp} relying on a point-to-plane metric. These initial methods addressed open-loop registration, tracking the pose of the sensor in a small neighbourhood. The advent of efficient optimization systems such as iSAM~\cite{kaess2008isam} and g2o~\cite{grisetti2011g2o} made it possible to build an effective full-fledged 3D SLAM system supporting loop closures and providing an online globally consistent estimate.
Novel efficient salient floating point~\cite{bay2006surf} and binary image descriptors~\cite{rublee2011orb}, paired with bag-of-words retrieval methods inspired from web search engines, led to impressive place recognition approaches~\cite{galvez2012bags}. These methods were then employed within visual \gls{slam} systems, ORB-SLAM by Mur-Artal~\emph{et al.}~\cite{mur2017orb} being one of the most popular. The pipeline fully relied on the stability of features (keypoints), minimizing the reprojection error of the reconstructed landmarks within the image. In contrast to these indirect methods, another line of research aimed at photometric error minimization. Keller \emph{et al.} \cite{keller2013pointfusion} use projective data matching in a dense model, relying on a surfel-based map for tracking. Others rely on keyframe-based techniques~\cite{kerl2013dense}. As happened for feature-based approaches, these works were assembled into full visual SLAM systems~\cite{engel2014lsd}. More recently, BAD-SLAM, a surfel-based direct \gls{ba} system that combines photometric and geometric error~\cite{schops2019bad} with feature-based loop closures, showed that, for well-calibrated data, dense \gls{ba} outperforms sparse \gls{ba}. The accuracy and elegance shown by photometric approaches led to further developments such as MPR~\cite{della2018general}, aiming at unifying both LiDAR~and RGB-D~devices into a unique registration method. In parallel, the community approached LiDAR-based odometry by seeking alternative representations for the dense 3D point clouds. These include 3D salient features \cite{zhang2014loam,serafin2016fast}, subsampled clouds \cite{velas2016collar} or \gls{ndt}~\cite{stoyanov2012fast}. Nowadays, LiDAR~Odometry and Mapping (LOAM) is perhaps one of the most popular methods for LiDAR~odometry \cite{zhang2014loam, zhang2015visual}.
It extracts distinct features corresponding to surfaces and corners, which are then used to determine point-to-plane and point-to-line distances to a voxel grid-based map representation. A ground-optimized version (LeGO-LOAM) was later proposed~\cite{shan2018lego}; it leverages the presence of a ground plane in its segmentation and optimization steps. In contrast to sparse methods, dense approaches suffer less in unstructured environments~\cite{behley2018efficient}. Compared to RGB-D~images, 3D LiDARs~offer less support for appearance-based place recognition. It is common for dense LiDAR~SLAM systems to attempt brute-force registration with all neighbouring clouds to seek loop closures. Thanks to the typically small drift, this strategy is mostly successful; however, computational costs grow significantly in large environments. LiDAR~loop closures have been addressed in different ways compared to RGB-D. Magnusson \emph{et al.} proposed an approach suitable for \gls{ndt} representations~\cite{magnusson2009automatic}. R\"{o}hling \emph{et al.}~\cite{rohling2015fast} investigated the use of histograms computed directly from the 3D point cloud to define a measure of the similarity of two scans. Novel types of descriptors have been investigated, exploiting additional data gathered by the LiDAR~sensor, i.e., the light emission of the beams \cite{cop2018delight, guo2019local}. However, despite being very attractive, these descriptors are time-consuming to extract and match, resulting in a slower system overall. Recent works address loop-closure detection in an RGB-D~fashion, relying on visual feature matching on the image obtained from the LiDAR~intensity channel \cite{di2021visual}. Building on top of prior work \cite{della2018general, di2021visual}, this paper presents a flexible and general SLAM approach. It is a direct method that handles RGB-D~and 3D LiDAR~data alike in a unified manner.
Our results show that it is competitive with other sensor-specific systems. \begin{figure}[t] \centering \includegraphics[width=0.97\columnwidth]{pipeline} \caption{Illustration of our system. Range $\ensuremath{\mathcal{I}}_t^\mathrm{\rho}$ and intensity $\ensuremath{\mathcal{I}}_t^\mathrm{i}$ images are taken as input by the system. An optimized trajectory within a map is produced as output. The same pipeline works for both RGB-D~and LiDAR.} \label{fig:parameval} \end{figure} \section{Basics} In this section, we outline some basic concepts used in multiple modules of our system. The incremental position tracking (\secref{sec:tracking}), loop closure validation (\secref{sec:closures}), and pose-graph solution (\secref{sec:pgo}) build upon \gls{ils}. All these modules are built on top of the same software framework~\cite{grisetti2020least}. Our system generalizes over range sensors by supporting different projective models. In the remainder, we briefly describe how an \gls{ils} solution can be found and recall projective models for RGB-D~and LiDARs. \subsection{Iterative Optimization} \label{sec:iterative-optimization} A generic Least Squares problem is captured by the following equation \begin{equation} \label{eq:gen_error} \mathbf{x}^* = \argmin_\mathbf{x} \sum_k \| \mathbf{e}_k(\mathbf{x}_k) \|_{\mathbf{\Omega}_k}^2. \end{equation} Here $\mathbf{e}_k(\mathbf{x}_k)$ is the error of the $k^\mathrm{th}$ measurement, which is only influenced by a subset $\mathbf{x}_k \in \mathbf{x}$ of the overall state vector $\mathbf{x}$, and $\| \cdot \|^2_{\mathbf{\Omega}}$ represents the squared Mahalanobis distance. \gls{ils} solves the above problem by refining a current solution $\mathbf{x}^*$. At each iteration, it constructs a local quadratic approximation of \eqref{eq:gen_error}: \begin{equation} \sum_k \| \mathbf{e}_k(\mathbf{x}^*_k + \mathbf{\Delta x}_k) \|_{\mathbf{\Omega}_k}^2 \simeq \mathbf{\Delta x}^T \mathbf{H} \mathbf{\Delta x} + 2 \mathbf{b}^T \mathbf{\Delta x} + c.
\end{equation} The quadratic form is obtained by locally linearizing the vector error term $\mathbf{e}_k(\mathbf{x}_k)$ around the current solution, and assembling the coefficients as follows: \begin{align} &\mathbf{e}_k(\mathbf{x}^*_k + \mathbf{\Delta x}_k) \simeq \underbrace{\mathbf{e}_k(\mathbf{x}_k^*)}_{\mathbf{e}_k} + \underbrace{ \frac{\partial \mathbf{e}_k(\mathbf{x}_k)}{\partial \mathbf{x}_k} }_{\mathbf{J}_k} \mathbf{\Delta x}_k, \label{eq:gen-linearization}\\ &\mathbf{b}=\sum_k {\mathbf{J}_k^T} \mathbf{\Omega}_k \mathbf{e}_k, \qquad \mathbf{H} = \sum_k {\mathbf{J}_k^T} \mathbf{\Omega}_k \mathbf{J}_k \label{eq:Hb-linear} . \end{align} The minimum of the quadratic form is then found as the solution $\mathbf{\Delta x}^*$ of the linear system $\mathbf{H}\mathbf{\Delta x}^* = \mathbf{b}$. The computed perturbation is finally applied to the current solution $\mathbf{x}^* \leftarrow \mathbf{x}^* + \mathbf{\Delta x}^*$. This procedure is iterated until convergence. Should the state be a smooth manifold $\mathbf{X} \neq \mathbb{R}^n$, the problem admits a local Euclidean parameterization $\mathbf{\Delta x}$ on a chart constructed around $\mathbf{X}^*$. In this case, the Taylor expansion of \eqref{eq:gen-linearization} is evaluated at the origin $\mathbf{\Delta x}=\mathbf{0}$ of the chart computed around the current estimate $\mathbf{X}^*$. Once a new perturbation vector $\mathbf{\Delta x}^*$ is obtained by solving the linear system, the estimate is updated through the boxplus operator $\mathbf{X}^* \leftarrow \mathbf{X}^* \boxplus \mathbf{\Delta x}^*$ as reported in \cite{grisetti2020least}. All modules in our system carry on optimization on one or more variables in $\mathbb{SE}(3)$, represented as homogeneous transformation matrices. As perturbation for the optimization, we use $\mathbf{\Delta x} \in \mathbb{R}^6$. This encodes translation and the imaginary part of the normalized quaternion. 
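Before specializing to $\mathbb{SE}(3)$, note that on a purely Euclidean state the procedure above reduces to the familiar Gauss--Newton loop. The following toy sketch of ours (with $\mathbf{\Omega}_k = \mathbf{I}$ and a deliberately linear residual, so a single iteration already attains the minimum) shows how $\mathbf{H}$ and $\mathbf{b}$ are accumulated and the resulting $2\times2$ system is solved; it is not taken from the released implementation.

```python
def gauss_newton_line(ts, ys, a=0.0, b=0.0, iters=5):
    """Minimize sum_k ||e_k||^2 with e_k = a*t_k + b - y_k by repeatedly
    accumulating g = sum J_k^T e_k and H = sum J_k^T J_k, then solving
    H dx = -g (2x2 system via Cramer's rule) and updating x <- x + dx."""
    for _ in range(iters):
        H00 = H01 = H11 = g0 = g1 = 0.0
        for t, y in zip(ts, ys):
            e = a * t + b - y        # residual e_k(x)
            J0, J1 = t, 1.0          # Jacobian d e_k / d(a, b)
            g0 += J0 * e
            g1 += J1 * e
            H00 += J0 * J0
            H01 += J0 * J1
            H11 += J1 * J1
        det = H00 * H11 - H01 * H01
        da = (-g0 * H11 + g1 * H01) / det
        db = (-g1 * H00 + g0 * H01) / det
        a, b = a + da, b + db
    return a, b
```

For nonlinear residuals the same loop is iterated until convergence, as described above.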
We define the $\boxplus$ operator $\mathbf{X}' = \mathbf{X} \boxplus \mathbf{\Delta x} = \mathbf{X} \cdot \mathrm{v2t}(\mathbf{\Delta x})$ as a function that applies the transform obtained from the perturbation $\mathrm{v2t}(\mathbf{\Delta x})$ to the transform $\mathbf{X}$. Similarly, we define the operator boxminus $\boxminus$ as the one that calculates the vector perturbation between two manifold points as $\mathbf{\Delta x} = \mathbf{X}'\boxminus\mathbf{X} = \mathrm{t2v}(\mathbf{X}' \mathbf{X}^{-1})$. \subsection{Projections} A projection is a mapping $\pi : \mathbb{R}^3 \rightarrow \Gamma \subset \mathbb{R}^2$ from a world point $\mathbf{p} = [x, y, z]^T$ to image coordinates $\mathbf{u} = [u, v]^T$. Knowing the depth or the range $\rho$ of an image point $\mathbf{u}$, we can calculate the inverse mapping $\pi^{-1} : \Gamma \times \mathbb{R} \rightarrow \mathbb{R}^3$, more explicitly $\mathbf{p} = \pi^{-1}(\mathbf{u}, \rho)$. We will refer to this operation as unprojection. In the remainder, we recall the \emph{pinhole} projection that models RGB-D~cameras, and the spherical projection that captures 3D LiDARs. \textbf{Pinhole Model:} Let $\mathbf{K}$ be the camera matrix. Then, the pinhole projection of a point $\mathbf{p}$ is computed as \begin{eqnarray} \pi_p(\mathbf{p}) &=& \phi(\mathbf{K} \, \mathbf{p})\\ \mathbf{K}&=&\begin{bmatrix} f_x& 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 &1 \end{bmatrix} \label{eq:camera-matrix-p}\\ \phi(\mathbf{v}) &=& \frac{1}{v_z} \begin{bmatrix} v_x \\ v_y \end{bmatrix} \label{eq:pinhole-projection}, \end{eqnarray} with the intrinsic camera parameters for the focal length~$f_x$, $f_y$ and the principal point~$c_x$, $c_y$.
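The pinhole mapping and its inverse can be sketched in a few lines; for completeness we also include the spherical mapping used for LiDARs, discussed next. This is our own illustrative code (zero skew assumed, $\mathbf{K}$ given as a nested list), not the released implementation.

```python
import math

def pinhole_project(K, p):
    """pi_p(p) = phi(K p): multiply by the camera matrix, divide by depth."""
    x, y, z = p
    return ((K[0][0] * x + K[0][2] * z) / z,
            (K[1][1] * y + K[1][2] * z) / z)

def pinhole_unproject(K, u, rho):
    """Inverse mapping pi^{-1}(u, rho) for the pinhole model (rho = depth)."""
    return ((u[0] - K[0][2]) * rho / K[0][0],
            (u[1] - K[1][2]) * rho / K[1][1],
            rho)

def spherical_project(K, p):
    """pi_s(p): azimuth/elevation scaled by f_x, f_y, shifted by c_x, c_y."""
    x, y, z = p
    az = math.atan2(y, x)
    el = math.atan2(z, math.sqrt(x * x + y * y))
    return (K[0][0] * az + K[0][2], K[1][1] * el + K[1][2])
```

A point projected with the pinhole model and unprojected with its known depth is recovered exactly, which is the round trip the registration relies on.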
\textbf{Spherical Model:} Let $\mathbf{K}$ be a camera matrix in the form of~\eqref{eq:camera-matrix-p}, where $f_x$ and $f_y$ specify the azimuth and elevation resolution, respectively, and $c_x$ and $c_y$ their offsets in pixels. The function $\psi$ maps a 3D point to azimuth and elevation. Thus the spherical projection of a point is given by \begin{eqnarray} \pi_s(\mathbf{p}) &=& \mathbf{K}_{[1,2]} \psi(\mathbf{p}) \\ \psi(\mathbf{v}) &=& \begin{bmatrix} \atantwo(v_y, v_x) \\ \atantwo\left(v_z, \sqrt{v_x^2 + v_y^2}\right)\\ 1 \end{bmatrix}, \label{eq:spherical-projection} \end{eqnarray} Note that in the spherical model $\mathbf{K}_{[1,2]} \in \mathbb{R}^{2 \times 3}$, the third row of $\mathbf{K}$ being suppressed. \section{Our Approach} \label{sec:main} Our approach relies on a pose-graph to represent the map. Nodes of the pose-graph store keyframes in the form of multi-cue image pyramids. Our pipeline takes as input intensity and depth images for RGB-D~or intensity and range images for LiDAR. For compactness, we will generalize, mentioning only range images. The pyramids are generated from the input images each time a new frame becomes available. By processing the range information, our system computes the surface normals and organizes them into a three-channel image, which is then stacked onto the original input to form a five-channel image. Pyramids are generated by downscaling this input. This process is described in \secref{sec:pyramids}. The pyramids are fed to the tracker, which is responsible for estimating the relative transform between the last keyframe and the current pyramid through the direct error minimization strategy summarized in \secref{sec:error-min}. The tracker is in charge of spawning new keyframes and adding them to the graph when necessary, as discussed in \secref{sec:tracking}.
Whenever a new keyframe is generated, the loop closure scheme, described in \secref{sec:closures}, searches for potential relocalization candidates among the past keyframes by performing a search in appearance space. Candidate matches are further pruned by geometric validation and direct refinement. Successful loop closures result in the addition of new constraints in the pose-graph and trigger a complete graph optimization as detailed in \secref{sec:pgo}. \begin{figure} \includegraphics[width=\columnwidth]{lidar_cues.png} \vspace{0.1cm}\\ \includegraphics[width=\columnwidth]{rgbd_cues.png} \caption{Cues generated for LiDAR~(top) and RGB-D~(bottom) images. The first row/column shows the intensity $\ensuremath{\mathcal{I}}^i$, the middle shows the range $\ensuremath{\mathcal{I}}^\mathrm{\rho}$, and the last one illustrates the normals encoded by color $\ensuremath{\mathcal{I}}^\mathrm{n}$. The red pixels on the intensity cues are invalid measurements (i.e., range not available). } \label{fig:cues} \end{figure} \subsection{Pyramid Generation} \label{sec:pyramids} The first step to generate a pyramid from a pair of intensity $\ensuremath{\mathcal{I}}^\mathrm{i}$ and range image $\ensuremath{\mathcal{I}}^\mathrm{\rho}$ consists of extracting the normals. To calculate the normal at pixel $\mathbf{u}$ we unproject the pixels in the neighborhood $\mathcal U=\{\mathbf{u}': \|\mathbf{u}-\mathbf{u}'\| < \tau_\mathbf{u}\}$ whose radius $\tau_\mathbf{u}$ is inversely proportional to the range at the pixel $\ensuremath{\mathcal{I}}^{\rho}(\mathbf{u})$. The normal $\mathbf{n}_\mathbf{u}$ is that of the plane that best fits the unprojected points from the set $\mathcal U$. All valid normals are assembled in a normal image $\ensuremath{\mathcal{I}}^\mathrm{n}$, so that $\ensuremath{\mathcal{I}}^\mathrm{n}(\mathbf{u})=\mathbf{n}_\mathbf{u}$.
One level of a pyramid $\ensuremath{\mathcal{I}}$, therefore, consists of three images: $\ensuremath{\mathcal{I}}^\mathrm{i}$, $\ensuremath{\mathcal{I}}^\mathrm{\rho}$ and $\ensuremath{\mathcal{I}}^\mathrm{n}$. Further channels such as curvature and semantics can easily be embedded into the representation by adding additional images. In the remainder, we will refer to one generic image in the set $\ensuremath{\mathcal{I}}$ as a \emph{cue} $\ensuremath{\mathcal{I}}^\mathrm{c}$. Pyramids are required to extend the basin of convergence of direct registration methods. This is due to the implicit data association, which operates only in the neighborhood of a few pixels. Hence, downscaling the images increases the convergence basin at the cost of reduced accuracy. The accuracy can, however, be recovered by running the registration from the coarsest to the finest level. Each time a level is changed, the initial guess of the transformation is set to the solution of the previous coarser level. A pyramid $\mathcal{P}$ is generated from all the cues $\ensuremath{\mathcal{I}}=\{\ensuremath{\mathcal{I}}^\mathrm{c} \}$ by downscaling at user-selected resolutions. In our experiments, we typically use three scaling levels, each of them half the resolution of the previous level.
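The repeated halving of the resolution can be sketched as follows; the $2\times 2$ block averaging used here is an assumption on our part, since the text does not specify the downscaling filter:

```python
import numpy as np

def build_pyramid(cue, num_levels=3):
    """Build a pyramid for one cue by repeated 2x downscaling.

    Sketch under the assumption of simple 2x2 mean pooling per level;
    level 0 is the full-resolution cue.
    """
    levels = [np.asarray(cue, dtype=float)]
    for _ in range(num_levels - 1):
        prev = levels[-1]
        # Crop to even dimensions, then average each 2x2 block.
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        cropped = prev[:h, :w]
        down = 0.25 * (cropped[0::2, 0::2] + cropped[1::2, 0::2]
                       + cropped[0::2, 1::2] + cropped[1::2, 1::2])
        levels.append(down)
    return levels
```

Registration then runs from `levels[-1]` (coarsest) down to `levels[0]`, warm-starting each level with the previous solution.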
\subsection{Direct Error Minimization} \label{sec:error-min} As in direct error minimization approaches, our method seeks to find the transform $\mathbf{X}^*\in \mathbb{SE}(3)$ that minimizes the photometric distance between the two images: \begin{equation} \begin{small} \mathbf{X}^*= \argmin_{\mathbf{X} \in \mathbb{SE}(3)} \sum_{\mathbf{u}} \| \underbrace{ \ensuremath{\mathcal{\hat I}}^\mathrm{i}(\mathbf{u}) - \ensuremath{\mathcal{I}}^\mathrm{i}\overbrace{\left(\pi \left(\mathbf{X} \pi^{-1} \left(\mathbf{u},\rho \right) \right) \right)}^{\mathbf{u}'}}_{\mathbf{e}^\mathrm{i}_{\mathbf{u}}} \|^2, \end{small} \label{eq:total-error} \end{equation} where $\mathbf{e}^\mathrm{i}_\mathbf{u}$ denotes the error between corresponding pixels. The evaluation point $\mathbf{u}'$ of $\ensuremath{\mathcal{I}}^\mathrm{i}$ is computed by unprojecting the pixel $\mathbf{u}$, applying the transform $\mathbf{X}$, and projecting it back. To carry out this operation, the range at the pixel $\rho = \ensuremath{\mathcal{I}}^{\rho}(\mathbf{u})$ needs to be known. \eqref{eq:total-error} models classical photometric error minimization under the assumption that the cues are not affected by the transform $\mathbf{X}$. In our case, range and normals are affected by $\mathbf{X}$. Hence, we need to account for the change in these cues, and we do so by introducing a mapping function $\zeta^\mathrm{c}(\mathbf{X}, \ensuremath{\mathcal{\hat I}}^\mathrm{c}(\mathbf{u}))$. This function calculates the \emph{pixel} value of the $\mathrm{c}^\mathrm{th}$ cue after applying the transform $\mathbf{X}$ to the original channel value $\ensuremath{\mathcal{\hat I}}^\mathrm{c}(\mathbf{u})$.
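A plausible sketch of the mapping function $\zeta^\mathrm{c}$ is given below. It assumes that the unprojected 3D point behind the pixel is available, that the range maps to the norm of the transformed point, and that normals are rotated by the rotational part of $\mathbf{X}$, with intensity left unchanged; the text implies this behavior but does not spell it out:

```python
import numpy as np

def zeta(cue, X, value, p):
    """Mapping function zeta^c: cue value after applying transform X.

    X is a 4x4 homogeneous transform, `value` the stored cue value
    I^c(u), and `p` the unprojected 3D point behind the pixel
    (needed for the range cue only).
    """
    R, t = X[:3, :3], X[:3, 3]
    if cue == "intensity":
        return value                     # photometry unaffected by X
    if cue == "range":
        return np.linalg.norm(R @ p + t)  # range of the moved point
    if cue == "normal":
        return R @ value                 # normals rotate with X
    raise ValueError("unknown cue: " + str(cue))
```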
We can thus rewrite a more general form of \eqref{eq:total-error} that accounts for all cues and captures this effect as follows: \begin{equation} \mathbf{X}^*= \argmin_{\mathbf{X} \in \mathbb{SE}(3)} \sum_ \mathrm{c} \sum_\mathbf{u} \| \underbrace{ \zeta^\mathrm{c}(\mathbf{X},\ensuremath{\mathcal{\hat I}}^\mathrm{c}(\mathbf{u})) - \ensuremath{\mathcal{I}}^\mathrm{c}(\mathbf{u}')}_{\mathbf{e}_{\mathbf{u}}^c}\|_{\mathbf{\Omega}^\mathrm{c}}^2. \label{eq:total-error-multicue} \end{equation} The squared Mahalanobis distance $\| \cdot \|^2_{\mathbf{\Omega}^c}$ is used to weight the different cues. More details about a general methodology for direct registration can be found in \cite{della2018general}. While approaching the problem in \eqref{eq:total-error-multicue} with the \gls{ils} method described in \secref{sec:iterative-optimization}, particular care has to be taken with the numerical approximation of floating-point numbers. In particular, since each pixel and cue contribute to constructing the quadratic form with an independent error $\mathbf{e}_{\mathbf{u}}^c$, the summations in \eqref{eq:Hb-linear} might accumulate millions of terms. Hence, to lessen the effect of these round-offs, \eqref{eq:Hb-linear} has to be computed using a numerically stable algorithm. In our single-threaded implementation, we use the compensated summation algorithm~\cite{higham2002accuracy}. We use multi-cue direct alignment in incremental position tracking, explained in the next section (\secref{sec:tracking}), and in loop closure refinement and validation (\secref{sec:closures}). \subsection{Tracking} \label{sec:tracking} This module is in charge of estimating the open-loop trajectory of the sensor. To this extent, it processes new pyramids as they become available by determining the relative transform between the current pyramid $\mathcal{P}_t$ and the last keyframe $\mathcal{K}_i$. A keyframe stores a global transform $\mathbf{X}_i$ and a pyramid $\mathcal{P}_i$.
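The compensated summation cited above is the classic Kahan algorithm, which carries a running correction term for the low-order bits lost in each addition; a minimal sketch:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation.

    Accumulates a running error-correction term to lessen floating-point
    round-off, relevant when the quadratic form of the registration is
    built from millions of per-pixel, per-cue terms.
    """
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the correction to the next term
        t = total + y
        c = (t - total) - y  # the part of y that was lost in t
        total = t
    return total
```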
The registration algorithm of \secref{sec:error-min} is used to compute a relative transform $\mathbf{Z}_{i,t}$ between the keyframe pyramid and the current one. Whenever the magnitude of such a transform exceeds a given threshold or the overlap between $\mathcal{P}_i$ and $\mathcal{P}_t$ becomes too small, the tracker spawns a new keyframe $\mathcal{K}_{i+1}$, with transform $\mathbf{X}_{i+1}=\mathbf{X}_{i} \mathbf{Z}_{i,i+1}$. Furthermore, it adds to the graph a new constraint between the nodes $i$ and $i+1$, with transform $\mathbf{Z}_{i,i+1}$ and information matrix $\mathbf{\Omega}_{i,i+1}$. The latter is set to the $\mathbf{H}$ matrix of the direct registration at the optimum. The generation of the new keyframe triggers the loop detection described in the next section. Using keyframes reduces the drift that would occur when performing subsequent pairwise registrations, since the reference frame stays fixed for a longer time. Potentially, if the sensor hovers at a distance smaller than the keyframe threshold, all registrations are done against the same pyramid, and no drift occurs. \subsection{Loop Detection and Validation} \label{sec:closures} This module is responsible for relocalizing a newly generated keyframe with respect to previous ones. More formally, given a query frame $\mathcal{K}_i$, it retrieves a set of tuples $\{\left < \mathcal{K}_j, \mathbf{Z}_{i,j}, \mathbf{\Omega}_{i,j}\right>\}$, consisting of a past keyframe $\mathcal{K}_j$, a transform $\mathbf{Z}_{i,j}$ between $\mathcal{K}_i$ and $\mathcal{K}_j$, and an information matrix $\mathbf{\Omega}_{i,j}$ characterizing the uncertainty of the computed transform. Our system approaches loop closing in multiple stages. At first, we carry out visual place recognition on the intensity channels. This approach leverages the results of previous work \cite{di2021visual}. For visual place recognition, we rely on ORB feature descriptors extracted from the $\ensuremath{\mathcal{I}}^\mathrm{i}$ of each keyframe.
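The keyframe-spawning test on the magnitude of $\mathbf{Z}_{i,t}$ can be sketched as follows. The threshold names and the use of translation norm plus rotation angle as the "magnitude" are our assumptions, and the overlap criterion is omitted for brevity:

```python
import numpy as np

def maybe_spawn_keyframe(X_i, Z_it, translation_thresh, rotation_thresh):
    """Decide whether the tracker should spawn a new keyframe.

    X_i is the global pose of the current keyframe, Z_it the relative
    transform estimated by direct registration. Returns the global pose
    of the new keyframe, or None if no keyframe is needed.
    """
    t_norm = np.linalg.norm(Z_it[:3, 3])
    # Rotation angle recovered from the trace of the rotation block.
    cos_angle = np.clip((np.trace(Z_it[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    if t_norm > translation_thresh or angle > rotation_thresh:
        return X_i @ Z_it  # X_{i+1} = X_i Z_{i,i+1}
    return None
```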
Retrieving the most similar frame to the current one amounts to looking for the images in the database whose descriptors are closest to those of the current image. To efficiently conduct this search, we use a Hamming distance embedding binary search tree (HBST)~\cite{schlegel2018hbst}, a tree-like structure that allows for descriptor search and insertion in logarithmic time by exploiting particular properties of binary descriptors. A match from HBST also returns a set of pairs of corresponding points between the matching keypoints. Having the depth and unprojecting the points, we can carry out a straightforward RANSAC registration. Finally, each candidate match is subject to direct refinement (\secref{sec:error-min}). This step enhances the accuracy and provides information matrices on the same scale as the ones generated by the tracker. The above strategy is applied independently to RGB-D~or LiDAR~data. The surviving tuples $\{\left < \mathcal{K}_j, \mathbf{Z}_{i,j}, \mathbf{\Omega}_{i,j}\right> \}$ constitute potential loop closing constraints to be added to the graph. However, to handle environments with large sensor aliasing, we introduce a further check to preserve topological consistency. Whenever a loop closure is found, we carry out a direct registration between all neighbours that would result \emph{after} accepting the closure. If the resulting error is within certain bounds, the closure is finally added to the graph, and a global optimization is triggered. \subsection{Pose-graph Optimization} \label{sec:pgo} The goal of this module is to retrieve a configuration of the keyframes in space that is maximally consistent with the incremental constraints introduced by the tracker and the loop closing constraints by the loop detector. A pose-graph is a special case of a factor graph~\cite{grisetti2020least, kaess2008isam}.
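The metric underlying the HBST search over ORB descriptors is the Hamming distance, i.e., the popcount of the XOR of two binary descriptors; a minimal sketch:

```python
def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors.

    d1 and d2 are `bytes` objects of equal length (e.g. 32 bytes for
    ORB); the distance is the number of differing bits.
    """
    dist = 0
    for a, b in zip(d1, d2):
        dist += bin(a ^ b).count("1")  # popcount of the XOR-ed byte
    return dist
```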
The nodes of the graph are the keyframe poses $\mathbf{X}=\{ \mathbf{X}_i\}_{i=1:N}$, while the constraints encode the relative transformations between the connected keyframes, together with their uncertainty $\{ \left< \mathbf{Z}_{i,j}, \mathbf{\Omega}_{i,j} \right> \}$. Optimizing a factor graph consists in solving the following optimization problem: \begin{equation} \mathbf{X}^*= \argmin_{\mathbf{X} \in \mathbb{SE}(3)^N} \sum_{i,j} \| \underbrace{ \mathbf{X}_i^{-1} \mathbf{X}_j \boxminus \mathbf{Z}_{i,j} }_{\mathbf{e}_{i,j}} \|^2_{\mathbf{\Omega}_{i,j}}. \label{eq:pgo-error} \end{equation} Here, the error $\mathbf{e}_{i,j}$ is the difference between the predicted displacement $\mathbf{X}_i^{-1} \mathbf{X}_j$ and the result of the direct alignment $\mathbf{Z}_{i,j}$. The total perturbation vector $\mathbf{\Delta x} \in \mathbb{R}^{6N}$ results from stacking all variable perturbations $\{\mathbf{\Delta x}_i\}$. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{desk_giorgio} \centering \includegraphics[width=0.40\textwidth]{stamperia} \caption{Qualitative RGB-D~reconstructions showing the global consistency produced by our pipeline. Data has been self-recorded with an Intel Realsense 455.} \label{fig:rgbd} \end{figure} \section{Experimental Evaluation} \label{sec:exp} In this section, we report the results of our pipeline on different public benchmark datasets. To the best of our knowledge, our approach is the only open-source SLAM system that can deal with RGB-D~and~LiDAR~in a unified manner. Therefore, to evaluate our system, we compare with state-of-the-art SLAM packages developed specifically for each of these sensor types. For RGB-D~we consider DVO-SLAM \cite{kerl2013dense} and ElasticFusion \cite{whelan2015elasticfusion} as direct approaches and ORB-SLAM2 \cite{mur2017orb} as the indirect representative.
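The error term of \eqref{eq:pgo-error} can be sketched numerically. Here the $\boxminus$ operator is approximated by the translation difference plus a small-angle axis-angle extraction; this is an assumption for illustration, as implementations typically use the full $\mathbb{SE}(3)$ logarithm:

```python
import numpy as np

def pgo_error(X_i, X_j, Z_ij):
    """Pose-graph error e_ij = (X_i^{-1} X_j) boxminus Z_ij.

    All arguments are 4x4 homogeneous transforms; returns a 6-vector
    (translation error, rotation error) valid for small residuals.
    """
    # Residual transform: identity when the poses agree with the measurement.
    D = np.linalg.inv(Z_ij) @ np.linalg.inv(X_i) @ X_j
    t_err = D[:3, 3]
    R = D[:3, :3]
    # Axis-angle vector from the skew-symmetric part of R (small angles).
    w = 0.5 * np.array([R[2, 1] - R[1, 2],
                        R[0, 2] - R[2, 0],
                        R[1, 0] - R[0, 1]])
    return np.concatenate([t_err, w])
```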
For LiDAR~we compare against LeGO-LOAM \cite{shan2018lego} as the feature-based representative and SuMA \cite{behley2018efficient} representing the dense category. To run the experiments, we used a PC with an Intel Core i7-7700K CPU @ 4.20GHz and 16GB of RAM. Since this work is focused on SLAM, we perform our quantitative evaluation using the RMSE of the absolute trajectory error (ATE) with $\mathbb{SE}(3)$ alignment. The alignment for the metric is computed using the Horn method \cite{horn1988closed}, and the timestamps are used to determine the associations. Then, we calculate the RMSE of the translational differences between all matched poses. The tracking module dominates the runtime of our approach, since loop closures are detected and validated asynchronously in another thread. Hence, we report the average frequency at which the tracker runs for each sensor. At the core of the tracker lies the photometric registration algorithm, whose computation is proportional to the size of the images. Despite our current implementation of the registration algorithm being single-threaded, on the PC used to run the experiments the tracking system runs at 5 Hz for the sensor with the highest resolution, while it can operate online on the sensor with the lowest resolution. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{g_maps_alignment.png} \caption{MD-SLAM map on the \textit{long} sequence from the Newer College Dataset \cite{ramezani2020newer} aligned with Google Earth.} \label{fig:gmaps} \end{figure} \subsection{RGB-D~Results} We conducted several experiments with the RGB-D~sensor. Qualitative analyses have been performed using self-recorded data and are shown in \figref{fig:rgbd}. As public benchmarks, we used TUM RGB-D~\cite{sturm2012benchmark} and ETH3D~\cite{schops2019bad}. The TUM RGB-D~dataset contains multiple real sequences captured with a handheld Xbox Kinect. A rolling shutter camera provides the RGB data.
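The ATE RMSE metric described above can be sketched as follows; the closed-form alignment below is the standard SVD-based (Horn/Umeyama-style) solution on matched positions, restricted to rotation and translation as in our metric:

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of the absolute trajectory error after SE(3) alignment.

    `est` and `gt` are (N, 3) arrays of timestamp-matched positions.
    The rigid alignment minimizing the residual is obtained in closed
    form via the SVD of the cross-covariance of the two point sets.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    aligned = (R @ est.T).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```

By construction, a trajectory differing from the ground truth only by a rigid transform yields an ATE of (numerically) zero.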
Further, the camera's depth and color streams are not synchronized. Every sequence is accompanied by an accurate groundtruth trajectory obtained with an external motion capture system. The ETH3D benchmark is acquired with global shutter cameras and accurate active stereo depth. Color and depth images are synchronized. We select several indoor sequences for which ground-truth, computed by external motion capture, is available. On these datasets, we compare with DVO-SLAM, ElasticFusion and ORB-SLAM2. These three approaches are representative of different classes of SLAM algorithms. \tabref{tab:rgbd-fr} shows the results on the TUM RGB-D~datasets, while \tabref{tab:rgbd-eth} presents the outcome on the ETH3D datasets. DVO-SLAM implements a mixed geometry-based and direct registration. Internally, the alignment between pairs of keyframes is obtained by jointly minimizing point-to-plane and photometric residuals. This is similar to ElasticFusion, whose estimate consists of a surfel-based model of the environment and the current sensor location instead of the trajectory. In contrast to these two approaches, ORB-SLAM2 implements a traditional visual SLAM pipeline, where a local map of landmarks around the RGB-D~sensor is constructed from ORB features. This map is constantly optimized as the camera moves by performing local \gls{ba}. Loop closures are detected through DBoW2 \cite{galvez2012bags}, and a global optimization on a $\mathbb{S}\mathrm{im}(3)$ pose-graph is used to enforce global consistency. The TUM dataset provides images of $640\times 480$ pixels, while ETH3D provides $740\times 460$ pixels. From these images, we compute a 3-level pyramid with scales $1/2$, $1/4$, and $1/8$. Our system runs at $5.5$ and $5$ Hz at these resolutions, respectively.
\newcolumntype{L}{>{$}l<{$}} \newcolumntype{C}{>{$}c<{$}} \newcolumntype{R}{>{$}r<{$}} \newcommand{\nm}[1]{\textnormal{#1}} \begin{table} [t] \centering \begin{tabular}{LCCC} \toprule & \multicolumn{1}{c}{fr1/desk} & \multicolumn{1}{c}{fr1/desk2} & \multicolumn{1}{c}{fr2/desk} \\ \midrule \nm{DVO-SLAM} & 0.021 & 0.046 & 0.017 \\ \nm{ElasticFusion} & 0.020 & 0.048 & 0.071 \\ \nm{ORB-SLAM2} & \textbf{0.016} & \textbf{0.022} & \textbf{0.009} \\ \nm{\textbf{Ours}} & 0.041 & 0.064 & 0.057 \\ \bottomrule \end{tabular} \caption{ATE RMSE [m] results on TUM RGB-D datasets, recorded with non-synchronous depth using a rolling shutter camera.} \label{tab:rgbd-fr} \end{table} \begin{table} [t] \centering \begin{tabular}{LCCCCCC} \toprule & \multicolumn{1}{c}{table3} & \multicolumn{1}{c}{table4} & \multicolumn{1}{c}{table7} & \multicolumn{1}{c}{cables1} & \multicolumn{1}{c}{plant2} & \multicolumn{1}{c}{planar2} \\ \midrule \nm{DVO-SLAM} & 0.008 & 0.018 & \textbf{0.007} & \textbf{0.004} & 0.002 & 0.002 \\ \nm{ElasticFusion} & - & 0.012 & - & 0.018 & 0.017 & 0.011\\ \nm{ORB-SLAM2} & \textbf{0.007} & \textbf{0.008} & 0.010 & 0.007 & 0.003 & 0.005\\ \nm{\textbf{Ours}} & 0.021 & 0.022 & 0.036 & 0.015 & \textbf{0.001} & \textbf{0.001}\\ \bottomrule \end{tabular} \caption{ATE RMSE [m] on ETH3D, recorded with global shutter camera and synchronous streams. ElasticFusion fails in \textit{table3} and \textit{table7}.} \label{tab:rgbd-eth} \end{table} In \tabref{tab:rgbd-fr} we can see that ORB-SLAM2 clearly outperforms all other pipelines. DVO-SLAM and ElasticFusion provide comparable results, and our approach is the worst in terms of accuracy. Yet, the largest error is 6.4 cm, which results in a usable map. As stated before, this dataset is subject to rolling shutter and asynchronous depth effects. ORB-SLAM2, being feature-based, is less sensitive to these phenomena. DVO-SLAM and ElasticFusion explicitly model these effects. 
Our approach does not attempt to address these issues, since that would render the whole pipeline less consistent between different sensing modalities. \tabref{tab:rgbd-eth} presents the results on the ETH3D benchmark. In this case, our performance is on par with the other methods, since intensity and depth are synchronous and the camera is global shutter. These results highlight the strengths and weaknesses of a purely direct approach not supported by any geometric association. While being compact, it suffers from unmodeled effects and requires a considerable overlap between subsequent frames. \subsection{3D LiDAR~Results} We conducted different experiments on public LiDAR~benchmarks to show the performance of our SLAM implementation. For the LiDAR~we use the Newer College Dataset~\cite{ramezani2020newer, zhang2021multicamera}, recorded at 10 Hz with two models of Ouster LiDARs: OS1 and OS0. We conducted our evaluation on the \emph{long}, \emph{cloister}, \emph{quad-easy} and \emph{stairs} sequences. The OS1 has 64 vertical beams. We selected the \emph{long} sequence, which lasts approximately 45 minutes. It consists of multiple loops with viewpoint changes between buildings and a park. The other three shorter sequences are recorded with the OS0, which has 128 vertical beams. The \emph{quad-easy} sequence contains four loops that explore the quad, \emph{cloister} mixes outdoor and indoor scenes, while \emph{stairs} is purely indoor and based on vertical motion through different floors. \begin{table} [h!]
\centering \begin{tabular}{LCRCR} \toprule \multicolumn{1}{l}{} & \multicolumn{3}{c}{OS0-128} & \multicolumn{1}{c}{OS1-64} \\ \cmidrule{2-5} & \multicolumn{1}{c}{cloister} & \multicolumn{1}{c}{quad} & \multicolumn{1}{c}{stairs} & \multicolumn{1}{c}{long} \\ \midrule \nm{LeGO-LOAM} & \textbf{0.20} & \textbf{0.09} & 3.20 & \multicolumn{1}{c}{1.30} \\ \nm{SuMA} & 3.34 & 1.74 & 0.67 & \multicolumn{1}{c}{-} \\ \nm{\textbf{Ours}} & 0.36 & 0.25 & \textbf{0.34} & \multicolumn{1}{c}{1.74} \\ \bottomrule \end{tabular} \caption{ATE RMSE [m] results of all benchmarked approaches on the Newer College Dataset. SuMA fails on the \textit{long} sequence.} \label{tab:lidar} \end{table} \begin{figure} \centering \begin{subfigure}{0.20\textwidth} \centering \includegraphics[width=\textwidth]{quad_easy} \caption{quad-easy} \label{fig:quad-easy} \end{subfigure} \hfill \begin{subfigure}{0.27\textwidth} \centering \includegraphics[width=\textwidth]{stairs} \caption{stairs} \label{fig:stairs} \end{subfigure} \caption{Some scenes from the Newer College dataset reconstructed by our system.} \label{fig:lidar} \end{figure} Qualitative analyses have been performed to show the results obtained by our pipeline. \figref{fig:lidar} illustrates some reconstructions obtained with MD-SLAM from Newer College sequences. \figref{fig:gmaps} and \figref{fig:traj} show the global consistency of our estimate on the \textit{long} sequence. Quantitatively, we compare against LeGO-LOAM and SuMA. These represent two different classes of LiDAR~algorithms, respectively sparse and dense. \tabref{tab:lidar} summarizes the results of the comparison. LeGO-LOAM is currently one of the most accurate LiDAR~SLAM pipelines and represents the sparse class of LiDAR~algorithms. In contrast to our approach, LeGO-LOAM is a purely geometric, feature-based, frame-to-model LiDAR~SLAM system, where the optimization on roll, pitch and the z-axis (pointing up) is decoupled from the planar parameters.
SuMA constructs a surfel-based map and estimates the changes in the sensor's pose by exploiting projective data association in a frame-to-model or in a frame-to-frame fashion. For both pipelines, loop closures are handled through ICP. Being ground optimized, LeGO-LOAM shows impressive results mainly in chunks where the ground occupies most of the scene, yet our approach provides competitive accuracy. The situation becomes challenging for LeGO-LOAM when its assumptions are violated, such as in the \emph{stairs} sequence. In this case, our pipeline is the most accurate, since it does not impose any particular structure on the environment being mapped. SuMA's performance is the worst in terms of accuracy. We tried this pipeline both in a frame-to-frame and a frame-to-model mode. The one reported in \tabref{tab:lidar} represents SuMA frame-to-frame, which always outperforms the frame-to-model variant on these datasets. We use the OS1 to produce images of $64 \times 1024$ pixels and the OS0 to produce images of $128 \times 1024$ pixels. Since the horizontal resolution is much larger than the vertical one, to balance the aspect ratio for direct registration we initially downscale the horizontal resolution by $1/2$ for the OS0 and by $1/4$ for the OS1. Our approach then generates a pyramid with the following scales: $1$, $1/2$ and $1/4$. With these settings, our system operates at around 10 Hz on the OS0 and at approximately 20 Hz on the OS1, making it suitable for online estimation. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{traj.pdf} \caption{Alignment of our estimate with the groundtruth on the \textit{long} sequence of Newer College. The color bar on the right shows the translational error [m] over the whole trajectory.} \label{fig:traj} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we presented a direct SLAM system that operates both with RGB-D~and LiDAR~sensors.
These two heterogeneous sensor modalities are addressed exclusively by changing the projection models. To the best of our knowledge, our approach is the only open-source SLAM system that can deal with RGB-D~and LiDAR~in a unified manner. All optimization components in our system are handled by a single \gls{ils} solver, resulting in highly compact code. Comparative experiments show that our generic method can compete with sensor-specific state-of-the-art approaches. Being purely photometric and without making any assumption about the environment, our pipeline shows consistent results on different types of datasets. We release our software as a C++ open-source package. The current single-threaded implementation can operate online with small image sizes. Thanks to the inherent data-separation of direct registration, we envision a GPU implementation of our approach that seamlessly scales to high resolutions while matching real-time requirements. Furthermore, the independence of the internal representation from the sensor source paves the way to SLAM systems that operate jointly on both RGB-D~and LiDAR. \\ \bibliographystyle{plain}
\section{Introduction} Frustrated magnetism continues to be of considerable current interest due to the rich possibility of new states and properties of matter.\cite{lacroix_2011} While Kagome and triangular lattices have been widely studied, frustration in the face-centred-cubic (fcc) lattice has received much less attention, particularly within itinerant electron models. Antiferromagnetic (AF) orders in fcc materials range from type-III and type-I in the 1:2 compounds ${\rm Mn S_2}$ and ${\rm Mn Te_2}$,\cite{hastings_1959} to type-II in the 1:1 compound ${\rm MnO}$.\cite{bloch_1974} Magnetic frustration and pressure-induced metal-insulator transition are exhibited by a variety of complex compounds having effective fcc magnetic lattice such as alkali fullerides $\rm A_3 C_{60}$ (A = K, Rb, Cs),\cite{capone_2009,ganin_2010} cluster compounds $\rm Ga Ta_4 Se_8 $, $\rm Ga Nb_4 Se_8 $,\cite{guiot_2013} and `B site ordered' double perovskites.\cite{aczel_2013} Magnetic frustration in an itinerant electron model has additional features besides the usual geometric frustration effect in the ideal Heisenberg model with nearest-neighbor (NN) two-spin interaction where the interplay of magnetic interactions and lattice geometry results in frustrated spins. This is effectively illustrated by the case of the $120^{\circ}$ ordered AF state of the Hubbard model on a triangular lattice. 
While at large $U$ the spin wave dispersion in the random phase approximation (RPA) exactly matches the corresponding result for the Heisenberg model with $J=4t^2/U$, extended-range effective spin couplings generated at finite $U$ result in strong zone-boundary spin wave softening and even magnetic instability with decreasing $U$, highlighting the finite-$U$-induced competing interaction and frustration effect in an itinerant electron system.\cite{as_2005} The cyclic ring-exchange four-spin term $({\bf S}_i \times {\bf S}_j).({\bf S}_k \times {\bf S}_l)$, generated in the square-lattice Hubbard model at next-to-leading order in the $t/U$ expansion and arising from coherent motion of electrons beyond NN sites, illustrates that higher-spin couplings are also generated besides extended-range two-spin interactions.\cite{coldea_2001} The itinerant electron approach also directly connects magnetic frustration and the spin density wave (SDW) band gap. The same hopping terms between parallel spins that are responsible for competing interactions also result in band broadening, which strongly reduces the SDW gap and renders the system more susceptible to metal-insulator transition with decreasing $U/t$.
SDW band gap reduction, band overlap, and first-order metal-insulator transition with decreasing $U/t$ have been studied in the frustrated square- and triangular-lattice antiferromagnets due to electron self-energy corrections calculated in the self-consistent Born approximation (SCBA).\cite{asingh_2004+5} Within the $t$-$t'$ Hubbard model on the fcc lattice, the ground-state magnetic phase diagram and the related metal-insulator transition have recently been investigated using the slave-boson approach by minimizing the ground state thermodynamical potential with respect to the spiral state wave vector.\cite{timirgazin_2016} The magnetic phase diagrams for different band fillings (fixed $t'$) as well as for different $t'$ (half filling) were obtained, showing a variety of magnetic phases and transitions. However, finite-$U$-induced competing interaction and frustration effects on spin waves have not been investigated for the fcc-lattice AF state of the Hubbard model. The study of magnetic excitations should be of particular interest for type-III order in view of the measured spin wave dispersion in $\rm MnS_2$, obtained from inelastic neutron scattering studies of the naturally occurring pyrite-structured mineral hauerite.\cite{tapan_mns2} \begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig1.eps,angle=0,width=60mm} \caption{Type-III AF order on the fcc lattice. Planes shown in solid and dashed lines with spins in red and blue indicate the two identical fcc sublattices. The layers along the $z$ direction in the sequence $\alpha \alpha' \beta \beta' \alpha ...$ (labeled as $12341...$) have planar $(\pi,\pi)$ magnetic order along the $x$ and $y$ directions shown.
Parallel spins connected by NN hopping in different fcc sublattices ($t_z$) reflect the strong inherent magnetic frustration.} \end{figure} \section{AF orders on the fcc lattice} Neutron scattering studies of the AF structures of ${\rm MnS_2}$, ${\rm MnSe_2}$, and ${\rm MnTe_2}$ have shown orderings of the ``third'' kind for the disulphide, of the ``first'' kind for the ditelluride, and an intermediate arrangement for the diselenide.\cite{hastings_1959} The magnetic structure has been established as collinear for ${\rm MnS_2}$.\cite{tapan_1991} While a planar $(\pi,\pi)$ order ($xy$ plane in Fig. 1) of nearest-neighbour (NN) spins is common to all three, it is the order in the perpendicular ($z$) direction involving next-nearest-neighbour (NNN) spins which distinguishes the three cases. The order is AF for the disulphide (smallest lattice parameter 6.097 \AA) and F for the ditelluride (largest lattice parameter 6.943 \AA), whereas with intermediate lattice parameter 6.417 \AA, the diselenide exhibits the intermediate arrangement. These structures may therefore be regarded as different interlayer stackings of the AF ordered layers. As inferred from the relatively low $T_{\rm N}$ values, the weak magnetic couplings between the Mn spins are due to the relatively large lattice parameters in these high-spin ($S$=5/2) systems. AF order of the ``second'' kind is realized in the 1:1 compound $\rm Mn O$ (much smaller lattice parameter 4.447 \AA), which has planar $(\pi,0)$ order instead. These fcc lattice antiferromagnets exhibit several unusual magnetic properties.
For $\rm MnS_2$, the magnetic phase transition at $T_{\rm N}$=48 K is of first order.\cite{hastings_1976,tapan_1984,westrum_1970} By using very high resolution synchrotron x-ray diffraction techniques, a pseudo-tetragonal distortion was detected below the magnetic ordering temperature ($c/a$ ratio 1.0006), indicating coupling between magnetic and lattice degrees of freedom.\cite{kimber_2015} Giant pressure-induced volume collapse accompanied by high spin ($S=5/2$) to low spin ($S=1/2$) transition involving interplay between crystal field splitting and Hund's rule coupling has been observed in this pyrite mineral.\cite{kimber_2014} The measured N\'{e}el temperature of $\rm MnTe_2$ has been found to show unusually large pressure dependence of 12K/GPa, giving rise to large violation of Bloch's rule.\cite{tapan_2015} Based on IR reflection measurements at room temperature, $\rm MnTe_2$ appears to undergo pressure-induced semiconductor-metal transition in the pressure range of 8-25 GPa.\cite{mita_2008} Quantum Monte Carlo simulations in a NN classical Heisenberg AF on the fcc lattice have confirmed a first order transition to a collinear type-I AF structure due to an ``order by disorder'' effect.\cite{gvozdikova_2005} A first-order transition driven by thermal fluctuations has been suggested by the absence of stable fixed points within the renormalization group approach.\cite{qmc1,qmc2} As an illustration of low-temperature thermal fluctuations selecting collinear states through the ``order by disorder'' effect,\cite{henley_1987} short wavelength thermal fluctuations lead to an effective biquadratic exchange $-({\bf S}_i . {\bf S}_j)^2$ between neighboring spins,\cite{canals_2004} which favours collinear spin arrangement. Strong geometric frustration is inherent in these fcc-lattice antiferromagnets. Unlike the weakly frustrated square-lattice AF, it is the strong NN AF bonds in neighboring layers which are frustrated in the fcc lattice.
Also, within the localized-spin picture, type-III (type-I) order on the fcc lattice is stabilized for AF (F) sign of the second-neighbor interaction,\cite{henley_1987} as expected from Fig. 1. Competing interactions between neighboring layers of same and different fcc sublattices also allow for a spiral spin structure. In this paper, we will show through a spin wave stability analysis that strong finite-$U$-induced competing interaction and frustration effects in the fcc lattice result in significant additional spin wave softening (besides the usual geometric frustration effect), which considerably enriches the competition between different AF orders in the $t$-$t'$ Hubbard model. \section{$t$-$t'$ Hubbard model} We consider the $t$-$t'$ Hubbard model on the fcc lattice: \begin{equation} H = -t \sum_{\langle i,j \rangle,\sigma} a_{i\sigma} ^\dagger a_{j\sigma} - t' \sum_{\langle \langle i,j \rangle \rangle,\sigma} a_{i\sigma} ^\dagger a_{j\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow}, \end{equation} where $t$ and $t'$ are the nearest- and next-nearest-neighbour hopping terms, respectively, and $U$ is the on-site Coulomb interaction. In order to identify the role of the fcc lattice in magnetic frustration, we will employ the interlayer NN hopping terms (shown as $t_z$ in Fig. 1) as a control. For $t_z$=0, the two fcc sublattices are completely decoupled, while they are coupled for $t_z$=$t$ (cubic case). In the following, we will set $t$=1 as the energy scale. Type-III order on the fcc lattice is shown in Fig. 1. Alternating layers along the $z$ direction, shown as planes in solid and dashed lines with spins in red and blue, constitute two identical fcc sublattices. The type-III order is characterized by $(\pi,\pi)$ magnetic order in each layer, with layers within the same fcc sublattice stacked antiferromagnetically in the $z$ direction. The NNN hopping term $t'$ provides the weak AF interlayer coupling required for stabilizing type-III order.
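In the strong-coupling (large-$U$) limit, the leading-order effective spin model implied by Eq.~(1) takes the standard superexchange form (a second-order perturbation-theory sketch, consistent with the $J=4t^2/U$ result quoted in the Introduction):

```latex
\begin{equation}
H_{\rm spin} = J \sum_{\langle i,j \rangle} {\bf S}_i \cdot {\bf S}_j
             + J' \sum_{\langle \langle i,j \rangle \rangle} {\bf S}_i \cdot {\bf S}_j ,
\qquad
J = \frac{4t^2}{U}, \qquad J' = \frac{4t'^2}{U},
\end{equation}
```

with extended-range and higher-spin couplings generated at finite $U$ beyond this leading order.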
Within the equivalent localized spin model (large $U$ limit), the NNN spin coupling $J'$ connects spins only within the same fcc sublattice (the weak AF interlayer coupling), whereas the NN spin coupling $J$ connects spins in different fcc sublattices as well. These latter interactions are fully frustrated, and the relative magnetic orientation between the two fcc sublattices can therefore be arbitrary in the classical ground state. Corresponding to the type-III order, we consider the interaction term of Eq. (1) in the Hartree-Fock (HF) approximation, with the local magnetization taken along the $z$ direction and staggered field $\mp \sigma \Delta$ on the two magnetic sublattices A and B. In a composite four-layer $\otimes$ two-sublattice basis corresponding to the magnetic order, we obtain the $8\times 8$ Hamiltonian matrix: \begin{equation} H_{\rm HF} ^\sigma (\bf k) = \left [ \begin{array}{cccccccc} \varepsilon_{kxy}' & \varepsilon_{kxy} & \varepsilon_{kzxy} & \varepsilon_{kz\overline{xy}} & 0 & \varepsilon_{kz}' & \varepsilon_{kz\overline{xy}}^* & \varepsilon_{kzxy}^* \\ & \varepsilon_{kxy}' & \varepsilon_{kz\overline{xy}} & \varepsilon_{kzxy} & \varepsilon_{kz}' & 0 & \varepsilon_{kzxy}^* & \varepsilon_{kz\overline{xy}}^* \\ & & \varepsilon_{kxy}' & \varepsilon_{kxy} & \varepsilon_{kz\overline{xy}} & \varepsilon_{kzxy} & 0 & \varepsilon_{kz}' \\ & & & \varepsilon_{kxy}' & \varepsilon_{kzxy} & \varepsilon_{kz\overline{xy}} & \varepsilon_{kz}' & 0 \\ & & & & \varepsilon_{kxy}' & \varepsilon_{kxy} & \varepsilon_{kzxy} & \varepsilon_{kz\overline{xy}} \\ & & & & & \varepsilon_{kxy}' & \varepsilon_{kz\overline{xy}} & \varepsilon_{kzxy} \\ & & & & & & \varepsilon_{kxy}' & \varepsilon_{kxy} \\ & & & & & & & \varepsilon_{kxy}' \\ \end{array} \right ] \mbox{$\mp \sigma \Delta$} \end{equation} where the band terms corresponding to NN and NNN hoppings in the planar ($xy$) and perpendicular ($z$) directions are given by: \begin{eqnarray} \varepsilon_{kxy}' & = & -4t' \cos k_x \cos k_y 
\\ \nonumber \varepsilon_{kxy} & = & -2t (\cos k_x + \cos k_y) \\ \nonumber \varepsilon_{kz}' & = & -2t' \cos k_z \\ \nonumber \varepsilon_{kzxy} & = & -2t_z e^{i k_z /2} \cos \left (\frac{k_x + k_y}{2} \right ) \\ \nonumber \varepsilon_{kz\overline{xy}} & = & -2t_z e^{i k_z /2} \cos \left (\frac{k_x - k_y}{2} \right ) \end{eqnarray} Here $\Delta = mU/2$ is the staggered field in terms of the sublattice magnetization: \begin{equation} m (\Delta) = (n_\uparrow ^A - n_\downarrow ^A)(\Delta) = (n_\downarrow ^B - n_\uparrow ^B)(\Delta) = (n_\uparrow ^A - n_\uparrow ^B)(\Delta) \end{equation} which is determined self-consistently from the electronic densities calculated from $H_{\rm HF} ^\sigma (\bf k)$ for the two spins $\sigma$=$\uparrow,\downarrow$ on the two magnetic sublattices A and B. In practice, it is easier to choose $\Delta$ and determine $U$ from the calculated sublattice magnetization $m(\Delta)$. In the large $U$ limit, $2\Delta \approx U$ as $m \rightarrow 1$. We will consider only the half-filled case ($n=1$) with the Fermi energy in the AF band gap. Note that our coordinate axes ($x$-$y$) are rotated by $\pi/4$ with respect to the cubic planar axes, with lattice parameter $a/\sqrt{2}$ for the corresponding square lattice. Therefore $k_x,k_y$ and $k_z$ are in units of $\sqrt{2}/a$ and $1/a$, respectively, in terms of the cubic lattice parameter $a$. \begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig2.ps,angle=-90,width=80mm} \caption{The HF-level electronic DOS in the AF state, showing the strongly reduced SDW band gap due to frustration compared to the unfrustrated band gap ($2\Delta$), and strongly asymmetric behaviour with respect to the sign of the NNN hopping term $t'$. Here $U$=9.2 (9.7) for $t'=+(-)0.3$.} \end{figure} \section{AF state electronic density of states} The AF state electronic density of states (DOS) shows strongly asymmetric behaviour with respect to the sign of $t'$ (Fig. 2). 
For positive $t'$, the SDW band gap is more robust, and the AF insulator state survives even for relatively lower $U$ values. The DOS structure is similar to that for the planar $t$-$t'$ model, except that the fcc hopping term $t_z$ further splits the two SDW bands. The DOS drops off sharply at both band edges. Electronic self-energy corrections, as incorporated at the self-consistent Born approximation (SCBA) level, will further reduce the band gap, resulting in band overlap with decreasing $U/t$. On the other hand, for negative $t'$, the SDW band gap is significantly reduced due to band broadening, and the AF insulator state requires higher $U$ values. The DOS structure is different from the planar case, indicating a more three-dimensional band-structure effect due to the $t_z$ hopping term. The DOS does not fall abruptly as in the previous case but has a broad tail for the lower band, indicating the possibility of a metallic AF state surviving even after weak band overlap, and similarly for small hole doping. \section{Spin wave excitations} We consider the spin wave propagator: \begin{equation} \chi^{-+}({\bf q},\omega) = \int dt \sum_{i} e^{i\omega(t-t')} e^{-i{\bf q}\cdot({\bf r}_i - {\bf r}_j)} \langle \Psi_0 | {\rm T} [ S_i ^{-}(t) S_j ^{+}(t') ] | \Psi_0 \rangle \end{equation} obtained from the expectation value of the time-ordered product of transverse spin operators $S_i ^-$ and $S_j ^+$ at lattice sites $i$ and $j$ in the AF ground state $|\Psi_0 \rangle$. In the random phase approximation (RPA), the spin wave propagator can be written in the composite basis as: \begin{equation} [\chi^{-+}({\bf q},\omega)] = \frac{[\chi^0({\bf q},\omega)]} {{\bf 1} - U [\chi^0({\bf q},\omega)]} \end{equation} where $[\chi^0]$ is the bare particle-hole propagator matrix obtained by integrating out fermions in the broken-symmetry state. 
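The pole structure implied by the RPA form above can be illustrated with a deliberately simplified scalar toy model: a single particle-hole excitation of energy $E$ with residue $a$, for which the collective-mode energy is the root of $1-U\chi^0(\omega)=0$, pulled below the particle-hole energy by the interaction. All numerical values below are illustrative and are not outputs of the eight-band calculation.

```python
# Toy illustration of the RPA pole condition 1 - U*chi0(omega) = 0.
# chi0 models a single particle-hole excitation at energy E with residue a;
# E, a, U are illustrative numbers, not taken from the eight-band calculation.

def chi0(omega, E=2.0, a=0.5):
    # bare particle-hole propagator with poles at omega = +/- E
    return a / (E - omega) + a / (E + omega)

def rpa_pole(U=0.9, E=2.0, a=0.5, tol=1e-10):
    """Bisection for the root of f(omega) = 1 - U*chi0(omega) in (0, E)."""
    f = lambda w: 1.0 - U * chi0(w, E, a)
    lo, hi = 0.0, E - 1e-9  # f(lo) > 0, while f(omega) -> -inf as omega -> E
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    omega = rpa_pole()
    # analytic root of the toy model: omega^2 = E^2 - 2*U*a*E = 2.2
    print(round(omega, 6))
```

In the actual calculation the same root-finding logic is applied branch by branch to the eigenvalues $\lambda_{\bf q}^{l}(\omega)$ of the $[\chi^0]$ matrix.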
In terms of the energy eigenfunctions $\phi_{\bf k}$ and eigenvalues $E_{\bf k}$ of the Hamiltonian matrix $H_{\rm HF}^{\sigma} ({\bf k})$, \begin{eqnarray} [\chi^{0}({\bf q},\omega)]_{ss'} &=& i \int \frac{d\omega^{\prime}}{2\pi} \sum_{\bf k'} \left[G_{0}^{\uparrow}({\bf k'},\omega^{\prime}) \right]_{ss'} \left[G_{0}^{\downarrow}({\bf k'-q},\omega^{\prime} - \omega) \right]_{s's} \nonumber \\ &=& \sum_{{\bf k'},m,n} \left[ \frac{\phi^{\uparrow s}_{{\bf k'}\,m} \phi^{\uparrow s' *}_{{\bf k'}\,m} \phi^{\downarrow s'}_{{\bf k'-q}\,n} \phi^{\downarrow s *}_{{\bf k'-q}\,n}} {E^{+}_{{\bf k'-q}\downarrow n} - E^{-}_{{\bf k'}\uparrow m} + \omega -i\eta} + \frac{\phi^{\uparrow s}_{{\bf k'}\,m} \phi^{\uparrow s' *}_{{\bf k'}\,m} \phi^{\downarrow s'}_{{\bf k'-q}\,n} \phi^{\downarrow s *}_{{\bf k'-q}\,n}}{E^{+}_{{\bf k'}\uparrow m} - E^{-}_{{\bf k'-q}\downarrow n} - \omega - i\eta} \right] \end{eqnarray} Here $s,s'$ refer to indices in the composite four-layer, two-sublattice basis, $m$,$n$ indicate the eigenvalue branches, and $+$ ($-$) refers to particle (hole) energies above (below) the Fermi energy. By diagonalizing the $[\chi^0({\bf q},\omega)]$ matrix, spin wave energies are obtained from solutions of $1-U\lambda_{\bf q}^{l}(\omega)=0$, representing poles of Eq. (6). Corresponding to the four-layer basis, there are four spin-wave branches. \begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig3a.ps,angle=0,width=70mm} \psfig{figure=fig3b.ps,angle=0,width=70mm} \ \\ \psfig{figure=fig3c.ps,angle=0,width=70mm} \psfig{figure=fig3d.ps,angle=0,width=70mm} \ \\ \psfig{figure=fig3e.ps,angle=0,width=70mm} \psfig{figure=fig3f.ps,angle=0,width=70mm} \caption{Calculated spin wave dispersion along planar ($q_x=q_y$) and perpendicular ($q_z$) directions. 
The layered AF subsystems on the two fcc sublattices are independent for $t_z=0$: (a) and (b), strongly coupled in the cubic case $t_z=1$: (e) and (f), and moderately coupled in the intermediate case $t_z = 0.7$: (c) and (d).} \end{figure} We will consider the planar ($q_z=0$, $q_x$ and $q_y$ finite) and perpendicular ($q_x$=$q_y$=0, $q_z$ finite) spin wave modes in this investigation, which will provide excitation energies corresponding to spin twisting within layers as well as how neighboring layers are magnetically coupled. We have mainly considered the case of positive $t'$ as it appears relevant for $\rm MnS_2$. It is instructive to start with the limiting case $t_z$=0 where the two fcc sublattices get decoupled into simple layered antiferromagnetic subsystems with AF order in both planar and perpendicular directions. In this limit, the four spin wave branches (indicated by $l$=1-4) collapse into two [Figs. 3(a) and (b)], and the two Goldstone modes correspond to independent spin rotations in the two fcc sublattices. Further setting $t'$=0, all four branches become degenerate and match with the dispersion for the planar antiferromagnet. The calculated dispersion is of the form: \begin{equation} \omega_{\bf q} = (2+r) J \sqrt{1-\gamma_{\bf q} ^2} \end{equation} for a layered three-dimensional AF in the large $U$ limit, where \begin{equation} \gamma_{\bf q} = (\cos q_x + \cos q_y + r\cos q_z)/(2+r) \end{equation} and $r$=$J'/J$=$(t'/t)^2$ is the ratio of the interlayer to planar spin couplings, with $J$=$4t^2/U$ and $J'$=$4t'^2/U$. Here the minor frustration effect due to small planar NNN hopping $t'$ has been neglected. From the above expression, the maximum ($q_x$=$q_y$=$\pi/2$) and zone boundary ($q_x$=$q_y$=$\pi$) energies for the planar mode are approximately $2J$ and $2J\sqrt{2}(t'/t)$, while at $q_z$=$\pi/2$ the perpendicular mode energy is $2J(t'/t)$=$2\sqrt{J J'}$. 
For $\Delta$=10 ($U$$\approx$20) and $t'/t$=0.4, we estimate these three energies as 0.4, 0.2, and 0.16, respectively, which validate the calculated results shown in Figs. 3(a) and (b). When $t_z$ is turned on, the layered AF subsystems on the two fcc sublattices get coupled and a pair of low-energy, weakly dispersive branches emerges. The dispersion of the high-energy branches in Fig. 3(e) is similar to that for uncoupled fcc sublattices ($t_z$=0). This reflects the inherent fcc lattice frustration, as discussed earlier. The low-energy branches, on the other hand, correspond to opposite spin twistings on the two fcc sublattices, resulting in healing of frustrated NN AF bonds and a consequent lowering of energy. Only one Goldstone mode survives, and the other mode acquires a small energy gap at $q$=0, as seen in Fig. 3(e). This small energy gap reflects the effective magnetic coupling between the two fcc sublattices. Strong softening of the low-energy branches as $t_z$ approaches 1 (cubic case) highlights the fcc lattice frustration. The marginal stability of type-III order, as seen from the nearly vanishing energies at $q_x$=$q_y$=$\pi/2$ in Fig. 3(e), even in the strong coupling limit, possibly accounts for the rarity of this magnetic order in nature, and highlights the importance of the magnetoelastic effect and weak magnetic anisotropy in the stabilization of type-III order in $\rm MnS_2$. 
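The strong-coupling estimates quoted above can be checked numerically from Eqs. (8) and (9). The short sketch below uses the parameter values given in the text ($t$=1, $\Delta$=10 so $U\approx 2\Delta$=20, and $t'/t$=0.4), with the minor planar $t'$ frustration neglected as in the text.

```python
import math

# Strong-coupling spin wave dispersion of Eqs. (8)-(9), with t = 1,
# Delta = 10 (so U ~ 2*Delta = 20) and t'/t = 0.4, as quoted in the text.
t, tp, U = 1.0, 0.4, 20.0
J = 4.0 * t**2 / U             # planar NN spin coupling, J = 0.2
r = (tp / t)**2                # r = J'/J = (t'/t)^2

def omega(qx, qy, qz):
    # Eqs. (8)-(9): omega_q = (2+r) J sqrt(1 - gamma_q^2)
    gamma = (math.cos(qx) + math.cos(qy) + r * math.cos(qz)) / (2.0 + r)
    return (2.0 + r) * J * math.sqrt(1.0 - gamma**2)

pi = math.pi
w_max  = omega(pi/2, pi/2, pi/2)   # planar-mode maximum, approximately 2J
w_zb   = omega(pi, pi, 0.0)        # planar zone boundary, = 2*sqrt(2)*J*(t'/t)
w_perp = omega(0.0, 0.0, pi/2)     # perpendicular mode, approximately 2*sqrt(J*J')

print(round(w_max, 4), round(w_zb, 4), round(w_perp, 4))
```

The exact values 0.432, 0.226, and 0.163 agree with the quoted estimates 0.4, 0.2, and 0.16 up to corrections of order $r$.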
\begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig4a.ps,angle=0,width=80mm} \psfig{figure=fig4b.ps,angle=0,width=80mm} \ \\ \psfig{figure=fig4c.ps,angle=0,width=80mm} \psfig{figure=fig4d.ps,angle=0,width=80mm} \caption{Finite-$U$-induced frustration effect on: (a) planar spin wave modes, (c) perpendicular mode (lowest-energy branch), and the characteristic energies $\omega_{q=0} ^l$ providing a quantitative measure of the effective interlayer magnetic couplings [(b) and (d)].} \end{figure} With decreasing $U$, the low-energy branches undergo softening, eventually turning into negative-energy modes, signalling the instability of type-III order. Fig. 4(a) shows the planar dispersion at the $U$ value where the characteristic energy $\omega_{q=0} ^{l=3}$ just vanishes. Fig. 4(c) shows the softening of the lowest-energy perpendicular mode with decreasing $U$, the onset of negative-energy modes coinciding with the vanishing of the characteristic energy $\omega_{q=0} ^{l=2}$. These two characteristic energies provide quantitative measures of the effective interlayer magnetic couplings for same (intra) and different (inter) fcc sublattices, respectively. The reduction and eventual vanishing of these two energies with decreasing $U$ [Figs. 4(b) and (d)] highlights the finite-$U$-induced frustration effect in the fcc lattice, as explained below. Effective hopping connections between parallel spins through a pair of NN hoppings (indicated in Fig. 1 as $t_z$) result in competing (i.e. antiferromagnetic) interactions at finite $U$ between 2nd neighbor (same layer) and 3rd neighbor (neighboring layers) spins. Unlike the weakly frustrated square-lattice AF involving weak NNN hopping terms $t'$, the finite-$U$-induced frustration effect in the fcc lattice is quite significant, as parallel spins are connected by NN hopping. The effective magnetic coupling between the two fcc sublattices is of particular interest. 
Within the localized spin model, NN interactions between spins on the two fcc sublattices are fully frustrated (Fig. 1), leading to degeneracy in the relative spin orientations. This degeneracy is, however, lifted at finite $U$ and the two fcc sublattices become effectively coupled. Of the two Goldstone modes ($q$=0) for $t_z$=0 corresponding to decoupled fcc sublattices, one mode acquires a small finite energy for finite $t_z$, and this energy $\omega_{q=0}^{l=2}$ provides a quantitative measure of the effective magnetic coupling between neighboring layers of different fcc sublattices. Energetically favorable magnetic coupling between the two fcc sublattices also confirms the stability of collinear type-III order. Figs. 4(b) and (d) show that this coupling vanishes at large $U$ corresponding to fully frustrated fcc sublattices. \begin{figure} \psfig{figure=fig5a.ps,angle=0,width=80mm} \ \\ \psfig{figure=fig5b.eps,angle=-90,width=70mm} \caption{(a) Calculated spin wave dispersion (planar mode) with small uniaxial anisotropy included. (b) Measured spin wave dispersion in $\rm MnS_2$ from inelastic neutron scattering experiments [12].} \end{figure} Figure 5(a) shows the calculated spin wave dispersion (all branches) for the planar mode in the momentum range $-\pi/2$$\le$$q$$\le$$\pi/2$. Here $U/t$$\approx$40, corresponding to the strong coupling limit, $t_z$=0.96, and a small uniaxial anisotropy term $ -\delta U \sum_i (S_i ^z)^2 $ was included $(\delta U/U$=$10^{-4})$ to stabilize type-III order and also phenomenologically account for the measured gap at the $\Gamma$ point. The calculated dispersion is in qualitative agreement with INS measurements\cite{tapan_mns2} of spin wave dispersion in $\rm MnS_2$ [Fig. 5(b)]. With decreasing $U$, enhancement of interlayer magnetic frustration results in magnetic instability when the spin wave energy turns negative. For fixed $t'$, type-III order is thus unstable below a critical interaction strength $U_c$. 
Similarly, for fixed $U$, decreasing $t'$ leads to the instability when the weak AF interlayer coupling due to $t'$ is unable to compete against the frustrating interlayer spin couplings generated by $t_z$. With increasing $t'$, the instability occurs at lower $U_c$ values where the finite-$U$-induced frustration is more effective, resulting in a characteristic negative slope of $U_c$ vs. $t'$. At larger $t'$, a different kind of magnetic instability is obtained at $t'$$\approx$0.7 in the large $U$ limit, involving competition between planar interactions. As expected, the instability shows up in the planar spin wave mode, as seen from the emergence of negative-energy modes at small $q$ [Fig. 6(a)]. This is the instability toward type-II order, and is related to the known instability in the planar antiferromagnet from $(\pi,\pi)$ to $(\pi,0)$ order as $t'/t \rightarrow 1/\sqrt{2}$. The perpendicular mode remains stable near this $t'$ value. For negative $t'$ also, the planar mode shows an instability for $|t'|$ near $1/\sqrt{2}$ [Fig. 6(b)]. Some of our results are in agreement with the $n$=1 phase diagram obtained in Ref. [11] (where the sign of $t'$ is reversed compared to our model). These include: i) the negative slope of $U_c$ vs. $t'$, ii) the instability to type-II order at $|t'|$$\approx$0.7 in the strong coupling limit, and iii) the paramagnetic (PM) metal state for negative $t'$ (positive $t'$ in Ref. [11]) at lower $U$ values due to the strong frustration-induced band broadening and overlap of the two SDW bands (Fig. 2). However, our result is significantly different at lower $t'$ values (below 0.3), where we find a sharp increase in $U_c$ below which finite-$U$-induced frustration destabilizes type-III order, as seen in Fig. 7. Also, we do not find the instability towards type-I order or the $(0,0,Q)$ spiral structure at lower $U$ values, as discussed below. 
\begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig6a.ps,angle=0,width=80mm} \psfig{figure=fig6b.ps,angle=0,width=80mm} \caption{(a) Negative-energy modes at small $q$ for $t' \approx 1/\sqrt{2}$ signal long-wavelength instability towards type-II order which has $(\pi,0)$ instead of $(\pi,\pi)$ planar magnetic order. (b) For negative $t'$, the instability towards type-II order at $|t'| \approx 1/\sqrt{2}$ extends over a broad momentum range.} \end{figure} \begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig7.ps,angle=0,width=80mm} \caption{Critical interaction $U_c$ vs. $t'$ for stability of type-III order, based on spin wave stability analysis using the lowest-energy branch of the perpendicular mode [Fig. 4(c)]. Due to weaker AF interlayer coupling at lower $t'$ values, a smaller magnitude of finite-$U$-induced frustration is sufficient for destabilization, which accounts for the sharp increase in $U_c$.} \end{figure} Competition between effective interlayer couplings for different and same fcc sublattices would result in a spiral structure along the $z$ direction. However, we do not find this $(0,0,Q)$ spiral structure instability, which would be signalled by negative-energy modes at small but finite $q_z$. Instead, with decreasing $U$, we find that $\omega_{q=0}^{l=2}$ decreases to zero and turns negative, resulting in negative-energy modes for all $q_z$. This implies that the coupling between different fcc sublattices is turning negative, signalling an instability towards non-collinear order involving relative spin twisting between the two fcc sublattices. We have also examined spin waves in the type-I magnetic structure in the region marked ``unstable'' for type-III order (Fig. 7). We find type-I order to be unstable as well (Fig. 8), indicating that the instability of type-III order, as inferred from the perpendicular spin wave mode, is not towards type-I order or the $(0,0,Q)$ spiral structure but rather towards non-collinear order. 
\begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig8a.eps,angle=0,width=50mm} \psfig{figure=fig8b.eps,angle=0,width=80mm} \caption{(a) Type-I order on the fcc lattice. (b) Perpendicular mode spin wave energy (lowest-energy branch) in the type-I ordered AF state, showing that type-I order is stable when the frustrating hopping term $t_z '$ is reduced but becomes unstable as $t_z ' \rightarrow 1$ in the cubic limit.} \end{figure} Finally, we consider the factors qualitatively affecting the particle-hole gap for a multi-band system with crystal-field (CF) splitting and a Hund's-rule coupling term included. Fig. 9 schematically shows CF-split lower and upper SDW sub-bands, with the corresponding majority spins ($\uparrow$ and $\downarrow$) on the A sublattice indicated. The effective particle-hole gap is between the upper and lower SDW sub-bands corresponding to the lower and upper CF levels, respectively. With increasing $\Delta E_{\rm CF}$ due to pressure, overlap of these two SDW sub-bands and filling of the upper sub-band (spin $\downarrow$) at the expense of the lower sub-band (spin $\uparrow$) results in a pressure-induced metal-insulator transition accompanied by a high-spin to low-spin transition. The strongly reduced $\Delta E_{\rm SDW} - W$ due to increased bandwidth $W$ in the frustrated fcc lattice, together with an exceptionally large increase in $\Delta E_{\rm CF}$ with pressure (inferred from the observed lattice-parameter reduction), are the likely favourable factors for the pressure-induced metal-insulator transition in compounds such as $\rm MnTe_2$. \begin{figure} \vspace*{0mm} \hspace*{0mm} \psfig{figure=fig9.eps,angle=0,width=60mm} \caption{Schematic diagram showing crystal-field split lower and upper sub-bands in the SDW state of a half-filled multi-orbital model. 
The nominal particle-hole gap $\Delta E_{\rm ph} = \Delta E_{\rm SDW} - \Delta E_{\rm CF} - W$ decreases with increasing crystal field splitting, illustrating the mechanism of pressure-induced metal-insulator transition with the onset of band overlap.} \end{figure} \section{Summary and Discussion} Spin waves in the type-III ordered AF state of the $t$-$t'$ Hubbard model on the fcc lattice were investigated. A composite four-layer, two-sublattice basis was employed corresponding to the $\alpha \alpha' \beta \beta' \alpha ...$ sequence of layers, and NN hopping terms between the two fcc sublattices were used as a control to highlight magnetic frustration in the fcc lattice. As clearly illustrated by the reduction and eventual vanishing of the effective interlayer magnetic couplings with decreasing $U$, strong finite-$U$-induced competing interactions result in significant spin wave softening, besides the usual geometric frustration effect. The calculated spin wave dispersion, with a weak magnetic anisotropy term included for stabilization, was found to be in qualitative agreement with the measured dispersion in $\rm MnS_2$ obtained from inelastic neutron scattering experiments. The delicate energy balance between competing magnetic interactions results in extreme sensitivity to Hamiltonian parameters, leading to sharp instabilities as inferred from spin wave energies turning negative. While instabilities towards type-I order and the $(0,0,Q)$ spiral structure were not observed, an instability towards non-collinear order was inferred from the perpendicular mode, indicating relative spin twisting between different fcc sublattices due to the vanishing of the corresponding interlayer magnetic coupling. The planar mode also showed an instability near $q_x$=$q_y$=$\pi/2$ which sets in at slightly higher $U$ values. The instability to type-II order near $t'$$\approx$1/$\sqrt{2}$ corresponds to the known instability in the frustrated planar AF from $(\pi,\pi)$ to $(\pi,0)$ magnetic order. 
The strong frustration effects manifested in the fcc lattice AF provide an understanding of the unusual magnetic properties of the fcc-structure compounds ($\rm MnS_2$, $\rm MnSe_2$, and $\rm MnTe_2$), such as the critical role of the magnetoelastic effect and weak magnetic anisotropy in stabilizing type-III order and the weakly dispersive spin wave branch observed in $\rm MnS_2$. A likely scenario for the first order magnetic transition observed near $T_{\rm N}$ is that loss of inter-layer spin correlations near $T_{\rm N}$ due to thermal spin disordering suppresses the magnetoelastic effect and dipolar energy gain, which further enhances thermal spin fluctuations, resulting in a runaway effect which causes the first order magnetic transition. Furthermore, the reduced SDW band gap due to strong frustration-induced band broadening and self-energy corrections renders the frustrated fcc lattice AF particularly susceptible to a vanishing band gap with decreasing $U/t$. The above band picture of metal-insulator transition due to band overlap captures the essential feature in the realistic multi-band scenario involving interplay between Hund's-rule coupling and crystal-field splitting. Increasing crystal-field splitting with applied pressure reduces the energy gap between the highest occupied and the lowest unoccupied crystal-field sub-bands, with the pressure-induced metal-insulator transition corresponding to the onset of band overlap. This may be relevant to the pressure-induced high-spin to low-spin magnetic transition observed in $\rm MnTe_2$, accompanied by changes in transport behavior suggestive of a metal-insulator transition. \section*{Acknowledgement} Helpful discussions with Mike Zhitomirsky are gratefully acknowledged. \section*{References}
\section{Introduction} {\em Bayesian networks\/} are basic graphical models, used widely both in statistics \cite{bib:Lau96} and artificial intelligence \cite{bib:Pea88}. These statistical models of conditional independence structure are described by acyclic directed graphs whose nodes correspond to (random) variables in consideration. An important topic is {\em learning Bayesian network structures\/} \cite{bib:Nea04}, that is, determining the statistical model on the basis of given data. Although there are learning methods based on statistical conditional independence tests, contemporary methods are mainly based on maximization of a suitable {\em quality criterion} or {\em score function} $\mathcal{Q}(G,D)$ of the (acyclic directed) graph $G$ and the (given = fixed) data $D$, evaluating how well the graph $G$ explains the occurrence of the observed data $D$. This leads to a nonlinear combinatorial optimization problem that is $\NP$-hard \cite{bib:Chi96,bib:CHM04}. Below we will consider learning restricted Bayesian network structures. Some of these problems remain $\NP$-hard while others are polynomial-time solvable. It may happen that two different acyclic directed graphs describe the same statistical model, that is, they are {\em Markov equivalent}. A classic result \cite{bib:Fry90,bib:VerPea91} says that two acyclic directed graphs are Markov equivalent if and only if they have the same underlying undirected graph and the same set of immoralities (= special induced subgraphs $a\to c\leftarrow b$ over three nodes $\{a,b,c\}$ with no arc between $a$ and $b$ in either direction). In order to remove this ambiguity of Markov equivalent models/graphs, one is interested in having a unique representative for each Bayesian network structure (= statistical model). 
A classic unique graphical representative is the {\em essential graph\/} \cite{bib:AMP97} of the corresponding Markov equivalence class of acyclic directed graphs, which is a special graph allowing both directed and undirected edges (see Section \ref{Ssec.Learning} for more details). Any reasonable score function should be {\em score-equivalent} \cite{bib:Bou95}, that is, $\mathcal{Q}(G,D)=\mathcal{Q}(H,D)$ for any two Markov equivalent graphs $G$ and $H$. Another standard technical requirement is that the criterion has to be (additively) {\em decomposable} into contributions from the parent sets $\pa_G(i)$ of each node $i$ \cite{bib:Chi02} (see Section \ref{Ssec.Learning} for more details). In this paper, we deal with learning (restricted) decomposable models \cite{bib:Lau96}, interpreted as Bayesian network structures. Decomposable models are exactly those models whose essential graph is an \emph{undirected} (and thus also necessarily {\em chordal}) graph. That is, decomposable models correspond to graphical models without immoralities. As input to our learning problem we assume that we are given an undirected graph $K$ and an evaluation oracle for the score function $\mathcal{Q}(\cdot,D)$. Note that we do not assume the actual data $D$ to be part of the input itself. Of course, the evaluation oracle uses the given data $D$ in order to evaluate score function values. However, in our treatment, we remove the complexity of evaluating score function values from the overall complexity. In particular, this means that the (large or small) number of data vectors in $D$ will be irrelevant for our complexity results. We show that learning spanning trees of $K$ and learning forests in $K$ are both polynomial-time solvable. For learning spanning trees of $K$, this observation has already been made in \cite{bib:ChowLiu68} for {\em specific} score functions. 
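The polynomial-time case can be made concrete: for a decomposable, score-equivalent criterion, the score of a spanning tree separates into a constant plus a contribution per edge, so learning an optimal spanning tree of $K$ reduces to a maximum-weight spanning tree computation, as in the Chow--Liu procedure. Below is a minimal sketch of this reduction using Kruskal's algorithm; the numeric weights are illustrative stand-ins for the oracle-derived edge scores.

```python
# Maximum-weight spanning tree via Kruskal's algorithm with union-find.
# The edge weights stand in for the (oracle-supplied) edge contributions
# of a decomposable, score-equivalent criterion; values are illustrative.

def max_spanning_tree(nodes, weighted_edges):
    """weighted_edges: list of (weight, u, v); returns the chosen edges."""
    parent = {v: v for v in nodes}

    def find(v):                        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                    # adding (u, v) creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

if __name__ == "__main__":
    edges = [(5, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c'),
             (2, 'c', 'd'), (1, 'b', 'd')]
    tree = max_spanning_tree('abcd', edges)
    print(sum(w for w, _, _ in tree))   # total weight of the optimal tree
```

Learning forests works analogously, except that edges with negative score contribution are simply discarded.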
Moreover, we show that if we impose degree bounds $\deg(v)\leq k$ on all nodes $v\in N$ for some constant $k\geq 2$, then both problems become $\NP$-hard. We also show that learning chordal subgraphs of $K$ is $\NP$-hard. This result, however, has already been shown even for specific score functions and also for the case of fixed bounded size of possible cliques \cite{bib:Sre01}. We include our short proof to emphasize the simplicity and usefulness of our approach to easily recover also hardness results. We will rewrite the nonlinear combinatorial optimization problem behind the learning problem into a linear integer optimization problem (in higher dimension) by using an algebraic approach to the description of conditional independence structures \cite{bib:Stud05} that represents them by certain vectors with integer components, called {\em imsets} (short for ``Integer Multi-SETs''). In the context of learning Bayesian networks this led to the proposal to represent each Bayesian network structure uniquely by a so-called {\em standard imset}. The advantage of this algebraic approach is that every reasonable score function (score equivalent and decomposable) becomes an affine function of the standard imset (see Chapter 8 in \cite{bib:Stud05}). Moreover, it has recently been shown in \cite{bib:StudVomHem10} that the standard imsets over a fixed set of variables are exactly the vertices of their convex hull, the {\em standard imset polytope}. These results allow one to apply the methods of polyhedral geometry in the area of learning Bayesian networks, because they transform this task to a linear programming problem. Instead of considering standard imsets, we introduce a different unique representative that is obtained from the standard imset by an invertible affine linear map that preserves lattice points in both directions. 
We call these new representatives \emph{characteristic imsets}, as they are $0$-$1$-vectors and as they also contain, for each acyclic directed graph, the characteristic vector of the underlying undirected graph. Although, mathematically, this map is simply a change in coordinates, the characteristic imset is much closer to the graphical description because it allows one to identify immediately both the underlying undirected graph and the immoralities. Our procedure for recovering the essential graph from the characteristic imset is much simpler than the reconstruction from the standard imset as presented in \cite{bib:StudVom09}. Moreover, due to the affine transformation, every reasonable score function is also an affine function of the characteristic imset. Thus, learning Bayesian network structures can be reduced to solving a linear optimization problem over a certain $0$-$1$-polytope. Unfortunately, a complete facet description for this polytope (for general $|N|$) is still unknown. A conjectured list of all facets for the standard imset polytope (and consequently also for the characteristic imset polytope) is presented in \cite{bib:StuVom11}. A complete facet description is also unknown for the convex hull of all characteristic imsets of undirected chordal graphs, although the characteristic imsets themselves are well-understood in this case (see Section \ref{sec.characteristic}). To summarize, we offer a new method for analyzing the learning procedure through an algebraic way of representing statistical models. We believe that our approach via characteristic imsets brings a tremendous mathematical simplification that allows us to easily recover known results and to establish new complexity results. 
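For a concrete picture of these representatives, the sketch below computes characteristic imsets directly from a \DAG. It uses the standard definition from the literature, stated here as an assumption since the formal definition is only given in Section \ref{sec.characteristic}: $c_G(S)=1$ for $S\subseteq N$ with $|S|\geq 2$ if and only if there exists $i\in S$ with $S\setminus\{i\}\subseteq\pa_G(i)$.

```python
from itertools import combinations

def characteristic_imset(parents):
    """0-1 vector c_G indexed by subsets S of N with |S| >= 2.

    Uses the definition c_G(S) = 1 iff some i in S has S \\ {i} <= pa_G(i)
    (taken from the literature on characteristic imsets).
    """
    nodes = sorted(parents)
    c = {}
    for k in range(2, len(nodes) + 1):
        for S in combinations(nodes, k):
            c[S] = int(any(set(S) - {i} <= parents[i] for i in S))
    return c

# Three DAGs over N = {1, 2, 3}: the chain and the fork are Markov
# equivalent, while the collider 1 -> 3 <- 2 is not (it has an immorality).
chain    = {1: set(), 2: {1}, 3: {2}}     # 1 -> 2 -> 3
fork     = {1: {2}, 2: set(), 3: {2}}     # 1 <- 2 -> 3
collider = {1: set(), 2: set(), 3: {1, 2}}

print(characteristic_imset(chain) == characteristic_imset(fork))      # True
print(characteristic_imset(chain) == characteristic_imset(collider))  # False
```

Note how the two-element sets $S$ with $c_G(S)=1$ are exactly the edges of the underlying undirected graph, while the set $\{1,2,3\}$ gets value $1$ only for the collider, reflecting its immorality.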
We also think that a better understanding of the polyhedral properties of the characteristic imset polytope (complete facet description or all edge directions) will lead to future applications of efficient (integer) linear programming methods and software in this area of learning Bayesian network structures. \section{Basic concepts}\label{sec.basic} We tacitly assume that the reader is familiar with basic concepts from polyhedral geometry. We only recall briefly the definitions of concepts mentioned above, but skip their statistical motivation. Throughout the paper $N$ is a finite non-empty set of {\em variables}; to avoid the trivial case we assume $|N|\geq 2$. In statistical context, the elements of $N$ correspond to random variables in consideration; in graphical context, they correspond to nodes. \subsection{Graphical concepts} Graphs considered here have a finite non-empty set of nodes $N$ and two types of edges: directed edges, called {\em arcs} (or {\em arrows} in machine learning literature), denoted by $i\rightarrow j$ (or $j\leftarrow i$), and {\em undirected edges}. No loops or multiple edges between two nodes are allowed. A set of nodes $C\subseteq N$ is a {\em clique} (or a {\em complete set}) in $G$ if every pair of distinct nodes in $C$ is connected by an undirected edge. An {\em immorality} in a graph $G$ is an induced subgraph (of $G$) for three nodes $\{ a,b,c\}$ in which $a\rightarrow c\leftarrow b$ and $a$ and $b$ are not adjacent. An undirected graph is called {\em chordal}, if every (undirected) cycle of length at least $4$ has a chord, that is, an edge connecting two non-consecutive nodes in the cycle. A {\em forest} is an undirected graph without undirected cycles. A connected forest over $N$ is called a {\em spanning tree}. By the {\em degree $\deg_G(i)$} of a node $i\in N$ (in an undirected graph $G$), we mean the number of edges incident to $i$ in $G$. 
Note that an undirected graph is chordal if and only if all its edges can be directed in such a way that the result is an acyclic directed graph without immoralities (see \S\,2.1 in \cite{bib:Lau96}). Occasionally, we will use the acronym ``\DAG'' for ``directed acyclic graph'', as is common in the machine learning community, although the grammatically correct phrase is ``acyclic directed graph''. \subsection{Learning Bayesian network structures}\label{Ssec.Learning} In the statistical context, each variable (= node) $i\in N$ is assigned a finite (individual) sample space $\X_{i}$ (= the set of possible values); to avoid technical problems, assume $|\X_{i}|\geq 2$ for each $i\in N$. A {\em Bayesian network structure\/} defined by a \DAG\ $G$ (over $N$) is formally the class of discrete probability distributions $P$ on the joint sample space $\prod_{i\in N} \X_{i}$ that are Markovian with respect to $G$. Note that $P$ is {\em Markovian\/} with respect to $G$ if it satisfies the conditional independence restrictions determined by the respective separation criterion (see \cite{bib:Lau96, bib:Pea88}). Different \DAG s over $N$ can be {\em Markov equivalent}, which means they define the same Bayesian network structure. The classic graphical characterization of (Markov) equivalent graphs is this: they are equivalent if and only if they have the same underlying undirected graph and the same immoralities (see \cite{bib:AMP97}). The classic unique graphical representative of a Bayesian network structure is the {\em essential graph} $G^{*}$ of the respective (Markov) equivalence class $\mathcal{G}$ of acyclic directed graphs: one has $a\to b$ in $G^{*}$ if this arc occurs in every graph from $\mathcal{G}$, and $G^{*}$ has an undirected edge between $a$ and $b$ if one has $a\rightarrow b$ in one graph and $b\rightarrow a$ in another graph (from $\mathcal{G}$). 
A less informative (unique) representative is the {\em pattern} $\pat(G)$ (of any $G$ in $\mathcal{G}$): it is obtained from the underlying graph of $G$ by directing (only) those edges that belong to immoralities (in $G$). {\em Learning a Bayesian network structure\/} means to determine it on the basis of an observed (complete) database $D$ (of length $\ell\geq 1$), which is a sequence $x_{1},\ldots ,x_{\ell}$ of elements of the joint sample space. $D$ is called {\em complete} if all components of the elements $x_{1},\ldots,x_{\ell}$ are known. A {\em quality criterion} is a real function $\mathcal{Q}$ of two variables: of an acyclic directed graph $G$ and of a database $D$. A learning procedure consists in maximizing the function $G\mapsto \mathcal{Q}(G,D)$ for given fixed $D$. Since the aim is to learn a Bayesian network structure, the criterion should be {\em score equivalent}, which means, $\mathcal{Q}(G,D)=\mathcal{Q}(H,D)$ for any pair of Markov equivalent graphs $G,H$ and for any database $D$. A standard technical requirement \cite{bib:Chi02} is that the criterion has to be (additively) {\em decomposable}, which means, it can be written as follows: \[ \mathcal{Q} (G,D) =\sum_{i\in N} q_{i|\pa_{G}(i)} (D_{\{ i\}\cup\pa_{G}(i)}), \] where $D_{A}$ denotes the projection of the database $D$ to $\prod_{i\in A} \X_{i}$ (for $\emptyset\neq A\subseteq N$) and $q_{i|B}$ for $i\in N$, $B\subseteq N\setminus\{ i\}$ are real functions. Finally, let us remark that the essential graph $G^*$ of a \DAG\, $G$ is an undirected graph if and only if $G$ has no immoralities. Consequently, every cycle of length at least $4$ in the undirected graph underlying $G$ (which must coincide with $G^*$) must contain a chord (otherwise there exists an immorality on this cycle in $G$). Therefore, if an essential graph is undirected it has to be \emph{chordal}. Conversely, if $G^*$ is an undirected chordal graph, $G$ cannot have an immorality. 
Therefore, {\em learning decomposable models\/} can be viewed as learning (special) Bayesian network structures corresponding to chordal undirected essential graphs \cite{bib:AMP97b}. \subsection{Algebraic approach to learning}\label{Subsection: Algebraic approach to learning} An {\em imset} over $N$ is a vector in $\Z^{2^{|N|}}$, whose components are indexed by subsets of $N$. Traditionally, all subsets of $N$ are considered, although in Section \ref{sec.characteristic} we also consider imsets with a restricted domain (components corresponding to the empty set and to singletons are dropped, since they linearly depend on the other components). Every vector in $\R^{2^{|N|}}$ can be written as a (real) combination of basic vectors $\delta_{A}\in \{0,1\}^{2^{|N|}}$: \[ \delta_{A}(T) = \left\{ \begin{array}{ll} 1 & ~~\mbox{\rm if}~~ T=A\,,\\ 0 & ~~\mbox{\rm if}~~ T\subseteq N,\; T\neq A\,, \end{array} \right. \quad \mbox{for $T\subseteq N$ (if $A\subseteq N$ is fixed).} \] This allows us to give formulas for imsets. Given an acyclic directed graph $G$ over $N$, the {\em standard imset} for $G$ is given by \begin{equation} \staim_{G}=\delta_{N}-\delta_{\emptyset} + \sum_{i\in N} \left\{\, \delta_{\pa_{G}(i)} - \delta_{\{ i\}\cup\pa_{G}(i)}\, \right\}, \label{eq.staim} \end{equation} where the basic vectors can cancel each other. It is a unique algebraic representative of the corresponding Bayesian network structure because $\staim_{G}=\staim_{H}$ if and only if $G$ and $H$ are Markov equivalent (Corollary 7.1 in \cite{bib:Stud05}). The convex hull of the set of all standard imsets over $N$ is the {\em standard imset polytope}. 
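Formula (\ref{eq.staim}) is easy to evaluate mechanically. The following sketch (our illustration, not part of the paper; all names are ours) stores an imset sparsely as a dictionary from subsets to non-zero integers and checks on a three-node example that two Markov equivalent \DAG s share the same standard imset.

```python
# Illustration (not from the paper): the standard imset of the defining
# formula, u_G = delta_N - delta_{emptyset} + sum_i (delta_{pa(i)} -
# delta_{{i} union pa(i)}), stored sparsely as a dict from frozensets
# to non-zero integers.
def standard_imset(nodes, parents):
    """nodes: set of node names; parents: dict node -> set of parents."""
    u = {}
    def add(subset, value):
        key = frozenset(subset)
        u[key] = u.get(key, 0) + value
        if u[key] == 0:
            del u[key]  # basic vectors may cancel each other
    add(nodes, +1)
    add(set(), -1)
    for i in nodes:
        add(parents[i], +1)
        add(set(parents[i]) | {i}, -1)
    return u

# Markov equivalent chains a -> b -> c and c -> b -> a share one standard imset.
chain1 = standard_imset({"a", "b", "c"}, {"a": set(), "b": {"a"}, "c": {"b"}})
chain2 = standard_imset({"a", "b", "c"}, {"c": set(), "b": {"c"}, "a": {"b"}})
assert chain1 == chain2
```

The sparse representation mirrors the fact that most components of $\staim_{G}$ vanish: at most $2|N|+2$ basic vectors appear in (\ref{eq.staim}) before cancellation.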
An important result from the point of view of an algebraic approach to learning Bayesian network structures is that any score equivalent and decomposable quality criterion (= score function) $\mathcal{Q}$ has the form \begin{equation} \mathcal{Q}(G,D) \,=\, s^{\mathcal{Q}}_{D} -\langle t^{\mathcal{Q}}_{D},\staim_{G}\rangle\,, \label{eq.criter} \end{equation} where $\langle \ast,\ast\rangle$ denotes the scalar product, and both $s^{\mathcal{Q}}_{D}\in\R$ and $t^{\mathcal{Q}}_{D}\in \R^{2^{|N|}}$ only depend on the database $D$ and the chosen quality criterion (see Lemmas 8.3 and 8.7 in \cite{bib:Stud05}). In particular, the task of maximizing $\mathcal{Q}$ is equivalent to finding the optimum of a linear function over the standard imset polytope. \section{Characteristic imsets}\label{sec.characteristic} In this section we introduce the notion of a \emph{characteristic imset\/} and prove some useful facts about it. For example, we show that this imset is always a $0$-$1$ vector. \begin{definition}\rm\label{def.char-im} Given an acyclic directed graph $G$ over $N$, let $\staim_{G}$ be the standard imset for $G$. We introduce \begin{eqnarray*} \portrait[\staim_{G}] &:=& (\,\portrait[\staim_{G}]\,(T)\,)_{T\subseteq N, |T|>1}\;\;\;\in\Z^{2^{|N|}-|N|-1}, ~\mbox{with}\\ \portrait[\staim_{G}]\,(T) &:=& \sum_{X\subseteq N: T\subseteq X} \staim_{G}(X) ~~\mbox{for~} T\subseteq N,\ |T|>1, \end{eqnarray*} and call $\portrait[\staim_{G}]$ the \emph{upper portrait} of $\staim_{G}$ or, simply, of $G$. Moreover, we will call \[ \chv_{G}:=\ve 1-\portrait[\staim_{G}]\in\Z^{2^{|N|}-|N|-1} \] the \emph{characteristic imset} of $G$. \end{definition} Characteristic imsets are unique representatives of Markov equivalence classes. This is because the standard imsets are unique representatives and because the upper portrait map is an invertible affine linear map. The inverse map is given by the well-known M\"{o}bius inversion formula \cite{Bender+Goldman}. 
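On a small example, the maps in Definition~\ref{def.char-im} can be checked numerically. The sketch below (again our illustration; all names are ours) applies the upper portrait to a standard imset stored sparsely as a dictionary; for the immorality $a\to c\leftarrow b$, formula (\ref{eq.staim}) gives $\staim_{G}=\delta_{\emptyset}-\delta_{\{a\}}-\delta_{\{b\}}+\delta_{\{a,b\}}$, and the resulting characteristic imset is a $0$-$1$-vector.

```python
from itertools import combinations

# Illustration (names are ours): the upper portrait of a standard imset u,
# stored as a dict from frozensets to non-zero integers, and the
# characteristic imset c_G = 1 - portrait(u_G) of the definition above.
def characteristic_imset(nodes, u):
    c = {}
    for r in range(2, len(nodes) + 1):  # only components with |T| > 1
        for T in combinations(sorted(nodes), r):
            T = frozenset(T)
            # upper portrait: sum of u(X) over all supersets X of T
            portrait = sum(v for X, v in u.items() if T <= X)
            c[T] = 1 - portrait
    return c

# Standard imset of the immorality a -> c <- b, evaluated by hand from
# the defining formula for the standard imset:
u = {frozenset(): 1, frozenset({"a"}): -1,
     frozenset({"b"}): -1, frozenset({"a", "b"}): 1}
c = characteristic_imset({"a", "b", "c"}, u)
assert set(c.values()) <= {0, 1}  # a 0-1 vector, as claimed
```

Note that $\chv_{G}(\{a,b\})=0$ here, reflecting that $a$ and $b$ are not adjacent, while the entry for $\{a,b,c\}$ equals $1$ because of the immorality.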
In fact, both maps assign lattice points to lattice points! Characteristic imsets have remarkable properties and, as we will show below, their entries directly encode the underlying undirected graph and the immoralities of the given acyclic directed graph. \begin{theorem}\label{Theorem: Characteristic imsets are 0-1} Let $G$ be an acyclic directed graph over $N$. For any $T\subseteq N$, $|T|>1$, we have $\chv_{G}(T)\in \{0,1\}$ and $\chv_{G}(T)=1$ iff there exists some $i\in T$ with $T\setminus \{ i\}\subseteq \pa_G(i)$. In particular, $\chv_{G}\in \{0,1\}^{2^{|N|}-|N|-1}$. \end{theorem} \boproof Consider the defining formula (\ref{eq.staim}) for the standard imset. For any $T\subseteq N$, $|T|>1$, the value $\portrait[\staim_{G}]\,(T)$ can be computed as \[ \portrait[\staim_{G}]\,(T)=\sum_{X\subseteq N: T\subseteq X} \staim_{G}(X)=1+\sum_{i\in N:T\subseteq\pa(i)} 1 - \sum_{i\in N:T\subseteq\pa(i)\cup\{i\}} 1. \] Hence, we get \begin{eqnarray*} \chv_{G}(T) & = & 1- \portrait[\staim_{G}]\,(T)\\ & = & \sum_{i\in N:T\subseteq\pa(i)\cup\{i\}} 1 - \sum_{i\in N:T\subseteq\pa(i)} 1\\ & = & \sum_{i\in N:T\subseteq\pa(i)\cup\{i\},i\in T} 1\\ & = & \sum_{i\in T:T\setminus\{ i\}\subseteq\pa(i)} 1. \end{eqnarray*} For fixed $T$, assume that there are two different elements $i,j\in T$ with $T\setminus\{ i\}\subseteq \pa_G(i)$ and $T\setminus\{j\}\subseteq\pa_G(j)$. This implies both $i\in\pa_G(j)$ and $j\in\pa_G(i)$. The simultaneous existence of the arcs $i\rightarrow j$ and $j\to i$, however, contradicts the assumption that $G$ is acyclic. Thus, for each $T\subseteq N$, there is at most one $i\in T$ with $T\setminus\{ i\}\subseteq \pa_G(i)$. Consequently, \[ \chv_{G}(T)=\sum_{i\in T:T\setminus\{ i\}\subseteq\pa(i)} 1\;\;\; \in\{0,1\}, \] and thus $\chv_{G}\in\{0,1\}^{2^{|N|}-|N|-1}$. 
\eoproof \begin{corollary}\label{Cor: Vertices are the only lattice points} For any $N$, the only lattice points in the standard imset polytope and in the characteristic imset polytope are their vertices. \end{corollary} \boproof The statement holds for any $0$-$1$-polytope and thus in particular also for the characteristic imset polytope. Moreover, the portrait map and its inverse, the M\"{o}bius map, are affine linear maps between $\staim_{G}$ and $\chv_{G}$ that map lattice points to lattice points. Thus, the result holds also for the standard imset polytope. \eoproof {\bf Remark.} Corollary \ref{Cor: Vertices are the only lattice points} (for standard imsets) has already been stated and proved in \cite{bib:StuVom11}. The original proof of this result in the manuscript of \cite{bib:StuVom11} was quite long and complicated. Later discussions among the authors of the present paper led to the much simpler proof using the portrait map, which was then also used in the final version of \cite{bib:StuVom11}. Corollary \ref{Cor: Vertices are the only lattice points} also implies that the set of standard imsets is exactly the set of all vertices of the standard imset polytope, again simplifying the lengthy proof from \cite{bib:StudVomHem10}. \eoproof Given a chordal undirected graph $G$, the corresponding characteristic imset $\chv_{G}$ is defined as the characteristic imset of any \DAG\, $\overrightarrow{G}$ Markov equivalent to $G$. The observation that characteristic imsets are unique representatives of Markov equivalence classes ensures that the definition is correct. \begin{corollary}\label{Corollary: Entries of 1 for chordal graphs} Let $G$ be an undirected chordal graph over $N$. Then, for $T\subseteq N$, $|T|>1$, we have $\chv_{G}(T)=1$ if and only if $T$ is a clique in $G$. 
\end{corollary} \boproof As $G$ is the essential graph of an acyclic directed graph with no immoralities, we can direct the edges of $G$ in such a way that we obtain an equivalent acyclic directed graph $\overrightarrow{G}$ with no immoralities. To show the forward implication, let $T\subseteq N$ be given with $\chv_{G}(T)=1$. As $\chv_{\overrightarrow{G}}(T)=\chv_{G}(T)=1$, there exists some $i\in T$ such that $T\setminus\{ i\}\subseteq\pa_{\overrightarrow{G}}(i)$. Assume now, for a contradiction, that there are two nodes $j,k\in T\setminus\{i\}$ that are not connected by an edge in $\overrightarrow{G}$ (and hence $j$ and $k$ are not connected in $G$). Then, however, $j\rightarrow i\leftarrow k$ is an immorality in $\overrightarrow{G}$, a contradiction. Hence, all nodes in $T\setminus\{i\}$ must be pairwise connected by an edge in $G$. As they are all connected in $G$ by an edge to $i$, $T$ is a clique in $G$. To show the converse implication, let $T\subseteq N$ be a clique in $G$. Note that in $\overrightarrow{G}$, being an acyclic directed graph, the set $T$ must contain a node $i$ such that for all $j\in T\setminus\{i\}$ the edge $\{i,j\}\in G$ is directed towards $i$ in $\overrightarrow{G}$. But then $T\setminus\{i\}\subseteq\pa_{\overrightarrow{G}}(i)$ and therefore, $\chv_{G}(T)=1$ by Theorem \ref{Theorem: Characteristic imsets are 0-1}. \eoproof Applying this observation to special undirected chordal graphs, namely to undirected forests, we obtain the following characterization. \begin{corollary}\label{Corollary: Characteristic imset for undirected forests} Let $G$ be an undirected forest having $N$ as the set of nodes. Then, for $T\subseteq N$, $|T|>1$, we have $\chv_{G}(T)=1$ if and only if $T$ is an edge of $G$, or, in other words, \[ \chv_{G}=\left(\begin{array}{c}\chi(G) \\ \ve 0 \end{array}\right), \] where $\chi(G)$ denotes the characteristic vector of the edge-set of $G$. 
\end{corollary} Indeed, the only cliques of cardinality at least two in a forest are its edges. A similar result, in fact, holds for any acyclic directed graph $G$. \begin{corollary}\label{Corollary: Characteristic imset always contains chi(G)} Let $G$ be a \DAG\, over $N$ and $\bar G$ its underlying undirected graph. Then for any two-element subset $\{a,b\}\subseteq N$, we have $\chv_{G}(\{a,b\})=1$ if and only if $a\to b$ or $b\to a$ is an edge of $G$, or, in other words, \[ \chv_{G}=\left(\begin{array}{c}\chi(\bar G) \\ * \end{array}\right), \] where $*$ denotes the remaining components of $\chv_{G}$. \end{corollary} \boproof This is an easy consequence of Theorem \ref{Theorem: Characteristic imsets are 0-1}. If $\chv_{G}(T)=1$ for $T=\{ a,b\}$ then the only $i\in T$ with $T\setminus\{ i\}\subseteq\pa_{G}(i)$ is either $a$ or $b$. \eoproof Thus, $\chv_{G}$ is an extension of the characteristic vector $\chi(\bar G)$ of the edge-set of $\bar{G}$, which motivated our terminology. Let us now show how to convert $\chv_{G}$ back to the pattern graph $\pat(G)$ of $G$. \begin{theorem}\label{Theorem: Reconstruction of pattern graph from characteristic imset} Let $G$ be an acyclic directed graph over $N$ and let $a,b\in N$ be distinct nodes. Then the following holds: \begin{itemize} \item[(1)] $a$ and $b$ are adjacent in $G$ iff $\chv_{G}(\{a,b\})=1$, otherwise $\chv_{G}(\{a,b\})=0$. \item[(2)] $a\to b$ belongs to an immorality in $G$ iff there exists some $i\in N\setminus\{a,b\}$ with $\chv_{G}(\{a,b,i\})=1$ and $\chv_{G}(\{a,i\})=0$. The latter condition implies $\chv_{G}(\{a,b\})=1$ and $\chv_{G}(\{b,i\})=1$. \end{itemize} \end{theorem} \boproof The condition (1) follows from Corollary \ref{Corollary: Characteristic imset always contains chi(G)} and Theorem \ref{Theorem: Characteristic imsets are 0-1}. For (2) assume that $a\to b\leftarrow i$ is an immorality in $G$. 
Then $\chv_{G}(\{a,b,i\})=1$ by Theorem \ref{Theorem: Characteristic imsets are 0-1} and the necessity of the other conditions follows from (1). Conversely, provided that $\chv_{G}(\{a,b,i\})=1$, one of the three options $a\to i\leftarrow b$, $i\to a\leftarrow b$ and $a\to b\leftarrow i$ (with possible additional edges) occurs. Now, $\chv_{G}(\{a,i\})=0$ implies that $a$ and $i$ are not adjacent in $G$, which excludes the first two options and implies that $a\to b\leftarrow i$ is an immorality. \eoproof \begin{corollary}\label{Corollary: band determination} Let $G$ be a \DAG\ over $N$. The characteristic imset $\chv_{G}$ is determined uniquely by its values for sets of cardinality 2 and 3. \end{corollary} \boproof By Theorem \ref{Theorem: Reconstruction of pattern graph from characteristic imset} these values determine both the edges and immoralities in $G$. In particular, they determine the pattern $\pat(G)$. As explained in Section \ref{Ssec.Learning}, this uniquely determines the Bayesian network structure and, therefore, the respective standard and characteristic imsets. \eoproof More specifically, the components of $\chv_{G}$ for $|S|\geq 4$ can be derived iteratively from the components for $|S|\leq 3$ on the basis of the following lemma. A further simple consequence of the lemma below is that the entries for $|S|\geq 4$ are not linear functions of the entries for $|S|\leq 3$. \begin{lemma}\label{Lemma: band determination} Let $G$ be a \DAG\ over $N$, and $S\subseteq N$, $|S|\geq 4$. Then the following conditions are equivalent. \begin{itemize} \item[(a)] $\chv_{G}(S)=1$, \item[(b)] there exist $|S|-1$ subsets $T$ of $S$ with $|T|=|S|-1$ and $\chv_{G}(T)=1$, \item[(c)] there exist three subsets $T$ of $S$ with $|T|=|S|-1$ and $\chv_{G}(T)=1$. \end{itemize} \end{lemma} In the proof, by a {\em terminal node} within a set $T\subseteq N$ we mean $i\in T$ such that there is no $j\in T\setminus \{i\}$ with $i\to j$ in $G$. 
\boproof The implication $(a)\rightarrow (b)$ simply follows from Theorem \ref{Theorem: Characteristic imsets are 0-1}; $(b)\rightarrow (c)$ is trivial. To show $(c)\rightarrow (a)$ we first fix a terminal node $i$ within $S$. Now, $(c)$ implies that there exist at least two sets $T\subseteq S$, $|T|=|S|-1$ which contain $i$. Let $\tilde{T}$ be one of them. Since $\chv_{G}(\tilde{T})=1$ by Theorem \ref{Theorem: Characteristic imsets are 0-1}, there exists $k\in\tilde{T}$ with $j\to k$ for every $j\in \tilde{T}\setminus\{k\}$. If $i\neq k$, then $i\to k$, which contradicts $i$ being terminal in $S$. Thus, $i=k$. Since those two sets $T$ cover $S$, one has $j\to i$ for every $j\in S\setminus \{i\}$, and Theorem \ref{Theorem: Characteristic imsets are 0-1} implies $\chv_{G}(S)=1$. \eoproof Theorem \ref{Theorem: Reconstruction of pattern graph from characteristic imset} allows us to reconstruct the essential graph for $G$. Indeed, the conditions (1) and (2) directly characterize the pattern graph $\pat(G)$. However, in general, there could be other arcs in the essential graph. Fortunately, there is a polynomial-time graphical algorithm transforming $\pat(G)$ into the corresponding essential graph $G^{*}$. More specifically, Theorem 3 in \cite{bib:Meek95} says that, provided $\pat(G)$ is the pattern of an acyclic directed graph $G$, the repeated (exhaustive) application of the orientation rules from Figure \ref{fig.rules} gives the essential graph $G^{*}$. 
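Conditions (1) and (2) of Theorem \ref{Theorem: Reconstruction of pattern graph from characteristic imset} translate directly into code. The sketch below (our illustration; function and variable names are ours) reads the pattern off the entries of $\chv_{G}$ for sets of cardinality $2$ and $3$; Meek's orientation rules, which would complete the pattern to the essential graph, are not implemented here.

```python
# Illustration of the reconstruction theorem: recover the pattern pat(G)
# from the 0-1 entries of c_G (a dict mapping frozensets, |T| in {2,3},
# to 0/1 values). Returns the set of immorality arcs and the set of
# remaining undirected edges.
def pattern(nodes, c):
    # (1): a and b are adjacent iff c({a,b}) = 1
    edges = {frozenset({a, b}) for a in nodes for b in nodes
             if a < b and c.get(frozenset({a, b}), 0) == 1}
    arcs = set()
    for e in edges:
        for a, b in [tuple(e), tuple(e)[::-1]]:
            # (2): a -> b belongs to an immorality iff some i gives
            #      c({a,b,i}) = 1 and c({a,i}) = 0
            if any(c.get(frozenset({a, b, i}), 0) == 1
                   and c.get(frozenset({a, i}), 0) == 0
                   for i in nodes - {a, b}):
                arcs.add((a, b))
    undirected = {e for e in edges if not {tuple(e), tuple(e)[::-1]} & arcs}
    return arcs, undirected

# Characteristic imset of the immorality a -> c <- b:
c = {frozenset({"a", "c"}): 1, frozenset({"b", "c"}): 1,
     frozenset({"a", "b"}): 0, frozenset({"a", "b", "c"}): 1}
arcs, undirected = pattern({"a", "b", "c"}, c)
```

On this example the procedure directs exactly the two arcs of the immorality and leaves no undirected edges.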
\begin{figure} \setlength{\unitlength}{1mm} \begin{center} \begin{picture}(128,20) \thinlines \put(2,2){\circle*{1}} \put(2,12){\circle*{1}} \put(12,2){\circle*{1}} \put(3,2){\line(1,0){8}} \put(2,11){\vector(0,-1){8}} \put(22,2){\circle*{1}} \put(22,12){\circle*{1}} \put(32,2){\circle*{1}} \put(22,11){\vector(0,-1){8}} \put(17,7){\makebox(0,0){$\Longrightarrow $}} \put(47,2){\circle*{1}} \put(47,12){\circle*{1}} \put(57,2){\circle*{1}} \put(48,2){\vector(1,0){8}} \put(47,11){\vector(0,-1){8}} \put(48,11){\line(1,-1){8}} \put(67,2){\circle*{1}} \put(67,12){\circle*{1}} \put(77,2){\circle*{1}} \put(68,2){\vector(1,0){8}} \put(67,11){\vector(0,-1){8}} \put(62,7){\makebox(0,0){$\Longrightarrow $}} \put(92,9.5){\circle*{1}} \put(99.5,2){\circle*{1}} \put(99.5,17){\circle*{1}} \put(107,9.5){\circle*{1}} \put(93,9.5){\line(1,0){13}} \put(93,10.5){\line(1,1){5.5}} \put(93,8.5){\line(1,-1){5.5}} \put(100.5,16){\vector(1,-1){5.5}} \put(100.5,3){\vector(1,1){5.5}} \put(117,9.5){\circle*{1}} \put(124.5,2){\circle*{1}} \put(124.5,17){\circle*{1}} \put(132,9.5){\circle*{1}} \put(118,10.5){\line(1,1){5.5}} \put(118,8.5){\line(1,-1){5.5}} \put(125.5,16){\vector(1,-1){5.5}} \put(125.5,3){\vector(1,1){5.5}} \put(112,9.5){\makebox(0,0){$\Longrightarrow $}} \thicklines \put(23,2){\vector(1,0){8}} \put(68,11){\vector(1,-1){8}} \put(118,9.5){\vector(1,0){13}} \end{picture} \end{center} \caption{Orientation rules for getting the essential graph.\label{fig.rules}} \end{figure} Finally, we wish to point out that Theorems \ref{Theorem: Characteristic imsets are 0-1} and \ref{Theorem: Reconstruction of pattern graph from characteristic imset} directly lead to a procedure for testing whether a given vector $\chv\in\Z^{2^{|N|}-|N|-1}$ is a characteristic imset for some (acyclic directed graph) $G$ over $N$. Using both theorems, one first constructs a candidate pattern graph, then a candidate essential graph, and then from it a candidate acyclic directed graph $G$. 
It remains to check whether the characteristic imset of $G$ coincides with the given vector $\chv$. \section{Learning restricted Bayesian network structures} A lot of research is devoted to complexity results for the general problem of learning Bayesian network structures, analyzing different optimization strategies, scoring functions, and representations of data. For example, Chickering, Heckerman and Meek show the large-sample learning problem to be $\NP$-hard even when the distribution is perfectly Markovian \cite{bib:CHM04}. On the other hand, Chickering \cite{bib:Chi96} shows learning Bayesian network structures to be $\NP$-complete when using a certain Bayesian score. This remains valid even if the number of parents is limited to a constant. \subsubsection*{Our assumptions} A reduction in complexity could be achieved by limiting the possible structures the Bayesian network can have. In the following, we will restrict our attention to learning \emph{decomposable models}, that is, learning the best \DAG\, among all \DAG s whose essential graphs are undirected (and thus also chordal). In fact, we assume that we are given an undirected graph $K$ over $N$ with an edge-set $\calE(K)$, not necessarily the complete graph, and we wish to learn a \DAG\ $G$ that maximizes the quality criterion and whose essential graph is an (undirected) subgraph of $K$ of a certain type. In particular, we are interested in learning undirected forests and spanning trees with and without degree bounds and in learning undirected chordal graphs. We wish to point out here that we make minimal assumptions on the database $D$ and on the quality criterion to be optimized. We only assume that the database $D$ over $N$ is \emph{complete}, that is, no data entry has a missing/unknown component (see Section \ref{Ssec.Learning}). For the quality criterion (= score function) we require that it is \emph{score equivalent} and \emph{decomposable}. 
In fact, instead of having $D$ and an explicit score function available, we only assume that we are given an evaluation oracle (depending on $D$) that, when queried on $G$, returns the value $\mathcal{Q}(G,D)$. Clearly, especially for larger databases $D$, computing a single score function value $\mathcal{Q}(G,D)$ may be expensive. By assuming a given evaluation oracle, we assign constant cost to score function evaluations in our complexity results below. Finally, we wish to remind the reader that, under our assumptions, learning the best \DAG\ representing $D$ becomes the problem of maximizing a certain {\em linear functional} (whose components depend on $D$) over the characteristic imsets (see Section \ref{Subsection: Algebraic approach to learning}). However, as this linear problem is in (exponential) dimension $2^{|N|}-|N|-1$, we cannot employ this transformation directly in our complexity treatment. \subsection{Learning undirected forests and spanning trees}\label{Ssec:forests} By Corollary \ref{Corollary: Characteristic imset for undirected forests}, we know that every \DAG\ whose essential graph is an undirected forest $G$ has $\binom{\chi(G)}{\bf 0}$ as its characteristic imset. Thus, the problem of learning the best undirected forest is equivalent to maximizing a linear functional over such vectors $\binom{\chi(G)}{\bf 0}$, which in turn is equivalent to finding a maximum weight forest $G$ as a subgraph of $K$. The same argument holds for learning undirected spanning trees of $K$. These are two well-known combinatorial problems that can be solved in polynomial time via greedy-type algorithms (see e.g.\ \S\,40 in \cite{bib:Sch03}). We conclude with the following statement. \begin{lemma} Given a node set $N$, an undirected graph $K=(N,\calE(K))$ and an evaluation oracle for computing $\mathcal{Q}(G,D)$. 
The problems of finding a maximum score subgraph of $K$ that is \begin{itemize} \item[(a)] a forest, \item[(b)] a spanning tree, \end{itemize} can be solved in time polynomial in $|N|$. \end{lemma} Although $K$ is part of the input, we need not state the complexity dependence with respect to the encoding length of $K$ explicitly here, since the encoding length $\langle K\rangle$ of $K$ is at least $|N|$. Moreover, we have $\langle K\rangle\in O(|N|^2)$. Chow and Liu \cite{bib:ChowLiu68} provided a polynomial-time procedure (in $|N|$) for maximizing the maximum log-likelihood criterion, which finds an optimal dependence tree (= a spanning tree). The core of their algorithm is the greedy algorithm and they apply it to a non-negative objective function. In their result, the complexity of computing the probabilities from data (and hence the objective/score function) is likewise omitted. A similar result was obtained by Heckerman, Geiger and Chickering \cite{bib:HGC95} for the Bayesian scoring criterion. Our result combines all of these previous results by supposing only a decomposable and score equivalent quality criterion. We wish to point out here that the well-known GES algorithm \cite{bib:Chi02,bib:Meek97}, which was designed to learn general Bayesian network structures, could be modified in a straightforward way to learn undirected forests (among the subgraphs of $K$). Then the first phase of this new GES-type algorithm coincides with the greedy algorithm to find a maximum weight forest, and the second phase of the algorithm cannot remove any edge. Thus, the modified GES algorithm always finds a best undirected forest (among the subgraphs of $K$) in time polynomial in $|N|$. 
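The greedy step behind the lemma can be sketched as follows (our illustration; the numeric edge weights are hypothetical placeholders, in practice they would be derived from the local scores $q_{i|B}$ of the decomposable criterion). Since an optimal forest never uses an edge of non-positive weight, Kruskal's greedy algorithm restricted to the positive-weight edges of $K$ returns a maximum weight forest.

```python
# Illustration of the greedy step: maximum weight forest via Kruskal's
# algorithm with union-find (path halving). Edge weights are placeholders.
def max_weight_forest(nodes, weighted_edges):
    """weighted_edges: iterable of (weight, u, v) tuples."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    forest = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        if w <= 0:
            break  # an optimal forest never uses non-positive edges
        ru, rv = find(u), find(v)
        if ru != rv:  # adding {u, v} keeps the subgraph acyclic
            parent[ru] = rv
            forest.append((u, v))
    return forest

edges = [(3.0, "a", "b"), (2.0, "b", "c"), (1.5, "a", "c"), (-1.0, "c", "d")]
forest = max_weight_forest({"a", "b", "c", "d"}, edges)
```

For spanning trees one runs the same greedy procedure but keeps merging components, regardless of the sign of the weights, until the forest spans $N$.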
\subsection{Learning undirected forests and spanning trees with degree bounds}\label{Ssec:forests-bounded} Although the problems of learning undirected forests and of learning undirected spanning trees are solvable in polynomial time, learning an undirected forest/spanning tree with a given degree bound $\deg_G(i)\leq k<|N|-1$, $\forall i\in N$, is $\NP$-hard for $k\geq 2$. For $k=1$, this problem is equal to the well-known problem of finding a maximum weight matching in $K$, which is polynomial-time solvable in the general case (see \S\,30 in \cite{bib:Sch03}). \begin{theorem}\label{Theorem: LUFSTDB} Given a node set $N$, an undirected graph $K=(N,\calE(K))$ and an evaluation oracle for computing $\mathcal{Q}(G,D)$. Moreover, let $k\in\Z_+$ be a constant with $2\leq k<|N|-1$. Then the following statements hold. \begin{itemize} \item[(a)] The problem of finding a maximum score subgraph of $K$ that is a forest and that fulfils the degree bounds $\deg(i)\leq k$, $\forall i\in N$, is $\NP$-hard (in $|N|$) for any {\em fixed} (strictly) positive score function $\mathcal{Q}(.,D)$. \item[(b)] The problem of finding a maximum score spanning tree of $K$ that fulfils the degree bounds $\deg(i)\leq k$, $\forall i\in N$, is $\NP$-hard (in $|N|$) for any {\em fixed} score function $\mathcal{Q}(.,D)$. \end{itemize} \end{theorem} {\bf Remark.} Again, we have removed the explicit dependence on $\langle K\rangle$, since $\langle K\rangle\in O(|N|^2)$. \boproof We deduce part (b) from the following feasibility problem. In \S\,3.2.1 of \cite{bib:GarJohn79}, it has been shown that the task: \textsc{Bounded degree spanning tree}\\ Instance: An undirected graph $K$ and a constant $2\leq k<|N|-1$\\ Question: ``Is there a spanning tree for $K$ in which no node has degree exceeding $k$?'' is $\NP$-complete by reduction from the \textsc{Hamiltonian path problem}. 
Part (a) now follows by considering the subfamily of problems in which the linear objective takes only (strictly) positive values and, thus, every optimal forest (with the bounded degree) is a spanning tree. Hence, the problem of finding a maximum-weight forest (with a given degree bound) is equivalent to finding a maximum-weight spanning tree (with a given degree bound). As the feasibility problem for the latter is already $\NP$-complete, part (a) follows. \eoproof We wish to remark that Meek \cite{bib:Meek01} shows a similar hardness result for learning paths, i.e.\ spanning trees with upper degree bound $k=2$, for the maximum log-likelihood, the minimum description length and the Bayesian quality criteria. \subsection{Learning chordal graphs}\label{Ssec:LCG} Undirected chordal graph models are the intersection of \DAG\ models and undirected graph models, known as Markov networks \cite{bib:Stud05}. In this section, we show that learning these models is $\NP$-hard. \begin{theorem}\label{Theorem: LCG} Given a node set $N$, an undirected graph $K=(N,\calE(K))$ and an evaluation oracle for computing $\mathcal{Q}(G,D)$. The problem of finding a maximum score chordal subgraph of $K$ is $\NP$-hard (in $|N|$). \end{theorem} \boproof We show that we can polynomially transform the following $\NP$-hard problem to learning undirected chordal graphs. \textsc{Clique of given size}\\ Instance: An undirected graph $K$ and a constant $2\leq k\leq |N|-1$\\ Question: ``Is there a clique in $K$ of size at least $k$?'' We now define a suitable learning problem that solves this problem. By Corollary \ref{Corollary: Entries of 1 for chordal graphs}, we know that for any chordal graph $G$ the entry $\chv_{G}(T)$ is $1$ if and only if $T\subseteq N$, $|T|>1$ is a clique (otherwise this entry is $0$). Thus, the score function value for $G$ is determined by the values of the linear objective function ${\ve w}^\intercal\vex$ for the cliques $T$ in $G$. 
In particular, we can define the values for the cliques in such a way that when transforming the learning problem to the problem of maximizing ${\ve w}^\intercal\vex$ over the characteristic imset polytope (= the convex hull over all characteristic imsets) the entries ${\ve w}(T)$ are $0$ when $|T|<k$ and are positive when $|T|\geq k$. This implies that the maximum score among all chordal subgraphs of $K$ is positive iff there exists a chordal subgraph in $K$ containing a clique $T$ of size $|T|\geq k$. This happens iff $K$ has a clique of size at least $k$. \eoproof \subsection{Learning chordal graphs with bounded size of cliques} Let us consider a variation of the previous task by introducing an upper bound $k$ for the size of cliques. If $k\leq 2$, we get the problems of learning undirected forests/matchings, which we already know are solvable in polynomial time (see Section \ref{Ssec:forests} and Section \ref{Ssec:forests-bounded}). For $k>2$, the corresponding problem is $\NP$-hard already for a fixed type of score function. This has been shown by Srebro \cite{bib:Sre01} for the maximized log-likelihood criterion (as a generalization of the work by Chow and Liu \cite{bib:ChowLiu68}). \section{Conclusions} Let us summarize the main contributions of the paper. We introduced characteristic imsets as new simple representatives of Bayesian network structures, which are much closer to the graphical description. Actually, there is an easy transformation from the characteristic imset into the (essential) graph. Last but not least, the insight brought by the use of characteristic imsets makes it possible to offer elegant combinatorial proofs of (known and new) complexity results. The proofs avoid special assumptions on the form of the quality criterion besides the standard assumptions of score equivalence and decomposability. In our future work, we plan to apply these tools in the linear programming approach to learning. 
For this purpose we would like to find a general linear (facet-) description of the corresponding characteristic imset polytope or, at least, of a suitable polyhedral relaxation containing exactly the characteristic imsets as lattice points. Finding suitable polyhedral descriptions is also interesting and important for learning restricted families of Bayesian network structures, for example, for learning undirected chordal graphs. Finally, let us remark that a polyhedral approach to learning Bayesian network structures (using integer programming techniques) has also been suggested by Jaakkola {\em et al.} \cite{bib:jaak10}, but their way of representing \DAG s is different from ours. Their representatives live in dimension $|N|\cdot 2^{|N|-1}$ and correspond to individual \DAG s, while ours live in dimension $2^{|N|}-|N|-1$ and correspond to Markov equivalence classes (of \DAG s). \subsubsection*{Acknowledgements} The results of Milan Studen\'{y} have been supported by the grants GA\v{C}R n.\ 201/08/0539 and GAAV\v{C}R n.\ IAA100750603.
\subsection{Photometry} \subsubsection{TESS photometry} \label{TESSphotom} The TOI-431 system (TIC\,31374837, HIP\,26013) was observed in {\it TESS}\ Sectors 5 (Nov 15 to Dec 11 2018) and 6 (Dec 15 2018 to Jan 6 2019) on Camera 2 in the 2-minute cadence mode ($t_{\rm exp} = 2$\,min). TOI-431.01 (now TOI-431\,d) was flagged on Feb 8 2019 by the MIT Quick-Look Pipeline \citep[QLP,][]{Huang2019} with a signal-to-noise ratio (SNR) of 58; the Sector 5 light curve reveals two deep transits of TOI-431\,d, but further transits of this planet fell in the data gaps in Sector 6. TOI-431\,d passed all Data Validation tests \citep[see][]{Twicken:DVdiagnostics2018} and model fitting \citep[see][]{Li:DVmodelFit2019}; additionally, the difference image centroiding results place the transit signature source within $\sim 3$ arcsec of the target star. TOI-431.02 (now TOI-431\,b) was flagged later, on June 6, after identification by the TESS Science Processing Operations Center (SPOC) pipeline \citep{Jenkins2016} with an SNR of 24 in a combined transit search of Sectors 5--6. We used the publicly available photometry provided by the SPOC pipeline, and used the Presearch Data Conditioning Simple Aperture Photometry (\textsc{PDCSAP\textunderscore FLUX}), which has common trends and artefacts removed by the SPOC Presearch Data Conditioning (PDC) algorithm \citep{Twicken2010,Smith2012,Stumpe2012,Stumpe2014}. The median-normalised PDCSAP flux, without any further detrending, is shown in the top panel of Fig. \ref{fig:TESS}. \subsubsection{LCOGT photometry} \label{LCOphotom} To confirm the transit timing and depth, and to rule out a nearby eclipsing binary (NEB) as the source of the {\it TESS}\ transit events, we obtained three seeing-limited transit observations of TOI-431\,d in the $zs$-band.
The light curves were obtained using the 1-m telescopes at the Cerro Tololo Inter-American Observatory (CTIO) and the Siding Springs Observatory (SSO) as part of the Las Cumbres Observatory Global Telescope network \citep[LCOGT;][]{Brown2013}. Both telescopes are equipped with a $4096\times 4096$ Sinistro camera with a fine pixel scale of $0.39''$ pixel$^{-1}$. We calibrated each sequence of images using the standard LCOGT \texttt{BANZAI} pipeline \citep{McCully2018}. The observations were scheduled using the {\it TESS}\ Transit Finder, a customised version of the \texttt{Tapir} software package \citep{Jensen2013}. The differential light curves of TOI-431, and seven neighbouring sources within $2.5'$ based on Gaia DR2 \citep{GAIA_DR2}, were derived from uncontaminated apertures using AstroImageJ \citep[AIJ;][]{Collins2017}. Two partial transits were obtained on UT December 9 2019, which covered the ingress and egress events from CTIO and SSO, respectively (Figure~\ref{fig:LCO-NGTS}). We then obtained a second ingress observation on January 3 2020 from CTIO. Within each light curve, we detected the partial transit event on-target and cleared the field of NEBs down to $\Delta zs=6.88$\,mag.
Because the transit depth of TOI-431\,d is too shallow to detect from the ground with PEST, the target star was intentionally saturated to check the fainter nearby stars for possible NEBs that could be blended in the {\it TESS}\ aperture. The data rule out NEBs in all 17 stars within $2\farcm5$ of the target star that are bright enough ({\it TESS}\ magnitude $<$ 17.4) to cause the {\it TESS}\ detection of TOI-431\,d. \subsubsection{Spitzer photometry} \label{Spitzerphotom} Shortly after TOI-431 was identified and announced as a {\it TESS}\ planet candidate, we identified TOI-431\,d as an especially interesting target for atmospheric characterization via transmission spectroscopy. We therefore scheduled one transit observation with the Spitzer Space Telescope to further refine the transit ephemeris and allow efficient scheduling of future planetary transits. We observed the system as part of Spitzer GO 14084 \citep{crossfield:2018spitzer} using the 4.5\,$\mu$m channel of the IRAC instrument \citep{fazio:2005}. We observed in subarray mode, which acquired 985 sets of 64 subarray frames, each with 0.4\,s integration time. These transit observations spanned UT times from May 23 2019 21:13 to May 24 2019 04:42, and were preceded and followed by shorter integrations observed off-target to check for bad or hot pixels. Our transit observations used Spitzer/IRAC in PCRS Peak-up mode to place the star as closely as possible to the well-characterized ``sweet spot'' on the IRAC2 detector. \subsubsection{NGTS photometry} \label{NGTSphotom} The Next Generation Transit Survey \citep[NGTS;][]{wheatley18ngts} is an exoplanet-hunting facility which consists of twelve 20\,cm diameter robotic telescopes and is situated at ESO's Paranal Observatory. Each NGTS telescope has a wide field-of-view of 8 square degrees and a plate scale of 5\,arcsec\, pixel$^{-1}$. NGTS observations are also afforded sub-pixel level guiding through the DONUTS auto-guiding algorithm \citep{mccormac13donuts}.
A transit event of TOI-431\,d was observed using five NGTS telescopes on February 20 2020. On this night, a total of 5922 images were taken across the five telescopes, with each telescope observing with the custom NGTS filter and an exposure time of 10\,seconds. The dominant photometric noise sources in NGTS light curves of bright stars are Gaussian and uncorrelated between the individual telescope systems \citep{smith20multicam, bryant20multicam}. As such, we can use simultaneous observations with multiple NGTS telescopes to obtain high precision light curves. All the NGTS data for TOI-431 were reduced using a custom aperture photometry pipeline which uses the SEP library for both source extraction and photometry \citep{bertin96sextractor, Barbary2016}. Bias, dark and flat field image corrections are found not to improve the photometric precision achieved, and so we do not apply these corrections during the image reduction. SEP and GAIA \citep{GAIA, GAIA_DR2} are both used to identify and rank comparison stars in terms of their brightness, colour, and CCD position relative to TOI-431 \citep[for more details on the photometry, see][]{bryant20multicam}. \subsection{Spectroscopy} \subsubsection{HARPS high-resolution spectroscopy} \label{HARPSspec} TOI-431 was observed between February 2 and October 21 2019 with the High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph mounted on the ESO 3.6\,m telescope at the La Silla Observatory in Chile \citep{Pepe2002}. A total of 124 spectra were obtained under the NCORES large programme (ID 1102.C-0249, PI: Armstrong). The instrument (with resolving power $R = 115,000$) was used in high-accuracy mode (HAM), with an exposure time of 900\,s. Between 1 and 3 observations of the star were made per night. The standard offline HARPS data reduction pipeline was used to reduce the data, and a K5 template was used in a weighted cross-correlation function (CCF) to determine the radial velocities (RVs).
For each epoch we also calculated the bisector span (BIS), full-width at half-maximum (FWHM), and contrast of the CCF. These data are presented in Table \ref{tab:dataharps}. In addition to this, there are 50 publicly available archival HARPS spectra dating from 2004 to 2015. \subsubsection{HIRES high-resolution spectroscopy} \label{HIRESspec} We obtained 28 high-resolution spectra of TOI-431 on the High Resolution Echelle Spectrometer of the 10\,m Keck I telescope \citep[Keck/HIRES,][]{Vogt1994}. The observations span a temporal baseline from November 11 2019 to September 27 2020. We obtained an iodine-free spectrum on November 8 2019 as the template for radial velocity extraction. All other spectra were obtained with the iodine cell in the light path for wavelength calibration and line profile modeling. Each of these spectra was exposed for 4--8 min, achieving a median SNR of 200 per reduced pixel near 5500\,\AA. The spectra were analyzed with the forward-modelling Doppler pipeline described in \citet{Howard2010} for RV extraction. We analyzed the Ca II H \& K lines and extracted the $S_{\rm HK}$ using the method of \citet{Isaacson2010}. These data are presented in Table \ref{tab:datahires}. \subsubsection{iSHELL spectroscopy} \label{iSHELLspec} We obtained 108 spectra of TOI-431 during 11 nights with the iSHELL spectrometer on the {\em NASA Infrared Telescope Facility} \citep[IRTF,][]{2016SPIE.9908E..84R}, spanning 108 days from September to December 2019. The exposure times were 5\,minutes, repeated 3--14 times within a night to reach a cumulative photon signal-to-noise ratio per spectral pixel varying from 131--334 at $\sim 2.4\,\mu$m (the approximate centre of the blaze for the middle order). This achieves a per-night RV precision of 3--8\,m\,s$^{-1}$ with a median of 5\,m\,s$^{-1}$. Spectra were reduced and RVs extracted using the methods outlined in \citet{caleetal2019}.
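The cross-correlation approach underlying several of the RV extractions above (HARPS, and later FEROS and M\textsc{inerva}-Australis) can be sketched with a toy numerical example. All values below (a single synthetic Gaussian absorption line, a $+5$\,km\,s$^{-1}$ shift) are illustrative and not taken from the actual pipelines.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def ccf_rv(wave, flux, t_wave, t_flux, v_grid):
    """Toy cross-correlation RV: Doppler-shift a template across a grid
    of trial velocities and locate the CCF peak (illustrative only)."""
    flux0 = flux - flux.mean()
    ccf = np.array([
        np.dot(flux0,
               np.interp(wave, t_wave * (1.0 + v / C_KMS), t_flux) - t_flux.mean())
        for v in v_grid
    ])
    return v_grid[np.argmax(ccf)], ccf

# synthetic single absorption line, observed red-shifted by +5 km/s
wave = np.linspace(5000.0, 5010.0, 4000)                       # Angstrom
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5005.0) / 0.05) ** 2)
v_true = 5.0
observed = 1.0 - 0.5 * np.exp(
    -0.5 * ((wave - 5005.0 * (1.0 + v_true / C_KMS)) / 0.05) ** 2
)
v_grid = np.linspace(-20.0, 20.0, 801)                         # km/s
v_best, _ = ccf_rv(wave, observed, wave, template, v_grid)     # ~ +5 km/s
```

The real pipelines correlate against a full binary mask or template spectrum over many echelle orders and fit the CCF profile; the sketch only shows why the CCF peaks at the true velocity shift.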
\subsubsection{FEROS spectroscopy} \label{FEROSspec} TOI-431 was monitored with the Fiberfed Extended Range Optical Spectrograph \citep[FEROS,][]{kaufer:99}, installed on the MPG 2.2\,m telescope at La Silla Observatory, Chile. These observations were obtained in the context of the Warm gIaNts with tEss (WINE) collaboration, which focuses on the systematic characterization of \textit{TESS} transiting warm giant planets \citep[e.g.,][]{hd1397,jordan:2020}. FEROS has a spectral resolution of $R \approx 48\,000$ and uses a comparison fibre that can be pointed to the background sky or illuminated by a Thorium-Argon lamp simultaneously with the execution of the science exposure. We obtained 10 spectra of TOI-431 between February 28 and March 12 2020. We used the simultaneous calibration technique to trace instrumental radial velocity variations, and adopted an exposure time of 300\,s, which translated into spectra with a typical signal-to-noise ratio per resolution element of 170. The FEROS data were processed with the \texttt{ceres} pipeline \citep{brahm:2017:ceres}, which delivers precision radial velocity and line bisector span measurements through the cross-correlation technique. The cross-correlation was executed with a binary mask resembling the properties of a G2-type dwarf star. \subsubsection{M\textsc{inerva}-Australis spectroscopy} \label{MAspec} M\textsc{inerva}-Australis is an array of four PlaneWave CDK700 telescopes located in Queensland, Australia, fully dedicated to the precise radial-velocity follow-up of TESS candidates. The four telescopes can be simultaneously fiber-fed to a single KiwiSpec R4-100 high-resolution (R=80,000) spectrograph \citep{barnes12,addison19,TOI257}. TOI-431 was observed by M\textsc{inerva}-Australis in its early operations, with a single telescope, for 16 epochs between 2019 Feb 12 and 2019 April 17. Each epoch consists of two 30-minute exposures, and the resulting radial velocities are binned to a single point.
Radial velocities for the observations are derived for each telescope by cross-correlation, where the template being matched is the mean spectrum of each telescope. The instrumental variations are corrected by using simultaneous Thorium-Argon arc lamp observations. \subsection{High resolution imaging} \label{sec:hri} \begin{figure*} \centering \begin{subfigure}{0.44\textwidth} \includegraphics[width=\columnwidth]{1.Observations/HighResImaging_TOI431_Sensitivity3.pdf} \end{subfigure} \begin{subfigure}{0.55\textwidth} \includegraphics[width=\columnwidth]{1.Observations/toi431gallery.pdf} \end{subfigure} \caption{\textit{Left:} 5$\sigma$ contrast curves for all of the sources of high-resolution imaging described in Section \ref{sec:hri}. The 10 and 1 per cent contamination limits are given as the black dotted lines. The grey dashed lines labelled TOI-431\,b and d represent the maximum contrast magnitude that a blended source could have in order to mimic the planetary transit depth if it were an eclipsing binary. \textit{Right:} a compilation of reconstructed images from 'Alopeke and SOAR, and AO images from NIRI and NIRC2, with the instrument and filter labelled. No additional companions are seen.} \label{fig:contrastcurves} \end{figure*} High angular resolution imaging is needed to search for nearby sources that can contaminate the {\it TESS}\ photometry, resulting in an underestimated planetary radius, or that can be the source of astrophysical false positives, such as background eclipsing binaries. The contrast curves from all of the sources of high resolution imaging described below are displayed in Fig. \ref{fig:contrastcurves}. \subsubsection{SOAR HRCam} We searched for stellar companions to TOI-431 using speckle imaging with the 4.1-m Southern Astrophysical Research (SOAR) telescope \citep{Andrei2018} on UT March 17 2019, observing in the Cousins I-band, a similar visible bandpass to TESS. More details of the observation are available in \citet{Ziegler2020}.
The 5$\sigma$ detection sensitivity and speckle auto-correlation functions from the observations are shown in Fig. \ref{fig:contrastcurves}. No nearby stars were detected within 3\arcsec\ of TOI-431 in the SOAR observations. \subsubsection{Gemini NIRI} We collected high resolution adaptive optics observations using the Gemini/NIRI instrument \citep{Hodapp2003} on UT March 18 2019. We collected nine images in the Br$\gamma$ filter, with exposure time 0.6\,s per image. We dithered the telescope by $\sim2''$ between each exposure, allowing for a sky background to be constructed from the science frames themselves. We corrected individual frames for bad pixels, subtracted the sky background, applied flat-field corrections, and then co-added the stack of images with the stellar position aligned. To calculate the sensitivity of these observations, we inject fake companions and measure their S/N, and scale the brightness of these fake companions until they are recovered at 5$\sigma$. This is repeated at a number of locations in the image. We average our sensitivity over position angle, and show the sensitivity as a function of radius in Fig. \ref{fig:contrastcurves}. Our observations are sensitive to companions 4.6\,mag fainter than the host at 0.2'', and 8.1\,mag fainter than the host in the background limited regime, at separations greater than 1''. \subsubsection{Gemini 'Alopeke} TOI-431 was observed on UT Oct 15 2019 using the ‘Alopeke speckle instrument on Gemini-North\footnote{https://www.gemini.edu/sciops/instruments/alopeke-zorro/}. ‘Alopeke provides simultaneous speckle imaging in two bands, 562\,nm and 832\,nm, with output data products including a reconstructed image, and robust limits on companion detections \citep{howell2011}. Fig.~\ref{fig:contrastcurves} shows our results in both 562\,nm and 832\,nm filters. Fig.
\ref{fig:contrastcurves} (right) shows the 832\,nm reconstructed speckle image, from which we find that TOI-431 is a single star with no companion within 1.2\,\arcsec\ down to contrasts of 5--8 magnitudes. The inner working angles of the ‘Alopeke observations are 17 mas at 562\,nm and 28 mas at 832\,nm. \subsubsection{Keck NIRC2} As part of our standard process for validating transiting exoplanets to assess the possible contamination of bound or unbound companions on the derived planetary radii \citep{Ciardi2015}, we observed TOI-431 with infrared high-resolution Adaptive Optics (AO) imaging at Keck Observatory. The Keck Observatory observations were made with the NIRC2 instrument on Keck-II behind the natural guide star AO system. The observations were made on UT March 25 2019 in the standard 3-point dither pattern that is used with NIRC2 to avoid the left lower quadrant of the detector, which is typically noisier than the other three quadrants. The dither pattern step size was $3\arcsec$ and was performed three times. The observations were made in the $Ks$ filter ($\lambda_o = 2.196\,\mu$m; $\Delta\lambda = 0.336\,\mu$m) with an integration time of 1 second and 20 coadds per frame for a total of 300 seconds on target. The camera was in the narrow-angle mode with a full field of view of $\sim10\arcsec$ and a pixel scale of $0.099442\arcsec$ per pixel. The Keck AO observations revealed no additional stellar companions detected to within a resolution $\sim 0.05\arcsec$ FWHM (Fig. \ref{fig:contrastcurves}). The sensitivities of the final combined AO image were determined by injecting simulated sources azimuthally around the primary target every $45^\circ$ at separations of integer multiples of the central source's FWHM \citep{Furlan2017}. The brightness of each injected source was scaled until standard aperture photometry detected it with $5\sigma$ significance.
The resulting brightness of the injected sources relative to the target set the contrast limits at that injection location. The final $5\sigma$ limit at each separation was determined from the average of all of the determined limits at that separation, and the uncertainty on the limit was set by the rms dispersion of the azimuthal slices at a given radial distance. The sensitivity curve is shown in Fig. \ref{fig:contrastcurves} (left), along with an image centred on the primary target showing no other companion stars (right). \subsubsection{Unbound Blended Source Confidence (BSC) analysis} \label{sec:BSC} \begin{figure} \centering \includegraphics[width=\columnwidth]{1.Observations/TIC31374837d_NIRI_K_EBlimits.pdf} \caption{Contrast curve of TOI-431 from the Keck/NIRC2 instrument for the Ks filter (solid black line). The colour ($P_{\rm aligned}$) of each angular separation and contrast bin represents the probability of a chance-aligned source with these properties at the location of the target, based on the TRILEGAL model (see Sect.~\ref{sec:BSC} within the main text). The maximum contrast of a blended binary capable of mimicking the planet transit depth is shown as a dotted horizontal line. The hatched green region between the contrast curve and the maximum contrast of a blended binary ($\Delta m_{max}$ line) represents the regime not explored by the high-spatial-resolution image. P(blended source) is the Blended Source Confidence (BSC), which corresponds to the integration of $P_{\rm aligned}$ over the shaded region.} \label{fig:bsc_zorro832} \end{figure} Finally, we analyse all of the contrast curves available for this target to estimate the probability of contamination from unbound blended sources in the {\it TESS}\ aperture that are undetectable in the available high-resolution images. This probability is called the Blended Source Confidence (BSC), and the steps for estimating it are fully described in \citet{lillo-box14b}.
We use a Python implementation of this approach (\texttt{bsc}, by J. Lillo-Box), which uses the TRILEGAL\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/trilegal}} galactic model \citep[v1.6,][]{girardi12} to retrieve a simulated source population of the region around the corresponding target\footnote{This is done in Python by using the \citet{astrobase} implementation.}. This is used to compute the density of stars around the target position (radius $r=1^{\circ}$), and to derive the probability of chance alignment at a given contrast magnitude and separation. We used the default parameters for the bulge, halo, and thin/thick disks, and the lognormal initial mass function from \cite{chabrier01}. The contrast curves of the high-spatial resolution images are used to constrain this parameter space and estimate the final probability of undetected potentially contaminating sources. We consider as potentially contaminating sources those with a maximum contrast magnitude corresponding to $\Delta m_{\rm max} = -2.5\log{\delta}$, with $\delta$ being the transit depth of the candidate planet in the {\it TESS}\ band. This offset from the target star magnitude gives the maximum magnitude that a blended star can have to mimic this transit depth. We convert the depth in the {\it TESS}\ passband to each filter (namely 562\,nm and 832\,nm for the Gemini/'Alopeke images and Ks for the rest) using simple conversions based on the TIC catalogue magnitudes, linking the 562\,nm filter to the SDSSr band, the 832\,nm filter to the SDSSz band, and the Ks band to the 2MASS Ks filter. The corresponding conversions imply $\Delta m_{\rm 562\,nm} = 0.954\Delta m_{\rm TESS}$, $\Delta m_{\rm 832\,nm} = 0.920\Delta m_{\rm TESS}$, and $\Delta m_{\rm Ks} = 0.919\Delta m_{\rm TESS}$. In Fig.~\ref{fig:bsc_zorro832} we show an example of the BSC calculation for the Keck/NIRC2 image that illustrates the method. We applied this technique to TOI-431.
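The depth-to-contrast conversion used here, $\Delta m_{\rm max} = -2.5\log\delta$, together with the quoted passband scalings, amounts to a few lines of arithmetic; a minimal sketch (the 1000\,ppm transit depth is an illustrative value, not one of the measured depths):

```python
import math

def max_blend_contrast(depth):
    """Faintest blend (in mag) that could still mimic a transit of
    fractional depth `depth`: Delta m_max = -2.5 log10(depth)."""
    return -2.5 * math.log10(depth)

# passband scalings quoted in the text (TESS-band contrast -> other filters)
SCALE = {"562nm": 0.954, "832nm": 0.920, "Ks": 0.919}

def contrast_in_band(depth, band):
    return SCALE[band] * max_blend_contrast(depth)

# illustrative 1000 ppm transit depth
dm_tess = max_blend_contrast(1e-3)    # 7.5 mag in the TESS band
dm_ks = contrast_in_band(1e-3, "Ks")  # ~6.9 mag in Ks
```

Any neighbour fainter than this contrast is too faint to reproduce the observed depth even if it were a total eclipse, which is what sets the shaded search region in Fig.~\ref{fig:bsc_zorro832}.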
The transits of the two planets in this system could be mimicked by blended eclipsing binaries with magnitude contrasts up to $\Delta m_{\rm b,max} = 6.65$~mag and $\Delta m_{\rm d,max} = 8.76$~mag in the {\it TESS}\ passband. This analysis is then especially relevant for the smallest planet in the system, as the probability of a chance-aligned star increases rapidly with fainter magnitudes. However, the high quality of the high-spatial resolution images provides a very low probability for an undetected source capable of mimicking the transit signal. For TOI-431\,b, we find 0.034 per cent ('Alopeke/562\,nm), 0.019 per cent ('Alopeke/832\,nm), 0.13 per cent (Keck/NIRC2/Ks), and 0.54 per cent (Gemini-North/NIRI/Ks). For TOI-431\,d we find 0.009 per cent ('Alopeke/562\,nm), 0.002 per cent ('Alopeke/832\,nm), 0.04 per cent (Keck/NIRC2/Ks), and 0.16 per cent (Gemini-North/NIRI/Ks). \subsection{Stellar analysis} \begin{center} \begin{table} \caption{Stellar parameters for TOI-431. Section references describing the method used to find the parameters are given in the Table footer.} \label{tab:stellarparams} \begin{threeparttable} \begin{tabularx}{\columnwidth}{ l l X } \hline \textbf{Parameter (unit)} & \textbf{Value} & \textbf{Ref} \\ \hline Effective temperature $T_{\rm eff}$ (K) & 4850 $\pm$ 75 & 1 \\ Surface gravity $\log g$ (cgs) & 4.60 $\pm$ 0.06 & 1 \\ Microturbulence $V_{\rm tur, mic}$ (km s$^{-1}$) & 0.8 $\pm$ 0.1 (fixed) & 1 \\ Macroscopic turbulence $V_{\rm tur, mac}$ (km s$^{-1}$) & 0.5 $\pm$ 0.1 (fixed) & 1 \\ Bolometric flux $F_{bol}$ (10$^{-9}$ erg s$^{-1}$ cm$^{-2}$) & 7.98 $\pm$ 0.19 & 2 \\ Stellar radius $\mbox{R$_{*}$}$ (\mbox{R$_{\odot}$}) & 0.731 $\pm$ 0.022 & 2 \\ Stellar mass $\mbox{M$_{*}$}$ (\mbox{M$_{\odot}$}) & 0.78 $\pm$ 0.07 & 2 \\ Rotation period $P_{\rm rot}$ (days) & 30.5 $\pm$ 0.7 & 3 \\ \vsini\ (km s$^{-1}$) & 2.5 $\pm$ 0.6 & 1 \\ \hline \textbf{Chemical Abundances (dex)} & \textbf{Value} & \textbf{Ref} \\ \hline Metallicity $[$Fe/H$]$ & 0.20 $\pm$ 0.05 & 1
\\ $[$NaI/H$]$ & 0.22 $\pm$ 0.14 & 4 \\ $[$MgI/H$]$ & 0.10 $\pm$ 0.07 & 4 \\ $[$AlI/H$]$ & 0.21 $\pm$ 0.10 & 4 \\ $[$SiI/H$]$ & 0.11 $\pm$ 0.13 & 4 \\ $[$CaI/H$]$ & 0.06 $\pm$ 0.15 & 4 \\ $[$TiI/H$]$ & 0.17 $\pm$ 0.17 & 4 \\ $[$CrI/H$]$ & 0.12 $\pm$ 0.11 & 4 \\ $[$NiI/H$]$ & 0.14 $\pm$ 0.08 & 4 \\ \hline \end{tabularx} \begin{tablenotes} \item 1: Section \ref{sec:spectroscopic-malcolm} \item 2: Section \ref{sec:SED-keivan} \item 3: From WASP-South, see Section \ref{sec:stellaractivity} \item 4: Section \ref{sec:spectroscopic-vardan} \end{tablenotes} \end{threeparttable} \end{table} \end{center} The parameters of the host star are required in order to derive precise values for the planetary ages, as well as the masses and radii, leading to bulk densities. This requires a good spectrum with a high enough signal-to-noise ratio and high spectral resolution. Our radial velocity spectra fulfil these requirements after co-adding the 124 individual HARPS spectra, resulting in a spectrum with a signal-to-noise ratio of about 380 per pixel at 5950\,\AA. We perform two independent spectroscopic analysis methods to derive the host star parameters, as well as SED fitting. \bigskip \subsubsection{Method 1: equivalent widths with ARES+MOOG:} \label{sec:spectroscopic-vardan} The stellar atmospheric parameters ($T_{\mathrm{eff}}$, $\log g$, microturbulence, and [Fe/H]) and respective error bars were derived using the methodology described in \citet[][]{Sousa-14, Santos-13}. In brief, we make use of the equivalent widths (EW) of iron lines, as measured in the combined HARPS spectrum of TOI-431 using the ARES v2 code\footnote{The last version of ARES code (ARES v2) can be downloaded at http://www.astro.up.pt/$\sim$sousasag/ares.} \citep{Sousa-15}, and we assume ionization and excitation equilibrium. The process makes use of a grid of Kurucz model atmospheres \citep{Kurucz-93} and the radiative transfer code MOOG \citep{Sneden-73}.
This analysis results in values of effective temperature $T_{\mathrm{eff}} = 4740 \pm 94$\,K, surface gravity $\log g = 4.20 \pm 0.27$, microturbulence V$_{\mathrm{tur}} = 0.62 \pm 0.28$, and metallicity [Fe/H] $= 0.06 \pm 0.04$\,dex. The value for $\log g$ can be corrected according to \citet{Mortier2014}, to give $4.46 \pm 0.27$ (using the asteroseismic $\log g$ calibration) and $4.63 \pm 0.28$ (using the transit $\log g$ calibration). Stellar abundances of the elements were derived using the classical curve-of-growth analysis method assuming local thermodynamic equilibrium \citep[e.g.][]{Adibekyan-12, Adibekyan-15, Delgado-17}. For the abundance determinations we used the same tools and models as for the stellar parameter determination. Unfortunately, due to the low $T_{\rm eff}$\ of this star, we could not determine reliable abundances of carbon and oxygen. The derived abundances are presented in Table \ref{tab:stellarparams}, and they are normal for a star with a metallicity close to solar. In addition, we derived an estimated age by using the ratios of certain elements (the so-called chemical clocks) and the formulas presented in \citet{Delgado-19}. Since this star has a close to solar metallicity and is very cool (and thus probably outside the applicability limits of formulas using stellar parameters in addition to the chemical clock), we chose to use the 1D formulas presented in Table 5 of \citet{Delgado-19}. Due to the high error in the Sr abundances, we derived ages only from the abundance ratios [Y/Mg], [Y/Zn], [Y/Ti], [Y/Si], [Mg/Fe], [Ti/Fe], [Si/Fe] and [Zn/Fe]. The abundance errors of cool stars are quite large, and in turn the individual age errors of each chemical clock are also large ($\gtrsim$\,3\,Gyr), but the dispersion among them is smaller. We obtained a weighted average age of 5.1\,$\pm$\,0.6\,Gyr, which is significantly older than the age obtained in Section \ref{sec:SED-keivan}.
Nevertheless, we note that ages for very cool stars obtained from chemical clocks are affected by large errors and must be taken with caution. \subsubsection{Method 2: synthesis of the entire optical spectrum} \label{sec:spectroscopic-malcolm} We also derived stellar properties by analysing parts of the optical spectrum in a different way, by comparing the normalized, co-added spectrum with modelled synthetic spectra obtained with the Spectroscopy Made Easy (SME) package \citep{1996A&AS..118..595V,2017A&A...597A..16P} version 5.22, with atomic parameters from the VALD database \citep{1995A&AS..112..525P}. The 1-D, plane-parallel LTE synthetic spectra are calculated using stellar parameters obtained from either photometry or a visual inspection of the spectrum as a starting point. The synthetic spectrum is then automatically compared to a grid of stellar atmospheric models. The grid we used in this case is based on the MARCS models \citep{2008A&A...486..951G}. An iterative $\chi^2$ minimization procedure is followed until no improvement is achieved. We refer to recent papers, e.g., \citet{2018A&A...618A..33P} and \citet{2008A&A...486..951G}, for details about the method. In order to limit the number of free parameters, we used empirical calibrations for the $V_{\rm mic}$\ and $v_{\rm mac}$\ turbulence velocities \citep{2010MNRAS.405.1907B, 2014MNRAS.444.3592D}. The value of $T_{\rm eff}$\ was determined from fitting the Balmer \halpha~line wings. We used the derived $T_{\rm eff}$\ to fit a large sample of Fe\,{\sc I}, Mg\,{\sc I} and Ca\,{\sc I} lines, all with well established atomic parameters, in order to derive the abundance, [Fe/H], the rotation, and the surface gravity, log\,{\it g$_\star$}. We found the star to be slowly rotating, with \vsini~=\,2.5~$\pm$\,0.6 km\,s$^{-1}$. The star is cool, and the effective temperature as derived from the \halpha~line wings is $T_{\rm eff}$\,=\,$4846 \pm 73$~K.
Using this value for $T_{\rm eff}$\ we found the [Fe/H]~to be $0.20\,\pm\,0.05$ and the surface gravity log\,{\it g$_\star$}~to be $4.60\,\pm\,0.06$ (Table~\ref{tab:stellarparams}). In order to check our result, we also analysed the same co-added spectrum using the public software package SpecMatch-Emp \citep{2017ApJ...836...77Y}. This program extracts part of the spectrum and attempts to match it to a library of about 400 well-characterized spectra of all types. Our input spectrum has to conform to the format of SpecMatch-Emp, and we refer to \citet{2018AJ....155..127H} for a description of our procedure for doing this. We derive a $T_{\rm eff}$\ of $4776 \pm 110$~K, an iron abundance of [Fe/H] = $0.15 \pm 0.09$ dex, and a stellar radius of $R_\star\,=\,0.76\,\pm\,0.18$~\mbox{R$_{\odot}$}. The former two values are in good agreement with the results from the SME analysis. Because of the higher precision of the SME analysis, the final adopted value of $T_{\rm eff}$\ for TOI-431 is \mbox{$4850\,\pm\,75$~K}. Note that the error here reflects only the internal errors of the spectral synthesis, and does not include the inherent errors of the model grid itself, nor those introduced by using 1-D models. The results from this method are in agreement with those found in Section \ref{sec:spectroscopic-vardan}, with $T_{\rm{eff}}$ and [Fe/H] (using SpecMatch-Emp) agreeing within error. The value for $\log g$ also agrees with the corrected $\log g$ values from the previous method. We therefore adopt the results from this method to take forward. \subsubsection{SED fitting} \label{sec:SED-keivan} \begin{figure} \centering \includegraphics[width=\columnwidth]{1.Observations/toi_431_sed_crop.pdf} \caption{Spectral energy distribution (SED) of TOI-431. Red symbols represent the observed photometric measurements, where the horizontal bars represent the effective width of the passband.
Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).} \label{fig:sed} \end{figure} As an independent check on the derived stellar parameters, and in order to determine an estimate for the stellar age, we performed an analysis of the broadband spectral energy distribution (SED). Together with the {\it Gaia\/} EDR3 parallax, we determine an empirical measurement of the stellar radius following the procedures described in \citet{Stassun2016,Stassun2017,Stassun2018}. We pulled the $B_T V_T$ magnitudes from {\it Tycho-2}, the $grizy$ magnitudes from Pan-STARRS, the $JHK_S$ magnitudes from {\it 2MASS}, the W1--W4 magnitudes from {\it WISE}, and the $G G_{\rm RP} G_{\rm BP}$ magnitudes from {\it Gaia}. Together, the available photometry spans the full stellar SED over the wavelength range 0.35--22~$\mu$m (see Figure~\ref{fig:sed}). In addition, we pulled the NUV flux from {\it GALEX} in order to assess the level of chromospheric activity, if any. We performed a fit using Kurucz stellar atmosphere models, with the effective temperature ($T_{\rm eff}$) and metallicity ([Fe/H]) adopted from the spectroscopic analysis (Section \ref{sec:spectroscopic-malcolm}). The extinction ($A_V$) was set to zero because the star is very nearby (Table \ref{tab:toi-431}). The resulting fit is excellent (Figure~\ref{fig:sed}) with a reduced $\chi^2$ of 3.3 (excluding the {\it GALEX} NUV flux, which is consistent with a modest level of chromospheric activity; see below). Integrating the (unreddened) model SED gives the bolometric flux at Earth of $F_{\rm bol} = (7.98 \pm 0.19) \times 10^{-9}$ erg~s$^{-1}$~cm$^{-2}$. Taking the $F_{\rm bol}$ and $T_{\rm eff}$ together with the {\it Gaia\/} EDR3 parallax, with no systematic offset applied \citep[see, e.g.,][]{StassunTorres:2021}, gives the stellar radius as $R = 0.731 \pm 0.022$~R$_\odot$.
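Numerically, the empirical radius follows from the Stefan--Boltzmann law, $R = d\,[F_{\rm bol}/(\sigma_{\rm SB}T_{\rm eff}^{4})]^{1/2}$. The sketch below works in cgs units and assumes a distance of $\approx 32.6$\,pc, since the parallax itself is not quoted here (that distance is therefore an illustrative value); it also cross-checks the mass implied by the spectroscopic $\log g$ via $M = gR^{2}/G$.

```python
import math

# physical constants (cgs)
SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
G = 6.674e-8           # gravitational constant, cm^3 g^-1 s^-2
R_SUN = 6.957e10       # solar radius, cm
M_SUN = 1.989e33       # solar mass, g
PC = 3.0857e18         # parsec, cm

f_bol = 7.98e-9        # bolometric flux at Earth from the SED fit, erg s^-1 cm^-2
t_eff = 4850.0         # adopted effective temperature, K
d = 32.6 * PC          # ASSUMED distance (illustrative; parallax not quoted here)

# R = d * sqrt(F_bol / (sigma_SB * T_eff^4))
r_star = d * math.sqrt(f_bol / (SIGMA_SB * t_eff**4))   # ~0.73 R_sun

# mass implied by the spectroscopic log g = 4.60 (cgs): M = g R^2 / G
m_star = 10**4.60 * r_star**2 / G                       # ~0.77 M_sun
```

With these inputs the sketch reproduces the quoted $R = 0.731 \pm 0.022$~R$_\odot$ and a mass consistent with $0.78 \pm 0.07$~M$_\odot$, showing the two estimates are mutually consistent.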
Finally, estimating the stellar mass from the empirical relations of \citet{Torres2010} and a 6\% error from the empirical relation itself gives $M = 0.77 \pm 0.05 M_\odot$, whereas the mass estimated empirically from the stellar radius together with the spectroscopic $\log g$ gives $M = 0.78 \pm 0.07 M_\odot$. We can also estimate the stellar age by taking advantage of the observed chromospheric activity together with empirical age-activity-rotation relations. For example, taking the chromospheric activity indicator $\log R'_{HK} = -4.69 \pm 0.05$ from the archival HARPS data and applying the empirical relations of \citet{Mamajek2008} gives a predicted age of $1.9 \pm 0.3$~Gyr. In addition, we can further corroborate the activity-based age estimate by also using empirical relations to predict the stellar rotation period from the activity. For example, the empirical relation between $R'_{HK}$ and rotation period from \citet{Mamajek2008} predicts a rotation period for this star of $29.8 \pm 3.7$~d, which is compatible with the rotation period inferred from the WASP-South observations (see Section \ref{sec:stellaractivity}). All of the stellar parameter values derived in this section can also be found in Table \ref{tab:stellarparams}. \subsection{Stellar activity monitoring} \label{sec:stellaractivity} \begin{figure} \includegraphics[width=8cm]{1.Observations/toi431_wasp.pdf} \caption{The periodogram of the WASP-South data for TOI-431 from 2012--2014. The orange tick is at 30.5\,d, while the horizontal line is at the estimated 1 per cent false-alarm probability.} \label{fig:wasp} \end{figure} Two instruments were used during different time periods to monitor TOI-431 in order to investigate the rotation period of the star. This is important for disentangling the effect of stellar activity when fitting for any planets present in the system. WASP-South, located in Sutherland, South Africa, was the southern station of the WASP transit survey \citep{2006PASP..118.1407P}.
The data reported here were obtained while WASP-South was operating as an array of 85-mm, f/1.2 lenses backed by $2048\times2048$ CCDs, giving a plate scale of $32\arcsec$/pixel. The observations spanned 180 days in 2012, 175 days in 2013 and 130 days in 2014. Observations on clear nights, with a typical 10-min cadence, accumulated 52\,800 photometric data points. We searched the datasets for rotational modulations, both separately and by combining the three years, using the methods described by \citet{2011PASP..123..547M}. We detect a persistent modulation with an amplitude of 3 mmag and a period of 30.5 $\pm$ 0.7 d (where the error makes allowance for phase shifts caused by changing starspot patterns). The periodogram from the combined 2012--2014 data is shown in Fig.~\ref{fig:wasp}. The modulation is significant at the 99.9\%\ level (estimated using methods from \citealt{2011PASP..123..547M}). In principle, it could be caused by any star in the 112$\arcsec$ photometric extraction aperture, but all the other stars are more than 4 magnitudes fainter. Given the near-30-day timescale, we need to consider the possibility of contamination by moonlight. To check this, we made identical analyses of the light curves of 5 other stars of similar brightness nearby in the same field. None of these show the 30.5\,d periodicity. A single NGTS telescope was used to monitor TOI-431 between the dates of 2019 October 11 and 2020 January 20. During this time period a total of 79011 images were taken with an exposure time of 10\,seconds using the custom NGTS filter (520 - 890\,nm). These data show a significant periodicity at 15.5 days, approximately half the period of the WASP-South modulation. As the WASP-South period agrees with the activity signal we see in the HARPS data (see Fig. \ref{fig:RVlombscargle}), we therefore take the 30.5\,d period value forward.
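The rotational-modulation search can be illustrated with a classical Lomb--Scargle periodogram. The sketch below uses synthetic photometry with an injected 3-mmag, 30.5-d signal (an illustrative stand-in for the WASP-South light curve, not the actual pipeline of \citealt{2011PASP..123..547M}) and recovers the injected period:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic photometry: a 3-mmag, 30.5-d modulation plus white noise,
# irregularly sampled over ~485 days (illustrative, not the real data).
n = 2000
t = np.sort(rng.uniform(0.0, 485.0, n))              # days
y = 3e-3 * np.sin(2 * np.pi * t / 30.5) + rng.normal(0.0, 5e-3, n)
y -= y.mean()

# Classical Lomb-Scargle periodogram over a grid of trial periods
periods = np.linspace(10.0, 60.0, 2000)
power = np.empty_like(periods)
for k, P in enumerate(periods):
    w = 2 * np.pi / P
    tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
    c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
    power[k] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))

best_period = periods[np.argmax(power)]
print(f"peak at {best_period:.1f} d")  # ~30.5 d
```

In practice one would also estimate false-alarm probabilities, e.g. by bootstrap shuffling of the fluxes, as done for Fig.~\ref{fig:wasp}.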
\subsection{The third planet found in the HARPS data} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{2.Methods/Detrending.pdf} \caption{Periodograms for the HARPS data, where (going from top to bottom) the highest power peak has been sequentially removed until there is no power left. The best fit periods (see Table \ref{tab:planetparams}) of TOI-431\,b (yellow), c (red), and d (blue), have been denoted by dotted lines, and the 1 standard deviation interval of the rotation period of the star has been shaded in green. The periodogram for the raw RV data is shown in panel (a); (b) has the stellar activity GP model removed; (c) has the best fit model for planet d removed also. Panel (d) has planet b removed, meaning that there should be no further power left. However, there is a peak evident at 4.85\,days above the 0.1 per cent FAP that does not correlate with any stellar activity indicators, and it is not an alias of any other peaks. Taking this as an extra planet in the system (TOI-431\,c) and removing the best fit model for this leaves a periodogram with no further signals, shown in panel (e).} \label{fig:RVlombscargle} \end{figure*} We initially ran a joint fit which included only the planets flagged by the {\it TESS}\ pipelines, i.e. TOI-431\,b and d. We then removed the signals of these planets from the raw HARPS\ radial velocities, and examined the residuals. This led to the discovery of an independent sinusoidal signal, seen as a significant peak in a periodogram of the residuals. This is shown in Fig. \ref{fig:RVlombscargle}: in the periodogram of the raw RV data produced on DACE\footnote{The DACE platform is available at \url{https://dace.unige.ch}}, signals from TOI-431\,b and d can be seen at 0.491 and 12.57\,d respectively, with false-alarm probabilities (FAP) of $< 0.1$ per cent. A large signal can also be seen at 29.06\,d; this is near the rotation period of the star found with WASP-South (see Section \ref{sec:stellaractivity}).
Removing the fit for these two planets and the stellar activity reveals another signal at 4.85\,d which does not correlate with any of the activity indicators (FWHM, BIS, S-Index and H$\alpha$-Index; see Fig. \ref{fig:activityindicators} for periodograms of these indicators for both the current and archival HARPS data), and which is not an alias of the other planetary signals. Phase folding the {\it TESS}\ photometry on the RV period reveals no transit (see Fig. \ref{fig:TESS}, bottom plot, middle panel). We also attempted to use Transit Least Squares \citep[TLS,][]{Hippke2019} to recover this planet; it did not return any evidence of a transit at or near the RV period. As this planet is not evident in the {\it TESS}\ data, but is large enough to be detectable (see Section \ref{sec:fitresults}), we therefore make the assumption that it does not transit. As such, we conclude that this is a further, apparently non-transiting planet, and include it in the final joint fit model (described in Section \ref{sec:jointfitmodel}) when fitting the RV data. \subsection{Construction of the joint fit model} \label{sec:jointfitmodel} Using the \textsc{exoplanet} package \citep{exoplanet:exoplanet}, we fit the photometry from {\it TESS}, LCOGT, NGTS, and \textit{Spitzer}\ and the RVs from HARPS\ and HIRES simultaneously with Gaussian Processes (GPs) to remove the effects of stellar variability. \textsc{exoplanet} utilises the light curve modelling package \textsc{Starry} \citep{exoplanet:luger18}, \textsc{PyMC3} \citep{exoplanet:pymc3}, and \textsc{celerite} \citep{exoplanet:celerite} to incorporate GPs. While we use a GP kernel included in the \textsc{exoplanet} package for the {\it TESS}\ data, we construct our own GP kernel using \textsc{PyMC3} for the HARPS\ and HIRES data. For consistency, all timestamps were converted to the same time system used by {\it TESS}, i.e. BJD\,-\,2457000. 
All prior distributions set on the fit parameters of this model are given in Table \ref{tab:furtherfitparams}. \subsubsection{Photometry} \label{sec:fitphotom} The flux is normalised to zero for all of the photometry by dividing the individual light curves by the median of their out-of-transit points and subtracting one. To model the planetary transits, we used a limb-darkened transit model following the \citet{Kipping2013} quadratic limb-darkening parameterisation, and Keplerian orbit models. This Keplerian orbit model is parameterised for each planet individually in terms of the stellar radius $R_*$ in solar radii, the stellar mass $M_*$ in solar masses, the orbital period $P$ in days, the time of a reference transit $t_0$, the impact parameter $b$, the eccentricity $e$, and the argument of periastron $\omega$. While a similar Keplerian orbit model is parameterised for the third planet, $b$ is not defined in this case as no transit is seen in the photometric data. We find the eccentricity of all planets to be consistent with 0: when eccentricity was a fit parameter in an earlier run of this model, we found the 95 per cent confidence intervals for the eccentricity of TOI-431\,b, c and d to be 0 to 0.28, 0 to 0.22, and 0 to 0.31 respectively. Therefore, we fix $e$ and $\omega$ to 0 for all planets in the final joint fit model. These parameters are then input into light curve models created with \textsc{Starry}, together with parameters for the planetary radii $R_p$, the time series of the data $t$, and the exposure time $t_{\mathrm{exp}}$ of the instrument. As we are modelling multiple planets and multiple instruments with different $t_{\mathrm{exp}}$, a separate light curve model is thus created per instrument for the planets that are expected to have a transit event during that data set. In some cases, TOI-431\,b and d will have model light curves (e.g. in the {\it TESS}\ and \textit{Spitzer}\ observations); in others (e.g.
the LCOGT and NGTS observations), only TOI-431\,d is expected to be transiting. TOI-431\,c is not seen to transit, therefore we do not need to model it in this way. We use values from the {\it TESS}\ pipelines to inform our priors on the epochs, periods, transit depths and radii of the transiting planets. \bigskip \textbf{\textit{TESS}} Both TOI-431\,b and d are transiting in the {\it TESS}\ light curve, so we first create model light curves for each using \textsc{Starry}. As seen in Fig. \ref{fig:TESS}, the {\it TESS}\ Sector 5 and 6 light curves show some stellar variability. This variability was thus modelled with the SHOTerm GP given in \textsc{exoplanet}\footnote{\url{https://docs.exoplanet.codes/en/stable/user/api/\#exoplanet.gp.terms.SHOTerm}}, which represents a stochastically-driven, damped harmonic oscillator. We set this up using the hyperparameters $\log{(s2)}$, $\log{(Sw4)}$, $\log{(w0)}$, and $Q$. The parameter $Q$ was fixed to $1/\sqrt{2}$. Priors on $\log{(s2)}$ and $\log{(Sw4)}$ were set as normal distributions with a mean equal to the log of the variance of the flux and a standard deviation of 0.1. The prior on $\log{(w0)}$ was also set as a normal distribution but with a mean of 0 and a standard deviation of 0.1 (see Table \ref{tab:furtherfitparams}). We then take the sum of our model light curves and subtract these from the total PDCSAP flux, and this resultant transit-free light curve is the data that the GP is trained on to remove the stellar variability. The GP model can be seen in Fig. \ref{fig:TESS} (top plot, top panel), and the resultant best fit model in the middle panel. Further to this, phase folds of the {\it TESS}\ data for all planets in the system can also be seen in Fig. \ref{fig:TESS} (bottom plot), where TOI-431\,c has been folded on its period determined from the radial velocity data, and no dip indicative of a transit can be seen.
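The SHOTerm kernel is defined by its power spectral density. The sketch below writes out that PSD (following the celerite definition used by \textsc{exoplanet}) and shows why $Q = 1/\sqrt{2}$ is a natural choice for stellar variability: the denominator reduces to $\omega^4 + \omega_0^4$, giving a smooth, non-resonant spectrum.

```python
import numpy as np

def sho_psd(omega, S0, w0, Q):
    """PSD of a stochastically driven, damped harmonic oscillator (celerite SHOTerm)."""
    return np.sqrt(2.0 / np.pi) * S0 * w0**4 / ((omega**2 - w0**2)**2 + (w0 * omega / Q)**2)

S0, w0, Q = 1.0, 1.0, 1.0 / np.sqrt(2.0)
# For Q = 1/sqrt(2) there is no resonant peak, and the PSD at omega = w0
# is exactly half of its low-frequency value.
ratio = sho_psd(w0, S0, w0, Q) / sho_psd(0.0, S0, w0, Q)
print(ratio)  # 0.5
```

The fitted hyperparameters $\log{(s2)}$, $\log{(Sw4)}$, and $\log{(w0)}$ map onto $S_0$ and $\omega_0$ above (plus a white-noise term); the exact parameterisation is that of the \textsc{exoplanet} documentation.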
\bigskip \textbf{LCOGT} \begin{figure} \centering \includegraphics[width=\columnwidth]{2.Methods/plot_LCO_NGTS.pdf} \caption{Best fit models of TOI-431\,d to the LCOGT ingress (top), egress (middle) and NGTS light curves (bottom). In the LCOGT panels (top and middle), the observed flux is shown as light grey circles, the binned flux as red circles. In the NGTS panel (bottom), the flux is binned to 2 minute intervals in light grey. In all panels, the fit model is given as the blue line, solid where there are photometry points and dashed where there are not.} \label{fig:LCO-NGTS} \end{figure} No further detrending beyond that outlined in Section \ref{LCOphotom} was applied to the LCOGT data. Only TOI-431\,d is transiting in this data, so we create a model light curve of TOI-431\,d using \textsc{Starry} (as outlined above) per LCOGT dataset to produce two model light curves overall, as there are two transit events (an ingress and an egress) on separate nights. For each dataset, we use a normal prior with the model light curve as the mean and a standard deviation set to the error on the LCOGT data points, and this is then compared to the observed light curve. The best fit model for both the ingress and egress data is shown in Fig. \ref{fig:LCO-NGTS} (top 2 panels). \bigskip \textbf{NGTS} No further detrending was needed for the NGTS data after the pipeline reduction outlined in Section \ref{NGTSphotom}, and again, only TOI-431\,d is evident in this data. Thus the same simple method used for the LCOGT data above is also applied here, creating a single model light curve of TOI-431\,d for the NGTS data and comparing this to the observed light curve, with a standard deviation set to the error on the NGTS data points. The best fit model for the NGTS data is shown in Fig. \ref{fig:LCO-NGTS} (bottom panel).
\bigskip \textbf{\textit{Spitzer} and Pixel Level Decorrelation} \begin{figure} \centering \includegraphics[width=\columnwidth]{2.Methods/plot_spitzer.pdf} \caption{The \textit{Spitzer}\ double-transit. \textit{Top:} the raw \textit{Spitzer}\ data, without any PLD applied. \textit{Middle:} the \textit{Spitzer}\ light curve detrended with PLD in grey and binned as red circles, with the best fit models of planet\,b (orange) and d (blue) overlaid. \textit{Bottom:} the residuals when the best fit model has been subtracted from the detrended flux.} \label{fig:spitzer} \end{figure} For the \textit{Spitzer}\ double-transit observation, model light curves are created for both TOI-431\,b and d. \textit{Spitzer}\ data is given as $N$ pixel values on a grid; in this instance, the grid is $3\times3$ pixels as in figure 1 of \citet{Deming2015}. We follow the Pixel Level Decorrelation (PLD) method of \citet{Deming2015} (summarised below) to remove the systematic effect caused by intra-pixel sensitivity variations. Together with pointing jitter, these variations mask the eclipses of exoplanets in the photometry with intensity fluctuations that must be removed. We outline our PLD implementation as follows. First, the intensity of pixel $i$ at each time step $t$, i.e. $P_i^t$, is normalised such that the sum of the 9 pixels at one time step is unity, thus removing any astrophysical variations: \begin{equation} \hat{P}_i^t = \frac{P_i^t}{\sum^N_{i=1} P_i^t}. \end{equation} PLD makes the simplification that the total flux observed can be expressed as a linear equation: \begin{equation} \Delta S^t = \sum_{i}^{N} c_i \hat{P}_i^t + DE(t) + ft + gt^2 + h, \label{equ:PLD} \end{equation} where $\Delta S^t$ is the total fluctuation from all sources. The normalised pixel intensities are multiplied by some coefficient $c_i$, and summed with the eclipse model $DE(t)$, a quadratic function of time $ft + gt^2$ which represents the time-dependent ``ramp'', and an offset constant $h$.
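Because the model above is linear in its coefficients, it can be solved by ordinary least squares once the normalised pixel series, the eclipse shape, the ramp terms and the offset are stacked into a design matrix. A self-contained numpy sketch with synthetic data (a toy box-shaped transit and simulated pointing jitter, not the actual \textit{Spitzer}\ pipeline) recovers an injected eclipse depth:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, npix = 500, 9
t = np.linspace(0.0, 1.0, nt)

# Synthetic raw pixel time series: fixed PSF weights modulated by pointing jitter
w = rng.uniform(0.5, 1.5, npix)
jitter = 1.0 + 0.01 * np.sin(7.0 * t)[:, None] * rng.uniform(-1.0, 1.0, npix)
P = w * jitter                                    # shape (nt, npix)
P_hat = P / P.sum(axis=1, keepdims=True)          # normalised pixel intensities

E = np.where((t > 0.4) & (t < 0.6), -1.0, 0.0)    # toy box-shaped transit E(t)
D_true = 7e-3                                     # injected depth
c_true = rng.uniform(-1e-2, 1e-2, npix)
dS = (P_hat @ c_true + D_true * E + 2e-3 * t + 1e-3 * t**2 + 5e-4
      + rng.normal(0.0, 1e-4, nt))

# Design matrix: 9 pixel columns, transit shape, ramp terms, offset
X = np.column_stack([P_hat, E, t, t**2, np.ones(nt)])
coeffs, *_ = np.linalg.lstsq(X, dS, rcond=None)
D_fit = coeffs[npix]                              # coefficient multiplying E(t)
print(f"injected depth {D_true}, recovered {D_fit:.4f}")
```

Note that the normalised pixel columns sum to one at each time step, so they are collinear with the offset column; the minimum-norm least-squares solution handles this degeneracy, and the depth coefficient remains uniquely determined.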
We use the eclipse model set up earlier using \textsc{exoplanet} as $DE(t)$, where $D$ is the eclipse depth. This allows us to remove the intra-pixel effect, while solving for the eclipse amplitude and temporal baseline effects. Overall, the PLD alone has 14 free parameters that we solve for: 9 pixel coefficients, the depth of eclipse and the eclipse model, 2 time coefficients, and an offset term. We add an additional fit parameter by introducing a \textit{Spitzer}\ ``jitter'' term. We can estimate a prior for this fit parameter by removing our best fit model from the total raw flux from Spitzer, and calculating the standard deviation of the residual flux, which is approximately 337\,ppm. Our overall model for the \textit{Spitzer}\ data is the PLD terms multiplied by the sum of the individual light curve models for each planet, b and d. We use a normal distribution with this model as the mean and a standard deviation set by the jitter parameter, and this is fit to the observed \textit{Spitzer}\ flux. This can be seen in Fig. \ref{fig:spitzer}. \begin{figure*} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.86\textwidth]{2.Methods/plot_HARPS_top.pdf} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.86\textwidth]{2.Methods/plot_HARPS_bottom.pdf} \end{subfigure} \caption{RV data plots, where the HARPS data is denoted as grey circles, HIRES as red upside down triangles, iSHELL as pale orange triangles, FEROS as pale pink squares, and M\textsc{inerva}-Australis as pale turquoise diamonds. \textbf{Top plot:} the RV data, showing the GP and planet models that have been fit. \textit{Top:} the best-fit GP used to detrend the stellar activity in the HARPS data is shown as the green line. The green shaded areas represent the 1 and 2 standard deviations of the GP fit. \textit{Upper middle:} the separate models for each planet, b (orange, offset by +6\,m\,s$^{-1}$), c (red), and d (blue, offset by -6\,m\,s$^{-1}$). 
\textit{Lower middle:} the total model, representing the addition of the models for planets b, c, and d, is plotted in black, and overplotted is the HARPS and HIRES data. \textit{Bottom:} the residuals after the total model, GP and offsets have been subtracted from the RV data. \textbf{Bottom plot:} the phase folds for each planet model, b (left), c (middle), and d (right), with the RV data overplotted. The top row shows all of the RV data (where the GP has been subtracted from each data set), the middle just the HARPS and HIRES data, and the bottom the residuals when the planet models have been subtracted from the RVs.} \label{fig:RVs} \end{figure*} \subsubsection{RVs} \label{sec:HARPSGP} We do not include the iSHELL, FEROS, or M\textsc{inerva}-Australis RVs in our joint fit, as they were not found to improve the fit due to large error bars in comparison to the HARPS and HIRES data; however, they are shown to be consistent with the result of our fit (see Fig. \ref{fig:RVs}). We also do not include the archival HARPS data due to a large scatter in cadence and quality in comparison to the purpose-collected HARPS data. \textbf{HARPS and HIRES fitting} In this joint fit model, we fit the HARPS and HIRES data using the same method and so they are described here in tandem. We first find predicted values of radial velocity for each planet at each HARPS\ and HIRES timestamp using \textsc{exoplanet}. We set a wide uniform prior on $K$ for each planet, the uniform distributions centred upon $K$ values found when fitting the RV data with simple Keplerian models for all of the planets in DACE. We fit separate ``offset'' terms for HARPS and HIRES to model the systematic radial velocity, giving this a normal prior with a mean value predicted in DACE. We also fit separate ``jitter'' terms, setting wide normal priors on these, the means of which are set to double the log of the minimum error on the HARPS\ and HIRES data respectively.
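For the circular orbits adopted here, each planet contributes a pure sinusoid of semi-amplitude $K$ to the stellar reflex velocity, and the three-planet model is simply their sum. A minimal sketch (sign and phase conventions vary between codes, and the $t_0$ offsets below are illustrative only):

```python
import numpy as np

def rv_circular(t, K, P, t0=0.0, gamma=0.0):
    """Stellar reflex RV (m/s) for one planet on a circular orbit.

    Sign/phase conventions differ between codes; only the amplitude matters here.
    """
    return gamma - K * np.sin(2 * np.pi * (t - t0) / P)

t = np.linspace(0.0, 50.0, 20000)   # days
# Semi-amplitudes and periods from the joint fit (t0 offsets illustrative)
model = (rv_circular(t, 2.88, 0.490047)
         + rv_circular(t, 1.23, 4.8494)
         + rv_circular(t, 3.16, 12.46103))
```

In the actual fit these sinusoids are evaluated by \textsc{exoplanet}'s Keplerian orbit machinery rather than written out by hand, and the GP described below is added on top.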
The RV data also shows significant stellar variability due to stellar rotation, and so we model this variability using another GP (see Fig. \ref{fig:RVs}, top panel of top plot). This activity can be modelled as a Quasi-Periodic signal as starspots moving across the surface of the star evolve in time and are modulated by stellar rotation. In this case, we create our own Quasi-Periodic kernel using \textsc{PyMC3}, as no such kernel is available in \textsc{exoplanet}. \textsc{PyMC3} provides a range of simple kernels\footnote{\url{https://docs.pymc.io/api/gp/cov.html}} which are easy to combine. We use their \verb|Periodic|: \begin{equation} k(x,x') = \eta^2 \exp \left(-\frac{\sin^2(\pi |x-x'| \frac{1}{T})}{2l_p^2} \right), \end{equation} \noindent and \verb|ExpQuad| (squared exponential): \begin{equation} k(x,x') = \eta^2 \exp \left(-\frac{(x-x')^2}{2l_e^2} \right) \end{equation} \noindent kernels. The hyperparameters are $\eta$ (the amplitude of the GP), $T$ (the recurrence timescale, equivalent to the $P_{rot}$ of the star), $l_p$ (the smoothing parameter), and $l_e$ (the timescale for growth and decay of active regions) \citep[see e.g.][]{Rasmussen2006,Haywood2014,Grunblatt2015}. We multiply these kernels together to create our final Quasi-Periodic kernel: \begin{equation} k(x,x') = \eta^2 \exp \left(-\frac{\sin^2(\pi |x-x'| \frac{1}{T})}{2l_p^2} - \frac{(x-x')^2}{2l_e^2} \right). \end{equation} We use the same GP to fit the HARPS\ and HIRES data together using the same hyperparameters. We use a normal distribution with a mean equal to the rotation period of the star found by WASP-South (see Section \ref{sec:stellaractivity} and Table \ref{tab:stellarparams}) to set a wide prior on $T$. To bring everything together, we add the predicted radial velocities together with the offsets, and subtract these from their respective observed radial velocity values. 
This is then used as the prior on the GP, which is also given a noise term that is equal to an addition of the jitters with the squared error on the RV data. \subsection{Fit results} \label{sec:fitresults} We first use \textsc{exoplanet} to maximise the log probability of the \textsc{PyMC3} model. We then use the fit parameter values this obtains as the starting point of the \textsc{PyMC3} sampler, which draws samples from the posterior using a variant of Hamiltonian Monte Carlo, the No-U-Turn Sampler (NUTS). By examining the chains from earlier test runs of the model, we allow for 1000 burn-in samples which are discarded, and 5000 steps with 15 chains. We present our best fit parameters for the TOI-431 system from our joint fit in Table \ref{tab:planetparams}. TOI-431\,b is a super-Earth with a mass of $3.07^{+0.35}_{-0.34}$\,M$_{\oplus}$ and a radius of 1.28 $\pm$ 0.04\,R$_{\oplus}$, and from this we can infer a bulk density of $7.96^{+1.05}_{-0.99}$\,g~cm$^{-3}$. This puts TOI-431\,b below the radius gap, and it is likely a stripped core with no gaseous envelope. A period of 0.49\,days puts TOI-431\,b in the rare Ultra-Short Period (USP) planet category (defined simply as planets with $P < 1$\,day); examples of systems which have USP planets include Kepler-78 \citep{Winn2018}, WASP-47 \citep{Becker2015}, and 55 Cancri \citep{Dawson2010}. TOI-431\,c has a minimum mass of $2.83^{+0.41}_{-0.34}$\,M$_{\oplus}$, but the lack of transits does not allow us to fit a radius. We can use the mass-radius relation via \textsc{forecaster} \citep{Chen2017} to estimate a radius of $1.44^{+0.60}_{-0.34}$\,R$_{\oplus}$, which would place this planet as another super-Earth. TOI-431\,d is a sub-Neptune with a mass of $9.90^{+1.53}_{-1.49}$\,M$_{\oplus}$ and a radius of $3.29^{+0.09}_{-0.08}$\,R$_{\oplus}$, implying a bulk density of $1.36^{+0.25}_{-0.24}$\,g~cm$^{-3}$. This lower density implies that TOI-431\,d probably has a gaseous envelope.
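The bulk densities follow directly from the masses and radii by scaling from Earth's mean density ($\approx$5.51\,g\,cm$^{-3}$). A minimal sketch; note that point estimates computed from the median mass and radius differ slightly from the posterior-median densities quoted above, which are derived sample-by-sample:

```python
RHO_EARTH = 5.514  # mean density of Earth, g cm^-3

def bulk_density(mass_me, radius_re):
    """Bulk density (g cm^-3) for mass in Earth masses and radius in Earth radii."""
    return RHO_EARTH * mass_me / radius_re**3

rho_b = bulk_density(3.07, 1.28)   # ~8 g cm^-3: consistent with a stripped rocky core
rho_d = bulk_density(9.90, 3.29)   # ~1.5 g cm^-3: requires a substantial gaseous envelope
print(f"rho_b ~ {rho_b:.1f}, rho_d ~ {rho_d:.1f} g/cm^3")
```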
We further analyse these planets in the following section. \begin{center} \begin{table*} \caption{The parameters for the planets TOI-431\,b, c, and d, calculated from our joint fit model described fully in Section \ref{sec:jointfit}. The values are given as the median values of our samples, and the uncertainties are given as the 16th and 84th percentiles. The bulk densities are then calculated using the masses and radii, assuming a spherical planet of uniform density. A calculation of the radius of TOI-431\,c can be found in Section \ref{sec:fitresults}, and discussion of the inclinations of the planets can be found in Section \ref{sec:disc}. The equilibrium temperature is calculated assuming an albedo of zero. Further joint fit model parameters, in addition to those presented here, can be found in Appendix A.} \label{tab:planetparams} \begin{tabularx}{\textwidth}{ l >{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}X } \hline \textbf{Parameter} & \textbf{TOI-431\,b} & \textbf{TOI-431\,c} & \textbf{TOI-431\,d} \\ \hline Period $P$ (days) & $0.490047^{+0.000010}_{-0.000007}$ & $4.8494^{+0.0003}_{-0.0002}$ & 12.46103 $\pm$ 0.00002 \\ Semi-major axis $a$ (AU) & $0.0113^{+0.0002}_{-0.0003}$ & 0.052 $\pm$ 0.001 & 0.098 $\pm$ 0.002 \\ Ephemeris $t_0$ (BJD-2457000) & $1627.538^{+0.003}_{-0.002}$ & $1625.9 \pm 0.1$ & $1627.5453 \pm 0.0003$ \\ Radius $R_p$ (R$_{\oplus}$) & 1.28 $\pm$ 0.04 & - & $3.29 \pm 0.09$ \\ Impact parameter $b$ & $0.34^{+0.07}_{-0.06}$ & - & $0.15^{+0.12}_{-0.10}$ \\ Inclination $i$ (degrees) & $84.3^{+1.1}_{-1.3}$ & $< 86.35^{+0.04}_{-0.09}$ & $89.7 \pm 0.2$ \\ Eccentricity $e$ & 0 (fixed) & 0 (fixed) & 0 (fixed) \\ Argument of periastron $\omega$ & 0 (fixed) & 0 (fixed) & 0 (fixed) \\ Radial velocity semi-amplitude $K$ (m\,s$^{-1}$) & 2.88 $\pm$ 0.30 & $1.23^{+0.17}_{-0.14}$ & $3.16 \pm 0.46$ \\ Mass $M_p$ (M$_{\oplus}$) & $3.07 \pm 0.35$ & $2.83^{+0.41}_{-0.34}$ ($M \sin i$) & $9.90^{+1.53}_{-1.49}$ \\ Bulk density
$\rho$ (g\,cm$^{-3}$) & $8.0 \pm 1.0$ & - & $1.36 \pm 0.25$ \\ Equilibrium temperature $T_{eq}$ (K) & $1862 \pm 42$ & $867 \pm 20$ & 633 $\pm$ 14 \\ \hline \end{tabularx} \end{table*} \end{center} \subsection{Atmospheric evolution of TOI-431\,b and d} \begin{figure} \centering \includegraphics[width=\columnwidth]{4.Discussion/plot_radiusgap.pdf} \caption{A histogram of planet radius for planets with orbital periods less than 100 days, as given in \citet{Fulton2018}. The radius valley can be seen at 1.7\,R$_{\oplus}$: below the gap are rocky super-Earths, above the gap are gaseous sub-Neptunes. TOI-431\,b (orange, with 1\,$\sigma$ confidence intervals shaded) is the former, while TOI-431\,d (blue) is the latter.} \label{fig:radiusvalley} \end{figure} The architecture of this system is unusual in that the middle planet, TOI-431\,c, is non-transiting, while the inner and outer planets are both seen to transit. Examples of this can be seen in Kepler-20 \citep{Buchhave2016}, a 6-planet system where the fifth planet out from the star does not transit, but the sixth does, and HD\,3167 \citep{Vanderburg2016,Gandolfi2017,Christiansen2017}, a 3-planet system where the middle planet does not transit, as is the case with TOI-431. Using the impact parameter $b$ from Table \ref{tab:planetparams}, we calculate inclinations for TOI-431\,b and d of $(84.3^{+1.1}_{-1.3})^{\circ}$ and $(89.7 \pm 0.2)^{\circ}$, respectively (Table \ref{tab:planetparams}). We can calculate a limit on the inclination for TOI-431\,c assuming $b = 1$, which results in an inclination that must be $< (86.35^{+0.04}_{-0.09})^{\circ}$ in order for TOI-431\,c to be non-transiting. The TOI-431 system is a good target system for studying planetary evolution. TOI-431\,b and d reside on either side of the radius-period valley described in \citet{Fulton2017,Fulton2018,VanEylen2018} (see Fig. \ref{fig:radiusvalley}), providing a useful test-bed for the theorised mechanisms behind it.
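For a circular orbit the inclination follows from the impact parameter via $\cos i = b\,R_\star/a$. A sketch reproducing the values above from the fitted parameters (using $R_\star = 0.731$\,R$_\odot$; small offsets from the published values reflect rounding and the use of posterior medians rather than the full posterior):

```python
import numpy as np

AU = 1.496e13      # cm
RSUN = 6.957e10    # cm
R_star = 0.731 * RSUN

def inclination(b, a_au):
    """Orbital inclination (degrees) for a circular orbit: cos i = b R*/a."""
    return np.degrees(np.arccos(b * R_star / (a_au * AU)))

i_b = inclination(0.34, 0.0113)    # ~84 deg
i_d = inclination(0.15, 0.098)     # ~89.7 deg
i_c_max = inclination(1.0, 0.052)  # ~86.3 deg: c must have i below this to miss transit
print(f"i_b ~ {i_b:.1f}, i_d ~ {i_d:.1f}, i_c < {i_c_max:.2f} deg")
```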
X-ray and EUV-driven photoevaporation is one of the two main proposed mechanisms \citep{Owen2017}, and we investigated its effect both now and in the past in the TOI-431 system. As no direct X-ray observations of the system exist, we had to make use of empirical formulae relating the ratio of the X-ray and bolometric luminosities to age \citep{Jackson2012} and Rossby number (related to $P_{\rm rot}$; \citealt{Wright2011,Wright2018}). We extrapolate to the EUV using the relations of \citet{King2018}. Under the assumption of energy-limited escape \citep{Watson1981,Erkaev2007}, we estimate a current mass loss rate for TOI-431\,d between $5\times 10^8$ and $5\times 10^9$ g\,s$^{-1}$. The same assumptions yield a current rate of $10^{10}$ to $10^{11}$ g\,s$^{-1}$ for TOI-431\,b, but since that planet is unlikely to retain much, if any, atmosphere, the likely true rate is much lower. Integrating the \citet{Jackson2012} relations across the lifetime of the star, and again assuming energy-limited escape, we find lifetime-to-date mass-loss estimates of 44 per cent and 1.0 per cent for TOI-431\,b and d, respectively. Adding 2 per cent extra mass and doubling the radius to account for a primordial envelope around TOI-431\,b raises the lifetime loss to 94 per cent. Again, the true value will be lower as XUV photoevaporation will not affect the rocky core; rather, the estimates calculated here demonstrate that TOI-431\,b would easily have lost a typical envelope with a mass fraction of a few per cent. The value for TOI-431\,d is consistent with the density of the planet, which suggests it retains a substantial envelope. In order to characterize the composition of TOI-431\,b and TOI-431\,d, we model the interior considering a pure-iron core, a silicate mantle, a pure-water layer, and a H-He atmosphere.
The models follow the basic structure model of \citet{Dorn2017}, with the equation of state (EOS) of the iron core taken from \citet{Hakim2018}, the EOS of the silicate mantle from \citet{Connolly2009}, and SCVH \citep{Saumon1995} for the H-He envelope assuming protosolar composition. For water we use the QEOS of \citet{Vazan2013} for low pressures and that of \citet{Seager2007} for pressures above 44.3\,GPa. Fig. \ref{fig:MRplot} shows M-R curves tracing compositions of pure iron, Earth-like, pure water, and a planet with 95 per cent water and a 5 per cent H-He atmosphere subjected to a stellar irradiation of $F/F_{\oplus} = 50$ (comparable to the case of the TOI-431 planets), together with exoplanets that have accurate and reliable mass and radius determinations. It should be noted that the position of the water line in the diagram is very sensitive to the EOS used \citep[e.g.][]{Haldemann2020}. Fig. \ref{fig:MRplot} shows two water lines, using QEOS and the EOS from \citet{Sotin2007}. As shown in Fig. \ref{fig:MRplot}, TOI-431\,b is one of the many super-Earths following the Earth-like composition line. This suggests that it is mostly made of refractory materials. TOI-431\,d, instead, sits above the two pure-water curves and below the 5 per cent curve, implying that the H-He mass fraction is unlikely to exceed a few per cent. Its density is lower than most of the observed sub-Neptunes. There are three planets in the catalogue presented in \citet{Otegi2020} with masses below 10\,M$_{\oplus}$ and radii above 3\,R$_{\oplus}$ (Kepler-11\,d and e, and Kepler-36\,c), and all of their masses have been determined with TTVs. As shown in \citet{Otegi2020b}, reducing the uncertainties in this M-R regime would lead to significant improvements on the determination of the volatile envelope mass. As TOI-431 is in the ESPRESSO GTO target list, more observations will help to further constrain the internal structure of TOI-431\,d.
We then quantify the degeneracy between the different interior parameters and produce posterior probability distributions using a generalised Bayesian inference analysis with a Nested Sampling scheme \citep[e.g.][]{Buchner2014}. The interior parameters that are inferred include the masses of the pure-iron core, silicate mantle, water layer, and H-He atmosphere. For the analysis, we use the stellar Fe/Si and Mg/Si ratios as a proxy for the planet. Table \ref{tab:interior} lists the inferred mass fractions of the core, mantle, water-layer and H-He atmosphere from the interior models. It should be noted, however, that our estimates have rather large uncertainties. Indeed, in this regime of the M-R relation there is a large degeneracy, and therefore the mass ratio between the planetary layers is not well-constrained. Nevertheless, we find that TOI-431\,b has a negligible H-He envelope of $1.2\times10^{-9}\,M_{\oplus}$. The larger companion TOI-431\,d is expected to have a significant volatile layer of H-He and/or water of about 3.6 or 33 per cent of its total mass, respectively. The nature of the volatile layer is degenerate. \\ \begin{figure} \centering \includegraphics[width=\columnwidth]{4.Discussion/MR_plot.pdf} \caption{Mass-radius diagram of known exoplanets with mass determinations better than 4$\sigma$ from the NASA exoplanet archive (\url{https://exoplanetarchive.ipac.caltech.edu}, as of 22 September 2020) shown in grey. TOI-431\,b (orange) and d (blue) are denoted as diamonds, and the Solar System planets Venus (V), Earth (E), Uranus (U), and Neptune (N) are marked as black stars.
Also shown are the composition lines of iron (dark grey), Earth-like (green), and pure-water planets (pale blue and mid blue, using QEOS and EOS from \citet{Sotin2007} respectively), plus an additional line representing a planet with 95 per cent water and a 5 per cent H-He envelope with $F/F_{\oplus}=50$, comparable to the case of the TOI-431 planets (brown).} \label{fig:MRplot} \end{figure} \begin{table} \centering \caption{Inferred interior structure properties of TOI-431\,b and d.} \label{tab:interior} \begin{tabularx}{\columnwidth}{l l l} \hline \textbf{Interior Structure:} & \textbf{TOI-431\,b} & \textbf{TOI-431\,d} \\ \hline $M_{\rm{core}}/M_{\rm{total}}$ & $0.51^{+0.15}_{-0.14}$ & $0.29^{+0.16}_{-0.13}$ \\ $M_{\rm{mantle}}/M_{\rm{total}}$ & $0.37^{+0.27}_{-0.18}$ & $0.34^{+0.23}_{-0.12}$ \\ $M_{\rm{water}}/M_{\rm{total}}$ & $0.15^{+0.12}_{-0.09}$ & $0.33^{+0.21}_{-0.15}$ \\ $M_{\rm{H-He}}/M_{\rm{total}}$ & - & $0.036^{+0.012}_{-0.009}$ \\ \hline \end{tabularx} \end{table} Considering the future observation prospects of this system, for TOI-431\,d we calculate a transmission spectroscopy metric \citep[TSM;][]{Kempton2018} of $215\pm58$, after propagating the uncertainties on all system parameters. The relatively large uncertainty is dominated by the uncertainty on the planet's mass; nonetheless, this TSM value indicates that TOI-431\,d is likely among the best transmission spectroscopy targets known among small, cool exoplanets \citep[$<4 R_\oplus$, $<1000$~K; see Table 11 of][]{Guo2020}.
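The TSM can be reproduced directly from the system parameters. A minimal sketch, using Eq. 1 and the radius-binned scale factors of \citet{Kempton2018} as we recall them from that paper, and with illustrative placeholder inputs rather than the fitted TOI-431\,d parameters:

```python
# Transmission spectroscopy metric (TSM), Kempton et al. (2018), Eq. 1:
#   TSM = S * Rp^3 * Teq / (Mp * Rs^2) * 10^(-mJ/5)
# with Rp in Earth radii, Mp in Earth masses, Rs in solar radii,
# Teq the equilibrium temperature in K, and mJ the apparent J-band magnitude.

def tsm(rp, mp, teq, rs, mj):
    """TSM with the radius-dependent scale factor of Kempton et al. (2018), Table 1."""
    if rp < 1.5:
        s = 0.190
    elif rp < 2.75:
        s = 1.26
    elif rp < 4.0:
        s = 1.28
    else:
        s = 1.15
    return s * rp**3 * teq / (mp * rs**2) * 10 ** (-mj / 5.0)

# Illustrative placeholder inputs (not the fitted TOI-431 d parameters):
print(f"TSM = {tsm(rp=3.3, mp=9.0, teq=700.0, rs=0.7, mj=7.3):.0f}")
```

The quoted $215\pm58$ would then follow from propagating the parameter uncertainties through this expression, e.g. by Monte Carlo sampling of the posteriors.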
\section{Introduction}
\label{sec:intro}
\input 0.Intro/intro.tex
\section{Observations}
\label{sec:obs}
\input 1.Observations/observations.tex
\section{The Joint Fit}
\label{sec:jointfit}
\input 2.Methods/methods.tex
\section{Discussion}
\label{sec:disc}
\input 4.Discussion/discussion.tex
\section{Conclusion}
We have presented here the discovery of three new planets from the {\it TESS}\ mission in the TOI-431 system.
Our analysis is based upon 2-min cadence {\it TESS}\ observations from two sectors, ground-based follow-up from LCOGT and NGTS, and space-based follow-up from \textit{Spitzer}. The photometric data were modelled jointly with RV data from the HARPS spectrograph, and further RVs from iSHELL, FEROS, and M\textsc{inerva}-Australis are included in our analysis. We find evidence to suggest that the host star is rotating with a period of 30.5 days, and account for this in our joint-fit model. Nearby contaminating stellar companions are ruled out by multiple sources of high-resolution imaging. TOI-431\,b is a super-Earth characterised by both photometry and RVs, with an ultra-short period of 0.49 days. It likely has a negligible envelope, owing to substantial atmospheric evolution via photoevaporation, and an Earth-like composition. TOI-431\,c is found in the HARPS RV data and is not seen to transit. It has a period of 4.84 days and a minimum mass similar to the mass of TOI-431\,b; extrapolating this minimum mass to a radius via the M-R relation places it as a likely second super-Earth. TOI-431\,d is a sub-Neptune with a period of 12.46 days, characterised by both photometry and RVs. It has likely retained a substantial H-He envelope of about 4 per cent of its total mass. Additionally, TOI-431\,b and d contribute to the {\it TESS}\ Level-1 mission requirement. This system is a candidate for further study of planetary evolution, with TOI-431\,b and d lying on either side of the radius valley. The system is bright, making it amenable to follow-up observations. TOI-431\,b, in particular, would potentially be an interesting target for phase-curve observations with JWST. \section*{Acknowledgements} This paper includes data collected by the {\it TESS}\ mission. Funding for the {\it TESS}\ mission is provided by the NASA Explorer Program.
Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. We acknowledge the use of public {\it TESS}\ Alert data from pipelines at the {\it TESS}\ Science Office and at the {\it TESS}\ Science Processing Operations Center. This study is based on observations collected at the European Southern Observatory under ESO programme 1102.C-0249 (PI: Armstrong). Additionally, the archival HARPS data dating from 2004 to 2015 were obtained under the following programmes: 072.C-0488 (PI: Mayor); 183.C-0972 (PI: Udry); 085.C-0019 (PI: Lo Curto); 087.C-0831 (PI: Lo Curto); and 094.C-0428 (PI: Brahm). This work was supported by grants to P.P. from NASA (award 18-2XRP18\_2-0113), the National Science Foundation (Astronomy and Astrophysics grant 1716202), the Mount Cuba Astronomical Foundation, and George Mason University start-up funds. The NASA Infrared Telescope Facility is operated by the University of Hawaii under contract NNH14CK55B with NASA. This paper is in part based on data collected under the NGTS project at the ESO La Silla Paranal Observatory. The NGTS facility is operated by the consortium institutes with support from the UK Science and Technology Facilities Council (STFC) projects ST/M001962/1 and ST/S002642/1. This work makes use of observations from the LCOGT network. M\textsc{inerva}-Australis is supported by Australian Research Council LIEF Grant LE160100001, Discovery Grant DP180100972, Mount Cuba Astronomical Foundation, and institutional partners University of Southern Queensland, UNSW Sydney, MIT, Nanjing University, George Mason University, University of Louisville, University of California Riverside, University of Florida, and The University of Texas at Austin.
We respectfully acknowledge the traditional custodians of all lands throughout Australia, and recognise their continued cultural and spiritual connection to the land, waterways, cosmos, and community. We pay our deepest respects to all Elders, ancestors and descendants of the Giabal, Jarowair, and Kambuwal nations, upon whose lands the M\textsc{inerva}-Australis facility at Mt Kent is situated. This work is based in part on observations made with the {\it Spitzer} Space Telescope, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. Based on observations obtained at the international Gemini Observatory, a program of NSF’s NOIRLab acquired through the Gemini Observatory Archive at NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). Data collected under program GN-2019A-LP-101. This work was enabled by observations made from the Gemini North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique in both its astronomical quality and its cultural significance. 
This work is based in part on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{o}es (MCTI/LNA) do Brasil, the US National Science Foundation’s NOIRLab, the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). This research made use of \textsf{exoplanet} \citep{exoplanet:exoplanet} and its dependencies \citep{exoplanet:agol19, exoplanet:astropy13, exoplanet:astropy18, exoplanet:exoplanet, exoplanet:kipping13, exoplanet:luger18, exoplanet:pymc3, exoplanet:theano}. This publication makes use of The Data \& Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planets data visualisation, exchange and analysis. DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS, federating the Swiss expertise in Exoplanet research. The DACE platform is available at https://dace.unige.ch. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. AO is supported by an STFC studentship. DJA acknowledges support from the STFC via an Ernest Rutherford Fellowship (ST/R00384X/1). RB acknowledges support from FONDECYT Post-doctoral Fellowship Project 3180246, and from the Millennium Institute of Astrophysics (MAS). IJMC acknowledges support from the NSF through grant AST-1824644. AJ acknowledges support from FONDECYT project 1210718, and from ANID – Millennium Science Initiative – ICN12\_009. MNG. acknowledges support from MIT's Kavli Institute as a Juan Carlos Torres Fellow. 
JSJ acknowledges support by FONDECYT grant 1201371 and partial support from CONICYT project Basal AFB-170002. We acknowledge support by FCT - Funda\c{c}\~ao para a Ci\^encia e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionaliza\c{c}\~ao by these grants: UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 \& POCI-01-0145-FEDER-032113; PTDC/FIS-AST/28953/2017 \& POCI-01-0145-FEDER-028953. VA, EDM and SCCB acknowledge the support from FCT through Investigador FCT contracts IF/00650/2015/CP1273/CT0001, IF/00849/2015/CP1273/CT0003, and IF/01312/2014/CP1215/CT0004 respectively. ODSD is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by FCT. CD acknowledges the SNSF Ambizione Grant 174028. SH acknowledge support by the fellowships PD/BD/128119/2016 funded by FCT (Portugal). JKT acknowledges that support for this work was provided by NASA through Hubble Fellowship grant HST-HF2-51399.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. DD acknowledges support through the TESS Guest Investigator Program Grant 80NSSC19K1727. JLB and DB have been supported by the Spanish State Research Agency (AEI) Project No. MDM-2017-0737 Unidad de Excelencia “María de Maeztu”- Centro de Astrobiología (CSIC/INTA). SH acknowledges CNES funding through the grant 837319. PJW acknowledges support from STFC through consolidated grants ST/P000495/1 and ST/T000406/1. L.M.W. is supported by the Beatrice Watson Parrent Fellowship and NASA ADAP Grant 80NSSC19K0597. \section*{Data Availability} The TESS data are available from the Mikulski Archive for Space Telescopes (MAST), at \url{https://heasarc.gsfc.nasa.gov/docs/tess/data-access.html}. 
The other photometry from the LCOGT, NGTS, and \textit{Spitzer}\, as well as all of the RV data, are available for public download from the ExoFOP-TESS archive at \url{https://exofop.ipac.caltech.edu/tess/target.php?id=31374837}. This data is labelled ``Osborn+ 2021'' in their descriptions. The high-resolution imaging data is also available from the ExoFOP TESS archive. The model code underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} Classical cellular automata (CA) after S. Ulam and J. von Neumann \cite{Ulam-Neumann, Wolf} are defined on regular grids, which are in fact finite direct products of finite or infinite cyclic groups. If we consider finite products of finite cyclic groups (tori), it turns out that many different CA exhibit the same behavior up to a renaming of states. In other words, the state transition diagrams of many different automata are isomorphic.\footnote{For instance, there are 256 different rules for additive cellular automata (see the definition below or \cite{linCA}) with two-state cells on a 1-dimensional torus of size 8. Among them only 16 are essentially different.} The grids have a certain system of symmetries that can be described by isomorphisms of the groups, and these symmetries induce isomorphisms between the behaviors of automata with different rules. By the ``behavior of an automaton'' we mean the state transition diagram of the automaton. However, besides the symmetries of the grids there are other symmetries which influence automata behavior, for instance, symmetries of the automata rules. We study the question of whether all isomorphisms among state transition diagrams of CA can be derived from symmetries of the underlying structures, such as grids, sets of cell states, rules, etc. To this end we must first define these objects in such a way that their symmetries become clearly visible. On the other hand, our purpose is also to diversify the set of possible symmetries of the CA supports. This is why we restrict ourselves to additive (i.e., linear homogeneous) automata whose sets of cell states are finite fields, because the set of rules of such automata has a clear algebraic structure. (The class of general ACA on grids is well known; see for instance \cite{linCA}.) At the same time we diversify the set of supports by replacing the classical grids with arbitrary (though finite) groups, called {\sl index groups} in what follows. 
We consider additive cellular automata on finite groups as an appropriate framework for studying this question because, by placing the cells of an automaton at group elements and applying rules in a way that respects the group symmetries, we can observe a richer picture of the connection between the structure of the groups and the isomorphisms of the automata than can be seen in particular cases such as finite tori. For finite groups see for instance \cite{Coxeter}. One general expectation, of course, is that most isomorphisms (we call them {\sl regular}) can be reduced to the system of symmetries (isomorphisms) of an underlying algebraic structure; these basic symmetries are to be determined. According to the description of the basic symmetries adopted in this work, there are many groups such that all isomorphisms of the CA on them are reducible to the symmetries of the underlying algebraic structure (the index group, the monoid $\m M$, its subgroup of reversible elements, and others; see below). However, for some groups there are isomorphisms of CA on them which cannot be reduced to the basic symmetries that we accept. Section 2 deals with the definition of homogeneous linear automata on groups and the generalization of the notion of circulant \cite{Ryzhik} to groups, together with the group convolution \cite{Weis}: $\mathcal{C}(T)$ and $\boxtimes$, which play central roles in the paper. In Section 3 we mainly study the automorphisms of the monoid $\m M$ formed by the automata rules under the operation $\boxtimes$. First we study the contribution of the symmetries (index permutations) of the index group to the basic symmetries. The complete description of the class of index permutations for any given index group $\m g$ is constructive relative to $\m g$. Then a wider class of linear automorphisms of $\m M$ is completely characterized. Both these classes are much easier to list than the complete set $\aut{\m M}$ of all automorphisms of $\m M$. 
Further, we extend the class of isomorphisms produced by $\aut{\m M}$ with some isomorphisms related to the group $\m G$ of all reversible elements of $\m M$, obtaining the set of regular isomorphisms of automata. In Section 4, discussing our construction, we prove that all isomorphisms among CA over the field $\m F_2$ on those cyclic index groups $\m c_q$ for whose orders $q$ the number 2 is a primitive root \cite{Rosen} modulo $q$ are regular. We conjecture that this is true for all prime $q$ such that 2 is a primitive root modulo $q$. Also, several examples of index groups are given for which the class of all CA isomorphisms is wider than the class of regular isomorphisms. \subsubsection{General notation} \ \\ \begin{itemize} \item[a:] $\m g$ is a finite group $\{g_0,g_1,\dots,g_{n-1}\}$ with unit $g_0=\mathfrak{1}$. We write $\ovl g$ for the element inverse to $g\in\m g$. We also assume that $\m g$ is not trivial, i.e., $n>1$. Note that the above enumeration of the elements of $\m g$ induces a linear order on the group in which $\m1$ is the first element. We call this group the {\sl index group}. \item[b:] $p$ is a prime number. We denote both kinds of multiplication, in the number field $\m F_p=\langle\{0,\dots,p-1\},0,1,+,\cdot\rangle$ and in the group $\m g$, in the same way, as usual, by simple concatenation of elements: $kr$ for $k,r\in\{0,\dots,p-1\}$ and $gq$ for $g,q\in\m g$. \item[c:] $\m V=\{v|v:\m g\to\{0,\dots,p-1\}\}$ is the set of evaluations of the elements of $\m g$, and $v(g),g\in\m g,$ is the $g$-th component of $v$. Sometimes we use upper or lower indices to select a vector's components when vectors participate in matrix algebra operations as row vectors and column vectors respectively. \item[d:] The $g$th {\sl constituent} is the vector $K[g]$ for which, by definition, $K[g](g)=1$ and $K[g](g')=0$ for any $g'\neq g$. \item[e:] For $v\in\m V$ let $v^{-}$ be the element of $\m V$ such that $v^{-}(g)=v(\ovl g),\ g\in\m g$. 
\item[f:] $\m L=\langle \m V,+,\cdot\rangle$ is the vector space on $\m V$ over the field $\m F_p$ with vector addition $(v+v')(g)=v(g)+v'(g)\mm p$ and multiplication of vectors by scalars $(k\cdot v)(g)=(kv)(g)=kv(g),\ k\in\{0,\dots,p-1\},g\in\m g$. \item[g:] The standard basis $\mathbf K=\{K[g_i]|g_i\in\m g\}$ of $\m L$. \item[h:] If $P$ is an $n\times n$ matrix, then $P_i,P^j$, and $P_i^j$ denote respectively the $i$th row, the $j$th column, and the element of $P$ at the intersection of the $i$th row and $j$th column. \end{itemize} \section{ACA on groups} For any vector $R\in\m V$ we define an {\sl additive linear cellular automaton on the group} $\m g$ over the field $\m F_p$ as a system whose {\sl states} are the elements of $\m V$ and whose behavior $v^{[0]},v^{[1]},\dots,v^{[\tau]},\dots$ (where $\tau\in\mathbb Z^+$ represents time\footnote{The use of the brackets $[,]$ in the notation $v^{[\tau]}$ for a state at time $\tau$ is caused by the necessity to distinguish the $i$-th component $v^{i}$ of a row vector $v$ from the value of the vector at time $\tau$.} and $v$ is an initial state) is defined by the recursion \begin{eqnarray}\label{def-aut} v^{[0]}=v;\ v^{[\tau+1]}(f)=\sum_gR(\ovl fg)v^{[\tau]}(g),\ f\in\m g. \end{eqnarray} This recursion reflects the fact that to calculate the new state of cell $f$ we shift the rule $R$ along the index group by $f$. The shift could be expressed as $R(\ovl fg)$ or $R(g\ovl f)$, which are equivalent for commutative index groups. We choose the first form. The vector $R$ is called the {\sl rule} of the automaton, denoted $\mathcal{A}_{\m g}^{p}(R)$, or shorter $\mathcal{A}(R)$ when the index group $\m g$ and the field $\m F_p$ are fixed. Note that in the case when $\m g$ is a Cartesian product of $k$ cyclic groups we deal with additive cellular automata on $k$-dimensional tori. Let $R*v$ denote the application of the rule $R$ to the state $v$ according to (\ref{def-aut}). Using this notation we can rewrite (\ref{def-aut}) more concisely as $v^{[\tau+1]}=R*v^{[\tau]}$. 
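The recursion~(\ref{def-aut}) is straightforward to implement. The following sketch (ours, in Python; all names are our own, the cyclic group $\m c_n$ is written additively, and $p=2$) performs one update step and, by brute-force relabelling of the state set, counts the number of essentially different automata on $\m c_2$ and $\m c_3$, reproducing two of the counts listed in Table~\ref{num} below.

```python
from itertools import permutations, product

def step(R, v, n, p=2):
    # One update of the additive CA on the cyclic group c_n over F_p:
    # v'(f) = sum_g R(g - f) v(g) = sum_h R(h) v(f + h)   (indices mod n)
    return tuple(sum(R[h] * v[(f + h) % n] for h in range(n)) % p
                 for f in range(n))

# rule R = 110 applied to the state v = 001 on c_3 over F_2
assert step((1, 1, 0), (0, 0, 1), 3) == (0, 1, 1)

def transition(R, n):
    # successor table of the state transition diagram; states indexed 0..2^n-1
    states = list(product((0, 1), repeat=n))
    index = {s: i for i, s in enumerate(states)}
    return tuple(index[step(R, v, n)] for v in states)

def relabel(succ, perm):
    # the same functional graph with states renamed by perm
    g = [0] * len(succ)
    for x, y in enumerate(succ):
        g[perm[x]] = perm[y]
    return tuple(g)

def certificate(succ):
    # canonical form of a functional graph: minimum over all relabellings
    return min(relabel(succ, perm) for perm in permutations(range(len(succ))))

def num_classes(n):
    # number of isomorphism classes of state transition diagrams on c_n
    return len({certificate(transition(R, n))
                for R in product((0, 1), repeat=n)})

assert num_classes(2) == 4   # matches Table 1
assert num_classes(3) == 6   # matches Table 1
```

The relabelling search runs over all permutations of the $2^n$ states, so this check is feasible only for very small index groups.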
The {\sl state transition diagram} $\STDm R$ of the automaton $\mathcal{A}(R)$ is defined as usual, i.e., it is the graph $\langle\m V,\{(v,R*v)|v\in\m V\}\rangle$. Automata with rules $R,T$ are {\sl isomorphic} if their diagrams $\STDm R, \STDm T$ are isomorphic; we denote the latter by $\STDm R\approx\STDm T$. Thus the set of all rules partitions into isomorphism classes, and their number says how many essentially different ACA there are. \subsection{Numbers of isomorphism classes for ACA on small groups} Table~\ref{num} shows the numbers of isomorphism classes for ACA with two-state cells on some small groups.\footnote{For some cyclic groups see also \cite{Bul2}.} Not only does the number of classes depend on the structure of the index group, but also some diagrams representing automata for one group do not appear among the diagrams for another group of the same order. \begin{table}[h] \caption{Number $|\m I|$ of isomorphism classes for index groups of order $\le10$. $\m c_m$ - cyclic group of order $m$; $\m D_m$ - dihedral group of order $2m$; $\m Q$ - quaternion group (the notation is from \cite{Coxeter}).}\label{num} \begin{tabular}{||l||r|r|r|r||} \hline\hline Index group $\m g$&Order&Commutative?&Number of Rules&\ \ $| \m I| $\ \\ \hline\hline $\m c_2$&2&yes&4&4\\ \hline $\m c_3$&3&yes&8&6\\ \hline $\m c_4$&4&yes&16&8\\ \hline $\m c_2\times\m c_2$&4&yes&16&6\\ \hline $\m c_5$&5&yes&32&10\\ \hline $\m c_6$&6&yes&64&24\\ \hline $\m D_3$& 6&no&64&22\\ \hline $\m c_7$&7&yes&128&12\\ \hline $\m c_8$&8&yes&256&16\\ \hline $\m c_2\times\m c_4$&8&yes&256&12\\ \hline $\m c_2\times\m c_2\times\m c_2$&8&yes&256&8\\ \hline $\m D_4$&8&no&256&14\\ \hline $\m Q$&8&no&256&12\\ \hline $\m c_9$&9&yes&512&42\\ \hline $\m c_3\times\m c_3$&9&yes&512&30\\ \hline $\m c_{10}$&10&yes&1024&40\\ \hline $\m D_5$&10&no&1024&44\\ \hline\hline \end{tabular} \end{table} \section{Algebraic structure of the variety of HLCA on a group $\m g$ and its automorphisms} \subsection{Group circulants} \ \\ 
Using matrix-vector multiplication, we adopt the convention that automata states are columns whereas rules are rows. That is why in these cases we write the components of a state as $v_g$ and the components of a rule as $R^g$. This convention allows us to rewrite the above recursion in a different way, using a generalization of the standard \cite{Ryzhik} concept of circulant matrices. Here a {\sl circulant} is a matrix $\mathcal{C}$ whose $(g,f)$-components satisfy the condition $\mathcal{C}\big|_g^f=\mathcal{C}_{\m1}^{\ovl gf},\ f,g\in\m g$. Therefore, if $\mathcal{C}_{\m1}=R$ we have $\mathcal{C}_g^f=R^{\ovl gf}$. Since any circulant is determined by its first row (its {\sl leader}), we use the notation $\mathcal{C}(R)$ for the circulant $\mathcal{C}$ such that $\mathcal{C}_{\m 1}=R$. \begin{lem}\label{crit-circ} (i) $\mathcal{C}(H)$ is a circulant iff $\mathcal{C}(H)_{sg}^{sf}=\mathcal{C}(H)_{g}^{f}$.\\ (ii) $\mathcal{C}(T)\mathcal{C}(H)=\mathcal{C}(Q)$, where $Q^f=\sum_qT^{q}H^{\ovl qf}$. \end{lem} {\sl Proof.\ } (i) By the definition $\mathcal{C}(H)_{sg}^{sf}=H^{\ovl{sg}\,sf}=H^{\ovl{g}f}$. Now assume that a matrix $Q$ obeys the condition $Q_{sg}^{sf}=Q_{g}^{f}$ for all $f,g,s\in\m g$. Then setting $s=\ovl g$ we arrive at $Q_{g}^{f}= Q_{\m1}^{\ovl gf}$, which is the definition of $\mathcal{C}(Q_{\m 1})$. (ii) Indeed, \begin{eqnarray} \left[\mathcal{C}(T)\mathcal{C}(H)\right]_{sg}^{sf}= \sum_u\mathcal{C}(T)_{sg}^u\mathcal{C}(H)_u^{sf}= \sum_uT^{\ovl{sg}\,u}H^{\ovl u\,sf}=_{\text{ setting $q:=\ovl{sg}\,u$}}=\notag\\ \sum_qT^qH^{\ovl{q}\,\ovl{sg}\,sf}=\sum_qT^qH^{\ovl{q}\,\ovl g\,f}=_{\text{ setting $\ovl u:=\ovl{q}\,\ovl{g}$}}=\label{g=1}\\ \sum_uT^{\ovl gu}H^{\ovl uf}=\sum_u\mathcal{C}(T)_g^u\mathcal{C} (H)_u^f =\left[\mathcal{C}(T)\mathcal{C}(H)\right]_g^f.\notag \end{eqnarray} Therefore, by part (i) of the lemma, the product $\mathcal{C}(T)\mathcal{C}(H)$ is a circulant, and for its first row $Q$ it follows from~(\ref{g=1}), setting $g=\m1$, that $Q^f=\sum_qT^{q}H^{\ovl qf}$. 
\ \ \ $\Box$ Thus the dynamic equation of the automata can be written using the group circulant as $v^{[\tau+1]}=\mathcal{C}(R)v^{[\tau]}$. Group circulants for cyclic groups are the standard circulant matrices \cite{Ryzhik}. \subsection{Operations $\boxtimes$ and $\mathbf b_p$} With the group convolution (see \cite{Weis}) $\boxtimes :\m V\times\m V\to\m V$ defined by \begin{eqnarray}\label{BT-def} (v\boxtimes v')(g)=\underset{f}\sum v(f)v'(\ovl fg),\ \ v,v'\in\m V,\ g\in\m g, \end{eqnarray} statement (ii) of lemma~\ref{crit-circ} can be expressed as \begin{cor} $\mathcal{C}(T)\mathcal{C}(H)=\mathcal{C}(T\boxtimes H)$. \end{cor} The next consequence plays an important role in what follows: \begin{theo}\label{lem2} $v^{[\tau+1]}=\bxt{v^{[\tau]}}{R^{-}}$. \end{theo} {\sl Proof.\ } $R*v=\bxt{v}{R^{-}}$ because from (\ref{def-aut}) we have $(R*v)(f)=\sum_gR(\ovl fg)v(g)=\sum_gv(g)R^-(\ovl gf)=(v\boxtimes R^-)(f)$. \ \ \ $\Box$ We also use the derived operation $\mathbf b_p(T)=\underbrace{T\boxtimes\dots\boxtimes T}_{p\text{ terms}}$, which in the case of finite abelian groups was studied in \cite{Bul1}. Here are some properties of $\boxtimes$ that we use. Let $\m Z$ be the commutant of $\m g$. We call $A\in\m V$ $\m Z$-{\sl correct} if $\forall g,f[g\ovl f\in\m Z\implies A(g)=A(f)]$. \begin{lem}\label{oper-bxt} \begin{itemize} \item[(i)] $\bxt{}{}$ is an associative operation.\\ \item[(ii)] $\bxt{K[\m1]}A=\bxt A{K[\m1]}=A$.\\ \item[(iii)] $\bxt{}{}$ is a linear operation on $\m L$ w.r.t. both operands: \begin{eqnarray*} A\boxtimes(B+C)=A\boxtimes B+A\boxtimes C,\ \ (B+C)\boxtimes A=B\boxtimes A+C\boxtimes A. \end{eqnarray*} \item[(iv)] $A^{-}\boxtimes B^{-}=(B\boxtimes A)^{-}$.\\ \item[(v)] $A\boxtimes B=B\boxtimes A$ for any $A,B\in\m V$ such that at least one of them is $\m Z$-correct. \item[(vi)] $K[f]\boxtimes K[g]=K[fg]$ for any $f,g\in\m g$. \item[(vii)] \begin{eqnarray*} (K[g]\boxtimes A)(f)= A(\ovl gf),\ \ \ \ (A\boxtimes K[g])(f)= A(f\ovl g). 
\end{eqnarray*} In other words, $K[g]\boxtimes A=\mathcal{C}(A)_g$. \end{itemize} \end{lem} {\sl Proof.\ } (i) For $A,B,C\in\m V,g\in\m g$ we have \begin{eqnarray*} [\bxt{(\bxt AB)}C](g)=\sum_f\left(\sum_rA(r)B(\ovl rf)\right)C(\ovl fg)= \sum_rA(r)\sum_fB(\ovl rf)C(\ovl fg)=_{h:=\ovl rf} \\ =\sum_rA(r)\left(\sum_{h}B(h)C(\ovl h\ovl rg)\right)= \sum_rA(r)(\bxt BC)(\ovl rg)=[\bxt A{(\bxt BC)}](g). \end{eqnarray*} It is possible to replace the bound variable $f$ with $h$ since for each fixed value of $r$ the mapping $f\mapsto \ovl rf$ is a 1-1 mapping of $\m g$ onto $\m g$.\footnote{We use similar arguments in many places below.}\\ (ii) $(\bxt{K[\m1]}A)(g)=\sum_fK[\m 1](f)A(\ovl fg)=A(g)= \sum_fA(f)K[\m1](\ovl fg). $\\ (iii) Now \begin{eqnarray*} (A\boxtimes(B+C))(g)=\sum_fA(f)(B+C)(\ovl fg)=\sum_fA(f)B(\ovl fg)+ \sum_fA(f)C(\ovl fg)=\\(A\boxtimes B)(g)+(A\boxtimes C)(g). \end{eqnarray*} The second identity has a similar proof.\\ (iv) $(R^{-}\boxtimes Q^{-})(g)=\sum_{f}R^{-}(f)Q^{-}(\ovl fg)=\sum_{f}R(\ovl f)Q(\ovl gf)$. If we define $w=\ovl gf$ (for any fixed $g$ the variable $w$ runs over $\m g$ while $f$ runs over $\m g$), then $\ovl f=\ovl w\,\ovl g$ and $\sum_{f}R(\ovl f)Q(\ovl gf)=\sum_{w}Q(w)R(\ovl w\,\ovl g)=(Q\boxtimes R)(\ovl g)=(Q\boxtimes R)^{-}(g)$.\\ (v) By definition we can write: \begin{eqnarray*} (\bxt AB)(g)=\sum_fA(f)B(\ovl fg)=\sum_rB(r)A(g\ovl r)\\ (\bxt BA)(g)=\sum_rB(r)A(\ovl rg)=\sum_fA(f)B(g\ovl f). \end{eqnarray*} Now if, for instance, $B$ is $\m Z$-correct then $B(\ovl fg)=B(g\ovl f)$ for all $g,f\in\m g$. This is because $g\ovl f\,\ovl{\ovl fg}$ is a commutator. Hence $(\bxt AB)(g)=(\bxt BA)(g)$.\\ (vi) \begin{eqnarray*} (K[f]\boxtimes K[g])(r)=\sum_sK[f](s) K[g](\ovl sr) =K[g](\ovl fr)= \begin{cases} 1,& \ovl fr=g\\ 0& \text{otherwise}. 
\end{cases} \end{eqnarray*} Since $\ovl fr=g$ means $r=fg$, we get what we need.\\ (vii) \begin{eqnarray*} (K[g]\boxtimes A)(f)=\sum_sK[g](s)A(\ovl sf)=_{\text{since }K[g](s)=0\text{ unless }s=g}=A(\ovl gf).\\ (A\boxtimes K[g])(f)=\sum_sA(s)K[g](\ovl sf)=_{\text{since }K[g](\ovl sf)=0\text{ unless }\ovl sf=g}=A(f\ovl g). \end{eqnarray*} \ \ \ $\Box$ \subsection{Monoid $\m M$} Since $\boxtimes$ is an associative binary operation on $\m V$, the structure $\m M=\langle\m V,\boxtimes,K[\mathfrak1]\rangle$ is a monoid \cite{Clifford} with the unit $K[\mathfrak1]$. We denote by $\m G$ the subgroup of the monoid consisting of all its reversible elements (i.e., those $A\in\m V$ for which there exists $B\in\m V$ obeying $A\boxtimes B=B\boxtimes A=K[\m1]$). \begin{lem}\label{revers-circ} (i) $A\in\m M$ is reversible iff $\mathcal{C}(A)$ is a non-singular matrix.\\ (ii) The mapping $g\mapsto K[g]$ is an isomorphism between $\m g$ and the subgroup $\m K$ of $\m G$ generated by the vectors $\{K[g]|g\in\m g\}$. \end{lem} {\sl Proof.\ } (i) Indeed, if for a vector $B$ we have $A\boxtimes B=K[\m1]$, then $\mathcal{C}(A)\mathcal{C}(B)$ is equal to the identity matrix $\mathcal{C}(K[\m1])$ and therefore $\mathcal{C}(A)$ is non-singular. Vice versa, if $\mathcal{C}(A)$ is non-singular then a unique matrix $\mathcal{M}$ exists obeying $\mathcal{M}\mathcal{C}(A)=\mathcal{C}(K[\m1])$; hence $\mathcal{M}$ is also non-singular. Let us show that $\mathcal{M}$ is a circulant; for that let us prove that the inverse matrix of a circulant is a circulant, i.e., $[\mathcal{C}(A)]^{-1}|_g^f=[\mathcal{C}(A)]^{-1}|_{\m1}^{\ovl gf}$. Consider $\mathcal{M}\mathcal{C}(A)=\mathcal{C}(K[\m1])$ as a system of equations for $\mathcal{M}$, which in terms of matrix elements reads \begin{eqnarray*} \sum_s\mathcal{M}_g^sA^{\ovl sf}=K[\m1]^{\ovl gf}. \end{eqnarray*} Let us look for $\mathcal{M}$ in the form of a circulant $\mathcal{C}(M)$. Then the system above can be rewritten as \begin{eqnarray*} \sum_sM^{\ovl gs}A^{\ovl sf}=K[\m1]^{\ovl gf}. 
\end{eqnarray*} Since the matrix of this system of linear equations for the unknowns $M^i,i\in\m g,$ is $\mathcal{C}(A)$ and $|\mathcal{C}(A)|\neq0$, a unique solution $M$ of the system exists. Hence $\mathcal{M}=\mathcal{C}(M)$ and $\mathcal{C}(M)\mathcal{C}(A)=\mathcal{C}(K[\m1])$. Finally, $M\boxtimes A=K[\m1]$. Similar reasoning yields $A\boxtimes M=K[\m1]$. Therefore the vector $A$ is reversible in $\m M$. (ii) follows from lemma~\ref{oper-bxt} (i),(ii),(vi). \ \ \ $\Box$ The monoid $\m M$ and the linear space $\m L$ are subsystems of an associative algebra $\mathbf A=\langle\m V,+,\boxtimes \rangle$ over the field $\m F_p$. Clearly $\mathbf A$ is the enveloping algebra of the Lie algebra $L(\mathbf A)$ obtained by introducing the Lie bracket as follows: \begin{eqnarray*} [A,B]=A\boxtimes B- B\boxtimes A. \end{eqnarray*} \begin{lem} If $A$ or $B$ is $\m Z$-correct then $[A,B]=[B,A]=0$. \end{lem} {\sl Proof.\ } Indeed, \begin{eqnarray*} [A,B](f)=(A\boxtimes B-B\boxtimes A)(f)= \sum_gA(g)B(\ovl gf)-\sum_rB(r)A(\ovl rf)=_{\text{setting }g=\ovl rf}=\\ \sum_gA(g)B(\ovl gf)-\sum_gA(g)B(f\ovl g). \end{eqnarray*} Hence if $B$ is $\m Z$-correct, then $[A,B]=\mathbf0$. The same is true for a $\m Z$-correct $A$ instead of $B$ because $[A,B]=-[B,A]$. \ \ \ $\Box$ The monoid is the basic algebraic structure on the set $\m V$ of all states and rules of HLCA on $\m g$ relative to the question studied in this paper. Indeed, the state transition diagram for a rule $w^-\in\m V$ is completely defined in terms of $\m M$ as the graph $\langle\m V,\{(v,v\boxtimes w)|v\in\m V\}\rangle$. Therefore, the more basic the algebraic structure on HLCA whose automorphisms generate isomorphisms of diagrams, the more complete the set of isomorphisms among the diagrams that can be revealed. On the other hand, completing $\m M$ with additional operations leads to algebraic structures whose automorphisms are more specific but often admit a simpler description. The next section illustrates this. 
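As a concrete non-abelian check of the constructions above, the following sketch (ours, in Python; $S_3$ is encoded as permutation tuples and all names are our own) implements the group convolution~(\ref{BT-def}) and group circulants over $\m F_2$, and verifies the identity $\mathcal{C}(T)\mathcal{C}(H)=\mathcal{C}(T\boxtimes H)$ together with $K[f]\boxtimes K[g]=K[fg]$ from lemma~\ref{oper-bxt}.

```python
from itertools import permutations

# S3 encoded as tuples t with t[i] the image of i; the product fg is "f after g"
def mul(f, g):
    return tuple(f[g[i]] for i in range(3))

def inv(f):
    r = [0] * 3
    for i, fi in enumerate(f):
        r[fi] = i
    return tuple(r)

G = sorted(permutations(range(3)))      # the identity (0, 1, 2) comes first
idx = {g: i for i, g in enumerate(G)}
n = len(G)

def K(g):
    # constituent K[g]: 1 at g, 0 elsewhere
    return tuple(1 if h == g else 0 for h in G)

def conv(T, H, p=2):
    # group convolution: (T x H)(g) = sum_f T(f) H(inv(f) g)  over F_p
    return tuple(sum(T[idx[f]] * H[idx[mul(inv(f), g)]] for f in G) % p
                 for g in G)

def circulant(R):
    # group circulant: C(R)_g^f = R(inv(g) f); rows indexed by g, columns by f
    return [[R[idx[mul(inv(g), f)]] for f in G] for g in G]

def matmul(A, B, p=2):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

# C(T)C(H) = C(T x H) for a pair of constituents and an arbitrary pair of rules
for T, H in [(K((1, 0, 2)), K((0, 2, 1))),
             ((1, 1, 0, 1, 0, 0), (0, 1, 1, 0, 1, 0))]:
    assert matmul(circulant(T), circulant(H)) == circulant(conv(T, H))

# K[f] x K[g] = K[fg]
f, g = (1, 0, 2), (0, 2, 1)
assert conv(K(f), K(g)) == K(mul(f, g))
```

Since lemma~\ref{crit-circ} is proved for an arbitrary group, the same checks go through verbatim if $S_3$ is replaced by any other finite group encoding.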
We call the group ${\m g}^{\circ}$ ($\m G^{\circ}$) with group multiplication $\circ(x,y)$ the co-group of $\m g$ ($\m G$) if it consists of the same elements as $\m g$ ($\m G$) and for all group elements $f,q$ it holds that $\circ(f,q)=f\circ q=qf$, where $qf$ is the group multiplication in $\m g$ ($\m G$). Thus the multiplication table of $\m g^{\circ}$ ($\m G^{\circ}$) is the transposed multiplication table of $\m g$ ($\m G$). A {\sl reflection on $\m g$ ($\m G$)} is a mapping $\rho:\m g\to\m g$ ($\varrho:\m G\to\m G$) such that $\rho(f)=\ovl f, f\in\m g$ ($\varrho(f)=\ovl f, f\in\m G$). Similarly, we call the {\sl co-monoid} of $\m M$ the monoid $\m M^{\circ}=\langle\m V,\boxtimes^{\circ}, K[\m1]\rangle$ where $v\boxtimes^{\circ}w=w\boxtimes v$, and a {\sl reflection on $\m M$} is a mapping $\rho:\m V\to\m V$ such that $\rho(v)=v^-, v\in\m V$. \begin{lem}\label{refl} 1). The reflection on $\m g$ ($\m G,\m M$) is a natural isomorphism between the given index group $\m g$ (group $\m G$, monoid $\m M$) and the co-group $\m g^{\circ}$ (co-group $\m G^{\circ}$, co-monoid $\m M^{\circ}$).\\ 2). Any isomorphism $\psi:\m g\to\m g^{\circ}$ ($\psi:\m M\to\m M^{\circ}$) is a composition of an automorphism $\varphi$ of $\m g$ ($\m M$) and the reflection. \end{lem} {\sl Proof.\ } Clearly the reflections are 1-1 mappings. 1) For the index group $\m g$: $\rho(x)=\ovl x, \rho(\m1)=\m1, \rho(xy)=\ovl{xy}=\ovl y\ovl x=\ovl x\circ\ovl y=\rho(x)\circ\rho(y)$. For the monoid $\m M$: $\rho(R)=R^-, \rho(K[\m1])=K[\m1]^-=K[\m1]$. Then, according to lemma~\ref{oper-bxt} (iv), we have $\rho(R\boxtimes T)=(R\boxtimes T)^-=T^-\boxtimes R^-=\rho (T)\boxtimes\rho (R)=\rho(R)\boxtimes^{\circ}\rho(T)$. 2) If $\psi$ is an isomorphism then $\rho\psi$ is an automorphism $\varphi$. Therefore $\psi=\rho\varphi$ because the reflection is an involution. 
\ \ \ $\Box$ \begin{theo}\label{aut->iso} If $\varphi:\m M\to\m M$ is an automorphism of the monoid and $T,R$ are automata rules such that $T=\varphi(R)$, then $\STDm T\approx\STDm R$ and $\STDm{T^-}\approx\STDm{R^-}$. \end{theo} {\sl Proof.\ } From theorem~\ref{lem2} we have $R*v=v\boxtimes R^-$. On the other hand, lemma~\ref{oper-bxt} (iv) yields $(v\boxtimes R^-)^-=R^{--}\boxtimes v^-$. Since the operation $(\cdot)^-$ is an involution, we obtain $(v\boxtimes R^-)^-=R\boxtimes v^-$. Thus \begin{eqnarray}\label{forms} w=R*v \iff w=v\boxtimes R^- \iff w^-=R\boxtimes v^- \end{eqnarray} A similar chain of equivalences holds for $T$ as well. Now from $\varphi(w^-)=\varphi(R)\boxtimes\varphi(v^-)=T\boxtimes\varphi(v^-)$ and the fact that the mapping $(\cdot)^-$ is a 1-1 mapping on $\m V$, the isomorphism of $\STDm R,\STDm T$ follows. On the other hand, since $R^{--}=R$ we see that $(v,w)$ is an edge of $\STDm{R^-}$ iff $w=v\boxtimes R$. As $T=\varphi(R)$ is given, we arrive at $\varphi(w)=\varphi(v)\boxtimes T$. This means that $\STDm{T^-}\approx\STDm{R^-}$. \ \ \ $\Box$\footnote{In other words, we apply here $\varphi(R)=T$ to $R\boxtimes v^-$ and $v\boxtimes R$. The first leads to the isomorphism of the diagrams of $R,T$ and the second to $\STDm{T^-}\approx\STDm{R^-}$.} From here it does not yet follow that $\STDm R\approx\STDm{R^-}$, whereas examples (with non-abelian index groups) show that $\STDm R\approx\STDm {R^-}$. In the abelian case we have \begin{theo} If $\m g$ is a commutative group, then $\varphi: R\mapsto R^-,R\in \m V,$ is an automorphism of $\m M$. Therefore $\STDm R\approx\STDm{R^-}$ for any $R\in\m V$. \end{theo} \footnote{This is also a consequence of the theorem about functional index permutations (see below).} {\sl Proof.\ } The commutant $\m Z$ of any commutative group consists of one element, the unit of the group. Therefore any $v\in\m V$ is $\m Z$-correct, and from lemma~\ref{oper-bxt}(v) it follows that $\boxtimes$ is commutative for commutative $\m g$. 
Therefore (iv) of lemma~\ref{oper-bxt} takes the form $(v\boxtimes u)^-=v^-\boxtimes u^-$. This proves that $\varphi: R\mapsto R^-,R\in \m V,$ is an automorphism of $\m M$ translating any rule $R$ into $R^-$. Hence $\STDm R\approx\STDm{R^-}$. \ \ \ $\Box$ \subsection{Index permutations} One class of automorphisms of the monoid that has a simple description is defined by the automorphisms of the index group. We call an {\sl index permutation} any permutation $\theta:\m g\to \m g$ of the index group $\m g$. Any index permutation $\theta$ generates a 1-1 mapping $\Theta:\m V\to\m V$ by the rule $\Theta(v)|_f:=v|_{\theta(f)}$. We say that an index permutation $\theta$ is a {\sl functional index permutation} if for any rule $R\in\m V$ there exists a rule $T$ such that $T*v=\Theta^{-1}(R*\Theta(v))$ for any state $v\in\m V$. In this case $R,T$ are obviously isomorphic. \begin{theo} 1. An index permutation $\theta$ is functional iff $\ovl{\theta(\m1)}\theta(\cdot)$ is an automorphism of $\m g$.\\ 2. If $\theta$ is a functional index permutation and a rule $T$ satisfies $T*v=\Theta^{-1}(R*\Theta(v)),v\in\m V,$ then $T_{\ovl{\theta(\m1)}\theta(h)}=R_h,h\in\m g$. \end{theo} {\sl Proof.\ } Assume $\Theta(T*v)=R*\Theta(v), v\in\m V$. Then for each $f\in\m g$ \begin{eqnarray*} \sum_hR_{\ovl {f}\,h}v_{\theta(h)}=\sum_hT_{\ovl{\theta(f)}\,h}v_h= \sum_hT_{\ovl{\theta(f)}\,\theta(h)}v_{\theta(h)}. \end{eqnarray*} Since $v$ runs over $\m V$, it must hold that $R_{\ovl {f}\,h}=T_{\ovl{\theta(f)}\,\theta(h)},\ h,f\in\m g$. From here we deduce that the product $\ovl{\theta(f)}\,\theta(h)$ depends only on $\ovl {f}h$. Let a function $\varphi:\{g_0,\dots,g_{n-1}\}\to\{g_0,\dots,g_{n-1}\}$ satisfy \begin{eqnarray}\label{theta} \ovl{\theta(f)}\,\theta(h)=\varphi(\ovl {f}h),\ h,f\in\m g, \end{eqnarray} and denote $a=\theta(\m1)$. Setting $f:=\m1$ we get $\theta(h)=a\varphi(h)$ and conclude that $\varphi$ is a 1-1 mapping on $\m g$. 
If instead we substitute $h:=\m1$, we arrive at $\ovl{\theta(f)}a=\varphi(\ovl f)$, and since $f,h$ are arbitrary elements of $\m g$ we conclude that $\varphi(\ovl x)=\ovl{\varphi(x)}, x\in\m g$. On the other hand, from (\ref{theta}) setting $h=f=\m1$ we get $\varphi(\m1)=\m1$, and substituting there $a\varphi(h),a\varphi(f)$ for $\theta(h),\theta(f)$ respectively we arrive at $\varphi(\ovl {f}\,h)=\varphi(\ovl f)\varphi(h)$. Thus we obtain the following equations for $\varphi$: \begin{eqnarray} \begin{cases}\label{ip8} \forall x,y\in\{g_0,\dots,g_{n-1}\}[x\neq y\implies\varphi(x)\neq\varphi(y)],\\ \varphi(\m1)=\m1,\\ \varphi(\ovl x)=\ovl{\varphi(x)},\\ \varphi(xy)=\varphi(x)\varphi(y). \end{cases} \end{eqnarray} If $\varphi$ is considered as a mapping from $\m g$ into $\m g$, then $\varphi$ is an automorphism. Simultaneously we have proved the second statement, because from $R_{\ovl {f}\,h}=T_{\ovl{\theta(f)}\,\theta(h)}$ the equation $T_{\ovl{\theta(\m1)}\theta(h)}=R_h$ follows by setting $f:=\m1$. Vice versa, given an automorphism $\phi$ of $\m g$ and $a\in\m g$, let us define an index permutation $\theta(\cdot)$ as $a\phi(\cdot)$ and let $R$ be any rule. First, if $u=\ovl{\theta^{-1}(f)}$ then $f=\theta(\ovl u)=a\phi(\ovl u)$ and $u=\ovl{\phi^{-1}(\ovl af)}$. Hence \begin{eqnarray*} \Theta^{-1}(R*\Theta(v))|_f=(R*\Theta(v))|_{\theta^{-1}(f)}= \sum_hR_{\ovl{\theta^{-1}(f)}\,h}v_{a\phi(h)}= \sum_hR_{\ovl{\phi^{-1}(\ovl a\,f)}\,h}v_{a\phi(h)}=_{q:=a\phi(h)}\\ \sum_qR_{\ovl{\phi^{-1}(\ovl a\,f)}\,\phi^{-1}(\ovl a\,q)}v_{q}. \end{eqnarray*} And because $\phi^{-1}$ is also an automorphism of $\m g$ we can continue as \begin{eqnarray*} \sum_qR_{\ovl{\phi^{-1}(\ovl a\,f)}\,\phi^{-1}(\ovl a\,q)}v_{q}= \sum_qR_{(\ovl{\phi^{-1}(f)})\,(\ovl{\phi^{-1}(\ovl a)})\,\phi^{-1}(\ovl a)\phi^{-1}(q)}v_{q} =\sum_qR_{\phi^{-1}(\ovl f)\,\phi^{-1}(q)}v_{q}. 
\end{eqnarray*} It remains to note that $\phi^{-1}(\ovl f)\,\phi^{-1}(q)=\phi^{-1}(\ovl f\,q)$ to conclude that it is possible to define a rule $T$ as $T_{\ovl f\,q}:=R_{\phi^{-1}(\ovl f\,q)}$. \ \ \ $\Box$ {\sl Note:} from here the result $T$ of the application of $\theta$ to $R$ is defined by $T_q=R_{\theta^{-1}(\theta(\m1)q)}$.\ \ \ $\Box$\footnote{Check this on examples.} \begin{theo}\label{ind-aut-mon} If $\theta$ is an automorphism of the index-group $\m g$ then $\Theta:\m V\to\m V$ is an automorphism of the monoid $\m M$. \end{theo} {\sl Proof.\ } \begin{eqnarray*} \Theta(R\boxtimes T)|_f=(R\boxtimes T)|_{\theta(f)}=\sum_hR_hT_{\ovl h\,\theta(f)}= \sum_{h}R_{\theta(h)}T_{\ovl{\theta(h)}\theta(f)}= \sum_{h}R_{\theta(h)}T_{\theta(\ovl{h}\,f)}= \\ \sum_{h}\Theta(R)_{h}\Theta(T)_{\ovl{h}\,f}=(\Theta(R)\boxtimes\Theta(T))|_f \end{eqnarray*} \ \ \ $\Box$ \begin{cor} If the index-group $\m g$ is commutative then $\STDm{R}\approx\STDm{R^-}$ for each rule $R\in \m V$. \end{cor} {\sl Proof.\ } For abelian $\m g$ the mapping $\theta:g\mapsto\ovl g$ is an automorphism. Hence by theorem~\ref{ind-aut-mon} the mapping $\Theta$ induced by $\theta$ is an automorphism of the monoid $\m M$. On the other hand, $\Theta(R)_f=R_{\theta(f)}=R_{\ovl{f}},f\in\m g$. Therefore $\Theta(R)=R^-$. \ \ \ $\Box$ Examples show that at least for some non-commutative index groups $\STDm{R}\approx\STDm{R^-}$ holds for each rule $R\in \m V$. Whether or not this is true for all index groups cannot be decided on the basis of index-permutations alone, since $\theta: g\mapsto\ovl g, g\in\m g,$ is not an automorphism for any non-commutative group $\m g$. \subsection{Linear automorphisms of $\m M$} We call an automorphism $\varphi:\m M\to\m M$ {\sl linear} if $\varphi(T+L)=\varphi(T)+\varphi(L),T,L\in\m V$. It follows trivially that \begin{theo}\label{ind-lin} Any automorphism $\Theta$ of $\m M$ generated by an automorphism of the index-group $\m g$ of the monoid $\m M$ is a linear automorphism of $\m M$.
\end{theo} {\sl Proof.\ } By the definition $\Theta(T+L)|_f=(T+L)|_{\theta(f)}=T_{\theta(f)}+L_{\theta(f)} = \Theta(T)|_f+\Theta(L)|_f= (\Theta(T)+\Theta(L))|_f$. \ \ \ $\Box$ Now let us find general characteristics of linear automorphisms. \begin{lem}\label{autom->H} A linear automorphism $\varphi$ given, let $H[g]$ be $\varphi(K[g]), g \in \m g$. Then:\\ (1) The mapping $\psi:g\mapsto H[g], g\in\m g,$ is an injection of $\m g$ into the group $\m G$ consisting of all reversible elements of the monoid $\m M$.\\ (2) The rank of the system $\{H[g]|g\in\m g\}$ is equal to the order $|\m g|$ of the index group. \end{lem} {\sl Proof.\ } (1) First of all $H[\m1]=K[\m1]$ because $\varphi$ is an automorphism of $\m M$. Then, by the same reason and the definition of $H$, we have $H[g]\boxtimes H[f]=\varphi(K[g])\boxtimes\varphi(K[f])=\varphi(K[g]\boxtimes K[f]) =_{\text{by lemma~\ref{oper-bxt}(iv)}}=\varphi(K[gf])=H[gf]$. In particular, all $H[g], g\in\m g,$ are reversible in the monoid. In addition $\varphi$ is a 1-1-mapping. Therefore $\psi:g\mapsto H[g]$ is an injection of the group $\m g$ into the monoid. Since all reversible elements of $\m M$ form a subgroup $\m G$ of the monoid, we deal with an injection of $\m g$ into $\m G$. (2) The system $\{H[g]|g\in\m g\}$ of vectors is linearly independent because the matrix $\Phi$ defined as $\Phi_g=\varphi(K[g]),g\in\m g,$ must be non-singular, being the matrix representation of the automorphism $\varphi$ considered as a linear operator on $\m L$. \ \ \ $\Box$ Any injective homomorphism $\psi$ of $\m g$ into $\m G$ obeying the condition that the system $\{\psi(g)|g\in\m g\}$ is linearly independent in $\m L$ we call a {\sl non-singular $\m g$-injection}. \begin{cor} For any linear automorphism $\varphi$ of the monoid the mapping $\psi:\m g\to\m G$ defined by $\psi:g\mapsto \varphi(K[g])$ is a non-singular $\m g$-injection.
\end{cor} If $\mathcal{M}$ is a non-singular matrix of size $|\m g|\times|\m g|$ and $\mathcal{C}(L)\mathcal{M}=\mathcal{M}\,\mathcal{C}(T)$ then the rules $L,T$ produce isomorphic state transition diagrams. We call (see [2]) a non-singular matrix {\sl universal} if it commutes with the set $\{\mathcal{C}(T)|T\in\m V\}$, i.e. $\forall T\in\m V\exists L\in\m V[\mathcal{C}(T)\mathcal{M}=\mathcal{M}\,\mathcal{C}(L)]$. Obviously, for any non-singular matrix $\mathcal{M}$ of size $|\m g|\times|\m g|$, the matrices $\mathcal{M},\mathcal{M}^{-1}$ are or are not universal simultaneously. \begin{theo} For any non-singular $\m g$-injection $\psi$ the matrix $\Psi$ defined as $\Psi_g=\psi(g),g\in \m g,$ is universal and in addition $\Psi_{\m1}=K[\m1]$. \end{theo} {\sl Proof.\ } The equality $\Psi_{\m1}=K[\m1]$ follows directly from the fact that $\psi(\m1)$ must be the unit $K[\m1]$ of the monoid. By the definition $\psi(g)=K[g]\Psi, g\in \m g$. Then \begin{eqnarray*} \Psi\mathcal{C}(\psi(g))=\mathcal{C}(K[g])\Psi. \end{eqnarray*} Indeed, \begin{eqnarray*} \left[\mathcal{C}(K[g])\Psi\right]_f^h=\sum_s\mathcal{C}(K[g])|_f^s\Psi_s^h= \sum_sK[g]^{\ovl fs}\Psi_s^h=\Psi_{fg}^h, \end{eqnarray*} where the latter equality is caused by the fact that $K[g]^{\ovl fs}=0$ if $fg\neq s$. On the other hand \begin{eqnarray*} \left[\Psi\mathcal{C}(\psi(g))\right]_f^h=\sum_s\Psi_f^s\mathcal{C}(\psi(g))_s^h= \sum_s\Psi_f^s\psi(g)|^{\ovl sh}= \sum_s\Psi_f^s\left({K[g]\Psi} \right)^{\ovl sh}=\\ \sum_s\Psi_f^s\sum_tK[g]^t\Psi_t^{\ovl sh}=\sum_s\Psi_f^s\Psi_g^{\ovl sh}=[\Psi_f\boxtimes\Psi_g]^h=\Psi_{fg}^h \end{eqnarray*} Let $T\in\m V$ be a given rule. Then obviously \begin{eqnarray*} \mathcal{C}(T)=\sum_{g\in\m g}a_g\mathcal{C}(K[g]), a_g\in\m F_p. \end{eqnarray*} From here and the above: \begin{eqnarray*} \mathcal{C}(T)\Psi=\sum_ga_g\mathcal{C}(K[g])\Psi=\sum_ga_g\Psi\mathcal{C}(\psi(g))= \Psi\left(\sum_ga_g\mathcal{C}(\psi(g))\right)=\Psi\mathcal{C}(L) \end{eqnarray*} where $L=\sum_ga_g\psi(g)$.
This proves that for any $T\in\m V$ there exists $L\in\m V$ such that $\mathcal{C}(T)\Psi=\Psi\,\mathcal{C}(L)$. Also, because $\psi$ is a non-singular $\m g$-injection, the rank of $\Psi$ is equal to $|\m g|$, i.e. $\Psi$ is a non-singular matrix. Thus $\Psi$ is a universal matrix. \ \ \ $\Box$ \begin{theo} If $\mathcal{M}$ is a universal matrix obeying $\mathcal{M}_{\m1}=K[\m1]$ then the transformation $\varphi:T\mapsto T\mathcal{M}, T\in\m V,$ is a linear automorphism of $\m M$. \end{theo} {\sl Proof.\ } First, $\varphi$ is a 1-1-mapping because $\mathcal{M}$ is non-singular. The linearity of $\varphi$ is also obvious and it remains only to prove that $\varphi(T\boxtimes L)=(\varphi(T))\boxtimes(\varphi(L))$ for all $T,L\in\m V$. The remaining part of the proof of this theorem consists of several lemmas. We say that ${H[g]}\in\m V$ is a {\sl $g$-response} of a matrix $\mathcal{M}$ if $\mathcal{C}(K[g])\mathcal{M}=\mathcal{M}\mathcal{C}(H[g])$. Since $\mathcal{M}$ is non-singular, the responses $H[g]$ are defined uniquely. Evidently, $H[\m1]=K[\m1]$ because $\mathcal{C}(K[\m1])$ is the identity matrix $\mathcal I$. \begin{lem}\label{crit-P} $\mathcal{C}(K[g])\mathcal{M}=\mathcal{M}\mathcal{C}(H[g])\iff\forall h[\mathcal{M}_{hg}=\bxt{\mathcal{M}_h}{H[g]}],\ \ g\in\m g$. \end{lem} {\sl Proof.\ } For the system of responses $\{H[g]|g\in\m g\}\subseteq\m V$ of $\mathcal{M}$ it holds \begin{eqnarray}\label{recH} \forall g\in\m g[\mathcal{M}\mathcal{C}(H[g])=\mathcal{C}(K[g])\mathcal{M} \iff \forall h,q\in\m g(\mathcal{M}^q_{hg}=\sum_u\mathcal{M}^u_h{H[g]}^{\ovl uq})], \end{eqnarray} Indeed, \begin{eqnarray*} \mathcal{M}^q_{hg}=\sum_uK[g]^{\ovl hu}\mathcal{M}^q_u=\sum_u\mathcal{C}(K[g])_h^u\mathcal{M}^q_u=(\mathcal{C}(K[g])\mathcal{M})_h^q=\\ (\mathcal{M}\mathcal{C}({H[g]}))_h^q= \sum_u\mathcal{M}^u_h\mathcal{C}({H[g]})_u^q=\sum_u\mathcal{M}^u_h{H[g]}^{\ovl uq}.
\end{eqnarray*} Then we can rewrite (\ref{recH}) in the form \begin{eqnarray*} \mathcal{M}_{hg}(q)=\sum_u\mathcal{M}_h(u)H[g](\ovl uq),\ h,q\in\m g. \end{eqnarray*} Hence for the rows $\mathcal{M}_h,\mathcal{M}_{hg}$ of the universal matrix $\mathcal{M}$ and the response $H[g]$ we get the equation $\mathcal{M}_{hg}=\mathcal{M}_h\boxtimes H[g]$. \ \ \ $\Box$ For any reversible element $A$ of the monoid $\m M$ we denote as $\ovl A$ a vector $X\in\m V$ such that $A\boxtimes X=X\boxtimes A=K[\m1]$, reserving the notation $(\cdot)^{-1}$ for the inverse matrix. \begin{lem} All responses $H[g],g\in\m g,$ of any universal matrix $\mathcal{M}$ are reversible in the monoid. In addition $\ovl{H[g]}=H[\ovl g]$ and $\mathcal{C}(H[\ovl g])=\mathcal{C}^{-1}(H[g])$. \end{lem} {\sl Proof.\ } According to lemma~\ref{revers-circ} (i), for any $A\in\m V$ the matrix $\mathcal{C}(A)$ is reversible in $\m L$ iff $A$ is reversible in $\m M$. And because of the uniqueness of the inverse matrix (if it exists), when $\ovl A$ exists we have $\mathcal{C}(\ovl A)\mathcal{C}(A)=\mathcal{C}(K[\m1])$ and therefore $\mathcal{C}^{-1}(A)=\mathcal{C}(\ovl A)$. Let $\mathcal{M}$ be a universal matrix and $\mathcal{M}\mathcal{C}(H[g])=\mathcal{C}(K[g])\mathcal{M},g\in\m g$. Since $\mathcal{C}(K[g])$ is a non-singular matrix, $\mathcal{C}(H[g])$ is non-singular as well. Therefore $\mathcal{C}^{-1}(H[g])\mathcal{M}^{-1}=\mathcal{M}^{-1}\mathcal{C}^{-1}(K[g])$ and from here $\mathcal{M}\mathcal{C}^{-1}(H[g])=\mathcal{C}^{-1}(K[g])\mathcal{M}$. On the other hand $\mathcal{C}^{-1}(K[g])$ is a circulant $\mathcal{C}(A)$ (as was proved before), and $\mathcal{C}(K[\m1])=\mathcal{C}(A)\mathcal{C}(K[g])=\mathcal{C}(A\boxtimes K[g])$. Hence $A\boxtimes K[g]=K[\m1]$. From lemma~\ref{oper-bxt}(vii) we get $A(u)\neq0\iff u=\ovl g$ and $A({\ovl g})=1$, i.e. $A=K[\ovl g]$. Thus $\mathcal{M}\mathcal{C}^{-1}(H[g])=\mathcal{C}(K[\ovl g])\mathcal{M}=\mathcal{M}\mathcal{C}(H[\ovl g])$, whence $\mathcal{C}^{-1}(H[g])=\mathcal{C}(H[\ovl g])$ and $\ovl{H[g]}=H[\ovl g]$. \ \ \ $\Box$ \begin{lem}\label{gen} Let $\mathcal{H}_g=H[g],g\in\m g,$ be the matrix compiled from the responses for a universal matrix $\mathcal{M}$.
We call $\mathcal{H}$ a {\sl response matrix} for $\mathcal{M}$.\\ (i) If $f,h_1,\dots,h_k\in\m g$, and $f=h_1\dots h_k$ then $\mathcal{M}_f=\mathcal{M}_{\m1}\boxtimes H[h_1]\boxtimes \dots\boxtimes H[h_k]$.\\ (ii) Each row $\mathcal{M}_s,s\neq\m1,$ is the result of an application of the operation $\boxtimes$ to $\mathcal{M}_{\m1}$ and some elements of $\mathbf H=\{H[g]|g\in\mathbf G\}$ where $\mathbf G$ is any fixed system of generators for $\m g$.\\ (iii) $H[f]\boxtimes H[g]=H[fg]$ for each $f,g\in\m g$.\\ (iv) The system of rows of the response matrix $\mathcal{H}$ is linearly independent. \end{lem} {\sl Proof.\ } (i) From lemma~\ref{crit-P} we have $\mathcal{M}_{h_1}=\mathcal{M}_{\m1}\boxtimes H[h_1]$. Then $\mathcal{M}_{h_1h_2}=\mathcal{M}_{h_1}\boxtimes H[h_2]$ and so on. Finally $\mathcal{M}_{f}=\mathcal{M}_{h_1\dots h_k}=\mathcal{M}_{\m1}\boxtimes H[h_1]\boxtimes \dots\boxtimes H[h_k]$. (ii) Fix any system $\mathbf G$ of generators for $\m g$. Since each element $s$ of the group is a product $g_1...g_k$ of some of its generators from $\mathbf G$ and their inverses (where factors may repeat), we get from (i) that $\mathcal{M}_{s}= \bxt{\mathcal{M}_{\m1}}{H[g_1]}\bxt{}{\dots}\bxt{}{H[g_k]}$. (iii) From lemma~\ref{oper-bxt}(vi) we have $\mathcal{C}(K[f])\mathcal{C}(K[g])=\mathcal{C}(K[fg])$. Now \begin{eqnarray*} \mathcal{M}\mathcal{C}(H[f])\mathcal{C}(H[g])=\mathcal{C}(K[f])\mathcal{M}\mathcal{C}(H[g])=\mathcal{C}(K[f]) \mathcal{C}(K[g])\mathcal{M}=\mathcal{C}(K[fg])\mathcal{M}=\mathcal{M}\mathcal{C}(H[fg]). \end{eqnarray*} Hence $\mathcal{M}\{\mathcal{C}(H[f])\mathcal{C}(H[g])-\mathcal{C}(H[fg])\}=\mathbf0$. But $\mathcal{M}$ is a non-singular matrix, so $\mathcal{C}(H[f])\mathcal{C}(H[g])=\mathcal{C}(H[fg])$, i.e. $H[f]\boxtimes H[g]=H[fg]$. (iv) Since $\mathcal{M}$ is a non-singular matrix, its system $\{\mathcal{M}_g|g\in\m g\}$ of rows is linearly independent. On the other hand $\mathcal{M}_g=\mathcal{M}_{\m1}\boxtimes H[g]$. From lemma~\ref{oper-bxt}(iii) \begin{eqnarray*} \sum_ga_g\mathcal{M}_g=\mathcal{M}_{\m1}\boxtimes\sum_ga_gH[g].
\end{eqnarray*} Hence if $\sum_ga_gH[g]=\mathbf0$ for some collection of $a_g\in \m F_p,g\in\m g,$ then $\sum_ga_g\mathcal{M}_g=\mathbf 0$, which is possible only when all the $a_g$ are zero. \ \ \ $\Box$ \begin{lem} For a universal matrix $\mathcal{M}$ and its response matrix $\mathcal{H}$ it holds \begin{eqnarray}\label{univ-resp} \mathcal{M}\mathcal{C}(T\mathcal{H})=\mathcal{C}(T)\mathcal{M}, \ \ T\in\m V. \end{eqnarray} \end{lem} {\sl Proof.\ } We have $T=\sum_gT^gK[g]$ and from here $T\mathcal{H}=\sum_gT^gK[g]\mathcal{H}=\sum_gT^g\mathcal{H}_g=\sum_gT^gH[g]$. On the other hand \begin{eqnarray*} \mathcal{C}(T)\mathcal{M}=\mathcal{C}\left(\sum_gT^gK[g]\right)\mathcal{M}= \sum_gT^g\mathcal{C}(K[g])\mathcal{M}= \\\sum_gT^g\mathcal{M}\mathcal{C}(H[g])= \mathcal{M}\mathcal{C}\left(\sum_gT^gH[g]\right)=\mathcal{M}\mathcal{C}(T\mathcal{H}). \end{eqnarray*} \ \ \ $\Box$ To finish the proof of the theorem we note that $\mathcal{C}((T\boxtimes L)\mathcal{H})=\mathcal{M}^{-1}\mathcal{C}(T\boxtimes L)\mathcal{M}=\mathcal{M}^{-1}\mathcal{C}(T)\mathcal{M}\,\mathcal{M}^{-1}\mathcal{C}(L)\mathcal{M}= \mathcal{C}(T\mathcal{H})\mathcal{C}(L\mathcal{H})=\mathcal{C}((T\mathcal{H})\boxtimes(L\mathcal{H}))$ or $(T\boxtimes L)\mathcal{H}=(T\mathcal{H})\boxtimes(L\mathcal{H})$. Finally, when $\mathcal{M}_{\m1}=K[\m1]$ the matrix $\mathcal{M}$ coincides with its response matrix, that is $\mathcal{M}=\mathcal{H}$, and we arrive at $(T\boxtimes L)\mathcal{M}=(T\mathcal{M})\boxtimes(L\mathcal{M})$. \ \ \ $\Box$ For a matrix $\mathcal M$ let the matrix $\mathcal{M}^-$ be defined by the condition $(\mathcal{M}^-)_g=(\mathcal{M}_g)^-,g\in\m g$. \begin{lem}\label{reversible} If $\mathcal{H}$ is a response matrix for $\mathcal{M}$ then $\mathcal{M}^-=\mathcal{H}^-\,\,\mathcal{C} (\mathcal{M}_{\m1}^-)$.
\end{lem} {\sl Proof.\ } We should prove $\mathcal{M}^-_g|^f=\sum_u\mathcal{H}^-_g|^u\,\mathcal{C} (\mathcal{M}_{\m1}^-)|_u^f$: \begin{eqnarray*} \sum_u\mathcal{H}^-_g|^u\,\mathcal{C} (\mathcal{M}_{\m1}^-)|_u^f=\sum_uH[g]^{\ovl u}\mathcal{M}_{\m1}^{\ovl{\ovl uf}}=_{s:=\ovl f\,u}\ \sum_s\mathcal{M}_{\m1}^{s}H[g]^{\ovl s\ovl f}=(\mathcal{M}_{\m1}\boxtimes H[g])^{\ovl f}=\mathcal{M}_g^-|^f \end{eqnarray*} \ \ \ $\Box$ \begin{cor} (i) Any two universal matrices $\mathcal{M},\mathcal{M}'$ having the same response matrix $\mathcal{H}$ are equivalent in the sense that $\mathcal{M}\mathcal{C}(L)=\mathcal{C}(T)\mathcal{M}\iff \mathcal{M}'\mathcal{C}(L)=\mathcal{C}(T)\mathcal{M}', T,L\in\m V.$\\ (ii) $\mathcal{M}_{\m1}$ is reversible in the monoid for any universal matrix $\mathcal{M}$.\\ (iii) If $A$ is any reversible element of \ $\m M$ and $\mathcal{M}$ is a universal matrix with a response matrix $\mathcal{H}$, then the matrix $\mathcal{M}'$ defined as $\mathcal{M}'_g=A\boxtimes\mathcal{M}_g$ is also a universal matrix with the same response matrix $\mathcal{H}$. \end{cor} {\sl Proof.\ } (i) directly follows from (\ref{univ-resp}). (ii) As any universal matrix is non-singular, it follows from lemma~\ref{reversible} that the matrix $\mathcal{C}(\mathcal{M}^-_{\m1})$ is non-singular. By lemma~\ref{revers-circ} (i) then $\mathcal{M}_{\m1}^-$ is a reversible element in the monoid, i.e. there exists an element $X\in\m V$ such that $X\boxtimes\mathcal{M}_{\m1}^-=\mathcal{M}_{\m1}^-\boxtimes X=K[\m1]$. From lemma~\ref{oper-bxt}(iv) we then have that $\mathcal{M}_{\m1}\boxtimes X^-=X^-\boxtimes \mathcal{M}_{\m1}=K[\m1]^-=K[\m1]$. Hence $\mathcal{M}_{\m1}$ is also reversible. (iii) From lemma~\ref{crit-P} and the definition of the response matrix $\mathcal{H}$ (see lemma~\ref{gen}) we have $\mathcal{M}_g=\mathcal{M}_{\m1}\boxtimes \mathcal{H}_g,g\in\m g$. Then $\mathcal{M}'_g=(A\boxtimes\mathcal{M}_{\m1})\boxtimes\mathcal{H}_g$.
Since $A$ is reversible, the system of rows $\{A\boxtimes\mathcal{M}_g|g\in\m g\}$ of the matrix $\mathcal{M}'$ has the same rank as the system of rows of the matrix $\mathcal{M}$. Therefore $\mathcal{M}'$ is a non-singular matrix. Finally, from lemma~\ref{crit-P} we get that $\mathcal{M}'$ is a universal matrix with the response matrix $\mathcal{H}$. \ \ \ $\Box$ \begin{cor} Let $\mathcal{M}$ be a universal matrix representing an automorphism $\Theta$ of $\m M$ generated by an automorphism $\theta$ of the index group $\m g$. Then $\mathcal{M}_g^f=\mathcal{M}_{\m1}^{f\,\ovl{\theta^{-1}(g)}}$. The rows of the response matrix for $\mathcal{M}$ constitute the set $\{K[g]|g\in\m g\}$. \end{cor} {\sl Proof.\ } First consider the response matrix $\mathcal{H}$ for $\mathcal{M}$. As we know, $\mathcal{H}_{\m1}=K[\m1]$. Also, since $\mathcal{C}(\Theta(T))=\mathcal{M}^{-1}\mathcal{C}(T)\mathcal{M}$, for $T=K[g]$ we obtain $\mathcal{M}\mathcal{C}(H[g])=\mathcal{C}(K[g])\mathcal{M}$ and deduce further that $H[g]=\Theta(K[g])=K[\theta^{-1}(g)],g\in\m g$. On the other hand $\mathcal{M}_g=\mathcal{M}_{\m1}\boxtimes H[g]$. Therefore $\mathcal{M}_g^f=(\mathcal{M}_{\m1}\boxtimes K[\theta^{-1}(g)])^f= \mathcal{M}_{\m1}^{f\,\ovl{\theta^{-1}(g)}}$. \ \ \ $\Box$ {\sl Note:} if $\theta$ is a functional index-permutation then the corresponding automorphism of the index group is $\ovl{\theta(\m1)}\theta(\cdot)$. Therefore, in terms of functional index-permutations, for the rows of the universal matrix representing $\theta$ we have $\mathcal{M}_g^f=(\mathcal{M}_{\m1}\boxtimes K[\theta^{-1}(g)])^f= \mathcal{M}_{\m1}^{f\,\ovl{\theta^{-1}(\theta(\m1)g)}}$.\ \ \ $\Box$ \footnote{Check this on examples.} \\ \begin{ex}{\rm There exist non-trivial response matrices proving that in general $\m i(\m M)\neq\m l(\m M)$. We call a response matrix {\sl full} if its only row of the form $K[g]$ is the first one. Table~\ref{A15} shows examples of full response matrices for $q=7,15$.
}\ \ \ $\Box$ \end{ex} {\tiny \begin{table}\label{A15} \caption{Full response matrices for $q=7$ (left) and $q=15$ (right).} $\left(\begin {array}{ccccccc} 1&0&0&0&0&0&0\\ \noalign{\medskip}0&1&0&1&0&1&0\\ \noalign{\medskip}0&0&1&1&0&0&1\\ \noalign{\medskip}0&1&1&1&0&1&1\\ \noalign{\medskip}0&0&0&0&1&1&1\\ \noalign{\medskip}0&1&0&1&1&1&1\\ \noalign{\medskip}0&0&1&1&1&1&1 \end {array}\right), \left(\begin {array}{ccccccccccccccc} 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \noalign{\medskip}0&1&0&0&0&1&1&0&0&1&0&1&0&1&1\\ \noalign{\medskip}0 &0&1&1&0&0&0&1&0&0&1&1&1&1&0\\ \noalign{\medskip}0&1&1&0&1&0&1&1&1&0&0&0 &0&1&0\\ \noalign{\medskip}0&0&0&0&1&1&1&1&0&1&0&1&0&0&1 \\\noalign{\medskip}0&1&1&0&1&0&0&1&1&0&1&0&0&1&0\\ \noalign{\medskip}0 &1&1&0&1&0&0&0&1&0&0&1&1&0&1\\ \noalign{\medskip}0&0&1&1&0&1&1&0&1&1&1&0 &1&1&0\\ \noalign{\medskip}0&0&0&1&0&0&0&1&1&0&1&0&1&1&1 \\ \noalign{\medskip}0&1&1&1&1&0&0&0&1&0&0&1&0&0&1\\ \noalign{\medskip}0 &1&1&0&1&1&0&0&1&0&0&1&0&0&1\\ \noalign{\medskip}0&1&0&1&1&1&1&0&0&1&1&0 &1&0&1\\ \noalign{\medskip}0&1&1&0&1&0&0&1&1&1&0&0&0&1&0 \\\noalign{\medskip}0&0&1&1&0&1&1&1&1&1&1&0&1&0&0\\ \noalign{\medskip}0 &1&0&1&1&1&1&0&0&1&1&1&1&0&0 \end {array} \right) $ \end{table} } \subsection{Regular isomorphisms} Let $\aut{\m M}$ be the complete set of automorphisms of a monoid $\m M$, whereas $\m l(\m M),\m i(\m M)$ are respectively the sets of linear automorphisms and of automorphisms defined by automorphisms of the index group (we call them {\sl index automorphisms}). Given a class $C\subseteq\aut{\m M}$, we denote as $\m I[C]$ the set of pairs $\{T,H\}$ of elements of $\m M$ such that there exists $\varphi\in C$ which translates $T$ into $H$. This means $\STDm T\approx\STDm H$ and therefore we can consider $\m I[C]$ as a class of isomorphisms of automata revealed by automorphisms from $C$. There exists one obvious set of isomorphisms of HLCA, based on the reversibility of rules in $\m M$. \begin{theo} If $T\in\m G$ then $\STDm T\approx\STDm{T^{-1}}$.
\end{theo} {\sl Proof.\ } Indeed, $v\boxtimes w=K[\m1]\iff w^-\boxtimes v^-=K[\m1]$, i.e. $(v^-)^{-1}=(v^{-1})^-$. From here $(v,w)\in\STDm T\iff w=v\boxtimes T^-\iff w\boxtimes(T^-)^{-1}=v\iff w\boxtimes(T^{-1})^-=v\iff (w,v)\in\STDm{T^{-1}}$. On the other hand, if $T$ is reversible then it acts on the set $\m V$ of states as a 1-1-mapping and therefore its diagram is a graph consisting of cycles with a fixed orientation. If we reverse the orientation we obtain an isomorphic graph which (as we saw just above) will be the diagram of the rule $T^{-1}$. \ \ \ $\Box$ The set of pairs $\m I_{\m G}=\{\{T,T^{-1}\}|T\in\m G,T\neq T^{-1}\}$ can extend the isomorphisms generated by $\m l(\m M)$, as example~\ref{q=7} shows. Therefore for $C\subseteq\aut{\m M}$ we denote by $\m I[C]^+$ the set of isomorphisms of ACA on $\m g$ which is the closure of $\m I[C] \cup\m I_{\m G}$. Although the mapping $T\mapsto T^{-1}$ for a concrete $T\in\m G$ cannot, generally speaking, be extended either to an automorphism of $\m M$\footnote{It would be good to provide an example proving this.} or even to an automorphism of $\m G$, the reflection $\varrho:T\mapsto T^{-1}$ was shown in lemma~\ref{refl} to be an isomorphism between $\m G$ and the co-group $\m G^{\circ}$. Thus the extension $\aut{\m M}^+$ of $\aut{\m M}$ also relates to isomorphisms of the underlying algebraic structures. \begin{cor}\label{inclusion} $\m i(\m M)\subseteq\m l(\m M)\subseteq\aut{\m M}$. \end{cor} {\sl Proof.\ } See theorem~\ref{ind-lin}. \ \ \ $\Box$ As examples show, in general even for cyclic groups all inclusions in corollary~\ref{inclusion} are proper. \begin{ex}\label{q=7} {\rm If $\m g$ is a cyclic group of order 7 there are 12 different response matrices which do not represent index-permutations. Table~\ref{A15} above shows one (left part). For the pairs $(1111000, 1001000)$, $(0101111,0100110)$ this matrix translates the left rule into the right one, and therefore the rules in each pair have isomorphic diagrams.
However this cannot be shown by index-permutations because the numbers of units in the vectors of each pair are different. Also it appears that the number of isomorphism classes in this case is 12, whereas there are 28 classes of isomorphism induced by index automorphisms and only 20 classes of isomorphism induced by linear automorphisms of $\m M$. This means that \begin{eqnarray*} \m i(\m M)\subsetneq\m l(\m M)\subsetneq\aut{\m M} \end{eqnarray*} for this case. Moreover, after the closure of the set of isomorphisms generated by linear automorphisms with the isomorphisms generated by inversions of elements of $\m G$\footnote{Actually the decrease in the number of classes is due to the isomorphism of the rule with number $67$ to its inverse rule with number $118$, and of the rule $50$ to its inverse rule $87$.} we arrive at 18 classes of isomorphisms. Thus this example shows that in general the proper inclusions can hold: \begin{eqnarray*} \m I[\m i(\m M)]\subsetneq\m I[\m l(\m M)]\subsetneq\m I[\m l(\m M)]^{+}\subsetneq\m I[\aut{\m M}]. \end{eqnarray*}\ \ \ $\Box$} \end{ex} {\bf Definition:}{\sl\ We call any isomorphism of automata $\mathcal{A}(T)$ and $\mathcal{A}(L)$ {\sl regular} if it belongs to $\m I[\aut{\m M}]^+$. Also a group $\m g$ is called regular for $\m F_p$ if all isomorphisms of automata on it are regular. } \begin{remar}{\rm It is important that any regular isomorphism is produced by an automorphism of the monoid and the isomorphism $\m G\to\m G^{\circ}$, and therefore is defined by a 1-1-mapping of one system of generators of $\m M,\m G$ onto another, whereas in the definition of automata isomorphism we are talking about a much larger class of permutations of $\m V$.}\ \ \ $\Box$ \end{remar} The next statement makes it possible to show that some index groups are not regular.
\begin{theo}\label{not-reg} Suppose for $\m g,\m F_p$ there exist two elements $v,w\in\m M\setminus\m G$ whose state transition diagrams are isomorphic but the numbers of solutions in $\m M$ to the equations $x\boxtimes x=v$ and $x\boxtimes x=w$ are different. Then $\m g$ is not regular for $\m F_p$. \end{theo} {\sl Proof.\ } Suppose $|\{x|x\boxtimes x=v\}|>|\{x|x\boxtimes x=w\}|$. Any automorphism $\varphi\in\aut{\m M}$ such that $\varphi(v)=w$ would translate the solutions to the equation $x\boxtimes x=v$ into solutions to $x\boxtimes x=w$. This contradicts the condition that $\varphi$ is a 1-1-mapping on $\m V$. Thus $\{v,w\}\notin\m I[\aut{\m M}]$. The state transition diagrams of reversible rules consist of cycles, whereas the diagrams of irreversible rules must have dangling vertices because they are singular as linear operators. Therefore there is no reversible rule with a diagram isomorphic to the diagram of an irreversible rule. This means (theorem~\ref{aut->iso}) that the classes $V,W$ of elements automorphic to $v,w$ respectively consist entirely of irreversible rules. On the other hand, by adding a pair $\{s,t\}\in\m I_{\m G}$ we can glue together two classes of automorphism of elements including $s$ and $t$ respectively. But these classes consist of reversible elements, as $s,t$ themselves are. Therefore the closure of $\m I[\aut{\m M}]$ with $\m I_{\m G}$ does not influence the classes of automorphism of irreversible elements. \ \ \ $\Box$ \section{Automata isomorphisms for $\m F_2$ and small groups} Results and examples in this section mostly relate to the case of the simplest field $\m F_2$, i.e. $p=2$. In this case we simply say ``$\m g$ is (is not) regular''. \subsection{Case of cyclic $\m G$. Conjecture} First, $\m G$ can be cyclic only if $\m g$ is cyclic: indeed, any subgroup of a cyclic group is cyclic and there is an injection of $\m g$ into $\m G$. Let $\m c_n$ be a cyclic group of order $n$.
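For small cyclic index groups over $\m F_2$ the group $\m G$ can be tabulated directly. The following Python sketch (an illustration only, not part of the formal development; all names in it are ours) encodes a rule $T$ as a 0/1-list, builds the circulant $\mathcal{C}(T)$, and tests reversibility by Gaussian elimination over $\m F_2$:

```python
# Illustration: count the reversible rules over F_2 for the cyclic index
# group c_n.  A rule T is a 0/1-vector of length n; its circulant C(T) has
# entries C(T)_f^h = T^{(h-f) mod n}, and T is reversible iff C(T) is
# non-singular over F_2.
from itertools import product

def circulant_invertible(T):
    n = len(T)
    # pack row f of C(T) into a bitmask, then eliminate modulo 2
    rows = [sum(T[(h - f) % n] << h for h in range(n)) for f in range(n)]
    for col in range(n):
        piv = next((i for i in range(col, n) if (rows[i] >> col) & 1), None)
        if piv is None:
            return False          # no pivot in this column: C(T) is singular
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(n):
            if i != col and (rows[i] >> col) & 1:
                rows[i] ^= rows[col]
    return True

def order_of_G(n):
    # enumerate all 2^n rules and count the reversible ones
    return sum(circulant_invertible(list(T)) for T in product((0, 1), repeat=n))

print([order_of_G(n) for n in (3, 5, 7)])
```

For $n=3$ and $n=5$ this yields $|\m G|=3$ and $|\m G|=15$, i.e. $2^{n-1}-1$, while for $n=7$ it yields $|\m G|=49$.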
For the basic field $\m F_2$ we then have $\m1=0$, and $K[g], g\in\{0,\dots,n-1\},$ is a vector all of whose components are equal to 0 except the $g$-th, which is equal to 1. We also use $0,1$-words to write elements of $\m G$. So a word $0^{k_1}1^{k_2}\dots$ where $\sum_ik_i=n$ denotes a vector whose first $k_1$ components are zeroes, followed by $k_2$ components all equal to 1, etc. The elements $\mathbf{0,1}$ play a special role in what follows. By definition $\mathbf0=[0,\dots,0]=0^{n},\ \mathbf 1=[1,\dots,1]=1^n$.\footnote{We distinguish $\mathbf 1$ and $\m1$. As defined in the beginning, the latter is the unit of the index group $\m c_n$.} For $T\in\m M$ we denote by $T^{\star}$ the vector $L\in\m M$ with $L(i)=1-T(i), i=0,\dots,n-1$. Another way to write this is $T^{\star}=T+\mathbf 1$. Also let $\pi(T)$ be the parity of $T\in\m V$, i.e. $\pi(T)=\sum_fT^f\in\m F_2$. \begin{lem}\label{reg0} For any $T,L\in\m V$:\\ 1) $\pi(T)\mathbf1=T\boxtimes\mathbf1=\mathbf1\boxtimes T$.\\ 2) $\pi(T\boxtimes L)=\pi(T)\pi(L)$. \end{lem} {\sl Proof.\ } 1) First of all, due to lemma~\ref{oper-bxt}(vi), $T\boxtimes\mathbf1=\mathbf1\boxtimes T$.\footnote{For a commutative index group this also follows from the commutativity of $\boxtimes$.} Now, $\pi(T)\mathbf1|^i=[(\sum_jT^j)\mathbf1]^i= \sum_jT^j\mathbf1^{i-j}=(T\boxtimes\mathbf1)^i$. 2) $\pi(T\boxtimes L)=\sum_j(T\boxtimes L)^j=\sum_j\sum_iT^iL^{j-i}= \sum_iT^i[\sum_jL^{j-i}]=\sum_iT^i\pi(L)=\pi(L)\pi(T).$ \ \ \ $\Box$ \begin{lem}\label{reg1} Let $\m g$ be of odd order $n$. \\ 1) $T\in\m M\implies \pi(T)\neq\pi(T^{\star})$.\\ 2) $T\in\m G\implies\pi(T)=1$.\\ 3) $\m G^{\star}=\{T^{\star}|T\in\m G\}$ is a subgroup of $\m M$ isomorphic to $\m G$. \end{lem} {\sl Proof.\ } 1) $\pi(T)=\sum_{i=0}^{n-1}T^i=\sum_{i=0}^{n-1}(1-(T^{\star})^i)=n- \pi(T^{\star})$. From here $\{\pi(T),\pi(T^{\star})\}=\{0,1\}$. 2) If $T$ is reversible then $\det[\mathcal{C}(T)]=1$.
However, as is known for circulants~\cite{Ryzhik}, $\det[\mathcal{C}(T)]$ is a multiple of $\pi(T)$. Hence $\pi(T)\neq0$. 3) Since $T^{\star}=T+\mathbf1$, by lemma~\ref{oper-bxt}(iii) we can proceed as follows: \begin{eqnarray*} T^{\star}\boxtimes L^{\star}=(T+\mathbf 1)\boxtimes (L+\mathbf1) =T\boxtimes L+T\boxtimes\mathbf1+L\boxtimes\mathbf1 +\mathbf1\boxtimes\mathbf1=\\ (T\boxtimes L)^{\star}+[\pi(T)+\pi(L)]\mathbf1 \end{eqnarray*} Therefore, dealing with vectors $T,L$ of the same parity, we can write \begin{eqnarray*} T^{\star}\boxtimes L^{\star}=(T\boxtimes L)^{\star}. \end{eqnarray*} This, together with the fact that $(\cdot)^{\star}$ is a 1-1-mapping on $\m V$, yields that $(\cdot)^{\star}:\m G\to \m G^{\star}$ is an isomorphism. \ \ \ $\Box$ In order to distinguish between the $m$-th power $\underbrace{T\boxtimes\dots\boxtimes T}_{m\text{ times}}$ of a vector $T\in\m M$ and the $m$-th component of the row-vector, we denote this power by $T^{\boxtimes m}$. \begin{lem}\label{max-cyc} Let the positive integer $r$ be the minimal number such that $T^{\boxtimes(r+1)}=T$ for a vector $T\in\m M$. Then $r$ is the length of the maximal cycle in $\STDm T$. In particular, if $T\in\m G$, i.e. is reversible, then the order of $T$ in $\m G$ is equal to the length of the maximal cycle in $\STDm T$. \end{lem} {\sl Proof.\ } By lemma~\ref{oper-bxt} (i),(iv) and theorem~\ref{lem2} we have \begin{eqnarray}\label{mcyc} \begin{cases} T^{\boxtimes (m+1)}\boxtimes A=T\boxtimes A& \iff \\ T^{\boxtimes m}\boxtimes (T\boxtimes A)=T\boxtimes A& \iff \\ (A^-\boxtimes T^-)\boxtimes(T^-)^{\boxtimes(m)}= (A^-\boxtimes T^-)& \iff \\ T^{\boxtimes m}*(A^-\boxtimes T^-)=(A^-\boxtimes T^-)& \end{cases} \end{eqnarray} Now, for any $B$ that belongs to a cycle in $\STDm T$ there exists a vector $A$ such that $B=T\boxtimes A$. From here the length of any cycle in the diagram of the rule $T$ does not exceed $r$.
On the other hand, substituting $A:=K[\m1]$ in (\ref{mcyc}) and using (ii) from lemma~\ref{oper-bxt}, we arrive at the conclusion that the length of the cycle including $T^-$ in $\STDm T$ is not less than $r$, because $r$ is the minimal number such that $T^{\boxtimes (r+1)}=T$. \ \ \ $\Box$ \begin{lem}\label{extent} Let $H,V$ be sub-semigroups of the monoid $\m M$ without common elements and let $\varphi: H\to H$ be an automorphism of $H$. If there exists an isomorphism $\gamma:V\to H$ such that $\gamma^{\sigma}(h\boxtimes v)=h\boxtimes\gamma^{\sigma}(v), h\in H,v\in V,\sigma\in\{-1,1\},$ then $\varphi$ can be extended to an automorphism of $H\cup V$. \end{lem} {\sl Proof.\ } We define $\Phi:H\cup V\to H\cup V$ by \begin{eqnarray} \Phi(z)= \begin{cases} \varphi(z),& z\in H,\\ \gamma^{-1}\varphi\gamma(z),& z\in V. \end{cases} \end{eqnarray} To prove that $\Phi$ is an automorphism of $H\cup V$, first note that $\Phi$ is a 1-1-mapping because $H\cap V=\emptyset$ and $\varphi,\gamma$ are 1-1-mappings with $H$ being the set of their values. Now, \begin{eqnarray*} \Phi(h\boxtimes v)=\gamma^{-1}\varphi\gamma(h\boxtimes v)=_{\text{condition for $\gamma$ with $\sigma=1$}}\gamma^{-1}\varphi(h\boxtimes\gamma(v))=\\ \gamma^{-1} (\varphi(h)\boxtimes\varphi(\gamma(v)))= _{\text{condition for $\gamma$ with $\sigma=-1$}} \varphi(h)\boxtimes\gamma^{-1}\varphi\gamma(v)=\Phi(h)\boxtimes\Phi(v). \end{eqnarray*} \ \ \ $\Box$ \begin{theo}\label{reg-theo} Assume that \\ (i) the number $n=|\m g|$ is odd;\\ (ii) $|\m G|=2^{n-1}-1$;\\ (iii) for each two elements $T,H\in\m G$ of equal order there exists an automorphism $\varphi$ of $\m G$ such that $\varphi(T)=H$.\\ Then all isomorphisms of ACA on $\m g$ are regular.
\end{theo} {\sl Proof.\ } Since a $T\in\m M$ with even parity cannot be reversible, condition (ii) implies that all elements of $\m M$ with odd parity (excluding $\mathbf1$, whose circulant has determinant equal to 0) constitute $\m G$ and therefore are reversible. All elements of $\m M$ of even parity excluding $\mathbf 0$ constitute a sub-semigroup $\m G^{\star}$ of the monoid, which is a group in its own right, and by lemma~\ref{reg1} $\m G\simeq\m G^{\star}$. The mapping $T\mapsto T+\mathbf 1$ serves as the isomorphism $\gamma$. Indeed, $T\mapsto T+\mathbf 1=T^{\star}$ and clearly the identity $\gamma(\gamma(T))=T$ holds. That is, $\gamma=\gamma^{-1}$. In addition \begin{eqnarray*} T\boxtimes(H+\mathbf 1)=T\boxtimes H+T\boxtimes\mathbf 1=T\boxtimes H+\mathbf 1. \end{eqnarray*} This means $T\boxtimes\gamma(H)=\gamma(T\boxtimes H)$ and therefore by lemma~\ref{extent} any automorphism $\varphi$ of $\m G$ is extendable to an automorphism $\Phi$ of $\m G\cup\m G^{\star}$. Then it obviously can be extended to an automorphism $\Phi'$ of $\m G\cup\m G^{\star}\cup\{\mathbf0\}$ by setting $\Phi'(\mathbf 0)=\mathbf 0$, because $T\boxtimes\mathbf0=\mathbf0$. Finally we define $\Phi''(X)=\Phi'(X)$ if $X\neq\mathbf1$ and $\Phi''(\mathbf1)=\mathbf1$. Let us show that $\Phi''\in\aut{\m M}$. For that it is enough to consider the action of $\Phi''$ on products of the kind $T\boxtimes\mathbf 1$. Since $\Phi''$ preserves the parity $\pi(T)$ of an element $T$ and by lemma~\ref{reg0} $T\boxtimes\mathbf1=\pi(T)\mathbf1$, the equality $\Phi''(T\boxtimes\mathbf 1)=\Phi''(T)\boxtimes\Phi''(\mathbf1)$ holds. Hence $\Phi''\in\aut{\m M}$. From lemma~\ref{max-cyc} it follows that elements $T,H$ of different orders cannot have isomorphic diagrams. This means that no automorphism of $\m M$ exists which translates $T$ into $H$. On the other hand, from (iii) we have that for any elements $T,H\in\m G$ of the same order there exists an automorphism $\varphi:\m G\to\m G$ s.t. $\varphi(T)=H$.
As we showed, this automorphism of $\m G$ can be extended to an automorphism of $\m M$. Because of the isomorphism $\gamma:\m G^{\star}\to\m G$, the same is true for elements $T,H\in \m G^{\star}$. In addition, $\{\mathbf0\},\{\mathbf1\}$ are singleton classes of isomorphism \cite{Bul2}. Finally, the extension of the set of isomorphisms $\m I[\aut{\m M}]$ by $\m I_{\m G}$ does not yield anything new, because both $T$ and $T^{-1}$, $T\in\m G$, have the same order and we can refer to condition (iii). Thus all isomorphisms of ACA on $\m g$ are regular.\ \ \ $\Box$ The next lemma just recalls a well-known fact: \begin{lem}\label{aut_cyc} For any two elements of the same order in a finite cyclic group there exists an automorphism of the group translating one of these elements into the other. \end{lem} {\sl Proof.\ } Any cyclic group of order $n$ is isomorphic to the standard cyclic group $\mathcal{M}_n=\langle\{0,\dots,n-1\},+_{\mm n}\rangle$, so we can reason about this group. Now, an element $j$ of the group has order $k\iff \gcd(n,j)=\frac nk$, in other words $k=\frac{n}{\gcd(n,j)}$. Indeed, the order $k_j$ of $j$ is the least number $r$ such that $jr=sn$ for some number $s$. On the other hand, $k_j|n$ because the orders of elements divide the order of the group (Lagrange's theorem). From here $j=s\frac n{k_j}=st,\ t\in\mathbb Z^+$. Thus $t$ is the maximal divisor of $j$ that is simultaneously a divisor of $n$ ($t=\frac n{k_j}$). Hence $t=\gcd(n,j)$ and $k_j=\frac{n}{\gcd(n,j)}$. Let $a,b$ have the same order, i.e. $\gcd(n,a)=\gcd(n,b)=d$. If so, then the numbers $\frac ad,\frac bd$ are both relatively prime with $n$ and therefore are generators of the group $\m M_n$. Let us define $\varphi:[i(\frac ad)\mm n]\mapsto [i(\frac bd)\mm n]$, where $i$ runs over $\{0,\dots,n-1\}$. This mapping is an automorphism translating $a$ into $b$ because $a=d(\frac ad)$ and $b=d(\frac bd)$. Indeed, for any $x,y\in\{0,\dots,n-1\}$ there exist multipliers $X,Y$ such that $\frac adX=x,\frac adY=y$.
Then $\varphi(x+y)=\varphi(\frac adX+\frac adY)=\varphi((X+Y)\frac ad)=(X+Y)\frac bd=X\frac bd+Y\frac bd=\varphi(x)+\varphi(y)$. Finally, $\varphi$ is a bijection because both $\frac ad,\frac bd$ are relatively prime with $n$, which forces both numbers $i(\frac ad)\mm n,i(\frac bd)\mm n$ to run through the set $\{0,\dots,n-1\}$ without repetitions when $i$ runs over this set. \ \ \ $\Box$ The next statement formulates conditions under which elements of $\aut{\m M}$ are compositions of automorphisms of $\m G$ and an isomorphism between $\m G$ and $\m G^{\star}$. \begin{cor} If $\m G$ is cyclic of order $2^{|\m g|-1}-1$, then any two different automata $\mathcal{A}(f),\mathcal{A}(g),f\neq g,$ on $\m g$ have isomorphic diagrams iff there exists an automorphism $\psi$ of $\m G$ such that $\psi(f)=g$ or $\psi(f^{\star})=g^{\star}$. \end{cor} {\sl Proof.\ } If $\m g$ is not cyclic, $\m G$ is not cyclic either (any subgroup of a cyclic group is cyclic). And if 2 is not a primitive root modulo $|\m g|$, then $\m G$ cannot be a cyclic group of order $2^{|\m g|-1}-1$, because all elements would have orders less than this number. Hence $|\m g|$ is an odd prime and we can apply Theorem~\ref{reg-theo}, obtaining that all isomorphisms are regular. On the other hand, because the inverse of an element $g$ of a group has the same order as $g$, Lemma~\ref{aut_cyc}, together with the method of extending automorphisms of $\m G,\m G^{\star}$ that we applied in the proof of Theorem~\ref{reg-theo} on the basis of Lemma~\ref{extent}, shows that there is no proper extension of the class of automorphisms of $\m G$ by reversion of elements. Now, as we saw above, $\m G,\m G^{\star}$ are isomorphic and any automorphism of $\m M$ translates $\m G$ into $\m G$ and $\m G^{\star}$ into $\m G^{\star}$. Thus any automorphism of $\m M$ can be constructed from automorphisms of $\m G$. Finally, since $f\neq g$, we note that both rules $f,g$ must belong to the same one of the sets $\m G,\m G^{\star}$.
\ \ \ $\Box$ \begin{ex}\label{prim-root}{\rm In cases when the index group $\m g$ is a cyclic group of order $q$ such that 2 is a primitive root modulo $q$, computations yield that in each of the cases $q\in\m R$, where $\m R=\{3,5,11,13,19,29,37,53,59,61,67,83,101,107,131\}$\footnote{This is the initial segment of the sequence A001122, see $\langle$ http://www.research.att.com/~njas/sequences/A001122$\rangle$ }, there exists an element of the corresponding monoid over $\m F_2$ having order $2^{q-1}-1$. (For instance, the 01-vectors with numbers $1,21,11,13,19$ have orders $4,15,1023,4095,$ and $262143$ in the monoids with $|\m g|=3,5,11,13,19$, respectively.) This means that in these cases the groups of reversible elements of the monoids for cyclic groups of the orders belonging to $\m R$ are also cyclic groups of order $2^{q-1}-1.$ So Lemma~\ref{aut_cyc} holds for the groups of reversible elements, and Theorem~\ref{reg-theo} says that all isomorphisms of ACA on the cyclic groups of orders $q\in\m R$ are regular. \ \ \ $\Box$} \end{ex} \begin{cor} If $\m G$ is cyclic, then $\m i(\m M)=\m l(\m M)$. \end{cor} {\sl Proof.\ } This is because in this case there exists a unique subgroup (consisting of the elements of $\mathbf K=\{K[g]|g\in\m g\}$) of order $|\m g|$ in $\m G$. \ \ \ $\Box$ Thus for all index groups from Example~\ref{prim-root} all linear automorphisms are index-permutations. \noindent{\bf Conjecture. } {\sl If $\m g$ is the cyclic group $\m c_q$ of an order $q$ such that 2 is a primitive root modulo $q$, then $\m G$ is a cyclic group of order $2^{q-1}-1$ and therefore all isomorphisms of ACA on $\m g$ are regular.} \begin{remar} {\rm According to the well-known Artin conjecture on primitive roots (see the references in \cite{Artin-wiki}), every integer that is neither $-1$ nor a perfect square, in particular 2, is a primitive root modulo infinitely many primes.
This Artin conjecture follows from the Generalized Riemann Hypothesis \cite{Hooley}.}\ \ \ $\Box$ \end{remar} \subsection{Cases of non-cyclic $\m G$} There are index groups $\m g$ for which the hyper group $\m G$ is not cyclic but nevertheless all isomorphisms of ACA on them are regular. These are, for example, $\m D_2,\m c_6,\m c_7$. On the other hand, not all isomorphisms of ACA are regular even for simple index groups $\m g$ like $\m c_4,\m D_3$, etc. \begin{ex}{\rm There are only 2 groups (up to isomorphism) of order 4.\\ {\sl Case 1: $\m g=\m c_4$}. In this case $\m G$ is the group $\m c_4\times\m c_2$, one of the three commutative groups of order 8. All rules are partitioned into 8 classes of true isomorphisms of their diagrams. To check whether automorphisms of $\m M$ reveal $\m I[\m M]$, it is possible to check all systems of 3 generators: $\alpha,\beta$ for $\m G$ of orders 2 and 4, respectively, and one $\gamma$ from $\m M\setminus\m G$. By ``brute force'' it is not difficult to check all candidates to be automorphisms of $\m M$ (there are fewer than 10,000 of them) and find $\m I[\aut{\m M}]$. It appears that the isomorphism of the automata $\mathcal{A}(5)$ and $\mathcal{A}(10)$ is not revealed by $\aut{\m M}$, and neither is the isomorphism of the automata $\mathcal{A}(2)$ and $\mathcal{A}(13)$. Therefore $\m I[\aut{\m M}]$ produces 4 classes of isomorphisms $\{5\},\{10\},\{2\},\{13\}$ instead of the true two $\{5,10\},\{2,13\}$. All other classes of isomorphic rules are equal to the true classes of isomorphic rules. The addition of $\m I_{\m G}$ does not yield anything new, because rules 5, 10 are not reversible and 2 and 13 have the same order 2, that is $(0010)^{-1}=0010,\ (1110)^{-1}=1110$, and there is no pair in $\m I_{\m G}$ including any of these elements (note that all other true classes of isomorphic rules are revealed by $\aut{\m M}$).
The absence of an automorphism $\varphi\in\aut{\m M}$ translating rule 2 into rule 13 is confirmed (see Theorem~\ref{not-reg}) by the fact that the equation $X\boxtimes X=1110$ has no solution in $\m M$, whereas there are 4 solutions of $X\boxtimes X=0010$. From here it follows that the group $\m c_4$ {\sl is not regular}.\\ {\sl Case 2: $\m g$ is the ``Klein four group''}. This time $\m G$ is isomorphic to $\m c_2^3$; there are 6 true classes of isomorphic rules. We used generating systems of 6 elements for $\m M$ and found a few isomorphisms from $\aut{\m M}$ providing the true isomorphic classes of rules. So here $\m I[\aut{\m M}]=\m I[\m M]$ and the group is regular. }\ \ \ $\Box$ \end{ex} \begin{ex}{\rm For the {\sl non-abelian group of order 6 ($\m D_3$)} we have isomorphic rules $T,H$ with numbers 39 and 52, respectively. However, there cannot exist (see Theorem~\ref{not-reg}) any $\varphi\in\aut{\m M}$ such that $\varphi(T)=H$, because the equation $X\boxtimes X=T$ has 2 solutions, whereas $X\boxtimes X=H$ has 8 solutions. In contrast, the abelian group of order 6 (namely $\m c_6$) is regular. }\ \ \ $\Box$ \end{ex} \begin{ex}{\rm For the case of the {\sl group $\m Q$ of quaternions} as $\m g$, let $T,H$ be the irreversible rules with numbers $9$ and $144$. They are isomorphic rules. However, the equations $X\boxtimes X=T$ and $X\boxtimes X=H$ have 16 and 48 solutions, respectively. Theorem~\ref{not-reg} is applicable in this case as well, and we conclude that $\m Q$ is not a regular group.}\ \ \ $\Box$ \end{ex} \begin{table}[h] \caption{Index-groups $\m g$ are cyclic. } \begin{tabular}{|l|r|r|r|r|r|r|c|} \hline $\m g$&$\m c_1$&$\m c_2$&$\m c_4$&$\m c_6$&$\m c_7$&$\m c_8$&$\m c_q,q\in\m R$\\ \hline Regular?&yes&yes&no&yes&yes&no&yes\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Index-groups $\m g$ are non-cyclic.
} \begin{tabular}{|l|r|r|r|r|r|} \hline $\m g$&$\m D_2$&$\m D_3$&$\m c_4\times\m c_2$&$\m D_4$&$\m Q$\\ \hline Order&4&6&8&8&8\\ \hline Commutative?&yes&no&yes&no&no\\ \hline Regular?&yes&no&no&no&no\\ \hline \end{tabular} \end{table} The open question is whether at least some non-regular isomorphisms have a combinatorial nature not reducible to the symmetries of the underlying algebraic structures $\m g, \m G,\m M$.\\
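The non-regularity arguments used above (via Theorem~\ref{not-reg}) all rest on the same invariant: an automorphism $\varphi$ of $\m M$ maps the solutions of $X\boxtimes X=T$ bijectively onto the solutions of $X\boxtimes X=\varphi(T)$, so elements with different numbers of square roots cannot be related by any automorphism. For a finite monoid given by its multiplication table this invariant is immediate to tabulate; the following Python sketch is generic (the toy table shown is the cyclic group $\m c_4$ written additively, purely for illustration, not one of the monoids $\m M$ above):

```python
def square_root_counts(elements, mul):
    # mul: dict mapping (x, y) -> x*y in a finite monoid given by its table.
    # Returns {t: number of x with x*x == t}; any automorphism must map
    # elements with equal counts to each other, so unequal counts rule
    # out a translating automorphism.
    counts = {t: 0 for t in elements}
    for x in elements:
        counts[mul[(x, x)]] += 1
    return counts

# Toy table: the cyclic group c_4 written additively,
# so the "squares" x*x are the doublings 2x mod 4.
elems = [0, 1, 2, 3]
mul = {(x, y): (x + y) % 4 for x in elems for y in elems}
# squares: 0->0, 1->2, 2->0, 3->2, hence counts {0: 2, 1: 0, 2: 2, 3: 0}
```

Comparing such counts for rules 2 and 13 (over $\m c_4$), for rules 39 and 52 (over $\m D_3$), or for rules 9 and 144 (over $\m Q$) is exactly the computation invoked in the examples above.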
\section{Introduction} It is by now more than fifteen years since a surprising suppression of the Kondo effect in thin films and wires of dilute magnetic alloys was observed.\cite{exp1,exp2,exp3,exp4} A few years after the first experiments, \'Ujs\'aghy {\em et al.} proposed that the most likely explanation of the experimental observations is a spin-orbit coupling induced {\em magnetic anisotropy} in the vicinity of the surface of the films: In the presence of a surface, spin-orbit (SO) coupling gives rise to a level splitting of the impurity spin, and thus blocks the spin-flip processes responsible for the Kondo effect \cite{orsi1,orsi2,orsi3}. Indeed, later experiments seemed to be in agreement with this simple scenario and confirmed the predictions that follow from it.\cite{exp5} Fitting the experimental data for a $Au(Fe)$ film, \'Ujs\'aghy {\em et al.}\cite{orsi1} estimated the width of the `dead layer', $L_c$, where the splitting is larger than or comparable to the Kondo temperature, $T_K = 0.3 K \simeq 0.03$ meV, and obtained $L_c \simeq 180$~\AA . To explain the unexpectedly large width of the dead layer, \'Ujs\'aghy {\em et al.} also proposed a model to describe the surface anisotropy, which we shall refer to as the {\em host spin-orbit coupling} (HSO) model. In this model an impurity with a half-filled $d$-shell and spin $S=5/2$ is immersed in a host metal, where conduction electrons experience SO scattering through hybridizing with low-lying valence $d$-orbitals of the host material.\cite{orsi1,orsi2,orsi3} These calculations have been revised recently in Ref.~\onlinecite{orsi4}. This HSO mechanism does not lead to a splitting of the six-fold degenerate spin state of the impurity when it is placed in a bulk host with high (cubic or continuous rotational) symmetry.
However, the presence of the surface induces an anisotropy term, \begin{equation} H^{\rm HSO}_{\rm anis}=K(d) \left( {\bf n} \, {\bf S} \right)^2 \; , \label{H-anisHSO} \end{equation} where ${\bf n}$ is the normal vector of the surface, ${\bf S}$ is the spin operator, and $K(d)$ denotes the magnetic anisotropy constant at a distance $d$ from the surface. The anisotropy constant $K(d)$ can be estimated in a simple free-electron model, by treating the spin-orbit coupling, $\xi$, and the exchange coupling, $J$, perturbatively. This calculation leads to the asymptotic form,\cite{orsi4} \begin{equation} \label{K:orsi} K(d) = A(k_F) J^2 \xi^2 \frac{\sin(2k_F d)}{d^3} \; , \end{equation} where $k_F$ is the Fermi wavenumber.\cite{footnote1} Unfortunately, the constant $A(k_F)$ contains some cut-off parameters, which make the above formula less predictive for the experiments. However, ab initio calculations\cite{SG:prl97} indicated that this bulk mechanism is too weak to explain the experimental findings. Recently, however, a rather different mechanism has been proposed to produce a magnetic anisotropy in the vicinity of a surface.\cite{SZG:prl06} This mechanism, which we shall refer to as the {\em local spin-orbit coupling} (LSO) mechanism, assumes only a strong {\em local} SO coupling on the impurity's $d$-level. The basic observation leading to this mechanism is that, for partially filled $d$-shells, spin states also have a large orbital content. Therefore, spin states couple very strongly to Friedel oscillations in the vicinity of a surface: electrons on the deep $d$-levels can lower their energy by hybridizing with the conduction electrons through virtual fluctuations. The corresponding anisotropy appears already to first order in the exchange coupling $J$ and decays as $\sim 1/d^2$.
In the specific case of an impurity with a $d^1$ configuration, the corresponding $J=3/2$ ground state multiplet is split by the presence of the surface as\cite{SZG:prl06} \begin{equation} H^{\rm LSO}_{\rm anis}=K(d) \left( {\bf n} \, {\bf J} \right)^2 \;, \label{H-anisLSO} \end{equation} where ${\bf J}$ stands for the total angular momentum operators, and $K(d) \sim J \sin(Q_F d)/d^{2}$, with $Q_F$ being the length of an extremal vector of the Fermi surface (FS). As shown in Ref.~\onlinecite{SZG:prl06}, the anisotropy can take the desired value of a few tenths of an meV even beyond 100 \AA \ from the surface. Although the second (local) mechanism is expected to be dominant for impurities with partially filled (not half-filled) $d$-shells, in Ref.~\onlinecite{SZG:prl06} only a toy model, namely, a single-band metal on a simple cubic lattice, has been considered. For a quantitative comparison, however, and to decide which mechanism is responsible for the surface-induced anisotropy, more realistic lattice and band structures should be used. The aim of the present work is to provide such a qualitative and quantitative comparison of the two mechanisms described above. For this purpose, we shall embed the impurity into an fcc lattice, and employ realistic tight-binding surface Green's function methods\cite{sgfm} to describe the conduction and valence electrons of the host material. This method allows for a numerically exact treatment of the surface, and also incorporates the SO coupling non-perturbatively.
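The practical difference between Eq.~(\ref{K:orsi}) and the LSO result is the power of the envelope, $1/d^3$ versus $1/d^2$: at large distances the LSO anisotropy wins by a factor growing linearly in $d$. This can be made concrete with a toy numerical sketch (the amplitudes and wavenumbers below are arbitrary placeholders, not fitted material parameters):

```python
import math

def K_hso(d, kF=1.0, amp=1.0):
    # host spin-orbit mechanism: oscillation sin(2*kF*d) with a 1/d^3 envelope
    return amp * math.sin(2.0 * kF * d) / d**3

def K_lso(d, QF=1.0, amp=1.0):
    # local spin-orbit mechanism: sin(QF*d) with a 1/d^2 envelope,
    # which dominates at large distances from the surface
    return amp * math.sin(QF * d) / d**2

# the envelope ratio |K_lso| / |K_hso| grows like d at matching phases
```

At $d \sim 100$ lattice spacings the envelopes differ by two orders of magnitude, which is the qualitative reason the local mechanism can sustain a sizable splitting far from the surface.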
To describe the magnetic impurity, we shall integrate out virtual charge fluctuations on the $d$-level of the magnetic impurity, and construct realistic spin models, which take into account the specific magnetic and crystal field structure of the impurity.\cite{CoxZawa} We shall then study the surface-induced anisotropy within both models, and derive explicit expressions for the anisotropy constants in terms of the local density of states around the magnetic impurity. Analyzing the behavior of $K(d)$ in the asymptotic regime, we find that the oscillations of $K(d)$ are related to the extremal vectors of the Fermi surface. For the case of Au and Cu host metals, we perform numerical calculations of the anisotropy constants based on the asymptotic formulas, and the oscillation periods are directly identified from the numerically computed Fermi surface. In the case of the local SO model, we are also able to confirm numerically the validity of the asymptotic expressions. Our results support the dominance of the local SO mechanism. \section{Short review of the theoretical approach} Before we present our results, let us briefly outline the theoretical methods we use. As mentioned in the introduction, in our approach we describe the host material within a tight-binding Green's function formalism. The interaction between the magnetic impurity and the host, on the other hand, is described in terms of an effective interaction, which we construct by combining group theoretical methods with many-body techniques. Once this effective exchange interaction Hamiltonian is at hand, we can use relatively standard field theoretical tools\cite{pseudofermion} to do perturbation theory in the exchange coupling, and determine the surface-induced anisotropy. \subsection{The Green's function of the host} In this paper, we shall study surfaces of ordered non-magnetic hosts, like the (001) surface of Au or Cu.
In this case, the Hamiltonian of the host can be written as, \begin{eqnarray} \mathring{H}_{\lambda \sigma,\lambda^{\prime }\sigma^{\prime }}^{pn,p^{\prime }n^{\prime }} &=& \left( \varepsilon _{\lambda }\,\delta _{\lambda \lambda ^{\prime }} \delta _{\sigma \sigma^{\prime }} + \xi \left( \mbox{\boldmath $\ell$} \cdot {\mathbf s} \right)_{\lambda \sigma,\lambda ^{\prime }\sigma^{\prime }}\right) \delta _{pp^{\prime }}\delta _{nn^{\prime }}+ \nonumber \\ & +& V_{\lambda ,\lambda ^{\prime }}^{pn,p^{\prime }n^{\prime }}\,\delta _{\sigma \sigma^{\prime }}\quad, \label{host-hamiltoni} \end{eqnarray} where $p$, $p^\prime$ denote atomic layers normal to the surface, $n$, $n^\prime$ label atomic sites within the layers, while $\lambda$, $\lambda^\prime$ denote the canonical $spd$ orbitals centered at the lattice positions and $\sigma$, $\sigma^{\prime }$ are spin indices. In Eq.~(\ref{host-hamiltoni}), we replaced all the parameters by their bulk values, i.e., we neglected the dependence of the on-site energies, $ \varepsilon _{\lambda }$ and the SO parameter, $\xi$, on the layer depth $p$. The hopping matrix elements, $V_{\lambda ,\lambda ^{\prime }}^{pn,p^{\prime }n^{\prime }}$ are confined to first and next-nearest neighbors, and their layer-dependence is also neglected. These approximations lead certainly to some errors in the calculated electronic structure very close to the surface, however, they are expected to have no serious consequences in the asymptotic regime, which is the subject of our interest. By the same token, in the vacuum (i.e., $p \le 0$) the on-site energies are taken to be infinity. This simplifies somewhat our calculations, since only layers $p \ge 1$, forming thus a semi-infinite system, need to be considered in the evaluation of the Green's function. 
The Hamiltonian Eq.~(\ref{host-hamiltoni}) can be recast into a matrix in the spin and orbital labels, \begin{equation} \underline{\mathring{H}}^{pn,p^{\prime }n^{\prime }} = \left\{ \mathring{H}_{\lambda \sigma,\lambda^{\prime }\sigma^{\prime }}^{pn,p^{\prime }n^{\prime }} \right\} \; . \end{equation} Since our system is translationally invariant within the layers, we can also define the Fourier transform of the Hamiltonian matrix, $\underline{\mathring{H}}^{pp^{\prime}}({\bf k})$, and introduce the `semi-infinite' matrix, \begin{equation} \mathring{\cal H}({\bf k})=\left\{ \underline{\mathring{H}}^{pp^{\prime}}({\bf k}) \right\}_{p,p^\prime \ge 1} \; . \end{equation} The resolvent or Green's function matrix is then given as usual, \begin{equation} \mathring{\cal G}(z,{\bf k})=\left( z-\mathring{\cal H}({\bf k}) \right) ^{-1}\;, \label{Gk} \end{equation} with $z$ a complex number (energy). To perform the matrix inversion in Eq.~(\ref{Gk}), we used the surface Green's function technique,\cite{sgfm} which is an efficient and, in principle, exact tool to compute the Green's function. Most importantly, the computational time of this method scales linearly with the number of layers for which the Green's function is evaluated. The real-space representation of the Green's function can then be obtained by performing the following Brillouin zone (BZ) integral, \begin{equation} \underline{\mathring{G}}^{pn,p^{\prime}n^\prime}(z)=\frac{1}{\Omega_{BZ}}\int d^{2}k\,\underline{\mathring{G}}^{pp^{\prime}}(z,{\bf k})\,e^{-i{\bf k}({\bf T}_{n^\prime}-{\bf T}_{n})}\;, \label{GBZ} \end{equation} where $\Omega_{BZ}$ is the volume of the 2-dimensional Brillouin zone, and the translation vector ${\bf T}_{n}$ is related to the position of atom $n$ in layer $p$ as ${\bf R}_{pn}= {\bf C}_{p} + {\bf T}_{n}$, with ${\bf C}_{p}$ a layer-dependent reference position. The host Hamiltonian, Eq.~\eqref{host-hamiltoni}, must be slightly modified in the presence of a magnetic impurity.
In this case, the hopping of the conduction electrons to the impurity's $d$-orbitals should be excluded, since these processes involve charge fluctuations at the magnetic impurity site, which will be incorporated in the effective exchange interaction (see next section). The simplest way to account for this constraint is to shift the on-site $d$-state energies of the impurity, $\varepsilon^i_\lambda$, far below the valence band, and add the following term to the Hamiltonian, \begin{equation} \Delta H_{\lambda \sigma,\lambda^{\prime }\sigma^{\prime }}^{(q) pn,p^{\prime }n^{\prime }} = \left( \varepsilon^i_\lambda - \varepsilon_\lambda \right) \delta_{pq} \, \delta_{p^\prime q} \, \delta_{n0} \, \delta_{n^\prime 0}\, \delta_{\lambda ,\lambda^{\prime }} \, \delta_{\sigma \sigma^\prime} \; , \label{deltaH} \end{equation} where the impurity is at site $n=0$ and in layer $q$. To describe the spin dynamics, we do not need to evaluate the full Green's function: We need its value only for a small cluster of sites, ${\cal C}^{(q)}$, consisting of nearest neighbor atoms around the impurity and the impurity itself. Fortunately, since $\Delta H$ is also local, this restricted Green's function, \begin{equation} \underline{\underline{G}}^{(q)}(z) = \left\{ \underline{G}^{pn,p^{\prime}n^\prime} \right\}_{{\cal C}^{(q)}} \; , \end{equation} can be evaluated as \begin{equation} \underline{\underline{G}}^{(q)}(z) = \left( \underline{\underline{I}} - \underline{\underline{\mathring{G}}}^{(q)}(z) \, \Delta \underline{\underline{H}}^{(q)} \right)^{-1} \underline{\underline{\mathring{G}}}^{(q)}(z) \; , \end{equation} where $\underline{\underline{I}}$ is the unit matrix, and the matrix elements of $\underline{\underline{\mathring{G}}}^{(q)}(z)$ and $\Delta \underline{\underline{H}}^{(q)}$ are defined in Eqs.~(\ref{GBZ}) and (\ref{deltaH}), respectively. 
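The cluster equation above is just a Dyson equation with the local perturbation $\Delta H$. Its structure is easiest to see in the simplest scalar case, a single level shifted by $\Delta\varepsilon$, where $(1-\mathring G\,\Delta H)^{-1}\mathring G$ must reproduce the resolvent of the shifted level (a toy consistency check, not our actual matrix implementation):

```python
def bare_G(z, e0):
    # resolvent of a single unperturbed level: G0(z) = 1 / (z - e0)
    return 1.0 / (z - e0)

def dressed_G(G0, dH):
    # scalar version of the cluster Dyson equation: G = (1 - G0*dH)^(-1) * G0
    return G0 / (1.0 - G0 * dH)

z, e0, de = 2.5 + 0.1j, 0.3, 0.7
G = dressed_G(bare_G(z, e0), de)
# algebraically, G equals the resolvent of the shifted level, 1/(z - e0 - de)
```

In the actual calculation the scalars become matrices on the cluster ${\cal C}^{(q)}$ and the division becomes a matrix inversion, but the identity is the same.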
Finally, the spectral function matrix on this cluster is defined as \begin{equation} \underline{\underline{\varrho}}^{(q)} (\varepsilon) = -\frac{1}{2\pi i} \lim_{\delta \rightarrow +0} \left( \underline{\underline{G}}^{(q)} (\varepsilon +i\delta) -\underline{\underline{G}}^{(q)} (\varepsilon -i\delta) \right) \;. \label{romat}% \end{equation} As shown in the following subsections, the matrix elements of this spectral function matrix are directly related to the magnetic anisotropy. \subsection{Host spin-orbit model of the magnetic anisotropy} As in Refs.~\onlinecite{orsi1,orsi2,orsi3,orsi4} let us first consider a spin $S=5/2$ impurity with a half-filled $d$-shell. In this case, we can neglect the SO interaction on the magnetic ion, and the bulk SO interaction is the primary source of the surface-induced anisotropy. \begin{widetext} \begin{table}[h] \centering \begin{tabular}{lc} \hline \hline $|1\rangle$ & $D_{xz} = \frac{1}{2} \left( s_{{\bf x} {\bf z}} + s_{\bar{\bf x} \bar{\bf z}} -s_{{\bf x} \bar{\bf z}} - s_{\bar{\bf x} {\bf z}} \right)$ \\ $|2\rangle$ & $ D_{yz} = \frac{1}{2} \left( s_{{\bf y} {\bf z}} + s_{\bar{\bf y} \bar{\bf z}} -s_{{\bf y} \bar{\bf z}} - s_{\bar{\bf y} {\bf z}} \right)$ \\ $|3\rangle$ & $D_{xy} = \frac{1}{2} \left( s_{{\bf x} {\bf y}} + s_{\bar{\bf x} \bar{\bf y}} -s_{{\bf x} \bar{\bf y}} - s_{\bar{\bf x} {\bf y}} \right)$ \\ $|4\rangle$ & $D_{x^2-y^2} = \frac{1}{2\sqrt{2}} \left( s_{{\bf y} {\bf z}} + s_{\bar{\bf y} \bar{\bf z}} +s_{{\bf y} \bar{\bf z}} + s_{\bar{\bf y} {\bf z}} -s_{{\bf x} {\bf z}} - s_{\bar{\bf x} \bar{\bf z}} -s_{{\bf x} \bar{\bf z}} - s_{\bar{\bf x} {\bf z}} \right)$ \\ $|5\rangle$ \phantom{nnn} & $D_{2z^2-x^2-y^2} = \frac{1}{2\sqrt{6}} \left( 2s_{{\bf x} {\bf y}} + 2s_{\bar{\bf x} \bar{\bf y}} +2s_{{\bf x} \bar{\bf y}} + 2s_{\bar{\bf x} {\bf y}} -s_{{\bf y} {\bf z}} - s_{\bar{\bf y} \bar{\bf z}} -s_{{\bf y} \bar{\bf z}} - s_{\bar{\bf y} {\bf z}} -s_{{\bf x} {\bf z}} - s_{\bar{\bf x} \bar{\bf z}} -s_{{\bf x} 
\bar{\bf z}} - s_{\bar{\bf x} {\bf z}} \right)$ \\ \hline \hline \end{tabular} \caption{Combinations of $s$-orbitals centered at the 12 neighbor sites around an impurity having the symmetry of atomic $d$ orbitals.} \label{table:basis} \end{table} \end{widetext} To construct the effective interaction between the host electrons and the magnetic impurity, one can safely assume that the deep $d$-levels of the magnetic impurity hybridize only with the $s$-orbitals of the neighboring host atoms. However, by symmetry, the deep $d$-levels can hybridize only with appropriate linear combinations of these $s$-orbitals, $\alpha\in\{x^2-y^2,2z^2-x^2-y^2,xy,xz,yz\}$. In case of an fcc lattice, e.g., we have 12 nearest neighbor $s$-orbitals, which can be labeled by $s_{{\bf x} {\bf y}}$, $s_{\bar{\bf x} \bar{\bf y}}$, $s_{{\bf x} \bar{\bf y}}$, $s_{\bar{\bf x} {\bf y}}$, $\dots$, $s_{{\bf y} \bar{\bf z}}$, $s_{\bar{\bf y} {\bf z}}$, the subscripts ${\bf x} {\bf y}$ and $\bar{\bf x} \bar{\bf y}$ referring to neighboring sites at the positions $a( \frac{\footnotesize{1}}{\footnotesize{2}}, \frac{\footnotesize{1}}{\footnotesize{2}},0)$ and $a( -\frac{\footnotesize{1}}{\footnotesize{2}}, -\frac{\footnotesize{1}}{\footnotesize{2}},0)$ relative to the impurity, respectively, and $a$ denoting the cubic lattice constant. However, only 5 out of these 12 states will have a $d$-wave character, and hybridize with the $d$-levels of the magnetic impurity. These 5 states are listed in Table~\ref{table:basis}. Using these 5 spin-degenerate states, we can perform a Schrieffer-Wolff transformation\cite{schw} that leads to the following Hamiltonian, \begin{equation} H_{J,ss^\prime}= \sum_{i=x,y,z} \sum_{\alpha=1}^5 J_\alpha \sum_{\sigma,\sigma^{\prime} = \pm 1} c^\dagger_{\alpha\sigma} \, \sigma^i_{\sigma\sigma^{\prime}} c_ {\alpha \sigma^{\prime}} \; S^i_{ss^\prime}\;. 
\label{H-HSO2} \end{equation} Here $s,s^\prime=-\frac{\footnotesize{5}}{\footnotesize{2}},\dots, \frac{\footnotesize{5}}{\footnotesize{2}}$ denote the $z$-components of the impurity spins, $S^i$, and $\sigma^i$ denote the Pauli matrices. The operator $c^\dagger_{\alpha\sigma}$ creates a conduction electron with spin $\sigma$ in one of the states $|\alpha\rangle$ listed in Table~\ref{table:basis}. In the bulk, only two of the exchange constants $J_\alpha$ are independent, since by symmetry we have $J_{xy} = J_{xz} = J_{yz}$ and $J_{x^2-y^2} = J_{2 z^2-x^2-y^2}$. In the following, for the sake of simplicity, we shall set all these coupling constants equal, and take $J_\alpha=J$. This assumption does not modify our conclusions. The anisotropy induced by the surface can be computed by representing the spin in terms of Abrikosov pseudofermions, and then performing a second-order calculation in the exchange coupling.\cite{orsi1} The zero-temperature first- and second-order contributions to the static ($\omega=0$) self-energy of the impurity spin can be expressed in terms of the local density of states (spectral function) matrix, $\varrho_{\alpha,\sigma;\alpha'\sigma'}$, as\cite{SZG:prl06} \begin{align} \Sigma_{s\,s^{\prime}}^{(1)} & =\int_{-\infty}^{\varepsilon_{F}}d\varepsilon \,\mathrm{Tr}\left\{ \varrho(\varepsilon)\,H_{J,s\,s^{\prime}}\right\} \nonumber\\ & =J\sum_{i} \;S_{s\,s^{\prime}}^{i} \int_{-\infty}^{\varepsilon_{F}}d\varepsilon\; \mathrm{Tr}\left\{ \varrho(\varepsilon)\sigma^{i}\right\} \; , \label{sigma1-HSO} \end{align} and \begin{align} \Sigma_{s\,s^{\prime}}^{(2)} & = \int_{-\infty}^{\varepsilon_{F}} \int_{\varepsilon_{F}}^{\infty} \;\frac{d\varepsilon \, d\varepsilon^{\prime}} {\varepsilon^{\prime }-\varepsilon} \sum_{\tilde{s}}\mathrm{Tr}\left\{ \varrho(\varepsilon )\,H_{J,s\,\tilde{s}} \, \varrho(\varepsilon^{\prime})\,H_{J,\tilde{s}\,s^{\prime}% }\right\} \nonumber\\ & =J^{2}\sum_{i,j} \sum_{\tilde{s}} \;S_{s\,\tilde{s}}^{i} S_{\tilde{s}\,s^{\prime}}^{j}
\int_{-\infty}^{\varepsilon_{F}} \int_{\varepsilon_{F}}^{\infty} \;\frac{d\varepsilon \, d\varepsilon^{\prime}} {\varepsilon^{\prime }-\varepsilon} \times \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \mathrm{Tr}\left\{ \varrho(\varepsilon)\,\sigma^{i}\varrho(\varepsilon^{\prime})\,\sigma^{j}\right\} \; , \label{sigma2-HSO}% \end{align} with Tr$\{\dots\}$ denoting the trace in the 10-dimensional subspace of the conduction electrons, and $\varepsilon_{F}$ the Fermi energy. The spectral function, $\varrho_{\alpha,\sigma;\alpha'\sigma'}$, can easily be obtained from the real-space spectral function matrix elements, Eq.~(\ref{romat}). Exploiting furthermore the tetragonal ($C_{4v}$) symmetry of an fcc(001) surface system and time-reversal invariance, we find that $\varrho_{\alpha,\sigma;\alpha'\sigma'}$ has the following structure: \begin{equation} \varrho=\left( \begin{array} [c]{ccccc}% \varrho_{1}I_{2} & i\varrho_{5}\sigma_{z} & i\varrho_{6}\sigma_{x} & i\varrho_{7}\sigma_{y} & -i\varrho _{8}\sigma_{y} \\ -i\varrho_{5}\sigma_{z} & \varrho_{1}I_{2} & -i\varrho_{6}\sigma_{y} & i\varrho_{7} \sigma_{x} & i\varrho_{8}\sigma_{x}\\ -i\varrho_{6}\sigma_{x} & i\varrho_{6}\sigma_{y} & \varrho_{2}I_{2} & i\varrho_{9} \sigma_{z} & 0\\ -i\varrho_{7}\sigma_{y} & -i\varrho_{7}\sigma_{x} & -i\varrho_{9}\sigma_{z} & \varrho_{3}I_{2} & 0\\ i\varrho_{8}\sigma_{y} & -i\varrho_{8}\sigma_{x} & 0 & 0 & \varrho_{4}I_{2} \end{array} \right) \label{rhoc4v} \end{equation} where $\forall \varrho_{i}\in\mathbb{R}$ and we dropped the energy argument of the spectral functions. The above form of $\varrho$ is fully confirmed by our numerical calculations. 
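The double energy integrals appearing in $\Sigma^{(2)}$, Eq.~(\ref{sigma2-HSO}), are regular, since $\varepsilon'>\varepsilon_F>\varepsilon$ keeps the denominator positive; on a finite band they can therefore be evaluated by elementary quadrature. A minimal Python sketch with a toy flat density of states (the band, Fermi level, and $\varrho$ below are illustrative placeholders, not our computed spectral functions):

```python
def particle_hole_integral(rho, eF=0.0, band=(-1.0, 1.0), n=400):
    # I = Int_{band[0]}^{eF} de Int_{eF}^{band[1]} de' rho(e)*rho(e')/(e' - e),
    # evaluated by a midpoint rule on an n x n grid; the integrand is
    # regular on the grid because e' > eF > e everywhere.
    lo, hi = band
    he = (eF - lo) / n
    hp = (hi - eF) / n
    total = 0.0
    for i in range(n):
        e = lo + (i + 0.5) * he
        for j in range(n):
            ep = eF + (j + 0.5) * hp
            total += rho(e) * rho(ep) / (ep - e) * he * hp
    return total

# flat toy density of states on [-1, 1]; the exact value of the
# integral is then (1/4) * 2*ln(2), i.e. about 0.3466
flat = lambda e: 0.5
```

In the physical expressions below, each such particle-hole integral enters the anisotropy constants with a prefactor $-4J^2$.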
Inserting Eq.~(\ref{rhoc4v}) into Eqs.~(\ref{sigma1-HSO}) and (\ref{sigma2-HSO}) yields, $\Sigma_{s\,s^{\prime}}^{(1)} \equiv 0$, and we find \begin{equation} \Sigma_{s\,s^{\prime}} \approx \Sigma^{(2)}_{s\,s^{\prime}} = C K_{\rm HSO} \, \left(S_{z}^{2}\right)_{s\,s^{\prime}} \; , \end{equation} where $C$ is a constant and the anisotropy constant, $K_{\rm HSO}$, can be expressed as \begin{equation} K_{\rm HSO}=K_{\rm HSO}^{6}+K_{\rm HSO}^{7}+K_{\rm HSO}^{8}-K_{\rm HSO}^{5}-K_{\rm HSO}^{9} \; , \end{equation} with \begin{equation} K^i_{\rm HSO} =-4J^{2} \int_{-\infty}^{\varepsilon _{F}} d\varepsilon \int_{\varepsilon _{F}}^{\infty} d\varepsilon^{\prime} \;\frac{ \varrho_{i}(\varepsilon)\varrho_{i}(\varepsilon^{\prime}) }{\varepsilon^{\prime }-\varepsilon} \; .\label{Kc4v}% \end{equation} If the impurity is placed in the bulk, then cubic symmetry further implies that \begin{equation} \begin{array}{c} \varrho_1(\varepsilon)=\varrho_2(\varepsilon) \:, \varrho_3(\varepsilon)=\varrho_4(\varepsilon) \:, \varrho_6(\varepsilon)=-\varrho_5(\varepsilon) \:, \\ \varrho_8(\varepsilon)=\sqrt{3}\varrho_7(\varepsilon) \:, \varrho_9(\varepsilon)=-2 \varrho_7(\varepsilon) \; , \end{array} \label{rhocubic} \end{equation} and we obtain $K_{\rm HSO}=0$. Thus the anisotropy is indeed generated by the surface, which breaks the cubic symmetry of the crystal. \subsection{The local spin-orbit coupling model of the magnetic anisotropy} As in Ref.~\onlinecite{SZG:prl06}, let us now consider a magnetic impurity in a $d^{1}$ configuration such as a $V^{4+}$ or $Ti^{3+}$ ion. In this case, according to Hund's third rule, a strong local spin-orbit coupling will lead to a $J=3/2$ multiplet that is separated from the $J=5/2$ multiplet typically by an energy of the order of $\sim 1$ eV. In a cubic crystal field, this $J=3/2$ ground multiplet remains degenerate ($\Gamma_{8}$ double representation), implying that no magnetic anisotropy appears if the magnetic impurity is in the bulk. 
Anisotropy will, however, arise once the impurity is placed in the vicinity of a surface that breaks the cubic symmetry. To construct the exchange interaction between the conduction electrons and the magnetic impurity, we first notice that the impurity's $J=3/2$ multiplet can hybridize only with those linear combinations of neighboring $s$-states which transform according to the same ($\Gamma_{8}$) representation. Such a four-dimensional $d$-type set can be constructed from the states in Table~\ref{table:basis} as \begin{eqnarray} |s_{-3/2} \rangle &=& D_{x^2-y^2} \,|\!\! \downarrow\rangle \; , \label{sm32}\\ |s_{-1/2} \rangle &=& D_{2z^2-x^2-y^2} \,|\!\! \downarrow\rangle \; , \label{sm12}\\ |s_{1/2} \rangle &=& D_{2z^2-x^2-y^2} \,|\!\! \uparrow\rangle \; , \label{s12}\\ |s_{3/2} \rangle &=& - D_{x^2-y^2} \,|\!\! \uparrow\rangle \; . \label{s32} \end{eqnarray} Assuming that the impurity--host interaction is dominated by quantum fluctuations to the (non-degenerate) $d^{0}$ state, at lowest order in the hybridization a Coqblin--Schrieffer transformation leads to the following effective exchange interaction,\cite{csch,SZG:prl06} \begin{equation} H_{J}=J\sum_{m,m^{\prime}}s_{m}^{\dagger}s_{m^{\prime}} \, |\tfrac{3}{2}\, m^{\prime}\rangle \langle \tfrac{3}{2}\, m| \;, \end{equation} where $|\frac{3}{2}\,m\rangle$ stand for the four states of the $\Gamma_{8}$ impurity multiplet, and $s_{m}^{\dagger}$ creates an electron in the host states (\ref{sm32})--(\ref{s32}). 
Interestingly, due to the different orbital content of the impurity states $|\frac{3}{2}, \pm \frac{3}{2}\rangle$ and $|\frac{3}{2}, \pm \frac{1}{2}\rangle$, already the first-order contribution to the self-energy gives a non-vanishing anisotropy in the vicinity of a surface,\cite{SZG:prl06} \begin{equation} \Sigma_{mm^{\prime}}^{(1)}=J\int_{-\infty}^{\varepsilon_{F}}d\varepsilon\,\varrho_{mm^{\prime}}(\varepsilon)\;. \label{Sigma1-LSO} \end{equation} The local spectral function of the host is now a $4\times 4$ matrix, $\varrho_{mm^{\prime}}(\varepsilon)$, that has a diagonal structure and is related to the spectral functions defined in Eq.~(\ref{rhoc4v}) as follows, \begin{eqnarray} \varrho_{mm^{\prime}}(\varepsilon) &=& \varrho_{m}(\varepsilon) \, \delta_{mm^{\prime}} \; , \\ \varrho_{\pm 3/2}(\varepsilon) &\equiv& \varrho_{3}(\varepsilon) \: , \qquad \varrho_{\pm 1/2}(\varepsilon) \equiv \varrho_{4}(\varepsilon) \: . \end{eqnarray} From Eq.~(\ref{rhocubic}) it is obvious that the $J=3/2$ multiplet is degenerate under cubic symmetry (in the bulk), while under tetragonal symmetry it is split by an effective anisotropy term, Eq.~(\ref{H-anisLSO}), with \begin{equation} K_{\rm LSO} = K^3_{\rm LSO} - K^4_{\rm LSO} \; , \end{equation} and \begin{equation} K^i_{\rm LSO} = \frac{J}{2} \int_{-\infty}^{\varepsilon_{F}}d\varepsilon\, \varrho_i(\varepsilon) \;. \label{K-LSO} \end{equation} \subsection{Asymptotic form of the anisotropy constants} The presence of the surface induces Friedel oscillations in the local spectral functions.\cite{lang-kohn} For large distances $d$ from the surface, an asymptotic analysis can be performed based on the rapid oscillations of the electronic wave function, $\sim e^{ik_z d}$. 
In the simple case, when the constant energy surface in the three-dimensional Brillouin zone of the bulk system is formed by a single band (like the Fermi surface of noble metals), this leads to the following expression for the spectral functions appearing in Eq.~(\ref{rhoc4v}), \begin{equation} \varrho _{i}\left( \varepsilon,d \right) \simeq \varrho _{i}^{0}\left( \varepsilon \right) +\frac{1}{d} \sum_n g^n_{i}\left( \varepsilon \right) \cos\left[Q_{n}\left( \varepsilon \right) d + \theta_i^n\left( \varepsilon \right) \right] \; , \label{asyrho} \end{equation} where $\varrho _{i}^{0}\left( \varepsilon \right)$ is the spectral function in the bulk, and the $Q_{n}\left( \varepsilon \right)$'s denote the lengths of the extremal vectors of the constant energy surface, normal to the geometrical surface. The factors $g^n_{i}\left( \varepsilon \right)$ denote the amplitudes of the oscillations, and $\theta^n_i\left( \varepsilon \right)$ are their phases. As we shall discuss later, in the case of an fcc(001) geometry there are two different extremal vectors. Furthermore, it turns out that each of the spectral function matrix elements has a non-negligible contribution related to only one of these vectors; therefore, in what follows, we shall label the extremal vectors with the index $i$ of the corresponding matrix element. By substituting expression \eqref{asyrho} into Eqs.~(\ref{Kc4v}) and \eqref{K-LSO} we then obtain the asymptotic form of the anisotropy constants. 
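The asymptotic form \eqref{asyrho} is what we later fit to the computed spectral functions. A minimal sketch of such a fit (with a single extremal vector and purely illustrative parameter values, not the fitted numbers of this work): for fixed $Q$ the model is linear in the remaining parameters, so one can scan $Q$ on a grid and solve a linear least-squares problem at each trial value.

```python
import numpy as np

# Fit rho(d) ~ rho0 + (g/d)*cos(Q*d + theta), Eq. (asyrho) with one extremal
# vector. Writing g*cos(Q*d + theta) = a*cos(Q*d) + b*sin(Q*d) with
# a = g*cos(theta), b = -g*sin(theta), the model is linear in (rho0, a, b)
# for fixed Q, so we scan Q and keep the trial value with smallest residual.
def fit_asymptotic(d, rho, Q_grid):
    best = None
    for Q in Q_grid:
        A = np.column_stack([np.ones_like(d), np.cos(Q * d) / d, np.sin(Q * d) / d])
        coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
        resid = np.sum((A @ coef - rho) ** 2)
        if best is None or resid < best[0]:
            best = (resid, Q, coef)
    _, Q, (rho0, a, b) = best
    return rho0, np.hypot(a, b), Q, np.arctan2(-b, a)

# Synthetic data with illustrative (hypothetical) parameters
d = np.arange(20.0, 100.0, 0.5)                       # distances in Angstrom
rho = -4.0e-4 + (1.5e-3 / d) * np.cos(1.223 * d + 1.32)
rho0, g, Q, theta = fit_asymptotic(d, rho, np.linspace(1.1, 1.3, 2001))
print(rho0, g, Q, theta)   # recovers (-4e-4, 1.5e-3, 1.223, 1.32)
```

The linear-in-amplitudes reformulation avoids the sensitivity of a fully nonlinear fit to the initial guess for the rapidly oscillating phase $Qd$.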
\subsubsection{Host spin-orbit coupling model} In the case of the host spin-orbit coupling model, the contributions, $K^i_{\rm HSO}$, to the magnetic anisotropy constant can be expressed to leading order in $1/d$ as \begin{eqnarray} K_{\rm HSO}^{i}&=&-\frac{4J^{2}}{d} \operatorname{Re} \int_{0}^{\infty } \frac{d \tilde{\varepsilon}}{\tilde{\varepsilon}}\, \label{Ki2} \\ && \left\{ \int_{\varepsilon_{F}-\tilde{\varepsilon}}^{\varepsilon_{F}} d\varepsilon \, \varrho_{i}^{0}\left( \varepsilon + \tilde{\varepsilon} \right) g_{i}\left( \varepsilon\right) e^{i \left[ Q_i\left( \varepsilon \right) d + \theta_i\left( \varepsilon \right) \right]} \right. \nonumber \\ && + \left. \int_{\varepsilon_{F}}^{\varepsilon_{F}+\tilde{\varepsilon}} d\varepsilon \, \varrho_{i}^{0}\left( \varepsilon - \tilde{\varepsilon} \right) g_{i}\left( \varepsilon\right) e^{i \left[ Q_i\left( \varepsilon \right) d + \theta_i\left( \varepsilon \right) \right] } \right\} \; . \nonumber \end{eqnarray} Assuming that $\varrho^{0}_{i}\left( \varepsilon\right)$, $Q_{i}\left( \varepsilon\right)$, $g_{i}\left( \varepsilon\right)$ and $\theta_{i}\left( \varepsilon\right)$ are slowly varying functions of $\varepsilon$, whereas $e^{iQ_{i}\left( \varepsilon \right) d}$ is rapidly oscillating, the inner integrals in Eq.~(\ref{Ki2}) give sizable contributions only for small values of $\tilde{\varepsilon}$. We can therefore expand $Q_{i}\left( \varepsilon\right)$ around $\varepsilon _{F}$, $Q_{i}\left( \varepsilon\right) \simeq Q_{i}\left( \varepsilon_F \right) + Q_i^\prime\left( \varepsilon_F \right) \left(\varepsilon-\varepsilon_F\right)$, and replace all the other functions by their values at $\varepsilon_{F}$. 
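The intermediate algebra of this expansion can be sketched as follows (with the slowly varying prefactors and the phase $\theta_i$ factored out at $\varepsilon_F$):

```latex
% With Q_i(eps) linearized around eps_F, the two inner integrals of
% Eq. (Ki2) combine into
\begin{equation*}
\int_{\varepsilon_{F}-\tilde{\varepsilon}}^{\varepsilon_{F}}
  d\varepsilon\, e^{iQ_{i}(\varepsilon)d}
+ \int_{\varepsilon_{F}}^{\varepsilon_{F}+\tilde{\varepsilon}}
  d\varepsilon\, e^{iQ_{i}(\varepsilon)d}
\simeq e^{iQ_{i}(\varepsilon_{F})d}\,
  \frac{2\sin\!\left[Q_{i}^{\prime}(\varepsilon_{F})\, d\,\tilde{\varepsilon}\right]}
       {Q_{i}^{\prime}(\varepsilon_{F})\, d} \;,
\end{equation*}
% so that the remaining integral over tilde-eps is of Dirichlet type,
\begin{equation*}
\int_{0}^{\infty}\frac{d\tilde{\varepsilon}}{\tilde{\varepsilon}}\,
  \frac{2\sin\!\left[Q_{i}^{\prime}(\varepsilon_{F})\, d\,\tilde{\varepsilon}\right]}
       {Q_{i}^{\prime}(\varepsilon_{F})\, d}
= \frac{\pi}{\left|Q_{i}^{\prime}(\varepsilon_{F})\right| d} \;.
\end{equation*}
% Together with the overall 1/d prefactor and
% Re{ exp[i(Q_i(eps_F) d + theta_i(eps_F))] }, this produces the
% cos[Q_i(eps_F) d + theta_i(eps_F)] / d^2 dependence.
```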
This procedure yields the following asymptotic form: \begin{equation} K_{\rm HSO}^{i}=- \frac{4J^{2} \pi \varrho_{i}^{0}\left( \varepsilon_F \right) g_{i}\left( \varepsilon_F \right)} {|Q^\prime_{i}\left( \varepsilon_F \right)|} \frac{\cos \left[ Q_{i}\left( \varepsilon_F \right) d + \theta_i\left( \varepsilon_F \right) \right] } {d^2} \; . \label{Kasy-HSO} \end{equation} For free electrons, $Q\left( \varepsilon_F \right)=2k_F$, and the above result resembles that of \'Ujs\'aghy {\em et al.},\cite{orsi4} however with a $\sim 1/d^2$ rather than a $\sim 1/d^3$ decay. This difference is a consequence of the assumption made in Ref.~\onlinecite{orsi4} that the scatterers in the host are distributed homogeneously. \subsubsection{Local spin-orbit coupling model} In the case of the local spin-orbit coupling model, the energy integral in Eq.~(\ref{K-LSO}) can easily be performed, yielding \begin{equation} K^i_{\rm LSO} \approx \frac{J \, g_i\left( \varepsilon_{F}\right) } {2 |Q_i^{\prime}(\varepsilon_{F})|} \frac{\sin \left[ Q_i\left( \varepsilon_{F}\right) d+ \theta_i \left( \varepsilon_F \right) \right] }{d^2} \;. \label{Kasy-LSO} \end{equation} Interestingly, the asymptotic $d$-dependence of the magnetic anisotropy is described by very similar functions within both models; only the coefficients and prefactors differ. 
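The shared structure of the two asymptotic results can be made explicit numerically. A short sketch (with purely illustrative placeholder values for the amplitude, wavenumber and phase, not fitted values):

```python
import numpy as np

# Eqs. (Kasy-HSO) and (Kasy-LSO) share the same wavenumber and 1/d^2
# envelope; they differ in the prefactor and in a quarter-period phase
# shift (cos vs sin). Illustrative parameters only:
A, Q, theta = 30.0, 1.22, 1.3          # amplitude, wavenumber (1/AA), phase (rad)
K_HSO = lambda d: -(A / d**2) * np.cos(Q * d + theta)
K_LSO = lambda d:  (A / d**2) * np.sin(Q * d + theta)

period = 2 * np.pi / Q                 # real-space oscillation period
d1 = 20.0
d2 = d1 + 4 * period                   # same phase, roughly twice the distance
ratio = K_HSO(d2) / K_HSO(d1)
print(period, ratio, (d1 / d2) ** 2)   # envelope decays as 1/d^2
```

Comparing two distances at equal phase isolates the envelope: the anisotropy drops by exactly $(d_1/d_2)^2$ in both models.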
\section{Computational details} For a realistic description of the host's valence and conduction bands we used the on-site energies and the first- and second-nearest-neighbor hopping parameters as given in Ref.~\onlinecite{papac} for Au and in Ref.~\onlinecite{cupar} for Cu, and set the cubic lattice constants to their experimental values, $a_{\rm Cu}=3.615$~\AA\ and $a_{\rm Au}=4.078$~\AA.\cite{webelements} The spin-orbit parameter, $\xi$, has been determined from the difference of the SO-split $d$-resonance energies, $\Delta E_d= E_{j=5/2}-E_{j=3/2}$, derived from self-consistent relativistic first-principles calculations.\cite{RSKKR} This splitting is related to our SO coupling as \begin{equation} \Delta E_d \simeq \frac{5}{2} \xi \;. \end{equation} For Au we thus obtained $\xi= 0.64$ eV, while for Cu $\xi= 0.1$ eV. In order to reduce the computational effort in performing the Brillouin zone integrals, Eq.~(\ref{GBZ}), we made use of the $C_{4v}$ point-group symmetry of the fcc(001) surface and applied adaptive uniform mesh refinement for sampling the $k$-points in the irreducible (1/8) segment of the Brillouin zone. In general, about $10^4$ $k$-points were sufficient to calculate all the spectral function matrix elements in (\ref{rhoc4v}) with a relative accuracy of 1\%. We performed calculations for the $\varrho_i$'s for up to 50 monolayers below the surface, corresponding to separations of $d\simeq90$~\AA\ and $d\simeq100$~\AA\ for Cu and Au, respectively. Performing the double energy integral in Eq.~(\ref{Kc4v}) is quite a demanding numerical procedure. Therefore, for the host spin-orbit model, we first fitted the spectral function matrix elements by the function \eqref{asyrho}, and then used the asymptotic form, Eq.~(\ref{Kasy-HSO}), to compute the magnetic anisotropy, $K_{\rm HSO}$. 
As we shall see later, beyond about 10 atomic layers ($d > 20$~\AA) the calculated matrix elements followed the asymptotic form, and the parameters $g_i(\varepsilon)$, $\theta_i(\varepsilon)$ and $Q_i(\varepsilon)$ could be fitted with high accuracy. In the case of the local spin-orbit coupling model, we performed a similar procedure to calculate the magnetic anisotropy constant in the asymptotic regime, Eq.~(\ref{Kasy-LSO}). In this case, however, it was also possible to compute the anisotropy constant directly from Eq.~(\ref{K-LSO}): exploiting the analyticity of the Green's function in the complex plane, we could deform the energy integration contour, and as few as 12 energy points along a semicircular contour in the upper half-plane were sufficient for a very accurate evaluation of the corresponding integral. \section{Results} \subsection{Electronic structure of the bulk host} We first calculated the density of states (DOS) of bulk Cu and Au. As shown in Fig.~\ref{fig:DOS}, the width of the $3d$-band of Cu is about 4 eV, while the $5d$-band of Au is much broader ($\sim$7 eV). Reassuringly, the positions and heights of the characteristic peaks of the DOS compare well with those obtained from self-consistent first-principles calculations.\cite{RSKKR,SKKR} Clearly, in copper, the small SO coupling, $\xi=0.1$~eV, causes merely a slight modification of the DOS in the vicinity of the $d$-like on-site energy ($\sim$5.07 eV). In the case of Au the SO coupling is much stronger, $\xi=0.64$~eV, and is large enough to influence the whole $d$-band: it gives rise to strong splittings of the dispersion peaks and also slightly increases the bandwidth. As indicated by the vertical lines in Fig.~\ref{fig:DOS}, the Fermi energies, $\varepsilon_{F}^{Cu}=8.3$~eV and $\varepsilon_{F}^{Au}=7.4$~eV, lie well above the $d$-band for both metals. \begin{figure}[ht!] 
\includegraphics[ width=7cm,bb=10 10 210 260,clip]{fig1.eps} \vskip -0.3cm \caption{(Color online) Calculated valence band densities of states for Cu and Au bulk without SO interaction (dots) and with SO interaction (solid line). For the latter case the Fermi energies, $\varepsilon_{F}^{Cu}=8.3$~eV and $\varepsilon_{F}^{Au}=7.4$~eV, are indicated by vertical lines. } \label{fig:DOS} \end{figure} \begin{figure}[ht!] \includegraphics[ width=7cm,bb=10 10 220 260,clip]{fig2.eps} \vskip -0.3cm \caption{(Color online) Calculated plane cuts perpendicular to the (1~-1~0) direction of the FS of Cu and Au. The arrows denote the extremal vectors of lengths, $Q_{\rm min}^{\rm Cu}=0.505$~\AA$^{-1}$, $Q_{\rm max}^{\rm Cu}=1.208$~\AA$^{-1}$ and $Q_{\rm min}^{\rm Au}=0.298$~\AA$^{-1}$, $Q_{\rm max}^{\rm Au}=1.228$~\AA$^{-1}$. \label{fig:FS}} \end{figure} As we learned from the asymptotic analysis presented in Sec.~II.D, extremal vectors of the Fermi surface play a crucial role in determining the magnetic anisotropy constants. Therefore, we next investigated the plane cuts of the Fermi surface perpendicular to the (1~-1~0) direction. One can easily read off the lengths of the (001) extremal vectors from the cuts depicted in Fig.~\ref{fig:FS}: the absolute minimum of the width of the Fermi surface, $Q_{\rm min}$, is found at ${\bf k}=0$, while the maximum width of the corresponding cut, $Q_{\rm max}$, is related to saddle points of the Fermi surface. In the case of a Cu host, the values obtained from our tight-binding analysis, $Q_{\rm min}^{\rm Cu}=0.505$~\AA$^{-1}$ and $Q_{\rm max}^{\rm Cu}=1.208$~\AA$^{-1}$, correspond to oscillation periods of 12.44 \AA \ and 5.20 \AA \ (6.88 and 2.88 monolayers (ML)), and agree fairly well with the periods, 6.08 ML and 2.60 ML, calculated by Lathiotakis {\em et al.}~\cite{iec-lath}. 
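As a quick consistency check of the quoted Cu numbers, the periods follow from $\lambda = 2\pi/Q$ and the fcc(001) interlayer spacing $a/2$ (using the experimental lattice constant quoted above):

```python
import numpy as np

# Convert an extremal-vector length Q into the real-space oscillation
# period, lambda = 2*pi/Q, and express it in monolayers (ML) via the
# fcc(001) interlayer spacing a/2 (a = cubic lattice constant).
def period_in_ml(Q, a):
    lam = 2 * np.pi / Q              # period in Angstrom
    return lam, lam / (a / 2)        # period in Angstrom and in ML

a_Cu = 3.615                          # Cu lattice constant in Angstrom
lam_min, ml_min = period_in_ml(0.505, a_Cu)   # Q_min of the Cu FS
lam_max, ml_max = period_in_ml(1.208, a_Cu)   # Q_max of the Cu FS
print(lam_min, ml_min)   # ~12.44 A, ~6.88 ML
print(lam_max, ml_max)   # ~5.20 A, ~2.88 ML
```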
Similar satisfactory agreement is found in the case of a Au host between the periods obtained from our present calculations, 10.34 ML and 2.51 ML, and those calculated by Bruno and Chappert, 8.6 ML and 2.6 ML.\cite{iec-bruno} It should be noted, however, that the shape of the FS depends very sensitively on the position of the Fermi energy, the precise determination of which is quite a subtle task, since for noble metals like Cu and Au the Fermi energy lies in the very flat $4sp$ band (see also Fig.~\ref{fig:DOS}). \subsection{The magnetic anisotropy constants within the host spin-orbit coupling model} We calculated the spectral function matrices, Eq.~(\ref{rhoc4v}), at the Fermi energy of Cu and Au using the methods described in Secs.~II.B and II.C, for up to 50 ML below the surface. As a convincing check of our numerical procedure, we verified that the structure of the calculated matrices agrees with that derived analytically from symmetry principles. For the case of a Au host, in Fig.~\ref{fig:rho-off} we plot the calculated off-diagonal matrix elements, $\varrho_5(\varepsilon_F)$, $\dots$, $\varrho_9(\varepsilon_F)$, as a function of the distance $d$ from the surface. As expected, large oscillations can be observed for all the spectral functions near the surface ($d < 20$~\AA). These oscillations, however, survive at large distances only for $\varrho_6$, while they are strongly damped in all the other cases. The limiting values of the $\varrho_i$ correspond to the bulk case and, as we checked, satisfy the conditions of Eq.~(\ref{rhocubic}) with less than 1\% relative numerical error. \begin{figure}[ht!] \includegraphics[ width=7cm,bb=10 10 225 285,clip]{fig3.eps} \vskip -0.3cm \caption{(Color online) Calculated off-diagonal spectral function matrix elements (see Eq.~(\ref{rhoc4v})), at the Fermi energy as a function of the distance, $d$, from the (001) surface of a Au host. \label{fig:rho-off}} \end{figure} \begin{figure}[ht!] 
\includegraphics[ width=7cm,bb=10 10 225 220,clip]{fig4.eps} \vskip -0.3cm \caption{(Color online) Asymptotic fit to the function~(\ref{asyrho}) (solid line) of the calculated values of the $\varrho_6(\varepsilon_F)$ spectral function (triangles) as a function of the distance from a Au(001) surface. The dashed line denotes the bulk value of $\varrho_6(\varepsilon_F)$. \label{fig:rho6fit}} \end{figure} In Fig.~\ref{fig:rho6fit} we display the spectral function $\varrho_6(\varepsilon_F)$ on an enlarged scale, together with a fitting function of the form of Eq.~(\ref{asyrho}). Quite remarkably, the asymptotic function applies already for $d \gtrsim 20$~\AA\ and, therefore, there is no need to perform a ``preasymptotic'' analysis as suggested in Ref.~\onlinecite{orsi4}. The fitted parameters of Eq.~(\ref{asyrho}) are as follows: $\varrho_{6}^{0}(\varepsilon_{F}) = (-3.99 \pm 0.01) \cdot 10^{-4}$~eV$^{-1}$, $g_6(\varepsilon _{F}) = (-1.484 \pm 0.008) \cdot 10^{-3}$~\AA\,eV$^{-1}$, $Q_6(\varepsilon _{F}) = 1.2228 \pm 0.0001$~\AA$^{-1}$, and $\theta_6(\varepsilon_{F}) = 1.324 \pm 0.006$~rad. It is particularly noteworthy that the fitted wavenumber agrees to within 0.5\% with the length of the extremal vector, $Q_{\rm max}$, computed from the Au Fermi surface. All other off-diagonal spectral function components entering the expression for $K_{\rm HSO}$ could be fitted similarly, with exactly the same wavenumber; however, their amplitudes were at least two orders of magnitude smaller than $g_6(\varepsilon _{F})$. \begin{figure}[htb] \includegraphics[ width=7cm,bb=10 10 225 260,clip]{fig5.eps} \vskip -0.3cm \caption{(Color online) Upper panel: The $K^{6}_{HSO}$ contribution to the magnetic anisotropy constant within the host spin-orbit coupling model for a Au host as calculated from the asymptotic expression, Eq.~(\ref{Kasy-HSO}). 
Lower panel: The $K^{9}_{HSO}$ contribution to the magnetic anisotropy constant in the case of a Cu host. In both cases an exchange interaction parameter $J=1$ eV was used. \label{fig:khso}} \end{figure} Our calculations thus indicate that the long-wavelength oscillation corresponding to $Q_{\rm min}$ of the FS either enters with a negligibly small amplitude or does not enter at all in the asymptotic form of the off-diagonal spectral function matrix elements. This can easily be understood by noticing that the asymptotic contributions to the real-space spectral function matrix elements, $\varrho_{s \sigma,s \sigma^\prime}^{(q+p)n,(q+p^\prime) n^\prime}(\varepsilon)$ ($p,p^\prime=0,\pm1$; $d=q\frac{a}{2}$), related to $Q_{\rm min}$ are of the following form, \begin{eqnarray} \varrho_{s \sigma,s \sigma^\prime}^{(q+p)\,n,(q+p^\prime)\, n^\prime}(\varepsilon) &\simeq& \varrho_{s \sigma,s \sigma^\prime}^{(0)p\,n,p^\prime \,n^\prime}(\varepsilon) + \frac{g_{s \sigma,s \sigma^\prime}^{p,p^\prime}(\varepsilon)}{d} \times \nonumber \\ && \cos\left[ Q_{\rm min}(\varepsilon)d + \theta(\varepsilon) \right] \; , \label{rsrho-qmin} \end{eqnarray} where $\varrho_{s \sigma,s \sigma^\prime}^{(0)p\,n,p^\prime \,n^\prime}(\varepsilon)$ refer to the corresponding bulk matrix elements. Eq.~(\ref{rsrho-qmin}) implies that the oscillating part does not depend on the in-plane positions, $n$ and $n^\prime$, which is a consequence of the fact that the minimal extremal vector is located at the ${\bf k}=0$ point of the 2D Brillouin zone. As explained in Sec.~II.B, the matrix elements in Eq.~(\ref{rhoc4v}) are linear combinations of the above real-space matrix elements according to the states in Table~I. Since the states $|\alpha\rangle$ ($\alpha=1,\dots,4$) are constructed as antisymmetric combinations of neighboring $s$-orbitals in the same plane, $q+p$, or as a sum of such antisymmetric combinations, the asymptotic oscillatory part corresponding to $Q_{\rm min}$ necessarily cancels in their matrix elements. 
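The cancellation mechanism can be illustrated with a toy model: since by Eq.~(\ref{rsrho-qmin}) the $Q_{\rm min}$ oscillation is independent of the in-plane sites $n$, $n^\prime$, the oscillatory weight of a state $|\alpha\rangle = \sum_n c_n |n\rangle$ carries a factor $(\sum_n c_n)^2$, which vanishes for antisymmetric combinations. (A purely illustrative four-site model, not the actual basis states.)

```python
import numpy as np

# The n-independent oscillating part acts like a constant matrix in the
# in-plane indices; its expectation value in a state with coefficients c_n
# is osc * (sum_n c_n)^2.
osc = 0.37                                   # n-independent oscillating part
rho_osc = osc * np.ones((4, 4))              # <n|rho_osc|n'> = osc for all n, n'

c_antisym = np.array([1.0, -1.0, 1.0, -1.0]) / 2.0   # net weight zero
c_sym = np.ones(4) / 2.0                             # net weight nonzero

print(c_antisym @ rho_osc @ c_antisym)   # 0: Q_min oscillation cancels
print(c_sym @ rho_osc @ c_sym)           # nonzero: oscillation survives
```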
As a consequence, only the spectral function $\varrho_4 \equiv \langle 5 | \varrho | 5 \rangle$ has asymptotic oscillations with wavenumber $Q_{\rm min}$; this function, however, does not contribute in the host SO model. We calculated the magnetic anisotropy constant using the asymptotic fits of the spectral functions and Eq.~(\ref{Kasy-HSO}). We determined the energy derivative of the magnitude of the extremal vector, $Q^\prime(\varepsilon_F)$, numerically, by fitting the spectral functions at two energy values slightly below and above $\varepsilon_F$, and obtained $Q^{\prime }(\varepsilon _{F})= 0.235$~(\AA\,eV)$^{-1}$. Thus, in the case of a Au host we obtain the following asymptotic function for $K^6_{HSO}(d)$ (displayed in the upper panel of Fig.~\ref{fig:khso}), \begin{equation} K^{6}_{HSO}(d) = \frac{31.66}{d^2} \cos\left[ 1.2228 \cdot d + 1.324 \right] \: \mu{\rm eV} \; , \end{equation} where $d$ is measured in \AA. Notice the surprisingly small magnitude of $K^6_{HSO}$: even at a distance of about $d=20$~\AA\ the amplitude of the above oscillating function is only about 0.079 $\mu$eV. We performed similar calculations for a Cu host. In Cu, the spectral functions show asymptotic oscillations with $Q(\varepsilon_F)=1.205$~\AA$^{-1}$, which agrees to within 0.3\% with the length of the extremal vector, $Q_{\rm max}$, of the Cu FS. In Cu, the $K^9_{HSO}$ contribution, shown in the lower panel of Fig.~\ref{fig:khso}, dominates the magnetic anisotropy. It is in the range of 0.01 neV $= 10^{-11}$~eV, i.e., at least three orders of magnitude smaller than that found in the case of a Au host. This decrease is mostly due to the smaller SO interaction in Cu than in Au. As we checked by varying $\xi$ for Au, the spectral functions in Eq.~(\ref{rhoc4v}) scale linearly with $\xi$; therefore, by Eq.~(\ref{Kc4v}), the magnetic anisotropy constant scales as $\xi^2$. 
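As a quick numerical check of the numbers quoted above (a sketch using the fitted asymptotic expression, with $d$ in \AA):

```python
import numpy as np

# Fitted asymptotic anisotropy for a Au host (host SO model), in micro-eV:
K6_HSO = lambda d: (31.66 / d**2) * np.cos(1.2228 * d + 1.324)

envelope_20 = 31.66 / 20.0**2
print(envelope_20)           # ~0.079 micro-eV amplitude at d = 20 A

# Since each rho_i scales linearly with the SO parameter xi, Eq. (Kc4v)
# implies K_HSO ~ xi^2; going from xi = 0.64 eV (Au) to 0.1 eV (Cu)
# alone suppresses K_HSO by a factor of roughly
print((0.64 / 0.1) ** 2)     # ~41
```

The $\xi^2$ scaling alone does not account for the full three-orders-of-magnitude difference between Au and Cu; the remaining suppression comes from the different oscillation amplitudes of the two hosts.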
This result clearly justifies the approach of \'Ujs\'aghy {\em et al.}, who treated the SO interaction perturbatively.\cite{orsi1,orsi2,orsi3,orsi4} \subsection{The magnetic anisotropy constants within the local spin-orbit coupling model} As pointed out in Ref.~\onlinecite{SZG:prl06}, a mechanism based on a strong local SO interaction of the impurity (local SO model) can give rise to a level splitting that is orders of magnitude larger than the host-induced anisotropy. To demonstrate this idea, in Ref.~\onlinecite{SZG:prl06} we studied the simple but unrealistic case of a single-band metal on a simple cubic lattice. Here we extend the calculations of Ref.~\onlinecite{SZG:prl06} to realistic host metals (Cu and Au). \begin{figure}[htb!] \includegraphics[ width=7cm,bb=10 10 235 260,clip]{fig6.eps} \vskip -0.3cm \caption{(Color online) Calculated values of $\Delta \varrho_4(\varepsilon_F) \equiv \varrho_4(\varepsilon_F)-\varrho_4^{0}(\varepsilon_F)$ (squares) with corresponding asymptotic fits, Eq.~(\ref{asyrho}), (solid line) as a function of the distance from the (001) surface of Au and Cu. \label{fig:deltarho}} \end{figure} According to the theory presented in Sec.~II.C, we need to compute the diagonal spectral function matrix elements, $\varrho_3 \equiv \langle 4 | \varrho | 4 \rangle$ and $\varrho_4 \equiv \langle 5 | \varrho | 5 \rangle$, see Table~I and Eq.~(\ref{rhoc4v}). Our calculations clearly showed that the $d$-dependent Friedel oscillations of $\varrho_3$ are several orders of magnitude smaller than those of $\varrho_4$. This can be understood by noticing that, due to the different spatial character of these two states ($D_{x^2-y^2}$ and $D_{2z^2-x^2-y^2}$), $\varrho_3$ comprises an average of spectral weights in layers $q-1$ and $q+1$, while $\varrho_4$ takes the difference of spectral weights in layer $q$ with respect to those in layers $q-1$ and $q+1$, $q$ denoting the layer of the impurity's position. 
Recalling that for a cubic bulk $\varrho_3=\varrho_4$, see Eq.~(\ref{rhocubic}), in the asymptotic region $K_{\rm LSO}$ becomes proportional to the integral of the function $\Delta \varrho_4(\varepsilon,d) \equiv \varrho_4(\varepsilon,d) - \varrho_4^{0}(\varepsilon)$. This function is displayed in Fig.~\ref{fig:deltarho} for both the Au and the Cu hosts. Remarkably, the amplitude of these Friedel oscillations is about one order of magnitude larger than that of the off-diagonal spectral functions (compare with Fig.~\ref{fig:rho-off} for the case of Au); note that the off-diagonal matrix elements appear only at first order in the spin-orbit coupling. The oscillations also have longer periods than those of the off-diagonal spectral functions: a fit to the asymptotic function, Eq.~(\ref{asyrho}), also shown in Fig.~\ref{fig:deltarho}, gave the values $Q^{Au}=0.292$~\AA$^{-1}$ and $Q^{Cu}=0.505$~\AA$^{-1}$, in very good agreement with the length of the small extremal vector, $Q_{\rm min}$, of the corresponding Fermi surfaces. Interestingly, the amplitude of the oscillations is more than three times larger for Cu than for Au: from the fits we obtained $g_4(\varepsilon_{F})=1.16\cdot10^{-2}$~\AA\,eV$^{-1}$ and $3.53\cdot10^{-2}$~\AA\,eV$^{-1}$ for the case of Au and Cu, respectively. \begin{figure}[htb!] \includegraphics[ width=7cm,bb=10 10 230 190,clip]{fig7.eps} \vskip -0.3cm \caption{(Color online) Magnetic anisotropy constants within the local spin-orbit coupling model calculated by using the asymptotic formula, Eq.~(\ref{Kasy-LSO}), as a function of the distance $d$ from the (001) surface of a Au (dashes) and a Cu host (solid line). In the case of Au the squares stand for the magnetic anisotropy constants calculated directly from Eq.~(\ref{K-LSO}). \label{fig:klso}} \end{figure} Fig.~\ref{fig:klso} shows the magnetic anisotropy constants obtained using Eq.~(\ref{Kasy-LSO}) with the parameters extracted from the fits of $\Delta \varrho_4(\varepsilon _{F},d)$. 
The parameter $Q^{\prime }(\varepsilon _{F})$ was computed in the same way as for the off-diagonal spectral functions, and took the values 0.245~(\AA\,eV)$^{-1}$ for Au and 0.238~(\AA\,eV)$^{-1}$ for Cu. Choosing again $J=1$ eV, we obtained for the amplitudes of the oscillations of $K$, $A(d)=0.0237/d^2$~eV for Au and $A(d)=0.0742/d^2$~eV for Cu ($d$ measured in \AA). In particular, for Cu this gives an amplitude of 0.03 meV at $d=50$~\AA\ and 0.007 meV at $d=100$~\AA, which is in the range of $T_K$ for typical dilute magnetic alloys such as Cu(Mn) and Cu(Cr). In Fig.~\ref{fig:klso}, we also compare the magnetic anisotropy constants obtained from the asymptotic analysis with the values obtained by performing the contour integration in Eq.~(\ref{K-LSO}). Already for $d > 35$~\AA, these values lie almost perfectly on the asymptotic curve. This agreement confirms the validity of the asymptotic formula, Eq.~(\ref{Kasy-LSO}), as well as the accuracy of our numerical procedure for computing the magnetic anisotropy constant. \section{Summary and conclusions} In this paper, we performed a theoretical study of two mechanisms for the surface-induced magnetic anisotropy of a magnetic impurity: a local spin-orbit mechanism (LSO)\cite{SZG:prl06} and a host spin-orbit mechanism (HSO).\cite{orsi1} Both mechanisms appear as a result of Friedel-like oscillations in the local spectral functions, induced by the surface. In the local SO mechanism, the rather large diagonal oscillations, i.e., {\em charge oscillations}, couple to the impurity spin through the local spin-orbit coupling on the $d$ or $f$ level of the magnetic impurity, and lead to a surface-induced splitting of the spin states. The host SO mechanism, on the other hand, relies on oscillations in the {\em off-diagonal} elements of the local spectral functions, i.e., oscillations in the ``spin sector'' that couple directly to the spin through an exchange interaction. 
These oscillations are induced by the SO coupling in the host metal and are thus much weaker than the Friedel oscillations in the ``charge sector''. Based upon this simple picture, one therefore expects that the first mechanism is dominant for impurities with a partially filled $d$ or $f$ shell, while the host SO mechanism may become important for half-filled shells, in which case the local SO mechanism cannot be at work. In this paper we compared these two mechanisms quantitatively. For the description of the host's valence and conduction electrons we used a tight-binding Green's function technique, which allows for an exact treatment of the semi-infinite surface geometry and also makes possible a non-perturbative treatment of the host SO interaction. We then used a field-theoretical approach to compute the self-energy of the spin up to first order (local SO model) and second order (host SO model) in the exchange coupling $J$, and derived explicit expressions for the anisotropy constants, $K$, as a function of the separation $d$ between the impurity and the surface. An asymptotic analysis of these expressions resulted in a very similar oscillatory dependence of $K$ on $d$ in both models: the wavenumbers of the oscillations could be identified with the lengths of the extremal vectors of the Fermi surface of the bulk host, and the amplitudes decayed in both models as $1/d^2$. Here we must remark that in Ref.~\onlinecite{SZG:prl06} we predicted a $1/d^3$ decay of the oscillations of $K$ within the host spin-orbit mechanism, in contrast to the $1/d^2$ scaling of the host-induced anisotropy found in the present work. 
This apparent contradiction is due to a small difference in the calculations: unlike the present work, in Ref.~\onlinecite{SZG:prl06} we neglected the potential scattering at the impurity site, i.e., we used the local spectral functions of a perfect semi-infinite host. In that case, however, one can show that certain off-diagonal elements of the local spectral function matrix must vanish due to two-dimensional translational symmetry. These off-diagonal matrix elements become non-zero once translational invariance is broken by potential scattering at the impurity site, and they give rise to a $1/d^2$ decay of the anisotropy, as shown in Sec.~II.D. Using realistic tight-binding parameters, we calculated the amplitudes of the magnetic anisotropy oscillations for Au and Cu hosts. As expected from the very different SO interactions in these metals, within the host SO model the magnetic anisotropy constant for Au turned out to be about three orders of magnitude larger than for Cu. Nevertheless, even for a Au host and close to the surface, the magnetic anisotropy constant remained below 0.1 $\mu$eV. Though a direct comparison with the result of Ref.~\onlinecite{orsi4} is questionable, mainly due to the different geometrical distribution of the host atoms and the different approximations used, the above value is close to the {\em lower} limit of the range of $K$ estimated in Ref.~\onlinecite{orsi4}. We therefore conclude that the host SO mechanism of Ref.~\onlinecite{orsi4} is most probably too weak to explain the size dependence of the Kondo resistance. The local SO mechanism proposed in Ref.~\onlinecite{SZG:prl06}, on the other hand, gives magnetic anisotropy constants for Cu in the range of 0.01--0.03 meV even at distances of 50--100~\AA\ from the surface. The magnetic anisotropy constants for Au, although in the same range, were about three times smaller than those obtained for Cu. 
Our numerical studies imply that the primary mechanism producing a magnetic anisotropy in the vicinity of a surface is provided by the local SO coupling, where the local Hund's rule coupling conspires with the Friedel oscillations to produce a large anisotropy effect. This mechanism seems to be strong enough to explain the suppression of the Kondo resistance anomaly observed in thin films, and it is also expected to be the dominant source of (random) magnetic anisotropy in metallic mesoscopic structures such as nano-grains, nano-wires, or point contacts. \bigskip The authors are indebted to A. Zawadowski and O. \'Ujs\'aghy for valuable discussions. This work has been financed by the Hungarian National Scientific Research Foundation (OTKA T068312 and NF061726) and by a cooperation between the Spanish Ministry of Science and the Hungarian Science and Technology Foundation (HH2006-0027 and OMFB-01230/2007).
\section{Introduction} \subsection{Planar Circular Flow Conjecture} For integers $a\ge 2b>0$, a \Emph{circular $a/b$-flow}\footnote{Jaeger~\cite{Jaeger1988} showed that if $p,q,r,s\in \Z^+$ and $p/q=r/s$, then each graph $G$ has a circular $p/q$-flow if and only if it has a circular $r/s$-flow. (See~\cite{Goddyn1998} for more details.) We use this result implicitly in the present paper.} is a flow that takes values from $\{\pm b, \pm(b+1), \dots, \pm(a-b)\}$. In this paper we study the following conjecture, which arises from Jaeger's Circular Flow Conjecture \cite{Jaeger1988}. \begin{conjecture}[Planar Circular Flow Conjecture] \label{CONJ: PCFC}~\\ Every $2k$-edge-connected planar graph admits a circular $(2+\frac{2}{k})$-flow. \end{conjecture} When $k=1$ this conjecture is the flow version of the 4 Color Theorem. It is true for planar graphs (by 4CT), but false for nonplanar graphs because of the Petersen graph, and all other snarks. Tutte's $4$-Flow Conjecture, from 1966, claims that Conjecture~\ref{CONJ: PCFC} extends to every graph with no Petersen minor. When $k=2$, Conjecture~\ref{CONJ: PCFC} is the dual of Gr\"{o}tzsch's 3-Color Theorem. Tutte's $3$-Flow Conjecture, from 1972, asserts that it extends to all graphs (both planar and nonplanar). In 1981 Jaeger further extended Tutte's Flow Conjectures, by proposing a general Circular Flow Conjecture: {\em for each even integer $k\ge 2$, every $2k$-edge-connected graph admits a circular $(2+\frac{2}{k})$-flow}. That is, he believed Conjecture~\ref{CONJ: PCFC} extends to all graphs for all even $k$. A weaker version of Jaeger's conjecture was proved by Thomassen~\cite{Thomassen2012}, for graphs with edge connectivity at least $2k^2+k$. This edge connectivity condition was substantially improved by Lov\'asz, Thomassen, Wu, Zhang~\cite{LTWZ2013}. 
\begin{theorem}{\em (Lov\'asz, Thomassen, Wu, Zhang~\cite{LTWZ2013})} For each even integer $k\ge 2$, every $3k$-edge-connected graph admits a circular $(2+\frac{2}{k})$-flow. \label{LTWZ-thm} \end{theorem} In contrast, Jaeger's Circular Flow Conjecture was recently disproved for all $k\ge 6$. In~\cite{HLWZ}, for each even integer $k\ge 6$, the authors construct a $2k$-edge-connected nonplanar graph admitting no circular $(2+\frac{2}{k})$-flow. And for large odd integers $k$, we can also modify the construction in~\cite{HLWZ} to get $2k$-edge-connected nonplanar graphs admitting no circular $(2+\frac{2}{k})$-flow. Thus, the planarity hypothesis of Conjecture~\ref{CONJ: PCFC} seems essential. The case $k=4$ of Jaeger's Circular Flow Conjecture, which remains open, is particularly important, since Jaeger \cite{Jaeger1988} observed that if every $9$-edge-connected graph admits a circular $5/2$-flow, then Tutte's celebrated $5$-Flow Conjecture follows. Our main theorems improve on Theorem~\ref{LTWZ-thm}, restricted to planar graphs, when $k\in\{4,6\}$. \begin{theorem} Every $10$-edge-connected planar graph admits a circular $5/2$-flow. \label{5/2-flow-thm} \end{theorem} \begin{theorem} Every $16$-edge-connected planar graph admits a circular $7/3$-flow. \label{7/3-flow-thm} \end{theorem} The dual version of Theorem~\ref{5/2-flow-thm}, on circular coloring, was proved by Dvo\v{r}\'{a}k and Postle~\cite{DP2017}. In fact, their coloring result holds for a larger class of graphs that includes some sparse nonplanar graphs, as well as all planar graphs with girth at least 10. However, our proof is much shorter and avoids using computers for case-checking. Our proof also has new implications for antisymmetric flows (see Theorem~\ref{antisymmetric-thm} below). Theorem~\ref{7/3-flow-thm} is especially interesting because the counterexamples in~\cite{HLWZ} to Jaeger's original circular flow conjecture are $12$-edge-connected nonplanar graphs that admit no circular $7/3$-flow. 
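For concreteness, recall that a circular $a/b$-flow takes values in $\{\pm b, \pm(b+1), \dots, \pm(a-b)\}$, so the flows produced by Theorems~\ref{5/2-flow-thm} and~\ref{7/3-flow-thm} take values in
\[
\{\pm 2, \pm 3\} \qquad\text{and}\qquad \{\pm 3, \pm 4\},
\]
respectively, corresponding to $(a,b)=(5,2)$ and $(a,b)=(7,3)$.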
\subsection{Circular Flows and Modulo Orientations} Graphs in this paper are finite and can have multiple edges, but no loops. Our notation is mainly standard. For a graph $G$, we write $|G|$ for $|V(G)|$ and write $\|G\|$ for $|E(G)|$\aside{$|G|,\|G\|$}. Let $\delta(G)$ denote the minimum degree in a graph $G$. A $k$-vertex is a vertex of degree $k$. For disjoint vertex subsets $X$ and $Y$, let \Emph{$[X,Y]_G$} denote the set of edges in $G$ with one endpoint in each of $X$ and $Y$. Let $X^c=V(G)\setminus X$\aaside{$X^c$, $d(X)$}{0mm}, and let $d(X)=|[X,X^c]|$. For vertices $v$ and $w$, let $\mu(vw)=|[\{v\},\{w\}]_G|$ and $\mu(G)=\max_{v,w\in V(G)}\mu(vw)$\aaside{$\mu(vw),\mu(G)$}{-4mm}. To \Emph{lift} a pair of edges $w_1v$, $vw_2$ incident to a vertex $v$ in a graph $G$ means to delete $w_1v$ and $vw_2$ and create a new edge $w_1w_2$. To \Emph{contract} an edge $e$ in $G$ means to identify its two endpoints and then delete the resulting loop. For a subgraph $H$ of $G$, we write $G/H$ to denote the graph formed from $G$ by successively contracting the edges of $E(H)$. The lifting and contraction operations are used frequently in this paper. An orientation $D$ of a graph $G$ is a \EmphE{modulo $(2p+1)$-orientation}{-2mm} if $d^+_D(v)-d^-_D(v)\equiv 0\pmod{2p+1}$ for each $v\in V(G)$. By the following lemma of Jaeger~\cite{Jaeger1988}, this problem is equivalent to finding circular flows (for a short proof, see~\cite[Theorem 9.2.3]{Zhang-book}). \begin{lem}\cite{Jaeger1988} A graph admits a circular $(2+\frac{1}{p})$-flow if and only if it has a modulo $(2p+1)$-orientation. \label{prop1} \end{lem} To prove our results, we study modulo orientations. Let $G$ be a graph. A function $\beta: V(G) \mapsto \Z_{2p+1}$ is a \EmphE{$\Z_{2p+1}$-boundary}{-4mm} if $\sum_{v\in V(G)}\beta(v)\equiv 0\pmod{2p+1}$. 
Given a $\Z_{2p+1}$-boundary $\beta$, a \EmphE{$(\Z_{2p+1},\beta)$-orientation}{2mm} is an orientation $D$ such that $d_D^+(v)-d_D^-(v)\equiv \beta(v) \pmod{2p+1}$ for each $v\in V(G)$. When such an orientation exists, we say that the boundary $\beta$ is \EmphE{achievable}{-1mm}. If $\beta(v)=0$ for all $v\in V(G)$, then a $(\Z_{2p+1},\beta)$-orientation is simply a modulo $(2p+1)$-orientation. As defined in \cite{Lai2007, Lai2014}, a graph $G$ is \EmphE{strongly $\Z_{2p+1}$-connected}{-2mm} if for any $\Z_{2p+1}$-boundary $\beta$, graph $G$ admits a $(\Z_{2p+1},\beta)$-orientation. When the context is clear, we may simply write \EmphE{$\beta$-orientation}{4mm} for $(\Z_{2p+1},\beta)$-orientation. Suppose we are given a graph $G$, an integer $p$, a $\Z_{2p+1}$-boundary $\beta$ for $G$, and a connected subgraph $H\subsetneq G$. We form $G'$ from $G$ by contracting $H$; that is $G'=G/H$. Let $w$ denote the new vertex in $G'$, formed by contracting $E(H)$. Define $\beta'$ for $G'$ by $\beta'(v)=\beta(v)$ for each $v\in V(G')\setminus\{w\}$, and $\beta'(w)=\sum_{v\in V(H)}\beta(v) \pmod{2p+1}$. Note that $\beta'$ is a $\Z_{2p+1}$-boundary for $G'$. The motivation for generalizing modulo orientations is the following observation of Lai~\cite{Lai2007}, which is also applied in Thomassen et al.~\cite{Thomassen2012,LTWZ2013}. \begin{lem}[\cite{Lai2007}] \label{reduc-lem} Let $G$ be a graph with a subgraph $H$, and let $G'=G/H$. Let $\beta$ and $\beta'$ be $\Z_{2p+1}$ boundaries (respectively) of $G$ and $G'$, as defined above. If $H$ is strongly $\Z_{2p+1}$-connected, then every $\beta'$-orientation of $G'$ can be extended to a $\beta$-orientation of $G$. In particular, each of the following holds. \begin{enumerate} \item[(i)] If $H$ is strongly $\Z_{2p+1}$-connected and $G/H$ has a modulo $(2p+1)$-orientation, then $G$ has a modulo $(2p+1)$-orientation. \item[(ii)] If $H$ and $G/H$ are strongly $\Z_{2p+1}$-connected, then $G$ is also strongly $\Z_{2p+1}$-connected. 
\end{enumerate} \end{lem} \begin{proof} We prove the first statement, since it implies (i) and (ii). Fix a $\beta'$-orientation of $G'$. This yields an orientation $D$ of the subgraph $G-E(G[V(H)])$. By orienting arbitrarily each edge in $E(G[V(H)])\setminus E(H)$, we obtain a $\beta''$-orientation $D_1$ of $G-E(H)$, for some $\beta''$. For each $v\in V(H)$, let $\gamma(v)=\beta(v)-\beta''(v)$. It is easy to check that $\gamma$ is a $\Z_{2p+1}$-boundary of $H$. Since $H$ is strongly $\Z_{2p+1}$-connected, $H$ has a $\gamma$-orientation $D_2$. Hence $D_1\cup D_2$ is a $\beta$-orientation of $G$. \end{proof} ~ \noindent {\bf Proof Outline for Main Results.} To prove Theorems~\ref{5/2-flow-thm} and \ref{7/3-flow-thm}, we actually establish two stronger, more technical results on orientations; namely, we prove Theorems~\ref{THM: Main1} and \ref{THM: Main2}. Lemma~\ref{reduc-lem} shows that strongly $\Z_{2p+1}$-connected graphs are contractible configurations when we are looking for modulo orientations. To prove Theorems~\ref{THM: Main1} and \ref{THM: Main2}, we use lifting and contraction operations to find many more reducible configurations. These configurations eventually facilitate a discharging proof. The proofs of Theorems~\ref{5/2-flow-thm} and~\ref{7/3-flow-thm} are similar, though the latter is harder. In the next section we just discuss Theorem~\ref{5/2-flow-thm}, but most of the key ideas are reused in the proof of Theorem~\ref{7/3-flow-thm}. \section{Circular $5/2$-flows: Proof of Theorem~\ref{5/2-flow-thm}} \label{Z5-sec} \subsection{Modulo $5$-Orientations and Antisymmetric $\Z_5$-flows} To prove Theorem~\ref{5/2-flow-thm}, we will first present a more technical result, Theorem~\ref{THM: Main1}, which yields Theorem~\ref{5/2-flow-thm} as an easy corollary (as we show below in Theorem~\ref{10-edge-thm}). 
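The orientation notions above are easy to experiment with on small multigraphs by brute force. The following Python sketch (an illustration of the definitions only, not part of any proof; all function names are ours) enumerates all orientations to test whether a given $\Z_5$-boundary is achievable. It confirms, for instance, that $3K_2$ has no modulo $5$-orientation (and hence, by Lemma~\ref{prop1}, no circular $5/2$-flow), while $4K_2$ is strongly $\Z_5$-connected.

```python
from itertools import product

def beta_orientable(edges, n, beta, p=2):
    """Decide whether some orientation D of the multigraph satisfies
    d^+_D(v) - d^-_D(v) = beta[v] (mod 2p+1) at every vertex v.

    edges: list of pairs (u, v) with u, v in 0..n-1; parallel edges repeated.
    """
    m = 2 * p + 1
    for signs in product((1, -1), repeat=len(edges)):
        diff = [0] * n  # out-degree minus in-degree at each vertex
        for (u, v), s in zip(edges, signs):
            diff[u] += s
            diff[v] -= s
        if all((diff[v] - beta[v]) % m == 0 for v in range(n)):
            return True
    return False

def strongly_z5_connected(edges, n):
    """Test every Z_5-boundary; the last value is forced by the zero-sum condition."""
    return all(beta_orientable(edges, n, list(b) + [(-sum(b)) % 5])
               for b in product(range(5), repeat=n - 1))

three_K2 = [(0, 1)] * 3  # two vertices joined by 3 parallel edges
four_K2 = [(0, 1)] * 4
print(beta_orientable(three_K2, 2, [0, 0]))  # False: no modulo 5-orientation
print(strongly_z5_connected(four_K2, 2))     # True
```

The same exhaustive search confirms the non-achievable boundaries discussed below for $2K_2$, $3K_2$, $T_{2,2,3}$, and $T_{1,3,3}$. Of course, such checks are feasible only for very small graphs, which is why the proofs work with reducible configurations instead.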
The hypothesis in Theorem~\ref{THM: Main1} uses a weight function $w$, which is motivated by the following Spanning Tree Packing Theorem of Nash-Williams \cite{Nash1961} and Tutte \cite{Tutte1961}: {\em a graph $G$ has $k$ edge-disjoint spanning trees if and only if every partition ${\mathcal P}=\{P_1, P_2,\dots, P_t\}$ satisfies $\sum_{i=1}^{t}d(P_i)-2k(t-1)\ge 0$.} This condition is necessary, since in a partition with $t$ parts, each spanning tree has at least $t-1$ edges between parts. It is shown in~\cite[Proposition~3.9]{LaLL17} that if $G$ is strongly $\Z_{2p+1}$-connected, then it contains $2p$ edge-disjoint spanning trees (although this necessary condition is not always sufficient). To capture this idea, we define the following weight function. \begin{definition} \label{DEF: partition} Let ${\mathcal P}=\{P_1, P_2,\dots, P_t\}$ be a partition of $V(G)$. Let $$w_G({\mathcal P})=\sum_{i=1}^{t}d(P_i)-11t+19$$ and $$w(G)=\min\{w_G({\mathcal P}): {\mathcal P}\text{ is a partition of }V(G)\}.$$ \end{definition} Let \Emph{$T_{a,b,c}$} denote a 3-vertex graph (triangle) with its pairs of vertices joined by $a$, $b$, and $c$ parallel edges; let \Emph{$aH$} denote the graph formed from $H$ by replacing each edge with $a$ parallel edges. For example, $w(3K_2)=3$, $w(2K_2)=1$, $w(T_{2,2,3})=w(T_{1,3,3})=0$; see Figure~\ref{FIG: K23J12}. For each of these four graphs the minimum in the definition of $w(G)$ is attained only by the partition with each vertex in its own part. We typically assume $V(T_{a,b,c})=\{v_1,v_2,v_3\}$ and $d(v_1)\le d(v_2)\le d(v_3)$.
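Since Definition~\ref{DEF: partition} minimizes over all set partitions, $w(G)$ can be computed directly for tiny graphs. The Python sketch below (our own illustrative code, not part of the paper) reproduces the values just quoted.

```python
def partitions(vs):
    """Yield every set partition of the list vs (as a list of blocks)."""
    if not vs:
        yield []
        return
    first, rest = vs[0], vs[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` into an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part              # or into a block of its own

def weight(edges, n):
    """w(G) = min over partitions P of sum_i d(P_i) - 11|P| + 19."""
    best = None
    for part in partitions(list(range(n))):
        blocks = [set(b) for b in part]
        # Each crossing edge is counted once per endpoint block,
        # so deg_sum equals sum_i d(P_i).
        deg_sum = sum(1 for (u, v) in edges
                      for b in blocks if (u in b) != (v in b))
        w = deg_sum - 11 * len(blocks) + 19
        best = w if best is None else min(best, w)
    return best

def triangle(a, b, c):  # T_{a,b,c}: three vertices, pairs joined by a, b, c edges
    return [(0, 1)] * a + [(0, 2)] * b + [(1, 2)] * c

print(weight([(0, 1)] * 3, 2))       # w(3K_2) = 3
print(weight([(0, 1)] * 2, 2))       # w(2K_2) = 1
print(weight(triangle(2, 2, 3), 3))  # w(T_{2,2,3}) = 0
print(weight(triangle(1, 3, 3), 3))  # w(T_{1,3,3}) = 0
```

For each of these four graphs the minimum is attained at the partition into singletons, in line with the remark above.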
\begin{figure}[t] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(150,30) \put(0,10){\circle*{2}}\put(20,10){\circle*{2}} \qbezier(0, 10)(0, 10)(20, 10)\qbezier(0, 10)(10, 15)(20, 10)\qbezier(0, 10)(10, 5)(20, 10) \put(5,-7){\footnotesize{$3K_2$}} \put(40,10){\circle*{2}}\put(60,10){\circle*{2}} \qbezier(40, 10)(50, 15)(60, 10)\qbezier(40, 10)(50, 5)(60, 10) \put(46,-7){\footnotesize{$2K_2$}} \put(80,5){\circle*{2}}\put(100,5){\circle*{2}}\put(90,25){\circle*{2}} \qbezier(80, 5)(90, 10)(100, 5)\qbezier(80, 5)(90, 5)(100, 5)\qbezier(80, 5)(90, 0)(100, 5) \qbezier(80, 5)(85, 11)(90, 25)\qbezier(80, 5)(84, 17)(90, 25) \qbezier(100, 5)(95, 10)(90, 25)\qbezier(100, 5)(94, 21)(90, 25) \put(86,-7){\footnotesize{$T_{2,2,3}$}} \put(120,5){\circle*{2}}\put(140,5){\circle*{2}}\put(130,25){\circle*{2}} \qbezier(120, 5)(130, 10)(140, 5)\qbezier(120, 5)(130, 5)(140, 5)\qbezier(120, 5)(130, 0)(140, 5) \qbezier(120, 5)(125, 15)(130, 25) \qbezier(140, 5)(132, 10)(130, 25)\qbezier(140, 5)(137, 22)(130, 25)\qbezier(140, 5)(135, 15)(130, 25) \put(127,-7){\footnotesize{$T_{1,3,3}$}} \end{picture} \end{center} \vspace{0.4cm} \caption{The graphs $3K_2, 2K_2, T_{2,2,3}, T_{1,3,3}$.} \label{FIG: K23J12} \end{figure} Let $\mathcal{T}=\{2K_2, 3K_2, T_{2,2,3}, T_{1,3,3}\}$. Each graph $G\in \mathcal{T}$ (see Figure~\ref{FIG: K23J12}) is not strongly $\Z_5$-connected, since there exists some $\Z_5$-boundary $\beta$ for which $G$ has no $\beta$-orientation. A short case analysis shows that none of the following boundaries are achievable. For $3K_2$, let $\beta(v_1)=\beta(v_2)=0$. For $2K_2$, let $\beta(v_1)=1$ and $\beta(v_2)=4$. For $T_{2,2,3}$, let $\beta(v_1)=1$ and $\beta(v_2)=\beta(v_3)=2$. For $T_{1,3,3}$, let $\beta(v_1)=\beta(v_2)=1$ and $\beta(v_3)=3$. Now suppose that $G$ has a partition $\P$ such that $G/\P\in\mathcal{T}$, where the vertices in each $P_i$ are identified to form $v_i$.
To construct a $\Z_5$-boundary $\gamma$ for which $G$ has no $\gamma$-orientation, we assign boundary $\gamma$ so that $\sum_{v\in P_i}\gamma(v)\equiv \beta(v_i) \pmod{5}$. Hence $G$ has no $\gamma$-orientation precisely because $G/\P$ has no $\beta$-orientation. We call a partition $\P$ \EmphE{troublesome}{-4mm} if $G/\P\in \mathcal{T}=\{2K_2, 3K_2, T_{2,2,3}, T_{1,3,3}\}$. The main result of Section~\ref{Z5-sec} is Theorem~\ref{THM: Main1}. \begin{theorem} \label{THM: Main1} Let $G$ be a planar graph and $\beta$ be a $\Z_5$-boundary of $G$. If $w(G)\ge 0$, then $G$ admits a $(\Z_5,\beta)$-orientation, unless $G$ has a troublesome partition. \end{theorem} \smallskip Before proving Theorem~\ref{5/2-flow-thm}, we prove a slightly weaker result, assuming the truth of Theorem~\ref{THM: Main1}. \begin{thm}\label{11-edge-thm} If $G$ is an 11-edge-connected planar graph, then $G$ is strongly $\Z_5$-connected. \end{thm} \begin{proof} Let $G$ be an 11-edge-connected planar graph. Fix a partition $\P=\{P_1, P_2,\dots, P_t\}$ with $t\ge 2$. Since $G$ is 11-edge-connected, $d(P_i)\ge 11$ for each $i$, which implies $w_G(\P)\ge 11t-11t+19=19$; also $w_G(\{V(G)\})=8$. Thus $w(G)\ge 8\ge 0$. Moreover, each troublesome partition $\P$ has at least two parts and satisfies $w_G(\P)=w(G/\P)\le 3$, so $G$ has no troublesome partition. Now Theorem~\ref{THM: Main1} implies that $G$ is strongly $\Z_5$-connected. \end{proof} An \Emph{antisymmetric $\Z_5$-flow} in a directed graph $D=D(G)$ is a $\Z_5$-flow such that no two edges have flow values summing to 0. One example is any $\Z_5$-flow that uses only values 1 and 2. Esperet, de Verclos, Le, and Thomass\'{e} \cite{EJLT2017} proved that if a graph $G$ is strongly $\Z_5$-connected, then every orientation $D(G)$ of $G$ admits an antisymmetric $\Z_5$-flow. Together with work of Lov\'asz et al.~\cite{LTWZ2013}, this implies that every directed $12$-edge-connected graph admits an antisymmetric $\Z_5$-flow.
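The defining condition for an antisymmetric $\Z_5$-flow can likewise be checked exhaustively on small examples. The Python sketch below (illustration only; the example graph and all names are ours) searches for such a flow on $2K_3$, the triangle with every edge doubled, with both triangle copies oriented cyclically. The flow it finds uses only the value 1, in line with the remark above that flows with values in $\{1,2\}$ are automatically antisymmetric.

```python
from itertools import product

def antisymmetric_z5_flow(arcs, n):
    """Search for a nowhere-zero Z_5-flow on the given arcs (u -> v) whose set
    of values is disjoint from its set of negatives mod 5
    (so no two flow values sum to 0)."""
    for values in product((1, 2, 3, 4), repeat=len(arcs)):
        used = set(values)
        if used & {(-x) % 5 for x in used}:
            continue  # two flow values sum to 0 mod 5: not antisymmetric
        net = [0] * n
        for (u, v), f in zip(arcs, values):
            net[u] -= f  # f units of flow leave u ...
            net[v] += f  # ... and enter v
        if all(x % 5 == 0 for x in net):  # conservation at every vertex
            return values
    return None

# 2K_3: both triangle copies oriented cyclically 0 -> 1 -> 2 -> 0.
arcs = [(0, 1), (1, 2), (2, 0)] * 2
print(antisymmetric_z5_flow(arcs, 3))  # (1, 1, 1, 1, 1, 1)
```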
Esperet et~al.~\cite{EJLT2017} conjectured the stronger result that {\em every directed $8$-edge-connected graph admits an antisymmetric $\Z_5$-flow}. The concept of antisymmetric flows and its dual, homomorphisms to oriented graphs, were introduced by Ne\v{s}et\v{r}il and Raspaud \cite{NR1999}. In \cite{NRS1997}, Ne\v{s}et\v{r}il, Raspaud and Sopena showed that every orientation of a planar graph of girth at least $16$ has a homomorphism to an oriented simple graph on at most 5 vertices. The girth condition is reduced to $14$ in \cite{BKNRS1999}, to $13$ in \cite{BKKW2004}, and finally to $12$ in \cite{BIK2007}. By duality, the results of \cite{NR1999}, \cite{EJLT2017}, and \cite{LTWZ2013} combine to imply that girth $12$ suffices. After the girth $12$ result of Borodin et al.~\cite{BIK2007} in 2007, Esperet et al.~\cite{EJLT2017} remarked that ``it is not known whether the same holds for planar graphs of girth at least $11$.'' Note that the result of Dvo\v{r}\'{a}k and Postle~\cite{DP2017} does not seem to apply to homomorphisms to oriented graphs. By Theorem~\ref{11-edge-thm}, we improve this girth bound for planar graphs. \begin{thm} \label{antisymmetric-thm} Every directed $11$-edge-connected planar graph admits an antisymmetric $\Z_5$-flow. Dually, every orientation of a planar graph of girth at least $11$ has a homomorphism to an oriented simple graph on at most 5 vertices. \end{thm} A graph $G$ has \Emph{odd edge-connectivity} $t$ if the smallest edge cut of odd size has size $t$. Our strongest result on modulo $5$-orientations is the following, which includes Theorem~\ref{5/2-flow-thm} as a special case. \begin{theorem} Every odd-$11$-edge-connected planar graph admits a modulo 5-orientation. In particular, every 10-edge-connected planar graph admits a modulo 5-orientation (and thus a circular 5/2-flow). \label{10-edge-thm} \end{theorem} \begin{proof} The second statement follows from the first, by Lemma~\ref{prop1}. 
To prove the first, suppose the theorem is false, and let $G$ be a counterexample minimizing $\|G\|$. By Zhang's Splitting Lemma\footnote{This says that if $G$ has a vertex $v$ with $d(v)\notin\{2,11\}$, then we can lift a pair of edges incident to $v$ that are successive in the circular order around $v$, and the resulting graph is still planar and odd-11-edge-connected. For example, if $d(v)=10$, then all edges incident to $v$ will be lifted in pairs, so the boundary value at $v$ in the resulting orientation will be 0. This is why the proof yields a modulo 5-orientation, but does not show that $G$ is strongly $\Z_5$-connected.} for odd edge-connectivity~\cite{CQsplitting02}, we know $\delta(G)\ge 11$. If $G$ is $11$-edge-connected, then we are done by Theorem~\ref{11-edge-thm}; so assume it is not. Choose a smallest set $W\subset V(G)$ such that $d(W)<11$. Note that $|W|\ge 2$, and every proper subset $W'\subsetneq W$ satisfies $d(W')\ge 11$. Let $H=G[W]$. For any partition $\P=\{P_1, P_2,\dots, P_t\}$ of $H$ with $t\ge 2$, we know that $d_G(P_i)\ge 11$ by the minimality of $W$, since $P_i\subsetneq W$. Since also $d_G(W^c)=d(W)<11$, this implies \begin{eqnarray*} w_H({\mathcal P})&=&\sum_{i=1}^td_H(P_i)-11t+19\\ &=&\sum_{i=1}^td_G(P_i)-d_G(W^c)-11t+19\\ &>& 11t-11-11t+19=8. \end{eqnarray*} Thus $w_H(\P)\ge 9$ for every partition $\P$ of $H$ with at least two parts, while $w_H(\{W\})=8$. So $w(H)\ge 8\ge 0$ and $H$ has no troublesome partition, which implies $H$ is strongly $\Z_5$-connected by Theorem~\ref{THM: Main1}. By the minimality of $G$, the graph $G/H$ has a modulo 5-orientation. By Lemma~\ref{reduc-lem}, this extends to a modulo 5-orientation of $G$, which completes the proof. \end{proof} \subsection{Reducible Configurations and Partitions} \label{sec:Z5-prelims} To prove Theorem~\ref{THM: Main1}, we assume the result is false and study a minimal counterexample. In the next subsection we prove many structural results about the minimal counterexample, which ultimately imply it cannot exist. In this subsection we prove that a few small graphs cannot appear as subgraphs of the minimal counterexample.
We call such a forbidden subgraph \Emph{reducible}. By Lemma~\ref{reduc-lem}, to show that $H$ is reducible it suffices to show $H$ is strongly $\Z_5$-connected. Let $G$ be a graph. We often lift a pair of edges $w_1v$, $vw_2$ incident to a vertex $v$ in $G$ to form a new graph $G'$. That is, we delete $w_1v$ and $vw_2$ and create a new edge $w_1w_2$. If $G'$ is strongly $\Z_k$-connected, then so is $G$, since from any $\beta$-orientation of $G'$ we delete the edge $w_1w_2$ and add the directed edges $w_1v$ and $vw_2$ to obtain a $\beta$-orientation of $G$. To prove $G$ is strongly $\Z_k$-connected, we use lifting in two similar ways. First, we lift some edge pairs to create a $G'$ that contains a strongly $\Z_k$-connected subgraph $H$. If $G'/H$ is strongly $\Z_k$-connected, then so is $G'$ by Lemma~\ref{reduc-lem}. As discussed in the previous paragraph, so is $G$. Second, given a $\Z_k$-boundary $\beta$, we orient some edges incident to a vertex $v$ to achieve $\beta(v)$. For each edge $vw$ that we orient, we increase or decrease by 1 the value of $\beta(w)$. Now we delete $v$ and all oriented edges, and lift the remaining edges incident to $v$ (in pairs). Call the resulting graph and boundary $G'$ and $\beta'$. If $G'$ has a $\beta'$-orientation, then $G$ has a $\beta$-orientation. We call these \EmphE{lifting reductions of the first and second type}{-10mm}, respectively. In this paper whenever we lift an edge pair $vw$, $wx$ we require that edge $vx$ already exists. Thus, our lifting reductions always preserve planarity. \begin{lemma} \label{LEM: 4K2J2K4inSZ5} Each of the graphs $4K_2, T_{2,3,3}$, $2K_4$, and $3C_4$, shown in Figure~\ref{FIG: 4K2J2K4C4}, is strongly $\Z_5$-connected. 
\end{lemma} \begin{figure}[ht] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(165,30) \put(0,10){\circle*{2}}\put(25,10){\circle*{2}} \qbezier(0, 10)(12, 12)(25, 10)\qbezier(0, 10)(12, 16)(25, 10)\qbezier(0, 10)(12, 8)(25, 10)\qbezier(0, 10)(12, 4)(25, 10) \put(10,-7){\footnotesize{$4K_2$}} \put(45,5){\circle*{2}}\put(65,5){\circle*{2}}\put(55,25){\circle*{2}} \qbezier(45, 5)(55, 8)(65, 5)\qbezier(45, 5)(55, 2)(65, 5) \qbezier(45, 5)(52, 10)(55, 25)\qbezier(45, 5)(48, 20)(55, 25)\qbezier(45, 5)(45, 5)(55, 25) \qbezier(65, 5)(65, 5)(55, 25)\qbezier(65, 5)(58, 10)(55, 25)\qbezier(65, 5)(62, 20)(55, 25) \put(51,-7){\footnotesize{$T_{2,3,3}$}} \put(85,5){\circle*{2}}\put(115,5){\circle*{2}}\put(100,30){\circle*{2}}\put(100,14){\circle*{2}} \qbezier(85, 5)(100, 7)(115, 5)\qbezier(85, 5)(100, 3)(115, 5) \qbezier(85, 5)(90, 17)(100, 30)\qbezier(85, 5)(94, 17)(100, 30) \qbezier(115, 5)(110, 17)(100, 30)\qbezier(115, 5)(106, 17)(100, 30) \qbezier(85, 5)(90, 10)(100, 14)\qbezier(85, 5)(94, 9)(100, 14) \qbezier(115, 5)(110, 10)(100, 14)\qbezier(115, 5)(106, 9)(100, 14) \qbezier(100, 30)(98, 22)(100, 14)\qbezier(100,30)(102, 22)(100, 14) \put(97,-7){\footnotesize{$2K_4$}} \put(143,-7){\footnotesize{$3C_4$}} \put(135,5){\circle*{2}}\put(155,5){\circle*{2}}\put(135,25){\circle*{2}}\put(155,25){\circle*{2}} \qbezier(135,5)(135,5)(155,5)\qbezier(135,5)(145,1)(155,5)\qbezier(135,5)(145,9)(155,5) \qbezier(135,5)(135,15)(135,25)\qbezier(135,5)(131,15)(135,25)\qbezier(135,5)(139,15)(135,25) \qbezier(135,25)(135,25)(155,25)\qbezier(135,25)(145,21)(155,25)\qbezier(135,25)(145,29)(155,25) \qbezier(155,5)(155,15)(155,25)\qbezier(155,5)(151,15)(155,25)\qbezier(155,5)(159,15)(155,25) \end{picture} \end{center} \vspace{0.4cm} \caption{The graphs $4K_2, T_{2,3,3}, 2K_4, 3C_4$.} \label{FIG: 4K2J2K4C4} \end{figure} \begin{proof} Proving the lemma amounts to checking a finite list of cases. So our goal is to make this as painless as possible. 
Throughout we fix a $\Z_5$-boundary $\beta$ and construct an orientation that achieves $\beta$. Let $G=4K_2$ and $V(G)=\{v_1,v_2\}$. To achieve $\beta(v_1)\in\{0,1,2,3,4\}$ the number of edges we orient out of $v_1$ is (respectively) 2, 0, 3, 1, 4. Let $G=T_{2,3,3}$ and $V(G)=\{v_1,v_2,v_3\}$, with $d(v_1)=d(v_2)=5$ and $d(v_3)=6$. If $\beta(v_1)\ne 0$, then we achieve $\beta$ by orienting 3 edges incident to $v_1$, and lifting a pair of unused, nonparallel, edges incident to $v_1$ to create a fourth edge $v_2v_3$. Since $4K_2$ is strongly $\Z_5$-connected, we can use the resulting 4 edges to achieve $\beta(v_2)$ and $\beta(v_3)$. (This is a lifting reduction of the second type. In what follows, we are less explicit about such descriptions.) So we assume that $\beta(v_1)=0$ and, by symmetry, $\beta(v_2)=0$. This implies $\beta(v_3)=0$. Now we orient all edges from $v_1$ to $v_3$, from $v_1$ to $v_2$ and from $v_3$ to $v_2$. Let $G=2K_4$ and $V(G)=\{v_1,v_2,v_3,v_4\}$. If $\beta(v_1)\in\{0,2,3\}$, then we achieve $\beta(v_1)$ by orienting two nonparallel edges incident to $v_1$. Now we lift two pairs of unused edges incident to $v_1$ to get a $T_{2,3,3}$. Since $T_{2,3,3}$ is strongly $\Z_5$-connected, we are done by Lemma~\ref{reduc-lem}. So assume $\beta(v_1)\notin\{0,2,3\}$. By symmetry, we assume $\beta(v_i)\in\{1,4\}$ for all $i$. Since $\beta$ is a $\Z_5$-boundary, we further assume $\beta(v_i)=1$ when $i\in\{1,2\}$ and $\beta(v_j)=4$ when $j\in\{3,4\}$. Let $V_1=\{v_1,v_2\}$ and $V_2=\{v_3,v_4\}$. Orient all edges from $V_2$ to $V_1$. For each pair of parallel edges within $V_1$ or $V_2$, orient one edge in each direction. This achieves $\beta$. Let $G=3C_4$ and $V(G)=\{v_1,v_2,v_3,v_4\}$ with $v_1,v_3\in N(v_2)\cap N(v_4)$. If $\beta(v_1)\in\{0,2,3\}$, then we achieve $\beta(v_1)$ by orienting two nonparallel edges incident to $v_1$ and lifting two pairs of edges incident to $v_1$. 
The resulting unoriented graph is $T_{2,3,3}$, so we are done by Lemma~\ref{reduc-lem}. Assume instead, by symmetry, that $\beta(v_i)\in\{1,4\}$ for all $i$. Since $\beta$ is a $\Z_5$-boundary, two vertices $v_i$ have $\beta(v_i)=1$ and two vertices $v_j$ have $\beta(v_j)=4$. By symmetry, assume $\beta(v_1)=1$. If $\beta(v_3)=1$, then orient all edges out from $v_1$ and $v_3$. Assume instead, by symmetry, that $\beta(v_2)=1$; now reverse one edge $v_3v_2$ from the previous orientation. \end{proof} \begin{defn} For partitions ${\P}=\{P_1, P_2,\dots, P_t\}$ and ${\P}'=\{P_1', P_2',\dots, P_s'\}$, we say that $\P'$ is a \EmphE{refinement}{0mm} of $\P$, denoted by ${\P}'\preceq{\P}$, if ${\P}'$ is obtained from ${\P}$ by further partitioning $P_i$ into smaller sets for some $P_i$'s in ${\P}$. More formally, we require that for every $P_j'\in {\P}'$, there exists $P_i\in {\P}$ such that $P_j'\subseteq P_i$. Since partitions are central to our theorems and proofs, we name a few common types of them. A partition ${\P}=\{P_1, P_2,\dots, P_t\}$ is \EmphE{trivial}{-3mm} if each part $P_i$ is a singleton, i.e., $V(G)$ is partitioned into $|G|$ parts; otherwise $\P$ is \EmphE{nontrivial}{-3mm}. A trivial partition is minimal under the relation $\preceq$. A partition ${\P}=\{P_1, P_2,\dots, P_t\}$ is \EmphE{almost trivial}{-3mm} if $t=|G|-1$ and there is a unique part $P_i$ with $|P_i|=2$. A partition ${\P}$ is called \EmphE{normal}{4mm} if it is neither trivial nor almost trivial and ${\P}\neq \{V(G)\}$. \end{defn} Given a partition $\P$ of $V(G)$ and a partition $\mathcal{Q}$ of $G[P_1]$, the following lemma relates the weights of $\P$, $\mathcal{Q}$, and the refinement $\mathcal{Q}\cup(\P\setminus\{P_1\})$. \begin{lemma} Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a partition of $V(G)$ with $|P_1|>1$. Let $H=G[P_1]$ and let ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $V(H)$.
Now ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ is a refinement of $\P$ satisfying \begin{eqnarray} \label{EQ: wH} w_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))= w_H({\mathcal{Q}})+ w_G({\P})-8. \end{eqnarray} \label{mod5-key-lem} \end{lemma} \begin{proof} Clearly, ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ is a refinement of $\P$, and it follows from Definition \ref{DEF: partition} that \begin{align*} w_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\})) &=\sum_{i=1}^sd_G(Q_i) + \sum_{j=2}^td_G(P_j) -11(s+t-1)+19\\ &=[\sum_{i=1}^sd_G(Q_i)-d_G(P_1)-11s+19] + [\sum_{j=1}^td_G(P_j) -11(t-1)]\\ &=[\sum_{i=1}^sd_H(Q_i)-11s+19] + [\sum_{j=1}^td_G(P_j) -11t+19]-(19-11)\\ &= w_H({\mathcal{Q}})+ w_G({\P})-(19-11). \end{align*} \par\vspace{-\belowdisplayskip}\vspace{-\parskip}\vspace{-\baselineskip} \end{proof} \subsection{Properties of a Minimal Counterexample to Theorem~\ref{THM: Main1}} {\bf Let $G$ be a counterexample to Theorem \ref{THM: Main1} that minimizes $|G|+\|G\|$.} Thus Theorem~\ref{THM: Main1} holds for all graphs smaller than $G$. This implies the following lemma, which we will use frequently. \begin{lemma}\label{LEM: smallH} If $H$ is a planar graph with $w(H)\ge 0$ and $|H|+\|H\|<|G|+\|G\|$, then each of the following holds. \begin{itemize} \item[(a)] If $w_H({\P})\ge 4$ for every nontrivial partition ${\P}$, then $H$ is strongly $\Z_5$-connected unless $H\in\{2K_2, 3K_2$, $T_{1,3,3}, T_{2,2,3}\}$. \item[(b)] If $w(H)\ge 1$ and $H$ is $4$-edge-connected, then $H$ is strongly $\Z_5$-connected. \item[(c)] If $w(H)\ge 4$, then $H$ is strongly $\Z_5$-connected. \end{itemize} \end{lemma} \begin{proof} To prove each part, we fix a $\Z_5$-boundary $\beta$ and apply Theorem \ref{THM: Main1} to $H$. Notice that each troublesome partition $\P$ satisfies $w(G/\P)\le 3$. So for (a), only the trivial partition can be troublesome. Thus, $H$ is strongly $\Z_5$-connected unless $H\in\{2K_2, 3K_2, T_{1,3,3}, T_{2,2,3}\}$. 
For (b), $H$ has no partition $\P$ with $H/\P\in\{2K_2,3K_2\}$ since $H$ is $4$-edge-connected. And $H$ has no partition $\P$ with $H/\P\in\{T_{1,3,3},T_{2,2,3}\}$ since $w(H)\ge 1$, while $w(T_{1,3,3})=w(T_{2,2,3})=0$. So $H$ is again strongly $\Z_5$-connected, by Theorem~\ref{THM: Main1}. Finally, (c) follows from (b), since if $H$ has an edge cut $[X,X^c]$ of size at most 3, then $w_H(\{X,X^c\})\le 2(3)-11(2)+19=3$, which contradicts our assumption that $w(H)\ge 4$. \end{proof} The main idea of our proof is to show that the value of the weight function $w_G(\P)$ is relatively large for each nontrivial partition $\P$. This enables us to slightly modify certain proper subgraphs and still apply Lemma~\ref{LEM: smallH} to the resulting graph $H$. This added flexibility (to slightly modify the subgraph) helps us to prove that more subgraphs are reducible. In the next section, these forbidden subgraphs facilitate a discharging proof that shows that our minimal counterexample $G$ cannot exist. \begin{claim} \label{CL: nostrongZ5} $G$ has no strongly $\Z_5$-connected subgraph $H$ with $|H|>1$. In particular, \begin{itemize} \item[(a)] $G$ has no copy of $4K_2$, $T_{2,3,3}$, $2K_4$, or $3C_4$ (by Lemma~\ref{LEM: 4K2J2K4inSZ5}), and \item[(b)] $|G|\ge 4$. \end{itemize} \end{claim} \begin{proof} Suppose to the contrary that $H$ is a strongly $\Z_5$-connected subgraph of $G$ with $|H|>1$, and let $G'=G/H$. Since $G$ is a minimal counterexample, $G'$ is strongly $\Z_5$-connected, by Theorem~\ref{THM: Main1}. So Lemma~\ref{reduc-lem} implies $G$ is strongly $\Z_5$-connected, which is a contradiction. This proves both the first statement and (a). For (b), clearly $|G|\ge 3$, since $w(G)\ge 0$ and $G\notin\{2K_2,3K_2\}$ and $G$ contains no $4K_2$. So assume $|G|=3$. Since $w_G(\P)\ge 0$ for the trivial partition $\P$, we know that $2\|G\|-33+19\ge 0$, so $\|G\|\ge 7$. Since $G\notin\{T_{1,3,3},T_{2,2,3}\}$, either $G$ contains $4K_2$ or $G$ contains $T_{2,3,3}$. Each case contradicts (a).
\end{proof} \begin{claim} \label{CL: nontrivialge4} Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a nontrivial partition of $V(G)$. Now \begin{itemize} \item[(a)] $w_G({\P})\ge 5$, and \item[(b)] $w_G({\P})\ge 8$ if ${\P}$ is normal. \end{itemize} \end{claim} \begin{proof} Our proof is by contradiction. For an almost trivial partition ${\P}$, say with unique nontrivial part $\{u,v\}$, we have $w_G({\P})=w_G({\P}^*)-2\mu(uv)+11\ge 0-2(3)+11=5$, where $\P^*$ is the trivial partition, since $\mu(uv)\le 3$ because $G$ does not contain $4K_2$ by Claim~\ref{CL: nostrongZ5}(a). If $\P=\{V(G)\}$, then $w_G(\P)=0-11+19=8$. Since $|G|\ge 4$ by Claim~\ref{CL: nostrongZ5}(b), all other nontrivial partitions are normal. Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a normal partition of $V(G)$. By symmetry we assume $|P_1|> 1$ and let $H=G[P_1]$. For any partition ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ of $V(H)$, by Eq.~(\ref{EQ: wH}) the refinement ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ of $\P$ satisfies \begin{eqnarray}\label{EQ: wPwQ} w_H({\mathcal{Q}})= w_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\})) - w_G({\P})+8. \end{eqnarray} (a) We first show that $w_G(\P)\ge 5$. If $w_G(\P)\le 4$, then Eq.~(\ref{EQ: wPwQ}) implies $w_H(\mathcal{Q})\ge 4$ for any partition $\mathcal{Q}$ of $H$, since $w_G({\mathcal{Q}}\cup (\P\setminus\{P_1\}))\ge 0$. Hence $w(H)\ge 4$ and $H$ is strongly $\Z_5$-connected by Lemma~\ref{LEM: smallH}(c), which contradicts Claim~\ref{CL: nostrongZ5}. This proves (a). (b) We now show that $w_G({\P})\ge 8$. Suppose to the contrary that $w_G({\P})\le 7$. If ${\P}$ contains at least two nontrivial parts, say $|P_2|>1$, then (a) implies $w_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 5$ for any partition ${\mathcal{Q}}$ of $H$. Hence $w(H)\ge 6$ by Eq.~(\ref{EQ: wPwQ}), and so $H$ is strongly $\Z_5$-connected by Lemma~\ref{LEM: smallH}(c), which contradicts Claim~\ref{CL: nostrongZ5}. So assume instead that ${\P}$ contains a unique nontrivial part $P_1$ and $|P_1|\ge 3$.
For any nontrivial partition ${\mathcal{Q}}$ of $H$, the refinement ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ of $\P$ is a nontrivial partition of $G$, and so $w_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 5$ by (a). Thus $w_H({\mathcal{Q}})\ge 6$ for any nontrivial partition ${\mathcal{Q}}$ of $H$ by Eq.~(\ref{EQ: wPwQ}). For the trivial partition ${\mathcal{Q}}^*$ of $H$, since $w_G({\P})\le 7$, Eq.~(\ref{EQ: wPwQ}) implies $w_H({\mathcal{Q}}^*)\ge 1$. Since $|H|=|P_1|\ge 3$, we know $H\notin \{2K_2,3K_2\}$. Since $w(H)\ge 1$, we know $H\not\cong T_{a,b,c}$ with $a+b+c\le 7$. So Lemma \ref{LEM: smallH}(a) implies that $H$ is strongly $\Z_5$-connected, which contradicts Claim~\ref{CL: nostrongZ5}. \end{proof} The next two claims are consequences of Claim \ref{CL: nontrivialge4}; they give lower bounds on the edge-connectivity of $G$. \begin{claim}\label{CL: 2nontrivialge6} For a partition ${\P}=\{P_1, P_2,\dots, P_t\}$ of $V(G)$, \begin{itemize} \item[(a)] if $|P_1|\ge 2$ and $|P_2|\ge 2$, then $w_G({\P})\ge 10$; and \item[(b)] if $|P_1|\ge 2$ and $|P_2|\ge 3$, then $w_G({\P})\ge 13$. \end{itemize} \end{claim} \begin{proof} Let $H=G[P_1]$ and ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $H$. Let $\P'= {\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$. Note that if $|P_2|\ge 2$, then the refinement $\P'$ is nontrivial, and if $|P_2|\ge 3$, then $\P'$ is normal. By Eq.~(\ref{EQ: wH}), \begin{align*} w_G(\P') &= w_H({\mathcal{Q}})+ w_G({\P})-8. \end{align*} (a) If $w_G(\P)\le 9$, then $w_H(\mathcal{Q})\ge 4$ for any partition $\mathcal{Q}$ of $H$ since $w_G(\P')\ge 5$ by Claim~\ref{CL: nontrivialge4}(a). So $H$ is strongly $\Z_5$-connected by Lemma \ref{LEM: smallH}(c), which contradicts Claim \ref{CL: nostrongZ5}. (b) Similar to (a), if $w_G({\P})\le 12$, then $w_H({\mathcal{Q}})\ge 4$ for any partition ${\mathcal{Q}}$ of $H$ since $w_G(\P')\ge 8$ by Claim~\ref{CL: nontrivialge4}(b).
Again $H$ is strongly $\Z_5$-connected by Lemma \ref{LEM: smallH}(c), which contradicts Claim \ref{CL: nostrongZ5}. \end{proof} \begin{claim}\label{CL: ess7} Let $[X, X^c]$ be an edge cut of $G$. \begin{itemize} \item[(a)] Now $|[X,X^c]|\ge 6$. That is, $G$ is $6$-edge-connected. \item[(b)] If $|X|\ge 2$ and $|X^c|\ge 3$, then $|[X,X^c]|\ge 8$. \end{itemize} \end{claim} \begin{proof} If $[X, X^c]$ is an edge cut of $G$, then $\P=\{X, X^c\}$ is a partition of $V(G)$. (a) Clearly $\P$ is normal, since $|G|\ge 4$ by Claim~\ref{CL: nostrongZ5}(b). Now Claim \ref{CL: nontrivialge4}(b) implies $8\le w_G(\P)=2|[X,X^c]|-22+19$, which yields $|[X,X^c]|\ge 6$. (b) If $|X|\ge 2$ and $|X^c|\ge 3$, then $w(\P)\ge 13$ by Claim \ref{CL: 2nontrivialge6}(b). So $13\le w_G(\P)=2|[X,X^c]|-22+19$, which implies $|[X,X^c]|\ge 8$. \end{proof} Next we show that $G$ contains no copy of any graph in Figure~\ref{FIG: W123} below. We write $H\dit$\aside{$H\dit$, $H\diit$} to denote the graph formed from $H$ by subdividing one copy of an edge of maximum multiplicity. So, for example, $4K_2\dit=T_{1,1,3}$. We write $H\diit$ to denote $(H\dit)\dit$. (The reader may think of the $\circ$ as representing the new 2-vertex.) \begin{claim}\label{CL: noW1} $G$ has no copy of $T_{1,1,3}$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{1,1,3}$, with vertices $x,y,z$ and $\mu(xy)=3$. We lift $xz, zy$ to become a new edge $xy$ and then contract the resulting $4K_2$ induced by $\{x,y\}$. Let $G'$ denote the resulting graph. The trivial partition ${\cal Q}^*$ of $G'$ satisfies $w_{G'}({\cal Q}^*)\ge w(G)-2(5)+11\ge 1$. Every nontrivial partition ${\mathcal{Q}'}$ of $G'$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{x, y\}$.
Since $xz, zy$ are the only two edges possibly counted in $w_{G}({\mathcal{Q}})$ but not in $w_{G'}({\mathcal{Q}'})$, we have $w_{G'}({\mathcal{Q}'})\ge w_{G}({\mathcal{Q}})- 4\ge 4$, by Claim~\ref{CL: nontrivialge4}(b). Thus $w(G')\ge 1$. By Claim \ref{CL: ess7}, $G$ is $6$-edge-connected, so $G'$ is $4$-edge-connected. Thus $G'$ is strongly $\Z_5$-connected, by Lemma \ref{LEM: smallH}(b). This is a lifting reduction of the first type, so $G$ is strongly $\Z_5$-connected, which is a contradiction. \end{proof} \begin{figure}[t] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(130,40) \put(0,0){\circle*{2}}\put(30,0){\circle*{2}}\put(15,20){\circle*{2}} \qbezier(0, 0)(15, 6)(30, 0)\qbezier(0, 0)(15, -6)(30, 0)\qbezier(0, 0)(15, 0)(30, 0)\qbezier(0, 0)(0, 0)(15, 20)\qbezier(30, 0)(15, 20)(15, 20) \put(50,0){\circle*{2}}\put(70,0){\circle*{2}}\put(50,20){\circle*{2}} \put(70,20){\circle*{2}} \put(60,32){\circle*{2}} \qbezier(50, 0)(70, 0)(70, 0)\qbezier(50, 0)(60, -4)(70, 0)\qbezier(50, 0)(60, 4)(70, 0)\qbezier(50, 0)(50, 10)(50, 20)\qbezier(50, 0)(46, 10)(50, 20)\qbezier(50, 0)(54, 10)(50, 20)\qbezier(70, 20)(60, 17)(50, 20)\qbezier(70, 20)(60, 23)(50, 20)\qbezier(70, 20)(66, 10)(70, 0)\qbezier(70, 20)(70, 10)(70, 0)\qbezier(70, 20)(74, 10)(70, 0)\qbezier(70, 20)(70, 20)(60, 32)\qbezier(50, 20)(50, 20)(60, 32) \put(100,0){\circle*{2}}\put(130,0){\circle*{2}}\put(115,20){\circle*{2}} \put(133,20){\circle*{2}}\put(97,20){\circle*{2}} \qbezier(100, 0)(115, 4)(130, 0)\qbezier(100, 0)(115, -4)(130, 0)\qbezier(100, 0)(105, 10)(115, 20)\qbezier(100, 0)(110, 10)(115, 20)\qbezier(130, 0)(120, 10)(115, 20)\qbezier(130, 0)(125, 10)(115, 20) \qbezier(100, 0)(98.5, 10)(97, 20)\qbezier(115, 20)(106, 20)(97, 20)\qbezier(133, 20)(124, 20)(115, 20)\qbezier(130, 0)(131.5, 10)(133, 20) \put(11,-10){\footnotesize{$T_{1,1,3}$}}\put(57,-10){\footnotesize{$3C_4\dit$}}\put(111,-10){\footnotesize{$T_{2,3,3}\diit$}} \end{picture} \end{center} \vspace{0.4cm} \caption{The graphs 
$T_{1,1,3}, 3C_4\dit, T_{2,3,3}\diit$.} \label{FIG: W123} \end{figure} \begin{claim}\label{CL: noW2} $G$ has no copy of $3C_4\dit$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $3C_4\dit$, with vertices $v_1, v_2, v_3, v_4, z$, where $z$ is a 2-vertex with $N(z)=\{v_1,v_2\}$. We lift $v_1z, zv_2$ to become a new edge $v_1v_2$ and then contract the corresponding $3C_4$ to obtain the graph $G'$. For the trivial partition ${\mathcal{Q}}^*$ of $G'$, we have $w_{G'}({\mathcal{Q}}^*)\ge w(G)-2(13)+3(11)\ge 7$. For every nontrivial partition ${\mathcal{Q}'}$ of $G'$, we have $w_{G'}({\mathcal{Q}'})\ge w_{G}({\mathcal{Q}})- 4\ge 4$ for the same reason as in the previous claim. Thus $w(G')\ge 4$, so $G'$ is strongly $\Z_5$-connected by Lemma~\ref{LEM: smallH}(c). This is a lifting reduction of the first type. Hence $G$ is strongly $\Z_5$-connected, which contradicts Claim~\ref{CL: nostrongZ5}. \end{proof} Now we can slightly strengthen Claim~\ref{CL: nontrivialge4}(b). \begin{claim} \label{CL: noalmostge9} Every normal partition ${\P}=\{P_1, P_2,\dots, P_t\}$ satisfies $$w({\P})\ge 9.$$ \end{claim} \begin{proof} Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a normal partition of $G$ with $|P_1|>1$. Suppose to the contrary that $w({\P})\le 8$; by Claim~\ref{CL: nontrivialge4}(b), this means $w({\P})=8$. Now $|P_1|\ge 3$ and $|P_2|=\ldots=|P_t|=1$, by Claim~\ref{CL: 2nontrivialge6}(a). As in Claim~\ref{CL: nontrivialge4}, let $H=G[P_1]$, let ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $H$, and let $\P'={\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ be a refinement of $\P$. Eq.~(\ref{EQ: wH}) implies \begin{eqnarray*} w_H({\mathcal{Q}})= w_G(\P') - w_G({\P})+8=w_G(\P'). \end{eqnarray*} If ${\mathcal{Q}}$ is a nontrivial partition of $H$, then $\P'$ is nontrivial in $G$, so $w_H({\mathcal{Q}})= w_G(\P')\ge 5$, by Claim~\ref{CL: nontrivialge4}(a). If $\mathcal{Q}$ is the trivial partition of $H$, then $w_H(\mathcal{Q})= w_G(\P')\ge 0$. Since $|H|=|P_1|\ge 3$, we know $H\notin\{2K_2,3K_2\}$.
And since $G$ has no copy of $T_{1,1,3}$, by Claim \ref{CL: noW1}, we know $H\notin\{T_{1,3,3}, T_{2,2,3}\}$. Now Lemma \ref{LEM: smallH}(a) implies that $H$ is strongly $\Z_5$-connected, which contradicts Claim~\ref{CL: nostrongZ5}. \end{proof} Claim~\ref{CL: noalmostge9} allows us to also prove that the third graph in Figure~\ref{FIG: W123} is reducible. \begin{claim}\label{CL: noW3} $G$ has no copy of $T_{2,3,3}\diit$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{2,3,3}\diit$ with vertices $w,x,y,z_1,z_2$, where $z_1$ and $z_2$ are 2-vertices with $N(z_1)=\{w,x\}$ and $N(z_2)=\{x,y\}$. We lift $wz_1, z_1x$ to become a new edge $wx$, and lift $xz_2, z_2y$ to become a new edge $xy$. Now $\{w,x,y\}$ induces a copy of $T_{2,3,3}$, so we contract $\{w,x,y\}$ to form a graph $G'$. Since $\delta(G)\ge 6$ by Claim~\ref{CL: ess7}(a), we have $\delta(G')\ge 4$. The size of each edge cut decreases by at most $4$ from $G$ to $G'$, and it decreases by at least $3$ only if that edge cut has at least two vertices on each side. In that case Claim~\ref{CL: ess7}(b) shows the original edge cut in $G$ has size at least $8$. Since $G$ is $6$-edge-connected by Claim~\ref{CL: ess7}, each edge cut in $G'$ has size at least $4$, so $G'$ is $4$-edge-connected. The trivial partition ${\mathcal{Q}}^*$ of $G'$ satisfies $w_{G'}({\mathcal{Q}}^*)\ge w(G)-2(10)+11(2)\ge 2$. Every nontrivial partition $\mathcal{Q}'$ of $G'$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{w, x, y\}$. So $w_{G'}({\mathcal{Q}'})\ge w_{G}({\mathcal{Q}})-2(4)\ge 1$, by Claim \ref{CL: noalmostge9}. Thus, $G'$ is $4$-edge-connected and $w(G')\ge 1$. By Lemma \ref{LEM: smallH}(b), $G'$ is strongly $\Z_5$-connected. This is a lifting reduction of the first type. Since $T_{2,3,3}$ is strongly $\Z_5$-connected by Lemma~\ref{LEM: 4K2J2K4inSZ5}, the graph $G$ is strongly $\Z_5$-connected, which contradicts Claim~\ref{CL: nostrongZ5}.
\end{proof} \subsection{The final step: Discharging} Now we use discharging to show that some subgraph in Figure~\ref{FIG: 4K2J2K4C4} or \ref{FIG: W123} must appear in $G$. This contradicts one of the claims in the previous section, and thus finishes the proof. Fix a plane embedding of $G$. (We assume that all parallel edges between two vertices $v$ and $w$ are embedded consecutively, in the cyclic orders, around both $v$ and $w$.) Let $F(G)$ denote the set of all faces of $G$. For each face $f\in F(G)$, we write $\ell(f)$ for its length. A face $f$ is a \emph{$k$-face}, \emph{$k^+$-face}, or \emph{$k^-$-face}\aaside{$k/k^+/k^-$-face}{-4mm} if (respectively) $\ell(f)=k$, $\ell(f)\ge k$, or $\ell(f)\le k$. A sequence of faces $f_1f_2\ldots f_s$ is called a \Emph{face chain} if, for each $i\in\{1,\ldots,s-1\}$, faces $f_i$ and $f_{i+1}$ are adjacent, i.e., their boundaries share a common edge. The \emph{length} of this chain is $s+1$. Two faces $f$ and $f'$ are \Emph{weakly adjacent} if there is a face chain $ff_1\ldots f_s f'$ such that $f_i$ is a $2$-face for each $i\in\{1,\ldots,s\}$. We allow $s$ to be $0$, meaning $f$ and $f'$ are adjacent. A \EmphE{string}{4mm} is a maximal face chain such that each of its faces is a $2$-face. The boundary of a string consists of two edges, each of which is incident to a $3^+$-face. A $k$-face is called a $(t_1, t_2, \ldots, t_k)$-face if its boundary edges are contained in strings with lengths $t_1, t_2, \ldots, t_k$. Here $t_i$ is allowed to be $1$, meaning the corresponding edge is not contained in a string. Since $w(G)\ge 0$, we have $2\|G\|-11|G|+19\ge 0.$ By Euler's Formula, $|G|+|F(G)|-\|G\|=2$. We solve for $|G|$ in the equation and substitute into the inequality, which gives \begin{align} \sum_{f\in F(G)}\ell(f)=2\|G\|\le \frac{22}{9}|F(G)|-\frac{2}{3}. \label{EQ: totalcharge} \end{align} We assign to each face $f$ initial charge $\ell(f)$. So the total charge is strictly less than $22|F(G)|/9$.
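For the reader's convenience, we record the routine algebra behind Eq.~\eqref{EQ: totalcharge}. Substituting $|G|=\|G\|+2-|F(G)|$ into $2\|G\|-11|G|+19\ge 0$ gives
\begin{align*}
0\le 2\|G\|-11\big(\|G\|+2-|F(G)|\big)+19=11|F(G)|-9\|G\|-3,
\end{align*}
that is, $9\|G\|\le 11|F(G)|-3$; multiplying by $\frac{2}{9}$ yields $2\|G\|\le \frac{22}{9}|F(G)|-\frac{2}{3}$.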
To redistribute charge, we use the following three discharging rules. \begin{itemize} \item[(R1)] Each $2$-face receives charge $\frac{2}{9}$ from each weakly adjacent $3^+$-face. \item[(R2)] Each $(2,2,2)$-face receives charge $\frac{1}{9}$ from each weakly adjacent $4^+$-face and $(2,1,1)$-face. \item[(R3)] Each $(2,2,2)$-face receives charge $\frac{1}{18}$ from each weakly adjacent $(2,2,1)$-face. \end{itemize} If two faces are weakly adjacent through multiple edges or strings, then the discharging rules apply for each edge and string. After applying these rules, we claim that every face has charge at least $\frac{22}{9}$, which contradicts Eq.~\eqref{EQ: totalcharge}. Each $2$-face ends with $2+2(\frac29)=\frac{22}9$. Since $G$ contains no $4K_2$ and no $T_{1,1,3}$, the charge each face sends across each boundary edge is at most $2(\frac29)$. Thus, when $k\ge 5$ each $k$-face ends with at least $k-k(2(\frac{2}{9}))=\frac{5k}9\ge \frac{25}{9}$. Since $G$ contains no $3C_4$ and no $3C_4\dit$, each $4$-face ends with at least $4-7(\frac{2}{9})=\frac{22}{9}$. It is straightforward to check that each $(1,1,1)$-face ends with $3$, each $(2,1,1)$-face ends with at least $3-\frac{2}{9}-\frac{1}{9}=\frac{24}{9}$, and each $(2,2,1)$-face ends with at least $3-2(\frac{2}{9})-2(\frac1{18})=\frac{22}{9}$. It remains to check $(2,2,2)$-faces. Suppose to the contrary that a $(2,2,2)$-face $xyz$ ends with less than $\frac{22}{9}$. After (R1), face $xyz$ has $3-3(\frac29)=\frac{21}9$. Since $xyz$ ends with less than $\frac{22}9$, it receives at most $\frac1{18}$ by (R2) and (R3). So $xyz$ must be weakly adjacent to three $3$-faces, and at most one of these is a $(2,2,1)$-face, while the others are $(2,2,2)$-faces. By Claim~\ref{CL: noW3}, $G$ contains no $T_{2,3,3}\diit$, so the three $3$-faces weakly adjacent to $xyz$ must share a common vertex $w\notin\{x,y,z\}$.
If one of $wx, wy, wz$ is not contained in a string, then $xyz$ is weakly adjacent to two $(2,2,1)$-faces, and so receives at least $2(\frac1{18})$ by (R3), contradicting our assumption above. Thus we may assume $\mu(wx)=\mu(wy)=\mu(wz)=2$. So $G[\{x,y,z,w\}]$ contains a $2K_4$, contradicting Claim~\ref{CL: nostrongZ5}(a). This shows that each $(2,2,2)$-face ends with at least $\frac{22}{9}$, which completes the proof. \section{Circular $7/3$-flows: Proof of Theorem~\ref{7/3-flow-thm}} \label{Z7-sec} In this section we prove Theorem~\ref{7/3-flow-thm}. As in the previous section, this theorem is implied by the more technical result, Theorem~\ref{THM: Main2}. The proof of Theorem~\ref{THM: Main2} is similar to that of Theorem~\ref{THM: Main1}, but with more reducible configurations and more details. \subsection{Preliminaries on Modulo $7$-orientations} We define a weight function $\rho$ as follows (which is similar to $w$ in Definition~\ref{DEF: partition}). \begin{definition}\label{DEF: rho-partition} Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a partition of $V(G)$. Let $$\rho_G({\P})=\sum_{i=1}^{t}d(P_i)-17t+31$$ and $\rho(G)=\min\{\rho_G({\P}): {\P} \mbox{ is a partition of } V(G)\}.$ \end{definition} Analogous to Lemma~\ref{mod5-key-lem}, we have the following. \begin{lemma} Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a partition of $V(G)$ with $|P_1|>1$. Let $H=G[P_1]$ and let ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $V(H)$. Now ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ is a refinement of $\P$ satisfying \begin{align}\label{EQ: wH2} \rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))= \rho_H({\mathcal{Q}})+ \rho_G({\P})-(31-17). \end{align} \label{mod7-key-lem} \end{lemma} \begin{proof} The proof is identical to that of Lemma~\ref{mod5-key-lem}, with 17 in place of 11 and with 31 in place of 19.
\end{proof} We typically assume that each edge has multiplicity at most 5 (since $6K_2$ is strongly $\Z_7$-connected, and so cannot appear in a minimal counterexample to Theorem~\ref{THM: Main2}, as we prove in Claim~\ref{CL: nostrongZ7}, below). Now $\rho(aK_2)=2a-3$, $\rho(T_{a,b,c})=2a+2b+2c-20$, and $\rho(3K_4)=-1$; see Figure~\ref{FIG: K234}. In each case, the minimum in the definition of $\rho$ is achieved uniquely by the partition with each vertex in its own part. \begin{figure}[ht] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(170,35) \put(0,10){\circle*{2}}\put(20,10){\circle*{2}} \qbezier(0, 10)(10, 22)(20, 10) \qbezier(0, 10)(10, 17)(20, 10)\qbezier(0, 10)(10, 2)(20, 10) \put(9,7.5){$\vdots$}\put(11,8.5){\footnotesize{$a$}}\put(5,-7){\footnotesize{$aK_2$}} \put(40,5){\circle*{2}}\put(60,5){\circle*{2}}\put(50,25){\circle*{2}} \qbezier(40, 5)(50, 10)(60, 5)\qbezier(40, 5)(50, 0)(60, 5) \qbezier(40, 5)(45, 11)(50, 25)\qbezier(40, 5)(44, 17)(50, 25) \qbezier(60, 5)(55, 10)(50, 25)\qbezier(60, 5)(54, 21)(50, 25) \put(45.5,-7){\footnotesize{$T_{a,b,c}$}} \put(49,3.5){\footnotesize{$b$}}\put(41,16){\footnotesize{$a$}}\put(56.5,16){\footnotesize{$c$}} \put(85,5){\circle*{2}}\put(115,5){\circle*{2}}\put(100,30){\circle*{2}}\put(100,14){\circle*{2}} \qbezier(85, 5)(100, 7)(115, 5)\qbezier(85, 5)(100, 3)(115, 5)\qbezier(85, 5)(85, 5)(115, 5) \qbezier(85, 5)(90, 17)(100, 30)\qbezier(85, 5)(94, 17)(100, 30)\qbezier(85, 5)(85, 5)(100, 30) \qbezier(115, 5)(110, 17)(100, 30)\qbezier(115, 5)(106, 17)(100, 30)\qbezier(115, 5)(115, 5)(100, 30) \qbezier(85, 5)(90, 10)(100, 14)\qbezier(85, 5)(94, 9)(100, 14)\qbezier(85, 5)(85, 5)(100, 14) \qbezier(115, 5)(110, 10)(100, 14)\qbezier(115, 5)(106, 9)(100, 14)\qbezier(115, 5)(115, 5)(100, 14) \qbezier(100, 30)(98, 22)(100, 14)\qbezier(100,30)(102, 22)(100, 14)\qbezier(100,30)(100, 30)(100, 14) \put(97,-7){\footnotesize{$3K_4$}}
\put(135,5){\circle*{2}}\put(165,5){\circle*{2}}\put(150,30){\circle*{2}}\put(150,14){\circle*{2}} \qbezier(135, 5)(150, 10)(165, 5)\qbezier(135, 5)(150, 7)(165, 5)\qbezier(135, 5)(150, 3)(165, 5) \qbezier(135, 5)(150, 0)(165, 5) \qbezier(135, 5)(140, 17)(150, 30)\qbezier(135, 5)(144, 17)(150, 30)\qbezier(135, 5)(135, 5)(150, 30) \qbezier(165, 5)(160, 17)(150, 30)\qbezier(165, 5)(156, 17)(150, 30)\qbezier(165, 5)(165, 5)(150, 30) \qbezier(135, 5)(140, 10)(150, 14)\qbezier(135, 5)(144, 9)(150, 14)\qbezier(135, 5)(135, 5)(150, 14) \qbezier(165, 5)(160, 10)(150, 14)\qbezier(165, 5)(156, 9)(150, 14)\qbezier(165, 5)(165, 5)(150, 14) \qbezier(150, 30)(148, 22)(150, 14)\qbezier(150,30)(152, 22)(150, 14)\qbezier(150,30)(150, 30)(150, 14) \put(147,-7){\footnotesize{$3K_4^+$}} \end{picture} \end{center} \vspace{0.4cm} \caption{\small\it The graphs $aK_2, T_{a,b,c}, 3K_4, 3K_4^+$.} \label{FIG: K234} \end{figure} Let ${\mathcal{F}}=\{aK_2: 2\le a\le 5\}\cup\{T_{a,b,c}: 10\le a+b+c\le 11 ~\mbox{and $T_{a,b,c}$ is 6-edge-connected}\}$. It is straightforward\footnote{When $a\le 5$, the graph $aK_2$ has seven $\Z_7$-boundaries, but its orientations achieve at most $a+1\le 6$ distinct boundaries, so at least one boundary is not achievable. The graph $3K_4$ cannot achieve the boundary $\beta(v)=0$ for all $v$. In such an orientation $D$ each vertex $v$ must have $|d^+_D(v)-d^-_D(v)|=7$. But now some two adjacent vertices must either both have indegree 8 or both have outdegree 8, and we cannot orient the three edges between them to achieve this. For $T_{a,b,c}$, it suffices to consider the case $a+b+c=11$. Let $V(G)=\{v_1,v_2,v_3\}$. By symmetry, we assume $d(v_1)\le d(v_2)\le d(v_3)$. For $T_{1,5,5}$, we cannot achieve $\beta(v_1)=\beta(v_2)=1$ and $\beta(v_3)=5$, since $v_1$ and $v_2$ must each have all incident edges oriented in. For $T_{2,4,5}$, we cannot achieve $\beta(v_1)=1$, $\beta(v_2)=2$, and $\beta(v_3)=4$, since $v_1$ must have all incident edges oriented in, and $v_2$ must have all but one edge oriented in.
For $T_{3,3,5}$, we cannot achieve $\beta(v_1)=1$ and $\beta(v_2)=\beta(v_3)=3$, since $v_1$ must have all incident edges oriented in. For $T_{3,4,4}$, we cannot achieve $\beta(v_1)=\beta(v_2)=2$ and $\beta(v_3)=3$, since $v_1$ and $v_2$ must each have all but one incident edge oriented in. } to check that no graph in $\mathcal{F}$ is strongly $\Z_7$-connected. Further, if $T_{a,b,c}$ is 8-edge-connected, then $\|G\|\ge 3\delta(G)/2\ge 12$. Thus, no graph in $\mathcal{F}$ is 8-edge-connected. The following theorem is the main result of Section~\ref{Z7-sec}. We call a partition $\P$ \EmphE{problematic}{-4mm} if $G/\P\in \mathcal{F}$. \begin{theorem} \label{THM: Main2} Let $G$ be a planar graph and $\beta$ be a $\Z_7$-boundary of $G$. If $\rho(G)\ge 0$, then $G$ admits a $(\Z_7,\beta)$-orientation, unless $G$ has a problematic partition. \end{theorem} As easy corollaries of Theorem~\ref{THM: Main2} we get the following two results. \begin{theorem} Every $17$-edge-connected planar graph is strongly $\Z_7$-connected. \label{17-edge-thm} \end{theorem} \begin{theorem} Every odd-$17$-edge-connected planar graph admits a modulo $7$-orientation. In particular, every 16-edge-connected planar graph admits a modulo $7$-orientation (and thus a circular 7/3-flow). \label{16-edge-thm} \end{theorem} The proofs of Theorems~\ref{17-edge-thm} and~\ref{16-edge-thm} are identical to those of Theorems~\ref{11-edge-thm} and~\ref{10-edge-thm}, but with 17 in place of 11 and with 31 in place of 19. Note that Theorem~\ref{16-edge-thm} includes Theorem~\ref{7/3-flow-thm} as a special case. For the proof of Theorem~\ref{THM: Main2}, we need the following two lemmas. Their proofs are more tedious than enlightening, so we postpone them to the appendix. When a graph $H$ is edge-transitive, we write $H^+$ or $H^-$\aside{$H^+/H^-$} to denote the graph formed by adding or removing a single copy of one edge. 
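As a quick check of the values of $\rho$ quoted after Definition~\ref{DEF: rho-partition}, evaluating $\rho_G$ at the trivial partition, where $t=|G|$ and $\sum_{i=1}^{t}d(P_i)=2\|G\|$, gives
\begin{align*}
\rho(aK_2)&=2a-17(2)+31=2a-3,\\
\rho(T_{a,b,c})&=2(a+b+c)-17(3)+31=2a+2b+2c-20,\\
\rho(3K_4)&=36-17(4)+31=-1.
\end{align*}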
\begin{lemma} \label{Z7-contract-configs} Each of the following graphs is strongly $\Z_7$-connected: $6K_2$, $3K_4^{+}$, and every 6-edge-connected graph $T_{a,b,c}$ where $a+b+c=12$. \end{lemma} Let \Emph{$5C_4^=$} denote the graph formed from $5C_4$ by deleting a perfect matching. \begin{lemma} The graph $5C_4^=$ is strongly $\Z_7$-connected. Further, if $G$ is a graph with $|G|=4$, $\|G\|=19$, $\mu(G)\le 5$, and $\delta(G)\ge 8$, then $G$ is strongly $\Z_7$-connected. \label{K4weightsZ7-lem} \end{lemma} \subsection{Properties of a Minimal Counterexample in Theorem \ref{THM: Main2}} {\bf Let $G$ be a counterexample to Theorem \ref{THM: Main2} that minimizes $|G|+\|G\|$.} Thus Theorem~\ref{THM: Main2} holds for all graphs smaller than $G$. This implies the following lemma, which we will use frequently. \begin{lemma} \label{LEM: smallH2} If $H$ is a planar graph with $\rho(H)\ge 0$ and $|H|+\|H\|<|G|+\|G\|$, then each of the following holds. \begin{itemize} \item[(a)] If $\rho_H({\P})\ge 8$ for every nontrivial partition ${\P}$, then $H$ is strongly $\Z_7$-connected unless $H\in {\mathcal{F}}$. \item[(b)] If $\rho(H)\ge 8$, then $H$ is strongly $\Z_7$-connected. \item[(c)] Assume that $H$ is $6$-edge-connected. \begin{itemize} \item[(c-i)] If $\rho_H({\P})\ge 3$ for every nontrivial partition ${\P}$, then $H$ is strongly $\Z_7$-connected unless $H\cong T_{a,b,c}$ with $a+b+c\in\{10,11\}$. \item[(c-ii)] If $\rho(H)\ge 3$, then $H$ is strongly $\Z_7$-connected. \item[(c-iii)] If $H$ is $8$-edge-connected, then $H$ is strongly $\Z_7$-connected. \end{itemize} \end{itemize} \end{lemma} \begin{proof} We apply Theorem \ref{THM: Main2} to $H$. (a) For each $J\in \mathcal{F}$, the trivial partition $\mathcal{Q}^*$ satisfies $\rho_J(\mathcal{Q}^*)\le \max\{2(5)-2(17)+31,2(11)-3(17)+31\}=7$. Since $\rho_H(\P)\ge 8$ for every nontrivial partition $\P$, we know that $H/\P\notin \mathcal{F}$. Part (b) follows immediately from (a). Consider (c).
Since $H$ is 6-edge-connected, there does not exist $\P$ such that $|H/\P|=2$ and $\|H/\P\|\le 5$. For (c-i), suppose there is a nontrivial partition $\P$ such that $H/\P\cong T_{a,b,c}$ with $a+b+c\in\{10,11\}$. Now $\rho_H({\P})=2(a+b+c)-3(17)+31\le 2$, which contradicts the hypothesis. Note that (c-ii) follows directly from (c-i). Finally, we prove (c-iii). Since $H$ is $8$-edge-connected, so is $H/\P$, for each partition $\P$. Recall that each element of $\mathcal{F}$ has edge-connectivity at most 7. Thus, $H/\P\notin \mathcal{F}$. \end{proof} As in Section~\ref{Z5-sec}, the main idea of the proof is to show that $\rho_G(\P)$ is relatively large for each nontrivial partition $\P$. This gives us the ability to apply Lemma~\ref{LEM: smallH2} to subgraphs of $G$ even after modifying them slightly, which yields more power when proving subgraphs are reducible. \begin{claim} \label{CL: nostrongZ7} $G$ has no strongly $\Z_7$-connected subgraph $H$ with $|H|>1$. In particular, \begin{itemize} \item[(a)] $G$ has no copy of $6K_2$, $3K_4^+$, or a 6-edge-connected graph $T_{a,b,c}$ with $a+b+c=12$; and \item[(b)] $|G|\ge 4$. \end{itemize} \end{claim} \begin{proof} The proof of the first statement is identical to that of Claim~\ref{CL: nostrongZ5}, with $\Z_7$ in place of $\Z_5$. Note that (a) follows from the first statement and Lemma~\ref{Z7-contract-configs}. Now we prove (b). Clearly $|G|\ge 2$, so first suppose $|G|=2$. Since $\rho(G)\ge 0$, we know $\|G\|\ge 2$. Since $G$ has no problematic partition, we know $\|G\|\ge 6$. But now $G$ contains $6K_2$, which contradicts (a). So assume $|G|=3$, that is $G=T_{a,b,c}$. Since $\rho(G)\ge 0$, we know $a+b+c\ge 10$. Since $G$ has no problematic partition, $G$ is 6-edge-connected. By the definition of $\mathcal{F}$, this implies that $a+b+c\ge 12$. Recall that $G$ contains no $6K_2$ by (a); thus $\max\{a,b,c\}\le 5$. A short case analysis shows that $G$ contains as a subgraph one of $T_{2,5,5}$, $T_{3,4,5}$, or $T_{4,4,4}$.
Each of these has 12 edges and is 6-edge-connected, which contradicts (a). \end{proof} \begin{claim} \label{CL: nontrivialge716} If ${\P}=\{P_1, P_2,\dots, P_t\}$ is a nontrivial partition of $V(G)$, then \begin{enumerate} \item[(a)] $\rho_G({\P})\ge 7$; and \item[(b)] $\rho_G({\P})\ge 12$ if ${\P}$ is normal. \end{enumerate} \end{claim} \begin{proof} We argue by contradiction. For an almost trivial partition ${\P}$, we have $\rho_G({\P})\ge \rho_G(V(G))-2(5)+17\ge 7$, since $G$ does not contain $6K_2$ by Claim~\ref{CL: nostrongZ7}(a). If $\P=\{V(G)\}$, then $\rho_G(\P)=0-17+31=14$. Since $|G|\ge 4$ by Claim~\ref{CL: nostrongZ7}(b), we now only need to consider the weight of normal partitions. Let ${\P}=\{P_1, P_2,\dots, P_t\}$ be a normal partition of $V(G)$. We may assume $|P_1|> 1$ and let $H=G[P_1]$. For any partition ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ of $V(H)$, by Eq.~(\ref{EQ: wH2}) the refinement ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ of $\P$ satisfies \begin{eqnarray}\label{EQ: rPrQ} \rho_H({\mathcal{Q}})= \rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\})) - \rho_G({\P})+14. \end{eqnarray} (a) We first show that $\rho_G({\P})\ge 7$. If $\rho_G({\P})\le 6$, then Eq.~(\ref{EQ: rPrQ}) implies that $\rho_H({\mathcal{Q}})\ge 8$ for any partition ${\mathcal{Q}}$ of $H$, since $\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 0$. Hence $\rho(H)\ge 8$ and $H$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(b), which contradicts Claim~\ref{CL: nostrongZ7}. This proves (a). (b) We now show that $\rho_G({\P})\ge 12$. Suppose, to the contrary, that $\rho_G({\P})\le 11$. If ${\P}$ contains at least two nontrivial parts, say $|P_2|>1$, then (a) implies $\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 7$ for any partition ${\mathcal{Q}}$ of $H$. Hence $\rho(H)\ge 10$ by Eq.~(\ref{EQ: rPrQ}), and so $H$ is strongly $\Z_7$-connected by Lemma \ref{LEM: smallH2}(b), which contradicts Claim~\ref{CL: nostrongZ7}.
Assume instead that ${\P}$ contains a unique nontrivial part $P_1$ and $|P_1|\ge 3$. For any nontrivial partition ${\mathcal{Q}}$ of $H$, the refinement ${\mathcal{Q}}\cup ({\P}\setminus\{P_1\})$ of $\P$ is a nontrivial partition of $G$, and so $\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 7$ by (a). Thus $\rho_H({\mathcal{Q}})\ge 10$ for any nontrivial partition ${\mathcal{Q}}$ of $H$ by Eq.~(\ref{EQ: rPrQ}). For the trivial partition ${\mathcal{Q}}^*$ of $H$, since $\rho_G({\P})\le 11$, Eq.~(\ref{EQ: rPrQ}) implies $\rho_H({\mathcal{Q}}^*)\ge 3$. Since $|H|=|P_1|\ge 3$, we know $H\not\cong aK_2$. Since $\rho(H)\ge 3$, we know $H\not\cong T_{a,b,c}$ with $a+b+c\le 11$. So Lemma \ref{LEM: smallH2}(a) implies that $H$ is strongly $\Z_7$-connected, which contradicts Claim \ref{CL: nostrongZ7}. \end{proof} The next two claims follow from Claim \ref{CL: nontrivialge716}. They give lower bounds on the edge-connectivity of $G$. \begin{claim} \label{CL: 2nontrivialge14z7} For a partition ${\P}=\{P_1, P_2,\dots, P_t\}$, \begin{enumerate} \item[(a)] if $|P_1|\ge 2$ and $|P_2|\ge 2$, then $\rho({\P})\ge 14$; and \item[(b)] if $|P_1|\ge 2$ and $|P_2|\ge 3$, then $\rho({\P})\ge 19$. \end{enumerate} \end{claim} \begin{proof} Let $H=G[P_1]$ and ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $H$. By Eq.~(\ref{EQ: wH2}), \begin{eqnarray}\nonumber \rho_H({\mathcal{Q}})= \rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\})) - \rho_G({\P})+14. \end{eqnarray} (a) If $\rho_G({\P})\le 13$, then $\rho_H({\mathcal{Q}})\ge 8$ for any partition ${\mathcal{Q}}$ of $H$ since $\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 7$ by Claim \ref{CL: nontrivialge716}(a). So $H$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(b), which contradicts Claim~\ref{CL: nostrongZ7}.
(b) Similarly, if $\rho_G({\P})\le 18$, then $\rho_H({\mathcal{Q}})\ge 8$ for any partition ${\mathcal{Q}}$ of $H$ since $\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))\ge 12$ by Claim \ref{CL: nontrivialge716}(b). Again $H$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(b), which contradicts Claim~\ref{CL: nostrongZ7}. \end{proof} \begin{claim}\label{CL: ess11} Let $[X, X^c]$ be an edge cut of $G$. \begin{enumerate} \item[(a)] Now $|[X,X^c]|\ge 8$. That is, $G$ is $8$-edge-connected. \item[(b)] If $|X|\ge 2$ and $|X^c|\ge 3$, then $|[X,X^c]|\ge 11$. \end{enumerate} \begin{proof} (a) Let $\P=\{X, X^c\}$. Since $|G|\ge 4$ by Claim \ref{CL: nostrongZ7}(b), the partition $\P$ is normal. Now Claim~\ref{CL: nontrivialge716}(b) gives $12\le \rho_G(\P)=2|[X,X^c]|-34+31$, which implies $|[X,X^c]|\ge 8$. (b) If $|X|\ge 2$ and $|X^c|\ge 3$, then $\rho_G(\P)\ge 19$ by Claim~\ref{CL: 2nontrivialge14z7}(b). So $19\le \rho_G(\P)=2|[X,X^c]|-34+31$, which implies $|[X,X^c]|\ge 11$.
\end{proof} \begin{figure}[ht] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(190,40) \put(5,0){\circle*{2}}\put(35,0){\circle*{2}}\put(20,30){\circle*{2}} \qbezier(5, 0)(20, 4)(35, 0)\qbezier(5, 0)(20, -4)(35, 0)\qbezier(5, 0)(20, 0)(35, 0) \qbezier(5, 0)(20, 8)(35, 0)\qbezier(5, 0)(20, -8)(35, 0) \qbezier(5, 0)(5, 0)(20, 30)\qbezier(35, 0)(35, 0)(20, 30) \put(52,15){\circle*{2}}\put(82,15){\circle*{2}}\put(67,30){\circle*{2}}\put(67,0){\circle*{2}} \qbezier(67, 0)(67, 0)(52, 15)\qbezier(67, 0)(67, 0)(82, 15)\qbezier(67, 30)(67, 30)(52, 15)\qbezier(67, 30)(67, 30)(82, 15) \qbezier(52, 15)(67, 21)(82, 15)\qbezier(52, 15)(67, 17)(82, 15)\qbezier(52, 15)(67, 13)(82, 15)\qbezier(52, 15)(67, 9)(82, 15) \put(100,0){\circle*{2}}\put(130,0){\circle*{2}}\put(102,30){\circle*{2}}\put(128,30){\circle*{2}} \qbezier(100, 0)(115, 4)(130, 0)\qbezier(100, 0)(115, -4)(130, 0)\qbezier(100, 0)(115, 0)(130, 0) \qbezier(100, 0)(115, 8)(130, 0)\qbezier(100, 0)(115, -8)(130, 0) \qbezier(100, 0)(100, 0)(102, 30)\qbezier(130, 0)(130, 0)(128, 30)\qbezier(102, 30)(102, 30)(128, 30) \put(150,0){\circle*{2}}\put(180,0){\circle*{2}}\put(165,30){\circle*{2}} \qbezier(150, 0)(165, 2)(180, 0)\qbezier(150, 0)(165, -2)(180, 0)\qbezier(150, 0)(165, 6)(180, 0) \qbezier(150, 0)(165, -6)(180, 0) \qbezier(150, 0)(158, 19)(165, 30)\qbezier(180, 0)(172, 19)(165, 30) \qbezier(150, 0)(158, 10)(165, 30)\qbezier(180, 0)(172, 10)(165, 30) \put(16,-10){\footnotesize{$T_{1,1,5}$}}\put(111,-10){\footnotesize{$T^{\,\bullet}_{1,1,5}$}}\put(63,-10){\footnotesize{$T_{1,1,5}\dit$}}\put(161,-10){\footnotesize{$T_{2,2,4}$}} \end{picture} \end{center} \vspace{0.6cm} \caption{\small\it The graphs $T_{1,1,5}, T^{\,\bullet}_{1,1,5},T_{1,1,5}\dit, T_{2,2,4}$. } \label{FIG: Y123} \end{figure} Let \Emph{$T^{\,\bullet}_{1,1,5}$} denote the graph formed from $T_{1,1,5}$ by subdividing an edge of multiplicity 1. 
We now show that $G$ contains none of the following (shown in Figure~\ref{FIG: Y123}) as subgraphs: $T_{1,1,5}$, $T_{1,1,5}\dit$, $T^{\,\bullet}_{1,1,5}$, and $T_{2,2,4}$. \begin{claim}\label{CL: noY1} $G$ has no copy of $T_{1,1,5}$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{1,1,5}$ with vertices $x,y,z$ and $\mu(xy)=5$. We lift $xz, zy$ to become a new edge $xy$ and contract the resulting $6K_2$ induced by $\{x,y\}$. Let $G'$ denote the resulting graph. The trivial partition ${\mathcal{Q}}^*$ of $G'$ satisfies $\rho_{G'}({\mathcal{Q}}^*)\ge \rho(G)-2(7)+17\ge 3$. Every nontrivial partition ${\mathcal{Q}'}$ of $G'$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{x, y\}$. Since $xz, zy$ are the only two edges possibly counted in $\rho_{G}({\mathcal{Q}})$ but not in $\rho_{G'}({\mathcal{Q}'})$, we have $\rho_{G'}({\mathcal{Q}'})\ge \rho_{G}({\mathcal{Q}})- 2(2)\ge 8$, by Claim~\ref{CL: nontrivialge716}(b). So $\rho(G')\ge 3$. Since $G$ is $8$-edge-connected by Claim~\ref{CL: ess11}, graph $G'$ is $6$-edge-connected, and so $G'$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(c-ii). This is a lifting reduction of the first type. It shows that $G$ is strongly $\Z_7$-connected, which contradicts Claim~\ref{CL: nostrongZ7}. \end{proof} \begin{claim} \label{CL: Vge5} $|G|\ge 5$. \end{claim} \begin{proof} Suppose the claim is false. Claim~\ref{CL: nostrongZ7}(b) implies $|G|=4$. Since $\rho(G)\ge 0$, the trivial partition shows that $\|G\|\ge 19$. First suppose $\|G\|>19$, and let $G'=G-e$, for an arbitrary edge $e$. Since $\|G'\|<\|G\|$, we will apply Lemma~\ref{LEM: smallH2}(c-i) to prove $G'$ is strongly $\Z_7$-connected. Since $|G'|=4$, we know $G'\notin \mathcal{F}$. So it suffices to show that $G'$ is 6-edge-connected and $\rho_{G'}(\P)\ge 3$ for every nontrivial partition $\P$. The first condition holds because $G$ is 8-edge-connected, by Claim~\ref{CL: ess11}(a).
The second holds because $\rho_{G'}(\P)\ge \rho_G(\P)-2\ge 5$, by Claim~\ref{CL: nontrivialge716}(a). So $G'$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(c-i), which contradicts Claim~\ref{CL: nostrongZ7}. Instead assume $\|G\|=19$. Claim~\ref{CL: ess11}(a) implies $\delta(G)\ge 8$. Since $G$ contains no $6K_2$ by Claim~\ref{CL: nostrongZ7}(a), we know $\mu(G)\le 5$. Now Lemma~\ref{K4weightsZ7-lem} shows that $G$ is strongly $\Z_7$-connected. Thus, $G$ is not a counterexample, which proves the claim. \end{proof} \begin{claim}\label{CL: noY2} $G$ has no copy of $T_{1,1,5}\dit$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{1,1,5}\dit$ with vertices $w,x,y,z$ and $\mu(xy)=4$. We lift $xz, zy$ to become a new edge $xy$, and lift $xw,wy$ to become another new edge $xy$, and then contract the resulting $6K_2$ to form a new graph $G'$. The trivial partition ${\cal Q}^*$ of $G'$ satisfies $\rho_{G'}({\cal Q}^*)\ge \rho(G)-2(8)+17\ge 1$. Every nontrivial partition $\mathcal{Q}'$ of $G'$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{x, y\}$. Since $xz, zy, xw, wy$ are the only edges possibly counted in $\rho_{G}({\cal Q})$ but not in $\rho_{G'}({\mathcal{Q}'})$, Claim~\ref{CL: nontrivialge716}(b) implies $\rho_{G'}({\mathcal{Q}'})\ge \rho_{G}({\mathcal{Q}})- 2(4)\ge 4$. Since $w\neq z$, Claim~\ref{CL: ess11}(a,b) implies $G'$ is $6$-edge-connected. Because $|V(G')|=|V(G)|-1\ge 4$, we know $G'\not\cong T_{a,b,c}$ with $a+b+c\in\{10,11\}$. Hence $G'$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(c-i). This is a lifting reduction of the first type. So $G$ is strongly $\Z_7$-connected, which is a contradiction. \end{proof} \begin{claim} \label{CL: deltage10} $G$ has minimum degree at least $10$. So $G$ is $10$-edge-connected by Claim~\ref{CL: ess11}. \end{claim} \begin{proof} The second statement follows from the first. 
To prove the first, suppose there exists $x\in V(G)$ with $8\le d(x)\le 9$. Let $x_1, x_2$ be two neighbors of $x$. To form a graph $G'$ from $G$, we lift $x_1x, xx_2$ to become a new edge $x_1x_2$, orient the remaining edges incident with $x$ to achieve $\beta(x)$, and finally delete $x$. This is similar to achieving $\beta(v_1)$ in the proof of Lemma~\ref{LEM: 4K2J2K4inSZ5} (that $G$ has no copy of $6K_2$). This is a lifting reduction of the second type. So, to show $G$ has a $\beta$-orientation, it suffices to show that $G'$ is strongly $\Z_7$-connected. Observe that the trivial partition $\mathcal{Q}^*$ of $G'$ satisfies $\rho_{G'}(\mathcal{Q}^*)\ge \rho(G)-2(9-1)+17\ge 1$. Also, for an almost trivial partition $\mathcal{Q}'$ of $G'$ with $|Q_1|=2$, we have $\rho_{G'}({\mathcal{Q}}')\ge \rho_{G'}({\mathcal{Q}}^*)+17-2(5)\ge 8$. Note that when $Q_1=\{x_1,x_2\}$ we still have $\mu_{G'}(x_1x_2)\le 5$ by Claim~\ref{CL: noY1}. Moreover, for any normal partition ${\mathcal{Q}}'$ of $G'$, since $\mathcal{Q}=\mathcal{Q}'\cup\{x\}$ is a normal partition of $G$, we have $\rho_{G'}(\mathcal{Q}')\ge \rho_G(\mathcal{Q})-2(9)+17\ge 11$. Since $|G'|=|G|-1\ge 4$ and $\rho_{G'}(\mathcal{Q}')\ge 8$ for any nontrivial partition, Lemma~\ref{LEM: smallH2}(a) implies that $G'$ is strongly $\Z_7$-connected. \end{proof} \begin{claim}\label{CL: noT115dot} $G$ has no copy of $T^{\,\bullet}_{1,1,5}$. \end{claim} \begin{proof} Suppose $G$ has a copy of $T^{\,\bullet}_{1,1,5}$, with vertices $v_1,v_2,v_3,v_4$ (in order around a 4-cycle) and $\mu(v_1v_4)=5$. We lift the edges $v_1v_2, v_2v_3, v_3v_4$ to become a new copy of edge $v_1v_4$ and contract the resulting $6K_2$; call this new graph $G'$. The trivial partition ${\mathcal{Q}}^*$ of $G'$ satisfies $\rho_{G'}({\mathcal{Q}}^*)\ge \rho(G)-2(8)+17\ge 1$. Every nontrivial partition ${\mathcal{Q}'}$ of $G'$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{v_1, v_4\}$. 
Since $v_1v_2,v_2v_3,v_3v_4$ are the only edges possibly counted in $\rho_G(\mathcal{Q})$ but not in $\rho_{G'}(\mathcal{Q}')$, we have $\rho_{G'}(\mathcal{Q}') \ge \rho_G(\mathcal{Q})-2(3)\ge 6$ by Claim~\ref{CL: nontrivialge716}(b). Claim~\ref{CL: Vge5} implies $|G'|=|G|-1\ge 4$, so $G'\notin \mathcal{F}$. Since $G$ is $10$-edge-connected by Claim \ref{CL: deltage10}, the graph $G'$ is $6$-edge-connected. So $G'$ is strongly $\Z_7$-connected by Lemma~\ref{LEM: smallH2}(c-i). This is a lifting reduction of the first type, so $G$ is strongly $\Z_7$-connected, which is a contradiction. \end{proof} \begin{claim} \label{CL: noY3} $G$ has no copy of $T_{2,2,4}$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{2,2,4}$ with vertices $x,y,z$ and $\mu(xy)=4$. To form a new graph $G'$ from $G$, we delete two copies (each) of $xz, zy$ and add two new parallel edges $xy$, and then contract the resulting $6K_2$ induced by $\{x,y\}$. Claim~\ref{CL: deltage10} shows $G'$ is $6$-edge-connected. Similar to the proof of Claim~\ref{CL: noY2}, the trivial partition ${\cal Q}^*$ of $G'$ satisfies $\rho_{G'}({\mathcal{Q}}^*)\ge \rho(G)-2(8)+17\ge 1$, and every nontrivial partition $\mathcal{Q}'$ of $G'$ satisfies $\rho_{G'}(\mathcal{Q}')\ge \rho_G(\mathcal{Q})- 2(4)\ge 4$. Since $|G'|=|G|-1\ge 4$, Lemma~\ref{LEM: smallH2}(c-i) implies $G'$ is strongly $\Z_7$-connected. This is a lifting reduction of the first type, which implies that $G$ is strongly $\Z_7$-connected, and thus gives a contradiction. \end{proof} \begin{claim} \label{CL: noalmostge14} \label{CL: noalmostge15} For any normal partition ${\P}=\{P_1, P_2,\dots, P_t\}$ with $|P_1|\ge 3$, we have $$\rho_G({\P})\ge 14.$$ \end{claim} \begin{proof} Suppose the claim is false and let $\P$ be such a partition with $\rho_G({\P})\le 13$. Let $H=G[P_1]$. Since $G$ contains no copy of $T_{1,1,5}$ or $T_{2,2,4}$, we know $H\not\cong T_{a,b,c}$ with $a+b+c\in\{10,11\}$ (and $\min\{a,b,c\}\ge 1$). Thus, since $|H|=|P_1|\ge 3$, we know $H\notin \mathcal{F}$. Let ${\mathcal{Q}}=\{Q_1, Q_2,\dots, Q_s\}$ be a partition of $H$.
Now $\mathcal{Q}\cup (\P\setminus\{P_1\})$ is a partition of $G$, and Eq.~(\ref{EQ: wH2}) implies $\rho_H(\mathcal{Q})= \rho_G(\mathcal{Q}\cup (\P\setminus\{P_1\})) - \rho_G(\P)+14\ge\rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))+1$. If $\mathcal{Q}$ is a nontrivial partition of $H$, then $\mathcal{Q}\cup (\P\setminus\{P_1\})$ is a nontrivial partition of $G$, and so Claim~\ref{CL: nontrivialge716}(a) implies $\rho_H({\mathcal{Q}})\ge \rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))+1\ge 8$. If $\mathcal{Q}$ is the trivial partition of $H$, then $\rho_H({\mathcal{Q}})\ge \rho_G({\mathcal{Q}}\cup ({\P}\setminus\{P_1\}))+1\ge 1$. By Lemma~\ref{LEM: smallH2}(a), the subgraph $H$ is strongly $\Z_7$-connected, which contradicts Claim~\ref{CL: nostrongZ7}. \end{proof} Now we can strengthen Claim~\ref{CL: ess11}(b). \begin{claim}\label{CL: ess12} If $[X, X^c]$ is an edge cut with $|X|\ge 2$ and $|X^c|\ge 3$, then $|[X,X^c]|\ge 12$. \end{claim} \begin{proof} Let $X$ satisfy the hypotheses and let ${\P}=\{X, X^c\}$. We will prove $\rho_G(\P)\ge 21$. Assume, to the contrary, that $\rho_G(\P)\le 20$. Let $H=G[X]$ and let $\mathcal{Q}=\{Q_1,\ldots,Q_s\}$ be a partition of $H$. Let $\P'=\mathcal{Q}\cup\{X^c\}$. Eq.~\eqref{EQ: wH2} implies $\rho_H(\mathcal{Q})=\rho_G(\P')-\rho_G(\P)+14$. Since $|X^c|\ge 3$, Claim~\ref{CL: noalmostge14} implies $\rho_G(\P')\ge 14$. Thus $\rho_H(\mathcal{Q})\ge 14-20+14=8$. By Lemma~\ref{LEM: smallH2}(b), subgraph $H$ is strongly $\Z_7$-connected, which contradicts Claim~\ref{CL: nostrongZ7}. So $21\le \rho_G({\cal P})=2|[X,X^c]|-34+31$, which implies $|[X,X^c]|\ge 12$. \end{proof} The value of Claim~\ref{CL: ess12} is that it allows us to lift three pairs of edges (with at most two incident to a common vertex) and know that the resulting graph $G'$ is still 6-edge-connected. Thus, to show that such a graph $G'$ is strongly $\Z_7$-connected, it suffices to verify the remaining hypotheses of Lemma~\ref{LEM: smallH2}(c-i).
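For readers checking these potential computations on small examples, the following sketch evaluates $\rho$ numerically. It assumes the closed form $\rho_G(\mathcal{P}) = 2e(\mathcal{P}) - 17|\mathcal{P}| + 31$, where $e(\mathcal{P})$ counts multiedges joining distinct parts; this form is inferred from the trivial-partition value $2\|G\|-17|G|+31$ and the bipartition expression $2|[X,X^c]|-34+31$ used above, so it is an illustration rather than part of the proof.

```python
def rho(edges, partition):
    """Evaluate rho_G(P) = 2*e(P) - 17*|P| + 31, where e(P) is the
    number of multiedges joining distinct parts of the partition P.
    `edges` maps an ordered pair of endpoints to its multiplicity."""
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    crossing = sum(mult for (u, v), mult in edges.items()
                   if part_of[u] != part_of[v])
    return 2 * crossing - 17 * len(partition) + 31

# 6K_2: two vertices joined by six parallel edges.
six_k2 = {('x', 'y'): 6}
print(rho(six_k2, [{'x'}, {'y'}]))  # trivial partition: 2*6 - 34 + 31 = 9
print(rho(six_k2, [{'x', 'y'}]))    # single part: -17 + 31 = 14
```

The single-part value $14$ matches the additive constant appearing in Eq.~(\ref{EQ: wH2}).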
\begin{figure}[t] \setlength{\unitlength}{0.08cm} \begin{center} \begin{picture}(155,40) \put(10,0){\circle*{2}}\put(30,0){\circle*{2}}\put(10,20){\circle*{2}} \put(30,20){\circle*{2}} \put(20,32){\circle*{2}}\put(20,-10){\circle*{2}} \qbezier(10, 0)(20, 4)(30, 0)\qbezier(10, 0)(20, -4)(30, 0)\qbezier(10, 0)(20, 1.5)(30, 0)\qbezier(10, 0)(20, -1.5)(30, 0) \qbezier(10, 0)(8.5, 10)(10, 20)\qbezier(10, 0)(6, 10)(10, 20)\qbezier(10, 0)(14, 10)(10, 20)\qbezier(10, 0)(11.5, 10)(10, 20) \qbezier(30, 20)(20, 16)(10, 20)\qbezier(30, 20)(20, 18.5)(10, 20)\qbezier(30, 20)(20, 21.5)(10, 20)\qbezier(30, 20)(20, 24)(10, 20) \qbezier(30, 20)(26, 10)(30, 0)\qbezier(30, 20)(28.5, 10)(30, 0)\qbezier(30, 20)(34, 10)(30, 0)\qbezier(30, 20)(31.5, 10)(30, 0) \qbezier(30, 20)(30, 20)(20, 32)\qbezier(10, 20)(10, 20)(20, 32) \qbezier(30, 0)(30, 0)(20, -10)\qbezier(10, 0)(10, 0)(20, -10) \put(60,0){\circle*{2}}\put(80,0){\circle*{2}}\put(60,20){\circle*{2}} \put(80,20){\circle*{2}} \put(70,32){\circle*{2}} \qbezier(60, 0)(70, -1.5)(80, 0)\qbezier(60, 0)(70, 1.5)(80, 0)\qbezier(60, 0)(70, -4)(80, 0)\qbezier(60, 0)(70, 4)(80, 0) \qbezier(60, 0)(58.5, 10)(60, 20)\qbezier(60, 0)(61.5, 10)(60, 20)\qbezier(60, 0)(56, 10)(60, 20) \qbezier(60, 0)(64, 10)(60, 20) \qbezier(80, 20)(70, 16)(60, 20)\qbezier(80, 20)(70, 24)(60, 20)\qbezier(80, 20)(70, 21.5)(60, 20)\qbezier(80, 20)(70, 18.5)(60, 20) \qbezier(80, 20)(76, 10)(80, 0)\qbezier(80, 20)(78.5, 10)(80, 0)\qbezier(80, 20)(81.5, 10)(80, 0)\qbezier(80, 20)(84, 10)(80, 0) \qbezier(80, 20)(80, 20)(70, 32)\qbezier(60, 20)(60, 20)(70, 32) \qbezier(60, 0)(40, 20)(70, 32)\qbezier(80, 0)(100, 20)(70, 32) \put(110,0){\circle*{2}}\put(140,0){\circle*{2}}\put(125,20){\circle*{2}} \put(143,20){\circle*{2}}\put(107,20){\circle*{2}}\put(125,-10){\circle*{2}} \qbezier(140, 0)(125, -10)(125, -10)\qbezier(110, 0)(125, -10)(125, -10) \qbezier(110, 0)(125, 4)(140, 0)\qbezier(110, 0)(125, -4)(140, 0)\qbezier(110, 0)(125, 0)(140, 0) \qbezier(110, 0)(110, 0)(125, 
20)\qbezier(110, 0)(115, 12)(125, 20)\qbezier(110, 0)(120, 8)(125, 20) \qbezier(140, 0)(140, 0)(125, 20)\qbezier(140, 0)(130, 8)(125, 20)\qbezier(140, 0)(135, 12)(125, 20) \qbezier(110, 0)(108.5, 10)(107, 20)\qbezier(125, 20)(116, 20)(107, 20)\qbezier(143, 20)(134, 20)(125, 20)\qbezier(140, 0)(141.5, 10)(143, 20) \put(14,-20){\footnotesize{$(5C_4^=)\diit$}}\put(55.5,-20){\footnotesize{identified $(5C_4^=)\diit$}}\put(121,-20){\footnotesize{$T_{4,4,4}\diiit$}} \end{picture} \end{center} \vspace{1.4cm} \caption{The graphs $(5C_4^=)\diit$, identified $(5C_4^=)\diit$, $T_{4,4,4}\diiit$.} \label{FIG: YT444} \end{figure} Recall that \Emph{$5C_4^=$} denotes the graph formed from $5C_4$ by removing the edges of a perfect matching. \begin{claim} $G$ contains neither a copy of $(5C_4^=)\diit$ nor a copy of $(5C_4^=)\diit$ with its two 2-vertices identified. \label{CL: noYnoYid} \end{claim} \begin{proof} Suppose $G$ contains a copy of $(5C_4^=)\diit$ with vertices $v_1,v_2,v_3,v_4,w_1, w_2$, where $v_1,\ldots,v_4$ lie on the 4-cycle and $N(w_1)=\{v_1,v_2\}$ and $N(w_2)=\{v_3,v_4\}$. In $G$ we lift edges $v_1w_1,w_1v_2$ to form a new copy of $v_1v_2$ and lift edges $v_3w_2,w_2v_4$ to form a new copy of $v_3v_4$; call this new graph $G'$. In $G'$ vertices $v_1,\ldots,v_4$ induce a copy of $5C_4^=$ (if either $v_1v_3$ or $v_2v_4$ is present in $G$, then $G$ contains $T_{1,1,5}\dit$, which is a contradiction). Lemma~\ref{K4weightsZ7-lem} implies $5C_4^=$ is strongly $\Z_7$-connected. Form $G''$ from $G'$ by contracting $\{v_1,v_2,v_3,v_4\}$. Since $G$ is 10-edge-connected by Claim~\ref{CL: deltage10}, we know $G''$ is 6-edge-connected. The trivial partition $\mathcal{Q}^*$ of $G''$ satisfies $\rho_{G''}(\mathcal{Q}^*)\ge \rho(G)+3(17)-2(20)\ge 11$. Each nontrivial partition $\mathcal{Q}''$ of $G''$ corresponds to a normal partition $\mathcal{Q}$ of $G$ in which the contracted vertex is replaced by $\{v_1,v_2,v_3,v_4\}$.
Since at most four edges are counted in $\rho_G(\mathcal{Q})$ but not in $\rho_{G''}(\mathcal{Q}'')$, we have $\rho_{G''}(\mathcal{Q}'')\ge \rho_G(\mathcal{Q})-2(4)\ge 6$ by Claim~\ref{CL: noalmostge14}. Thus, $\rho(G'')\ge 6$, so Lemma~\ref{LEM: smallH2}(c-ii) implies that $G''$ is strongly $\Z_7$-connected, and also that $G$ is strongly $\Z_7$-connected, which is a contradiction. If vertices $w_1$ and $w_2$ are identified, the same proof works, since Claim~\ref{CL: deltage10} still implies that $G''$ is 6-edge-connected. \end{proof} \begin{claim} \label{CL: noT444'''} $G$ contains no copy of $T_{4,4,4}\diiit$. \end{claim} \begin{proof} Suppose $G$ contains a copy of $T_{4,4,4}\diiit$ with vertices $v_1,v_2,v_3,w_1,w_2,w_3$ and $d(v_i)=8$ and $d(w_i)=2$ for all $i$ and $N(w_i)=\{v_1,v_2,v_3\}\setminus\{v_i\}$. Form $G'$ from $G$ by lifting the pair of edges incident to each vertex $w_i$ and contracting the resulting $T_{4,4,4}$. This is a lifting reduction of the first type. Since $T_{4,4,4}$ is strongly $\Z_7$-connected by Lemma~\ref{Z7-contract-configs}, it suffices to show that $G'$ is also strongly $\Z_7$-connected. Claims~\ref{CL: ess12} and~\ref{CL: deltage10} imply that $G'$ is 6-edge-connected. The trivial partition $\P^*$ of $G'$ satisfies $\rho_{G'}(\P^*)\ge \rho(G)+17(2)-2(15)\ge 4$. Each nontrivial partition $\P'$ of $G'$ corresponds to a normal partition $\P$ of $G$ in which the contracted vertex is replaced by $\{v_1,v_2,v_3\}$. We show below that for such a partition we can strengthen Claim~\ref{CL: noalmostge14} to $\rho_G(\P)\ge 15$. Then we have $\rho_{G'}(\P')\ge \rho_G(\P)-2(6)\ge 3$ by Claim~\ref{CL: nontrivialge716}(b), since at most six edges are counted in $\rho_G(\P)$ but not in $\rho_{G'}(\P')$. Thus, $\rho(G')\ge 3$, so Lemma~\ref{LEM: smallH2}(c-ii) implies that $G'$ is strongly $\Z_7$-connected, which is a contradiction. Now it suffices to show that $\rho_G(\P)\ge 15$. Suppose, to the contrary, that $\rho_G(\P)\le 14$. 
Let $P_1$ be the part of $\P$ containing $\{v_1,v_2,v_3\}$, and let $H=G[P_1]$. We will show that $H$ is strongly $\Z_7$-connected, which gives a contradiction. Let $\mathcal{Q}=\{Q_1,\ldots,Q_s\}$ be a partition of $H$. Let $\P''=\mathcal{Q}\cup(\P\setminus \{P_1\})$. Eq.~\eqref{EQ: wH2} implies $\rho_H(\mathcal{Q})=\rho_G(\P'')-\rho_G(\P)+14\ge \rho_G(\P'')\ge 0$. Further, if $\mathcal{Q}$ is a nontrivial partition of $H$, then $\P''$ is a nontrivial partition of $G$, so Claim~\ref{CL: nontrivialge716} implies $\rho_H(\mathcal{Q})\ge \rho_G(\P'')\ge 7$. Since $H$ contains $T_{3,3,3}$ by construction, and $G$ does not contain $T_{2,2,4}$, we know that $H\notin\mathcal{F}$. To apply Lemma~\ref{LEM: smallH2}(c-i), we show that $H$ is 6-edge-connected. Consider a bipartition $\mathcal{Q}=\{Q_1,Q_2\}$ of $H$. Since $\mathcal{Q}$ is nontrivial, $7\le \rho_G(\P'')\le \rho_H(\mathcal{Q})=2|[Q_1,Q_2]_H|-2(17)+31$, which implies $|[Q_1,Q_2]_H|\ge 5$. That is, $H$ is 5-edge-connected. If $H$ is 6-edge-connected, then Lemma~\ref{LEM: smallH2}(c-i) implies that $H$ is strongly $\Z_7$-connected, which is a contradiction. So assume $H$ has a bipartition $\mathcal{Q}=\{Q_1,Q_2\}$ with $|[Q_1,Q_2]_H|=5$. By symmetry, we assume $|Q_1|\ge |Q_2|$. Since $H$ contains $T_{3,3,3}$ and $T_{3,3,3}$ is 6-edge-connected, we know that $|Q_1|\ge3$. Now $\rho_G(\P'')=\rho_G(\P)+2(5)-17\le 14-7=7$. Since $\P''$ is normal with $|Q_1|\ge 3$, this contradicts Claim~\ref{CL: noalmostge14}. \end{proof} \subsection{Discharging} Fix a plane embedding of a planar graph $G$ such that $\rho(G)\ge 0$. (We assume that all parallel edges between two vertices $v$ and $w$ are embedded consecutively, in the cyclic orders, around both $v$ and $w$.) If $G$ has a cut-vertex, then each block of $G$ is strongly $\Z_7$-connected by minimality, so $G$ is strongly $\Z_7$-connected by Lemma~\ref{reduc-lem}, which is a contradiction. Hence $G$ is $2$-connected. Since $\rho(G)\ge 0$, we have $2\|G\|-17|G|+31\ge 0$.
By Euler's Formula, $|G|+|F(G)|-\|G\|=2$, so $|G|=\|G\|-|F(G)|+2$. Substituting this into $2\|G\|-17|G|+31\ge 0$ gives $15\|G\|\le 17|F(G)|-3$, and hence $$ \sum_{f\in F(G)}\ell(f)=2\|G\|\le \frac{34}{15}|F(G)|-\frac25. $$ We assign to each face $f$ initial charge $\ell(f)$. So the total charge is strictly less than $34|F(G)|/15$. To reach a contradiction, we redistribute charge so that each face ends with charge at least $34/15$. We use the following three discharging rules. \begin{enumerate} \item[(R1)] Each 2-face takes charge $2/15$ from each weakly adjacent $3^+$-face. \item[(R2)] Each 3-face takes charge $2/15$ from each weakly adjacent $4^+$-face with which its parallel edge has multiplicity at most 3 and $1/15$ from each weakly adjacent $4^+$-face with which its parallel edge has multiplicity 4. \item[(R3)] After (R1) and (R2), each 3-face with more than $34/15$ splits its excess equally among weakly adjacent 3-faces with less than $34/15$. \end{enumerate} Now we show that each face ends with charge at least $34/15$. By (R1) each 2-face ends with $2+2(2/15)=34/15$. Consider a $5^+$-face $f$. Since $G$ contains no copy of $6K_2$, each edge of $f$ has multiplicity at most 5. Since $G$ contains no copy of $T_{1,1,5}$, face $f$ sends at most $4(2/15)$ across each of its edges. Thus $f$ ends with at least $\ell(f)-4(2/15)\ell(f)=7\ell(f)/15\ge 35/15$. Consider a 4-face $f$. Since $G$ contains no copy of $T^{\,\bullet}_{1,1,5}$, each edge of $f$ has multiplicity at most 4. So $f$ sends at most $3(2/15)+1/15=7/15$ across each of its edges. If $f$ sends at most 5/15 across one edge, then $f$ ends with at least $4-3(7/15)-5/15=34/15$. If $f$ sends at most $6/15$ across at least two of its edges, then $f$ ends with at least $4-2(7/15)-2(6/15)=34/15$. So assume that neither of these cases holds. Thus, each edge of $f$ has multiplicity 4, and $f$ is weakly adjacent to 3-faces across at least three of its edges. This contradicts Claim~\ref{CL: noYnoYid}. Let $f$ be a 3-face $T_{a,b,c}$.
If $a+b+c\le 8$, then $f$ ends (R2) with at least $3-(8-3)(2/15)=35/15$. So assume $a+b+c\ge 9$. Since $G$ has no $T_{1,1,5}$, we know $\max\{a,b,c\}\le 4$. Since $G$ has no $T_{2,2,4}$, if $\max\{a,b,c\}=4$, then $\min\{a,b,c\}=1$. Thus, each 3-face $T_{a,b,c}$ ends (R2) with excess charge at least $1/15$ unless $T_{a,b,c}\in\{T_{1,4,4},T_{3,3,3}\}$. So we only need to consider $T_{1,4,4}$ and $T_{3,3,3}$. Suppose $f$ is $T_{1,4,4}$. Each face adjacent to $f$ across an edge of multiplicity 4 is not a 3-face, since $G$ has no $T_{1,1,5}\dit$. So $f$ ends (R2) with at least $3-(9-3)(2/15)+2(1/15)=35/15$. Hence, each 3-face $f$ ends (R2) with at least $35/15$ unless $f$ is $T_{3,3,3}$. So assume that $f$ is $T_{3,3,3}$. If any adjacent face is not a 3-face, then $f$ ends (R2) with at least $3-(9-3)(2/15)+2/15=35/15$. So assume each adjacent face is a 3-face. If these three adjacent faces do not intersect outside $f$, then $G$ contains a copy of $T_{4,4,4}\diiit$, a contradiction. If all three faces intersect outside $f$, then $|V(G)|=4$, which contradicts Claim~\ref{CL: Vge5}. So assume that exactly two faces adjacent to $f$ intersect outside $f$. Let $f_1$ and $f_2$ denote the 3-faces adjacent to $f$ that intersect outside $f$. Denote the boundaries of $f$, $f_1$, and $f_2$ by (respectively) $vwx$, $vwy$, and $wxy$. Suppose $\mu(wy)\ne 3$. Now $f_1$ and $f_2$ each end (R2) with at least $35/15$, so by (R3) each gives $f$ at least $(1/2)(1/15)$. Thus $f$ ends happy. So assume $\mu(wy)=3$. Now $d(w)=3+3+3$, which contradicts that $\delta(G)\ge 10$, by Claim~\ref{CL: deltage10}. This completes the proof. {\footnotesize
\section{Introduction} \label{sec:Introduction} Given a time series of graphs $\mathcal{G}^{(t)} = (\mathcal{V},\mathcal{E}^{(t)}), t=1,2,\dots$ where the vertex set $\mathcal{V}=[n]=\{1, \dots, n\}$ is fixed and the edge sets $\mathcal{E}^{(t)}$ depend on time $t$, we consider two natural anomaly detection problems. The first problem involves detecting whether a particular graph $\mathcal{G}^{(t^{*})}$ is anomalous in the time series. The second problem involves detecting individual vertices anomalous in time. These problems will be discussed in detail below. Existing literature on anomaly detection in graphs (see for instance recent surveys \cite{ranshous2015anomaly, 10.1007/s10618-014-0365-y}) can roughly be categorized according to the characteristics of methods for modeling anomalies. Community-based methods \cite{duan2009community, aggarwal2012event, chen2012community, gupta2012community,rossi2013modeling} first perform community detection by clustering (or partitioning) vertices and then subsequently use features summarised from communities to detect anomalies. Decomposition methods \cite{ide2004eigenspace, lakhina2004diagnosing, sun2007less, sun2006beyond, sun2006window} use eigenspace representations or tensor decomposition to extract features and monitor changes across time steps. Distance or (dis)similarity-based methods are also employed to monitor or identify changes \cite{koutra2013deltacon,koutra2016deltacon}. Probabilistic methods \cite{priebe2005scan,heard2010, 10.1145/1242572.1242600,aggarwal2011outlier,doi:10.1080/00401706.2013.822830} specify probability distributions to describe baseline ``typical'' behavior of features in networks (or networks themselves) and consider deviations from the baseline to be anomalies. 
Probabilistic methods, as with scan statistics \cite{priebe2005scan,Park2013AnomalyDI,wang2013locality,doi:10.1080/00401706.2013.822830} and Bayesian methods \cite{heard2010}, do not always perform a fixed mapping from features to normal or anomaly states, but can construct a probability for changes to be considered anomalies. Random graph inference has witnessed a host of developments and advancements in recent years \cite{JMLR:v18:16-480, JMLR:v18:17-448, goldenberg2009survey}. Much work has focused on single graph inference, while recently there has been increased interest in multiple graphs both with respect to modeling and to performing statistical inference. Among recent developments, \text{OMNI} \cite{levin2017central} and \text{MASE} \cite{arroyo2019inference} are two statistically principled multiple random graph embedding methods for networks with latent space structure, absent dynamics or time dependency. This paper investigates a two-step procedure for detecting anomalies in time series of graphs that employs these methods for multiple network embedding. Notably, our approach benefits from simultaneous graph embedding to leverage common graph structure for accurate parameter estimation, improving downstream discriminatory power for testing. Furthermore, since it hinges on probabilistic assumptions, our approach provides a statistically meaningful threshold for achieving a desired false positive rate of anomaly detection. This article is organized as follows. Section~\hyperref[sec:Preliminaries]{II} introduces notation and formulates two anomaly detection problems for graph-valued time series data. OMNI and MASE are introduced and our methodology is described in Section~\hyperref[sec:Method]{III}. We present simulation results comparing the performance of \text{OMNI} and \text{MASE} in anomaly detection in Section~\hyperref[sec:illus]{IV} and Section~\hyperref[sec:compare]{V}.
For a real data illustration, we identify excessive activity in a sub-region of a large-scale commercial search engine query-navigational graph in Section~\hyperref[sec:realdata]{VI}. Section~\hyperref[sec:realdata]{VII} concludes this paper with a discussion of outstanding issues and further summarizes our findings. \section{Setup} \label{sec:Preliminaries} \subsection{Notation and Preliminaries} This paper considers undirected, unweighted graphs without self-loops. Each graph is modeled via a random dot product graph (RDPG) \cite{JMLR:v18:17-448}, in which vertex connectivity is governed by latent space geometry. We begin by defining RDPGs as individual, static networks. \begin{definition} (\emph{Random Dot Product Graph}) Let $X_{1}, X_{2}, \dots, X_{n} \in\mathbb{R}^{d}$ be a collection of latent positions such that $0\leq X_{i}^{T} X_{j}\leq 1$ for each $i,j\in[n]$, and write $\boldsymbol{X} = [X_{1}|X_{2}|\cdots|X_{n}]^{T} \in \mathbb{R}^{n\times d}$. Suppose $\boldsymbol{A}$ is a symmetric hollow random adjacency matrix with \begin{equation*} P[\boldsymbol{A}] =\prod_{i< j}(X_{i}^{T}X_{j})^{\boldsymbol{A}_{ij}}(1-X_{i}^{T}X_{j})^{1-\boldsymbol{A}_{ij}}. \end{equation*} We then write $\boldsymbol{A}\sim \operatorname{RDPG}(\boldsymbol{X})$ and say that $\boldsymbol{A}$ is the adjacency matrix of a \emph{random dot product graph} with \emph{latent positions} given by the rows of $\boldsymbol{X}$ and positive semi-definite \emph{connectivity matrix} $\boldsymbol{P}=E[\boldsymbol{A}]=\boldsymbol{X} \boldsymbol{X}^{T}$ with low rank structure $\text{rank}(\boldsymbol{P})\leq d$. \end{definition} Random dot product graphs and their indefinite extensions \cite{rubin2017statistical} are flexible enough to encompass all low-rank independent-edge random graphs, including stochastic block model (SBM) graphs \cite{HOLLAND1983109} and their various generalizations. 
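As a concrete illustration of the definition, the following sketch (our own illustration with arbitrary latent positions, not code from the paper) samples an adjacency matrix from $\operatorname{RDPG}(\boldsymbol{X})$:

```python
import numpy as np

def sample_rdpg(X, rng):
    """Sample A ~ RDPG(X): A is symmetric and hollow, and each edge
    {i, j}, i < j, appears independently with probability X_i^T X_j."""
    P = X @ X.T                                   # connectivity matrix, rank <= d
    n = P.shape[0]
    upper = np.triu(rng.random((n, n)) < P, k=1)  # strict upper triangle
    A = upper.astype(int)
    return A + A.T                                # symmetric, zero diagonal

rng = np.random.default_rng(0)
# Two distinct latent positions give a two-block SBM as a special case.
X = np.vstack([np.tile([0.8, 0.1], (5, 1)), np.tile([0.1, 0.8], (5, 1))])
A = sample_rdpg(X, rng)
```

Here the within-block edge probability is $0.8^2+0.1^2=0.65$ and the between-block probability is $2(0.8)(0.1)=0.16$, so this rank-2 RDPG is exactly a two-community SBM.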
The matrix of latent positions $\boldsymbol{X}$ captures the behavior and structure of nodes in the graph (e.g.,~when $\boldsymbol{X}$ has a finite number of different latent positions, it corresponds to an SBM with community structure), and presents a natural, unobserved ``target'' that one might hope to estimate or approximate via the observed data $\boldsymbol{A}$ \cite{sussman2012consistent}. \subsection{Latent position models for time series of graphs} \label{sec:model} Consider a sequence of graphs $\mathcal{G}^{(1)}, \ldots, \mathcal{G}^{(M)}$ observed at $M$ different time points. The graphs have common vertex set $\mathcal{V}$ but time varying edge sets $\mathcal{E}^{(t)} $. This paper considers graphs with matched vertices, in which there exists a known one-to-one correspondence between the vertices of the graphs; let $\mathcal{V}=[n] = \{1, 2, \dots, n\}$, where $n$ denotes the number of vertices. The RDPG model for a time series of graphs is derived from $n$ \emph{individual vertex processes} $\{X_{i}^{(t)}\}_{t=1}^{M}$, where $X_{i}^{(t)} \in \mathbb{R}^{d}$ is the latent position for vertex $i$ at time $t$. Latent positions in graph $\mathcal{G}^{(t)}$ are assembled in the matrix $\boldsymbol{X}^{(t)}$ = $[X_{1}^{(t)}, \dots ,X_{n}^{(t)}]^{T} \in \mathbb{R}^{n\times d}$. We call the collection $\boldsymbol{X}^{(t)}, 1 \leq t \leq M,$ the {\em overall vertex process}. Observing a time series of graphs, it is natural to consider leveraging information from multiple graphs, which motivates us to assume some underlying structures in the overall vertex process.
Note that any latent position matrix $\boldsymbol{X}^{(t)} \in \mathbb{R}^{n\times d}$ can be decomposed as $\boldsymbol{X}^{(t)} = \boldsymbol{V}^{(t)} \boldsymbol{S}^{(t)}$ (via QR decomposition for example), where $\boldsymbol{V}^{(t)}\in \mathbb{R}^{n\times d}$ consists of orthonormal columns (we call $\boldsymbol{V}^{(t)}$ an orthonormal basis of the \emph{left singular subspace} of $\boldsymbol{X}^{(t)}$) and $\boldsymbol{S}^{(t)} \in \mathbb{R}^{d\times d}$. So it is intuitive to consider characterizing the underlying structure type in the overall vertex process by their singular subspaces. Next, we introduce three types of structures across time in the overall vertex process: \begin{enumerate} \item All the latent positions $\boldsymbol{X}^{(t)} \in \mathbb{R}^{n\times d}$ share the same left singular subspace, so $\boldsymbol{V}=\boldsymbol{V}^{(t)}\in \mathbb{R}^{n\times d}$ is constant across time, while each individual matrix $\boldsymbol{S}^{(t)}\in \mathbb{R}^{d\times d}$ is allowed to differ, i.e.,~ \begin{equation} \label{eq:structure1} \boldsymbol{X}^{(t)} = \boldsymbol{V} \boldsymbol{S}^{(t)} \end{equation} where $\boldsymbol{V} $ consists of $d$ orthonormal columns. Observe that the subspace spanned by the columns of $\boldsymbol{V}$ is the same as the \emph{invariant subspace} of the connectivity matrices $\boldsymbol{P}^{(t)}=\boldsymbol{X}^{(t)}(\boldsymbol{X}^{(t)})^T$, $t\in[M]$. A special case of this structure type is a multilayer stochastic blockmodel \cite{han2015consistent,paul2020spectral, matias2017statistical} with positive definite connectivity matrices, in which the community structure of the nodes remains fixed throughout time, but the connectivity between and within communities can change over time via $\boldsymbol{S}^{(t)}$.
Moreover, this structure type can capture other important node structures that remain fixed over time \cite{arroyo2019inference}, such as mixed memberships \cite{airoldi2008mixed} or hierarchical communities \cite{lyzinski2016community}. This structure type is the model of \cite{draves2020bias} if $\boldsymbol{S}^{(t)}$ is diagonal. \item In practice, some of the graphs in the time series might present deviations from the shared invariant subspace assumption defined in Equation~(\ref{eq:structure1}), so we characterize this behavior by allowing changes in $\boldsymbol{V}$ at some time points $\{t_1,\cdots,t_p\}$. Specifically, the latent positions $\boldsymbol{X}^{(t)} \in \mathbb{R}^{n\times d}$ share the same singular subspace for $t \in \{1,\cdots,M\}\setminus\{t_{1},\cdots,t_{p}\}$, while the other $\boldsymbol{X}^{(t_{j})}$ are arbitrarily different, i.e.,~ \begin{equation} \label{eq:structure2} \boldsymbol{X}^{(t)} = \begin{cases} \boldsymbol{V}\boldsymbol{S}^{(t)} ,& t \in [M]\setminus\{t_{1},\cdots,t_{p}\} , \\ \boldsymbol{V}^{(t)}\boldsymbol{S}^{(t)} ,& t=t_j\text{, }j=1,\cdots,p. \end{cases} \end{equation} This model can capture some deviations in the graphs at the node level, such as changes in community memberships for some vertices. \item More generally, when there is no shared structure in the vertices across time, all latent positions can be arbitrarily different, that is \begin{equation} \label{eq:structure3} \boldsymbol{X}^{(t)}= \boldsymbol{V}^{(t)}\boldsymbol{S}^{(t)}. \end{equation} \end{enumerate} The model defined by Equation~(\ref{eq:structure2}) bridges the gap between a model with shared structure in the nodes via the common singular subspace in Equation~(\ref{eq:structure1}) and a model with arbitrarily different node structure in Equation~(\ref{eq:structure3}).
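To make these structure types concrete, the following sketch (with illustrative dimensions and a hypothetical anomaly time of our own choosing, not taken from the paper) generates latent positions of the second type: a common basis $\boldsymbol{V}$ at all times except one, where the subspace changes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, M = 20, 2, 5
t_anom = 3  # hypothetical time at which the subspace deviates

V, _ = np.linalg.qr(rng.standard_normal((n, d)))      # shared basis
V_alt, _ = np.linalg.qr(rng.standard_normal((n, d)))  # deviating basis

X = []
for t in range(M):
    S = np.diag(rng.uniform(0.5, 1.0, d))   # time-varying d x d factor
    basis = V_alt if t == t_anom else V
    X.append(basis @ S)

def subspace_gap(V, Xt):
    """Return 1 - (smallest singular value of V^T Q), where Q spans
    col(Xt); this is zero exactly when the column spaces coincide."""
    Q, _ = np.linalg.qr(Xt)
    return 1.0 - np.linalg.svd(V.T @ Q, compute_uv=False).min()

print(subspace_gap(V, X[0]) < 1e-8)       # True: shared subspace
print(subspace_gap(V, X[t_anom]) < 1e-8)  # False: subspace deviates
```

Since each $\boldsymbol{S}^{(t)}$ here is invertible, the column space of $\boldsymbol{X}^{(t)}$ equals that of its basis, so the gap detects exactly the deviating time point.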
Intuitively, statistical inferences about the time series should benefit from a shared structure in the vertices across time, and as such, one of our goals in this paper is to exploit a common structure when possible. Even if some of the graphs deviate from the common structure in the singular subspace defined by $\boldsymbol{V}$ in structure type~\eqref{eq:structure1}, observe that in general the latent positions for structure type~\eqref{eq:structure3} can be jointly represented using the same singular subspace with a potentially increased rank dimension $d'>d$, in a way that will be defined next. Such a representation pays the price of increased model complexity, requiring more parameters than the common structure in structure type~\eqref{eq:structure1}. To define such a representation, let $\boldsymbol{U}$ be the matrix $\boldsymbol{U}=[\boldsymbol{V}^{(1)} ,\boldsymbol{V}^{(2)},\cdots,\boldsymbol{V}^{(M)} ]\in\mathbb{R}^{n\times Md}$, and suppose that $d'=\text{rank}(\boldsymbol{U})$. This matrix can be decomposed (for example, via singular value decomposition) as $\boldsymbol{U} = \boldsymbol{V}'\boldsymbol{W}$, where $\boldsymbol{V}'\in\mathbb{R}^{n\times d'}$ is a matrix with orthonormal columns and $\boldsymbol{W} = [\boldsymbol{W}^{(1)}, \cdots, \boldsymbol{W}^{(M)}]$ is a $d'\times (Md)$ matrix. Hence, the latent positions in the structure of type \eqref{eq:structure3} can be expressed as $$\boldsymbol{X}^{(t)} = \boldsymbol{V}'\boldsymbol{W}^{(t)} \boldsymbol{S}^{(t)},$$ which is similar to the shared singular subspace in Equation~\eqref{eq:structure1}, but now the singular subspace has rank dimension $d'>d$. Therefore, it is natural to characterize the deviation from the shared singular subspace in Equation~(\ref{eq:structure1}) via some distance between the subspaces spanned by the columns of $\boldsymbol{V}$ and $\boldsymbol{V}'$, or the difference between $d'$ and $d$.
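The increased dimension $d'$ can be computed directly as the (numerical) rank of the stacked basis matrix $\boldsymbol{U}$; a small sketch with illustrative sizes of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, M = 30, 2, 4

V, _ = np.linalg.qr(rng.standard_normal((n, d)))      # shared subspace
V_dev, _ = np.linalg.qr(rng.standard_normal((n, d)))  # one deviation
bases = [V, V, V_dev, V]                              # M = 4 time points

U = np.hstack(bases)                       # U = [V^(1) | ... | V^(M)]
sv = np.linalg.svd(U, compute_uv=False)
d_prime = int((sv > 1e-10 * sv[0]).sum())  # numerical rank d'
print(d_prime)  # 4: the shared subspace (d = 2) plus the 2-dim deviation
```

With no deviation, all $M$ blocks span the same subspace and $d'=d$; the generic deviating basis here adds two further dimensions, so $d'=4$.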
We can characterize the evolution of the time process by observing the differences between adjacent time points, $\boldsymbol{Y}^{(t)}= \boldsymbol{X}^{(t)} - \boldsymbol{X}^{(t-1)}$; we call $\boldsymbol{Y}^{(t)}$ a \emph{perturbation}. When the left singular subspaces of $\boldsymbol{Y}^{(t)}$ and $\boldsymbol{X}^{(t-1)}$ are the same, we say that $\boldsymbol{Y}^{(t)}$ is a \emph{linearly dependent perturbation}; when the singular subspaces of $\boldsymbol{Y}^{(t)}$ and $\boldsymbol{X}^{(t-1)}$ are different, we call $\boldsymbol{Y}^{(t)}$ a \emph{linearly independent perturbation} (see Fig.~\ref{fig:alpha0125} for a perturbation example, details in scenario $2$ in \S \hyperref[sec:illus]{ IV. A}). For example, in a setting with shared community structure across time, changes in the connectivity of the communities that keep the community memberships constant can be represented with a linearly dependent perturbation. When a few vertices are changing their community memberships across time, the perturbations are linearly independent, as the singular subspace of the latent positions needs to change. The models previously defined are constrained to have positive semidefinite connectivity matrices, but they can be extended to generate arbitrary low-rank connectivity matrices via the generalized random dot product model \cite{rubin2017statistical}. This model introduces an indefinite matrix $\boldsymbol{I}_{p,q}$ to express the connectivity matrix as $\boldsymbol{P}^{(t)}=\boldsymbol{X}^{(t)}\boldsymbol{I}_{p,q}\boldsymbol{X}^{(t)^{T}}$. Here $\boldsymbol{I}_{p,q}\in\mathbb{R}^{d\times d}$ with $p+q=d$ is a diagonal matrix with its first $p$ diagonal entries equal to $1$ and the remaining $q$ entries equal to $-1$. For simplicity of the exposition, we focus only on positive semidefinite RDPG models.
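A simple numerical proxy for this distinction checks whether the columns of $\boldsymbol{Y}^{(t)}$ lie in the column space of $\boldsymbol{X}^{(t-1)}$. The sketch below uses illustrative data, and the tolerance and construction are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 25, 2

V, _ = np.linalg.qr(rng.standard_normal((n, d)))
X_prev = V @ np.diag([1.0, 0.8])

def is_linearly_dependent(Y, X_prev, tol=1e-8):
    """True when projecting Y onto col(X_prev) leaves no residual,
    i.e. the perturbation stays inside the previous singular subspace."""
    Q, _ = np.linalg.qr(X_prev)
    residual = Y - Q @ (Q.T @ Y)
    return np.linalg.norm(residual) <= tol * max(np.linalg.norm(Y), 1.0)

Y_dep = X_prev @ np.diag([0.1, -0.05])        # stays in span(X_prev)
Y_indep = 0.1 * rng.standard_normal((n, d))   # generic new directions
print(is_linearly_dependent(Y_dep, X_prev))    # True
print(is_linearly_dependent(Y_indep, X_prev))  # False
```

In the community-structure example above, rescaling block connectivities produces a perturbation of the first kind, while membership changes produce the second.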
\subsection{Anomaly detection problem} To define our anomaly detection problems, assume first that if there are no anomalies, the overall vertex process $\boldsymbol{X}^{(t)}, 1 \leq t \leq M,$ evolves with some unknown variability $\tau \geq 0$ such that $\|\boldsymbol{X}^{(t)}-\boldsymbol{X}^{(t-1)}\|\leq \tau$. Here, $\|\cdot\|$ is a matrix norm measuring the difference between the latent positions at consecutive time points; we defer the discussion of this norm to the next section. Observing only the time-indexed graphs, and without knowledge of the overall vertex process itself, anomaly detection consists of determining a set of time points at which the overall vertex process deviates significantly from normal, i.e.,~$\|\boldsymbol{X}^{(t)}-\boldsymbol{X}^{(t-1)}\|> \tau$. The tasks of testing for these anomalous times, either for global graphs or for individual vertices, are our anomaly detection problems. Latent position models for time series of graphs have also been considered in related work, e.g.,~\cite{padilla2019change}. One of the differences between our approach and this related work is that ours exploits common structure in the vertices across the time series via joint embedding of graphs, in a way that will be described next. \section{Methodology} \label{sec:Method} The adjacency matrix $\boldsymbol{A}$ of a random dot product graph provably approximates the matrix of edge probabilities $\boldsymbol{P}$ in a global sense, via spectral norm concentration of the difference $\boldsymbol{A} - \boldsymbol{P}$ \cite{oliveira2009concentration,Lu2013SpectraOE}. Under suitable eigengap and sparsity assumptions on $\boldsymbol{P}$, a truncated eigendecomposition of $\boldsymbol{A}$ (locally) leads to consistent estimates of the true, unobserved latent positions $\boldsymbol{X}$ \cite{sussman2012consistent}.
\begin{definition} (\emph{Adjacency spectral embedding}) For an adjacency matrix $\boldsymbol{A}$, let $\boldsymbol{A}=\hat{\boldsymbol{V}}\hat{\boldsymbol{D}}\hat{\boldsymbol{V}}^{T}+ \hat{\boldsymbol{V}}_\perp\hat{\boldsymbol{D}}_\perp\hat{\boldsymbol{V}}^{T}_\perp$ be the eigendecomposition of $\boldsymbol{A}$ such that $(\hat{\boldsymbol{V}}, \hat{\boldsymbol{V}}_\perp)$ is the $n\times n$ orthogonal matrix of eigenvectors, with $\hat{\boldsymbol{V}}\in \mathbb{R}^{n\times d}$, $\hat{\boldsymbol{V}}_\perp\in \mathbb{R}^{n\times(n- d)}$, and $\hat{\boldsymbol{D}}$ is the diagonal matrix containing the $d$ largest eigenvalues in magnitude in descending order. The \emph{(scaled) adjacency spectral embedding} \cite{sussman2012consistent} of $\boldsymbol{A}$ is defined as $\hat{\boldsymbol{X}}=\hat{\boldsymbol{V}}|\hat{\boldsymbol{D}}|^{1/2}$. We refer to $\hat{\boldsymbol{V}}$ as the \emph{unscaled adjacency spectral embedding}, or simply as the leading eigenvectors of $\boldsymbol{A}$. \label{model:ASE} \end{definition} The purpose of our inference is to detect a local (temporal) behavior change in a time series of graphs. In particular, we define a graph to be ``anomalous'' when a (potentially small, unspecified) collection of vertices changes behavior at some time $t^{*}$ as compared to the recent past, while the remaining vertices continue with their normal behavior. As such, it is natural to consider a two-step procedure for anomaly detection: first perform spectral embedding, and then assess changes in the estimated latent positions. In our context we consider detection of anomalies in either the \emph{overall graph} or in \emph{individual vertices}. We define \emph{individual vertex anomaly detection (VertexAD)} for the $i$-th vertex at time point $t^{*} $ as a test of the null hypothesis $H_{0i}^{(t^{*})}$ that $t^{*}$ is not an anomaly time against the alternative hypothesis $H_{Ai}^{(t^{*})}$ that $t^{*}$ is an anomaly time for vertex $i$.
Under the null hypothesis the $i$-th latent state of the overall vertex process is not anomalous at $t^{*}$, i.e.,~the latent position is varying within some tolerance $\tau_{\text{vertex}}$: $$H_{0i}^{(t^{*})}: \|X_{i}^{(t^{*})} - X_{i}^{(t^{*}-1)}\|\leq \tau_{\text{vertex}},$$ $$H_{Ai}^{(t^{*})}: \|X_{i}^{(t^{*})} - X_{i}^{(t^{*}-1)}\|> \tau_{\text{vertex}}.$$ Setting $\tau_{\text{vertex}}=0$, this reduces to a classical two-sample test: $$H_{0i}^{(t^{*})}: X_{i}^{(t^{*})} = X_{i}^{(t^{*}-1)},$$ $$H_{Ai}^{(t^{*})}: X_{i}^{(t^{*})} \neq X_{i}^{(t^{*}-1)}.$$ Choosing $\tau_{\text{vertex}}>0$ allows for some variability under the null. Control charts \cite{shewhart1986statistical} are a tool for analyzing process changes over time and provide quantitative evidence on whether process variation is in control or out of control. Our approach considers graphs in a time window of length $l$ ending just before $t^{*}$: $\mathcal{W}^{(t^{*},l)}:=\{t^{*}-l, \dots, t^{*}-1\} \subseteq \{1, \dots,M\} $. In our control charts the tolerance $\tau_{\text{vertex}}=\tau_{\text{vertex}}^{(t^{*},l)}$ is a measure of dispersion of the latent positions $\{\boldsymbol{X}^{(t)}:t \in \mathcal{W}^{(t^{*},l)} \}$. To perform VertexAD for the $i$-th vertex, we define the test statistic \begin{equation} y_{i}^{(t)}=\|\hat{X}_{i}^{(t)} - \hat{X}_{i}^{(t-1)} \|_{2}, \label{eq:indverstat} \end{equation} where $\hat{X}_{i}^{(t)} $ is the latent position estimate of vertex $i$ at time $t$. Since the latent position estimates are close to the true latent positions \cite{sussman2012consistent}, this test statistic will be large if there is a substantial change in the latent position of vertex $i$ between $t-1$ and $t$. An anomaly is detected at time $t$ if $y_{i}^{(t)}$ is large enough for the null hypothesis to be rejected at some specified level.
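As a minimal illustration (a sketch with synthetic data, not the exact pipeline of the paper), the following samples two RDPG graphs from the same latent positions, computes the adjacency spectral embedding of Definition~\ref{model:ASE} for each, and forms the vertex statistics of Equation~(\ref{eq:indverstat}). Since separately computed embeddings are identified only up to an orthogonal transformation, a sign alignment is applied here; the joint embeddings discussed later avoid this step:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 1  # illustrative sizes (assumptions)

def ase(A, d):
    """(Scaled) adjacency spectral embedding: keep the d eigenpairs of A
    with largest |eigenvalue|, scale eigenvectors by sqrt(|eigenvalue|)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def sample_rdpg(X, rng):
    """Sample a symmetric, hollow adjacency matrix with P = X X^T."""
    P = np.clip(X @ X.T, 0.0, 1.0)
    A = np.triu(rng.random(P.shape) < P, 1).astype(float)
    return A + A.T

X = rng.uniform(0.2, 0.8, size=(n, d))        # null: no change over time
A_prev, A_curr = sample_rdpg(X, rng), sample_rdpg(X, rng)
Xh_prev, Xh_curr = ase(A_prev, d), ase(A_curr, d)

# With d = 1, a sign flip suffices to align the two embeddings.
if np.sum(Xh_prev * Xh_curr) < 0:
    Xh_curr = -Xh_curr

# Vertex-level statistics y_i^{(t)}: small for every vertex under the null.
y_vertex = np.linalg.norm(Xh_curr - Xh_prev, axis=1)
```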
Based on this formulation of VertexAD, we analogously define \emph{graph anomaly detection (GraphAD)} as a test of the null hypothesis $$H_{0}^{(t^{*})}: \|\boldsymbol{X}^{(t^{*})} - \boldsymbol{X}^{(t^{*}-1)}\|\leq \tau_{\text{graph}}, $$ $$H_{A}^{(t^{*})}: \|\boldsymbol{X}^{(t^{*})} - \boldsymbol{X}^{(t^{*}-1)}\|> \tau_{\text{graph}}. $$ The corresponding test statistic is defined as \begin{equation} y^{(t)}=\|\hat{\boldsymbol{X}}^{(t)} - \hat{\boldsymbol{X}}^{(t-1)} \|. \label{eq:ovegraphstat} \end{equation} Here, $\|\boldsymbol{X}\|$ denotes the $\ell_2$ operator norm of a matrix $\boldsymbol{X}\in\mathbb{R}^{n\times d}$, which corresponds to the largest singular value of $\boldsymbol{X}$. While other norms could be used to monitor the changes in the time series, we use the operator norm as the test statistic since it is less sensitive to differences in the dimension $d$ of the latent positions than other norms such as the Frobenius norm. We now introduce two different methods for obtaining latent position estimates: the omnibus embedding of \cite{levin2017central} and the multiple adjacency spectral embedding of \cite{arroyo2019inference}. In both cases, multiple graphs on the same vertex set are jointly embedded into a single space with a distinct representation for each graph.
Letting $\boldsymbol{A}^{(1)},\boldsymbol{A}^{(2)}, \dots , \boldsymbol{A}^{(M)}\in \mathbb{R}^{n\times n} $ be the adjacency matrices of a collection of $M$ vertex-matched undirected graphs, the $Mn$-by-$Mn$ omnibus matrix is defined as $$\boldsymbol{O}=\left[\begin{smallmatrix} \boldsymbol{A}^{(1)} & \frac{1}{2}(\boldsymbol{A}^{(1)}+\boldsymbol{A}^{(2)}) & \dots & \frac{1}{2}(\boldsymbol{A}^{(1)}+\boldsymbol{A}^{(M)})\\ \frac{1}{2}(\boldsymbol{A}^{(2)}+\boldsymbol{A}^{(1)})& \boldsymbol{A}^{(2)} & \dots &\frac{1}{2}(\boldsymbol{A}^{(2)}+\boldsymbol{A}^{(M)})\\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{2}(\boldsymbol{A}^{(M)}+\boldsymbol{A}^{(1)}) & \frac{1}{2}(\boldsymbol{A}^{(M)}+\boldsymbol{A}^{(2)}) & \dots & \boldsymbol{A}^{(M)} \end{smallmatrix}\right]$$ and the $d$-dimensional omnibus embedding \cite{levin2017central} $\text{OMNI}(\boldsymbol{A}^{(1)},\boldsymbol{A}^{(2)}, \dots, \boldsymbol{A}^{(M)},d) $ is the adjacency spectral embedding (see Definition \ref{model:ASE}) of $\boldsymbol{O}$ into $d$ dimensions: \begin{equation*} \text{ASE}(\boldsymbol{O},d) = (\hat{\boldsymbol{X}}^{(1)^{T}},\ldots,{\hat{\boldsymbol{X}}^{(M)^{T}}})^{T}\in \mathbb{R}^{nM \times d}. \end{equation*} When $\boldsymbol{X}^{(t)}=\boldsymbol{X}$ for $ t \in [M]$, the omnibus embedding provides consistent estimates of the true latent positions, up to an orthogonal transformation \cite{levin2017central}. The multiple adjacency spectral embedding (MASE) \cite{arroyo2019inference}, the other multiple random graph embedding approach considered in this paper, is a method for estimating the parameters of the common subspace independent edge (COSIE) model, in which all the expected adjacency matrices of the graphs, denoted by $\boldsymbol{P}^{(t)}=E[\boldsymbol{A}^{(t)}]$, share the same invariant subspace.
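The omnibus construction can be transcribed directly (a sketch; the tiny graph sizes and edge probability are illustrative assumptions):

```python
import numpy as np

def omnibus_matrix(As):
    """Assemble the Mn x Mn omnibus matrix with blocks (A^(s) + A^(t)) / 2."""
    M = len(As)
    return np.block([[(As[s] + As[t]) / 2 for t in range(M)]
                     for s in range(M)])

def omni_embedding(As, d):
    """ASE of the omnibus matrix, split back into one n x d block per graph."""
    O = omnibus_matrix(As)
    vals, vecs = np.linalg.eigh(O)
    idx = np.argsort(-np.abs(vals))[:d]
    Xhat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
    n = As[0].shape[0]
    return [Xhat[t * n:(t + 1) * n] for t in range(len(As))]

# Tiny illustration with M = 3 symmetric graphs on n = 4 vertices.
rng = np.random.default_rng(3)
As = []
for _ in range(3):
    B = (rng.random((4, 4)) < 0.5).astype(float)
    A = np.triu(B, 1)
    As.append(A + A.T)

O = omnibus_matrix(As)
Xhats = omni_embedding(As, d=1)
```

Note that the diagonal blocks of $\boldsymbol{O}$ are the original adjacency matrices, since $(\boldsymbol{A}^{(t)}+\boldsymbol{A}^{(t)})/2=\boldsymbol{A}^{(t)}$.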
That is, these matrices can be expressed as $\boldsymbol{P}^{(t)}=\tilde{\boldsymbol{V}}\boldsymbol{R}^{{(t)}} \tilde{\boldsymbol{V}}^{T}$ with $\tilde{\boldsymbol{V}}\in \mathbb{R}^{n\times \tilde{d}}$ having orthonormal columns (we call the invariant subspace defined by $\tilde{\boldsymbol{V}}$ the \emph{common subspace}), while each individual matrix $\boldsymbol{R}^{{(t)}}\in \mathbb{R}^{\tilde{d}\times \tilde{d}}$ is allowed to differ. Observe that this model allows us to incorporate the structure discussed in Equation~(\ref{eq:structure1}), since under the COSIE model, the latent positions are given by $\boldsymbol{X}^{(t)} = \tilde{\boldsymbol{V}}\boldsymbol{S}^{(t)}\in \mathbb{R}^{n\times \tilde{d}}$, where $\boldsymbol{S}^{(t)}=|\boldsymbol{R}^{{(t)}}|^{\frac{1}{2}}=\boldsymbol{Q}\boldsymbol{D}^{\frac{1}{2}}$ and $\boldsymbol{Q}$ and $\boldsymbol{D}$ are obtained from the singular value decomposition of $\boldsymbol{R}^{{(t)}}$. Under the COSIE model, \text{MASE} produces simultaneous consistent estimation of the underlying parameters $\tilde{\boldsymbol{V}}$ and $\boldsymbol{R}^{{(t)}}$ for each graph \cite{arroyo2019inference}. In the \text{MASE} algorithm, the spectral decomposition of each adjacency matrix $\boldsymbol{A}^{(t)}$ is first computed separately for each individual graph. We denote the $d$ leading eigenvectors of $\boldsymbol{A}^{(t)}$ (corresponding to the $d$ leading eigenvalues in magnitude) by $\hat{\boldsymbol{V}}^{(t)}\in\mathbb{R}^{n\times d}$ for each $t\in[M]$. Then, let $\hat{\boldsymbol{U}}= \left( \hat{\boldsymbol{V}}^{(1)} \ \cdots \ \hat{\boldsymbol{V}}^{(M)}\right)$ be the $n\times\left(Md\right)$ matrix of concatenated spectral embeddings, and, with a slight abuse of notation, define $\tilde{\boldsymbol{V}}\in\mathbb{R}^{n\times \tilde{d}}$ as the matrix containing the $\tilde{d}$ leading left singular vectors of $\hat{\boldsymbol{U}}$.
Finally, for each $t\in[M]$, set $\hat{\boldsymbol{R}}^{(t)} = \tilde{\boldsymbol{V}}^{T}\boldsymbol{A}^{(t)}\tilde{\boldsymbol{V}}$ and obtain $\hat{\boldsymbol{X}}^{(t)}=\tilde{\boldsymbol{V}}|\hat{\boldsymbol{R}}^{(t)}|^{\frac{1}{2}}$. We now discuss the motivation for \text{MASE} and \text{OMNI} under the structure types described in \S \hyperref[sec:model]{ II. B}. Equation~(\ref{eq:structure1}) can be formulated as a COSIE model with $\tilde{d}=d$, so \text{MASE} is appropriate under~(\ref{eq:structure1}) \cite{arroyo2019inference}; \text{OMNI} achieves low variance with a small bias under Equation~(\ref{eq:structure1}) when $\boldsymbol{S}^{(t)}$ is diagonal \cite{draves2020bias}. Equations~\eqref{eq:structure2} and~\eqref{eq:structure3} can also be formulated as a COSIE model with $\tilde{d}=d'>d$ and $\tilde{\boldsymbol{V}}=\boldsymbol{V}'$, but at the cost of introducing more parameters for $\boldsymbol{V}'$, as explained in \S \hyperref[sec:model]{II. B}. Intuitively, \text{MASE} is preferable when the graphs are close to Equation~(\ref{eq:structure1}), as they then follow a COSIE model with dimension $d$; \text{OMNI} is preferable when the graphs are far from Equation~(\ref{eq:structure1}), offering a better bias-variance trade-off, since \text{MASE} then needs $\tilde{d}=d'>d$ dimensions to model the latent positions. We construct simulations in \S \hyperref[sec:compare]{V} to investigate the performance of \text{MASE} and \text{OMNI} under Equation~(\ref{eq:structure2}). For both \text{MASE} and \text{OMNI}, we can obtain the distribution of the test statistic via the semi-parametric bootstrap proposed in \cite{tang2017semiparametric}. Specifically, for any $\hat{\boldsymbol{X}}\in \mathbb{R}^{n\times d}$ we generate \text{i.i.d.} samples of $\boldsymbol{A}\sim \rdpg{\hat{\boldsymbol{X}} }$ and obtain the corresponding \text{i.i.d.} test statistics under $\rdpg{\hat{\boldsymbol{X}}}$.
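The MASE steps above can be sketched as follows (synthetic two-block data; the sizes and block probabilities are illustrative assumptions, and the graphs are sampled independently from a common SBM so the common-subspace assumption holds):

```python
import numpy as np

def mase(As, d, d_tilde):
    """Sketch of MASE: (i) take the d leading eigenvectors of each A^(t);
    (ii) the d_tilde leading left singular vectors of their concatenation
    give V_tilde; (iii) set R^(t) = V_tilde^T A^(t) V_tilde and
    X^(t) = V_tilde |R^(t)|^{1/2}."""
    Vhats = []
    for A in As:
        vals, vecs = np.linalg.eigh(A)
        idx = np.argsort(-np.abs(vals))[:d]
        Vhats.append(vecs[:, idx])
    U = np.hstack(Vhats)
    V_tilde = np.linalg.svd(U, full_matrices=False)[0][:, :d_tilde]
    Xhats = []
    for A in As:
        R = V_tilde.T @ A @ V_tilde
        Q, s, _ = np.linalg.svd(R)          # |R|^{1/2} = Q diag(sqrt(s))
        Xhats.append(V_tilde @ (Q * np.sqrt(s)))
    return V_tilde, Xhats

# Three independent draws from a common two-block SBM (COSIE holds).
rng = np.random.default_rng(4)
n = 60
Z = np.zeros((n, 2)); Z[:30, 0] = 1; Z[30:, 1] = 1
B = np.array([[0.6, 0.2], [0.2, 0.6]])
P = Z @ B @ Z.T
As = []
for _ in range(3):
    A = np.triu(rng.random((n, n)) < P, 1).astype(float)
    As.append(A + A.T)

V_tilde, Xhats = mase(As, d=2, d_tilde=2)
```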
The two-step procedure for anomaly detection, reporting significant p-values based on the semi-parametric bootstrap, is as follows: first, perform joint spectral embedding with time span $s=2$ or $s=M$, i.e.,~either jointly embed adjacent graphs or jointly embed all available graphs, since the number of jointly embedded graphs can affect the downstream inference task; second, perform hypothesis testing for $H_{0}$ (GraphAD) or $H_{0i}$ (VertexAD), as summarised in Algorithm~\ref{alg:hypothesis testing}. \begin{algorithm} \caption{Two-step anomaly detection with bootstrapped \text{p}-value} \begin{algorithmic} \Input A time series of graphs $\{\boldsymbol{A}^{(t)}\}_{t=1}^{M}$, embedding dimension $d$, joint embedding method $\operatorname{EMBED} \in \{\operatorname{OMNI} ,\operatorname{MASE} \}$, time span $s$. \begin{enumerate} \item \algorithmiciterate{ Let $t=1$, while $t+s-1 \leq M$} \begin{enumerate} \item At time $t$ obtain the latent position estimates $(\{\hat{\boldsymbol{X}}^{(u)} \}_{u=t}^{t+s-1},d)=\operatorname{EMBED}(\boldsymbol{A}^{(t)}, \dots,\boldsymbol{A}^{(t+s-1)})$ within time span $s$, then calculate $y^{(v)}$ and $y_{i}^{(v)}$ for vertex $i=1, \dots,n$ based on~(\ref{eq:indverstat}) and~(\ref{eq:ovegraphstat}) for times $v=t+1, \dots, t+s-1$. \item $t=t+1$ \end{enumerate} \item Use parametric bootstrap with $\hat{\boldsymbol{X}}=\hat{\boldsymbol{X}}^{(t)}$, $s$ and $f$ to generate $B$ samples of $y_b^{(t)}$ under the null hypothesis that $\boldsymbol{X}^{(t)}=\boldsymbol{X}^{(t-1)}$ and $B$ samples of $y_{ib}^{(t)}$ under the null hypothesis that $X_{i}^{(t-1)}=X_{i}^{(t)}$. \item Calculate empirical \text{p}-values at $t$ as $p^{(t)}= \frac{\sum_{b=1,...,B} I(y_b^{(t)}>y^{(t)})}{B}$. Calculate empirical \text{p}-values $p_{i}^{(t)}= \frac{\sum_{b=1,...,B} I(y_{ib}^{(t)}>y_{i}^{(t)})}{B}$ at each time point $t$ and vertex $i$.
\end{enumerate} \Output Report empirical \text{p}-values $p^{(t)}$ at time $t$ for GraphAD and empirical \text{p}-values $p_{i}^{(t)}$ at time point $t$ and vertex $i$ for VertexAD, $t\in [M]$, $i \in [n]$. \end{algorithmic} \label{alg:hypothesis testing} \end{algorithm} Next, we introduce a second approach to GraphAD and VertexAD: a two-step procedure using control charts. There are four fundamental elements in a control chart: the estimated statistics, a moving-average mean, a moving-average measure of dispersion, and a rule for declaring points out of control. In step one, we perform the same joint spectral embedding with time span $s=2$ or $s=M$ as in Algorithm~\ref{alg:hypothesis testing}, i.e.,~either jointly embed adjacent graphs or jointly embed all available graphs, and calculate the corresponding estimated statistics in equations~(\ref{eq:indverstat}) and~(\ref{eq:ovegraphstat}). In step two, instead of testing the null hypotheses $H_{0}$ or $H_{0i}$ simultaneously for all time points, we approximately test the null hypotheses $H_{0}^{(t^{*})}$ and $H_{0i}^{(t^{*})}$ sequentially for $t^{*}=l+1, \dots,M$ using control charts with time window length $l$. To determine the tolerances $\tau_{\text{graph}}$ and $\tau_{\text{vertex}}$ in $H_{0}^{(t^{*})}$ and $H_{0i}^{(t^{*})}$, we jointly embed the $l$ graphs $\{\boldsymbol{A}^{(t)}\}_{t=t^{*}-l}^{t^{*}-1}$ with time span $s$ as in step one in the time window $\mathcal{W}^{(t^{*},l)}$ and obtain the corresponding test statistics $\widetilde{y}^{(t^{*}-l+1)}, \dots,\widetilde{y}^{(t^{*}-1)}$. We then calculate the moving-average mean and the adjusted moving range \cite{montgomery2007introduction} as follows: \begin{equation} \label{eq:movingaverage} \bar{y}^{(t^{*})}= \frac{\sum_{v'=t^{*}-l+1}^{t^{*}-1} \widetilde{y}^{(v')} }{l-1} , \end{equation} \begin{equation} \label{eq:movingrange} \bar{\sigma}^{(t^{*})}= \frac{1}{1.128(l-2)} \sum_{v'=t^{*}-l+2}^{t^{*}-1} |\widetilde{y}^{(v')} - \widetilde{y}^{(v'-1)} | .
\end{equation} To perform VertexAD, calculate the moving average and the UnWeighted AVErage of subgroup estimates based on subgroup Standard Deviations (``UWAVE-SD'') \cite{wetherill1991statistical,doi:10.1080/00224065.1969.11980368}: \begin{equation} \label{eq:movingaveragever} \bar{y}_{i}^{(t^{*})}= \frac{\sum_{i=1}^{n} \sum_{v'=t^{*}-l+1}^{t^{*}-1} \widetilde{y}_{i}^{(v')} }{ n(l-1)} , \end{equation} \begin{equation} \label{eq:movingsdver} \bar{\sigma}_{i}^{(t^{*})}= \frac{1}{c(n)(l-1)} \sum_{v'=t^{*}-l+1}^{t^{*}-1} \hat{\sigma}(v') , \end{equation} where the $\hat{\sigma}(v')$ are the sample standard deviations of $\widetilde{y}_{i}^{(v')}$ over the $n$ vertices at time $v'$, $$c(n)=\sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma((n-1)/2)},$$ and $\Gamma(\cdot)$ is the Gamma function. Specifically, the control chart has a central solid line (CL) at the moving average, and dashed lines at the upper and lower control limits (UCL and LCL), $\bar{y}^{(t^{*})}\pm 3 \bar{\sigma}^{(t^{*})} $. The test statistics $y^{(t^{*})}$ are then added as points in the plot. Out-of-control, or anomalous, points are marked in red in the control charts. We summarise this procedure in Algorithm~\ref{alg:controlchart} with $s=2$. See Fig.~\ref{fig:idealconchart1} for an illustration (details in \S \hyperref[sec:illus]{ IV. A}). \begin{algorithm} \caption{Two-step anomaly detection with control charts} \begin{algorithmic} \Input A time series of graphs $\{\boldsymbol{A}^{(t)}\}_{t=1}^{M}$, embedding dimension $d$, joint embedding method $\operatorname{EMBED} \in \{\operatorname{OMNI} ,\operatorname{MASE} \}$, time span $s=2$, and time window length $l$.
\begin{enumerate} \item \algorithmiciterate{ Let $t=1$, while $t+1 \leq M$} \begin{enumerate} \item At time $t$ obtain the latent position estimates $(\{\hat{\boldsymbol{X}}^{(u)} \}_{u=t}^{t+1},d)=\operatorname{EMBED}(\boldsymbol{A}^{(t)}, \dots,\boldsymbol{A}^{(t+1)})$ within time span $s$, then calculate $y^{(t+1)}$ and $y_{i}^{(t+1)}$ for vertex $i=1, \dots,n$ based on equations~(\ref{eq:indverstat}) and~(\ref{eq:ovegraphstat}). \item If $t>l$, calculate $\bar{y}^{(t)}$ and $\bar{\sigma}^{(t)}$, and $\bar{y}_{i}^{(t)}$ and $\bar{\sigma}_{i}^{(t)}$ for vertex $i$ based on equations~(\ref{eq:movingaverage}), (\ref{eq:movingrange}), (\ref{eq:movingaveragever}) and (\ref{eq:movingsdver}) using $\{\widetilde{y}^{(v')}\}_{v'=t-l+1}^{t-1}$ and $\{\widetilde{y}_{i}^{(v')}\}_{v'=t-l+1}^{t-1}$. \item $t=t+1$ \end{enumerate} \end{enumerate} \Output Report anomalous graphs or vertices using $y^{(t)}$ and $y_{i}^{(t)}$ based on Shewhart's rule from the control chart. \end{algorithmic} \label{alg:controlchart} \end{algorithm} \section{Simulations of Time Series of Graphs} \label{sec:illus} In this section, we provide simulations to assess the performance of our methods in different scenarios. Specifically, in \S \hyperref[sec:illus]{ IV. A}, we provide three scenarios for GraphAD and VertexAD via Algorithms~\ref{alg:hypothesis testing} and~\ref{alg:controlchart}, and we empirically compare the effects of different combinations of hyper-parameters on the subsequent inference. In Algorithm~\ref{alg:hypothesis testing}, the threshold for empirical p-values to declare statistically significant anomalous graphs or anomalous vertices is the level $\alpha \in [0,1]$. In Algorithm~\ref{alg:controlchart}, the threshold for the test statistics is $\bar{y}^{(t)}+ 3 \bar{\sigma}^{(t)} $. Our default time window length is $l=11$ except where otherwise mentioned.
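The control-chart limits of Equations~(\ref{eq:movingaverage}) and~(\ref{eq:movingrange}) can be sketched directly; the window values below are synthetic, and the new statistic $0.2$ is an assumed out-of-control value:

```python
import numpy as np

def control_limits(y_window):
    """Centre line (moving-average mean) and UCL from the adjusted moving
    range, for a window of l-1 statistics (cf. the GraphAD equations)."""
    y = np.asarray(y_window, dtype=float)
    m = len(y)                        # m = l - 1 statistics in the window
    centre = y.mean()
    # Average moving range over the m - 1 = l - 2 consecutive pairs,
    # scaled by the d2 = 1.128 constant for subgroups of size two.
    sigma_bar = np.abs(np.diff(y)).sum() / (1.128 * (m - 1))
    return centre, centre + 3 * sigma_bar

# In-control statistics, then one large new point flagged by the UCL.
rng = np.random.default_rng(5)
window = 0.05 + 0.005 * rng.standard_normal(10)   # l = 11 -> 10 statistics
cl, ucl = control_limits(window)
anomalous = 0.2 > ucl          # a large new y^(t*) exceeds the UCL
```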
Furthermore, since the performance of joint embedding methods depends on the number of jointly embedded graphs, we investigate the efficacy of the different time spans $s=2$ and $s=M$ for GraphAD and VertexAD with both algorithms. For example, given $M$ available graphs, when $s=2$ we estimate the latent positions using adjacent graphs $(\boldsymbol{A}^{(t-1)},\boldsymbol{A}^{(t)})$ for $t=2, \dots,M$ (denoted \text{OMNI2} or \text{MASE2}). When $s=M$, we estimate the latent positions by jointly embedding all graphs $\{\boldsymbol{A}^{(t)}\}_{t=1}^{M}$ for Algorithm~\ref{alg:hypothesis testing} (denoted \text{OMNI12} or \text{MASE12} if $M=12$). For Algorithm~\ref{alg:controlchart}, when $s=M$ we jointly embed the graphs $\{\mathcal{G}^{(t)}\}_{t=l+1}^{M}$ to obtain the test statistics $y^{(t^*)}$ (thus also denoted \text{OMNI12} or \text{MASE12} if $M=22$). \subsection{Data Generation} \label{sec:datagen} \emph{Scenario 1:} We start with an illustrative example. Draw one-dimensional ($d=1$) latent positions $X_1^{(1)}$, $X_2^{(1)},...,X_n^{(1)} \in \mathbb{R}$ \text{i.i.d.} according to a Uniform distribution $U(0.2, 0.8)$. Here $n=100$, so there are $100$ vertices in the graph. Assembling these $n$ points into the vector $X^{(1)} \in \mathbb{R}^{n}$, we generate graph $\mathcal{G}^{(1)}$ with adjacency matrix $\boldsymbol{A}^{(1)}$ with entries $\boldsymbol{A}^{(1)} \sim \bern{X^{(1)} {X^{(1)}}^{T} }$. Thus, $\boldsymbol{A}^{(1)}\sim \rdpg{X^{(1)}}$. We similarly generate graphs $\mathcal{G}^{(t)}$, $t=-9, \dots,5,8, \dots,12$ (note that, in order to show results of Algorithm~\ref{alg:controlchart} starting at $t=2$ with $l=11$, our graphs start at time $-9$), assuming their latent positions do not change: $X^{(t)}=X^{(1)}$, $\boldsymbol{A}^{(t)} \sim \bern{ X^{(t)} {X^{(t)}}^{T} }$ for $t=-9,\dots,5,8,\dots,12$.
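The scenario-1 generation, together with the perturbation at the anomalous time points described next, can be sketched in a few lines; the perturbation scale $\delta_x=0.1$ and the random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, delta_n, delta_x = 100, 20, 0.1   # delta_x = 0.1 is an assumed scale

# Scenario-1 latent positions, held fixed across time under the null.
X1 = rng.uniform(0.2, 0.8, size=n)

# Perturbation at the anomalous times: +/- delta_x on the first delta_n
# vertices (half shifted up, half down), zero elsewhere.
Delta = np.concatenate([np.ones(delta_n // 2),
                        -np.ones(delta_n // 2),
                        np.zeros(n - delta_n)])
X6, X7 = X1 + delta_x * Delta, X1 - delta_x * Delta

def sample_rdpg(x, rng):
    """Sample A ~ Bernoulli(x x^T), symmetric with an empty diagonal."""
    P = np.clip(np.outer(x, x), 0.0, 1.0)
    A = np.triu(rng.random((len(x), len(x))) < P, 1).astype(float)
    return A + A.T

A6 = sample_rdpg(X6, rng)            # graph at an anomalous time point
```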
We use $\delta_{x}\in \mathbb{R}^{+}$ and $\delta_{n}\in \mathbb{N}^{+}$ to control the scale of the perturbation and the number of perturbed vertices, and denote $1_{n}:=(1,\cdots,1)^{T}\in \mathbb{R}^{n}$. We perturb the latent positions at two time points: $X^{(6)} = X^{(1)} + \delta_{x} \cdot \Delta$ and $X^{(7)} = X^{(1)} - \delta_{x} \cdot \Delta,$ where $\Delta=(1_{\delta_n/2}^{T},-1_{\delta_n/2}^{T},0_{n - \delta_n}^{T} )^{T} \in \mathbb{R}^{n}$, and generate graphs $\mathcal{G}^{(t)}$ with adjacency matrices $\boldsymbol{A}^{(t)} \sim \bern{X^{(t)} {X^{(t)}}^{T} }$ for $t=6,7$. The perturbation $\Delta$ is constructed such that only the first $\delta_n=20$ vertices are anomalous at $t=6,7$. We have thus simulated a time series of $M=22$ graphs, each with $100$ vertices, with artificial anomalies at time points $6$ and $7$, each of which has $\delta_n =20$ anomalous vertices. The control chart presented in Fig.~\ref{fig:idealconchart1} provides intuition for the test statistic $y^{(t)}$ and motivation for applying Algorithm~\ref{alg:hypothesis testing} and Algorithm~\ref{alg:controlchart} to GraphAD and VertexAD. Fig.~\ref{fig:idealconchart1} plots $\|X^{(t-1)}-X^{(t)}\|$, $t=2,\dots,12$ from scenario $1$, with the central line (CL) being the average of $11$ of the $y^{(t)}$ and the dashed line (UCL) being the average plus three adjusted moving ranges. So, when $\hat{X}^{(t)} $ is sufficiently close to $X^{(t)}$, $y^{(t)}$ at an anomalous time point will lie outside of the UCL, which motivates us to investigate Algorithm~\ref{alg:controlchart} in this scenario. \begin{figure}[!t] \centering \includegraphics[width=.61\linewidth]{illuswolcl.png} \caption{Illustrative control chart for time series of graphs when $M=22$ and $l=11$.
Dots are $\|X^{(t)}-X^{(t-1)}\|$, $t=2,\dots,12$; the center solid line (CL) represents the average of $\|X^{(t)}-X^{(t-1)}\|$, and the dashed line (UCL) represents the average of $\|X^{(t)}-X^{(t-1)}\|$ plus three adjusted moving sample ranges. The red dot represents the anomaly time, which lies outside of the UCL.} \label{fig:idealconchart1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.5\linewidth]{alpha0125illus.png} \caption{Illustrative plot of latent positions for the time series of graphs generated in scenario $2$ (the latent positions form a $3$-simplex in $\mathbb{R}^{3}$). The orange dots are normal latent positions $X_{i}^{(t)}$, $t=5,6,7,8$. The green triangles represent the shifted latent positions at anomalous time points $6$ and $7$ for vertices close to one of the communities. The perturbation at anomalous time points affects only the connectivity of one community in the MMSBM.} \label{fig:alpha0125} \end{figure} \emph{Scenario 2:} We first generate graphs from a mixed membership stochastic block model (MMSBM) \cite{airoldi2008mixed} with a constant community structure across time. Under this assumption, the singular vectors of the latent positions share an invariant subspace $\boldsymbol{V} \in \mathbb{R}^{n\times d}$ ($d=4$) as in Equation~(\ref{eq:structure1}). Specifically, we generate $\mathcal{G}^{(t)}$ with $n=400$ vertices from the MMSBM, i.e.,~$\boldsymbol{A}^{(t)}\sim \bern{\boldsymbol{Z}\boldsymbol{B}\boldsymbol{Z}^{T}}$ where $\boldsymbol{Z}\in\mathbb{R}^{n\times 4}$ and $\boldsymbol{B}\in \mathbb{R}^{4\times 4}$, $t=1,\cdots,12$. The block connectivity matrix is $$\boldsymbol{B} =\begin{bmatrix} p & q & q & q\\ q & p & q & q\\ q & q & p & q\\ q & q & q & p\\ \end{bmatrix},$$ with $p\sim U(0.5,1)$ and $q\sim U(0,0.5)$, so that $p>q$, $\text{rank}(\boldsymbol{B})=4$, and $\boldsymbol{B}$ is positive semi-definite.
Each row of $\boldsymbol{Z}$ is generated from $\boldsymbol{Z}_{i} \sim \dirich{\theta \cdot 1_d}$, $\theta=0.125$ and $i=1, \dots,n$, representing community membership preferences. So graphs $\boldsymbol{A}^{(t)}\sim \bern{\boldsymbol{X}^{(t)}\boldsymbol{X}^{(t)^{T}} }$ where $\boldsymbol{X}^{(t)} = \boldsymbol{V} \boldsymbol{S}^{(t)}\in \mathbb{R}^{n\times d}$ with $\boldsymbol{V}=\boldsymbol{Z}$ and $\boldsymbol{S}^{(t)}=|\boldsymbol{B}|^{\frac{1}{2}}$. We then perturb some latent positions with a strong linear dependency between the perturbation and the invariant subspace $\boldsymbol{V}$. Specifically, the perturbation at anomalous time points affects only the connectivity of one community in the MMSBM (see Fig.~\ref{fig:alpha0125}): consider the corresponding latent position $\boldsymbol{X}^{(1)}\in \mathbb{R}^{n\times 4}$ of $\mathcal{G}^{(1)}$, with $E[\boldsymbol{A}^{(1)}]=\boldsymbol{P}^{(1)}=\boldsymbol{Z}\boldsymbol{B}\boldsymbol{Z}^{T}=\boldsymbol{X}^{(1)}\boldsymbol{X}^{(1)^{T}}$, and keep $\boldsymbol{X}^{(t)}=\boldsymbol{X}^{(1)}$ as unchanged latent positions for $t= 2,\cdots,5,8,\cdots,12$. We perturb the latent positions at two time points: $\boldsymbol{X}^{(6)} = \boldsymbol{X}^{(1)} + \delta_x \cdot \boldsymbol{\Delta}$ and $\boldsymbol{X}^{(7)} = \boldsymbol{X}^{(1)} - \delta_x \cdot \boldsymbol{\Delta}.$ We choose a vertex and perturb the neighbors that are most similar to it in terms of their community assignments, i.e.,~the perturbation matrix $\boldsymbol{\Delta} \in \mathbb{R}^{n \times d}$ is constructed to affect only the $\delta_{n}=100$ nearest vertices of one vertex: each row $\boldsymbol{\Delta}_{i \cdot} = \xi$ if $X_{i }^{(1)}$ is among the $\delta_{n}$ nearest neighbors of $X_{1}^{(1)}$, and $\boldsymbol{\Delta}_{i \cdot}=0$ otherwise. $X_{i }^{(1)}$ is said to be the $k$-th nearest neighbor of $x$ if the distance $\|X_{i }^{(1)}-x\|_{2} $ is the $k$-th smallest among $\|X_{1 }^{(1)}-x\|_{2},\cdots,\|X_{n}^{(1)}-x\|_{2}$.
Here $\xi$ is a fixed $d$-dimensional vector drawn once as $\xi \sim 0.6 \cdot \dirich{1_{d}}+0.2$. To assess the robustness of our methods under different parameters, we place distributions on the parameters $p$, $q$ and $\xi$ in the simulations. With these $\{\boldsymbol{X}^{(t)}\}_{t=1}^{12}$ in hand, we generate graphs $\boldsymbol{A}^{(t)}\sim \rdpg{\boldsymbol{X}^{(t)}}$ independently, and obtain graphs under Equation~(\ref{eq:structure2}) but close to Equation~(\ref{eq:structure1}), each approximately an SBM. In summary, we generate a time series of graphs $\{\boldsymbol{A}^{(t)}\}_{t=1}^{12}$ with $n=400$. There are $\delta_{n}=100$ anomalous vertices, which receive a $\delta_{x} \cdot \xi = 0.12 \cdot \xi$ perturbation at times $6,7$, with a strong linear dependency between the perturbation and the subspace $\boldsymbol{V}$: when $\theta = 0$, the graphs satisfy Equation~\eqref{eq:structure1}, and the perturbation $\boldsymbol{\Delta}$ at anomalous time points affects only the connectivity of one community in the MMSBM. When $\theta$ is not zero, the graphs are under Equation~\eqref{eq:structure2} and the perturbation changes the community memberships for a group of vertices in the MMSBM. \emph{Scenario 3:} In this last scenario, we generate graphs as in scenario $2$, except that the $4$-dimensional vector $\boldsymbol{Z}_{i}$ (the community membership preferences for every vertex) in graph $\mathcal{G}^{(1)}$ is first generated independently from $ \dirich{0.875 \cdot 1_d}$, $i=1, \dots,n$. In this case, the graphs are under Equation~(\ref{eq:structure2}) but far from Equation~(\ref{eq:structure1}). \subsection{Metric} \label{sec:metric} To control the false positive rate in VertexAD, we consider a rank-based metric for evaluating the test statistics in Equation~(\ref{eq:indverstat}). We first rank the test statistics in decreasing order across vertices.
For anomalous vertices, the test statistics should be large compared with those of non-anomalous vertices, so their ranks should be small and their reciprocal ranks large, while non-anomalous vertices will have small reciprocal ranks. Specifically, we first obtain the test statistics from Equation~(\ref{eq:indverstat}) as in step $1$ of Algorithm~\ref{alg:hypothesis testing}, then calculate reciprocal ranks $\text{RR}_{i}^{(t)}$ (ordered decreasingly across vertices) for the vertices at time $t$, i.e.,~$\text{RR}_{i}^{(t)}=1/r(y_{i}^{(t)})$, where $r(y_{i}^{(t)})$ is the rank of $y_{i}^{(t)}$, in decreasing order, among $\{y_{1}^{(t)}, \dots,y_{n}^{(t)}\}$. Finally, we compute the difference between the average reciprocal rank of the anomalous vertices and the average reciprocal rank of the non-anomalous vertices at an anomalous time point $t^{*}$. This reciprocal rank difference is a metric for VertexAD, since large values suggest a large difference between anomalous and non-anomalous vertices. Using this reciprocal rank difference, we are able to compare results from different simulation settings or even different statistics. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{qccillusall.png} \caption{ Control charts for a time series of graphs with an anomaly at time points $6$ and $7$ (scenario $1$). The left panel is for GraphAD: the center solid line (CL) represents the moving average of sample means $\bar{y}^{(t)}$, the dashed line (UCL) represents $\bar{y}^{(t)} + 3\bar{\sigma}^{(t)}$, where $\bar{\sigma}^{(t)}$ is the adjusted moving sample range; black dots are $y^{(t)}$ at times where the latent positions are claimed to be normal, and the red dots are those $y^{(t)}$ which lie outside of the UCL and are claimed as anomalous graphs.
The right panel is for VertexAD: the center solid line (CL) represents the moving average of sample means ${\bar{y}_{i}}^{(t)}$, the dashed line (UCL) represents ${\bar{y}_{i}}^{(t)} + 3{\bar{\sigma}_{i}}^{(t)}$, where ${\bar{\sigma}_{i}}^{(t)}$ is UWAVE-SD; black dots are $y_{i}^{(t)}$ at times where the latent positions are claimed to be normal, and the red dots are those $y_{i}^{(t)}$ which lie outside of the UCL and are claimed as anomalous vertices. The first $\delta_n = 20$ vertices at time points $6:7$ are the true anomalous vertices.} \label{fig:conchart1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{pvaluesexample3method.png} \caption{Hypothesis test for a time series of graphs generated as in Fig.~\ref{fig:conchart1}. The left panel: green error bars are simultaneous confidence intervals for empirical p-values $\bar{p}^{(t)}$, and the horizontal line is the significance level $0.05$. The right panel: green error bars are simultaneous confidence intervals for empirical \text{p}-values $p_{i}^{(t)}$ at vertex $i$, black dots are the mean empirical \text{p}-values $\bar{p}_{i}^{(t)}$ for the unperturbed vertices, and the red dots are the mean empirical \text{p}-values $\bar{p}_{i}^{(t)}$ for the perturbed vertices. The first $\delta_n = 20$ vertices at time points $6:7$ are the true anomalous vertices.} \label{fig:mco} \end{figure} \subsection{Results} \label{sec:result} With Fig.~\ref{fig:idealconchart1} in mind, we first present the results of Algorithm~\ref{alg:controlchart} for scenario $1$, which should approximate Fig.~\ref{fig:idealconchart1}, but plotting $y^{(t)}$ instead of $\|X^{(t)}-X^{(t-1)}\|$ and adjusting the calculation of CL and UCL based on $y^{(t)}$. The control chart generated from Algorithm~\ref{alg:controlchart} for GraphAD is presented in the left panel of Fig.~\ref{fig:conchart1}, and for VertexAD in the right panel, demonstrating the applicability of our approaches in scenario $1$.
In the left panel of Fig.~\ref{fig:conchart1}, we show that Algorithm~\ref{alg:controlchart} with either joint embedding method and either time span $s=2$ or $s=12$ performs GraphAD successfully: the anomalies at time points $6$ and $7$ are detected. Furthermore, in Fig.~\ref{fig:conchart1} we observe that charts with different time spans based on \text{OMNI} have similar behavior, while those based on \text{MASE} are comparatively more distinct. This suggests that the time span $s$ has a more significant effect on \text{MASE} than on \text{OMNI}. In the right panel of Fig.~\ref{fig:conchart1}, we perform VertexAD via Algorithm~\ref{alg:controlchart} with the same combinations of joint embedding methods and time spans. It shows that Algorithm~\ref{alg:controlchart} detects most of the anomalous vertices, since most of the first $20$ vertices lie outside of the UCL. The only difference between GraphAD and VertexAD is that we calculate the moving average as in Equation~(\ref{eq:movingaveragever}) and the UWAVE-SD in Equation~(\ref{eq:movingsdver}) instead of Equations~(\ref{eq:movingaverage}) and (\ref{eq:movingrange}). Furthermore, all vertices at time $t$ share the same CL and UCL, since there is no ordering among vertices like the natural ordering of time. We include only two time points in the right panel of Fig.~\ref{fig:conchart1} for display purposes: one for the normal adjacent time points $2:3$ and the other for the anomalous time points $6:7$. Other normal cases are similar to the normal one included here. From the comparison of combinations of joint embedding methods and time spans in Fig.~\ref{fig:conchart1}, we observe patterns in VertexAD similar to those in GraphAD: the time span $s$ has a more significant effect on \text{MASE} than on \text{OMNI} in this scenario. When $\tau=0$ we can consider inference as two-sample testing.
Note that there is a difference between anomaly detection and two-sample testing: on the one hand, anomaly detection can in principle be defined as hypothesis testing as in \S \hyperref[sec:Method]{III. A}, and there are a number of existing methods, such as control charts, that proceed approximately while allowing for intrinsic variance across time. On the other hand, we can directly use two-sample testing for GraphAD and VertexAD if we consider $\tau=0$. Furthermore, two-sample testing not only provides a natural algorithm for GraphAD and VertexAD, but also lays out a systematic framework to compare the effects of different combinations of joint embedding methods and time spans on downstream GraphAD and VertexAD for both Algorithms~\ref{alg:hypothesis testing} and \ref{alg:controlchart} as in \S \hyperref[sec:compare]{V} (since control charts approximately perform hypothesis testing). For both \text{MASE} and \text{OMNI}, we first generate $400$ samples of the test statistics $y^{(t)}$ and $y_{i}^{(t)}$ under the null hypothesis to obtain the null distribution, then calculate \text{p}-values and claim an anomaly when the \text{p}-value is significant at level $0.05$. Another $200$ Monte Carlo simulations are implemented to obtain the median estimate \cite{bhattacharya2002median} of the \text{p}-values for \text{MASE} and \text{OMNI} with different time spans $s$, and corresponding simultaneous confidence intervals are supplied using the Bonferroni correction. In scenarios $1$, $2$, and $3$ we observe very similar patterns for \text{OMNI} between time spans $2$ and $12$ in both GraphAD and VertexAD as in Algorithm~\ref{alg:controlchart}, so we include only \text{OMNI} with $s=2$ here. The result of Algorithm~\ref{alg:hypothesis testing} is presented in Fig.~\ref{fig:mco}; all simultaneous confidence intervals for \text{MASE} and \text{OMNI} for GraphAD and VertexAD are at significance level $0.05$.
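The empirical p-value computation with a Bonferroni-corrected simultaneous level can be sketched as follows. The standard-normal null and the observed statistics are stand-ins for the bootstrapped null distribution of the actual test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_pvalue(y_obs, null_samples):
    """Fraction of null-distribution samples at least as large as the observed
    statistic, with an add-one correction so the p-value is never exactly 0."""
    null_samples = np.asarray(null_samples)
    return (1 + np.sum(null_samples >= y_obs)) / (1 + len(null_samples))

# 400 draws standing in for the bootstrap null distribution of the test
# statistic (a standard normal is used purely for illustration).
null = rng.normal(size=400)

p_typical = empirical_pvalue(0.1, null)   # statistic consistent with the null
p_extreme = empirical_pvalue(5.0, null)   # statistic far in the upper tail

# Bonferroni correction across T simultaneous tests: compare each p-value
# against alpha / T to keep the familywise error rate at alpha.
alpha, T = 0.05, 10
reject_extreme = p_extreme < alpha / T
```

The add-one correction reflects that an empirical p-value from $B$ bootstrap replicates cannot resolve probabilities below $1/(B+1)$.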
Fig.~\ref{fig:mco} shows that via Algorithm~\ref{alg:hypothesis testing} both \text{OMNI} and \text{MASE} perform well for the two tasks in scenario $1$. Note, however, that the right panel of Fig.~\ref{fig:conchart1} suggests a potentially high false positive rate for \text{MASE} in VertexAD. Recall that only the first $20$ vertices are created to be anomalous, and while \text{MASE} detects the anomalous vertices correctly with \text{p}-values below level $0.05$, the empirical \text{p}-values for unperturbed vertices are also close to level $0.05$. For example, the right panel of Fig.~\ref{fig:mco} shows that the \text{p}-value estimates of non-anomalous vertices for \text{MASE} are close to level $0.05$, and they will fall under $0.05$ as we increase the number of vertices further. \begin{figure}[t!] \centering \includegraphics[width=.61\linewidth]{rrexample3method.png} \caption{Reciprocal rank estimates averaged across Monte Carlo simulations for the time series of graphs generated in scenario $1$. Anomalous vertices are red with standard error bars in green. The first $\delta_n = 20$ vertices at time points $6:7$ are the true anomalous vertices.} \label{fig:rr} \end{figure} To address this high false positive issue, we propose evaluating VertexAD with the reciprocal rank statistic introduced in \hyperref[sec:metric]{IV. B}; results are presented in Fig.~\ref{fig:rr}. Fig.~\ref{fig:rr} shows that the reciprocal rank detects anomalous vertices with a significant gap between anomalous and non-anomalous vertices. From here on, we present VertexAD results using either the test statistics from Algorithm~\ref{alg:controlchart} or the reciprocal rank. In summary, Fig.~\ref{fig:mco} shows the application of Algorithm~\ref{alg:hypothesis testing} to GraphAD and Fig.~\ref{fig:rr} shows the application to VertexAD in scenario $1$.
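The reciprocal rank difference defined in \hyperref[sec:metric]{IV. B} can be sketched as follows; the vertex statistics here are invented for illustration.

```python
import numpy as np

def reciprocal_rank_difference(y, anomalous):
    """Rank the vertex statistics in decreasing order (rank 1 = largest),
    take reciprocal ranks RR_i = 1 / rank_i, and return the mean RR of the
    anomalous vertices minus the mean RR of the rest. Large values indicate
    that the anomalous vertices dominate the top of the ranking."""
    y = np.asarray(y, dtype=float)
    order = np.argsort(-y)                  # indices sorted by decreasing y
    ranks = np.empty(len(y), dtype=int)
    ranks[order] = np.arange(1, len(y) + 1)
    rr = 1.0 / ranks
    mask = np.zeros(len(y), dtype=bool)
    mask[np.asarray(anomalous)] = True
    return rr[mask].mean() - rr[~mask].mean()

# Five vertices, the first two perturbed so their statistics are largest:
# ranks are (1, 2, 4, 3, 5), so RR = (1, 1/2, 1/4, 1/3, 1/5).
y = [9.0, 7.0, 1.0, 2.0, 0.5]
diff = reciprocal_rank_difference(y, anomalous=[0, 1])
```

Because the metric depends only on ranks, it can be compared across simulation settings or across different test statistics, as noted above.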
\begin{figure}[ht] \centering \includegraphics[width=0.61\linewidth]{sdmcomultid4june18n400.png} \caption{On the left, green error bars are simultaneous confidence intervals for empirical \text{p}-values $p^{(t)}$ for the time series of graphs generated in scenario $2$. Black dots are the mean empirical \text{p}-values $\bar{p}^{(t)}$, and the horizontal line is the significance level $0.05$. In the right panel, green error bars are for the reciprocal rank $\text{RR}_{i}^{(t)}$ at vertex $i$, with the first $\delta_n = 100$ vertices at time points $6:7$ being the true anomalous vertices. } \label{fig:sdmultid4mar09} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.61\linewidth]{ddmcomultid4june18n400.png} \caption{On the left, green error bars are simultaneous confidence intervals for empirical \text{p}-values $p^{(t)}$ for the time series of graphs generated in scenario $3$. Black dots are the mean empirical \text{p}-values $\bar{p}^{(t)}$, ($\delta_n = 100$, $\delta_x = 0.12$), and the horizontal line is the significance level $0.05$. In the right panel, green error bars are for the reciprocal rank $\text{RR}_{i}^{(t)}$ at vertex $i$, with the first $\delta_n = 100$ vertices at time points $6:7$ being the true anomalous vertices.} \label{fig:ddmultid4mar09} \end{figure} Fig.~\ref{fig:sdmultid4mar09} and Fig.~\ref{fig:ddmultid4mar09} consider scenarios $2$ and $3$. Note that we include only time span $s=2$ here, as the performance difference between $s=2$ and $s=12$ is not significant. Specifically, for scenario $2$ in Fig.~\ref{fig:sdmultid4mar09}, the left panel shows that both methods detect the anomaly in GraphAD successfully with significant \text{p}-values. For VertexAD, we observe a significant gap between the reciprocal ranks of anomalous and non-anomalous vertices in the right panel of Fig.~\ref{fig:sdmultid4mar09}, which demonstrates that the test statistics for both \text{OMNI} and \text{MASE} are powerful in this approximate \text{SBM} scenario $2$.
On the other hand, for scenario $3$ in Fig.~\ref{fig:ddmultid4mar09}, we demonstrate that \text{OMNI} performs better than \text{MASE} in GraphAD, as the median \text{p}-value estimates with $s=2$ are $0.239$ ($95\%$ simultaneous confidence interval $[0.168, 0.312]$) for \text{OMNI} and $0.321$ ($95\%$ simultaneous confidence interval $[0.245, 0.365]$) for \text{MASE}, so \text{OMNI} can be employed successfully in this scenario $3$. Results of VertexAD for the mixed-membership SBM scenario $3$ are in the right panel of Fig.~\ref{fig:ddmultid4mar09} and show that there is a subtle difference between the reciprocal ranks in the anomalous and non-anomalous cases. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{onepargradinvadnullnmc50pvalnmc200aug10.png} \caption{On the left, error bars ($\text{mean} \pm \text{standard error}$) are for the empirical power $\beta$ for GraphAD at the anomalous time points $6:7$ with respect to varying $\theta$; the time series of graphs are generated as in scenario $2$ from \S \hyperref[sec:illus]{IV. A} except with varying $\theta$ and fixing $p=0.8$, $q=0.3$ and $\xi=0.3\cdot 1_{4}$. In the right panel, error bars ($\text{mean} \pm \text{standard error}$) are for the difference between the average reciprocal rank for anomalous vertices and that for non-anomalous vertices at time points $6:7$ for VertexAD.} \label{fig:power1} \end{figure} \section{\text{OMNI} vs. MASE} \label{sec:compare} Our goal in this section is to provide statistically justified answers to the following two questions: \begin{enumerate} \item What structure in the model makes one method preferable over the other? \item How do embedding time spans affect subsequent inference? \end{enumerate} Previous work \cite{arroyo2019inference} and our results in scenario $2$ show that \text{MASE} performs competitively with respect to \text{OMNI} for testing in simulated stochastic block models approximately under Equation~(\ref{eq:structure1}).
Combining this with the fact that the graphs generated in scenarios $1$ and $3$ in \S \hyperref[sec:illus]{IV. A} are under Equation~(\ref{eq:structure2}) and far from Equation~(\ref{eq:structure1}), we answer these two questions with time series of graphs generated similarly to scenarios $2$ and $3$. Specifically, graphs are generated as in scenario $2$ except that the community membership preferences $\boldsymbol{Z}_{i}$, $i=1, \dots,n$ are drawn from $\dirich{\theta \cdot 1_{d}}$ for some $\theta \in [0,1]$, and we fix $p=0.8$, $q=0.3$ and $\xi=0.3 \cdot 1_{4}\in \mathbb{R}^{4}$. In other words, the time series of graphs is generated so that the graphs share an invariant subspace $\boldsymbol{V}$ in Equation~(\ref{eq:structure1}), and we then perturb some latent positions with a perturbation parameterized by $\theta$, which describes the extent of linear dependency between the perturbation and the subspace $\boldsymbol{V}$. When $\theta =0$, the graphs are from an SBM with the same block structure but different block connectivity matrices; $\theta =0$ implies the perturbation is linearly dependent on the invariant subspace, as $\boldsymbol{\Delta}$ can be factorized as the product of $\boldsymbol{Z}$ and some $4\times 4$ matrix. When $\theta = 1$, the graphs are from an MMSBM with different community membership preferences; $\theta = 1$ implies the perturbation is linearly independent of the invariant subspace. To show that the relative performance of \text{OMNI} and \text{MASE} can be characterized by the linear dependency between the subspace $\boldsymbol{V}$ in Equation~(\ref{eq:structure2}) and the perturbation, we carry out a comparative analysis for GraphAD and VertexAD with Algorithm~\ref{alg:hypothesis testing}. In particular, the empirical power is calculated as the ratio of the number of significant \text{p}-value estimates at the anomalous time points to the number of Monte Carlo simulations.
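The $\theta$-parameterized membership generation can be sketched as follows. Only the Dirichlet draw is shown, and the guard against $\theta=0$ (where the Dirichlet parameters would be degenerate) is our own convention; in the simulations the $\theta=0$ case corresponds to exact block memberships.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_memberships(n, d, theta):
    """Draw community-membership preference vectors Z_i ~ Dirichlet(theta * 1_d).
    Small theta concentrates mass on a single community (SBM-like, shared
    block structure); theta = 1 gives a flat Dirichlet and genuinely mixed
    memberships."""
    theta = max(theta, 1e-3)  # Dirichlet parameters must be strictly positive
    return rng.dirichlet(theta * np.ones(d), size=n)

Z_peaked = sample_memberships(500, 4, theta=0.05)  # near one-hot memberships
Z_mixed = sample_memberships(500, 4, theta=1.0)    # mixed memberships

# Average largest membership weight per vertex: close to 1 for small theta.
peakedness = lambda Z: Z.max(axis=1).mean()
```

This makes the interpolation concrete: decreasing $\theta$ pushes each $\boldsymbol{Z}_{i}$ toward a vertex of the simplex, recovering an SBM-like structure.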
We use embedding dimension $d=4$ for both \text{MASE} and \text{OMNI}. Results for GraphAD at the anomalous time points $6:7$ are presented in the left panel of Fig.~\ref{fig:power1}. As $\theta$ increases from $0$ to $1$, each node is likely to belong to more communities and, as a consequence, exhibits an ambiguous clustering pattern. This results in the power for both methods approaching $0$ as the anomaly signal becomes less evident. For $\theta \approx 0$, \text{MASE} has appreciably better power than \text{OMNI}. Since there is only small noise among vertices in the time series of graphs when $\theta \approx 0$, this should not be surprising, as the variance of the estimates dominates the bias. In particular, \text{OMNI} uses $Mnd$ parameters to model the perturbation and noise, while \text{MASE} can utilize at most $Md^{2}+nd$ parameters. So in this case the \text{OMNI} estimates are less competitive in subsequent inference. For $\theta \approx 1$, however, \text{OMNI} does appear to consistently outperform \text{MASE}. Furthermore, as $\theta$ increases to $1$, the underlying subspace $\boldsymbol{V}'$ is increasingly variable, and \text{OMNI}, with more parameters, can describe the difference between graphs better and achieve more accurate anomaly detection. Fig.~\ref{fig:power1} also shows that the power difference between time spans $s=2,12$ for \text{MASE} is more significant than that of \text{OMNI} in GraphAD. This provides quantitative results regarding the incremental power from an increasing time span: for \text{OMNI} it is little, while it can be substantial for \text{MASE}. To assess the performance of \text{MASE} and \text{OMNI} in VertexAD, we calculate the difference between the average of the reciprocal ranks for anomalous vertices and that for non-anomalous vertices at time points $6:7$, then run $200$ Monte Carlo simulations to estimate means and standard errors for these differences.
We present the results of these differences in the right panel of Fig.~\ref{fig:power1}. It shows that the performance of \text{MASE} in VertexAD is clearly superior to that of \text{OMNI} at $\theta \approx 0$. The differences between \text{MASE} and \text{OMNI} diminish to zero as $\theta$ increases to $1$, which is not surprising given that the signal may be weak when $\theta \approx 1$. In summary, Fig.~\ref{fig:power1} shows that \text{OMNI} can have superior performance over \text{MASE} when the perturbation is linearly independent of the invariant subspace, and \text{MASE} can be more competitive when the perturbation is linearly dependent on the invariant subspace. \section{Real Data Application: Large-scale Commercial Search Engine} \label{sec:realdata} We demonstrate our control chart methodology on a Microsoft Bing (MSB) entity-transition data set of monthly graphs spanning May 2018 through April 2019 ($M=12$). The graphs are undirected and weighted, with no self-loops and positive integer weights representing connection strength. Considering the largest jointly connected component among the $12$ graphs, we have $\mathcal{G}^{(t)}$ with $|\mathcal{V}| = 33,793$ vertices for each month $t=1,\dots,12$. In the absence of ground truth for the existence of anomalies, we design an anomaly-insertion strategy, creating artificially anomalous vertices in the graph at time point $t = 6$ (October). So long as our methods detect these artificial anomalies, other detected anomalies may have merit. Thus, our final result is that anomalies detected at the same level in the original, unperturbed data are ``real.'' Our approach to the creation of artificial anomalies involves a planted clique \cite{10.5555/1540612}, as follows. We perform \text{ASE} with $d = 20$ for the $6$-th graph (October), and then apply Gaussian mixture modeling (GMM) to cluster the latent position estimates.
This results in nine clusters of vertices; we add an edge with weight equal to $1$ between each pair of vertices in the smallest cluster ($n^{*} = 473$) to create a complete subgraph. Finally, we normalize the edge weights for the entire (perturbed) time series of graphs so that the normalized weights lie in the interval $[0, 2]$. In practice, the dimensions of the latent positions are unknown. We use a scree plot method \cite{zhu2006automatic}, choosing an ``elbow'' in the singular values, which is a simple and automatic procedure for dimension selection. We apply truncated singular value decomposition (SVD) with $1000$ singular values to each of the graphs, then use the automatic scree plot selection method to choose the dimension for \text{MASE} and \text{OMNI} as $\hat{d}=64$. We then perform Algorithm~\ref{alg:controlchart} with time window length $l=3$ and present GraphAD results in Fig.~\ref{fig:MSRGraphAD}. Fig.~\ref{fig:MSRGraphAD} shows that all methods detect the artificial anomaly for GraphAD. In addition, beyond these artificial anomalies, \text{OMNI} detects an anomaly in April 2019. However, that deviation is less evident than the artificial anomaly. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{qccmsrGRADd64s3arkiv.png} \caption{Control charts for GraphAD on the MSB time series of graphs, for which an artificial anomaly is inserted in October. The center solid line (CL) is the moving average of sample means $\bar{y}^{(t)}$, and the dashed line (UCL) is $\bar{y}^{(t)} + 3\bar{\sigma}^{(t)}$, where $\bar{\sigma}^{(t)}$ is the adjusted moving sample range. Black dots are $y^{(t)}$ at times where the latent positions are claimed to be normal. All methods detect the artificial anomaly, while \text{OMNI} detects another anomaly in April. Note that the burn-in period ends in August.
} \label{fig:MSRGraphAD} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{qccmsrvertexd64omni2.png} \caption{Control charts for VertexAD on the MSB time series of graphs with a subgroup of artificially anomalous vertices inserted in October. The center solid line (CL) is the moving average of sample means ${\bar{y}_{i}}^{(t)}$, and the dashed line (UCL) is ${\bar{y}_{i}}^{(t)} + 3{\bar{\sigma}_{i}}^{(t)}$, where ${\bar{\sigma}_{i}}^{(t)}$ is the EWAVE-SD. Black dots are $y_{i}^{(t)}$ at times where the latent positions are claimed to be normal, and red dots are those $y_{i}^{(t)}$ which lie outside of the UCL and are claimed as anomalous. Green dots are the artificially anomalous vertices. Note that the burn-in period ends in August. } \label{fig:MSRVertexAD} \end{figure} We present the VertexAD results in Fig.~\ref{fig:MSRVertexAD}, including only the result for \text{OMNI} for simplicity. Generally speaking, all methods detect our artificially anomalous vertices (green). Furthermore, treating these artificial anomalies as references, we can investigate other anomalous vertices detected by our methods. For example, in Fig.~\ref{fig:degchangeanomaly} we assemble vertices across months and plot the histogram of the changes of degree between adjacent months for each vertex. The red dots are the anomalous vertices detected by \text{OMNI}. For reference, the red dots circled by green ellipses in Fig.~\ref{fig:degchangeanomaly} are the degree changes of the artificially anomalous vertices. Fig.~\ref{fig:degchangeanomaly} shows that \text{OMNI} detects vertices whose degrees change -- such vertices are likely to be anomalous. However, the degree changes of the artificially anomalous vertices are comparatively small, yet we are still able to detect them. This demonstrates that our approaches can detect anomalous vertices beyond just vertices with large degree change. \begin{figure}[t!]
\centering \includegraphics[width=\linewidth]{msrdegchanged64s3.png} \caption{Histogram of degree changes for vertices across time points in the MSB time series of graphs with artificially injected anomalies. We mark both the perturbed vertices (circled in green ellipses) and the vertices detected by \text{OMNI} in red (the degree change scale is square-root transformed and dots are vertically jittered for display purposes). This figure demonstrates that \text{OMNI} can detect anomalous vertices beyond just vertices with large degree change. } \label{fig:degchangeanomaly} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Bias-Variance Analysis and Community Structure} The simulation results for scenario $2$ in \S \hyperref[sec:result]{IV. C} show that when the latent positions of the graphs share similar singular vectors, as in structure type~(\ref{eq:structure1}), \text{MASE} has significantly superior performance on subsequent inference compared with \text{OMNI}. Specifically, the empirical \text{p}-value estimates in Fig.~\ref{fig:sdmultid4mar09} demonstrate that two-sample testing using \text{MASE} correctly rejects the null hypothesis while \text{OMNI} does not. This is not surprising, as in this common-singular-vectors scenario the graphs are generated approximately from the model assumed by \text{MASE} \cite{arroyo2019inference}. The simulation results from scenarios $1$ and $3$, however, show that \text{OMNI} produces smaller \text{p}-value estimates than \text{MASE} under the alternative hypothesis, even though the time series of graphs is sampled from the model assumed by \text{MASE}. This is due to the increased dimension $d'$ of $\boldsymbol{V}'$ for the overall vertex process under Equation~(\ref{eq:structure2}). For example, \text{COSIE} needs a common subspace $\tilde{\boldsymbol{V}}=\boldsymbol{V}'$ of dimension $\tilde{d}=d'=2$ to model the graphs generated in scenario $1$ in \S \hyperref[sec:compare]{IV.
A}, while \text{OMNI} can use dimension $d=1$ to represent the graphs. Since the underlying true RDPG model has $d=1$ in scenario $1$, \text{MASE} with extra parameters induces more noise than \text{OMNI}, which makes the performance of \text{MASE} inferior in this scenario. Furthermore, \text{MASE} cannot describe the differences of the left singular subspace between $\boldsymbol{V}$ at unperturbed time points and $\boldsymbol{V}^{(t_j)}$ at perturbed time points using the same dimension $d=4$ as \text{OMNI} in scenario $3$, as the dimension of $\boldsymbol{V}'$ increases to greater than four. As a consequence, the \text{OMNI}-based test yields smaller \text{p}-value estimates at the anomalous time point in Fig.~\ref{fig:mco} and Fig.~\ref{fig:ddmultid4mar09}. To compare and contrast the performance of \text{MASE} vs. \text{OMNI} and illustrate which embedding method is preferred, we present the simulation in \S \hyperref[sec:compare]{V} obtained by varying the community structure (see Fig.~\ref{fig:power1}). At $\theta=0$, the time series of graphs is under Equation~(\ref{eq:structure1}) and shares exactly the same subspace $\boldsymbol{V}$ between unperturbed and perturbed time points. In particular, all the graphs have the same $4$-block community structure. \text{MASE} is superior to \text{OMNI} in this case because it uses fewer parameters by exploiting the shared community structure. As $\theta$ increases from $0$ to $1$, the shared community structure among the graphs diminishes and \text{MASE} needs increasingly more dimensions to represent the underlying $\boldsymbol{V}'$ under Equation~(\ref{eq:structure2}). So \text{MASE} either uses extra parameters or describes the underlying $\boldsymbol{V}'$ poorly. In both cases \text{MASE} has lower statistical power in subsequent inference tasks compared with \text{OMNI}.
\subsection{Computation Cost and Scalability} Given a matrix $\boldsymbol{A}\in \mathbb{R}^{n\times n}$, the computational complexity of a full SVD is $O(n^{3})$. For a truncated rank-$d$ SVD, the computational complexity is $O(d n^{2})$. For \text{OMNI} the complexity is $O(dn^{2}M^{2})$, while \text{MASE} achieves $O(dn^{2}M)$ cost. Algorithm~\ref{alg:hypothesis testing} requires bootstrapping to obtain the null distribution of the test statistics, and is thus more computationally expensive than Algorithm~\ref{alg:controlchart}. In our simulations, \text{MASE} and \text{OMNI} are computationally comparable for small graphs and for the MSB time series of graphs (it takes $581.833$ seconds for \text{OMNI} and $596.84$ seconds for \text{MASE} with $s=2$, $d=64$, $M=12$ and $n=33,793$). However, as the number of graphs grows, the extra cost of building a large omnibus matrix makes \text{OMNI} slower than \text{MASE}. Furthermore, the size of the omnibus matrix can prohibit performing \text{OMNI} with all graphs at hand, due to the complexity of the singular value decomposition for large matrices and limited memory. For example, using \text{OMNI} with time length $M$ requires building an $Mn \times Mn$ omnibus matrix, which can make \text{OMNI} unsuitable; e.g.,~in our real data application $M=12$ and $n=33,793$. \text{MASE} is easy to parallelize \cite{fan2019}: just a singular value decomposition for each individual graph. \subsection{Limitations and Extensions} The simulations in this paper are done on graphs generated from conditionally independent RDPG models at each time step. A next challenge is to design an algorithm that captures temporal structures (e.g.,~auto-regressive \cite{padilla2019change}) in time series of graphs. For the test statistic in Equation~(\ref{eq:indverstat}) it is natural to use \text{p}-values for VertexAD.
However, naively obtaining the null distribution for~(\ref{eq:indverstat}) by bootstrapping the latent positions $\boldsymbol{X}$ can be problematic and inflate the Type-I error \cite{el2018can}. Additionally, bootstrapping the latent position for each vertex is computationally intensive. Assuming no nonidentifiability issues, we instead use the reciprocal rank to assess VertexAD and control the false positive rate. Finally, our results are based on empirical simulations; theory describing the relative strengths and limitations of these methods in terms of the underlying nature of detectable anomalies is of significant interest, but technically challenging. \subsection{Conclusion} We have proposed two algorithms which use a test statistic obtained from the omnibus embedding methodology (\text{OMNI}) or multiple adjacency spectral embedding (\text{MASE}) for performing anomaly detection in time series of graphs. We have demonstrated, via simulation results using a latent process model for time series of graphs, that our algorithms can be useful for two anomaly detection tasks. Furthermore, the results presented herein exhibit a phenomenon whereby the relative inferential efficacy of \text{OMNI} and \text{MASE} can be characterized by varying the common subspace among the graphs in a time series. This phenomenon suggests that real applications of multiple-graph inference should check the common subspace assertion. In general, \text{MASE} is preferable when the graphs approximately share an invariant subspace; \text{OMNI} is preferable when the graphs have highly varying subspaces. In addition, the improvement in statistical power for \text{MASE} with a longer time span is sometimes more substantial than that for \text{OMNI}. We also assess our algorithms on a real large-scale dataset and investigate the detected anomalies.
\section*{Acknowledgments} This work was supported in part by DARPA programs D3M (FA8750-17-2-0112), LogX (N6523620C8008-01) and MAA (FA8750-20-2-1001), and funding from Microsoft Research. This material is based in part on research sponsored by the Air Force Research Laboratory and DARPA under agreement number FA8750-20-2-1001. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Since its explosion, SN~1987A has evolved from a supernova (SN) dominated by the emission from the radioactive decay of $^{56}$Co, $^{57}$Co and $^{44}$Ti in the ejecta to a supernova remnant whose emission is dominated by the interaction of the supernova blast wave with its surrounding medium. The medium surrounding the SN is dominated by the well-known ``circumstellar envelope'' (CSE), which consists of an inner equatorial ring (ER) flanked by two outer rings \citep{Bur95}, possibly part of an hour-glass structure. The collision between the ejecta of SN~1987A and the ER, predicted to occur sometime in the interval 1995--2007 \citep{Gae97,Bor97}, is now underway. At UV-optical (UVO) wavelengths, ``hot spots'' have appeared inside the ER \citep{Pun97}, and their brightness varies on time scales of a few months \citep{Law00}. New hot spots continue to appear as the whole inner rim of the ER lights up. The visible-light {\it HST} image obtained on 2004 Feb 20 reveals a necklace of such hot spots which nearly fill a lighted ring. Ongoing monitoring in X-rays with {\it Chandra} and at radio frequencies shows that the evolution of the emission from the ER follows a similar pattern at all wavelengths. There exist very few mid-infrared (IR) observations of supernovae in general. SN~1987A, the closest known supernova in 400 years, therefore gives us an opportunity to explore the mid-IR properties of supernovae and of the dust in their ejecta and surrounding medium with the help of the newest generation of large-aperture telescopes and sensitive mid-IR instrumentation, such as T-ReCS on {\it Gemini}, in combination with IR data obtained with the {\it Spitzer Space Telescope} ({\it SST}) \citep{Wer04}. The T-ReCS observations of the mid-IR emission from SNR~1987A are part of our continuous monitoring of the SN and its surrounding medium.
The first detection and analysis of mid-IR emission at the position of the supernova was reported in \citet{Bou04} (hereafter Paper I). The origin of the mid-IR emission could be line emission from atomic species, synchrotron or free-free continua, or thermal emission from dust, which is probably the dominant source of emission. In general, there are several scenarios for the origin and the heating mechanism of the dust giving rise to the late-time mid-IR emission in Type II supernovae \citep{Gra86,Ger00}. Thermal mid-IR emission could be: (1) emission from SN-condensed dust that is collisionally heated by reverse shocks traveling through the SN ejecta; (2) emission from circumstellar/interstellar dust heated by the interaction of the expanding SN blast wave with the ambient medium; or (3) delayed emission (echo) from circumstellar dust radiatively heated by the early UVO supernova light curve. Our imaging observations provide strong constraints on these possible scenarios for the mid-IR emission. They show that the bulk of the mid-IR emission is not concentrated at the center of the explosion, but arises from a ring around the SN. The morphology of the mid-IR emission can therefore be used to eliminate several scenarios for its origin. Combined with {\it Spitzer} spectroscopy, our observations can be used to determine the dust composition and temperature distribution in the ring. The {\it Chandra} X-ray and {\it HST} UVO data provide important constraints on the physical conditions of the medium overtaken by the supernova blast wave. These constraints can be used to determine the physical association and the heating mechanism of the dust giving rise to the mid-IR emission. The paper is organized as follows: we first describe in \S2 the imaging observations of the SN obtained with T-ReCS, and compare its IR morphology to that at radio, X-ray, and UVO wavelengths.
Lower resolution {\it Spitzer} mid-IR imaging observations detected SNR~1987A (including the ring) as an unresolved point source, and are used in conjunction with spectroscopic data to determine the possible contribution of lines and the composition of the dust giving rise to the continuum emission. In \S3 we describe the procedure used for the analysis of the spectroscopic and imaging data, determining the dust composition and presenting maps of dust temperature, IR opacities, and dust column densities. The IR image of the circumstellar medium around SN~1987A has a morphology similar to the {\it Chandra} X-ray and {\it HST} UVO images. The limited mid-IR resolution does not allow us to determine unambiguously whether the dust resides in the X-ray emitting gas or in the UV-optical line emitting knots in the ER. We therefore resort in \S4 to an analysis of possible dust heating mechanisms and to calculating the inferred dust masses and dust-to-gas mass ratios for several possible scenarios. In \S5 we discuss the evolution of the supernova and its environment as manifested in the observed light curves at various wavelengths. The results of our paper are summarized in \S6. \section{OBSERVATIONS} \subsection{Mid-Infrared {\it Gemini} Observations} The T-ReCS mid-IR imager/spectrometer at the {\it Gemini} 8m telescope offers a combined telescope and instrument with diffraction-limited imaging ($\sim0.3"$ resolution) and superbly low thermal emissivity. On 2003 Oct 4 (day 6067), we imaged SNR~1987A with T-ReCS as part of the instrument's System Verification program, and we reported the detection of 10 and 20~$\micron$ emission from the ER, and of 10~$\micron$ emission from the supernova's ejecta (Paper I).
Subsequent observations were carried out on January 6, 2005 (day 6526) in the narrow Si5 filter ($\lambda_{eff}$ = 11.66 \mic; $>$ 50\% transmission at $\lambda$ = 11.09-12.22 \mic), and on February 1, 2005 (day 6552) in the Qa filter ($\lambda_{eff}$ = 18.30 \mic; $>$ 50\% transmission at $\lambda$ = 17.57-19.08 \mic). Results are presented in Figures 1-a and 1-b. These images show several luminous ``hot'' spots distributed over the ring. The calibrated flux density integrated within an aperture of 1.3 arcsec radius is $F_{\nu}$($11.7~\micron$) = 18.4 $\pm 1.2$~mJy in the Si5 filter, and $F_{\nu}$($18.3~\micron$) = 53.4 $\pm 9$ mJy in the Qa filter. No color correction was applied; applying one would most likely increase the flux densities. The standard star used for the calibration of the 11.7~\mic\ measurement was HD~29291, whose flux density was taken to be 6.78 Jy at 11.7 \mic. We used $\alpha$ CMa with a flux density of 44.3 Jy at 18.30 \mic\ for the flux calibration of the 18.3~\mic\ observation. The black body color temperature corresponding to the measured fluxes at these two wavelengths is $T = 185$~K and the luminosity is $L_{BB} = 3.74 \times 10^{36}$ erg s$^{-1}$. We use the \citet{Mat90} extinction law with $\tau_{18.3} = \tau_{11.7} / 1.35$ and $A_{18.3}/A_J = 0.083$ and $A_{11.7}/A_J = 0.098$ to compute the black body temperature and the optical depth for each individual pixel, resulting in the maps shown in Figures 1-c and 1-d. Note that the iterative algorithm requires reasonable starting values in order to converge. We stress that Figures 1-c and 1-d show the {\it color} temperature and optical depth maps, which are slightly different from the maps related to the physical dust as calculated in Section 3. They are shown only to illustrate the results obtained by fitting our data with the simplest model (e.g., a black body).
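The quoted color temperature can be cross-checked by solving for the temperature at which the Planck-function ratio at the two filter effective wavelengths matches the measured flux ratio. The sketch below (function names are ours; it neglects extinction, color corrections, and the finite filter bandpasses) recovers $T \simeq 186$~K, consistent with the quoted 185~K:

```python
import math

# Physical constants (cgs)
H = 6.626e-27   # Planck constant, erg s
K = 1.381e-16   # Boltzmann constant, erg/K
C = 2.998e10    # speed of light, cm/s

def planck_nu(lam_cm, T):
    """Planck function B_nu at wavelength lam_cm (cm) and temperature T (K)."""
    nu = C / lam_cm
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (K * T))

def color_temperature(f1, lam1_um, f2, lam2_um, t_lo=50.0, t_hi=1000.0):
    """Solve B_nu(lam1, T) / B_nu(lam2, T) = f1 / f2 for T by bisection.

    Assumes a single-temperature black body filling the same solid angle
    in both bands; f1, f2 may be in any common unit (e.g., mJy)."""
    target = f1 / f2
    lam1, lam2 = lam1_um * 1e-4, lam2_um * 1e-4
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        # For lam1 < lam2 this ratio increases monotonically with T
        if planck_nu(lam1, t_mid) / planck_nu(lam2, t_mid) < target:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# T-ReCS flux densities (mJy) at the Si5 and Qa effective wavelengths
T_color = color_temperature(18.4, 11.66, 53.4, 18.30)
```

The small offset from 185~K is within the uncertainty introduced by the neglected color corrections.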
In Figure~\ref{fig2} we compare our new $11.7~\micron$ data with those obtained in the broad N band (10~\mic) in October 2003 at day 6067 (Paper I). This figure shows a clear brightening in the South-West region of the ER, superimposed on a general brightening all over the ring. Given that the two images were taken with significantly different filters, we investigated whether the difference could be due to the different spectral coverage: the spectra obtained with {\it Spitzer}, which are discussed in the next section, do not show any feature that could be a source of the difference. Thus we conclude that we are observing a true brightness change. Figure~\ref{fig3}(a,b) displays our images in both filters with the contours of the 0.3--8 keV X-ray image from the {\it Chandra} Observatory obtained nearly simultaneously at day 6529 (January 9--13, 2005) \citep{Par05b}. The correspondence between our $11.7~\micron$ image and the {\it Chandra} image is very good, but less so at $18.3~\micron$ because of the lower signal-to-noise ratio in that image. A more detailed discussion of the relation between the IR and X-ray images of the ER will be presented in \S3 below. Figure~\ref{fig4}(a,b) shows the contours of the image obtained in the 12 mm band (16-26 GHz) at day 6003 (July 31, 2003) with the Australia Telescope Compact Array (ATCA) at the Australia Telescope National Facility (ATNF) \citep{Man05} superimposed on our 11.7 and 18.3~\mic\ T-ReCS images, respectively. The correspondence between our $11.7~\micron$ image and the synchrotron radio emission is not as remarkable as that with the {\it Chandra} image. Nevertheless, the $18.3~\micron$ bright spot on the East side appears better correlated with the radio lobe than the X-ray spots do. Furthermore, an image obtained on May 5, 2004 (day 6298) at the same frequencies is posted on the ATNF web page and is reproduced here in Figure~\ref{fig4}(c,d).
This later radio image shows better correlation with our 11.7~\mic\ image than the radio image obtained on day 6003, probably because it was taken closer to the epoch of the mid-IR observations. This demonstrates the importance of evolutionary effects on the morphology of the emission at all wavelengths. There is good overall agreement in shape and size between our IR image and images obtained from the X-ray to the radio. The mean radii and approximate surface brightness distribution (brighter on the east side) of the ring are similar at all wavelengths, demonstrating that the dust is co-extensive with the gas components. The origin of that brightness asymmetry may be related to an asymmetric distribution of the ejecta or the CSM \citep{Par02,Par04}, and/or to a time-dependent effect caused by the tilt of the ER, as argued by \citet{Pan91}. The most likely source of the mid-IR radiation is thermal emission from warm dust (see discussion below). The X-ray radiation is thermal emission from very hot gas \citep[and previous references therein]{Par05a}, whereas the optical emission arises from the dense knots in the ER that are overrun by slower shocks. The radio emission is likely to be synchrotron radiation from shock-accelerated electrons spiralling in the remnant's magnetic field, as stated by \citet{Dun03} and \citet{Man05}. \citet{Par02,Par04} argue that, until 2000 December, hard ($E > 1.2$~keV) X-ray and radio emissions were produced by fast shocks in the CS HII region, while the optical and soft ($E < 1.2$~keV) X-ray emissions came from slower shocks in the denser ER. \citet{Par03} note that as of 2002 December 31 (day 5791), correlations between the X-ray and the optical/radio images are more complex than the above simple picture, which is expected as the blast wave is reaching the main body of the inner ring.
\subsection{Spitzer Observations} \subsubsection{Imaging Data} Imaging of SNR~1987A was carried out with the {\it Spitzer} Space Telescope's MIPS instrument at 24 \mic\ \citep{Rie04} and IRAC instrument at 3.6 -- 8 \mic\ \citep{Faz04} (AORIDs = 5031424 and 5030912). Almost one year later, SNR~1987A was again imaged with IRAC, serendipitously near the edge of the field of observations targeting other sources (AORIDs = 11191808 and 11526400). All these data were obtained from the {\it Spitzer} data archive. The corresponding images are shown in Figure~\ref{fig17}(a-f), which also shows a near-IR image from the {\it Hubble} Space Telescope for comparison. At 24 and 8 \mic, SNR~1987A was detected as an unresolved point source amid a field of complex cirrus emission. At 5.8 \mic, the source appears very slightly distorted, and at 4.5 and 3.6 \mic\ the SN appears to be swamped by the emission of companion stars 2 and 3. The flux densities at 24, 8, and 5.8 \mic\ were measured using SExtractor \citep{Ber96} to perform aperture photometry on the post-BCD images. The aperture radii used were 4, 5, and 6 pixels ($4.8''$, $6.0''$, and $14.7''$) at 5.8, 8.0, and 24 $\mu$m, respectively. Approximate aperture corrections of 1.10, 1.07, and 1.14 were applied using information from the IRAC and MIPS Data Handbooks. The calibrated flux densities are given in Tab.~\ref{tab1}. \subsubsection{Spectroscopic Data} Observations of SNR~1987A were also performed with {\it Spitzer}'s Infrared Spectrograph (IRS) in its short (wavelength) -- low (resolution), short-high, and long-high modes \citep{Hou04}. These data were also obtained from the {\it Spitzer} data archive (AORID = 5031168). Figure~\ref{fig18} shows the slit positions for the different sets of observations. For each spectral mode we extracted SN~1987A spectra from the 2--D coadded post-BCD images using SPICE (http://ssc.spitzer.caltech.edu/postbcd/spice.html).
For the short--low data, the source is placed at 4 positions along the slit to generate spectra at two positions for two different spectral orders. For each slit position, a 2--D background image was generated from the median value of the images for the other three slit positions. These backgrounds were subtracted prior to extracting spectra with SPICE. For the high resolution observations, only two slit positions are observed within a much narrower slit. So for these observations, SPICE was used to extract the SN spectrum from the columns occupied by the unresolved source, and a background spectrum from columns toward the opposite end of the slit. The background spectra were then subtracted from the source spectra. For all modes, the spectra from the two slit positions (per spectral order) were combined using a weighted ($1/\sigma^2$) average, and (generally noisy) data where spectral orders overlapped were discarded. For the short--high data, an empirically determined scaling factor of 1.38 was applied before averaging to bring the results from both slit positions into agreement. An additional scale factor of 1.46 was applied to the short--high spectrum to normalize it to the short--low data, and a subsequent factor of 1.25 was applied to the long--high data to normalize it to the short--high data. These latter scaling factors are to be expected if the subtracted background was partially contaminated by the source, or if the smaller high resolution slits were slightly misaligned. Finally, for comparison with the broad band measurements and dust models, the high resolution spectra were median--binned using intervals of 12 and 11 wavelength samples for the short and long wavelengths, respectively.
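The combination step described above -- a $1/\sigma^2$ weighted average of the two slit-position spectra after applying an empirical scale factor -- can be sketched as follows (the function name and call signature are ours, not those of SPICE):

```python
import numpy as np

def combine_spectra(flux_a, sigma_a, flux_b, sigma_b, scale_b=1.0):
    """Inverse-variance (1/sigma^2) weighted average of two spectra on a
    common wavelength grid, after applying an empirical scale factor to
    the second spectrum (as done for the two short-high slit positions).

    Returns the combined flux and its propagated 1-sigma uncertainty."""
    fa = np.asarray(flux_a, dtype=float)
    sa = np.asarray(sigma_a, dtype=float)
    fb = scale_b * np.asarray(flux_b, dtype=float)
    sb = scale_b * np.asarray(sigma_b, dtype=float)
    wa, wb = 1.0 / sa**2, 1.0 / sb**2
    flux = (wa * fa + wb * fb) / (wa + wb)
    sigma = np.sqrt(1.0 / (wa + wb))
    return flux, sigma
```

With equal uncertainties this reduces to a straight mean; otherwise the noisier spectrum is down-weighted in proportion to its variance.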
Figure~\ref{fig5} shows the overall calibrated spectrum and Figure~\ref{fig6} displays individual profiles and identifications of the main emission lines detected with IRS from the {\it Spitzer} Space Telescope: it shows that the T-ReCS observations are dominated by the dust continuum emission, and not by the lines. The [Ne II] 12.81 $\mu$m and [Ne III] 15.56 $\mu$m lines are, however, clearly seen. A weak [Si II] 34.81 $\mu$m line remains after background subtraction, while [S III] 33.48 $\mu$m disappears entirely. Two strong lines are seen near 26 $\mu$m: the redder line is [Fe II] at 25.99 $\mu$m and the bluer line is [O IV] at 25.89 $\mu$m. Both lines could arise from [Fe II] (or both from [O IV]) if emitted by fast-moving ejecta on the near and far sides of the explosion, but the lack of splitting of the other lines makes this seem unlikely. A weak third component seems present here as well. If fitted as an unresolved line, this may be [F IV] at 25.83 $\mu$m. However, it may also be fitted as the wing of a very broad line (FWHM $\sim$ 2800 km s$^{-1}$) underlying the narrower [O IV] and [Fe II] lines. Such a high velocity would indicate an association with the SN ejecta rather than the ER for this line. Line fluxes, centers, and widths have been calculated by Gaussian fits using SMART \citep{Hig04}, and the results are given in Tab.~\ref{tab2}. \subsection{Near-IR CTIO Observations} Near-IR J($1.25~\micron$), H($1.65~\micron$), and K($2.2~\micron$) imaging observations of SNR~1987A were obtained on 2005 January 3--5 with ISPI attached to the Blanco 4-m telescope at the {\it Cerro Tololo Inter-American Observatory}. Results are displayed in Figure~\ref{fig7}. We clearly see the ER at J and K, while most of the emission detected in the H band arises from the supernova itself. These images have been deconvolved using the ``Multiscale Maximum Entropy Method'' algorithm developed by \citet{Pan96}.
The flux densities measured in the CTIO data are $F_J = 0.33 \pm 0.25$ mJy, $F_H = 0.11 \pm 0.05$ mJy, and $F_K = 0.51 \pm 0.2$ mJy. These fluxes refer to the total of the ER + ejecta in each band (although each band seems dominated by one or the other). Images obtained with {\it HST} on November 11, 2005 through the filters F110W, F160W, and F205W are also shown in this figure, and show that, compared to the ring, the ejecta are relatively brighter in the H band (F160W) than in the other bands. This clearly demonstrates the validity and the power of the deconvolution algorithm applied to our CTIO data. Note, however, that the {\it HST} bands are not exactly J, H, and K: one significant difference is that the Paschen $\alpha$ line falls between the ground-based H and K bands and is thus missed, but would be seen in {\it HST}'s F205W band. The {\it Spitzer} mid-IR spectrum indicates that dust in SN 1987A and the ER is too cool (see Section 3) to emit significantly in these near-IR bands. We believe that line emission dominates these broadband near-IR observations, and that differences in the composition, temperature, and density of the ejecta vs. the CSM lead to the different relative brightnesses of these structures. The young supernova remnants Cas A and Kepler may be the closest analogs we have for interpreting SN 1987A's near-IR emission. Near-IR spectra and imaging of these SNRs by \citet{Ger01} and \citet{Rho03} reveal similar variations in near-IR colors. Circumstellar material (e.g., Cas A's quasi-stationary flocculi) is bright in J due to He I 1.083 $\mu$m, and emits strong [Fe II] lines in both the J and H bands. Weaker hydrogen lines can be detected in all bands. The ejecta in several of Cas A's fast-moving knots emit strong [S II] lines at 1.03 $\mu$m in J, and weaker lines of [Fe II] and Si in various ionization states. Thus, the near-IR emission of the circumstellar ER may be dominated by H and He I lines in J and H, with synchrotron emission possibly augmenting the K band \citep{Ger01,Rho03}.
Synchrotron processes deserve consideration in SN~1987A, particularly since Figure~\ref{fig4} shows that the radio morphology of SN~1987A now reproduces the ring shape. Also, the H$_2$ molecule has many emission lines in the K band and at longer wavelengths. Spectra are clearly needed to test the plausibility of a contribution from this molecule. The first overtone band head of molecular CO lies at 2.29~$\mu$m and may also be contributing to the K-band emission in the ring. The relatively strong H-band emission of SN 1987A's ejecta could reflect [Fe II] emission, because a large fraction of the ejecta should be Fe from the core of the explosion, whereas Cas A's FMKs generally consist of lighter metals originating in the outer layers of the progenitor. \section{DATA ANALYSIS} \subsection{The Dust Properties} Figure~\ref{fig5} clearly demonstrates that the mid-IR emission is dominated by thermal emission from dust. The specific luminosity of a single dust particle of radius $a$ at temperature $T_d$, at wavelength $\lambda$, is given by: \begin{eqnarray} {\ell}_{\nu}(\lambda) & = & 4 \pi a^2 \pi B_{\nu}(\lambda,\ T_d) Q(\lambda) \\ \nonumber & = & 4 m_d \kappa(\lambda,\ a) \pi B_{\nu}(\lambda,\ T_d) \end{eqnarray} where $B_{\nu}(\lambda,\ T_d)$ is the Planck function, $Q(\lambda)$ is the dust emissivity at wavelength $\lambda$, $m_d$ is the mass of the dust particle, and $\kappa(\lambda,\ a) \equiv 3Q(\lambda)/4 \rho a$ is the dust mass absorption coefficient, where $\rho$ is the mass density of the dust particle. In the Rayleigh limit, when $a < \lambda$, $\kappa$ is independent of particle radius. Figure~\ref{fig8} illustrates the $\kappa (\lambda)$ curves for amorphous carbon \citep{Rou91}, and graphite and silicate grains \citep{Lao93}, over the 5 to 30~$\micron$ wavelength region.
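The equivalence of the two forms of Eq. (1) follows directly from $\kappa \equiv 3Q/4\rho a$ and $m_d = (4/3)\pi a^3 \rho$; a quick numerical check (the grain parameters below are illustrative values, not fit results):

```python
import math

def ell_nu_from_Q(a, Q, B_nu):
    """Eq. (1), emissivity form: 4 pi a^2 * pi B_nu * Q (cgs units)."""
    return 4.0 * math.pi * a**2 * math.pi * B_nu * Q

def ell_nu_from_kappa(a, Q, rho, B_nu):
    """Eq. (1), mass-absorption form: 4 m_d kappa * pi B_nu,
    with kappa = 3Q/(4 rho a) in the Rayleigh limit."""
    m_d = (4.0 / 3.0) * math.pi * a**3 * rho   # grain mass, g
    kappa = 3.0 * Q / (4.0 * rho * a)          # cm^2 per g of dust
    return 4.0 * m_d * kappa * math.pi * B_nu
```

The two forms agree identically, since the grain radius and density cancel between $m_d$ and $\kappa$.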
The figure shows the distinct optical properties of these dust particles over the mid-IR wavelength regime, which can greatly facilitate the identification of the emitting material even with a limited set of broad band filters. For an optically thin point source, the flux density, $F_{\nu}(\lambda)$, at wavelength $\lambda$ is given by: \begin{equation} F_{\nu}(\lambda) = 4\ M_d\ { \kappa(\lambda)\ \pi B_{\nu}(\lambda,\ T_d)\over 4 \pi D^2} \end{equation} \noindent where $M_d$ is the dust mass, and $D$ is the distance to the supernova, taken to be $D =51.4$ kpc \citep{Pan99}. For an extended optically thin source, the surface brightness, $I_{\nu}(\lambda)$, is given by: \begin{equation} I_{\nu}(\lambda) = \tau_d(\lambda) B_{\nu}(\lambda,\ T_d) \end{equation} where $\tau_d(\lambda)$, the dust optical depth of a source with an angular size $\Omega$, is given by: \begin{equation} \tau_d(\lambda) = {M_d\ \kappa(\lambda)\over \Omega \ D^2} \end{equation} \subsection{Spectral Analysis} We fitted the integrated T-ReCS flux densities with a single population of bare graphite, silicate, or amorphous carbon grains, using equation (2). Optical properties of silicate and graphite grains were taken from \citet{Lao93}, and those for amorphous carbon (BE) were taken from \citet{Rou91}. The results are given in Figure~\ref{fig9}, which shows that the T-ReCS observations alone cannot discriminate between the different dust compositions. Comparison with the {\it Spitzer} observations clearly demonstrates that the IRS spectrum can be well fit with a silicate dust composition, ruling out graphite or amorphous carbon as major dust constituents in the CSM. Figure~\ref{fig10} shows the fit of the silicate dust spectrum to the {\it Spitzer} IRS spectrum. The silicate temperature is $T_d = 180^{+20}_{-15}$~K, and the dust mass is $M_d = (1.1^{+0.8}_{-0.5})\times 10^{-6} M_\odot$.
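Eqs. (2)-(4) are mutually consistent: integrating the optically thin surface brightness over the source solid angle recovers the point-source flux, with $\Omega$ cancelling. A quick numerical check (the dust mass, $\kappa$, and Planck-function values below are illustrative placeholders, not fit results):

```python
import math

D_CM = 51.4e3 * 3.086e18   # 51.4 kpc in cm, the distance adopted in the text

def flux_point_source(M_d, kappa, B_nu, D=D_CM):
    """Eq. (2): F_nu = 4 M_d kappa pi B_nu / (4 pi D^2)  [erg/s/cm^2/Hz]."""
    return 4.0 * M_d * kappa * math.pi * B_nu / (4.0 * math.pi * D**2)

def optical_depth(M_d, kappa, Omega, D=D_CM):
    """Eq. (4): tau_d = M_d kappa / (Omega D^2)."""
    return M_d * kappa / (Omega * D**2)

def flux_extended(M_d, kappa, B_nu, Omega, D=D_CM):
    """Flux of an extended optically thin source of solid angle Omega:
    F_nu = Omega * tau_d * B_nu (Eqs. 3 and 4 combined)."""
    return Omega * optical_depth(M_d, kappa, Omega, D) * B_nu
```

Because $\Omega$ cancels, the extended-source flux is independent of the assumed source size, as it must be for a fixed total dust mass.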
The global parameters resulting from our model for silicates are given in Table~\ref{tab3}. Figure~\ref{fig11} shows the residuals of the fit, obtained by subtracting the silicate model fit from the data. The residuals are small, and their spectrum is too sharply peaked at short wavelengths to be fitted with any blackbody. The residuals are also broader than typical atomic line widths, suggesting that they may be due to solid state features in the dust, reflecting differences between the crystalline structure of the silicates in the CSM and the average interstellar silicate dust used in the model. This figure also shows that the residuals cannot be fit by emission from amorphous carbon dust (shown by the green line in this figure). \subsection{Image Analysis} Our spectral analysis showed that the mid-IR spectrum of the ER is dominated by silicate emission. We therefore used the 11.7 and 18.3~\mic\ images of the remnant to construct temperature and dust opacity maps of the circumstellar ring, using Eq. 3 to derive the dust temperature and Eq. 4 to derive the dust optical depth. In calculating these quantities, we applied a background threshold of 0.03~mJy/pix at 11.7~\mic\ and of 0.08~mJy/pix at 18.3~\mic. The mass of the ring was calculated from the optical depth over a surface area of 269 pixels, corresponding to the number of pixels that had a flux exceeding the respective thresholds at each wavelength. The average dust temperature in these maps is $T_d = 166_{-12}^{+18}$~K. The average 11.7~\mic\ optical depth per pixel is $\tau_d = (5.5^{+4.2}_{-2.7})\times10^{-6}$, and the total dust luminosity is $L_d = (2.3^{+0.5}_{-0.4}) \times 10^{36}$~erg~s$^{-1}$, giving a dust mass $M_d = (2.6_{-1.4}^{+2.0}) \times10^{-6}$~\Msun, in good agreement with the total mass obtained from the spectral analysis of the ER. Figure~\ref{fig12} shows the maps of the silicate dust temperature (a) and optical depth (b) in the ER.
\section{THE ORIGIN OF THE MID-INFRARED EMISSION} The data presented in this paper show unambiguously that the emission is thermal emission from dust. At issue are the location and the heating mechanism of the dust. The forward-expanding non-radiative blast wave is currently interacting with the circumstellar material and the knots in the ER. The interaction of the blast wave with the knots transmits lower velocity radiative shocks into these dense regions, producing soft X-rays and the ``hot spots'' seen in the {\it HST} images. The interaction of the blast wave with the dense knots also generates reflected shocks that propagate back into the medium that was previously shocked by the expanding SN blast wave. The complex morphology and density structure of the ER give rise to a multitude of shocks characterized by different velocities, temperatures, and postshock densities. The mid-IR images cannot determine the location of the radiating dust, whether it resides in the X-ray emitting gas or in the denser UVO emitting knots. Therefore, we cannot, a priori, assume a particular dust heating mechanism: collisional heating in the shocked gas, or radiative heating in the radiative shocks.
\subsection{Dust Heating Mechanism} The relative importance of the two dust heating mechanisms is given by the ratio \citep{Are99}: \begin{equation} {\cal R} \equiv {H_{rad}\over H_{coll}} = {n_e\ n_H\ \Lambda(T_e)\ P_{abs} \over n_d\ n_e\ \Lambda_d(T_e)} \end{equation} \noindent where $H_{rad}$ and $H_{coll}$ are, respectively, the radiative and collisional heating rates of the dust; $n_e$, $T_e$, and $\Lambda(T_e)$ are, respectively, the electron density, temperature, and atomic cooling function (erg cm$^3$ s$^{-1}$) of the gas; $P_{abs}$ is the fraction of the cooling radiation that is absorbed by the dust; and $\Lambda_d(T_e)$ is the cooling function of the gas via electronic collisions with the dust, given by: \begin{equation} \Lambda_d(T_e) = 2 {\bar v_e}\pi a^2 kT_e \langle h\rangle \end{equation} where $n_d$ is the number density of dust grains, $a$ is their average radius, ${\bar v_e} = ({8kT_e/\pi m_e})^{1/2}$ is the mean thermal speed of the electrons, and $\langle h \rangle \lesssim 1 $ is the collisional heating efficiency of the dust, which measures the fractional energy of the electrons that is deposited in the dust \citep{Dwe87a}. In the following we examine possible locations of the dust giving rise to the IR emission. For each possible site, the X-ray emitting gas or the optical knots, we determine the dominant cooling mechanism, the temperature of the dust, and the inferred dust mass. A site is viable if it can maintain a dust temperature between $\sim$ 150 and 200~K, the range of values reflecting the range of dust temperatures derived from the T-ReCS and {\it Spitzer} observations, and if the derived dust mass does not violate any reasonable abundance constraints.
\subsection{Dust in the X-ray Emitting Gas} The morphological similarities between the $11.7~\micron$ emission, dust temperature, and optical depth maps on the one hand, and the X-ray maps of the supernova on the other, suggest that the dust giving rise to the IR emission may be well mixed with the X-ray emitting gas. For an optically thin plasma, $P_{abs} \approx \tau_d = n_d \pi a^2\langle Q \rangle \ell $, where $\langle Q \rangle$ is the radiative absorption efficiency of the dust averaged over grain sizes and the radiation spectrum, and $\ell$ is a typical dimension of the emitting region. Inserting this value into Eq. (5) we get: \begin{eqnarray} {\cal R} & = & {n_e \Lambda(T_e) \ell \over 2 {\bar v_e} kT_e}\ {\langle Q \rangle\over \langle h \rangle} \\ \nonumber & = & {\sqrt{\pi m_e\over 32}}\ \times {n_e \Lambda(T_e)\ \ell \over (kT_e)^{3/2}}\ \times {\langle Q \rangle\over \langle h \rangle} \end{eqnarray} For average conditions in the X-ray plasma (see below), characterized by plasma temperatures and densities of $T_e \approx 10^7$~K and $n_e \approx 300-10^3$~cm$^{-3}$, we get $\Lambda(T_e) \approx 4\times 10^{-23}$ erg~cm$^3$~s$^{-1}$, and $\langle h \rangle \gtrsim$ 0.1 for dust particles with radii larger than 0.05~\mic. Adopting a value of $\langle Q \rangle \approx $~1 gives an upper limit of \begin{equation} {\cal R} \lesssim 2.1\times10^{-20} \ \ell \end{equation} The size of the X-ray emitting plasma is less than the radius of the ER, which is $\ell \lesssim$ 0.7~lyr = $7\times10^{17}$~cm, giving ${\cal R} \lesssim 0.015$; that is, the dust heating in the X-ray gas is dominated by electronic collisions. \subsubsection{The Dust Temperature} Figure~\ref{fig13} depicts contours of the temperature of collisionally heated silicate grains of radius $a$ = 0.10~\mic\ as a function of plasma density and temperature.
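Eq. (7) is straightforward to evaluate numerically; the exact prefactor depends on the adopted $n_e$, $\Lambda(T_e)$, and $\langle Q\rangle/\langle h\rangle$, but for the quoted plasma conditions the ratio comes out well below unity, confirming that collisional heating dominates. A sketch:

```python
import math

# cgs constants
K_B = 1.381e-16   # Boltzmann constant, erg/K
M_E = 9.109e-28   # electron mass, g

def ratio_rad_to_coll(n_e, T_e, Lambda_gas, ell, Q_over_h):
    """Eq. (7): R = sqrt(pi m_e/32) * n_e Lambda(T_e) ell / (kT_e)^{3/2}
    * <Q>/<h>, for dust mixed with an optically thin X-ray plasma."""
    return (math.sqrt(math.pi * M_E / 32.0)
            * n_e * Lambda_gas * ell / (K_B * T_e)**1.5
            * Q_over_h)

# Conditions quoted in the text: T_e ~ 1e7 K, n_e ~ 300-1000 cm^-3,
# Lambda ~ 4e-23 erg cm^3 s^-1, <h> >~ 0.1, <Q> ~ 1, ell <~ 0.7 lyr
ELL = 0.7 * 9.46e17   # cm
R_max = ratio_rad_to_coll(1.0e3, 1.0e7, 4.0e-23, ELL, 10.0)
```

Even at the high-density end of the quoted range, ${\cal R}$ remains a few percent at most, so radiative heating is negligible in the X-ray gas.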
To calculate the energy deposited in the dust, we used the electron ranges of \citet{Isk83} for energies between 20 eV and 10 keV, and those of \citet{Tab72} for higher incident electron energies. We also assumed that all incident electrons penetrate the dust (no reflection). Above gas temperatures of about $3\times 10^6$~K, the temperature of collisionally heated dust is independent of grain radius, and very well represented by a single dust temperature, since both the radiative cooling and the collisional heating rates are proportional to the mass of the radiating dust particle. Furthermore, at these plasma temperatures the dust temperature is essentially independent of gas temperature as well, and therefore an excellent diagnostic of plasma densities. The figure shows that for plasma temperatures above $\sim 3 \times 10^6$~K, dust temperatures between 150 and 200~K require electron densities of about 300 to 1400~cm$^{-3}$. X-ray observations show the presence of two X-ray emission components \citep{Par06}: one associated with the ``slow shock'', with an electron temperature of $\sim0.23$~keV and a density of $n_e \sim$ 6000~cm$^{-3}$, and the second, associated with the ``fast shock'', with electron temperatures and densities of $kT \sim2.2$~keV and $n_e \sim$ 280~cm$^{-3}$, respectively. The latest {\it Chandra} data indicate \citep{Par06} that the ``fast shock'' and the ``slow shock'' of the model are becoming less distinguishable, as the overall shock front is now entering the main body of the inner ring; that is, the electron temperature of the soft component is increasing and that of the hard component is decreasing. Albeit rather speculatively, the overall temperature might thus be ``merging'' onto an ``average'' temperature of $kT \sim1.5$~keV $\sim 1.8\times 10^7$~K \citep{Par06}, with intermediate electron densities. The range of densities expected for this ``average'' shock is in very good agreement with that implied by the IR observations.
\subsubsection{The Infrared-to-X-ray Flux Ratio (IRX)} An important diagnostic of a dusty plasma is the infrared-to-X-ray ($IRX$) flux ratio \citep{Dwe87b}. For a given dust-to-gas mass ratio and grain size distribution, the $IRX$ ratio is defined as the ratio of the infrared cooling to the X-ray cooling in the 0.2--4.0 keV band, and is given by: \begin{eqnarray} IRX(T_e) & \equiv & {n_e\ n_d\ \Lambda_d(T_e)\over n_e\ n_H\ \Lambda_x(T_e)} \\ \nonumber & & \\ \nonumber & = & {\mu m_H Z_d\over \langle m_d \rangle}\ {\Lambda_d(T_e)\over \Lambda_x(T_e)} \end{eqnarray} where $Z_d \equiv n_d \langle m_d \rangle/\mu n_H m_H$ is the dust-to-gas mass ratio, $\mu$ is the mean atomic weight of the gas, and $\langle m_d \rangle$ is the mass of a dust particle, averaged over the grain size distribution. For a given value of $Z_d$, the $IRX$ ratio is only a function of plasma temperature, and ranges from a value of about 10 for $T_e = 10^6$~K to a value of $\sim 400$ for $T_e = 10^8$~K. For young supernova remnants, \citet{Dwe87b} found that the $IRX$ ratio is significantly larger than unity for 7 of the 9 remnants considered in their paper, the other two having only an upper limit on their IR emission. The observed $IRX$ ratio can be obtained from the X-ray and IR fluxes from the SN. From the January 2005 {\it Chandra} data, we estimate the X-ray flux in the 0.2--4.0 keV X-ray band to be $7.24 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, after correcting for interstellar absorption by an H-column density of $N_H = 2.35 \times 10^{21}$ cm$^{-2}$. Note that these values are based on the two-temperature model as used in \citet{Par04}. The fractional contribution of the soft ($kT_e = 0.3$~keV) component to the total flux is $\sim$70\% for the unabsorbed flux (and thus for $L_X$). Thus, both components contribute significantly to the 0.2--4 keV X-ray flux, rather than one of them dominating.
The total IR flux is $F_{IR} = 7.7 \times 10^{-12}$ erg~cm$^{-2}$~s$^{-1}$, which leads to an $IRX$ ratio of $\sim$ 1. This value is lower than the values reported in Paper I, in which $IRX$ = 6 for the decelerated slow shock component, and $IRX$ = 3 for the blast wave shock in the two-temperature model. It is much lower than the theoretical value of $\sim 10^2$ expected for a $T_e \approx 1.8\times10^7$~K plasma, a value that is observed in the young remnants Tycho and Cas~A. Other (mostly older) SNRs with measurable IR emission show somewhat lower $IRX$ ratios, but not as low as SNR~1987A. Several effects could be the cause of this very low value of the $IRX$ ratio in SNR~1987A. First, the $IRX$ ratio in remnants was calculated by \citet{Dwe87a} for an interstellar dust-to-gas mass ratio of 0.0075. The SNR~1987A blast wave is expanding into the circumstellar shell of its progenitor star, which, a priori, is not expected to have an interstellar dust-to-gas mass ratio. Moreover, when estimating the depletion of elements onto dust in the ER, we should compare the expected dust abundances in the ER with the maximum available abundances for the LMC. General LMC abundances exist for B stars \citep{Rol96} and the ISM \citep{Wel99} and, although controversy still remains \citep{Kor02}, the LMC metallicity is usually assumed to be $0.5 - 0.7$ solar. Assuming that the fraction of metals locked up in LMC dust is the same as in the local ISM, and that the ER has the same metallicity as the LMC, the $IRX$ ratio in the ER should be about $0.5 - 0.7$ times that expected from supernova remnants in the Milky Way, still significantly larger than that implied by the observations. The extremely low value of the $IRX$ ratio may therefore be due to a deficiency in the abundance of the dust compared to interstellar values, which may reflect a low condensation efficiency of the dust in the circumstellar envelope.
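The observed ratio and the implied dust deficiency follow directly from the quoted fluxes; the 0.6 metallicity scaling used below is simply the midpoint of the $0.5-0.7$ solar range discussed above:

```python
# Observed fluxes quoted in the text, in erg cm^-2 s^-1
F_IR = 7.7e-12    # total infrared flux
F_X = 7.24e-12    # unabsorbed 0.2-4.0 keV X-ray flux

IRX_observed = F_IR / F_X   # ~1, far below the ~1e2 expected at T_e ~ 1.8e7 K

# Scaling the theoretical Galactic value (~1e2) by an assumed LMC metallicity
# of ~0.6 solar still leaves a large discrepancy, implying a dust abundance
# of only a few percent of the interstellar value.
dust_deficiency = IRX_observed / (0.6 * 1.0e2)
```

This back-of-the-envelope estimate reproduces the "few percent of interstellar" dust abundance quoted below.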
Second, the dust deficiency could be the result of grain destruction by thermal sputtering in the hot gas. The sputtering lifetime, $\tau_{sput}$, in a plasma with temperatures above $\sim 10^6$~K is about \citep{Dwe96}: \begin{equation} \tau_{sput} \approx 3\times 10^5\ {a(\mu{\rm m}) \over n({\rm cm}^{-3})} \ {\rm yr} \end{equation} \noindent where $n$ is the density of nucleons in the gas. The X-ray emitting gas is highly ionized, and we will assume that its density is that required to heat the dust to the observed range of temperatures, that is, $n \approx $ 300 to 1400~cm$^{-3}$. Grain destruction is important when the sputtering lifetime is about equal to the age of the shocked gas, which we take to be $\sim$1~yr. The low $IRX$ ratio can therefore be attributed to the effects of grain destruction if the dust particles had initial radii between 10 and 50 \AA. So attributing the small $IRX$ ratio to the effect of grain destruction in the hot plasma requires that only small grains had formed in the presupernova phase of the evolution of the progenitor star. The low $IRX$ ratio shows that IR emission from collisionally heated dust is not the dominant coolant of the shocked gas. Its lower than expected value suggests a dust-to-gas mass ratio in the ER that is only a few percent of its interstellar value. The puzzle of the low dust abundance is greater if, in fact, the IR emission arises from dust that is {\it not} embedded in the X-ray emitting gas, but from dust that resides in the UV/optical knots instead. \subsection{Stochastic Dust Heating} The possibility that the initial size of the dust grains swept up by the shock is small suggests that the temperature of the grains may not be at the equilibrium value, but may fluctuate due to the stochastic nature of the heating and cooling.
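The grain radii implied by the sputtering argument above can be checked by inverting Eq. (10) for the radius whose sputtering lifetime equals the $\sim$1~yr age of the shocked gas (function names are ours); the result reproduces the quoted 10-50~\AA\ range and motivates the discussion of very small, stochastically heated grains that follows:

```python
def sputtering_lifetime_yr(a_um, n_cm3):
    """Eq. (10): tau_sput ~ 3e5 a(um) / n(cm^-3) yr, valid for T >~ 1e6 K."""
    return 3.0e5 * a_um / n_cm3

def destroyed_radius_angstrom(age_yr, n_cm3):
    """Grain radius (Angstrom) whose sputtering lifetime equals the age
    of the shocked gas; larger grains survive sputtering."""
    return age_yr * n_cm3 / 3.0e5 * 1.0e4   # um -> Angstrom

# Shocked-gas densities of 300-1400 cm^-3 and an age of ~1 yr:
a_min = destroyed_radius_angstrom(1.0, 300.0)
a_max = destroyed_radius_angstrom(1.0, 1400.0)
```

Grains smaller than $a_{min}$-$a_{max}$ for the relevant density are sputtered away within the age of the shocked gas.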
Very small grains will be stochastically heated if the energy deposited in the grain in a single collision is large compared to its internal energy content, and if its cooling time via IR emission is shorter than the time between subsequent collisions. Assuming that the electron and ion temperatures are instantaneously equilibrated behind the shock, the mean thermal energy of the electrons will be 2.6~keV at a postshock temperature of $2\times10^7$~K. Using the electron ranges given by \citet{Isk83}, we find that the energy, $\Delta E$, deposited in a dust grain of radius $a(\mu{\rm m})$ and density $\rho$(g~cm$^{-3}$) by electrons with energies $E\gtrsim 370$~eV is given by: \begin{equation} \Delta E ({\rm erg}) = 5.5\times10^{-7}\ {\rho\ a\over E({\rm eV})^{0.492}} \end{equation} which for $\rho = 3$~g~cm$^{-3}$, $a = 0.0050~\mu$m, and $E=2.6$~keV gives an energy deposition of $\Delta E = 1.7\times10^{-10}$~erg. This value is small compared to the internal energy of the grain, which is about $6\times10^{-10}$~erg at the equilibrium grain temperature of 170~K. The average thermal speed of the electrons is about $3\times10^9$~cm~s$^{-1}$, giving an average time between electronic collisions of about 1~s for an electron density of 300~cm$^{-3}$ and a grain radius of 0.0050~$\mu$m. This is shorter than the grain's cooling time at 170~K, which is about 10~s \citep{Dwe86}, suggesting that the dust can maintain an equilibrium temperature at that value. Collisions with protons can deposit a significantly larger amount of energy in the dust. Typical proton energies in the shocked gas are $\sim 5$~keV, and the stopping power for protons at that energy is about 240~MeV~cm$^2$~g$^{-1}$ (NIST tabulated values: {\it http://physics.nist.gov/cgi-bin/Star}). The energy deposited in a 0.0050~$\mu$m radius dust particle is then about $6\times10^{-10}$~erg, which is about equal to the internal energy of the dust grain at 170~K.
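The electron energy-deposition and collision-time estimates above can be verified directly; the collision-rate expression $1/(n_e \bar v_e \pi a^2)$ is our reading of the geometric cross-section argument:

```python
import math

def delta_E_electron_erg(rho, a_um, E_eV):
    """Eq. (11): energy deposited by a single electron (E >~ 370 eV)
    in a grain of radius a (um) and bulk density rho (g cm^-3)."""
    return 5.5e-7 * rho * a_um / E_eV**0.492

def electron_collision_time_s(n_e, v_e, a_cm):
    """Mean time between electron-grain collisions, ~ 1/(n_e v_e pi a^2)."""
    return 1.0 / (n_e * v_e * math.pi * a_cm**2)

dE = delta_E_electron_erg(3.0, 0.0050, 2600.0)            # single-hit deposition
t_coll = electron_collision_time_s(300.0, 3.0e9, 5.0e-7)  # time between hits
U_INTERNAL_170K = 6.0e-10   # erg, grain internal energy at 170 K (quoted value)
```

The single-hit deposition is indeed well below the grain's internal energy, and the collision time is shorter than the $\sim$10~s cooling time, consistent with an equilibrium temperature for electron heating.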
So collisions with protons and heavier nuclei can be neglected if they are in thermal equilibrium with the electrons, because of their lower collision rate. However, if the electron and ion temperatures are not equilibrated behind the shock, then the dust heating rate will be dominated by collisions with the protons. The time between successive collisions will be about 40~s for a proton density of 300~cm$^{-3}$. The grains should therefore cool to a temperature of about 100~K before another collision takes place. However, the data do not support such a broad range of dust temperatures, suggesting that the gas density could be somewhat higher (by a factor of $\sim 3$), in which case the dust will cool only to a temperature of $\sim 140$~K, which may be more consistent with the observations. All these scenarios support the idea that the dust temperature in the X-ray emitting gas does not fluctuate wildly about the equilibrium dust temperature, which can still provide strong constraints on the density and the equilibration of the electron and ion temperatures in the postshock gas. \subsection{Dust in the Dense Knots of the Equatorial Ring} The UVO light emitting knots discovered with the {\it HST} resemble a string of beads uniformly distributed along the ER. Figure~\ref{fig14} and Figure~\ref{fig15} depict the map of the dust optical depth overlaid with contour levels of the HST emission obtained on Dec 15, 2004 (day 6502), the closest to our $11.7~\micron$ observations. The data look very similar, but the IR emission seems to emanate from a somewhat wider region than the optical emission, an effect that cannot be entirely accounted for by the lower resolution of the IR data. Nevertheless, the good correlation between the IR emission maps and the {\it HST} image suggests that a significant fraction, if not most, of the mid-IR emission may be emanating from the knots.
The physical conditions of a particular knot (Spot 1 on the ER) have been modeled in detail by \citet{Pun02}, from the analysis of the UV/optical line emission detected by the {\it HST} Space Telescope Imaging Spectrograph (STIS). They found that the UV fluxes could be fit with a model consisting of two shocks with velocities of $v_s$ = 135 and 250~km~s$^{-1}$ expanding into preshock densities of $n_0$ = 3.3$\times 10^4$ and 10$^4$~cm$^{-3}$, respectively. The postshock temperatures behind the slow and fast shocks are $T_s = 4.5\times 10^5$, and 1.5$\times 10^6$~K, with gas cooling rates of $\Lambda(T) \approx 10^{-22}$~erg~cm$^3$~s$^{-1}$. The cooling time of the shocked gas is given by $t_{cool} = kT_s / n\Lambda(T_s) $, and is equal to $\approx$ 0.1 and 2~yr for the slow and fast shock, respectively \citep{Pun02}. The thickness of the shock front is therefore $\sim 4\times 10^{13}$ and 8$\times 10^{14}$~cm for the slow and fast shocks, respectively, both significantly smaller than a typical radius of the knot, which is about $2\times10^{16}$~cm. So most of the dust in the knot resides in the unshocked gas and is heated by the radiation emitted from the cooling shocked gas. The radiative energy density seen by the dust is approximately given by: \begin{eqnarray} U_{rad} & \approx & {n^2 \Lambda(T_s) \ \ell_{cool} \over c} \nonumber \\ & \approx & n\ k\ T_s\ {v_s\over c} \nonumber \\ & \approx & f_c\ n_0\ k\ T_s\ {v_s\over c} \end{eqnarray} where $\ell_{cool} = t_{cool}\ v_s$, and $f_c$ is the compression factor of the gas in the postshock region. For $T_s =10^6$~K, $n_0 = 10^4$~cm$^{-3}$, and a shock velocity of 200~km~s$^{-1}$ we get that \begin{equation} U_{rad} \approx 2\times 10^{-9}\times f_c \ \ {\rm erg\ cm}^{-3} \end{equation} \citet{Pun02} find that compression factors can be as large as $\approx$ 550, giving radiation densities of $\sim 10^{-6}$~erg~cm$^{-3}$ throughout the knot.
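The radiation energy density estimate above can be evaluated directly. Counting only the hydrogen number density $n_0$, the expression $n_0 k T_s v_s/c$ gives $\sim 10^{-9}$~erg~cm$^{-3}$ per unit $f_c$, within a factor of $\sim$2 of the quoted $2\times10^{-9} f_c$; the remaining factor plausibly reflects the electron and helium contributions to the total particle density. A quick numerical sketch:

```python
# U_rad ~ f_c * n0 * k * T_s * (v_s / c), cgs units, per unit compression f_c.
k_B = 1.380649e-16   # Boltzmann constant, erg/K
c   = 2.998e10       # speed of light, cm/s

def u_rad_per_fc(n0=1e4, T_s=1e6, v_s=2e7):
    """Radiative energy density (erg cm^-3) per unit compression factor,
    counting only the hydrogen number density n0 (an assumption; the
    paper's 2e-9 likely includes all particle species)."""
    return n0 * k_B * T_s * (v_s / c)

u = u_rad_per_fc()   # ~1e-9 erg cm^-3, same order as the quoted 2e-9
```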
The energy density of the local interstellar radiation field is about $3\times 10^{-12}$~erg~cm$^{-3}$. Silicate dust particles immersed in this field achieve equilibrium dust temperatures of about 15~K \citep{Zub04}. The energy density in the knot is therefore higher by a factor of $\sim 3\times 10^5$ than that of the local interstellar radiation field, and the average dust temperature should therefore be higher by a factor of $\sim$ 8.3 for a $\lambda^{-2}$ dust emissivity law ($T_d \propto U_{rad}^{1/6}$). This gives a typical dust temperature of $\sim$ 125~K, in reasonable agreement with the observed average. The total mass of radiating dust was found to be $\sim 10^{-6}\ M_{\odot}$. The typical mass of gas in a knot of radius $r = 2\times 10^{16}$~cm, and density $n_0 =10^4$~cm$^{-3}$ is $\sim 10^{-4}\ M_{\odot}$. If ${\cal N} \approx$~20 is the number of knots in the ER, then the dust-to-gas mass ratio is $\sim 10^{-6}/({\cal N} \times 10^{-4}) \approx 5\times 10^{-4}$, or approximately a factor of 10 less than the average dust-to-gas mass ratio in the local interstellar medium. The low abundance of dust in the knots could be explained if the dust is efficiently destroyed in the shocked gas {\it and} if the transmitted shocks have already traversed most of the volume of the knots. Calculations presented by \citet{Jon04} show that about 49\% of the silicate grains swept up by 200~km~s$^{-1}$ shocks expanding into a medium with a preshocked density of 0.25~cm$^{-3}$ are destroyed. This fraction could be significantly higher for the densities encountered by the shocks traversing the knots. This scenario predicts that the IR emission from the knots was higher in the past, contrary to observed IR light curves (see below). Therefore, if the IR emission emanates from the knots, the low dust abundance must reflect the initial dust abundance in these objects. 
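The temperature scaling and dust-to-gas estimate above follow from $T_d \propto U_{rad}^{1/6}$ for a $\lambda^{-2}$ emissivity law; a quick check with the numbers given in the text:

```python
# Dust temperature scaling for a lambda^-2 emissivity law: T_d ~ U_rad^(1/6).
U_knot = 1e-6    # erg cm^-3, radiation density in the knot
U_ism  = 3e-12   # erg cm^-3, local interstellar radiation field
T_ism  = 15.0    # K, equilibrium silicate temperature in the local ISRF

T_knot = T_ism * (U_knot / U_ism) ** (1.0 / 6.0)   # ~125 K

# Dust-to-gas mass ratio over N ~ 20 knots:
M_dust, M_gas_per_knot, N = 1e-6, 1e-4, 20         # solar masses
d2g = M_dust / (N * M_gas_per_knot)                # ~5e-4
```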
\section{THE LIGHT CURVES} The light curves at 10 and 20~$\micron$ are shown in Figure~\ref{fig16}: the absolute flux calibration has been made using the \citet{Coh92} 0-magnitude fluxes [$F_0$(10.0~\mic) = 35.24~mJy, $F_0$(11.7 \mic) = 28.57~mJy, and $F_0$(18.30~\mic) = 10.25~mJy]. It can be seen from Figure~\ref{fig16} that the flux arising from the ejecta at 10~\mic\ declines exponentially from day 2200 through day 4200 (the ISOCAM and OSCIR observations) until day 6000, at a rate of $\sim$0.32~mag~yr$^{-1}$. This would imply that the observations on day 4200 are not dominated by the ring emission. It is likely that the ring emission started around day 4000, at roughly the epoch when the first optical spot was discovered \citep{Pun97}, and in good agreement with Fig.~5 of \citet{Par02} which presents ATCA and {\it Chandra/ROSAT} data. The significant increase of the fluxes reported in the present paper compared to the previous fluxes (Fig.~\ref{fig16}) is consistent with the soft X-ray flux increase observed in the last set of data \citep{Par05b}. This clearly shows that ``something'' has happened at around day $\sim6000$. This is also manifest in the radio light curve at 843~MHz from MOST. The earlier observations by \citet{Bal01} from 3000 to 6000 days show a steadily increasing flux together with a steadily increasing rate of change. Near day 6000, however, the rate of increase undergoes a more abrupt change upwards \citep{Hun06}, and is therefore clearly associated with the changes seen at other wavelengths. Figure~\ref{fig16} also shows the Spitzer data point resulting from integrating the IRS data within the appropriate bands. Including these data points shows that the trend of the increase of the flux at 11.7~\mic\ since after day $\sim$6000 is better fit by a ``linear'' function, while the increase in magnitude vs. time is not linear.
Although we do not make strong conclusions, this could indicate that the shock is travelling through structures with cross sections (as seen from the center of the explosion) that do not increase with radial distance, like short cylindrical clouds. Sputtering time scales for all but the smallest grains are too long (10--100 years) to affect the IR emission, and the plasma cooling time due to the IR emission is also too long ($\sim100$ years) to affect the time variation in the IR emission. The 13~cm radio emission light curve (http://www.atnf.csiro.au/research/SN1987A/) is also shown in Figure~\ref{fig16}. No data are currently available after day 6244, so it is premature to discuss the associated 13~cm evolution. It would be surprising if the rate of increase does not change upwards. While the sudden increase in the X-ray light curve at around day 3700 has been interpreted as the encounter of the shock front with the first protrusions of the ER \citep{Par04}, the ``jump'' after around day 6000 could be the sign of the shock reaching the main body of the ER \citep{Par05b}. These authors also show that the light curve of the hard X-rays (3--10 keV) is much flatter than one corresponding to the soft X-rays, and similar to the radio light curves, and they argue that it is likely that the hard X-ray emission comes from the fast reverse shock rather than the decelerating forward shock, just like the radio emission \citep{Man05}. We note that the reverse shock origin for the radio emission is one possible interpretation for explaining the inconsistency of the radio images with the IR and the soft X-rays images. It is not our intention to discuss evolution of radio fluxes and their relationship to other changes reported in this paper, but we note that the log-log plot in Figure~\ref{fig16} hides a significant change in the rate of increase of the 13~cm flux at 5000 days. The epoch of increase in the rate appears to be frequency dependent.
\subsection{The Ejecta} An asymmetry in the profiles of optical emission lines that appeared at day 530 showed that dust had condensed in the metal-rich ejecta of the supernova \citep{Luc91}. Although it was discovered via spectroscopy, the presence of the dust could be easily inferred from the spectral energy distribution: as the dust thermalized the energy output, after day 1000 SN~1987A radiated mainly in the mid-infrared \citep{Bou91}. Although the presence of the dust emission from the condensates in the ejecta was reported in Paper I, we do not detect them in the present observations. The possibility that this was due to the fact that the present observations were achieved with the narrow Si5 filter instead of the broad N filter was investigated, in relation to the occurrence of the [Ne II] $12.8~\micron$ line. Indeed, this line corresponds to a fine structure transition in the ground state of Ne II so the temperature to excite the upper level does not need to be high if there is Ne II. However, Paper I reports a dust temperature of $90~K < T < 100~K$ in the ejecta, and there is little evidence for X-ray emission from the ejecta. Furthermore, the X-ray emitting gas would be ionized to a much greater extent than required to produce Ne II. It is thus most likely that Ne II is coming from warmer regions near the X-rays which are also responsible for the other HST and {\it Spitzer} lines. Why then is the ejecta not detected in our last observations? In Paper I, the N band background sigma was 0.033 mJy/pix. We used a 12-pixel area to integrate the central source, and then the 3-$\sigma$ detection limit was 0.34 mJy which is about the 0.32 mJy measurement reported for the central source in Paper I. In the present data, although the source is observed with a better signal-to-noise ratio, the background at $11.7~\micron$ is affected by a higher noise, with a 3-$\sigma$ background value of 0.47 mJy.
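The quoted 3-$\sigma$ limit can be reproduced if the per-pixel background noise is assumed to add in quadrature over the 12-pixel aperture (the formula is our assumption; the text gives only the inputs and the result):

```python
import math

def three_sigma_limit_mjy(sigma_pix_mjy, n_pix):
    """3-sigma point-source limit for uncorrelated per-pixel noise
    summed over an aperture of n_pix pixels."""
    return 3.0 * sigma_pix_mjy * math.sqrt(n_pix)

# Paper I values: sigma = 0.033 mJy/pix over a 12-pixel aperture.
limit = three_sigma_limit_mjy(0.033, 12)   # ~0.34 mJy, as quoted
```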
It transpires that we could not have detected a source at the flux level expected from the radioactive decay based light curve (e.g., $\sim$0.3 mJy). It appears then that the only detection of the ejecta at this late stage is achieved in the H band (see Figure~\ref{fig7}). The origin of this emission is most likely due to line emission, although we note that the limiting flux for any continuum emitter at the center of SN~1987A, in the wavelength range 2900--9650 \AA\ reported recently by \citet{Gra05} (4.3 mJy in the I filter) is much above our detection at H (0.11 mJy). Therefore, our near IR results are compatible with HST observations and do not exclude contribution from a continuum. Supernovae are known contributors to interstellar dust. The presence of isotopic anomalies in meteorites \citep{Cla04} and observations of Cas~A \citep{Dou99, Are99} and SNR~1987A (Paper~I and references therein) provide direct evidence for the formation of dust in SNe. However, the relative importance of SNe compared to quiescent outflows from AGB stars in the production of interstellar dust grains is still unclear \citep{Jon04, Dwe98, Tie98}. So far, the dust masses in the ejecta have been determined observationally for two SNe, SN1987A \citep{Luc89} and SN1999em \citep{Elm03}. These masses were $\sim 3\times10^{-4}$~\Msun\ and $\sim 10^{-4}$~\Msun. In both cases the authors note that these values could be much higher if the dust exists in opaque clumps, which may be the case. At face value, however, they are much less than the 0.1 - 1.0~\Msun\ required to make SNe the dominant source of interstellar dust particles. In the future, one must find a means of quantifying the effects of clumping. Observations of the handful of other supernovae in which an IR excess is interpreted as dust forming in the ejecta (SN~1979C and SN~1985L, and probably SN~1980K) do not allow an estimate of the mass of dust.
As for the young Galactic SNRs that have been observed by IRAS and ISO (Cas~A, Kepler and Tycho), the dust mass deduced is only $10^{-7} - 10^{-3}$~\Msun\ \citep{Lag96, Dwe87c}, also many orders of magnitude lower than the solar mass quantities predicted. Paper I discusses the role of supernovae in dust production. We showed there that the dust which condensed in the ejecta of SN~1987A has survived 16 years since outburst, and was still radiating the energy released by the radioactive decay of $^{44}$Ti at the expected level. Unfortunately, we could not accurately estimate the mass of the dust, and the observations reported in this paper do not allow it either, in the absence of detection of the ejecta. Thus, if we consider SN~1987A as an archetype, our data can neither support nor rule out the hypothesis that supernovae are significant sources for dust production. \section{CONCLUSIONS} We have presented mid-IR images of SNR~1987A obtained with T-ReCS on the {\it Gemini} South telescope on day 6526 at 11.7 and 18.3 \mic\ and with IRAC (5.8 and 8~\mic) on day 6130 and MIPS (24~\mic) on day 6187 onboard {\it Spitzer}, together with 3 - 37~\mic\ spectroscopic observations of the remnant obtained with IRS on day 6190 at the same observatory. The imaging observations (Figure~\ref{fig1}) show that the mid-IR emission arises from the dust in the equatorial ring (ER) heated up by the interaction of the SN blast wave with its circumstellar medium. Several theoretical models predicted the presence of dust in the CSE of SN~1987A which was produced in the winds of the supergiant phase. The location of the IR emission rules out the possibility that the dust condensed out in the SN ejecta, strongly suggesting a circumstellar origin instead. The 3 - 37~$\micron$ spectrum (Figure~\ref{fig10}) shows that the emission arises from a population of astronomical silicate particles.
Temperature maps show that the dust temperature is fairly uniform in the ER and about 166$^{+18}_{-12}$~K, with total dust masses of $\sim 2.6^{+2.0}_{-1.4} \times 10^{-6}$~\Msun. A comparison with {\it Chandra} and {\it HST} observations shows an equally good correlation between the 11.7~\mic\ IR and the X-ray and UV-optical images of the supernova. Because of the limited angular resolution in the IR we cannot determine the location or heating mechanism of the radiating dust. The dust could be residing in the hot $\sim 10^7$~K gas and collisionally heated by the X-ray emitting plasma. The dust temperature is then an excellent diagnostic of the electron density, giving a value of $\sim$300--1400~cm$^{-3}$, similar to the value suggested by the {\it Chandra} observations. Comparison of the IR and X-ray fluxes suggests that the dust is depleted by a factor of $\sim$ 30 in the X-ray emitting gas, compared to its value in the local interstellar medium of the LMC. This low value could be due to its destruction by thermal sputtering in the shocked gas, requiring the initial grain radii to be below $\sim$ 50~\AA. Alternatively, the dust could be residing in the UV-optical emitting knots in the ER and radiatively heated by the cooling gas that was excited by the shocks propagating through these knots. A simple calculation for a particular knot shows that the radiative energy density can heat the dust to typical temperatures of about 125~K, similar to those inferred from the IR observations. A comparison with the mass of the knots shows that the dust-to-gas mass ratio in the knots is lower by a factor of $\sim$ 10 compared to its value in the ISM of the LMC. This low abundance reflects the low condensation efficiency of the dust in the outflow of the progenitor star.
We stress that in order to assess the role of SNe in the production of dust in the Universe, it is clearly important to measure the presence of dust that survives into the formation of the remnant, and for this, mid-IR and sub-mm observations are critical. \acknowledgements{PB is most grateful to Eric Pantin for the use of his {\it ``Multiscale Maximum Entropy Method"} program for the deconvolution of the images, and for helpful discussions related to it. The authors acknowledge R. Manchester for providing the ATCA image obtained on July, 31, 2003. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. SMART was developed by the IRS Team at Cornell University and is available through the Spitzer Science Center at Caltech. JD acknowledges support by grants from MIUR COFIN, Italy. NBS acknowledges support for the study of SN~1987A through the HST grants GO-8648 and GO-9114 for the Supernova INtensive Survey (SInS: Robert Kirshner, PI). SP was supported in part by the SAO grant GO5-6073X. ED was supported in part by NASA LTSA 2003.}
\section{Introduction} \label{sec:intro} One of the main, unresolved questions in planet formation is how the material in the disk grows from (sub-$\mu$m) Interstellar Medium dust to planetesimals. In the past years, there has been a lot of attention on pressure bumps at the edges of cavities, because they can efficiently trap relatively large dust grains, and those are regions in the disks where dust growth is expected to take place \citep[e.g.,][]{pinilla12, pinilla2015c, vandermarel2013, vandermarel2015, macias2018}. Transitional disks (TDs)--disks with sub-mm cavities--are then prime targets to study pressure bumps, dust traps, grain growth, and their connection to planet formation. One of the mechanisms proposed to explain cavities on disks is the accumulation of large particles into pressure bumps created by planet-disk interactions, halting their radial drift to the central star and driving grain growth \citep{pinilla12}. If the planet producing the cavity is massive enough ($M_{p} > 5 M_\mathrm{Jup}$), both small and large particles will be trapped at the inner edge of the cavity; if instead a low-mass planet ($M_{p} \sim 1M_\mathrm{Jup}$) is present, then the small particles will filter into the cavity, reaching the inner regions \citep{pinilla2016a}. This mechanism seems to explain the differences in radii observed not only between the gaseous and the dusty disk, but also between dust grains of different sizes \citep[e.g,][]{dong2012,dejuanOvelar2013,follette2013,garufi2013,vandermarel2016,vandermarel2018}. Dead zones have also been invoked as an alternative mechanism by which particles accumulate in protoplanetary disks producing ring-like structures. In this case, no planets are needed in order to generate the pressure gradient required for dust grains to get trapped.
Instead, low-ionization regions in the disk locally reduce the magnetorotational instability (MRI), which causes the gas flow to significantly decrease, accumulating particles near the boundary of the dead zone \citep{pinilla2016b}. High resolution images of protoplanetary disks at sub-mm wavelengths, like the ones reported in the Disk Substructures at High Angular Resolution Project \citep[DSHARP;][]{andrea2018,andrews2018,birnstiel2018,dullemond2018,guzman2018,huang2018a,huang2018b,isella2018,kurtovic2018,zhang2018} have shown with stunning details the variety of sub-structures present in these young systems. Multiple rings and gaps, spiral arms, azimuthal asymmetries/vortices, seem to be common features in planet-hosting disks. Similar advances have been made at optical/NIR wavelengths thanks to the polarimetric differential imaging technique \citep[PDI; e.g.][]{canovas2011,tsukagoshi2014,benisty2017,avenhaus2018}, which uses the polarized light scattered at the disk surface to obtain the linear Stokes parameters of the incoming light without contamination from the stellar contribution, which is mostly unpolarized. Since the scattered polarized light will depend on the properties of dust grains (reflectivity, albedo, porosity, composition), polarimetric observations provide a useful tool to estimate grain properties in protoplanetary disks. High-contrast imaging in the IR is a useful technique to detect the companions that could be responsible for the structures observed in protoplanetary disks \citep[e.g.][]{Absil2010, Bowler2016}. Nowadays, adaptive-optics (AO) assisted observations with the so-called angular differential imaging technique \citep[ADI;][]{Marois2006} are routinely used to search for these young giant exoplanets. Using this strategy with several AO-equipped instruments has made it possible to obtain the first images of a protoplanet in the large cavity of the disk surrounding the T~Tauri star PDS~70 \citep{Keppler2018,Muller2018,Christiaens2019a}.
Sz\,91 is an M0 T Tauri Star (TTS), located in the Lupus III molecular cloud \citep{romero2012}, hosting a TD with the largest mm-dust cavity observed in a low-mass star, and with a significant mass accretion rate ($\rm \dot{M}\sim 10^{-8.8}M_{\odot}yr^{-1}$) \citep{alcala2017}. ALMA observations have revealed a ring-like concentration of mm-sized particles peaking at $\sim$95 au, and gas extended emission from less than 16 au up to almost 400 au \citep{canovas2015a,canovas2016,tsukagoshi2019}. PDI observations taken with the Subaru Telescope by \citet{tsukagoshi2014} showed a crescent-like emission peaking closer to the star than the ALMA sub-mm data. They suggested that the observed polarized intensity originates at the inner edge of the transition disk. In this paper, we report new PDI observations of Sz\,91 obtained in the NIR ($H$ and $K_s$ bands) and new high-contrast ADI observations in the thermal IR ($L'$ band), both obtained with VLT/NaCo. \section{Observations and data reduction} \label{sec:obs} \subsection{$H$ and $K_s$ band Polarimetry} \label{sec:pol_obs} We observed Sz\,91 in visitor mode with the NaCo instrument \citep{lenzen2003,rousset2003} at the VLT/UT 1 on March 21st, 2017. The observations were carried out in the polarimetric mode using the broad-band NaCo $H$ and $K_{s}$ filters ($\lambda_c$ = 1.66, 2.18 {\micron}, respectively). In this observing mode a half-wave plate (HWP) first rotates the polarization plane of the incoming light and then a Wollaston prism splits the light into two orthogonally polarized beams, which are projected on different regions of the detector. The pixel size of the camera was set to 0.{\arcsec}027 px$^{-1}$, the readout mode to {\tt Double RdRstRd} and detector mode to {\tt HighDynamic} ensuring a $\sim$68 e- readout noise and a $\sim$110,000 e- linear dynamic range. 
As Sz\,91 is relatively faint and red ($m_{\rm 2MASS\ H,\ K_{s}} = 10.1,\ 9.8$) we used the {\tt N20C80} dichroic that sends 80\% of the light to the AO wavefront sensor and 20\% to the detector to maximise the throughput of our observations. The observations were divided in several polarimetric cycles where each cycle contains four datacubes, one per HWP position angle (at 0{\degr}, 22.5{\degr}, 45{\degr}, and 67.5{\degr}, measured on sky east from north). The airmass ranged from 1.0 to 1.5 during the complete sequence, which included observations of the comparison star (GSPC S264-D). We used detector integration times (DIT) of 15 and 30s for the $H$ and $K_s$ band observations, respectively. During the $K_s$ band observations the seeing was very good and stable with a value of 0.{\arcsec}53$\pm$0.{\arcsec}08. The observing conditions degraded during the $H$ band observations and the average seeing increased to 0.{\arcsec}76 $\pm$ 0.{\arcsec}14. Standard calibrations including darks and flat fields, as well as observations of a photometric standard star (GSPC S264-D) were provided by the ESO observatory. Our observations are summarised in Table~\ref{tab:obs_tab}. \begin{table} \begin{center} \caption{Summary of observations} \label{tab:obs_tab} \begin{tabular}{llcccc} \hline \hline Date & Band & DIT & NDIT & $t_{\rm exp}$ & $<$seeing$>$\\ & &(s) & & (s) & ({\arcsec})\\ \hline 2017-03-21 & $H$ & 15 & 4 & 4268$^{\mathrm{a}}$ & 0.{\arcsec}76 $\pm$ 0.{\arcsec}14\\ 2017-03-21 & $K_s$ & 30 & 8 & 8640 & 0.{\arcsec}53 $\pm$ 0.{\arcsec}08\\ 2017-04-11 & $L'$ & 0.2 & 100 & 1080 & 0.{\arcsec}93 $\pm$ 0.{\arcsec}11\\ 2017-04-12 & $L'$ & 0.2 & 100 & 2760 & 0.{\arcsec}84 $\pm$ 0.{\arcsec}10\\ 2017-05-15 & $L'$ & 0.2 & 100 & 2280 & 1.{\arcsec}26 $\pm$ 0.{\arcsec}18\\ \hline \end{tabular} \end{center} \begin{footnotesize} {\bf Notes.} The fifth column indicates the total (including the four HWP position angles) exposure time for the H and $K_s$ bands. 
$^{\mathrm{a}}$We discarded nearly 2/3 of these images due to bad weather.\\ \end{footnotesize} \end{table} The two simultaneous, orthogonally polarized images recorded on the detector when the HWP is at 0{\degr}(45{\degr}) were subtracted to produce the Stokes parameter $Q^+(Q^-)$. Repeating this process for the 22.5{\degr}(67.5{\degr}) angles produces the Stokes $U^+(U^-)$ images. The total intensity (Stokes I) was computed by adding all the images. We used customised scripts to process the raw data following the imaging polarimetry pipeline described by \citet{canovas2011}. First, each science frame was dark current subtracted and flat-field corrected. Hot and dead pixels were identified with a $\sigma$-clipping algorithm and masked out using the average of their surrounding good pixels. The two images recorded in each science frame were aligned with an accuracy of 0.05 pixels as described in \citet{canovas2011,canovas2015b}. This process was applied to every science frame resulting in a datacube for each Stokes $Q^{\pm}, U^{\pm}$ parameter. The images were median combined and corrected for instrumental polarization using the double-difference method as described in \citet{canovas2011} to produce the final Stokes Q and U images. We then derived the polarization angle ($P_{\theta} = 0.5\arctan(U/Q)$), the polarized intensity ($P_I = \sqrt{Q^2 + U^2}$), and the $Q_{\phi}$ and $U_{\phi}$ images following the Stokes formalism \citep[see][]{schmid2006}: \begin{equation} Q_{\phi} = + Q\cos(2\phi) + U\sin(2\phi) \end{equation} \begin{equation} U_{\phi} = - Q\sin(2\phi) + U\cos(2\phi), \end{equation} \noindent where $\phi$ is the position angle of the image coordinates (x, y) with respect to the star location (x$_0$, y$_0$): \begin{equation} \phi = \arctan\left(\frac{x-x_0}{y-y_0}\right) + \theta, \end{equation} \noindent with $\theta$ as the offset needed to correct for instrumental polarization produced by the angular misalignment of the HWP.
This is a convenient coordinate system since, under the single-scattering assumption, all the emission from a protoplanetary disk should be in the azimuthal direction and be observed as a positive signal in $Q_{\phi}$, whereas emission in $U_{\phi}$ can be taken as disk residual noise \citep{schmid2006}. The central r $<$ 4 px region is dominated by noise, and has therefore been masked out and is not considered in the analysis. Many of the individual frames were slightly overexposed and have saturated and/or non-linear pixels around the projected center of the star. We have median combined a subsample of unsaturated frames to construct a representative point spread function (PSF) for each band. From these PSFs we derive a full width at half maximum (FWHM) of 0.{\arcsec}14 and 0.{\arcsec}19 at $K_s$ and $H$ band, respectively. The narrower FWHM at $K_s$ band is most likely related to a better AO performance, as Sz\,91 is slightly brighter towards redder wavelengths and the weather conditions were more stable during the $K_s$ observations. Combining these two PSFs with the zero points derived from the observations of the standard star, we find that the measured flux is consistent, within an error bar of 0.05 mag, with the published 2MASS photometry. We therefore use the 2MASS photometry to calibrate our observations. At $K_s$ band the disk is clearly detected, while a preliminary analysis of the $H$ band data-set showed that frame selection had to be applied in order to recover the disk signal. We performed data reductions using different subsets of $H$ band observations in order to obtain the highest signal to noise (S/N) disk image. The $H$ band results presented here were obtained after processing a subset with a total exposure time of 1440 s and average seeing of 0.{\arcsec}57 $\pm$ 0.{\arcsec}05.
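The $Q_{\phi}$/$U_{\phi}$ construction described in the preceding paragraphs can be sketched in a few lines of numpy (an illustration, not the actual reduction pipeline; the image size, center, and synthetic azimuthal pattern are invented for the check):

```python
import numpy as np

def azimuthal_stokes(Q, U, x0, y0, theta=0.0):
    """Compute Q_phi, U_phi maps from Stokes Q, U images, using the
    convention written in the text (Schmid et al. 2006). The position
    angle phi = arctan((x-x0)/(y-y0)) + theta is measured from the +y
    axis, hence arctan2(dx, dy); theta is the HWP misalignment offset."""
    ny, nx = Q.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phi = np.arctan2(x - x0, y - y0) + theta
    Qphi = Q * np.cos(2 * phi) + U * np.sin(2 * phi)
    Uphi = -Q * np.sin(2 * phi) + U * np.cos(2 * phi)
    return Qphi, Uphi

# Sanity check: a purely azimuthally polarized source puts all of its
# signal into Q_phi and none into U_phi.
ny = nx = 64
x0 = y0 = 31.5
y, x = np.mgrid[0:ny, 0:nx]
phi = np.arctan2(x - x0, y - y0)
P = 1.0                      # uniform polarized intensity (arbitrary units)
Q = P * np.cos(2 * phi)      # azimuthal polarization pattern
U = P * np.sin(2 * phi)
Qphi, Uphi = azimuthal_stokes(Q, U, x0, y0)
```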
\subsection{$L'$ band imaging} \label{sec:Lband_img} In order to search for (sub)stellar companions, we observed Sz\,91 with NaCo at $L'$ band ($\lambda_c = 3.80 \mu$m) on 11 April, 12 April and 15 May, 2017. The first two datasets were obtained in average and relatively stable conditions, while the last one was acquired under mediocre and more variable seeing. Since Sz\,91 is relatively faint \citep[$m_L \approx 9.7$ mag;][]{Wright2010}, no coronagraph was used. All observations were obtained in pupil-tracking mode. The DIT was set to 0.2s and data were obtained in cube mode with 100 frames per cube (NDIT = 100). With this choice of DIT, neither the background thermal emission nor the star itself saturated on the detector. The star was jittered in the three good quadrants of the detector throughout the observing sequence, to allow for an optimal sky subtraction. We considered the plate scale of NaCo to be 0\farcs0271 $\pm$ 0.0002 px$^{-1}$ in $L'$ band, as per the astrometric calibrations presented in \citet{Milli2017}. Details of the observations can be found in Table~\ref{tab:obs_tab}. We implemented our own pipeline to calibrate the data, which is similar to the one used for the NaCo data presented in \citet{Milli2017}. Our pipeline is based on routines of the Vortex Imaging Package \citep[\texttt{VIP};][]{GomezGonzalez2017}\footnote{\url{https://github.com/vortex-exoplanet/VIP}}, an open-source set of python codes for calibration and post-processing of high-contrast images. Our calibration procedure consists of dark subtraction, flat-fielding, bad pixel correction, sky subtraction, centering of the star and bad frames rejection. For the centering, we fitted the stellar PSF with a Moffat function, and shifted frames to place the stellar centroid on the central pixel of all images. Given the relatively short integration of each individual dataset, we combined them all in a single datacube. 
The total parallactic angle rotation achieved in the combined datacube is 111.6\degr. We then removed frames with the least correlated stellar PSF compared to the median of all PSF images, as measured with the Pearson correlation coefficient. About 10\% of all frames were removed on that basis. This trimming ensured a good PSF modeling and subtraction in post-processing. For the latter, we used principal component analysis coupled with ADI \citep[PCA-ADI;][]{Amara2012,Soummer2012} as implemented in \texttt{VIP}. We considered PCA-ADI either in full frames or in frames divided in 2-FWHM wide concentric annuli. In the latter case, a threshold in parallactic angle corresponding to 1 FWHM azimuthal motion is used to build the PCA library for each annulus \citep[see e.g.][]{Absil2013a}. This is to minimize self-subtraction of any putative companion. \section{Modeling and Results} \label{sec:results} Figure~\ref{fig:obs_img} shows the observed polarized intensity (left), the $Q_{\phi}$ (center), and $U_{\phi}$ (right) images for the $K_s$ (top) and $H$ (bottom) bands. We detect two lobes north and south of the star at both bands, which correspond to the major axis of the disk. Fainter emission is also seen on the minor axis on the right side, which has been identified as the front-facing side of the disk closest to us \citep{tsukagoshi2014,tsukagoshi2019}. The disk shows polarized emission above the noise level up to $\sim$0.{\arcsec}52 along the major axis in the $Q_{\phi}$ maps. A central cavity is observed at both bands, as seen from the substantial emission dips close to the center. The residual signal observed in the $U_{\phi}$ images, especially at $K_{s}$ band, might be related to multiple scattering events in which the linear polarization is not purely azimuthal but has a radial contribution, an effect that is even more pronounced for disks with inclinations $\geq$ 40{\degr} \citep{canovas2015c,pohl2017}.
\begin{figure*} \centering \includegraphics[scale=0.7]{Sz91_K_PQU.eps} \includegraphics[scale=0.7]{Sz91_H_PQU.eps} \caption{From {\it left to right}: observed polarized intensity, $Q_{\phi}$, and $U_{\phi}$ image at $K_s$ (top) and $H$ (bottom) band. The central 0.{\arcsec}1 region, dominated by noise, has been masked. The observing conditions degraded during the $H$ band observations. In the $U_{\phi}$ images blue corresponds to positive values, red to negative values. North is up and east is to the left in all panels.} \label{fig:obs_img} \end{figure*} In this section, we aim to provide a radiative transfer model for Sz\,91 that reproduces the main characteristics of the polarized emission observed at $H$ and $K_s$ bands. We first estimate the stellar parameters of the source (model inputs) and then fit a radiative transfer model to the observations. \subsection{Stellar Properties: VOSA} \label{sec:stellarprop} We estimated the stellar properties of Sz\,91 using the Virtual Observatory (VO) tool VOSA\footnote{\url{http://svo2.cab.inta-csic.es/theory/vosa/}} \citep[Virtual Observatory SED Analyzer;][]{bayo2008}. The observed (stellar) SED of the source is compared to the synthetic photometry obtained from a suite of theoretical models via a $\chi^2$ test. In our case, we considered the BT-Settl-CIFIST and Kurucz models in the analysis. For this, we used a distance to the source of $d$ = 159.06$\pm$1.63 pc \citep[Gaia DR2;][]{bailer-jones2018}, and considered the line-of-sight extinction, $A_{\rm v}$, as an additional fit parameter with an initial upper limit of 2.5 mag taken from the extinction maps of the Infrared Science Archive (IRSA) \citep{schlegel1998}. The best fit was found for the BT-Settl-CIFIST models, which are used to infer the total observed flux from the star. We highlight that this estimate is more accurate than one obtained using a bolometric correction derived from a single colour.
Then, we locate the object in a Hertzsprung-Russell diagram, given its estimated luminosity, $L_{*}$, and effective temperature, $T_{\rm eff}$, and use the isochrones and evolutionary tracks from \citet{baraffe2015} to estimate the stellar mass, $M_*$, and age of the object. The uncertainties are estimated through a Bayesian approach, as explained in Section~\ref{sec:best_fit}. The stellar radius, $R_{*}$, on the other hand, is estimated using the dilution factor defined as $M_d = (R_{*}/d)^2$, with an uncertainty set by error propagation. Table~\ref{tab:stellar_para} lists the stellar parameters obtained in this work along with their uncertainties. \begin{table} \begin{center} \caption{Stellar parameters} \label{tab:stellar_para} \begin{tabular}{lrr} \hline \hline Parameter & Value & Uncertainty\\ \hline $T_{\rm eff}$ (K) & 3800 & [3750, 3850]\\ Log $L_{*}$ ($L_{\odot}$) & -0.59 & [-0.63, -0.56] \\ $R_{*}$ ($R_{\odot}$) & 1.18 & [1.16, 1.18] \\ $M_{*}$ ($M_{\odot}$) & 0.58 & [0.51, 0.62] \\ Age (Myr) & 5.0 & [3.6, 7.4] \\ $A_{\rm v}$ (mag) & 1.65 & [1.58, 1.72]\\ \hline \end{tabular} \end{center} \end{table} The new distance reported in the Gaia DR2 catalog (159 pc; the source was previously thought to be located at 200 pc) results in a significantly older age for Sz\,91. Using the above methodology, we obtained an age of $5^{+2.4}_{-1.4}$ Myr, older than the $\sim$3 Myr reported by \citet{tsukagoshi2019}. Since age estimates for individual objects are model dependent and very uncertain, we consider Sz\,91 to be at least 3 Myr old. \subsection{Radiative transfer modeling} \subsubsection{MCFOST Model}\label{model} We used the 3D radiative transfer code MCFOST \citep{pinte2006,pinte2009} to model the polarimetric images at $H$ and $K_s$ band. MCFOST computes the dust temperature structure and scattering source function, under the assumption of radiative equilibrium between the dust and the local radiation field, via a Monte Carlo method.
Images are then obtained via a ray-tracing method, which calculates the output intensities by formally integrating the source function estimated by the Monte Carlo calculations. Full calculations of the polarization are included using the Stokes formalism\footnote{\url{http://ipag-old.osug.fr/~pintec/mcfost/docs/html/overview.html}}. The surface density distribution of the disk is described by a simple profile of the form: \begin{equation} \Sigma(r)\ = \Sigma_{\rm o} \left(\frac{r}{1\,\mathrm{au}}\right)^{\gamma}, \end{equation} \noindent where $\Sigma_{\rm o}$ depends on the mass and size of the disk, and $\gamma$ represents the power-law index of the surface density profile. A Gaussian profile is used to describe the vertical density distribution, with a disk scale height that is radially parametrized as $H(r) = H_{100} (r/100\,\mathrm{au})^{\psi}$, where $H_{100}$ is the scale height at $r = 100$\,au, and $\psi$ is the flaring index of the disk. In Table~\ref{tab:mod_par} (top) we show the fixed model parameters used in this work. We adopted the same $H_{100}$, $\gamma$ and $\psi$ values as \citet{canovas2015a}. The optical depth is varied by changing the dust mass of the disk. We consider the dust grains to be irregular in shape by assuming a distribution of hollow spheres (DHS) as our grain type, with a maximum volume void fraction of 0.8 \citep{min2005}. We used an inclination of 49.7{\degr} and a position angle (PA) of 18.1{\degr}, derived from ALMA observations \citep{tsukagoshi2019}, assuming that the polarized emission comes from a region co-planar with the sub-mm ring. We stress that there is a small degeneracy between the PA and the grain size (i.e., the phase function); therefore, in order to sample the dust properties (grain size, porosity, dust mass) in more detail, we fixed the PA of the models to the value estimated from the ALMA observations.
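The parametric disk structure above can be written compactly as follows; the normalization $\Sigma_{\rm o}$ is arbitrary in this sketch, since in MCFOST it is fixed by the disk dust mass, and the numerical values are the fixed parameters of Table~\ref{tab:mod_par}.

```python
# Sketch of the parametric disk structure used in the MCFOST models,
# with the fixed values of Table 3: H100 = 5 au, gamma = -1, psi = 1.15.
# sigma0 is arbitrary here; MCFOST sets it from the disk dust mass.
import numpy as np

def surface_density(r_au, sigma0=1.0, gamma=-1.0):
    """Sigma(r) = Sigma_0 * (r / 1 au)^gamma."""
    return sigma0 * r_au ** gamma

def scale_height(r_au, H100=5.0, psi=1.15):
    """H(r) = H100 * (r / 100 au)^psi."""
    return H100 * (r_au / 100.0) ** psi

# Evaluate from the smallest cavity radius explored (35 au) to R_out (150 au)
r = np.linspace(35.0, 150.0, 5)
sigma = surface_density(r)
H = scale_height(r)
```

For $\gamma = -1$, doubling the radius halves the surface density, while the flaring index $\psi > 1$ makes the aspect ratio $H/r$ increase outward.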
\begin{table} \begin{center} \caption{Model parameters} \label{tab:mod_par} \begin{tabular}{lr} \hline \hline Parameter & Value\\ \hline $H_{100}$ (au) & 5 \\ $\gamma$ & -1 \\ $\psi$ & 1.15 \\ $R_{\rm out}$ (au) & 150 \\ \hline Parameter space \\ \hline $R_{\rm in}$ (au) & 35-55, steps = 5\\ Porosity & 0.1-0.9, steps = 0.1\\ $a_{\rm min}$ ({\micron}) & 0.05-0.175, steps = 0.025\\ $\delta s$ ({\micron}) & 0.05-0.25, steps = 0.05\\ $m_{\rm dust}$ ($M_{\odot}$) & $10^{-7}-10^{-6}$, steps = 1 (in log scale)\\ \hline \end{tabular} \end{center} \begin{footnotesize} {\bf Notes.} $a_{\rm max} = a_{\rm min} + \delta s$.\\ \end{footnotesize} \end{table} Once the surface density and temperature structure are computed, synthetic ray-traced polarized images (Stokes I, Q, and U maps) can be produced at any wavelength. To compare with our observations, these images were projected onto a grid with a pixel scale of 0.{\arcsec}027 px$^{-1}$ (equal to that of the NaCo/VLT images). Then, they were scaled using the stellar $H$ and $K_s$ 2MASS magnitudes and convolved with a Gaussian point spread function (PSF) 2.5 px in width. Finally, we computed monochromatic Stokes $Q_{\phi}$ and $U_{\phi}$ images at 1.7 and 2.2 {\micron}, following the same strategy as for the observations (Sect.~\ref{sec:pol_obs}). \subsubsection{Best fit} \label{sec:best_fit} We ran a grid of 13500 models varying the following parameters: the minimum/maximum grain size ($a_{\rm min}$, $a_{\rm max}$), the grain porosity, the size of the cavity ($R_{\rm in}$) and the dust mass ($m_{\rm dust}$). We fixed the outer radius ($R_{\rm out}$) of the disk to 150 au, since it does not affect the final image (which is mainly sensitive to $R_{\rm in}$). We considered pure silicate grains with a small amount of carbonaceous particles, using the dust opacity from \citet{drainelee84}. Table~\ref{tab:mod_par} (bottom) shows the parameter space used in this work.
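The PSF-convolution step of the model post-processing described above can be sketched as follows. We assume here, for illustration, that the quoted 2.5 px corresponds to the Gaussian FWHM (an interpretation, with $\sigma = \mathrm{FWHM}/2\sqrt{2\ln 2}$), and check that the convolution conserves the total flux of the synthetic image.

```python
# Hedged sketch of the model-image convolution with a Gaussian PSF.
# Assumption (not stated explicitly in the text): 2.5 px is the FWHM.
import numpy as np
from scipy.ndimage import gaussian_filter

def convolve_model(image, fwhm_px=2.5):
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(image, sigma=sigma, mode="constant")

# Synthetic ring-like model image, roughly mimicking a disk with a cavity
ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx / 2, y - ny / 2)
model = np.exp(-0.5 * ((r - 30.0) / 3.0) ** 2)  # ring peaking at r = 30 px
blurred = convolve_model(model)
```

Since the ring lies well inside the frame, the blurring redistributes but does not remove flux, which is the behaviour expected of a normalized PSF convolution.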
We determined both the best fit model and the uncertainties using a Bayesian approach. For this, we constructed the probability distribution functions (PDFs) for our model parameters following the Bayesian analysis described in \citet[][VOSA 6.0]{bayo2008}\footnote{\url{http://svo2.cab.inta-csic.es/theory/vosa/index.php}}, where each model is assigned a relative probability: \begin{equation} W_i = {\rm exp} (-\chi_i^2/2), \end{equation} \noindent where the subscript $i$ represents each individual model on the grid, and $\chi_i^2$ represents the goodness of the fit, estimated as: \begin{equation} \chi_i^2 = \sum_n \frac{(Q_{\phi}^{\rm mod} - Q_{\phi}^{\rm obs})^2}{\sigma^2}, \end{equation} \noindent with $n$ the number of pixels included in the fit and $\sigma$ the standard deviation measured in concentric annuli from the center of the $U_{\phi}^{\rm obs}$ image, excluding the central 0.{\arcsec}1 region. For the $\chi^2$ values, the central r $<$ 0.{\arcsec}15 region (dominated by noise) as well as the outer r $>$ 0.{\arcsec}63 region (free of disk emission) of each image have been masked out and were not considered in the analysis. The probability corresponding to a given parameter value $\alpha_j$ is then given by the sum over all models $i$ sharing that value: \begin{equation} P(\alpha_j) = \sum_{i\,|\,\alpha_i = \alpha_j} W_i. \end{equation} The final normalized PDF for each parameter is obtained by dividing by the total probability (the sum of the probabilities obtained for each value): \begin{equation} P'(\alpha_j) = \frac{P(\alpha_j)}{\sum_i P(\alpha_i)}. \end{equation} Figure~\ref{fig:best_fit} shows the models that best fit our observations, along with their corresponding residuals, which are estimated as $(Q_{\phi}^{\rm mod} - Q_{\phi}^{\rm obs})/\sigma$. The dashed circles plotted in the left panel circumscribe the area taken into account for the $\chi^2$ fit, i.e., essentially all the emission coming from the disk.
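The weighting scheme of the equations above can be sketched on a toy two-parameter grid; the $\chi^2$ values below are invented for illustration and are not those of our actual model grid.

```python
# Toy sketch of the Bayesian weighting of Sect. 3.2.2: each model i gets
# W_i = exp(-chi2_i / 2), and the PDF of a parameter value is the
# normalized sum of the weights of all models sharing that value.
import numpy as np

r_in_values = np.array([35, 40, 45, 50, 55])           # R_in grid (au)
porosity_values = np.round(np.arange(0.1, 1.0, 0.1), 1)

# Hypothetical chi2 over the (R_in, porosity) grid, minimized at R_in = 45
chi2 = ((r_in_values[:, None] - 45.0) / 5.0) ** 2 \
     + ((porosity_values[None, :] - 0.3) / 0.2) ** 2

weights = np.exp(-chi2 / 2.0)      # W_i = exp(-chi2_i / 2)
pdf_rin = weights.sum(axis=1)      # marginalize over porosity
pdf_rin /= pdf_rin.sum()           # normalized PDF, P'(alpha_j)
best_rin = r_in_values[np.argmax(pdf_rin)]
```

The $1\sigma$ confidence intervals quoted later are then read off as the region around the PDF maximum that encloses 68\% of the total probability.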
Table~\ref{tab:best_val} lists the estimated parameters along with their uncertainties, which have been defined as the limits encompassing 68\% (1$\sigma$) of the total area around the PDF maximum for each parameter. For those cases where the best-fit parameter falls on one of the edges of the range of values used in the models, we have considered these values as upper or lower limits; they are indicated by parentheses instead of square brackets in Table~\ref{tab:best_val}. The PDFs of our model parameters are shown in Figure~\ref{fig:KH_pdfs}. As seen in the figure, the polarized emission can be explained by small grains ($<$0.4 {\micron}), with moderate porosity ($<$40\%), distributed in a ring located at $\sim$45 au from the central star. \begin{figure*} \centering \includegraphics[scale=0.7]{HSG4_fit_mod06578.eps} \includegraphics[scale=0.7]{HSG4_fit_mod06339.eps} \caption{From {\it left to right}: $Q_{\phi}$ observed image, best MCFOST model, normalized residuals for the $K_s$ (top) and $H$ (bottom) bands. The dashed circles plotted in the left panels circumscribe the area taken into account for the $\chi^{2}$ fit. The faint butterfly pattern observed in the residual map at the $K_s$ band can be the result of the structure of the noise in the $U_{\phi}$ image (see text for details). In the right panels blue corresponds to positive values, red to negative values. North is up and east is to the left in all panels.
} \label{fig:best_fit} \end{figure*} \begin{table} \begin{center} \caption{Best fit values} \label{tab:best_val} \begin{tabular}{lrrrr} \hline \hline Parameter & $H$ & uncertainty & $K_s$ & uncertainty \\ \hline $R_{\rm in}$ (au) & 45 & [43, 47] & 45 & [43, 47]\\ Porosity & 0.40 & (<0.40) & 0.10 & (0.1, 0.17] \\ $a_{\rm min}$ ({\micron}) & 0.15 & [0.12, 0.16] & 0.15 & [0.13, 0.16] \\ $\delta s$ ({\micron}) & 0.05 & (0.05, 0.08] & 0.20 & [0.15, 0.22]\\ $m_{\rm dust}$ ($10^{-7}M_{\odot}$) & 1.29 & [1.07, 1.35] & 1.67 & [1.59, 2.27] \\ \hline \end{tabular} \end{center} \begin{footnotesize} {\bf Notes.} Upper/lower limits are indicated by parentheses instead of square brackets.\\ \end{footnotesize} \end{table} \begin{figure*} \centering \includegraphics[scale=0.35]{likelihood_HIPER_Super_GRID_v4_HK_2.eps} \caption{Probability distribution functions (PDFs) of our model parameters at $K_s$ (orange) and $H$ (blue) bands. The confidence intervals reported in Table~\ref{tab:best_val} are estimated for a 68\% (1$\sigma$) confidence level. } \label{fig:KH_pdfs} \end{figure*} We note, however, that since $\sigma$ is estimated in concentric rings from the center, the stronger signal towards the center of the $U_{\phi}$ image translates into larger uncertainties in the inner regions and hence gives preference to models that best match the outer regions. This explains the faint butterfly remnant observed in the residual map at $K_s$ band, and why we do not observe any such feature in the $H$ band, where the $U_{\phi}$ signal is weaker. In any case, we do not see any significant emission in our residual maps besides the noise induced by the faint $U_{\phi}$ signal structure, which reinforces the validity of our modeling. \subsubsection{Caveats of the modeling} Reproducing the shape of the polarized (or scattered light) phase function is a known challenge when modeling young disks or debris disks \citep[e.g.,][]{Milli2017}.
As discussed in \citet{min2016}, the polarized phase function may best trace the optical properties of the smallest constituents of dust grains, remaining insensitive to large-scale structures such as aggregates. For this reason, the grain size distribution inferred from our modeling results may be biased towards smaller sizes. As noted above, the models that best explain our NaCo observations are those having a very narrow range of grain sizes (Table~\ref{tab:best_val}); similar results were also found for the debris disk around HD\,61005 \citep{olofsson2016}. The grains may well be in the form of aggregates, and the polarized observations would then be dominated by the small monomers (see Sec.~\ref{sec:dust_prop_pol}). Our analysis, based on the observational data currently at hand, suggests that small dust grains are indeed present in the disk surface layers of Sz\,91; however, in what shape or form they occur (and, as a consequence, the exact grain size distribution) remains uncertain. Therefore, both the total dust mass and the optical depth reported here should be treated carefully. \subsection{Companion detection limits} \label{sec:Lband_limits} Our final $L'$ PCA-ADI images did not reveal any significant point source, for a wide range of tested numbers of principal components (between 1 and 100). We used \texttt{VIP} (Sect.~\ref{sec:Lband_img}) to compute the $5\sigma$-contrast curve achieved by annular PCA-ADI, using the number of principal components that optimizes the contrast at each radial separation. We then used the COND/Dusty models for brown dwarf and giant planet atmospheres \citep{allard2001} to convert the contrast curve into mass sensitivity limits, adopting an age of 5 Myr. Giant planets with masses above $\sim$8 $M_\mathrm{Jup}$ orbiting beyond 45 au would have been detected in our observations, as shown in Figure~\ref{fig:Lband_masslimit}. Note that massive giant planets can still be present in the innermost (r < 45 au) regions.
We remark, however, that the uncertainties are probably very high, given the sensitivity of the planet brightness to the initial conditions of planet formation \citep{mordasini2013}, which in our case are based on ``hot start'' models, and that the disk emission may mask the planet signal. \begin{figure} \centering \includegraphics[scale=0.40]{SZ91_mass_plot_noirdis_vlines.eps} \caption{Contrast curve from the NaCo $L'$ band observations derived using the COND/Dusty models \citep{allard2001} for an age of 5 Myr, in Jupiter masses (solid line). Error bars indicate the Sz\,91 age uncertainty (Table~\ref{tab:stellar_para}). Dotted lines indicate the location of the dust cavity radii observed in polarized light at 45 au (from this work) and in the sub-mm at 82 au \citep[from][]{tsukagoshi2019}. We can rule out massive giant planets ($\geq$8 $M_\mathrm{Jup}$) orbiting beyond 45 au. } \label{fig:Lband_masslimit} \end{figure} \section{Discussion} \label{sec:Disc} In this section, we discuss the potential origin of the observed cavity around Sz\,91, based on our NaCo data and on ALMA observations from previous studies. We also discuss the implications of the asymmetric features observed in the polarized emission and in the phase function profile. \subsection{A large cavity in the disk around Sz\,91; evidence for dust trapping} ALMA data of Sz\,91 have revealed a narrow sub-mm ring at $\sim$95 au from the central star, along with $^{12}$CO (3-2) emission extending from the innermost regions ($<$ 16 au) up to almost 400 au \citep{tsukagoshi2019}. Previous modeling of the $^{12}$CO (3-2) emission by \citet{vandermarel2018} showed a gas-depleted cavity at 39.7 au (after scaling with the new Gaia distance) from the star. This apparent inconsistency may be a sensitivity issue, since the \citet{tsukagoshi2019} observations are about twice as sensitive.
Additionally, \citet{vandermarel2018} used a simplified model with sharp gas cavity edges, whereas in reality these edges are probably not sharp but rather smooth transitions. This could potentially account for the differences found by these authors. One should note, however, that even though the gas reaches inward to at least 16 au according to the models of \citet{tsukagoshi2019}, estimated from the blueshifted emission near the highest velocity in the $^{12}$CO channel map, their $^{12}$CO and HCO$^+$ moment maps also suggest a lack of signal in the inner regions (a rapid decrease of emission towards the star in their Figures 4 and 6, although within one resolution beam). This could be in line with a drop in density inward of 40 au (a gas-depleted cavity), as suggested by \citet{vandermarel2018}. Nevertheless, this should be treated carefully due to the different angular resolutions and sensitivities of these observations. Polarimetric data, on the other hand, show a ring-like structure of small (<0.4{\micron}) grains peaking inside the sub-mm cavity, as seen in Figure~\ref{fig:ALMA_NACO}, where we show the ALMA Band 7 continuum archival image of Sz\,91 from project ID 2015.1.01301.S (white contours), overlaid on the $K_s$ band $Q_{\phi}$ NaCo image (color scale). TDs showing different radial extents for the gas and the dust, in particular a larger gas extent than the (sub)mm dust, have been observed in the past \citep[e.g,][]{panic2009,andrews2012,rosenfeld2013,canovas2016,ansdell2018,gabellini2019}. \begin{figure} \includegraphics[scale=0.40]{Sz91_ALMA_NACO_v3.eps} \caption{ALMA Band 7 continuum archival image of Sz\,91 from project ID 2015.1.01301.S (white contours) overlaid on the $K_s$ band $Q_{\phi}$ NaCo image (color scale). The ALMA synthesized beam is shown as the filled ellipse at the bottom-left corner of the plot. The polarized emission observed with NaCo lies inside the sub-mm cavity.
} \label{fig:ALMA_NACO} \end{figure} A few physical mechanisms have been proposed to explain the formation of a large (sub)mm cavity: dynamical clearing by stellar or planetary companions \citep[e.g.,][]{zhu2011,Pinilla2012b}, an extended dead zone \citep[e.g.,][]{flock15,pinilla2016b}, and internal photoevaporation due to irradiation from the central star \citep{alexander_armitage07}. Models of internal photoevaporation predict dust cavities smaller than $\sim$20 au and accretion rates of less than $\rm 10^{-9}M_{\odot}yr^{-1}$ \citep{owen2011, ercolano2017}. Therefore, the presence of such a large sub-mm cavity around Sz\,91, along with its relatively high mass accretion rate of $\rm \dot{M}\sim 10^{-8.8}M_{\odot}yr^{-1}$ \citep{alcala2017}, makes this mechanism very unlikely. \subsubsection{Dynamical clearing by stellar or planetary companions} We have analyzed one epoch of high-resolution Las Campanas/MIKE data obtained in June 2014 using the 1.0{\arcsec} slit. We focus on data from the red arm of MIKE, which covers 4900 to 9500 {\AA} with S/N $\sim$10. The data were reduced using the MIKE pipeline in the Carnegie Observatories' CarPy package \citep{kelson2000,kelson03}. From this single epoch we can discard an SB2 nature for the object; therefore, equal-mass binaries and mass ratios above 0.7 can be ruled out, out to 1 au. Furthermore, \citet{romero2012} excluded a stellar companion down to separations of $\sim$30 au, and \citet{melo2003} found no evidence for a close-in binary companion in their 3 yr radial velocity survey down to masses of $\approx$ 0.2 $M_{\odot}$. This is in agreement with the work of \citet{villenave2019}, who found that any possible companion to Sz\,91 must be in the planetary mass regime ($M_{\rm p}< 13 M_\mathrm{Jup}$), according to the prescription of \citet{dejuanOvelar2013}, which relates the mass of the companion to the ratio between the radius of the scattered-light cavity and that of the sub-mm ring.
We remark, however, that using the updated radius for the peak of the sub-mm ring from \citet{tsukagoshi2019} and that of the scattered light cavity from this work ($R_{\rm in}$ in Table~\ref{tab:best_val}), and within the uncertainties, Sz\,91 seems to fall in the region of companion masses above 13 $M_\mathrm{Jup}$, although very close to the limit between companion masses above and below this value (red shaded region in Figure 10 of \citealp{villenave2019}). According to our new $L'$ band ADI observations (Sect.~\ref{sec:Lband_limits}), on the other hand, we found a mass sensitivity limit for any putative planet of $M_{\rm p}$ $\leq$ 8 $M_\mathrm{Jup}$ beyond 45 au. Within $35$\,au, our sensitivity constraints are poor and we cannot rule out the presence of brown dwarf companions (Figure~\ref{fig:Lband_masslimit}). Follow-up observations are needed in order to completely rule out a stellar companion; however, we will mainly focus on companions of planetary origin hereafter. In the planet scenario, a large sub-mm cavity is created along with a pressure bump at the outer edge of the cavity. Small dust grains coupled to the gas move at sub-Keplerian velocities, while large particles ($>$ 1 mm) move with Keplerian motion. This difference in velocity causes big grains to experience a head wind from the gas, which makes large particles lose angular momentum and drift inward. If a positive pressure gradient exists, as a consequence of a planet carving a cavity in the disk, then these particles become trapped in the pressure bump located at the outer edge of the newly formed cavity. The fact that the emission from small ($\mu$m) grains, as probed by our NaCo observations, peaks inside the sub-mm cavity suggests partial filtration of dust. This could happen in the presence of {\it low-mass} planets, as small grains may not be completely filtered at the outer edge of the planet-induced cavity, and can hence pass through the edge into the inner regions.
Even though \citet{tsukagoshi2014} suggested that the polarized intensity emission at $K_s$ band was the result of light coming from the inner edge of the disk, it might very well be that this emission is caused by small, optically thin dust passing through the pressure bump into the inner regions. Our modeling reveals optical depths of around 0.2-0.3, so the polarized emission is at most partially optically thick. In fact, even lower optical depths are expected, since our models only used small grains. Optically thin dust has been observed inside the cavities of several TDs and pre-transitional disks \citep[e.g.,][]{calvet05,espaillat07, espaillat10,follette2013,mauco2018,perez_alice_2018}. With respect to the gas distribution, if the gas is not depleted inside the sub-mm cavity and embedded planets are indeed the cause of this structure, then these planets must be of low mass \citep[0.1 - 1 $M_{\rm Jup}$,][]{Pinilla2012b,zhu12,rosotti2016}. This is also consistent with the case of a gas-depleted cavity, as suggested by \citet{vandermarel2018}, since multiple low-mass planets can lead to shallower cavities with depletion factors of at least an order of magnitude \citep{duffell2015}. Embedded giant planets, on the contrary, have been shown to produce deeper cavities in both the gas and the dust components of protoplanetary disks \citep[e.g,][]{rice2006,pinilla2016a, pinilla2016b,gabellini2019}. \subsubsection{An extended dead zone} Dead zones have also been invoked to explain TD structures. These are low-ionization regions in the disk where the high-energy (X-ray and UV) radiation from the star cannot penetrate and, as a consequence, the MRI is suppressed. At the outer edge of these low-ionization regions, a bump in the gas density profile is created, due to the change in accretion between the dead and the active MRI zones.
Strong accumulation of (sub)mm-sized particles is expected at the location of the outer edge of the dead zone, while the gas is only slightly depleted in the inner part of the disk \citep[e.g,][]{flock15, pinilla2016b}, as seems to be the case for Sz\,91 according to \citet{tsukagoshi2019}. If we consider, on the other hand, a gas-depleted cavity \citep{vandermarel2018}, then an MHD wind must be added to the dead zone in order to create the spatial segregation between the distributions of gas and dust \citep{pinilla2016b,pinilla2018}. In this case, the gas surface density inside the cavity can be depleted by several orders of magnitude and increases smoothly with radius, in agreement with the $^{12}$CO and HCO$^+$ moment maps of \citet{tsukagoshi2019}. Nonetheless, dead zones always produce a highly depleted gaseous outer disk, which is not the case for Sz\,91 ($^{12}$CO is observed up to $\sim$400 au). Additionally, \citet{pinilla2016b} studied the effects of a large dead zone (with an outer edge at $\sim$40 au) on the radial evolution of gas and dust in protoplanetary disks through MHD simulations. In their synthetic polarized images, they observed that small (0.65{\micron}) grains lie just in front of the pressure maximum and slightly closer to the central star than large (mm) grains, although this segregation is very small. In fact, they concluded that this scenario always produces dust cavities at short {\it and} long wavelengths of similar size at the location of the pressure bump, contrary to what is found in Sz\,91 (i.e., $\mu$m-sized particles closer in than sub-mm grains). \subsubsection{Final remarks} Overall, the morphology of Sz\,91 suggests that whatever mechanism created the sub-mm cavity allows enough gas to reside in the inner regions of the disk. One way to discriminate between these gap-opening mechanisms is to radially resolve the sub-mm ring.
Models of embedded planets predict a radially asymmetric ring with a wider outer tail at early times, while dead zones always produce radially symmetric ring-like structures \citep{pinilla2018}. However, for a $\sim$5 Myr old star it is not clear whether this diagnostic still applies. Higher angular resolution observations are needed to validate whether or not low-mass embedded planets are the most likely mechanism for the origin of the cavity in Sz\,91. \subsection{Apparent ``dip'' in the polarized emission} Figure~\ref{fig:QU_multiscat} shows the observed $Q_{\phi}$ (left) and $U_{\phi}$ (right) images of Sz\,91 at $K_s$ band. For the left panel, we used a different color bar than that of Figure~\ref{fig:obs_img}, in order to better visualize changes in the disk emission as well as the negative values found close to the center of the image (reddish colors). Dotted lines have slopes of $\pm$45{\degr} and, as seen in the right panel, follow the transition regions between positive and negative values in the $U_{\phi}$ image. The ``dip'' observed in the $Q_{\phi}$ image, located in the NW quadrant and marked by the black arrow, might be related to one of these transition regions in $U_{\phi}$. In fact, this quadrant is where the $U_{\phi}$ signal is most prominent. We highlight that it is at these position angles that the polarized images are subtracted in order to produce the Stokes parameters (Sect.~\ref{sec:obs}). Additionally, the negative signal in $Q_{\phi}$ supports the idea that multiple scattering events might be contributing to the total emission \citep{canovas2015c}, at least at the disk inner wall; note that the ``dip'' is located at the same radius and along the same direction (traced by the dotted line) as the NW negative blob.
All this suggests that the apparent decrease of disk emission in the $Q_{\phi}$ image might not be a real dip in the disk, but rather an artifact of the combined effect of the violation of the strictly azimuthal linear polarization assumption (negative values in $Q_{\phi}$) and/or the data reduction process. Besides, the dip is very faint. We quantified how significant the dip is, compared to the region located at the complementary angle on the southern side, by measuring the emission of the disk in azimuthally distributed apertures, and found that it is significant at only the 1.2$\sigma$ level. It is also very close to the central star and might be affected by centering effects. Therefore, it is unlikely that the ``dip'' traces a real gap or shadow in the disk. The fact that the same ``dip'' also appears at the same location and with the same direction in the \citet{tsukagoshi2014} observations is certainly intriguing, considering that this dataset was taken with a different instrument, at a different epoch, and from the northern hemisphere. Nonetheless, the fact that \citet{tsukagoshi2014} also used the same HWP angles to produce their polarized images may suggest that the ``dip'' is the result of the data reduction process and is not of astrophysical origin. \begin{figure*} \centering \includegraphics[scale=0.7]{Sz91_K_QU_multiscat3.eps} \caption{Observed $Q_{\phi}$ (left) and $U_{\phi}$ (right) images at $K_s$ band. The black arrow indicates the location of the ``dip'' in polarized light. Dotted lines have $\pm$45{\degr} slopes and follow the transition regions between positive and negative values in $U_{\phi}$. Note the reddish colors in $Q_{\phi}$, which may indicate multiple scattering events (see text for details). } \label{fig:QU_multiscat} \end{figure*} \subsection{Polarized phase function} Figure~\ref{fig:Phase_function_comparison} shows the observed polarized phase function at $K_s$ band for the northern (red) and southern (blue) sides of the disk around Sz\,91.
To measure the phase function, we placed adjacent circular apertures along an ellipse that traces the main disk (semi-major axis $a$ of 0.34{\arcsec}, position angle PA of 18.1{\degr} and inclination $i$ of 49.1{\degr}, similar to our MCFOST modeling). The size of the aperture was fixed to 0.04{\arcsec}. For each aperture we measured the mean flux in the $Q_{\phi}$ image, with the $1\sigma$ uncertainty given by the standard deviation in the aperture. We normalized both the northern and southern sides by the same factor, so that the northern-side phase function has a maximum of $1$. We also plot the phase function of the best fit model (black line) for comparison. As seen in the figure, the disk is clearly asymmetric along the minor axis, with the northern side being brighter than the southern one (also visible in Figures~\ref{fig:obs_img} and \ref{fig:ALMA_NACO}). Moreover, the disk also exhibits azimuthal asymmetries; for instance, the ``dip'' discussed in the previous section is clearly seen at low scattering angles ($<25{\degr}$). Our best model reproduces both phase functions relatively well, considering that the MCFOST models used here are centro-symmetric and thus insensitive to any asymmetric feature (a non-symmetric treatment of the polarized emission is beyond the scope of this paper). \begin{figure} \centering \includegraphics[scale=0.5]{Observed_NS_Phase_function_aper0.04_PA18.eps} \caption{Observed polarized phase function of the northern (red) and southern (blue) sides of Sz\,91. We also plot the polarized phase function of the best fit model for comparison (black). The disk is clearly asymmetric along the minor axis, being brighter on the northern side. } \label{fig:Phase_function_comparison} \end{figure} The intensity peaks of the phase functions of the northern and southern sides are separated by less than 23{\degr}. We found that this deviation is minimized using a PA of $9{\degr}$ (see Appendix).
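The geometry of the aperture placement can be sketched as follows; the rotation convention (position angle east of north, with $x$ pointing east) is one choice among several, and the aperture photometry itself is omitted for brevity.

```python
# Sketch of the aperture placement used for the polarized phase function:
# sample points along the sky-projected ellipse traced by a circle of
# radius a in an inclined disk. Rotation convention is one possible choice.
import numpy as np

def projected_ellipse(a, incl_deg, pa_deg, n=36):
    """Sky-plane (x, y) offsets, in the units of a, of points along the
    projection of a disk-plane circle of radius a (x = east, y = north)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # disk azimuth
    # circle of radius a, foreshortened along the apparent minor axis
    xd = a * np.cos(t)
    yd = a * np.sin(t) * np.cos(np.radians(incl_deg))
    pa = np.radians(pa_deg)
    # rotate so the major axis lies at the position angle, east of north
    x = xd * np.sin(pa) + yd * np.cos(pa)
    y = xd * np.cos(pa) - yd * np.sin(pa)
    return x, y

# Values used in Sect. 4.3: a = 0.34", i = 49.1 deg, PA = 18.1 deg
x, y = projected_ellipse(0.34, 49.1, 18.1)
```

Deprojecting these points (rotating by $-$PA and stretching the minor axis by $1/\cos i$) recovers the original disk-plane radius, which is a useful consistency check when placing the apertures.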
As stated in Section~\ref{model}, we fixed the PA of the models to the value estimated from the ALMA observations ($18.1{\degr}$), since these are the most sensitive data set of Sz\,91 to date. Nonetheless, as shown in Figure~\ref{fig:ALMA_NACO}, there is a shift between the semi-major axis of the submm ring (given by the white concentric ellipse contours) and the intensity peaks (the north-south lobes, also in contours) in the ALMA image. In fact, the bright lobes seen with ALMA roughly follow the same azimuthal position as the NaCo north-south blobs. This peculiar behavior might explain why a smaller PA produces phase-function curves peaking at similar scattering angles on both sides. In any case, the phase function is most sensitive to the properties of the dust (e.g., grain size, porosity), and therefore changes in PA of a few degrees will not significantly affect the results described in Sec~\ref{sec:best_fit} and reported in Table~\ref{tab:best_val}. The change in PA between the disk semi-major axis, as measured from the ALMA observations, and the intensity peaks of the polarized emission can be due to: 1) a signal-to-noise issue, in which our observations fail to locate the intensity maxima properly; 2) a projection effect of a flaring disk, since ALMA observations probe the disk midplane whereas NaCo observations probe the disk surface layers \citep{stolker2016}; or 3) the disk detected in our NaCo observations being slightly warped, suggesting a complex and structured circumstellar disk. Our NaCo observations are not sufficient to discriminate between these scenarios, and higher S/N observations are needed. Remarkably, \citet{tsukagoshi2019} also reported an interesting discrepancy regarding the PA of the gaseous disk, going in the opposite direction.
From the first-moment map, the authors estimated a PA of $30{\degr}$ for the gaseous disk, which is $12{\degr}$ off from the PA estimated for the dust continuum, and $21{\degr}$ off from the PA that minimizes the offset between the phase functions of the northern and southern sides. Nonetheless, this estimate may not be as well constrained as for the dust, since it may suffer from cloud contamination or from uncertainties in the position of the central star, which in turn affect the location of the minimum/maximum velocity in the first-moment map. Overall, this suggests that the disk around Sz\,91 could be highly structured. \subsection{Impact of dust properties on scattered polarized emission} \label{sec:dust_prop_pol} Grain growth is expected to proceed via the sticking of small dust grains. A natural consequence is therefore that grains in protoplanetary disks may resemble aggregates built out of small particles (or ``monomers'', see e.g., \citealp{min2016,roy2017,halder2018,tazaki2019}). Since these aggregates can have different sizes and shapes, with monomers of different compositions, computing realistic optical properties of aggregated particles is a very demanding task. Although exact computations for large aggregates are possible, their use in radiative transfer calculations has been quite limited due to this complexity \citep{min2016,tazaki2019}. This is why approximate methods, like the DHS, which mimics irregularly shaped, porous aggregates, are usually applied in order to significantly decrease the computational demand without losing the essential information about the aggregates' optical properties. Our modeling results suggest that the disk probed with our NaCo observations mostly contains very small grains ($<$0.4 {\micron}) in order to reproduce the polarized emission at $K_s$ and $H$ bands. The grains must also be relatively porous, with porosity values lower than 40\%.
This implies two possible scenarios: either the emission at the disk surface indeed comes from very small grains, or large porous aggregates, with radii larger than the wavelength of observation, are present in the upper layers of the disk but the polarized emission we see is dominated by the small monomers and is insensitive to the global size of the aggregates (one of the conclusions of \citealp{min2016}; see also \citealp{tazaki2019}). Additional observations of the disk at higher signal-to-noise ratio, in scattered light (to retrieve the total intensity), and at different wavelengths (e.g., at $J$ band, to measure the color of the disk) may provide new insights to discriminate between these possibilities by studying the scattered- and/or polarized-light colors and their dependence on the size and composition of dust aggregates. \section{Conclusions} \label{sec:conclusions} We present polarized-light images at $K_s$ and $H$ bands of the $\sim$5 Myr protoplanetary disk around the TTS Sz\,91 taken with VLT/NaCo. We detect a ring-like structure with bright lobes north and south of the star in both bands. A central cavity is also detected. We provide a radiative transfer model that successfully reproduces the main characteristics of the observed polarized emission, and discuss the implications of this study based on the current observational data available for the source. Our main conclusions are as follows: \begin{enumerate} \item The polarized emission is well reproduced using a disk composed of small (<0.4 {\micron}), porous (<40\%) grains (adopting a distribution of hollow spheres for the scattering theory) with a central cavity of $\sim$45 au in size. Dust grains are most likely in the form of large aggregates, and the polarized observations are probably dominated by the small monomers forming the aggregates. \item Dynamical clearing by multiple low-mass planets emerges as the most likely gap-opening mechanism in Sz\,91.
Although dead zones may account for the presence of extended gas emission inside the dust cavity, reaching down to a few au from the central star, in that scenario a highly depleted gaseous disk beyond the sub-mm ring is also expected. Furthermore, the cavity in scattered light would be expected to have the same size as the sub-mm cavity, which is not the case for Sz\,91. Higher angular resolution observations are needed to confirm the existence of these planets and validate the origin of the disk cavity. \item Our $L'$ band mass detection limits place constraints on possible companions of $M_{\rm p}$ < 8 $M_\mathrm{Jup}$ beyond 45 au. Within 35 au, our sensitivity is poor and does not rule out the presence of a brown dwarf companion. \item The apparent ``dip'' observed in the $Q_{\phi}$ image at $K_s$ band is very faint (1.2$\sigma$), and it is most likely the result of the data reduction process and/or contamination by multiple scattering events. \item The disk is clearly asymmetric along the minor axis, with the north side brighter than the south. We also found a change in PA between the disk semi-major axis, measured from the ALMA observations, and the PA that minimizes the separation between the intensity peaks of the phase functions of the north and south sides in our NaCo polarized observations. This suggests that the disk around Sz\,91 could be highly structured. \end{enumerate} ALMA images with higher resolution and signal-to-noise, capable of resolving the sub-mm ring in the radial direction as well as non-uniform features in the gas around Sz\,91, will undoubtedly help to disentangle the physical mechanisms behind the origin of the disk cavity. Furthermore, complementary observations in scattered light at different wavelengths using the reference-star differential imaging technique, which avoids the self-subtraction inherent to ADI, will also provide new insights into the properties of dust grains in the disk surface layers.
\section*{Acknowledgements} We thank an anonymous referee for a careful reading of our manuscript and many useful comments. K.M. acknowledges financial support from FONDECYT-CONICYT project no. 3190859. K.\,M., J.\,O., M.\,R.\,S., A.\,B., C.\,C., M.\,M., and C.\,P. acknowledge financial support from the ICM (Iniciativa Cient\'ifica Milenio) via the N\'ucleo Milenio de Formaci\'on Planetaria grant. J.\,O. acknowledges financial support from the Universidad de Valpara\'iso, and from Fondecyt (grant 1180395). C.\,C. acknowledges support from project CONICYT PAI/Concurso Nacional Insercion en la Academia, convocatoria 2015, folio 79150049. M.\,M. acknowledges financial support from the Chinese Academy of Sciences (CAS) through a CAS-CONICYT Postdoctoral Fellowship administered by the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. L. C. acknowledges financial support from FONDECYT-CONICYT grant no.1171246. This publication makes use of VOSA, developed under the Spanish Virtual Observatory project supported by the Spanish MINECO through grant AyA2017-84089. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2015.1.01301.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. 
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\section{Introduction} A recurring theme in analytic number theory is the study of the central values of a family of $L$-functions. In this paper, we prove asymptotic formulae for the twisted first and second moments of central $L$-values for the family of Hecke--Maass cusp forms over the classical modular group ${\mathrm {PGL}}_2 (\BZ)$ or a Bianchi modular group ${\mathrm {PGL}}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}})$ (here $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}$ is the ring of integers of an imaginary quadratic field $F$). As a standard application, we obtain non-vanishing results for the central values of such Maass form $L$-functions in the Archimedean aspect. There are abundant non-vanishing results for holomorphic modular forms over ${\mathbb {Q}}$ in \cite{Duke-1995,IS-Siegel,KM-Analytic-Rank,KM-Analytic-Rank-2,VanderKam-Rank,KMV-Derivatives,Djankovic-Gamma1,Rouymi-1,Rouymi-2,BF-prime-power,Lau-Tsang-Mean-Square,Luo-Weight,BF-Moments,Liu-L-Derivative,Jobrack-Derivative}, and also over a totally real field in \cite{Trotabas-Hilbert}. Recently, two papers, \cite{SH-Liu-Maass} and \cite{BHS-Maass}, have addressed the non-vanishing of central $L$-values for the family of Maass forms for ${\mathrm {PGL}}_2 (\BZ)$ in the aspect of the spectral parameter $t_f$; in the former, the existence of a positive proportion of non-vanishing is proven for eigenvalues in short intervals, while in the latter, an effective lower bound for the proportion is obtained. In both works, a formula of Motohashi\footnote{As indicated in \cite[\S 3.6]{Motohashi-Riemann}, this formula was claimed by Kuznetsov \cite{Kuznetsov-Motohashi-formula} with no rigorous proof. It should therefore be called the Kuznetsov--Motohashi formula.
Nevertheless, to avoid confusion, we shall still name it after Motohashi.} (see \cite[Lemma]{Motohashi-JNT-Mean} or \cite[Lemma 3.8]{Motohashi-Riemann}) is used for the twisted second moment, but the authors of \cite{BHS-Maass} are able to obtain an asymptotic formula, which makes their effective non-vanishing result possible. In this paper, we use the formula of Kuznetsov instead of Motohashi. The reader might wonder: ``What is new here? The Kuznetsov formula has already been used for many problems.'' To illustrate the novelty of this work, we need to answer two questions: \begin{itemize} \item [(1)] Why do the previous authors abandon the Kuznetsov formula? \item [(2)] Why do we abandon the Motohashi formula? \end{itemize} There are two Kuznetsov formulae for $\mathrm{PSL}_2 (\BZ)$ (see \cite{Kuznetsov,Kuznetsov-Motohashi-formula} or \cite[Theorem 2.2, 2.4]{Motohashi-Riemann}): weighted either with or without the root number $\epsilon_f$, and containing either the $J$- or the $K$-Bessel function. The Kuznetsov formula for ${\mathrm {PGL}}_2 (\BZ)$ is deduced by summing these two formulae (Maass forms for ${\mathrm {PGL}}_2 (\BZ)$ are termed even forms in the literature). In \cite[\S 3.3]{Motohashi-Riemann}, Motohashi derives his formula from the Kuznetsov formula for $\mathrm{PSL}_2 (\BZ)$ that is weighted by $\epsilon_f$ and contains $K_{2it}(x)$. A simple but crucial observation is that $L \big(\frac 1 2 , f \big) = 0$ if $\epsilon_f = - 1$. He avoids using the other Kuznetsov formula with $ J_{2it}(x)$, as ``the relevant transformation is difficult to handle because its integrand is not of rapid decay''. In \cite{SH-Liu-Maass}, Shenhui Liu follows Wenzhi Luo's mollification analysis for the holomorphic case in \cite{Luo-Weight}.
In his Introduction, he lists several advantages of Motohashi's formula for the mollified second moment and explains that there is a ``deeper reason for using Motohashi's formula'': If one were to use the Kuznetsov formula, it would not be easy ``to extract information from the off-diagonal terms by using properties of Estermann zeta-functions'', ``since the Mellin--Barnes representation of $J_{2it}(x)$ gives very narrow room for contour shifting''. A direct approach by the Kuznetsov formula is certainly of its own interest and merit. A more general goal we have in mind is to obtain a positive proportion for the non-vanishing problem over an imaginary quadratic field $F$. However, there is currently no Motohashi formula over a field other than ${\mathbb {Q}}$. Indeed, the root number $\epsilon_f$ is always $+1$ for any spherical Maass form $f$ over $\mathrm{PSL}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}})$, so Motohashi's idea does not work here directly. At any rate, one must reconsider the problem over ${\mathbb {Q}}$ and find a way to solve it without recourse to the Motohashi formula. Our idea is to bypass the difficulties encountered in \cite[\S 3.3]{Motohashi-Riemann} and \cite{SH-Liu-Maass} by \begin{itemize} \item [(1)] applying the Vorono\"i summation as a substitute for the functional equation for Estermann zeta-functions, and \item [(2)] applying the Fourier-type representation instead of the Mellin--Barnes representation for Bessel functions. \end{itemize} The Vorono\"i summation has appeared in the case of holomorphic modular forms in \cite{Hough-Zero-Density,BF-Moments}, but their analyses are quite different from ours. A key feature of our analysis is a uniform treatment of the integrals involving $J_{2it} (x)$ and $K_{2it} (x)$, which can be extended to the complex setting.
More explicitly, we shall have Fourier integrals in the off-diagonal terms, and the problem will be reduced to estimating the area of certain regions defined via hyperbolic or trigonometric-hyperbolic functions. \subsection*{Statement of Results} Let $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} $ or an imaginary quadratic field ${\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$ of discriminant $d_F$ and class number $h_F = 1$. Let $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} $ be its ring of integers. Let $N = 1$ or $2$ be the degree of $F$. For a nonzero integral ideal $\mathfrak{n} \subset \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}$ let $\RN (\mathfrak{n}) = |\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} / \mathfrak{n}|$ be its norm and $\tau (\mathfrak{n}) = \sum_{\mathfrak{d} | \mathfrak{n}} 1$. Let $\gamma_{0}$ and $\gamma_{1}$ respectively be the constant term and the residue of Dedekind's $\zeta_F (s)$ at $s=1$. Define $ c_1 = 1/8\pi^2 $ or $\sqrt{|d_F|}/8\pi^3$ according as $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}}$ or ${\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$. Moreover, let $\theta = \frac {13}{84}$, $\theta' = \frac {9} {56}$ (see \S \ref{sec: zeta function}), and for $0 < \beta \leqslant 1$ define \begin{align}\label{9eq: exponent for E2} \valpha_1 = \left\{ \begin{aligned} & \hskip -1.5pt 0, && \text{if } \tfrac {1273} {4053}+\vepsilon \leqslant \beta \leqslant 1, \\ & \hskip -1.5pt 2 \theta , && \text{if } 0 < \beta < \tfrac {1273} {4053} +\vepsilon , \end{aligned} \right. 
\hskip 8pt \valpha_2 = \left\{ \begin{aligned} & \hskip -1.5pt 0, && \text{if } \tfrac 2 3+\vepsilon \leqslant \beta \leqslant 1, \\ & \hskip -1.5pt 2 \theta , && \text{if } \tfrac {1273} {4053}+\vepsilon \leqslant \beta < \tfrac 2 3+\vepsilon , \\ & \hskip -1.5pt 4 \theta, && \text{if } 0 < \beta < \tfrac {1273} {4053} +\vepsilon , \end{aligned} \right. \end{align} if $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}}$, and \begin{align} \valpha_1 = \left\{ \begin{aligned} & \hskip -1.5pt 0, && \text{if } \tfrac 7 8+\vepsilon \leqslant \beta \leqslant 1, \\ & \hskip -1.5pt 2 \theta', && \text{if } 0 < \beta < \tfrac {7} {8} +\vepsilon , \end{aligned} \right. \quad \valpha_2 = \left\{ \begin{aligned} & \hskip -1.5pt 2 \theta', && \text{if } \tfrac 7 8+\vepsilon \leqslant \beta \leqslant 1, \\ & \hskip -1.5pt 4 \theta', && \text{if } 0 < \beta < \tfrac {7} {8} +\vepsilon , \end{aligned} \right. \end{align} if $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$. Let $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} $ be an orthonormal basis consisting of Hecke--Maass cusp forms for the spherical cuspidal spectrum for $ {\mathrm {PGL}}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}})$. For $f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}$, let $ t_{f } \in [0, \infty) \cup i \big[ 0, \frac 1 2 \big)$ be its Archimedean parameter, $\lambda_f (\mathfrak{n})$ be its Hecke eigenvalues, $L(s, f)$ and $L(s, \mathrm{Sym}^2 f)$ respectively be its standard and symmetric square $L$-functions. 
For any sequence of complex numbers $a_f$ we introduce the harmonic summation \begin{align} \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}} a_f = \sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}} \omega_f a_f, \qquad \omega_f = \frac 1 {2 L(1, \mathrm{Sym}^2 f)} . \end{align} For large $T > M $ we define \begin{align}\label{1eq: defn of kq} k (t) = e^{- (t - T)^2 / M^2} + e^{-(t + T)^2 / M^2} , \end{align} and for $q = 1$ or $2$ we introduce the smoothly weighted twisted moments: \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} } \hskip -1pt k ( t_f ) \lambda_f ( \mathfrak{m} ) L \big( \tfrac 1 2 , f \big)^q . \end{align} \begin{thm}\label{thm: moment} Define $ \gamma_0 ' = \gamma_{0} - \gamma_{1} \log \big( (2\pi)^N/|d_F| \big) $. Let $ M = T^{\beta}$ with $\vepsilon \leqslant \beta \leqslant 1-\vepsilon$. 
Then \begin{align} \label{10eq: 1st moment} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_1 (\mathfrak{m} ) = 4 \sqrt{\pi} c_1 \frac { M T^{N} } {\sqrt{\RN(\mathfrak{m})}} \big( 1 + O_{\vepsilon} \big( (M/T)^2 \big) \big) + O_{\vepsilon} \left ( M T^{ N \valpha_1 + \vepsilon} \right ), \end{align} for $\RN (\mathfrak{m}) \leqslant T^{N-\vepsilon}$, and \begin{align} \label{10eq: 2nd moment} \begin{aligned} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 (\mathfrak{m} ) = 8 \sqrt{\pi} c_1 \frac { \tau (\mathfrak{m}) M T^N } {\sqrt{\RN(\mathfrak{m})}} \bigg( \gamma_{1} \log \frac {T^N} {\sqrt{\RN(\mathfrak{m})}} + \gamma_0 ' + O_{\vepsilon} ( M T^{\vepsilon} /T ) \bigg) & \\ + O_{\vepsilon} \bigg( M T^{ N \valpha_2 + \vepsilon} + \frac {\sqrt{\RN(\mathfrak{m}) } T^{N/2 + \vepsilon}} {M^{1-N/2}} \bigg) & , \end{aligned} \end{align} for $\RN (\mathfrak{m}) \leqslant T^{2 N -\vepsilon}$. Moreover, if $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}}$, then the error term $O_{\vepsilon} ( M T^{\vepsilon} /T )$ in the first line of {\rm\eqref{10eq: 2nd moment}} may be improved into $O_{\vepsilon} \big( (M/T)^2 \log T \big) $ and the second error term in the second line may be removed if $ \RN (\mathfrak{m}) \leqslant M^{2-\vepsilon}$. \end{thm} The error terms are always inferior to the main term in {\rm\eqref{10eq: 1st moment}} as long as $\RN(\mathfrak{m}) \leqslant T^{N-\vepsilon}$, while this holds for {\rm\eqref{10eq: 2nd moment}} as long as \begin{align}\label{1eq: m < T} \RN (\mathfrak{m}) \leqslant \min \left\{ M^{2-N/2} T^{N/2 - \vepsilon}, T^{2 N (1 - \valpha_2) - \vepsilon} \right\} . \end{align} When $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}}$, with cleaner error terms, \eqref{10eq: 1st moment} and \eqref{10eq: 2nd moment} are essentially Theorem 4.1 and 5.1 in \cite{BHS-Maass} prior to the averaging process for the $T$-parameter. 
For $T^{\vepsilon} \leqslant 3 H \leqslant T $ define \begin{align}\label{1eq: truncated moments} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_q (T, H) = \sideset{}{^h}\sum_{|t_f-T| \leqslant H} L \big( \tfrac 1 2 , f \big)^q + \frac {1} {2\pi} \gamma_1 \int_{\, T-H}^{T+H} \frac {\left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^{2q} } { | \zeta_F (1 + 2 it ) |^2 } \hskip 0.5 pt \mathrm{d} t . \end{align} Then we have simpler asymptotic formulae for $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_q (T, H)$ as follows. \begin{cor} \label{cor: unsmooth} For $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}}$ we have \begin{align}\label{1eq: N1, Q} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_1 (T, H) = \frac 1 {\pi^2} H T + O_{\vepsilon} \left ( T^{1+\vepsilon} \right ), \end{align} and \begin{align}\label{1eq: N2, Q} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_2 (T, H) = \frac 1 {\pi^2} \int_{\,T-H}^{T+H} K ( \log K + \gamma_0 ' ) \mathrm{d} K + O_{\vepsilon} \big( T^{1+\vepsilon} \big) . \end{align} For $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$ we have \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_1 (T, H) = \frac {\sqrt{|d_F|}} {3 \pi^{3} } \big( 3 H T^2 + H^3 \big) + O_{\vepsilon} \left ( T^{2+\vepsilon} \right ), \end{align} and \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_2 (T, H) = \frac {\sqrt{|d_F|}} { \pi^{3} } \int_{\,T-H}^{T+H} K^2 ( 2 \gamma_{1} \log K + \gamma_0 ' ) \mathrm{d} K + O_{\vepsilon} \big(T^{2+\vepsilon}\big). 
\end{align} \end{cor} In the case $ H = T/3$, the formulae \eqref{1eq: N1, Q} and \eqref{1eq: N2, Q} should be compared with \cite[Theorem 1]{Iviv-Jutila-Moments} and \cite[Theorem 2]{Motohashi-JNT-Mean}. By the mollification technique as in \cite{IS-Siegel,KMV-Derivatives}, one may derive from the asymptotic formulae in Theorem \ref{thm: moment} the following effective lower bound for the proportion of non-vanishing $ L \big(\frac 1 2 , f\big) $. \begin{thm}\label{thm: non-vanishing} For any $\vepsilon > 0$ and sufficiently large $T$, we have \begin{align} \mathop{\sideset{}{^h}\sum_{|t_f - T| \leqslant H}}_{L (\frac 1 2 , f) \neq 0} 1 \geqslant \left ( \frac {\varDelta } {1 + \varDelta } -\vepsilon \right ) \sideset{}{^h}\sum_{|t_f - T| \leqslant H} 1, \end{align} where $3 H = T^{\beta}$ with $\vepsilon \leqslant \beta \leqslant 1$ and \begin{align}\label{1eq: Delta} \varDelta \leqslant \min \bigg\{ 1 - \valpha_2 , \frac 1 4 + \left ( \frac 1 {N} - \frac 1 {4} \right ) \beta \bigg\}. \end{align} \end{thm} For $F = {\mathbb {Q}}$ this is essentially Theorem 1.2 in \cite{BHS-Maass}. For $F = {\mathbb {Q}}(\sqrt{d_F})$ it follows from almost the same arguments in \cite[\S 8]{BF-Moments} and \cite[\S 7]{BHS-Maass}. As such, we omit the details of the proof and only remark that the limitation \eqref{1eq: Delta} comes from the inequality \eqref{1eq: m < T}. To avoid extra work on $ L(1, \mathrm{Sym}^2 f)$, we allow the harmonic weight to be present (see the paragraph below (2.9) in \cite{IS-Siegel}). The following results follow if we choose $H = T/3$ in Theorem \ref{thm: non-vanishing} and use a dyadic partition.
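For instance, taking $\beta = 1$ (i.e., $3 H = T$) in \eqref{1eq: Delta} gives $\valpha_2 = 0$, so the largest admissible value is
\begin{align*}
\varDelta = \min \Big\{ 1 , \tfrac 1 4 + \tfrac 3 4 \Big\} = 1, \qquad \frac {\varDelta} {1 + \varDelta} = \frac 1 2,
\end{align*}
for $F = {\mathbb {Q}}$ ($N = 1$), while $\valpha_2 = 2 \theta' = \tfrac {9} {28}$, so that
\begin{align*}
\varDelta = \min \Big\{ \tfrac {19} {28} , \tfrac 1 4 + \tfrac 1 4 \Big\} = \frac 1 2, \qquad \frac {\varDelta} {1 + \varDelta} = \frac 1 3,
\end{align*}
for $F = {\mathbb {Q}} (\sqrt{d_F})$ ($N = 2$); these are exactly the proportions appearing in the corollary below.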
\begin{cor} We have \begin{align} \mathop{\sideset{}{^h}\sum_{ t_f \leqslant T}}_{L (\frac 1 2 , f) \neq 0} 1 \geqslant \left ( \frac 1 2 -\vepsilon \right ) \sideset{}{^h}\sum_{t_f \leqslant T } 1 \end{align} if $F = {\mathbb {Q}} $, and \begin{align} \mathop{\sideset{}{^h}\sum_{ t_f \leqslant T}}_{L (\frac 1 2 , f) \neq 0} 1 \geqslant \left ( \frac 1 3 -\vepsilon \right ) \sideset{}{^h}\sum_{t_f \leqslant T } 1 \end{align} if $F = {\mathbb {Q}} (\sqrt{d_F})$. \end{cor} Finally, we remark that, with some effort, our results can be extended to an arbitrary imaginary quadratic field (see \cite{Qi-GL(3)}). \subsection*{Notation} By $X \Lt Y$ or $X = O (Y)$ we mean that $|X| \leqslant c Y$ for some constant $c > 0$, and by $X \asymp Y$ we mean that $X \Lt Y$ and $Y \Lt X$. We write $X \Lt_{P, \hskip 0.5 pt Q, \, \dots} Y$ or $X = O_{P, \hskip 0.5 pt Q, \, \dots} (Y)$ if the implied constant $c$ depends on $P$, $Q, \dots$. We say that $X$ is negligibly small if $X = O_A (T^{-A})$ for arbitrarily large but fixed $A \geqslant 0$. We adopt the usual $\vepsilon$-convention of analytic number theory; the value of $\vepsilon $ may differ from one occurrence to another. \begin{acknowledgement} We thank the referee for a careful reading and helpful comments. \end{acknowledgement} { \large \part{Preliminaries}} \section{Number Theoretic Notation}\label{sec: notation} \subsection{Basic Notions} Let $F = {\mathbb {Q}} $ or an imaginary quadratic field ${\mathbb {Q}} (\sqrt{d_F})$ of class number $h_F = 1$, where $d_F$ is the discriminant of $F$. Let $N = 1$ or $2$ be the degree of $F$. Let $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} $ be its ring of integers and $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O}$}}}^{\times}$ be the group of units.
Let $\mathfrak{D}$ be the different ideal of $F$ and $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' = \mathfrak{D}^{-1}$ be the dual of $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}$. Let $w_F$ be the number of roots of unity in $F$. Let $\mathrm{N}$ and $\mathrm{Tr}$ denote the norm and the trace for $F $, respectively. Let $F_{\infty}$ be the Archimedean completion of $F$. Let $\| \hskip 3.5 pt \|_{\infty} = | \hskip 3.5 pt |^N$ denote the normalized module of $F_{\infty}$, where $| \hskip 3.5 pt |$ is the usual absolute value. Define the additive character $\psi_{\infty} (x) = e (- x)$ if $F _\infty = \BR$ and $\psi_{\infty} (z) = e (- (z + \widebar z))$ if $F _\infty = {\mathbb {C}}} \newcommand{\BD}{{\mathbb {D}}$. We choose the Haar measure $\mathrm{d} x$ of $F _{\infty}$ self-dual with respect to $\psi_{\infty}$: the Haar measure is the ordinary Lebesgue measure on the real line if $F _{\infty} = \BR$, and twice the ordinary Lebesgue measure on the complex plane if $F _{\infty} = {\mathbb {C}}} \newcommand{\BD}{{\mathbb {D}}$. In general, we use Gothic letters $\mathfrak{a} , \mathfrak{b} , \mathfrak{m}, \mathfrak{n}, \dots$ to denote {\it nonzero} integral ideals of $F$. Let $\mathfrak{p}$ always stand for a prime ideal. Let $\RN (\mathfrak{a})$ denote the norm of $\mathfrak{a}$. \subsection{Arithmetic Functions} Let $\tau (\mathfrak{n}) $ and $\mu (\mathfrak{n})$ be the divisor function and the M\"obius function. For $s \in {\mathbb {C}}} \newcommand{\BD}{{\mathbb {D}}$ define \begin{align}\label{1eq: defn of tau (n)} \tau_s (\mathfrak{n} ) = \tau_{-s} (\mathfrak{n} ) = \sum_{ \scriptstyle \mathfrak{a} \mathfrak{b} = \mathfrak{n} } \RN \big(\mathfrak{a} \mathfrak{b}^{-1} \big)^{ s }. 
\end{align} \subsection{Kloosterman and Ramanujan Sums} For $m, n \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}'$ and $c \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}$ we define \begin{align}\label{2eq: defn Kloosterman KS} S (m, n ; c ) = \sum_{a \, \in (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} /c)^{\scalebox{0.55}{$\times$}} } \psi_{\infty} \bigg( \frac {m a + n \widebar{a} } {c} \bigg), \end{align} where $\widebar{a} $ is the multiplicative inverse of $a$ modulo $c$. The sum $S(m, 0; c)$ is usually named after Ramanujan. We have \begin{align}\label{2eq: Ramanujan} S(m, 0; c) = \sum_{\mathfrak{d} | (m \mathfrak{D}, c)} \RN(\mathfrak{d}) \mu \big( c \mathfrak{d}^{-1} \big) . \end{align} \subsection{The Dedekind Zeta Function}\label{sec: zeta function} Let $\zeta_F (s)$ be the Dedekind $\zeta$ function for $F$: \begin{align} \zeta_F (s) = \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { 1 } {\RN(\mathfrak{n})^{ s} }, \qquad \mathrm{Re} (s) > 1. \end{align} It is well-known that $\zeta_F (s)$ is a meromorphic function on the complex plane with a simple pole at $s=1$. Define the constants $\gamma_{0} $ and $\gamma_{1} $ by \begin{align}\label{2eq: zeta (s), s=1} \zeta_F (s) = \frac {\gamma_{1}} {s-1} + \gamma_{0} + O (|s-1|), \qquad s \ra 1. \end{align} For $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$ we have $\zeta_F (s) = \zeta (s) L (s, \chi^{}_{d_F})$ with $\chi^{}_{d_F}$ the primitive quadratic character associated to $F$. 
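For $F = {\mathbb {Q}}$ (where $\mathfrak{D} = (1)$ and $\psi_{\infty}(x) = e(-x)$), the evaluation \eqref{2eq: Ramanujan} of the Ramanujan sum reduces to the classical identity $S(m, 0; c) = \sum_{d \mid (m, c)} d \hskip 1 pt \mu (c/d)$. The following minimal sketch verifies this numerically over $\BZ$; the function names are ours, for illustration only.

```python
from math import gcd
from cmath import exp, pi

def mobius(n):
    # Möbius function mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is not squarefree, so mu(n) = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def kloosterman(m, n, c):
    # S(m, n; c) over Z, with psi_infty(x) = e(-x) = exp(-2 pi i x).
    total = 0.0
    for a in range(1, c + 1):
        if gcd(a, c) == 1:
            abar = pow(a, -1, c)  # multiplicative inverse of a mod c
            total += exp(-2j * pi * (m * a + n * abar) / c)
    return total

def ramanujan(m, c):
    # Closed form: sum over d | gcd(m, c) of d * mu(c / d).
    g = gcd(m, c)
    return sum(d * mobius(c // d) for d in range(1, g + 1) if g % d == 0)

# The direct sum S(m, 0; c) agrees with the closed form.
for m in range(1, 16):
    for c in range(1, 16):
        assert abs(kloosterman(m, 0, c) - ramanujan(m, c)) < 1e-8
```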
Let $\theta > 0$ be a sub-convex exponent for $ \zeta_F (s) $; namely, \begin{align}\label{2eq: subconvex} \zeta_F \big( \tfrac 1 2 + it \big) \Lt_{\vepsilon } (1+|t|)^{N \theta + \vepsilon} \end{align} for any $\vepsilon > 0$. For the Riemann $\zeta (s)$ the best sub-convex exponent to date $\theta = \frac {13} {84}$ is due to Bourgain \cite{Bourgain}. This together with the Weyl sub-convex bound for $ L (s, \chi^{}_{d_F}) $ (see for example \cite{H-B-Hybrid}) yields $\theta = \frac {9} {56}$ in the case $F = {\mathbb {Q}}} \newcommand{\BR}{{\mathbb {R}} (\sqrt{d_F})$. It should be remarked that for arbitrary $F$ the Weyl exponent $\theta = \frac 1 6$ is always admissible \cite{Heath-Brown-Weyl}. \section{\texorpdfstring{Automorphic Forms on ${\mathrm {GL}}_2$}{Automorphic Forms on GL(2)}} In this section, we briefly compile some results and introduce the relevant notation from the theory of spherical automorphic forms on ${\mathrm {PGL}}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}) \backslash {\mathrm {PGL}}_2 (F_{\infty}) / K_{\infty}$, where $K_{\infty} = \mathrm{O}_2 (\BR) / \{\pm 1_{2} \}$ or $ \mathrm{U}_2 ({\mathbb {C}}} \newcommand{\BD}{{\mathbb {D}})/ \{\pm 1_{2} \}$ according as $F_{\infty} = \BR $ or $ {\mathbb {C}}} \newcommand{\BD}{{\mathbb {D}}$, especially the Kuznetsov trace formula and the Vorono\"i summation formula (for Eisenstein series). The reader is referred to \cite{Qi-GL(3),Qi-VO,Venkatesh-BeyondEndoscopy} for further details. \vskip 5 pt \subsection{Archimedean Representations}\label{sec: Archimedean} In this paper, we shall be concerned only with {spherical} representations of ${\mathrm {PGL}}_2 (F_{\infty})$. By definition, an irreducible representation of ${\mathrm {PGL}}_2 (F_{\infty})$ is spherical if it contains a nonzero $K_{\infty}$-invariant vector. Let $Y = (-\infty, \infty) \cup i \hskip -1.5 pt \left(- \frac 1 2, \frac 1 2 \right)$. 
We associate to $t \in Y$ a unique spherical unitary irreducible representation $\pi (i t)$ of ${\mathrm {PGL}}_2 (F_{\infty})$. Namely, the parameter $t $ determines a character of the diagonal torus via \begin{align*} \begin{pmatrix} x & \\ & y \end{pmatrix} \ra \|x / y\|_{\infty}^{i t} , \qquad x, y \in F_{\infty}^{\times}, \end{align*} and we let $\pi (i t)$ be the irreducible spherical constituent of the representation unitarily induced from this character. For $t$ real, the spherical $ \pi (i t) $ is tempered, and the Plancherel measure $\mathrm{d} \mu (t) $ is defined by \begin{equation}\label{1eq: defn Plancherel measure} \mathrm{d} \mu (t) = \left\{ \begin{aligned} & t \tanh (\pi t) \mathrm{d} \hskip 0.5 pt t , \ & & \text{ if } F_{\infty} \text{ is real}, \\ & t^2 \mathrm{d} \hskip 0.5 pt t , & & \text{ if } F_{\infty} \text{ is complex}. \end{aligned}\right. \end{equation} Moreover, we define \begin{equation}\label{1eq: defn of Pl(t)} \mathrm{Pl} (t) = \left\{ \begin{aligned} & 4 \cosh (\pi t) , & & \text{ if } F_{\infty} \text{ is real}, \\ & 8\pi \sinh (2\pi t) / t , \ & & \text{ if } F_{\infty} \text{ is complex}. \end{aligned}\right. \end{equation} Compared with \cite[(3.2)]{Qi-GL(3)}, we have normalized $\mathrm{Pl} (t)$ here by the factors $4$ and $8 \pi$. Let $W_{ i t} $ be the spherical ($K_{\infty}$-invariant) Whittaker vector so that \begin{equation} W_{i t} \begin{pmatrix} x & \\ & 1 \end{pmatrix} = \left\{ \begin{aligned} & \|x \|_{\infty}^{\frac 1 2} K_{i t} (2\pi |x |) , & & \text{ if } F_{\infty} \text{ is real}, \\ & \|x \|_{\infty}^{\frac 1 2} K_{2 i t} (4\pi |x |) , \ & & \text{ if } F_{\infty} \text{ is complex}. \end{aligned}\right. 
\end{equation} \vskip 5 pt \subsection{Hecke--Maass Cusp Forms}\label{sec: automorphic forms} Fix an orthonormal basis $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} $ for the cuspidal subspace of $L^2 ({\mathrm {PGL}}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}) \backslash {\mathrm {PGL}}_2 (F_{\infty}) / K_{\infty})$ that consists of eigenforms for the Hecke algebra as well as the Laplacian operator (Hecke--Maass cusp forms). Each $f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} $ transforms under a certain representation $\pi (i t_f)$ of ${\mathrm {PGL}}_2 (F_{\infty})$, for some $ t_f \in Y $. In general, we have the Kim--Sarnak bound in \cite{Blomer-Brumley}: \begin{align}\label{2eq: Kim-Sarnak} |\Im (t_{f })| \leqslant \frac 7 {64}, \end{align} but it is known that $t_f$ is real for $F = {\mathbb {Q}}$ or ${\mathbb {Q}} (\sqrt{d_F})$ with $d_F = -3,-4,-7,-8, - 11 $ (see \cite[\S 7.6]{EGM}). Accordingly, define $Y_{\mathrm{KS}} = (-\infty, \infty) \cup i \hskip -1.5 pt \left[ - \frac 7 {64}, \frac 7 {64} \right]$. The Fourier expansion of $f $ is of the form: \begin{align}\label{2eq: Fourier expansion} f ( g_{\infty}) = \sum_{ n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\}} \frac {a_f (n \mathfrak{D})} {\sqrt{ \RN (n \mathfrak{D}) }} W_{i t_f} (\begin{pmatrix} n & \\ & 1 \end{pmatrix} g_{\infty}), \quad g_{\infty} \in {\mathrm {GL}}_2 (F_{\infty}). \end{align} As indicated by the notation, the Fourier coefficient $a_f (\mathfrak{n}) = a_f (n \mathfrak{D})$ only depends on the ideal $\mathfrak{n} = n \mathfrak{D}$. Let $\lambda_f (\mathfrak{n})$ denote the $\mathfrak{n}$-th Hecke eigenvalue of $f$.
It is known that the eigenvalues $\lambda_f (\mathfrak{n})$ are real. We have the Hecke relation: \begin{align}\label{3eq: Hecke relation} \lambda_f (\mathfrak{n}_1) \lambda_f (\mathfrak{n}_2) = \sum_{ \mathfrak{d} | (\mathfrak{n}_1, \mathfrak{n}_2) } \lambda_f \big(\mathfrak{n}_1 \mathfrak{n}_2 / \mathfrak{d}^2 \big) . \end{align} As usual, there is a constant $C_f$ so that \begin{align}\label{2eq: af = lambda f} a_f (\mathfrak{n}) = C_f \lambda_f (\mathfrak{n}) \end{align} for any nonzero integral ideal $\mathfrak{n}$. By the Rankin--Selberg method, we have \begin{align}\label{3eq: Cf2 = L(1, Sym2)} |C_f|^2 = \frac { \mathrm{Pl} (t_f) } { 2 L(1, \mathrm{Sym^2} f) } . \end{align} \vskip 5 pt \subsection{Kuznetsov Trace Formula} \begin{defn}[Space of test functions] \label{defn: test functions} Let $S > \frac 1 2$. We set $ \mathscr{H} (S ) $ to be the space of functions $h (t) $ which extend to an even holomorphic function on the strip $\big\{ t + i \sigma : | \sigma | \leqslant S \big\}$ such that \begin{align*} h (t + i \sigma) \Lt e^{-\pi |t|} (1+|t|)^{- N}, \end{align*} holds uniformly for some $N > 6$. \end{defn} \begin{defn} [Bessel kernel] \label{defn: Bessel kernel} Let $s \in {\mathbb {C}} $. {\rm(1)} When $F_{\infty} = \BR$, for $x \in \BR_+$ we define \begin{align*} &B_{s} (x) = \frac {\pi} {\sin (\pi s) } \big( J_{-2 s} (4 \pi \sqrt {x }) - J_{2 s} (4 \pi \sqrt {x }) \big), \\ &B_{s} (-x ) = {4 \cos (\pi s)} K_{2 s} (4 \pi \sqrt {x }) . \end{align*} {\rm(2)} When $F_{\infty} = {\mathbb {C}}$, for $z \in {\mathbb {C}}^{\times}$ we define \begin{equation*} B_{s} (z ) = \frac {2\pi^2} {\sin (2\pi s) } \big( { \textstyle J_{-2 s} (4 \pi \sqrt {z}) J_{- 2s} (4 \pi \sqrt { \widebar z}) - J_{2 s} (4 \pi \sqrt {z}) J_{ 2s} (4 \pi \sqrt { \widebar z}) } \big).
\end{equation*} \end{defn} The Kuznetsov trace formula of Bruggeman and Miatello in the spherical case is as follows. See \cite[Proposition 3.5]{Qi-GL(3)} or \cite[Proposition 1]{Venkatesh-BeyondEndoscopy}. \begin{prop}[Kuznetsov trace formula]\label{prop: Kuznetsov} Let $h (t)$ be a test function in $ \mathscr{H} (S) $ and define \begin{align}\label{1eq: defn Bessel integral} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} = \int_{-\infty}^{\infty} h (t) \mathrm{d} \hskip 0.5 pt \mu (t), \quad \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) = \int_{-\infty}^{\infty} h (t) B_{i t} (x ) \mathrm{d} \hskip 0.5 pt \mu (t), \quad \quad \text{$x\in F_{\infty}^{\times}$}. \end{align} For nonzero integral ideals $\mathfrak{m} = m \mathfrak{D}$ and $ \mathfrak{n} = n \mathfrak{D}$ we have \begin{equation}\label{1eq: Kuznetsov} \begin{aligned} \sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} } \hskip -1pt \omega_f h ( t_f ) \lambda_f ( \mathfrak{m} ) & {\lambda_f ( \mathfrak{n} )} + \frac {1} {4\pi} c_0 \int_{-\infty}^{\infty} \hskip -2pt \omega (t) h ( t ) \tau_{it} (\mathfrak{m} ) {\tau_{it} (\mathfrak{n} )} \hskip 0.5 pt \mathrm{d} t \\ & = c_1 \delta_{\mathfrak{m}, \mathfrak{n}} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} + c_2 \sum_{ \epsilon \, \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O}$}}}^{\scalebox{0.55}{$\times$}} \hskip -1pt / \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O}$}}}^{\scalebox{0.55}{$\times$} 2} } \sum_{c \, \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} \smallsetminus \{0\} } \frac {S ( m, \epsilon n ; c ) } { |\RN (c )| } \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$
\text{\usefont{U}{dutchcal}{m}{n}H} $}}} \bigg( \frac {\epsilon m n } { c^2 } \bigg), \end{aligned} \end{equation} where \begin{align}\label{1eq: omegas} \omega_f = \frac {|C_f|^2} {\mathrm{Pl}(t_f)} = \frac { 1 } { 2 L(1, \mathrm{Sym^2} f) }, \quad \quad \omega (t) = \frac { 1 } {|\zeta_F(1+2it)|^2}, \end{align} $\delta_{\mathfrak{m}, \mathfrak{n}} $ is the Kronecker $\delta$ that detects $\mathfrak{m} = \mathfrak{n}$, and $c_0$, $c_1$, and $c_2$ are given by \begin{align} \label{3eq: constants, Q} c_0 = 1, \qquad c_1 = \frac {1} {8 \pi^{2 } } , \qquad c_2 = \frac {1} {16 \pi^{2 } } , \end{align} if $F = {\mathbb {Q}}$, and \begin{align} \label{3eq: constants, C} c_0 = \frac { 2\pi } {w_F \sqrt{|d_F|} } , \quad c_1 = \frac {\sqrt{|d_F|}} {8 \pi^{3} } , \quad c_2 = \frac {1} {16 \pi^{3} } , \end{align} if $F = {\mathbb {Q}}(\sqrt{d_F})$. \end{prop} For our normalized $\mathrm{Pl} (t)$ the constants in \cite[(3.18), (3.19)]{Qi-GL(3)} have been modified here accordingly. Note that $c_0 = \gamma_1$ and $c_2 = c_1 /2 \sqrt{|d_F|}$. By the discussions below \cite[Lemma 2.2]{Qi-Liu-LLZ}, it is known that the lower bound $|\zeta_F(1+2it)| \Gt 1/\log (3+|t|) $ holds, and hence \begin{align}\label{3eq: bound for omega(t)} \omega (t) \Lt \log^2 (3+|t|) . \end{align} \subsection{Vorono\"i Summation Formula} The Vorono\"i summation formula for the divisor function $\tau (\mathfrak{n}) = \tau_0 (\mathfrak{n})$ is as follows. Compare \cite[(4.49)]{IK}. \begin{prop}[Vorono\"i summation formula] \label{prop: Voronoi tau} Let $ a, \widebar{a}, c \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}$ with $c \neq 0$ be such that $(a, c) = (1)$ and $a \widebar{a} \equiv 1 (\mathrm{mod}\, c)$.
For $\varww (x) \in C^{\infty}_c (F^{\times}_{\infty}) $ we define its Hankel transform $ \widetilde{\varww}_{0} (y) $ with Bessel kernel $B_0 $ {\rm(}as in Definition {\rm\ref{defn: Bessel kernel}}{\rm)}{\rm:} \begin{align}\label{3eq: Hankel, global} \widetilde{\varww}_{0} (y) = \int_{F^{\scalebox{0.55}{$\times$}}_{\scalebox{0.55}{$\infty$} } } \varww (x) B_{0} ( x y) \mathrm{d} x, \qquad y \in F^{\times}_{\infty} , \end{align} and define its associated Mellin integrals{\rm:} \begin{align}\label{3eq: Mellin} \widetilde{\varww}_0 (0) = \int_{F^{\scalebox{0.55}{$\times$}}_{\scalebox{0.55}{$\infty$} } } \varww (x) \mathrm{d} x , \qquad \widetilde{\varww}_0' (0) = \int_{F^{\scalebox{0.55}{$\times$}}_{\scalebox{0.55}{$\infty$} } } \varww (x) \log \|x\|_{\infty} \mathrm{d} x . \end{align} Then we have the identity \begin{align}\label{app: Voronoi, tau} \begin{aligned} \frac {{|\RN(c)|}} {\sqrt{|d_F|}} \hskip -2 pt \sum_{n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} '} \hskip -2 pt \psi_{\infty} \Big( \frac {a n} {c} \Big) \tau (n \mathfrak{D}) \varww (n ) = \gamma_{1} \widetilde{\varww}_0' (0) + 2 \bigg( \gamma_{0} -\gamma_{1} \log \frac {{|\RN(c)|}} {\sqrt{|d_F|}} \bigg) \widetilde{\varww}_0 (0) & \\ + \frac 1 {\sqrt{|d_F|}} \sum_{ n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\} } \hskip -2 pt \psi_{\infty} \Big( \hskip -2 pt - \frac {\widebar{a} n} {c} \Big) \tau (n \mathfrak{D} ) \widetilde{\varww}_{0} \Big(\frac {n}{c^2}\Big) & , \end{aligned} \end{align} where the constants $\gamma_{0} $ and $\gamma_{1} $ are defined as in {\rm\eqref{2eq: zeta (s), s=1}}. \end{prop} \begin{proof} Apply \cite[Corollary 1.4]{Qi-VO} with $\zeta = a/c$, $\mathfrak{a} = (1)$, $S = \big\{ \mathfrak{p} : \mathfrak{p} | (c) \big\}$, and $\mathfrak{b} = (c^2)$. 
Note that every $\zeta \in F$ may be expressed as a fraction $\zeta = a/c$ with $(a, c) = (1)$ since the class number $h_F = 1$. \end{proof} It will be more convenient to interpret the zero frequency as the limit: \begin{align}\label{3eq: limit for 0} \gamma_{1} \widetilde{\varww}_0' (0) + 2 \bigg( \gamma_{0} -\gamma_{1} \log \frac {{|\RN(c)|}} {\sqrt{|d_F|}} \bigg) \widetilde{\varww}_0 (0) = \lim_{s \ra 0} \sum_{\pm} \frac {\zeta_F (1\pm 2s) \widetilde{\varww}_{\pm s} (0)} { | \RN ( c )^2 / d_F | ^{ \pm s} }, \end{align} where $ \widetilde{\varww}_{s} (0) $ is the Mellin transform \begin{align} \widetilde{\varww}_{s} (0) = \int_{F^{\scalebox{0.55}{$\times$}}_{\scalebox{0.55}{$\infty$} } } \varww (x) \|x\|_{\infty}^{s} \mathrm{d} x . \end{align} See \cite[Theorem 1.3]{Qi-VO}. \section{Approximate Functional Equations} Let $f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} $ be a {(spherical)} Hecke--Maass cusp form for $ {\mathrm {PGL}}_2 (\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}) $ with Hecke eigenvalues $\lambda_f (\mathfrak{n})$ and Archimedean parameter $ t_f \in Y$. The $L$-function attached to $ f $ is defined by \begin{equation} L (s, f ) = \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac {\lambda_f( \mathfrak{n}) } {\RN(\mathfrak{n})^{ s} }. \end{equation} The completed $L$-function for $ f $ is $\Lambda (s, f ) = \RN(\mathfrak{D})^{ s } \gamma (s, t_f ) L (s, f ) $, where \begin{equation}\label{4eq: defn of gamma (s, f)} \gamma (s, t) = (N \pi )^{- N s } \Gamma \bigg(\frac {N(s- i t)} 2 \bigg) \Gamma \bigg(\frac {N(s+ i t)} 2 \bigg) . \end{equation} Recall that $N = 1$ or $2$ according as $F$ is rational or imaginary quadratic. 
It is known that $\Lambda (s, f )$ is entire and has the functional equation \begin{align*} \Lambda (s, f ) = \Lambda (1-s, f ). \end{align*} For $\mathrm{Re} (s) > 1$, it follows from the Hecke relation \eqref{3eq: Hecke relation} that \begin{align*} \begin{aligned} L (s, f )^2 & = \mathop{\sum\sum}_{\mathfrak{n}_1, \mathfrak{n}_2 \subset \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}} \sum_{ \mathfrak{d} | (\mathfrak{n}_1, \mathfrak{n}_2) } \frac { \lambda_f \big(\mathfrak{n}_1 \mathfrak{n}_2 / \mathfrak{d}^2 \big) } {{\RN(\mathfrak{n}_1 \mathfrak{n}_2)^s }} = \sum_{ \mathfrak{d} \subset \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}} \frac 1 {\RN (\mathfrak{d})^{2s} } \mathop{\sum\sum}_{\mathfrak{n}_1, \mathfrak{n}_2 \subset \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}} \frac {\lambda_f (\mathfrak{n}_1 \mathfrak{n}_2) } {\RN (\mathfrak{n}_1 \mathfrak{n}_2)^s} , \end{aligned} \end{align*} and hence \begin{align}\label{5eq: L(s,f) square} L (s, f )^2 = \zeta_F (2s) \sum_{\mathfrak{n} \subset \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}} \frac {\lambda_f (\mathfrak{n}) \tau (\mathfrak{n}) } {\RN (\mathfrak{n})^s}, \end{align} where $\tau (\mathfrak{n})$ is the divisor function. 
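Over ${\mathbb {Q}}$, comparing the $n$-th Dirichlet coefficients of the two sides of \eqref{5eq: L(s,f) square} gives $\sum_{ab = n} \lambda_f (a) \lambda_f (b) = \sum_{d^2 | n} \lambda_f (n/d^2) \hskip 1 pt \tau (n/d^2)$, which is exactly the Hecke relation \eqref{3eq: Hecke relation} in disguise; for instance, at $n = p^2$ both sides reduce to $3 \lambda_f (p^2) + 1$ via $\lambda_f(p)^2 = \lambda_f(p^2) + 1$. Since the Eisenstein eigenvalues $\tau_{it}(n) = \sum_{ab = n} (a/b)^{it}$ obey the same relation, the identity can be sanity-checked numerically; the Python helpers below are illustrative sketches, not notation from the text:

```python
from math import cos, log, isqrt

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def tau(n):
    # divisor function tau(n)
    return len(divisors(n))

def tau_it(n, t):
    # Eisenstein Hecke eigenvalue: sum over ab = n of (a/b)^{it};
    # pairing d with n/d shows the imaginary parts cancel, so the value is real
    return sum(cos(t * log(d * d / n)) for d in divisors(n))

def coeff_lhs(n, t):
    # n-th Dirichlet coefficient of L(s)^2 = (sum_n tau_it(n) n^{-s})^2
    return sum(tau_it(a, t) * tau_it(n // a, t) for a in divisors(n))

def coeff_rhs(n, t):
    # n-th coefficient of zeta(2s) * sum_m tau_it(m) tau(m) m^{-s}
    return sum(tau_it(n // (d * d), t) * tau(n // (d * d))
               for d in range(1, isqrt(n) + 1) if n % (d * d) == 0)
```

The two coefficient functions agree up to floating-point error for every $n$ and real $t$.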
Similarly, if $\lambda_f (\mathfrak{n})$ are replaced by $\tau_{i t} (\mathfrak{n}) $, then \begin{align}\label{5eq: L (E)} \zeta_F (s+it ) \zeta_F (s-it ) = \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac {\tau_{it} ( \mathfrak{n}) } {\RN(\mathfrak{n})^{ s} } , \end{align} and \begin{align}\label{5eq: L (E) square} \zeta_F (s+it )^2 \zeta_F (s-it )^2 = \zeta_F (2s) \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac {\tau_{it} ( \mathfrak{n}) \tau (\mathfrak{n})} {\RN(\mathfrak{n})^{ s} } . \end{align} We have the Approximate Functional Equations for $L (s, f )$ and $L (s, f )^2$ (see \cite[Theorem 5.3]{IK}): \begin{equation} \label{5eq: AFE, 1} L \big(\tfrac 1 2, f \big) = 2 \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { \lambda_f (\mathfrak{n} ) } {\sqrt{\RN ( \mathfrak{n} )} } V_1 \big( \RN \big( \mathfrak{n} \mathfrak{D}^{-1} \big); t_f \big) , \end{equation} \begin{equation} \label{5eq: AFE, 2} L \big(\tfrac 1 2, f \big)^2 = 2 \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { \lambda_f (\mathfrak{n} ) \tau (\mathfrak{n}) } { \sqrt{\RN ( \mathfrak{n} )} } V_2 \big( \RN \big( \mathfrak{n} \mathfrak{D}^{-2} \big); t_f \big) , \end{equation} with \begin{equation}\label{5eq: def of V1 (y, t)} V_1 (y; t ) = \frac 1 {2 \pi i} \int_{(3)} G (v, t) y^{ - v} \frac { \mathrm{d} v } {v} , \end{equation} \begin{equation}\label{5eq: def of V2 (y, t)} V_2 (y; t ) = \frac 1 {2 \pi i} \int_{(3)} G (v, t)^2 \zeta_F (1+2v) y^{ - v} \frac { \mathrm{d} v } {v} , \end{equation} for $y > 0$, where 
\begin{align}\label{4eq: def G} G (v, t) = \frac {\gamma \big(\frac 1 2 + v , t \big) } {\gamma \big(\frac 1 2, t \big) } \cdot e^{v^2 } . \end{align} In parallel, we have \begin{align} \label{5eq: AFE zeta, 1} \left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^2 & = 2 \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { \tau_{it} (\mathfrak{n} ) } {\sqrt{\RN ( \mathfrak{n} )} } V_1 \big( \RN \big( \mathfrak{n} \mathfrak{D}^{-1} \big); t \big) + O \big( e^{-t^2/2} \big) , \\ \label{5eq: AFE zeta, 2} \left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^4 & = 2 \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { \tau_{it} (\mathfrak{n} ) \tau (\mathfrak{n}) } { \sqrt{\RN ( \mathfrak{n} )} } V_2 \big( \RN \big( \mathfrak{n} \mathfrak{D}^{-2} \big); t \big) + O \big( e^{-t^2 } \big) , \end{align} in which the errors arise from the polar terms. \begin{lem}\label{lem: afq} For $t \in Y_{\mathrm{KS}}$ define \begin{align*} {\mathrm {C}} (t) = \sqrt{\tfrac 1 4 + t^2} . \end{align*} {\rm(1)} Let $ U > 1 $, $A > 0$, and $\vepsilon > 0$.
We have \begin{align}\label{1eq: derivatives for V(y, t), 1} V_1 (y; t ) \Lt_{ A } \bigg( 1 + \frac {y} {{\mathrm {C}} (t)^{N} } \bigg)^{-A} , \quad V_2 (y; t ) \Lt_{ A } \bigg( 1 + \frac {y} {{\mathrm {C}} (t)^{2N} } \bigg)^{-A}, \end{align} and \begin{align} \label{1eq: approx of V1} V_1 (y; t) & = \frac 1 {2 \pi i } \int_{ \vepsilon - i U}^{\vepsilon + i U} G (v, t) y^{ - v} \frac {\mathrm{d} v} {v} + O_{\vepsilon } \bigg( \frac {{\mathrm {C}} (t)^{ \vepsilon} } {y^{ \vepsilon} e^{U^2 / 2} } \bigg), \\ \label{1eq: approx of V2} V_2 (y; t) & = \frac 1 {2 \pi i } \int_{ \vepsilon - i U}^{\vepsilon + i U} G (v, t)^2 \zeta_F (1+2v) y^{ - v} \frac {\mathrm{d} v} {v} + O_{\vepsilon } \bigg( \frac {{\mathrm {C}} (t)^{ \vepsilon} } {y^{ \vepsilon} e^{U^2 } } \bigg) . \end{align} {\rm(2)} Define \begin{align}\label{4eq: defn of psi} \psi (t) = \frac {N} 2 \left ( \frac {\Gamma'} {\Gamma} \left ( \frac {N (1+2 i t)} {4} \right ) + \frac {\Gamma'} {\Gamma} \left ( \frac {N (1 - 2 i t)} {4} \right ) - 2 \log (N \pi) \right ) .
\end{align} {\rm(}This $\psi $ should not be confused with the additive character as it stands here for the digamma function.{\rm)} We have \begin{align}\label{4eq: asymptotic for V1} V_1 (y; t) = 1 + O_{A } \left ( \left ( \frac {y} {{\mathrm {C}} (t)^N} \right ) \hskip -8.5 pt {\phantom{\Big)}}^{A } \right ) , \end{align} for $1 \Lt y < {\mathrm {C}} (t)^{N}$, and \begin{align}\label{4eq: asymptotic for V2} V_2 (y; t) = \gamma_{0} + \gamma_{1} \left ( \psi (t) - \log \sqrt{y} \right ) + O_{A } \left ( \left ( \frac {y} {{\mathrm {C}} (t)^{2N}} \right ) \hskip -8.5 pt {\phantom{\Big)}}^{A } \right ), \end{align} for $1 \Lt y < {\mathrm {C}} (t)^{2N}$, where $\gamma_{0}$ and $\gamma_{1}$ are defined as in {\rm\eqref{2eq: zeta (s), s=1}}, and \begin{align}\label{4eq: asymp of psi} \psi (t) = N \log \left ( {\mathrm {C}} (t) / 2\pi \right ) + O \big(1/ {\mathrm {C}} (t)^2 \big). \end{align} \end{lem} \begin{proof} The asymptotics in (1) are analogous to those in \cite[Lemma 5.1 (1)]{Qi-GL(3)}. See also \cite[Proposition 5.4]{IK}, \cite[Lemma 1]{Blomer}, and \cite[Lemma 3.7]{Qi-Gauss}. To derive \eqref{4eq: asymptotic for V1} and \eqref{4eq: asymptotic for V2}, we choose $U = \sqrt{{\mathrm {C}} (t)} $, say, and shift the integral contour in \eqref{1eq: approx of V1} and \eqref{1eq: approx of V2} from $\mathrm{Re}(v) = \vepsilon$ further down to $\mathrm{Re}(v) = - A $; the main term is the residue from the pole at $v = 0$ while the error term is from the Stirling formula.
Note that the integrand in \eqref{5eq: def of V2 (y, t)} has a double pole at $v = 0$, and its residue may be computed using \begin{align*} \zeta_F (1 + 2v) = \frac {\gamma_{1}} {2 v} + \gamma_{0} + O (|v|), \qquad v \ra 0, \end{align*} by \eqref{2eq: zeta (s), s=1}, and \begin{align*} G (v, t) = 1 + \psi (t) v + O \big(|v|^2 \big), \qquad v \ra 0, \end{align*} by \eqref{4eq: defn of gamma (s, f)} and \eqref{4eq: def G}. Moreover, \eqref{4eq: asymp of psi} follows readily from \begin{align*} \frac {\Gamma'} {\Gamma} (s) = \log s - \frac 1 {2s} + O \bigg(\frac 1 {|s|^2} \bigg), \end{align*} for $|s| \ra \infty$ and $|\arg (s)| \leqslant \pi -\delta < \pi$. \end{proof} \section{Choice of Weight Function}\label{sec: choice of h} \begin{defn}\label{def: weight k} Let $1 \Lt T^{\vepsilon} \leqslant M \leqslant T^{1-\vepsilon} $. Define the function \begin{align}\label{5eq: defn k(nu)} k (t) = k_{T, M} (t) = e^{- (t - T)^2 / M^2} + e^{-(t + T)^2 / M^2} . \end{align} \end{defn} Next we introduce an unsmoothing process as in \cite[\S 3]{Iviv-Jutila-Moments} by an average of the weight $k_{T, M} $ in the $T$-parameter. \begin{defn} Let $ 3 H \leqslant T $ and $T^{\vepsilon} \leqslant M \leqslant H^{1-\vepsilon} $. Define \begin{align}\label{5eq: defn w(nu)} \varww (t) = \varww_{T, M, H} (t) = \frac 1 { \sqrt{\pi} M} \int_{\, T- H}^{T + H} k_{K, M} (t) \mathrm{d} K . \end{align} \end{defn} Let $\chi (t) = \chi^{}_{T, H} (t)$ denote the characteristic function for $ ||t| - T | \leqslant H$ ($t$ real).
By adapting the arguments in \cite[\S 3]{Iviv-Jutila-Moments} (see also \cite[\S 3]{BHS-Maass}\footnote{Note that the $1$ in \cite[(3.4)]{BHS-Maass} should be the characteristic function.}), it is easy to prove that $\varww (t) - 1 $ is exponentially small if $ | |t| - T | \leqslant H - M^{1+\vepsilon} $, \begin{align*} \varww (t) - \chi (t) = O \left ( \frac {M^3} {\big( M + \min \big\{ ||t|-T \pm H| \big\} \big)^3 } \right ), \end{align*} if $ ||t| - T \pm H | \leqslant M^{1+\vepsilon} $, and $\varww (t)$ is exponentially small otherwise. From these, along with \eqref{2eq: subconvex} and \eqref{3eq: bound for omega(t)}, one may prove the following lemmas (compare \cite[(3.6), (3.7)]{Iviv-Jutila-Moments}). \begin{lem}\label{lem: unsmooth, 1} Let $\lambda $ be a real constant. Suppose that $a_f \geqslant 0$ and that \begin{align*} \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}} k (t_f) a_f = O_{\lambda, \vepsilon} \big( M T^{\lambda} \big) \end{align*} for any $M$ with $ T^{\vepsilon} \leqslant M \leqslant T^{1-\vepsilon} $. Then for $ M^{1+\vepsilon} \leqslant 3 H \leqslant T $ we have \begin{align*} \mathop{\sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}} }_{ |t_f - T| \hskip 0.5 pt \leqslant H} a_f = \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}}} \varww (t_f) a_f + O_{\lambda, \vepsilon} \big( M T^{\lambda} \big) . \end{align*} \end{lem} Note that we obtain \cite[Lemma 3.1]{BHS-Maass} by applying Lemma \ref{lem: unsmooth, 1} with $M = H^{1-\vepsilon}$ and $a_f = \delta_{L (\frac 1 2 , f) \neq 0}$ (the Kronecker $\delta$ that detects $L (\frac 1 2 , f) \neq 0 $).
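For orientation, the $K$-average in \eqref{5eq: defn w(nu)} evaluates in closed form via the error function: up to the mirror term coming from $e^{-(t+K)^2/M^2}$, one has $\varww (t) = \frac 1 2 \big( \mathrm{erf} ((H + T - t)/M) + \mathrm{erf} ((H - T + t)/M) \big) + \cdots$, from which the stated comparison between $\varww (t)$ and $\chi (t)$ is transparent. A short numerical sketch (the parameter values below are illustrative only, chosen with $T^{\vepsilon} \leqslant M \leqslant H^{1-\vepsilon}$ and $3H \leqslant T$ in mind):

```python
from math import erf, exp, pi, sqrt

# illustrative parameters only: M is the Gaussian width, [T - H, T + H] the window
T, M, H = 1000.0, 10.0, 100.0

def w_closed(t):
    # closed-form K-average of k_{K,M}(t); the second line is the mirror term
    main = 0.5 * (erf((H + T - t) / M) + erf((H - T + t) / M))
    mirror = 0.5 * (erf((H + T + t) / M) + erf((H - T - t) / M))
    return main + mirror

def w_numeric(t, steps=20000):
    # midpoint rule for the defining integral of w_{T,M,H}(t)
    dK = 2.0 * H / steps
    total = 0.0
    for i in range(steps):
        K = T - H + (i + 0.5) * dK
        total += exp(-((t - K) / M) ** 2) + exp(-((t + K) / M) ** 2)
    return total * dK / (sqrt(pi) * M)
```

One checks that $\varww (t)$ is exponentially close to $1$ well inside $[T - H, T + H]$, passes through a transition range of width about $M$ near the endpoints, and is negligible outside, as claimed.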
\begin{lem}\label{lem: unsmooth, 2} For $ M^{1+\vepsilon} \leqslant 3 H \leqslant T $ we have \begin{align*} 2 \int_{\, T-H}^{T+H} \frac {\left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^{2q} } { | \zeta_F (1 + 2 it ) |^2 } \mathrm{d} t = \int_{-\infty}^{\infty} \frac {\left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^{2q} } { | \zeta_F (1 + 2 it ) |^2 } \varww (t) \hskip 0.5 pt \mathrm{d} t + O_{ \vepsilon} \big( M T^{2q N \theta + \vepsilon} \big), \end{align*} where $\theta$ is a sub-convex exponent for $\zeta_F (s)$ as in {\rm\eqref{2eq: subconvex}}. \end{lem} { \large \part{Analysis of Integrals} \label{part: analysis}} In the subsequent sections, we shall analyze the Bessel integrals and their Hankel and Mellin integral transforms over $F_{\infty} = \BR$ or ${\mathbb {C}}$. Henceforth, $x$, $y$ will always stand for real variables, while $z$, $u$ for complex variables. \vskip 5pt \section{Asymptotics for Bessel Kernels} Let $ B_{s}(x) $ and $B_{s} (z)$ be the real and complex Bessel kernels as in Definition \ref{defn: Bessel kernel}, respectively. By the work in \cite{Qi-Liu-LLZ,Qi-GL(3)}, the Bessel integrals $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x)$ and $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z)$ in the Kuznetsov trace formula are well understood, and these results will be recalled in the next section. In this section, we are mainly concerned with the Bessel kernel $B_0 (x)$ or $B_0 (z)$ for the Hankel transform arising in the Vorono\"i summation formula.
In view of Definition \ref{defn: Bessel kernel}, the connection formulae in \cite[3.61 (1), (2)]{Watson} may be applied to deduce \begin{align}\label{4eq: formula B0, R, 2} B_0 (x) = \pi i \big( H_0^{(1)} (4\pi \sqrt{x}) - H_0^{(2)} (4\pi \sqrt{x}) \big), \quad B_0 (-x) = 4 K_0 (4\pi \sqrt{x}), \end{align} and \begin{align}\label{4eq: formula B0, C, 2} B_0 (z) = \pi^2 i \big( { \textstyle H_{0}^{(1)} (4 \pi \sqrt {z}) H_{0}^{(1)} (4 \pi \sqrt { \widebar z}) - H_{0}^{(2)} (4 \pi \sqrt {z}) H_{0}^{(2)} (4 \pi \sqrt { \widebar z}) } \big) . \end{align} By the asymptotic expansions in \cite[7.2 (1, 2), 7.23 (1)]{Watson}, for any non-negative integer $K$, there are smooth functions $ W_0 (x) $ and $W_0 (z)$ (depending on $K$) with \begin{align}\label{5eq: bounds for W0} x^j \frac {\mathrm{d}^j W_0 (x)} {\mathrm{d} x^j} \Lt_{j, K} 1, \qquad z^j \widebar{z}^k \frac {\partial^{j+k} W_0 (z)} {\partial z^j \partial \widebar z^k } \Lt_{j, k, K} 1 , \end{align} such that \begin{align} \label{4eq: asymptotic, R+} & B_0 (x) = \sum_{ \pm} \frac {e (\pm (2 \sqrt{x} + 1/8))} { \sqrt[4]{x \phantom{|\hskip -2 pt}} } W_0 (\pm \sqrt{ x}) + O_{ K} \bigg( \frac 1 {x^{(2K+1)/4}} \bigg), \\ \label{4eq: asymptotic, R-} & B_0 (-x) = O \bigg( \frac {\exp (-4\pi \sqrt{x})} {\sqrt[4]{x \phantom{|\hskip -2 pt}}} \bigg), \end{align} for $x > 1$, and \begin{align}\label{4eq: asymptotic, C} & B_0 (z) = \sum_{ \pm} \frac {e (\pm 4 \hskip 0.5 pt \mathrm{Re} \sqrt{z})} {\sqrt{|z|} } W_0 (\pm \sqrt{ z}) + O_{ K} \bigg( \frac 1 {|z|^{(K+1)/2}} \bigg), \end{align} for $|z| > 1$. \begin{rem} For the real case, \eqref{4eq: asymptotic, R+} has a cleaner form without the error term. For the complex case, however, the error term must be included in {\rm\eqref{4eq: asymptotic, C}}, since the two product functions in {\rm\eqref{4eq: formula B0, C, 2}} are {\it not} individually well defined on ${\mathbb {C}} \smallsetminus \{0\}$.
\end{rem} \section{Properties of Bessel Integrals} For $1 \Lt T^{\vepsilon} \leqslant M \leqslant T^{1-\vepsilon} $, let $k (t) = k_{T, \hskip 0.5 pt M} (t)$ be the weight function as defined in \S \ref{sec: choice of h}. Define \begin{align}\label{5eq: defn h(nu)} h^q (t; v) = h^q_{T, \hskip 0.5 pt M} (t; v) = k_{T, \hskip 0.5 pt M} (t) G (v, t)^q , \end{align} with $\mathrm{Re}(v) = \vepsilon$ and $|\mathrm{Im}(v)| \leqslant \log T$. Note that $ h^q (t; v) $ lies in the space $\mathscr{H} \big(\frac 1 2 + \vepsilon \big)$ as in Definition \ref{defn: test functions}. Since $q$ and $v$ are inessential to our analysis, we shall simply write $h (t) = h^q (t; v)$. Let $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) $ or $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z)$ be its associated Bessel integral (see \eqref{1eq: defn Bessel integral}) defined by \begin{align}\label{7eq: defn of H(x)} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) = \int_{-\infty}^{\infty} h (t) B_{i t} (x) t \tanh (\pi t ) \mathrm{d} \hskip 0.5 pt t, \quad \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z) = \int_{-\infty}^{\infty} h (t) B_{i t} (z) t^2 \mathrm{d} \hskip 0.5 pt t . \end{align} The following results for $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) $ and $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z)$ are essentially established in \cite{Qi-Liu-LLZ} and \cite[\S 8]{Qi-GL(3)} in different settings (see also \cite{Iviv-Jutila-Moments,XLi2011,Young-Cubic} for the real case). For the complex case, however, it will be more convenient to work here in Cartesian coordinates. The crude estimates in (5) and (6) of the lemma below may be derived by shifting the integral contour to $\Im (t) = \frac 1 2 + \vepsilon$. See \cite{Young-Cubic} and \cite{Qi-Liu-LLZ}.
\begin{lem}\label{lem: H(x), |z|>1} There exists a Schwartz function $ g (r)$ satisfying $g^{(j)} (r) \Lt_{j, \hskip 0.5 pt A, \hskip 0.5 pt \vepsilon} (1 + |r| )^{-A}$ for any $j, A \geqslant 0$, and such that {\rm(1)} if $F_{\infty} $ is real, then $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) = \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ +} (x) + \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ -} (x) + O (T^{-A} ) $ for $|x| > 1$, with \begin{equation}\label{8eq: H+natural} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ \pm} ( x^2) = MT^{1+\vepsilon} \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} g ( { M r} ) e( Tr / \pi \mp 2 x \cosh r ) \mathrm{d} r, \end{equation} and \begin{equation}\label{8eq: H-natural} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ \pm} (- x^2) = MT^{1+\vepsilon} \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} g ( { M r} ) e( Tr / \pi \pm 2 x \sinh r ) \mathrm{d} r, \end{equation} for $x > 1${\rm;} {\rm(2)} if $F_{\infty} $ is complex, then $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z) = \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ +} (z) + \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ -} (z) + O (T^{-A} ) $ for $|z| > 1$, with \begin{align}\label{8eq: H-sharp(z)} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{\scriptscriptstyle \pm } ( z^2 ) = M T^{2+\vepsilon} \int_0^{ \pi} \hskip -1 pt \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} g ( M r ) e (2 T r/ \pi \mp 4 \mathrm{Re} (z \mathrm{trh} ( r, \omega ) ) ) \mathrm{d} r \hskip 0.5 pt \mathrm{d} \omega, \end{align} for $ \arg (z) \in [0, \pi)$, where $ \mathrm{trh} (r, \omega )$ is the 
``trigonometric-hyperbolic'' function defined by \begin{align}\label{8eq: trh function} \mathrm{trh} (r, \omega ) = \cosh r \cos \omega + i \sinh r \sin \omega . \end{align} Furthermore, {\rm (3)} for real $x$ with $1 < |x| \Lt T^2 $, we have $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) = O (T^{-A})${\rm;} {\rm (4)} for complex $z$ with $1 < |z| \Lt T^2 $, we have $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z) = O (T^{-A})${\rm;} {\rm(5)} for real $x$ with $|x| \leqslant 1$, we have \begin{equation}\label{7eq: crude bound for H, R} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (x) \Lt_{A , \hskip 0.5 pt \vepsilon} M T^{1 - 2 A } \sqrt{|x|} ; \end{equation} {\rm(6)} for complex $z$ with $|z| \leqslant 1$, we have \begin{equation}\label{7eq: crude bound for H, C} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} (z) \Lt_{A , \hskip 0.5 pt \vepsilon} M T^{2 - 4 A } |z| . \end{equation} \end{lem} \begin{rem} In {\rm\cite{Qi-GL(3)}}, for the proof in the case $|x| \leqslant 1$ or $|z| \leqslant 1$ a certain polynomial is introduced to annihilate the poles of the gamma factor, but it is redundant because the residues of the integrand in {\rm\eqref{7eq: defn of H(x)}} at these poles are actually exponentially small in view of $|\mathrm{Im}(v)| \leqslant \log T$. \end{rem} In the real case, Lemma \ref{lem: H(x), |z|>1} (3) may be strengthened for $x > 1$ as follows. \begin{lem}\label{lem: x > MT} We have $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_{ \pm} (x) = O \big(T^{-A}\big) $ for $1 < x \leqslant M^{2-\vepsilon} T^2 $. \end{lem} \section{Analysis of Hankel Transforms}\label{sec: Hankel} Let $\varww (x) \in C_c^{\infty} [1, 2] $ satisfy $ \varww^{(j)} (x) \Lt_{j} (\log T)^{j} $ for all $j \geqslant 0$.
For $|\varLambda| \Gt T^2$, define \begin{align}\label{11eq: defn of w (x, Lmabda), R} \varww (x, \varLambda ) = \varww (|x|) \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} ( \varLambda x ) , \end{align} if $F_{\infty}$ is real, and \begin{align}\label{11eq: defn of w (z, Lmabda), C} \varww (z, \varLambda ) = \varww (|z|) \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}} ( \varLambda z ) , \end{align} if $F_{\infty}$ is complex. Let $\widetilde {\varww}_0 (y , \varLambda )$ and $\widetilde {\varww}_0 ( u , \varLambda )$ be their Hankel transforms, defined by \begin{align}\label{9eq: Hankel} & \widetilde {\varww}_0 (y , \varLambda ) = \int {\varww} (x , \varLambda ) B_0 (xy) \mathrm{d} x, \quad \widetilde {\varww}_0 (u , \varLambda ) = \int \hskip -4 pt \int {\varww} (z , \varLambda ) B_0 (z u) \mathrm{d} z . \end{align} First of all, let us assume $\varLambda > 0$ with no loss of generality, as \begin{align}\label{w(y, -L) = w (-y, L)} \widetilde {\varww}_{0} (y , \varLambda )= \widetilde {\varww}_{0} ( \epsilon y , \epsilon \varLambda ), \qquad \widetilde {\varww}_{0} (u , \varLambda )= \widetilde {\varww}_{0} ( {\epsilon} u , \epsilon \varLambda ) , \end{align} for any $\epsilon \in F^{\times}_{\infty}$ with $ |\epsilon| = 1 $. \begin{lem}\label{lem: Hankel} Suppose that $ \varLambda \Gt T^2$.
{\rm(1)} When $F_{\infty}$ is real, for $ y \geqslant T^{\vepsilon}$ we have \begin{align}\label{10eq: tilde w = Phi, R} \widetilde {\varww}_0 ( \pm y , \varLambda) = \frac{MT^{1+\vepsilon} } { \sqrt[4]{y \phantom{|\hskip -2 pt}} } \Psi^{\pm} \big( \sqrt{y / \varLambda } , \sqrt{ \varLambda }\big) + O \big(T^{-A} \big) , \end{align} with \begin{align}\label{10eq: Phi+ (x), R} \Psi^{+} (x, \varDelta) = \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} e( Tr / \pi ) g ( { M r} ) \widehat{V} ( \varDelta (x - \cosh r ) ) \mathrm{d} r , \end{align} or $\Psi^{+} (x, \varDelta) = 0$ according as $\varDelta > M^{1-\vepsilon} T$ or not, and \begin{align}\label{10eq: Phi- (x), R} \Psi^{-} (x, \varDelta) \hskip -1pt = \hskip -2pt \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} \hskip -1pt e( Tr / \pi ) g ( { M r} ) \big( \widehat{V} ( \varDelta ( x \hskip -1pt + \hskip -1pt \sinh r ) ) \hskip -1pt + \hskip -1pt \widehat{V} ( \varDelta ( x \hskip -1pt - \hskip -1pt \sinh r ) ) \big) \mathrm{d} r , \end{align} where $ \widehat{V} (x) $ is a Schwartz function satisfying \begin{align}\label{8eq: Schwartz, R} \frac{\mathrm{d}^{j} \widehat{V} (x) } {\mathrm{d} x^{j}} \Lt_{j, A} \left ( 1 + \frac {|x|} {\log T} \right )^{- A} \end{align} for any $j, A \geqslant 0$. 
{\rm(2)} When $F_{\infty}$ is complex, for $ |u| \geqslant T^{\vepsilon}$ we have \begin{align}\label{10eq: tilde w = Phi, C} \widetilde {\varww}_0 ( u , \varLambda) = \frac{MT^{2+\vepsilon} } {\sqrt{|u|} } \Psi \big( \sqrt{ u / \varLambda } , \sqrt{ \varLambda } \big) + O \big(T^{-A} \big) , \end{align} with \begin{align}\label{10eq: Phi (x), C} \Psi (z, \varDelta) = \int_0^{2 \pi} \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} e( 2 Tr / \pi ) g ( { M r} ) \widehat{V} ( \varDelta ( z - \mathrm{trh} (r, \omega) ) ) \mathrm{d} r \mathrm{d} \omega , \end{align} where $ \widehat{V} (z) $ is a Schwartz function satisfying \begin{align}\label{8eq: Schwartz, C} \frac{\partial^{j + k} \widehat{V} (z) } {\partial z^{j} \partial \widebar{z}^{k} } \Lt_{j, k, A} \left ( 1 + \frac {|z|} {\log T } \right )^{- A} \end{align} for any $j, k, A \geqslant 0$. \end{lem} \begin{proof} First, let $F_{\infty}$ be real. By \eqref{4eq: asymptotic, R+} (with $K$ large in terms of $\vepsilon$ and $A$), \eqref{4eq: asymptotic, R-}, and \eqref{8eq: H+natural}, \eqref{8eq: H-natural} in Lemma \ref{lem: H(x), |z|>1} (1), along with the substitution $\pm 2 \sqrt{ x} \ra x$, it follows that, up to a negligible error, $\widetilde {\varww}_0 ( y , \varLambda ) $ or $\widetilde {\varww}_0 ( - y , \varLambda ) $ becomes the sum of \begin{align*} \frac{MT^{1+\vepsilon} } { \sqrt[4]{y \phantom{|\hskip -2 pt}} } \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} e( Tr / \pi ) g ( { M r} ) \left ( \int_{-\infty}^{\infty} V (x) e \big( \hskip -1pt - x \big( \sqrt{y} \pm \sqrt{\varLambda} \cosh r \big) \big) \mathrm{d} x \right ) \mathrm{d} r , \end{align*} or \begin{align*} \frac{MT^{1+\vepsilon} } { \sqrt[4]{y \phantom{|\hskip -2 pt}} } \int_{- M^{\vepsilon} / M}^{M^{\vepsilon}/ M} e( Tr / \pi ) g ( { M r} ) \left ( \int_{-\infty}^{\infty} V (x) e \big( \hskip -1pt - x \big( \sqrt{y} \mp \sqrt{\varLambda} \sinh r \big) \big) \mathrm{d} x \right ) \mathrm{d} r , \end{align*} respectively,
where $V (x)$ is a certain smooth weight function supported in $|x| \in [1/2, 1/ \sqrt{2}]$ with $$ V^{(j)} (x) \Lt_{j, A} (\log T)^{j} . $$ (To be explicit, $ V (\pm 2 x ) = (1 \mp i) \sqrt{x/2} \hskip 0.5 pt \varww (x^2) W_0 (\mp \sqrt{y} x)$.) By Lemma \ref{lem: x > MT}, the first integral is negligibly small unless $ \sqrt{\varLambda} > M^{1-\vepsilon} T $. Observe that the inner integral is a Fourier integral, and that $ \sqrt{y} + \sqrt{\varLambda} \cosh r \Gt T $ is large, so the results follow immediately. Second, let $F_{\infty}$ be complex. Similarly to the real case, one may prove \eqref{10eq: tilde w = Phi, C} on applying \eqref{4eq: asymptotic, C} and \eqref{8eq: H-sharp(z)}, along with the substitution $\pm 2 \sqrt{z} \ra z$. \end{proof} \subsection{Analysis for the Hyperbolic Functions} \begin{lem}\label{lem: I-} Let $ \delta < \rho \Lt 1$. For $0 \leqslant x < 1$ define the region ${\mathrm {I}}^- (\delta, \rho; x)$ by \begin{align}\label{8eq: defn I-} |r| \leqslant \rho , \qquad |\sinh r \pm x | \leqslant \delta. \end{align} {\rm(1)} ${\mathrm {I}}^- (\delta, \rho; x)$ is empty unless $ {x} \Lt \rho $. {\rm(2)} ${\mathrm {I}}^- (\delta, \rho; x)$ has length $O (\delta)$. \end{lem} \begin{proof} The first assertion is obvious in view of $\sinh r = O (\rho)$. By the mean value theorem, the second inequality in \eqref{8eq: defn I-} implies that $ |r \pm \mathrm{arcsinh} \hskip 1pt {x} | \Lt \delta $, and hence the length of ${\mathrm {I}}^- (\delta, \rho; x)$ is bounded by $ O (\delta) $. \end{proof} \begin{lem}\label{lem: I+} Let $ \sqrt{\delta} < \rho \Lt 1$. For $0 < x < \sqrt{2}$ define the region ${\mathrm {I}}^+ (\delta, \rho; x)$ by \begin{align}\label{8eq: defn I+} |r| \leqslant \rho , \qquad |\cosh r - x | \leqslant \delta .
\end{align} {\rm(1)} ${\mathrm {I}}^+ (\delta, \rho; x)$ is empty unless $ \left| x -1 \right| \Lt \rho^2 $. {\rm(2)} ${\mathrm {I}}^+ (\delta, \rho; x)$ has length $ O (\delta / \sqrt{|x -1|} ) $. {\rm(3)} We have $ \sinh r \allowbreak \Lt \sqrt{\delta} $ on the region ${\mathrm {I}}^+ (\delta, \rho; 1)$. \end{lem} \begin{proof} By $ \sinh^2 r = \cosh^2 r - 1$, the second inequality in \eqref{8eq: defn I+} implies \begin{align}\label{8eq: cosh, 2} \big|\sinh^2 r - (x^2 - 1 ) \big| \Lt \delta . \end{align} Then (1) and (3) are obvious. As for (2), \eqref{8eq: cosh, 2} yields $ | r | \Lt \sqrt{\delta}$ if $ \left| x -1 \right| \Lt {\delta} $, shows that the region is empty if $ 1-x \Gt {\delta} $, and gives $ \big|r \pm \mathrm{arcsinh} \sqrt{x^2-1} \big| \Lt \delta / \sqrt{x-1}$ if $ x -1 \Gt {\delta} $ (again, by the mean value theorem), and hence the length of ${\mathrm {I}}^+ (\delta, \rho; x)$ is bounded by $ O (\delta / \sqrt{| x -1 |} ) $ in every case. \end{proof} \subsection{Analysis for the Trigonometric-Hyperbolic Function} \begin{lem}\label{lem: I, C} Let $ {\delta} < \rho \Lt 1$. For $ |x| < \sqrt{2}$ and $|y| < 1$ define ${\mathrm {I}} (\delta, \rho; x + i y)$ to be the set of $(r, \omega)$ such that \begin{align}\label{8eq: defn I} |r| \leqslant \rho , \qquad |\cos \omega \cosh r - x | \leqslant \delta, \qquad |\sin \omega \sinh r - y | \leqslant \delta . \end{align} {\rm(1)} ${\mathrm {I}} (\delta, \rho; x + i y)$ is empty unless $ |x| < 1 + 2 \rho $ and $|y| \Lt \rho$. {\rm(2)} The area of ${\mathrm {I}} (\delta, \rho; x + i y)$ is bounded as follows: \begin{align}\label{8eq: bound area} \mathrm{Area} \, {\mathrm {I}} (\delta, \rho; x + i y) \Lt \frac {\delta^2} {\sqrt{(|x|-1)^2 + y^2}}.
\end{align} {\rm(3)} We have $ \sinh r, \sin \omega \Lt \sqrt{\delta} $ on the region ${\mathrm {I}} (\delta, \rho; \pm 1)$. \end{lem} \begin{proof} We shall focus on (2), since (1) is obvious while (3) will be transparent in the last case of its proof. By symmetry, we only need to work in the setting with $ (r, \omega) \in [0, \rho] \times [0, \pi / 2] $ and $(x, y) \in [0, \sqrt{2}) \times [0, 1)$. Consider the mapping \begin{align} f : (r, \omega) \ra (\cos \omega \cosh r, \sin \omega \sinh r), \end{align} so that ${\mathrm {I}} (\delta, \rho; x + i y)$ is contained in the preimage under $f$ of the square with center $(x, y)$ and area $4\delta^2$. The Jacobian matrix is given by \begin{align*} J_f (r, \omega) = \begin{pmatrix} \, \cos \omega \sinh r & \sin \omega \cosh r \\ - \sin \omega \cosh r & \cos \omega \sinh r \end{pmatrix} . \end{align*} On the semi-closed rectangle $(0 , \rho] \times (0, \pi/2)$, since all the principal minors of $J_f (r, \omega)$ are positive, by the Univalence Theorem of Gale and Nikaid\^o (\cite[\S \S 4.2, 4.3]{Gale-Nikaido}), $f$ is a univalent mapping. Note that the Jacobian determinant is equal to $\sinh^2 r + \sin^2 \omega $. Therefore $f $ may be used as a coordinate transform, and if we are able to prove the lower bound \begin{align} \label{8eq: lower bound} \sinh^2 r + \sin^2 \omega \Gt \sqrt{(x-1)^2 + y^2 } \end{align} on $ {\mathrm {I}} (\delta, \rho; x + i y)$ for either $ |x - 1| \Gt \delta $ or $y \Gt \delta$, then \eqref{8eq: bound area} follows immediately in this case. Now we prove \eqref{8eq: lower bound}. For $ x \leqslant 1/2$, say, the second inequality in \eqref{8eq: defn I} implies $ \cos \omega \leqslant 1/\sqrt{2}$ (provided that $\rho \Lt 1$, so that $\cosh r$ is near $1$ and $\delta < \rho$ is small), and hence \eqref{8eq: lower bound} is clear.
For $x > 1/2$, observe that the second inequality in \eqref{8eq: defn I} implies \begin{align}\label{8eq: cos cosh, 2} \big|\sinh^2 r - \sin^2 \omega - \sin^2 \omega \sinh^2 r - (x^2-1) \big| \Lt \delta , \end{align} due to $ \cos^2 \omega \cosh^2 r = 1 + \sinh^2 r - \sin^2 \omega - \sin^2 \omega \sinh^2 r $. In the case when $ |x - 1| \Gt \delta $ and $y \Gt \delta$, the last inequality in \eqref{8eq: defn I} and \eqref{8eq: cos cosh, 2} together yield \begin{align*} \sinh^2 r - \sin^2 \omega \text{ \small $\asymp$ } x^2-1, \qquad \sin \omega \sinh r \text{ \small $\asymp$ } y , \end{align*} and hence \eqref{8eq: lower bound} by $ \sinh^2 r + \sin^2 \omega = \sqrt{\big(\sinh^2 r - \sin^2 \omega\big)^2 + 4 \sin^2 \omega \sinh^2 r } $. The proof is similar for the remaining two cases, when $ |x - 1| \Lt \delta $ or $y \Lt \delta$ (but not both). Finally, in the case when $ |x - 1| \Lt \delta $ and $y \Lt \delta$, we have $|\cos \omega \cosh r - 1| \Lt \delta$ and $|\sin \omega \sinh r | \Lt \delta$ (so the Jacobian of $f$ could be very small or vanish). Since $(\cosh r - \cos \omega )^2 = (\cos \omega \cosh r - 1)^2 + (\sin \omega \sinh r)^2 $ and $ \cosh^2 r - \cos^2 \omega = \sin^2 \omega + \sinh^2 r$, it follows that the area of ${\mathrm {I}} (\delta, \rho; x + i y)$ is bounded by $O (\delta)$, and hence \eqref{8eq: bound area}. Moreover, (3) is also clear from these arguments. \end{proof} In practice, $z = \sqrt{n/m}$ ($m, n \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\}$). The simple lemma below will help us take care of the square root in the complex case, with (1)--(4) corresponding to \eqref{12eq: -}--\eqref{12eq: O+, 2} in \S \ref{sec: off, C}. \begin{lem}\label{lem: square root} Write $z = x+iy$ and $z^2 = x_2+iy_2$. Let $ y \Lt \rho$. {\rm(1)} If $ |x| \Lt \rho $, then $ |z^2 | \Lt \rho^2 $.
{\rm(2)} If $|x| \Gt \rho $, then $x_2 \asymp x^2$ and $y_2 \Lt \rho |x| $. {\rm(3)} If $ ||x|-1| \Lt \rho $, then $ |z^2-1| \Lt \rho $ and $|z^2-1|^2 \asymp (|x|-1)^2 + y^2$. {\rm(4)} If $1 - |x| \Gt \rho $, then $|x_2-1| \asymp 1-|x|$ and $y_2 \Lt \rho$. \end{lem} \subsection{Estimates for the $\Psi$-integrals} \label{sec: estimates for Phi} Let \begin{align}\label{8eq: rho and delta} \rho = M^{\vepsilon}/ M, \qquad \delta = T^{\vepsilon} / {\varDelta} . \end{align} It is then clear that the $\Psi$-integrals $\Psi^{\pm} (x, \varDelta)$ and $\Psi (z, \varDelta)$ defined in Lemma \ref{lem: Hankel} are trivially bounded by the area of ${\mathrm {I}}^{\pm} ( \delta , \rho; x) $ and ${\mathrm {I}} ( \delta , \rho; z) $, respectively. A direct consequence of Lemmas \ref{lem: I-}, \ref{lem: I+}, and \ref{lem: I, C} is the following proposition. For brevity, we shall allow $M^{\vepsilon}$ to absorb absolute constants---for example, the factor $2$ in $|x| < 1+ 2 \rho$ and the implied constant in $|y| \Lt \rho$ ($\rho = M^{\vepsilon}/ M$). \begin{prop}\label{lem: bounds for Phi, R} Let $ \Psi^{\pm} (x, \varDelta) $ and $\Psi (z, \varDelta)$ be as in {\rm\eqref{10eq: Phi+ (x), R}}, {\rm\eqref{10eq: Phi- (x), R}} and {\rm\eqref{10eq: Phi (x), C}}. {\rm(1)} $\Psi^{-} (x, \varDelta)$ or $\Psi^{+} (x, \varDelta)$ is negligibly small unless $x < M^{\vepsilon}/ M$ or $|x-1| < M^{\vepsilon}/ M^2 $ respectively, in which case \begin{align} \Psi^{-} (x, \varDelta) \Lt \frac {T^{\vepsilon}} {\varDelta }, \qquad \Psi^{+} (x, \varDelta) \Lt \frac {T^{\vepsilon}} {\varDelta \sqrt{|x-1|}} . \end{align} {\rm(2)} $\Psi (x+i y, \varDelta) $ is negligibly small unless $ |x| < 1 + M^{\vepsilon}/M $ and $|y| < M^{\vepsilon}/M$, in which case \begin{align} \Psi (x + iy , \varDelta) \Lt \frac {T^{\vepsilon}} {\varDelta^2 \sqrt{(|x|-1)^2 + y^2} } .
\end{align} \end{prop} Finally, by recourse to partial integration for the $r$-integral, we prove that $ \Psi^{+} (1, \varDelta)$ and $\Psi (\pm 1, \varDelta)$ are negligibly small for $\varDelta \leqslant T^{2-\vepsilon}$. \begin{prop}\label{prop: small Phi(1)} Let $ \Psi^{+} (x, \varDelta) $ and $\Psi (z, \varDelta)$ be defined as in {\rm\eqref{10eq: Phi+ (x), R}} and {\rm\eqref{10eq: Phi (x), C}}. {\rm(1)} We have $\Psi^{+} (1, \varDelta) = O_{A, \vepsilon} (T^{-A})$ if $\varDelta \leqslant T^{2-\vepsilon}$. {\rm(2)} We have $ \Psi (\pm 1, \varDelta) = O_{A, \vepsilon} (T^{-A})$ if $\varDelta \leqslant T^{2-\vepsilon}$. \end{prop} \begin{proof} There are three steps. First, smoothly truncate the $r$-integral to the range $|r| \leqslant \rho$. Second, apply partial integration repeatedly. Fa\`a di Bruno's formula (\cite{Faa-di-Bruno}) and its extension are required to calculate the higher $r$-derivatives of $ \widehat{V} ( \varDelta (x - \cosh r ) )$ and $\widehat{V} ( \varDelta ( z - \mathrm{trh} (r, \omega) ) )$. Third, confine the integration to the region ${\mathrm {I}}^{+} (\delta, \rho; 1) $ or ${\mathrm {I}} (\delta, \rho; \pm 1) $, and use the bounds for $\sinh r$ or $\sin \omega$ in Lemma \ref{lem: I+} (3) or Lemma \ref{lem: I, C} (3), respectively. In this way, one obtains high powers of $ \varDelta \sqrt{\delta} / T = \sqrt{\varDelta} / T^{1-\vepsilon}$. The details are left to the reader. \end{proof} \subsection{Remarks on the Complex Case} The results in the complex case may be improved when $ x $ is close to $\pm 1$, in correspondence with the case of $\Psi^+ (x, \varDelta)$. However, the improvements will not be useful, since the worst-case scenario is when $x$ stays away from $0$ and $\pm 1$, say around $1/2$. See \S \ref{sec: off, C}.
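For the reader's convenience, we record the elementary identities for the trigonometric-hyperbolic function $\mathrm{trh} (r, \omega)$ in \eqref{8eq: trh function} that underlie the computations in this section; both follow from $\cos^2 \omega + \sin^2 \omega = \cosh^2 r - \sinh^2 r = 1$ by direct expansion:
\begin{align*}
| \mathrm{trh} (r, \omega) |^2 = \cos^2 \omega \cosh^2 r + \sin^2 \omega \sinh^2 r = \cos^2 \omega + \sinh^2 r = \cosh^2 r - \sin^2 \omega ,
\end{align*}
and
\begin{align*}
\cos^2 \omega \cosh^2 r = (1 - \sin^2 \omega) (1 + \sinh^2 r) = 1 + \sinh^2 r - \sin^2 \omega - \sin^2 \omega \sinh^2 r .
\end{align*}
The second identity is the one invoked after \eqref{8eq: cos cosh, 2}, while the first yields the two identities used in the last case of the proof of Lemma \ref{lem: I, C}.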
\section{Mellin Transform of Bessel Kernels}\label{sec: Mellin} In this section, we derive explicit formulae for the Mellin transforms of the Bessel kernels $B_{it} (x)$ and $B_{it} (z)$. To be precise, define \begin{align}\label{9eq: Mellin, R, 0} \widetilde{B}_{it} (s) = \int B_{it} (x ) |x|^{ s - 1} { \mathrm{d} x }, \end{align} or \begin{align}\label{9eq: Mellin, C, 0} \widetilde{B}_{it} (s) = \iint B_{it} (z ) |z|^{ 2 s - 2} { \mathrm{d} z }, \end{align} according as $F_{\infty}$ is real or complex. \begin{lem}\label{lem: Mellin} For $ |\mathrm{Im} (t)| < \mathrm{Re} (s) < \frac 1 4 $, the Mellin integral $\widetilde{B}_{it} (s)$ in {\rm\eqref{9eq: Mellin, R, 0}} or {\rm\eqref{9eq: Mellin, C, 0}} is absolutely convergent, and \begin{align}\label{9eq: Mellin=gamma} \widetilde{B}_{it} (s) = \frac{\gamma (s, t)}{\gamma (1-s, t)}, \end{align} with $\gamma (s, t)$ defined in {\rm\eqref{4eq: defn of gamma (s, f)}}. \end{lem} \begin{proof} For $ |\mathrm{Im} (t)| < \frac 1 4 $, we have the crude estimates \begin{align*} B_{it} (x) \hskip -1 pt \Lt_{t, \vepsilon} \hskip -2 pt \min \bigg\{ \hskip -1 pt \frac 1 {|x|^{ |\mathrm{Im} (t)| + \vepsilon}}, \frac 1 {\sqrt[4]{|x|}} \hskip -1 pt \bigg\}, \quad B_{it} (z) \hskip -1 pt \Lt_{t, \vepsilon} \hskip -2 pt \min \bigg\{ \hskip -1 pt \frac 1 {|z|^{ |\mathrm{Im} (2t)| + \vepsilon}}, \frac 1 {\sqrt{|z| }} \hskip -1 pt \bigg\} , \end{align*} so the convergence of the integrals is clear.
For the real case, by \cite[\S 7.7.3 (19), (27)]{ET-II}, along with Euler's reflection formula, we have \begin{align*} \int_0^{\infty} J_{\mu} (4 \pi x) x^{\rho-1} \mathrm{d} x = \frac { 1 } { (2\pi)^{\rho+1} } \sin \bigg( \frac { \pi (\rho - \mu) } 2 \bigg) \Gamma \bigg(\frac { \rho + \mu } 2 \bigg) \Gamma \bigg(\frac { \rho - \mu } 2 \bigg) , \end{align*} for $- \mathrm{Re} (\mu) < \mathrm{Re} (\rho) < \tfrac 1 2$, and \begin{align*} \int_0^{\infty} K_{\mu} (4 \pi x) x^{\rho-1} \mathrm{d} x = \frac { 1 } { 4 (2\pi)^{\rho} } \Gamma \bigg(\frac { \rho + \mu } 2 \bigg) \Gamma \bigg(\frac { \rho - \mu } 2 \bigg), \end{align*} for $|\mathrm{Re} (\mu)| < \mathrm{Re} (\rho )$. For the complex case, we have \begin{equation*} \int_{0}^{2 \pi} \int_0^\infty \boldsymbol J_{ \mu } ( x e^{i\phi} ) x^{2 \rho - 1} \mathrm{d} x \hskip 0.5 pt \mathrm{d} \phi = \frac { \cos (\pi \mu) - \cos (\pi \rho) } { (2 \pi)^{ 2 \rho + 2 } } \Gamma \bigg(\frac { \rho + \mu } 2 \bigg)^2 \Gamma \bigg(\frac { \rho - \mu } 2 \bigg) ^2 , \end{equation*} for $ |\mathrm{Re} (\mu)| < \mathrm{Re} (\rho) < \frac 1 2 $, with \begin{equation*} \boldsymbol{J}_{ \mu } (z) = \frac {1} {\sin (\pi \mu)} \big( J_{-\mu } (4 \pi z) J_{-\mu } (4 \pi \widebar z) - J_{ \mu } (4 \pi z) J_{ \mu } (4 \pi \widebar z) \big) . \end{equation*} This is a simple consequence of Theorem 1.1 and Proposition 3.2 in \cite{Qi-BE}, specialized to the case $d = 0$ and $y = 0$. Note that Gauss' hypergeometric function is equal to $1$ at the origin. In view of Definition \ref{defn: Bessel kernel}, one derives \begin{align*} \int B_{it} (x ) |x|^{ s - 1} { \mathrm{d} x } & = \frac { 2 \left ( \cos (\pi it) + \cos (\pi s) \right ) } { (2\pi )^{2s} } \Gamma ( s + it ) \Gamma ( s - it ) , \\ \iint B_{it} (z ) |z|^{ 2 s - 2} { \mathrm{d} z } & = \frac { 2 \left ( \cos (2\pi it) - \cos (2\pi s) \right ) } { (2 \pi )^{ 4 s } } \Gamma ( s + it ) ^2 \Gamma ( s - it )^2.
\end{align*} Then \eqref{9eq: Mellin=gamma} readily follows from Euler's reflection formula and Legendre's duplication formula (the latter is needed only for the real case). \end{proof} \begin{rem} The formula {\rm\eqref{9eq: Mellin=gamma}} can also be interpreted from the viewpoint of representation theory for local functional equations. See {\rm\cite[\S 17]{Qi-Bessel}}. \end{rem} { \large \part{The Twisted First and Second Moments}} \section{Setup: Application of the Kuznetsov Formula} Now we turn to the investigation of the twisted first and second moments: \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} } \hskip -1pt k ( t_f ) \lambda_f ( \mathfrak{m} ) L \big( \tfrac 1 2 , f \big)^q \end{align} for $q = 1$ or $2$, and the weight function $ k (t) $ defined as in \eqref{1eq: defn of kq} or \eqref{5eq: defn k(nu)}. In the sequel, we shall always let $\mathfrak{m} = m \mathfrak{D}$. By the Approximate Functional Equations \eqref{5eq: AFE, 1} and \eqref{5eq: AFE, 2}, we infer that \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = 2 \sum_{\mathfrak{n} \hskip 0.5 pt \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { \tau (\mathfrak{n})^{q-1} } { \sqrt{\RN ( \mathfrak{n} )} } \sideset{}{^h}\sum_{f \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}B} \hskip 0.5pt $}}} } \hskip -1pt k ( t_f ) \lambda_f ( \mathfrak{m} ) \lambda_f (\mathfrak{n} ) V_q ( \RN ( \mathfrak{n} \mathfrak{D}^{-q} ); t_f ) .
\end{align} In view of \eqref{1eq: derivatives for V(y, t), 1} in Lemma \ref{lem: afq} (1), at the cost of a negligible error term, we may truncate the summations over $\mathfrak{n}$ to the range $ \RN (\mathfrak{n}) \leqslant T^{ q N + \vepsilon}$. Next, we use the expressions of $V_q \left ( \RN ( \mathfrak{n} \mathfrak{D}^{-q} ); t \right )$ as in \eqref{1eq: approx of V1} and \eqref{1eq: approx of V2} in Lemma \ref{lem: afq} (1) with $U = \log T$ (so that the errors therein are negligible), and then apply the Kuznetsov trace formula in Proposition \ref{prop: Kuznetsov} inside the $v$-integral with the test function \begin{align} \label{9eq: h (t; v)} h^q (t; v) = k (t) G (v, t)^q ; \end{align} see \eqref{5eq: defn h(nu)}. Moreover, for the diagonal and the Eisenstein contributions, up to negligible errors, we revert the $v$-integral to $V_q ( \RN ( \mathfrak{n} \mathfrak{D}^{-q} ); t )$, and for the latter convert the $\mathfrak{n}$-sum to $ \left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^{2q} $ by the Approximate Functional Equations \eqref{5eq: AFE zeta, 1} and \eqref{5eq: AFE zeta, 2}.
It follows that \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_q (\mathfrak{m} ) - \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_q (\mathfrak{m} ) + \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_q (\mathfrak{m} ) + O \left ( T^{-A} \right ), \end{align} where $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_q (\mathfrak{m} ) $ is the diagonal term (it exists when $\RN (\mathfrak{m}) \leqslant T^{q N + \vepsilon}$) \begin{align}\label{9eq: Dp(m)} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = 2 c_1 \frac { \tau (\mathfrak{m})^{q-1}} {\sqrt{\RN(\mathfrak{m})}}\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_q (\mathfrak{m} ), \end{align} with \begin{align}\label{9eq: Hp(m)} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_q (\mathfrak{m} ) = \int_{-\infty}^{\infty} k (t) V_q ( \RN ( \mathfrak{m} \mathfrak{D}^{-q} ); t ) \mathrm{d} \hskip 0.5 pt \mu (t) , \end{align} $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_q (\mathfrak{m} )$ is the Eisenstein (continuous spectrum) term \begin{align}\label{9eq: Ep(m)} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_q (\mathfrak{m} ) = \frac {1} {4\pi} c_0 \int_{-\infty}^{\infty} k (t) \tau_{it} (\mathfrak{m} ) \omega (t) \left| \zeta_F \big(\tfrac 1 2 + it \big) \right|^{2q} \hskip 0.5 pt \mathrm{d} t, \end{align} and $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt 
$}}}_q (\mathfrak{m} )$ is the off-diagonal term \begin{align}\label{9eq: O1(m)} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_1 (\mathfrak{m}) & = \frac {2 } { \pi i } \frac {c_2} {\sqrt{|d_F|}} \int_{ \hskip 0.5 pt \vepsilon - i \log T}^{\vepsilon + i \log T} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_1 (\mathfrak{m}; v ) \frac {\mathrm{d} v} {v}, \\ \label{9eq: O2(m)}\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_2 (\mathfrak{m}) & = \frac {2 } { \pi i } \frac {c_2} {\sqrt{|d_F|}} \int_{ \hskip 0.5 pt \vepsilon - i \log T}^{\vepsilon + i \log T} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_2 (\mathfrak{m}; v ) \zeta (1+2v) |d_F|^{ v} \frac {\mathrm{d} v} {v}, \end{align} with \begin{align}\label{8eq: O (m), 0} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_q (\mathfrak{m}; v ) = \sum_{(c) \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { 1 } { |\RN (c )| } \mathop{ \sum_{n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' } }_{ |\RN(n )| \hskip 0.5 pt \leqslant T^{q N +\vepsilon} } \frac { \tau (n \mathfrak{D})^{q-1} } { {|\RN ( n )|^{1/2+v}} } {S ( m, n ; c ) } \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_q \bigg( \frac { mn } { c^2 } ; v \bigg), \end{align} and \begin{align}\label{9eq: Hp(x)} \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_q (x; v) = \int_{-\infty}^{\infty} h^q (t; v) B_{i t} (x ) \mathrm{d} \hskip 0.5 pt \mu (t) . 
\end{align} Note that the factor $2$ arises in \eqref{9eq: O1(m)} and \eqref{9eq: O2(m)} when we combine the $\epsilon$- and $\mathfrak{n}$-sums into an $n$-sum, and fold the $c$-sum into a $(c)$-sum over ideals. \section{The Twisted First Moment} \label{sec: 1st moment} Let us first treat the diagonal term $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_1 (\mathfrak{m}) $ as defined by \eqref{9eq: Dp(m)} and \eqref{9eq: Hp(m)}. It contains the main term for $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_1 (\mathfrak{m} )$. Recall the definitions of $\mathrm{d} \mu (t)$ and $ k (t) $ given by \eqref{1eq: defn Plancherel measure} and \eqref{5eq: defn k(nu)}. Now we apply \eqref{4eq: asymptotic for V1} in Lemma \ref{lem: afq} (1) to analyze $ \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_1 (\mathfrak{m})$. The main term yields \begin{align*} \int_{-\infty}^{\infty} k (t) \mathrm{d} \hskip 0.5 pt \mu (t) = 2 \sqrt{\pi} M T^{N} \big( 1 + O \big( (M/T)^2 \big) \big), \end{align*} which can be easily seen by truncation near $t = \pm T$ and the change of variable $t \ra M t \pm T$. The error-term contribution is bounded by $ ( \RN (\mathfrak{m}) / T^N )^{A }$ and hence negligibly small if $\RN (\mathfrak{m}) \leqslant T^{N-\vepsilon}$ and $A $ is large in terms of $\vepsilon$. We conclude that \begin{align}\label{10eq: diagonal, D1} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_1 (\mathfrak{m} ) = 4 \sqrt{\pi} c_1 \frac { M T^N } {\sqrt{\RN(\mathfrak{m})}} \big( 1 + O_{\vepsilon} \big( (M/T)^2 \big) \big) . 
\end{align} For the Eisenstein term $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_1 (\mathfrak{m})$, on inserting \eqref{2eq: subconvex} and \eqref{3eq: bound for omega(t)} into \eqref{9eq: Ep(m)} and estimating the integral trivially, we obtain \begin{align}\label{10eq: bound for E1(m)} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_1 (\mathfrak{m}) = O_{\vepsilon} \big( M T^{2 N \theta + \vepsilon} \big) . \end{align} However, \eqref{10eq: bound for E1(m)} may be improved to \begin{align}\label{10eq: bound for E1(m), 2} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_1 (\mathfrak{m}) = O_{\vepsilon} ( M T^{ \vepsilon} ), \end{align} if $M \geqslant T^{\frac {1273} {4053}+\vepsilon}$ for $F = {\mathbb {Q}}$ or $M \geqslant T^{\frac 7 8 + \vepsilon } $ for $F = {\mathbb {Q}} (\sqrt{d_F})$. For this, use the estimate for the second moment of $\zeta \big(\frac 1 2 + it \big)$ on short intervals in \cite[Theorem 3]{BW-Riemann-2} or the asymptotic formula for the second moment of $\zeta_F \big(\frac 1 2 + it \big)$ in \cite{Muller-Dedekind-Quadratic}. Finally, we consider the off-diagonal term $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_1 (\mathfrak{m} )$ given by \eqref{9eq: O1(m)}, \eqref{8eq: O (m), 0}, and \eqref{9eq: Hp(x)}.
Since $|\RN( m )| \hskip 0.5 pt \leqslant T^{N - \vepsilon}$ and $|\RN(n )| \hskip 0.5 pt \leqslant T^{N +\vepsilon}$, one may adjust $\vepsilon$ so that $ \left|m n/c^2 \right| \Lt T^2$, and Lemma \ref{lem: H(x), |z|>1} (3)--(6) implies that $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_1 ( { mn } / { c^2 } ; v )$, $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_1 (\mathfrak{m}; v )$, and hence $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_1 (\mathfrak{m} )$ are negligibly small. We remark that Weil's bound for $S (m, n; c)$ (one could use $O \big(\sqrt{\RN(c\mathfrak{m})}\tau (c) \big)$) is needed to ensure that the $(c)$-sum is convergent. In conclusion, the asymptotic formula \eqref{10eq: 1st moment} in Theorem \ref{thm: moment} is established by the foregoing arguments. \section{The Twisted Second Moment}\label{sec: 2nd moment} This section is devoted to the proof of the asymptotic formula \eqref{10eq: 2nd moment} for $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 (\mathfrak{m})$ in Theorem \ref{thm: moment}. The analysis of $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_2 (\mathfrak{m})$, albeit slightly more involved, is similar to that of $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_1 (\mathfrak{m})$.
By \eqref{4eq: asymptotic for V2} and \eqref{4eq: asymp of psi} in Lemma \ref{lem: afq} (2), $\text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_2 (\mathfrak{m})$ is equal to \begin{align*} \int_{-\infty}^{\infty} k (t) \Big( \gamma_{1} \Big( N \log {\textstyle \sqrt{\tfrac 1 4 + t^2}} - \log {\sqrt{\RN(\mathfrak{m})}} \Big) + \gamma_0 ' \Big) \mathrm{d} \hskip 0.5 pt \mu (t) + O _{\vepsilon} \big(M T^{N-2} \big), \end{align*} with $\gamma_0'$ defined as in Theorem \ref{thm: moment}. Consequently, \begin{align}\label{10eq: diagonal, D2} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_2 (\mathfrak{m} ) = 4 \sqrt{\pi} c_1 \frac { \tau (\mathfrak{m}) M T^N } {\sqrt{\RN(\mathfrak{m})}} \bigg( \gamma_{1} \log \frac {T^N} {\sqrt{\RN(\mathfrak{m})}} + \gamma_0 ' + O_{\vepsilon} \big( (M/T)^2 \log T \big) \bigg) . \end{align} It should be stressed that $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_2 (\mathfrak{m}) $ only contributes {\it half} the main term for $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 (\mathfrak{m} )$. The trivial estimate for $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (\mathfrak{m})$ obtained from \eqref{2eq: subconvex} and \eqref{3eq: bound for omega(t)} is as follows: \begin{align}\label{10eq: bound for E2(m)} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (\mathfrak{m}) = O_{\vepsilon} \big( M T^{4 N \theta + \vepsilon} \big) . 
\end{align} By \eqref{2eq: subconvex} and \eqref{10eq: bound for E1(m), 2}, we improve \eqref{10eq: bound for E2(m)} into \begin{align}\label{10eq: bound for E2(m), 2} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (\mathfrak{m}) = O_{\vepsilon} \big( M T^{2 N \theta + \vepsilon} \big) , \end{align} for $M \geqslant T^{\frac {1273} {4053}+\vepsilon}$ or $M \geqslant T^{\frac 7 8 + \vepsilon } $ according as $F = {\mathbb {Q}}$ or $ {\mathbb {Q}}(\sqrt{d_F})$. Further, if $F = {\mathbb {Q}}$, then \eqref{10eq: bound for E2(m), 2} may be improved into \begin{align}\label{10eq: bound for E2(m), Q} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (m) = O_{\vepsilon} \big( M T^{ \vepsilon} \big) \end{align} for $M \geqslant T^{\frac 2 3 + \vepsilon } $, by the estimate for the fourth moment of $\zeta \big(\frac 1 2 + it \big)$ on short intervals in \cite[\S 6]{Ivic-Riemann-4} (see also \cite{IM-4th-Moment}). As for the fourth moment of $ \zeta_F \big(\frac 1 2 + it \big) $ for $F = {\mathbb {Q}} (\sqrt{d_F})$, an explicit spectral formula is known over the Gaussian field in \cite{B-Mo}, but currently we do not know how it can be used to obtain a non-trivial estimate (an asymptotic formula is beyond our reach, as $ | \zeta_F (s) |^4$ is of degree $8$). Now we turn to the study of the off-diagonal term $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_2 (\mathfrak{m} ) $ (see \eqref{9eq: O2(m)}--\eqref{9eq: Hp(x)}). First of all, by Lemma \ref{lem: H(x), |z|>1} (3)--(6), one may impose the condition $ \big| m n / c^2 \big| \Gt T^{2 } $ on the summations, at the cost of a negligible error. 
Let $\sum_{R} \varvv (|x| / R ) $ be a dyadic partition of unity for $F^{\times}_{\infty}$, with $R = 2^{j / 2}$ and $ \varvv (r) \in C_c^{\infty} [1, 2]$. It may be exploited to partition the sum in \eqref{8eq: O (m), 0} into $O ( \log T )$ many sums of the form \begin{align}\label{12eq: a-sum} \begin{aligned} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}_2 (\mathfrak{m}; R ; v ) = \frac 1 {R^{N/2+N v}} & \mathop{\sum_{(c) \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} }}_{|c| \Lt \sqrt{|m| R}/ T} \frac { 1 } { |\RN (c )| } \\ & \cdot \sum_{n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\} } { \tau ( n \mathfrak{D}) } S ( m, n ; c ) \varww \left ( \frac n R , \frac {m R} {c^2} ; v \right ) , \end{aligned} \end{align} for $R \leqslant T^{2 +\vepsilon}$, where \begin{align*} \varww \left ( x , \varLambda ; v \right ) = {\varww (|x|; v) \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_2 ( \varLambda x ; v ) } , \qquad \varww (r; v) = \frac{\varvv (r) } {r^{N/2 + N v}}. \end{align*} Clearly, the weight function $\varww \left ( x , \varLambda; v \right )$ is of the form in \eqref{11eq: defn of w (x, Lmabda), R} or \eqref{11eq: defn of w (z, Lmabda), C}. Note that $ \varww^{(j)} (r; v) \Lt_{j } (\log T)^{j} $ holds uniformly for $v \in [\vepsilon - i \log T, \vepsilon + i \log T]$. \subsection{Application of the Vorono\"i Summation} Next, in \eqref{12eq: a-sum} we open the Kloosterman sum $S ( m, n ; c )$ (as in \eqref{2eq: defn Kloosterman KS}) and apply the Vorono\"i summation formula (see \eqref{app: Voronoi, tau} and \eqref{3eq: limit for 0}) to the $n$-sum. 
It is clear that the exponential sum over $(\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} / c \hskip 1pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}})^{\times}$ turns into the Ramanujan sum $S (m-n, 0; c)$. For the entire zero-frequency contribution, we reverse the procedures above---truncation and partition of unity---and shift the integral contour for $v$ to $\mathrm{Re} (v) = \frac 1 3$, costing only negligible errors. We obtain \begin{align}\label{12eq: 0-frequency} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m}) = \frac {2 } { \pi i } c_2 \int_{-\infty}^{\infty} k (t) \int_{ (\frac 1 3) } G (v, t)^2 \zeta (1+2v) \widetilde Z (m; v, t) \frac {\mathrm{d} v} {v} \mathrm{d} \hskip 0.5 pt \mu (t) , \end{align} where \begin{align}\label{12eq: Z} \widetilde Z (m; v, t ) \hskip -1 pt = \lim_{\delta \ra 0} \sum_{\pm} {\zeta_F (1\pm 2 \delta) } | d_F | ^{v \pm \delta} \hskip -2 pt \sum_{(c) \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \hskip -2 pt \frac { S (m , 0; c) } { |\RN (c )|^{2 \pm 2\delta} } \widetilde {B}_{ it} \big(m/c^2; \tfrac 1 2 -v \pm \delta \big), \end{align} and \begin{align}\label{12eq: Mellin} \widetilde {B}_{it} (y; s) = \int_{F^{\times}_{\infty}} B_{it} (x y ) \|x\|_{\infty}^{ s - 1} { \mathrm{d} x }. \end{align} Note that we can effectively truncate the $t$-integral near $\pm T$ and the $v$-integral at height $\log T$, that the $(c)$-sum and the $x$-integral are absolutely convergent (see the proof of Lemma \ref{lem: Mellin}), and that the expression in the limit is analytic in the $\delta$-variable. At any rate, it is legitimate to arrange the order of sums and integrals in the above manner. 
The next lemma shows that $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m} )$ contributes the other {\it half} of the main term for $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 (\mathfrak{m})$. Compare \eqref{10eq: diagonal, D2}. \begin{lem}\label{lem: zero} We have \begin{align}\label{10eq: zero} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m} ) = 4 \sqrt{\pi} c_1 \frac { \tau (\mathfrak{m}) M T^N } {\sqrt{\RN(\mathfrak{m})}} \bigg( \gamma_{1} \log \frac {T^N} {\sqrt{\RN(\mathfrak{m})}} + \gamma_0 ' + O_{\vepsilon} \big( (M/T)^2 \log T \big) \bigg) . \end{align} \end{lem} For the dual sum, it remains to prove the following estimates. For brevity, we have suppressed $v$ from our notation. \begin{lem}\label{lem; bound for dual} Let $R \leqslant T^{2+\vepsilon}$. Let $\varww (r) \in C_c^{\infty} [1, 2] $ satisfy $ \varww^{(j)} (r) \Lt_{j} (\log T)^{j} $. 
Define \begin{align}\label{12eq: a-sum, 2} \begin{aligned} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}_2 (\mathfrak{m}; R ) = \hskip -3 pt \mathop{\sum_{(c) \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} }}_{|c| \Lt \sqrt{|m| R}/ T} \hskip -3 pt \frac { 1 } { |\RN (c )|^2 } \hskip -2 pt \sum_{n \hskip 0.5 pt \in \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\} } \hskip -2 pt { \tau ( n \mathfrak{D}) } S ( m - n , 0 ; c ) \widetilde \varww_0 \left ( \frac {n R} {c^2} , \frac {m R} {c^2} \right ) , \end{aligned} \end{align} with \begin{align} \varww \left ( x , \varLambda \right ) = {\varww (|x| ) \text{\raisebox{- 1 \depth}{\scalebox{1.06}{$ \text{\usefont{U}{dutchcal}{m}{n}H} $}}}_2 ( \varLambda x ) }, \qquad \widetilde {\varww}_0 (y , \varLambda ) = \int_{F_{\infty}} {\varww} (x , \varLambda ) B_0 (xy) \mathrm{d} x . \end{align} Then \begin{align}\label{12eq: bound for O, Q} \sqrt{R }\, \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}_2 (m ; R ) \Lt \left\{ \begin{aligned} & T^{-A}, & & \text{ if } 0 < m \leqslant M^{2-\vepsilon}, \\ & \frac {\sqrt{ m } T^{1/2+\vepsilon}} {\sqrt{M}}, & & \text{ if } M^{2-\vepsilon} < m \leqslant T^{2-\vepsilon}, \end{aligned}\right. \end{align} for $F = {\mathbb {Q}}$, and \begin{align}\label{12eq: bound for O, C} R \, \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}_2 (\mathfrak{m}; R ) \Lt {\sqrt{\RN(\mathfrak{m}) } T^{1+\vepsilon}} + \frac {M^2 T^{1+\vepsilon}} {\sqrt{\RN(\mathfrak{m})}}, \end{align} for $F = {\mathbb {Q}} (\sqrt{d_F})$. 
\end{lem} The asymptotic formula in \eqref{10eq: 2nd moment} now follows by combining \eqref{10eq: diagonal, D2}--\eqref{10eq: bound for E2(m), Q}, \eqref{10eq: zero}, \eqref{12eq: bound for O, Q}, and \eqref{12eq: bound for O, C}. \subsection{Proof of Lemma \ref{lem: zero}} We start with cleaning up the expression of $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m}) $ in \eqref{12eq: 0-frequency}--\eqref{12eq: Mellin}. By the change of variable $x \ra x/y$ in \eqref{12eq: Mellin}, \begin{align*} \widetilde {B}_{it} (y; s) = \|y\|_{\infty}^{-s} \widetilde {B}_{it} ( s), \end{align*} where $\widetilde {B}_{it} ( s) = \widetilde {B}_{it} ( 1; s)$. Then the factor $ |\RN (c)|^{1 - 2v \pm 2\delta} / |\RN(m)|^{\frac 1 2 -v \pm \delta} $ is extracted from $\widetilde {B}_{ it} \big(m/c^2; \tfrac 1 2 -v \pm \delta \big)$. The resulting $(c)$-sum may be evaluated by the Ramanujan identity: \begin{align}\label{12eq: sum of (c)} \sum_{(c) \subset \hskip 0.5 pt \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}} } \frac { S (m , 0; c) } { |\RN (c )|^{1 + 2 v} } = \frac { \tau_v (\mathfrak{m}) } { \RN(\mathfrak{m})^v \zeta (1+2v) }, \end{align} due to \eqref{2eq: Ramanujan} and $\mathfrak{m} = m \mathfrak{D}$ (so that $\RN (\mathfrak{m}) = |d_F \RN (m) |$). The two $ \zeta (1+2v) $ in \eqref{12eq: 0-frequency} and \eqref{12eq: sum of (c)} cancel, so there is now only a simple pole at $v = 0$. By Lemma \ref{lem: Mellin}, the Mellin integral \begin{align} \widetilde {B}_{ it} \big( \tfrac 1 2 -v \pm \delta \big) = \frac { \gamma (\tfrac 1 2 -v \pm \delta, t) } {\gamma (\tfrac 1 2 + v \mp \delta, t)} . \end{align} Moreover, $c_2 = c_1 / 2 \sqrt{|d_F|}$ (see \eqref{3eq: constants, Q} and \eqref{3eq: constants, C}). 
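As a consistency check, over $F = {\mathbb {Q}}$ the identity \eqref{12eq: sum of (c)} follows from the classical evaluation $S (m, 0; c) = \sum_{d | (m, c)} d \, \mu (c/d)$ of the Ramanujan sum (assuming the normalization $\tau_v (m) = \sum_{ab = |m|} (a/b)^v$, so that $\sum_{d | m} d^{-2v} = \tau_v (m) / |m|^{v}$): \begin{align*} \sum_{c = 1}^{\infty} \frac {S (m, 0; c)} {c^{1+2v}} = \sum_{d \hskip 0.5 pt | m} d \sum_{c' = 1}^{\infty} \frac {\mu (c')} {(c' d)^{1+2v}} = \frac {1} {\zeta (1+2v)} \sum_{d \hskip 0.5 pt | m} \frac {1} {d^{2v}} = \frac {\tau_v (m)} {|m|^{v} \, \zeta (1+2v)} . \end{align*}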
Thus $ \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m}) $ is simplified into \begin{align}\label{12eq: 0-frequency, 2} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m}) = c_1 \frac {\tau (\mathfrak{m})} {\sqrt{\RN(\mathfrak{m})}} \int_{-\infty}^{\infty} k (t) \cdot \frac { 1 } { \pi i } \int_{ (\frac 1 3) } G (v, t)^2 Z(\mathfrak{m}; v, t) \frac {\mathrm{d} v} {v} \mathrm{d} \hskip 0.5 pt \mu (t) , \end{align} where \begin{align}\label{12eq: Z '} Z(\mathfrak{m}; v, t) = \lim_{\delta \ra 0} \sum_{\pm} {\zeta_F (1\pm 2 \delta) } \frac{| d_F | ^{ \pm 2 \delta}} {\RN (\mathfrak{m})^{\pm \delta}} \frac { \gamma (\tfrac 1 2 -v \pm \delta, t) } {\gamma (\tfrac 1 2 + v \mp \delta, t)} . \end{align} In view of \eqref{4eq: def G} and \eqref{12eq: Z '}, it is clear that $G (v, t)^2 Z(\mathfrak{m}; v, t) $ is {\it even} in the $v$-variable, and therefore the $v$-integral in \eqref{12eq: 0-frequency, 2} is equal to exactly its value at $v = 0$ (to see this, apply $v \ra - v$ to half of the integral). Consequently, \begin{align}\label{12eq: 0-frequency, 3} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}Z}\hskip 1pt $}}} (\mathfrak{m}) = c_1 \frac {\tau (\mathfrak{m})} {\sqrt{\RN(\mathfrak{m})}} \int_{-\infty}^{\infty} k (t) Z(\mathfrak{m}; 0, t) \mathrm{d} \hskip 0.5 pt \mu (t). \end{align} We have \begin{align}\label{12eq: Z(0)} Z (\mathfrak{m}; 0, t) = 2 \gamma_0 + \gamma_1 \left ( \log \frac {|d_F|^2} {\RN(\mathfrak{m})} + 2 \psi (t) \right ) , \end{align} for $\gamma_0$, $\gamma_1$, and $\psi (t)$ as in \eqref{2eq: zeta (s), s=1} and \eqref{4eq: defn of psi}. 
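To spell out the contour argument, write $f (v) = G (v, t)^2 Z (\mathfrak{m}; v, t)$, which is even in $v$; assuming that $f$ is holomorphic in the strip $|\mathrm{Re} (v)| \leqslant \frac 1 3$, so that the simple pole at $v = 0$ is the only singularity of the integrand, the substitution $v \ra - v$ in half of the integral yields \begin{align*} \frac 1 {\pi i} \int_{(\frac 1 3)} f (v) \frac {\mathrm{d} v} {v} = \frac 1 {2 \pi i} \bigg( \int_{(\frac 1 3)} - \int_{(- \frac 1 3)} \bigg) f (v) \frac {\mathrm{d} v} {v} = \mathop{\mathrm{Res}}_{v = 0} \frac {f (v)} {v} = f (0) . \end{align*}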
By \eqref{4eq: asymp of psi}, \eqref{12eq: 0-frequency, 3}, and \eqref{12eq: Z(0)}, we can conclude the proof with the same arguments as for the diagonal term $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}D} \hskip 0.5pt $}}}_2 (\mathfrak{m})$. \subsection{Proof of Lemma \ref{lem; bound for dual} for $F = {\mathbb {Q}}$} In this subsection, let $c$, $d$, $m$, and $n$ be positive integers. It follows from $m \leqslant T^{2-\vepsilon}$ and $c^2 \Lt m R / T^2$ that $ {n R} / {c^2} \geqslant T^{\vepsilon} $, so Lemma \ref{lem: Hankel} (1) yields \begin{align*} \widetilde \varww_0 \left ( \pm \frac {n R} {c^2} , \frac {m R} {c^2} \right ) = \frac{ \sqrt{c} MT^{1+\vepsilon} } {\sqrt[4]{n R}} \Psi^{\pm} \big( \sqrt{ n/m } , \sqrt{ m R } / c \big) + O \big(T^{-A} \big) . \end{align*} Recall that we defined $ \Psi^{+} ( x , \varDelta ) = 0 $ unless $ \varDelta > M^{1-\vepsilon} / T $ (due to Lemma \ref{lem: x > MT}). Moreover, by $m \hskip 0.5 pt \leqslant T^{2 - \vepsilon}$ and $R \hskip 0.5 pt \leqslant T^{2 +\vepsilon}$, one may adjust $\vepsilon$ so that $ {\sqrt{m R}} / {c} \leqslant T^{2-\vepsilon}$. 
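For completeness, these two elementary reductions may be spelled out as follows: since $c^2 \Lt m R / T^2$ and $n \geqslant 1$, \begin{align*} \frac {n R} {c^2} \Gt \frac {n T^2} {m} \geqslant \frac {T^2} {m} \geqslant T^{\vepsilon}, \qquad \frac {\sqrt{m R}} {c} \leqslant \sqrt{m R} \leqslant \sqrt{T^{2 - \vepsilon} \cdot T^{2 + \vepsilon'}} = T^{2 - (\vepsilon - \vepsilon')/2}, \end{align*} where $\vepsilon' < \vepsilon$ denotes the exponent in $R \leqslant T^{2 + \vepsilon'}$, so the last bound gives the exponent $2 - \vepsilon$ after a suitable readjustment of $\vepsilon$.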
By the formula for the Ramanujan sum $S (m \pm n, 0; c)$ in \eqref{2eq: Ramanujan} and the estimates for the $\Psi^{\pm}$-integrals in \S \ref{sec: estimates for Phi}, in particular Proposition \ref{lem: bounds for Phi, R} (1) and \ref{prop: small Phi(1)} (1), we infer that, up to a negligibly small error, $\sqrt{R} \, \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}_2 (m; R )$ is bounded by the sum of \begin{align}\label{12eq: sum -} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-} (m ) =\frac {M T^{1+\vepsilon} } { \sqrt{m} \sqrt[4]{R} }\sum_{0< n \hskip 0.5 pt < m /M^{2-\vepsilon} } \frac {\tau(n)} { \sqrt[4]{n} } \sum_{d | m+n } \sqrt{d} \sum_{ c d \hskip 0.5 pt \Lt {\sqrt{mR}}/ T} \frac{|\mu (c )| } {\sqrt c } , \end{align} and \begin{align}\label{12eq: sum +} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m ) = \frac {M T^{1+\vepsilon} } { \sqrt[4]{m R} } \sum_{ 0 < |l| < m /M^{2-\vepsilon} } \frac {\tau(m+l)} {\sqrt{| l |}} \sum_{d | l } \sqrt{d} \sum_{ c d \hskip 0.5 pt < {\sqrt{mR} } / { M^{1-\vepsilon} T}} \frac{|\mu (c )| } {\sqrt{c} } , \end{align} with $l = n-m$. A critical point is that the diagonal term with $n = m$ ($l=0$) is removed from the second sum $\widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m)$ because it is negligibly small by Proposition \ref{prop: small Phi(1)} (1). 
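In \eqref{12eq: sum -}, for instance, the inner $(d, c)$-sums may be evaluated by $\sum_{c \leqslant X} |\mu (c)| / \sqrt{c} \Lt \sqrt{X}$: \begin{align*} \sum_{d | m+n} \sqrt{d} \sum_{ c d \hskip 0.5 pt \Lt {\sqrt{mR}}/ T} \frac{|\mu (c )| } {\sqrt c } \Lt \sum_{d | m+n} \sqrt{d} \cdot \frac {\sqrt[4]{m R}} {\sqrt{T d}} = \frac {\sqrt[4]{m R}} {\sqrt{T}} \, \tau (m+n) , \end{align*} which, inserted back into \eqref{12eq: sum -}, leads to the bound $\Lt M T^{1/2+\vepsilon} m^{-1/4} \sum_{n} \tau (n) \tau (m+n) / \sqrt[4]{n}$ used below.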
Finally, if $ m \leqslant M^{2-\vepsilon}$ then $\widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-} (m )$ and $\widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m )$ vanish, since the $n$-sum and the $l$-sum are empty; otherwise we have the estimates \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-} (m ) \Lt \frac {M T^{1/2+\vepsilon} } { \sqrt[4]{m} } \sum_{0 < n \hskip 0.5 pt < m /M^{2-\vepsilon} } \frac {\tau(n) \tau (m+n)} { \sqrt[4]{n} } \Lt \frac {\sqrt{m } T^{1/2+\vepsilon} } {\sqrt{M}} , \end{align*} \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m ) \Lt {\sqrt{M} T^{1/2+\vepsilon} } \sum_{ 0 < |l| < m /M^{2-\vepsilon} } \frac { \tau(l) \tau(m+l)} {\sqrt{| l |}} \Lt \frac { \sqrt{m} T^{1/2+\vepsilon}} {\sqrt{M} }, \end{align*} as desired. \subsection{Proof of Lemma \ref{lem; bound for dual} for $F = {\mathbb {Q}} (\sqrt{d_F})$}\label{sec: off, C} For the case $F = {\mathbb {Q}} (\sqrt{d_F})$ we use Lemma \ref{lem: Hankel} (2), Propositions \ref{lem: bounds for Phi, R} (2) and \ref{prop: small Phi(1)} (2). Let $z = x+i y = \sqrt{n/m}$ ($m, n \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \smallsetminus \{0\}$). 
We partition the region $ |x| < 1 + \rho $ and $|y| < \rho$ in Proposition \ref{lem: bounds for Phi, R} (2) ($\rho = M^{\vepsilon}/M$) according to the $x$-coordinate as follows: \begin{align*} |x| \Lt \rho, \qquad \delta < |x| \leqslant 2 \delta , \qquad | |x| - 1| \Lt \rho, \qquad \delta < 1 - |x| \leqslant 2 \delta, \end{align*} for dyadic $\delta$ of the form $2^{-j}$ ($j = 2, 3, ...$) with $\rho \Lt \delta < 1/2$. In view of Lemma \ref{lem: square root}, the problem is reduced to proving that the following four sums have bound as in \eqref{12eq: bound for O, C}: \begin{align}\label{12eq: -} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-} (m ) & = \frac {M T^{2+\vepsilon} } { |m| \sqrt{ R} } \sum_{ 0 < |n| < |m| /M^{2-\vepsilon} } \frac {\tau(n \mathfrak{D})} { {\sqrt{|n|}}} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (m-n, m) , \\ \label{12eq: O-, 2} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-}_{\delta} (m ) & = \frac {M T^{2+\vepsilon} } { |m| \delta \sqrt{ |m| R} } \mathop{\sum_{ |\mathrm{Re}(n/m)| \asymp \delta^2 }}_{ |\mathrm{Im}(n/m)| < \delta / M^{1-\vepsilon} } {\tau(n \mathfrak{D})} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (m-n, m) , \\ \label{12eq: O+} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m ) & = \frac {M T^{2+\vepsilon} } { \sqrt{|m| R} } \sum_{ 0 < |l| < |m| /M^{1-\vepsilon} } \frac {\tau((m+l) \mathfrak{D} )} { {| l |}} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (l, m) , \\ \label{12eq: O+, 2} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+}_{\delta} (m ) & = \frac {M 
T^{2+\vepsilon} } { |m| \delta \sqrt{ |m| R} } \mathop{\sum_{ |\mathrm{Re}(l/m)| \asymp \delta }}_{ |\mathrm{Im}(l/m)| < M^{\vepsilon} / M } {\tau((m+l) \mathfrak{D})} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (l, m) , \end{align} where \begin{align}\label{12eq: Ramanujan} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (l, m) = \sum_{ \mathfrak{d} | l \mathfrak{D} } \sqrt{\RN (\mathfrak{d})} \sum_{ \RN (\mathfrak{c} \mathfrak{d}) \hskip 0.5 pt \Lt { {|m|R} } / { T^2 }} \frac{|\mu ( \mathfrak{c} )| } { \sqrt{\RN (\mathfrak{c})} } . \end{align} It is clear that \begin{align*} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}R} \hskip 0.5pt $}}} (l, m) = O \bigg( \frac {\tau (l \mathfrak{D}) \sqrt{|m| R} } {T} \bigg), \end{align*} therefore \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-} (m ) \Lt \frac {M T^{1+\vepsilon} } { \sqrt{|m|} } \sum_{ 0 < |n| < |m| /M^{2-\vepsilon} } \frac {\tau(n \mathfrak{D}) \tau((m-n) \mathfrak{D})} { {\sqrt{|n|}}} \Lt \frac {|m| T^{1+\vepsilon}} { M^2 } , \end{align*} \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{-}_{\delta} (m ) & \Lt \frac {M T^{1+\vepsilon} } { {|m|} \delta } \mathop{\sum_{ |\mathrm{Re}(n/m)| \asymp \delta^2 }}_{ |\mathrm{Im}(n/m)| < \delta / M^{1-\vepsilon} } {\tau(n \mathfrak{D}) \tau((m-n) \mathfrak{D}) } \\ & \Lt \frac {M T^{1+\vepsilon} } { {|m|} \delta } \mathop{\sum_{ |\mathrm{Re}(n/m)| \Lt \delta^2 }}_{ |\mathrm{Im}(n/m)| < \delta / M^{1-\vepsilon} } 1 , \end{align*} and similarly \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+} (m ) \Lt {M T^{1+\vepsilon} } \sum_{ 0 < |l| < |m| /M^{1-\vepsilon} } \frac 
{\tau (l \mathfrak{D}) \tau((m+l)\mathfrak{D})} { {| l |}} \Lt |m| T^{1+\vepsilon}, \end{align*} \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{+}_{\delta} (m ) & \Lt \frac {M T^{1+\vepsilon} } {|m| \delta } \mathop{\sum_{ |\mathrm{Re}(l/m)| \asymp \delta }}_{ |\mathrm{Im}(l/m)| < M^{\vepsilon} \hskip -1pt / M } {\tau (l \mathfrak{D}) \tau((m+l)\mathfrak{D})} \\ & \Lt \frac {M T^{1+\vepsilon} } {|m| \delta} \mathop{\sum_{ |\mathrm{Re}(l/m)| \Lt \delta }}_{ |\mathrm{Im}(l/m)| < M^{\vepsilon} \hskip -1pt / M } 1 . \end{align*} The final estimation for $\widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{\pm}_{\delta} (m )$ can be done by the next lemma. \begin{lem}\label{lem: count lattice points} Let $m \in \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}'$. For $ Q \Lt P $ define the rectangle $\RR (P, Q) = \big\{ x + iy : |x| < P, |y| < Q \big\}$. The number of points in $ m^{-1} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}' \cap \RR (P, Q) $ has bound $O \left ( (|m|P+1) (|m|Q+1) \right ) $. \end{lem} \begin{proof} Firstly, it is clear that $ m \cdot \RR (P, Q)$ is contained in a parallelogram of the form $\RR_a (|m|P, |m|Q ) = \big\{ x+iy : |x| \Lt |m| P, |y - a x| \Lt |m| Q \big\}$. Exchanging $ x \leftrightarrow y$ if necessary, one may assume that $|a| \leqslant 1$. Secondly, $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}'$ is contained in a certain rectangular lattice spanned by a real scalar and an imaginary scalar. By rescaling, it is reduced to counting the integral lattice points in $\RR_a (|m|P, |m|Q )$, which can be done very easily. 
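Explicitly, for each of the $O (|m| P + 1)$ admissible integers $x$ with $|x| \Lt |m| P$, the condition $|y - a x| \Lt |m| Q$ with $|a| \leqslant 1$ confines $y$ to an interval of length $O (|m| Q)$, which contains $O (|m| Q + 1)$ integers, so that \begin{align*} \# \big( \mathbb{Z}^2 \cap \RR_a (|m| P, |m| Q) \big) \Lt \sum_{|x| \hskip 0.5 pt \Lt \hskip 0.5 pt |m| P} \big( |m| Q + 1 \big) \Lt (|m| P + 1) (|m| Q + 1) . \end{align*}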
\end{proof} It follows from Lemma \ref{lem: count lattice points}, along with $M^{\vepsilon}/M \Lt \delta < 1/2$, that \begin{align*} \widetilde{\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}O} \hskip 0.5pt $}}}}{}^{\pm}_{\delta} (m ) \Lt |m| T^{1+\vepsilon} + \frac {M^2 T^{1+\vepsilon} } {|m|} . \end{align*} \section{Moments without Twist and Smooth Weight} In this section, we use the unsmoothing technique in \S \ref{sec: choice of h} to prove Corollary \ref{cor: unsmooth}. By the proof of Theorem \ref{thm: moment} in the previous sections, for $ T^{\vepsilon} \leqslant M \leqslant T^{1-\vepsilon} $ we have \begin{align}\label{13eq: M+E, 1} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_1 ( 1 ) + \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_1 (1) = 4 \sqrt{\pi} c_1 M T^{N} + O_{\vepsilon} \big( M^3 / T^{2-N} \big) , \end{align} and \begin{align}\label{13eq: M+E, 2.1} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 ( 1 ) + \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (1) = 8 \sqrt{\pi} c_1 M T ( \log T + \gamma_0 ' ) + O_{\vepsilon} \big( {M^3 } \log T / {T} \big), \end{align} if $F = {\mathbb {Q}}$, and \begin{align}\label{13eq: M+E, 2.2} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_2 ( 1 ) + \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_2 (1) = 8 \sqrt{\pi} c_1 M T^2 ( 2 \gamma_{1} \log T + \gamma_0 ' ) + O_{\vepsilon} \big( M^2 T^{1+\vepsilon} \big) , \end{align} if $F = {\mathbb {Q}} (\sqrt{d_F})$.\footnote{For the case $F = {\mathbb {Q}}$, the reader may compare our formulae with those in 
\cite[Proposition 1]{SH-Liu-Maass}.} It follows that \begin{align} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}M} \hskip 0.5pt $}}}_q ( 1 ) + \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_q (1) = O_{\vepsilon} \big(M T^{N + \vepsilon}\big) \end{align} for any $T^{\vepsilon} \leqslant M \leqslant T^{1-\vepsilon}$. It is known that $ L \big(\frac 1 2 , f\big) $ is non-negative by \cite{Guo-Positive}. Applying Lemmas \ref{lem: unsmooth, 1} and \ref{lem: unsmooth, 2} (with $\lambda = N+\vepsilon$ and $a_f = L \big(\frac 1 2 , f\big)^q $) and the averaging process to \eqref{13eq: M+E, 1}--\eqref{13eq: M+E, 2.2}, we infer that \begin{align*} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_1 (T, H) = 4 c_1 \int_{\,T-H}^{T+H} K^N \mathrm{d} K + O_{\vepsilon} \left ( M T^{N+\vepsilon} \right ), \end{align*} and \begin{align*} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_2 (T, H) = 8 c_1 \int_{\,T-H}^{T+H} K ( \log K + \gamma_0 ' ) \mathrm{d} K + O_{\vepsilon} \big( M T^{1+\vepsilon} \big) , \end{align*} if $F = {\mathbb {Q}}$, and \begin{align*} \text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_2 (T, H) = 8 c_1 \int_{\,T-H}^{T+H} K^2 ( 2 \gamma_{1} \log K + \gamma_0 ' ) \mathrm{d} K + O_{\vepsilon} \big(M T^{2+\vepsilon}\big), \end{align*} if $F = {\mathbb {Q}} (\sqrt{d_F})$. Then Corollary \ref{cor: unsmooth} follows on choosing $M = T^{\vepsilon}$. 
Finally, we remark that the arguments for $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}E} \hskip 0.5pt $}}}_q (\mathfrak{m})$ in \S \ref{sec: 1st moment} and \S\ref{sec: 2nd moment} may be easily employed here to show that, except when $ T^{ \frac 5 7} < H < T^{ \frac 7 8 + \vepsilon} $ for $q = 2$ and $F = {\mathbb {Q}} (\sqrt{d_F})$, the Eisenstein contribution in $\text{\raisebox{- 2 \depth}{\scalebox{1.1}{$ \text{\usefont{U}{BOONDOX-calo}{m}{n}N} \hskip 0.5pt $}}}_q (T, H)$ is $O \big(T^{N + \vepsilon} \big)$ so that it may be removed from the asymptotic formulae in Corollary \ref{cor: unsmooth}.
\section{Introduction} The phase diagram of QCD has been investigated in the finite chemical potential $(\mu)$ and finite temperature $(T)$ region. Recently, the existence of an inhomogeneous chiral phase has been newly suggested, and the properties of this phase have been actively discussed by analyses of chiral effective models \cite{nakano,nickel} or by the Schwinger-Dyson approach \cite{mueller} (for a review see Ref.\,\cite{review}). The observational possibility by lattice QCD has also been suggested \cite{kashiwa,yoshiike2}. In this phase, the quark condensate has a spatially modulating configuration, and such a modulating condensate resembles FFLO-type superconductivity \cite{ff,lo} or spin/charge density waves \cite{peierls,overhauser}. Among the possible configurations of the inhomogeneous quark condensate, we here consider the dual chiral density wave (DCDW) \cite{nakano}, specified by the form, $\Delta({\bf r}) \equiv \langle \bar{\psi} \psi \rangle + i \langle \bar{\psi} i\gamma^5 \tau_3 \psi \rangle = \Delta e^{i{\bf q}\cdot{\bf r}}$, within two-flavor QCD. The DCDW phase is favored compared to other configurations in the 1+1 dimensional system \cite{basar2} or in an external magnetic field $(B)$ \cite{nishiyama}. One may expect that the DCDW phase is realized in neutron stars, because it is suggested to emerge in the moderate density region by the analysis of the Nambu-Jona-Lasinio (NJL) model \cite{nakano}. Observationally, strong magnetic fields ($B>10^{12-15}$G) seem to exist in compact stars. The origin of such strong magnetic fields and the mechanism of their maintenance have not been fully understood yet, though some mechanisms have been proposed from the macroscopic viewpoint. It may be important and interesting to investigate the magnetic properties of quark matter in order to suggest a mechanism from the microscopic theory. 
\section{Thermodynamic potential with the weak magnetic field} We use the two-flavor NJL model with an external magnetic field $(B)$ in the chiral limit. Using the DCDW ansatz in the mean-field approximation, the Dirac Hamiltonian is obtained as, \begin{eqnarray} H = -i\vec{\alpha} \cdot {\bf D} - 2G\Delta\gamma^0 \left( \cos qz + i\gamma_5\tau_3\sin qz \right), \label{ham} \end{eqnarray} where $\vec{\alpha}\equiv\gamma_0\vec{\gamma}$, ${\bf D} = \nabla + iQ{\bf A}$ and $Q={\rm diag}(e_u,e_d)$ is the electric charge matrix in flavor space. In the following, $B$ is taken along the $z$ axis, ${\bf A}=(0,Bx,0)$. According to Ref.~\cite{frolov}, the energy spectrum consists of the Landau levels, \begin{eqnarray} E_{p_zn\zeta\epsilon}^{f} &=& \epsilon\sqrt {\left(\zeta\sqrt{p_z^2+m^2}+q/2 \right)^2 + 2|e_fB|n},~~~(n>0) \\ E_{p_z\epsilon} &=& \epsilon\sqrt{p_z^2+m^2}+q/2,~~~(n=0) \label{energy} \end{eqnarray} where $m\equiv-2G\Delta$ and $\zeta=\pm 1$ denotes the spin polarization. If $m\geq q/2$, $\epsilon=\pm 1$ corresponds to the positive (negative) energy state, that is, the (anti-)particle state. However, for the lowest Landau level (LLL) ($n=0$), the sign of $\epsilon$ no longer specifies the sign of the energy when $m\leq q/2$. It is worth mentioning that the spectrum of the LLL becomes asymmetric about zero, while that of the higher Landau levels (hLLs) ($n>0$) is always symmetric. Then the thermodynamic potential takes the form, $\Omega(\mu,T,B;m,q) = \frac{m^2}{4G} + N_c\sum_{f=u,d} \Omega_f$, where, \begin{eqnarray} &\Omega_f = -\frac{|e_fB|T}{4\pi} \int \frac{dp_z}{2\pi} \sum_k \bigg\{ \sum_{n,\zeta,\epsilon} {\rm ln}\left[\omega_k^2 + (E^{f}_{p_zn\zeta\epsilon} - \mu)^2\right] \nonumber \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~+ \sum_{\epsilon} {\rm ln}\left[\omega_k^2 + (E_{p_z\epsilon} - \mu)^2\right] \bigg\}, \end{eqnarray} with the Matsubara frequency, $\omega_k=(2k+1)\pi T$. 
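The (a)symmetry of the spectra can be checked directly from Eq.\,(\ref{energy}): for the hLLs the replacement $\epsilon \rightarrow -\epsilon$ reverses the sign of the energy, whereas for the LLL it does not, \begin{eqnarray} E_{p_zn\zeta,-\epsilon}^{f} = - E_{p_zn\zeta\epsilon}^{f}~~~(n>0), \qquad E_{p_z,-\epsilon} = - E_{p_z\epsilon} + q \neq - E_{p_z\epsilon}~~~(q \neq 0), \nonumber \end{eqnarray} because the $q/2$ shift sits outside the square root only for $n=0$.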
For the analysis of the response of quark matter in the DCDW phase to the weak external magnetic field, the thermodynamic potential is expanded in powers of $eB$, \begin{eqnarray} \Omega(\mu,T,B\,;m,q) =& \Omega^{(0)}(\mu,T\,;m,q) + eB\,\Omega^{(1)}(\mu,T\,;m,q) \nonumber \\ &+ (eB)^2\Omega^{(2)}(\mu,T\,;m,q) + \cdots. \label{thermo} \end{eqnarray} $\Omega^{(0)}$ corresponds to the thermodynamic potential without $B$ \cite{nakano}, \begin{eqnarray} \Omega^{(0)} &=& \frac{m^2}{4G} -2N_c \int \frac{d^3{\bf p}}{(2\pi)^3} \sum_{s=\pm1} \nonumber \\ &\times& \left\{ E^{(0)}_s + \frac{1}{\beta}\log\left[ e^{-\beta(E^{(0)}_s-\mu)} + 1 \right]\left[ e^{-\beta(E^{(0)}_s+\mu)} + 1 \right] \right\}, \label{ome0} \end{eqnarray} with, $E^{(0)}_s = \left(m^2 + {\bf p}^2 + q^2/4 + sq\sqrt{m^2 + p_z^2}\right)^{1/2}$. The vacuum part in $\Omega^{(0)}$ should be properly regularized because it includes a UV divergence. For the analysis of $\Omega^{(1)}$, the chiral anomaly must be considered with caution \cite{yoshiike}. According to Refs.~\cite{niemi}, an anomalous particle number is generally brought about by spectral asymmetry. Accordingly, an anomalous contribution emerges in the thermodynamic potential. In the DCDW phase, it gives rise to an anomalous particle number proportional to $B$ \cite{tatsumi}. Then $\Omega^{(1)}$ takes the form, \begin{eqnarray} \Omega^{(1)}&=&\frac{\mu N_c}{4\pi} \eta_H -\frac{N_c}{4\pi} \int \frac{dp_z}{2\pi} \sum_{\zeta=\pm1} \sum_{\tau=\pm1} \zeta\tau\left( \mu - \tau\omega_\zeta \right) \theta(\tau\omega_\zeta)\theta(\mu - \tau\omega_\zeta) \nonumber \\ &~& -\frac{N_cT}{4\pi} \int \frac{dp_z}{2\pi} \sum_{\zeta=\pm1}\sum_{\tau=\pm1} \zeta\tau\ln\left(1 + e^{-\beta|\omega_\zeta - \tau\mu|} \right), \label{ome1} \end{eqnarray} with $\omega_{\zeta} = \sqrt{p_z^2 + m^2} + \zeta q/2$; note that $\Omega^{(1)}$ is an odd function of $q$. 
The first term represents the anomalous contribution, derived solely from the LLL, while the second and third terms are interpreted as the contributions of valence quarks coming from all the Landau levels. Note that $\Omega^{(1)}$ does not diverge, even without any regularization. The anomalous contribution is caused by the spectral asymmetry, and the $\eta$-invariant, $\eta_H$, reads, \begin{eqnarray} \eta_H &\equiv& \lim_{s \to +0}\int \frac{dp_z}{2\pi} \sum_{\epsilon}|E_{p_z\epsilon}|^{-s}{\rm sign}(E_{p_z\epsilon}) \nonumber \\ &=& \left\{ \begin{array}{lc} -\frac{q}{\pi} & (m>q/2) \\ -\frac{q}{\pi} + \frac{2}{\pi}\sqrt{q^2/4 - m^2} & (m<q/2) \end{array} \right.. \end{eqnarray} The spectral asymmetry can be evaluated with a proper regularization that does not violate gauge invariance. Note that for $m>q/2$, this quantity reproduces the Wess-Zumino-Witten (WZW) term effectively derived from the chiral anomaly, argued in Ref.~\cite{son}. Although the WZW term does not depend on $m$, an $m$ dependence emerges in our case when $m$ becomes sufficiently small. Moreover, it vanishes in the limit $m \rightarrow 0$, as it should. Finally, $\Omega^{(2)}$ does not include the anomalous contribution and takes the form, \begin{eqnarray} \Omega^{(2)} &=& -\frac{5}{216\pi} \int \frac{dp_z}{2\pi} \sum_{\zeta=\pm1} \frac{1}{\omega_\zeta} \left[ \frac{1}{e^{\beta(\omega_\zeta + \mu)}+1} - \frac{1}{e^{\beta(\omega_\zeta - \mu)}+1} - 1\right]. \end{eqnarray} It still contains a UV divergence. The values of the order parameters, $m$ and $q$, are determined for each $\mu$, $T$, and $B$ so as to minimize the thermodynamic potential. They can be expanded in powers of $eB$, \begin{eqnarray} m(\mu,T,B) &=& m^{(0)}(\mu,T) + eB m^{(1)}(\mu,T) + (eB)^2 m^{(2)}(\mu,T), \\ q(\mu,T,B) &=& q^{(0)}(\mu,T) + eB q^{(1)}(\mu,T) + (eB)^2 q^{(2)}(\mu,T).
\end{eqnarray} The minimized thermodynamic potential is represented as, $\Omega^{\rm min}(\mu,T,B) \equiv \Omega(\mu,T,B;m,q)|_{m=m(\mu,T,B),q=q(\mu,T,B)}$. \section{Magnetic properties} \subsection{Spontaneous magnetization} Magnetization is defined as the first derivative of the thermodynamic potential with respect to the magnetic field. From the extremum conditions, $\partial \Omega / \partial m,q = 0$, the spontaneous magnetization ($M_0$) takes the form, \begin{eqnarray} M_0(\mu,T) \equiv -\frac{\partial \Omega^{\rm min}}{\partial B}\bigg|_{B\rightarrow0} = -e\Omega^{(1)}(\mu,T;m=m^{(0)}, q=q^{(0)}). \end{eqnarray} From Eq.\,(\ref{ome1}), $\Omega^{(1)}$ has a finite value only when $m^{(0)}$ or $q^{(0)}$ does not vanish. In other words, $M_0$ is nonvanishing only in the DCDW phase. Furthermore, it includes the contributions of not only the anomaly but also valence quarks. Fig.\,\ref{spo} shows the $\mu$ dependence of $M_0$ at $T=0, 30$MeV. As temperature increases, the range of $\mu$ gets narrower and the magnitude of $M_0$ decreases. Assuming a sphere of quark matter with constant density ($\mu=340$MeV) and uniform magnetization, the magnitude of the magnetic field produced by $M_0$ is estimated to be $B_{\rm mag} = \frac{8\pi}{3}M_0 \sim 10^{16}$G on the surface. \begin{figure*}[ht] \centering \begin{tabular}{c} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=6cm]{parammag0tem.eps} (a) $T=0$ \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=6cm]{ordermag005.eps} (b) $T=30{\rm MeV}$ \end{center} \end{minipage} \end{tabular} \caption{The chemical potential dependence of the order parameters and spontaneous magnetization.
The DCDW phase exists in the $\mu$ region bounded by the first-order and the second-order phase transition points.} \label{spo} \end{figure*} \subsection{Magnetic susceptibility} \begin{figure}[hb] \centering \includegraphics[width=6cm]{sus.eps} \caption{The chemical potential dependence of the normalized magnetic susceptibility and spontaneous magnetization at zero temperature.} \label{sus} \end{figure} Magnetic susceptibility ($\chi$) is defined as the first derivative of the magnetization with respect to the magnetic field. From the stationary conditions, $\chi$ takes the form, \begin{eqnarray} \chi(\mu,T) &\equiv& - \frac{\partial^2 \Omega^{\rm min}}{\partial B^2}\bigg|_{B\rightarrow0} \nonumber \\ &=& - e^2\Big( 2\Omega^{(2)}(\mu,T;m,q) + m^{(1)}\partial_m \Omega^{(1)}(\mu,T;m,q) \nonumber \\ &~&~~~~~~~~~~~~~~~~~~~~+ q^{(1)} \partial_q \Omega^{(1)}(\mu,T;m,q) \Big)\Big|_{m=m^{(0)},q=q^{(0)}}, \end{eqnarray} which contains the UV divergence coming from $\Omega^{(2)}$. Therefore it should be normalized to satisfy the condition, $\chi(\mu=0,T=0)=0$. Thus the normalized $\chi$ is defined by subtracting the vacuum value, $\chi^{\rm nom}(\mu,T) \equiv \chi(\mu,T) - \chi(\mu=0,T=0)$. Fig.\,\ref{sus} shows the $\mu$ dependence of $\chi^{\rm nom}$, which exhibits some singular behavior. The cusp reflects the singularity of the thermodynamic potential at the point where valence quarks begin to appear. Quark matter undergoes a first-order phase transition at the critical point, where the order parameters are discontinuous (Fig.\,\ref{spo}) and $\chi^{\rm nom}$ is also discontinuous. However, $\chi^{\rm nom}$ does not indicate any singularity at the second-order critical point (Fig.\,\ref{spo}), whereas, in the Ising model, the magnetic susceptibility diverges at the second-order critical point. Therefore such behavior is unique to the DCDW phase transition. \section{Summary} We have investigated the magnetic properties of quark matter in the inhomogeneous chiral phase.
Spontaneous magnetization, which receives contributions from both the anomaly and valence quarks, emerges there. The magnitude of the magnetic field produced by it is estimated to be comparable with that observed in magnetars. The magnetic susceptibility does not diverge at the second-order phase transition point. This behavior differs from that in familiar ferromagnetic systems and is unique to the inhomogeneous chiral phase transition. One of the authors (RY) thanks C. Providencia for her hospitality during the stay in Coimbra.
\section{Introduction} Vehicle-sharing is an emerging smart mobility service leveraging connectivity and modern technology to enable users to share their vehicles with others. Users can book in advance and access a vehicle, naturally replacing traditional key distribution. With the use of in-vehicle telematics and omnipresent \glspl{PD}, such as smartphones, vehicle owners can distribute (temporary) digital vehicle-keys, a.k.a. \glspl{AT}, to other users, enabling them to access a vehicle~\cite{DBLP:conf/isc2/SymeonidisMP16}. To support sharing, \glspl{VSS} can effectively facilitate dynamic key distribution at a global scale. They can enable the occasional use of multiple types of vehicles (e.g., cars, motorbikes, scooters), catering to diverse user needs and preferences~\cite{millard2005car,le2014carsharing,FERRERO2018501}. Beyond user convenience and increased usability, by providing better utilization of available vehicles, \glspl{VSS} contribute to sustainable smart cities. This, in turn, leads to positive effects such as a reduction of emissions~\cite{DBLP:journals/tits/MartinS11}, a decrease of city congestion~\cite{shaheen2013}, and more economical use of parking space~\cite{DBLP:journals/computer/NaphadeBHPM11}. \glspl{VSS} are gaining increased popularity: the worldwide number of users of vehicle-sharing services rose by 170\% from 2012 to 2014 (to a total of 5 million users)~\cite{acea}, and was projected to reach a total of 26 million users by 2021~\cite{bert2016s}.~\footnote{Note that these predictions were made in pre-COVID-19 times.} The Car Connectivity Consortium~\cite{carconnectivity}, an organization of automotive manufacturers and smartphone companies, is developing an open standard for ``smartphone-to-car'' services, where a smartphone equipped with digital keys can be used to access vehicles.
The SECREDAS EU project~\cite{secredas} proposes a reference architecture for vehicular sharing~\cite{secredas:cybersecurity}, highlighting high-level security and privacy challenges that should be taken into consideration. The automotive supplier Valeo~\cite{valeo}, in collaboration with Orange~\cite{orange}, proposes an NFC-based solution for vehicular sharing~\cite{nfcw}. Volvo~\cite{volvo}, BMW~\cite{bmw}, Toyota~\cite{toyota}, Apple~\cite{apple:patent}, and several other companies have been investing in vehicle-sharing services as well. For instance, Apple announced the ``CarKey'' API in the first quarter of 2020, allowing users to (un)lock and start a vehicle using an iPhone or Apple Watch. ``CarKey'' can also be shared with other people, such as family members, enabling vehicle-sharing~\cite{apple:patent,apple:carkey}. Despite these advantages, a major concern is \gls{VSS} system security. An adversary may eavesdrop and attempt to extract the key of a vehicle stored in untrusted devices, tamper with the vehicle-sharing details, generate a rogue \gls{AT} to access a vehicle, or maliciously deny having accessed one. These significant concerns require \glspl{VSS} to deploy security mechanisms to ensure that vehicle-sharing details cannot be tampered with by unauthorized entities, digital vehicle-keys are stored securely, and attempts to use rogue \glspl{AT} are blocked. Furthermore, it is necessary to address dispute resolution, key revocation (esp. when a user device is stolen)~\cite{gov_uk_reducing_mobile_phone_theft}, and connectivity issues~\cite{TheGuardian:vehicle-offline-connectivity,DBLP:conf/codaspy/DmitrienkoP17}. For dispute resolution, \gls{VSS} users must be accountable while their private information is protected.
Current proposals to address these security issues for \gls{VSS} rely on a centralized \gls{SP}~\cite{DBLP:conf/codaspy/BusoldTWDSSS13,DBLP:conf/rfidsec/KasperKOZP13,DBLP:journals/access/WeiYWWD17,DBLP:conf/vehits/GrozaAM17,DBLP:journals/access/GrozaABMG20}, which collects all \gls{VSS} user data for every vehicle sharing-access provision, while having access to the master key of each vehicle~\cite{DBLP:conf/codaspy/DmitrienkoP17}. However, \gls{VSS} user privacy is equally important, especially with \glspl{VSS} collecting rich personal and potentially sensitive user and vehicle data~\cite{DBLP:conf/itsc/RemeliLAB19}. An adversary may eavesdrop on data exchanges to infer sensitive information about \gls{VSS} users. For example, Enev et al.~\cite{DBLP:journals/popets/EnevTKK16} demonstrated that, with 87\% to 99\% accuracy, drivers could be identified by analyzing their 15-minute-long driving patterns. An adversary may link vehicle-sharing requests, by the same user or for the same vehicle, to deduce vehicle usage patterns and preferences; e.g., sharing patterns, such as time of use, pickup location, duration of use, and person(s) a vehicle is shared with~\cite{DBLP:journals/popets/EnevTKK16}. Furthermore, the adversary could infer sensitive information about a user's health status by identifying vehicles for special-need passengers, or about their race and religious beliefs~\cite{reddit_ny_cabs}. Such user profiling would be a direct violation of the \gls{GDPR}~\cite{gdpr}. Thus, any \gls{VSS} needs to preserve the unlinkability of user and vehicle requests and keep the user and vehicle identities concealed. Furthermore, vehicle-sharing operations such as \gls{AT} generation, update, or revocation should be indistinguishable in a \gls{VSS}.
Towards addressing such privacy challenges, the state-of-the-art in \gls{VSS}, SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, leverages \gls{MPC} and focuses on privacy-preserving \gls{AT} provision, deploying multiple non-colluding servers for the generation and distribution of vehicle \glspl{AT}. In a real-world deployment, the number of vehicles per user available for sharing could range from a few for private individuals to thousands of vehicles for companies or (large) branches of companies~\cite{acea,statistics:num-of-cars-per-carsharing}, e.g., in car-rental scenarios~\cite{munzel2020explaining}. At the same time, the number of users registered with a \gls{VSS} can vary widely, including all types of users with access to sets of vehicles of varying size. Designing and deploying a \gls{VSS} that serves large numbers of users and large numbers of vehicles is far from straightforward. Security and privacy safeguards significantly affect the performance of a \gls{VSS}, especially so with large numbers of users and vehicles. SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, although efficient ($1.55$ seconds for access provision based on an owner-single-vehicle evaluation of the protocol), has not been tested in settings replicating real-world deployment: with a large number of vehicles per user. It is paramount to have secure and privacy-preserving \glspl{VSS} that are \emph{scalable}, that is, \glspl{VSS} that remain \textit{efficient} and capable of serving users effectively as the system dimensions (number of users, number of vehicles) grow. To the best of our knowledge, there is no \gls{VSS} solution in the literature that provides security and privacy guarantees while at the same time being efficient and scalable. This work fills this gap.
\subsubsection*{Contribution} In this work, we present HERMES\@, an efficient, scalable, secure, and privacy-enhancing system for vehicle sharing-access provision that supports dispute resolution while protecting user privacy. HERMES\@\ is an extension of SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, and fundamentally differs in certain design choices to make it scalable and more efficient. Specifically, the contributions of this work are: \begin{enumerate} \item \textit{Refined system design for improved security and privacy:} HERMES\@\ provides a comprehensive solution to vehicle sharing-access, mitigating security and privacy issues while considering untrusted \glspl{SP}. It deploys \acrfull{MPC} and several cryptographic primitives to ensure that \glspl{AT} are generated such that no \gls{VSS} entity other than users and vehicles learns the vehicle-sharing details. Vehicle secret keys remain concealed from the untrusted \gls{VSSP}, although they are used for \gls{AT} generation. With the combined use of a \gls{PL} and anonymous communication channels~\cite{torproject}, HERMES\@\ also ensures the unlinkability of any two user requests, the anonymity of users and vehicles, and the indistinguishability between the \gls{AT} generation, update, and revocation operations. It also supports dispute resolution without compromising user private information while keeping users accountable. \item \textit{Supporting efficiency and scalability:} We choose cryptographic primitives and underlying \gls{MPC} protocols to i) minimize the number of non-linear operations and the circuit depth of the \gls{MPC} protocols, and ii) enable parallelization of cryptographic evaluations over \gls{MPC}. For instance, one optimization of the \gls{MPC} consists of substituting the \gls{MAC} used in~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} by an Enc-then-MAC mode.
Performing \gls{MAC} operations directly on secretly shared data over \gls{MPC} is costly, as non-linear operations are the main constraint on the performance of \gls{MPC}. Instead, encrypting a message over \gls{MPC}, revealing the output, and applying the \gls{MAC} to the output results in a significantly faster solution. This enables HERMES\@\ to remain efficient with multiple vehicles per user, showing a significant performance gain over SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}. We use AES-CBC-MAC for the Boolean case and an HtMAC mode for the arithmetic case with the respective field. The latter allows parallelization, and the benchmark results in an efficient solution, as HtMAC requires fewer communication rounds. These improvements are tailored towards scalable \glspl{VSS}. \item \textit{Formal semantic security analysis:} We prove that HERMES\@\ is secure and meets its appropriate security and privacy requirements. We provide a detailed semantic security analysis, overall and per security and privacy requirement, extending the security proofs to cover the refined design and the changes of cryptographic primitives relative to SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}. \item \textit{Improved implementation and benchmarking, including a prototype OBU:} Unlike~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, we implement HERMES\@\ with the fully-fledged open-sourced \gls{MPC} framework MP-SPDZ \cite{DBLP:conf/ccs/Keller20}. The parties run an optimized virtual machine for the execution of the protocol. For comparison, we test and evaluate HERMES\@\ using two \gls{MPC} instantiations, for Boolean and arithmetic circuits. Our performance evaluation demonstrates that HERMES\@\ can be highly efficient even for users with thousands of vehicles, hence making it ready for real-world deployment.
Its significant improvement shows that it requires only \boldmath{$\approx 30.3$}~ms for an owner-single-vehicle \gls{AT} generation (\boldmath{$42$} times faster than~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}). Simultaneously, it can handle multiple \gls{AT} generations per second (\boldmath{$\approx 84$}~\glspl{AT}/s) for owner-multi-vehicle individuals and (branches of) rental companies, resulting in an efficient and scalable solution that is ready for real-world deployment. Furthermore, we implement the \gls{AT} verification on a prototype \gls{OBU}, demonstrating that HERMES\@\ is practical on the vehicle side too. \end{enumerate} The rest of the paper is organized as follows: Section~\ref{sec:system_model} provides the system model and preliminaries of HERMES\@. Section~\ref{sec:crypto_build_blocks} describes the cryptographic building blocks used in HERMES\@. Section~\ref{sec:system} describes the system in detail, and Section~\ref{sec:extended_analysis} provides the security and privacy analysis of HERMES\@. Section~\ref{sec:protocol_evaluation} evaluates its performance and complexity, and demonstrates its efficiency and scalability. Section~\ref{sec:related_work} gives an overview of the state-of-the-art related work. Section~\ref{sec:conclusion} concludes our work. \section{System, Adversarial Models and Requirements}\label{sec:system_model} We outline a system model of \glspl{VSS}, along with the adversarial model. We provide the functional, security, privacy, and performance requirements any secure and privacy-enhancing \gls{VSS} needs to satisfy. \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{% \includegraphics{system_model.pdf}} \caption{\acrfull{VSS} model. } \label{VSS} \end{figure} \subsection{System model} A \gls{VSS} comprises users, vehicles, vehicle manufacturers, and authorities, as illustrated in Fig.~\ref{VSS}.
We consider two types of users: \textit{owners} ($u_o$), individuals or vehicle rental companies willing to share (rent out) their vehicles, and \textit{consumers} ($u_c$), individuals using vehicles available for sharing; both use \acrfullpl{PD}, such as smartphones, to interact with \gls{VSS} entities and each other. The \gls{OBU} is a hardware/software component that enables vehicle connectivity~\cite{nexcom:vtc6201-ft,preserve,ivners}. It is equipped with a secure wireless interface, short-range (e.g., NFC or Bluetooth) or over the Internet (e.g., cellular data), for securely accessing the \textit{vehicle}. The \gls{VM} is responsible for managing the digital keys that enable access to each vehicle. These keys are used for enabling vehicle sharing-access in the \gls{VSS} as well. The \gls{VSSP} is a cloud infrastructure that facilitates the vehicle \gls{AT} generation, distribution, update, and revocation. It consists of \textit{servers} that collaboratively generate \glspl{AT} and publish them on the \gls{PL}, a secure public bulletin board~\cite{micali2016algorand}. Prior to each vehicle sharing-access session, the owner and consumer agree on the booking details. We denote by $BD^{u_o, u_c}$, $AT^{veh_{u_o}}$, and $K^{veh_{u_o}}$ the \acrfull{BD}, an \acrfull{AT} for a vehicle, and the vehicle secret key, respectively. \subsection{Assumptions} We assume the existence of secure and authenticated communication over all channels between entities in the \gls{VSS}, e.g., using SSL-TLS~\cite{rfc8446} or NFC. There is a \gls{PKI} in place (e.g.,~\cite{DBLP:journals/tits/KhodaeiJP18}), and each entity has a digital certificate and a corresponding private/public-key pair. The \gls{VSSP} servers are managed by organizations with conflicting interests, such as user unions, \glspl{VM}, and authorities. Thus, these are non-colluding organizations. The intra-\gls{VSSP} communication is assumed to take place over a network of up to 10~Gb/s.
The \gls{OBU} has an embedded \gls{HSM}~\cite{automotive_tpm} that enables a secure execution environment and key storage. Before each evaluation, the \gls{BD} are agreed upon by the owner and consumer. Both keep the \gls{BD} confidential against external parties. The \gls{BD} contain the owner, consumer, and vehicle identities and the location and time duration of the reservation. \subsection{Adversarial Model} The \gls{VSSP}, the \gls{PL}, and the \gls{VM} are passive adversaries, i.e., honest-but-curious in our case. They execute the protocol correctly, but they may attempt to deduce user private information. The owners can be passive adversaries, as they hold information about the booking, but they will not deviate from the protocol. Consumers and outsiders can be active adversaries aiming to illegally access a vehicle, alter the booking information, and hide incidents. Authorities are trusted entities only for specific transactions in case of disputes. The vehicle, specifically its \gls{OBU}, is trusted and tamper-evident, designed to resist accidental or deliberate physical destruction (i.e., it serves as an event data recorder equipped with software and hardware security mechanisms~\cite{automotive_tpm}). User \glspl{PD} are untrusted, as they can get stolen, broken, or lost. Relay attacks, which can be tackled with distance-bounding protocols~\cite{DBLP:conf/eurocrypt/BrandsC93}, are left out of the scope of this paper. \subsection{System Design Requirements} We detail the functional, security, privacy, and performance requirements that a \gls{VSS} should satisfy, denoted \emph{FR}, \emph{SR}, \emph{PR}, and \emph{ESR}, respectively. The list builds on the requirements specified in~\cite{DBLP:conf/isc2/SymeonidisMP16}, extending those of SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}.
\textit{Functional requirements:} \begin{itemize} \item \textit{FR\addtocounter{fr}{1}\arabic{fr} -- Offline vehicle access.} Vehicle access should be supported in locations with no (or limited) network connectivity. \item \textit{FR\addtocounter{fr}{1}\arabic{fr} -- \acrfull{AT} update and revocation by the owner $u_o$.} No-one except the owner, $u_o$, can initiate an \gls{AT} update or revocation. \end{itemize} \textit{Security requirements:} \begin{itemize} \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Confidentiality of \acrfull{BD}, $BD^{u_o, u_c}$.} No-one except the owner $u_o$, consumer $u_c$, and the shared vehicle $veh_{u_o}$ should access $BD^{u_o, u_c}$. \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Entity and data authenticity of $BD^{u_o, u_c}$ from the owner $u_o$.} The origin and integrity of the \gls{BD}, $BD^{u_o, u_c}$, by the owner $u_o$ should be verified by the shared vehicle $veh_{u_o}$. \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Confidentiality of $AT^{veh_{u_o}}$.} No-one except the consumer $u_c$ and the shared vehicle $veh_{u_o}$ should access $AT^{veh_{u_o}}$. \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Confidentiality of vehicle key, $K^{veh_{u_o}}$.} No-one except the \gls{VM} and the shared vehicle $veh_{u_o}$ should hold a copy of the vehicle's key $K^{veh_{u_o}}$. \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Backward and forward secrecy of $AT^{veh_{u_o}}$.} Compromise of a session key used to encrypt any $AT^{veh_{u_o}}$ should not compromise future and past \glspl{AT} published on the \gls{VSS}, e.g., on the \gls{PL}, for any honest consumer $u_c$. \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Non-repudiation of origin of $AT^{veh_{u_o}}$.} The owner $u_o$ should not be able to deny agreeing on the \gls{BD} terms, $BD^{u_o, u_c}$, or deny initiating the corresponding \gls{AT} generation operation for $AT^{veh_{u_o}}$.
\item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Non-repudiation of receipt of $AT^{veh_{u_o}}$ by the consumer $u_c$.} The consumer $u_c$ should not be able to deny receiving and using the $AT^{veh_{u_o}}$ to open and access the $veh_{u_o}$ (once it has done so). \item \textit{SR\addtocounter{sr}{1}\arabic{sr} -- Accountability of users (i.e., owner $u_o$ and consumer $u_c$).} On a request of law enforcement, the \gls{VSSP} should be able to supply authorities with the vehicle-access transaction details without compromising the privacy of other users. \end{itemize} \textit{Privacy requirements:} \begin{itemize} \item \textit{PR\addtocounter{pr}{1}\arabic{pr} -- Unlinkability of (any two) requests of any consumer, $u_c$, and vehicle, $veh_{u_o}$(s).} No-one except the owner $u_o$, the consumer $u_c$, and the shared vehicle $veh_{u_o}$ should be able to link two booking requests of any consumer $u_c$ for any shared vehicle $veh_{u_o}$, or link their identities, i.e., $ID^{u_c}$ and $ID^{veh_{u_o}}$. \item \textit{PR\addtocounter{pr}{1}\arabic{pr} -- Anonymity of any consumer, $u_c$, and vehicle, $veh_{u_o}$.} No-one except the owner $u_o$, the consumer $u_c$, and the shared vehicle $veh_{u_o}$ should learn the identity of $u_c$ and $veh_{u_o}$. \item \textit{PR\addtocounter{pr}{1}\arabic{pr} -- Indistinguishability of $AT^{veh_{u_o}}$ operations.} No-one except the owner $u_o$, the consumer $u_c$, and the vehicle $veh_{u_o}$ should be able to distinguish between the generation, update, and revocation operations for the \gls{AT}, $AT^{veh_{u_o}}$. \end{itemize} \textit{Performance requirement:} \begin{itemize} \item \textit{ESR\addtocounter{psr}{1}\arabic{psr} -- Efficiency and scalability in a real-world deployment.} The \gls{VSS} should remain capable of efficiently and effectively servicing users as their numbers and the numbers of vehicles per user increase to levels required for real-world deployment.
\end{itemize} \subsubsection{Multiparty Computation} MPC allows a set of parties to compute a function over their inputs without revealing them. To evaluate a function on secret inputs using \gls{MPC}, one needs to unroll the function into a series of additions and multiplications in a field. Following the seminal papers of Yao for the two-party case~\cite{DBLP:conf/focs/Yao86} and of Goldreich, Micali, and Wigderson for the multi-party setting~\cite{micali1987play}, secure \gls{MPC} has gained much traction in the past years, with many open-source frameworks~\cite{hastings2019sok}. Our algorithms use building blocks whose instantiation depends on the protocol type. However, they can be treated generically. This is also called an arithmetic black-box functionality~\cite{damgaard2003universally}. The functionality we mainly use consists of: \begin{itemize} \item \textit{Secret sharing function:} $[x] \leftarrow \mathsf{share}(x)$ is a function that takes $x$ as input and outputs $[x]$ in secret-shared form to all parties. The underlying secret sharing scheme is described in Araki et al.~\cite{DBLP:conf/ccs/ArakiFLNO16}. \item \textit{Shares reconstruction:} $x \leftarrow \mathsf{open}([x])$, which takes a secret-shared value $[x]$ and opens it, making $x$ known to all parties. \item \textit{Equality check:} $[z] \leftarrow ([x] \stackrel{?}{=} [y])$ outputs a secret bit $[z]$, where $z \in \{0, 1\}$. If $x$ is equal to $y$, then $z \leftarrow 1$; otherwise, $z \leftarrow 0$. Note that for the large-field case there is a statistical security parameter $\mathsf{sec}$, whereas for the $\mathbb{F}_2$ case the comparison is done with perfect security (i.e., no $\mathsf{sec}$ parameter). The equality operator is implemented using the latest protocols of Escudero et al.~\cite{DBLP:conf/crypto/0001GKRS20}. \item \textit{Encryption:} $c \leftarrow \mathsf{E}([k], [m])$ is an encryption function that takes as input a secret-shared key $[k]$ and a vector of $128$-bit blocks $[m]$.
For the $\mathbb{F}_2$ case, $\mathsf{E}$ is implemented using AES in CTR mode. Concretely, the AES circuit description is the one from SCALE-MAMBA~\cite{aly2020scale}, which has $6400$ AND gates. For the $\mathbb{F}_p$ case, MiMC is used as a \gls{PRF} in counter mode, as presented in~\cite{rotaru2017modes}, to take advantage of \gls{PRF} invocations done in parallel. \item \textit{Tag generation:} $t \leftarrow \mathsf{mac}([k], [m])$ is a tag generation function for a secret-shared key $[k]$ and message $[m]$. For the case when inputs are in a large field, we will not compute the \gls{MAC} as above, but rather as $\mathsf{mac}([k], \mathsf{E}([k'], [m]))$. The reason is that, according to~\cite{rotaru2017modes}, we can obtain a more efficient cryptographic \gls{MAC} in MPC by first computing $\mathsf{E}([k'], [m])$ in parallel with a secret-shared key $[k']$, opening the result, and evaluating the \gls{MAC} function in the clear. Their optimizations hold only for arithmetic circuits with HtMAC over a large field~\cite{chida2018fast}, although they could likely be extended to Boolean circuits as well. In the Boolean case, the $\mathsf{mac}$ function is implemented as CBC-MAC-AES. Note that for the $\mathbb{F}_2$ case there are more efficient ways to do this, but we keep CBC-MAC as a comparison baseline to SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}. \end{itemize} \section{Cryptographic Building Blocks}\label{sec:crypto_build_blocks} \begin{figure*}[ht] \centering \resizebox{\textwidth}{!}{% \includegraphics{detailed_system_model.pdf} } \caption{HERMES\@\ high-level overview. Numbers correspond to the steps outlined in the text of Section~\ref{sec:system}. Figures \ref{fig:step1}, \ref{fig:step2}, \ref{fig:step3} and \ref{fig:step4} describe Steps~1, 2, 3 and 4 in more detail.} \label{fig:vss_overview} \end{figure*} \subsubsection{Cryptographic Primitives} HERMES\@\ uses the cryptographic building blocks described below.
For each of the building blocks, we provide the concrete instantiations used in our proof-of-concept implementation, detailed in Section~\ref{sec:protocol_evaluation}. \begin{itemize} \item \textit{Signature scheme:} $\sigma \leftarrow \mathsf{sign}(Sk,m)$ and $\mathsf{true}/\mathsf{false} \leftarrow \mathsf{verify}(Pk, m,\sigma)$ are public-key operations for signing and verification, respectively. These can be implemented using RSA, as defined in the PKCS $\#1$ v2.0 specification~\cite{rfc_2437}. \item \textit{Key derivation function:} $K \leftarrow \mathsf{kdf}(K, counter)$ is a key derivation function using a master key and a counter as inputs. It can be based on a \gls{PRF} and implemented using CTR mode with AES~\cite{barker2012nist}.~\footnote{In our case, the message input is small, i.e., $\ll 2^{64}$ blocks for AES in CTR, and the generation is performed such that side-channel attacks are not a concern~\cite{DBLP:conf/sp/CohneyKPGHRY20}.} \item \textit{Public key encryption/decryption:} $c \leftarrow \mathsf{enc}(Pk,m)$ and $m \leftarrow \mathsf{dec}(Sk,c)$ are encryption and decryption functions based on public-key primitives. These can be implemented using RSA, as defined in the RSA-KEM specifications~\cite{rfc_5990}. \item \textit{Symmetric key encryption/decryption:} $c \leftarrow \mathsf{E}(K,m)$ and $m \leftarrow \mathsf{D}(K,c)$ are encryption and decryption functions based on symmetric-key primitives. These can be implemented using AES in CTR mode. \item \textit{Cryptographic hash:} $z \leftarrow \mathsf{hash}(m)$ is a message digest function. This can be SHA-2 or SHA-3. \item \textit{Message Authentication Code:} $t \leftarrow \mathsf{mac}(k, m)$ is a cryptographic \gls{MAC} that outputs an authentication tag, $t$, given a message $m$ and a key $k$. These can be implemented using CBC-MAC-AES or HtMAC-MiMC.
\end{itemize} Furthermore, we use $z \leftarrow \mathsf{query}(x,y)$ to denote the retrieval of the $x$th value from the $y$th database $DB$ (to be defined in Sect.~\ref{sec:system}), and $z \leftarrow \mathsf{query\_an}(y)$ to denote the retrieval of the $y$th value from the \gls{PL} through an anonymous communication channel such as Tor~\cite{torproject}, aiming to anonymously retrieve a published record (e.g., \gls{AT}) submitted using the $\mathsf{publish}(y)$ function. \input{3.1.multiparty-intro} \section{HERMES\@}\label{sec:system} In this section, we present HERMES\@\ in detail. We provide the complete system description: its entities, the functional and cryptographic operations performed, and the messages exchanged (see Fig.~\ref{fig:prot} in Appendix~\ref{appendix:protocol_complete}). Prior to explaining HERMES\@\ in detail, we provide a brief overview as an introduction. \subsection{Overview of HERMES\@} For simplicity of presentation and without loss of generality, we consider a single owner, a single consumer, and a set of shared vehicles. There are two prerequisite steps: \textit{vehicle key distribution (Step~A)} and establishing the details for the \textit{vehicle booking (Step~B)}. In a nutshell, as vehicle owners register their vehicles, the \gls{VSSP}, using the owner identity, retrieves the vehicle identity and the corresponding key from \gls{VM} in Step~A. Note that the \gls{VM}, a trusted \gls{SP} for \gls{VSS}, holds all the secret keys of vehicles. Both the identity and the vehicle key are transferred from \gls{VM} to \gls{VSSP} in a secret-shared form~\cite{DBLP:conf/ccs/ArakiFLNO16}, which is indistinguishable from randomness~\cite{DBLP:journals/cacm/Shamir79}. Thus, there is nothing the \gls{VSSP} can deduce from the vehicle identity and the corresponding digital master key. For each initialization of HERMES\@, the \gls{BD} is specified between the owner and the consumer, tailored to each vehicle-sharing agreement.
During \textit{vehicle booking} in Step~B, the owner and consumer specify the identity of the vehicle from the pool of vehicles, the duration of the reservation, the access rights, and the location.~\footnote{Note that HERMES\@\ is agnostic to the specifics of the \gls{BD}, drawing on the analogy between \gls{VSS} and car-rental scenarios.} HERMES\@\ consists of four main steps: \textit{session key generation and \gls{BD} distribution} (Step~1), \textit{\acrfull{AT} generation} (Step~2), \textit{\acrfull{AT} distribution and verification} (Step~3), and \textit{vehicle access} (Step~4). During the \textit{session key generation and data distribution} in Step~1, the consumer generates three session keys. One of these session keys is used to \textit{encrypt} the generated \gls{AT} at the \gls{VSSP} servers, so that only the consumer has access to it. The two other session keys are used to generate an \textit{authentication tag} of the \gls{BD}, such that only the consumer can identify and retrieve the \gls{AT} from the \gls{PL} as well as verify that the beforehand agreed \gls{BD} is included in the \gls{AT}. As the consumer considers the owner and the \gls{VSSP} as honest-but-curious entities, the consumer conceals the three session keys before forwarding them -- the keys are transformed into secret-shared form~\cite{DBLP:conf/ccs/ArakiFLNO16}. Moreover, to protect its identity, the consumer avoids direct communication with the \gls{VSSP} by forwarding the shares of the session keys to the owner. The owner then forwards to the \gls{VSSP} the \gls{BD} and its signature in a shared form, together with the concealed session keys, to each $\mathsf{S}_i$ server of the \gls{VSSP}. Once each $\mathsf{S}_i$ of the \gls{VSSP} receives the shares of the session keys and the booking details, the \textit{\acrfull{AT} generation}, Step~2, commences. The vehicle key is retrieved from the \gls{DB} in each $\mathsf{S}_i$ server, using an equality check over \gls{MPC}, thus preserving the key secrecy.
The \gls{AT} is generated by encrypting the \gls{BD} and its signature with the vehicle key, such that only the vehicle itself can retrieve them. Moreover, the session keys, generated by the consumer, are used to encrypt the \gls{AT}, and also to create an \textit{authentication tag}, such that only the consumer can identify and access the \gls{AT}. Each of the servers, $\mathsf{S}_i$, then forwards the encrypted \gls{AT} and its authentication tag to the \gls{PL}. The \gls{PL} serves as a bulletin board and notifies the \gls{VSSP} once it publishes the information. During the \textit{\acrfull{AT} distribution and verification}, Step~3, the consumer can identify and retrieve the corresponding \gls{AT}. As the consumer considers the \gls{PL} as honest-but-curious, it hides its identity (i.e., IP address) by querying the \gls{PL} using an anonymous communication channel such as Tor~\cite{torproject}. The consumer then retrieves the \gls{AT} and presents it to the vehicle, which verifies it and grants the consumer access for the predefined booking duration during \textit{vehicle access} at Step~4. \subsection{HERMES\@\ in Detail} We first describe the \textit{prerequisite} steps and then detail the core operations in four steps. Table~\ref{table:notations} lists the notation used throughout the paper. \subsubsection*{Prerequisite steps} Before HERMES\@\ commences, two prerequisite steps are necessary: \textit{vehicle key distribution} and establishing the details for booking, i.e., \textit{vehicle booking}. \paragraph*{Step A - Vehicle key distribution} This step takes place immediately after the $x$th owner, $ID^{u_o}_x$, registers her $y$th vehicle, $ID^{veh_{u_o}}_y$, with the \gls{VSSP}.
The \gls{VSSP} requests from the \gls{DB} of \gls{VM}, $DB^{VM}$, the secret symmetric key of the vehicle, $K^{veh_{u_o}}_y$, and the corresponding identity of the vehicle, $ID^{veh_{u_o}}_y$, i.e., \begin{equation*} DB^{VM} = \begin{pmatrix} ID^{u_o}_1 & ID^{veh_{u_o}}_1 & K^{veh_{u_o}}_1 \\ \vdots & \vdots & \vdots \\ ID^{u_o}_x& ID^{veh_{u_o}}_y & K^{veh_{u_o}}_y \\ \vdots & \vdots & \vdots \\ ID^{u_o}_m& ID^{veh_{u_o}}_n & K^{veh_{u_o}}_n \\ \end{pmatrix}. \end{equation*} Then, \gls{VM} replies and \gls{VSSP} retrieves these values in secret shared form, denoted by $[K^{veh_{u_o}}_y]$ and $[ID^{veh_{u_o}}_y]$, respectively. It stores $ID^{u_o}_x$, $[ID^{veh_{u_o}}_y]$ and $[K^{veh_{u_o}}_y]$ in its \gls{DB} denoted $DB^{\mathsf{S}_i}$, i.e., \begin{equation*} DB^{\mathsf{S}_i} = \begin{pmatrix} ID^{u_o}_1 & [ID^{veh_{u_o}}_1] & [K^{veh_{u_o}}_1] \\ \vdots & \vdots & \vdots \\ ID^{u_o}_x& [ID^{veh_{u_o}}_y] & [K^{veh_{u_o}}_y] \\ \vdots & \vdots & \vdots \\ ID^{u_o}_m& [ID^{veh_{u_o}}_n] & [K^{veh_{u_o}}_n] \\ \end{pmatrix}. \end{equation*} For simplicity, we use $ID^{u_o}$, $ID^{veh_{u_o}}$ and $K^{veh_{u_o}}$ instead of $ID^{u_o}_x$, $ID^{veh_{u_o}}_y$ and $K^{veh_{u_o}}_y$ throughout the paper. \paragraph*{Step B - Vehicle booking} This step allows the owner and consumer to agree on the \gls{BD} before HERMES\@\ commences. Specifically, $u_o$ and $u_c$ agree on the booking details, i.e., $ BD^{u_o, u_c} = \{\mathsf{hash}(\mathit{Cert}^{u_c})$, $ID^{veh_{u_o}}$, $L^{veh_{u_o}}$, $CD^{u_c}$, $AC^{u_c}$, $ID^{BD}\}$, where $\mathsf{hash}(\mathit{Cert}^{u_c})$ is the hash value of the digital certificate of $u_c$, $L^{veh_{u_o}}$ is the pick-up location of the vehicle, $CD^{u_c}$ is the set of conditions under which $u_c$ is allowed to use the vehicle (e.g., restrictions on locations, time period), $AC^{u_c}$ are the access control rights based on which $u_c$ is allowed to access the vehicle, and $ID^{BD}$ is the booking identifier.
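As an illustration, the booking-details tuple $BD^{u_o, u_c}$ can be assembled as follows. This is a minimal Python sketch; the field names and encoding are ours and not part of HERMES, and SHA-256 is used as a concrete choice for $\mathsf{hash}()$.

```python
import hashlib

def make_booking_details(cert_uc: bytes, veh_id: str, location: str,
                         conditions: str, access_rights: str, bd_id: str) -> dict:
    """Assemble BD^{u_o,u_c} = {hash(Cert^{u_c}), ID^{veh}, L^{veh}, CD^{u_c}, AC^{u_c}, ID^{BD}}."""
    return {
        "cert_hash": hashlib.sha256(cert_uc).hexdigest(),  # hash(Cert^{u_c})
        "vehicle_id": veh_id,            # ID^{veh_{u_o}}
        "pickup_location": location,     # L^{veh_{u_o}}
        "conditions": conditions,        # CD^{u_c}
        "access_rights": access_rights,  # AC^{u_c}
        "booking_id": bd_id,             # ID^{BD}
    }

# Hypothetical example values, for illustration only.
bd = make_booking_details(b"consumer-cert", "veh-42", "51.05N,3.72E",
                          "2024-06-01..2024-06-07", "unlock,drive", "bd-0001")
```

Only the hash of the consumer certificate enters the tuple, so the certificate itself never has to be shared with the \gls{VSSP}.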
\subsubsection*{HERMES\@\ operations in four steps} \begin{figure*}[hbt!] \centering \resizebox{0.95\textwidth}{!}{% \includegraphics{fig-step1.pdf}} \caption{Step 1: session key generation and \gls{BD} distribution.} \label{fig:step1} \end{figure*} \paragraph*{Step 1 -- Session key generation and \gls{BD} distribution}\label{step1} While $u_o$ signs the booking details, $BD^{u_o, u_c}$, $u_c$ generates session keys for encryption and data authentication, i.e., $K^{u_c}_{enc}$ and $\vec{K}^{u_c}_{tag}=(K^{u_c}_{tag_{mac}},K^{u_c}_{tag_{enc}})$, respectively. The material generated by $u_c$ and $u_o$ is sent to each $\mathsf{S}_i$ via $u_o$. It will be used for the generation of the \gls{AT}. In detail, as depicted in Fig.~\ref{fig:step1}, $u_o$ sends a request for \textit{session-key-generation}, \textit{SES\_K\_GEN\_REQ}, together with $ID^{BD}$ to $u_c$. Once it receives the request, $u_c$ generates the session keys, $K^{u_c}_{enc}$ and $\vec{K}^{u_c}_{tag}$. $K^{u_c}_{enc}$ is used by the \gls{VSSP} servers, $\mathsf{S}_i$, to encrypt the \gls{AT}, ensuring that only $u_c$ has access to it. Note that each $\mathsf{S}_i$ performs the encryption evaluations in a secret-shared way. $\vec{K}^{u_c}_{tag}$ is used to generate an authentication tag, allowing $u_c$ to verify that the \gls{AT} contains the $BD^{u_o, u_c}$ agreed upon during the \textit{vehicle booking}. The keys are generated with the $\mathsf{kdf}()$ function, using $u_c$'s master key, $K^{u_c}_{master}$, and a $counter$ as inputs. For $\vec{K}^{u_c}_{tag}$, two session keys are generated and stored: one for encryption, $K^{u_c}_{tag_{enc}}$ (i.e., $\vec{K}^{u_c}_{tag}[0] = K^{u_c}_{tag_{enc}}$), and one for authentication, $K^{u_c}_{tag_{mac}}$ (i.e., $\vec{K}^{u_c}_{tag}[1] = K^{u_c}_{tag_{mac}}$). Then, $u_c$ constructs $\ell$ secret shares of $[K^{u_c}_{enc}]$ and $[\vec{K}^{u_c}_{tag}]$, one for each $\mathsf{S}_i$. This ensures that none of the servers alone has access to these session keys.
Nonetheless, they can jointly perform evaluations utilizing the shares of these keys. The consumer encrypts $[K^{u_c}_{enc}]$ and $[\vec{K}^{u_c}_{tag}]$ with the public key of each $\mathsf{S}_i$, $C^{\mathsf{S}_i} = \mathsf{enc}(Pk^{\mathsf{S}_i},\{[K^{u_c}_{enc}], [\vec{K}^{u_c}_{tag}]\})$. This ensures that only the specific $\mathsf{S}_i$ can access the corresponding shares. Finally, $u_c$ forwards to $u_o$ an acknowledgment message, \textit{SES\_K\_GEN\_ACK}, along with $ID^{BD}$ and $\{C^{\mathsf{S}_1}, \dots, C^{\mathsf{S}_l}\}$. The owner, $u_o$, signs $BD^{u_o, u_c}$ with her private key, i.e., $\sigma^{u_o} = \mathsf{sign}(Sk^{u_o},BD^{u_o, u_c})$. At a later stage, the vehicle will use $\sigma^{u_o}$ to verify that $BD^{u_o, u_c}$ was approved by $u_o$. Then $u_o$ transforms $M^{u_c}=\{BD^{u_o, u_c},\sigma^{u_o}\}$ into $\ell$ secret shares, i.e., $[M^{u_c}]$. Upon receipt of the response of $u_c$, $u_o$ forwards to each $\mathsf{S}_i$ an access-token-generation request, \textit{AT\_GEN\_REQ}, along with $ID^{u_o}$, the corresponding $C^{\mathsf{S}_i}$ and $[M^{u_c}]$. \begin{figure*}[hbt!] \centering \resizebox{0.95\textwidth}{!}{% \includegraphics{fig-step2.pdf}} \caption{Step 2: \gls{AT} generation.} \label{fig:step2} \end{figure*} \paragraph*{Step 2 -- \acrfull{AT} generation}\label{step2} The servers generate an \gls{AT} and publish it on the \acrfull{PL}. In detail, as depicted in Fig.~\ref{fig:step2}, after receiving the \textit{AT\_GEN\_REQ} from $u_o$, the servers obtain the session key shares, $\{[K^{u_c}_{enc}], [\vec{K}^{u_c}_{tag}]\}$. Each $\mathsf{S}_i$ decrypts $C^{\mathsf{S}_i}$ using its private key. The session keys are used, respectively, for encrypting the \gls{AT} with which $u_c$ accesses the vehicle and for generating an authentication tag with which $u_c$ verifies the data authenticity of the \gls{BD} contained in the \gls{AT}.
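The $\ell$-out-of-$\ell$ sharing of the session keys can be illustrated with XOR-based additive sharing, a toy stand-in for the replicated sharing of~\cite{DBLP:conf/ccs/ArakiFLNO16}; the helper names are ours.

```python
import secrets
from functools import reduce

def share(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n XOR shares; any n-1 of them are uniformly random."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The last share is the XOR of the secret with all random shares.
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares, secret)
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the secret."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

k_enc = secrets.token_bytes(16)      # plays the role of K^{u_c}_{enc}
shards = share(k_enc, 3)             # one share per server S_i
assert reconstruct(shards) == k_enc  # only all l servers together recover the key
```

Each server receives exactly one share, so no single $\mathsf{S}_i$ learns anything about the key, matching the claim above.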
To generate the \gls{AT}, $[AT^{veh_{u_o}}]$, the key of the vehicle, $[K^{veh_{u_o}}]$, is retrieved from $DB^{\mathsf{S}_i}$ using query and equality check operations as proposed in~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}. Specifically, each $\mathsf{S}_i$ uses $ID^{u_o}$ to extract the records of $u_o$ from $DB^{\mathsf{S}_i}$. The result is stored in a matrix $\vec{D}^{u_o}$ of size $n\times3$, i.e., \begin{equation*} \vec{D}^{u_o} = \begin{pmatrix} ID^{u_o} & [ID^{veh_{u_o}}_1] & [K^{veh_{u_o}}_1] \\ \vdots & \vdots & \vdots \\ ID^{u_o} & [ID^{veh_{u_o}}_y] & [K^{veh_{u_o}}_y] \\ \vdots & \vdots & \vdots \\ ID^{u_o} & [ID^{veh_{u_o}}_{n}] & [K^{veh_{u_o}}_{n}] \end{pmatrix}, \end{equation*} where $n$ is the number of vehicles owned by $u_o$ and registered with the \gls{VSS}. To retrieve the record for the vehicle to be shared, each $\mathsf{S}_i$ extracts $[ID^{veh_{u_o}}]$ from $[M^{u_c}]$ and uses the $([x] \stackrel{?}{=} [y])$ operation to perform an equality check with each of the $n$ records of $\vec{D}^{u_o}$. The comparison outputs $1$ at the position $y$ of the matching vehicle and $0$ in case of a mismatch. The results are stored in a vector $\vec{D}^{veh_{u_o}}$ of length $n$, i.e., \begin{equation*} \vec{D}^{veh_{u_o}}= \Big(\overset{1}{[0]}\cdots\overset{}{[0]}\overset{y}{[1]}\overset{}{[0]}\cdots\overset{n}{[0]}\Big) \enspace . \end{equation*} Each $\mathsf{S}_i$ then multiplies $\vec{D}^{veh_{u_o}}$ and $\vec{D}^{u_o}$ to construct a vector of length $3$, i.e., \begin{equation*} \vec{D}^{veh_{u_o}}\times\vec{D}^{u_o} = \Big(ID^{u_o}\; [ID^{veh_{u_o}}_y]\; [K^{veh_{u_o}}_y]\Big) \enspace . \end{equation*} Based on the resultant vector, $\vec{D}^{veh_{u_o}}\times\vec{D}^{u_o}$, the secret key share of the vehicle, $[K^{veh_{u_o}}_y]$, is retrieved. To preserve the confidentiality of $[M^{u_c}]$, each $\mathsf{S}_i$ encrypts it with $[K^{veh_{u_o}}_y]$ using the symmetric-key encryption function $\mathsf{E}()$.
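In the clear, the selection performed by each $\mathsf{S}_i$ amounts to an inner product between the 0/1 equality vector $\vec{D}^{veh_{u_o}}$ and the key column of $\vec{D}^{u_o}$. The plaintext sketch below uses integer-encoded keys; in HERMES the comparison bits and keys stay secret-shared and the multiplications run under MPC.

```python
def select_vehicle_key(rows, target_veh_id):
    """rows: list of (owner_id, veh_id, veh_key) tuples with integer keys.

    The comparison bit is 1 only at the matching row, so the inner
    product with the key column returns exactly that row's key."""
    sel = [1 if veh_id == target_veh_id else 0 for _, veh_id, _ in rows]  # D^{veh}
    return sum(bit * key for bit, (_, _, key) in zip(sel, rows))

# Hypothetical database rows for one owner (keys shown as small integers).
db = [("uo-1", "veh-1", 1111), ("uo-1", "veh-2", 2222), ("uo-1", "veh-3", 3333)]
assert select_vehicle_key(db, "veh-2") == 2222
```

Because every row is touched, the access pattern leaks nothing about which record was selected.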
The generated \gls{AT} requires a second layer of encryption, making $[AT^{veh_{u_o}}]$ and $[ID^{veh_{u_o}}]$ available only to $u_c$. Specifically, the \gls{VSSP} servers, $\mathsf{S}_i$, collaboratively encrypt $[M^{u_c}]$ using the retrieved $[K^{veh_{u_o}}]$ to generate an \gls{AT} for the vehicle in shared form, i.e., $[AT^{veh_{u_o}}]$. Then, the servers $\mathsf{S}_i$ collaboratively perform a second layer of encryption, encrypting $[AT^{veh_{u_o}}]$ and $[ID^{veh_{u_o}}]$ with $[K^{u_c}_{enc}]$ to generate $[C^{u_c}]$, which is opened with $\mathsf{open}([C^{u_c}])$ to retrieve $C^{u_c}$. In addition, each $\mathsf{S}_i$ generates an authentication tag, $[AuthTag^{BD^{u_o, u_c}}]$, that can later be used by $u_c$ to retrieve the associated $AT^{veh_{u_o}}$ from the \gls{PL}. Using $\mathsf{mac}()$ with $[\vec{K}^{u_c}_{tag}]$ and $[BD^{u_o, u_c}]$ as inputs, each $\mathsf{S}_i$ creates an authentication tag $[AuthTag^{BD^{u_o, u_c}}]$.~\footnote{Recall that $\vec{K}^{u_c}_{tag}=(K^{u_c}_{tag_{mac}},K^{u_c}_{tag_{enc}})$.} Prior to posting on the \gls{PL}, we use $\mathsf{open}([AuthTag^{BD^{u_o, u_c}}])$ to reconstruct the shares and obtain $AuthTag^{BD^{u_o, u_c}}$. Note that, for efficient \gls{MPC}, we perform Enc-then-Hash-then-MAC. The reason is that, following~\cite{rotaru2017modes}, encryption, i.e., $\mathsf{E}()$, can be done in parallel and separately (and is thus efficient), while the hash does not need to be computed in \gls{MPC}: the \gls{MPC} parties, $\mathsf{S}_i$, can apply the hash function locally (see Sec.~\ref{sec:protocol_evaluation}). Essentially, we perform ``parallel \gls{MPC} encryption'' instead of ``having to evaluate a hash function on a large input in \gls{MPC}''.~\footnote{In our implementation, we use CBC-MAC-AES and HtMAC-MiMC as we describe in Sec.~\ref{sec:protocol_evaluation}.} Finally, each $\mathsf{S}_i$ sends an access-token-publication request, i.e., \textit{AT\_PUB\_REQ}, to the \gls{PL} along with $C^{u_c}$ and $AuthTag^{BD^{u_o, u_c}}$. \begin{figure*}[hbt!]
\centering \resizebox{0.95\textwidth}{!}{% \includegraphics{fig-step3.pdf}} \caption{Step 3: Access token distribution and verification.} \label{fig:step3} \end{figure*} \paragraph*{Step 3 -- \acrfull{AT} distribution and verification}\label{step3} The encrypted \gls{AT} is published at the \gls{PL}. The \gls{AT} is then retrieved by $u_c$ to access the vehicle. In detail, as depicted in Fig.~\ref{fig:step3}, after receiving the \textit{AT\_PUB\_REQ}, the \gls{PL} publishes $C^{u_c}$, $AuthTag^{BD^{u_o, u_c}}$ and the publication time-stamp, i.e., $TS^{Pub}$. The consumer, $u_c$, monitors the \gls{PL} for newly announced time-stamps, $TS^{Pub}$, to identify the corresponding $C^{u_c}$ using $AuthTag^{BD^{u_o, u_c}}$. Upon identification, $u_c$ queries and anonymously retrieves $C^{u_c}$ from the \gls{PL} using $\mathsf{query\_an()}$, such that the \gls{PL} cannot identify $u_c$. Then, $u_c$ decrypts $C^{u_c}$ using $K^{u_c}_{enc}$ to obtain the \gls{AT} and the vehicle identity, $\{AT^{veh_{u_o}}, ID^{veh_{u_o}}\}$. Note that, in a parallel manner and for synchronization purposes, the \gls{PL} forwards an acknowledgment of the publication, \textit{AT\_PUB\_ACK}, along with $TS^{Pub}_i$ to at least one $\mathsf{S}_i$, which then forwards $TS^{Pub}_i$ to $u_c$ via $u_o$. Upon receipt of \textit{AT\_PUB\_ACK}, $u_c$ uses $TS^{Pub}_i$ to query the \gls{PL}. In the same manner, it uses $\mathsf{query\_an()}$ to anonymously retrieve $C^{u_c}$ and $AuthTag^{BD^{u_o, u_c}}$. Then, $u_c$ locally verifies the authentication tag $AuthTag^{BD^{u_o, u_c}}$, using $\vec{K}^{u_c}_{tag}$ and $BD^{u_o, u_c}$ as inputs to the $\mathsf{mac}()$ function. A successful verification assures $u_c$ of the validity of the \gls{AT}, i.e., that it contains the \gls{BD} agreed upon during the \textit{vehicle booking} prerequisite step. Next, $u_c$ decrypts $C^{u_c}$ using $K^{u_c}_{enc}$ to retrieve $\{AT^{veh_{u_o}}, ID^{veh_{u_o}}\}$, the access token and the identifier of the vehicle, respectively. \begin{figure*}[hbt!]
\centering \resizebox{0.95\textwidth}{!}{% \includegraphics{fig-step4.pdf}} \caption{Step 4: vehicle access. Dashed lines represent close range wireless communication.} \label{fig:step4} \end{figure*} \paragraph*{Step 4 -- Vehicle access}\label{step4} The consumer uses the $AT^{veh_{u_o}}$, $ID^{veh_{u_o}}$, and $\mathit{Cert}^{u_c}$ to obtain access to the vehicle, using any challenge-response protocol based on public key implementations~\cite{DBLP:conf/codaspy/DmitrienkoP17,DBLP:journals/dcc/DiffieOW92} (see Fig.~\ref{fig:step4}). In detail, $u_c$ sends directly to the vehicle $\{AT^{veh_{u_o}}, ID^{veh_{u_o}}, \mathit{Cert}^{u_c}\}$, using a secure and authenticated short-range communication channel such as NFC. It can use any challenge-response protocol for the connection establishment based on public/private keys~\cite{DBLP:conf/codaspy/DmitrienkoP17,DBLP:journals/dcc/DiffieOW92}. Upon receipt, the \gls{OBU} of the vehicle decrypts $AT^{veh_{u_o}}$ using $K^{veh_{u_o}}$ to obtain $M^{u_c} = \{BD^{u_o, u_c}, \sigma^{u_o}\}$. The \gls{OBU} then performs the following verifications. First, it checks the signature $\sigma^{u_o}$ to verify that the booking details, $BD^{u_o, u_c}$, were not altered and were indeed approved by the vehicle owner. Then, it verifies the identity of $u_c$, using the received $\mathit{Cert}^{u_c}$ (along with the $\mathsf{hash}(\mathit{Cert}^{u_c})$ in $BD^{u_o, u_c}$). Finally, it verifies that the access attempt satisfies the conditions specified in $BD^{u_o, u_c}$. If successful, the \gls{OBU} grants $u_c$ access to $veh_{u_o}$. It signs $\{ BD^{u_o, u_c}, TS^{veh_{u_o}}_{Access} \}$, producing $\sigma^{veh_{u_o}}_{Access}$, where $TS^{veh_{u_o}}_{Access}$ is the time-stamp of the instant at which access to $veh_{u_o}$ was granted. Finally, it forwards the message $\{\sigma^{veh_{u_o}}_{Access},TS^{veh_{u_o}}_{Access}\}$ to $u_o$. Otherwise, if any verification fails, the \gls{OBU} terminates the vehicle access process, denying access to the vehicle.
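The verification chain run by the \gls{OBU} can be sketched as follows. This is an illustrative mock-up, not HERMES's implementation: a SHA-256-based XOR keystream stands in for AES-CTR, an HMAC stands in for the owner's RSA signature, and all names are ours.

```python
import hashlib, hmac, json

def _ks(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream (stand-in for AES-CTR)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def E(key: bytes, m: bytes) -> bytes:
    """XOR stream cipher; E is its own inverse, so decryption is E again."""
    return bytes(a ^ b for a, b in zip(m, _ks(key, len(m))))

def obu_verify(at: bytes, k_veh: bytes, k_owner: bytes,
               cert_uc: bytes, now: int) -> bool:
    blob = json.loads(E(k_veh, at))                 # decrypt AT with K^{veh}
    bd, sig = blob["bd"], bytes.fromhex(blob["sig"])
    msg = json.dumps(bd, sort_keys=True).encode()
    # 1. BD unaltered and approved by the owner (sigma^{u_o})
    if not hmac.compare_digest(sig, hmac.new(k_owner, msg, hashlib.sha256).digest()):
        return False
    # 2. consumer identity matches hash(Cert^{u_c}) inside BD
    if bd["cert_hash"] != hashlib.sha256(cert_uc).hexdigest():
        return False
    # 3. access attempt satisfies the booking conditions
    return bd["start"] <= now <= bd["end"]

# Build a token as the owner and VSSP would (toy keys and time window).
k_veh, k_owner, cert = b"V" * 16, b"O" * 16, b"consumer-cert"
bd = {"cert_hash": hashlib.sha256(cert).hexdigest(), "start": 100, "end": 200}
sig = hmac.new(k_owner, json.dumps(bd, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
at = E(k_veh, json.dumps({"bd": bd, "sig": sig}).encode())

assert obu_verify(at, k_veh, k_owner, cert, now=150)        # all checks pass
assert not obu_verify(at, k_veh, k_owner, b"other", now=150)  # wrong certificate
```

Note that every check fails closed: any mismatch aborts before access is granted, mirroring the order of checks described above.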
\section{Functional, Security and Privacy Requirements Analysis}\label{sec:extended_analysis} \newcommand{\advsign}[1]{\mathrm{Adv}_{\mathrm{sign}}^{#1\text{-}\mathrm{euf}}} \newcommand{\advprf}[1]{ \mathrm{Adv}_{\mathrm{prf}}^{#1\text{-}\mathrm{prf}}} \newcommand{\advpke}[1]{ \mathrm{Adv}_{\mathrm{enc}}^{#1\text{-}\mathrm{pke}}} \newcommand{\advske}[1]{ \mathrm{Adv}_{\mathrm{E}}^{#1\text{-}\mathrm{ske}}} \newcommand{\advmac}[1]{ \mathrm{Adv}_{\mathrm{mac}}^{#1\text{-}\mathrm{mac}}} We argue that HERMES\@\ fulfills its functional requirements and prove that it is secure and privacy-enhancing, satisfying the requirements of Section~\ref{sec:system_model}. \subsection{Functional Requirements Realization} \paragraph*{FR1 -- Offline vehicle access} While Steps 1-3 (Fig.~\ref{fig:step1} - Fig.~\ref{fig:step3}) require a network connection, Step 4 provides vehicle access using short-range wireless communication. The vehicle can decrypt and verify the \gls{AT} offline using its key, $K^{veh_{u_o}}$, and the public key, $Pk^{u_o}$, of $u_o$, both stored locally. The signature of the access confirmation, $\sigma^{veh_{u_o}}_{Access}$, can be sent to $u_o$ over the Internet, or when $veh_{u_o}$ and $u_o$ are in close proximity.
\paragraph*{FR2 -- \acrfull{AT} update and revocation by the owner $u_o$} HERMES\@\ can update or revoke an \gls{AT} as described in Steps 1-3, as a new booking request. After an agreement on an update action between $u_o$ and $u_c$, the necessary \gls{BD} values are updated to $\hat{BD}^{u_o, u_c}$. In case of revocation, upon agreement between $u_o$ and $u_c$, the parameters in $\hat{BD}^{u_o, u_c}$ are set to a predefined value specifying the revocation action. There might be occasions in which the \gls{AT} update or revocation needs to be enforced by $u_o$ while preventing $u_c$ from blocking such requests/operations. HERMES\@\ can execute requests initiated by $u_o$ alone, without the involvement of $u_c$. More specifically, $u_o$ alone performs the generation of the session keys, requests an \gls{AT} from the \gls{VSSP}, queries the \gls{PL}, and forwards the token to the vehicle, $veh_{u_o}$. The \gls{PD} of the owner forwards the updated \gls{AT} using a short-range connection (in close proximity) or an Internet connection (e.g., cellular data) if needed, restricting access by a dishonest consumer (e.g., one fleeing with the vehicle). \subsection{Security and Privacy} HERMES\@\ is secure and privacy-enhancing, provided that its underlying cryptographic primitives are sufficiently secure. Informally, we demonstrate the following: \begin{theorem}[Informal] Assume that communication between all entities of the \gls{VSS} takes place over private channels, i.e., channels that are secure and authenticated using, e.g., SSL-TLS~\cite{rfc8446}.
If \begin{itemize} \item the \gls{MPC} is statistically secure~\cite{rotaru2017modes}, \item the key derivation function $\mathsf{kdf}$ is multi-key secure~\cite{DBLP:journals/jacm/GoldreichGM86}, \item the signature scheme $\mathsf{sign}$ is multi-key existentially unforgeable~\cite{DBLP:journals/siamcomp/GoldwasserMR88}, \item the public-key encryption scheme $\mathsf{enc}$ is multi-key semantically secure~\cite{DBLP:conf/eurocrypt/BellareBM00}, \item the symmetric key encryption scheme $\mathsf{E}$ is multi-key chosen-plaintext secure~\cite{DBLP:conf/focs/BellareDJR97}, \item the \gls{MAC} function $\mathsf{mac}$ is multi-key existentially unforgeable~\cite{DBLP:journals/siamcomp/GoldwasserMR88}, and \item the hash function $\mathsf{hash}$ is collision resistant~\cite{DBLP:conf/fse/RogawayS04}, \end{itemize} then HERMES\@\ fulfills the security and privacy requirements of Sect.~\ref{sec:system_model}. \end{theorem} Details on the \textit{semantic security analysis} of HERMES\@\ are given below. More precisely, in Section~\ref{sec:analysis-model} we describe the security models of the cryptographic primitives. Then, the formal reasoning is given in Section~\ref{sec:analysis-proof}. \subsection{Cryptographic Primitives}\label{sec:analysis-model} Note that in HERMES\@, the cryptographic primitives are evaluated under different keys, and therefore we need the security of the cryptographic primitives in the \emph{multi-key} setting. For example, $\mathsf{enc}$ is used with different keys, one for each party in the \gls{VSSP}; $\mathsf{E}$ and $\mathsf{mac}$ are used with independent keys (i.e., session keys) for every fresh evaluation of the protocol; and $\mathsf{sign}$ is used by all owners, $u_o$, each with a different key. Bellare et al.~\cite{DBLP:conf/eurocrypt/BellareBM00} showed how public key encryption can be generalized to \emph{multi-key} security; the adaptation straightforwardly generalizes to the other security models.
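The multi-key PRF experiment formalized in Definition~\ref{def:prf} below can be mimicked in a few lines. In this sketch, HMAC-SHA-256 plays the role of the PRF, the ideal world lazily samples random functions, and the helper names are ours; it illustrates the game, not a proof artifact.

```python
import hashlib, hmac, secrets

def prf_experiment(adversary, mu: int, real: bool):
    """Run adversary A with oracle access to mu instances:
    real PRFs (keyed HMAC) or lazily sampled random functions."""
    keys = [secrets.token_bytes(16) for _ in range(mu)]
    tables = [{} for _ in range(mu)]  # lazy sampling of the $^i oracles

    def oracle(i: int, x: bytes) -> bytes:
        if real:
            return hmac.new(keys[i], x, hashlib.sha256).digest()
        return tables[i].setdefault(x, secrets.token_bytes(32))

    return adversary(oracle)

# A trivial adversary that only checks oracle consistency outputs 1 in
# both worlds, i.e., it has zero distinguishing advantage.
def consistency_adv(oracle):
    return int(oracle(0, b"x") == oracle(0, b"x"))

assert prf_experiment(consistency_adv, mu=2, real=True) == 1
assert prf_experiment(consistency_adv, mu=2, real=False) == 1
```

The advantage in the definition is exactly the gap between the two acceptance probabilities of the same adversary across the `real=True` and `real=False` worlds.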
In the definitions below, for a function $f$, we denote by $\mathrm{Func}(f)$ the set of all functions with the exact same interface as $f_K$. We denote a random drawing by $\xleftarrow{{\scriptscriptstyle\$}}$. \begin{definition}\label{def:prf} Let $\mu\geq1$. Consider a key derivation function using a pseudorandom function $\mathrm{prf}=(\mathsf{kg},\mathsf{prf})$. We define the advantage of an adversary $\mathcal{A}$ in breaking the $\mu$-multikey pseudorandom function security as \begin{align*} &\advprf{\mu}(\mathcal{A}) = \\ &\qquad\left| \Pr\left( K^1,\ldots,K^{\mu}\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\; \mathcal{A}^{\mathsf{prf}(K^i,\cdot)}=1\right) -\right.\\ &\qquad\qquad\qquad\left.\Pr\left( \$^1,\ldots,\$^{\mu}\xleftarrow{{\scriptscriptstyle\$}}\mathrm{Func}(\mathsf{prf}) \;:\; \mathcal{A}^{\$^i}=1\right) \right|\,. \end{align*} We define by $\advprf{\mu}(q,t)$ the maximum advantage, taken over all adversaries making at most $q$ queries and running in time at most $t$. \end{definition} \begin{definition}\label{def:sign} Let $\mu\geq1$. Consider a signature scheme $\mathrm{sign}=(\mathsf{kg},\mathsf{sign},\mathsf{verify})$. We define the advantage of adversary $\mathcal{A}$ in breaking the $\mu$-multikey existential unforgeability as \begin{align*} &\advsign{\mu}(\mathcal{A}) = \\ &\qquad\Pr\left((Pk^1,Sk^1), \ldots,(Pk^{\mu},Sk^{\mu})\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\;\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.\mathcal{A}^{\mathsf{sign}(Sk^i,\cdot)}(Pk^i) \text{ forges} \right)\,, \end{align*} where ``forges'' means that $\mathcal{A}$ outputs a tuple $(i,M,\sigma)$ such that $\mathsf{verify}(Pk^i,M,\sigma)=1$ and $M$ has never been queried to the $i$-th signing oracle. We define by $\advsign{\mu}(q,t)$ the maximum advantage, taken over all adversaries making at most $q$ queries and running in time at most $t$. \end{definition} \begin{definition}\label{def:enc} Let $\mu\geq1$.
Consider a public-key encryption scheme $\mathrm{enc}=(\mathsf{kg},\mathsf{enc},\mathsf{dec})$. We define the advantage of adversary $\mathcal{A}$ in breaking the $\mu$-multikey semantic security as \begin{align*} &\advpke{\mu}(\mathcal{A}) = \\ &\left| \Pr\left( (Pk^1,Sk^1),\ldots,(Pk^{\mu},Sk^{\mu})\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\; \mathcal{A}^{\mathcal{O}_0}(Pk^i)=1\right) -\right. \\ &\left.\Pr\left( (Pk^1,Sk^1),\ldots,(Pk^{\mu},Sk^{\mu})\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\; \mathcal{A}^{\mathcal{O}_1}(Pk^i)=1\right) \right|\,, \end{align*} where $\mathcal{O}_b$ for $b\in\{0,1\}$ gets as input a tuple $(i,m_0,m_1)$ with $i\in\{1,\ldots,\mu\}$ and $|m_0|=|m_1|$ and outputs $\mathsf{enc}(Pk^i,m_b)$. We define by $\advpke{\mu}(t)$ the maximum advantage, taken over all adversaries running in time at most $t$. \end{definition} \begin{definition}\label{def:e} Let $\mu\geq1$. Consider a symmetric-key encryption scheme $\mathrm{E}=(\mathsf{kg},\mathsf{E},\mathsf{D})$. We define the advantage of adversary $\mathcal{A}$ in breaking the $\mu$-multikey chosen-plaintext security as \begin{align*} & \advske{\mu}(\mathcal{A}) = \\ &\qquad\left| \Pr\left( K^1,\ldots,K^{\mu}\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\; \mathcal{A}^{\mathsf{E}(K^i,\cdot)}=1\right) - \right.\\ &\qquad\qquad\qquad\left.\Pr\left( \$^1,\ldots,\$^{\mu}\xleftarrow{{\scriptscriptstyle\$}}\mathrm{Func}(\mathsf{E}) \;:\; \mathcal{A}^{\$^i}=1\right) \right|\,. \end{align*} We define by $\advske{\mu}(q,t)$ the maximum advantage, taken over all adversaries making at most $q$ queries and running in time at most $t$. \end{definition} \begin{definition}\label{def:mac} Let $\mu\geq1$. Consider a MAC function $\mathrm{mac}=(\mathsf{kg},\mathsf{mac})$.
We define the advantage of adversary $\mathcal{A}$ in breaking the $\mu$-multikey existential unforgeability as \begin{multline*} \advmac{\mu}(\mathcal{A}) = \\\Pr\left( K^1,\ldots,K^{\mu}\xleftarrow{{\scriptscriptstyle\$}}\mathsf{kg} \;:\; \mathcal{A}^{\mathsf{mac}(K^i,\cdot)} \text{ forges} \right)\,, \end{multline*} where ``forges'' means that $\mathcal{A}$ outputs a tuple $(i,M,\sigma)$ such that $\mathsf{mac}(K^i,M)=\sigma$ and $M$ has never been queried to the $i$-th MAC function. We define by $\advmac{\mu}(q,t)$ the maximum advantage, taken over all adversaries making at most $q$ queries and running in time at most $t$. \end{definition} Finally, we consider the hash function $\mathsf{hash}$ to be collision-resistant. We denote the supremal probability of any adversary finding a collision for $\mathsf{hash}$ in time $t$ by $ \mathrm{Adv}_{\mathsf{hash}}^{\mathrm{col}}(t)$. The definition is, admittedly, debatable: for any hash function there exists an adversary that can output a collision in constant time (namely, one that has a collision hardwired in its code). We ignore this technicality for simplicity and refer to \cite{DBLP:conf/fse/RogawayS04,DBLP:journals/dcc/Stinson06,DBLP:conf/vietcrypt/Rogaway06} for further discussion. \subsection{Analysis}\label{sec:analysis-proof} We prove that HERMES\@\ satisfies the security and privacy requirements of Section~\ref{sec:system_model}, provided that its underlying cryptographic primitives are sufficiently secure.
\begin{theorem}\label{thmextended} Suppose that communication takes place over private channels, the MPC is statistically secure, $\mathsf{hash}$ is a random oracle, and \begin{equation*} \begin{split} \advsign{\mu_o+\mu_{\mathrm{veh_{u_o}}}}(2q,t) + \advprf{\mu_c}(2q,t) + \advpke{l}(t) + \\ \advske{2q+\mu_\mathrm{veh_{u_o}}}(3q,t) + \advmac{q}(q,t) + \mathrm{Adv}_{\mathsf{hash}}^{\mathrm{col}}(t) \ll1\,, \end{split} \end{equation*} where $\mu_o$ denotes the maximum number of $u_o$s, $\mu_c$ the maximum number of $u_c$s, $\mu_\mathrm{veh_{u_o}}$ the maximum number of vehicles, $l$ the number of servers in the \gls{VSSP}, $q$ the total number of times the system gets evaluated, and $t$ the maximum running time of any adversary. Then, HERMES\@\ fulfills the security and privacy requirements of Section~\ref{sec:system_model}. \end{theorem} \begin{proof} Recall from Section~\ref{sec:system_model} that owners, $u_o$, and the \gls{VM} are honest-but-curious, whereas consumers, $u_c$, and outsiders may be malicious and actively deviate from the protocol. Vehicles are trusted. Via a hybrid argument, we replace the key derivation functions utilizing pseudorandom functions $\mathsf{prf}(K^{u_c},\cdot)$ by independent random functions $\$^{u_c}$. This step is performed at the cost of \begin{align} \advprf{\mu_c}(2q,t)\,,\label{eqn:cost1} \end{align} as in each of the $q$ evaluations of HERMES\@\ there are two evaluations of a function $\mathsf{prf}$, and at most $\mu_c$ instances of these functions. As we assume that the \gls{MPC} is statistically secure, we can replace the \gls{VSSP} by a single, trusted \acrfull{SP} (with $l$ interfaces) -- it perfectly evaluates the protocol, and it does not reveal/leak any information. Assuming that the public-key encryption reveals nothing, which can be done at the cost of \begin{align} \advpke{l}(t)\,,\label{eqn:cost2} \end{align} we can, for simplicity, replace it with a perfectly secure public-key encryption $\rho^{VSSP}$ at the \gls{VSSP} directly.
Thus, an encryption does not reveal its origin and content, and only \gls{VSSP} can straightforwardly decrypt, therewith eliminating the fact that \gls{VSSP} has $l$ interfaces and has to perform multiparty computation. Now, as the pseudorandom functions are replaced by random functions, the keys to the symmetric encryption scheme, $\mathsf{E}$, are all independently and uniformly distributed, and as the public-key encryption scheme is secure, these keys never leak. Therefore, we can replace the symmetric encryption functionality by perfectly random invertible functions, $\pi^{veh_{u_o}}$ for the vehicles, unique $\pi^{u_c}_{enc}$ for every new encryption with the $u_c$ session keys, and $\pi^{u_c}_{tag_{enc}}$ for every new encryption in the tag computation with $u_c$ session keys, at the cost of \begin{align} \advske{2q+\mu_\mathrm{veh_{u_o}}}(3q,t)\,,\label{eqn:cost3} \end{align} as there are $2q+\mu_\mathrm{veh_{u_o}}$ different instances involved and at most $3q$ evaluations are made in total. This means that instead of randomly drawing $K^{u_c}_{enc} \leftarrow \$^{u_c}$, we now randomly draw $\pi^{u_c}_{enc}\xleftarrow{{\scriptscriptstyle\$}} \mathrm{Func}(\mathsf{E})$. Likewise, for $K^{u_c}_{tag_{enc}}\leftarrow \$^{u_c}$ we now randomly draw $\pi^{u_c}_{tag_{enc}}\xleftarrow{{\scriptscriptstyle\$}} \mathrm{Func}(\mathsf{E})$. We are left with a simplified version of HERMES\@. The \gls{VSSP} is replaced by a single trusted authority. The pseudorandom functions are replaced by independent random drawings - $u_c$ uses $\$^{u_c}$ which generates fresh outputs for every call. The public-key encryptions are replaced with a perfectly secure public-key encryption function $\rho^{VSSP}$. Finally, the symmetric-key encryptions are replaced by perfectly random invertible functions $\pi^{veh_{u_o}}$, $\pi^{u_c}_{enc}$, and $\pi^{u_c}_{tag_{enc}}$. The simplified system is illustrated in Figure~\ref{fig:protsimplified}. 
Here, the derivation of the vehicle key (or, formally, the random function corresponding to the encryption) from the database is abbreviated to $\pi^{veh_{u_o}} \leftarrow \mathsf{query}(ID^{u_o}, DB^{S_i})$ for conciseness. \bigskip\noindent We will now treat the security and privacy requirements separately, and discuss how each is achieved by the cryptographic primitives. We recall that the consumer $u_c$ and owner $u_o$ have agreed upon the \gls{BD} prior to the evaluation of HERMES\@, hence they know each other by design. \paragraph*{SR1 -- Confidentiality of \acrfull{BD}, $BD^{u_o, u_c}$} In one evaluation of the protocol, $u_c$, $u_o$, the \emph{trusted} \gls{VSSP}, and the shared vehicle, $veh_{u_o}$, learn the \gls{BD} by default or by design. The \gls{BD} can only become public through the values $AuthTag^{BD^{u_o, u_c}}$ and $C^{u_c}$ satisfying \begin{equation}\label{eqn:AuthT} \begin{split} AuthTag^{BD^{u_o, u_c}} & = \mathsf{mac}(K^{u_c}_{tag_{mac}},\mathsf{E}(K^{u_c}_{tag_{enc}},BD^{u_o, u_c})) \\ & = \mathsf{mac}(K^{u_c}_{tag_{mac}},\pi^{u_c}_{tag_{enc}}(BD^{u_o, u_c}))\,, \end{split} \end{equation} \begin{equation}\label{eqn:Cuc} \begin{split} C^{u_c} & = \mathsf{E}(K_{enc}^{u_c},\{\mathsf{E}(K^{veh_{u_o}}_y,\{BD^{u_o, u_c},\sigma^{u_o}\}),ID^{veh_{u_o}}\}) \\ & = \pi^{u_c}_{enc}(\{\pi^{veh_{u_o}}(\{BD^{u_o, u_c},\sigma^{u_o}\}),ID^{veh_{u_o}}\})\,. \end{split} \end{equation} Equations~(\ref{eqn:AuthT}) and~(\ref{eqn:Cuc}) reveal nothing about $BD^{u_o, u_c}$ to a malicious outsider, thanks to the security of $\mathsf{mac}$ and $\mathsf{E}$, and the independent uniform drawing of the keys $K^{u_c}_{enc}$ and $\vec{K}^{u_c}_{tag}=(K^{u_c}_{tag_{enc}}, K^{u_c}_{tag_{mac}})$; in the simplified system, $\pi^{u_c}_{enc}$ and $\pi^{u_c}_{tag_{enc}}$ are randomly generated for every evaluation. The nested encryption $\mathsf{E}$, i.e., $\pi^{u_c}_{enc}\circ \pi^{veh_{u_o}}$ (Eq.~(\ref{eqn:Cuc})), does not influence the analysis due to the mutual independence of the two functions, i.e.
the mutual independence of the keys $K_{enc}^{u_c}$ and $K^{veh_{u_o}}_y$. \paragraph*{SR2 -- Entity and data authenticity of $BD^{u_o, u_c}$ from the owner $u_o$} An owner who initiates the \gls{AT} generation and distribution first signs the \gls{BD} using its private key before sending it to the \gls{VSSP} in shares. Therefore, once the vehicle receives the token and obtains the booking details, it can verify the signature of $u_o$ on $BD^{u_o, u_c}$. In other words, the vehicle can verify the source of $BD^{u_o, u_c}$, namely $u_o$, and its integrity. Suppose, to the contrary, that a malicious consumer can get access to a vehicle of an owner $u_o$. In particular, this means that it has created a tuple $(BD^{u_o, u_c},\sigma^{u_o})$ such that $\mathsf{verify}(Pk^{u_o},BD^{u_o, u_c},\sigma^{u_o})$ holds. If $\sigma^{u_o}$ is new, this means that $u_c$ forges a signature for the secret signing key $\mathit{Sk}^{u_o}$. Denote the event of this happening by \begin{equation}\label{eqn:event1} \begin{split} &\mathsf{E}_1\;:\;\mathcal{A} \text{ forges } \mathsf{sign}(Sk^{u_o},\cdot) \text{ for some }Sk^{u_o}\,. \end{split} \end{equation} On the other hand, if $(BD^{u_o, u_c},\sigma^{u_o})$ is old but the evaluation is fresh, this implies a collision $\mathsf{hash}(\mathit{Cert}^{u_c})=\mathsf{hash}(\mathit{Cert}^{u_c\prime})$. Denote the event of this happening by \begin{align} &\mathsf{E}_2\;:\;\mathcal{A} \text{ finds a collision for } \mathsf{hash}\,. \label{eqn:event2} \end{align} We thus obtain that a violation of SR2 implies $\mathsf{E}_1\vee\mathsf{E}_2$. \paragraph*{SR3 -- Confidentiality of $AT^{veh_{u_o}}$} The \gls{AT} is generated by the \gls{VSSP} obliviously, as the \gls{VSSP} is trusted. The \gls{AT} is only revealed to the public in encrypted form, through $C^{u_c}$ of (\ref{eqn:Cuc}).
Due to the uniform drawing of $\pi^{u_c}_{enc}$ (and the security of $\rho^{VSSP}$ used to transmit this function), only the legitimate user (i.e., $u_c$) can decrypt and learn the \gls{AT}. The consumer then shares it with the vehicle over a secure and private channel. \paragraph*{SR4 -- Confidentiality of vehicle key, $K^{veh_{u_o}}$} By virtue of our hybrid argument on the use of the symmetric-key encryption scheme, $\mathsf{E}_{K^{veh_{u_o}}}$ was replaced with $\pi^{veh_{u_o}}$, which itself is a keyless random encryption scheme. As the key is now absent, it cannot leak. Moreover, only the \acrfull{VM} and the vehicle itself hold copies of the vehicle key. The \gls{VM}, as a trusted \gls{SP}, holds all the secret keys of vehicles. As vehicle owners register their vehicles, the \gls{VM} forwards the list of $ID^{veh_{u_o}}$ to the \gls{VSSP}. Each \gls{VSSP} server receives $K^{veh_{u_o}}$ in secret-shared form, which is indistinguishable from randomness. Hence, these servers learn nothing about the vehicle secret key by virtue of the statistical security of the \gls{MPC}. In a nutshell, to retrieve the $y$th key from $DB^{\mathsf{S}_i}$, i.e., $[K^{veh_{u_o}}_y]$, each $\mathsf{S}_i$ performs an equality check over \gls{MPC}. The comparison outputs 0 for a mismatch and 1 for identifying the vehicle at position $y$, i.e., \begin{equation*} \vec{D}^{veh_{u_o}}= \Big(\overset{1}{[0]}\cdots\overset{}{[0]}\overset{y}{[1]}\overset{}{[0]}\cdots\overset{n}{[0]}\Big) \enspace , \end{equation*} from which the share of the vehicle's secret key, $[K^{veh_{u_o}}]$, can be retrieved. Due to the properties of threshold secret sharing, the secret vehicle keys stay secret to each $\mathsf{S}_i$. Thus, among all \gls{VSS} entities, only the \gls{VM} and the vehicle hold the vehicle key. \paragraph*{SR5 -- Backward and forward secrecy of $AT^{veh_{u_o}}$} The \gls{AT} is published on the \acrfull{PL} as $C^{u_c}$ of (\ref{eqn:Cuc}), encrypted using $\pi^{u_c}_{enc}$ (i.e., symmetric key $K_{enc}^{u_c}$).
Every honest $u_c$ generates a uniformly randomly drawn function $\pi^{u_c}_{enc}$ (a fresh key $K_{enc}^{u_c}$) for every new evaluation. Formally, it uses a secure key derivation function $\mathsf{kdf}$, built on a \gls{PRF}, for each key generation in every new evaluation of the protocol. This implies that all session keys are drawn independently and uniformly at random. In addition, the symmetric encryption scheme $\mathsf{E}$ is multi-key secure. Thus, all encryptions $C^{u_c}$ are independent and reveal nothing of each other. Note that nothing can be said about \glspl{AT} for malicious users who may deviate from the protocol and reuse one-time keys. \paragraph*{SR6 -- Non-repudiation of origin of $AT^{veh_{u_o}}$} The vehicle, which is a trusted entity, verifies the origin through verification of the signature, i.e., $\mathsf{verify}(Pk^{u_o},BD^{u_o, u_c},\sigma^{u_o})$. The consumer $u_c$ verifies the origin through the verification of the \gls{MAC} function, i.e., \begin{equation*} AuthTag^{BD^{u_o, u_c}} \stackrel{?}{=} \mathsf{mac}(K^{u_c}_{tag_{mac}},\pi^{u_c}_{tag_{enc}}(BD^{u_o, u_c}))\,. \end{equation*} Note that $u_c$ does not effectively verify $AT^{veh_{u_o}}$ but rather $AuthTag^{BD^{u_o, u_c}}$, which suffices under the assumption that the \gls{MPC} servers evaluate their protocol correctly. In either case, security fails only if the asymmetric signature scheme or the \gls{MAC} function is forgeable. The former is already captured by event $\mathsf{E}_1$ in (\ref{eqn:event1}). For the latter, denote the event of this happening by \begin{align} &\mathsf{E}_3\;:\;\mathcal{A} \text{ forges } \mathsf{mac}(K^{u_c}_{tag_{mac}},\cdot) \text{ for some }K^{u_c}_{tag_{mac}}\,. \label{eqn:event3} \end{align} We thus obtain that a violation of SR6 implies $\mathsf{E}_1\vee\mathsf{E}_3$.
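The $AuthTag$ check above is an instance of encrypt-then-MAC. The following is a minimal, standard-library sketch of this check; a toy XOR stream built from SHA-256 stands in for the AES-based $\mathsf{E}$ of the actual system, and all key and nonce values are illustrative placeholders:

```python
import hashlib
import hmac

def toy_stream_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Toy XOR stream cipher standing in for E (NOT the AES used by HERMES)."""
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))  # XOR is its own inverse

def make_auth_tag(k_tag_enc: bytes, k_tag_mac: bytes, nonce: bytes, bd: bytes) -> bytes:
    """AuthTag = mac(K_tag_mac, E(K_tag_enc, BD)): a MAC over the ciphertext."""
    ct = toy_stream_encrypt(k_tag_enc, nonce, bd)
    return hmac.new(k_tag_mac, ct, hashlib.sha256).digest()

def verify_auth_tag(k_tag_enc: bytes, k_tag_mac: bytes, nonce: bytes,
                    bd: bytes, tag: bytes) -> bool:
    """The consumer's '?=' check, in constant time to avoid timing leaks."""
    expected = make_auth_tag(k_tag_enc, k_tag_mac, nonce, bd)
    return hmac.compare_digest(expected, tag)
```

Tampering with either the booking details or the tag makes `verify_auth_tag` fail; producing a fresh valid tag without the key is exactly a forgery of $\mathsf{mac}$, i.e., event $\mathsf{E}_3$.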
\paragraph*{SR7 -- Non-repudiation of $AT^{veh_{u_o}}$ receipt by $veh_{u_o}$ at $u_o$} The owner $u_o$ can verify the correct delivery of $AT^{veh_{u_o}}$ through the successful verification of the message sent by the vehicle to the owner, $\mathsf{verify}(Pk^{veh_{u_o}},\{BD^{u_o, u_c},TS^{veh_{u_o}}_{Access}\}, \sigma^{veh_{u_o}}_{Access})$, at the end of the protocol. Security breaks only if the signature scheme is forgeable. Denote the event of this happening by \begin{align} &\mathsf{E}_4\;:\;\mathcal{A} \text{ forges } \mathsf{sign}(Sk^{veh_{u_o}},\cdot) \text{ for some }Sk^{veh_{u_o}}\,. \label{eqn:event4} \end{align} We thus obtain that a violation of SR7 implies $\mathsf{E}_4$. \paragraph*{SR8 -- Accountability of users (i.e., owner $u_o$ and consumer $u_c$)} In case of wrongdoing or a dispute, a specific transaction may need to be retrieved and its information reconstructed (and only this information). Reconstruction of the information is possible under the condition that the \gls{VSSP} servers collude to reveal the shares of a transaction. However, in our setting these servers have competing interests. They would not collaborate and collude to reveal the shares of a transaction unless there is a wrongdoing and a request from legitimate entities such as law authorities. In our scenario, the private inputs, i.e., the information of transactions, can be reconstructed by a majority coalition due to the properties of threshold secret sharing~\cite{DBLP:conf/ccs/ArakiFLNO16,DBLP:journals/cacm/Shamir79}. That is, if the \gls{VSSP} consists of three servers, two server shares suffice to reconstruct the secret. \paragraph*{PR1 -- Unlinkability of (any two) requests of any consumer, $u_c$, and the vehicle, $veh_{u_o}$(s)} Consumer- and vehicle-identifiable data are included only in $BD^{u_o, u_c}$. It contains the certificate of $u_c$, $\mathit{Cert}^{u_c}$, and the identities of $u_c$, $ID^{u_c}$, and the vehicle, $ID^{veh_{u_o}}$.
Recall that the $BD^{u_o, u_c}$ data are agreed between $u_c$ and $u_o$ before HERMES\@\ commences, so $u_o$ learns the identity of $u_c$ by default (prerequisite \textit{Step~B: vehicle booking} in Sec.~\ref{sec:system}). Beyond that, $u_c$ communicates only with the vehicle, $veh_{u_o}$, to forward the $AT^{veh_{u_o}}$ and perform access control. The consumer, $u_c$, queries the $AT^{veh_{u_o}}$ using an anonymous communication channel such as Tor~\cite{torproject}. The \gls{BD} data are exchanged with the \gls{VSSP} encrypted and do not leak information by virtue of their confidentiality (security requirement SR1). \paragraph*{PR2 -- Anonymity of any consumer, $u_c$, and vehicle, $veh_{u_o}$} The reasoning is identical to that of PR1. \paragraph*{PR3 -- Indistinguishability of $AT^{veh_{u_o}}$ operations} HERMES\@\ utilizes the same steps and type of messages to the \gls{VSSP} and \gls{PL} for access token generation, update, or revocation operations. Hence, system entities and outsiders cannot distinguish which type of operation has been requested. \paragraph{Conclusion} HERMES\@\ operates securely as long as the costs of (\ref{eqn:cost1}-\ref{eqn:cost3}), together with the probability that one of the events (\ref{eqn:event1}-\ref{eqn:event4}) occurs, are sufficiently small: \begin{align*} & \advprf{\mu_c}(2q,t) + \advpke{l}(t) \:+\\ &\qquad\qquad\advske{2q+\mu_\mathrm{veh_{u_o}}}(3q,t) \:+\\ &\qquad\qquad\qquad\qquad\Pr\left(\mathsf{E}_{1}\vee\mathsf{E}_{2}\vee\mathsf{E}_{3}\vee\mathsf{E}_{4}\right) \ll 1\,. \end{align*} By design, the probability that the event $\mathsf{E}_1\vee\mathsf{E}_4$ occurs is upper bounded by $\advsign{\mu_o+\mu_{\mathrm{veh_{u_o}}}}(2q,t)$; the probability that event $\mathsf{E}_3$ occurs is upper bounded by $\advmac{q}(q,t)$, and the probability that $\mathsf{E}_2$ occurs is upper bounded by $ \mathrm{Adv}_{\mathsf{hash}}^{\mathrm{col}}(t)$.
We thus obtain: \begin{align*} &\Pr\left(\mathsf{E}_{1}\vee\mathsf{E}_{2}\vee\mathsf{E}_{3}\vee\mathsf{E}_{4}\right)\\ \leq\:& \advsign{\mu_o+\mu_{\mathrm{veh_{u_o}}}}(2q,t) + \advmac{q}(q,t) + \mathrm{Adv}_{\mathsf{hash}}^{\mathrm{col}}(t)\,, \end{align*} which completes the proof. \end{proof} \section{Performance Evaluation and Analysis}\label{sec:protocol_evaluation} \begin{table*}[ht!] \caption {HERMES\@\ performance: efficiency and scalability of \acrfull{AT} generation for a varying number of vehicles, using CBC-MAC-AES and HtMAC-MiMC. Throughput is evaluated for all servers, and communication cost per server.} \label{tab:dabit} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{center} \begin{tabularx}{\textwidth}{lCCCCC} \toprule \toprule Type of Vehicle Owners & Protocol & Number of Vehicles per Owner & Communication Rounds & Communication Data (kB) & Throughput (ops/s) \\ \midrule & CBC-MAC-AES & 1 & 568 & 64 & 33 \\ & HtMAC-MiMC & 1 & 167 & 108 & 546 \\ \cmidrule{2-6} \multirow{3}{*}{Individuals} & CBC-MAC-AES & 2 & 568 & 64 & 32 \\ & HtMAC-MiMC & 2 & 167 & 108 & 546 \\ \cmidrule{2-6} & CBC-MAC-AES & 4 & 568 & 107.7 & 32 \\ & HtMAC-MiMC & 4 & 167 & 117 & 544 \\ \midrule & CBC-MAC-AES & 256 & 568 & 76 & 32 \\ & HtMAC-MiMC & 256 & 167 & 150 & 260 \\ \cmidrule{2-6} \multirow{3}{*}{Vehicle-rental company branches} & CBC-MAC-AES & 512 & 568 & 88 & 32 \\ & HtMAC-MiMC & 512 & 167 & 194 & 151 \\ \cmidrule{2-6} & CBC-MAC-AES & 1024 & 568 & 112 & 32 \\ & HtMAC-MiMC & 1024 & 167 & 280 & 84 \\ \bottomrule \bottomrule \end{tabularx} \end{center} \end{table*} We argue that HERMES\@\ fulfills its performance requirement for efficiency and scalability in supporting a large volume of vehicles per user, as in a real-world deployment (see Sec.~\ref{sec:system_model}).
\begin{figure}[t] \centering \resizebox{\columnwidth}{!}{% \includegraphics{nexcom.jpeg}} \caption{Nexcom vehicular \acrfull{OBU} box~\cite{nexcom:vtc6201-ft}.} \label{nexcom} \end{figure} \subsection{Benchmark and environment settings} In HERMES\@\ we take a different approach to SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, as we implement our protocols in a fully-fledged, open-source \gls{MPC} framework, i.e., MP-SPDZ \cite{DBLP:conf/ccs/Keller20} (in Step~2 -- see Fig.~\ref{fig:step4}). The framework supports more than $30$ \gls{MPC} protocols carefully implemented in {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}. In addition, a Python front-end compiler allows the expression of circuits in a relatively simple way. For MP-SPDZ, the compiler reduces the high-level program description to bytecode, i.e., a set of instructions, for which the parties then run an optimized virtual machine written in {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}\ to execute the protocols. In our case, two versions of HERMES\@\ were benchmarked: one with CBC-MAC, tailored for binary circuits, and one with HtMAC, tailored for arithmetic circuits. We deployed the cryptographic operations in Step~1, Step~3 and Step~4 with OpenSSL~\cite{openssl} and a Python script for the secret sharing implementation~\cite{DBLP:conf/ccs/ArakiFLNO16}. For our benchmark we use the following settings: $\length{M^{u_c}} = \length{AT^{veh_{u_o}}} = 10 \cdot 128$-bits, whereas $ID^{veh_{u_o}} \leq 2^{32}$, which thus fits into one $128$-bit string, and $\length{BD^{u_o, u_c}} = 6 \cdot 128$-bits (including padding).
Specifically, we consider $BD^{u_o, u_c}$ with the following message configuration and sizes: the vehicle identifier $ID^{veh_{u_o}}$ of $32$-bits, the location of the vehicle $L^{veh_{u_o}}$ of $64$-bits, the hashed certificate value of $u_c$, $\mathsf{hash}(Cert^{u_c})$, of $512$-bits, the \gls{BD} identifier $ID^{BD}$ of $32$-bits, and the conditions and access rights for accessing a vehicle by $u_c$, $CD^{u_c}$ of $96$-bits and $AC^{u_c}$ of $8$-bits, respectively. The $BD^{u_o, u_c}$ signature, $\sigma^{u_o}$, that $u_o$ will provide is of $2048$-bits, using RSA-PKCS $\#1$ v2.0~\cite{rfc_2437}. \paragraph{Environment Settings} We benchmarked our protocols using three distinct computers connected over a LAN, each equipped with an Intel i$7$-$7700$ CPU at $3,60$~GHz and $32$~GB of RAM. For intra-\gls{VSSP} communication, we consider a $10$~Gb/s network switch and $0.5$~ms \gls{RTT}.~\footnote{The implementation can be obtained from: https://github.com/rdragos/MP-SPDZ/tree/hermes} The vehicular \gls{OBU} \textit{Nexcom VTC 6201-FT} box (see Fig.~\ref{nexcom}) is used to benchmark Step~4. It is equipped with an Intel Atom-D$510$ CPU at $1,66$~GHz and $1$~GB of RAM~\cite{nexcom:vtc6201-ft}, from the PRESERVE project~\cite{preserve}.~\footnote{The prototype \gls{OBU} is a hardware module without any additional cryptographic hardware accelerator.} \subsection{Theoretical Complexity} Measuring the complexity of an \gls{MPC} protocol usually boils down to counting the number of non-linear operations in the circuit and the circuit depth. We consider the case where a protocol is split into two phases: an input-independent (preprocessing) phase, whose goal is to produce correlated randomness, and an input-dependent (online) phase, where the parties in the \gls{VSSP} provide their inputs and exchange data using the correlated randomness produced beforehand.
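The split into phases can be made concrete with a toy additive secret sharing over a prime field. This is an illustrative sketch only: the modulus is an assumption, and the deployed system uses the replicated sharing of Araki et al.~rather than plain additive sharing. Linear operations are local to each server, while $\mathsf{open()}$ costs one round of communication:

```python
import random

P = 2**61 - 1  # toy prime modulus (an assumption, not MP-SPDZ's parameters)

def share(x, n=3):
    """Additive secret sharing: x = sum(shares) mod P, one share per server."""
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def open_(shares):
    """The online-phase open(): all servers broadcast their shares and sum them."""
    return sum(shares) % P

# Linear operations are local to each server (no communication needed):
a, b = 17, 25
sa, sb = share(a), share(b)
s_sum = [(x + y) % P for x, y in zip(sa, sb)]
assert open_(s_sum) == (a + b) % P  # addition of two secrets

s_scaled = [(3 * x) % P for x in sa]
assert open_(s_scaled) == (3 * a) % P  # multiplication by a public constant
```

Because additions and multiplications by public constants are local, the cost accounting only needs to count non-linear gates.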
One secret multiplication (or an AND gate for the $\mathbb{F}_2$ case) requires one random Beaver triple (correlated randomness)~\cite{DBLP:conf/crypto/Beaver91a} from the preprocessing phase and two $\mathsf{open()}$ operations in the online phase. Note that, in our case, the two versions of HERMES\@\ are benchmarked using the following two executables: \textit{replicated-bin-party.x} ($\mathbb{F}_2$ case, CBC-MAC) and \textit{replicated-field-party.x} ($\mathbb{F}_p$ case, HtMAC). The first executable is the implementation of Araki et al.'s binary-based protocol~\cite{DBLP:conf/ccs/ArakiFLNO16}, while the latter is for the field case. Next, we analyze the complexity of these two separately and motivate the two choices. \paragraph{\textit{CBC-MAC-AES -- Case for Binary Circuits}} This solution is implemented to have a baseline comparison with SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} using MP-SPDZ~\cite{DBLP:conf/ccs/Keller20}. The equality check is implemented using a binary tree of AND operations with $\log{n}$ depth, where $n$ is the number of vehicles (see Fig.~\ref{fig:step2}). Obtaining the corresponding vehicle key $[K^{veh_{u_o}}]$, assuming there are $n$ vehicles per user, has a cost of $159 \cdot n$ Beaver triples, assuming $32$-bit vehicle IDs. When evaluating the operations depicted in~Fig.~\ref{fig:step2} in \gls{MPC}, the most expensive part is computing $[AT^{veh_{u_o}}]$, since that requires encrypting $10 \cdot 128$ bits, calling AES $10$ times, which has a cost of $6400 \cdot 10$ AND gates. In the next step (line~$8$, Fig.~\ref{fig:step2}), AES is called $11$ times, while the operation computing CBC-MAC-AES (line~$10$, Fig.~\ref{fig:step2}) takes only $6$ AES calls. Given the above breakdown, generating an \acrfull{AT} has a theoretical cost of $159 \cdot n + 6400 \cdot 28$ AND gates.
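The Beaver-triple accounting above can be sketched as follows: one secret multiplication consumes one preprocessed triple $(a, b, c)$ with $c = ab$ and exactly two $\mathsf{open()}$ calls in the online phase. This is a toy additive-sharing illustration with an assumed modulus, not MP-SPDZ's actual implementation:

```python
import random

P = 2**61 - 1  # toy prime modulus (an assumption, not MP-SPDZ's parameters)

def share(x, n=3):
    """Additively share x among n servers: x = sum(shares) mod P."""
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def open_(shares):
    """Online-phase open(): servers broadcast their shares and sum them."""
    return sum(shares) % P

def beaver_mul(sx, sy):
    """Multiply two shared values using one Beaver triple and two open() calls."""
    # Preprocessing phase: a random triple (a, b, c = a*b) in shared form.
    # (Here a dealer samples it in the clear; a real offline protocol would
    # produce the shares without any single party learning a or b.)
    a, b = random.randrange(P), random.randrange(P)
    sa, sb, sc = share(a), share(b), share((a * b) % P)
    # Online phase: the two open() operations reveal x - a and y - b, which
    # are uniformly random masked values and hence leak nothing about x, y.
    eps = open_([(x - u) % P for x, u in zip(sx, sa)])
    dlt = open_([(y - v) % P for y, v in zip(sy, sb)])
    # x*y = c + eps*b + dlt*a + eps*dlt; all remaining terms are linear.
    sz = [(w + eps * v + dlt * u) % P for w, u, v in zip(sc, sa, sb)]
    sz[0] = (sz[0] + eps * dlt) % P  # the public term is added by one server
    return sz

assert open_(beaver_mul(share(6), share(7))) == 42
```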
\paragraph{\textit{HtMAC-MiMC -- Case for Arithmetic Circuits}} Recent results of Rotaru et al.~\cite{rotaru2017modes} showed that, when considering \gls{MPC} over arithmetic circuits, efficient modes of operation over encrypted data are possible if the underlying \gls{PRF} is MPC-friendly. We integrate their approach~\cite{rotaru2017modes} into HERMES\@, and the results from Table~\ref{tab:dabit} show that it is at least $16$ times faster than using \gls{MPC} over binary circuits with CBC-MAC-AES. This might come as a surprise, because comparisons are more expensive in arithmetic circuits. However, recent improvements using edaBits~\cite{DBLP:conf/crypto/0001GKRS20} made comparisons much faster, which, in turn, improved the \gls{MPC} protocols used by HERMES\@. To summarize, we break down the cost into the following: \begin{itemize} \item $10$ calls to MiMC to encrypt $M^{u_c}$ (excluding one call for computing the tweak according to \cite{rotaru2017modes}), \item $11$ more calls to compute $C^{u_c}$ -- encrypting the concatenation of $AT^{veh_{u_o}}$ and $ID^{veh_{u_o}}$. Note that since we are using a different key than in the first step, we need to compute another tweak (one extra \gls{PRF} call), \item $6$ calls to compute $AuthTag^{BD^{u_o, u_c}}$, one more \gls{PRF} call for computing the tweak $N = E_{\vec{K}^{u_c}_{tag}[0]}(1)$, and a final \gls{PRF} call $E_{\vec{K}^{u_c}_{tag}[1]}(\mathsf{hash}'(ct))$, where $ct$ are the opened ciphertexts from encrypting $BD^{u_o, u_c}$ and $\mathsf{hash}'(\cdot)$ is a truncated version of SHA-3 keeping the first $128$ bits of the hash output~\cite{rotaru2017modes}. \end{itemize} If we include the \gls{PRF} calls to compute the tweaks, there are $31$ calls to a \gls{PRF}, so one might think that the Boolean case is more efficient than the arithmetic case. In practice, we see that the HtMAC construction is faster than CBC-MAC-AES, albeit with a factor of two communication overhead (see Table~\ref{tab:dabit}).
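To make the \gls{PRF}-call accounting concrete, the following is a toy MiMC-style evaluation over $\mathbb{F}_p$: each round is a single cubing, i.e., two secret multiplications in \gls{MPC}, which is what makes the cipher MPC-friendly compared to AES's large Boolean circuit. The prime, round count, and round constants below are illustrative assumptions, not the benchmark parameters:

```python
import hashlib

P = 999983   # toy prime with P % 3 == 2, so x -> x^3 is a permutation mod P
ROUNDS = 13  # ceil(log_3 P); real instantiations use a much larger prime

# Fixed public round constants, derived from a hash for reproducibility.
CONSTANTS = [
    int.from_bytes(hashlib.sha256(b"mimc" + bytes([i])).digest(), "big") % P
    for i in range(ROUNDS)
]

def mimc_encrypt(key: int, x: int) -> int:
    """E_k(x): x_{i+1} = (x_i + k + c_i)^3 mod P, with a final key addition."""
    for c in CONSTANTS:
        x = pow(x + key + c, 3, P)  # one cubing = two secret multiplications
    return (x + key) % P

# For a fixed key, E_k is a permutation of F_p, so distinct inputs give
# distinct outputs.
assert mimc_encrypt(12345, 1) != mimc_encrypt(12345, 2)
```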
One reason for this is that HtMAC is fully parallelizable, resulting in an \gls{MPC} protocol with fewer rounds than CBC-MAC-AES. One of the main benefits of the HtMAC construction is that it can be instantiated with the Legendre \gls{PRF}, which can make the number of communication rounds even lower. We chose the MiMC-based \gls{PRF} as it is demonstrated to be faster on a LAN~\cite{rotaru2017modes} and to have a lower communication overhead -- although a higher number of communication rounds than the Legendre-based \gls{PRF}~\cite{DBLP:conf/crypto/Damgard88}. \subsection{Benchmark Results - Efficiency and Total Time} Although our protocol's construction is agnostic to the underlying \gls{MPC}, its efficiency depends on the chosen \gls{MPC} scheme. We evaluate HERMES\@\ utilizing the semi-honest 3-party protocol by Araki et al.~\cite{DBLP:conf/ccs/ArakiFLNO16}. We evaluate its efficiency in both Boolean and arithmetic circuits with CBC-MAC-AES and HtMAC-MiMC, respectively. Note that we report timings for cryptographic operations and secure multiparty evaluations, leaving \gls{DB} access as well as client--server and client--vehicle communication latency outside of our evaluations, as these are not dependent on the actual system construction. \paragraph*{\textit{Step~1}} Recalling Step~1 (see Fig.~\ref{fig:step1}), the operations of \textit{session key generation and \gls{BD} sharing} take place at both users, the owner and the consumer. At the consumer, $u_c$, the session key generation, using the $\mathsf{kdf}$ function, is implemented with AES in CTR mode ($\approx 2,87$~ms). The session keys are encrypted, using the $\mathsf{enc}$ function, following the RSA-KEM specification~\cite{rfc_5990} with a $2048$-bit key size ($\approx 9,53$~ms). At the owner, $u_o$, the signature of the \gls{BD} is generated, using the $\mathsf{sign}$ function, following the RSA-PKCS $\#1$ v2.0 specification~\cite{rfc_2437} with $2048$-bit output ($\approx 4,25$~ms).
The creation of the secret shares, i.e., the $\mathsf{share}$ function, is implemented with the sharing primitive of Araki et al.~\cite{DBLP:conf/ccs/ArakiFLNO16} ($\approx 10,78$~ms). This results in a total estimated cost of $\approx 52,7$~ms for Step~1. \paragraph*{\textit{Step~2}} In Step~2, the \gls{AT} generation takes place at the \gls{VSSP} (see Fig.~\ref{fig:step2}). We report the full range of experiments for a varying number of vehicles $veh_{u_o}$ per owner $u_o$, as illustrated in Fig.~\ref{fig:sims-er}: Fig.~\ref{fig:hermes-comm} regarding the intra-\gls{VSSP} communication cost, and Fig.~\ref{fig:hermes-tru} the throughput between servers. Specifically, we vary the number of vehicle IDs (i.e., the number of vehicles registered per owner) and compute the communication rounds and data sent between the \gls{VSSP} servers. We also compute the total throughput, i.e., the total number of \glspl{AT} generated per second (see Fig.~\ref{fig:step2}). In Table~\ref{tab:dabit}, we report the performance for a low number of vehicle IDs (i.e., $1,2,4$), representing individuals, but also for a large number of vehicles (i.e., $256,512,1024$), representing (large branches of) vehicle-rental companies.
\begin{figure*} \centering \resizebox{\textwidth}{!}{% \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture} \begin{axis}[ tick pos = left, legend style = {cells={anchor=west}}, legend pos = north west, legend entries = { {CBC-MAC-AES}, {HtMAC MiMC}, }, ylabel = {Communication cost (kB)}, xlabel = {Number of vehicles per owner}, ymode=log, xmode=log, log basis y={10}, ] \addplot table[x=cars, y=comms] {experiments/cbc-comm}; \addplot table[x=cars, y=comms] {experiments/htmac-comm}; \end{axis} \end{tikzpicture} \caption{Communication cost per server: The intra-\gls{VSSP} communication cost for \acrfullpl{AT} generation (i.e., data sent and received).} \label{fig:hermes-comm} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \begin{tikzpicture} \pgfplotsset{every axis legend/.append style={at={(10.5,10.1)},anchor=west}} \begin{axis}[ ytick pos = left, ylabel = {Throughput (ops/sec)}, xlabel = {Number of vehicles per owner}, ymode=log, xmode=log, log basis y={10}, ] \addplot table[x=cars, y=tru] {experiments/cbc-comm}; \addplot table[x=cars, y=tru] {experiments/htmac-comm}; \end{axis} \end{tikzpicture} \caption{Throughput for all servers: The throughput of all servers in the \gls{VSSP} for \acrfullpl{AT} generation (i.e., operations per second).} \label{fig:hermes-tru} \end{subfigure} } \caption{Communication cost and throughput at Step~2 (see Fig.~\ref{fig:step2}) for a varying number of vehicles per owner -- from private individuals with a few vehicles to rental companies with hundreds or thousands of vehicles per branch.} \label{fig:sims-er} \end{figure*} We can see that the throughput of the \gls{AT} generation when instantiated using CBC-MAC-AES remains constant, whereas, for HtMAC-MiMC, it decreases. The reason for this is that when scaling up the number of vehicles, the number of comparisons increases as well. For arithmetic circuits, the comparisons become costly operations, whereas, for Boolean circuits, comparisons can be made efficiently.
However, the throughput for HtMAC-MiMC is always better than for CBC-MAC-AES, and this is because the MiMC-based \gls{PRF} is more lightweight -- requiring fewer multiplications -- and has a smaller circuit depth. \paragraph*{\textit{Step~3}} In Step~3, the consumer queries, retrieves, verifies, and decrypts the given \gls{AT} (see Fig.~\ref{fig:step3}). The verification of the \gls{AT} is implemented using the $\mathsf{mac}$ function ($\approx 3,49$~ms). The total cost is $\approx 6,65$~ms for Step~3. \paragraph*{\textit{Step~4}} In Step~4, the consumer delivers the \gls{AT} to the \gls{OBU} of the vehicle, which decrypts it and verifies the signature (see Fig.~\ref{fig:step4}). The cryptographic operations are benchmarked on the Nexcom \gls{OBU} box~\cite{nexcom:vtc6201-ft,preserve}. The decryption of the \gls{AT} with the vehicle key, using the $\mathsf{D}$ function, is implemented with AES in CTR mode ($\approx 3,15$~ms). The verification of the signature of the \gls{BD}, using the $\mathsf{verify}$ function, is implemented with RSA $2048$ ($\approx 15,16$~ms). Finally, the signature is generated, using the $\mathsf{sign}$ function, with $2048$-bit output ($\approx 32,43$~ms). Note that the challenge-response protocol between the consumer and the vehicle does not directly affect the performance of HERMES\@, and thus we omit it from our implementation and measurements. The total cost is $\approx 62,087$~ms for Step~4. \paragraph*{\textit{Total}} The total cost of our cryptographic operations and \gls{MPC} evaluations, considering the arithmetic-circuit case (i.e., HtMAC-MiMC), is {$\approx 127,37$}~ms for a single-vehicle owner, and {$\approx 137,44$}~ms for a thousand vehicles per owner. HERMES\@\ handles $546$ and $84$ access token generations per second, respectively. In addition, the client-side \glspl{PD} of owner and consumer, and the vehicle \glspl{OBU}, need to perform only a few symmetric encryptions, signature and verification operations, making HERMES\@\ practical.
\subsection{Comparison with SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}} The main difference in efficiency and scalability between HERMES\@\ and SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} lies in Step~2 (see Table 2 in~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}) -- the intra-\gls{VSSP} communication cost and throughput. SePCAR\@\ reports $\approx 1.2$ seconds for generating the access token. When benchmarked on similar hardware, we get a throughput of $33$ access tokens per second.~\footnote{SePCAR\@\ specifications: Intel $i7$, $2.6$~Ghz CPU and $8$GB of RAM.} This makes HERMES\@\ with the CBC-MAC-AES construction roughly $42$ times faster than SePCAR\@. Switching from CBC-MAC-AES to HtMAC offers a throughput of $546$ \glspl{AT} per second, which is $\approx 16.5$ times better than CBC-MAC and around $696$ times faster than the original timings in SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}. Thus, these results, specifically for Step~2 (see Fig.~\ref{fig:step2}), demonstrate the benefits of integrating our solution in a fully-fledged \gls{MPC} framework such as MP-SPDZ~\cite{DBLP:conf/ccs/Keller20}. We stress that our implementation of SePCAR\@\ was faster due to writing CBC-MAC-AES in a mature \gls{MPC} framework such as MP-SPDZ rather than using custom code as in \cite{DBLP:conf/esorics/SymeonidisAMMDP17}. \subsection{Satisfying ESR1 -- Efficiency and scalability in a real-world deployment} We demonstrate that HERMES\@\ maintains its efficiency and is scalable, supporting owners with anywhere from a few up to a thousand vehicles, as for (branches of) car-rental companies. To argue about the real-world deployment aspect, we first need to answer: ``\textit{how many vehicles per branch exist in a real-world deployment?}'' There are on average a few hundred vehicles per branch (average $\approx~230$, median $\approx~122$) in the U.S.
in 2018~\cite{statistics:num-of-cars-per-carsharing} -- drawing an analogy between \gls{VSS} and car-rental scenarios. The number ranged from tens of vehicles (i.e., $\approx~29$) to an upper bound of almost a thousand (i.e., $\approx~900$) vehicles per branch. Thus, supporting $1024$ vehicles per single owner (e.g., per branch) is a safe approximation for HERMES\@, as in car-rental scenarios. A follow-up question is: ``\textit{how many daily vehicle-sharing operations are performed in a \gls{VSS}?}'' This corresponds to the number of \gls{AT} generations in Step~2 (see Fig.~\ref{fig:step2}). According to reports~\cite{statistics:num-of-transactions-europe,statistics:num-of-transactions-avis-worldwide}, the total number of sharing operations of all car-rental transactions in Europe in 2017 was $86,41$~M ($\approx 237,000$ daily)~\cite{statistics:num-of-transactions-europe}. Worldwide, the number of sharing operations amounts to $40$~M transactions in 2019 ($\approx 110,000$ daily) for the Avis Budget group, one of the world-leading car-rental companies~\cite{statistics:num-of-transactions-avis-worldwide}.~\footnote{Assuming a uniform distribution for approximating the daily number of operations is reasonable.} As HERMES\@\ supports a volume of $\approx~58,06$~M daily \gls{AT} generations (see Table~\ref{tab:dabit}) -- considering the demanding scenario where an owner (i.e., a branch) shares a thousand vehicles -- our results show two orders of magnitude more \gls{AT} generations than the daily needs of real-world car-rental scenarios.
Note that, for comparison, we consider the Step~2 computations for \gls{AT} generation (see Fig.~\ref{fig:step2}) -- the intra-\gls{VSSP} computations can hinder efficiency when scaling to multiple vehicles for a single owner.~\footnote{Recall that the costs are the non-linear operations, such as comparisons over \gls{MPC}, the $\mathsf{eqz}$ function, to retrieve the vehicle keys in $\vec{D}^{u_o}$.} Thus, HERMES\@\ can scale and remain efficient, carrying out millions of \gls{AT} operations daily and supporting a large number of vehicles per owner for short-term rental. Hence, it satisfies \textit{ESR1}. HERMES\@\ can straightforwardly be extended to support a vehicle-sharing company. Considering a single owner as a branch-holder, a vehicle-sharing company can create multiple owners within HERMES\@, each managing their corresponding number of vehicles. Recall that each owner, i.e., branch-holder, can retrieve the corresponding set of their vehicles with a simple query on the database, $DB^{S_i}$, that each \gls{VSSP} server holds. The query operation is parallelizable, and its efficiency and scalability depend mainly on the underlying database structures and technologies, and are thus orthogonal to HERMES\@. \section{Related Work}\label{sec:related_work} \begin{table*}[h] \rowcolors{4}{}{gray!25} \centering \caption{Comparison of HERMES\@\ with state-of-the-art \acrfullpl{VSS} in terms of solution-design requirements, and \acrfull{SP} trust assumptions (see Sec.~\ref{sec:system_model}).
Papers are listed in chronological order.} \label{table:vehicle-access-comp} \resizebox{\textwidth}{!}{% \begin{tabular}{lccccccccccccccccc} \toprule \toprule \multicolumn{1}{c}{\multirow{2}{*}{Paper}} & \multicolumn{2}{c}{Functional} & \multicolumn{8}{c}{Security} & \multicolumn{3}{c}{Privacy} & \multicolumn{2}{c}{Performance} & \multicolumn{2}{c}{\multirow{2}{*}{\gls{SP} assumption}} \\ \cmidrule{2-16} \multicolumn{1}{c}{} & FR1 & FR2 & SR1 & SR2 & SR3 & SR4 & SR5 & SR6 & SR7 & SR8 & PR1 & PR2 & PR3 & 1-1 & ESR1 & \\ \toprule \toprule Busold et al.~\cite{DBLP:conf/codaspy/BusoldTWDSSS13} & \checkmark & \checkmark & - & \checkmark & - & \checkmark & \checkmark & - & - & - & - & - & - & \checkmark & - & \textcolor{red}{Trusted} \\ [1ex] Kasper et al.~\cite{DBLP:conf/rfidsec/KasperKOZP13} & \checkmark & - & \checkmark & \checkmark & \checkmark & - & - & \checkmark & \checkmark & \checkmark & - & - & - & \checkmark & - & \textcolor{red}{Trusted} \\ [1ex] Wei et al.~\cite{DBLP:journals/access/WeiYWWD17} & \checkmark & \checkmark & - & - & - & - & - & \checkmark & - & \checkmark & - & - & - & \checkmark & - & \textcolor{red}{Trusted} \\ [1ex] Groza et al.~\cite{DBLP:conf/vehits/GrozaAM17} & \checkmark & - & - & \checkmark & - & - & - & - & - & - & - & - & - & \checkmark & - & \textcolor{red}{Trusted} \\ [1ex] Dmitrienko and Plappert~\cite{DBLP:conf/codaspy/DmitrienkoP17} & \checkmark & - & \checkmark & \checkmark & \checkmark & - & & \checkmark & - & \checkmark & - & - & - & \checkmark & - & \textcolor{red}{Trusted} \\ [1ex] SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & - & UnTrusted \\ [1ex] Groza et al.~\cite{DBLP:journals/access/GrozaABMG20} & \checkmark & \checkmark & - & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & - & \checkmark & - & - & - & \checkmark &
- & \textcolor{red}{Trusted} \\ [1ex] \textbf{HERMES\@} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & UnTrusted \\ [1ex] \bottomrule \bottomrule \end{tabular}% } \end{table*} State of the art on \acrfullpl{VSS} ranges from (fully) trusting \acrfullpl{SP} to considering them as having adversarial behavior. Design assumptions on trust affect the selected requirements and, subsequently, solution designs. As illustrated in Table~\ref{table:vehicle-access-comp}, there is a large body of work for secure vehicle access and sharing in \glspl{VSS}. However, users' privacy towards an untrusted \gls{SP} is only considered by~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} and HERMES\@, with the current system design advancing significantly over~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} in terms of efficiency and scalability. All other proposed solution designs for vehicle access and delegation consider the \gls{SP} a trusted entity that collects data for access and sharing operations in \gls{VSS}. The \gls{SP} can thus control users' data, generating and storing the session keys of transactions and the master keys of vehicles. Initially, Busold et al.~\cite{DBLP:conf/codaspy/BusoldTWDSSS13} proposed a protocol for dynamic access to a car's immobilizer with delegation possibilities for access. In their proposed protocol, the vehicle owner and \gls{VM} exchange keys used to encrypt and sign two \acrfullpl{AT} -- one for authenticating the owner accessing the car and one for delegating access rights to a consumer. Confidentiality of the \gls{BD} and \gls{AT} is not preserved, as delegation is performed using a \gls{MAC}-signing operation. Accountability and non-repudiation of origin and delivery of the \gls{AT} are also not preserved due to \gls{MAC}-signing -- the session key is generated by the owner for each delegation operation.
Moreover, the protocol of~\cite{DBLP:conf/codaspy/BusoldTWDSSS13} treats the \gls{VM} as a trusted party holding session keys for authentication and access to the vehicle. Kasper et al.~\cite{DBLP:conf/rfidsec/KasperKOZP13} consider a trusted car-sharing \gls{SP} and an eID. The eID interacts with a user to register the user at the vehicle-sharing \gls{SP}. Their solution design uses public keys for encrypting and signing a delegation, compromising backward and forward secrecy along with privacy requirements. Wei et al.~\cite{DBLP:journals/access/WeiYWWD17} offer a solution similar to~\cite{DBLP:conf/rfidsec/KasperKOZP13}, using identity-based encryption for the generation of the public/private key pairs for the owner and consumer. The owner's keys are generated from the identity of the owner and their car. For the consumer, the inputs are the consumer's identity and the access rights granted for the vehicle. The \gls{BD} is sent in the clear, lacking data and entity authentication verification by the vehicle. Groza et al.~\cite{DBLP:conf/vehits/GrozaAM17} proposed an access control and delegation-of-rights protocol using an MSP430 microcontroller. Their main security operation for delegation is to provide data authentication of the \gls{BD} using a \gls{MAC} cryptographic primitive. Dmitrienko and Plappert~\cite{DBLP:conf/codaspy/DmitrienkoP17} designed a secure free-floating vehicle-sharing system. They proposed using two-factor authentication, RFID-enabled smart cards, and \Glspl{PD} to access a vehicle. In contrast to HERMES\@, their solution design considers a centralized vehicle-sharing \gls{SP} that is fully trusted. The \gls{SP} has access to the vehicles' master keys in the clear. Thus, the \gls{SP} can collect the information exchanged between the vehicle, the \gls{SP}, and each of the users for every vehicle access provision.
In recent work, Groza et al.~\cite{DBLP:journals/access/GrozaABMG20} proposed an access control protocol using smartphones as vehicle keys for vehicle access and delegation-enabled sharing. To preserve the security and anonymity of users accessing a vehicle, they combined identity-based encryption and group signatures. In their solution design~\cite{DBLP:journals/access/GrozaABMG20}, they distinguished two types of sharing: persistent and ephemeral. Although they utilize group signatures for privacy preservation, this applies only to persistent delegation. In ephemeral delegation for dynamic vehicle-sharing, identity-based encryption is used, removing the anonymity properties that group signatures provide. Hence, we consider their solution only a secure (rather than privacy-enhancing) approach to vehicle sharing. SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} improves on the work proposed in~\cite{DBLP:conf/codaspy/DmitrienkoP17} in terms of, among others, the adversarial consideration of the \gls{SP} (i.e., the \gls{VSSP}), the privacy requirements, and the secrecy of vehicle keys towards the \gls{SP}. Specifically, it considers untrusted servers in the \gls{VSSP} for the generation and distribution of \glspl{AT}. The authors utilize \gls{MPC} in combination with several cryptographic primitives. They also consider malicious users and support user accountability, revealing a user's identity in case of wrongdoing. However, SePCAR\@\ was not tested on how it scales to multiple evaluations -- to a large fleet of vehicles with multiple owners, each owning multiple vehicles. HERMES\@\ maintains the design advantages of SePCAR\@~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} and is proven to run significantly faster than~\cite{DBLP:conf/esorics/SymeonidisAMMDP17} due to its optimized design and \gls{MPC} constructions. Work on vehicle-sharing also focuses on operations complementary to access provision, such as booking, payments, and accountability.
Huang et al.~\cite{DBLP:journals/tvt/HuangLNS20} proposed a privacy-preserving identity management protocol focusing on authentication while verifying users' misbehavior. They utilize decentralized entities and a centralized vehicle-sharing \gls{SP}. However, the \gls{SP} is trusted and can learn who is sharing which vehicle with whom. Madhusudan~et~al.~\cite{DBLP:conf/icissp/MadhusudanSMZP19} and De Troch~\cite{de2020dpace} proposed privacy-preserving protocols for booking and payment operations in vehicle-sharing systems. Their protocols utilize smart contracts on the Ethereum blockchain. Trust is placed in cryptographic primitives and the blockchain instead of a centralized \gls{SP}. De Troch~\cite{de2020dpace} also considers accountability in case of misbehavior, in which privacy and the deposit are forfeited to punish malicious behavior. Beyond vehicle-sharing security and privacy, vehicular communications security and privacy have received extensive attention over the years~\cite{DBLP:journals/tits/KhodaeiJP18,9311462,PapadimitratosGH:C:2006}. Recent results focus, for example, on scalable systems, notably for credential management~\cite{DBLP:journals/tits/KhodaeiJP18,DBLP:conf/wisec/KhodaeiNP19,DBLP:journals/corr/abs-2004-03407}, and decentralized cooperative defenses~\cite{DBLP:journals/tissec/JinP19,DBLP:journals/adhoc/JinP19}. Moreover, Huayi et al.~\cite{qi2020scalable} proposed DUBI, an enhanced scheme of~\cite{DBLP:journals/tdsc/TroncosoDKBP11}: a decentralized and privacy-preserving usage-based insurance scheme built on blockchain technology, which addresses privacy concerns for pay-as-you-drive insurance using zero-knowledge proofs and smart contracts.
\section{HERMES\@\ simplified representation for the proof of Theorem~\ref{thmextended}.}\label{appendix:sec_protocol_complete} \newcommand\yproof{51.5} \begin{landscape} \begin{figure}[!ht] \centering \resizebox{1.25\textwidth}{!}{% \fbox{ \begin{tikzpicture} \node (o) at (0,0) {\textbf{Owner ($u_o$)}}; \node[below = \sy em of o] (odot) {}; \draw[dotted] (o)--(odot); \node[right = 8em of o] (vehicle) {\textbf{Vehicle ($veh_{u_o}$)}}; \node[below = \sy em of vehicle] (vehicledot) {}; \draw[dotted] (vehicle)--(vehicledot); \node[right = 9.5em of vehicle] (consumer) {\textbf{Consumer ($u_c$)}}; \node[below = \sy em of consumer] (cdot) {}; \draw[dotted] (consumer)--(cdot); \node[right = 5.5em of consumer] (pl) {\textbf{Public Ledger ($\mathsf{PL}$)}}; \node[below = \sy em of pl] (pdot) {}; \draw[dotted] (pl)--(pdot); \node[right = 13.5em of pl] (s) {\textbf{\gls{VSSP} (trusted)}}; \node[below = \sy em of s] (sdot) {}; \draw[dotted] (s)--(sdot); \node[below = 1em of o] (o11) {}; \node[right = 31em of o11] (consumer11) {}; \draw [<->] (o11) -- node[fill=white] {$BD^{u_o, u_c} = \{\mathsf{hash}(\mathit{Cert}^{u_c}) ,ID^{veh_{u_o}},L^{veh_{u_o}},CD^{u_c},AC^{u_c},ID^{BD}\}$} (consumer11); \node[below = 3em of o] (o1) {}; \node[right = 31em of o1] (consumer1) {}; \draw [->] (o1) -- node[fill=white] {msg$\{SES\_K\_GEN\_REQ, ID^{BD}\}$} (consumer1); \node[below = 1em of consumer1, fill=white, draw, rounded corners] (prf1) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{\pi^{u_c}_{enc},\pi^{u_c}_{tag_{enc}}\} \xleftarrow{{\scriptscriptstyle\$}} \mathrm{Func}(\mathsf{E})$ \STATE $K^{u_c}_{tag_{mac}} \leftarrow \$^{u_c}$ \STATE $C^{VSSP} \leftarrow \rho^{VSSP}(\{\pi^{u_c}_{enc},\pi^{u_c}_{tag_{enc}},K^{u_c}_{tag_{mac}}\})$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of o1, fill=white, draw, rounded corners] (mpc3) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\sigma^{u_o} \leftarrow \mathsf{sign}(Sk^{u_o},BD^{u_o, u_c})$ \STATE 
$M^{u_c} \leftarrow \{BD^{u_o, u_c}, \sigma^{u_o}\}$ \end{algorithmic}% \end{varwidth} }; \node[below = 10em of consumer1] (consumer2) {}; \node[left = 31em of consumer2] (o2) {}; \draw [->] (consumer2) -- node[fill=white] {msg$\{SES\_K\_GEN\_ACK, ID^{BD}, C^{VSSP}\}$} (o2); \node[below = 1em of o2] (o3) {}; \node[right = 67em of o3] (s1) {}; \draw [->] (o3) -- node[fill=white] {msg$_i\{AT\_GEN\_REQ, ID^{u_o}, C^{VSSP}, M^{u_c}\}$} (s1); \node[below = 1em of s1, fill=white, draw, rounded corners] (s2) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{\pi^{u_c}_{enc},\pi^{u_c}_{tag_{enc}},K^{u_c}_{tag_{mac}}\} \leftarrow (\rho^{VSSP})^{-1}(C^{VSSP})$ \STATE $\pi^{veh_{u_o}} \leftarrow \mathsf{query}(ID^{u_o}, DB^{VSSP})$ \STATE $AT^{veh_{u_o}} \leftarrow \pi^{veh_{u_o}}(M^{u_c})$ \STATE $C^{u_c} \leftarrow \pi^{u_c}_{enc}(\{AT^{veh_{u_o}}, ID^{veh_{u_o}}\})$ \STATE $AuthTag^{BD^{u_o, u_c}} \leftarrow \mathsf{mac}(K^{u_c}_{tag_{mac}}, \pi^{u_c}_{tag_{enc}}(BD^{u_o, u_c}))$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of s2] (s3) {}; \node[left = 22em of s3] (pl1) {}; \draw [->] (s3) -- node[fill=white] {msg$_i\{AT\_PUB\_REQ, C^{u_c}, AuthTag^{BD^{u_o, u_c}}\}$} (pl1); \node[below = 1em of pl1, fill=white, draw, rounded corners] (pl2) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\mathsf{publish}(TS^{Pub}_i, C^{u_c}, AuthTag^{BD^{u_o, u_c}})$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of pl2] (pl3) {}; \node[right = 22em of pl3] (s4) {}; \draw [->] (pl3) -- node[fill=white] {msg$\{M\_PUB\_ACK, TS^{Pub}_i\}$} (s4); \node[below = 1em of s4] (s5) {}; \node[left = 67em of s5] (o4) {}; \draw [->] (s5) -- node[fill=white] {msg$\{AT\_PUB\_ACK, TS^{Pub}_i\}$} (o4); \node[below = 1em of o4] (o5) {}; \node[right =31em of o5] (consumer3) {}; \draw [->] (o5) -- node[fill=white] {msg$\{AT\_PUB\_ACK, TS^{Pub}_i\}$} (consumer3); \node[below = 5em of pl3] (pl4) {}; \draw (pl4) node[fill=white] { \begin{tabularx}{9.5cm}{|X|X|X|} \hline 
$TS^{Pub}_i$ & $C^{u_c}$ & $AuthTag^{BD^{u_o, u_c}}$ \\ \hline 14774098 & ersdf3tx0 & fwefw234 \\ \hline $\dots$ & $\dots$ & $\dots$ \\ \hline \end{tabularx} }; \node[below = 5em of consumer3] (consumer4) {}; \node[right = 13em of consumer4] (pl5) {}; \draw [->] (consumer4) -- node[fill=white] {$\mathsf{query\_an}(TS^{Pub}_i)$} (pl5); \node[below = 1em of pl5] (pl6) {}; \node[left = 13em of pl6] (consumer5) {}; \draw [->] (pl6) -- node[fill=white] {msg$\{C^{u_c}, AuthTag^{BD^{u_o, u_c}}\}$} (consumer5); \node[below = 1em of consumer5, fill=white, draw, rounded corners] (consumer6) { \begin{varwidth}{\linewidth} \begin{algorithmic} \IF {$AuthTag^{BD^{u_o, u_c}} \stackrel{?}{=} \mathsf{mac}(K^{u_c}_{tag_{mac}}, E(K^{u_c}_{tag_{enc}}, BD^{u_o, u_c}))$} \STATE $\{AT^{veh_{u_o}}, ID^{veh_{u_o}}\} \leftarrow (\pi^{u_c}_{enc})^{-1}(C^{u_c})$ \ELSE \STATE Break \ENDIF \end{algorithmic}% \end{varwidth} }; \node[below = 1em of consumer6] (consumer7) {}; \node[left = 16.5em of consumer7] (c1) {}; \draw [dashed,->] (consumer7) -- node[fill=white] {msg$\{AT^{veh_{u_o}}, ID^{veh_{u_o}},\mathit{Cert}^{u_c}\}$} (c1); \node[below = 1em of c1, fill=white, draw, rounded corners] (c2) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{BD^{u_o, u_c}, \sigma^{u_o}\} \leftarrow (\pi^{veh_{u_o}})^{-1}(AT^{veh_{u_o}})$ \STATE $\mathsf{verify}(Pk^{u_o},BD^{u_o, u_c},\sigma^{u_o})$ \end{algorithmic}% \end{varwidth} }; \node[below = 5em of c1] (c2) {}; \node[right = 16em of c2] (consumer8) {}; \draw [dashed,<->] (c2) -- node[fill=white] {Challenge / Response} (consumer8); \node[below = 1em of c2, fill=white, draw, rounded corners] (c2) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\sigma^{veh_{u_o}}_{Access} \leftarrow \mathsf{sign}(Sk^{veh_{u_o}}, \{BD^{u_o, u_c},TS^{veh_{u_o}}_{Access}\})$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of c2] (c3) {}; \node[left = 14em of c3] (o6) {}; \draw [dashed, ->] (c3) -- node[fill=white] 
{msg$\{\sigma^{veh_{u_o}}_{Access},TS^{veh_{u_o}}_{Access}\}$} (o6); \node[below = 1em of o6, fill=white, draw, rounded corners] (o7) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\mathsf{verify}(Pk^{veh_{u_o}},\{BD^{u_o, u_c},TS^{veh_{u_o}}_{Access}\}, \sigma^{veh_{u_o}}_{Access})$ \end{algorithmic}% \end{varwidth} }; \end{tikzpicture}}} \caption{Simplified representation of HERMES\@\ for the proof of Theorem~\ref{thmextended}.} \label{fig:protsimplified} \end{figure} \end{landscape} \section{HERMES\@\ complete representation.}\label{appendix:protocol_complete} \begin{landscape} \begin{figure}[!ht] \centering \resizebox{1.25\textwidth}{!}{% \fbox{ \begin{tikzpicture} \node (o) at (0,0) {\textbf{Owner ($u_o$)}}; \node[below = \y em of o] (odot) {}; \draw[dotted] (o)--(odot); \node[right = 8em of o] (vehicle) {\textbf{Vehicle ($veh_{u_o}$)}}; \node[below = \y em of vehicle] (vehicledot) {}; \draw[dotted] (vehicle)--(vehicledot); \node[right = 9.5em of vehicle] (consumer) {\textbf{Consumer ($u_c$)}}; \node[below = \y em of consumer] (cdot) {}; \draw[dotted] (consumer)--(cdot); \node[right = 5.5em of consumer] (pl) {\textbf{Public Ledger ($\mathsf{PL}$)}}; \node[below = \y em of pl] (pdot) {}; \draw[dotted] (pl)--(pdot); \node[right = 13em of pl] (s) {\textbf{Servers} $\mathsf{S}_1\dots \mathsf{S}_i\dots \mathsf{S}_l$}; \node[below = \y em of s] (sdot) {}; \draw[dotted] (s)--(sdot); \node[below = 1em of o] (o11) {}; \node[right = 31em of o11] (consumer11) {}; \draw [<->] (o11) -- node[fill=white] {$BD^{u_o, u_c} = \{\mathsf{hash}(\mathit{Cert}^{u_c}) ,ID^{veh_{u_o}},L^{veh_{u_o}},CD^{u_c},AC^{u_c},ID^{BD}\}$} (consumer11); \node[below = 3em of o] (o1) {}; \node[right = 31em of o1] (consumer1) {}; \draw [->] (o1) -- node[fill=white] {msg$\{SES\_K\_GEN\_REQ, ID^{BD}\}$} (consumer1); \node[below = 1em of consumer1, fill=white, draw, rounded corners] (prf1) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{K^{u_c}_{enc}, \vec{K}^{u_c}_{tag}\} 
\leftarrow \mathsf{kdf}(K^{u_c}_{master}, counter)$ \STATE $[K^{u_c}_{enc}] \leftarrow \mathsf{share}(K^{u_c}_{enc})$ \STATE $[\vec{K}^{u_c}_{tag}] \leftarrow \mathsf{share}(\vec{K}^{u_c}_{tag})$ \FOR{$i = 1\dots l$} \STATE $C^{\mathsf{S}_i} \leftarrow \mathsf{enc}(Pk^{\mathsf{S}_i}, \{[K^{u_c}_{enc}],[\vec{K}^{u_c}_{tag}]\})$ \ENDFOR \end{algorithmic}% \end{varwidth} }; \node[below = 1em of o1, fill=white, draw, rounded corners] (mpc3) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\sigma^{u_o} \leftarrow \mathsf{sign}(Sk^{u_o},BD^{u_o, u_c})$ \STATE $M^{u_c} \leftarrow \{BD^{u_o, u_c}, \sigma^{u_o}\}$ \STATE $[M^{u_c}] \leftarrow \mathsf{share}(M^{u_c})$ \end{algorithmic}% \end{varwidth} }; \node[below = 10em of consumer1] (consumer2) {}; \node[left = 31em of consumer2] (o2) {}; \draw [->] (consumer2) -- node[fill=white] {msg$\{SES\_K\_GEN\_ACK, ID^{BD}, \{C^{S_1}, \dots, C^{S_l}\}\}$} (o2); \node[below = 1em of o2] (o3) {}; \node[right = 67em of o3] (s1) {}; \draw [->] (o3) -- node[fill=white] {msg$_i\{AT\_GEN\_REQ, ID^{u_o}, C^{\mathsf{S}_i}, [M^{u_c}]\}$} (s1); \node[below = 1em of s1, fill=white, draw, rounded corners] (s2) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{[K^{u_c}_{enc}],[\vec{K}^{u_c}_{tag}]\} \leftarrow \mathsf{dec}(Sk^{\mathsf{S}_i},C^{\mathsf{S}_i})$ \STATE $\vec{D}^{u_o} \leftarrow \mathsf{query}(ID^{u_o}, DB^{\mathsf{S}_i})$ \FOR{$y = 1\dots n$} \STATE $\vec{[D]}^{u_o} \leftarrow ([ID^{veh_{u_o}}] \stackrel{?}{=} [ID^{veh_{u_o}}_y])$ \ENDFOR \STATE $[K^{veh_{u_o}}] \leftarrow \vec{D}^{veh_{u_o}} \times \vec{D}^{u_o}$ \STATE $[AT^{veh_{u_o}}] \leftarrow \mathsf{E}([K^{veh_{u_o}}], [M^{u_c}])$ \STATE $[C^{u_c}] \leftarrow \mathsf{E}([K^{u_c}_{enc}],\{[AT^{veh_{u_o}}], [ID^{veh_{u_o}}]\})$ \STATE $C^{u_c} \leftarrow \mathsf{open}([C^{u_c}])$ \STATE $[AuthTag^{BD^{u_o, u_c}}] \leftarrow \mathsf{mac}([K^{u_c}_{tag_{mac}}], E(K^{u_c}_{tag_{enc}}, [BD^{u_o, u_c}]))$ \STATE $AuthTag^{BD^{u_o, u_c}} \leftarrow 
\mathsf{open}([AuthTag^{BD^{u_o, u_c}}])$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of s2] (s3) {}; \node[left = 22em of s3] (pl1) {}; \draw [->] (s3) -- node[fill=white] {msg$_i\{AT\_PUB\_REQ, C^{u_c}, AuthTag^{BD^{u_o, u_c}}\}$} (pl1); \node[below = 1em of pl1, fill=white, draw, rounded corners] (pl2) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\mathsf{publish}(TS^{Pub}_i, C^{u_c}, AuthTag^{BD^{u_o, u_c}})$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of pl2] (pl3) {}; \node[right = 22em of pl3] (s4) {}; \draw [->] (pl3) -- node[fill=white] {msg$\{M\_PUB\_ACK, TS^{Pub}_i\}$} (s4); \node[below = 1em of s4] (s5) {}; \node[left = 67em of s5] (o4) {}; \draw [->] (s5) -- node[fill=white] {msg$\{AT\_PUB\_ACK, TS^{Pub}_i\}$} (o4); \node[below = 1em of o4] (o5) {}; \node[right =31em of o5] (consumer3) {}; \draw [->] (o5) -- node[fill=white] {msg$\{AT\_PUB\_ACK, TS^{Pub}_i\}$} (consumer3); \node[below = 5em of pl3] (pl4) {}; \draw (pl4) node[fill=white] { \begin{tabularx}{9.5cm}{|X|X|X|} \hline $TS^{Pub}_i$ & $C^{u_c}$ & $AuthTag^{BD^{u_o, u_c}}$ \\ \hline 14774098 & ersdf3tx0 & fwefw234 \\ \hline $\dots$ & $\dots$ & $\dots$ \\ \hline \end{tabularx} }; \node[below = 5em of consumer3] (consumer4) {}; \node[right = 13em of consumer4] (pl5) {}; \draw [->] (consumer4) -- node[fill=white] {$\mathsf{query\_an}(TS^{Pub}_i)$} (pl5); \node[below = 1em of pl5] (pl6) {}; \node[left = 13em of pl6] (consumer5) {}; \draw [->] (pl6) -- node[fill=white] {msg$\{C^{u_c}, AuthTag^{BD^{u_o, u_c}}\}$} (consumer5); \node[below = 1em of consumer5, fill=white, draw, rounded corners] (consumer6) { \begin{varwidth}{\linewidth} \begin{algorithmic} \IF {$AuthTag^{BD^{u_o, u_c}} \stackrel{?}{=} \mathsf{mac}(K^{u_c}_{tag_{mac}}, E(K^{u_c}_{tag_{enc}}, BD^{u_o, u_c}))$} \STATE $\{AT^{veh_{u_o}}, ID^{veh_{u_o}}\} \leftarrow \mathsf{D}(K^{u_c}_{enc},C^{u_c})$ \ELSE \STATE Break \ENDIF \end{algorithmic}% \end{varwidth} }; \node[below = 1em of consumer6] (consumer7) 
{}; \node[left = 16.5em of consumer7] (c1) {}; \draw [dashed,->] (consumer7) -- node[fill=white] {msg$\{AT^{veh_{u_o}}, ID^{veh_{u_o}},\mathit{Cert}^{u_c}\}$} (c1); \node[below = 1em of c1, fill=white, draw, rounded corners] (c2) { \begin{varwidth}{\linewidth} \begin{algorithmic}[1] \STATE $\{BD^{u_o, u_c}, \sigma^{u_o}\} \leftarrow \mathsf{D}(K^{veh_{u_o}},AT^{veh_{u_o}})$ \STATE $\mathsf{verify}(Pk^{u_o},BD^{u_o, u_c},\sigma^{u_o})$ \end{algorithmic}% \end{varwidth} }; \node[below = 5em of c1] (c2) {}; \node[right = 16em of c2] (consumer8) {}; \draw [dashed,<->] (c2) -- node[fill=white] {Challenge / Response} (consumer8); \node[below = 1em of c2, fill=white, draw, rounded corners] (c2) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\sigma^{veh_{u_o}}_{Access} \leftarrow \mathsf{sign}(Sk^{veh_{u_o}}, \{BD^{u_o, u_c},TS^{veh_{u_o}}_{Access}\})$ \end{algorithmic}% \end{varwidth} }; \node[below = 1em of c2] (c3) {}; \node[left = 14em of c3] (o6) {}; \draw [dashed, ->] (c3) -- node[fill=white] {msg$\{\sigma^{veh_{u_o}}_{Access},TS^{veh_{u_o}}_{Access}\}$} (o6); \node[below = 1em of o6, fill=white, draw, rounded corners] (o7) { \begin{varwidth}{\linewidth} \begin{algorithmic} \STATE $\mathsf{verify}(Pk^{veh_{u_o}},\{BD^{u_o, u_c},TS^{veh_{u_o}}_{Access}\}, \sigma^{veh_{u_o}}_{Access})$ \end{algorithmic}% \end{varwidth} }; \end{tikzpicture}}} \caption{HERMES\@\ complete representation.} \label{fig:prot} \end{figure} \end{landscape} \section{Conclusion}\label{sec:conclusion} In this paper, we proposed HERMES\@\ -- an efficient, scalable, secure, and privacy-enhancing system for vehicle access provision. It allows users to dynamically instantiate, share, and access vehicles in a secure and privacy-enhancing fashion. To achieve its security and privacy guarantees, HERMES\@\ deploys secure multiparty computation for access token generation and sharing while keeping transactions and booking details confidential. 
To ensure efficiency and scalability, HERMES\@\ utilizes cryptographic primitives in combination with secure multiparty-computation protocols, supporting numerous users and vehicles per user. We presented a formal analysis of our system's security and privacy requirements and designed a prototype as a proof-of-concept. We demonstrated that HERMES\@\ is suitable for serving large numbers of individuals, each with a few vehicles, as well as rental companies with hundreds or thousands of vehicles per branch. We benchmarked the cryptographic operations and secure multiparty evaluations over arithmetic circuits with HtMAC-MiMC, demonstrating their efficiency and scalability. For comparison with SePCAR\@, we also tested HERMES\@\ for the case of binary circuits with CBC-MAC-AES. We showed that HERMES\@\ achieves a significant performance improvement: $\approx 30,3$~ms for a vehicle access provision, i.e., $42$ times faster than~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}, thus demonstrating its efficiency. We also demonstrated that HERMES\@\ is practical on the vehicle side, as \gls{AT} operations on a prototype \gls{OBU} box take only $\approx 62,087$~ms. In the future, aiming to make the operations even more efficient, we will investigate cryptographic primitives using lightweight block ciphers such as Rasta. We also plan to extend HERMES\@\ to booking and payment operations and to protect against active adversaries on the untrusted servers. \bibliographystyle{IEEEtran} \section{Security and Privacy Analysis}\label{sec:analysis} We prove that HERMES\@\ satisfies the security and privacy requirements of Sect.~\ref{sec:system_model}, provided that its underlying cryptographic primitives are sufficiently secure. The theorem statement and the proof given below are informal, in a sense similar to~\cite{DBLP:conf/esorics/SymeonidisAMMDP17}.
A complete formal description of the semantic security models and the stand-alone proofs are given in the online version of this work~\cite{HERMES_full}. \begin{theorem} Assume that communication takes place over private channels. If: \begin{itemize} \item the \gls{MPC} is statistically secure~\cite{rotaru2017modes}, \item the signature scheme $\mathsf{sign}$ is multi-key existentially unforgeable~\cite{DBLP:journals/siamcomp/GoldwasserMR88}, \item the key derivation function $\mathsf{kdf}$ is multi-key secure~\cite{DBLP:journals/jacm/GoldreichGM86}, \item the public-key encryption scheme $\mathsf{enc}$ is multi-key semantically secure~\cite{DBLP:conf/eurocrypt/BellareBM00}, \item the symmetric key encryption scheme $\mathsf{E}$ is multi-key chosen-plaintext secure~\cite{DBLP:conf/focs/BellareDJR97}, \item the hash function $\mathsf{hash}$ is collision resistant~\cite{DBLP:conf/fse/RogawayS04}, and \item the \gls{MAC} function $\mathsf{mac}$ is multi-key existentially unforgeable~\cite{DBLP:journals/siamcomp/GoldwasserMR88}, \end{itemize} then HERMES\@\ fulfills the security and privacy requirements of Sect.~\ref{sec:system_model}. \end{theorem} Note that, indeed, for each of the keyed cryptographic primitives we require security in the \emph{multi-key} setting, as these are evaluated under different keys. For example, $\mathsf{sign}$ is used by all owners, each with a different key; $\mathsf{enc}$ is used with a different key for each party in the \gls{VSSP}; and $\mathsf{E}$ and $\mathsf{mac}$ are used with independent keys (i.e., session keys) for every fresh evaluation of the protocol. We refer to Bellare et al.~\cite{DBLP:conf/eurocrypt/BellareBM00} for a discussion on generalizing semantic security of public-key encryption to multi-key security; the adaptation straightforwardly generalizes to the other security models.
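The MAC-over-ciphertext pattern used for the authentication tag, $AuthTag = \mathsf{mac}(K_{tag_{mac}},\mathsf{E}(K_{tag_{enc}},\cdot))$, can be sketched as follows. This is an illustrative stand-in only: it uses stdlib HMAC-SHA256 for $\mathsf{mac}$ and a deliberately toy XOR-keystream cipher in place of $\mathsf{E}$ (the actual system uses HtMAC-MiMC or CBC-MAC-AES, evaluated under \gls{MPC}); all function names below are hypothetical.

```python
import hmac
import hashlib
import secrets

def toy_E(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher (SHA-256 in counter mode).

    A stand-in for the symmetric scheme E; illustrative only,
    NOT a secure construction.
    """
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        keystream = hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        chunk = data[32 * block:32 * (block + 1)]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

def auth_tag(k_tag_mac: bytes, k_tag_enc: bytes, bd: bytes) -> bytes:
    # AuthTag = mac(K_tag_mac, E(K_tag_enc, BD)): a MAC computed over the
    # *encrypted* booking details, so the tag itself leaks nothing about BD.
    return hmac.new(k_tag_mac, toy_E(k_tag_enc, bd), hashlib.sha256).digest()

# Fresh, independent session keys for this evaluation of the protocol.
k_mac, k_enc = secrets.token_bytes(32), secrets.token_bytes(32)
bd = b"hash(Cert_uc)|ID_veh|L_veh|CD|AC|ID_BD"   # placeholder booking details

tag = auth_tag(k_mac, k_enc, bd)
# Consumer-side check: recompute the tag and compare in constant time.
assert hmac.compare_digest(tag, auth_tag(k_mac, k_enc, bd))
assert not hmac.compare_digest(tag, auth_tag(k_mac, k_enc, bd + b"!"))
```

Because the keys are drawn independently per evaluation, tags from different protocol runs are statistically unrelated, which is the multi-key property the theorem requires.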
\begin{proof}[Proof sketch] We treat each security and privacy requirement separately and discuss how it is achieved by the cryptographic primitives. We recall that the consumer and owner have agreed upon the booking details prior to evaluating HERMES\@. Hence, they know each other. \paragraph{SR1 -- Confidentiality of $M^B$} In one evaluation of the protocol, $u_c$, $u_o$, and the shared vehicle learn the booking details by default or design. The \gls{VSSP} servers only learn shares of the booking data, and under the assumption that the \gls{MPC} is statistically secure, nothing about the booking data is revealed during the \gls{MPC}. The outcomes of the \gls{MPC} are $AuthTag^{M_B}$ and $C^{u_c}$ satisfying \begin{align} &AuthTag^{M_B} = \mathsf{mac}(K^{u_c}_{tag_{mac}},\mathsf{E}(K^{u_c}_{tag_{enc}},M^{B}))\,,\label{eqn:AuthT}\\ &C^{u_c} = \mathsf{E}(K_{enc}^{u_c},\{\mathsf{E}(K^{veh_{u_o}}_y,\{M^{B},\sigma^{u_o}\}),ID^{veh_{u_o}}\})\,,\label{eqn:Cuc} \end{align} both of which reveal nothing about $M^B$ to a malicious outsider due to the assumed security of $\mathsf{mac}$ and $\mathsf{E}$, and the independent uniform drawing of the keys $K^{u_c}_{enc}$ and $\vec{K}^{u_c}_{tag}=(K^{u_c}_{tag_{enc}}, K^{u_c}_{tag_{mac}})$. The nested encryption $\mathsf{E}$ does not influence the analysis due to the mutual independence of the keys $K_{enc}^{u_c}$ and $K^{veh_{u_o}}_y$. \paragraph{SR2 -- Authenticity of $M^B$} An owner who initiates the \gls{AT} generation and distribution first signs the booking details using their private key before sending them to the \gls{VSSP} in shares. Therefore, once the vehicle receives the token and obtains the booking details, it can verify the owner's signature on the booking details. In other words, the vehicle can verify the source of the booking details, the owner, and their integrity. Suppose, to the contrary, that a malicious consumer can get access to a vehicle of an owner $u_o$.
This means, in particular, that they created a tuple $(M^B,\sigma^{u_o})$ such that $\mathsf{verify}(Pk^{u_o},M^B,\sigma^{u_o})$ holds. If $\sigma^{u_o}$ is new, this means that $u_c$ forged a signature for the secret signing key $\mathit{Sk}^{u_o}$. This is impossible by the assumption that the signature scheme is existentially unforgeable. On the other hand, if $(M^B,\sigma^{u_o})$ is old but the evaluation is fresh, this implies a collision $\mathsf{hash}(\mathit{Cert}^{u_c})=\mathsf{hash}(\mathit{Cert}^{u_c\prime})$, which is computationally infeasible as $\mathsf{hash}$ is collision-resistant. \paragraph{SR3 -- Confidentiality of $AT^{veh_{u_o}}$} The \gls{AT} is generated by the \gls{VSSP} servers obliviously (as the \gls{MPC} is statistically secure), and only revealed to the public in encrypted form, through $C^{u_c}$ of (\ref{eqn:Cuc}). Due to the uniform drawing of the key $K_{enc}^{u_c}$ (and the security of the public-key encryption scheme used to transmit this key), only the legitimate user can decrypt and learn the \gls{AT}. The user shares it with the vehicle over a secure and private channel. \paragraph{SR4 -- Confidentiality of $K^{veh_{u_o}}$} Only the vehicle manufacturer and the vehicle itself hold copies of the vehicle key. Considering the \gls{VM} as a \acrfull{TP}, it holds all the secret keys of the vehicles it produces. The \gls{VSSP} servers learn these in shared form, and hence learn nothing about them by virtue of the statistical security of the \gls{MPC}. Retrieving a vehicle key from encryptions made under this key constitutes a key recovery attack, which in turn allows breaking the chosen-plaintext security of the symmetric key encryption scheme. \paragraph{SR5 -- Backward and forward secrecy of $AT^{veh_{u_o}}$} The \gls{AT} is published on the \acrfull{PL} as $C^{u_c}$ of (\ref{eqn:Cuc}), encrypted under the symmetric key $K_{enc}^{u_c}$. Every honest consumer generates a fresh key $K_{enc}^{u_c}$ for every new evaluation.
The consumer uses a secure key derivation function $\mathsf{kdf}$, built on a \gls{PRF}, for each key generation in every new evaluation of the protocol. This implies that all session keys are drawn independently and uniformly at random. In addition, the symmetric encryption scheme $\mathsf{E}$ is multi-key secure. In conclusion, all encryptions $C^{u_c}$ are independent and reveal nothing about each other. Note that nothing can be said about \glspl{AT} for malicious users who may deviate from the protocol and reuse one-time keys. \paragraph{SR6 -- Non-repudiation of origin of $AT^{veh_{u_o}}$} The vehicle, which is a trusted entity, verifies the origin through verification of the signature, $\mathsf{verify}(Pk^{u_o},M^B,\sigma^{u_o})$. The consumer $u_c$ verifies the origin through the verification of the \gls{MAC} function, i.e., $AuthTag^{M_B} \stackrel{?}{=} \mathsf{mac}(K^{u_c}_{tag_{mac}}, E(K^{u_c}_{tag_{enc}}, M^{B}))$. Note that $u_c$ does not effectively verify $AT^{veh_{u_o}}$, but rather $AuthTag^{M_B}$, which suffices under the assumption that the \gls{MPC} servers evaluate their protocol correctly. In either case, security fails only if the asymmetric signature scheme or the $\mathsf{mac}$ function is forgeable. \paragraph{SR7 -- Non-repudiation of delivery of $AT^{veh_{u_o}}$} The owner can confirm the correct delivery of $AT^{veh_{u_o}}$ through the message sent by the vehicle to the owner and its successful verification, $\mathsf{verify}(Pk^{veh_{u_o}},\{M^B,TS^{veh_{u_o}}_{Access}\}, \sigma^{veh_{u_o}}_{Access})$, at the end of the protocol. Security breaks only if the signature scheme is forgeable. \paragraph{SR8 -- Accountability of users (i.e., owners and consumers)} In the case of disputes, the information related to a specific transaction (and only this information) may need to be reconstructed. Reconstruction can be done only if the \gls{VSSP} servers collude and reveal their shares. In our setting, these servers have competing interests.
Thus they would not collude unless requested to by law authorities. Due to the threshold properties of the secret sharing scheme, the private inputs can be reconstructed by any majority coalition: if the \gls{VSSP} consists of three parties, two party-shares suffice to reconstruct the secret. \paragraph{PR1 -- Unlinkability of any two requests of $u_c$ for any $veh_{u_o}$(s)} The only consumer-identifiable data is in the consumer's certificate included in the booking details. Note that these are agreed upon between the consumer and the owner, so the owner learns the consumer's identity by default. Beyond that, the consumer only communicates with the vehicle, which must learn the consumer's identity to perform proper access verification. The consumer consults the \gls{PL} over an anonymous communication channel~\cite{torproject}. The booking details are transferred to and from the \gls{VSSP}, but these are encrypted and do not leak by virtue of their confidentiality (security requirement SR1). \paragraph{PR2 -- Anonymity of $u_c$ and the $veh_{u_o}$} The reasoning is identical to that of PR1. \paragraph{PR3 -- Indistinguishability of $AT^{veh_{u_o}}$ operations} Access token generation, update, and revocation are performed using the same steps and the same type of messages sent to the \gls{VSSP} and \gls{PL}. Hence, outsiders and system entities cannot distinguish which operation has been requested. \end{proof}
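The threshold reconstruction invoked for SR8 can be sketched concretely. The following minimal example is an illustrative Shamir-style 2-of-3 scheme over a prime field; the modulus and the degree-1 sharing polynomial are assumptions made for this sketch, not the actual parameters of the VSSP protocol. Any two of the three shares recover the secret by Lagrange interpolation at zero, while a single share reveals nothing.

```python
# Minimal 2-of-3 Shamir-style secret sharing over a prime field.
# Illustrative only: the modulus and the degree-1 sharing polynomial are
# assumptions for this sketch, not the parameters of the VSSP protocol.
P = 2**61 - 1  # prime modulus of the field

def share(secret, rnd):
    """Split `secret` into 3 shares of f(x) = secret + rnd*x (mod P)."""
    return [(x, (secret + rnd * x) % P) for x in (1, 2, 3)]

def reconstruct(two_shares):
    """Recover f(0) from any two shares by Lagrange interpolation."""
    (x1, y1), (x2, y2) = two_shares
    inv = pow((x2 - x1) % P, P - 2, P)  # modular inverse via Fermat
    return ((y1 * x2 - y2 * x1) * inv) % P

secret = 123456789
shares = share(secret, rnd=987654321)
# any majority coalition (2 of 3 shares) reconstructs the secret
assert reconstruct([shares[0], shares[1]]) == secret
assert reconstruct([shares[1], shares[2]]) == secret
```

A minority (one share) sees only a uniformly random field element, which is the information-theoretic privacy property relied on throughout the proof.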
\section*{Acknowledgements} We gratefully acknowledge financial support of the DFG via the SFB 631, the German Excellence Initiative via NIM, the EU-FP7 via SOLID, and the BMBF via QuaHLRep project 01BQ1036. AL acknowledges support of the TUM-GS, and SF of the Alexander von Humboldt Foundation. \section*{References}
\section{Triviality and the Standard Model\,\protect{\cite{rscintro}}} In the standard Higgs model, one introduces a fundamental scalar doublet: \begin{displaymath} \phi=\left(\matrix{\phi^+ \cr \phi^0 \cr}\right) {}~, \end{displaymath} with potential: \begin{displaymath} V(\phi)=\lambda \left(\phi^{\dagger}\phi - {v^2\over 2}\right)^2 {}~. \label{eq:pot} \end{displaymath} While this theory is simple and renormalizable, it has a number of shortcomings. First, while the theory can be constructed to accommodate the breaking of electroweak symmetry, it provides no {\it explanation} for it -- one simply assumes that the potential is of the form in eqn.~(\ref{eq:pot}). In addition, in the absence of supersymmetry, quantum corrections to the Higgs mass are naturally of order the largest scale in the theory \begin{displaymath} {\lower5pt\hbox{\epsfysize=0.25 truein \epsfbox{figures/msq.eps}}} \Rightarrow m_H^2 \propto \Lambda^2~, \end{displaymath} leading to the hierarchy and naturalness problems.\cite{thooft} Finally, the $\beta$ function for the self-coupling $\lambda$ is positive, \begin{displaymath} {\lower7pt\hbox{\epsfysize=0.25 truein \epsfbox{figures/beta.eps}}} \Rightarrow \beta = {3\lambda^2 \over 2 \pi^2} \, > \, 0 {}~, \end{displaymath} leading to a ``Landau pole'' and triviality.\cite{trivial} The hierarchy/naturalness and triviality problems can be nicely summarized in terms of the Wilson renormalization group. Define the theory with a fixed UV-cutoff: \begin{eqnarray} {\cal L}_\Lambda = & D^\mu \phi^\dagger D_\mu \phi + m^2(\Lambda)\phi^\dagger \phi + {\lambda(\Lambda)\over 4}(\phi^\dagger\phi)^2 \nonumber\\ & + {\hat{\kappa}(\Lambda)\over 36\Lambda^2}(\phi^\dagger\phi)^3+\ldots \label{eq:liz} \end{eqnarray} Here $\hat{\kappa}$ is the coefficient of a representative irrelevant operator, of dimension greater than four.
Next, integrate out states with $\Lambda^\prime < k < \Lambda$, and construct a new Lagrangian with the same {\it low-energy} Green's functions: \begin{eqnarray} {\cal L}_\Lambda & \Rightarrow & {\cal L}_{\Lambda^\prime} \nonumber\\ m^2(\Lambda)& \rightarrow & m^2(\Lambda^\prime) \nonumber \\ \lambda(\Lambda) & \rightarrow & \lambda(\Lambda^\prime) \nonumber \\ \hat{\kappa}(\Lambda) & \rightarrow & \hat{\kappa}(\Lambda^\prime) \end{eqnarray} The low-energy behavior of the theory is then nicely summarized in terms of the evolution of couplings in the infrared.\footnote{For convenience, we ignore the corrections due to the weak gauge interactions. In perturbation theory, at least, the presence of these interactions does not qualitatively change the features of the Higgs sector.} A three-dimensional representation of this flow in the infinite-dimensional space of couplings is shown in Figure \ref{Fig1}. \begin{figure}[tbp] \centering \epsfysize=2in \hspace*{0in} \epsffile{figures/wilson.ps} \caption{Renormalization group flow of Higgs mass $m^2$, Higgs self-coupling $\lambda$, and the coefficient of a representative irrelevant operator $\hat{\kappa}$. The flows go from upper-left to lower-right as one scales to the infrared.} \label{Fig1} \end{figure} From Figure \ref{Fig1}, we see that as we scale to the infrared the coefficients of irrelevant operators, such as $\hat{\kappa}$, tend to zero; {\it i.e.} the flows are attracted to the finite dimensional subspace spanned (in perturbation theory) by operators of dimension four or less; this is the modern understanding of {\it renormalizability}. On the other hand, the coefficient of the only {\it relevant} operator (of dimension 2), $m^2$, tends to infinity. This leads to the naturalness/hierarchy problem.\cite{thooft} Since we want $m^2 \propto v^2$ at low energies we must adjust the value of $m^2(\Lambda)$ to a precision of \begin{displaymath} {\Delta m^2(\Lambda) \over m^2(\Lambda)} \propto {v^2 \over \Lambda^2}~.
\end{displaymath} Central to our discussion here is the fact that the coefficient of the only marginal operator $\lambda$ tends to 0, because of the positive $\beta$ function. If we try to take the continuum limit, $\Lambda \to +\infty$, the theory becomes free or trivial.\cite{trivial} The triviality of the scalar sector of the standard one-doublet Higgs model implies that this theory is only an effective low-energy theory valid below some cut-off scale $\Lambda$. Physically this scale marks the appearance of new strongly-interacting symmetry-breaking dynamics. Examples of such high-energy theories include ``top-mode'' standard models\,\cite{topmode} and composite Higgs models.\cite{chiggs} As the Higgs mass increases, the upper bound on the scale $\Lambda$ decreases. An estimate of this effect can be obtained by integrating the one-loop $\beta$-function, which yields \begin{displaymath} \lambda(m_H) \stackrel{<}{\sim} {{2\pi^2}\over 3\log{\Lambda\over m_H}}\, . \label{eq:est} \end{displaymath} Using the relation $m^2_H = 2\lambda(m_H) v^2$ we find \begin{displaymath} m^2_H \ln\left({\Lambda\over m_H}\right)\le {4\pi^2 v^2 \over 3}~. \label{estimate} \end{displaymath} Hence a lower bound\,\cite{cabbibo,dashen} on $\Lambda$ yields an upper bound on $m_H$. We must require that $m_H / \Lambda$ in eqn.~(\ref{estimate}) be small enough to afford the effective Higgs theory some range of validity (or to minimize the effects of regularization in the context of a calculation in the scalar theory). Quantitative\,\cite{lattice} studies on the lattice using analytic and Monte Carlo techniques result in an upper bound on the Higgs mass of approximately 700 GeV. The lattice Higgs mass bound is potentially ambiguous because the precise value of the bound on the Higgs boson's mass depends on the (arbitrary) restriction placed on $m_H / \Lambda$.
The ``cut-off'' effects arising from the regulator are not universal: different schemes can give rise to different effects of varying sizes and can change the resulting Higgs mass bound. In this talk we show that, for models that reproduce the standard one-doublet Higgs model at low energies, electroweak and flavor phenomenology provide a lower bound on the scale $\Lambda$ of order 10 -- 20 TeV that is regularization-independent (i.e. independent of the details of the underlying physics). Using eqn.~(\ref{estimate}) we estimate that this gives an {\it upper} bound of 450 -- 500 GeV on the Higgs boson mass. The discussion we have presented is based on perturbation theory and is valid in the domain of attraction of the ``Gaussian fixed point'' ($\lambda=0$). In principle, however, the Wilson approach can be used {\it non-perturbatively}, even in the presence of nontrivial fixed points or large anomalous dimensions. In a conventional Higgs theory, neither of these effects is thought to occur.\cite{lattice} We return to the issue of the possible existence of other, potentially non-trivial, fixed points in section 4 below. \section{Dimensional Analysis} We will analyze the effects of the underlying physics by estimating the sizes of various operators in a low-energy effective lagrangian containing the (presumably composite) Higgs boson and the ordinary gauge bosons and fermions. Since we are considering theories with a heavy Higgs field, we expect that the underlying high-energy theory will be strongly interacting. Borrowing a technique from QCD we will rely on dimensional analysis\,\cite{QCDNDA} to estimate the sizes of various effects of the underlying physics. A strongly interacting theory has no small parameters. As noted by Georgi,\cite{generalized} a theory\,\footnote{These dimensional estimates only apply if the low-energy theory, when viewed as a scalar field theory, is defined about the infrared-stable Gaussian fixed-point. 
We return to potentially ``non-trivial'' theories below.} with light scalar particles belonging to a single symmetry-group representation depends on two parameters: $\Lambda$, the scale of the underlying physics, and $f$ (the analog of $f_\pi$ in QCD), which measures the amplitude for producing the scalar particles from the vacuum. Our estimates will depend on the ratio $\kappa = \Lambda / f$, which is expected to fall between 1 and $4\pi$. Consider the kinetic energy of a scalar bound-state in the appropriate low-energy effective lagrangian. The properly normalized kinetic energy is \begin{displaymath} \partial^\mu \phi^\dagger \partial_\mu \phi = { \Lambda^2 f^2} \left({\partial^\mu \over { \Lambda}}\right) \left({\phi^\dagger \over { f}}\right) \left({\partial_\mu \over { \Lambda}}\right) \left({\phi \over { f}}\right)~, \end{displaymath} where, because the fundamental scale of the interactions is $\Lambda$, we ascribe a $\Lambda$ to each derivative and an $f$ to each $\phi$ since $f$ measures the amplitude to produce the bound state. This tells us that the overall magnitude of each term in the effective lagrangian is ${\cal{O}}(f^2\Lambda^2)$. We can next estimate the ``generic'' size of a mass term in the effective theory: \begin{displaymath} m^2 \phi^\dagger \phi = { \Lambda^2 f^2} \left({\phi^\dagger \over { f}}\right) \left({\phi \over { f}}\right) \Rightarrow {m^2 \propto \Lambda^2}~. \end{displaymath} This is a reproduction of the hierarchy problem. In the absence of some other symmetry not accounted for in these rules, fine-tuning\,\footnote{We will not be addressing the hierarchy problem here; we will simply assume that some other symmetry or dynamics has produced the appropriate light scalar state.} is required to obtain $m^2 \ll \Lambda^2$. Next, consider the size of scalar interactions. 
From the simplest interaction \begin{displaymath} \lambda (\phi^\dagger \phi)^2 \Rightarrow { \lambda \propto \left({\Lambda \over f}\right)^2 = \kappa^2}~, \end{displaymath} we see that $\kappa$ will determine the size of coupling constants. Similarly, for a higher-dimension interaction such as the one in eqn.~(\ref{eq:liz}) we find \begin{displaymath} {\hat{\kappa} \over { \Lambda^2}}(\phi^\dagger \phi)^3 \Rightarrow { \hat{\kappa} \propto \kappa^4}~. \end{displaymath} These rules are easily extended to include strongly-interacting fermions self-consistently. Again, we start with the properly normalized kinetic-energy \begin{displaymath} \bar{\psi}\slashchar{\partial}\psi = { \Lambda^2 f^2} \left({\bar{\psi} \over { f\sqrt{\Lambda}}}\right) \left({\slashchar{\partial} \over { \Lambda}}\right) \left({\psi \over { f\sqrt{\Lambda}}}\right)~, \end{displaymath} and learn that $f\sqrt{\Lambda}$ is a measure of the amplitude for producing a fermion from the vacuum. Next, consider a Yukawa coupling of a strongly-interacting fermion to our composite Higgs, \begin{displaymath} y (\bar{\psi}\phi \psi) \Rightarrow { y \propto \kappa}~. \label{eq:natyukawa} \end{displaymath} And finally, the natural size of a four-fermion operator is \begin{displaymath} {\nu \over { \Lambda^2}} (\bar{\psi}\psi)^2 \Rightarrow { \nu \propto \kappa^2}~. \label{eq:grhoex} \end{displaymath} We will rely on these estimates to derive bounds on the scale $\Lambda$. By way of justification, we note that these estimates work in QCD for the chiral-Lagrangian,\cite{QCDNDA} with $f \to f_\pi$, $\Lambda \to 1$ GeV, and $\kappa \approx {\cal{O}}(4 \pi)$. For example, four-nucleon operators of the form shown in eqn.~(\ref{eq:grhoex}) arise in the vector channel from $\rho$-exchange and we obtain $\Lambda = m_\rho$ and $\kappa = g_\rho \approx 6$.
In a QCD-like theory with $N_c$ colors and $N_f$ flavors one expects\,\cite{reconsider} that \begin{displaymath} \kappa \approx \min \left({4\pi a\over N_c^{1/2}}, {4\pi b\over N_f^{1/2}}\right)~, \end{displaymath} where $a$ and $b$ are constants of order 1. In the results that follow, we will display the dependence on $\kappa$ explicitly; when giving numerical examples, we set $\kappa$ equal to the geometric mean of 1 and $4\pi$, {\it i.e.} $\kappa \approx 3.5$. \section{Isospin Violation and Bounds\,\protect{\cite{rscehs}} on $m_H$} Because of the $SU(2)_W \times U(1)_Y$ symmetry of the low-energy theory, all terms of dimension less than or equal to four respect custodial symmetry.\cite{custodial} The leading custodial-symmetry violating operator is of dimension six\,\cite{wyler,grinstein} and involves four Higgs doublet fields $\phi$. According to the rules of dimensional analysis, the operator \begin{displaymath} {\lower35pt\hbox{\epsfysize=1.0 truein \epsfbox{figures/fourhiggs.eps}}} \Rightarrow {\kappa^2 \over \Lambda^2} (\phi^\dagger D^\mu \phi) (\phi^\dagger D_\mu \phi)~, \label{eq:isoviol} \end{displaymath} should appear in the low-energy effective theory with a coefficient of order one.\cite{grinstein} Such an operator will give rise to a deviation \begin{displaymath} \Delta \rho_* = - {\cal O}\left(\kappa^2 {v^2 \over \Lambda^2}\right) ~, \end{displaymath} where $v \approx 246$ GeV is the expectation value of the Higgs field. Imposing the constraint\,\cite{data,something} that $|\Delta \rho_*| \le 0.4\%$, we find the lower bound \begin{displaymath} \Lambda \stackrel{>}{\sim} 4\, {\rm TeV} \cdot \kappa ~. \end{displaymath} For $\kappa \approx 3.5$, we find $\Lambda \stackrel{>}{\sim} 14$ TeV. Alternatively, it is possible that the underlying strongly-interacting dynamics respects custodial symmetry. 
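The arithmetic behind the custodial-violation bound above is elementary; as a cross-check, one may invert $|\Delta\rho_*| \approx \kappa^2 v^2/\Lambda^2 \le 0.4\%$ numerically. The sketch below uses only the values quoted in the text ($v = 246$ GeV and the $0.4\%$ constraint):

```python
import math

# Custodial-violation estimate: |Delta rho_*| ~ kappa^2 v^2 / Lambda^2.
# Imposing |Delta rho_*| <= 0.4% gives Lambda >= kappa * v / sqrt(0.004).
v = 246.0         # GeV, Higgs vacuum expectation value
drho_max = 0.004  # experimental bound on |Delta rho_*|

def lambda_min_GeV(kappa):
    """Lower bound on the compositeness scale Lambda, in GeV."""
    return kappa * v / math.sqrt(drho_max)

print(lambda_min_GeV(1.0) / 1e3)   # ~3.9 TeV, i.e. "4 TeV * kappa"
print(lambda_min_GeV(3.5) / 1e3)   # ~13.6 TeV, quoted as ~14 TeV
```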
Even in this case, however, there must be custodial-isospin-violating physics (analogous to extended-technicolor\,\cite{Lane,Dimopoulos} interactions) which couples the $\psi_L=(t,\ b)_L$ doublet and $t_R$ to the strongly-interacting ``preon'' constituents of the Higgs doublet in order to produce a top quark Yukawa coupling at low energies and generate the top quark mass. If, for simplicity, we assume that these new weakly-coupled custodial-isospin-violating interactions are gauge interactions with coupling $g$ and mass $M$, dimensional analysis allows us to estimate the size of the resulting top quark Yukawa coupling. The ``natural size'' of a Yukawa coupling (eqn.~(\ref{eq:natyukawa})) is $\kappa$ and that of a four-fermion operator (eqn.~(\ref{eq:grhoex})) is $\kappa^2/\Lambda^2$; the ratio $(g^2/M^2)/(\kappa^2/\Lambda^2)$ is the ``small parameter'' associated with the extra flavor interactions and we find \begin{displaymath} {\lower35pt\hbox{\epsfysize=1.0 truein \epsfbox{figures/yukawa.eps}}} \Rightarrow {g^2 \over M^2} {\Lambda^2 \over \kappa}\bar{q}_R \phi \psi_L ~. \label{eq:quarkpreon} \end{displaymath} In order to give rise to a quark mass $m_q$, the Yukawa coupling must be equal to \begin{displaymath} {\sqrt{2} m_q \over v} \end{displaymath} where $v\approx 246$ GeV. This implies \begin{displaymath} \Lambda \stackrel{>}{\sim}{M \over g} \sqrt{\sqrt{2} \kappa {m_q \over v}}~. \label{eq:yukawa} \end{displaymath} These new gauge interactions will typically also give rise to custodial-isospin-violating 4-preon interactions\,\footnote{These interactions have previously been considered in the context of technicolor theories.\cite{appelquist}} which, at low energies, will give rise to an operator of the same form as the one in eqn.~(\ref{eq:isoviol}). 
Using dimensional analysis, we find \begin{displaymath} {\lower35pt\hbox{\epsfysize=1.0 truein \epsfbox{figures/yukawaiso.eps}}} \Rightarrow \left[{ {g^2 \over M^2}} \left({ {\kappa^2 \over \Lambda^2}}\right)^{-1}\right] { \kappa^2 \over \Lambda^2} (\phi^\dagger D^\mu \phi) (\phi^\dagger D_\mu \phi)~, \label{eq:isoviola} \end{displaymath} which results in the bound $M/g \stackrel{>}{\sim} 4$ TeV. From eqn.~(\ref{eq:yukawa}) with $m_t \approx 175$ GeV we then derive the limit \begin{displaymath} \Lambda \stackrel{>}{\sim} 4\, {\rm TeV} \cdot \sqrt{\kappa}~. \end{displaymath} For $\kappa \approx 3.5$, we find $\Lambda \stackrel{>}{\sim} 7.5$ TeV. \section{Non-Trivial Scaling} Dimensional analysis was crucial to the discussion given above. If the low-energy Higgs theory does not flow toward the trivial Gaussian fixed-point in the infrared limit, the scaling dimensions of the fields and operators can be very different than naively expected. In this case the bounds given above do not apply. A nice example of a scalar theory with non-trivial behavior has been given by Jansen, Kuti, and Liu.\cite{jansen} They consider a theory defined by an $O(4)$-symmetric Lagrange density with a modified kinetic-energy \begin{displaymath} {\cal L}_{kin} = -{1\over 2} \phi^\dagger (\Box + {\Box^3\over {\cal M}^4}) \phi~. \end{displaymath} In the large-$N$ limit, this higher-derivative kinetic term is sufficient to eliminate all divergences. A lattice simulation of this theory\,\cite{liu} indicates that this approach can be used to define a non-trivial Higgs theory with a Higgs boson mass as high as 2 TeV, while avoiding any noticeable effects from the (complex-conjugate) pair of ghosts which are present because of the higher derivative kinetic-energy term. 
As shown by Kuti,\cite{kutii} in the infrared this higher-derivative theory flows to a non-trivial fixed point on an infinite dimensional critical surface, which corresponds to a continuum field theory with an infinite number of relevant operators. The reason there are an infinite number of relevant operators is that, if the continuum limit is taken so that the scale ${\cal M}$ remains finite as required in order to flow to a non-trivial theory, the scaling dimension\,\cite{kutii} of the Higgs doublet field $\phi$ is -1 instead of the canonical value of +1! If one could impose an exact $O(4)$ symmetry on the symmetry breaking sector, this would lead to a strongly-interacting electroweak symmetry-breaking sector without technicolor\,\cite{liu}. However, as argued above, custodial isospin violation in the flavor sector must couple to the symmetry-breaking sector to give rise to the different top- and bottom-quark masses. Furthermore, if the scaling dimension of the Higgs field is -1, there is an infinite class of custodial-isospin-violating operators (including the operator in eqn.~(\ref{eq:isoviol})) which are relevant. Since these operators are relevant, even a small amount of custodial isospin violation coming from high-energy flavor dynamics will be amplified as one scales to low energies, ultimately contradicting the bound on $\Delta\rho_*$. We therefore conclude that these non-trivial scalar theories cannot provide a phenomenologically viable theory of electroweak symmetry breaking. 
To construct a phenomenologically viable theory of a strongly-interacting Higgs sector it is not sufficient to construct a theory with a heavy Higgs boson; one must also ensure that all potentially custodial-isospin-violating operators remain irrelevant.\footnote{This is also a concern in walking technicolor.\cite{rsc}} \section{Flavor-Changing Neutral-Currents\,\protect{\cite{rscbadehs}}} The high-energy flavor physics responsible for the generation of the quark-preon couplings {\it must} distinguish between different flavors so as to give rise to the different masses of the corresponding fermions. In addition to the Higgs-fermion coupling discussed above, the flavor physics will also give rise to flavor-specific couplings of ordinary fermions to themselves. These new current-current interactions among ordinary fermions generically give rise to flavor-changing neutral currents (as previously noted\,\cite{Lane} for the case of ETC theories) that affect Kaon and $D$-meson physics. For instance, consider the interactions responsible for the $s$-quark mass. Through Cabibbo mixing, these interactions must couple to the $d$-quark as well. 
This will give rise to the interactions \begin{eqnarray} {\cal L}_{eff} & = & - \, (\cos \theta_L^s \sin \theta_L^s)^2 \frac{g^{2}}{M^{2}} ( \overline s_L \gamma^{\mu} d_L )(\overline s_L \gamma_{\mu} d_L) \nonumber \\ [2mm] & & - \, (\cos \theta_R^s \sin \theta_R^s)^2 \frac{g^{2}}{M^{2}} ( \overline s_R \gamma^{\mu} d_R)(\overline s_R \gamma_{\mu} d_R) \nonumber \\ [2mm] & & - \, \cos \theta_L^s \sin \theta_L^s \cos \theta_R^s \sin \theta_R^s \frac{g^{2}}{M^{2}} ( \overline s_L \gamma^{\mu} d_L )(\overline s_R \gamma_{\mu} d_R)~, \label{ops1} \end{eqnarray} where the coupling $g$ and mass $M$ are of the same order as those in the interactions which ultimately give rise to the $s$-quark Yukawa coupling in eqn.~(\ref{eq:quarkpreon}), and the angles $\theta^s_L$ and $\theta^s_R$ represent the relation between the gauge eigenstates and the mass eigenstates. The operators in eqn.~(\ref{ops1}) will clearly affect neutral Kaon physics. Similarly, the interactions responsible for other quarks' masses will give rise to operators that contribute to mixing and decays of various mesons. Since the operators responsible for generating quark masses and for causing flavor-changing neutral currents violate flavor symmetries differently,\cite{ctsm} in principle one could construct a theory with an approximate GIM symmetry.\cite{ctsm,oldtcgim,technigim} In such models, flavor-changing neutral currents would be suppressed but different quarks would still receive different masses. A theory of this type which included a light scalar state (unlike previous examples\,\cite{ctsm,oldtcgim,technigim}) would be able to evade the flavor-changing neutral current limits discussed here. \subsection{Flavor-Changing Neutral Currents: $\Delta S$} To start, let us consider the four-fermion interactions in eqn.~(\ref{ops1}), which will alter the predicted value of the $K_L - K_S$ mass difference. 
Using the vacuum-insertion approximation,\cite{vacinsert} we can estimate separately how much the purely left-handed (LL), purely right-handed (RR) and mixed (LR) current-current operators contribute. Requiring each contribution to be less than the observed mass difference $\Delta m_K$, we find the bounds \begin{eqnarray} \left(\frac{M}{g}\right)_{\! {\rm LL,RR}} \! & \stackrel{>}{\sim} &\! f_K \left( \frac{2 m_K B_K}{3 \Delta m_K }\right)^{\! 1/2} \cos \theta_{L,R}^s \sin \theta_{L,R}^s \\ [2mm] & \approx & \! 0.92 \times 10^{3} \, {\rm TeV} \cos \theta_{L,R}^s \sin \theta_{L,R}^s \end{eqnarray} from the first two operators in eqn.~(\ref{ops1}), and \begin{eqnarray} \hspace*{-8mm} \!\!\left(\frac{M}{g}\right)_{\! {\rm LR}}\!\! &\!\stackrel{>}{\sim}\! & \! f_K \left\{ \frac{m_K B_K^{\prime}}{3 \Delta m_K } \left[ \frac{m_K^2}{(m_s + m_d)^2} - \frac{3}{2} \right] \right\}^{\! 1/2}\!\!\! (\cos \theta_L^s \sin \theta_L^s \cos \theta_R^s \sin \theta_R^s)^{\! 1/2} \nonumber\\ [2mm] & \! \approx \! & \! 1.4 \times 10^{3} \, {\rm TeV} \, (\cos \theta_L^s \sin \theta_L^s \cos \theta_R^s \sin \theta_R^s)^{\! 1/2} \end{eqnarray} from the last operator in eqn.~(\ref{ops1}). In evaluating these expressions, we have used the values $f_K \approx 113$ MeV, the ``bag'' factors $B_K, B_K^\prime \sim 0.7$, and $m_s + m_d \sim 200$ MeV. In order to produce the observed $d - s$ mixing, we expect that at least one of the angles $\theta_L^s,\ \theta_R^s$ is of order the Cabibbo angle, $\theta_C$. Then we find from any one operator that \begin{displaymath} \frac{M}{g} \stackrel{>}{\sim} 200 \, {\rm TeV}~. \label{fcncmbound} \end{displaymath} From eqn.~(\ref{eq:yukawa}) it follows that \begin{displaymath} \Lambda \stackrel{>}{\sim} 6.8 \, {\rm TeV} \sqrt{\kappa\left({m_s\over 200\, {\rm MeV}}\right)}~. \label{eq:fcncbound} \end{displaymath} For $\kappa\approx 3.5$, this yields a lower bound of approximately 13 TeV on $\Lambda$. 
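These numbers can be reproduced directly. In the sketch below, $f_K$, $B_K$, $m_s$, and the Cabibbo-sized mixing angle are as quoted above, while the kaon mass and $K_L - K_S$ mass difference are assumed PDG values not stated explicitly in the text:

```python
import math

# Reproduce the Delta S = 2 bounds.  f_K, B_K, m_s and theta ~ theta_C are
# as in the text; m_K and Delta m_K are assumed PDG values (not stated
# explicitly above).
f_K = 113.0      # MeV
B_K = 0.7
m_K = 497.6      # MeV  (assumption: PDG kaon mass)
dm_K = 3.48e-12  # MeV  (assumption: PDG K_L - K_S mass difference)
theta = 0.22     # Cabibbo-sized mixing angle, in radians

# (M/g)_{LL,RR} >= f_K * sqrt(2 m_K B_K / (3 Delta m_K)) * cos(t) sin(t)
Mg_raw = f_K * math.sqrt(2.0 * m_K * B_K / (3.0 * dm_K))  # in MeV
print(Mg_raw / 1e6)  # ~0.92e3 TeV, as quoted
Mg = Mg_raw * math.cos(theta) * math.sin(theta)
print(Mg / 1e6)      # ~200 TeV

# Translate into Lambda >= (M/g) * sqrt(sqrt(2) kappa m_s / v):
kappa, m_s, v = 3.5, 200.0, 246e3  # masses in MeV
Lam = Mg * math.sqrt(math.sqrt(2.0) * kappa * m_s / v)
print(Lam / 1e6)     # ~12.5 TeV, consistent with the ~13 TeV quoted
```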
Typically, in addition to the operators in eqn.~(\ref{ops1}) there will be flavor-changing operators which are products of color-octet currents\,\footnote{ Note that it is likely that color must be embedded in the flavor interactions in order to avoid possible Goldstone bosons\,\protect\cite{Lane} and large contributions to the $S$ parameter.\protect\cite{Spar}}. At least in the vacuum-insertion approximation, the matrix elements of products of color-octet currents are enhanced relative to those shown in (\ref{ops1}) by a factor of 4/3 for the LL and RR operators and a factor of approximately 7 for the LR operator. Furthermore, because left-handed quarks are weak doublets it is possible that flavor physics associated with the $c$-quark mass also contributes to $\Delta S = 2$ interactions. If so, one would replace $m_s$ with $m_c$ in eqn.~(\ref{eq:fcncbound}), yielding a lower bound on $\Lambda$ of order 20$\sqrt{\kappa}$ TeV. For these reasons, the bounds given above may be conservative. \subsection{Flavor-Changing Neutral Currents: $\Delta C$} Usually, the strongest constraints on nonstandard physics from flavor-changing neutral currents come from processes involving Kaons, like those considered above. In the present case, however, the constraints from $D^0 - \overline{D}^0$ mixing are also important because the $c$-quark is heavier than the $s$-quark, while the $u-c$ mixing is as large as the $d-s$ mixing. Again, there are contributions to $D$-meson mixing from the color-singlet products of currents analogous to those in eqn.~(\ref{ops1}). The purely left-handed or right-handed current-current operators yield \begin{displaymath} \left(\frac{M}{g}\right)_{ \! {\rm LL,RR}} \stackrel{>}{\sim} f_D\left( \frac{2 m_D B_D}{3 \Delta m_D }\right)^{\! 
1/2} \cos \theta_{L,R}^c \sin \theta_{L,R}^c \approx 120 \, {\rm TeV} ~, \end{displaymath} where we have used the limit\,\cite{data} on the neutral $D$-meson mass difference, $\Delta m_D \stackrel{<}{\sim} 1.4 \times 10^{-10}$ MeV, and $f_D \sqrt{B_D} = 0.2$ GeV, $\theta_{L,R}^c \approx \theta_C$. The bound on the scale of the underlying strongly-interacting dynamics follows from eqn.~(\ref{eq:yukawa}): \begin{displaymath} \Lambda \stackrel{>}{\sim} 11 \, {\rm TeV} \sqrt{\kappa\left({m_c\over 1.5\, {\rm GeV}}\right)}~, \label{eq:Dbound} \end{displaymath} so that $\Lambda \stackrel{>}{\sim} 21$ TeV for $\kappa \approx 3.5$. The $\Delta C = 2$, LR product of color-singlet currents gives a weaker bound than eqn.~(\ref{eq:Dbound}) but the LR product of color-octet currents, \begin{displaymath} {\cal L}_{eff} = - \, \cos \theta_L^c \sin \theta_L^c \cos \theta_R^c \sin \theta_R^c \frac{g^2}{M^2} ( \overline c_L \gamma^{\mu} T^a u_L) (\overline c_R \gamma_{\mu} T^a u_R) ~, \label{ops2} \end{displaymath} where $T^a$ are the generators of $SU(3)_C$, gives a stronger bound: \begin{eqnarray} \left(\frac{M}{g}\right)_{ \! {\rm LR}} & \stackrel{>}{\sim} & \frac{4 f_D}{3(m_c + m_u)} \left( \frac{m_D^3 B_D^\prime}{\Delta m_D}\right)^{\! 1/2} (\cos \theta_L^c \sin \theta_L^c \cos \theta_R^c \sin \theta_R^c)^{\! 1/2} \\ [2mm] & \approx & 240 \, {\rm TeV} \left({1.5\, {\rm GeV}\over m_c}\right)~, \end{eqnarray} corresponding to \begin{displaymath} \Lambda \stackrel{>}{\sim} 22 \, {\rm TeV} \sqrt{\kappa\left({1.5\, {\rm GeV}\over m_c}\right)}~. \label{eq:DDbound} \end{displaymath} \section{Higgs Mass Limits} Because of triviality, a lower bound on the scale $\Lambda$ yields an upper limit on the Higgs boson's mass. A rigorous determination of this limit would require a nonperturbative calculation of the Higgs mass in an $O(4)$-symmetric theory subject to the constraint on $\Lambda$. 
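In practice, eqn.~(\ref{estimate}) can be inverted numerically for the largest $m_H$ consistent with a given $\Lambda$; a simple bisection sketch with $v = 246$ GeV suffices (the cut-off values below are the lower bounds derived in the preceding sections):

```python
import math

# Invert  m_H^2 * ln(Lambda / m_H) = 4 pi^2 v^2 / 3  for the largest m_H
# consistent with a given cut-off Lambda.  The left-hand side is monotonic
# in m_H for m_H << Lambda, so bisection applies.
v = 246.0  # GeV

def m_H_max(Lam):
    """Upper bound on the Higgs mass (GeV) for cut-off Lam (GeV)."""
    target = 4.0 * math.pi**2 * v**2 / 3.0
    lo, hi = 1.0, Lam / 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid**2 * math.log(Lam / mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

print(round(m_H_max(13e3)))  # ~490 GeV for Lambda ~ 13 TeV
print(round(m_H_max(21e3)))  # ~460 GeV for Lambda ~ 21 TeV
```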
Here we use eqn.~(\ref{estimate}) to provide an estimate of this upper limit by naive extrapolation of the lowest-order perturbative result.\footnote{The naive perturbative bound has been remarkably close to the non-perturbative estimates derived from lattice Monte Carlo calculations.\cite{lattice}} The bound $\Lambda \stackrel{>}{\sim} 13$ TeV given by the contribution of the $\Delta S = 2$ product of color-singlet currents to the $K_L - K_S$ mass difference, eqn.~(\ref{eq:fcncbound}), in the case $\kappa \approx 3.5$, results in the limit\,\footnote{If $\kappa \approx 4\pi$, $\Lambda$ would have to be greater than 24 TeV, yielding an upper limit on the Higgs boson's mass of 450 GeV. If $\kappa \approx 1$, $\Lambda$ would be greater than 6.8 TeV, yielding the upper limit $m_H \stackrel{<}{\sim} 570$ GeV.} $m_H \stackrel{<}{\sim} 490$ GeV. The bound $\Lambda \stackrel{>}{\sim} 21$ TeV, given by the contribution of the $\Delta C = 2 \,$, LL or RR product of color-singlet currents to the neutral $D$-meson mass difference, eqn.~(\ref{eq:Dbound}), yields $m_H \stackrel{<}{\sim} 460$ GeV. Limits from the contributions of color-octet currents or from the relationship between $m_c$ and $\Delta m_K$ would be even more stringent. \section{Conclusions} Because of triviality, theories with a heavy Higgs boson are effective low-energy theories valid below some cut-off scale $\Lambda$. We have shown that the experimental constraint on the amount of custodial symmetry violation implies that the scale $\Lambda$ must be greater than of order 7.5 TeV. The underlying high-energy theory must also include flavor dynamics at a scale of order $\Lambda$ or greater in order to produce the different Yukawa couplings of the Higgs to ordinary fermions. This flavor dynamics will generically give rise to flavor-changing neutral currents. 
In this note we showed that satisfying the experimental constraints on extra contributions to $\Delta m_K$ and $\Delta m_D$ requires that the scale of the associated flavor dynamics exceed certain lower bounds. At the same time, the new physics must provide sufficiently large Yukawa couplings to give the quarks their observed masses. In order to give rise to a sufficiently large $s$-quark Yukawa coupling, we showed that $\Lambda$ must be greater than of order 13 TeV, while in the case of the $c$-quark the bound is even more stringent, $\Lambda \stackrel{>}{\sim} 21$ TeV. For theories defined about the infrared-stable Gaussian fixed-point, we estimated that this lower bound on $\Lambda$ yields an upper limit of approximately 460 GeV on the Higgs boson's mass, independent of the regulator chosen to define the theory. We also showed that some regulator schemes, such as higher-derivative regulators, used to define the theory about a different fixed-point are particularly dangerous because an infinite number of custodial-isospin-violating operators become relevant. \vspace{12pt} \centerline{\bf Acknowledgments} \vspace{10pt} R.S.C. and E.H.S. thank Koichi Yamawaki and the organizers of SCGT 96 for holding a stimulating conference. E.H.S. acknowledges the support of the NSF Faculty Early Career Development (CAREER) program, the DOE Outstanding Junior Investigator program, and the JSPS Invitation Fellowship Program. {\em This work was supported in part by the National Science Foundation under grant PHY-9501249, and by the Department of Energy under grant DE-FG02-91ER40676.} \section*{References}
\section{Introduction} Hom-Lie algebras and related hom-algebra structures have recently become a subject of growing interest and extensive investigations, in part due to the prospect of providing a general framework in which one can produce many types of natural deformations of (Lie) algebras, in particular $q$-deformations which are of interest both in mathematics and in physics. One of the main initial motivations for this development came from mathematical physics works on $q$-deformations of infinite-dimensional algebras, primarily the $q$-deformed Heisenberg algebras ($q$-deformed Weyl algebras), oscillator algebras, and the Virasoro algebra \cite{AizawaSato,ChaiElinPop,ChaiKuLukPopPresn,ChaiKuLuk,ChaiPopPres,CurtrZachos1,CurtrFairlZachos,DaskaloyannisGendefVir,Hu,K92,LiuKQuantumCentExt,LiuKQCharQuantWittAlg,LiuKQPhDthesis}. Quasi-Lie algebras, subclasses of quasi-hom-Lie algebras, and hom-Lie algebras as well as their general colored (graded) counterparts were introduced between 2003 and 2005 in \cite{HLS,LS1,LS2,LSGradedquasiLiealg,Czech:witt}. Further on, between 2006 and 2008, Makhlouf and Silvestrov introduced the notions of hom-associative algebras, hom-(co, bi)algebras and hom-Hopf algebras, and also studied their properties \cite{MS1,MS2,MS3}. A hom-associative algebra, being a generalization of an associative algebra with the associativity axiom extended by a linear twisting map, is always hom-Lie admissible, meaning that the commutator multiplication in any hom-associative algebra yields a hom-Lie algebra \cite{MS1}. Whereas associativity is replaced by hom-associativity in hom-associative algebras, hom-coassociativity for hom-coalgebras can be considered in a similar way. 
Many of these important developments, and many constructions of examples and classes of hom-algebra structures in physics and in mathematics, are based on twisted derivations, or $\sigma$-derivations, which are generalized derivations twisting the Leibniz rule by means of a linear map. These types of twisted derivation maps are central to the associative Ore extension algebras, or rings, introduced in algebra in the 1930s, generalizing crossed product (semidirect product) algebras, or rings, incorporating both actions and twisted derivations. Non-associative Ore extensions, on the other hand, were first introduced in 2015, in the unital case, by Nystedt, {\"O}inert, and Richter \cite{2015arXiv150901436N} (see also \cite{2017arXiv1705.02778} for an extension to monoid Ore extensions). In the present article, we generalize this construction to the non-unital case, and investigate when these non-unital, non-associative Ore extensions are hom-associative. Finding necessary and sufficient conditions for such extensions to exist, we are also able to construct families of hom-associative quantum planes (\autoref{ex:hom-quant}), universal enveloping algebras of a Lie algebra (\autoref{ex:hom-env}), and Weyl algebras (\autoref{ex:hom-weyl}), all being hom-associative generalizations of their classical counterparts. We do not make use of any previous results about non-associative Ore extensions, but our construction of hom-associative Weyl algebras has some similarities to the non-associative Weyl algebras in \cite{2015arXiv150901436N}; for instance, both are simple. Finally, in \autoref{sec:weak-unitalization}, we prove constructively that any multiplicative hom-associative algebra can be embedded in a multiplicative, weakly unital hom-associative algebra. \section{Preliminaries} In this section, we present some definitions and review some results from the theory of hom-associative algebras and that of non-associative Ore extensions. 
\subsection{Hom-associative algebras} Here we define what we mean for an algebraic structure to be \emph{hom-associative}, and review a couple of results concerning the construction of such structures. First, throughout this paper, by \emph{non-associative} algebras we mean algebras which are not necessarily associative; in particular, associative algebras are included by definition. We also follow the convention of calling a non-associative algebra $A$ \emph{unital} if there exists an element $1\in A$ such that for any element $a\in A$, $a\cdot 1=1\cdot a=a$. By \emph{non-unital} algebras, we mean algebras which are not necessarily unital; in particular, unital algebras are included as a subclass. \begin{definition}[Hom-associative algebra]\label{def:hom-assoc-algebra} A \emph{hom-associative algebra} over an associative, commutative, and unital ring $R$ is a triple $(M,\cdot,\alpha)$ consisting of an $R$-module $M$, a binary operation $\cdot\colon M\times M\to M$ linear over $R$ in both arguments, and an $R$-linear map $\alpha\colon M\to M$ satisfying, for all $a,b,c\in M$, \begin{equation} \alpha(a)\cdot(b\cdot c)=(a\cdot b)\cdot\alpha(c). \label{eq:hom-condition} \end{equation} \end{definition} Since $\alpha$ twists the associativity, we will refer to it as the \emph{twisting map}, and unless otherwise stated, it is understood that $\alpha$ without any further reference will always denote the twisting map of a hom-associative algebra. \begin{remark} A hom-associative algebra over $R$ is in particular a non-unital, non-associative $R$-algebra, and in case $\alpha$ is the identity map, a non-unital, associative $R$-algebra. \end{remark} Furthermore, if the twisting map $\alpha$ is also multiplicative, i.e.\ if $\alpha(a\cdot b) = \alpha(a)\cdot \alpha(b)$ for all elements $a$ and $b$ in the algebra, then we say that the hom-associative algebra is \emph{multiplicative}. 
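The following small example, a special case of the general twisting construction recalled in \autoref{prop:star-alpha-mult} below, illustrates \autoref{def:hom-assoc-algebra}. \begin{example} Let $K$ be a field, let $\alpha$ be the $K$-algebra endomorphism of the polynomial algebra $K[Y]$ determined by $\alpha(Y)=Y^2$, and define $a*b:=\alpha(a\cdot b)$ for all $a,b\in K[Y]$. Since $\alpha$ is an endomorphism with respect to $\cdot$, for all $a,b,c\in K[Y]$, \begin{equation*} \alpha(a)*(b*c)=\alpha\left(\alpha(a)\cdot\alpha(b\cdot c)\right)=\alpha\left(\alpha(a\cdot b\cdot c)\right)=\alpha\left(\alpha(a\cdot b)\cdot\alpha(c)\right)=(a*b)*\alpha(c), \end{equation*} so $(K[Y],*,\alpha)$ is hom-associative; moreover, $\alpha(a*b)=\alpha(\alpha(a\cdot b))=\alpha(\alpha(a)\cdot\alpha(b))=\alpha(a)*\alpha(b)$, so it is multiplicative. It is not associative; for instance, $Y*(Y*1)=Y*Y^2=\alpha(Y^3)=Y^6$, whereas $(Y*Y)*1=Y^4*1=\alpha(Y^4)=Y^8$. \end{example}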
\begin{definition}[Morphism of hom-associative algebras]\label{def:morphism} A \emph{morphism} between two hom-associative algebras $A$ and $A'$ with twisting maps $\alpha$ and $\alpha'$, respectively, is an algebra homomorphism $f\colon A\to A'$ such that $f\circ \alpha= \alpha'\circ f$. If $f$ is also bijective, the two are \emph{isomorphic}, written $A\cong A'$. \end{definition} \begin{definition}[Hom-associative subalgebra] Let $A$ be a hom-associative algebra with twisting map $\alpha$. A \emph{hom-associative subalgebra} $B$ of $A$ is a subalgebra of $A$ that is also a hom-associative algebra with twisting map given by the restriction of $\alpha$ to $B$. \end{definition} \begin{definition}[Hom-ideal] A \emph{hom-ideal} of a hom-associative algebra is an algebra ideal $I$ such that $\alpha(I)\subseteq I$. \end{definition} In the classical setting, an ideal is in particular a subalgebra. With the above definition, the analogue is also true for a hom-associative algebra, in that a hom-ideal is a hom-associative subalgebra. \begin{definition}[Hom-simplicity] We say that a hom-associative algebra $A$ is \emph{hom-simple} provided its only hom-ideals are $0$ and $A$. \end{definition} In particular, we see that any simple hom-associative algebra is also hom-simple, while the converse need not be true; there may exist ideals that are not invariant under $\alpha$. \begin{definition}[Hom-associative ring]\label{def:hom-ring} A \emph{hom-associative ring} can be seen as a hom-associative algebra over the ring of integers. \end{definition} \begin{definition}[Weakly unital hom-associative algebra]\label{def:weak-hom} Let $A$ be a hom-associ\-ative algebra. If there is an element $e\in A$ such that $e\cdot a=a\cdot e=\alpha(a)$ for all $a\in A$, we say that $A$ is \emph{weakly unital} with \emph{weak unit} $e$. 
\end{definition} \begin{remark}Any unital, hom-associative algebra with twisting map $\alpha$ is weakly unital with weak unit $\alpha(1)$, since by hom-associativity \begin{equation*} \alpha(1)\cdot a=\alpha(1)\cdot (1\cdot a)=(1\cdot 1)\cdot\alpha(a)=\alpha(a)=\alpha(a)\cdot(1\cdot 1)=(a\cdot 1)\cdot \alpha(1)=a\cdot\alpha(1). \end{equation*} \end{remark} Any non-unital, associative algebra can be extended to a non-trivial hom-associ\-ative algebra, as the following proposition demonstrates: \begin{proposition}[\cite{lietheoryyau}]\label{prop:star-alpha-mult} Let $A$ be a non-unital, associative algebra, $\alpha$ an algebra endomorphism on $A$ and define $*\colon A\times A\to A$ by $a* b:=\alpha(a\cdot b)$ for all $a,b\in A$. Then $(A,*,\alpha)$ is a hom-associative algebra. \end{proposition} \begin{proof} Linearity follows immediately, while for all $a,b,c\in A$, we have \begin{align*} \alpha(a)* (b* c)&=\alpha(a)* (\alpha(b\cdot c))=\alpha(\alpha(a)\cdot \alpha(b\cdot c))=\alpha(\alpha(a\cdot b\cdot c)),\\ (a* b)* \alpha(c)&=\alpha(a\cdot b)* \alpha(c)=\alpha(\alpha(a\cdot b)\cdot\alpha(c))=\alpha(\alpha(a\cdot b\cdot c)), \end{align*} which proves that $(A,* ,\alpha)$ is hom-associative. \end{proof} Note that we are slightly abusing the notation of \autoref{def:hom-assoc-algebra} here; $A$ in $(A,*,\alpha)$ really denotes the algebra and not only its module structure. From now on, we will always refer to this construction when writing $*$. \begin{corollary}[\cite{2009arXiv0904.4874F}]\label{cor:weak-unit} If $A$ is a unital, associative algebra, then $(A,*,\alpha)$ is weakly unital with weak unit 1. \end{corollary} \begin{proof} $1* x=\alpha(1\cdot x)=\alpha(x)=\alpha(x\cdot1)=x* 1.$ \end{proof} \subsection{Non-unital, non-associative Ore extensions} In this section, we define non-unital, non-associative Ore extensions, together with some new terminology. 
\begin{definition}[Left $R$-additivity] If $R$ is a non-unital, non-associative ring, we say that a map $\beta\colon R\to R$ is \emph{left $R$-additive} if for all $r,s,t\in R$, $r\cdot\beta(s+t)=r\cdot\left(\beta(s)+\beta(t)\right)$. \end{definition} In what follows, $\mathbb{N}$ will always denote the set of non-negative integers, and $\mathbb{N}_{>0}$ the set of positive integers. Now, given a non-unital, non-associative ring $R$ with left $R$-additive maps $\delta\colon R\to R$ and $\sigma\colon R\to R$, by a \emph{non-unital, non-associative Ore extension} of $R$, $R[X;\sigma,\delta]$, we mean the set of formal sums $\sum_{i\in\mathbb{N}} a_i X^i,\ a_i\in R,$ called polynomials, with finitely many $a_i$ nonzero, endowed with the addition \begin{equation*} \sum_{i\in\mathbb{N}}a_iX^i + \sum_{i\in\mathbb{N}}b_i X^i = \sum_{i\in\mathbb{N}}(a_i+b_i)X^i,\quad a_i,b_i\in R, \end{equation*} where two polynomials are equal if and only if their corresponding coefficients are equal, and with a multiplication defined for all $a,b\in R$ and $m,n\in\mathbb{N}$ by \begin{equation} aX^m\cdot bX^n=\sum_{i\in \mathbb{N}}\left(a\cdot\pi^m_i(b)\right)X^{i+n}.\label{eq:ore-mult} \end{equation} Here $\pi_i^m$ denotes the sum of all $\binom{m}{i}$ possible compositions of $i$ copies of $\sigma$ and $m-i$ copies of $\delta$ in arbitrary order. For example, $\pi_0^0=\mathrm{id}_R$ and $\pi^3_1=\sigma\circ\delta\circ\delta+\delta\circ\sigma\circ\delta+\delta\circ\delta\circ\sigma$. We also extend the definition of $\pi_i^m$ by setting $\pi^m_i\equiv0$ whenever $i<0$ or $i>m$. Imposing distributivity of the multiplication over addition makes $R[X;\sigma,\delta]$ a ring. In the special case when $\sigma=\mathrm{id}_R$, we say that $R[X;\mathrm{id}_R,\delta]$ is a \emph{non-unital, non-associative differential polynomial ring}, and when $\delta\equiv0$, $R[X;\sigma,0]$ is said to be a \emph{non-unital, non-associative skew polynomial ring}. 
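To make the multiplication \eqref{eq:ore-mult} concrete, let us record a small worked instance. Since $\pi_0^2=\delta\circ\delta$, $\pi_1^2=\sigma\circ\delta+\delta\circ\sigma$, and $\pi_2^2=\sigma\circ\sigma$, the product of the monomials $aX^2$ and $bX$ reads \begin{equation*} aX^2\cdot bX=\left(a\cdot\delta^2(b)\right)X+\left(a\cdot\left(\sigma\circ\delta+\delta\circ\sigma\right)(b)\right)X^2+\left(a\cdot\sigma^2(b)\right)X^3, \end{equation*} and in the simplest non-trivial case, \begin{equation*} aX\cdot bX^0=\left(a\cdot\delta(b)\right)X^0+\left(a\cdot\sigma(b)\right)X, \end{equation*} which in the unital, associative setting reduces to the familiar rewriting rule $X\cdot b=\sigma(b)X+\delta(b)$ of classical Ore extensions.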
Note that when $m=n=0$, $aX^0\cdot bX^0 = \sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^0(b)\right)X^i=(a\cdot b)X^0,$ so $R\cong RX^0$ by the isomorphism $r\mapsto rX^0$ for any $r\in R$. Since $RX^0$ is a subring of $R[X;\sigma,\delta]$, we can view $R$ as a subring of $R[X;\sigma,\delta]$, making sense of expressions like $a\cdot bX^0$. \begin{remark}If $R$ contains a unit, we write $X$ for the formal sum $\sum_{i\in\mathbb{N}}a_i X^i$ with $a_1=1$ and $a_i=0$ when $i\neq 1$. It does not necessarily make sense to think of $X$ as an element of the non-associative Ore extension if $R$ is not unital. \end{remark} The left-distributivity of the multiplication over addition forces $\delta$ and $\sigma$ to be left $R$-additive: for any $r,s,t\in R$, $rX\cdot(s+t)=rX\cdot s+rX\cdot t$, and by expanding the left- and right-hand side, \begin{align*} rX\cdot(s+t)&=r\cdot\sigma(s+t)X+r\cdot\delta(s+t),\\ rX\cdot s+rX\cdot t&=r\cdot\sigma(s)X+r\cdot\delta(s)+r\cdot\sigma(t)X+r\cdot\delta(t), \end{align*} so by comparing coefficients, we arrive at the desired conclusion. \begin{definition}[$\sigma$-derivation]\label{def:sigma-derivation} Let $R$ be a non-unital, non-associative ring where $\sigma$ is an endomorphism and $\delta$ an additive map on $R$. Then $\delta$ is called a \emph{$\sigma$-derivation} if $\delta(a\cdot b)=\sigma(a)\cdot\delta(b)+\delta(a)\cdot b$ holds for all $a,b\in R$. If $\sigma=\mathrm{id}_R$, $\delta$ is a \emph{derivation}. \end{definition} \begin{remark}\label{re:sigma-derivation} If $R$ and $\sigma$ are unital and $\delta$ a $\sigma$-derivation, then $\delta(1)=\delta(1\cdot 1)=2\cdot\delta(1)$, so that $\delta(1)=0$. Furthermore, if $R$ is also associative, then it is both a necessary and sufficient condition that $\sigma$ be an endomorphism and $\delta$ a $\sigma$-derivation on $R$ for the unital, associative Ore extension $R[X;\sigma,\delta]$ to exist. 
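For a concrete example of a $\sigma$-derivation in this setting, in the spirit of the $q$-deformations mentioned in the introduction, let $K$ be a field, $q\in K^\times$ with $q\neq1$, and let $\sigma$ be the $K$-algebra endomorphism of $K[Y]$ determined by $\sigma(Y)=qY$, so that $\sigma(f)(Y)=f(qY)$. The Jackson $q$-derivative \begin{equation*} \delta(f)(Y)=\frac{f(qY)-f(Y)}{(q-1)Y},\qquad \delta(Y^n)=\frac{q^n-1}{q-1}\,Y^{n-1},\quad n\in\mathbb{N}_{>0}, \end{equation*} is then a $\sigma$-derivation, since \begin{equation*} \delta(f\cdot g)(Y)=f(qY)\,\frac{g(qY)-g(Y)}{(q-1)Y}+\frac{f(qY)-f(Y)}{(q-1)Y}\,g(Y), \end{equation*} that is, $\delta(f\cdot g)=\sigma(f)\cdot\delta(g)+\delta(f)\cdot g$.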
\end{remark} \begin{definition}[Homogeneous map] Let $R[X;\sigma,\delta]$ be a non-unital non-associ\-ative Ore extension of a non-unital, non-associative ring $R$. Then we say that a map $\beta\colon R[X;\sigma,\delta]\to R[X;\sigma,\delta]$ is \emph{homogeneous} if for all $a\in R$ and $m\in\mathbb{N}$, $\beta(aX^m)=\beta(a)X^m$. If $\gamma\colon R\to R$ is any (additive) map, we may \emph{extend it homogeneously} to $R[X;\sigma,\delta]$ by defining $\gamma(aX^m):=\gamma(a)X^m$ (imposing additivity). \end{definition} \section{Non-associative Ore extensions of non-associative rings} We use this small section to present a couple of results that hold true for any non-unital, non-associative Ore extension of a non-unital, non-associative ring. \begin{lemma}[Homogeneously extended ring endomorphism]\label{lem:endo-extension} Let $R[X;\sigma,\delta]$ be a non-unital, non-associative Ore extension of a non-unital, non-associative ring $R$. If $\gamma$ is an endomorphism on $R$, then the homogeneously extended map is an endomorphism on $R[X;\sigma,\delta]$ if and only if \begin{equation} \gamma(a)\cdot\pi_i^m(\gamma(b))=\gamma(a)\cdot\gamma(\pi_i^m(b)),\quad\text{for all } i,m\in\mathbb{N}\text{ and } a,b\in R.\label{eq:homogeneous-endomorphism} \end{equation} \end{lemma} \begin{proof} Additivity follows from the definition, while for any monomials $aX^m$ and $bX^n$, \begin{align*} \gamma(aX^m\cdot bX^n)&=\gamma\left(\sum_{i\in\mathbb{N}}a\cdot\pi_i^m(b)X^{i+n}\right)=\sum_{i\in\mathbb{N}}\gamma(a)\cdot\gamma\big(\pi_i^m(b)\big)X^{i+n},\\ \gamma(aX^m)\cdot\gamma(bX^n)&=\gamma(a)X^m\cdot\gamma(b)X^n=\sum_{i\in\mathbb{N}}\gamma(a)\cdot\pi_i^m(\gamma(b))X^{i+n}. \end{align*} Comparing coefficients between the two completes the proof. \end{proof} \begin{corollary}[Homogeneously extended unital ring endomorphism]\label{cor:unital-endo-extension} Let $R[X;\sigma,\delta]$ be a unital, non-associative Ore extension of a unital, non-associative ring $R$. 
If $\alpha$ is an endomorphism on $R$ and there exists an $a\in R$ such that $\alpha(a)=1$, then the homogeneously extended map on $R[X;\sigma,\delta]$ is an endomorphism if and only if $\alpha$ commutes with $\delta$ and $\sigma$. \end{corollary} \begin{proof} This follows from \autoref{lem:endo-extension} by choosing $a$ so that $\alpha(a)=1$: if $\alpha$ commutes with $\delta$ and $\sigma$, then $\pi_i^m(\alpha(b))=\alpha(\pi_i^m(b))$. On the other hand, if $\pi_i^m(\alpha(b))=\alpha(\pi_i^m(b))$, then choosing $m=1$ and $i=0$ gives $\pi_0^1(\alpha(b))=\delta(\alpha(b))$ and $\alpha(\pi_0^1(b))=\alpha(\delta(b))$, while choosing $m=1$ and $i=1$ gives $\pi_1^1(\alpha(b))=\sigma(\alpha(b))$ and $\alpha(\pi_1^1(b))=\alpha(\sigma(b))$.\qedhere \end{proof} \section{Hom-associative Ore extensions of non-associative rings} This section is devoted to the question of which non-unital, non-associative Ore extensions of non-unital, non-associative rings $R$ are hom-associative. \begin{proposition}[Hom-associative Ore extension]\label{prop:hom-ass-ore-condition} Let $R[X;\sigma,\delta]$ be a non-unital, non-associative Ore extension of a non-unital, non-associative ring $R$. 
Furthermore, let $\alpha_{i,j}(a)\in R$ be dependent on $a\in R$ and $i,j\in\mathbb{N}_{>0}$, and put for an additive map $\alpha\colon R[X;\sigma,\delta]\to R[X;\sigma,\delta]$, \begin{equation} \alpha\left(aX^m\right)=\sum_{i\in\mathbb{N}} \alpha_{i+1,m+1}(a)X^i, \quad\forall a\in R,\forall m\in\mathbb{N}.\label{eq:alpha-definition} \end{equation} Then $R[X;\sigma,\delta]$ is hom-associative with the twisting map $\alpha$ if and only if for all $a,b,c\in R$ and $k,m,n,p\in\mathbb{N}$, \begin{equation} \sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha_{i+1,m+1}(a)\cdot\pi_{k-j}^i\left(b\cdot\pi_{j-p}^n(c)\right) = \sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{k-j}^{i+n}\left(\alpha_{j+1,p+1}(c)\right).\label{eq:hom-ore-general} \end{equation} \end{proposition} \begin{proof} For any $a,b,c\in R$ and $m,n,p\in\mathbb{N}$, {\allowdisplaybreaks \begin{align*} \alpha\left(aX^m\right)\cdot \left(b X^n \cdot cX^p\right)&=\alpha\left(aX^m\right)\cdot\left(\sum_{q\in\mathbb{N}}\left(b\cdot\pi_q^n(c)\right)X^{q+p}\right)\\ &=\sum_{q\in\mathbb{N}}\alpha\left(aX^m\right)\cdot\left(\left(b\cdot\pi_q^n(c)\right)X^{q+p}\right)\\ &=\sum_{q\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha_{i+1,m+1}(a)X^i\cdot\left(\left(b\cdot\pi_q^n(c)\right)X^{q+p}\right)\\ &=\sum_{q\in\mathbb{N}}\sum_{i\in\mathbb{N}}\sum_{l\in\mathbb{N}}\alpha_{i+1,m+1}(a)\cdot\pi_l^i\left(b\cdot\pi_q^n(c)\right)X^{l+q+p}\\ &=\sum_{l\in\mathbb{N}}\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha_{i+1,m+1}(a)\cdot\pi_l^i\left(b\cdot\pi_{j-p}^n(c)\right)X^{l+j}\\ &=\sum_{k\in\mathbb{N}}\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha_{i+1,m+1}(a)\cdot\pi_{k-j}^i\left(b\cdot\pi_{j-p}^n(c)\right)X^{k},\\ \left(aX^m\cdot bX^n\right)\cdot \alpha\left(cX^p\right)&=\left(\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)X^{i+n}\right)\cdot \alpha\left(cX^p\right)\\ &=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)X^{i+n}\cdot \alpha\left(cX^p\right)\\ 
&=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)X^{i+n}\cdot \sum_{j\in\mathbb{N}} \alpha_{j+1,p+1}(c)X^j\\ &=\sum_{i\in\mathbb{N}}\sum_{j\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)X^{i+n}\cdot \alpha_{j+1,p+1}(c)X^j\\ &=\sum_{i\in\mathbb{N}}\sum_{j\in\mathbb{N}}\sum_{l\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{l}^{i+n}\left(\alpha_{j+1,p+1}(c)\right)X^{l+j}\\ &=\sum_{k\in\mathbb{N}}\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{k-j}^{i+n}\left(\alpha_{j+1,p+1}(c)\right)X^{k}. \end{align*}} Comparing coefficients completes the proof. \end{proof} \begin{corollary}\label{cor:hom-ore-ness} Let $R[X;\sigma,\delta]$ be a non-unital, hom-associative Ore extension of a non-unital, non-associative ring $R$, with twisting map defined by \eqref{eq:alpha-definition}. Then the following assertions hold for all $a,b,c\in R$ and $k,p\in\mathbb{N}$: {\allowdisplaybreaks \begin{align} \sum_{i=k-p}^{I_{0,a}}\alpha_{i+1,1}(a)\cdot\pi_{k-p}^i(b\cdot c)=&(a\cdot b)\cdot\alpha_{k+1,p+1}(c),\label{eq:hom-ness-1}\\ \sum_{i=k-p-1}^{I_{0,a}}\alpha_{i+1,1}(a)\cdot\left(\pi_{k-p-1}^i(b\cdot\sigma(c))\right)\phantom{=}\nonumber\\ +\sum_{i=k-p}^{I_{0,a}}\alpha_{i+1,1}(a)\cdot\left(\pi_{k-p}^i(b\cdot\delta(c))\right)=&(a\cdot b)\cdot\left(\delta(\alpha_{k+1,p+1}(c))+\sigma(\alpha_{k,p+1}(c))\right)\nonumber\\ =&(a\cdot b)\cdot\left(\alpha_{k+1,p+1}(\delta(c))+\alpha_{k,p+1}(\sigma(c))\right)\hspace{-.5ex},\label{eq:hom-ness-2}\\ \sum_{i=k-p}^{I_{1,a}}\alpha_{i+1,2}(a)\cdot\pi^i_{k-p}(b\cdot c)=&\left(a\cdot\sigma(b)\right)\cdot\left(\delta(\alpha_{k+1,p+1}(c))+\sigma(\alpha_{k,p+1}(c))\right)\nonumber\\ &+(a\cdot\delta(b))\cdot\alpha_{k+1,p+1}(c),\label{eq:hom-ness-3} \end{align}} where $\alpha_{0,p+1}(\cdot):=0$, and $I_{p,a}$ is the smallest natural number, depending on $p$ and $a$, such that $\alpha_{i+1,p}(a)=0$ for all $i> I_{p,a}$. 
\end{corollary} \begin{proof} We get \eqref{eq:hom-ness-1}, the first equality in \eqref{eq:hom-ness-2}, and \eqref{eq:hom-ness-3} immediately from the cases $m=n=0$, $m=0, n=1$, and $m=1,n=0$ in \eqref{eq:hom-ore-general}, respectively. The second equality in \eqref{eq:hom-ness-2} follows from comparison with \eqref{eq:hom-ness-1}. \end{proof} \begin{remark} In case $k<p$ or $k>I_{0,a}$ in \eqref{eq:hom-ness-1}, $(a\cdot b)\cdot\alpha_{k+1,p+1}(c)=0$. The statement is analogous for \eqref{eq:hom-ness-2} and \eqref{eq:hom-ness-3}. \end{remark} \begin{corollary} Let $R[X;\sigma,\delta]$ be a non-unital, hom-associative Ore extension of a non-unital, non-associative ring $R$, with twisting map defined by \eqref{eq:alpha-definition}. Then the following assertions hold for all $a,b,c\in R$ and $j,p\in\mathbb{N}$: \begin{align} (a\cdot b)\cdot \sigma(\alpha_{I+1,p+1}(c))&=(a\cdot b)\cdot\alpha_{I+1,p+1}(\sigma(c)),\quad I=\max(I_{p,c}, I_{p,\delta(c)}),\label{eq:assertion1}\\ (a\cdot b)\cdot\delta(\alpha_{1,p+1}(c))&=(a\cdot b)\cdot \alpha_{1,p+1}(\delta(c))=\begin{cases}(a\cdot b)\cdot \alpha_{j+1,j+1}(\delta(c))&\text{if } p=0,\\ 0&\text{if }p\neq0.\label{eq:assertion2}\end{cases} \end{align} \end{corollary} \begin{proof} Put $k=\max(I_{p,c}, I_{p,\delta(c)})$ and $k=0$ in \eqref{eq:hom-ness-2}, respectively. \end{proof} \section{Hom-associative Ore extensions of hom-associative rings} In this section, we continue our previous investigation, now narrowed down to hom-associative Ore extensions of hom-associative rings. \begin{corollary}\label{cor:hom-alpha-condition}Let $R[X;\sigma,\delta]$ be a non-unital, non-associative Ore extension of a non-unital, hom-associative ring $R$, and extend the twisting map $\alpha\colon R\to R$ homogeneously to $R[X;\sigma,\delta]$. 
Then $R[X;\sigma,\delta]$ is hom-associative if and only if for all $a,b,c\in R$ and $l,m,n\in\mathbb{N}$, \begin{equation} \sum_{i\in\mathbb{N}} \alpha(a)\cdot\pi_i^m\left(b\cdot\pi_{l-i}^n(c)\right) = \sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot\pi_l^{i+n}\left(\alpha(c)\right).\label{eq:pi-function-sum} \end{equation} \end{corollary} \begin{proof}A homogeneous $\alpha$ corresponds to $\alpha_{i+1,m+1}(a)=\alpha(a)\cdot\delta_{i,m}$ and $\alpha_{j+1,p+1}(c)=\alpha(c)\cdot\delta_{j,p}$ in \autoref{prop:hom-ass-ore-condition}, where $\delta_{i,m}$ is the Kronecker delta. Then the left-hand side reads \begin{align*} \sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha_{i+1,m+1}(a)\cdot\pi_{k-j}^i\left(b\cdot\pi_{j-p}^n(c)\right)&=\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\alpha(a)\cdot\delta_{i,m}\cdot\pi_{k-j}^i\left(b\cdot\pi_{j-p}^n(c)\right)\\ &=\sum_{j\in\mathbb{N}}\alpha(a)\cdot\pi_{k-j}^m\left(b\cdot\pi_{j-p}^n(c)\right)\\ &=\sum_{i\in\mathbb{N}}\alpha(a)\cdot\pi_i^m\left(b\cdot\pi_{k-p-i}^n(c)\right)\\ &=\sum_{i\in\mathbb{N}}\alpha(a)\cdot\pi_i^m\left(b\cdot\pi_{l-i}^n(c)\right), \end{align*} and the right-hand side \begin{align*} \sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{k-j}^{i+n}\left(\alpha_{j+1,p+1}(c)\right)&=\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{k-j}^{i+n}\left(\alpha(c)\cdot\delta_{j,p}\right)\\ &=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{k-p}^{i+n}\left(\alpha(c)\right)\\ &=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot \pi_{l}^{i+n}\left(\alpha(c)\right), \end{align*} which completes the proof. \end{proof} \begin{corollary}Let $R[X;\sigma,\delta]$ be a non-unital, hom-associative Ore extension of a non-unital, hom-associative ring $R$, with the twisting map $\alpha\colon R\to R$ extended homogeneously to $R[X;\sigma,\delta]$. 
Then, for all $a,b,c\in R$, \begin{align} (a\cdot b)\cdot\delta(\alpha(c))&=(a\cdot b)\cdot\alpha(\delta(c)),\label{eq:homogeneous-delta-commute}\\ (a\cdot b)\cdot\sigma(\alpha(c))&=(a\cdot b)\cdot\alpha(\sigma(c)),\label{eq:homogeneous-sigma-commute}\\ \alpha(a)\cdot\delta(b\cdot c)&=\alpha(a)\cdot(\delta(b)\cdot c+\sigma(b)\cdot\delta(c)),\label{eq:homogeneous-sigma-derivation}\\ \alpha(a)\cdot\sigma(b\cdot c)&=\alpha(a)\cdot\left(\sigma(b)\cdot\sigma(c)\right).\label{eq:homogeneous-sigma-homomorphism} \end{align} \end{corollary} \begin{proof}Using the same technique as in the proof of \autoref{cor:hom-alpha-condition}, this follows from \autoref{cor:hom-ore-ness} with a homogeneous $\alpha$. \end{proof} For the two last equations, it is worth noting the resemblance to the unital and associative case (see the latter part of \autoref{re:sigma-derivation}). \begin{corollary} Assume $\alpha\colon R\to R$ is the twisting map of a non-unital, hom-associative ring $R$, and extend the map homogeneously to $R[X;\sigma, \delta]$. Assume further that $\alpha$ commutes with $\delta$ and $\sigma$. Then $R[X;\sigma,\delta]$ is hom-associative if and only if for all $a,b,c\in R$ and $l,m,n\in \mathbb{N}$, \begin{equation} \alpha(a)\cdot\sum_{i\in\mathbb{N}}\pi_i^m\left(b\cdot\pi_{l-i}^n(c)\right)=\alpha(a)\cdot\sum_{i\in\mathbb{N}}\left(\pi_i^m(b)\cdot\pi_l^{i+n}(c)\right).\label{eq:homogeneous-iff-condition} \end{equation} \end{corollary} \begin{proof} Using \autoref{cor:hom-alpha-condition}, we know that $R[X;\sigma,\delta]$ is hom-associative if and only if for all $a,b,c\in R$ and $l,m,n\in\mathbb{N}$, \begin{equation*} \sum_{i\in\mathbb{N}}\alpha(a)\cdot\pi_i^m\left(b\cdot\pi_{l-i}^n(c)\right)=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot\pi_l^{i+n}\left(\alpha(c)\right). 
\end{equation*} However, since $\alpha$ commutes with both $\delta$ and $\sigma$, and $R$ is hom-associative, the right-hand side can be rewritten as \begin{align*} \sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot\pi_l^{i+n}\left(\alpha(c)\right)&=\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)\cdot\alpha\left(\pi_l^{i+n}(c)\right)\\ &=\sum_{i\in\mathbb{N}}\alpha(a)\cdot\left(\pi_i^m(b)\cdot\pi_l^{i+n}(c)\right). \end{align*} As a last step, we use left-distributivity to pull out $\alpha(a)$ from the sums. \end{proof} \begin{proposition}\label{prop:extended-ore} Assume $\alpha\colon R\to R$ is the twisting map of a non-unital, hom-associative ring $R$, and extend the map homogeneously to $R[X;\sigma, \delta]$. Assume further that $\alpha$ commutes with $\delta$ and $\sigma$, and that $\sigma$ is an endomorphism and $\delta$ a $\sigma$-derivation. Then $R[X;\sigma,\delta]$ is hom-associative. \end{proposition} \begin{proof}We refer the reader to the proof in \cite{NYSTEDT20132748}, where it is seen that neither associativity nor unitality is used to prove that for all $b,c\in R$ and $l,m,n\in\mathbb{N}$, \begin{equation} \sum_{i\in\mathbb{N}}\pi_i^m\left(b\cdot\pi_{l-i}^n(c)\right) = \sum_{i\in\mathbb{N}}\pi_i^m(b)\cdot\pi_l^{i+n}\left(c\right), \label{eq:assoc-id} \end{equation} and therefore also \eqref{eq:homogeneous-iff-condition} holds. \end{proof} One may further ask whether it is possible to construct non-trivial hom-associative Ore extensions starting from associative rings. The answer is affirmative, and the remaining part of this section is devoted to showing that it is. \begin{proposition}\label{prop:hom*ore} Let $R[X;\sigma,\delta]$ be a non-unital, associative Ore extension of a non-unital, associative ring $R$, and $\alpha\colon R\to R$ a ring endomorphism that commutes with $\delta$ and $\sigma$. 
Then $\left(R[X;\sigma,\delta],*,\alpha\right)$ is a multiplicative, non-unital, hom-associative Ore extension with $\alpha$ extended homogeneously to $R[X;\sigma,\delta]$. \end{proposition} \begin{proof} Since $\alpha$ is an endomorphism on $R$ that commutes with $\delta$ and $\sigma$, \eqref{eq:homogeneous-endomorphism} holds, so by \autoref{lem:endo-extension}, the homogeneously extended map $\alpha$ on $R[X;\sigma,\delta]$ is an endomorphism. Referring to \autoref{prop:star-alpha-mult}, $\left(R[X;\sigma,\delta],*,\alpha\right)$ is thus a hom-associative ring. Furthermore, we see that $*$ is the multiplication \eqref{eq:ore-mult} of a non-unital, non-associative Ore extension, since for all $a,b\in R$ and $m,n\in\mathbb{N}$, \begin{align*} aX^m*bX^n&=\alpha\left(\sum_{i\in\mathbb{N}}\left(a\cdot\pi_i^m(b)\right)X^{i+n}\right)=\sum_{i\in\mathbb{N}}\alpha\left(a\cdot\pi_i^m(b)\right)X^{i+n}\\ &=\sum_{i\in\mathbb{N}}\left(a*\pi_i^m(b)\right)X^{i+n}. \end{align*} \end{proof} \begin{remark}\label{re:weak-unit}Note in particular that if $R[X;\sigma,\delta]$ is unital, then $(R[X;\sigma,\delta],*,\alpha)$ is weakly unital with weak unit 1 due to \autoref{cor:weak-unit}. \end{remark} \begin{proposition}[Hom-associative $\sigma$-derivation]\label{{prop:hom-associative-sigma}} Let $A$ be an associative algebra, $\alpha$ and $\sigma$ algebra endomorphisms, and $\delta$ a $\sigma$-derivation on $A$. Assume $\alpha$ commutes with $\delta$ and $\sigma$. Then $\sigma$ is an algebra endomorphism and $\delta$ a $\sigma$-derivation on $(A,*,\alpha)$. 
\end{proposition} \begin{proof} Linearity follows immediately, while for any $a,b\in A$, \begin{align*} \sigma(a* b)&=\sigma(\alpha(a\cdot b))=\alpha(\sigma(a\cdot b))=\alpha(\sigma(a)\cdot \sigma(b))=\sigma(a)* \sigma(b),\\ \delta(a* b)&=\delta(\alpha(a\cdot b))=\alpha(\delta(a\cdot b))=\alpha(\sigma(a)\cdot \delta(b)+\delta(a)\cdot b)\\ &=\alpha(\sigma(a)\cdot \delta(b))+\alpha(\delta(a)\cdot b)=\sigma(a)*\delta(b)+\delta(a)* b, \end{align*} which completes the proof. \end{proof} \begin{remark}\label{re:skew-ore} For a non-unital, associative skew polynomial ring $R[X;\sigma,0]$, one can always achieve a deformation into a non-unital, hom-associative skew polynomial ring using \autoref{prop:hom*ore} by defining the twisting map $\alpha$ as $\sigma$, due to the fact that $\sigma$ always commutes with itself and the zero map. \end{remark} \begin{example}[Hom-associative quantum planes]\label{ex:hom-quant} The quantum plane can be defined as the unital, associative skew polynomial ring $K[Y][X;\sigma,0]=:A$ where $K$ is a field of characteristic zero and $\sigma$ the unital $K$-algebra automorphism of $K[Y]$ such that $\sigma(Y)=qY$ and $q\in K^\times$, $K^\times$ being the multiplicative group of nonzero elements in $K$. From \autoref{re:skew-ore}, we know that at least one nontrivial deformation of $K[Y][X;\sigma,0]$ into a hom-associative skew polynomial ring exists, so let us see whether there are others as well. Putting $\alpha(Y)=a_mY^m+\ldots+a_1Y + a_0$ for some constants $a_m,\ldots,a_0\in K$ and $m\in\mathbb{N}$ and then comparing $\sigma(\alpha(Y))=a_m q^m Y^m+\ldots+a_1 qY+a_0$ and $\alpha(\sigma(Y))=\alpha(qY)=q\alpha(Y)=a_m qY^m+\ldots+a_1qY+a_0q$ gives $\alpha(Y)=a_1Y$ since $q\in K^\times$ is arbitrary. By the same kind of argument, $\alpha(\sigma(1))=\sigma(\alpha(1))$ if and only if $\alpha(1)=1$. 
For such $\alpha$ and any monomial $b_n Y^n$ where $b_n\in K$ and $n\in\mathbb{N}_{>0}$, \begin{align*} \alpha(\sigma(b_n Y^n))=&b_n\alpha(\sigma(Y^n))=b_n\alpha(\sigma^n(Y))=b_n\alpha^n(\sigma(Y))=b_n\sigma^n(\alpha(Y))\\ =&b_n\sigma(\alpha^n(Y))=b_n\sigma(\alpha(Y^n))=\sigma(\alpha(b_nY^n)). \end{align*} By linearity, $\alpha$ commutes with $\sigma$ on any polynomial in $K[Y][X;\sigma,0]$, and by excluding the possibility $\alpha\equiv0$, we put the twisting map to be $\alpha_k(Y)=kY$ for some element $k\in K^\times$, the index $k$ making it evident that the map depends on the parameter $k$. This $K$-algebra endomorphism gives us a family of algebras $(A,*,\alpha_k)$, each value of $k$ giving a weakly unital, hom-associative skew polynomial ring, the member for which $k=1$ corresponding to the unital, associative quantum plane. If $k\neq1$, we get nontrivial deformations, since for instance $X*(Y*Y)=k^4q^2 Y^2X$, while $(X*Y)*Y=k^3q^2Y^2X$. Since \autoref{{prop:hom-associative-sigma}} guarantees that $\sigma$ is a $K$-algebra endomorphism on any member of $(A,*,\alpha_k)$ as well, we call these members \emph{hom-associative quantum planes}; they satisfy the commutation relation $X*Y=kq\,YX=qY*X$. \end{example} \begin{example}[Hom-associative universal enveloping algebras]\label{ex:hom-env} The two-di\-men\-sio\-nal Lie algebra $L$ with basis $\{X,Y\}$ over the field $K$ of characteristic zero is defined by the Lie bracket $[X,Y]_L=Y$. Its universal enveloping algebra, $U(L)$, can be written as the unital, associative differential polynomial ring $K[Y][X;\mathrm{id}_{K[Y]},\delta]$ where $\delta=Y\frac{\mathrm{d}}{\mathrm{d}Y}$. For a $K$-algebra endomorphism $\alpha$, put $\alpha(Y)=a_n Y^n+\ldots+ a_1 Y+ a_0$ where $a_n,\ldots,a_0\in K$ and $n\in\mathbb{N}$. Then $\alpha(\delta(Y))=\alpha(Y)=a_n Y^n+\ldots+ a_1 Y+ a_0$ and $\delta(\alpha(Y))=n a_n Y^n+\ldots+a_1 Y$, so by comparing coefficients, $a_1$ is the only coefficient that can be nonzero. 
Using the same kind of argument, $\alpha(\delta(1))=\delta(\alpha(1))$ if and only if $\alpha(1)=1$. Let $b_n\in K$ be arbitrary and $n\in\mathbb{N}_{>0}$. Then $\alpha(\delta(b_n Y^n))=nb_n \alpha(Y^n)=nb_n\alpha^n(Y)=nb_na_1^n Y^n$, and $\delta(\alpha(b_nY^n))=\delta(b_n\alpha^n(Y))=\delta(b_n a_1^nY^n)=nb_n a_1^n Y^n$. Since it is sufficient to check commutativity of $\alpha$ and $\delta$ on an arbitrary monomial, we define the twisting map as $\alpha_k(Y)=kY, k\in K^\times$, giving a family of \emph{hom-associative universal enveloping algebras of $L$}, $(U(L), *, \alpha_k)$, where the commutation relation $X\cdot Y-Y\cdot X=Y$ is deformed to $X*Y-Y*X=kY$. \end{example} \begin{example}[Hom-associative Weyl algebras]\label{ex:hom-weyl} Consider the first Weyl algebra exhibited as a unital, associative differential polynomial ring, $K[Y][X;\mathrm{id}_{K[Y]},\delta]=:A$, where $K$ is a field of characteristic zero and $\delta=\frac{\mathrm{d}}{\mathrm{d}Y}$. Clearly any algebra endomorphism $\alpha$ on $K[Y]$ commutes with $\mathrm{id}_{K[Y]}$, but what about $\delta$? Since $\alpha(\delta(Y))=\alpha(1)=1$, we need to have $\delta(\alpha(Y))=1$, which implies $\alpha(Y)=Y+k$ for some $k\in K$. On the other hand, if $\alpha$ is an algebra endomorphism such that $\alpha(Y)=Y+k$ for some $k\in K$, then for any monomial $aY^m$ where $m\in\mathbb{N}_{>0}$, \begin{align*} \alpha(\delta(aY^m))&=am\alpha(Y^{m-1})=am\alpha^{m-1}(Y)=a m(Y+k)^{m-1},\\ \delta(\alpha(aY^m))&=a\delta(\alpha^m(Y))=a\delta((Y+k)^m)=a m(Y+k)^{m-1}. \end{align*} Hence any algebra endomorphism $\alpha$ on $K[Y]$ that satisfies $\alpha(Y)=Y+k$ for some $k\in K$ will commute with $\delta$ (and any algebra endomorphism that commutes with $\delta$ will be of this form). Since $\alpha$ commutes with $\delta$ and $\sigma$, we know from \autoref{cor:unital-endo-extension} that $\alpha$ extends to a ring endomorphism on $A$ as well by $\alpha(aX^m)=\alpha(a)X^m$.
Linearity over $K$ follows from the definition, so in fact $\alpha$ extends to an algebra endomorphism on $A$. Appealing to \autoref{prop:hom*ore} and \autoref{re:weak-unit}, we thus have a family of hom-associative, weakly unital differential polynomial rings $(A,*,\alpha_k)$ with weak unit 1, where $k\in K$ and $\alpha_k$ is the $K$-algebra endomorphism defined by $\alpha_k\left(p(Y)X^m\right)=p(Y+k)X^m$ for all polynomials $p(Y)\in K[Y]$ and $m\in\mathbb{N}$. Since \autoref{prop:hom-associative-sigma} assures that $\delta$ is a $K$-linear $\sigma$-derivation on any member $(A,*,\alpha_k)$ as well, we call these \emph{hom-associative Weyl algebras}, the member corresponding to $k=0$ being the associative Weyl algebra. One can note that the hom-associative Weyl algebras all satisfy the commutation relation $X*Y-Y*X=1$, where 1 is a weak unit. \end{example} \begin{lemma}\label{lem:diff-mult} Let $R$ be a non-unital, non-associative ring. Then in $R[X;\mathrm{id}_R,\delta]$, \begin{equation} aX^n\cdot b=\sum_{i=0}^n \left(\binom{n}{i}\cdot a\cdot\delta^{n-i}(b)\right)X^i, \quad\text{ for any } a,b\in R \text{ and } n\in\mathbb{N}. \end{equation} \end{lemma} \begin{proof}This follows from \eqref{eq:ore-mult} with $\sigma=\mathrm{id}_R$. \end{proof} \begin{lemma}\label{le:weak-unit-hom-conditions} Let $R$ be a weakly unital, hom-associative ring with weak unit $e$ and twisting map $\alpha$ commuting with the derivation $\delta$ on $R$, and extend $\alpha$ homogeneously to $R[X;\mathrm{id}_R,\delta]$. Then the following hold: \begin{enumerate}[(i)] \item $a\cdot\delta^n(e)=\delta^n(e)\cdot a=0$ for any $a\in R$ and $n\in\mathbb{N}_{>0}$, \item $e$ is a weak unit in $R[X;\mathrm{id}_R,\delta]$, \item $eX\cdot q -q\cdot eX =\sum_{i=0}^n \alpha(\delta(q_i))X^i$ for any $q=\sum_{i=0}^n q_i X^i\in R[X;\mathrm{id}_R,\delta]$.
\end{enumerate} \end{lemma} \begin{proof} First, note that \begin{align*} \delta(a\cdot e)=&a\cdot\delta(e)+\delta(a)\cdot e=a\cdot\delta(e)+e\cdot\delta(a),\\ \delta(a\cdot e)=&\delta(e\cdot a)=e\cdot\delta(a)+\delta(e)\cdot a, \end{align*} and hence $\delta(e)\cdot a=a\cdot\delta(e)$. Moreover, $\delta(a\cdot e)=\delta(\alpha(a))=\alpha(\delta(a))=e\cdot\delta(a)$, so $\delta(e)\cdot a=0$. Assume $\delta^n(e)\cdot a=a\cdot\delta^n(e)=0$ for all $n\in\mathbb{N}_{>0}$. Then, since $a$ is arbitrary, $\delta^n(e)\cdot\delta(a)=\delta(a)\cdot\delta^n(e)=0$ as well, and hence \begin{align*} 0=&\delta(0)=\delta\left(a\cdot\delta^n(e)\right)=a\cdot\delta^{n+1}(e)+\delta(a)\cdot\delta^n(e)=a\cdot\delta^{n+1}(e),\\ 0=&\delta(0)=\delta\left(\delta^n(e)\cdot a\right)=\delta^n(e)\cdot\delta(a)+\delta^{n+1}(e)\cdot a= \delta^{n+1}(e)\cdot a, \end{align*} so the first assertion holds by induction. The second assertion follows from the first and \autoref{lem:diff-mult} with $b=e$, since for any $m\in\mathbb{N}$, \begin{equation*} aX^m\cdot e=(a\cdot e)X^m=\alpha(a)X^m=\alpha\left(aX^m\right)=(e\cdot a)X^m=e\cdot \left(a X^m\right), \end{equation*} and by distributivity of the multiplication, $e\cdot q=q\cdot e=\alpha(q)$ for any $q\in R[X;\mathrm{id}_R,\delta]$. The last assertion follows from a direct computation using the first assertion and \autoref{lem:diff-mult}.\qedhere \end{proof} A well-known fact about the associative Weyl algebras is that they are simple. This fact is also true in the case of the non-associative Weyl algebras introduced in \cite{2015arXiv150901436N}, and it turns out that the hom-associative Weyl algebras have this property as well. \begin{proposition}The hom-associative Weyl algebras are simple. \end{proposition} \begin{proof} The main part of the proof follows the same line of reasoning that can be applied to the unital and associative case; let $(A,*,\alpha_k)$ be any hom-associative Weyl algebra, and $I$ any nonzero ideal of it.
Let $p=\sum_{i\in\mathbb{N}}p_i(Y)X^i\in I$ be an arbitrary nonzero polynomial with $p_i(Y)\in K[Y]$, and put $m:=\max_i(\deg(p_i(Y)))$. Then, since $1\in A$ is a weak unit in $(A,*,\alpha_k)$, we may use \autoref{le:weak-unit-hom-conditions} and the commutator $[\cdot,\cdot]$ to compute \begin{equation*} [X,p]=\sum_{i\in\mathbb{N}}\alpha_k\left(p'_i(Y)X^i\right)=\sum_{i\in\mathbb{N}}p'_i(Y+k)X^i. \end{equation*} Since $\max_i(\deg(p'_i(Y+k)))=m-1$, by applying the commutator with $X$ to $p$ a total of $m$ times, we get a polynomial $\sum_{j\in\mathbb{N}}a_jX^j$ of degree $n$ in $X$, where $a_n\in K$ is nonzero. Then \begin{align*} \sum_{j\in\mathbb{N}}a_jX^j*Y&=\sum_{j\in\mathbb{N}}\sum_{i\in\mathbb{N}}a_j*\pi_i^j(Y)X^i=\sum_{j\in\mathbb{N}}\left(a_j*X^{j-1}+a_j*YX^j\right),\\ Y*\sum_{j\in\mathbb{N}}a_jX^j&=\alpha_k\left(Y\sum_{j\in\mathbb{N}}a_jX^j\right)=\alpha_k\left(\sum_{j\in\mathbb{N}}a_jYX^j\right)=\sum_{j\in\mathbb{N}}\alpha_k\left(a_jYX^j\right)\\ &=\sum_{j\in\mathbb{N}}a_j*YX^j. \end{align*} Therefore $\deg\left(\left[\sum_{j\in\mathbb{N}}a_jX^j,Y\right]\right)=n-1$, where $\deg(\cdot)$ now denotes the degree of a polynomial in $X$. By applying the commutator with $Y$ to the resulting polynomial $n$ times, we get $a_n*1\in I$; \begin{equation*} a_n*1=\alpha_k(a_n)=a_n\in I\implies a_n^{-1}*\left(a_n*1\right)=a_n^{-1}*a_n=\alpha_k(1)=1\in I. \end{equation*} Take any polynomial $q=\sum_{i\in\mathbb{N}}q_i(Y)X^i$ in $(A,*,\alpha_k)$. Then \begin{equation*} 1*\sum_{i\in\mathbb{N}}q_i(Y-k)X^i=\sum_{i\in\mathbb{N}}q_i(Y)X^i=q\in I,\text{ and therefore } I=(A,*,\alpha_k).
\end{equation*} \end{proof} \section{Weak unitalizations of hom-associative algebras}\label{sec:weak-unitalization} For a non-unital, associative $R$-algebra $A$ consisting of an $R$-module $M$ endowed with a multiplication, one can always find an embedding of the algebra into a unital, associative algebra by taking the direct sum $M\oplus R$ and defining multiplication by \begin{equation*} (m_1,r_1)\cdot(m_2,r_2):=(m_1\cdot m_2+r_1\cdot m_2+r_2\cdot m_1, r_1\cdot r_2),\quad m_1,m_2\in M \text{ and } r_1,r_2\in R. \end{equation*} $A$ can then be embedded by the injection map $M\to M\oplus 0$, which is an isomorphism onto its image in the unital, associative algebra $M\oplus R$ with the unit given by $(0,1)$. In \cite{2009arXiv0904.4874F}, Frégier and Gohr showed that not all hom-associative algebras can be embedded into even a weakly unital hom-associative algebra. In this section, we prove that any multiplicative hom-associative algebra can be embedded into a multiplicative, weakly unital hom-associative algebra by twisting the above unitalization of a non-unital, associative algebra with $\alpha$. We call this a \emph{weak unitalization}. \begin{proposition}\label{prop:bullet-algebra} Let $M$ be a non-unital, non-associative $R$-algebra and $\alpha$ a linear map on $M$. Endow $M\oplus R$ with the following multiplication: \begin{align} (m_1,r_1)\bullet(m_2,r_2):=&(m_1\cdot m_2+r_1\cdot \alpha(m_2)+r_2\cdot \alpha(m_1),r_1\cdot r_2),\label{eq:bullet-mult} \end{align} for any $m_1,m_2\in M$ and $r_1,r_2\in R$. Then $M\oplus R$ is a non-unital, non-associative $R$-algebra. \end{proposition} \begin{proof} $R$ can be seen as a module over itself, and since any direct sum of modules over $R$ is again a module over $R$, $M\oplus R$ is a module over $R$.
For any $m_1,m_2\in M$ and $\lambda, r_1, r_2\in R$, \begin{align*} \lambda\cdot\left((m_1,r_1)\bullet(m_2,r_2)\right)&=\lambda\cdot\left(m_1\cdot m_2+r_1\cdot \alpha(m_2)+r_2\cdot \alpha(m_1),r_1\cdot r_2\right)\\ &=\left(\lambda\cdot m_1\cdot m_2+\lambda\cdot r_1\cdot\alpha(m_2)+\lambda\cdot r_2\cdot\alpha(m_1),\lambda\cdot r_1\cdot r_2\right)\\ &=\left(\lambda\cdot m_1\cdot m_2+\lambda\cdot r_1\cdot\alpha(m_2)+ r_2\cdot\alpha(\lambda\cdot m_1),\lambda\cdot r_1\cdot r_2\right)\\ &=(\lambda\cdot m_1,\lambda\cdot r_1)\bullet(m_2,r_2)=\left(\lambda\cdot(m_1,r_1)\right)\bullet(m_2,r_2), \end{align*} \begin{align*} \left((m_1,r_1)+(m_2,r_2)\right)\bullet(m_3,r_3)=&(m_1+m_2,r_1+r_2)\bullet(m_3,r_3)\\ =&\left((m_1+m_2)\cdot m_3+(r_1+r_2)\cdot\alpha(m_3)\right.\\ &\left.+ r_3\cdot\alpha(m_1+m_2),(r_1+r_2)\cdot r_3\right)\\ =&\left(m_1\cdot m_3+r_1\cdot\alpha(m_3)+r_3\cdot\alpha(m_1),r_1\cdot r_3\right)\\ &+\left(m_2\cdot m_3+r_2\cdot\alpha(m_3)+r_3\cdot\alpha(m_2),r_2\cdot r_3\right)\\ =&(m_1,r_1)\bullet(m_3,r_3)+(m_2,r_2)\bullet(m_3,r_3), \end{align*} so the binary operation $\bullet$ is linear in the first argument, and by symmetry, also linear in the second argument. \end{proof} \begin{proposition}[Weak unitalization] If $(M,\cdot,\alpha)$ is a multiplicative hom-associ\-ative algebra over an associative, commutative, and unital ring $R$, then $(M\oplus R,\bullet ,\beta_\alpha)$ is a multiplicative, weakly unital hom-associative algebra over $R$ with weak unit $(0,1)$. Here, $\bullet$ is given by \eqref{eq:bullet-mult} and $\beta_\alpha\colon M\oplus R\to M\oplus R$ by \begin{align} \beta_\alpha((m_1,r_1)):=&(\alpha(m_1),r_1),\quad \text{for any } m_1\in M \text{ and } r_1\in R. \end{align} We call $(M\oplus R,\bullet, \beta_\alpha)$ a \emph{weak unitalization} of $(M,\cdot,\alpha)$.
\end{proposition} \begin{proof} We proved in \autoref{prop:bullet-algebra} that the multiplication $\bullet$ made $M\oplus R$ a non-unital, non-associative algebra, and due to the fact that $\alpha$ is linear, it follows that $\beta_\alpha$ is also linear. Multiplicativity of $\beta_\alpha$ also follows from that of $\alpha$, since for any $m_1,m_2,m_3\in M$ and $r_1,r_2,r_3\in R$, \begin{align*} \beta_\alpha\left((m_1,r_1)\right)\bullet\beta_\alpha\left((m_2,r_2)\right)=&(\alpha(m_1),r_1)\bullet(\alpha(m_2),r_2)\\ =&\left(\alpha(m_1)\cdot\alpha(m_2)+r_1\cdot\alpha(\alpha(m_2))+r_2\cdot\alpha(\alpha(m_1)), r_1\cdot r_2\right)\\ =&\left(\alpha(m_1\cdot m_2)+r_1\cdot\alpha(\alpha(m_2))+r_2\cdot\alpha(\alpha(m_1)),r_1\cdot r_2\right)\\ =&\left(\alpha\left(m_1\cdot m_2+r_1\cdot\alpha(m_2)+r_2\cdot\alpha(m_1)\right),r_1\cdot r_2\right)\\ =&\beta_\alpha\left((m_1,r_1)\bullet(m_2,r_2)\right), \end{align*} while hom-associativity can be proved by the following calculation: \begin{align*} \beta_\alpha\left((m_1,r_1)\right)\bullet\left((m_2,r_2)\bullet(m_3,r_3)\right)=&(\alpha(m_1),r_1)\bullet(m_2\cdot m_3+r_2\cdot\alpha(m_3)\\ &+r_3\cdot\alpha(m_2),r_2\cdot r_3)\\ =&\left(\alpha(m_1)\cdot(m_2\cdot m_3)+r_2\cdot\alpha(m_1)\cdot\alpha(m_3)\right.\\ &+r_3\cdot\alpha(m_1)\cdot\alpha(m_2)\\ &+r_1\cdot\alpha(m_2\cdot m_3+ r_2\cdot\alpha(m_3)+r_3\cdot\alpha(m_2))\\ &\left.+r_2\cdot r_3\cdot\alpha(\alpha(m_1)), r_1\cdot r_2\cdot r_3\right)\\ =&\left((m_1\cdot m_2)\cdot\alpha(m_3)+r_2\cdot\alpha(m_1)\cdot\alpha(m_3)\right.\\ &+r_3\cdot\alpha(m_1)\cdot\alpha(m_2)\\ &+r_1\cdot\alpha(m_2\cdot m_3+ r_2\cdot\alpha(m_3)+r_3\cdot\alpha(m_2))\\ &\left.+r_2\cdot r_3\cdot\alpha(\alpha(m_1)), r_1\cdot r_2\cdot r_3\right)\\ =&\left((m_1\cdot m_2+r_1\cdot\alpha(m_2)+r_2\cdot\alpha(m_1))\cdot\alpha(m_3)\right.\\ &+r_1\cdot r_2\cdot\alpha(\alpha(m_3))+r_3\cdot\alpha(m_1\cdot m_2)\\ &+\left.
r_3\cdot\alpha(r_1\cdot\alpha(m_2)+r_2\cdot\alpha(m_1)),r_1\cdot r_2\cdot r_3\right)\\ =&\left((m_1,r_1)\bullet (m_2,r_2)\right)\bullet \beta_\alpha((m_3,r_3)). \end{align*} At last, $(m_1,r_1)\bullet(0,1)=(0,1)\bullet(m_1,r_1)=(1\cdot\alpha(m_1),1\cdot r_1)=\beta_\alpha((m_1,r_1))$. \end{proof} \begin{remark} In case $\alpha$ is the identity map, so that the algebra is associative, the weak unitalization is the unitalization described in the beginning of this section, thus giving a unital algebra. \end{remark} \begin{corollary}\label{cor:iso-bullet} $(M,\cdot,\alpha)\cong(M\oplus 0,\bullet,\beta_\alpha)$. \end{corollary} \begin{proof} The projection map $\pi\colon M\oplus 0\to M$ is a bijective algebra homomorphism. For any $m\in M$, $\pi(\beta_\alpha(m,0))=\pi(\alpha(m),0)=\alpha(m)$ and $\alpha(\pi(m,0))=\alpha(m)$, therefore $\alpha\circ\pi=\pi\circ\beta_\alpha$, so by \autoref{def:morphism}, $(M\oplus 0,\bullet,\beta_\alpha)\cong (M,\cdot,\alpha)$. \end{proof} Using \autoref{cor:iso-bullet}, we identify $(M,\cdot,\alpha)$ with its image in $(M\oplus R,\bullet,\beta_\alpha)$, seeing the former as embedded in the latter. \begin{lemma}\label{lem:weakly-unital-hom-ideal}All ideals in a weakly unital hom-associative algebra are hom-ideals. \end{lemma} \begin{proof} Let $I$ be an ideal, $a\in I$ and $e$ a weak unit in a hom-associative algebra. Then $\alpha(a)=e\cdot a\in I$, so $\alpha(I)\subseteq I$. \end{proof} A simple hom-associative algebra is always hom-simple, the hom-associative Weyl algebras in \autoref{ex:hom-weyl} being examples thereof. The converse is also true if the algebra has a weak unit, due to \autoref{lem:weakly-unital-hom-ideal}. \begin{corollary} $(M,\cdot,\alpha)$ is a hom-ideal in $(M\oplus R,\bullet,\beta_\alpha)$. 
\end{corollary} \begin{proof} For any $m_1,m_2\in M$ and $r_1\in R$, $(m_1,r_1)\bullet (m_2,0)=(m_1\cdot m_2+r_1\cdot\alpha(m_2),0)\in M$, and $(m_2,0)\bullet (m_1,r_1)=(m_2\cdot m_1+r_1\cdot\alpha(m_2),0)\in M$, so $(M,\cdot,\alpha)$ is an ideal in a weakly unital hom-associative algebra, and by \autoref{lem:weakly-unital-hom-ideal} therefore also a hom-ideal. \end{proof} Recall that for a ring $R$, if there is a positive integer $n$ such that $n\cdot a=0$ for all $a\in R$, then the smallest such $n$ is the \emph{characteristic of the ring $R$}, $\characteristic(R)$. If no such positive integer exists, then one defines $\characteristic(R)=0$. \begin{proposition}Let $R$ be a weakly unital hom-associative ring with weak unit $e$ and injective or surjective twisting map $\alpha$. If $n\cdot e\neq0$ for all $n\in\mathbb{Z}_{>0}$, then $\characteristic(R)=0$. If $n\cdot e=0$ for some $n\in\mathbb{Z}_{>0}$, then the smallest such $n$ is the characteristic of $R$. \end{proposition} \begin{proof}If $n\cdot e\neq0$ for all $n\in\mathbb{Z}_{>0}$, then clearly we cannot have $n\cdot a=0$ for all $a\in R$, and hence $\characteristic(R)=0$. Now assume $n$ is a positive integer such that $n\cdot e=0$. If $\alpha$ is injective, then for all $a\in R$, \begin{equation*} \alpha(n\cdot a)=n\cdot\alpha(a)=n\cdot (e\cdot a)=(n\cdot e)\cdot a=0\cdot a=0\iff n\cdot a=0. \end{equation*} On the other hand, if $\alpha$ is surjective, then for all $a\in R$, $a=\alpha(b)$ for some $b\in R$, and hence $n\cdot a=n\cdot\alpha(b)=n\cdot(e\cdot b)=(n\cdot e)\cdot b=0\cdot b=0.$ \end{proof} \begin{proposition} Let $R:=(M,\cdot,\alpha)$ be a hom-associative ring, and define \begin{equation*} S:=\begin{cases}(M\oplus\mathbb{Z},\bullet,\beta_\alpha),&\text{if } \characteristic(R)=0,\\ (M\oplus\mathbb{Z}_n,\bullet,\beta_\alpha),&\text{if }\characteristic(R)=n. \end{cases} \end{equation*} Then the weak unitalization $S$ of $R$ has the same characteristic as $R$.
\end{proposition} \begin{proof}This follows immediately by using the definition of the characteristic. \end{proof} The main conclusion to draw from this section is that any multiplicative hom-associative algebra can be seen as a multiplicative, weakly unital hom-associative algebra by its weak unitalization. The converse, that any weakly unital hom-associative algebra is necessarily multiplicative if also $\alpha(e)=e$, where $e$ is a weak unit, should be known. However, since we have not been able to find this statement elsewhere, we provide a short proof of it here for the convenience of the reader. \begin{proposition}If $e$ is a weak unit in a weakly unital hom-associative algebra $A$, and $\alpha(e)=e$, then $A$ is multiplicative. \end{proposition} \begin{proof} For any $a,b\in A$, $\alpha(e)\cdot\left(a\cdot b\right)=e\cdot\left(a\cdot b\right)=\alpha\left(a\cdot b\right)$. Using hom-associativity, $\alpha(e)\cdot\left(a\cdot b\right)=\left(e\cdot a\right)\cdot \alpha(b)=\alpha(a)\cdot \alpha(b)$. \end{proof} \noindent {\bf Acknowledgment.} We would like to thank Lars Hellstr\"om for discussions leading to some of the results presented in the article.
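The deformed products in the examples above lend themselves to direct computational cross-checking. The following sketch (assuming Python with SymPy; representing normal-form elements $\sum_j p_j(Y)X^j$ as commutative polynomials with all $Y$'s to the left of all $X$'s is purely an implementation device) verifies, for the hom-associative Weyl algebras of \autoref{ex:hom-weyl}, the commutation relation $X*Y-Y*X=1$, hom-associativity on sample elements, the weak-unit property of $1$, and non-associativity for $k\neq0$.

```python
import sympy as sp

Y, X, k = sp.symbols('Y X k')

def lmul_X(g):
    """Left-multiply the normal-form element g = sum_j g_j(Y) X^j by X in the
    associative Weyl algebra, using the relation X·p(Y) = p(Y)X + p'(Y)."""
    out = 0
    for (j,), c in sp.Poly(g, X).terms():
        out += c*X**(j + 1) + sp.diff(c, Y)*X**j
    return sp.expand(out)

def amul(f, g):
    """Associative Weyl-algebra product f·g of normal-form elements."""
    out = 0
    for (j,), c in sp.Poly(f, X).terms():
        h = g
        for _ in range(j):       # apply X j times from the left
            h = lmul_X(h)
        out += c*h               # coefficients in Y commute among themselves
    return sp.expand(out)

def alpha(p):
    """Twisting map alpha_k: p(Y)X^m -> p(Y+k)X^m."""
    return sp.expand(p.subs(Y, Y + k))

def star(f, g):
    """Deformed product f * g = alpha_k(f·g)."""
    return alpha(amul(f, g))

# commutation relation X*Y - Y*X = 1
assert sp.expand(star(X, Y) - star(Y, X)) == 1
# hom-associativity alpha_k(a)*(b*c) = (a*b)*alpha_k(c) on sample elements
a_, b_, c_ = X, Y**2, Y*X
assert sp.expand(star(alpha(a_), star(b_, c_)) - star(star(a_, b_), alpha(c_))) == 0
# 1 is a weak unit: 1*p = p*1 = alpha_k(p)
p = Y**2*X + Y
assert sp.expand(star(sp.Integer(1), p) - alpha(p)) == 0
assert sp.expand(star(p, sp.Integer(1)) - alpha(p)) == 0
# the product is genuinely non-associative for k != 0
assert sp.expand(star(X, star(Y, Y)) - star(star(X, Y), Y)) != 0
```

The same harness, with `lmul_X` adapted to the twisted relation $X\cdot p(Y)=p(qY)X$, would check the hom-associative quantum planes of \autoref{ex:hom-quant} in the same way.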
\section{The Theory} \subsection{Introduction} General relativity is a quintessential example of a successful physical theory. Its conceptual simplicity is striking, and it has to this day not failed a single experimental test. However, one theoretical prediction of GR that contradicts physical intuition is the formation of singularities from physically realistic initial conditions, as was shown in the singularity theorems of Hawking and Penrose \cite{HawkingPenrose}. The standard approach to argue away these singularities is to delegate their removal to a hypothetical theory of quantum gravity which should begin to take over as the curvature approaches its Planckian value. For a variety of reasons it is clear that GR is bound to fail at this scale. Moreover, at this point even the description of our physical world as a smooth manifold is no longer justified, complicating drastically any approach to quantum gravity. A different approach to singularity resolution is to allow deviations from GR already at curvature scales some orders below the Planck curvature, where the smooth manifold description of spacetime is still a sensible concept. If we can manage to modify gravity at these scales in such a way as to implement an upper bound on all curvatures, we could get rid of singularities on a classical level. Moreover, we would never enter the Planck regime and thus potentially avoid the practical need for a theory of quantum gravity altogether. Einstein gravity is distinguished among all local, covariant theories of metric gravity in four spacetime dimensions by the fact that it has equations of motion which are only second order. It seems that the only chance to alter this theory and not end up with higher derivatives is to bring something else other than the metric into the game. In the case of the mimetic field, however, this is not an entirely new physical entity. Rather, it represents a reshuffling of the degrees of freedom of the metric itself.
The starting point of so-called ``mimetic gravity'' was in \cite{mimetic} to reparametrize the physical metric $g_{\mu\nu}$ in the form \begin{equation} g_{\mu\nu}=h_{\mu\nu}h^{\alpha\beta}\phi_{,\alpha}\phi_{,\beta} % \end{equation} in terms of an auxiliary metric $h_{\mu\nu}$ and a scalar field $\phi$, called the mimetic field. Since the physical metric is invariant under Weyl transformations of $h_{\mu\nu}$, the mimetic field takes over the job of representing the conformal degree of freedom of gravity. By definition, $\phi$ identically satisfies \begin{equation} g^{\mu\nu}\phi_{,\mu}\phi_{,\nu}=1, \label{constraint}% \end{equation} and we can impose the nature of the mimetic field also by adding this condition as a constraint to the gravity action \cite{Golovnev}, giving it the general form \begin{equation*} S=\frac{1}{16\pi}\int\textup{d}^{4}x\sqrt{-g}\left(-\mathcal{L}\left[g_{\mu\nu},\phi\right] +\lambda\left( g^{\mu\nu}\phi_{,\mu}% \phi_{,\nu}-1\right) \right), \label{3}% \end{equation*} where $\lambda$ is a Lagrange multiplier. This formulation is advantageous over simply applying the reparametrization because at the end of the day we are interested in a theory of the physical metric rather than a theory of the auxiliary metric. Note that constraint (\ref{constraint}) can be derived as a consequence of 3d volume quantization in noncommutative geometry \cite{Quanta}, \cite{Hilbert}. Inserting the Einstein-Hilbert Lagrangian $\mathcal{L} = R[g_{\mu\nu}]$ just reproduces standard GR with an additional contribution of mimetic matter (cf. \cite{mimetic}, \cite{Golovnev}). Different choices for $\mathcal{L}$ with added potentials depending on $\phi$ or $\Box \phi$ have been considered in \cite{mimcos}, \cite{Singular}, \cite{BH} and the mimetic field has also been used successfully to define ghost-free massive gravity \cite{massivmim1}, \cite{massivmim2}. 
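As a consistency check of this reparametrization, one can verify symbolically that (\ref{constraint}) holds identically for the physical metric, whatever the auxiliary metric and the mimetic field are. The following sketch (assuming Python with SymPy; the restriction to $1+1$ dimensions is only to keep the computation small) does this for a generic $h_{\mu\nu}$:

```python
import sympy as sp

t, x = sp.symbols('t x')
# arbitrary auxiliary metric h_{mu nu} and mimetic field phi in 1+1 dimensions
h11, h12, h22 = [sp.Function(name)(t, x) for name in ('h11', 'h12', 'h22')]
phi = sp.Function('phi')(t, x)

h = sp.Matrix([[h11, h12], [h12, h22]])
dphi = sp.Matrix([sp.diff(phi, t), sp.diff(phi, x)])

s = (dphi.T * h.inv() * dphi)[0, 0]    # h^{alpha beta} phi_{,alpha} phi_{,beta}
g = h * s                              # physical metric g_{mu nu}
constraint = (dphi.T * g.inv() * dphi)[0, 0]
assert sp.simplify(constraint - 1) == 0  # g^{mu nu} phi_{,mu} phi_{,nu} = 1 identically
```

The cancellation is automatic because the physical metric is homogeneous of degree one in $h_{\mu\nu}$, which is precisely the Weyl invariance described above.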
Recently, in \cite{AFmimetic}, it was realized that the mimetic field can be used to implement in a covariant manner the idea of a running gravitational and cosmological constant by means of a Lagrangian of the form \begin{equation} \mathcal{L} = f[\phi] R[g_{\mu\nu}] + 2\Lambda[\phi] \end{equation} where the ``inverse gravitational constant'' $f$ and ``cosmological constant'' $\Lambda$ can depend on $\phi$ and its derivatives in a way to be determined. To this end, let us find out which covariant quantities can be constructed from $\phi$. First, note that by virtue of (\ref{constraint}), $t:=\phi$ qualifies to be used as the time coordinate of a synchronous coordinate system (cf. appendix A), \begin{equation} \textup{d}s^{2}=\textup{d}t^{2}-\gamma_{ab}\textup{d}x^{a}\textup{d}x^{b}. \label{synch}% \end{equation} Hence, a simple $\phi$ dependence of $f$ and $\Lambda$ would resemble the introduction of a time dependent background. Moreover, the only covariant quantity constructed from first derivatives of $\phi$ is identically constant by (\ref{constraint}). Gratifyingly, the second covariant derivatives of $\phi$, however, represent measures of the curvature related to the conformal degree of freedom of the gravitational field. More precisely, \begin{equation*} -\phi_{;ab} = \kappa_{ab} = \frac{1}{2}\frac{\partial}{\partial t}\gamma_{ab} \end{equation*} is the extrinsic curvature of the slices of constant $\phi$, while $\phi_{;0\alpha}=0$. The Ricci scalar written out in this synchronous slicing given by $\phi$ reads \begin{equation} -R = 2\dot{\kappa}+\kappa^2+\kappa_{b}^{a}\kappa_{a}^{b}+ {^3\!R}, \label{Ricciscalar} \end{equation} where dot denotes $t$-derivatives, $\kappa_{b}^{a}=\gamma^{ac}\kappa_{cb}$, ${^3\!R}$ is the 3-curvature of the spatial slices and \begin{equation*} \kappa : = \gamma^{ab}\kappa_{ab} = g^{\alpha\beta}\phi_{;\alpha\beta} = \Box\phi% \end{equation*} is the trace of extrinsic curvature. 
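As a minimal illustration, the identity $\kappa=\Box\phi$ can be checked explicitly in the simplest synchronous setting, a spatially flat Friedmann metric $\textup{d}s^2=\textup{d}t^2-a^2(t)\,\delta_{ab}\textup{d}x^a\textup{d}x^b$ with $\phi=t$. The following sketch (assuming Python with SymPy) confirms $\Box\phi=\kappa=3\dot{a}/a$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
coords = [t, x, y, z]

# spatially flat Friedmann metric (signature +,-,-,-) and mimetic field phi = t
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()
sqrtmg = sp.sqrt(-g.det())
phi = t

# Box phi = (1/sqrt(-g)) d_mu ( sqrt(-g) g^{mu nu} d_nu phi )
box_phi = sp.simplify(sum(sp.diff(sqrtmg*ginv[m, n]*sp.diff(phi, coords[n]), coords[m])
                          for m in range(4) for n in range(4)) / sqrtmg)

# trace of extrinsic curvature kappa = (1/2) gamma^{ab} d_t gamma_{ab}, gamma_{ab} = a^2 delta_ab
kappa = sp.simplify(sp.Rational(1, 2)*sum(sp.diff(a**2, t)/a**2 for _ in range(3)))

assert sp.simplify(box_phi - kappa) == 0
assert sp.simplify(kappa - 3*sp.diff(a, t)/a) == 0
```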
From expression (\ref{Ricciscalar}) we can read off the reason why the Einstein equation is only second order: because second derivatives of the metric appear only linearly in $R$ and thus only contribute as total derivatives to the action. The only chance to introduce a curvature dependence of the gravitational constant and not spoil this property is $f[\phi] = f(\Box\phi)$. In this case \begin{equation} -f(\Box \phi)R=2 \dot{F}(\kappa) + f(\kappa)\left(\kappa^{2}+\kappa_{b}^{a}\kappa_{a}^{b}+{^{3}\!R}\right) \label{5} \end{equation} where $f$ is assumed to be integrable with $f(\kappa)=F^{\prime}(\kappa)\equiv\partial F / \partial \kappa$. Since up to a total derivative such a Lagrangian still contains only first time derivatives of the metric, we can expect the modified Einstein equation of such a theory to be second order in time. While the arguments used to arrive at this result were rather heuristic\footnote{Strictly speaking, it is not allowed to use $\Box\phi=\kappa$ and impose gauge conditions in the action before variation.}, we can of course explicitly verify this statement. Indeed, the theory defined by \begin{equation} \mathcal{L}= f(\Box\phi) R + 2\Lambda(\Box\phi) \label{6} \end{equation} that has been studied in \cite{AFmimetic} turned out to be free of higher time derivatives in the synchronous frame. In the general spatially non-flat case, however, higher spatial and mixed derivatives will appear. The origin of these terms can be traced back to the presence of $f(\Box \phi)$ in front of ${^{3}\!R}$ in (\ref{5}). Luckily, we can use $\phi$ to dissect this term in a covariant way as \begin{equation*} \widetilde{R}= 2\phi^{,\mu}\phi^{,\nu}G_{\mu\nu} - (\Box \phi)^2 + \phi ^{;\mu\nu}\phi_{;\mu\nu} \overset{\cdot}{=} {^3\!R}, \end{equation*} where $G_{\mu\nu}$ is the Einstein tensor and $\overset{\cdot}{=}$ means equality under the condition that (\ref{constraint}) is satisfied. 
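In the spatially flat Friedmann case both sides of this relation vanish, which is already a nontrivial check on the coefficients: the three terms cancel only in the combination $2\phi^{,\mu}\phi^{,\nu}G_{\mu\nu}-(\Box\phi)^2+\phi^{;\mu\nu}\phi_{;\mu\nu}$. The following sketch (assuming Python with SymPy, with one common choice of curvature conventions) computes all ingredients directly from the metric:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
coords = [t, x, y, z]
n = 4

# spatially flat Friedmann metric; phi = t, so phi^{,mu} = delta^mu_0
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{m nu}
Gamma = [[[sum(ginv[l, s]*(sp.diff(g[s, m], coords[nu]) + sp.diff(g[s, nu], coords[m])
               - sp.diff(g[m, nu], coords[s])) for s in range(n))/2
           for nu in range(n)] for m in range(n)] for l in range(n)]

# Ricci tensor, Ricci scalar and Einstein tensor G_{mu nu}
def ricci(m, nu):
    return sp.simplify(sum(sp.diff(Gamma[l][m][nu], coords[l]) - sp.diff(Gamma[l][m][l], coords[nu])
                           + sum(Gamma[l][l][s]*Gamma[s][m][nu] - Gamma[l][nu][s]*Gamma[s][m][l]
                                 for s in range(n)) for l in range(n)))

Ric = sp.Matrix(n, n, ricci)
Rs = sp.simplify(sum(ginv[m, nu]*Ric[m, nu] for m in range(n) for nu in range(n)))
G = (Ric - Rs*g/2).applyfunc(sp.simplify)

# second covariant derivatives: phi_{;mu nu} = -Gamma^0_{mu nu} for phi = t
phi2 = sp.Matrix(n, n, lambda m, nu: -Gamma[0][m][nu])
box_phi = sp.simplify(sum(ginv[m, nu]*phi2[m, nu] for m in range(n) for nu in range(n)))
phi2_sq = sp.simplify(sum(ginv[m, l]*ginv[nu, s]*phi2[m, nu]*phi2[l, s]
                          for m in range(n) for nu in range(n)
                          for l in range(n) for s in range(n)))

Rtilde = sp.simplify(2*G[0, 0] - box_phi**2 + phi2_sq)
assert Rtilde == 0   # matches 3R = 0 for flat spatial slices
assert sp.simplify(G[0, 0] - 3*sp.diff(a, t)**2/a**2) == 0
```

Here $G_{00}=3\dot{a}^2/a^2$, $\Box\phi=3\dot{a}/a$ and $\phi^{;\mu\nu}\phi_{;\mu\nu}=3\dot{a}^2/a^2$, so $\widetilde{R}=6H^2-9H^2+3H^2=0$ term by term.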
Subtracting the term that was added inadvertently, we will find the theory defined by \begin{equation*} \mathcal{L} = f(\Box\phi)R+ (f(\Box\phi)-1) \widetilde{R} +2\Lambda(\Box\phi) \label{eq-Lgrav} \end{equation*} to be free of higher derivatives of all sorts. The addition of the second summand is hence motivated by the same argument that made Einstein gravity unique. For generality, we will still find it useful to include also a non-linear, spatial curvature dependent potential $h(\widetilde{R})$. It is clear that thereby higher spatial derivatives will reappear, but no higher time or mixed derivatives. While higher time derivatives would typically introduce additional (potentially ghost-like) degrees of freedom, higher spatial derivatives could actually be useful to improve the renormalizability of gravity, along the lines of Ho\v{r}ava gravity \cite{MimeticHorava}. For this extended Lagrangian the interpretation of the functions $f$ and $\Lambda$ in terms of gravitational and cosmological constant might look less transparent than for (\ref{6}). However, we would like to stress that the homogeneous, spatially flat sectors of both theories are identical. In the context of a theory like (\ref{6}) it is natural to realize the concept of limiting curvature (cf. \cite{Markov},\cite{MukBran},\cite{Mukbransol}) by limiting the measure of curvature provided by $\Box \phi$. Motivated by the analysis of the anisotropic sector made in \cite{AFmimetic}, the concept of ``asymptotic freedom'' of gravity gains special importance. This name is awarded to modifications where such a limiting curvature is implemented by a vanishing of the gravitational constant at some limiting value $ \Box\phi =\kappa_0$ which is a free parameter of the theory and can be chosen well below the Planckian curvature.
In this paper, after presenting the general form of the theory motivated above, we will consider applications to spatially non-flat Friedmann and Bianchi universes and black holes, providing more detailed calculations for the results presented in \cite{BlackHoleRemnants}. \newpage \subsection{Action and equations of motion} Let us consider the theory defined by the action \begin{equation} S=\frac{1}{16\pi}\int\textup{d}^{4}x\sqrt{-g}\left(-\mathcal{L} +\lambda\left( g^{\mu\nu}\phi_{,\mu}% \phi_{,\nu}-1\right) \right) , \label{action}% \end{equation} where% \begin{equation} \mathcal{L} = f(\Box\phi)R+ (f(\Box\phi)-1) \widetilde{R} +2\Lambda(\Box\phi) + h(\widetilde{R} ) \label{Lgrav}% \end{equation} and \begin{equation} \widetilde{R}= 2\phi^{,\mu}\phi^{,\nu}G_{\mu\nu} - (\Box \phi)^2 + \phi ^{;\mu\nu}\phi_{;\mu\nu}. \label{7} \end{equation} This action contains two free functions $f$ and $\Lambda$ of $\Box \phi$, representing the inverse running gravitational constant $G(\Box\phi)^{-1}$ and cosmological constant\footnote{Note that for this interpretation we still need to factor out the gravitational constant such that $\Lambda = f \overline{\Lambda}.$} $\bar{\Lambda}(\Box\phi)$, respectively. For generality, we also included a spatial curvature dependent potential $h(\widetilde{R})$. 
In the following we will use Planck units setting $ G\left( \Box\phi=0\right) =G_0=1$, such that $f\left( \Box\phi=0\right)=1.$ Variation of the action with respect to the metric $g_{\mu\nu}$ yields the modified Einstein equation \begin{align} (1&-h')R_{\mu\nu}-\left (\tfrac{1}{2}\mathcal{L}+(\Tilde{Z}\phi^{,\alpha} )_{;\alpha} +\Box h' \right )g_{\mu\nu} +\left (\phi_{,\mu}\phi_{,\nu} \Tilde{f}^{,\alpha} -\phi_{;\mu\nu} \Tilde{f} \phi^{,\alpha} \right )_{;\alpha} \phantom{\frac{1}{1}} \nonumber \\ &+ 2\Tilde{f}\phi^{,\alpha}\phi_{(,\mu}R_{\nu)\alpha}+ 2 \phi_{(,\mu} \Tilde{Z}_{,\nu )} +h'_{;\mu\nu} = (\lambda + \Tilde{f}R) \phi_{,\mu}\phi_{,\nu} + 8\pi T_{\mu\nu}^{\textup{(m)}}, \phantom{\frac{1}{1}} \label{mEE} \end{align} where \begin{equation*} \Tilde{f} := f-1+h',\qquad Z:=\tfrac{1}{2}f'\left((\Box\phi)^2+\phi^{;\mu\nu}\phi_{;\mu\nu} \right) - \Lambda', \qquad \Tilde{Z} := Z - \phi^{,\alpha}h'_{,\alpha}, \end{equation*} $f':= \mathrm{d}f/\mathrm{d}\Box\phi$, $\Lambda':= \mathrm{d}\Lambda/\mathrm{d}\Box\phi$, $h':= \mathrm{d}h/\mathrm{d} \widetilde{R}$ and $T_{\mu\nu}^{\textup{(m)}} = \tfrac{2}{\sqrt{-g}}\tfrac{\delta S^\textup{m}}{\delta g^{\mu\nu}} $ is the matter energy momentum tensor. For the detailed calculations in the variation of the action the interested reader is referred to appendix B. While at first glance this modified Einstein equation looks more involved than the one presented in \cite{AFmimetic}, calculating its components in the synchronous frame will reveal that in the general case, apart from the new terms due to $h$, the theory defined by (\ref{Lgrav}) is in fact the simpler one. \clearpage The evolution of the mimetic field is already completely determined by the constraint (\ref{constraint}), which we obtain from variation with respect to the Lagrange multiplier. The equation obtained by varying (\ref{action}) with respect to $\phi$ hence can only return the favor and provide a condition to determine $\lambda$. 
Conveniently written in terms of the quantity \begin{equation} \Xi := \lambda +\Tilde{f}\left(R-R_{\mu\nu}\phi^{,\mu}\phi^{,\nu}\right)-\Box {f} -\phi^{,\mu}{Z}_{,\mu}- \phi^{,\mu}{h'}_{,\mu} \Box \phi, \label{Xi} \end{equation} this ``equation of motion'' of $\phi$ reads \begin{align} \left(\Xi \,\phi^{,\nu}\right)_{;\nu}= \left [ (f-h')^{,\mu}\phi^{;\nu}_{\mu} + Z^{,\nu}-\phi^{,\nu}\phi^{,\mu}Z_{,\mu}+\Box\phi \left(h'^{,\nu}-\phi^{,\nu}\phi^{,\mu}h'_{,\mu}\right) \phantom{\frac{1}{1}}\right. \nonumber\\ + \left. \phantom{\frac{1}{1}}\Tilde{f}\left ( R^{\mu\nu}\phi_{,\mu}- R^{\alpha\beta}\phi_{,\alpha}\phi_{,\beta}\phi^{,\nu} \right ) \right ]_{;\nu}. \end{align} In the synchronous frame the right hand side turns out to be just the 3-divergence of a 3-vector (denoted by $ X^a_{|a}$) and we find the solution \begin{equation} \Xi = \tfrac{1}{\sqrt{\gamma}} \int \!\textup{d}t \sqrt{\gamma}\, \left( \kappa^a_{b}f^{,b}+Z^{,a} +\Tilde{f}R^{a}_{0} +\kappa h'^{,a}-\kappa^a_{b}h'^{,b} \right)_{|a}. \label{Xisol} \end{equation} Let us now evaluate (\ref{mEE}) in the synchronous frame $t=\phi$ where it takes its simplest form. Using (\ref{Xi}), the $0-0$ component of the modified Einstein equation becomes \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}-\frac{1}{2}\left( f+\kappa f^{\prime}\right) \tilde{\kappa}_{b}^{a}\tilde{\kappa}_{a}^{b}= \frac{1}{2}\left(h-{^3\!R}\right) + \Xi + 8\pi T^{\textup{(m)}}_{00}, \label{mEE00} \end{equation} where $\tilde{\kappa}_{b}^{a} : = \kappa_{b}^{a}-\tfrac{1}{3}\kappa \delta_{b}^{a}$ is the traceless part of the extrinsic curvature. Note that inserting the solution (\ref{Xisol}) for $\Xi$, this equation becomes in general an integro-differential equation. Suitably taking another time derivative $\left(\,\cdots\,\phi^{,\mu}\right)_{;\mu}$ of (\ref{mEE00}) yields a differential equation containing second time derivatives of the metric.
This is a manifestation of the fact that in mimetic gravity the conformal degree of freedom of the gravitational field becomes dynamical. Note that for spaces where ${^3\!R}=0$ and the integrand in $\Xi$ vanishes by homogeneity, (\ref{mEE00}) is precisely the same as the $0-0$ component of the modified Einstein equation presented in \cite{AFmimetic}. \clearpage The spatial components of (\ref{mEE}) after raising one index read \begin{equation} -\frac{1}{\sqrt{\gamma}}\,\partial_t\left(\sqrt{\gamma}\,\left(f\kappa^{a}_{b}+Z \delta^{a}_{b}\right)\right) -\tfrac{1}{2}\mathcal{L}\,\delta^{a}_{b} = S^{a}_{b} +8\pi {T^{\textup{(m)}a}_{\phantom{\textup{(m)}}b}}, \label{mEEab} \end{equation} where \begin{equation} S^{a}_{b} : = (1-h') {^3\!R^{a}_{b}} + h'\,^{|a}_{\:b}- \Delta h'\delta^{a}_{b} \label{Sab} \end{equation} contains spatial curvature terms. Subtracting from this equation one third of its (spatial) trace removes all isotropic terms proportional to $\delta^{a}_{b}$ and we obtain \begin{equation} -\frac{1}{\sqrt{\gamma}}\,\partial_t\left(\sqrt{\gamma}\,f \,\tilde{\kappa}^{a}_{b}\right) = \tilde{S}^{a}_{b} +8\pi {\tilde{T}^{\textup{(m)}a}_{\phantom{\textup{(m)}}b}} \label{mEEabsub} \end{equation} where the right hand side consists of the traceless parts of $S^{a}_{b}$ and $T^{\textup{(m)}a}_{\phantom{\textup{(m)}}b}$. The spatial components of the modified Einstein equation are hence second order in time. A non-linear potential $h({^3\!R})$ introduces higher spatial derivatives of up to fourth order. Finally, the mixed components of the modified Einstein equation (\ref{mEE}) read \begin{equation} f R_{0a} +Z_{,a}+\kappa^b_{a}f_{,b} = 8\pi T_{0a}^{\textup{(m)}}.
\label{mEE0a} \end{equation} These equations, as in standard GR, contain only first time derivatives of the metric and can be thought of as constraints that need to be satisfied on an initial hypersurface $\phi = \phi_{\textup{i}}$ and then continue to hold by virtue of the validity of the other components of the modified Einstein equation. Note that $h$ does not appear in the mixed equations. Moreover, (\ref{mEE0a}) can be used to simplify (\ref{Xisol}) to \begin{equation} \Xi = \tfrac{1}{\sqrt{\gamma}} \int \!\textup{d}t \sqrt{\gamma}\, \left( T^{\textup{(m)}a}_{\phantom{\textup{(m)}}0}-(1-h')R^{a}_{0} +\kappa h'^{,a} - \kappa^{a}_{b} h'^{,b}\right)_{|a}. \label{Xisol2} \end{equation} Note that time reversal invariance of general relativity is maintained in our modification if we choose $f$ and $\Lambda$ as symmetric functions of $\kappa$. Moreover, if we require \begin{equation*} f=1+\mathcal{O}\left( \kappa^{2}\right),\quad \Lambda=\mathcal{O}\left( \kappa^{4}\right),\quad h = \mathcal{O}\left(\widetilde{R}^2\right) , \end{equation*} then in the limit of low curvatures (\ref{mEE00}), (\ref{mEEab}) and (\ref{mEE0a}) are just the components of the usual Einstein equation with a contribution of mimetic matter, given by the constant of integration in $\Xi$. \clearpage \section{Non-flat Universes} \subsection{Friedmann Universes} The metric of a homogeneous, isotropic universe with cosmological time $t$ is given by \begin{equation} \textup{d}s^2=\textup{d}t^2-a^2(t)\left (\frac{\textup{d}r^2} {1-\varkappa r^2} + r^2\textup{d}\Omega ^2\right ), \end{equation} where $\varkappa \in \left\{-1,0,+1 \right\}$ and $\mathrm{d}\Omega^2 = \mathrm{d}\vartheta^2+ \sin^2\vartheta\,\mathrm{d}\varphi^2$. Note that the unique solution of constraint (\ref{constraint}) in such a spacetime which is compatible with homogeneity is $\phi=t + \mathrm{const}$.
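Indeed, assuming the standard mimetic form of the constraint (\ref{constraint}), $g^{\mu\nu}\phi_{,\mu}\phi_{,\nu}=1$, homogeneity forces $\phi=\phi(t)$, so that the constraint reduces to
\begin{equation*}
\dot{\phi}^{2}=1 \qquad\Longrightarrow\qquad \phi=\pm t+\mathrm{const},
\end{equation*}
where the two signs are related by time reversal.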
Hence the quantities in the synchronous frame (\ref{synch}) are given by \begin{equation*} \kappa = \frac{\dot{u}}{u} = 3 \,\frac{\dot{a}}{a}, \qquad \qquad {^3\!R}=\frac{6\varkappa}{a^2}, \end{equation*} where we introduced $u:= a^3$. Moreover, by isotropy, $\tilde{\kappa}^{a}_{b} = 0$ and by homogeneity $\Xi \propto 1/u$ describes only the dust-like contribution of mimetic matter. For the sake of simplicity, and because the remaining equation is still general enough for all our purposes, we again make the simplifying choice \begin{equation} \Lambda=\frac{2}{3}\kappa^{2}(f-1), \label{18} \end{equation} familiar from \cite{AFmimetic}. The $0-0$ modified Einstein equation (\ref{mEE00}) then becomes \begin{equation} \left ( f-\frac{2}{3}\right ) \kappa^2= \frac{1}{2}\left(h({^3\!R})-{^3\!R}\right) + \varepsilon \label{mF} \end{equation} where $\varepsilon:= \Xi + 8\pi T^{(\textup{m})}_{00}$ is the total energy density of mimetic and ordinary matter. Note that this modified Friedmann equation is still formulated in terms of the same variables as the original Friedmann equation. Only the relation between curvature and energy density is changed at large curvatures. While the left hand side of (\ref{mF}) depends on $\kappa^2$ only, the right hand side is in general some function of $u$. Such a relation can be thought of as an integral curve in the ``phase space'' spanned by $u$ and $\kappa=\mathrm{d}\ln{u}/\mathrm{d}t$. Drawing this phase portrait for a specific modification will allow us to understand its qualitative behaviour without the need to obtain explicit solutions. \paragraph{Spatially flat universe:} Let us use this phase portrait technique to show that in the case $\varkappa =0$ there is essentially only one possibility for the behaviour of a non-singular modification. Since the total energy density $\varepsilon$ is in general a monotonically decreasing function of $u$, in this case (\ref{mF}) can be understood as a relation of the form $u(\kappa^2)$.
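As a cross-check, the reduction leading to (\ref{mF}) can be verified directly: in the synchronous frame $\Box\phi=\kappa$, so primes are derivatives with respect to $\kappa$, and the choice (\ref{18}) gives $\Lambda'=\tfrac{2}{3}\left(2\kappa(f-1)+\kappa^{2}f'\right)$. Hence
\begin{equation*}
\tfrac{1}{3}\left(f-2\kappa f'\right)\kappa^{2}-\Lambda+\kappa\Lambda'
=\tfrac{1}{3}f\kappa^{2}-\tfrac{2}{3}\kappa^{3}f'-\tfrac{2}{3}\kappa^{2}(f-1)+\tfrac{4}{3}\kappa^{2}(f-1)+\tfrac{2}{3}\kappa^{3}f'
=\left(f-\tfrac{2}{3}\right)\kappa^{2},
\end{equation*}
which is precisely the left hand side of (\ref{mF}).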
Furthermore, if this relation is to describe a non-singular modified Friedmann equation, it must be one-to-one. Otherwise, if at some point $\mathrm{d}u/\mathrm{d}\kappa^2 =0$, then \begin{equation} \dot{\kappa} = \dot{u}\frac{\mathrm{d}\kappa}{\mathrm{d} u} = u\,\kappa \frac{\mathrm{d} \kappa}{\mathrm{d} u} = \frac{u}{2}\,\frac{\mathrm{d} \kappa^2}{\mathrm{d} u} \label{kdot} \end{equation} would diverge. Since in the general non-vacuum case we cannot expect divergences of $\dot{\kappa}$ and $\kappa^2$ to cancel out exactly in the Ricci scalar (\ref{Ricciscalar}), both quantities have to be bounded separately to avoid a curvature singularity. Hence, in the following we will assume that the relation provided by (\ref{mF}) is of the form $\kappa^2(u)$ and it is one-to-one in the case $\varkappa=0$. By (\ref{kdot}), boundedness of $\dot{\kappa}$ implies that the domain of definition of the relation $\kappa^2(u)$ can be extended to $u\in\left[0,\infty\right)$. Boundedness of $\kappa$ implies that \begin{equation} \int_{0}^{\infty}\!\mathrm{d}u \frac{\mathrm{d} \kappa}{\mathrm{d} u} = -\kappa(u=0) \end{equation} has a finite value, where we made use of the low curvature limit $\kappa^2(u\to\infty) = 0$. At the lower bound, the integral can only converge if $ u \frac{\mathrm{d} \kappa}{\mathrm{d} u} \to 0$. By (\ref{kdot}) it follows that $\dot{\kappa}\to 0$. Hence, in this limit $\kappa$ must be asymptotically constant at some limiting value $\pm \kappa_0$. Recalling that $\kappa$ is the logarithmic derivative of $u$, this means that asymptotically \begin{equation} u=a^3\propto \exp \left(\pm\kappa_0 t\right) \end{equation} as $t\to \mp \infty$. In conclusion, the most natural modifications generically replace Big Bang/Big Crunch singularities by a smooth transition to a de Sitter-like initial/final state with limiting curvature. 
In \cite{AFmimetic} we provided a concrete example of a non-singular, spatially flat universe using the simple choice \begin{equation} f=\frac{1}{1-\left( \kappa^{2}/\kappa_{0}^{2}\right) }. \label{fAF} \end{equation} Assuming $\varepsilon\propto (1/u)^{1+w}$, we found the implicit solution \begin{equation} \frac{1+w}{2}\kappa_0 t = \frac{\kappa_0}{\kappa}-\mathrm{atanh}\frac{\kappa}{\kappa_0} - \sqrt{2}\arctan\left(\sqrt{2} \frac{\kappa}{\kappa_0}\right) \end{equation} for $\kappa(t)$. The expanding branch $\kappa>0$ describes a smooth transition from an expanding de Sitter universe to an expanding Friedmann universe with $a\propto t^{2/3(1+w)}$. Its conformal diagram is given by the upper triangle of the left diagram of figure \ref{fig:CDfriedmann}, cf. \cite{Mbook}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.55] \tikzmath{\dx = 13;} \tikzmath{\s = 0.95;} \tikzmath{\dy = 6*(1-\s);} \node (I) at ( 4,0) {}; \node (II) at (0, 4) {}; \path (I) +(90:4) coordinate[label=right:$\widetilde{i^+}$] (Itop)+(-90:4) coordinate[label=below:$\phantom{-}i^-$] (Ibot)+(180:4) coordinate (Ileft); \path (II) +(90:4) coordinate[label=above:$i^+$] (IItop)+(-90:4) coordinate[label= left:$\phantom{--}\widetilde{i^-}$] (IIbot); \draw (Ileft) -- node[midway, below left]{$\cal{J}^-$}(Ibot) -- (Itop) --node[midway, below, sloped]{$t=+\infty$} (Ileft); \draw (Itop) -- node[midway, above right]{$\cal{J}^+$}(IItop) -- (IIbot)--node[midway, above, sloped]{$t=-\infty$} (Itop); \node (0) at ( 0.5*\dx+2,2) {\Huge{$\approx$}}; \node (0') at ( \dx+\s*2,\s*2) {}; \node (0'tra) at ( \dx+\s*4,\s*4) {}; \node (I') at (\dx+\s*4,-\dy) {}; \node (II') at (\dx+0, \s*4+\dy) {}; \path (0'tra)+(0,0) coordinate(0'tr) +(-\s*4,0) coordinate(0'tl)+(-\s*4,-\s*4) coordinate(0'bl)+(0,-\s*4) coordinate(0'br); \path (I')+(0:0) coordinate (I'right) +(0:-\s*4) coordinate (I'left)+(-90:\s*4) coordinate(I'bot); \path (II')+(0:0) coordinate (II'left) +(0:-\s*4) +(0:\s*4) coordinate (II'right)+(90:\s*4)
coordinate(II'top); \draw[dashed] (0'tr) -- (0'tl); \draw (0'tl) -- (0'bl); \draw[dashed] (0'bl) --(0'br); \draw (0'br)--(0'tr); \draw (0'tr) -- (0'bl); \draw [dashed] (I'right) -- (I'left); \draw (I'left) -- (I'bot) -- (I'right); \draw[dashed] (II'left) -- (II'right); \draw (II'right) -- (II'top) -- (II'left); \node (E) at (1.5*\dx+0.5,\s*6+\dy) {expanding}; \node (D) at (1.5*\dx+0.5,\s*2) {dS}; \node (C) at (1.5*\dx+0.5,-\s*2-\dy) {contracting}; \end{tikzpicture} \caption{Conformal diagram of a modified spatially flat Friedmann universe.} \label{fig:CDfriedmann} \end{figure} Comoving geodesics start from $\widetilde{i^-}$ and reach $i^+$ after infinite proper time. Other causal geodesics, however, are past incomplete in the same way as in an expanding de Sitter space in the flat slicing. At the line $t=-\infty$ all curvature invariants are bounded and hence we can complete the diagram by gluing the contracting solution $\kappa<0$, which is related to the expanding solution simply by time reversal. Albeit $\phi =t$ is obviously discontinuous at the junction, the metrics can be joined smoothly just like the expanding and contracting de Sitter space. In this way we obtain a geodesically complete, non-singular spacetime. \paragraph{Non-flat universe:} Let us now extend our analysis to the spatially non-flat case $\varkappa = \pm 1$. In this case the relation $\kappa^2(u)$ provided by (\ref{mF}) does not have to be bijective. In fact, if it were one-to-one, the same arguments as above would apply and we would run into a curvature singularity as $ {^3\!R}(u\to0)\to \infty$. This can only be avoided by a bounce at some finite $u_{\min}$. A zero of the relation $\kappa^2(u)$ at $u_{\min}$ can connect the contracting half-plane $\kappa<0$ to the expanding half-plane $\kappa>0$. If at this point also $\dot{\kappa}$ would vanish, there would be a static fixed point solution at $(u,\kappa) = (u_0,0)$. 
Thus, by (\ref{kdot}), only a zero of first order of $\kappa^2(u)$ can describe an actual bounce. For concreteness, consider the case where $\varepsilon=3c/u^{1+w}$ and make the simple power law ansatz \begin{equation} h({^3\!R}) = - 6\left(\frac{\delta}{6}{^3\!R}\right)^{2n} = - 6\left(\frac{\delta}{a^2}\right)^{2n}, \end{equation} where the prefactors were chosen for later convenience and we restrict to even powers in order to obtain the same high curvature modifications for both $\varkappa=\pm1$. The right hand side of the modified Friedmann equation (\ref{mF}) then becomes proportional to \begin{equation*} \frac{c}{a^{3(1+w)}} - \frac{\varkappa}{a^{2}} - \left(\frac{\delta}{a^{2}}\right)^{2n}. \end{equation*} Since the left hand side of (\ref{mF}) should be a one-to-one function of $\kappa^2$ which is linear in the low curvature limit, a zero of this right hand side means that $\kappa=0$ at such a point. In standard GR, setting $\delta =0$, this right hand side has just one non-trivial zero, namely in the case $\varkappa =1$ at \begin{equation} a_{\textup{max}} = c^{1/(1+3w)}, \label{umax} \end{equation} corresponding to the moment of recollapse of a closed universe. In our modification, by the choice of sign of $h$, there is now also another first order zero describing a bounce. At the moment of bounce the linear contribution of ${^3\!R}$ is negligible compared to $h$ and we find \begin{equation} a_{\textup{min}} = \left(\frac{\delta^{2n}}{c}\right)^{1/(4n-3(1+w))}. \label{umin} \end{equation} Assuming that $a_{\max} \gg a_{\min}$, there is some intermediate region where the energy density $\varepsilon$ dominates both spatial curvature terms. In this region (\ref{mF}) becomes like in the spatially flat case and we expect a stage where $\kappa^2 \sim \kappa_0^2$ is approximately constant at the limiting curvature. A contracting universe will hence undergo a stage of exponential contraction before going through a bounce followed by exponential expansion. 
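The scales (\ref{umax}) and (\ref{umin}) can be read off by balancing the corresponding pairs of terms in this expression: at recollapse ($\varkappa=1$) the energy density meets the linear spatial curvature term, while at the bounce it meets the potential term,
\begin{equation*}
\frac{c}{a^{3(1+w)}}=\frac{1}{a^{2}}\;\Rightarrow\; a_{\textup{max}}^{1+3w}=c,
\qquad\qquad
\frac{c}{a^{3(1+w)}}=\left(\frac{\delta}{a^{2}}\right)^{2n}\;\Rightarrow\; a_{\textup{min}}^{4n-3(1+w)}=\frac{\delta^{2n}}{c}.
\end{equation*}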
We can estimate the duration of this inflationary stage which is, by time reversal symmetry, equal to the duration of the stage of exponential contraction. Inflation will end at the moment when $\kappa^2$ and hence also the left hand side of (\ref{mF}) drops below the order of $\kappa_0^2$. This will happen around $a=a_f$ where \begin{equation} a_{f}^{3(1+w)} \sim \frac{c}{\kappa_0^2}. \end{equation} The value $a_i$ at the beginning of inflation will not be much different from $a_{\textup{min}}$. The number of e-folds, expressed through the dimensionless quantities $\tilde{c}$ and $\tilde{\delta}$ defined by \begin{equation*} \tilde{c} = c \,\kappa_0^{3(1+w)-2},\qquad \tilde{\delta} = \delta \,\kappa_0^{2-1/n}, \end{equation*} can hence be approximated by \begin{equation} N \sim \frac{2n}{4n-3(1+w)}\ln \left(\tilde{c}^{\frac{2}{3(1+w)}} \tilde{\delta}^{-1} \right). \label{efolds1} \end{equation} The necessary number of e-folds can be achieved e.g. by an exponentially small value of $\tilde{\delta}$ or an exponentially large value of $\tilde{c}$. Note that for radiation with $w=1/3$ we have to choose $n>1$ in order to still have a bounce. On the other hand, for $n=1$, a value smaller than, but close to $1/3$, \begin{equation} w=\frac{1}{3}(1 -\epsilon), \quad \epsilon \ll 1, \end{equation} can also explain a large number of e-folds even when all other dimensionless parameters are of the order of unity. In this case \begin{equation} N \sim \frac{\ln (\tilde{c}/ \tilde{\delta}^{2} )}{\epsilon}. \label{efolds2} \end{equation} Let us illustrate this in a simple concrete example for a closed universe ($\varkappa =1$) and with the familiar choice (\ref{fAF}) for the function $f$.
In this case (\ref{mF}) becomes \begin{equation} \frac{1}{9}\kappa^{2}\left( \frac{1+2\left( \kappa^{2}/\kappa_{0}^{2}\right) }{1-\left( \kappa^{2}/\kappa_{0}^{2}\right) }\right) = \frac{1}{a^2} \left[\left(\frac{a_\textup{max}}{a}\right)^{2-\epsilon}\left(1-\left(\frac{a_\textup{min}}{a}\right)^{\epsilon}\right)-1\right], \label{nfex} \end{equation} where $a_{\max}$ and $a_{\min}$ are defined by (\ref{umax}) and (\ref{umin}). Let us analyze the asymptotics of this equation starting at the moment of recollapse where we set $t=0$. At this point $a\sim a_{\max} \gg a_{\min}$ and $\kappa \ll \kappa_0$. Moreover, the deviation from $w=1/3$ is irrelevant for the behaviour in this region and we set $\epsilon=0$ for simplicity. Recalling that $\kappa = 3\dot{a}/a$, (\ref{nfex}) in this case becomes \begin{equation} \dot{a}^2 =\left(\frac{a_\textup{max}}{a}\right)^{2}-1 \end{equation} and has the solution \begin{equation} a(t) = \sqrt{a_\textup{max}^2-t^2}. \label{recolsol} \end{equation} This would be the full exact solution of (\ref{nfex}) in standard GR, i.e. if we set $a_{\min}=0$ and $\kappa_0\to\infty$. It describes a closed universe starting from a Big Bang at $t= -a_{\max}$, expanding until $t=0$, then recollapsing until finally reaching a Big Crunch at $t= a_{\max}$. Since for this solution \begin{equation} \kappa(t) = \frac{3t}{t^2-a^2_\textup{max}}, \label{ksingsol} \end{equation} both Big Bang and Big Crunch represent curvature singularities. In our theory, however, $\kappa$ becomes of the order of the limiting curvature at $\left|t -a_{\max}\right|\sim 1/\kappa_0$ and the modification starts to take over. In the region where the energy density is dominating both spatial derivative terms, (\ref{nfex}) reduces to the equation for a flat Friedmann universe.
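Both (\ref{recolsol}) and (\ref{ksingsol}) are verified immediately: from $a^{2}=a_{\max}^{2}-t^{2}$ we get $\dot{a}=-t/a$, so
\begin{equation*}
\dot{a}^{2}=\frac{t^{2}}{a^{2}}=\frac{a_{\max}^{2}-a^{2}}{a^{2}}=\left(\frac{a_{\max}}{a}\right)^{2}-1,
\qquad\qquad
\kappa=3\,\frac{\dot{a}}{a}=-\frac{3t}{a^{2}}=\frac{3t}{t^{2}-a_{\max}^{2}}.
\end{equation*}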
The exact implicit solution of an equation of this form was obtained in \cite{AFmimetic} and for the case at hand it reads \begin{equation} \frac{2}{3}\kappa_{0}(t-a_{\max})=\frac{\kappa_{0}}{\kappa}-\operatorname{atanh}\frac{\kappa}{\kappa_{0}}-\sqrt{2}\arctan\left( \sqrt{2}\frac{\kappa}{\kappa_{0}}\right), \label{impsol} \end{equation} where the constant of integration was fixed such that the $-\kappa\ll\kappa_0$ asymptotic \begin{equation} \kappa = \frac{3}{2(t-a_{\max})} \label{crunchasymp} \end{equation} of (\ref{impsol}) matches the $t\sim a_{\max}$ asymptotic of (\ref{ksingsol}). The contracting branch of (\ref{impsol}) describes a smooth transition between the asymptotic (\ref{crunchasymp}) at $t \ll a_{\max}$ and the asymptotically constant solution $\kappa \sim -\kappa_0$ at $t \gg a_{\max}$. Hence, the scale factor does not vanish at $t= a_{\max}$ but after $t \gg a_{\max}$ starts to decrease exponentially as \begin{equation} a \propto \exp \left(-\tfrac{1}{3}\kappa_0 t\right). \end{equation} Contrary to the spatially flat case, the exponential contraction now cannot continue until $t\to\infty$ because at some point the spatial curvature dependent potential will start to counteract this contraction and thus prevent an unbounded growth of the spatial curvature. Expanded around $a\sim a_{\min}$, (\ref{nfex}) becomes \begin{equation} \frac{\kappa^{2}}{\kappa_0^2}\left( \frac{1+2\left( \kappa^{2}/\kappa_{0}^{2}\right) }{1-\left( \kappa^{2}/\kappa_{0}^{2}\right) }\right) = \tilde{\epsilon} \left(\frac{a}{a_{\min}}-1\right), \end{equation} where \begin{equation} \tilde{\epsilon} : = \epsilon\,\frac{9}{ \kappa_0^2\,a_{\min}^2} \left(\frac{a_{\max}}{a_{\min}}\right)^{2-\epsilon}.
\end{equation} Isolating $a$ on one side and taking the time derivative of the logarithm of this equation, we obtain a separable first order differential equation for $\kappa$ with the implicit solution \begin{equation} \frac{1}{6}\kappa_0 (t -t_b) = \mathrm{atanh}\frac{\kappa}{\kappa_0} + \tilde{\epsilon}_{-}\arctan\left(\tilde{\epsilon}_{-} \frac{\kappa}{\kappa_0} \right)+\tilde{\epsilon}_{+}\arctan\left(\tilde{\epsilon}_{+} \frac{\kappa}{\kappa_0} \right) , \label{aminas} \end{equation} where the constant of integration $t_b$ corresponds to the moment of bounce and \begin{equation*} \tilde{\epsilon}_{\pm}^2 : = \frac{4}{\tilde{\epsilon}-1\pm\sqrt{1-10\tilde{\epsilon}+\tilde{\epsilon}^2}}. \end{equation*} The $t\ll t_b$ asymptotic of the contracting branch of this solution is $\kappa \sim -\kappa_0$, in agreement with the late time asymptotic of (\ref{impsol}). Note that the sign in front of the $\mathrm{atanh}$ term in (\ref{aminas}) is opposite to that in (\ref{impsol}). This means that $-\kappa$ is now decreasing again until $\kappa =0$ at $t=t_b$. Expansion of (\ref{aminas}) around $\kappa \sim 0$ yields \begin{equation} \kappa = \frac{\tilde{\epsilon}}{6 } \kappa_0 (t-t_b). \end{equation} Integrating again, we find a smooth bounce described by the leading order solution \begin{equation} a= a_{\textup{min}}\left(1+\frac{\tilde{\epsilon}}{12} \kappa_0 \left(t-t_{b}\right)^2\right)^{1/3}. \end{equation} At $t=t_b$ we pass from the contracting into the expanding half-plane. By time reversal invariance, the solution in the expanding half-plane will be just a mirror image of the solution in the contracting half-plane. Hence, after the bounce $\kappa$ will grow until reaching $\kappa_0$ as described by the expanding branch of (\ref{aminas}). It will stay approximately constant for the number of e-folds given by (\ref{efolds2}) followed by a smooth graceful exit described by the expanding branch of (\ref{impsol}).
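As a consistency check, the asymptotic (\ref{crunchasymp}) indeed follows from (\ref{impsol}): expanding for $\left|\kappa\right|\ll\kappa_0$, using $\operatorname{atanh}x\simeq x$ and $\sqrt{2}\arctan(\sqrt{2}\,x)\simeq 2x$, the right hand side of (\ref{impsol}) is dominated by the pole term,
\begin{equation*}
\frac{2}{3}\kappa_{0}\left(t-a_{\max}\right)\simeq\frac{\kappa_{0}}{\kappa}
\qquad\Longrightarrow\qquad
\kappa\simeq\frac{3}{2\left(t-a_{\max}\right)}.
\end{equation*}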
Finally, after $\kappa\ll\kappa_0$, the term linear in spatial curvature will dominate again and cause a recollapse according to (\ref{recolsol}), restarting the whole cycle anew. The solution is hence eternally oscillating clockwise in the phase space $(u,\kappa)$ on the closed trajectory described by (\ref{nfex}). \clearpage \subsection{Bianchi Universes} The general\footnote{Excluding Kantowski-Sachs type models which bear more similarity with the interior of black holes than with cosmological models, cf. section \ref{OnGen}.} form of a homogeneous but not necessarily isotropic spatial metric is given by \begin{equation} \gamma_{ab} = \gamma_{AB} \,e^{A}_ae^{B}_b, \end{equation} where $\gamma_{AB}$ is spatially constant and the frame one-forms $e^{A}_a$ are constant in time and satisfy \begin{equation} \left ( \frac{\partial e^{C}_a}{\partial x^b} -\frac{\partial e^{C}_b}{\partial x^a}\right )e^a_Ae^b_B = \mathcal{C}^C_{AB}. \label{structureconst} \end{equation} Here $e^a_A$ is the inverse of $e^{A}_a$ and $\mathcal{C}^C_{AB}$ are the structure constants of the group of motion of the three-dimensional homogeneous space under consideration \cite{Landau}, \cite{Henneaux}. The inverse metric and the metric determinant are given by \begin{equation} \gamma^{ab} = \gamma^{AB} \,e_{A}^a e_{B}^b, \qquad \sqrt{\gamma} = u v, \end{equation} where $\gamma^{AB}$ is the inverse of $\gamma_{AB}$, $v$ is the determinant of $e^{A}_a$ and $u^2$ is the determinant of $\gamma_{AB}$ and hence depends only on time. The extrinsic curvature of these homogeneous spatial slices is given by \begin{equation} \kappa_{ab} = \frac{1}{2} \dot{\gamma}_{AB} \,e^{A}_ae^{B}_b=:\kappa_{AB} \,e^{A}_ae^{B}_b, \qquad \kappa^a_{b} = \frac{1}{2} \gamma^{AC}\dot{\gamma}_{CB} \,e_{A}^ ae^{B}_b=:\kappa^A_{B} \,e_{A}^ ae^{B}_b \end{equation} and we see that \begin{equation} \kappa = \kappa^a_{a} = \kappa^A_{A} = \frac{\dot{u}}{u}. 
\end{equation} The spatial connection coefficients are given by \begin{equation} \lambda^c_{ab} = \frac{\partial e^C_a }{\partial x^b}\,e^c_C - \mathcal{A}^C_{AB}\,e^c_Ce^A_ae^B_b, \end{equation} with the spatially constant coefficients \begin{equation} \mathcal{A}^C_{AB} = \frac{1}{2}\left ( \mathcal{C}^C_{AB} -\mathcal{C}^E_{DB}\gamma_{EA}\gamma^{DC}-\mathcal{C}^E_{DA}\gamma_{EB}\gamma^{DC} \right ). \end{equation} Note that $\gamma^{AB}\mathcal{C}^C_{AB}=0$ and $\mathcal{A}^C_{DC} = \mathcal{C}^C_{DC}$. The mixed components of the $4-$Ricci tensor are given by \begin{equation} R^0_{a} = \left(\kappa^{D}_{C}\mathcal{C}^C_{AD}-\kappa^{D}_{A}\mathcal{C}^{E}_{DE}\right)e^A_a, \end{equation} and the spatial Ricci curvature is \begin{equation} {^3\!R_{ab}} = {^3\!R_{AB}} \,e_{a}^A e^{B}_b,\qquad ^3\!R_{AB} = \mathcal{A}^C_{DC}\mathcal{A}^D_{AB}-\mathcal{A}^D_{BC}\mathcal{A}^C_{AD}. \end{equation} The spatial Bianchi identity reads \begin{equation} {^3\!R^{D}_{C}}\mathcal{C}^C_{AD}-{^3\!R^{D}_{A}}\mathcal{C}^{E}_{DE} = 0. \end{equation} Note that ${^3\!R^a_{a}} = {^3\!R^A_{A} }= {^3\!R}$ is constant in space. Hence we can immediately solve (\ref{Xisol}) to find that \begin{equation} \Xi \propto \frac{1}{u}, \end{equation} and the spatial curvature contribution (\ref{Sab}) to the spatial modified Einstein equation is just \begin{equation} S^{a}_{b} = (1-h') {^3\!R^{a}_{b}}. \end{equation} The whole modified Einstein equation can be expressed independent of the frame vectors. Let us restrict to the case of isotropic, comoving matter, e.g. dust or mimetic matter, with total energy density $\varepsilon$. Then the temporal equation (\ref{mEE00}) reads \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}=\frac{1}{2}\left( f+\kappa f^{\prime}\right) \tilde{\kappa}_{B}^{A}\tilde{\kappa}_{A}^{B}+ \frac{1}{2}\left(h-{^3\!R}\right) + \varepsilon. 
\label{Bianchi00} \end{equation} The trace subtracted spatial equations (\ref{mEEabsub}) become \begin{equation} -\frac{1}{u}\,\partial_t\left(u\,f \,\tilde{\kappa}^{A}_{B}\right) = (1-h')\left({^3\!R^{A}_{B}}- \frac{1}{3}{^3\!R}\,\delta^{A}_{B}\right). \label{Bianchiab} \end{equation} Contracting with $ \tilde{\kappa}^{B}_{A}$ we find the useful equation \begin{equation} \frac{1}{f u^2}\partial_t \left(u^2 f^2 \tilde{\kappa}^{A}_{B} \tilde{\kappa}^{B}_{A} \right) = - 2 \left(1-h'\right) {^3\!R^{A}_{B}}\,\tilde{\kappa}^{B}_{A}. \label{Bianchispcontr} \end{equation} Finally, the mixed component equation is simply \begin{equation} \Tilde{\kappa}^{D}_{C}\mathcal{C}^C_{AD}-\Tilde{\kappa}^{D}_{A}\mathcal{C}^{E}_{DE} = 0. \label{Oa} \end{equation} Let us in the following assume that the frame metric $\gamma_{AB}$ is diagonal. This additional assumption is an expression of non-rotating Kasner axes, cf. \cite{Henneaux}. \subsubsection{Bianchi type I.} This is the case where all structure constants vanish and the spatial slices are hence Euclidean. (\ref{Bianchiab}) is easily integrated and yields \begin{equation} \tilde{\kappa}^{A}_{B} = \frac{\lambda^{A}_{B}}{fu}, \label{Bianchi1int} \end{equation} with constants of integration $\lambda^{A}_{B}$ and (\ref{Bianchi00}) becomes \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}=\frac{f+\kappa f^{\prime}}{2f^2} \frac{\bar{\lambda}^2}{u^2} + \varepsilon, \label{Bianchi100} \end{equation} where $\bar{\lambda}^2:=\lambda^{A}_{B}\lambda^{B}_{A}$. Like in the Friedmann case, this equation defines an integral curve in the phase space spanned by $u$ and $\kappa$. Provided that $(f+\kappa f')/f^2$ is bounded, it will look qualitatively similar to (\ref{mF}) for the Friedmann universe. At low curvatures where $\kappa \ll \kappa_0$ and $u\to \infty$, the right hand side will be dominated by any contribution of matter with equation of state $w<1$. 
The evolution of $u$ is then given by \begin{equation} u \propto t^{2/(1+w)} \end{equation} like in a Friedmann universe. Moreover, according to (\ref{Bianchi1int}), the anisotropic extrinsic curvature contribution $\tilde{\kappa}^{A}_{B} \propto u^{-1}$ will decay at $t \to \infty$ faster than $\kappa \propto t^{-1}$. This means that the presence of any kind of isotropic matter with $w<1$ will eventually lead to isotropy of an expanding universe. On the other hand, approaching $u\to 0$, we see that the term $u^{-2}$ coming from curvature due to anisotropy will dominate any such matter contribution. This is why in order to understand the behaviour close to singularities it is sufficient to study the vacuum case. Without modifications, the vacuum solution is given by the Kasner metric, featuring a singularity. In an attempt to remove this anisotropic singularity, we will find that the property of asymptotic freedom is an inevitable condition for any non-singular modification: Just as for a modified flat Friedmann universe, (\ref{Bianchi100}) establishes a one-to-one relation between $\kappa$ and $u$. By the same argument as in the last section, the only way it can be non-singular is for $\kappa$ to tend to its constant limiting value as $u\to 0$. While in this case $\kappa^2$ as well as $\dot{\kappa}$ are bounded, \begin{equation} \tilde{\kappa}_{B}^{A}\tilde{\kappa}_{A}^{B} = \frac{\bar{\lambda}^2}{f^2 u^2} \end{equation} will become singular, unless $f(\kappa)$ diverges fast enough as $\kappa \to \kappa_0$. It follows that the only way to avoid a curvature singularity in (\ref{Ricciscalar}) is a fast enough divergence of $f$ at the limiting curvature. Moreover, if $1/(fu) \to 0$, then the anisotropic Kasner solution will even become isotropic close to the limiting curvature. A concrete illustration of this was presented in \cite{AFmimetic}.
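With the choice (\ref{fAF}) and again adopting (\ref{18}) for $\Lambda$, this mechanism can be made explicit: for $f=1/(1-\kappa^{2}/\kappa_{0}^{2})$ one has $\kappa f'/f^{2}=2\kappa^{2}/\kappa_{0}^{2}\to 2$ as $\kappa\to\kappa_{0}$, so the vacuum equation (\ref{Bianchi100}) reduces near the limiting curvature to
\begin{equation*}
f\,\kappa_{0}^{2}\simeq\frac{\bar{\lambda}^{2}}{u^{2}}, \qquad\text{i.e.}\qquad f\propto\frac{1}{u^{2}},
\end{equation*}
and hence $\tilde{\kappa}^{A}_{B}\tilde{\kappa}^{B}_{A}=\bar{\lambda}^{2}/(f^{2}u^{2})\propto u^{2}\to 0$: the anisotropy dies out as the limiting curvature is approached.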
\subsubsection{Bianchi type V.} As an example of an anisotropic, spatially non-flat universe where we can still understand the modified Einstein equation in terms of a two dimensional phase portrait, we consider Bianchi type $\textup{V}$. Here the non-vanishing structure constants are determined by \begin{equation} \mathcal{C}^{\bar{2}}_{\bar{1}\bar{2}}= 1 ,\qquad \mathcal{C}^{\bar{3}}_{\bar{1}\bar{3}}= 1. \end{equation} We calculate \begin{equation} \mathcal{A}^{\bar{2}}_{\bar{1}\bar{2}}= \mathcal{A}^{\bar{3}}_{\bar{1}\bar{3}}=1, \qquad \mathcal{A}^{\bar{1}}_{\bar{2}\bar{2}} = - \gamma^{\bar{1}\bar{1}}\gamma_{\bar{2}\bar{2}}, \qquad \mathcal{A}^{\bar{1}}_{\bar{3}\bar{3}} = - \gamma^{\bar{1}\bar{1}}\gamma_{\bar{3}\bar{3}}, \end{equation} and find that the spatial curvature components in the frame are given by \begin{equation} {^3\!R^{\bar{1}}_{\bar{1}}} = {^3\!R^{\bar{2}}_{\bar{2}} }={^3\!R^{\bar{3}}_{\bar{3}} } = -2\gamma^{\bar{1}\bar{1}}. \end{equation} Hence the spatial curvature is still isotropic and (\ref{Bianchiab}) has the first integral \begin{equation} \tilde{\kappa}^{A}_{B} = \frac{\lambda^{A}_{B}}{fu}. \end{equation} By the mixed components (\ref{Oa}) it follows that \begin{equation} \tilde{\kappa}^{\bar{1}}_{\bar{1}} =0, \qquad \tilde{\kappa}^{\bar{2}}_{\bar{2}} = - \tilde{\kappa}^{\bar{3}}_{\bar{3}} =: \frac{\tilde{\lambda}}{fu}, \end{equation} where we can assume without loss of generality that the constant of integration $\tilde{\lambda}\geq 0$. Integrating again yields the frame metric \begin{equation} \gamma_{AB} = u^{2/3} \,\mathrm{diag}\left(1/\alpha^2,b^2,\alpha^2 b^{-2}\right), \end{equation} where $\alpha$ is a constant of integration and $b$ is a function of time.
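One may check that this frame metric reproduces the required extrinsic curvature: since $\kappa^{A}_{B}=\tfrac{1}{2}\gamma^{AC}\dot{\gamma}_{CB}$ is diagonal,
\begin{equation*}
\kappa^{\bar{1}}_{\bar{1}}=\frac{1}{3}\frac{\dot{u}}{u}, \qquad
\kappa^{\bar{2}}_{\bar{2}}=\frac{1}{3}\frac{\dot{u}}{u}+\frac{\dot{b}}{b}, \qquad
\kappa^{\bar{3}}_{\bar{3}}=\frac{1}{3}\frac{\dot{u}}{u}-\frac{\dot{b}}{b},
\end{equation*}
so that indeed $\tilde{\kappa}^{\bar{1}}_{\bar{1}}=0$ and $\tilde{\kappa}^{\bar{2}}_{\bar{2}}=-\tilde{\kappa}^{\bar{3}}_{\bar{3}}=\dot{b}/b$, consistent with the first integral above.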
The differential equations (\ref{structureconst}) are solved by the frame vectors \begin{equation} e^{\bar{1}}_a = \alpha \, \delta^1_a, \quad e^{\bar{2}}_a = e^{\alpha x} \,\delta^2_a, \quad e^{\bar{3}}_a = e^{\alpha x}/\alpha \,\delta^3_a \end{equation} and, fixing overall constant factors, this frame metric corresponds to the spacetime metric \begin{equation} \mathrm{d}s^2 = \mathrm{d}t^2 - u^{2/3}\left[\mathrm{d}x^2 + e^{2 \alpha x } \left(b^2\mathrm{d}y^2+b^{-2} \mathrm{d}z^2\right)\right]. \end{equation} Note that the spatial slices are spaces of constant negative curvature. The equation (\ref{Bianchi00}) for this class becomes \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}= \frac{f+\kappa f^{\prime}}{f^2}\frac{\tilde{\lambda}^2}{u^2}+ \frac{1}{2}\left(h(-6 \alpha^2 u^{-2/3})+6 \alpha^2 u^{-2/3}\right) + \varepsilon. \label{type5-00} \end{equation} It is again just an integral curve in the phase space $(u,\kappa)$ which can be treated as for the non-flat Friedmann universe. If $h$ includes a term $\propto ({^3\!R})^n$ with $n\geq3$ and has the right sign, it will describe a bounce in the same way as (\ref{nfex}). The solution for $b$ is determined by \begin{equation} \frac{\dot{b}}{b} = \tilde{\kappa}^{\bar{2}}_{\bar{2}} = \frac{\tilde{\lambda}}{fu} \geq 0. \label{BianchiVb} \end{equation} Hence $b$ is monotonically increasing and the moment of greatest slope of $b$ is at the bounce, where $u$ assumes its minimum, $\kappa =0$ and $f(\kappa)=1$. Long before/long after the bounce, i.e. at $ u\gg u_{\min}$, the linear contribution of spatial curvature $\propto u^{-2/3}$ is dominating both $h$ and $\tilde{\lambda}^2/u^2$. Moreover, in this region $\kappa^2 \ll \kappa_0^2$. In the case of vacuum, the asymptotic solution of (\ref{type5-00}) is hence given by \begin{equation} u(t) = \left(\alpha \left|t\right|\right)^3. 
\end{equation} Integrating (\ref{BianchiVb}) then yields \begin{equation} b \propto \exp \left(\frac{-\tilde{\lambda}}{2\alpha^3 t^2}\right) \xrightarrow[t\to \pm \infty]{ } b_0^\pm. \end{equation} Fixing the constant of integration, we can achieve that $b_0^+ = 1/b_0^-$. Hence, starting at $t\to -\infty$ from a contracting spacetime, after the bounce we obtain the time reversed expanding spacetime where the directions $y$ and $z$ are interchanged. Since $\kappa^a_b\kappa^b_a $ is everywhere bounded and in the early/late time asymptotics it holds that $\kappa^a_b\kappa^b_a\propto \kappa^2 \propto 1/ t^2$, the condition for causal completeness from appendix A is satisfied. \newpage \subsubsection{Bianchi types II, VI$_{0}$, VII$_{0}$, VIII, IX.} These five Bianchi types can be treated on a common footing by labeling the non-vanishing structure constants as \begin{equation} \mathcal{C}^{\bar{1}}_{\bar{2}\bar{3}}=\lambda, \qquad \mathcal{C}^{\bar{2}}_{\bar{3}\bar{1}}=\mu, \qquad \mathcal{C}^{\bar{3}}_{\bar{1}\bar{2}}=\nu. \end{equation} The individual classes can then be read off from the following table: \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c} &II & VI$_{0}$ & VII$_{0}$ & VIII & IX \\ \hline $\lambda$ & 1 & 1 & 1 & 1 & 1 \\ $\mu$ & 0 & -1 & 1 & 1 & 1 \\ $\nu$ & 0 & 0 & 0 & -1 & 1 \end{tabular} \end{table} \\ \noindent Corresponding solutions for the frame vectors can be found in \cite{Henneaux}. Taking a diagonal metric, the mixed components (\ref{Oa}) are trivially satisfied. Parametrizing the frame metric as \begin{equation} \gamma_{AB} = u^{2/3}\mathrm{diag}(a^2,b^2,c^2), \end{equation} where $abc=1$, it holds that \begin{equation} \tilde{\kappa}^{\bar{1}}_{\bar{1}} = \frac{\dot{a}}{a},\qquad \tilde{\kappa}^{\bar{2}}_{\bar{2}} = \frac{\dot{b}}{b},\qquad \tilde{\kappa}^{\bar{3}}_{\bar{3}} = \frac{\dot{c}}{c}.
\end{equation} The spatial curvature components are given by \begin{align} ^3\!R^{\bar{1}}_{\bar{1}} &= \frac{1}{2u^{2/3}}\left(\lambda^2 a^4-(\mu b^2-\nu c^2)^2\right),\\ ^3\!R^{\bar{2}}_{\bar{2}} &= \frac{1}{2u^{2/3}}\left(\mu^2 b^4 -(\nu c^2-\lambda a^2 )^2\right),\\ ^3\!R^{\bar{3}}_{\bar{3}} &= \frac{1}{2u^{2/3}}\left(\nu^2c^4-(\lambda a^2-\mu b^2)^2\right). \end{align} Finally, the traceless spatial modified Einstein equations (\ref{Bianchiab}) become \begin{align} \partial_t\left(uf \frac{\dot{a}}{a}\right) &= \frac{(h'-1)u^{1/3}}{3}\left( \lambda a^2 \left(2\lambda a^2 - \mu b ^2 - \nu c^2 \right) - \left(\mu b^2-\nu c^2\right)^2\right), \label{bianchispeq11}\\ \partial_t\left(uf \frac{\dot{b}}{b}\right) &= \frac{(h'-1)u^{1/3}}{3}\left( \mu b^2 \left(2\mu b^2 - \nu c^2 - \lambda a ^2 \right) - \left(\nu c^2-\lambda a^2\right)^2\right), \\ \partial_t\left(u f \frac{\dot{c}}{c}\right) &= \frac{(h'-1)u^{1/3}}{3}\left( \nu c^2 \left(2\nu c^2 - \lambda a^2 - \mu b ^2 \right) - \left(\lambda a^2-\mu b^2\right)^2\right). \end{align} In general, (\ref{Bianchi00}) describes a hypersurface in a six-dimensional phase space parametrized e.g. by $(u,a,b,\kappa, \tilde{\kappa}^{\bar{1}}_{\bar{1}}, \tilde{\kappa}^{\bar{2}}_{\bar{2}})$. Clearly, the general analysis of this system becomes intractable analytically. Let us hence restrict to the case where the conformal degree of freedom decouples from the rest. For general $\lambda,\mu,\nu$ we look for special solutions where (\ref{Bianchi00}) contains only $u$ and $\kappa$ and can be decoupled from the other equations. By (\ref{Bianchispcontr}), the condition for this to be possible is \begin{equation} {^3\!R^{A}_{B}}\,\tilde{\kappa}^{B}_{A}=0\quad \Leftrightarrow \quad \tilde{\kappa}^{A}_{B} \tilde{\kappa}^{B}_{A} \propto \frac{1}{u^2 f^2}. 
\label{decouplingcond} \end{equation} After a short calculation, we find that for the Bianchi types under consideration \begin{equation} {^3\!R^{A}_{B}}\,\tilde{\kappa}^{B}_{A} = -\frac{1}{2u^{2/3}}\, \partial_t\left(u^{2/3}\, {^3\!R}\right). \end{equation} Hence the condition (\ref{decouplingcond}) is equivalent to \begin{equation} {^3\!R} = \frac{-d}{2u^{2/3}}, \end{equation} where \begin{equation} d:= \lambda^2a^4+\mu^2b^4+\nu^2c^4-2\left(\mu\nu b^2c^2+\nu\lambda a^2c^2+\lambda \mu a^2b^2 \right ) \overset{!}{=} const. \end{equation} The temporal modified Einstein equation is in this case just the same as (\ref{type5-00}) for Bianchi type $\mathrm{V}$. Again, a bounce can be implemented by a term $\propto ({^3\!R})^n$ with $n\geq3$ in $h$. Such a bounce ensures that $\kappa^{A}_{B}\kappa^{B}_{A}$ is bounded. In the case of negative spatial curvature ($d>0$) there is only one bounce and no recollapse and hence \begin{equation} \kappa^{A}_{B}\kappa^{B}_{A} \propto \frac{1}{t^2} \qquad \textup{as} \quad t\to \pm \infty, \end{equation} like in Bianchi type $\mathrm{V}$. It follows that in the case $d>0$ such a bounce is already enough to ensure causal geodesic completeness, as one finds by slightly modifying the theorem presented in \cite{ChoquetBruhat}, cf. appendix A. In the case of positive spatial curvature ($d<0$) the solution for $u(t)$ and $\kappa(t)$ will be cyclic, similar to the closed Friedmann universe. Note that \begin{equation} d+\frac{4\mu\nu}{a^2} = \left(\lambda a^2-\mu b^2 - \nu c^2\right)^2, \end{equation} which holds also for simultaneous cyclic permutations of $(a,b,c)$ and $(\lambda,\mu,\nu)$. This can be used to express the right hand side of (\ref{bianchispeq11}) solely through $a$ and $u$. For Bianchi type $\mathrm{IX}$ in particular we find that \begin{equation} \partial_t\left(uf \frac{\dot{a}}{a}\right) = (1-h')u^{1/3}\left(a^2 \sqrt{d+\frac{4}{a^2}} + \frac{d}{3} \right) . 
\label{typeIXaeq} \end{equation} By symmetry, the same equation must also hold if $a$ is replaced with $b$ or $c$. Since both $u$ and $\kappa$, which appear as sources in this equation, are periodic in time, we expect the solutions for $a$, $b$ and $c$ to oscillate between their minimal and maximal values as well, so that the corresponding spacetimes are non-singular. \clearpage \section{Modified Black Hole} \subsection{Black hole in synchronous coordinates} In GR the metric of a non-rotating, eternal black hole in the synchronous Lema\^itre coordinates \cite{Lemaitre} is given by \begin{equation} \mathrm{d}s^2 = \mathrm{d}T^2 - \left ( x/x_+ \right )^{-2/3} \mathrm{d}R^2 - \left ( x/x_+ \right )^{4/3} r_g^2\, \mathrm{d}\Omega^2, \label{Schwmet} \end{equation} where $x = R-T$ and $r_g = 2M$. These coordinates are regular at the horizon \begin{equation*} x = x_+:= \frac{4}{3} M , \end{equation*} and the region $x>0$ covers both the interior and the exterior of the Schwarzschild black hole. For comoving observers with $R,\vartheta,\varphi = const.$, $T$ represents proper time. In the Schwarzschild radial coordinate $r=r_g \left(x/x_+\right)^{2/3}$ the paths followed by these synchronous observers correspond to radially infalling geodesics. They start from rest at $r \to \infty$ at proper time $T \to -\infty$ and reach the singularity at $r=0$ at the finite proper time $T=R$. To see how (\ref{Schwmet}) is modified in our theory, we consider in the synchronous coordinates (\ref{synch}) provided by $T = \phi$ the ansatz \begin{equation} \mathrm{d}s^2 = \mathrm{d}T^2 - a^2\left (x \right ) \mathrm{d}R^2 - b^2\left (x \right ) \mathrm{d}\Omega^2, \label{mansatz} \end{equation} where the functions $a$ and $b$ still depend only on $x=R-T$.
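As a quick cross-check of the Lema\^itre form (\ref{Schwmet}), the following Python snippet (a minimal numerical sketch, not part of the derivation; the mass value and finite-difference step are illustrative choices) verifies that $a^2 = (x/x_+)^{-2/3}$ equals $r_g/r$, so that $a^2=1$ marks exactly the Schwarzschild horizon $r=r_g$, and that $-a'(x_+)$ reproduces the textbook Schwarzschild surface gravity $1/4M$.

```python
import math

M = 1.0                    # black hole mass (geometric units, illustrative value)
r_g = 2.0 * M              # Schwarzschild radius
x_plus = 4.0 * M / 3.0     # horizon location in the coordinate x = R - T

def a(x):
    """Profile of the Lemaitre form: a = (x/x_+)^(-1/3)."""
    return (x / x_plus) ** (-1.0 / 3.0)

def r_of_x(x):
    """Schwarzschild radial coordinate r = r_g (x/x_+)^(2/3)."""
    return r_g * (x / x_plus) ** (2.0 / 3.0)

# a^2 = r_g/r everywhere, so the norm 1 - a^2 equals the familiar 1 - 2M/r
for x in (0.1, 0.5, x_plus, 5.0, 50.0):
    assert math.isclose(a(x) ** 2, r_g / r_of_x(x), rel_tol=1e-12)

# the surface a^2 = 1 sits exactly at x = x_+, where r = r_g = 2M
assert math.isclose(a(x_plus), 1.0, rel_tol=1e-12)
assert math.isclose(r_of_x(x_plus), r_g, rel_tol=1e-12)

# -a'(x_+) reproduces the textbook Schwarzschild surface gravity 1/(4M)
h = 1e-6
g_s = -(a(x_plus + h) - a(x_plus - h)) / (2.0 * h)
assert math.isclose(g_s, 1.0 / (4.0 * M), rel_tol=1e-8)
```

The same profile functions $a(x)$ and $b(x)=r_g(x/x_+)^{2/3}$ recur throughout this section.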
The transformation to Schwarzschild coordinates $t$ and $r$ is given by \begin{equation} t = T-\int\!\mathrm{d}x\,\frac{a^2}{1-a^2}, \qquad r = b(R-T), \label{Schwcoord} \end{equation} which brings the metric to the form \begin{equation} \mathrm{d}s^2 = (1-a^2) \mathrm{d}t^2 -\frac{a^2}{b'\,^2 (1-a^2)}\mathrm{d}r^2 - r^2\mathrm{d}\Omega^2. \end{equation} The dependence of $a$ and $b'$ on $r$ has to be found by inverting $r = b(x)$. The spatial metric determinant of (\ref{mansatz}) is \begin{equation*} \gamma = a^2b^4\sin^2 \vartheta =: u^2(x) \sin^2 \vartheta, \end{equation*} and the non-vanishing components of the extrinsic curvature are given by \begin{equation*} \kappa^R_{R} = \frac{\dot{a}}{a} = - \frac{a'}{a},\qquad \kappa^\vartheta_{\vartheta}=\kappa^\varphi_{\varphi} = \frac{\dot{b}}{b}=-\frac{b'}{b}, \end{equation*} where the prime denotes $x$-derivatives. \clearpage The spatial Ricci curvature components for the class of metrics (\ref{mansatz}) are given by \begin{align} {^3}\!R^{R}_{R}= R^{R}_{T} &=2\left(\gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 - \gamma^{\vartheta\vartheta}+ {^3}\!R^\vartheta _{\vartheta } \right) , \label{eq-3Rrr} \\ {^3}\!R^\vartheta _{\vartheta } = {^3}\!R^\varphi _{\varphi} &= \frac{1}{2\kappa^\vartheta _{\vartheta}}\, \left(\gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 -\gamma^{\vartheta\vartheta}\right) '- 2\left(\gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 - \gamma^{\vartheta\vartheta}\right). \label{eq-3Rthth} \end{align} The condition for spatial flatness hence amounts to the single equation \begin{equation} \gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 - \gamma^{\vartheta\vartheta}= 0 \quad \quad \Leftrightarrow \quad a^2 = b'\,^2. 
\label{spflcond} \end{equation} In this case the metric in Schwarzschild coordinates takes the form \begin{equation} \mathrm{d}s^2 = (1-a^2) \mathrm{d}t^2 -\frac{\mathrm{d}r^2}{(1-a^2)} - r^2\mathrm{d}\Omega^2, \label{Schwspfl} \end{equation} and we see that the Schwarzschild metric (\ref{Schwmet}) is spatially flat in Lema\^itre coordinates. Note that in the direction of the vector field \begin{equation} k^\mu \frac{\partial }{\partial x^\mu} := \frac{\partial }{\partial R} +\frac{\partial }{\partial T} = \frac{\partial }{\partial t} \label{eq-SchwKilling} \end{equation} the Lie derivative of (\ref{mansatz}) vanishes. In other words, $k^\mu$ is a Killing vector field with norm \begin{equation*} k^\mu k_{\mu} = 1-a^2(x). \end{equation*} It follows that a Killing horizon occurs wherever $a^2(x) = 1$. Let us, in analogy with (\ref{Schwmet}), denote the largest value of $x$ where this happens, i.e. the most exterior horizon, by $x_+$. We can also calculate the surface gravity $g_s$ of this Killing horizon, which is defined by the equation \cite{Poisson} \begin{equation} k^\nu_{\:;\mu} k^{\mu} = g_s \, k^\nu, \end{equation} evaluated at the horizon. We find that it is related to the extrinsic curvature of the synchronous slices by \begin{equation} g_s = \kappa^R_{R}(x_+) = -a'(x_+). \label{sg} \end{equation} \subsection{On generality of the solution $\phi = T$} \label{OnGen} In the last section we made the ansatz (\ref{mansatz}) from the outset in the synchronous coordinates provided by $T = \phi$. One could, however, ask how some other synchronous time coordinate would be related to this specific one, i.e. whether there is a more general solution of the constraint (\ref{constraint}) for a metric of the form (\ref{mansatz}). Such a solution still has to be consistent with the isometries of (\ref{mansatz}); e.g. it should be independent of the angular coordinates by spherical symmetry.
Moreover, applying the Lie-derivative $\mathcal{L}_{k^{\mu}\frac{\partial }{\partial x^\mu}}$ to the constraint equation (\ref{constraint}) or to the modified Einstein equation (\ref{mEE}), we find the consistency condition \begin{equation} \left[ k^{\mu}\frac{\partial }{\partial x^\mu}\, ,\, \phi^{,\nu}\frac{\partial }{\partial x^\nu} \right ] =\left (k^\mu\frac{\partial }{\partial x^\mu} \phi^{,\nu} -\phi^{,\mu}\frac{\partial }{\partial x^\mu} k^{\nu} \right )\frac{\partial }{\partial x^\nu} = 0, \end{equation} from which it follows that \begin{equation} \phi = c\, T + \xi(R-T), \label{altsol} \end{equation} where $c$ is a constant and $\xi$ is an arbitrary function. Reinserting into the constraint equation (\ref{constraint}), we find the family of solutions \begin{equation} \xi_\pm^{(c)}(x) = \int\!\mathrm{d}x \frac{c\,a^2 \pm a\sqrt{c^2-1+a^2}}{a^2-1}. \end{equation} Introducing the radial coordinate \begin{equation} \tilde{r} := c\,R - \varrho(R-T) \end{equation} and requiring the coordinates $(\tilde{t} :=\phi,\tilde{r},\vartheta,\varphi)$ to be synchronous, i.e. $\tilde{g}^{\tilde{t}\tilde{r}} = 0$, yields the corresponding solution \begin{equation} \varrho_{\pm}^{(c)}(x) = -c\int\!\mathrm{d}x \frac{1\pm c\,a / \sqrt{c^2-1+a^2}}{a^2-1}. \end{equation} Note that the combination \begin{equation} \tilde{x} :=\tilde{r}-\tilde{t} = \mp \int\! \mathrm{d}x \frac{a}{\sqrt{c^2-1+a^2}} \end{equation} is only a function of $x = R-T$ and hence all functions of $x$ can be expressed as functions of $\tilde{x}$. The full metric ansatz in these coordinates will thus be again of the form \begin{equation} \mathrm{d}s^2 = \mathrm{d}\tilde{t}\,^2 - \tilde{a}^2\left (\tilde{x} \right ) \mathrm{d}\tilde{r}^2 - \tilde{b}^2\left (\tilde{x} \right ) \mathrm{d}\Omega^2, \end{equation} where now $\phi = \tilde{t}$ and \begin{equation} \tilde{a}^2(\tilde{x}) = \frac{c^2-1+a^2(x(\tilde{x}))}{c^2}, \qquad \tilde{b}^2(\tilde{x}) = b^2(x(\tilde{x})). 
\end{equation} Hence, the ansatz (\ref{mansatz}) made above (corresponding to the case $c=1$) is fully general. The Killing vector field $\partial/\partial t$ in these coordinates has the expression \begin{equation} \frac{\partial}{\partial t} = \frac{\partial}{\partial R} +\frac{\partial}{\partial T} = c\left(\frac{\partial}{\partial \tilde{r}} +\frac{\partial}{\partial \tilde{t}}\right). \end{equation} Suppose that the metric is spatially flat in the original coordinates $T,R$, i.e. $b'(x)^2 = a^2(x)$. Then the relation in the new coordinates is \begin{equation} \left(\frac{\mathrm{d}\tilde{b}}{\mathrm{d}\tilde{x}}\right)^2 = c^2 \tilde{a}^2(\tilde{x}). \end{equation} It follows that there can only be at most one synchronous coordinate system in which such a metric is spatially flat. The case $c=0$, where even $k^\mu\phi_{,\mu}=0$, is exceptional and deserves special attention. In this case the corresponding synchronous coordinates $(\tilde{t}=\phi, \tilde{r})$ are defined by \begin{equation} \tilde{t} = - \int\!\mathrm{d}x \frac{a}{\sqrt{a^2-1}}, \qquad \tilde{r} = R+\int\!\mathrm{d}x \frac{1}{a^2-1}, \end{equation} and the metric is \begin{equation} \mathrm{d}s^2 = \mathrm{d}\tilde{t}\,^2 - \left(a^2-1\right) \mathrm{d}\tilde{r}^2 - b^2 \mathrm{d}\Omega^2, \label{Kantowski-Sachs} \end{equation} where $a$ and $b$ can now be expressed as functions of $\tilde{t}$ only. This is a Kantowski-Sachs type metric \cite{KantowskiSachs} and hence homogeneous. In these coordinates the metric can never be spatially flat. The condition for spatial flatness in the original $c=1$ coordinates translates to \begin{equation} \left(\frac{\mathrm{d}b}{\mathrm{d}\tilde{t}}\right)^2 = a^2-1. \end{equation} Note that $\phi = \tilde{t}$ is not a global solution of (\ref{constraint}) because it is only defined where $a^2>1$ and becomes singular at the horizon. 
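The chain of coordinate definitions above can be probed numerically. The sketch below (illustrative only; the profile, the value of $c$ and the sample points are arbitrary choices, not taken from the text) uses the spatially flat Schwarzschild profile $a(x)=(x/x_+)^{-1/3}$ with $b'=a$, and checks pointwise that $(\mathrm{d}\tilde{b}/\mathrm{d}\tilde{x})^2 = c^2\tilde{a}^2$ when $\mathrm{d}\tilde{x}/\mathrm{d}x$ and $\tilde{a}^2$ are taken from the formulas of this section.

```python
import math

M = 1.0
x_plus = 4.0 * M / 3.0
r_g = 2.0 * M
c = 1.7                      # any c with c^2 >= 1 works here (illustrative value)

def a(x):
    return (x / x_plus) ** (-1.0 / 3.0)

def b(x):
    # satisfies b' = a, i.e. the spatial flatness condition of the c = 1 slicing
    return r_g * (x / x_plus) ** (2.0 / 3.0)

def dxt_dx(x):
    """d(xtilde)/dx = -a / sqrt(c^2 - 1 + a^2), choosing the upper sign."""
    return -a(x) / math.sqrt(c * c - 1.0 + a(x) ** 2)

def at_sq(x):
    """atilde^2 = (c^2 - 1 + a^2)/c^2."""
    return (c * c - 1.0 + a(x) ** 2) / (c * c)

# (db/dxtilde)^2 = (db/dx)^2 / (dxtilde/dx)^2 must equal c^2 * atilde^2
h = 1e-6
for x in (0.3, 1.0, x_plus, 4.0, 20.0):
    db_dx = (b(x + h) - b(x - h)) / (2.0 * h)
    lhs = (db_dx / dxt_dx(x)) ** 2
    assert math.isclose(lhs, c * c * at_sq(x), rel_tol=1e-8)
```

The check is pointwise, so no numerical quadrature for $\tilde{x}(x)$ is needed.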
The ansatz $\phi = \tilde{t}$ was first considered and then discarded in \cite{BH}, before it was realized that the global solution $\phi=T$ has to be used. \subsection{Modified Einstein equations} Considering for simplicity the theory where $h=0$, we want to derive the modified Einstein equation for a metric of the form (\ref{mansatz}). Since all relevant quantities depend on $R$ and $T$ only through the combination $x=R-T$, we can replace $\partial_T = - \partial_R$ and reduce partial differential equations to ordinary differential equations. For example, for the vacuum case at hand (\ref{Xisol2}) yields \begin{equation*} \Xi = -\tfrac{1}{\sqrt{\gamma}} \int \!\textup{d}T \, \partial_R \left( \sqrt{\gamma}R^{R}_{T} \right) = R^{R}_{T} = {^3\!R^{R}_{R}}, \end{equation*} where we set the constant of integration to zero to be consistent with the asymptotic exterior vacuum solution (\ref{Schwmet}). Hence the temporal modified Einstein equation (\ref{mEE00}) becomes \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}-\frac{1}{2}\left( f+\kappa f^{\prime}\right) \tilde{\kappa}_{b}^{a}\tilde{\kappa}_{a}^{b}=\gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 - \gamma^{\vartheta\vartheta}. \label{BH00} \end{equation} The trace-subtracted spatial equations (\ref{mEEabsub}) read \begin{equation} \frac{1}{u}\left( u f\tilde{\kappa}^{a}_{b}\right)'= {^3\!R^{a}_{b}} - \frac{1}{3}{^3\!R} \delta^{a}_{b}. \label{BHspeq} \end{equation} By spherical symmetry and tracelessness they contribute only one independent equation. Subtracting the $\vartheta-\vartheta$ equation from the $R-R$ equation and inserting (\ref{eq-3Rrr}), (\ref{eq-3Rthth}), it can be written as \begin{equation} \frac{1}{u} \left(u \,f \left(\frac{b'}{b}-\frac{a'}{a}\right)\right)' = \frac{1}{2\kappa^\vartheta_\vartheta}\left(\gamma^{RR}(\kappa^\vartheta_\vartheta)^2 - \gamma^{\vartheta\vartheta}\right)'.
\label{speqsub} \end{equation} For the Schwarzschild solution (\ref{Schwmet}) it holds that $\kappa = -1/x$. Hence, for large mass black holes with \begin{equation} M \gg \frac{1}{\kappa_0} \end{equation} the extrinsic curvature at the horizon $x=x_+ \approx 4M/3$ is much lower than the limiting curvature scale $\kappa_0$ and we can still expect the exterior solution to be given by (\ref{Schwmet}) and modifications to restrict themselves to the interior region. As we have seen, the Schwarzschild solution (\ref{Schwmet}) is exactly spatially flat in the given slicing. Let us assume that the spatial curvature will remain negligible also for some range of $x$ after the modification has taken over. In fact, we will find that the linear contribution of spatial curvature is irrelevant for the region close to the horizon even in the case $M \sim \kappa_0^{-1}$. In this spatial flatness approximation (\ref{BHspeq}) is easily integrated and yields \begin{equation} \tilde{\kappa}^{R}_{R}=\frac{2M}{fu}, \qquad \tilde{\kappa}^{\vartheta}_{\vartheta}=-\frac{M}{fu}, \label{excsol} \end{equation} where the constants of integration have been fixed to match the Schwarzschild solution in the limit $x\to \infty$. Accordingly, (\ref{BH00}) becomes \begin{equation} \frac{\kappa^2\left(f-2\kappa f'\right)-3\left(\Lambda - \kappa \Lambda'\right)}{f+\kappa f'}= \left(\frac{3M}{fu}\right)^2, \label{BH00sf} \end{equation} which is formally the same equation as (\ref{Bianchi100}) for a modified Kasner universe and can be used to determine $u(x)$. We can integrate (\ref{excsol}) again to obtain the solutions for $a(x)$ and $b(x)$ as \begin{equation} a = u^{1/3}\,\left(\frac{2}{3}\,\kappa_0\, e^H\right)^{2/3}, \qquad \qquad b = u^{1/3}\,\left(\frac{2}{3}\,\kappa_0 \, e^H\right)^{-1/3}, \label{eq-Schwab2} \end{equation} where the prefactors have been chosen for dimensionality and later convenience and \begin{equation} H:=\int \! \textup{d}T \, \frac{3M}{f u} . 
\label{H} \end{equation} Note that the integrand can be expressed entirely through $\kappa$ from (\ref{BH00sf}). Moreover, applying the same technique as before and taking the time derivative of the logarithm of (\ref{BH00sf}) yields a first order differential equation for $\kappa$ where $M$ drops out. The dependence of $a$, $b$ and $u$ on the mass parameter $M$ can hence only come from a constant of integration which needs to be fixed to match (\ref{Schwmet}). Solutions of the spatially flat approximation hence generically scale as \begin{equation} a,b \propto M^{1/3}. \end{equation} For the analogous case of a contracting Kasner universe we have seen in \cite{AFmimetic} that a fast enough divergence of $f$ at the limiting curvature can make anisotropies disappear during contraction. Under the condition that $\dot{H} \propto 1/fu \to 0 $ while $fu^2$ remains finite when the limiting curvature is approached, also for the case at hand it follows that $\tilde{\kappa}^{R}_{R}$ and $\tilde{\kappa}^{\vartheta}_{\vartheta}$ will vanish as $x\to -\infty$. Hence, the functions $a$ and $b$ become alike, up to some finite constant factor \begin{equation} \zeta:= \lim_{x\to-\infty} \frac{a(x)}{b(x)} . \end{equation} In the original Schwarzschild solution the function $a$ is increasing as $a \propto b^{-1/2}$ as we go towards $r=b\to0$. Since the modification smoothly connects this solution to $a\propto b$, it is clear that $a'$ has to change sign at some point where $a$ reaches a maximum value before starting to decrease as we go deeper inside the black hole to $x\to -\infty$. If this maximum value of $a$ is greater than one, we hence expect two Killing horizons $x_\pm$, one on each side of the maximum. In the limiting case where the maximum is exactly equal to one, these two horizons merge and the region where the Killing vector $k^\mu$ is spacelike (region $\textup{II}$ in the conformal diagram, cf. section \ref{sec:CD}) shrinks to a single horizon. 
Decreasing the mass of the black hole even further, we find that no horizon occurs at all. Hence there is a minimal mass of order \begin{equation} M_{\min} \sim \frac{1}{\kappa_0} \end{equation} below which no black hole solution exists. Moreover, by (\ref{sg}) it follows that the surface gravity of this minimal black hole vanishes. This indicates that the final product of black hole evaporation will approach a minimal mass remnant for which Hawking radiation stops \cite{BlackHoleRemnants}, \cite{Mbook2}. Before illustrating this in a concrete example, let us discuss the fate of the singularity of (\ref{Schwmet}) at the ``center'' $x=0$. The asymptotic solution of the spatially flat approximation in the limit $x\to-\infty$ becomes \begin{equation} \mathrm{d}s^2 = \mathrm{d}T^2 - u^{\frac{2}{3}}(x) \left (\zeta^{\frac{4}{3}}\mathrm{d}R^2 +\zeta^{-\frac{2}{3}} \mathrm{d}\Omega^2 \right ), \label{asympsol} \end{equation} where \begin{equation} u =u_0\exp{\left(\kappa_0 x\right)}. \end{equation} The spatial curvature components of this asymptotic solution are given by \begin{equation} {^3}\!R^{R}_{R}= 0,\qquad {^3}\!R^\vartheta _{\vartheta } = {^3}\!R^\varphi _{\varphi} = \left ( 1-\left ( \frac{\kappa_0}{3\zeta} \right )^2 \right ) \left ( \frac{\zeta}{u} \right )^{2/3} . \end{equation} Note that for modifications where it happens that $\zeta=\kappa_0/3$, the prefactor in the spatial curvature exactly cancels and the asymptotic solution is hence spatially flat. As a consequence, the full Ricci scalar of this asymptotic solution has a constant value and it describes a part of de Sitter spacetime. This can be seen also by defining the new radial coordinate $\Tilde{R} : = (u_0 e^{\kappa_0 R} /\zeta)^{1/3} $ which brings (\ref{asympsol}) to the form \begin{equation} \mathrm{d}s^2 = \mathrm{d}T^2 - e^{-\frac{2}{3}\kappa_0 T} \left (\left(\frac{3\zeta}{\kappa_0}\right)^2\mathrm{d}\tilde{R}^2 +\tilde{R}^2 \mathrm{d}\Omega^2 \right ). 
\end{equation} Since inside the inner horizon $x_-$ the Killing vector field $k^\mu$ is timelike again, we know that the metric in this region of the modified solution has to be static. Transforming to static Schwarzschild coordinates (\ref{Schwcoord}), we find the relations $a = \zeta r$, $b' = (\kappa_0/3) r$ and the asymptotic metric \begin{equation} \mathrm{d}s^2 = (1-\zeta^2r^2) \mathrm{d}t^2 -\frac{(3\zeta/\kappa_0)^2}{(1-\zeta^2r^2)}\mathrm{d}r^2 - r^2\mathrm{d}\Omega^2. \end{equation} In the case $\zeta = \kappa_0/3$, the solution in the region $x\in(-\infty,x_-)$ (region $\textup{IIa}$ in figures \ref{fig:CDs} and \ref{fig:CDmax}) hence approaches the static patch of the de Sitter spacetime and has the same causal structure. The singularity is hence replaced by a smooth transition to a part of de Sitter space \cite{ThroughBH}. The above shows that in principle it is possible to find a modification for which the spatial curvature and hence also the potential $h({^3\!R})$ will never become important and even exactly vanish in both limits $x\to\pm\infty$. If $\zeta \neq \kappa_0/3$, the spatial curvature in the asymptotic region is of order \begin{equation} \gamma^{RR} \left(\tilde{\kappa}^{\vartheta}_{\vartheta}\right)^2 -\gamma^{\vartheta\vartheta}\to \frac{\kappa_0/3\zeta-1}{b^2} \propto \frac{1}{u^{2/3}}. \label{eq-asympspcont} \end{equation} Contracting (\ref{BHspeq}) with $\kappa^b_a$ we find that \begin{equation} \frac{1}{2u^2f}\left(u^2 f^2 \tilde{\kappa}^a_b\tilde{\kappa}^b_a\right)' = \kappa^a_b \,{^3\!\widetilde{R}^b_a} = \frac{1}{3}\left(\frac{\kappa^R_R}{\kappa^{\vartheta}_{\vartheta}}-1\right) \left(\gamma^{RR} \left(\kappa^{\vartheta}_{\vartheta}\right)^2 -\gamma^{\vartheta\vartheta}\right)'. \end{equation} By asymptotic freedom and isotropy of the asymptotic solution, the right hand side remains negligible for the whole range of $x$ even if $\zeta\neq \kappa_0/3$. 
As a consequence it still holds approximately that \begin{equation} \tilde{\kappa}^a_b\tilde{\kappa}^b_a \propto \frac{1}{u^2 f^2}, \end{equation} and thus the linear spatial curvature contribution $\propto u^{-2/3}$ to (\ref{BH00sf}) is dominated by $u^{-2}$. Hence, we expect the solution in this case to remain qualitatively unchanged compared to the spatially flat approximation except for the fact that now $\left|{^3\!R}\right| \to \infty$ as $x\to-\infty$. Naively treating (\ref{BH00}) in the region where $a$ and $b$ are alike as a formal analogue of a modified non-flat Friedmann universe, one would come to the conclusion that in order to achieve a bounce and prevent this blowing up of spatial curvature, it would take a spatial curvature dependent potential including a term \begin{equation} h = -\left|{^3\!R}\right|^n, \qquad n > 3. \end{equation} Of course this argument is purely heuristic and a rigorous verification would require an analysis of the full system of equations given by the analogues of (\ref{BH00}) and (\ref{BHspeq}) with $h\neq 0$. Without simplifying approximations, these constitute a highly coupled system of non-linear differential equations for $a$ and $b$, or alternatively, $u$ and $H$. To thoroughly verify our above speculation would hence require further investigation (perhaps numerical) beyond the scope of this paper. \subsection{A spatially flat exact solution} \label{sec:Solution} If we can find a modification such that the solution of the modified Einstein equation in the spatial flatness approximation everywhere exactly satisfies the spatial flatness condition (\ref{spflcond}), this would be an exact solution of the full modified Einstein equation, even in the case $h\neq0$. The following modification provides a concrete (perhaps not the simplest) example where this possibility is realized. 
Consider the asymptotically free modification given by \begin{align} f(\kappa) &= \frac{1+3\left (\kappa/\kappa_0 \right )^2}{\left ( 1+\left ( \kappa/\kappa_0 \right )^2 \right )\left ( 1-\left ( \kappa/\kappa_0 \right )^2 \right )^2}, \\ \Lambda(\kappa) &= \kappa^2\left ( \frac{\tfrac{4}{3}\left ( \kappa/\kappa_0 \right )^2}{\left ( 1-\left ( \kappa/\kappa_0 \right )^2\right )^2} \,-\,\frac{1+2\left ( \kappa/\kappa_0 \right )^2}{1+4\left ( \kappa/\kappa_0 \right )^2+3\left ( \kappa/\kappa_0 \right )^4}\right ) + \nonumber\\ & \qquad -\frac{\kappa_0}{6} \kappa\left ( \arctan \frac{\kappa}{\kappa_0} -3\sqrt{3}\arctan\left (\sqrt{3} \, \frac{\kappa}{\kappa_0} \right )+2\,\mathrm{atanh} \frac{\kappa}{\kappa_0} \right ). \end{align} With this choice (\ref{BH00sf}) becomes \begin{equation} \frac{\kappa^2}{\left(1-\left(\kappa/\kappa_0\right)^4\right)^2}= \left(\frac{3M}{u}\right)^2 . \end{equation} Taking the time derivative of the logarithm of this equation we find that \begin{equation} \dot{\kappa} = -\kappa^2\frac{1-\left(\kappa/\kappa_0\right)^4}{1+3\left(\kappa/\kappa_0\right)^4}, \label{eq-kdot} \end{equation} which has the implicit solution \begin{equation} -\kappa_0 \,x = \frac{\kappa_0}{\kappa} - 2\,\mathrm{atanh}\frac{\kappa}{\kappa_0} +2\arctan\frac{\kappa}{\kappa_0}. \end{equation} Evaluating (\ref{H}) as an integral over $\kappa$ yields \begin{equation} H(\kappa) = \ln \left(-\left(\kappa/\kappa_0\right) \;\frac{1+\left(\kappa/\kappa_0\right)^2}{1+3\left(\kappa/\kappa_0\right)^2}\right), \end{equation} where the constant of integration was fixed to match the Schwarzschild solution. 
It follows that \begin{equation} \frac{a}{b} = \frac{2}{3} \kappa_0 e^H = \frac{2}{3} \left(-\kappa \;\frac{1+\left(\kappa/\kappa_0\right)^2}{1+3\left(\kappa/\kappa_0\right)^2}\right) = -\frac{1}{3}\left(\kappa -\frac{3M}{fu} \right) = \left|\kappa^{\vartheta}_{\vartheta}\right|, \end{equation} which shows that this solution is spatially flat and hence an exact solution of the full modified Einstein equation. The solutions (\ref{eq-Schwab2}) for $a$ and $b$ expressed through $\kappa$ are given by \begin{align} a^3(\kappa) &= \frac{4M}{3} \, \left|\kappa\right| \,\left(1-\left(\kappa/\kappa_0\right)^4\right)\, \left(\frac{1+\left(\kappa/\kappa_0\right)^2}{1+3\left(\kappa/\kappa_0\right)^2}\right)^2,\\ b^3(\kappa) &= \frac{9M}{2\kappa^2}\, \left(1-\left(\kappa/\kappa_0\right)^2\right)\left(1+3\left(\kappa/\kappa_0\right)^2\right). \end{align} For this particular solution $a$ assumes its maximum value at $\kappa = \kappa_*= -\kappa_0/\sqrt{5}$. At this point \begin{equation} a(\kappa_*) = \left(\frac{18 \kappa_0 }{25 \sqrt{5}} M\right)^{1/3} =: \left(\frac{M}{M_{\min}} \right)^{1/3}, \end{equation} and we find that the minimal possible black hole mass in this specific modification is given by \begin{equation} M_{\min} =\frac{25 \sqrt{5}}{18\kappa_0}. \end{equation} This solution was already studied in \cite{BlackHoleRemnants}. To aid our intuition, let us transform to Schwarzschild coordinates (\ref{Schwcoord}), where by virtue of spatial flatness the metric takes the form (\ref{Schwspfl}). The location of the maximum of $a$ in the Schwarzschild $r$-coordinate is given by \begin{equation} r_{\ast }= b(\kappa_*)=\left( 144M/5\kappa _{0}^{2}\right)^{1/3}. \end{equation} \clearpage Far away from the black hole, in the limit $r\rightarrow \infty$, where $(\kappa / \kappa_0)^{2}\ll 1$, we find the expansion \begin{equation} 1-a^{2}=1-\frac{2M}{r}\left[ 1-\frac{5}{16}\left( \frac{r_{\ast }}{r}\right) ^{3}+\mathcal{O}\left( \left( \frac{r_{\ast }}{r}\right) ^{6}\right) \right].
\label{19} \end{equation}% It follows that the location of the outer horizon of a large-mass black hole is given by \begin{equation} r_{+}=2M\left[ 1-\frac{729}{6250}\left( \frac{M_{\min }}{M}\right) ^{2}+\mathcal{O}\left( \left( \frac{M_{\min }}{M}\right) ^{4}\right) \right]. \label{20} \end{equation}% On the other hand, close to the limiting curvature $\kappa^2\rightarrow \kappa_0^{2}$ we find the expansion \begin{equation} 1-a^{2}=1-(\zeta r)^{2}\left[ 1-\frac{4}{5}\left( \frac{r}{r_{\ast }}\right) ^{3}+\mathcal{O}\left( \left( \frac{r}{r_{\ast }}\right) ^{6}\right) \right], \label{21} \end{equation}% where $\zeta =\kappa _{0}/3$ and the inner horizon ($\sim$ de Sitter horizon) occurs at \begin{equation} r_{-}=\zeta^{-1}\left[ 1+\frac{27\sqrt{5}}{1600}\frac{M_{\min }}{M}+\mathcal{O}\left( \left( \frac{M_{\min }}{M}\right) ^{2}\right) \right] . \label{22} \end{equation}% Both asymptotics fail to describe the region between the two horizons. Expanding the solution around the maximum of $a$ at $r_*$ we find that \begin{equation} 1-a^2 \approx 1-\left(\frac{M}{M_{\min}}\right)^{2/3}\left(1-\frac{10}{7}\left(1-r/r_*\right)^2\right). \end{equation} For the minimal black hole $M=M_{\min}$ the inner and outer horizons coincide, i.e. $r_*= r_+ = r_-$, and the metric close to this single horizon is given by \begin{equation} 1-a^2 \approx \frac{10}{7}\left(1-r/r_*\right)^2. \end{equation} Note the similarity to the near-horizon metric of an extremal Reissner-Nordstr\"om black hole. \clearpage \subsection{Conformal diagrams} \label{sec:CD} The conformal diagrams of the family of solutions found in the last section can be obtained by the standard method of gluing the diagrams of the individual regions separated by horizons, cf. \cite{Mbook2}. For the case of a non-minimal black hole $M>M_{\min}$ with two separate horizons, the solution with range $x\in(-\infty,\infty)$, i.e.
$\kappa \in (-\kappa_0,0)$, covers the three regions of the eternal black hole solution, the exterior $\mathrm{I}$, the region between horizons $\mathrm{II}$ and the region $\mathrm{IIa}$ between the inner horizon and $r=0$ which is essentially a static de Sitter patch. By time reversal invariance of our theory, the corresponding white hole solution can be found simply by reversing the arrow of time. Identifying the black and white hole exterior regions, we find the new regions $\mathrm{IV}$ and $\mathrm{IVa}$ which are just time reversed versions of $\mathrm{II}$ and $\mathrm{IIa}$. Note that the static regions $\mathrm{I}$ and $\mathrm{IIa}$ are identical to their time reversed version. The conformal diagrams encompassing these regions for the three cases $M>M_{\min}$, $M=M_{\min}$, $M<M_{\min}$ are shown in figure \ref{fig:CDs}. Note that a static de Sitter patch is not geodesically complete and hence neither are the diagrams $M\geq M_{\min}$ in figure \ref{fig:CDs}. Synchronous observers with $R=const.$ start at $i^-$ ($T=-\infty$) from rest, pass outer and inner horizon and after infinite proper time reach $\widetilde{i^+}$ ($T=\infty, r=0$). Hence these comoving geodesics are complete and fully contained in the union of regions $\mathrm{I}$, $\mathrm{II}$, $\mathrm{IIa}$. However, light rays and massive particles with negative initial radial velocity at $i^-$ will reach $r=0$ at finite synchronous time. Since no singularity occurs at this point, they will simply be reflected towards the upper horizon $r=r_-$ of region $\textup{IIa}$ where also $T=\infty$. The diagram can be easily completed by identifying the black hole region $\textup{IIa}$ with the region $\textup{IVa'}$ of another white hole and continuing this procedure ad infinitum. The conformal diagram of the maximally extended eternal black hole solution is shown in figure \ref{fig:CDmax}. 
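Returning to the exact solution of section \ref{sec:Solution}, its algebra is lengthy enough that a numerical spot check is worthwhile. The Python sketch below (illustrative only; $\kappa_0 = M = 1$, and the sample points and grid are arbitrary choices) verifies that the implicit solution indeed solves (\ref{eq-kdot}), and confirms the quoted values of $\kappa_*$, $M_{\min}$ and $r_* = b(\kappa_*)$.

```python
import math

kappa0, M = 1.0, 1.0       # limiting curvature and mass set to 1 (illustrative)
h = 1e-7

def F(k):
    """-kappa0*x as a function of kappa, from the implicit solution."""
    return 1.0 / k - 2.0 * math.atanh(k) + 2.0 * math.atan(k)

def kdot(k):
    """dkappa/dT from (eq-kdot)."""
    return -k * k * (1.0 - k ** 4) / (1.0 + 3.0 * k ** 4)

# d/dT = -d/dx on functions of x = R - T, so F'(kappa) dkappa/dT = +kappa0
for k in (-0.9, -0.5, -0.1):
    dF = (F(k + h) - F(k - h)) / (2.0 * h)
    assert math.isclose(dF * kdot(k), kappa0, rel_tol=1e-5)

def a_cubed(k):
    v2 = (k / kappa0) ** 2
    return (4.0 * M / 3.0) * abs(k) * (1.0 - v2 * v2) * ((1.0 + v2) / (1.0 + 3.0 * v2)) ** 2

def b_cubed(k):
    v2 = (k / kappa0) ** 2
    return 9.0 * M / (2.0 * k * k) * (1.0 - v2) * (1.0 + 3.0 * v2)

# brute-force scan for the maximum of a on kappa in (-kappa0, 0)
k_star = max((-0.999 + 0.998 * i / 100000 for i in range(100001)), key=a_cubed)
assert math.isclose(k_star, -kappa0 / math.sqrt(5.0), abs_tol=1e-4)

# a(kappa_*)^3 = M/M_min with M_min = 25 sqrt(5)/(18 kappa0)
M_min = 25.0 * math.sqrt(5.0) / (18.0 * kappa0)
assert math.isclose(a_cubed(k_star), M / M_min, rel_tol=1e-7)

# r_*^3 = b(kappa_*)^3 = 144 M / (5 kappa0^2)
assert math.isclose(b_cubed(-kappa0 / math.sqrt(5.0)), 144.0 * M / (5.0 * kappa0 ** 2), rel_tol=1e-12)
```

Both checks are pointwise, so no ODE integration is required.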
The maximally extended solution shows that all non-comoving particles falling through the event horizon will eventually escape to another universe \cite{ThroughBH}. It follows that no information is ``trapped'' inside the finite region $\mathrm{IIa}$ and there is no upper limit on the amount of information that can fall into the black hole. Even though figure \ref{fig:CDmax} bears similarity with the conformal diagram of the Reissner-Nordstr\"om and Kerr spacetimes \cite{FrolovBH}, there is a crucial difference: There is no Cauchy horizon at $r=r_{-}$ and the regions $\mathrm{IIa}$, $\mathrm{IIb}$, $\mathrm{IVa}$, $\mathrm{IVb}$ etc. represent static patches of de Sitter space. Moreover, there is no singularity at $r=0$ and geodesics reaching this point will simply be reflected. \clearpage \begin{figure*}[h] \centering \begin{tikzpicture}[scale=0.25] \tikzmath{\dx = 20;} \node (I) at ( 4,0) {I}; \node (II) at (-4,0) {}; \node (III) at (0, 4) {II}; \node (IV) at (0,-4) {IV}; \node (V) at (4,8) {}; \node (VI) at (-4,8) {}; \node (VII) at (4,-8) {}; \node (VIII) at (-4,-8) {}; \node at (0,-16) {$M>M_{\min}$}; \path (II) +(90:4) coordinate[label= left:$ $] (IItop)+(-90:4) coordinate[label=left:$ $] (IIbot)+(0:4) coordinate (IIright)+(180:4) coordinate[label=180:$ $] (IIleft); \draw[dashed] (IIbot) -- (IIleft) -- (IItop); \path(I) +(90:4) coordinate[label=above right:$ $] (Itop)+(-90:4) coordinate[label=right:$ $] (Ibot)+(180:4) coordinate (Ileft)+(0:4) coordinate[label=right:$ $] (Iright); \path (III) +(2.5,4) coordinate[label= center:$ $]+(-2.5,4) coordinate[label=center:$\textup{IIa}$]; \path(IV) +(2.5,-4) coordinate[label= center:$ $] +(-2.5,-4) coordinate[label=center:$\textup{IVa}$]; \path (III) +(90:4) coordinate (IIItop)+(-90:4) coordinate (IIIbot)+(180:4) coordinate (IIIleft)+(0:4) coordinate (IIIright); \path(V) +(90:4) coordinate (Vtop)+(-90:4) coordinate (Vbot)+(180:4) coordinate (Vleft)+(0:4) coordinate (Vright); \path(VI) +(90:4) coordinate[label=above:$ 
$] (VItop)+(-90:4) coordinate (VIbot)+(180:4) coordinate (VIleft)+(0:4) coordinate (VIright); \path(VII) +(90:4) coordinate (VIItop)+(-90:4) coordinate (VIIbot)+(180:4) coordinate (VIIleft)+(0:4) coordinate (VIIright) ; \path(VIII) +(90:4) coordinate (VIIItop)+(-90:4) coordinate (VIIIbot)+(180:4) coordinate (VIIIleft)+(0:4) coordinate (VIIIright); \draw (Ileft) -- (Itop) -- node[midway, above right] {$\cal{J}^+$}(Iright) -- node[midway, below right] {$\cal{J}^-$}(Ibot) -- (Ileft) -- cycle; \draw (IIItop) -- (IIIright); \draw (IIIbot) -- (IIIleft); \draw (Ileft) -- (VIIItop); \draw (VIIIright) -- (Ibot); \draw[dashed] (Vleft) -- (Vtop) -- (Vbot); \draw (VItop) -- node[midway, above right] { } (VIright) -- (VIbot) --node[midway, above,sloped] {$r=0$} (VItop)-- cycle; \draw[dashed] (VIItop) --(VIIbot) -- (VIIleft); \draw (VIIItop) -- (VIIIright) -- node[midway, below right] { }(VIIIbot) --node[midway, above,sloped] {$r=0$} (VIIItop)-- cycle; \node (0) at (\dx+1-6,6) {}; \node (I) at ( \dx+1,0) {I}; \node (II) at (\dx+1-6,-6) {}; \node (III) at (\dx+1+6, 6) {}; \path (I) +(-3.75,6) coordinate[label= center:$\textup{IIa}$]+(-3.75,-6) coordinate[label=center:$\textup{IVa}$]; \node at (\dx,-16) {$M=M_{\min}$}; \path (0) +(0:0) coordinate (00)+(90:6) coordinate[label= above:$ $] (0top)+(-90:6) coordinate (0bot)+(90:-6) coordinate (0left)+(0:6) coordinate (0right) ; \path (II) +(0:0) coordinate (II0)+(90:6) coordinate (IItop)+(-90:6) coordinate (IIbot)+(90:-6) coordinate (IIleft)+(0:6) coordinate (IIright); \path (I) +(90:6) coordinate (Itop)+(-90:6) coordinate[label= below right:$ $] (Ibot)+(180:6) coordinate (Ileft)+(0:6) coordinate (Iright); \draw (0top) -- node[midway, above right] { } (0right) -- (0bot) --node[midway, above,sloped] {$r=0$}(0left) -- cycle; \draw (IItop) -- (IIright) --node[midway, below right] { } (IIbot) -- (IIleft) -- cycle; \draw (Itop) --node[midway, above right] {$\cal{J}^+$} (Iright) -- node[midway, below right] {$\cal{J}^-$}(Ibot); \node (0) at 
(2*\dx-6,0) {}; \node (I) at (2*\dx-1,0) {I}; \node at (2*\dx-1,-16) {$M<M_{\min}$}; \path(0) +(0:0) coordinate (00)+(90:12) coordinate[label= above:$ $] (0top)+(-90:12) coordinate (0bot)+(0:12) coordinate (0right); \draw (0top) -- node[midway, above right] {$\cal{J}^+$} (0right) -- node[midway, below right] {$\cal{J}^-$}(0bot) --node[midway, above,sloped] {$r=0$}(0top) -- cycle; \end{tikzpicture} \caption{ \label{fig:CDs} Conformal diagrams of the solution found in section \ref{sec:Solution} in the three cases $M>M_{\min}$, $M=M_{\min}$, $M<M_{\min}$.} \end{figure*} \begin{description} \item[$M>M_{\min} :$] The eternal black hole solution is covered by the regions $\textup{I}$, $\textup{II}$, $\textup{IIa}$. Regions $\textup{I}$, $\textup{IV}$, $\textup{IVa}$ describe the time reversed white hole solution. Dashed lines indicate the mirror symmetric extension. \item[$M=M_{\min} :$] The outer and inner horizon coincide and the regions $\textup{II}$ and $\textup{IV}$ shrink to a single horizon $r_+=r_-=r_*$. \item[$M<M_{\min} :$] No horizon occurs and the causal structure is just like Minkowski spacetime. Close to $r=0$ the solution approaches a static de Sitter metric replacing the singularity. 
\end{description} \clearpage \pagestyle{empty} \begin{figure} \vspace{-4.5cm} \centering \begin{tikzpicture}[scale=0.65] \node (I) at ( 4,0) {I}; \node (II) at (0, 4) {II}; \node (III) at (-4,0) {III}; \node (IV) at (0,-4) {IV}; \node (V) at (4,8) {}; \node (VI) at (-4,8) {}; \node (VII) at (4,-8) {}; \node (VIII) at (-4,-8) {}; \node (IV') at (0,12) {}; \node (I') at (4,16) {}; \node (III') at (-4,16) {}; \node (II') at (0,-12) {}; \node (I'') at (4,-16) {}; \node (III'') at (-4,-16) {}; \tikzmath{\nd = 11.5mm;} \path (II) +(2.5,4) coordinate[label= center:$\textup{IIb}$]+(-2.5,4) coordinate[label=center:$\textup{IIa}$]; \path (IV) +(2.5,-4) coordinate[label= center:$\textup{IVb}$] +(-2.5,-4) coordinate[label=center:$\textup{IVa}$]; \path (I) +(90:4) coordinate[label=above right:$i^+$] (Itop)+(-90:4) coordinate[label=right:$i^-$] (Ibot)+(180:4) coordinate (Ileft)+(0:4) coordinate[label=right:$i^0$] (Iright); \path (II) +(90:4) coordinate (IItop)+(-90:4) coordinate (IIbot)+(180:4) coordinate (IIleft)+(0:4) coordinate (IIright); \path (III) +(90:4) coordinate(IIItop)+(-90:4) coordinate(IIIbot)+(0:4) coordinate (IIIright) +(180:4) coordinate (IIIleft); \path (IV) +(90:4) coordinate (IVtop)+(-90:4) coordinate (IVbot)+(180:4) coordinate (IVleft)+(0:4) coordinate (IVright); \path (V) +(90:4) coordinate (Vtop)+(-90:4) coordinate (Vbot)+(180:4) coordinate (Vleft)+(0:4) coordinate (Vright); \path (VI) +(90:4) coordinate[label={[label distance=-7.5mm]0:{$\widetilde{i^+}$}}] (VItop)+(-90:4) coordinate (VIbot)+(180:4) coordinate (VIleft)+(0:4) coordinate (VIright); \path (VII) +(90:4) coordinate (VIItop)+(-90:4) coordinate (VIIbot)+(180:4) coordinate (VIIleft)+(0:4) coordinate (VIIright); \path (VIII) +(90:4) coordinate (VIIItop) +(-90:4) coordinate (VIIIbot)+(180:4) coordinate (VIIIleft)+(0:4) coordinate (VIIIright); \path (IV')coordinate[label= {[gray]center:$\textup{IV'}$}] +(90:4) coordinate (IV'top)+(-90:4) coordinate (IV'bot)+(180:4) coordinate (IV'left)+(0:4) 
coordinate (IV'right); \path (I')coordinate[label= {[gray]center:$\textup{I'}$}] +(90:4) coordinate (I'top)+(-90:4) coordinate (I'bot)+(180:4) coordinate (I'left)+(0:4) coordinate (I'right); \path (III')coordinate[label= {[gray]center:$\textup{III'}$}] +(90:4) coordinate (III'top)+(-90:4) coordinate (III'bot)+(180:4) coordinate (III'left)+(0:4) coordinate (III'right); \path (II')coordinate[label= {[gray]center:$\textup{II'}$}] +(90:4) coordinate (II'top)+(-90:4) coordinate (II'bot)+(180:4) coordinate (II'left)+(0:4) coordinate (II'right); \path (I'')coordinate[label= {[gray]center:$\textup{I''}$}] +(90:4) coordinate (I''top)+(-90:4) coordinate (I''bot)+(180:4) coordinate (I''left)+(0:4) coordinate (I''right); \path (III'')coordinate[label= {[gray]center:$\textup{III''}$}] +(90:4) coordinate (III''top)+(-90:4) coordinate (III''bot)+(180:4) coordinate (III''left)+(0:4) coordinate (III''right); \draw (Ileft) -- (Itop) -- node[midway, above right]{$\cal{J}^+$}(Iright) -- node[midway, below right] {$\cal{J}^-$}(Ibot) -- (Ileft) -- cycle; \draw (IItop) -- node[midway, below, sloped]{$\phantom{_-}r=r_-$}(IIright) -- node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{45}{$\phantom{_+}r=r_+$}}]{ }(IIbot) --node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{-45}{$\phantom{_+}r=r_+$}}]{ } (IIleft) --node[midway, below, sloped]{$\phantom{_-}r=r_-$} (IItop) ; \draw (IIIleft) -- (IIItop) --(IIIright) -- (IIIbot) --(IIIleft) -- cycle; \draw (Vbot) -- (Vleft) -- node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{45}{$\phantom{_-}r=r_-$}}]{ } (Vtop) -- node[midway, above, sloped] {$r=0$} (Vbot) --cycle; \draw (VItop) -- node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{-45}{$\phantom{_-}r=r_-$}}]{ }(VIright) -- (VIbot) --node[midway, above,sloped] {$r=0$} (VItop)-- cycle; \draw (IIleft) to[bend left] node[midway, below,sloped] {$\phantom{_*}r=r_*$} (IIright); \draw (IVleft) to[bend right] node[midway, above, 
sloped,label={[label distance=-3.5mm]{$\phantom{_*}r=r_*$}}]{ } (IVright); \draw (IVtop) -- node[midway, below, sloped]{$\phantom{_+}r=r_+$}(IVright) -- node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{45}{$\phantom{_-}r=r_-$}}]{ }(IVbot) --node[midway, above, sloped,label={[label distance=-\nd]90:\rotatebox{-45}{$\phantom{_-}r=r_-$}}]{ } (IVleft) --node[midway, below, sloped]{$\phantom{_+}r=r_+$} (IVtop) ; \draw (VIIbot) -- node[midway, below ,sloped]{$\phantom{_-}r=r_-$}(VIIleft) -- (VIItop) -- node[midway, above, sloped] {$r=0$} (VIIbot) --cycle; \draw (VIIItop) -- (VIIIright) -- node[midway, below ,sloped]{$\phantom{_-}r=r_-$}(VIIIbot) --node[midway, above,sloped] {$r=0$} (VIIItop)-- cycle; \draw[dashed,path fading=north] (I'left)-- (I'bot) -- (I'right); \draw[dashed,path fading=north] (III'left)-- (III'bot) -- (III'right); \draw[dashed,path fading=south] (IV'left) to[bend right] (IV'right); \draw[dashed,path fading=south] (I''left)-- (I''top) -- (I''right); \draw[dashed,path fading=south] (III''left)-- (III''top) -- (III''right); \draw[dashed,path fading=north] (II'left) to[bend left] (II'right); \end{tikzpicture} \vspace{-2cm} \caption{Conformal diagram of the maximally extended black hole solution from section \ref{sec:Solution} in the case $M>M_{\min}$.} \label{fig:CDmax} \end{figure} \clearpage \pagestyle{plain} \subsection{Electric charge} The Reissner-Nordstr\"om metric in Schwarzschild coordinates takes the form (\ref{Schwspfl}), where now \begin{equation} 1-a^2 = 1-\frac{2M}{r}+\frac{Q^2}{r^2}. \end{equation} Using the same transformation to Lema\^itre coordinates as in \cite{BlackHoleRemnants}, we find that in this case they can only cover the part $r> \frac{Q^2}{2M}$ of this spacetime which contains both horizons but not the singularity. 
The expressions for $a(x)$ and $b(x)$ in (\ref{mansatz}) can be found from the relation \begin{equation} r(x) = \frac{Q^2}{2M}\left(\theta(\bar{x})+ \theta^{-1}(\bar{x}) -1\right), \end{equation} where \begin{equation} \theta^3(\bar{x}) = 1+2\bar{x}^2\left(1+ \sqrt{1+\bar{x}^{-2}}\right),\qquad \bar{x}:= \frac{3M^2\, x}{Q^3}. \end{equation} More generally, the synchronous coordinates associated with the solution $\xi^{(c>1)}_+$ (cf. section \ref{OnGen}) also cover only the region \begin{equation} r>\frac{M}{c-1}\left(\sqrt{1+(c-1)\frac{Q^2}{M^2}}-1\right). \end{equation} Hence, even though one can obtain synchronous coordinates covering regions arbitrarily close to $r=0$, there is no global synchronous coordinate system covering the Reissner-Nordstr\"om metric. The constraint (\ref{constraint}) does not have a global solution and no synchronous Cauchy hypersurface exists. This can be taken as an indicator of the pathologies associated with the unstable interior of this spacetime, which exhibits a Cauchy horizon \cite{FrolovBH}. Searching for a modification of this metric in our modified theory of gravity, we, however, only need to match this GR solution in the low-curvature limit in the exterior. Since this metric still respects spherical symmetry and the Killing vector field $\partial/\partial t$, the ansatz (\ref{mansatz}) is general enough to also cover modifications of charged black holes. Note that for the Reissner-Nordstr\"om metric in Lema\^itre coordinates the trace of the extrinsic curvature expanded around $x=0$ reads \begin{equation} \kappa = -\frac{1}{x} -\frac{16M^4}{3Q^2} x + \mathcal{O}(x^3). \end{equation} It is hence already singular at the point $x=0$, corresponding to $r=Q^2/2M$, before the actual curvature singularity\footnote{$R^{\mu\nu}R_{\mu\nu} = 4Q^4/r^8$, $R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta} = 8\left(Q^4+6\left(Q^2-Mr\right)^2\right)/r^8$} at $r=0$ is reached.
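As a sanity check, the closed-form Lema\^itre radius above can be tested numerically. The sketch below assumes (as in the Schwarzschild case) that on the expanding branch $x>0$ the radius obeys $\mathrm{d}r/\mathrm{d}x = a = \sqrt{2M/r - Q^2/r^2}$, the defining relation of the Lema\^itre transformation; the values $M=1$, $Q=1/2$ are purely illustrative.

```python
import math

# Closed-form Lemaitre radius for Reissner-Nordstrom:
#   r(x) = (Q^2/2M) (theta + 1/theta - 1),
#   theta^3 = 1 + 2 xb^2 (1 + sqrt(1 + xb^(-2))),  xb = 3 M^2 x / Q^3.
def r_of_x(x, M=1.0, Q=0.5):
    xb = 3 * M**2 * x / Q**3
    theta = (1 + 2 * xb**2 * (1 + math.sqrt(1 + xb**-2))) ** (1 / 3)
    return (Q**2 / (2 * M)) * (theta + 1 / theta - 1)

# Compare dr/dx (central finite difference) with a = sqrt(2M/r - Q^2/r^2)
# on the expanding branch x > 0 (assumption: this is the defining ODE of
# the Lemaitre transformation, as in the Schwarzschild case).
def check(x, M=1.0, Q=0.5, h=1e-6):
    drdx = (r_of_x(x + h, M, Q) - r_of_x(x - h, M, Q)) / (2 * h)
    r = r_of_x(x, M, Q)
    return drdx, math.sqrt(2 * M / r - Q**2 / r**2)

for x in (0.1, 1.0, 10.0):
    drdx, a = check(x)
    assert abs(drdx - a) < 1e-5 * max(1.0, a)
```

In the same setup one confirms that $r \to Q^2/2M$ as $x \to 0$, i.e. the coordinates terminate exactly at the boundary of the covered region.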
The modification must hence in any case take over well before this point is reached. Since the metric is of the form (\ref{Schwspfl}), we already know that the Reissner-Nordstr\"om solution is spatially flat in Lema{\^i}tre coordinates, just like the Schwarzschild solution. The main difference in going from the uncharged to the charged case is that now we are no longer looking for vacuum solutions but for \textit{electro}vacuum solutions. In the exterior we expect the static observers defined by the Killing vector field $\partial/\partial t$ to observe only an electric but no magnetic field. Moreover, due to spherical symmetry, this electric field should only depend on the Schwarzschild $r$-coordinate. We hence use the ansatz \begin{equation} A_\mu \mathrm{d}x^\mu = \Phi(r(x)) \mathrm{d}t = \Phi(x) \frac{\mathrm{d}T-a^2\mathrm{d}R}{1-a^2} \end{equation} for the electromagnetic potential 1-form. In Lema{\^i}tre coordinates, its components with raised indices are \begin{equation} A^T = A^R = \frac{\Phi(x) }{1-a^2}. \end{equation} It follows that the only non-vanishing components of the Faraday tensor are \begin{equation} F^{TR}=-F^{RT}=\frac{2}{a^2}\frac{\partial}{\partial x} A^T. \end{equation} The vacuum Maxwell equation amounts to \begin{equation} F^{T\mu}_{;\mu} = \frac{1}{ab^2} \frac{\partial}{\partial R} \left(ab^2 F^{TR}\right) = 0, \end{equation} from which it follows that \begin{equation} F^{TR}= \frac{Q}{ab^2} \qquad \textup{and} \qquad \frac{\partial}{\partial x} A^T = 2Q \frac{a}{b^2}, \end{equation} where a constant of integration corresponding to the charge was fixed. The energy momentum tensor of the electromagnetic field is \begin{equation} T^\alpha_\beta = \frac{1}{4\pi} \left(F^{\alpha\mu}F_{\mu\beta}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}\, \delta^\alpha_\beta\right) = \frac{Q^2}{8\pi b^4}\, \mathrm{diag}\left(-1,-1,1,1 \right ).
\end{equation} Inserting this into the modified Einstein equation, the temporal equation takes the form \begin{equation} \frac{1}{3}\left( f-2\kappa f^{\prime}\right) \kappa^{2}-\Lambda +\kappa\Lambda^{\prime}-\frac{1}{2}\left( f+\kappa f^{\prime}\right) \tilde{\kappa}_{b}^{a}\tilde{\kappa}_{a}^{b}=\gamma^{RR} \left(\kappa^\vartheta_{\vartheta}\right)^2 - \gamma^{\vartheta\vartheta} -\frac{Q^2}{b^4}, \label{chbh00} \end{equation} and the analogue of (\ref{speqsub}) becomes \begin{equation} \frac{1}{ab^2} \left(ab^2 f \left(\frac{b'}{b}-\frac{a'}{a}\right)\right)' = \frac{b}{2b'}\left(\gamma^{RR}(\kappa^\vartheta_\vartheta)^2 - \gamma^{\vartheta\vartheta}\right)' - \frac{2 Q^2}{b^4}. \label{chbhspfl00} \end{equation} Assuming spatial flatness, i.e. $a = b'$, this equation has the first integral \begin{equation} \frac{b'}{b}-\frac{a'}{a} = \frac{1}{ab^2 f} \left(3M +\frac{2 Q^2}{b}\right). \end{equation} Moreover, in this case \begin{equation} A^T = - \frac{2Q}{b}, \end{equation} and the temporal equation (\ref{chbh00}) becomes \begin{equation} \kappa^2\left(f-2\kappa f'\right)-3\left(\Lambda - \kappa \Lambda'\right)= \left(f+\kappa f'\right) \left(\frac{3M+ 2 Q^2/b}{fu}\right)^2 - \frac{3Q^2}{b^4}. \label{BH00sfc} \end{equation} We see that the new terms coming from charge can become dominant only as $b=r\to0$. They remain negligible until well after the modifications have taken over at $r_*$, provided that \begin{equation} \left(M^4 M_{\min}^2\right)^{1/3}\gg Q^2. \end{equation} Since we can expect the Schwinger effect to discharge a black hole on timescales much shorter than those of Hawking radiation, this condition should be satisfied by realistic black holes close to the end of their evolution. This suggests that our conclusions drawn for the fate of evaporating uncharged black holes in \cite{BlackHoleRemnants} should also remain valid in the charged case.
Noting the sign of the charge contribution to (\ref{chbhspfl00}), there is the possibility that charge can lead to a bounce even without a spatial-curvature-dependent potential. As we can check explicitly by inserting the modified solution from section \ref{sec:Solution}, the right hand side of (\ref{chbhspfl00}) in this case exhibits a zero. Such a bounce would also prevent a blow-up of the electromagnetic field energy at $r=0$. Of course, a rigorous verification of this speculation would require a more extensive analysis beyond the scope of this paper. \subsection{First order rotation} To first order in angular momentum $J$ the Kerr metric in Boyer-Lindquist coordinates reads \cite{FrolovBH} \begin{equation} \mathrm{d}s^2 = \left(1-\frac{r_g}{r}\right)\mathrm{d}t^2-\frac{\mathrm{d}r^2}{\left(1-\frac{r_g}{r}\right)}-r^2\mathrm{d}\Omega^2+\frac{4J}{r}\sin^2\vartheta\,\mathrm{d}t\mathrm{d}\varphi \end{equation} and is still spherically symmetric. In the Lema\^itre coordinates \begin{equation*} \mathrm{d}T = \mathrm{d}t + \sqrt{\frac{r_g}{r}}\left ( 1-\frac{r_g}{r} \right )^{-1}\mathrm{d}r, \qquad \mathrm{d}R = \mathrm{d}t + \sqrt{\frac{r}{r_g}}\left ( 1-\frac{r_g}{r} \right )^{-1}\mathrm{d}r, \end{equation*} it becomes \begin{equation} \mathrm{d}s^2 = \mathrm{d}T^2-a^2(x)\mathrm{d}R^2-b^2(x)\mathrm{d}\Omega^2+ \frac{2 j\,\omega(x) \sin^2\vartheta}{1-a^2(x)} \left(\mathrm{d}T-a^2(x)\mathrm{d}R\right)\mathrm{d}\varphi, \label{eq-ansatz} \end{equation} where $x=R-T$, $j:= J/M^2$ and \begin{equation} a(x) = \left(\frac{x}{x_{+}}\right)^{-1/3}, \quad b(x) = \left(\frac{x}{x_{+}}\right)^{2/3} r_g, \quad \omega(x) = M\left(\frac{x}{x_{+}}\right)^{-2/3}. \label{SchwLemomeg} \end{equation} Let us take (\ref{eq-ansatz}) as an ansatz where we consider $a$, $b$ and $\omega$ as independent functions.
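The Schwarzschild Lema\^itre functions in (\ref{SchwLemomeg}) can be checked symbolically. The sketch below verifies $a^2 = r_g/b$, and that the spatial-flatness condition $a = b'$ fixes the normalisation to $x_+ = 2r_g/3$; the latter value is inferred here from that condition rather than quoted from the text.

```python
import sympy as sp

x, xp, rg = sp.symbols('x x_+ r_g', positive=True)

# Lemaitre functions of the Schwarzschild solution:
a = (x / xp) ** sp.Rational(-1, 3)
b = (x / xp) ** sp.Rational(2, 3) * rg

# a^2 = r_g / b: the familiar Schwarzschild metric function in Lemaitre form
assert sp.simplify(a**2 - rg / b) == 0

# Spatial flatness a = b' fixes x_+ = 2 r_g / 3 (an inferred normalisation,
# consistent with a = b' but not stated explicitly in the text).
assert sp.simplify((a - sp.diff(b, x)).subs(xp, 2 * rg / 3)) == 0
```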
Note that even though these coordinates are not synchronous, to first order in the angular momentum it still holds that \begin{equation} g^{\mu\nu}T_{,\mu}T_{,\nu}= 1+\mathcal{O}\left(j^2\right). \end{equation} Making the expansion \begin{equation} \phi = T + j\, \phi_0(x) + \mathcal{O}\left(j^2\right), \end{equation} where $\phi_0(x)$ should depend on $x$ to preserve spherical symmetry, we find the condition $\phi_0' =0$. Hence, in first order perturbation theory we can still use the approximate solution \begin{equation} \phi = T + \mathcal{O}\left(j^2\right). \end{equation} However, now $\phi^{,\varphi} = g^{T\varphi}\neq0$ and \begin{equation} \phi_{;T\varphi}= \frac{j \,\omega \sin^2\vartheta}{(1-a^2)}\frac{b'}{b} + \mathcal{O}\left(j^3\right)\neq 0. \end{equation} Hence, we cannot directly use the components of the modified Einstein equation that were derived in the synchronous coordinates above. Instead, we have to expand the full equation (\ref{mEE}) in $j$. Expanding the Ricci tensor \begin{equation} R_{\mu\nu} = {^{(0)}\!R_{\mu\nu}} + j\, {^{(1)}\!R_{\mu\nu}} + \mathcal{O}\left(j^2\right), \end{equation} we find that the only new non-vanishing contributions to first order in angular momentum are given by \begin{equation} {^{(1)}\!R^T_{\varphi}} = {^{(1)}\!R^R_{\varphi}}=\frac{\sin^2\vartheta}{a^2}\left[\frac{1}{2}a\left(\frac{\omega'}{a}\right)'-\omega\left(\frac{b''}{b} +\left(\frac{b'}{b}\right)^2-\frac{a'}{a}\frac{b'}{b}\right)\right]. \end{equation} In first order perturbation theory it still holds that \begin{equation} \phi^{;T}_{\:T} = 0, \quad \phi^{;R}_{\:R} = -\frac{a'}{a}, \quad \phi^{;\vartheta}_{\:\vartheta} = \phi^{;\varphi}_{\:\varphi} = -\frac{b'}{b}, \quad \Box \phi = - \frac{a'}{a}-2\frac{b'}{b}, \end{equation} where corrections would appear in order $\mathcal{O}(j^2)$.
Moreover, \begin{equation} \phi^{;T}_{\:\varphi} = 0, \quad \phi^{;T\;\:T}_{\:\:\:\varphi} = 0, \quad \phi^{;R}_{\:\varphi} = \frac{j \sin^2\vartheta}{2a^2}\, b^2\left(\frac{\omega}{b^2}\right)' \label{foRcovdercomp} \end{equation} where corrections would appear in order $\mathcal{O}\left(j^3\right)$. The functions $a$ and $b$ are hence still determined by the zeroth order equations (\ref{BH00}), (\ref{speqsub}). The function $\omega$ has to be obtained from one of the new off-diagonal modified Einstein equations, e.g. the $T-\varphi$ equation which, making use of (\ref{foRcovdercomp}), becomes \begin{equation} f R^T_\varphi +\phi^{;R}_{\:\varphi} f_{,R} =0. \end{equation} More explicitly, we have to determine $\omega$ from the equation \begin{equation} f\left( a\left(\frac{\omega'}{a}\right)'-2\left(\frac{b''}{b} +\left(\frac{b'}{b}\right)^2-\frac{a'}{a}\frac{b'}{b}\right)\omega \right) + f_{,R}\,b^2\left(\frac{\omega}{b^2}\right)'= 0. \end{equation} Assuming spatial flatness of the slices $T=const.$ (i.e. $a=\pm b'$), this simplifies to \begin{equation} f\left( a\left(\frac{\omega'}{a}\right)'-2\left(\frac{b'}{b}\right)^2\omega \right) + f_{,R}\,b^2\left(\frac{\omega}{b^2}\right)'= 0. \label{omegaODE} \end{equation} It is easy to see that one solution of this linear ODE is given by $\omega\propto b^2$. However, this solution does not agree with the asymptotic solution (\ref{SchwLemomeg}). Multiplying (\ref{omegaODE}) by $b^2/a$ and using (\ref{speqsub}) in the spatially flat case, it is straightforward to verify that another independent solution is given by \begin{equation} \omega = M a^2, \end{equation} where the constant of integration was already fixed to match (\ref{SchwLemomeg}). Note that this identity for the GR solution continues to hold in the modification in the spatially flat case.
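Both solutions of (\ref{omegaODE}) can be verified symbolically in the GR limit $f=1$, $f_{,R}=0$ (an illustrative restriction; checking the full modified equation would require the modified $a$ and $b$). With the Schwarzschild Lema\^itre functions $a \propto x^{-1/3}$, $b \propto x^{2/3}$, a sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
a = x ** sp.Rational(-1, 3)   # Schwarzschild Lemaitre a (constants dropped)
b = x ** sp.Rational(2, 3)    # Schwarzschild Lemaitre b (constants dropped)

# Left-hand side of the frame-dragging ODE in the GR limit f = 1, f_{,R} = 0:
#   a (omega'/a)' - 2 (b'/b)^2 omega
def ode_lhs(omega):
    return a * sp.diff(sp.diff(omega, x) / a, x) \
        - 2 * (sp.diff(b, x) / b) ** 2 * omega

assert sp.simplify(ode_lhs(b**2)) == 0   # solution omega ~ b^2
assert sp.simplify(ode_lhs(a**2)) == 0   # solution omega = M a^2 (M dropped)
```

Note that $\omega \propto b^2$ also solves the full equation, since $(\omega/b^2)' = 0$ kills the $f_{,R}$ term identically.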
It follows that the frame dragging function $\omega$ assumes a maximum at the same location as $a$ at $r=r_*$ and after that decreases until it vanishes at $r=0$ according to $\omega \propto r^2$. Since $\omega$ is bounded by a maximum value, we can expect that for small enough $j\ll 1$ our perturbative analysis is justified for the whole range of $x$. Moreover, this suggests that the spacetime structure of a rotating black hole close to $r=0$ is in fact not different from the non-rotating case. Note that in first order perturbation theory the norm of the Killing vector field (and hence the location of horizons) is still given by $a^2-1$. Moreover, for the surface gravity it holds that \begin{equation} g_s = -a'(x_+) + \mathcal{O}(j^2). \end{equation} This shows that our above conclusions are robust even for slowly rotating black holes. \clearpage \section{Conclusions} The introduction of the mimetic field $\phi$ allowed us to find a remarkably simple high-curvature modification of GR, where a scale dependence of the gravitational and cosmological constants can be implemented covariantly. We found that $\Box \phi$ is the unique measure of curvature on which the gravitational constant can depend such that the resulting modified Einstein equation is still second order in time. This modified theory of gravity hence does not exhibit any additional degrees of freedom, except that the conformal degree of freedom of the metric becomes dynamical. As a first application, we found that the most natural class of modified Friedmann universes arising from this theory generically features a de Sitter-like initial state replacing the Big Bang singularity. To resolve also the anisotropic Kasner singularity in the same way, we found that we have to require ``asymptotic freedom'' of gravity, i.e. the vanishing of the gravitational constant at limiting curvature.
Taking on the task of singularity resolution in general, spatially non-flat spacetimes, it is clear that this is too much to ask of a theory where only the conformal degree of freedom is modified. Gratifyingly, we found that the mimetic field also permits us to introduce, in a covariant manner, a potential depending on spatial curvature. In fact, adding such higher order terms to the action could even improve the renormalizability of gravity, along the lines of Ho\v{r}ava gravity. We showed that in spatially non-flat Friedmann and certain Bianchi universes a simple power law potential is already enough to replace the singularity with a bounce. In application to non-rotating black holes, we found that our modification of GR generically leads to a lower bound on the black hole mass. Minimal black holes have vanishing Hawking temperature and the final product of black hole evaporation is hence a stable remnant of minimal mass. Moreover, we found that this result is also robust when adding small amounts of charge or angular momentum. An inner horizon is already present in the non-rotating, uncharged case and the causal structure resembles those of Reissner-Nordstr\"om and Kerr, except that the region inside the inner horizon is replaced with a static de Sitter patch. Furthermore, since the mere assumption of the existence of a global solution to the mimetic constraint already implies stable causality, we expect no Cauchy horizon in the interior even for arbitrary charge and rotation. Hence the instabilities present in the Reissner-Nordstr\"om and Kerr solutions could be cured in such a modification. \clearpage \section*{Appendix} \addcontentsline{toc}{section}{Appendix} \subsection*{A: Synchronous coordinates.} \addcontentsline{toc}{subsection}{A: Synchronous coordinates} Variation of the action (\ref{action}) with respect to the Lagrange multiplier $\lambda$ yields the constraint (\ref{constraint}) \begin{equation*} g^{\mu\nu}\phi_{,\mu}\phi_{,\nu} = 1.
\end{equation*} We will see that the existence of a global scalar field with this property already has some far-reaching consequences, e.g. on the causal structure of admissible spacetimes.\footnote{For example, as shown in \cite{HawkingEllis}, the existence of a function whose gradient is everywhere time-like implies stable causality.} Taking a covariant derivative of this equation shows that \begin{equation} \phi^{,\mu}\nabla_{\mu}\phi^{,\nu} = \frac{1}{2}\, \nabla^{\nu}\left(\phi^{,\mu}\phi_{,\mu}\right)=0, \label{geodesicness} \end{equation} and hence the vector field $\phi^{,\mu}$ is tangent to a congruence of timelike geodesics. Through every point of a hypersurface of constant $\phi$ passes a unique geodesic in the congruence. Choosing coordinates\footnote{More generally: an atlas} $x^a$ on some initial $3-$hypersurface $\{\phi=\phi_i=const.\}$ then defines coordinates on any other hypersurface $\{\phi = \phi_0= const.\}$ by traveling along these unique geodesics. Since the congruence is hypersurface orthogonal and its normal vector field $\phi^{,\mu}$ has unit norm, $(t:=\phi,x^a)$ defines a synchronous coordinate system in which the metric takes the form \cite{Landau} \begin{equation*} \textup{d}s^{2}=\textup{d}t^{2}-\gamma_{ab}\textup{d}x^{a}\textup{d}x^{b}, \end{equation*} where Latin indices run over spatial coordinates. The whole spacetime is sliced into spatial hypersurfaces $\{\phi = \textup{const.}\}$ with extrinsic curvature \begin{equation*} \kappa_{ab} = \frac{1}{2} \frac{\partial }{\partial t} \gamma_{ab}, \qquad \kappa^{ab} := \gamma^{ac}\gamma^{bd}\kappa_{cd} = -\frac{1}{2} \frac{\partial }{\partial t} \gamma^{ab}, \qquad \kappa := \gamma^{ab}\kappa_{ab} = \frac{\partial }{\partial t} \ln \sqrt{\gamma} \end{equation*} and metric determinant $\gamma = \det{\gamma_{ab}} = - \det{g_{\mu\nu}}$.
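These extrinsic-curvature relations can be verified symbolically for the diagonal spatial metric $\gamma_{ab} = \mathrm{diag}(a^2, b^2, b^2)$ used in the black-hole ansatz; a sketch with sympy:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)
b = sp.Function('b')(t)

# Diagonal spatial metric of the synchronous black-hole ansatz
gamma = sp.diag(a**2, b**2, b**2)
gamma_inv = gamma.inv()

# kappa_ab = (1/2) d_t gamma_ab
kappa_lo = gamma.diff(t) / 2

# check kappa^{ab} = -(1/2) d_t gamma^{ab}
kappa_up = gamma_inv * kappa_lo * gamma_inv
assert sp.simplify(kappa_up + gamma_inv.diff(t) / 2) == sp.zeros(3, 3)

# check kappa = d_t ln sqrt(gamma)
kappa_tr = (gamma_inv * kappa_lo).trace()
detg = gamma.det()
assert sp.simplify(kappa_tr - sp.diff(sp.log(sp.sqrt(detg)), t)) == 0
```

For this metric one finds $\kappa = \dot{a}/a + 2\dot{b}/b$, which is the trace used repeatedly in the main text.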
In this splitting the non-vanishing connection coefficients are given by \begin{gather*} \Gamma^{0}_{ab}= \kappa_{ab},\qquad\Gamma^{a}_{0b}=\kappa^a_{b} := \gamma^{ac}\kappa_{cb}, \qquad \Gamma^{a}_{bc} =\lambda ^{a}_{bc}, \end{gather*} where $\lambda ^{a}_{bc}$ are the connection coefficients of the Levi-Civita connection $D$ belonging to the Riemannian $3-$metric $\gamma_{ab}$. Note that \begin{equation} \phi_{;0\alpha} = 0, \qquad \phi_{;ab} = - \kappa_{ab}, \end{equation} and the expansion and shear of the geodesic congruence $\phi^{,\mu}$ are given by $\Box\phi = \kappa$ and $\tilde{\kappa}^a_b := \kappa^a_b-\tfrac{1}{3}\kappa \delta^a_b$, respectively. The covariant 4-divergence of a vector $X^{\mu}$ is given by \begin{equation*} \nabla_\mu X^{\mu} = \partial_0 X^0 + \kappa X^0 +D_a X^{a} ={ \tfrac{1}{\sqrt{\gamma}} }\partial_0\left (\sqrt{\gamma } X^0 \right )+D_a X^{a} \end{equation*} and the d'Alembertian of a scalar $S$ is \begin{equation*} \Box S ={ \tfrac{1}{\sqrt{\gamma}} }\,\partial_0\!\left (\sqrt{\gamma }\, {\dot{S} } \right ) - \Delta S = \ddot{S} +\kappa\dot{S}- \Delta S \end{equation*} where the dot denotes $t$-derivatives and $\Delta$ is the Laplacian belonging to the Riemannian metric $\gamma$. The non-vanishing components of the four-dimensional Riemann tensor in this splitting are determined by \begin{gather*} R_{\: abc}^{0} =\kappa_{ac|b}-\kappa_{ab|c} \qquad \qquad R_{\: a0b}^{0} =\dot{\kappa}_{ab}-\kappa_{a}^{c}\kappa_{bc}\\ R_{\: abc}^{d} = {^{3}\!R_{\: abc}^{d}}+\kappa_{b}^{d}\kappa_{ca} -\kappa_{c}^{d}\kappa_{ab}\phantom{\frac{1}{1}} \end{gather*} where ${^{3}\!R_{\: abc}^{d}}$ is the Riemann tensor of the spatial metric $\gamma_{ab}$ and the notation $\%_{|b}:= D_b\%$ was used. We find the useful identity \begin{equation} \phi^{,\mu}\phi_{,\nu}R^{\nu}_{\alpha\mu\beta} = -\phi^{,\mu}\nabla_{\mu}\left(\phi_{;\alpha\beta}\right) - \phi^{;\mu}_{\;\alpha}\phi_{;\mu\beta}. 
\label{Riemannid} \end{equation} The Ricci tensor $R^{\mu}_{\nu}$ splits into extrinsic and intrinsic curvature as \begin{gather*} R^{0}_{0} = R_{00} = -\dot{\kappa}-\kappa^{ab}\kappa_{ab}\qquad\qquad R^{0}_{a} = R_{0a} = \kappa^{b}_{a|b} - \kappa_{,a}\\ -R^{a}_{b} = \gamma^{ac} R_{cb} = \tfrac{1}{\sqrt{\gamma }}\partial_0 \left (\sqrt{\gamma } \kappa^{a}_{b} \right ) + {^3\!}R^{a}_{b}. \phantom{\frac{1}{1}} \end{gather*} The Ricci scalar is thus given by \begin{equation*} {-\!R} = 2\dot{\kappa} +\kappa^2+\kappa^{ab}\kappa_{ab} + {^3\!}R. \end{equation*} The $0-0$ component of the Einstein tensor $G^{\mu}_{\nu} = R^{\mu}_{\nu}-\frac{1}{2} R \delta^{\mu}_{\nu} $ is hence \begin{equation*} G_{00}= G^{00}= G^{0}_{0}= \tfrac{1}{2}\left ( \kappa^2-\kappa^{ab}\kappa_{ab} + {^3\!}R \right ). \end{equation*} This allows us to isolate the spatial curvature scalar as \begin{equation*} {^3\!}R = 2G_{\mu\nu}\phi^{,\mu}\phi^{,\nu}-(\Box\phi)^2+\phi^{;\mu\nu}\phi_{;\mu\nu}. \end{equation*} For the evaluation of the modified Einstein equation in the synchronous frame it will be useful to note that \begin{equation} \nabla_\mu\left(\phi^{;\alpha}_{\;\beta}\tilde{f} \phi^{,\mu}\right) = \tfrac{1}{\sqrt{\gamma }}\partial_0 \left (\sqrt{\gamma } \tilde{f} \kappa^{a}_{b} \right ) \delta^\alpha_a \delta_\beta^b. \end{equation} \subsubsection*{A causal completeness condition} In this section we will find a sufficient (but not necessary) condition for causal geodesic completeness of a metric of the form (\ref{synch}). We follow mainly the steps taken in \cite{ChoquetBruhat}, applied to the $3+1$ splitting at hand. Consider the velocity vector $u^\mu$ of a geodesic parametrized by an affine parameter $s$, \begin{equation*} u^0 = \frac{\textup{d}t}{\textup{d}s},\qquad u^a = \frac{\textup{d}x^a}{\textup{d}s} = u^0\frac{\textup{d}x^a}{\textup{d}t} =: u^0v^a.
\end{equation*} The temporal component of the geodesic equation reads \begin{equation*} 0 = u^\mu \nabla_\mu u^0 = \frac{\textup{d}}{\textup{d}s} u^0 + \kappa_{ab}u^a u^b = \left(u^0\right)^2 \left(\frac{1}{u^0}\frac{\textup{d}}{\textup{d}t} u^0 + \kappa_{ab}v^a v^b \right) \end{equation*} and we can integrate to find \begin{equation} \ln u^0 = -\int\!\mathrm{d}t\, \kappa_{ab}v^a v^b. \label{eq-geodcomp} \end{equation} Timelike geodesics with $u^\mu =\phi^{,\mu}$ describe ``comoving'' observers with $v^a=0$. They are freely falling and the synchronous time $t$ measures proper time for these observers. Their affine parameter extension is hence infinite if $t$ can be extended to the range $(-\infty,\infty) $. Assuming this to be the case, the affine parameter extension of a general causal geodesic is given by \begin{equation} \int \textup{d} s = \int_{-\infty}^{\infty} \frac{\textup{d} t}{u^0}, \label{eq-paramintegral} \end{equation} and it is thus future resp. past complete if this integral diverges as $t\to \infty$ resp. $t\to -\infty$. Using the Cauchy-Schwarz inequality for the scalar product $\left \langle A ,B\right \rangle := \gamma^{ab}\gamma^{cd} A_{ac} B_{bd}$ of spatial tensors $A$ and $B$ on the right hand side of (\ref{eq-geodcomp}), we see that \begin{equation*} \ln u^0 \leq \int\!\mathrm{d}t\, \sqrt{\kappa^{ab}\kappa_{ab}}(v^c v_c) \leq \int\!\mathrm{d}t\, \sqrt{\kappa^{ab}\kappa_{ab}}, \end{equation*} where the second inequality is an equality for light-like geodesics. It hence follows that if \begin{equation} \int_{-\infty}^{\infty}\!\mathrm{d}t\, \sqrt{\kappa^{ab}\kappa_{ab}} < \infty, \label{eq-kabkabbound} \end{equation} then $1/u^0$ will be uniformly bounded away from $0$ and hence all causal geodesics are both past- and future-complete.
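The logic of this bound can be illustrated with a toy profile. Assuming (purely for illustration, not the actual black-hole solution) an integrable norm $\sqrt{\kappa^{ab}\kappa_{ab}} = 1/(1+t^2)$, the bound on $\ln u^0$ is finite, so $u^0$ is bounded and the affine length diverges; a sketch with sympy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
T = sp.symbols('T', positive=True)   # cutoff for the affine-length integral

# Toy profile (illustrative assumption, not the actual solution):
norm = 1 / (1 + t**2)        # plays the role of sqrt(kappa^{ab} kappa_{ab})

# The bound ln u^0 <= int norm dt is finite ...
C = sp.integrate(norm, (t, -sp.oo, sp.oo))
assert C == sp.pi

# ... so u^0 <= e^C, and the affine length int dt / u^0 >= e^{-C} int dt
# grows without bound: such a geodesic is complete.
affine_lower = sp.integrate(sp.exp(-C), (t, -T, T))   # = 2 T e^{-pi}
assert sp.limit(affine_lower, T, sp.oo) == sp.oo
```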
Note that the weaker condition \begin{equation} \sqrt{\kappa^{ab}\kappa_{ab}} \leq\frac{1}{\left|t\right|} \quad \textup{asymptotically as} \quad t\to\pm\infty, \label{eq-kabkabbound2} \end{equation} suffices to ensure that $u^0$ grows at most linearly in $\left|t\right|$, and hence that (\ref{eq-paramintegral}) diverges logarithmically. Thus (\ref{eq-kabkabbound2}) is also a sufficient condition for causal completeness. \subsection*{B: Explicit calculations in the variation of the action} \addcontentsline{toc}{subsection}{B: Explicit calculations in the variation of the action} \paragraph{Variation with respect to the mimetic field.} Let us vary (\ref{action}) with respect to $\phi$. To this end, calculate \begin{align} \frac{ \delta\mathcal{L} }{\delta \Box\phi} &= f'\left (R+2G_{\mu\nu}\phi^{,\mu}\phi^{,\nu} -\left ( \Box \phi \right )^2 + \phi^{;\mu\nu}\phi_{;\mu\nu} \right )-2(f-1+h')\Box\phi+2\Lambda' \phantom{\frac{1}{1}}\nonumber\\ &\doteq -2\left [(\Tilde{f}-h')_{,\alpha}\phi^{,\alpha}+\Tilde{f}\,\Box\phi+\tfrac{1}{2}f'\left ( (\Box\phi)^2+ \phi^{;\mu\nu}\phi_{;\mu\nu} \right )-\Lambda' \right ] \nonumber \\ &=: -2\left [( \Tilde{f}\phi^{,\alpha} )_{;\alpha}+\Tilde{Z} \right ] \label{dLdBoxphi} \end{align} where we introduced the useful notations \begin{equation*} \Tilde{f} := f-1+h',\qquad Z:=\tfrac{1}{2}f'\left((\Box\phi)^2+\phi^{;\mu\nu}\phi_{;\mu\nu} \right) - \Lambda', \qquad \Tilde{Z} := Z - \phi^{,\alpha}h'_{,\alpha} \end{equation*} and $\doteq$ means equality if the constraint (\ref{constraint}) is satisfied.
It follows that \begin{align*} \tfrac{1}{2}\,\delta_\phi\mathcal{L} &=-\left [ ( \Tilde{f}\phi^{,\mu} )_{;\mu}+\Tilde{Z} \right ] \Box \delta\phi+ \Tilde{f}\left ( 2G_{\mu\nu}\phi^{,\mu}\,\delta \phi^{,\nu}+ \phi^{;\mu\nu}\delta\phi_{;\mu\nu} \right ) \phantom{\frac{1}{1}}\\ &= -\left [( \Tilde{f}\phi^{,\mu} )_{;\mu}+\Tilde{Z} \right ] \delta\phi^{;\nu}_{\:\nu}+ \Tilde{f}\left (\phi^{;\mu}_{\:\nu}\,\delta\phi^{,\nu} \right )_{;\mu} -\Tilde{f}\left [\phi^{;\mu}_{\:\nu\mu}- 2G_{\mu\nu}\phi^{,\mu} \right ]\,\delta\phi^{,\nu} \phantom{\frac{1}{1}} \end{align*} Thus the variation of (\ref{action}) yields \begin{align*} -8\pi\delta_\phi S &=\int\! \textup{d}^4x \sqrt{-g}\,\delta\phi^{,\nu}\left \{( \Tilde{f}\phi^{,\mu} )_{;\mu\nu} - ( \Tilde{f} \phi^{;\mu}_{\:\nu} )_{;\mu} +\Tilde{Z}_{;\nu}+\Tilde{f}2G_{\mu\nu}\phi^{,\mu}-\lambda \phi_{,\nu}\right \} \\ &=\int\! \textup{d}^4x \sqrt{-g}\,\delta\phi^{,\nu}\left \{ ( \Tilde{f}_{,\nu}\,\phi^{,\mu} )_{;\mu}+ \Tilde{Z}_{,\nu}+\Tilde{f}\left (2G_{\mu\nu}-R_{\mu\nu} \right )\phi^{,\mu} -\lambda \phi_{,\nu}\right \} \end{align*} where covariant partial integration and the commutator of covariant derivatives were used. Here and in the following section we ignore boundary terms in the variation. Integrating by parts once again, we find the equation of motion \begin{equation*} \nabla_\nu\left [( \lambda +\Tilde{f}R )\phi^{,\nu} - ( \Tilde{f}^{,\nu}\,\phi^{,\mu} )_{;\mu} -\Tilde{Z}^{,\nu}-\Tilde{f}R^{\mu\nu}\phi_{,\mu}\right ]= 0 \end{equation*} which will be used to determine $\lambda$. \clearpage \paragraph{Variation with respect to the metric.} Next we have to vary (\ref{action}) with respect to $g_{\mu\nu}$. 
In the course of this undertaking the following identities for the variations of the metric determinant, connection coefficients and Ricci tensor will be put to good use: \begin{equation*} \delta \sqrt{-g} = -\frac{1}{2}\sqrt{-g}\, g_{\mu\nu} \delta g^{\mu\nu},\qquad \delta {\Gamma} ^{\lambda }_{\mu\nu } = - g_{\alpha(\mu}\nabla_{\nu)}\delta g^{\alpha\lambda} + \tfrac{1}{2} g_{\mu\alpha}g_{\nu\beta}\nabla^{\lambda}\delta g^{\alpha\beta} \end{equation*} \begin{equation*} \delta R_{\mu\nu} = \nabla_{\lambda}\delta \Gamma^{\lambda}_{\mu\nu}-\nabla_{\nu}\delta \Gamma^{\lambda}_{\lambda\mu} \end{equation*} Combining the latter two yields \begin{equation*} \delta R _{\mu\nu } = \tfrac{1}{2}\left [ g_{\mu\alpha}g_{\nu\beta}\Box+g_{\alpha\beta}\nabla_{\nu}\nabla_{\mu} -g_{\mu\beta}\nabla_{\alpha}\nabla_{\nu} -g_{\nu\beta}\nabla_{\alpha}\nabla_{\mu}\right ]\delta g^{\alpha\beta}. \end{equation*} In the variation of the usual Einstein action one only encounters the term \begin{equation*} g^{\mu\nu}\delta R _{\mu\nu } = \left (g_{\mu\nu} \,\Box - \nabla_\mu \nabla_\nu \right ) \delta g^{\mu \nu }, \end{equation*} which turns out to be a total covariant derivative, provided that it appears with a constant prefactor.
Using, in a first step, \begin{equation*} \delta_g\left ( 2 G_{\mu\nu}\phi^{,\mu}\phi^{,\nu} \right ) =2 \delta R_{\mu\nu}\phi^{,\mu}\phi^{,\nu} +4R_{\alpha\mu}\phi^{,\alpha}\phi_{,\nu}\delta g^{\mu\nu} -\delta R-R\phi_{,\mu}\phi_{,\nu}\delta g^{\mu\nu}, \end{equation*} the expression $\delta \mathcal{L}/\delta \Box \phi$ from (\ref{dLdBoxphi}) and ignoring boundary terms we find that \begin{align*} -16\pi\, & \delta_g S= \phantom{\frac{1}{1}} \\ \int\!\textup{d}^4x&\,\sqrt{-g} \Bigg\{ \left [ R_{\mu\nu}-\frac{1}{2} \mathcal{L}\, g_{\mu\nu} +4\Tilde{f}\phi^{,\alpha}R_{\alpha(\mu}\phi_{,\nu)}-\left(\lambda+\Tilde{f}R \right)\phi_{,\mu}\,\phi_{,\nu} \right ] \,\delta g^{\mu\nu} \\ & -\left[(\Tilde{f}\phi^{,\alpha})_{;\alpha} + \Tilde{Z}\right ]\,\underbrace{2\,\delta_g \Box\phi }_{\textsc{\romannumeral 1}}+\Tilde{f}\underbrace{2\,\phi^{,\mu}\phi^{,\nu}\delta R_{\mu\nu}}_{\textsc{\romannumeral 2}} + \Tilde{f}\underbrace{\delta_g \left(\phi^{;\mu\nu}\phi_{;\mu\nu}\right)}_{\textsc{\romannumeral 3}} - h' \underbrace{\,\delta R\phantom{_\mu} }_{\textsc{\romannumeral 4}} \Bigg \} \end{align*} The modified Einstein equation hence reads \begin{align*} G_{\mu\nu} - \Lambda g_{\mu\nu}&-\tfrac{1}{2}g_{\mu\nu}\left [(f-1)(R +{^3\!R}) +h\right ]+ 4\phi^{,\alpha}\phi_{(,\mu}R_{\nu)\alpha} +\dots \phantom{\frac{1}{1}}\\ & \dots-T^{\textsc{\romannumeral1}}_{\mu\nu}+T^{\textsc{\romannumeral2}}_{\mu\nu}+T^{\textsc{\romannumeral3}}_{\mu\nu}-T^{\textsc{\romannumeral4}}_{\mu\nu}= (\lambda + \Tilde{f}R) \phi_{,\mu}\phi_{,\nu} + 8 \pi T_{\mu\nu}^{\textup{(m)}},\phantom{\frac{1}{1}} \end{align*} where we still have to figure out the contribution of the terms $\textsc{\romannumeral1}-\textsc{\romannumeral4}$. 
\newpage Starting with term $\textsc{\romannumeral1}$, we first have to calculate \begin{equation*} 2\,\delta_g \Box\phi = -\phi^{,\mu}\nabla_\mu\left ( g_{\alpha \beta }\delta g^{\alpha \beta } \right ) + 2\nabla_\mu\left (\delta g^{\mu\nu}\phi_{,\nu} \right ) \end{equation*} where only the variation of the metric determinant and the identity \begin{equation*} {\Gamma} ^{\nu }_{\nu\mu} = \frac{1}{\sqrt{-g}}\partial_{\mu}\sqrt{-g} \end{equation*} were used. The contribution to the variation of the action of a term like $\textsc{\romannumeral1}$ multiplied by an arbitrary spacetime function $\mathcal{F}$ is thus \begin{align*} \int \textup{d}^4x\sqrt{-g} \,\mathcal{F}\,2\,\delta_g \Box \phi &=\int \textup{d}^4x\sqrt{-g}\,\mathcal{F} \left (-\nabla_\alpha\left ( g_{\mu\nu }\delta g^{\mu\nu } \right )\phi^{,\alpha} + 2\nabla_\mu\left (\delta g^{\mu\nu}\phi_{,\nu} \right ) \right ) \\ &= \int \textup{d}^4x\sqrt{-g}\left (g_{\mu\nu } (\mathcal{F}\phi^{,\alpha})_{;\alpha} -2\mathcal{F}_{(,\mu} \phi_{,\nu)} \right )\delta g^{\mu\nu} \end{align*} where covariant partial integration and the symmetry of $\delta g^{\mu \nu}$ were used. The contribution of term $\textsc{\romannumeral1}$ to the modified Einstein equation is thus \begin{equation*} T^{\textsc{\romannumeral1}}_{\mu\nu} = g_{\mu\nu}\nabla_\beta\left [\nabla_\alpha(\Tilde{f}\phi^{,\alpha}) \,\phi^{,\beta} \right ]-2\phi_{(,\mu}\nabla_{\nu )}\nabla_\alpha(\Tilde{f}\phi^{,\alpha}) + g_{\mu\nu}\nabla_\beta(\Tilde{Z}\phi^{,\beta} ) -2\Tilde{Z}_{(,\mu}\phi_{,\nu)} \end{equation*} Next, let us turn to term $\textsc{\romannumeral2}$. 
Using $\delta R_{\mu\nu}$ from above, the fact that $\phi^{,\mu}$ is geodesic (\ref{geodesicness}) and the commutator of covariant derivatives acting on a 2-tensor, we can express \begin{align*} 2\phi^{,\mu}\phi^{,\nu}\delta R _{\mu\nu } \doteq \phi_{,\alpha}\phi_{,\beta}\Box \delta g^{\alpha\beta}&+\phi^{,\mu}\nabla_{\mu}\left [\phi^{,\nu}\nabla_{\nu} \left (g_{\alpha\beta}\delta g^{\alpha\beta} \right )-2\phi_{(,\alpha}\nabla_{\beta)} \delta g^{\alpha\beta}\right ] + \phantom{\frac{1}{1}}\\ &+ 2\phi^{,\mu}\left ( \phi_{,\nu} R^{\nu}_{\:\alpha\mu\beta} - \phi_{(,\alpha}R_{\beta)\mu} \right )\delta g^{\alpha\beta}\phantom{\frac{1}{1}} \end{align*} The second term is in a form ready for covariant partial integration and the second line does not contain derivatives of $\delta g^{\alpha\beta}$. The first and the third term, however, still need some rewriting. \clearpage \noindent To this end, calculate \begin{equation*} \phi_{,\alpha}\phi_{,\beta}\Box\delta g^{\alpha\beta} = \Box\left ( \phi_{,\alpha}\phi_{,\beta}\delta g^{\alpha\beta} \right )-2\nabla_{\mu}\left ( \nabla^{\mu}\left (\phi_{,\alpha}\phi_{,\beta} \right )\delta g^{\alpha\beta}\right ) + \Box\left (\phi_{,\alpha}\phi_{,\beta} \right )\delta g^{\alpha\beta} \end{equation*} and \begin{equation*} \phi_{(,\alpha}\nabla_{\beta)} \delta g^{\alpha\beta} = \nabla_{(\alpha}\left ( \phi_{,\beta)} \delta g^{\alpha\beta} \right )- \phi_{;\alpha\beta} \delta g^{\alpha\beta}. 
\phantom{\frac{1}{1}} \end{equation*} Thus, in summary, we find that \begin{align*} 2\phi^{,\mu}\phi^{,\nu}\delta R _{\mu\nu } & \doteq \Box\left ( \phi_{,\alpha}\phi_{,\beta}\delta g^{\alpha\beta} \right )+\phi^{,\mu}\nabla_{\mu}\left [\phi^{,\nu}\nabla_{\nu} \left (g_{\alpha\beta}\delta g^{\alpha\beta} \right )-2\nabla_{(\alpha}\left ( \phi_{,\beta)} \delta g^{\alpha\beta} \right )\right]-\phantom{\frac{1}{1}}\\ &-2\nabla_{\mu}\left ( \nabla^{\mu}\left (\phi_{,\alpha}\phi_{,\beta} \right ) \delta g^{\alpha\beta}\right ) +2\phi^{,\mu}\nabla_\mu\left (\phi_{;\alpha\beta} \,\delta g^{\alpha\beta} \right )+ \phantom{\frac{1}{1}}\\ &+\left (2 \phi^{,\mu}\phi_{,\nu} R^{\nu}_{\:\alpha\mu\beta} - 2\phi^{,\mu}\phi_{(,\alpha}R_{\beta)\mu} + \Box\left (\phi_{,\alpha}\phi_{,\beta} \right )\right ) \delta g^{\alpha\beta}\phantom{\frac{1}{1}} \end{align*} Applying covariant partial integration, we find that the contribution to the modified Einstein equation of term $\textsc{\romannumeral2}$ is given by \begin{align*} T^{\textsc{\romannumeral2}}_{\alpha\beta} &= \phi_{,\alpha}\phi_{,\beta}\Box \Tilde{f}+g_{\alpha\beta}\nabla_\nu\left ( \nabla_\mu( \Tilde{f}\phi^{,\mu})\phi^{,\nu} \right ) -2\phi_{(,\alpha}\nabla_{\beta)}\left ( \nabla_\mu ( \Tilde{f} \phi^{,\mu} ) \right )+\phantom{\frac{1}{1}}\\ &+2\nabla^{\mu}\left (\phi_{,\alpha}\phi_{,\beta} \right ) \Tilde{f}_{,\mu}-2\phi_{;\alpha\beta} \nabla_\mu(\Tilde{f} \phi^{,\mu})+ \phantom{\frac{1}{1}}\\ &+\left ( 2\phi^{,\mu}\phi_{,\nu} R^{\nu}_{\:\alpha\mu\beta} - 2\phi^{,\mu}\phi_{(,\alpha}R_{\beta)\mu} + \Box\left (\phi_{,\alpha}\phi_{,\beta} \right )\right ) \Tilde{f}\phantom{\frac{1}{1}} \end{align*} Note that the second and third term in the first line cancel the two terms containing $\Tilde{f}$ in $T^{\textsc{\romannumeral1}}_{\alpha\beta}$.
Going on to term $\textsc{\romannumeral3}$, calculate \begin{equation*} \delta_g \left(g^{\mu\alpha}g^{\nu\beta}\phi_{;\mu\nu} \phi_{;\alpha\beta} \right) = 2\phi^{;\mu}_{\alpha} \phi_{;\beta\mu}\, \delta g^{\alpha\beta}-2 \phi^{;\mu\nu}\,\delta \Gamma ^{\lambda}_{\mu\nu} \phi_{,\lambda} \end{equation*} Inserting the variation of the connection coefficients from above, the second term becomes \begin{align*} -2 \phi^{;\mu\nu}\,\delta \Gamma ^{\lambda}_{\mu\nu} \phi_{,\lambda} &=2 \phi^{;\mu}_{\alpha}\,\phi_{,\beta}\nabla_{\mu} \delta g^{\alpha\beta} - \phi_{;\alpha\beta}\,\phi^{,\lambda}\nabla_{\lambda}\left (\delta g^{\alpha\beta} \right )\phantom{\frac{1}{1}} \\ &= 2\nabla_\mu\left ( \phi^{;\mu}_{\alpha}\,\phi_{,\beta}\delta g^{\alpha\beta} \right ) - 2\nabla_\mu\left ( \phi^{;\mu}_{\alpha}\,\phi_{,\beta}\right ) \delta g^{\alpha\beta} +\phantom{\frac{1}{1}} \\ &\quad -\phi^{,\lambda}\nabla_{\lambda}\left (\phi_{;\alpha\beta}\delta g^{\alpha\beta} \right ) +\phi^{,\lambda}\nabla_{\lambda}\left (\phi_{;\alpha\beta}\right ) \delta g^{\alpha\beta} \phantom{\frac{1}{1}} \end{align*} where in the second step the expression was brought to a form ready for covariant partial integration. 
Summarizing, we find that \begin{align*} \delta_g \left(\phi^{;\mu\nu} \phi_{;\mu\nu} \right) &= 2\nabla_\mu\left ( \phi^{;\mu}_{\alpha}\,\phi_{,\beta}\delta g^{\alpha\beta} \right )-\phi^{,\lambda}\nabla_{\lambda}\left (\phi_{;\alpha\beta}\delta g^{\alpha\beta} \right ) +\phantom{\frac{1}{1}} \\ &\quad +\left (\phi^{,\mu}\phi_{;\alpha\beta\mu}-2\phi_{,\beta}\phi^{;\mu}_{\alpha\mu} \right ) \delta g^{\alpha\beta} \phantom{\frac{1}{1}} \end{align*} Applying covariant partial integration, the contribution of term $\textsc{\romannumeral3}$ to the modified Einstein equation is hence \begin{equation*} T^{\textsc{\romannumeral3}}_{\alpha\beta} = -\nabla^\mu\left(\phi_{,\alpha}\phi_{,\beta}\right)\Tilde{f}_{,\mu}+\phi_{;\alpha\beta}\nabla_\mu(\phi^{,\mu}\Tilde{f}) +\left (\phi^{,\mu}\phi_{;\alpha\beta\mu}-\phi_{,\beta}\phi^{;\mu}_{\alpha\mu}-\phi_{,\alpha}\phi^{;\mu}_{\beta\mu} \right ) \Tilde{f}. \end{equation*} Using (\ref{geodesicness}) and the commutator of covariant derivatives we can bring the last term into a form more similar to terms in $T^{\textsc{\romannumeral2}}$ as \begin{equation*} \phi^{,\mu}\phi_{;\alpha\beta\mu}-\phi_{,\beta}\phi^{;\mu}_{\alpha\mu}-\phi_{,\alpha}\phi^{;\mu}_{\beta\mu} =\phi^{;\mu}_{\alpha}\phi_{;\beta\mu}-R^{\nu}_{\alpha\mu\beta}\phi^{,\mu}\phi_{,\nu}-\Box\left ( \phi_{,\alpha}\phi_{,\beta} \right ). \end{equation*} Note that the appearing Riemann tensor components can be rewritten purely in terms of covariant derivatives of $\phi$ according to (\ref{Riemannid}).
Combining all our results, we find that the sum of contributions to the modified Einstein equation is \begin{align*} -T^{\textsc{\romannumeral1}}_{\alpha\beta} + T^{\textsc{\romannumeral2}}_{\alpha\beta} + T^{\textsc{\romannumeral3}}_{\alpha\beta} &= \phi_{,\alpha}\phi_{,\beta}\Box \Tilde{f} - \nabla_{\mu}(\Tilde{Z}\phi^{,\mu} )g_{\alpha\beta } +2 \phi_{(,\alpha} \Tilde{Z}_{,\beta )}+\phantom{\frac{1}{1}}\\ &+\nabla^{\mu}\left (\phi_{,\alpha}\phi_{,\beta} \right ) \Tilde{f}_{,\mu}- \nabla_\mu(\phi_{;\alpha\beta}\Tilde{f} \phi^{,\mu})- 2\phi^{,\mu}\phi_{(,\alpha}R_{\beta)\mu}\Tilde{f}\,.\phantom{\frac{1}{1}} \end{align*} Finally, term $\textsc{\romannumeral4}$ is easily found to be given by \begin{equation*} T^{\textsc{\romannumeral4}}_{\alpha\beta} = \left(g_{\alpha\beta}\Box-\nabla_\alpha\nabla_\beta+R_{\alpha\beta}\right)h'. \end{equation*} \bigskip \bigskip \textbf{{\large {Acknowledgments}}} The work of A. H. C. is supported in part by the National Science Foundation Grants No. Phys-1518371 and No. Phys-5912998. The work of V.M. and T.B.R. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2111 -- 390814868. V.M. is grateful to the Korea Institute for Advanced Study, where part of this work was completed, for its hospitality.
\section{Introduction} Discussions of Maxwell's demon have provided a better understanding of the relation between information entropy and entropy production~\cite{Szilard,demon,Maruyama}. As a generalization of this relation, the second law of thermodynamics has been extended to an open system under feedback control~\cite{Sagawa,Sagawa2}. The generalized second law reads \begin{equation} \beta \left( \left< W \right> - \Delta F \right) \geq - \left<I \right>, \label{SagawaUeda} \end{equation} where $\left< W \right>$ is the ensemble average of the work $W$ exerted on the open system, $\Delta F$ is the free energy difference gained in the open system, and $\left< I \right>$ is the mutual information obtained by the feedback protocol. The open system is in contact with a thermal reservoir at temperature $T =(k_{\rm B} \beta)^{-1}$, where $k_{\rm B}$ is the Boltzmann constant. The difference $\left< W \right> - \Delta F $ amounts to the dissipated work in the open system. When the dissipated work becomes negative, the feedback can extract work from the heat reservoir. The amount of work is bounded by the mutual information $\left< I \right>$, owing to the generalized second law, Eq.(\ref{SagawaUeda}). Feedback control in Brownian systems has important applications in noise cancellation, namely, cold damping or entropy pumping~\cite{Kim,Jourdan}. For instance, in cold damping, the thermal noise of the cantilever in an atomic force microscope was canceled through a measurement of the velocity and feedback control with a force proportional to the velocity of the cantilever~\cite{Jourdan}. Similarly, in entropy pumping, the reduction of thermal fluctuations by optical tweezers under velocity-dependent feedback control was proposed~\cite{Kim}. These discussions did not take into account noise effects in the feedback system, which are unavoidable in a real experiment.
Only the ideal condition in which the effective temperature reaches 0~K was discussed in Ref.~\cite{Jourdan}. The fundamental limit of cooling by feedback in the presence of measurement errors has not been discussed. To discuss the noise effects, we study the generalized second law for a one-dimensional Langevin system and derive the relation between fluctuations and mutual information. In our derivation, we can apply the following remarkable progress in nonequilibrium statistical mechanics. The fluctuation theorem (FT)~\cite{Evans,Gallavotti,Crooks} and the Jarzynski equality~\cite{Jarzynski} are remarkable advances connected to the second law. The premise of the FT, the detailed FT (the FT for a specific trajectory), is also the premise of the generalized second law~\cite{Sagawa2}. The detailed FT can be derived for many systems, including a Langevin system~\cite{Evans2,Jarzynski2,Chernyak}. Maxwell's demon can be discussed using the FT for a Langevin system~\cite{Kim}. Moreover, there are relations between fluctuations and entropy change for a Langevin system. The Harada-Sasa equality, or the generalization of the fluctuation-dissipation theorem (FDT)~\cite{Harada,Harada2,Speck}, clarifies the relation between the rate of energy dissipation and the violation of the FDT. The FDT is the relation between thermal fluctuations and dissipation in equilibrium and is connected to the FT and the Jarzynski equality~\cite{Evans2, Jarzynski}. Generalizations of the FDT for nonequilibrium processes can generally be obtained from the perturbation dependence of a path probability~\cite{Harada2, Baiesi}. In our discussion, we derive the Harada-Sasa equality and the generalized second law for a nonequilibrium transition performed by feedback control. Since these two equalities are connected in terms of the entropy change in the heat reservoir, we can obtain bounds on the FDT violation.
The FDT violation is bounded by the mutual information, which is characterized by the measurement errors of the feedback system. Hence, the expression for the bound quantifies the effects of measurement error on the FDT violation. Here we show that the effects of error are dominant, especially in a cold damping system. We construct two cold damping models under velocity-dependent feedback control including measurement errors and discuss the effects of error on the FDT violation. Furthermore, in view of the effective temperature, the bound on the FDT violation gives the cooling limit of the effective temperature in a steady state. The lower bound on the effective temperature is determined by the balance between the information obtained by the measurement for feedback control and the information lost as a result of relaxation. The inequality giving the lower bound on the effective temperature has a form similar to that of the Carnot efficiency. \section{System and feedback protocol} We study an underdamped Langevin equation including feedback, described as \begin{equation} m\ddot{x}(t) + \gamma \dot{x}(t) = F_{\lambda(t,y)}(x(t)) + \epsilon f_p(t) + \xi(t), \label{Feedback langevin eq} \end{equation} where $m$ is the mass of a Brownian particle and $\gamma$ is the friction coefficient. We assume that the friction coefficient $\gamma$ does not depend on the time $t$. The feedback force $F_{\lambda(t,y)}(x(t))$ is the external force, which generally includes a potential force $ -\partial U/\partial x$ and a constant driving force $f_{\rm ex}$, as in Ref.~\cite{Harada}. $\lambda(t,y)$ is a control parameter for a nonequilibrium transition, which depends on the time $t$ and the measurement outcomes $y=\{ y_1, \dots ,y_n \}$. $\epsilon f_p (t)$ is a perturbation force introduced for the discussion of the response function. The thermal noise $\xi(t)$ is zero-mean white Gaussian noise with variance $2\gamma k_{\rm B} T$.
Throughout this paper, products of stochastic variables are interpreted as Stratonovich-type integrals without explicit remarks. We consider a nonequilibrium transition performed by the feedback force $F_{\lambda(t,y)}(x(t))$ from time $t=0$ to $t= \tau$. We denote the phase-space point of the Langevin system at time $t$ by $\Gamma(t)=(x(t),\dot{x}(t))$ and the trajectory of a transition by $\hat{\Gamma} = \{\Gamma(t)|0 \leq t \leq \tau \}$. We assume that measurements for the feedback control are performed at times $t = t_{M_i}$ $(i=1,\dots,n)$, where $0\leq t_{M_1}\leq \dots \leq t_{M_{n}} \leq\tau$, and that the measurement outcome $y_i$ is obtained at time $t = t_{M_i}$. The probability of obtaining the measurement outcome $y_i$ depends on the phase-space point $\Gamma_{M_i} = \Gamma(t_{M_i})$. Therefore the stochastic process of measurement outcomes $y$ is determined by the conditional probabilities \begin{equation} p_i(y_i| \Gamma_{M_i} ) = g_i(y_i, \Gamma_{M_i} ), \label{conditional probability eq} \end{equation} where $g_i(y_i, \Gamma_{M_i} ) $ is a function that characterizes the measurement error of the feedback system. The conditional probabilities are normalized as $\int dy_i p_i(y_i|\Gamma_{M_i})=1$. In the system, the measurement outcomes $y$ determine the time evolution of the feedback force $F_{\lambda(t,y)}(x(t))$. Due to the causality of feedback control, the time evolution of the feedback force $F_{\lambda(t,y)}(x(t))$ depends on the $i$-th measurement outcome $y_i$ only for $t\geq t_{M_i}$ (see Fig.~\ref{fig.0}). \begin{figure} \includegraphics[scale=0.6]{figure1.eps} \caption{\label{fig.0} An open system subjected to thermal noise is coupled to a feedback system. The measurement outcome $y_i$ is obtained as a function of the phase-space point $\Gamma_{M_{i}}$. The control parameter $\lambda$ depends on the measurement outcomes $y$ under feedback control.
$I_i$ is the mutual information, which characterizes the dependence between the measurement outcome $y_i$ and the phase-space point $\Gamma_{M_{i}}$. $2\gamma k_{\rm B} T$ is the strength of the thermal fluctuation.} \end{figure} When the measurement outcomes $y$ are fixed, the time evolution of the control parameter $\lambda(t,y)$ is uniquely determined. Then the path probability $\mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}| \Gamma(0)]$ for the Langevin equation, Eq.(\ref{Feedback langevin eq}), is given by the Stratonovich-type path-integral expression \begin{equation} \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}| \Gamma(0)] = \frac{1}{\mathcal{N}} e^{-\frac{\beta}{4\gamma} \int^{\tau}_0 dt \left( m \ddot{x} + \gamma \dot{x} -F_{\lambda} -\epsilon f_p \right)^2 }, \label{path integral expression} \end{equation} where $\mathcal{N}$ is a normalization constant independent of $\epsilon$. The path probability $\mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}]$ including the initial density is defined as $ \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}] = \rho_0 (\Gamma(0)) \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}| \Gamma(0)] $, where $\rho_0(\Gamma)$ is the initial probability density at time $t=0$ for a transition. This path probability is normalized by the path integral as $\int [\mathcal{D} \hat{\Gamma}] \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}] =1$. The probability density at time $t$ is defined as $\rho_t(\Gamma) = \int [\mathcal{D} \hat{\Gamma}] \delta(\Gamma(t)-\Gamma) \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}]$.
For a feedback system, the ensemble averages of an arbitrary path function $A[\hat{\Gamma}]$ and an arbitrary phase function $B(\Gamma)$ are defined as \begin{equation} \left< A\right>_\epsilon= \int \prod_i \left[ dy_i\, p_i(y_i|\Gamma_{M_i}) \right] \int [\mathcal{D} \hat{\Gamma}] A[\hat{\Gamma}] \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}], \label{ensemble average 1} \end{equation} and \begin{equation} \left< B(t)\right>_\epsilon = \int \prod_i \left[ dy_i\, p_i(y_i|\Gamma_{M_i}) \right] \int [\mathcal{D} \hat{\Gamma}]B(\Gamma(t))\mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}]. \label{ensemble average 2} \end{equation} These ensemble averages are taken over all paths and all measurement outcomes. To discuss the FDT violation, we define the response function $R(t;s)$ of the system for $t>s$, using the $\epsilon$ dependence of the ensemble average, as \begin{equation} \left< \dot{x}(t) \right>_\epsilon = \left< \dot{x}(t) \right>_0 + \epsilon \int^t_{0} ds R(t;s) f_p(s) +\mathrm{O}(\epsilon^2), \label{response function} \end{equation} where $\left< \dots \right>_0$ is the ensemble average when the perturbation force $\epsilon f_p$ is $0$. Due to causality, $R(t;s)=0$ is satisfied for $t<s$. Moreover, the same-time response is defined as $ R(t;t) = \tfrac{1}{2}\left[R(t;t-0)+R(t;t+0)\right] =\tfrac{1}{2}R(t;t-0) $. To consider a steady state, we generalize Eq.(\ref{SagawaUeda}) to several measurements and feedbacks. We define the $i$-th mutual information $I_i$ between the system's state $\Gamma_{M_{i}}$ and the measurement outcome $y_i$ as $I_i \equiv \ln \left[ p_i(y_i|\Gamma_{M_i})/p_i(y_i) \right]$, where $p_i(y_i)$ is the probability of obtaining the outcome $y_i$ in the $i$-th measurement. The probability $p_i(y_i)$ is calculated as \begin{equation} p_i(y_i)=\int \prod_{j \neq i} dy_j \int [\mathcal{D} \hat{\Gamma}] \mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}] \prod_{k}p_k(y_k|\Gamma_{M_k}). \label{mutual information} \end{equation} Here, the normalization $\int dy_i p_i(y_i) = 1$ is satisfied.
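As an illustration of how the measurement error enters $\left< I_i \right>$, suppose (this Gaussian model is our example; the cold damping models studied later are specified separately) that each measurement reads off the velocity with additive Gaussian noise, \begin{equation*} g_i(y_i, \Gamma_{M_i}) = \frac{1}{\sqrt{2\pi N}}\exp\left[-\frac{\left(y_i-\dot{x}(t_{M_i})\right)^2}{2N}\right], \end{equation*} and that $\dot{x}(t_{M_i})$ is itself Gaussian with zero mean and variance $S=\left<\dot{x}^2\right>_0$. Then $y_i$ and $\dot{x}(t_{M_i})$ are jointly Gaussian, and the mutual information takes the standard Gaussian-channel form \begin{equation*} \left< I_i \right>_0 = \frac{1}{2}\ln\left(1+\frac{S}{N}\right), \end{equation*} which diverges for an error-free measurement ($N\to0$) and vanishes for an uninformative one ($N\to\infty$).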
\section{Main result} For the Langevin system including feedback effects, we prove the inequality \begin{eqnarray} \beta \int^{\tau}_0 dt \gamma \left[ \left< \dot{x}(t)^2 \right>_0 - \frac{2}{\beta} R(t;t) \right] \geq \left< \Delta \phi \right>_0 - \sum_i \left<I_i \right>_0, \nonumber \\ \label{Main result 1} \end{eqnarray} where $\left< \Delta \phi \right>_0 =\left< \ln \rho_0(\Gamma(0)) -\ln \rho_{\tau}(\Gamma(\tau)) \right>_0 $ is the entropy change of the system. This inequality shows that the time integral of the FDT violation is bounded from below by the entropy change minus the sum of the mutual information. When the measurement outcomes are obtained without error, the mutual information $\left< I_i \right>_0$ goes to infinity, and in this limit the bound on the FDT violation disappears. When the measurement outcomes are obtained with error, the mutual information takes a finite value. This result quantifies the effect of measurement error on the FDT violation. The bound is crucial especially when the correlation term $\left< \dot{x}^2(t) \right>_0$ is smaller than the response term $2R(t;t)/\beta $. It is therefore important for problems in which dynamical feedback makes the effective temperature of the system lower than the temperature of the heat reservoir, because the effective temperature is defined as the ratio of the correlation term to the response term in a steady state, \begin{equation} T_{\rm eff} = \frac{\left< \dot{x}^2(t) \right>_0}{2 k_{\rm B} R(t;t)}. \label{effective temperature} \end{equation} The relation between the generalized FDT and the effective temperature is discussed in Ref.~\cite{Baiesi2}. When the system is considered to be approximately in a steady state, as a result of coarse graining over time under a time-independent feedback protocol, we can prove the following relation for the effective temperature $T_{\rm eff}$ \begin{equation} \frac{T_{\rm eff} -T}{T} \geq - \frac{\sum_i \left< I_i \right>_0}{\tau}t_r.
\label{Main result 2} \end{equation} where $t_r = m/\gamma$ is the relaxation time and $\sum_i \left< I_i \right>_0$ is the sum of the mutual information obtained within the time duration $\tau$, so that $\sum_i \left< I_i \right>_0/\tau$ is the rate at which mutual information is obtained by the measurements. The left-hand side of Eq.(\ref{Main result 2}) is similar to the Carnot efficiency, and the right-hand side can be interpreted as the information obtained within one relaxation time. It is worth noting that $t_r$ is the characteristic time over which the velocity relaxes to equilibrium in the absence of external forces ($F_{\lambda} + \epsilon f_p=0$). In other words, the system practically forgets the information on the velocity after a time $m/\gamma$. In order to cool the system down to a lower temperature, we should obtain the information on the velocity of the particle and apply feedback control before the system loses this information. If the system is an overdamped Langevin system $(m/\gamma \to 0)$, the right-hand side of Eq.(\ref{Main result 2}) becomes $0$, and the inequality then indicates that the overdamped Langevin system cannot be cooled by the feedback force. In addition, if the relaxation time is smaller than the measurement interval, cooling the system is difficult, and the lower bound on the effective temperature is determined by this inequality. We prove these inequalities in the next section. \section{Proof} For the discussion of the detailed FT, a reversed process is introduced. We define the time-reversal map as $(x,\dot{x})^* = (x,-\dot{x})$. When $\hat{\Gamma} $ is the trajectory of the forward process, the trajectory of the reversed process is given by $\hat{\Gamma}^\dagger = \{\Gamma^{*}(\tau-t)| 0 \leq t \leq \tau \}$. In the reversed process, the control parameter is taken to be $\lambda(\tau-t,y)$, using the protocol $\lambda(t,y)$ of the forward process.
We assume that the initial probability density of the reversed process is equal to the final probability density of the forward process ($\rho_0(\Gamma^*(\tau)) = \rho_{\tau}(\Gamma(\tau))$). According to Eq.(\ref{path integral expression}), the local detailed balance for the Langevin system is derived as \begin{equation} \frac{\mathcal{P}^\epsilon_{\lambda(t,y)}[\hat{\Gamma}]}{\mathcal{P}^\epsilon_{\lambda(\tau -t,y)}[\hat{\Gamma}^{\dagger}]} = \exp \left[ \int_0^{\tau} dt \omega(t) - \Delta \phi \right], \label{local detailed balance} \end{equation} where $\omega(t)$ is the entropy production rate defined as \begin{equation} \omega(t) = \beta \dot{x}(t) \left[ F_{\lambda(t,y)}(x(t)) + \epsilon f_p(t)- m\ddot{x} (t) \right]. \label{entropy production rate} \end{equation} The entropy production rate $\omega(t) = \beta\dot{x}(t) \left[ \gamma \dot{x}(t) - \xi(t) \right]$ is consistent with the definition of the energy dissipation rate in Ref.~\cite{Sekimoto}. The generalized Jarzynski equality~\cite{Sagawa2} for the system can be derived using the definition of the ensemble average including the feedback, Eq.(\ref{ensemble average 1}), as \begin{eqnarray} &&\left< e^{-\int_0^{\tau} dt \omega(t) + \Delta \phi - \sum_i I_i} \right>_\epsilon \nonumber \\ &=& \int \prod_i dy_i p_i(y_i)\int [\mathcal{D} \hat{\Gamma}] \mathcal{P}^\epsilon_{\lambda(\tau-t,y)}[\hat{\Gamma}^{\dagger}] \nonumber \\ &=&1. \label{generalized Jarzynski equality} \end{eqnarray} Due to the convexity of the exponential function, Jensen's inequality can be applied to Eq.(\ref{generalized Jarzynski equality}).
Jensen's inequality for $\epsilon = 0$ then yields the generalized second law for a feedback Langevin system, \begin{eqnarray} \beta \int^\tau_0 dt \left<\dot{x}(t)\left[ F_{\lambda(t,y)}(x(t)) - m \ddot{x}(t) \right] \right>_0 \nonumber\\ - \left< \Delta \phi \right>_0 \geq - \sum_i \left<I_i \right>_0, \label{generalized second law} \end{eqnarray} whose left-hand side is the entropy production from time $t=0$ to time $t=\tau$; the entropy production is thus bounded from below by minus the sum of the mutual information obtained in that interval. To discuss the violation of the FDT, we start with the identity \begin{eqnarray} &&\left. \frac{\partial}{\partial \epsilon}\left< \dot{x}(t) e^{-\epsilon \beta \int^\tau_0 dt' \dot{x}(t')f_p(t')} \right>_\epsilon \right|_{\epsilon=0}\nonumber \\ &=& \left. \frac{\partial \left< \dot{x}(t) \right>_\epsilon}{\partial \epsilon} \right|_{\epsilon=0}-\beta \int^\tau_0 dt' f_p(t') \left< \dot{x}(t) \dot{x}(t') \right>_0. \label{identity 1} \end{eqnarray} The definition of the response function, Eq.(\ref{response function}), gives us the relation \begin{equation} \left. \frac{\partial \left< \dot{x}(t) \right>_\epsilon}{\partial \epsilon} \right|_{\epsilon=0} = \int^t_{0} dt' R(t;t') f_p(t'). \label{response function 2} \end{equation} Moreover, we can calculate the identity, Eq.(\ref{identity 1}), exactly using the path probability, Eq.(\ref{path integral expression}), as \begin{eqnarray} &&\left. \frac{\partial}{\partial \epsilon}\left< \dot{x}(t) e^{-\epsilon \beta \int^\tau_0 dt' \dot{x}(t')f_p(t')} \right>_\epsilon \right|_{\epsilon=0} \nonumber \\ &=& \frac{\beta}{2\gamma} \int^\tau_0 dt' f_p(t') \left< \dot{x}(t)\left[ -\gamma\dot{x}(t') -F_{\lambda(t',y)}(x(t')) \right. \right. \nonumber \\ &&\left. \left. +m\ddot{x}(t') \right] \right>_0.
\label{identity 2} \end{eqnarray} Substituting a small impulse force $f_p(t')=\delta(t'-t+s)$ into Eqs.(\ref{identity 1})-(\ref{identity 2}) for $s \neq 0$, and using the causality $R(t;t+s)=0$ for $s>0$, the generalized FDT for the feedback system can be derived as \begin{eqnarray} &&\gamma \left[ \left< \dot{x}(t) \dot{x}(t-s) \right>_0 - \frac{2}{\beta} R(t;t-s) \right] \nonumber \\ &=& \left< \dot{x}(t) \left[ F_{\lambda(t-s,y)}(x(t-s)) -m \ddot{x}(t-s) \right] \right>_0. \label{Harada-Sasa} \end{eqnarray} Using the definition of the Stratonovich integral and the equal-time response $R(t;t)$, the relation between the equal-time response and correlation can be obtained as \begin{eqnarray} &&\gamma \left[ \left< \dot{x}^2(t) \right>_0 - \frac{2}{\beta} R(t;t) \right] \nonumber \\ &=&\left< \dot{x}(t) \left[ F_{\lambda(t,y)}(x(t)) -m \ddot{x}(t) \right] \right>_0. \label{Harada-Sasa 2} \end{eqnarray} This equality is the Harada-Sasa equality for a Langevin system with feedback. The left-hand side of Eq.(\ref{Harada-Sasa 2}) is the degree of violation of the FDT and the right-hand side of Eq.(\ref{Harada-Sasa 2}) represents the energy dissipation rate. In an equilibrium state, the FDT violation vanishes because the feedback force $F_{\lambda(t,y)}(x(t))$ reduces to a time-independent potential force, $-\partial U(x)/\partial x$, so that both correlations, $\left< \dot{x}(t) F_{\lambda(t,y)}(x(t)) \right>_0$ and $\left< \dot{x}(t) \ddot{x}(t) \right>_0$, are $0$. We thus obtain the first main result, Eq.(\ref{Main result 1}), from Eqs.(\ref{generalized second law}) and (\ref{Harada-Sasa 2}). This result is valid for Langevin dynamics driven by the feedback force. To discuss the effective temperature, we assume that, as a result of coarse graining over time, the system can approximately be considered to be in a nonequilibrium steady state. A steady state can be introduced when the feedback protocol is independent of time.
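The equilibrium statement above can be illustrated numerically. The following sketch (illustrative parameter values and a simple Euler-Maruyama discretization, not taken from the paper) simulates an equilibrium Langevin particle in a harmonic potential $U(x)=kx^2/2$ and checks that the right-hand side of Eq.(\ref{Harada-Sasa 2}), the energy dissipation rate $\left<\dot{x}F\right>_0$, vanishes, while equipartition $m\left<\dot{x}^2\right>_0 = k_{\rm B}T$ holds:

```python
import numpy as np

# Equilibrium check of the claim below Eq. (Harada-Sasa 2): for a
# time-independent potential force F = -k x the dissipation <xdot F>
# vanishes and m<xdot^2> = k_B T (equipartition).  Illustrative
# parameters and a simple Euler-Maruyama scheme, not from the paper.
rng = np.random.default_rng(4)
m, gamma, k, kT = 1.0, 1.0, 1.0, 1.0
dt, n_steps = 2e-3, 2_000_000
amp = (2.0 * gamma * kT * dt) ** 0.5       # thermal noise amplitude

x, v = 0.0, 0.0
sum_v2 = sum_vF = 0.0
noise = rng.standard_normal(n_steps)
for i in range(n_steps):
    F = -k * x                             # potential force
    v += (F * dt - gamma * v * dt + amp * noise[i]) / m
    x += v * dt
    sum_v2 += v * v
    sum_vF += v * F
kinetic_temp = m * sum_v2 / n_steps        # should be close to kT
dissipation = sum_vF / n_steps             # should be close to 0
```

Both quantities agree with the equilibrium prediction within statistical error, consistent with the vanishing of the FDT violation in equilibrium.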
In our protocol, the way the feedback force depends on the $i$-th measurement outcome does not depend on $i$. In a steady state, the correlation term $\left< \dot{x}^2(t) \right>$ and the response term $R(t;t)$ do not depend on time $t$. The effective temperature $T_{\rm eff}$ is defined by the ratio of the correlation term to the response term in a steady state, as in Eq.(\ref{effective temperature}). In an equilibrium state, the effective temperature is equal to the temperature of the heat reservoir because the degree of the FDT violation is $0$, $\left< \dot{x}^2(t) \right>_0 - 2R(t;t)/\beta=0$. In a nonequilibrium steady state, the response function $R(t;t)$ can be calculated using the Furutsu-Novikov-Donsker formula, as in Refs.~\cite{Deutsch} and~\cite{Ohta}, when the noise term $\xi(t)$ is a zero-mean white Gaussian noise. The correlation $\left< \dot{x}(t) \xi(t) \right>_0$ becomes $2\gamma R(t;t)/\beta$. Moreover, we can calculate $\left< \dot{x}(t) \xi(t) \right>_0$ by the definition of the Stratonovich integral. When $\epsilon = 0$, the correlation $\left< \dot{x}(t) \xi(t) \right>_0$ is calculated as $\gamma/\left(m\beta\right)$. The equal-time response $R(t;t)$ in a steady state is thus obtained exactly as $ R(t;t)=1/\left(2m\right)$. This fact shows that the effective temperature fulfills \begin{equation} \left< \frac{1}{2}m \dot{x}^2 \right>_0 = \frac{1}{2}k_{\rm B} T_{\rm eff}. \label{effective temperature 2} \end{equation} If the probability density of the particle's velocity is a zero-mean Gaussian, Eq.(\ref{effective temperature 2}) means that the distribution in a steady state can be considered to be the Maxwell-Boltzmann distribution with temperature $T_{\rm eff}$. Substituting the value $R(t;t)=1/\left(2m\right)$, the steady-state condition $\left< \Delta \phi \right>_0 = 0$, and Eq.(\ref{effective temperature 2}) into the first main result, Eq.(\ref{Main result 1}), we obtain the second main result, Eq.(\ref{Main result 2}).
\section{Models for cold damping} First, we consider the cold damping process~\cite{Jourdan}, or entropy pumping~\cite{Kim}, generally given by the following Langevin equation: \begin{equation} m\ddot{x}(t) +\gamma\dot{x}(t) = -\gamma' \dot{x}(t) + \xi(t). \end{equation} In this model, $\gamma'$ is positive. This cold damping process was proposed in an experiment cooling a Brownian particle by applying a velocity-dependent feedback $-\gamma' \dot{x}(t)$. In a realistic experimental setup, this feedback can be realized by using optical tweezers~\cite{Kim,Li}. In a steady state, the effective temperature of this system, $T\gamma/\left(\gamma + \gamma'\right)$, was found to be lower than the temperature of the heat reservoir $T$. Thus this model can be regarded as noise cancellation. The feedback in this model uses the velocity of the Brownian particle $\dot{x}(t)$ without any measurement error. Substituting $F_{\lambda}=- \gamma' \dot{x}(t)$ into Eq.(\ref{Harada-Sasa 2}), the FDT violation of the system is calculated as $- \gamma' \left< \dot{x}^2(t) \right>_0 - d/dt\left<\left(m/2\right) \dot{x}^2(t) \right>_0 $. In a steady state, the condition $d/dt\left<\left(m/2\right) \dot{x}^2(t) \right>_0=0$ holds because the term $\left<\left(m/2\right) \dot{x}^2(t) \right>_0$ does not depend on time $t$. Then the FDT violation $- \gamma' \left< \dot{x}^2(t) \right>_0$ is always negative in a steady state. The effective temperature of the system is calculated by the definition, Eq.(\ref{effective temperature}), as $T_{\rm eff} = T\gamma/\left(\gamma + \gamma'\right)$. In the limit $\gamma' \to \infty$, the effective temperature $T_{\rm eff}$ reaches 0 K. This model does not yield mutual-information bounds on the cooling of the effective temperature because the feedback protocol is free of measurement errors, so the mutual information diverges.
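The steady-state value $T_{\rm eff} = T\gamma/(\gamma+\gamma')$ can be checked with a short simulation. The sketch below (illustrative parameter values, not from the paper) integrates the velocity equation $m\dot{v} = -(\gamma+\gamma')v + \xi$ with $\left<\xi(t)\xi(t')\right> = 2\gamma k_{\rm B}T\delta(t-t')$ and compares the kinetic temperature $m\left<v^2\right>$ with the prediction:

```python
import numpy as np

# Euler-Maruyama check of the cold-damping effective temperature
# T_eff = T * gamma / (gamma + gamma').  The velocity obeys
#   m dv = -(gamma + gamma') v dt + sqrt(2 gamma kT dt) * N(0, 1),
# so the stationary kinetic temperature m<v^2> should equal T_eff.
# Parameter values are illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)
m, gamma, gamma_p, kT = 1.0, 1.0, 1.0, 1.0
dt, n_steps = 2e-3, 1_000_000
amp = (2.0 * gamma * kT * dt) ** 0.5

v, acc = 0.0, 0.0
noise = rng.standard_normal(n_steps)
for i in range(n_steps):
    v += (-(gamma + gamma_p) * v * dt + amp * noise[i]) / m
    acc += v * v
T_eff_sim = m * acc / n_steps                 # kinetic temperature m<v^2>
T_eff_th = kT * gamma / (gamma + gamma_p)     # predicted effective temperature
```

With $\gamma'=\gamma$ the kinetic temperature settles near $T/2$, half the reservoir temperature, as predicted.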
This error-free model, however, cannot describe an actual setup, in which the feedback protocol inevitably involves measurement errors. If the feedback protocol of the cold damping has measurement errors, the bounds on the FDT violation given by Eq.(\ref{Main result 1}) become dominant and therefore the effective temperature cannot reach 0 K. To discuss the effects of errors on the FDT violation, we consider the following two models including measurement errors, and show the validity of the bounds on the FDT violation given by Eq.(\ref{Main result 1}). \subsection{Case 1} A model for cold damping with continuous output feedback can be described by the Langevin equation \begin{equation} m\ddot{x}(t) + \gamma \dot{x}(t) = F_{\lambda(t,y)}(x(t)) + \xi(t). \label{Feedback langevin eq 2} \end{equation} We consider the following feedback protocol for one cycle. First, a measurement of the velocity $\dot{x}(0) = \dot{x}_0$ is performed at time $t=0$. Second, a measurement outcome $y$ for the velocity $\dot{x}_0$ is obtained. In order to introduce the measurement error, we assume that the conditional probability is Gaussian with variance $\sigma^2_{\rm err}$, \begin{equation} p(y|\dot{x}_0) = \frac{1}{\sqrt{2\pi \sigma_{\rm err}^2}} \exp \left[ -\frac{(\dot{x}_0 - y)^2}{2\sigma_{\rm err}^2} \right]. \label{Condition probability} \end{equation} Third, a constant force $F_{\lambda(t,y)}(x(t)) = -\gamma' y$ is applied to the system from time $t=0$ to $t=\tau$. This feedback sequence defines one cycle.
When repeating this cycle, we assume that the system has the same Gaussian velocity distribution at times $t=0$ and $t=\tau$, instead of assuming a steady state, described as $p(\dot{x}_0) =p(\dot{x}(\tau)) = 1/\sqrt{2\pi \sigma^2} \exp \left[ - \dot{x}^2_0/\left(2 \sigma^2\right) \right].$ Due to the noise cancellation, the variance of the steady-state density becomes smaller than that of the original Maxwell-Boltzmann distribution with temperature $T$, so that $1/\left( m\beta \right) \geq \sigma^2.$ In this model, we can show the validity of Eq.(\ref{Main result 1}) for one cycle. Let the left-hand side of Eq.(\ref{Main result 1}) be defined as $\Omega_{\tau} = \beta \int^{\tau}_0 dt \gamma \left[ \left< \dot{x}(t)^2 \right>_0 - 2R(t;t)/\beta \right]$. The FDT violation, $\Omega_{\tau}$, can be calculated using Eq.(\ref{Harada-Sasa 2}) as \begin{eqnarray} \Omega_{\tau} &=& \beta \int^\tau_0 dt \left< \dot{x}(t)F_{\lambda(t,y)}(x(t)) \right>_0 \nonumber\\ &&- \left< \beta \frac{m}{2} \left[ \dot{x}^2(0)-\dot{x}^2(\tau) \right] \right>_0. \label{Calculation 1} \end{eqnarray} Under this condition, the relations $\left< \Delta \phi \right>_0 =0$ and $\left< m/2\left[ \dot{x}^2(0)-\dot{x}^2(\tau) \right] \right>_0 =0$ hold because the probability distribution is the same at $t=0$ and $t=\tau$. Then we compare the value of the FDT violation $\Omega_{\tau}$ and the mutual information $\left<I \right>$ to discuss the validity of Eq.(\ref{Main result 1}). We can calculate the violation of the FDT exactly as \begin{equation} \Omega_{\tau} =- \beta \int^\tau_0 dt \int^\infty_{-\infty} dy \int^\infty_{-\infty} d\dot{x}_0 p(\dot{x}_0) p(y|\dot{x}_0) \gamma' y \bar{\dot{x}}(t), \label{Calculation 2} \end{equation} where $\bar{\dot{x}}(t)$ is the average of the velocity with respect to the thermal noise $\xi(t)$.
$\bar{\dot{x}}(t)$ obeys the equation of motion $ m\left(d/dt\right)\bar{\dot{x}}(t) = -\gamma \bar{\dot{x}}(t) - \gamma' y $, whose solution is \begin{equation} \bar{\dot{x}}(t) = - \frac{\gamma' y}{\gamma}+ \left( \dot{x}_0 + \frac{\gamma' y}{\gamma} \right) e^{-\frac{\gamma}{m}t}. \label{Calculation 4} \end{equation} Then we substitute Eqs.(\ref{Calculation 4}) and (\ref{Condition probability}) into Eq.(\ref{Calculation 2}) to obtain the value of the FDT violation as \begin{eqnarray} \Omega_{\tau} &=& \beta \frac{\gamma'^2}{\gamma}(\sigma^2+\sigma_{\rm err}^2)\tau \nonumber \\ &&- \beta \gamma' \frac{m}{\gamma} \left[ \sigma^2 + \frac{\gamma'}{\gamma} \left( \sigma^2+\sigma^2_{\rm err} \right) \right] \left( 1 -e^{-\frac{\gamma}{m} \tau} \right). \label{Calculation 5} \end{eqnarray} The FDT violation attains its minimum in $\tau$ at $\tau_{\rm min}$, determined by $\left(d \Omega_{\tau}/d\tau \right) \left. \right|_{\tau = \tau_{\rm min}} =0 $. The value of $\tau_{\rm min}$ is calculated as $\tau_{\rm min} = m/\gamma \ln \left[1 + \left(\gamma/\gamma'\right)\left[ \sigma^2/\left(\sigma^2 + \sigma^2_{\rm err}\right)\right] \right]$. Therefore, the minimum value of the FDT violation $\Omega_{\tau_{\rm min}}$ is obtained as \begin{eqnarray} \Omega_{\tau_{\rm min}} &=& \frac{\beta m \gamma'^2(\sigma^2 + \sigma_{\rm err}^2) }{\gamma^2} \ln \left(1 + \frac{\gamma}{\gamma'} \frac{\sigma^2}{\sigma^2 + \sigma_{\rm err}^2} \right) \nonumber\\ &&- \frac{\beta m \gamma' \sigma^2}{\gamma} \nonumber\\ &\simeq& - \frac{m \beta \sigma^2}{2} \frac{1}{1+ \sigma_{r}^2}, \label{Calculation 7} \end{eqnarray} where $\sigma_{r} = \sigma_{\rm err}/\sigma$. In this calculation, the logarithmic term is expanded in terms of $\sigma^2/\left(\sigma^2 + \sigma_{\rm err}^2 \right)$ ($\leq 1$). On the other hand, the mutual information $\left< I \right>_0$ can be calculated.
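The closed forms for $\tau_{\rm min}$ and $\Omega_{\tau_{\rm min}}$ can be verified numerically. The sketch below (illustrative parameter values, not from the paper) evaluates $\Omega_\tau$ of Eq.(\ref{Calculation 5}) on a fine grid, locates its minimum by brute force and compares it with the quoted expressions:

```python
import math

# Brute-force check of tau_min and Omega_{tau_min} for case 1, using
# Omega(tau) = A*tau - B*(1 - exp(-gamma*tau/m)) from Eq. (Calculation 5).
# The parameter values below are illustrative, not taken from the paper.
beta, m, gamma, gamma_p = 1.0, 1.0, 2.0, 1.5
s2, se2 = 0.4, 0.3                     # sigma^2 and sigma_err^2

A = beta * gamma_p**2 * (s2 + se2) / gamma
B = beta * gamma_p * (m / gamma) * (s2 + gamma_p / gamma * (s2 + se2))
omega = lambda tau: A * tau - B * (1.0 - math.exp(-gamma * tau / m))

# closed forms quoted in the text
r = (gamma / gamma_p) * s2 / (s2 + se2)
tau_min = (m / gamma) * math.log(1.0 + r)
omega_min = (beta * m * gamma_p**2 * (s2 + se2) / gamma**2 * math.log(1.0 + r)
             - beta * m * gamma_p * s2 / gamma)

# grid search over tau in (0, 5]
tau_star = min((1e-4 * j for j in range(1, 50001)), key=omega)
```

Since $\Omega_\tau$ is convex in $\tau$, the grid minimizer lands within one grid step of the analytic $\tau_{\rm min}$, and the exact (unexpanded) $\Omega_{\tau_{\rm min}}$ matches $\Omega(\tau_{\rm min})$ to machine precision.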
The probability of obtaining the measurement outcome $p(y)$ is calculated as \begin{eqnarray} p(y) &=& \int^{\infty}_{-\infty} d\dot{x}_0 p(y|\dot{x}_0)p(\dot{x}_0) \nonumber\\ &=& \frac{1}{\sqrt{2 \pi (\sigma^2 + \sigma_{\rm err}^2)}} \exp \left[ - \frac{y^2}{2(\sigma^2+\sigma_{\rm err}^2)} \right]. \end{eqnarray} Then, the mutual information $\left< I \right>_0$ is obtained as \begin{eqnarray} \left< I \right>_0 &=& \int^\infty_{-\infty} dy \int^\infty_{-\infty} d\dot{x}_0 p(\dot{x}_0) p(y|\dot{x}_0) \ln \frac{p(y|\dot{x}_0)}{p(y)} \nonumber \\ &=& \frac{1}{2} \ln \left( 1+ \frac{1}{\sigma_{r}^2} \right). \label{Calculation 9} \end{eqnarray} The results of Eqs.(\ref{Calculation 7}) and (\ref{Calculation 9}) and the condition of the variance $1/\left(m\beta\right) \geq \sigma^2$ give us the inequality \begin{equation} \Omega_{\tau_{\rm min}} \geq - \frac{1}{2} \frac{1}{1+ \sigma_{r}^2} \geq -\left< I \right>_0 . \label{Calculation 11} \end{equation} \begin{figure} \includegraphics[scale=1.3]{figure2.eps} \caption{\label{fig.1} Minimum values of the FDT violation (dashed lines) and mutual information (solid line) in case 1. The FDT violation is bounded from below by minus the mutual information, so the main result, Eq.(\ref{Main result 1}), is valid in case 1.} \end{figure} Thus the bounds on the FDT violation given by Eq.(\ref{Main result 1}) are valid in this model. According to Fig.~\ref{fig.1}, the bounds on the FDT violation are effective when the measurement error cannot be neglected ($\sigma_{r}^2 \gg 1$). This result does not depend on any of the parameters $\tau$, $m$, $\gamma$, $\gamma'$, $\sigma$, or $\sigma_{\rm err}$. In other words, the validity of Eq.(\ref{Main result 1}) does not depend on the feedback parameters in this model. \subsection{Case 2} Here we consider the case where the output of the feedback control system takes only discrete values. We assume that the system has only binary states for the measurement outcome.
In such a case, without loss of generality, the measurement outcome can be represented by $y=0$ when the observed $\dot{x}_0$ is negative and by $y=1$ otherwise. The measurement error rate $q$ $(0 \leq q \leq 1/2)$ is introduced through the conditional probabilities \begin{eqnarray} p(0|\dot{x}_0)=\left\{ \begin{array}{ll} q & (\dot{x}_0 \geq 0) \\ 1-q & (\dot{x}_0 < 0) \\ \end{array} \right., \label{case2-1} \end{eqnarray} \begin{eqnarray} p(1|\dot{x}_0 )=\left\{ \begin{array}{ll} 1-q & (\dot{x}_0 \geq 0) \\ q & (\dot{x}_0 < 0) \\ \end{array} \right.. \label{case2-2} \end{eqnarray} Here, a constant force $F_{\lambda(t,0)}(x(t)) = \gamma'$ or $F_{\lambda(t,1)}(x(t)) = -\gamma'$ $(\gamma'>0)$ is applied to the system from time $t=0$ to time $t=\tau$, depending on the value of $y$. This feedback sequence defines one cycle. When repeating this cycle, we again assume that the system has the same Gaussian velocity distribution at times $t=0$ and $t=\tau$, described as $p(\dot{x}_0) =p(\dot{x}(\tau)) = 1/\sqrt{2\pi \sigma^2} \exp \left[ - \dot{x}^2_0/\left(2 \sigma^2\right) \right].$ Moreover, we again assume the condition $1/\left(m\beta\right) \geq \sigma^2.$ In this case, the violation of the FDT $\Omega_\tau$ can also be calculated as \begin{eqnarray} \Omega_\tau = \beta \sum_y \int^\tau_0 dt \int^\infty_{-\infty} d\dot{x}_0 p(\dot{x}_0) p(y|\dot{x}_0) F_{\lambda(t,y)}(x(t)) \bar{\dot{x}}(t), \nonumber \\ \label{case2-3} \end{eqnarray} where $\bar{\dot{x}}(t)$ is calculated as \begin{equation} \bar{\dot{x}}(t) = \pm \frac{\gamma'}{\gamma}- \left( - \dot{x}_0 \pm \frac{\gamma'}{\gamma} \right) e^{-\frac{\gamma}{m}t} . \label{case2-4} \end{equation} The plus and minus signs in Eq.(\ref{case2-4}) correspond to $y=0$ and $y=1$, respectively.
By substituting Eqs.(\ref{case2-1}), (\ref{case2-2}), and (\ref{case2-4}) into Eq.(\ref{case2-3}), we obtain the value of the FDT violation as \begin{eqnarray} \Omega_{\tau} = \beta \frac{\gamma'^2}{\gamma} \tau - \beta \frac{m}{\gamma}\left[ \frac{\gamma'^2}{\gamma} + 2(1-2q) \frac{\gamma' \sigma}{\sqrt{2\pi}} \right] \left( 1 -e^{-\frac{\gamma}{m} \tau} \right). \nonumber\\ \label{case2-5} \end{eqnarray} The FDT violation attains its minimum in $\tau$ at $\tau_{\rm min}$, determined by $\left(d \Omega_{\tau}/d\tau \right) \left. \right|_{\tau = \tau_{\rm min}} =0 $. In this case, the value of $\tau_{\rm min}$ is calculated as $\tau_{\rm min} = m/\gamma \ln \left[ 1 + (1-2q) \left[ 2 \sigma \gamma/\left(\sqrt{2\pi}\gamma'\right)\right] \right]$. Therefore, the minimum value of the FDT violation $\Omega_{\tau_{\rm min}}$ is obtained as \begin{eqnarray} \Omega_{\tau_{\rm min}} &=& \frac{\beta m \gamma'}{\gamma} \left[ \frac{\gamma'}{\gamma} \ln \left[ 1 + (1-2q)\frac{2\gamma\sigma}{\sqrt{2\pi}\gamma'} \right] \right. \nonumber\\ &&\left. - (1-2q)\frac{2\sigma}{\sqrt{2\pi}} \right] \nonumber\\ &\simeq& - \frac{m \beta \sigma^2}{\pi} (1-2q)^2. \label{case2-6} \end{eqnarray} In this calculation, the logarithmic term is expanded in terms of $(1-2q)$ ($\leq 1$). On the other hand, the probability of obtaining the measurement outcome $p(y)$ is calculated as $p(0) = 1/2$ and $p(1) = 1/2$.
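The same kind of check works for the binary-outcome case. The sketch below (illustrative parameter values, not from the paper) verifies the quoted $\tau_{\rm min}$ and $\Omega_{\tau_{\rm min}}$ for Eq.(\ref{case2-5}), and also the inequality $(1-2q)^2/\pi \leq \ln 2 + q\ln q + (1-q)\ln(1-q)$ underlying the final bound, over a grid of error rates $q$:

```python
import math

# Check of tau_min and Omega_{tau_min} for case 2 (illustrative
# parameters, not from the paper), plus the bound (1-2q)^2/pi <= <I>_0
# with the binary mutual information <I>_0 = ln 2 + q ln q + (1-q) ln(1-q).
beta, m, gamma, gamma_p, sigma, q = 1.0, 1.0, 2.0, 1.5, 0.7, 0.2

c = 2 * (1 - 2 * q) * sigma * gamma / (math.sqrt(2 * math.pi) * gamma_p)
A = beta * gamma_p**2 / gamma
B = beta * (m / gamma) * (gamma_p**2 / gamma) * (1 + c)
omega = lambda tau: A * tau - B * (1.0 - math.exp(-gamma * tau / m))

tau_min = (m / gamma) * math.log(1.0 + c)
omega_min = (beta * m * gamma_p / gamma
             * (gamma_p / gamma * math.log(1.0 + c)
                - (1 - 2 * q) * 2 * sigma / math.sqrt(2 * math.pi)))
tau_star = min((1e-4 * j for j in range(1, 50001)), key=omega)

def mutual_info(qq):                # binary mutual information
    return math.log(2.0) + qq * math.log(qq) + (1 - qq) * math.log(1 - qq)

min_gap = min(mutual_info(j / 1000) - (1 - 2 * j / 1000) ** 2 / math.pi
              for j in range(1, 500))
```

The bound holds with strictly positive gap on the whole grid; near $q=1/2$ both sides vanish quadratically in $(1-2q)$, with coefficients $1/2$ and $1/\pi$ respectively.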
Then, the mutual information $\left< I \right>_0$ is obtained as \begin{eqnarray} \left< I \right>_0 &=& \sum_y \int^\infty_{-\infty} d\dot{x}_0 p(\dot{x}_0) p(y|\dot{x}_0) \ln \frac{p(y|\dot{x}_0)}{p(y)} \nonumber \\ &=& \ln 2 + q\ln q + (1-q) \ln (1-q). \label{case2-7} \end{eqnarray} The results, Eqs.(\ref{case2-6}) and (\ref{case2-7}), and the condition of the variance $1/\left(m\beta\right) \geq \sigma^2$ give us the inequality (see Fig.~\ref{fig.2}) \begin{figure} \includegraphics[scale=1.3]{figure3.eps} \caption{\label{fig.2} Minimum values of the FDT violation (dashed lines) and mutual information (solid line) in case 2. The FDT violation is bounded from below by minus the mutual information, so the main result, Eq.(\ref{Main result 1}), is valid in case 2.} \end{figure} \begin{equation} \Omega_{\tau_{\rm min}} \geq - \frac{(1-2q)^2}{\pi} \geq -\left< I \right>_0 . \end{equation} Therefore we have demonstrated that Eq.(\ref{Main result 1}) is valid for these different feedback protocols. \section{Discussion} In this paper, we have discussed the effects of measurement error on the FDT violation and the effective temperature using Langevin dynamics under feedback with error. Bounds on the FDT violation and on the effective temperature in terms of the mutual information were derived. We then presented two simple models to demonstrate, through analytical calculations, the validity of the generalized second law, Eq.(\ref{SagawaUeda}), for a Langevin system with velocity-dependent feedback subject to error. Moreover, the result for the effective temperature can be viewed as a relation between the information obtained by the measurement and the relaxation of the system. We believe that this result provides a valuable approach to nonequilibrium steady-state dynamics when the information content plays a significant role in feedback control systems.
As a possible experimental realization of the proposed results, cooling of a Brownian particle by application of a feedback force with laser tweezers might be a good candidate, since the velocity of a Brownian particle is measurable with present technology~\cite{Li}. In vacuum, millikelvin cooling of a Brownian particle was recently achieved, and the lowest temperature was limited by the noise~\cite{Li2}. For the generalized second law, the inequality, Eq.(\ref{SagawaUeda}), has been tested by our group in the feedback system of a Brownian particle~\cite{Toyabe}. Therefore experimental verification may be technically feasible. An important extension of the present result would be the generalization to quantum systems, in which the measurement error originates from quantum fluctuations, or to many-particle systems. It is worth noting that stochastic cooling in particle accelerator technology uses periodic feedback control. It would be interesting to look for a theoretical relation with mutual information in many-particle systems as in Ref.~\cite{vandermeer}. \begin{acknowledgments} The authors would like to thank Dr. T. Sagawa and Professor S. Sasa for their valuable comments. \end{acknowledgments}
\section{Introduction}\label{sintro} Let $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ be stationary sequences of real random variables on the probability space $(\Omega, {\cal{A}}, P)$ with $P(U_n\in\{0,1\})=1$. We define, for $n\geq 1$, \begin{eqnarray}\label{Yn} Y_n=\left\{ \begin{array}{ll} X_n & ,\, U_n=1\\ Y_{n-1} & ,\, U_n=0\,. \end{array} \right. \end{eqnarray} If we interpret $n$ as time, the sequence $\{Y_n\}_{n\geq 1}$ corresponds to a model of failures in the records of $\{X_n\}_{n\in\mathbb{Z}}$, where a failed record is replaced by the last available one, observed at some random past instant. Thus, if, for example, $\{U_1=1,U_2=0,U_3=1,U_4=0,U_5=0,U_6=0,U_7=1\}$ occurs, we have $\{Y_1=X_1,Y_2=X_1,Y_3=X_3,Y_4=X_3,Y_5=X_3,Y_6=X_3,Y_7=X_7\}$. This constancy of some variables of $\{X_n\}_{n\in\mathbb{Z}}$ over random periods of time motivates the designation of ``stopped clock model" for the sequence $\{Y_n\}_{n\geq 1}$. Failure models studied in the literature from the point of view of extremal behavior do not consider the stopped clock model (Hall and H\"usler, \cite{HallHusler2006} 2006; Ferreira \emph{et al.}, \cite{FerreiraH++2019} 2019 and references therein). The model we will study can also be represented by $\{X_{N_n}\}_{n\geq 1}$, where $\{N_n\}_{n\geq 1}$ is a sequence of positive integer variables representable by \begin{eqnarray}\nonumber N_n=nU_n+\sum_{i\geq 1}\left(\prod_{j=0}^{i-1}(1-U_{n-j})\right)U_{n-i}(n-i),\,n\geq 1\,. \end{eqnarray} We can also state a recursive formulation for $\{Y_n\}_{n\geq 1}$ through \begin{eqnarray}\nonumber Y_n=X_nU_n+\sum_{i\geq 1}\left(\prod_{j=0}^{i-1}(1-U_{n-j})\right)U_{n-i}X_{n-i}+\prod_{i\geq 0}(1-U_{n-i})Y_{n-\kappa},\,n\geq 1,\,\,\,\kappa\geq 1\,. \end{eqnarray} Under any of these three representations (failure model, randomly indexed sequence or recursive sequence), we are not aware of any study of the extremal behavior of $\{Y_n\}_{n\geq 1}$ in the literature.
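The defining recursion (\ref{Yn}) is straightforward to simulate. The sketch below (a hypothetical helper, not code from the paper) reproduces the example above, where $\{U_1=1,U_2=0,U_3=1,U_4=0,U_5=0,U_6=0,U_7=1\}$ yields $\{Y_1=X_1,Y_2=X_1,Y_3=X_3,Y_4=X_3,Y_5=X_3,Y_6=X_3,Y_7=X_7\}$:

```python
import numpy as np

# Minimal simulation of the stopped clock model (hypothetical helper,
# not part of the paper): Y_n repeats the last value of X recorded at
# an instant with U = 1.  We take U_1 = 1 so that Y_1 is well defined.
def stopped_clock(x, u):
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = x[n] if u[n] == 1 else y[n - 1]
    return y

# the example from the text, with placeholder values X_n = n
x = np.arange(1.0, 8.0)
u = np.array([1, 0, 1, 0, 0, 0, 1])
y = stopped_clock(x, u)          # X_1, X_1, X_3, X_3, X_3, X_3, X_7
```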
Our starting assumptions on the base sequence $\{X_n\}_{n\in\mathbb{Z}}$ and on the sequence $\{U_n\}_{n\in\mathbb{Z}}$ are: \begin{enumerate} \item[(1)] $\{X_n\}_{n\in\mathbb{Z}}$ is a stationary sequence of random variables almost surely distinct and, without loss of generality, such that $F_{X_n}(x):=F(x)=\exp(-1/x)$, $x>0$, i.e., standard Fr\'echet distributed. \item[(2)] $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ are independent. \item[(3)] $\{U_n\}_{n\in\mathbb{Z}}$ is stationary and $p_{n_1,...,n_s}(i_1,...,i_s):=P(U_{n_1}=i_{1},...,U_{n_s}=i_s)$, $i_j\in\{0,1\}$, $j=1,...,s$, is such that $p_{n,n+1,...,n+\kappa-1}(0,...,0)=0$, for some $\kappa\geq 1$. \end{enumerate} The trivial case $\kappa=1$ corresponds to $Y_n=X_n$, $n\geq 1$. Hypothesis (3) means that we are assuming that it is almost impossible to lose $\kappa$ or more consecutive values of $\{X_n\}_{n\in\mathbb{Z}}$. We remark that, throughout the paper, summations, products and intersections are considered to be empty whenever the upper limit of the index is smaller than the lower limit. We will also use the notation $a\vee b=\max(a,b)$. \begin{ex}\label{ex1} Consider an independent and identically distributed sequence $\{W_n\}_{n\in\mathbb{Z}}$ of real random variables on $(\Omega, {\cal{A}}, P)$ and a Borelian set $A$. Let $p=P(A_n)$ where $A_n=\{W_n\in A\}$, $n\in\mathbb{Z}$. The sequence of Bernoulli random variables \begin{eqnarray}\label{exUn} U_n=\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}}+(1-\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}})\mathbf{1}_{\{A_{n}\}},\,{n\in\mathbb{Z}}, \end{eqnarray} where $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function, defined for some fixed $\kappa\geq 2$, is such that $p_{n,n+1,...,n+\kappa-1}(0,...,0)=0$, i.e., it is almost sure that, after $\kappa-1$ consecutive variables equal to zero, the next variable takes the value one.
In fact, for any choice of $\kappa \geq 2$, \begin{eqnarray}\nonumber \begin{array}{rl} p_{n,n+1,...,n+\kappa-1}(0,...,0)&=P(U_n=0=U_{n+1}=...=U_{n+\kappa-1}) \\ &\leq P\left(\{\displaystyle\bigcap_{i=n}^{n+\kappa-2}\overline{A}_{i}\} \cap \{U_{n+\kappa-1}=0\}\right)\\ &=P\left(\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n+\kappa-1-i}\}}=1, \{U_{n+\kappa-1}=0\}\right)=0. \end{array} \end{eqnarray} We also have \begin{eqnarray}\nonumber \begin{array}{rl} p_n(0)=&P\left(\mathbf{1}_{\{\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\}}=0,\mathbf{1}_{\{A_{n}\}}=0\right)=P(\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\overline{A}_n)\\ =&P(\overline{A}_n)-P(\bigcap_{i=0}^{\kappa-1}\overline{A}_{n-i})=1-p-(1-p)^\kappa, \end{array} \end{eqnarray} since the independence of the random variables $W_n$ implies the independence of the events $A_n$, and, for $\kappa>2$, \begin{eqnarray}\nonumber \begin{array}{rl} p_{n-1,n}(0,0)=&P((\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\overline{A}_n)\cap (\bigcup_{i=1}^{\kappa-1}A_{n-1-i}\cap\overline{A}_{n-1}))\\ =&P(\overline{A}_n\cap\overline{A}_{n-1})-P(\overline{A}_n\cap\overline{A}_{n-1}\cap(\bigcap_{i=1}^{\kappa-1}\overline{A}_{n-i}\cup \bigcap_{i=1}^{\kappa-1}\overline{A}_{n-1-i}))\\ =&P(\overline{A}_n)P(\overline{A}_{n-1})-P(\bigcap_{i=0}^{\kappa-1}\overline{A}_{n-i})=(1-p)^2-(1-p)^\kappa, \end{array} \end{eqnarray} $p_{n-1,n}(1,0)=p_n(0)-p_{n-1,n}(0,0)=p(1-p).$ In Figure \ref{FigYn} we illustrate with a particular example: $\{X_n\}_{n\in\mathbb{Z}}$ independent standard Fr\'echet, $\{W_n\}_{n\in\mathbb{Z}}$ with standard exponential marginals, $A=]0,1/2]$ (thus $p=0.3935$) and $\kappa=3$. Therefore, $p_{n,n+1,n+2}(0,0,0)=0$ and $p_{n,n+1,n+2}(1,0,0)=p_{n,n+1}(0,0)=p(1-p)^2$.
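The computations in Example \ref{ex1} can be checked by Monte Carlo. The sketch below (an assumed implementation of the construction (\ref{exUn}) with $\kappa=3$, $A=]0,1/2]$ and standard exponential $W_n$, not code from the paper) estimates $p_n(0)$ and confirms both the value $1-p-(1-p)^\kappa$ and the impossibility of $\kappa$ consecutive zeros:

```python
import numpy as np

# Monte Carlo check of Example 1 with kappa = 3: p = P(W in ]0,1/2])
# = 1 - exp(-1/2) ~ 0.3935 and p_n(0) = 1 - p - (1-p)^kappa ~ 0.3834.
# Assumed implementation of the construction of U_n, not from the paper.
rng = np.random.default_rng(2)
kappa, n = 3, 300_000
a = rng.exponential(1.0, n + kappa) <= 0.5     # indicators of A_i
p = 1.0 - np.exp(-0.5)

u = np.empty(n, dtype=int)
for i in range(n):
    j = i + kappa                              # padded index of U_i
    if not a[j - 1] and not a[j - 2]:          # last kappa-1 events failed
        u[i] = 1
    else:
        u[i] = int(a[j])

p0_emp = float(np.mean(u == 0))
p0_th = 1.0 - p - (1.0 - p) ** kappa
zeros = (u == 0).astype(int)
max_zero_window = int((zeros[:-2] + zeros[1:-1] + zeros[2:]).max())
```

The empirical frequency matches the closed form within Monte Carlo error, and no window of $\kappa=3$ consecutive zeros ever occurs, as required by hypothesis (3).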
\begin{figure} \centering \includegraphics[width=7cm,height=7cm]{falhask3-eps-converted-to.pdf} \caption{Sample path of $100$ observations simulated from $\{Y_n\}$ defined in (\ref{Yn}), based on independent standard Fr\'echet $\{X_n\}$ and on $\{U_n\}$ given in (\ref{exUn}), where the random variables $\{W_n\}$ are standard exponential distributed, $A=]0,1/2]$ (thus $p=0.3935$) and $\kappa=3$.\label{FigYn}} \end{figure} \end{ex} \vspace{0.5cm} In the next section we propose an estimator for the probabilities $p_{n,...,n+s}(1,0,...,0)$, $0\leq s<\kappa-1$. In Section \ref{sextremalindex} we analyse the existence of the extremal index for $\{Y_n\}_{n\geq 1}$, an important measure of the tendency of its high values to occur in clusters. A characterization of the tail dependence will be presented in Section \ref{stdc}. The results are illustrated with an ARMAX sequence. For the sake of simplicity, we will omit the index range in the sequence notation whenever no confusion arises, keeping the designation $\{Y_n\} $ for the stopped clock model and $\{X_n\}$ and $\{U_n\}$ for the sequences that generate it. \section{Inference on $\{U_n\} $}\label{sinfer} Assuming that neither $\{U_n\} $ nor the lost values of $\{X_n\} $ are observable, it is of interest to retrieve information about these sequences from the available sequence $\{Y_n\} $.
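Since consecutive repeated values of $\{Y_n\}$ identify the null values of $\{U_n\}$, such information can be extracted directly from the sample. The sketch below (hypothetical helper functions, not code from the paper) illustrates the idea on a toy sample: a run of $r$ equal consecutive values corresponds to $r-1$ consecutive zeros of $\{U_n\}$, so the longest observed run estimates $\kappa$:

```python
# Hypothetical helpers illustrating inference on U from Y, not code
# from the paper: runs of repeated values of Y reveal the zeros of U.
def p_hat_run(y, s):
    """Relative frequency of Y_{i-s-1} != Y_{i-s} = ... = Y_i, the
    empirical counterpart of p_{n-s,...,n}(1, 0, ..., 0)."""
    m, hits = len(y), 0
    for i in range(s + 1, m):
        block = y[i - s:i + 1]
        if y[i - s - 1] != block[0] and all(v == block[0] for v in block):
            hits += 1
    return hits / m

def kappa_hat(y):
    """Length of the longest run of equal consecutive values: a run of
    length r corresponds to r-1 consecutive zeros of U, estimating kappa."""
    best = run = 1
    for i in range(1, len(y)):
        run = run + 1 if y[i] == y[i - 1] else 1
        best = max(best, run)
    return best

y = [5.1, 5.1, 2.0, 2.0, 2.0, 9.3, 1.4]     # toy sample, runs 2, 3, 1, 1
```

On this toy sample the longest run has length $3$, and the patterns with one and with two trailing repeats each occur once out of $m=7$ observations.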
Since, for $n\geq 1$ and $s\geq 1$, we have \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle p_n(1)=E\left(\mathbf{1}_{\{Y_n\not = Y_{n-1}\}}\right),\,\, p_n(0)=E\left(\mathbf{1}_{\{Y_n = Y_{n-1}\}}\right)\\ \textrm{ and }& \displaystyle p_{n-s,n-s+1,...,n}(1,0,...,0)=E\left(\mathbf{1}_{\{Y_{n-s-1}\not =Y_{n-s}=Y_{n-s+1}=...= Y_{n}\}}\right), \end{array} \end{eqnarray} we propose to estimate these probabilities through the respective empirical counterparts based on a random sample $(\hat{Y}_1, \hat{Y}_2,...,\hat{Y}_m)$ from $\{Y_n\} $, i.e., \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle \widehat{p}_n(1)=\frac{1}{m}\sum_{i=2}^{m}\mathbf{1}_{\{\hat{Y}_i\not = \hat{Y}_{i-1}\}},\,\, \widehat{p}_n(0)=\frac{1}{m}\sum_{i=2}^{m}\mathbf{1}_{\{\hat{Y}_i = \hat{Y}_{i-1}\}}\\ \textrm{ and }& \displaystyle \widehat{p}_{n-s,n-s+1,...,n}(1,0,...,0)=\frac{1}{m}\sum_{i=s+2}^{m}\mathbf{1}_{\{\hat{Y}_{i-s-1}\not =\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=...= \hat{Y}_{i}\}}\,, \end{array} \end{eqnarray} which are consistent by the {\it weak law of large numbers}. Since the largest possible number of consecutive null values of $\{U_n\}$ is $\kappa-1$, the value of $\kappa$ can be inferred from \begin{eqnarray}\nonumber \widehat{\kappa}=1+\bigvee_{s\geq 1}\,\,\bigvee_{i=s+2}^{m} s \,\,\mathbf{1}_{\{\hat{Y}_{i-s-1}\not =\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=...= \hat{Y}_{i}\}}\,. \end{eqnarray} In order to evaluate the finite-sample behavior of the estimators above, we have simulated $1000$ independent replicas of size $m=100,1000,5000$ of the model in Example \ref{ex1}. The absolute bias (abias) and root mean squared error (rmse) are presented in Table \ref{tab1}. The results reveal a good performance of the estimators, even in the case of smaller sample sizes. Parameter $\kappa$ was always estimated with no error. \begin{table} \caption{The absolute bias (abias) and root mean squared error (rmse) obtained from 1000 simulated samples with size $m=100,1000,5000$ of the model in Example \ref{ex1}.
\label{tab1}} \begin{center} \begin{tabular}{ll|cc} & & abias & rmse\\ \hline & $m=100$ & 0.0272 & 0.0335 \\ $\widehat{p}_n(0)$ & $m=1000$ & 0.0087 & 0.0108 \\ & $m=5000$ & 0.0039 & 0.0048 \\ \hline & $m=100$ & 0.0199 & 0.0253 \\ $\widehat{p}_{n-1,n}(1,0)$ & $m=1000$ & 0.0065 & 0.0080 \\ & $m=5000$ & 0.0030 & 0.0037 \\ \hline & $m=100$ & 0.0160 & 0.0200\\ $\widehat{p}_{n-2,n-1,n}(1,0,0)$ & $m=1000$ & 0.0051 & 0.0064\\ & $m=5000$ & 0.0022 & 0.0028\\ \end{tabular} \end{center} \end{table} \section{The extremal index of $\{Y_n\} $}\label{sextremalindex} The sequence $\{Y_n\} $ is stationary because the sequences $\{X_n\} $ and $\{U_n\} $ are stationary and independent of each other. In addition, the common distribution of $Y_n$, $n \geq 1$, is also standard Fr\'echet, as is the common distribution of $X_n$, since \begin{eqnarray}\nonumber \begin{array}{rl} F_{Y_n}(x)=&\displaystyle \sum_{i=1}^{\kappa-1}P(X_{n-i}\leq x,U_{n-i}=1,U_{n-i+1}=0=...=U_n)+P(X_n\leq x)P(U_n=1)\\ =& \displaystyle F(x)\left(p_n(1)+\sum_{i=1}^{\kappa-1}p_{n-i,...,n}(1,0,...,0)\right)=F(x)\,. \end{array} \end{eqnarray} For any $\tau>0$, if we define $u_n\equiv u_n(\tau)=n/\tau$, $n\geq 1$, it turns out that $E\left(\sum_{i=1}^{n}\mathbf{1}_{\{Y_i>u_n\}}\right)=nP(Y_1>u_n) \displaystyle\mathop{\longrightarrow}_{n\to\infty}\tau$ and $nP(X_1>u_n)\displaystyle\mathop{\longrightarrow}_{n\to\infty}\tau$, so we refer to these levels $u_n$ as {\it normalized levels} for $\{Y_n\} $ and $\{X_n\} $. In this section, in addition to the general assumptions about the model presented in Section \ref{sintro}, we start by assuming that $\{X_n\}$ and $\{U_n\}$ exhibit dependence structures under which variables sufficiently far apart are approximately independent. Concretely, we assume that $\{U_n\} $ satisfies the strong-mixing condition (Rosenblatt \cite{Rosenblat1956} 1956) and $\{X_n\} $ satisfies condition $D(u_n)$ (Leadbetter \cite{Leadbetter1974} 1974) for normalized levels $u_n$.
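The preservation of the standard Fr\'echet marginal can also be observed in simulation. The sketch below (an assumed implementation of Example \ref{ex1} with $\kappa=3$, not code from the paper) compares the empirical distribution function of a simulated path of $\{Y_n\}$ with $F(x)=\exp(-1/x)$ on a few grid points:

```python
import numpy as np

# Monte Carlo check that the marginal of Y_n stays standard Frechet,
# F(x) = exp(-1/x), for the model of Example 1 with kappa = 3.
# Assumed implementation, not code from the paper.
rng = np.random.default_rng(3)
kappa, n = 3, 200_000

a = rng.exponential(1.0, n + kappa) <= 0.5      # events A_i, p ~ 0.3935
x = -1.0 / np.log(rng.random(n + kappa))        # standard Frechet X_i
y = np.empty(n)
last = x[kappa - 1]
for i in range(n):
    j = i + kappa
    u = 1 if (not a[j - 1] and not a[j - 2]) else int(a[j])
    last = x[j] if u == 1 else last
    y[i] = last

grid = np.array([0.25, 0.5, 1.0, 2.0, 5.0])
ecdf = np.array([(y <= t).mean() for t in grid])
max_err = float(np.abs(ecdf - np.exp(-1.0 / grid)).max())
```

Despite the serial dependence introduced by the repeated values, the empirical distribution function of $Y$ stays close to the standard Fr\'echet distribution, in agreement with the computation of $F_{Y_n}$ above.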
\begin{pro}\label{pD} If $\{U_n\} $ is strong-mixing and $\{X_n\} $ satisfies condition $D(u_n)$ then $\{Y_n\} $ also satisfies condition $D(u_n)$. \end{pro} \begin{proof} For any choice of $p+q$ integers, $1\leq i_1<...<i_p<j_1<...<j_q\leq n$ such that $j_1\geq i_p+l$, we have that \begin{eqnarray}\nonumber \left|P\left(\bigcap_{s=1}^p X_{i_s}\leq u_n,\bigcap_{s=1}^q X_{j_s}\leq u_n\right)-P\left(\bigcap_{s=1}^p X_{i_s}\leq u_n\right)P\left(\bigcap_{s=1}^q X_{j_s}\leq u_n\right)\right|\leq \alpha_{n,l}, \end{eqnarray} with $\alpha_{n,l_n}\to 0$, as $n\to\infty$, for some sequence $l_n=o(n)$, and \begin{eqnarray}\nonumber \left|P\left(A\cap B\right)-P\left(A\right)P\left(B\right)\right|\leq g(l), \end{eqnarray} with $g(l)\to 0$, as $l\to\infty$, where $A$ belongs to the $\sigma$-algebra generated by $\{U_i,\,i=1,...,i_p\}$ and $B$ belongs to the $\sigma$-algebra generated by $\{U_i,\,i=j_1,j_1+1,...\}$. Thus, for any choice of $p+q$ integers, $1\leq i_1<...<i_p<j_1<...<j_q\leq n$ such that $j_1\geq i_p+l+\kappa$, we will have \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle \left|P\left(\bigcap_{s=1}^p Y_{i_s}\leq u_n,\bigcap_{s=1}^q Y_{j_s}\leq u_n\right)-P\left(\bigcap_{s=1}^p Y_{i_s}\leq u_n\right)P\left(\bigcap_{s=1}^q Y_{j_s}\leq u_n\right)\right|\\\\ \leq & \displaystyle \sum_{\substack{i_s-\kappa<i^*_s\leq i_s\\ j_s-\kappa<j^*_s\leq j_s}}\left|P\left(\bigcap_{s=1}^p X_{i_s^*}\leq u_n,\bigcap_{s=1}^q X_{j_s^*}\leq u_n\right)P\left(A^*\cap B^*\right)\right.\\ &\hspace{1.8cm}\displaystyle \left.-P\left(\bigcap_{s=1}^p X_{i_s^*}\leq u_n\right)P\left(\bigcap_{s=1}^q X_{j_s^*}\leq u_n\right)P\left(A^*\right)P\left(B^*\right)\right|, \end{array} \end{eqnarray} where $A^*=\bigcap_{s=1}^p \{U_{i_s}=0=...=U_{i^*_s+1},U_{i^*_s}=1\}$ and $B^*=\bigcap_{s=1}^q \{U_{j_s}=0=...=U_{j^*_s+1},U_{j^*_s}=1\}$ and $j^*_1>j_1-\kappa\geq i^*_p+l$. 
Therefore, the last summation above is bounded above by \begin{eqnarray}\nonumber \begin{array}{rl} & \displaystyle \sum_{\substack{i_s-\kappa<i^*_s\leq i_s\\ j_s-\kappa<j^*_s\leq j_s}}\Bigg(\left|P\left(\bigcap_{s=1}^p X_{i_s^*}\leq u_n,\bigcap_{s=1}^q X_{j_s^*}\leq u_n\right)-P\left(\bigcap_{s=1}^p X_{i_s^*}\leq u_n\right)P\left(\bigcap_{s=1}^q X_{j_s^*}\leq u_n\right)\right|\\ &\hspace{1.8cm}\displaystyle + |P\left(A^*\cap B^*\right)- P\left(A^*\right)P\left(B^*\right)|\Bigg)\\\\ \leq& \displaystyle \sum_{\substack{i_s-\kappa<i^*_s\leq i_s\\ j_s-\kappa<j^*_s\leq j_s}}\left(\alpha_{n,l}+g(l)\right), \end{array} \end{eqnarray} which allows us to conclude that $D(u_n)$ holds for $\{Y_n\}$ with $l_n^{(Y)}=l_n+\kappa$. \end{proof} The tendency for clustering of values of $\{Y_n\}$ above $u_n$ depends on the same tendency within $\{X_n\}$ and the propensity of $\{U_n\}$ for consecutive null values. The clustering tendency can be assessed through the extremal index (Leadbetter, \cite{Leadbetter1974} 1974). More precisely, $\{X_n\}$ is said to have extremal index $\theta_X\in (0,1]$ if \begin{eqnarray}\label{thetaDef} \lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\leq n/\tau\right)=e^{-\theta_X\tau}. \end{eqnarray} If $D(u_n)$ holds for $\{X_n\}$, we have \begin{eqnarray}\nonumber \lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\leq u_n\right)=\lim_{n\to\infty}P^{k_n}\left(\bigvee_{i=1}^{[n/k_n]}X_i\leq u_n\right), \end{eqnarray} for any sequence of integers $\{k_n\}$ such that \begin{eqnarray}\label{kn} k_n\to\infty,\,\,k_nl_n/n\to 0 \textrm{ and } k_n\alpha_{n,l_n}\to 0, \textrm{ as } n\to\infty. \end{eqnarray} We can therefore say that \begin{eqnarray}\nonumber \theta_X\tau=\lim_{n\to\infty} k_n P\left(\bigvee_{i=1}^{[n/k_n]}X_i> u_n\right)\,.
\end{eqnarray} Now we compare the local behavior of sequences $\{X_n\}$ and $\{Y_n\}$, i.e., of $X_i$ and $Y_i$ for $i\in\left\{(j-1)\left[\frac{n}{k_n}\right]+1,...,j\left[\frac{n}{k_n}\right]\right\}$, $j=1,...,k_n$, with regard to the oscillations of their values in relation to $u_n$. To this end, we will use local dependency conditions $D^{(s)}(u_n)$. We say that $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, whenever \begin{eqnarray}\nonumber \lim_{n\to\infty} n \sum_{j=s}^{[n/k_n]} P\left(X_1> u_n, X_j\leq u_n<X_{j+1}\right)=0, \end{eqnarray} for some sequence of integers $\{k_n\}$ satisfying (\ref{kn}). Condition $D^{(1)}(u_n)$ translates into \begin{eqnarray}\nonumber \lim_{n\to\infty} n \sum_{j=2}^{[n/k_n]} P\left(X_1> u_n, X_{j}> u_n\right)=0, \end{eqnarray} and is known as condition $D^{'}(u_n)$ (Leadbetter \emph{et al.}, \cite{Leadbetter+1983} 1983), related to a unit extremal index, i.e., the absence of clustering of extreme values. In particular, this is the case of independent variables. Even when $\{X_n\}$ satisfies $D^{'}(u_n)$, this condition is not generally valid for $\{Y_n\}$. Observe that \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle n \sum_{j=2}^{[n/k_n]} P\left(Y_1> u_n, Y_{j}> u_n\right)\\\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=2}^{[n/k_n]}\sum_{j^*=i\vee (j-\kappa+1)}^{j} P\left(X_i> u_n, X_{j^*}> u_n\right)\cdot\\ &\hspace{4cm}\cdot p_{i,...,1,j^*,j^*+1,...,j}(1,0,...,0,1,0,...,0)\,. \end{array} \end{eqnarray} For $i=1$ and $j=\kappa$, we have $j^*=1$ and the corresponding term becomes $nP(X_1>u_n)\to\tau>0$, as $n\to\infty$, which is why, in general, $\{Y_n\}$ does not satisfy $D^{'}(u_n)$ even if $\{X_n\}$ satisfies it. \begin{pro}\label{pD's} The following statements hold: \begin{itemize} \item [(i)] If $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, then $\{X_n\}$ satisfies $D^{(s)}(u_n)$. \item [(ii)] If $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, then $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$.
\item [(iii)] If $\{X_n\}$ satisfies $D^{'}(u_n)$, then $\{Y_n\}$ satisfies $D^{(2)}(u_n)$. \end{itemize} \end{pro} \begin{proof} Consider $r_n=[n/k_n]$. We have that \begin{eqnarray}\label{pD'sExp1} \begin{array}{rl} &\displaystyle n \sum_{j=s}^{r_n} P\left(Y_1> u_n, Y_{j}\leq u_n<Y_{j+1}\right) \\\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=s}^{r_n}P\left(X_i> u_n, Y_{j}\leq u_n<X_{j+1},U_i=1,U_{i+1}=0=...=U_1,U_{j+1}=1\right)\\\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=s}^{r_n}\sum_{j^*=(i+1)\vee (j-\kappa+1)}^{j} P\left(X_i> u_n, X_{j^*}\leq u_n<X_{j+1}\right)\cdot\\ &\hspace{4cm}\cdot p_{i,i+1,...,1,j^*,j^*+1,...,j,j+1}(1,0,...,0,1,0,...,0,1)\,. \end{array} \end{eqnarray} Since $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, with $s\geq 2$, the first summation in (\ref{pD'sExp1}) converges to zero, as $n\to\infty$, and thus all the terms in the last summations also converge to zero. In particular, when $i=1$ and $j^*=j$, we have $n \sum_{j=s}^{r_n} P\left(X_1> u_n, X_{j}\leq u_n<X_{j+1}\right)\to 0$, as $n\to\infty$, which proves (i). On the other hand, writing the first summation in (\ref{pD'sExp1}) with $j$ starting at $s+\kappa-1$, we have \begin{align} &\displaystyle n \sum_{j=s+\kappa-1}^{r_n} P\left(Y_1> u_n, Y_{j}\leq u_n<Y_{j+1}\right) \nonumber\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=s+\kappa-1}^{r_n}\sum_{j^*=j-\kappa+1}^{j} P\left(X_i> u_n, X_{j^*}\leq u_n<X_{j+1}\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{i,i+1,...,1,j^*,j^*+1,...,j,j+1}(1,0,...,0,1,0,...,0,1)\nonumber\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=s+\kappa-1}^{r_n}\sum_{j^*=j-\kappa+1}^{j}\sum_{i^*=j^*}^{j} P\left(X_i> u_n, X_{j^*}\leq u_n,...,X_{i^*}\leq u_n,X_{j+1}>u_n\right)\nonumber\cdot\\ &\hspace{4cm}\cdot p_{i,i+1,...,1,j^*,j^*+1,...,j,j+1}(1,0,...,0,1,0,...,0,1)\label{pD'sExp2}\,, \end{align} where the smallest distance between $i$ and $i^*$ corresponds to the case $i=1$ and $i^*=j^*=s$.
Therefore, if $\{X_n\}$ satisfies $D^{(s)}(u_n)$ for some $s\geq 2$ then each term of (\ref{pD'sExp2}) converges to zero, as $n\to\infty$, and thus $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$, proving (ii). As for (iii), observe that \begin{align} &\displaystyle n \sum_{j=2}^{r_n} P\left(Y_1> u_n, Y_{j}\leq u_n<Y_{j+1}\right) \nonumber\\ =&\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=2}^{r_n}P\left(X_i> u_n, Y_{j}\leq u_n<X_{j+1},U_i=1,U_{i+1}=0=...=U_1,U_{j+1}=1\right)\nonumber\\ \leq &\displaystyle \sum_{i=2-\kappa}^{1} n \sum_{j=2}^{r_n} P\left(X_i> u_n, X_{j+1}>u_n\right) =\sum_{i=2-\kappa}^{1} n \sum_{j=2}^{r_n} P\left(X_1> u_n, X_{j-i+2}>u_n\right)\nonumber\\ \leq &\displaystyle \,\,\kappa\, n \sum_{j=2}^{r_n} P\left(X_1> u_n, X_{j}>u_n\right)\label{pD'sExp3}\,. \end{align} If $\{X_n\}$ satisfies $D^{'}(u_n)$, then (\ref{pD'sExp3}) converges to zero, as $n\to\infty$, and $D^{(2)}(u_n)$ holds for $\{Y_n\}$. \end{proof} Under conditions $D(u_n)$ and $D^{(s)}(u_n)$ with $s\geq 2$, we can also compute the extremal index $\theta_X$ defined in (\ref{thetaDef}) by (Chernick \emph{et al.}, \cite{Chernick+1991} 1991; Corollary 1.3) \begin{eqnarray}\label{thetaChernick} \theta_X =\displaystyle \lim_{n\to\infty}P\left(X_{2}\leq u_n,...,X_{s}\leq u_n|X_1>u_n\right)\,. \end{eqnarray} If $\{X_n\}$ and $\{Y_n\}$ have extremal indices $\theta_X$ and $\theta_Y$, respectively, then $\theta_Y\leq \theta_X$, since $P(\bigvee_{i=1}^{n}X_i\leq n/\tau)\leq P(\bigvee_{i=1}^{n}Y_i\leq n/\tau)$. This matches what is intuitively expected, since the possible repetition of variables $X_n$ leads to larger clusters of values above $u_n$. In the following result, we establish a relationship between $\theta_X$ and $\theta_Y$. \begin{pro}\label{pthetaYX} Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D^{(s)}(u_n)$, $s\geq 2$, for normalized levels $u_n\equiv u_n(\tau)$.
If $\{X_n\}$ has extremal index $\theta_X$ then $\{Y_n\}$ has extremal index $\theta_Y$ given by \begin{eqnarray}\nonumber \theta_Y=\theta_X\,\sum_{j=0}^{\kappa-1} p_{1,2,...,j+1,j+2}(1,0,...,0,1)\,\beta_j, \end{eqnarray} where \begin{eqnarray}\nonumber \beta_j =\lim_{n\to\infty} P(X_{s+j}>u_n|X_1\leq u_n,...,X_{s-1}\leq u_n<X_s)\,. \end{eqnarray} \end{pro} \begin{proof} By Proposition \ref{pD}, $\{Y_n\}$ also satisfies condition $D(u_n)$. Thus we have \begin{eqnarray}\nonumber \lim_{n\to\infty} P\left(\bigvee_{i=1}^{n}Y_i\leq u_n\right)=\exp\left\{-\lim_{n\to\infty} k_n P\left(\bigvee_{i=1}^{[n/k_n]}Y_i> u_n\right)\right\} \end{eqnarray} and \begin{align} &\displaystyle \lim_{n\to\infty} k_n P\left(\bigvee_{i=1}^{[n/k_n]}Y_i> u_n\right) = \displaystyle \lim_{n\to\infty} k_n P\left(Y_1\leq u_n, \bigvee_{i=1}^{[n/k_n]}\{Y_i> u_n\}\right)\nonumber\\ = & \displaystyle \lim_{n\to\infty} k_n P\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\leq u_n<Y_{i+1}\}\right) =\displaystyle \lim_{n\to\infty} k_n P\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\leq u_n<X_{i+1},U_{i+1}=1\}\right)\nonumber\\ =&\displaystyle \lim_{n\to\infty} k_n P\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_{i-j}\leq u_n<X_{i+1},U_{i-j}=1,U_{i-j+1}=0=...=U_{i},U_{i+1}=1\}\right)\nonumber\\ =&\displaystyle \lim_{n\to\infty} k_n P\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_{i}\leq u_n<X_{i+j+1},U_{i}=1,U_{i+1}=0=...=U_{i+j},U_{i+j+1}=1\}\right)\nonumber\\ =&\displaystyle \lim_{n\to\infty} k_n \sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P\left(X_1\leq u_n,...,X_{i}\leq u_n<X_{i+1},X_{i+j+1}>u_n\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{i,i+1,...,i+j,i+j+1}(1,0,...,0,1)\label{pthetasExp1}\\ =&\displaystyle \lim_{n\to\infty} k_n \sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P\left(X_{i-s+2}\leq u_n,...,X_{i}\leq u_n<X_{i+1},X_{i+j+1}>u_n\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{1,2,...,j+1,j+2}(1,0,...,0,1)\nonumber \end{align} since $\{X_n\}$ satisfies condition $D^{(s)}(u_n)$ for some $s\geq 2$. 
The stationarity of $\{X_n\}$ leads to \begin{equation}\nonumber \begin{array}{rl} &\displaystyle \lim_{n\to\infty} k_n \sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P\left(X_{i-s+2}\leq u_n,...,X_{i}\leq u_n<X_{i+1},X_{i+j+1}>u_n\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{1,2,...,j+1,j+2}(1,0,...,0,1)\\ =&\displaystyle \lim_{n\to\infty} k_n \sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P\left(X_{1}\leq u_n,...,X_{s-1}\leq u_n<X_{s},X_{s+j}>u_n\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{1,2,...,j+1,j+2}(1,0,...,0,1)\\ =&\displaystyle \lim_{n\to\infty} \sum_{j=0}^{\kappa-1}n\,P\left(X_{1}\leq u_n,...,X_{s-1}\leq u_n<X_{s},X_{s+j}>u_n\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{1,2,...,j+1,j+2}(1,0,...,0,1)\\ =&\displaystyle \lim_{n\to\infty} \sum_{j=0}^{\kappa-1}n\,P\left(X_{1}\leq u_n,...,X_{s-1}\leq u_n<X_{s}\right)P\left(X_{s+j}>u_n|X_{1}\leq u_n,...,X_{s-1}\leq u_n<X_{s}\right)\cdot\nonumber\\ &\hspace{4cm}\cdot p_{1,2,...,j+1,j+2}(1,0,...,0,1)\\ =& \displaystyle \tau\, \theta_X \sum_{j=0}^{\kappa-1} p_{1,2,...,j+1,j+2}(1,0,...,0,1)\,\beta_j, \end{array} \end{equation} where the last step follows from (\ref{thetaChernick}). \end{proof} Observe that $\sum_{j=0}^{\kappa-1} p_{1,2,...,j+1,j+2}(1,0,...,0,1)=p_n(1)=P(U_n=1)$ and thus $\theta_Y\leq \theta_Xp_n(1)\leq \theta_X$, as expected. \begin{pro} Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D^{'}(u_n)$, for normalized levels $u_n\equiv u_n(\tau)$. Then $\{Y_n\}$ has extremal index $\theta_Y$ given by $\theta_Y=p_{1,2}(1,1)$.
\end{pro} \begin{proof} By condition $D^{'}(u_n)$, the only term to consider in (\ref{pthetasExp1}) corresponds to $j=0$, and we obtain \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle\lim_{n\to\infty} k_n\,P\left(\bigvee_{i=1}^{[n/k_n]}Y_i> u_n\right)\\ =&\displaystyle\lim_{n\to\infty} k_n\,\sum_{i=1}^{[n/k_n]}P\left(X_1\leq u_n,...,X_{s-1}\leq u_n<X_s\right)p_{1,2}(1,1)\\ =&\displaystyle\lim_{n\to\infty} n\,P(X_s>u_n)\,p_{1,2}(1,1)=\tau \,p_{1,2}(1,1)\,. \end{array} \end{eqnarray} \end{proof} Observe that we can obtain the above result by applying Proposition \ref{pD's} (iii) and calculating directly $\tau\, \theta_Y=\lim_{n\to\infty}n\, P(Y_1\leq u_n<Y_2)$. More precisely, we have that $\{Y_n\}$ satisfies $D^{(2)}(u_n)$ and by applying (\ref{thetaChernick}), we obtain \begin{eqnarray}\nonumber \begin{array}{rl} \tau\, \theta_Y=&\displaystyle\lim_{n\to\infty} n\, P(Y_1\leq u_n<Y_2)\\ =&\displaystyle\lim_{n\to\infty} n\, P(Y_1\leq u_n<X_2,U_2=1)\\ =&\displaystyle\lim_{n\to\infty} nP\left(\bigcup_{j=0}^{\kappa-1}X_{1-j}\leq u_n<X_2,U_{1-j}=1,U_{1-j+1}=0=...=U_{1},U_2=1\right)\\ =&\displaystyle\lim_{n\to\infty} n\sum_{j=0}^{\kappa-1}P\left(X_{2-\kappa}\leq u_n,...,X_{1-j}\leq u_n<X_{2-j},X_2>u_n\right)\cdot\\ &\hspace{4cm}\cdot p_{1-j,1-j+1,...,1,2}(1,0,...,0,1)\\ =&\displaystyle\lim_{n\to\infty} n\,P(X_1\leq u_n<X_2)\,p_{1,2}(1,1)\\ =&\displaystyle\lim_{n\to\infty} n\,P(X_2>u_n)\,p_{1,2}(1,1)=\tau \,p_{1,2}(1,1)\,. \end{array} \end{eqnarray} The same result can also be seen as a particular case of Proposition \ref{pthetaYX} where, if we take $s=1$, we have $\beta_j=0$, for $j\not=0$, and we obtain $\theta_Y=\theta_X\beta_0 p_{1,2}(1,1)=p_{1,2}(1,1)$, since $\beta_0=1$ and under $D^{'}(u_n)$ we have $\theta_X=1$.
\\ \begin{ex}\label{exARMAXtheta} Consider $\{Y_n\}$ such that $\{X_n\}$ is an ARMAX sequence, i.e., $X_n=\phi X_{n-1}\vee (1-\phi)Z_n$, $n\geq 1$, where $\{Z_n\}$ is an independent sequence of random variables with standard Fr\'echet marginal distribution and $\{X_n\}$ and $\{Z_n\}$ are independent. We have that $\{X_n\}$ also has a standard Fr\'echet marginal distribution, satisfies condition $D^{(2)} (u_n)$ and has extremal index $\theta_X=1-\phi$ (see e.g.~Ferreira and Ferreira \cite{Ferreira+2012} 2012 and references therein). Observe that, for normalized levels $u_n\equiv n/\tau$, $\tau>0$, we have \begin{eqnarray}\nonumber \begin{array}{rl} \beta_1=&{\displaystyle \lim_{n\to\infty}}P(X_3>u_n|X_1\leq u_n<X_2)\\ =&{\displaystyle \lim_{n\to\infty}}\frac{P(X_1\leq u_n)-P(X_1\leq u_n,X_2\leq u_n)-P(X_1\leq u_n,X_3\leq u_n)+P(X_1\leq u_n,X_2\leq u_n,X_3\leq u_n)}{P(X_1\leq u_n)-P(X_1\leq u_n,X_2\leq u_n)}\\ =&{\displaystyle \lim_{n\to\infty}}\frac{1-\frac{\tau}{n}-(1-\frac{\tau}{n}(2-\phi))-(1-\frac{\tau}{n}(2-\phi^2))+1-\frac{\tau}{n}(3-2\phi)}{1-\frac{\tau}{n}-(1-\frac{\tau}{n}(2-\phi))}\\ =&\phi\,. \end{array} \end{eqnarray} Analogous calculations lead to $\beta_2=\phi^2$. Considering $\kappa=3$, we have $\theta_Y=(1-\phi)(p_{1,2}(1,1)+\phi p_{1,2,3}(1,0,1)+\phi^2p_{1,2,3,4}(1,0,0,1))$. \end{ex} Since the observed sequence is $\{Y_n\}$, results that allow us to retrieve information about the extreme behavior of the initial sequence $\{X_n\}$, subject to the failures determined by $\{U_n\}$, may be of interest.
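Example \ref{exARMAXtheta} can also be checked empirically: simulating an ARMAX path and applying the runs characterisation (\ref{thetaChernick}) with $s=2$ should reproduce $\theta_X=1-\phi$. The sketch below is illustrative only; the sample size, seed and the $98\%$ empirical quantile used as threshold are our choices.

```python
import math
import random

phi = 0.5
n = 200_000
rng = random.Random(0)

# Standard Frechet innovations: F(z) = exp(-1/z), so Z = -1/log(V), V uniform.
z = [-1.0 / math.log(rng.random()) for _ in range(n)]

# ARMAX recursion X_n = phi X_{n-1} v (1 - phi) Z_n; X_0 = Z_0 is stationary,
# since the stationary marginal is itself standard Frechet.
x = [z[0]]
for i in range(1, n):
    x.append(max(phi * x[-1], (1.0 - phi) * z[i]))

# Runs estimator suggested by (thetaChernick) with s = 2:
# theta_X ~ P(X_2 <= u | X_1 > u) at a high threshold u.
u = sorted(x)[int(0.98 * n)]
exceed = [i for i in range(n - 1) if x[i] > u]
theta_hat = sum(x[i + 1] <= u for i in exceed) / len(exceed)
# theta_hat should come out close to theta_X = 1 - phi = 0.5
```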
If we assume that $\{Y_n\}$ satisfies $D^{(s)}(u_n)$ then $\{X_n\}$ also satisfies $D^{(s)}(u_n)$ by Proposition \ref{pD's} (i), which yields \begin{eqnarray}\nonumber \begin{array}{rl} \tau\,\theta_X=&\displaystyle\lim_{n\to\infty} n\,P(X_1\leq u_n,...,X_{s-1}\leq u_n<X_s)\\ =&\displaystyle\lim_{n\to\infty} n\,P(Y_1\leq u_n,...,Y_{s-1}\leq u_n<Y_s|U_1=...=U_s=1)\\ =&\displaystyle\lim_{n\to\infty} n\,P(Y_1\leq u_n,...,Y_{s-1}\leq u_n<Y_s|Y_0\not=Y_1\not=...\not=Y_s). \end{array} \end{eqnarray} We can therefore write \begin{eqnarray}\nonumber \begin{array}{rl} \theta_X=&\displaystyle\lim_{n\to\infty} \frac{P(Y_1\leq u_n,...,Y_{s-1}\leq u_n<Y_s|Y_0\not=Y_1\not=...\not=Y_s)}{P(Y_1>u_n)}. \end{array} \end{eqnarray} \section{Tail dependence}\label{stdc} Now we will analyse the effect of this failure mechanism on the dependence between two variables, $Y_n$ and $Y_{n+m}$, $m\geq 1$. More precisely, we are going to evaluate the lag-$m$ tail dependence coefficient \begin{eqnarray}\nonumber \lambda(Y_{n+m}|Y_n)=\lim_{x\to\infty}P(Y_{n+m}>x|Y_n>x), \end{eqnarray} which incorporates the tail dependence between $X_n$ and $X_{n+j}$, with $j$ regulated by the maximum number of failures $\kappa-1$ and by the relation between $m$ and $\kappa$. In particular, independent variables have null tail dependence coefficients. If $m=1$ we obtain the tail dependence coefficient in Joe (\cite{Joe1997} 1997). For simplicity, we first present the case $m = 1$ and then we extend the result to any value $m$.
\begin{pro}\label{plambdalag1} Sequence $\{Y_n\}$ has tail dependence coefficient \begin{eqnarray}\nonumber \lambda(Y_{n+1}|Y_n)=p_n(0)+\sum_{i=0}^{\kappa-1}\lambda(X_{n+1+i}|X_n)\,p_{1,2,...,i+1,i+2}(1,0,...,0,1), \end{eqnarray} provided all coefficients $\lambda(X_{n+1+i}|X_n)$ exist. \end{pro} \begin{proof} We have that \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle\lim_{x\to\infty}\frac{P(Y_n>x,Y_{n+1}>x)}{P(Y_{n}>x)}\\ =&\displaystyle\lim_{x\to\infty}\frac{P(Y_n>x,U_{n+1}=0)}{P(Y_{n}>x)}+\lim_{x\to\infty}\frac{P(Y_n>x,X_{n+1}>x,U_{n+1}=1)}{P(Y_{n}>x)}\\ =&\displaystyle\lim_{x\to\infty}\frac{\sum_{i=0}^{\kappa-2}P(X_{n-i}>x)\,p_{n-i,n-i+1,...,n+1}(1,0,...,0)}{P(Y_{n}>x)}\\ &+\displaystyle\lim_{x\to\infty}\frac{\sum_{i=0}^{\kappa-1}P(X_{n-i}>x,X_{n+1}>x)\, p_{n-i,n-i+1,...,n,n+1}(1,0,...,0,1)}{P(Y_{n}>x)}\\ =&\displaystyle \sum_{i=0}^{\kappa-2}p_{1,2,...,i+2}(1,0,...,0)+\sum_{i=0}^{\kappa-1}\lambda(X_{n+1+i}|X_n)\,p_{1,2,...,i+1,i+2}(1,0,...,0,1)\,. \end{array} \end{eqnarray} \end{proof} \begin{pro}\label{plambdalags} Sequence $\{Y_n\}$ has lag-$m$ tail dependence coefficient, with $m\geq 1$, \begin{eqnarray}\label{TDClag-m} \begin{array}{rl} \displaystyle \lambda(Y_{n+m}|Y_n)=&p_{1,...,m}(0,...,0)\,\mathbf{1}_{\{m\leq \kappa-1\}}+\displaystyle \sum_{i=1\vee (m-\kappa+1)}^{m}\sum_{i^*=0}^{\kappa-1}\lambda(X_{n+i+i^*}|X_n)\cdot\\ &\hspace{4cm}\displaystyle\cdot p_{1,2,...,i^*+1,i^*+1+i,i^*+2+i,...,i^*+1+m}(1,0,...,0,1,0,...,0), \end{array} \end{eqnarray} provided all coefficients $\lambda(X_{n+i+i^*}|X_n)$ exist. \end{pro} \begin{proof} Observe that \begin{eqnarray}\nonumber \begin{array}{rl} &\displaystyle P(Y_n>x,Y_{n+m}>x)\\ =&\displaystyle P(Y_n>x,U_{n+1}=0=...=U_{n+m})\mathbf{1}_{\{m\leq \kappa-1\}}\\ &+\displaystyle\sum_{i=1\vee (m-\kappa+1)}^{m} P(Y_n>x,X_{n+i}>x,U_{n+i}=1,U_{n+i+1}=0=...=U_{n+m})\\ =&\displaystyle \sum_{i=0}^{\kappa-1-m}P(X_{n-i}>x)\,p_{n-i,n-i+1,...,n+m}(1,0,...,0)\mathbf{1}_{\{m\leq \kappa-1\}}\\ &+\displaystyle \sum_{i=1\vee
(m-\kappa+1)}^{m}\sum_{i^*=0}^{\kappa-1} P(X_{n-i^*}>x,X_{n+i}>x)\, p_{n-i^*,n-i^*+1,...,n,n+i,n+i+1,...,n+m}(1,0,...,0,1,0,...,0)\, \end{array} \end{eqnarray} and $\displaystyle \sum_{i=0}^{\kappa-1-m}p_{1,2,...,m+i+1}(1,0,...,0)=p_{1,...,m}(0,...,0).$ \end{proof} Taking $m=1$ in (\ref{TDClag-m}), we immediately obtain the result of Proposition \ref{plambdalag1}. If $\{X_n\}$ is lag-$m^*$ tail independent for all integers $m^*\geq 1\vee(m-\kappa+1)$, we have $\lambda(X_{n+i+i^*}|X_n)=0$ in the second term of (\ref{TDClag-m}) and thus $\lambda(Y_{n+m}|Y_n)=p_{1,...,m}(0,...,0)\,\mathbf{1}_{\{m\leq \kappa-1\}}$ and $\{Y_n\}$ is lag-$m$ tail independent for all integers $m\geq \kappa$. \begin{ex}\label{exARMAXTDClag-m} Consider again $\{Y_n\}$ based on the ARMAX sequence $\{X_n\}$ as in Example \ref{exARMAXtheta}. We have that $\{X_n\}$ has lag-$m$ tail dependence coefficient $\lambda(X_{n+m}|X_n)=\phi^m$ (Ferreira and Ferreira \cite{Ferreira+2012} 2012) and thus \begin{eqnarray}\nonumber \begin{array}{rl} \displaystyle \lambda(Y_{n+m}|Y_n)=&p_{1,...,m}(0,...,0)\,\mathbf{1}_{\{m\leq \kappa-1\}}\\ &+\displaystyle \sum_{i=1\vee (m-\kappa+1)}^{m}\sum_{i^*=0}^{\kappa-1}\phi ^{i+i^*} p_{1,2,...,i^*+1,i^*+1+i,i^*+2+i,...,i^*+1+m}(1,0,...,0,1,0,...,0). \end{array} \end{eqnarray} \end{ex} \bigskip
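To attach numbers to the lag-one case of this example, a concrete failure process is needed. The sketch below assumes, purely for illustration, a stationary Markov chain for $\{U_n\}$ with $\kappa=3$: after a success, $U=1$ with probability $p$, and a success is forced after $\kappa-1=2$ consecutive failures. The pattern probabilities then have closed forms, and Proposition \ref{plambdalag1} with $\lambda(X_{n+j}|X_n)=\phi^{j}$ gives $\lambda(Y_{n+1}|Y_n)$.

```python
# Assumed illustrative failure chain (not from the paper): state = number of
# consecutive zeros so far (0, 1 or 2 for kappa = 3); from states 0 and 1,
# U = 1 with probability p; from state 2, U = 1 is forced.
phi, p = 0.5, 0.8

# Stationary distribution: pi_k proportional to (1 - p)**k, k = 0, 1, 2.
pi0 = 1.0 / (1.0 + (1.0 - p) + (1.0 - p) ** 2)

# Joint pattern probabilities under this chain.
p_11 = pi0 * p                        # p_{1,2}(1,1)
p_101 = pi0 * (1.0 - p) * p           # p_{1,2,3}(1,0,1)
p_1001 = pi0 * (1.0 - p) ** 2 * 1.0   # p_{1,2,3,4}(1,0,0,1): forced success
p_fail = pi0 * ((1.0 - p) + (1.0 - p) ** 2)  # p_n(0) = P(U_n = 0)

# Proposition plambdalag1 with lambda(X_{n+1+i}|X_n) = phi**(1+i):
lam_y = p_fail + phi * p_11 + phi**2 * p_101 + phi**3 * p_1001
```

For $\phi=0.5$ and $p=0.8$ this yields $\lambda(Y_{n+1}|Y_n)=137/248\approx 0.55$, with the repetition term $p_n(0)\approx 0.19$ contributing a sizable share of the lag-one tail dependence.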
\section{Introduction} Networks of nonlinear dynamical systems appear in many fields, especially in the natural sciences where the nonlinearity is often a key feature in generating the observed behavior
\cite{Golubitsky-Stewart_06,Jiang-Lai_19}. The vast majority of analysis of such networks is done in a finite dimensional state space setting using coupled systems of ordinary differential equations. In \cite{Gray-Ebrahimi-Fard_SCL21}, however, the authors describe an alternative approach which uses only input-output models at each node of the network in the form of a locally convergent Chen-Fliess series \cite{Fliess_81,Fliess_83}. These weighted infinite sums of iterated integrals provide a convenient algebraic framework for describing the network's behavior without relying on any particular choice of coordinates as in the state space setting. Series coefficients for each node can be estimated via system identification techniques \cite{Gray-etal_Auto20}. Computational tools were developed in \cite{Gray-Ebrahimi-Fard_SCL21} to determine, for example, how an input injected at one node affects the output observed at another node. Nevertheless, there are still a number of open questions regarding the basic properties of such networks. The focus here will be on so called {\em additive} networks, where the outputs of the nodes are simply added together and injected into other nodes, including self-loops. Other classes of aggregation functions such as the multiplication of node outputs will not be addressed here. This paper has two goals. The first goal is to address the open problem stated in \cite{Gray-Ebrahimi-Fard_SCL21} regarding whether an additive network of locally convergent Chen-Fliess series always yields mappings between nodes which have locally convergent Chen-Fliess series representations. This hypothesis will be proved to be true and is independent of the network's topology.
The approach taken is to identify for a given network an associated {\em maximal network} whose growth bounds on the coefficients of the generating series between nodes upper bound all the growth bounds of the original network and are much easier to determine using conventional methods as presented in \cite{Sussmann_83}. The particular growth bound derived turns out to be exactly equivalent to one discovered for a class of unity feedback systems described in \cite{Thitsa-Gray_12}. The second goal is to provide sufficient conditions under which the input-output map between a pair of nodes has well defined relative degree as defined by its generating series \cite{Gray-etal_AUTO14,Gray-Venkatesh_SCL19}. A simple counterexample will be given first to show that this property can fail to hold in certain situations. The proofs of the sufficient conditions rely on identifying certain properties first described in \cite{Gray-Venkatesh_SCL19} in relation to a subgraph connecting a given input node and output node. It is also shown, however, that this relative degree property is {\em generic} in a certain sense. Namely, if the generating series for every node has relative degree and the connection strengths between the nodes are random, then every node pair has a generating series with well defined relative degree with probability one. An obvious application for this result is in the context of feedback linearization for networks \cite{Menara-etal_CDC20}; however, that application will not be pursued here. The paper is organized as follows. To keep the presentation as self-contained as possible, the required preliminaries are briefly summarized in Section~\ref{sec:preliminaries}. The question regarding the convergence of Chen-Fliess series for mappings between nodes is addressed in Section~\ref{sec:local-convergence}. The subsequent section treats the property of relative degree. The paper's conclusions are summarized in the final section.
\section{Preliminaries} \label{sec:preliminaries} An {\em alphabet} $X=\{ x_0,x_1,$ $\ldots,x_m\}$ is any nonempty and finite set of noncommuting symbols referred to as {\em letters}. A {\em word} $\eta=x_{i_1}\cdots x_{i_k}$ is a finite sequence of letters from $X$. The number of letters in a word $\eta$, written as $\abs{\eta}$, is called its {\em length}. The empty word, $\emptyset$, is taken to have length zero. The collection of all words having length $k$ is denoted by $X^k$. Define $X^\ast=\bigcup_{k\geq 0} X^k$, which is a monoid under the concatenation (Cauchy) product. Any mapping $c:X^\ast\rightarrow \re^\ell$ is called a {\em formal power series}. Often $c$ is written as the formal sum $c=\sum_{\eta\in X^\ast}\langle c,\eta\rangle\eta$, where the {\em coefficient} $\langle c,\eta\rangle\in\re^\ell$ is the image of $\eta\in X^\ast$ under $c$. The {\em support} of $c$, ${\rm supp}(c)$, is the set of all words having nonzero coefficients. A series $c$ is {\em proper} when $\emptyset\not\in{\rm supp}(c)$. The set of all noncommutative formal power series over the alphabet $X$ is denoted by $\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$. The subset of series with finite support, i.e., polynomials, is represented by $\mbox{$\re^{\ell}\langle X \rangle$}$. For any $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$, the scalar product is $\langle c,d\rangle:=\sum_{\eta\in X^\ast} \langle c,\eta\rangle\langle d,\eta\rangle$, provided the sum is finite. 
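These definitions translate directly into a small computational model. The encoding below (words as tuples of letter indices, series of finite support as dictionaries mapping words to coefficients) is our own convention, not notation from the cited references.

```python
from collections import defaultdict

def cauchy(c, d):
    """Concatenation (Cauchy) product of two polynomials: the coefficient
    of eta in cd is the sum of (c, xi)(d, nu) over all splittings eta = xi nu."""
    out = defaultdict(float)
    for xi, a in c.items():
        for nu, b in d.items():
            out[xi + nu] += a * b
    return dict(out)

def scalar(c, d):
    """Scalar product <c, d> = sum over eta of (c, eta)(d, eta)."""
    return sum(a * d.get(eta, 0.0) for eta, a in c.items())

# Words over X = {x_0, x_1} as tuples of letter indices; () is the empty word.
c = {(): 1.0, (1,): 2.0}      # c = emptyset + 2 x_1
d = {(1,): 1.0, (0, 1): 3.0}  # d = x_1 + 3 x_0 x_1
cd = cauchy(c, d)             # = x_1 + 3 x_0 x_1 + 2 x_1 x_1 + 6 x_1 x_0 x_1
```

Here ${\rm supp}(cd)$ has four words, and $\langle c,d\rangle=2$ from the overlap on the word $x_1$.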
The set $\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is an associative $\re$-algebra under the concatenation product and an associative and commutative $\re$-algebra under the {\em shuffle product}, that is, the bilinear product uniquely specified by the shuffle product of two words $x_i\eta,x_j\xi\in X^\ast$: \begin{displaymath} (x_i\eta){\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}(x_j\xi)=x_i(\eta{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}(x_j\xi))+x_j((x_i\eta){\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} \xi), \end{displaymath} where $x_i,x_j\in X$ and with $\eta{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}\emptyset=\emptyset{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}\eta=\eta$ \cite{Fliess_81}. \subsection{Chen-Fliess series} Given any $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ one can associate a causal $m$-input, $\ell$-output operator, $F_c$, in the following manner. Let $\mathfrak{p}\ge 1$ and $t_0 < t_1$ be given. For a Lebesgue measurable function $u: [t_0,t_1] \rightarrow\re^m$, define $\norm{u}_{\mathfrak{p}}=\max\{\norm{u_i}_{\mathfrak{p}}: \ 1\le i\le m\}$, where $\norm{u_i}_{\mathfrak{p}}$ is the usual $L_{\mathfrak{p}}$-norm for a measurable real-valued function, $u_i$, defined on $[t_0,t_1]$. Let $L^m_{\mathfrak{p}}[t_0,t_1]$ denote the set of all measurable functions defined on $[t_0,t_1]$ having a finite $\norm{\cdot}_{\mathfrak{p}}$ norm and $B_{\mathfrak{p}}^m(R)[t_0,t_1]:=\{u\in L_{\mathfrak{p}}^m[t_0,t_1]:\norm{u}_{\mathfrak{p}}\leq R\}$. Assume $C[t_0,t_1]$ is the subset of continuous functions in $L_{1}^m[t_0,t_1]$. 
Define inductively for each word $\eta=x_i\bar{\eta}\in X^{\ast}$ the map $E_\eta: L_1^m[t_0, t_1]\rightarrow C[t_0, t_1]$ by setting $E_\emptyset[u]=1$ and letting \[E_{x_i\bar{\eta}}[u](t,t_0) = \int_{t_0}^tu_{i}(\tau)E_{\bar{\eta}}[u](\tau,t_0)\,d\tau, \] where $x_i\in X$, $\bar{\eta}\in X^{\ast}$, and $u_0=1$. The {\em Chen--Fliess series} corresponding to $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is \begin{displaymath} y(t)=F_c[u](t) = \sum_{\eta\in X^{\ast}} \langle c,\eta\rangle \,E_\eta[u](t,t_0) \end{displaymath} \hspace*{-0.06in}\cite{Fliess_81}. If there exist real numbers $K,M>0$ such that \begin{displaymath} \abs{\langle c,\eta\rangle}\le K M^{|\eta|}|\eta|!,\;\; \forall\eta\in X^{\ast}, \end{displaymath} then $F_c$ constitutes a well defined mapping from $B_{\mathfrak p}^m(R)[t_0,$ $t_0+T]$ into $B_{\mathfrak q}^{\ell}(S)[t_0, \, t_0+T]$ for sufficiently small $R,T >0$ and some $S>0$, where the numbers $\mathfrak{p},\mathfrak{q}\in[1,\infty]$ are conjugate exponents, i.e., $1/\mathfrak{p}+1/\mathfrak{q}=1$ \cite{Gray-Wang_02}. (Here, $\abs{z}:=\max_i \abs{z_i}$ when $z\in\re^\ell$.) The set of all such {\em locally convergent} series is denoted by $\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$, and $F_c$ is referred to as a {\em Fliess operator}. \subsection{System interconnections} Given Fliess operators $F_c$ and $F_d$, where $c,d\in\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$, the parallel and product connections satisfy $F_c+F_d=F_{c+d}$ and $F_cF_d=F_{c{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} d}$, respectively \cite{Fliess_81}. 
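The product connection $F_cF_d$ corresponds to the shuffle product of the generating series, and the defining recursion for the shuffle of two words is straightforward to implement. The following sketch (illustrative code, with words encoded as tuples of letter indices) computes the shuffle of words and its bilinear extension to polynomial series.

```python
def shuffle_words(u, v):
    # Recursion: (x_i u') sh (x_j v') = x_i (u' sh (x_j v')) + x_j ((x_i u') sh v')
    if not u or not v:
        return {u + v: 1}
    out = {}
    for w, k in shuffle_words(u[1:], v).items():
        key = (u[0],) + w
        out[key] = out.get(key, 0) + k
    for w, k in shuffle_words(u, v[1:]).items():
        key = (v[0],) + w
        out[key] = out.get(key, 0) + k
    return out

def shuffle_series(c, d):
    # Bilinear extension to polynomial series given as {word: coefficient} dicts
    out = {}
    for u, cu in c.items():
        for v, dv in d.items():
            for w, k in shuffle_words(u, v).items():
                out[w] = out.get(w, 0) + cu * dv * k
    return out
```

For example, the shuffle of $x_1$ with itself is $2x_1x_1$, mirroring the integration-by-parts identity $E_{x_1}[u]^2=2E_{x_1x_1}[u]$ behind the product connection formula.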
When Fliess operators $F_c$ and $F_d$ with $c\in\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$ and $d\in\mbox{$\re^{m}_{LC}\langle\langle X \rangle\rangle$}$ are interconnected in a cascade fashion, the composite system $F_c\circ F_d$ has the Fliess operator representation $F_{c\circ d}$, where the {\em composition product} of $c$ and $d$ is given by \begin{displaymath} c\circ d=\sum_{\eta\in X^\ast} \langle c,\eta\rangle\,\psi_d(\eta)(\mathbf{1}) \end{displaymath}% \hspace*{-0.07in}\cite{Ferfera_80}. Here $\mbf{1}$ denotes the monomial $1\emptyset$, and $\psi_d$ is the continuous (in the ultrametric sense) algebra homomorphism from $\mbox{$\re\langle\langle X \rangle\rangle$}$ to the vector space endomorphisms on $\mbox{$\re\langle\langle X \rangle\rangle$}$, ${\rm End}(\allseries)$, uniquely specified by $\psi_d(x_i\eta)=\psi_d(x_i)\circ \psi_d(\eta)$ with $ \psi_d(x_i)(e)=x_0(d_i{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} e), $ $i=0,1,\ldots,m$ for any $e\in\mbox{$\re\langle\langle X \rangle\rangle$}$, and where $d_i$ is the $i$-th component series of $d$ ($d_0:=\mbf{1}$). By definition, $\psi_d(\emptyset)$ is the identity map on $\mbox{$\re\langle\langle X \rangle\rangle$}$. \subsection{Relative degree of a generating series} Let $X=\{x_0,x_1\}$. Following \cite{Gray-etal_AUTO14}, a series $c\in\mbox{$\re\langle\langle X \rangle\rangle$}$ has relative degree $r$ if and only if it has the decomposition \begin{displaymath} c=c_N+Kx_0^{r-1}x_1+x_0^{r-1}e \end{displaymath} for some $K\neq 0$ and proper $e\in\mbox{$\re\langle\langle X \rangle\rangle$}$ with $x_1\not\in{\rm supp}(e)$. This definition of relative degree is consistent with the classical definition whenever $y=F_c[u]$ is realizable \cite{Gray-etal_AUTO14,Gray-Ebrahimi-Fard_SIAM17}. The following results will be of central importance in the work that follows. 
\begin{theorem} \cite{Gray-Venkatesh_SCL19} \label{th:r-parallel-sum-connection} If $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$ have distinct relative degrees $r_c$ and $r_d$, respectively, then $c+d$ has relative degree $\min(r_c,r_d)$. On the other hand, if $r_c = r_d =: r$, then $c+d$ has relative degree $r$ if and only if $\langle c,x_{0}^{r-1}x_{1}\rangle + \langle d,x_{0}^{r-1}x_{1}\rangle \neq 0$. \end{theorem} \begin{corollary} \label{co:relative-degree-multi-sum-distinct} If $c_1,c_2,\ldots,c_m$ have relative degrees $r_1,r_2,\ldots,r_m$, respectively, with $r_i\neq r_j$ when $i\neq j$, then the relative degree of $c_1+c_2+\cdots+c_m$ is $\min_i(r_i)$. \end{corollary} \begin{corollary} \label{co:relative-degree-multi-sum-not-distinct} Suppose $c_1,c_2,\ldots,c_m$ have relative degrees $r_1,r_2,\ldots,r_m$, respectively. Let $s_j$ denote the multiplicity of relative degree $r_j$. If for each $s_j>1$ the series $c_{k_1},c_{k_2},\ldots,c_{k_{s_j}}$ having relative degree $r_j$ satisfy \begin{displaymath} \langle c_{k_1},x_0^{r_j-1}x_1\rangle+\langle c_{k_2},x_0^{r_j-1}x_1\rangle+\dots+\langle c_{k_{s_j}},x_0^{r_j-1}x_1\rangle\neq 0, \end{displaymath} then the relative degree of $c_1+c_2+\cdots+c_m$ is $\min_i(r_i)$. \end{corollary} \begin{theorem} \cite{Gray-Venkatesh_SCL19} \label{th:relative-degree-casecade} If $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$ have relative degrees $r_c$ and $r_d$, respectively, then $c\circ d$ has relative degree $r_{c\circ d}=r_c+r_d$. \end{theorem} \subsection{Formal realizations and representations} It is shown in \cite{Kawski-Sussmann_97} that a given Chen-Fliess series $y=F_c[u]$ can be written in terms of a state $z$ evolving on a formal Lie group $\mathcal{G}(X)$ with Lie algebra $\widehat{\mathcal L}(X)$ and output map $y=\langle c,z\rangle$. This notion of a {\em universal control system} was generalized in \cite{Gray-Ebrahimi-Fard_SCL21} as follows to describe networks of Chen-Fliess series.
\begin{definition} Let $V_i$ be a vector field on $\mathcal{G}^n(X):=\mathcal{G}(X)\times\mathcal{G}(X)\times\cdots\times \mathcal{G}(X)$, $i=0,1,\ldots,m$ with \begin{align*} V_i&:{\mathcal G}^n(X)\rightarrow T_z{\mathcal G}^n(X) \\ &z=(z_1,\ldots,z_n)\mapsto V_i(z)=(V_{i1}(z)z_1,\ldots,V_{in}(z)z_n), \end{align*} where $V_{ij}(z(t))\in \widehat{\mathcal L}(X)$. The $j$-th component of the corresponding state equation on ${\mathcal G}^n(X)$ is \begin{displaymath} \dot{z}_j=\sum_{i=0}^m V_{ij}(z)z_j u_{ij}, \;\; z_j(0)=z_{j0}. \end{displaymath} Given $\hat{c}_{k}\in\mbox{$\re_{LC}^{\otimes n}\langle\langle X \rangle\rangle$}$, $k=1,2,\ldots,\ell$, the $k$-th output equation is defined to be \begin{displaymath} y_{k}=\hat{c}_{k}(z). \end{displaymath} Collectively, $(V,z_0,\hat{c})$ is a {\em formal realization} on ${\mathcal G}^n(X)$ of the formal input-output map $u\mapsto y$. \end{definition} Analogous to the standard finite dimensional theory \cite{Isidori_95,Nijmeijer-vanderSchaft_90}, a series $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is said to have a {\em formal representation} when there exists a formal realization with the property that every coefficient of $c$ can be written in terms of iterated Lie derivatives of the vector fields acting on the output map and evaluated at $z_0$, i.e., $\langle c,x_{i_1}\cdots x_{i_k}\rangle=L_{V_{i_k}}\cdots L_{V_{i_1}}\hat{c}(z_0)$. \section{Additive Networks of Chen-Fliess Series: Local Convergence} \label{sec:local-convergence} In this section it is shown that every network of additively interconnected locally convergent Fliess operators has the property that the input-output maps between any two nodes can be represented by a locally convergent Fliess operator. The first definition describes the specific class of networks under consideration.
\begin{definition} A set of $m$ single-input, single-output Chen-Fliess series mapping $u_i$ to $y_i$ with generating series $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$, where $X_i=\{x_0,x_i\}$ is said to be an {\em additively interconnected network} ${\mathcal N}_m$ with weighting matrix $W\in\re^{m\times m}$ if $u_i=v_i+\sum_{j=1}^mW_{ij}y_j$, $i=1,2,\ldots,m$. \end{definition} A network ${\mathcal N}_m$ can therefore be viewed as a directed graph connecting $m$ nodes, where the $i$-th node corresponds to a Chen-Fliess series with generating series $c_i$, $i=1,2,\ldots,m$. Henceforth, it will be assumed that the connection weights are normalized so that $W_{ij}\in[0,1]$, $i,j=1,2,\ldots,m$. The following theorem follows directly from Theorem 5.1 in \cite{Gray-Ebrahimi-Fard_SCL21}. \begin{theorem} \label{th:additive-interconnections} The input-output map $v_i\mapsto y_j$ in any additively interconnected network ${\mathcal N}_m$ has generating series $d_{ji}\in\re\langle\langle X_i\rangle\rangle$ which can be computed from a formal representation in terms of the vector fields \begin{align*} V_0(z)&= \left[ \begin{tabular}{p{0.5cm}} \hspace*{-0.1cm}$x_0z_1$ \\ \hspace*{-0.1cm}$x_0z_2$ \\ \hspace*{0.2cm}\vdots \\ \hspace*{-0.1cm}$x_0z_m$ \end{tabular} \right]+{\rm diag}(x_1z_1,\ldots,x_m z_m)W \left[ \begin{tabular}{p{1cm}} \hspace*{-0.02cm}$\langle c_1,z_1\rangle$ \\ \hspace*{-0.02cm}$\langle c_2,z_2\rangle$ \\ \hspace*{0.4cm}\vdots \\ \hspace*{-0.13cm}$\langle c_m,z_m\rangle$ \end{tabular} \right] \\ V_i(z)&=x_iz_i\mbf{e}_i \end{align*} acting on $\hat{c}_j=\mbf{1}\otimes\cdots\otimes\mbf{1}\otimes c_j\otimes\mbf{1}\cdots\otimes\mbf{1}\in\mbox{$\re_{LC}^{\otimes m}\langle\langle X \rangle\rangle$}$ ($c_j$ appears in the $j$-th position) and evaluated at $z_{j0}=\mbf{1}$, $i,j=1,2\ldots,m$. \end{theorem} The next theorem states the main convergence result concerning additive networks. 
\begin{theorem} If ${\mathcal N}_m$ is an additively interconnected network where the generating series for each node $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$, then the generating series for every input-output map $d_{ji}\in\re_{LC}\langle\langle X_i\rangle\rangle$. More specifically, if $K_i,M_i$ denote the growth constants for $c_i$, then for all $i,j=1,2,\ldots,m$ \begin{displaymath} \abs{\langle d_{ji},\eta\rangle}<KM^{\abs{\eta}}\abs{\eta}!,\;\;\forall \eta\in X^\ast \end{displaymath} for some $K>0$ and any $M>M_{\rm inf}$, where \begin{equation} \label{eq:M-inf-additive-network} M_{\rm inf}=\frac{\bar{M}}{1-m\bar{K} \ln\left(1+\frac{1}{m\bar{K}}\right)} \end{equation} with $\bar{K}=\max_{i} K_i$ and $\bar{M}=\max_{i} M_i$. \end{theorem} \noindent{\em Proof:}$\;\;$ It is first shown that each generating series $d_{ji}$ is locally convergent. Consider the case where every node series $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$ is a {\em maximal series} $\bar{c}_i:=\sum_{\eta\in X^\ast}K_iM_i^{\abs{\eta}}\abs{\eta}!\, \eta$. That is, every coefficient of $\bar{c}_i$ is growing at its maximal rate. While $y_i=F_{c_i}[u_i]$ may not have a finite dimensional state space realization, it is easily shown that a maximal series has the realization \begin{displaymath} \dot{z}_i=\frac{M_i}{K_i}z_i^2(1+u_i),\;\;z_i(0)=K_i,\;\;y_i=z_i \end{displaymath} \hspace*{-0.07in}\cite[Lemma 3]{Thitsa-Gray_12}. Therefore, the corresponding network can be realized by \begin{equation} \label{eq:maximal-series-state-space} \dot{z}_i=\frac{M_i}{K_i}z_i^2\left(1+\sum_{j=1}^m W_{ij}z_j+v_i\right)\!\!,\;\;z_i(0)=K_i,\;\;y_i=z_i, \end{equation} $i=1,2,\ldots,m$. As this realization of the input-output map $v\mapsto y$ is polynomial, it is clearly real analytic. Therefore, every generating series for $v_i\mapsto y_j$, say $\bar{d}_{ji}$, must be locally convergent \cite[Lemma 4.2]{Sussmann_83}. 
The claim now is that $d_{ji}$ must also be locally convergent since $\abs{\langle d_{ji},\eta\rangle}\leq \langle \bar{d}_{ji},\eta\rangle$ for all $\eta\in X^\ast$. This inequality is most easily deduced from the formal realization of $v_i\mapsto y_j$ given in Theorem~\ref{th:additive-interconnections}, where the Lie derivatives used to compute the coefficients of $d_{ji}$ will all be upper bounded in magnitude by the Lie derivatives computed using maximal series. Next, a suitable geometric growth constant for the network ${\mathcal N}_m$ is determined. First observe that the growth constants $\bar{K}$ and $\bar{M}$ constitute a worst case maximum growth rate for every node in the network. In light of the formal representation of any $d_{ji}$ in Theorem~\ref{th:additive-interconnections}, the growth rate of $d_{ji}$ is upper bounded by the growth rate of the natural response $\langle \bar{d}_{ji},x_0^k\rangle=L_{V_0}^k\hat{c}_j(\mbf{1})$, $k\geq 0$, where $W_{ij}=1$ for all $i,j$, and every non-trivial component of $\hat{c}_j$ is the maximal series $\bar{c}=\sum_{\eta\in X^\ast}\bar{K} \bar{M}^{\abs{\eta}}\abs{\eta}!\, \eta$. (See \cite[Lemma 7]{Thitsa-Gray_12} for an alternative approach when $m=1,2$.) From the symmetry of such a {\em maximal network}, $z_i=z_j$ for all $i,j$. Applying these conditions to \rref{eq:maximal-series-state-space}, the natural response at each node is given by the solution of the Abel differential equation \begin{equation} \label{eq:maximal-network-Abel-equation} \dot{z}=\frac{\bar{M}}{\bar{K}}(z^2+mz^3),\;\;z(0)=\bar{K}.
\end{equation} It can be directly verified that this equation has the solution \begin{displaymath} z(t)=\frac{-\frac{1}{m}}{1+\mathcal{W}\left[-\left(1+\frac{1}{m\bar{K}}\right)\exp\left(\frac{\bar{M}}{m\bar{K}}t-\left(1+\frac{1}{m\bar{K}}\right)\right)\right]}, \end{displaymath} where $\mathcal{W}$ denotes the Lambert $W$-function, that is, the inverse of the function $f(x)=x\exp(x)$ corresponding to the principal branch of this multi-valued function \cite{Corless-etal_96}. As the principal branch of $\mathcal{W}$ is analytic away from its branch cut $(-\infty,-1/{\rm e}]$, and its argument above lies in $(-1/{\rm e},0)$ at $t=0$, $z(t)$ is analytic at $t=0$. The corresponding Taylor series has a radius of convergence determined by the singularity nearest to the origin, in this case \begin{displaymath} t^\ast=\frac{1}{\bar{M}}\left(1-m\bar{K}\ln\left(1+\frac{1}{m\bar{K}}\right)\right). \end{displaymath} Applying a well known theorem from complex analysis (see \cite[Theorem~2.4.3]{Wilf_94}) gives the infimum of all geometric growth constants for the maximal network, namely $M_{\rm inf}=1/t^\ast$. (Note that the function $\lambda(x)=1-x\ln(1+{1/x})$ is a decreasing function, which further justifies using the maximum $K_i$ in the network as the worst case.) Since for any $M>M_{\rm inf}$ there is a $K>0$ to upper bound the fastest coefficient growth in the maximal network, the generating series for every node in the original network must also be upper bounded by this growth rate. \hfill\bull \vspace*{0.05in} It is worth noting that \rref{eq:M-inf-additive-network} is in fact identical to the growth constant identified for unity feedback systems with $m$ inputs as described in \cite[Corollary~2]{Thitsa-Gray_12}. While the network topologies are clearly distinct, this point of tangency is derived from the fact that unity feedback systems and additive maximal networks both have natural responses satisfying \rref{eq:maximal-network-Abel-equation}.
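Both the bound \rref{eq:M-inf-additive-network} and the Taylor coefficients of the natural response of a maximal network are easy to check numerically. The sketch below (illustrative code, not from the paper) evaluates $M_{\rm inf}$ and generates the coefficients $a_n=n!\,b_n$ of $z(t)=\sum_{n\geq 0} b_n t^n$ by matching powers of $t$ in the Abel equation, using exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial, log

def M_inf(m, Kbar, Mbar):
    # Infimum of geometric growth constants for the maximal network,
    # M_inf = Mbar / (1 - m*Kbar*ln(1 + 1/(m*Kbar)))
    return Mbar / (1 - m * Kbar * log(1 + 1 / (m * Kbar)))

def natural_response_coeffs(m, K, M, N):
    # Coefficients a_n = n! b_n of z(t) = sum_n b_n t^n solving the Abel
    # equation z' = (M/K)(z^2 + m z^3), z(0) = K, via Cauchy products:
    # (n+1) b_{n+1} = (M/K) * ([t^n] z^2 + m [t^n] z^3)
    b = [Fraction(K)]
    for n in range(N):
        s2 = sum(b[i] * b[n - i] for i in range(n + 1))
        s3 = sum(b[i] * b[j] * b[n - i - j]
                 for i in range(n + 1) for j in range(n + 1 - i))
        b.append(Fraction(M, K) * (s2 + m * s3) / (n + 1))
    return [factorial(n) * bn for n, bn in enumerate(b)]
```

With $\bar{K}=\bar{M}=1$ and $m=1$, this yields $M_{\rm inf}\approx 3.2589$ and the integer sequence $1,2,10,82,938,\ldots$, and the ratios $a_n/(n\,a_{n-1})$ approach $M_{\rm inf}$ as $n$ grows.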
\begin{table}[tb] \caption{Integer sequences generated by maximal additive network with unity growth constants} \label{tbl:network-series} \vspace*{-0.1in} \begin{center} \begin{tabular}{c|l|c|c} \toprule $m$ & \hspace*{0.8in}$a_n$ & $M_{\rm inf}$ & $\hat{M}_n$ \\ \midrule 1 & 1, 2, 10, 82, 938, 13778, 247210, $\ldots$ & 3.2589 & 3.22634 \\ 2 & 1, 3, 24, 318, 5892, 140304, $\ldots$ & 5.2891 & 5.23618 \\ 3 & 1, 4, 44, 804, 20556, 675588, $\ldots$ & 7.3017 & 7.22873 \\ 4 & 1, 5, 70, 1630, 53120, 2225480, $\ldots$ & 9.3088 & 9.21567 \\ 5 & 1, 6, 102, 2886, 114294, 5819190, $\ldots$ & 11.3132 & 11.2001 \\ 6 & 1, 7, 140, 4662, 217308, 13022688, $\ldots$ & 13.3163 & 13.1831 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[scale=0.55]{three_node_simulation_worst_case} \end{center} \vspace*{-0.2in} \caption{Natural response of three node maximal network in Example~\ref{ex:three-node-maximal-network-worst-case}.} \label{fig:three-node-maximal-network-worst-case} \end{figure} \begin{example} \label{ex:three-node-maximal-network-worst-case} {\rm Consider a maximal additive network where $K_i=M_i=1$, $i=1,2,\ldots,m$. The Taylor series of the natural response has integer coefficients $a_n$, $n\geq 0$, as shown in Table~\ref{tbl:network-series}. The coefficients when $m=1$ correspond to the OEIS integer sequence A112487 \cite{OEIS}. The table also shows the growth rate $M_{\rm inf}$ computed from \rref{eq:M-inf-additive-network} and an estimate of the growth constant $M$ computed from $\hat{M}_n=a_{n}/(na_{n-1})$ when $n=50$. The corresponding three node network was simulated in MATLAB for the zero input case. The node responses, which are identical, are shown in Figure~\ref{fig:three-node-maximal-network-worst-case}. Since the coefficients of every generating series are positive, it is known that the natural response of every node will have a finite escape time at $t=t^\ast$ (see \cite[Theorem 11]{Thitsa-Gray_12}).
In this case, $t^\ast=1/M_{\rm inf}=0.1370$, which is what was observed in the simulation. } \hfill\openbull \end{example} \vspace*{0.1in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.55]{three_node_simulation_generic} \end{center} \vspace*{-0.1in} \caption{Natural response of three node network in Example~\ref{ex:three-node-maximal-network-generic}.} \label{fig:three-node-maximal-network-generic} \end{figure} \begin{example} \label{ex:three-node-maximal-network-generic} {\rm Consider a three node additive network involving maximal series with $K_i=i$, $M_i=5-i$ and \begin{displaymath} W=\left[\begin{array}{ccc} 1 & 0.5 & 1 \\ 1 & 1 & 0 \\ 0.25 & 1 & 1 \end{array}\right]. \end{displaymath} Thus, $\bar{K}=3$, $\bar{M}=4$, and $M_{\rm inf}=77.2867$. The node natural responses are shown in Figure~\ref{fig:three-node-maximal-network-generic}. As this network is not maximal, $t^\ast=1/M_{\rm inf}=0.01294$ provides only a lower bound on the escape times of each node. } \hfill\openbull \end{example} \vspace*{0.1in} \section{Additive Networks of Chen-Fliess Series: Relative Degree} \label{sec:relative-degree} In this section the following question is addressed: When does the generating series of the mapping $v_i\mapsto y_j$ in an additively interconnected network ${\mathcal N}_m$ have a well defined relative degree? The treatment starts with the easiest case, as described next. It is assumed throughout that ${\mathcal N}_m$ is comprised of systems with generating series $c_i$ which have relative degree $r_i$ for $i=1,2,\ldots,m$. \begin{definition} The $i$-th node in a network ${\cal N}_m$ is said to be {\em fully connected} if $W_{ij}\neq 0$ for all $j\neq i$. A network ${\cal N}_m$ is said to be {\em fully connected} if every node is fully connected.
\end{definition} Note that self-loops, i.e., when $W_{ii}\neq 0$, are not important in the present context as proportional output feedback is easily shown to preserve relative degree \cite{Gray-Venkatesh_SCL19}. \begin{theorem} \label{th:relative-degree-fully-connected} If the $i$-th node in ${\mathcal N}_m$ is fully connected, then the generating series $d_{ji}$ for mapping $v_i\mapsto y_j$ has relative degree $r_{ji}=r_j+r_i$. \end{theorem} \noindent{\em Proof:}$\;\;$ Observe that the full output at node $j$ is \begin{align*} y_j&=F_{c_j}\left[v_j+\sum_{k=1}^m W_{jk} y_k \right] \\ &=F_{c_j}\left[v_j+\sum_{k,l=1}^m W_{jk} F_{d_{kl}}[v_l]\right]. \end{align*} For any $i\neq j$, that part of $y_j$ in response to $v_i$ acting alone (i.e., $v_l=0$ for $l\neq i$) is given by \begin{align*} y_{j}&=F_{c_j}\Bigg[W_{ji}F_{c_i}[v_i]+\sum_{k=1\atop k\neq i}^m W_{jk} F_{d_{ki}}[v_i]+ \\ &\hspace{0.2in} \sum_{k,l=1\atop l\neq i}^m W_{jk} F_{d_{kl}}[0]\Bigg]. \end{align*} Note that for all $k\neq i$, ${\rm supp}(d_{ki})\subseteq x_0^{r}X^\ast$, where $r\geq r_i+1$, since $v_i$ passes through $F_{c_i}$ in every path leading to the $j$-th node. In which case, the argument of $F_{c_j}$ above has a generating series with relative degree $r_i$. The conclusion then follows immediately from Theorem~\ref{th:relative-degree-casecade}. \hfill\bull \vspace*{0.05in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{diamond_network} \end{center} \vspace*{-0.2in} \caption{Four node network in Example~\ref{ex:four-node-counterexample}.} \label{fig:four-node-counterexample} \end{figure} \begin{example} \label{ex:four-node-counterexample} {\rm Consider the network shown in Figure~\ref{fig:four-node-counterexample}. The corresponding weighting matrix is \begin{displaymath} W=\left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ W_{21} & 0 & 0 & 0 \\ W_{31} & 0 & 0 & 0 \\ W_{41} & W_{42} & W_{43} & 0 \end{array}\right]. 
\end{displaymath} The network is clearly {\em not} fully connected, but node $4$ is fully connected assuming $W_{4j}\neq 0$, $j=1,2,3$. Therefore, applying the theorem above gives, for example, that $r_{41}=r_4+r_1$. Suppose now that $W_{41}=0$ so that the theorem no longer applies. Further assume that $r_2=r_3=r$. Observe that \begin{displaymath} u_4=W_{42}F_{c_2}[W_{21}F_{c_1}[v_1]]+W_{43}F_{c_3}[W_{31}F_{c_1}[v_1]], \end{displaymath} and thus, \begin{displaymath} d_{41}=c_4\circ [W_{42}(c_2\circ (W_{21}c_1))+W_{43}(c_3\circ (W_{31}c_1))]. \end{displaymath} Both $c_2\circ (W_{21}c_1)$ and $c_3\circ (W_{31}c_1)$ have relative degree $r+r_1$, but $d_{41}$ can fail to have relative degree. As a simple example, suppose $c_1=c_2=c_4=x_1$ and $c_3=-x_1$ so that $d_{41}=(W_{42}W_{21}-W_{43}W_{31}) x_0^2x_1$. If $W$ is such that $W_{42}W_{21}=W_{43}W_{31}$, then $d_{41}=0$ does not have relative degree. On the other hand, if the symmetry condition $r_2=r_3$ is broken, then it follows that $d_{41}$ has relative degree $r_{41}=r_4+\min (r_2,r_3)+r_1$. } \hfill\openbull \end{example} \vspace*{0.1in} The final case in the example above suggests a sufficient condition for the general case. Namely, in the absence of these degenerate situations where a node is presented with an input whose underlying generating series does not have relative degree, the relative degree for $d_{ji}$ will be well defined and determined by a path from node $i$ to node $j$ whose {\em accumulated} relative degree is minimal. To make this claim more precise, the following language adapted from signal flow graph theory will be useful. Let ${\mathcal N}_m$ be a given additive network. An {\em edge} is a directed line segment connecting two nodes. A {\em path} is a continuous set of edges connecting two nodes in ${\mathcal N}_m$ and traversed in the direction indicated. A {\em forward path} is a path in which no node is encountered more than once.
A {\em loop} is a path that originates and ends on the same node in which no node is encountered more than once. Finally, the {\em subgraph} $G_{ji}$ from node $i$ to node $j$ is the simple graph (i.e., all loops are omitted) consisting of all forward paths connecting node $i$ and node $j$. The following theorems provide a sufficient condition under which the relative degree is well defined for a given input-output map $v_i\mapsto y_j$ in an additive network. Given a subgraph $G_{ji}$, the {\em accumulated relative degree} of node $i$ is $r_i^+=r_i$. If node $k\neq i$ in $G_{ji}$ has $N$ incoming edges from nodes $i_1,i_2,\ldots, i_N$ with accumulated relative degrees $r^+_{i_1},r^+_{i_2},\ldots,r^+_{i_N}$, respectively, then the {\em accumulated relative degree} at node $k$ is \begin{displaymath} r^+_k=r_k+\min\{r^+_{i_1},r^+_{i_2},\ldots,r^+_{i_N}\}. \end{displaymath} Note this definition does not imply that any mappings defined by the network have relative degree, it simply computes the {\em potential} relative degree of such a mapping should it be well defined. \begin{theorem} \label{th:relative-degree-additive-network-distinct-condition} Let $i$ and $j$ be fixed nodes in ${\mathcal N}_m$. If at every node $l{\not\in}\{i,j\}$ the accumulated relative degrees of the nodes from every incoming edge are distinct, then the generating series $d_{ji}$ for $v_i\mapsto y_j$ in ${\mathcal N}_m$ has well defined relative degree equivalent to $r_{ji}=r_j^+$. \end{theorem} \noindent{\em Proof:}$\;\;$ As feedback loops do not affect the relative degree of any forward path, it is sufficient to consider only the subgraph $G_{ji}$. The claim then follows directly from Corollary~\ref{co:relative-degree-multi-sum-distinct}, Theorem~\ref{th:relative-degree-casecade}, and the definition of accumulated relative degree. 
\hfill\bull \vspace*{0.05in} The distinctness condition in the above theorem can be relaxed by utilizing instead the condition in Corollary~\ref{co:relative-degree-multi-sum-not-distinct}. \begin{theorem} \label{th:relative-degree-additive-network-repeated-condition} Let $i$ and $j$ be fixed nodes in ${\mathcal N}_m$. If at every node $l{\not\in}\{i,j\}$ the accumulated relative degrees of the nodes from every incoming edge satisfy the condition in Corollary~\ref{co:relative-degree-multi-sum-not-distinct}, then the generating series $d_{ji}$ for $v_i\mapsto y_j$ in ${\mathcal N}_m$ has well defined relative degree equivalent to $r_{ji}=r_j^+$. \end{theorem} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{double_diamond_network} \end{center} \vspace*{-0.1in} \caption{Network in Example~\ref{ex:double-diamond-example}.} \label{fig:double-diamond-example} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{double_diamond_network_subgraph} \end{center} \vspace*{-0.1in} \caption{Subgraph of forward paths for $v_1\mapsto y_7$ in Example~\ref{ex:double-diamond-example}. The relative degree of each generating series $c_i$ is the circled number. The accumulated relative degree at each node is the number in the triangle.} \label{fig:double-diamond-example-subgraph} \end{figure} \begin{example} \label{ex:double-diamond-example} {\rm Consider the network shown in Figure~\ref{fig:double-diamond-example}, where each weight $W_{ij}\in \{0,1\}$ (i.e., $0\sim\mbox{not connected}$, $1\sim\mbox{connected}$), and the generating series for the nodes are: \begin{align*} c_1 &= K_1 x_1 + 2 x_0 x_1 \\ c_2 &= x_0+K_2 x_0^2 x_2 \\ c_3 &= K_3 x_0 x_3 + 3 x_0^2 x_3^2 \\ c_4 &= 1+K_4 x_0 x_4 - x_0^2 x_4x_0 \\ c_5 &= 4x_0+K_5 x_0^2 x_5 -2x_0^4x_5 \\ c_6 &= K_6 x_6 -x_6^2\\ c_7 &= x_0+2+K_7 x_7+4x_0x_7 \end{align*} with $K_i\neq 0$ in every case. The subgraph of forward paths is shown in Figure~\ref{fig:double-diamond-example-subgraph}. 
The relative degree of the generating series at each node is the circled number shown next to each node. The accumulated relative degree at each node is the number in the triangle. The goal is to determine the relative degree of the mapping $v_1\mapsto y_7$, provided it is well defined. Observe that only nodes $4$, $5$, and $7$ have more than one incoming edge. In each case, the accumulated relative degrees are distinct, namely, $3,4$; $4,5$; and $6,7$, respectively. Therefore, Theorem~\ref{th:relative-degree-additive-network-distinct-condition} applies, and $r_{71}=7$. To independently verify this claim, the generating series $d_{71}$ was computed using the full network via Theorem~\ref{th:additive-interconnections} with the aid of Mathematica and found to be \begin{displaymath} d_{71}=d_{71,N}+K_1K_3K_4K_6K_7x_0^6x_1+x_0^6e, \end{displaymath} where \begin{align*} d_{71,N} = & {\,} x_0+ (4 K_7 + K_6 K_7)x_0^2 +(16 + 4 K_6 - K_7)x_0^3+ \\ & {\,} (-4 + K_5 K_7 + 2 K_4 K_6 K_7) x_0^4 + (4 K_5 + 8 K_4 K_6- \\ & {\,} 8 K_4 K_7 + K_5 K_7 + 2 K_4 K_6 K_7) x_0^5 + \cdots\\ e = & {\,} (4 K_1 K_3 K_4 K_6 - 6 K_1 K_3 K_4 K_7 + K_1 K_2 K_5 K_7+ \\ & {\,}K_1 K_2 K_4 K_6 K_7 + 2 K_3 K_4 K_6 K_7) x_0 x_1 + \cdots \end{align*} The relative degree of $d_{71}$ is 7 as expected. } \hfill\openbull \end{example} \vspace*{0.1in} An additive network ${\mathcal N}_m$ is said to have {\em complete relative degree} if every mapping $v_i\mapsto y_j$, $i,j=1,2,\ldots,m$ has relative degree. From Theorem~\ref{th:relative-degree-fully-connected} it is immediate that fully connected networks have this property. Another class of networks sharing this property is given in the following theorem. It states that in some sense the property of a network having complete relative degree is {\em generic}. \begin{theorem} Consider an additive network ${\mathcal N}_m$ where the weighting matrix has entries $W_{ij}\in\{0,1\}$.
If the unity weights are replaced with continuous random variables, then with probability one the sample network has complete relative degree. \end{theorem} \noindent{\em Proof:}$\;\;$ At any given node, the incoming nodes may or may not have distinct accumulated relative degree. In the case where they do, Theorem~\ref{th:relative-degree-additive-network-distinct-condition} applies; otherwise, Theorem~\ref{th:relative-degree-additive-network-repeated-condition} applies provided the condition for multiplicities greater than one can be met. Specifically, at node $k$ with incoming edges from nodes $j_1,j_2,\ldots, j_N$ with accumulated relative degrees $r^+_{j_1},r^+_{j_2},\ldots,r^+_{j_N}$, it is required that if $r^+_j$ is repeated $s_j>1$ times then \begin{align*} &W_{kj(1)}\langle d_{j(1)i},x_0^{r^+_j-1}x_i\rangle+ W_{kj(2)} \langle d_{j(2)i},x_0^{r^+_j-1}x_i\rangle+ \cdots \\ &\hspace*{0.2in}+W_{kj(s_j)}\langle d_{j(s_j)i},x_0^{r^+_j-1}x_i\rangle\neq 0, \end{align*} where $j(l)\in\{j_1,j_2,\ldots, j_N\}$, and the $W_{kj(l)}$ are random variables with any continuous distribution(s). But this condition holds with probability one, and hence, the theorem is proved. \hfill\bull \vspace*{0.05in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{random_network} \end{center} \vspace*{-0.1in} \caption{Estimate of density function for $\abs{\langle d_{41},x_0^2x_1\rangle}$ in Example~\ref{exer:random-network}.} \label{fig:random-network-density-function} \end{figure} \begin{example} \label{exer:random-network} {\rm Reconsider Example~\ref{ex:four-node-counterexample}, where $W_{41} = 0$ and now $W_{21}$, $W_{31}$, $W_{42}$ and $W_{43}$ are i.i.d. random variables with a uniform distribution on $[0,1]$. An estimate of the density function for the random variable $\abs{\langle d_{41},x_0^2x_1\rangle}$ is shown in Figure~\ref{fig:random-network-density-function}. In every case of the 1000 random networks generated, $d_{41}$ had relative degree $r=3$ as expected.
} \hfill\openbull \end{example} \vspace*{0.1in} \section{Conclusions} Two basic properties were established for an additive network of input-output systems where each node of the network is modeled by a convergent Chen-Fliess series. First, it was shown that every input-output map between a pair of nodes has a locally convergent Chen-Fliess series representation. An explicit and in some cases achievable growth bound on the coefficients was computed using the notion of a maximal network. Second, sufficient conditions were given under which the input-output map between a pair of nodes has a well defined relative degree as defined by its generating series. This analysis led to the conclusion that this relative degree property is generic when the connection strengths between nodes are randomized. \section{Introduction} Networks of nonlinear dynamical systems appear in many fields, especially in the natural sciences where the nonlinearity is often a key feature in generating the observed behavior \cite{Golubitsky-Stewart_06,Jiang-Lai_19}. The vast majority of analysis of such networks is done in a finite dimensional state space setting using coupled systems of ordinary differential equations. In \cite{Gray-Ebrahimi-Fard_SCL21}, however, the authors describe an alternative approach which uses only input-output models at each node of the network in the form of a locally convergent Chen-Fliess series \cite{Fliess_81,Fliess_83}. These weighted infinite sums of iterated integrals provide a convenient algebraic framework for describing the network's behavior without relying on any particular choice of coordinates as in the state space setting. Series coefficients for each node can be estimated via system identification techniques \cite{Gray-etal_Auto20}. Computational tools were developed in \cite{Gray-Ebrahimi-Fard_SCL21} to determine, for example, how an input injected at one node affects the output observed at another node.
Nevertheless, there are still a number of open questions regarding the basic properties of such networks. The focus here will be on so-called {\em additive} networks, where the outputs of the nodes are simply added together and injected into other nodes, including self-loops. Other classes of aggregation functions, such as the multiplication of node outputs, will not be addressed here. This paper has two goals. The first goal is to address the open problem stated in \cite{Gray-Ebrahimi-Fard_SCL21} regarding whether an additive network of locally convergent Chen-Fliess series always yields mappings between nodes which have locally convergent Chen-Fliess series representations. This hypothesis will be proved to be true, independent of the network's topology. The approach taken is to identify for a given network an associated {\em maximal network} whose growth bounds on the coefficients of the generating series between nodes upper bound all the growth bounds of the original network and are much easier to determine using conventional methods as presented in \cite{Sussmann_83}. The particular growth bound derived turns out to be exactly equivalent to one discovered for a class of unity feedback systems described in \cite{Thitsa-Gray_12}. The second goal is to provide sufficient conditions under which the input-output map between a pair of nodes has well defined relative degree as defined by its generating series \cite{Gray-etal_AUTO14,Gray-Venkatesh_SCL19}. A simple counterexample will be given first to show that this property can fail to hold in certain situations. The proofs of the sufficient conditions rely on identifying certain properties first described in \cite{Gray-Venkatesh_SCL19} in relation to a subgraph connecting a given input node and output node. It is also shown, however, that this relative degree property is {\em generic} in a certain sense.
Namely, if the generating series for every node has relative degree and the connection strengths between the nodes are random, then every node pair has a generating series with well defined relative degree with probability one. An obvious application for this result is in the context of feedback linearization for networks \cite{Menara-etal_CDC20}, however, that application will not be pursued here. The paper is organized as follows. To keep the presentation as self-contained as possible, the required preliminaries are briefly summarized in Section~\ref{sec:preliminaries}. The question regarding the convergence of Chen-Fliess series for mappings between nodes is addressed in Section~\ref{sec:local-convergence}. The subsequent section treats the property of relative degree. The paper's conclusions are summarized in the final section. \section{Preliminaries} \label{sec:preliminaries} An {\em alphabet} $X=\{ x_0,x_1,$ $\ldots,x_m\}$ is any nonempty and finite set of noncommuting symbols referred to as {\em letters}. A {\em word} $\eta=x_{i_1}\cdots x_{i_k}$ is a finite sequence of letters from $X$. The number of letters in a word $\eta$, written as $\abs{\eta}$, is called its {\em length}. The empty word, $\emptyset$, is taken to have length zero. The collection of all words having length $k$ is denoted by $X^k$. Define $X^\ast=\bigcup_{k\geq 0} X^k$, which is a monoid under the concatenation (Cauchy) product. Any mapping $c:X^\ast\rightarrow \re^\ell$ is called a {\em formal power series}. Often $c$ is written as the formal sum $c=\sum_{\eta\in X^\ast}\langle c,\eta\rangle\eta$, where the {\em coefficient} $\langle c,\eta\rangle\in\re^\ell$ is the image of $\eta\in X^\ast$ under $c$. The {\em support} of $c$, ${\rm supp}(c)$, is the set of all words having nonzero coefficients. A series $c$ is {\em proper} when $\emptyset\not\in{\rm supp}(c)$. 
The set of all noncommutative formal power series over the alphabet $X$ is denoted by $\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$. The subset of series with finite support, i.e., polynomials, is represented by $\mbox{$\re^{\ell}\langle X \rangle$}$. For any $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$, the scalar product is $\langle c,d\rangle:=\sum_{\eta\in X^\ast} \langle c,\eta\rangle\langle d,\eta\rangle$, provided the sum is finite. The set $\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is an associative $\re$-algebra under the concatenation product and an associative and commutative $\re$-algebra under the {\em shuffle product}, that is, the bilinear product uniquely specified by the shuffle product of two words $x_i\eta,x_j\xi\in X^\ast$: \begin{displaymath} (x_i\eta){\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}(x_j\xi)=x_i(\eta{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}(x_j\xi))+x_j((x_i\eta){\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} \xi), \end{displaymath} where $x_i,x_j\in X$ and with $\eta{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}\emptyset=\emptyset{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,}\eta=\eta$ \cite{Fliess_81}. \subsection{Chen-Fliess series} Given any $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ one can associate a causal $m$-input, $\ell$-output operator, $F_c$, in the following manner. Let $\mathfrak{p}\ge 1$ and $t_0 < t_1$ be given. For a Lebesgue measurable function $u: [t_0,t_1] \rightarrow\re^m$, define $\norm{u}_{\mathfrak{p}}=\max\{\norm{u_i}_{\mathfrak{p}}: \ 1\le i\le m\}$, where $\norm{u_i}_{\mathfrak{p}}$ is the usual $L_{\mathfrak{p}}$-norm for a measurable real-valued function, $u_i$, defined on $[t_0,t_1]$. 
Let $L^m_{\mathfrak{p}}[t_0,t_1]$ denote the set of all measurable functions defined on $[t_0,t_1]$ having a finite $\norm{\cdot}_{\mathfrak{p}}$ norm and $B_{\mathfrak{p}}^m(R)[t_0,t_1]:=\{u\in L_{\mathfrak{p}}^m[t_0,t_1]:\norm{u}_{\mathfrak{p}}\leq R\}$. Assume $C[t_0,t_1]$ is the subset of continuous functions in $L_{1}^m[t_0,t_1]$. Define inductively for each word $\eta=x_i\bar{\eta}\in X^{\ast}$ the map $E_\eta: L_1^m[t_0, t_1]\rightarrow C[t_0, t_1]$ by setting $E_\emptyset[u]=1$ and letting \[E_{x_i\bar{\eta}}[u](t,t_0) = \int_{t_0}^tu_{i}(\tau)E_{\bar{\eta}}[u](\tau,t_0)\,d\tau, \] where $x_i\in X$, $\bar{\eta}\in X^{\ast}$, and $u_0=1$. The {\em Chen--Fliess series} corresponding to $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is \begin{displaymath} y(t)=F_c[u](t) = \sum_{\eta\in X^{\ast}} \langle c,\eta\rangle \,E_\eta[u](t,t_0) \end{displaymath} \hspace*{-0.06in}\cite{Fliess_81}. If there exist real numbers $K,M>0$ such that \begin{displaymath} \abs{\langle c,\eta\rangle}\le K M^{|\eta|}|\eta|!,\;\; \forall\eta\in X^{\ast}, \end{displaymath} then $F_c$ constitutes a well defined mapping from $B_{\mathfrak p}^m(R)[t_0,$ $t_0+T]$ into $B_{\mathfrak q}^{\ell}(S)[t_0, \, t_0+T]$ for sufficiently small $R,T >0$ and some $S>0$, where the numbers $\mathfrak{p},\mathfrak{q}\in[1,\infty]$ are conjugate exponents, i.e., $1/\mathfrak{p}+1/\mathfrak{q}=1$ \cite{Gray-Wang_02}. (Here, $\abs{z}:=\max_i \abs{z_i}$ when $z\in\re^\ell$.) The set of all such {\em locally convergent} series is denoted by $\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$, and $F_c$ is referred to as a {\em Fliess operator}. 
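As a computational aside, the recursive definition of the shuffle product given earlier is directly implementable. The following Python sketch is ours (the encoding of words over $X=\{x_0,x_1\}$ as strings of letter indices is an arbitrary choice, not taken from the cited references); it returns the shuffle of two words as a dictionary of word coefficients.

```python
def shuffle(eta, xi):
    """Shuffle product of two words, returned as {word: coefficient}.

    Words are encoded as strings of letter indices, e.g. "01" = x_0 x_1,
    with "" the empty word.  Implements the recursion
    (x_i eta) sh (x_j xi) = x_i (eta sh (x_j xi)) + x_j ((x_i eta) sh xi),
    with the empty word acting as the identity.
    """
    if not eta:
        return {xi: 1}
    if not xi:
        return {eta: 1}
    out = {}
    # first letter of eta distributes over the shuffle of its tail with xi
    for word, coeff in shuffle(eta[1:], xi).items():
        w = eta[0] + word
        out[w] = out.get(w, 0) + coeff
    # and symmetrically for the first letter of xi
    for word, coeff in shuffle(eta, xi[1:]).items():
        w = xi[0] + word
        out[w] = out.get(w, 0) + coeff
    return out
```

For instance, shuffle("0", "1") returns {"01": 1, "10": 1}, i.e., $x_0x_1+x_1x_0$, while shuffle("1", "1") returns {"11": 2}; the coefficients of any shuffle of words of lengths $k$ and $l$ sum to $\binom{k+l}{k}$.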
\subsection{System interconnections} Given Fliess operators $F_c$ and $F_d$, where $c,d\in\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$, the parallel and product connections satisfy $F_c+F_d=F_{c+d}$ and $F_cF_d=F_{c{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} d}$, respectively \cite{Fliess_81}. When Fliess operators $F_c$ and $F_d$ with $c\in\mbox{$\re^{\ell}_{LC}\langle\langle X \rangle\rangle$}$ and $d\in\mbox{$\re^{m}_{LC}\langle\langle X \rangle\rangle$}$ are interconnected in a cascade fashion, the composite system $F_c\circ F_d$ has the Fliess operator representation $F_{c\circ d}$, where the {\em composition product} of $c$ and $d$ is given by \begin{displaymath} c\circ d=\sum_{\eta\in X^\ast} \langle c,\eta\rangle\,\psi_d(\eta)(\mathbf{1}) \end{displaymath}% \hspace*{-0.07in}\cite{Ferfera_80}. Here $\mbf{1}$ denotes the monomial $1\emptyset$, and $\psi_d$ is the continuous (in the ultrametric sense) algebra homomorphism from $\mbox{$\re\langle\langle X \rangle\rangle$}$ to the vector space endomorphisms on $\mbox{$\re\langle\langle X \rangle\rangle$}$, ${\rm End}(\allseries)$, uniquely specified by $\psi_d(x_i\eta)=\psi_d(x_i)\circ \psi_d(\eta)$ with $ \psi_d(x_i)(e)=x_0(d_i{\, {\mathchoice{\shuff{5pt}{3.5pt}}{\shuff{5pt}{3.5pt}}{\shuff{3pt}{2.6pt}}{\shuff{3pt}{2.6pt}}} \,} e), $ $i=0,1,\ldots,m$ for any $e\in\mbox{$\re\langle\langle X \rangle\rangle$}$, and where $d_i$ is the $i$-th component series of $d$ ($d_0:=\mbf{1}$). By definition, $\psi_d(\emptyset)$ is the identity map on $\mbox{$\re\langle\langle X \rangle\rangle$}$. \subsection{Relative degree of a generating series} Let $X=\{x_0,x_1\}$. 
Following \cite{Gray-etal_AUTO14}, a series $c\in\mbox{$\re\langle\langle X \rangle\rangle$}$ has relative degree $r$ if and only if it has the decomposition \begin{displaymath} c=c_N+Kx_0^{r-1}x_1+x_0^{r-1}e \end{displaymath} for some $K\neq 0$, some natural part $c_N$ with ${\rm supp}(c_N)\subseteq\{x_0\}^\ast$, and some proper $e\in\mbox{$\re\langle\langle X \rangle\rangle$}$ with $x_1\not\in{\rm supp}(e)$. This definition of relative degree is consistent with the classical definition whenever $y=F_c[u]$ is realizable \cite{Gray-etal_AUTO14,Gray-Ebrahimi-Fard_SIAM17}. The following results will be of central importance in the work that follows. \begin{theorem} \cite{Gray-Venkatesh_SCL19} \label{th:r-parallel-sum-connection} If $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$ have distinct relative degrees $r_c$ and $r_d$, respectively, then $c+d$ has relative degree $\min(r_c,r_d)$. On the other hand, if $r_c = r_d =: r$, then $c+d$ has relative degree $r$ if and only if $\langle c,x_{0}^{r-1}x_{1}\rangle + \langle d,x_{0}^{r-1}x_{1}\rangle \neq 0$. \end{theorem} \begin{corollary} \label{co:relative-degree-multi-sum-distinct} If $c_1,c_2,\ldots,c_m$ have relative degrees $r_1,r_2,\ldots,r_m$, respectively, with $r_i\neq r_j$ when $i\neq j$, then the relative degree of $c_1+c_2+\cdots+c_m$ is $\min_i(r_i)$. \end{corollary} \begin{corollary} \label{co:relative-degree-multi-sum-not-distinct} Suppose $c_1,c_2,\ldots,c_m$ have relative degrees $r_1,r_2,\ldots,r_m$, respectively. Let $s_j$ denote the multiplicity of relative degree $r_j$. If for each $s_j>1$ the series $c_{k_1},c_{k_2},\ldots,c_{k_{s_j}}$ having relative degree $r_j$ satisfy \begin{displaymath} \langle c_{k_1},x_0^{r_j-1}x_1\rangle+\langle c_{k_2},x_0^{r_j-1}x_1\rangle+\dots+\langle c_{k_{s_j}},x_0^{r_j-1}x_1\rangle\neq 0, \end{displaymath} then the relative degree of $c_1+c_2+\cdots+c_m$ is $\min_i(r_i)$.
\end{corollary} \begin{theorem} \cite{Gray-Venkatesh_SCL19} \label{th:relative-degree-casecade} If $c,d\in\mbox{$\re\langle\langle X \rangle\rangle$}$ have relative degrees $r_c$ and $r_d$, respectively, then $c\circ d$ has relative degree $r_{c\circ d}=r_c+r_d$. \end{theorem} \subsection{Formal realizations and representations} It is shown in \cite{Kawski-Sussmann_97} that a given Chen-Fliess series $y=F_c[u]$ can be written in terms of a state $z$ evolving on a formal Lie group $\mathcal{G}(X)$ with Lie algebra $\widehat{\mathcal L}(X)$ and output map $y=\langle c,z\rangle$. This notion of a {\em universal control system} was generalized in \cite{Gray-Ebrahimi-Fard_SCL21} as follows to describe networks of Chen-Fliess series. \begin{definition} Let $V_i$ be a vector field on $\mathcal{G}^n(X):=\mathcal{G}(X)\times\mathcal{G}(X)\times\cdots\times \mathcal{G}(X)$, $i=0,1,\ldots,m$ with \begin{align*} V_i&:{\mathcal G}^n(X)\rightarrow T_z{\mathcal G}^n(X) \\ &z=(z_1,\ldots,z_n)\mapsto V_i(z)=(V_{i1}(z)z_1,\ldots,V_{in}(z)z_n), \end{align*} where $V_{ij}(z(t))\in \widehat{\mathcal L}(X)$. The $j$-th component of the corresponding state equation on ${\mathcal G}^n(X)$ is \begin{displaymath} \dot{z}_j=\sum_{i=0}^m V_{ij}(z)z_j u_{ij}, \;\; z_j(0)=z_{j0}. \end{displaymath} Given $\hat{c}_{k}\in\mbox{$\re_{LC}^{\otimes n}\langle\langle X \rangle\rangle$}$, $k=1,2,\ldots,\ell$, the $k$-th output equation is defined to be \begin{displaymath} y_{k}=\hat{c}_{k}(z). \end{displaymath} Collectively, $(V,z_0,\hat{c})$ is a {\em formal realization} on ${\mathcal G}^n(X)$ of the formal input-output map $u\mapsto y$.
\end{definition} Analogous to the standard finite dimensional theory \cite{Isidori_95,Nijmeijer-vanderSchaft_90}, a series $c\in\mbox{$\re^{\ell} \langle\langle X \rangle\rangle$}$ is said to have a {\em formal representation} when there exists a formal realization with the property that every coefficient of $c$ can be written in terms of iterated Lie derivatives of the vector fields acting on the output map and evaluated at $z_0$, i.e., $\langle c,x_{i_1}\cdots x_{i_k}\rangle=L_{V_{i_k}}\cdots L_{V_{i_1}}\hat{c}(z_0)$. \section{Additive Networks of Chen-Fliess Series: Local Convergence} \label{sec:local-convergence} In this section it is shown that every network of additively interconnected locally convergent Fliess operators has the property that the input-output maps between any two nodes can be represented by a locally convergent Fliess operator. The first definition describes the specific class of networks under consideration. \begin{definition} A set of $m$ single-input, single-output Chen-Fliess series mapping $u_i$ to $y_i$ with generating series $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$, where $X_i=\{x_0,x_i\}$, is said to be an {\em additively interconnected network} ${\mathcal N}_m$ with weighting matrix $W\in\re^{m\times m}$ if $u_i=v_i+\sum_{j=1}^mW_{ij}y_j$, $i=1,2,\ldots,m$. \end{definition} A network ${\mathcal N}_m$ can therefore be viewed as a directed graph connecting $m$ nodes, where the $i$-th node corresponds to a Chen-Fliess series with generating series $c_i$, $i=1,2,\ldots,m$. Henceforth, it will be assumed that the connection weights are normalized so that $W_{ij}\in[0,1]$, $i,j=1,2,\ldots,m$. The following theorem follows directly from Theorem 5.1 in \cite{Gray-Ebrahimi-Fard_SCL21}.
\begin{theorem} \label{th:additive-interconnections} The input-output map $v_i\mapsto y_j$ in any additively interconnected network ${\mathcal N}_m$ has generating series $d_{ji}\in\re\langle\langle X_i\rangle\rangle$ which can be computed from a formal representation in terms of the vector fields \begin{align*} V_0(z)&= \left[ \begin{tabular}{p{0.5cm}} \hspace*{-0.1cm}$x_0z_1$ \\ \hspace*{-0.1cm}$x_0z_2$ \\ \hspace*{0.2cm}\vdots \\ \hspace*{-0.1cm}$x_0z_m$ \end{tabular} \right]+{\rm diag}(x_1z_1,\ldots,x_m z_m)W \left[ \begin{tabular}{p{1cm}} \hspace*{-0.02cm}$\langle c_1,z_1\rangle$ \\ \hspace*{-0.02cm}$\langle c_2,z_2\rangle$ \\ \hspace*{0.4cm}\vdots \\ \hspace*{-0.13cm}$\langle c_m,z_m\rangle$ \end{tabular} \right] \\ V_i(z)&=x_iz_i\mbf{e}_i \end{align*} acting on $\hat{c}_j=\mbf{1}\otimes\cdots\otimes\mbf{1}\otimes c_j\otimes\mbf{1}\cdots\otimes\mbf{1}\in\mbox{$\re_{LC}^{\otimes m}\langle\langle X \rangle\rangle$}$ ($c_j$ appears in the $j$-th position) and evaluated at $z_{j0}=\mbf{1}$, $i,j=1,2\ldots,m$. \end{theorem} The next theorem states the main convergence result concerning additive networks. \begin{theorem} If ${\mathcal N}_m$ is an additively interconnected network where the generating series for each node $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$, then the generating series for every input-output map $d_{ji}\in\re_{LC}\langle\langle X_i\rangle\rangle$. More specifically, if $K_i,M_i$ denote the growth constants for $c_i$, then for all $i,j=1,2,\ldots,m$ \begin{displaymath} \abs{\langle d_{ji},\eta\rangle}<KM^{\abs{\eta}}\abs{\eta}!,\;\;\forall \eta\in X^\ast \end{displaymath} for some $K>0$ and any $M>M_{\rm inf}$, where \begin{equation} \label{eq:M-inf-additive-network} M_{\rm inf}=\frac{\bar{M}}{1-m\bar{K} \ln\left(1+\frac{1}{m\bar{K}}\right)} \end{equation} with $\bar{K}=\max_{i} K_i$ and $\bar{M}=\max_{i} M_i$. \end{theorem} \noindent{\em Proof:}$\;\;$ It is first shown that each generating series $d_{ji}$ is locally convergent. 
Consider the case where every node series $c_i\in\re_{LC}\langle\langle X_i\rangle\rangle$ is a {\em maximal series} $\bar{c}_i:=\sum_{\eta\in X^\ast}K_iM_i^{\abs{\eta}}\abs{\eta}!\, \eta$. That is, every coefficient of $\bar{c}_i$ is growing at its maximal rate. While $y_i=F_{c_i}[u_i]$ may not have a finite dimensional state space realization, it is easily shown that a maximal series has the realization \begin{displaymath} \dot{z}_i=\frac{M_i}{K_i}z_i^2(1+u_i),\;\;z_i(0)=K_i,\;\;y_i=z_i \end{displaymath} \hspace*{-0.07in}\cite[Lemma 3]{Thitsa-Gray_12}. Therefore, the corresponding network can be realized by \begin{equation} \label{eq:maximal-series-state-space} \dot{z}_i=\frac{M_i}{K_i}z_i^2\left(1+\sum_{j=1}^m W_{ij}z_j+v_i\right)\!\!,\;\;z_i(0)=K_i,\;\;y_i=z_i, \end{equation} $i=1,2,\ldots,m$. As this realization of the input-output map $v\mapsto y$ is polynomial, it is clearly real analytic. Therefore, every generating series for $v_i\mapsto y_j$, say $\bar{d}_{ji}$, must be locally convergent \cite[Lemma 4.2]{Sussmann_83}. The claim now is that $d_{ji}$ must also be locally convergent since $\abs{\langle d_{ji},\eta\rangle}\leq \langle \bar{d}_{ji},\eta\rangle$ for all $\eta\in X^\ast$. This inequality is most easily deduced from the formal realization of $v_i\mapsto y_j$ given in Theorem~\ref{th:additive-interconnections}, where the Lie derivatives used to compute the coefficients of $d_{ji}$ will all be upper bounded in magnitude by the Lie derivatives computed using maximal series. Next, a suitable geometric growth constant for the network ${\mathcal N}_m$ is determined. First observe that the growth constants $\bar{K}$ and $\bar{M}$ constitute a worst case maximum growth rate for every node in the network. 
In light of the formal representation of any $d_{ji}$ in Theorem~\ref{th:additive-interconnections}, the growth rate of $d_{ji}$ is upper bounded by the growth rate of the natural response $\langle \bar{d}_{ji},x_0^k\rangle=L_{V_0}^k\hat{c}_j(\mbf{1})$, $k\geq 0$, where $W_{ij}=1$ for all $i,j$, and every non-trivial component of $\hat{c}_j$ is the maximal series $\bar{c}=\sum_{\eta\in X^\ast}\bar{K} \bar{M}^{\abs{\eta}}\abs{\eta}!\, \eta$. (See \cite[Lemma 7]{Thitsa-Gray_12} for an alternative approach when $m=1,2$.) From the symmetry of such a {\em maximal network}, $z_i=z_j$ for all $i,j$. Applying these conditions to \rref{eq:maximal-series-state-space}, the natural response at each node is given by the solution of the Abel differential equation \begin{equation} \label{eq:maximal-network-Abel-equation} \dot{z}=\frac{\bar{M}}{\bar{K}}(z^2+mz^3),\;\;z(0)=\bar{K}. \end{equation} It can be directly verified that this equation has the solution \begin{displaymath} z(t)=\frac{-\frac{1}{m}}{1+\mathcal{W}\left[-\left(1+\frac{1}{m\bar{K}}\right)\exp\left(\frac{\bar{M}}{m\bar{K}}t-\left(1+\frac{1}{m\bar{K}}\right)\right)\right]}, \end{displaymath} where $\mathcal{W}$ denotes the Lambert $W$-function, that is, the multi-valued inverse of $f(x)=x\exp(x)$; matching the initial condition $z(0)=\bar{K}$ selects the lower real branch $\mathcal{W}_{-1}$ \cite{Corless-etal_96}. As $\mathcal{W}_{-1}$ is holomorphic away from its branch point at $-1/{\rm e}$, and its argument above is bounded away from this point at $t=0$, $z(t)$ is analytic at $t=0$. The corresponding Taylor series has a radius of convergence determined by the singularity nearest to the origin, in this case \begin{displaymath} t^\ast=\frac{1}{\bar{M}}\left(1-m\bar{K}\ln\left(1+\frac{1}{m\bar{K}}\right)\right). \end{displaymath} Applying a well known theorem from complex analysis (see \cite[Theorem~2.4.3]{Wilf_94}) gives the infimum of all geometric growth constants for the maximal network, namely $M_{\rm inf}=1/t^\ast$.
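As a numerical aside (not part of the proof), both the bound \rref{eq:M-inf-additive-network} and the coefficient growth can be cross-checked: writing $z(t)=\sum_{k\geq 0}c_kt^k$, the Abel equation \rref{eq:maximal-network-Abel-equation} yields a convolution recurrence for the $c_k$. The Python sketch below is our own verification device (exact rational arithmetic); it computes the natural response coefficients $\langle \bar{d},x_0^{k}\rangle=k!\,c_k$ and the bound $M_{\rm inf}$.

```python
from fractions import Fraction
from math import factorial, log


def M_inf(m, Kbar, Mbar):
    # Infimum of the geometric growth constants for the maximal network.
    return Mbar / (1 - m * Kbar * log(1 + 1 / (m * Kbar)))


def natural_sequence(m, terms, K=1, M=1):
    """Coefficients <d, x_0^k> = k! c_k of the natural response, where
    z(t) = sum_k c_k t^k solves z' = (M/K)(z^2 + m z^3), z(0) = K.
    List index k corresponds to a_{k+1} in the integer-sequence table."""
    c = [Fraction(K)]
    for n in range(terms - 1):
        # convolution coefficients of z^2 and z^3 at order n
        z2 = sum(c[i] * c[n - i] for i in range(n + 1))
        z3 = sum(c[i] * c[j] * c[n - i - j]
                 for i in range(n + 1) for j in range(n + 1 - i))
        c.append(Fraction(M, K) * (z2 + m * z3) / (n + 1))
    return [factorial(k) * ck for k, ck in enumerate(c)]
```

For $m=1$ and $\bar{K}=\bar{M}=1$ this reproduces the sequence $1,2,10,82,938,13778,\ldots$ and $M_{\rm inf}\approx 3.2589$; the successive coefficient ratios approach $M_{\rm inf}$ from below.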
(Note that the function $\lambda(x)=1-x\ln(1+{1/x})$ is a decreasing function, which further justifies using the maximum $K_i$ in the network as the worst case.) Since for any $M>M_{\rm inf}$ there is a $K>0$ to upper bound the fastest coefficient growth in the maximal network, the generating series for every node in the original network must also be upper bounded by this growth rate. \hfill\bull \vspace*{0.05in} It is worth noting that \rref{eq:M-inf-additive-network} is in fact identical to the growth constant identified for unity feedback systems with $m$ inputs as described in \cite[Corollary~2]{Thitsa-Gray_12}. While the network topologies are clearly distinct, this point of tangency is derived from the fact that unity feedback systems and additive maximal networks both have natural responses satisfying \rref{eq:maximal-network-Abel-equation}. \begin{table}[tb] \caption{Integer sequences generated by maximal additive network with unity growth constants} \label{tbl:network-series} \vspace*{-0.1in} \begin{center} \begin{tabular}{c|l|c|c} \toprule $m$ & \hspace*{0.8in}$a_n$ & $M_{\rm inf}$ & $\hat{M}_n$ \\ \midrule 1 & 1, 2, 10, 82, 938, 13778, 247210, $\ldots$ & 3.2589 & 3.22634 \\ 2 & 1, 3, 24, 318, 5892, 140304, $\ldots$ & 5.2891 & 5.23618 \\ 3 & 1, 4, 44, 804, 20556, 675588, $\ldots$ & 7.3017 & 7.22873 \\ 4 & 1, 5, 70, 1630, 53120, 2225480, $\ldots$ & 9.3088 & 9.21567 \\ 5 & 1, 6, 102, 2886, 114294, 5819190, $\ldots$ & 11.3132 & 11.2001 \\ 6 & 1, 7, 140, 4662, 217308, 13022688,$\ldots$ & 13.3163 & 13.1831 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[scale=0.55]{three_node_simulation_worst_case} \end{center} \vspace*{-0.2in} \caption{Natural response of three node maximal network in Example~\ref{ex:three-node-maximal-network-worst-case}.} \label{fig:three-node-maximal-network-worst-case} \end{figure} \begin{example} \label{ex:three-node-maximal-network-worst-case} {\rm Consider a maximal additive 
network where $K_i=M_i=1$, $i=1,2,\ldots,m$. The Taylor series of the natural response has integer coefficients $a_n$, $n\geq 1$, as shown in Table~\ref{tbl:network-series}. The coefficients when $m=1$ correspond to the OEIS integer sequence A112487 \cite{OEIS}. The table also shows the growth rate $M_{\rm inf}$ computed from \rref{eq:M-inf-additive-network} and an estimate of the growth constant $M$ computed from $\hat{M}_n=a_{n+1}/(na_{n})$ when $n=50$. The corresponding three node network was simulated in MATLAB for the zero input case. The node responses, which are identical, are shown in Figure~\ref{fig:three-node-maximal-network-worst-case}. Since the coefficients of every generating series are positive, it is known that the natural response of every node will have a finite escape time at $t=t^\ast$ (see \cite[Theorem 11]{Thitsa-Gray_12}). In this case, $t^\ast=1/M_{\rm inf}=0.1370$, which is what was observed in the simulation. } \hfill\openbull \end{example} \vspace*{0.1in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.55]{three_node_simulation_generic} \end{center} \vspace*{-0.1in} \caption{Natural response of three node network in Example~\ref{ex:three-node-maximal-network-generic}.} \label{fig:three-node-maximal-network-generic} \end{figure} \begin{example} \label{ex:three-node-maximal-network-generic} {\rm Consider a three node additive network involving maximal series with $K_i=i$, $M_i=5-i$ and \begin{displaymath} W=\left[\begin{array}{ccc} 1 & 0.5 & 1 \\ 1 & 1 & 0 \\ 0.25 & 1 & 1 \end{array}\right]. \end{displaymath} Thus, $\bar{K}=3$, $\bar{M}=4$, and $M_{\rm inf}=77.2867$. The node natural responses are shown in Figure~\ref{fig:three-node-maximal-network-generic}. As this network is not maximal, $t^\ast=1/M_{\rm inf}=0.01294$ provides only a lower bound on the escape times of each node.
} \hfill\openbull \end{example} \vspace*{0.1in} \section{Additive Networks of Chen-Fliess Series: Relative Degree} \label{sec:relative-degree} In this section the following question is addressed: When does the generating series of the mapping $v_i\mapsto y_j$ in an additively interconnected network ${\mathcal N}_m$ have a well defined relative degree? The treatment starts with the easiest case, described next. It is assumed throughout that ${\mathcal N}_m$ is comprised of systems with generating series $c_i$ which have relative degree $r_i$ for $i=1,2,\ldots,m$. \begin{definition} The $i$-th node in a network ${\cal N}_m$ is said to be {\em fully connected} if $W_{ij}\neq 0$ for all $j\neq i$. A network ${\cal N}_m$ is said to be {\em fully connected} if every node is fully connected. \end{definition} Note that self-loops, i.e., when $W_{ii}\neq 0$, are not important in the present context as proportional output feedback is easily shown to preserve relative degree \cite{Gray-Venkatesh_SCL19}. \begin{theorem} \label{th:relative-degree-fully-connected} If the $j$-th node in ${\mathcal N}_m$ is fully connected, then the generating series $d_{ji}$ for the mapping $v_i\mapsto y_j$ has relative degree $r_{ji}=r_j+r_i$. \end{theorem} \noindent{\em Proof:}$\;\;$ Observe that the full output at node $j$ is \begin{align*} y_j&=F_{c_j}\left[v_j+\sum_{k=1}^m W_{jk} y_k \right] \\ &=F_{c_j}\left[v_j+\sum_{k,l=1}^m W_{jk} F_{d_{kl}}[v_l]\right]. \end{align*} For any $i\neq j$, that part of $y_j$ in response to $v_i$ acting alone (i.e., $v_l=0$ for $l\neq i$) is given by \begin{align*} y_{j}&=F_{c_j}\Bigg[W_{ji}F_{c_i}[v_i]+\sum_{k=1\atop k\neq i}^m W_{jk} F_{d_{ki}}[v_i]+ \\ &\hspace{0.2in} \sum_{k,l=1\atop l\neq i}^m W_{jk} F_{d_{kl}}[0]\Bigg]. \end{align*} Note that for all $k\neq i$, ${\rm supp}(d_{ki})\subseteq x_0^{r}X^\ast$, where $r\geq r_i+1$, since $v_i$ passes through $F_{c_i}$ in every path leading to the $j$-th node.
In which case, the argument of $F_{c_j}$ above has a generating series with relative degree $r_i$. The conclusion then follows immediately from Theorem~\ref{th:relative-degree-casecade}. \hfill\bull \vspace*{0.05in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{diamond_network} \end{center} \vspace*{-0.2in} \caption{Four node network in Example~\ref{ex:four-node-counterexample}.} \label{fig:four-node-counterexample} \end{figure} \begin{example} \label{ex:four-node-counterexample} {\rm Consider the network shown in Figure~\ref{fig:four-node-counterexample}. The corresponding weighting matrix is \begin{displaymath} W=\left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ W_{21} & 0 & 0 & 0 \\ W_{31} & 0 & 0 & 0 \\ W_{41} & W_{42} & W_{43} & 0 \end{array}\right]. \end{displaymath} The network is clearly {\em not} fully connected, but node $4$ is fully connected assuming $W_{4j}\neq 0$, $j=1,2,3$. Therefore, applying the theorem above gives, for example, that $r_{41}=r_4+r_1$. Suppose now that $W_{41}=0$ so that the theorem no longer applies. Further assume that $r_2=r_3=r$. Observe that \begin{displaymath} u_4=W_{42}F_{c_2}[W_{21}F_{c_1}[v_1]]+W_{43}F_{c_3}[W_{31}F_{c_1}[v_1]], \end{displaymath} and thus, \begin{displaymath} d_{41}=c_4\circ [W_{42}(c_2\circ (W_{21}c_1))+W_{43}(c_3\circ (W_{31}c_1))]. \end{displaymath} Both $c_2\circ (W_{21}c_1)$ and $c_3\circ (W_{31}c_1)$ have relative degree $r+r_1$, but $d_{41}$ can fail to have relative degree. As a simple example, suppose $c_1=c_2=c_4=x_1$ and $c_3=-x_1$ so that $d_{41}=(W_{42}W_{21}-W_{43}W_{31}) x_0^2x_1$. If $W$ is such that $W_{42}W_{21}=W_{43}W_{31}$, then $d_{41}=0$ does not have relative degree. On the other hand, if the symmetry condition $r_2=r_3$ is broken, then it follows that $d_{41}$ has relative degree $r_{41}=r_4+\min (r_2,r_3)+r_1$. } \hfill\openbull \end{example} \vspace*{0.1in} The final case in the example above suggests a sufficient condition for the general case.
Namely, in the absence of these degenerate situations where a node is presented with an input whose underlying generating series does not have relative degree, the relative degree for $d_{ji}$ will be well defined and determined by a path from node $i$ to node $j$ whose {\em accumulated} relative degree is minimal. To make this claim more precise, the following language adapted from signal flow graph theory will be useful. Let ${\mathcal N}_m$ be a given additive network. An {\em edge} is a directed line segment connecting two nodes. A {\em path} is a continuous set of edges connecting two nodes in ${\mathcal N}_m$ and traversed in the direction indicated. A {\em forward path} is a path in which no node is encountered more than once. A {\em loop} is a path that originates and ends on the same node in which no node is encountered more than once. Finally, the {\em subgraph} $G_{ji}$ from node $i$ to node $j$ is the simple graph (i.e., all loops are omitted) consisting of all forward paths connecting node $i$ and node $j$. The following theorems provide sufficient conditions under which the relative degree is well defined for a given input-output map $v_i\mapsto y_j$ in an additive network. Given a subgraph $G_{ji}$, the {\em accumulated relative degree} of node $i$ is $r_i^+=r_i$. If node $k\neq i$ in $G_{ji}$ has $N$ incoming edges from nodes $i_1,i_2,\ldots, i_N$ with accumulated relative degrees $r^+_{i_1},r^+_{i_2},\ldots,r^+_{i_N}$, respectively, then the {\em accumulated relative degree} at node $k$ is \begin{displaymath} r^+_k=r_k+\min\{r^+_{i_1},r^+_{i_2},\ldots,r^+_{i_N}\}. \end{displaymath} Note that this definition does not imply that any mapping defined by the network has relative degree; it simply computes the {\em potential} relative degree of such a mapping should it be well defined. \begin{theorem} \label{th:relative-degree-additive-network-distinct-condition} Let $i$ and $j$ be fixed nodes in ${\mathcal N}_m$.
If at every node $l\neq i$ the accumulated relative degrees of the nodes from every incoming edge are distinct, then the generating series $d_{ji}$ for $v_i\mapsto y_j$ in ${\mathcal N}_m$ has well defined relative degree equal to $r_{ji}=r_j^+$. \end{theorem} \noindent{\em Proof:}$\;\;$ As feedback loops do not affect the relative degree of any forward path, it is sufficient to consider only the subgraph $G_{ji}$. The claim then follows directly from Corollary~\ref{co:relative-degree-multi-sum-distinct}, Theorem~\ref{th:relative-degree-casecade}, and the definition of accumulated relative degree. \hfill\bull \vspace*{0.05in} The distinctness condition in the above theorem can be relaxed by utilizing instead the condition in Corollary~\ref{co:relative-degree-multi-sum-not-distinct}. \begin{theorem} \label{th:relative-degree-additive-network-repeated-condition} Let $i$ and $j$ be fixed nodes in ${\mathcal N}_m$. If at every node $l\neq i$ the accumulated relative degrees of the nodes from every incoming edge satisfy the condition in Corollary~\ref{co:relative-degree-multi-sum-not-distinct}, then the generating series $d_{ji}$ for $v_i\mapsto y_j$ in ${\mathcal N}_m$ has well defined relative degree equal to $r_{ji}=r_j^+$. \end{theorem} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{double_diamond_network} \end{center} \vspace*{-0.1in} \caption{Network in Example~\ref{ex:double-diamond-example}.} \label{fig:double-diamond-example} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{double_diamond_network_subgraph} \end{center} \vspace*{-0.1in} \caption{Subgraph of forward paths for $v_1\mapsto y_7$ in Example~\ref{ex:double-diamond-example}. The relative degree of each generating series $c_i$ is the circled number.
The accumulated relative degree at each node is the number in the triangle.} \label{fig:double-diamond-example-subgraph} \end{figure} \begin{example} \label{ex:double-diamond-example} {\rm Consider the network shown in Figure~\ref{fig:double-diamond-example}, where each weight $W_{ij}\in \{0,1\}$ (i.e., $0\sim\mbox{not connected}$, $1\sim\mbox{connected}$), and the generating series for the nodes are: \begin{align*} c_1 &= K_1 x_1 + 2 x_0 x_1 \\ c_2 &= x_0+K_2 x_0^2 x_2 \\ c_3 &= K_3 x_0 x_3 + 3 x_0^2 x_3^2 \\ c_4 &= 1+K_4 x_0 x_4 - x_0^2 x_4x_0 \\ c_5 &= 4x_0+K_5 x_0^2 x_5 -2x_0^4x_5 \\ c_6 &= K_6 x_6 -x_6^2\\ c_7 &= x_0+2+K_7 x_7+4x_0x_7 \end{align*} with $K_i\neq 0$ in every case. The subgraph of forward paths is shown in Figure~\ref{fig:double-diamond-example-subgraph}. The relative degree of the generating series at each node is the circled number shown next to each node. The accumulated relative degree at each node is the number in the triangle. The goal is to determine the relative degree of the mapping $v_1\mapsto y_7$, provided it is well defined. Observe that only nodes $4$, $5$, and $7$ have more than one incoming edge. In each case, the accumulated relative degrees are distinct, namely, $3,4$; $4,5$; and $6,7$, respectively. Therefore, Theorem~\ref{th:relative-degree-additive-network-distinct-condition} applies, and $r_{71}=7$.
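The accumulated-relative-degree recursion is easily automated. In the Python sketch below, the node relative degrees are those of $c_1,\ldots,c_7$ above, while the edge list is our transcription of the subgraph in Figure~\ref{fig:double-diamond-example-subgraph} (an assumption on our part; it reproduces the incoming accumulated degrees $3,4$; $4,5$; and $6,7$ just quoted).

```python
def accumulated_relative_degree(r, edges, source):
    """Accumulated relative degrees r_k^+ over a subgraph of forward paths.

    r      -- dict: node -> relative degree r_k of its generating series
    edges  -- directed edge list [(tail, head), ...], loops omitted
    source -- the input node i, for which r_i^+ = r_i
    Implements r_k^+ = r_k + min of the accumulated degrees over all
    incoming edges; a node is resolved once all its predecessors are
    (the subgraph of forward paths is acyclic).
    """
    preds = {k: [a for (a, b) in edges if b == k] for k in r}
    rplus = {source: r[source]}
    while len(rplus) < len(r):
        progress = False
        for k in r:
            if k not in rplus and preds[k] and all(p in rplus for p in preds[k]):
                rplus[k] = r[k] + min(rplus[p] for p in preds[k])
                progress = True
        if not progress:
            raise ValueError("some node is unreachable from the source")
    return rplus


# Relative degrees of c_1, ..., c_7 and the (assumed) subgraph edge list
r = {1: 1, 2: 3, 3: 2, 4: 2, 5: 3, 6: 1, 7: 1}
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (2, 5), (4, 5), (4, 6), (5, 7), (6, 7)]
rp = accumulated_relative_degree(r, edges, 1)
assert rp[7] == 7  # r_71 = 7
```

Running the sketch yields the same value $r_{71}=7$ as the hand application of Theorem~\ref{th:relative-degree-additive-network-distinct-condition}.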
To independently verify this claim, the generating series $d_{71}$ was computed using the full network via Theorem~\ref{th:additive-interconnections} with the aid of Mathematica and found to be \begin{displaymath} d_{71}=d_{71,N}+K_1K_3K_4K_6K_7x_0^6x_1+x_0^6e, \end{displaymath} where \begin{align*} d_{71,N} = & {\,} x_0+ (4 K_7 + K_6 K_7)x_0^2 +(16 + 4 K_6 - K_7)x_0^3+ \\ & {\,} (-4 + K_5 K_7 + 2 K_4 K_6 K_7) x_0^4 + (4 K_5 + 8 K_4 K_6- \\ & {\,} 8 K_4 K_7 + K_5 K_7 + 2 K_4 K_6 K_7) x_0^5 + \cdots\\ e = & {\,} (4 K_1 K_3 K_4 K_6 - 6 K_1 K_3 K_4 K_7 + K_1 K_2 K_5 K_7+ \\ & {\,}K_1 K_2 K_4 K_6 K_7 + 2 K_3 K_4 K_6 K_7) x_0 x_1 + \cdots \end{align*} The relative degree of $d_{71}$ is 7 as expected. } \hfill\openbull \end{example} \vspace*{0.1in} An additive network ${\mathcal N}_m$ is said to have {\em complete relative degree} if every mapping $v_i\mapsto y_j$, $i,j=1,2,\ldots,m$ has relative degree. From Theorem~\ref{th:relative-degree-fully-connected} it is immediate that fully connected networks have this property. Another class of networks sharing this property is given in the following theorem. It states that in some sense the property of a network having complete relative degree is {\em generic}. \begin{theorem} Consider an additive network ${\mathcal N}_m$ where the weighting matrix has entries $W_{ij}\in\{0,1\}$. If the unity weights are replaced with continuous random variables, then every sample network has complete relative degree. \end{theorem} \noindent{\em Proof:}$\;\;$ At any given node, the incoming nodes may or may not have distinct accumulated relative degree. In the case where they do, then Theorem~\ref{th:relative-degree-additive-network-distinct-condition} applies, otherwise, Theorem~\ref{th:relative-degree-additive-network-repeated-condition} applies provided the condition for multiplicities greater than one can be met. 
Specifically, at node $k$ with incoming edges from nodes $j_1,j_2,\ldots, j_N$ with accumulated relative degrees $r^+_{j_1},r^+_{j_2},\ldots,r^+_{j_N}$, it is required that if $r^+_j$ is repeated $s_j>1$ times then \begin{align*} &W_{kj(1)}\langle d_{j(1)i},x_0^{r^+_j-1}x_i\rangle+ W_{kj(2)} \langle d_{j(2)i},x_0^{r^+_j-1}x_i\rangle+ \cdots \\ &\hspace*{0.2in}+W_{kj(s_j)}\langle d_{j(s_j)i},x_0^{r^+_j-1}x_i\rangle\neq 0, \end{align*} where $j(l)\in\{j_1,j_2,\ldots, j_N\}$, and the $W_{kj_l}$ are random variables with any continuous distribution(s). But this condition is always true with probability one, and hence, the theorem is proved. \hfill\bull \vspace*{0.05in} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{random_network} \end{center} \vspace*{-0.1in} \caption{Estimate of density function for $\abs{\langle d_{41},x_0^2x_1\rangle}$ in Example~\ref{exer:random-network}.} \label{fig:random-network-density-function} \end{figure} \begin{example} \label{exer:random-network} {\rm Reconsider Example~\ref{ex:four-node-counterexample}, where $W_{41} = 0$ and now $W_{21}$, $W_{31}$, $W_{42}$ and $W_{43}$ are i.i.d. random variables with a uniform distribution on $[0,1]$. An estimate of the density function for the random variable $\abs{\langle d_{41},x_0^2x_1\rangle}$ is shown in Figure~\ref{fig:random-network-density-function}. In every case of the 1000 random networks generated, $d_{41}$ had relative degree $r=3$ as expected. } \hfill\openbull \end{example} \vspace*{0.1in} \section{Conclusions} Two basic properties were established for an additive network of input-output systems where each node of the network is modeled by a convergent Chen-Fliess series. First, it was shown that every input-output map between a pair of nodes has a locally convergent Chen-Fliess series representation. An explicit and in some cases achievable growth bound on the coefficients was computed using the notion of a maximal network.
Second, sufficient conditions were given under which the input-output map between a pair of nodes has a well defined relative degree as defined by its generating series. This analysis led to the conclusion that this relative degree property is generic when the connection strengths between nodes are randomized.
\section{\label{sec:intro}Introduction} In magnetic systems where the antisymmetric Dzyaloshinskii-Moriya interaction (DMI) is present~\cite{Dzyal58,moriya1960new}, topological and chiral features emerge. The DMI is responsible for the stabilization of localized magnetic textures with chiral character, such as the skyrmion lattice~\cite{Muehlbauer09,Yu10,Yu11,Wilhelm11,kezsmarki2015neel,wu2020neel,Laliena18b} and the single skyrmion state~\cite{Bogdanov94a,Bogdanov94b,Bogdanov99,sampaio2013nucleation}. In monoaxial helimagnets, such as CrNb$_3$S$_6$, CrTa$_3$S$_6$, CuB$_2$O$_4$, CuCsCl$_3$, Yb(Ni$_{1-x}$Cu$_x$)$_3$Al$_9$ and Ba$_2$CuGe$_2$O$_7$~\cite{Moriya82,Kousaka16,Roessli01,Adachi80,Ohara14,Matsumura17,Zheludev97,togawa2012chiral}, the DMI favors the rotation of the magnetization along a single chiral axis. In this case, analogously to the skyrmion lattice and single skyrmion in bulk or interfacial DMI systems, a chiral soliton lattice (CSL)~\cite{Dzyal64, Miyadai83, izyumov1984modulated, togawa2012chiral, Kishine15, Togawa16, Laliena16a, Laliena16b, Laliena17a} and individual chiral solitons (CSs) can be stabilized~\cite{victor2020dynamics}. Both objects, skyrmions and chiral solitons, exhibit interesting magnetoresistive~\cite{hanneken2015electrical,Togawa13,Togawa15} and mobility~\cite{sampaio2013nucleation,iwasaki2013universal,victor2020dynamics} properties, with their particular imprint related to their structure and topological nature. These properties make them good candidates for spintronic devices~\cite{skyrmionicsroadmap2020}. Besides their application to spintronic devices, new electromagnetic properties of magnetic textures are being explored based on the concept of emergent electrodynamics~\cite{schulz2012emergent,Nagaosa13}.
It was theoretically predicted, and experimentally confirmed in the compound Gd$_3$Ru$_4$Al$_{12}$, that the spiral structure encountered in helimagnets can effectively work as an electromagnetic inductor~\cite{nagaosa2019emergent,yokouchi2020emergent}. This property of the spiral structure allows for the implementation of large inductances at small scales. These potential technological applications motivate the study of the CS and CSL dynamics in monoaxial helimagnets under electric current. The response of the CSL to external currents has been theoretically studied in the linear response limit corresponding to small currents and weak fields~\cite{Kishine10,Tokushuku17}. The response of a single CS to external currents has been recently analyzed, and it has been shown that the single soliton is destabilized and can be destroyed by large currents~\cite{victor2020dynamics}. Here, we study the response of the CSL in a wide range of currents and magnetic fields. We show that both the CSL and the single CS have the same mobility in the steady motion regime, and that the CSL is also destabilized by large currents. Our results are relevant not only within the field of chiral magnetism but also for the design of spintronic and electronic devices. The article is organized as follows: in Sec. \ref{sec:model} we introduce the model for a monoaxial chiral helimagnet under the effect of a spin-transfer torque, we present the main results on the CSL stability and subcritical dynamics in Sec. \ref{sec:sub}, we continue in Sec. \ref{sec:critic} with the study of the dynamical behavior in the supercritical regime, and in Sec. \ref{sec:phase_diagram} we study the $j-B$ phase diagram and the critical current at constant density of solitons. Finally, we summarize our findings in Sec. \ref{sec:conclusions}.
\section{\label{sec:model}Micromagnetic model for a monoaxial helimagnet under external currents} The time evolution of the magnetization field in a ferromagnet under current induced external torque is governed by the modified Landau-Lifshitz-Gilbert (LLG) equation: \begin{equation} \label{ec:llg_corr} \frac{\partial\bm{n}}{\partial t}=\gamma \bm{B}_{\mathrm{eff}}\times\bm{n}+\alpha \bm{n}\times\left(\frac{\partial\bm{n}}{\partial t}\right) + \bm{\tau}, \end{equation} where $\alpha$ and $\gamma$ are the Gilbert damping and the gyromagnetic constant, respectively. The vector field $\bm{B}_{\mathrm{eff}}(\bm{r})=-\frac{1}{M_{\mathrm{S}}}\frac{\delta E}{\delta\bm{n}(\bm{r})}$ is the effective field derived from the energy functional $E$. The unimodular vector field $\bm{n}(\bm{r})=\bm{M}(\bm{r})/M_{\mathrm{S}}$ describes the local magnetization direction and $M_{\mathrm{S}}$ is the saturation magnetization. The last term in Eq. (\ref{ec:llg_corr}), $\bm{\tau}$, is the spin-transfer torque due to the spin-polarized current and it is given by: \begin{equation} \label{ec:stt_zhang_li} \bm{\tau}=- (\bm{u}\cdot\nabla)\bm{n} + \beta \bm{n}\times(\bm{u}\cdot\nabla)\bm{n}, \end{equation} where $\bm{u} = - b_j \bm{j}$ and $b_j = \frac{P\mu_{\mathrm{B}}}{|e| M_{\text{S}}}$ with $P$ the polarization degree, $e$ the electron charge, and $\mu_{\mathrm{B}}$ the Bohr magneton. Notice that $\bm{u}$ points in the direction of the electron motion while the current density $\bm{j}$ points in the opposite direction. The first term is the reactive (adiabatic) torque and the second term is the dissipative (non-adiabatic) torque, whose strength is controlled by the nonadiabaticity coefficient $\beta$~\cite{Zhang04, Manchon19}. 
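As a quick numerical illustration of the spin-transfer torque defined above, the sketch below (in reduced units, with function and variable names of our own choosing) evaluates the Zhang-Li torque on a discretized helical texture and checks two properties that follow directly from the definition: both torque terms are orthogonal to $\bm{n}$, and for a helix of wave number $q$ the torque magnitude is $|u|q\sqrt{1+\beta^2}$.

```python
import numpy as np

def zhang_li_torque(n, dz, u, beta):
    """Zhang-Li torque tau = -(u.grad)n + beta * n x (u.grad)n for a
    texture n(z) sampled on a periodic grid (n has shape (N, 3)),
    with both the current and the gradient along z."""
    dn_dz = (np.roll(n, -1, axis=0) - np.roll(n, 1, axis=0)) / (2.0 * dz)
    return -u * dn_dz + beta * np.cross(n, u * dn_dz)

# Helical texture n = (cos qz, sin qz, 0) in reduced units (pitch = 2*pi)
N, q, u, beta = 256, 1.0, 1.0, 0.02
z = np.arange(N) * (2 * np.pi / N)
n = np.stack([np.cos(q * z), np.sin(q * z), np.zeros(N)], axis=1)

tau = zhang_li_torque(n, 2 * np.pi / N, u, beta)
# Both torque terms are orthogonal to n, so tau . n must vanish
print(np.max(np.abs(np.sum(tau * n, axis=1))))
```

This cell-by-cell finite-difference evaluation is essentially what a micromagnetic code does when applying the spin-transfer torque term.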
To describe a monoaxial chiral ferromagnet we consider a model that includes the ferromagnetic exchange interaction, the monoaxial DMI and a single-ion anisotropy, characterized by the stiffness constant $A$, the DMI strength constant $D$, and the anisotropy constant $K$, respectively. Thus the magnetic energy functional is $E[\bm{n}]=\int d^{3}\bm{r} e(\bm{r})$, and the energy density $e(\bm{r})$ is given by \begin{equation} \label{eq:ener_laliena} e(\bm{r})=A\sum_{i}\left(\partial_{i}\bm{n}\right)^2-D\bm{\hat{z}}\cdot\left(\bm{n}\times\partial_{z}\bm{n}\right)-K n_{z}^{2}-M_{\mathrm{S}}\bm{B}\cdot\bm{n}, \end{equation} where the index $i$ runs over $x,y,z$, the chiral axis is along $\bm{\hat{z}}$ and $\bm{B}$ is the external magnetic field. The effects of the dipolar interaction are effectively taken into account in the uniaxial anisotropy term, which is correct for magnetization fields that depend only on the $z$ coordinate, such as those considered in this work. The corresponding effective field in Eq. (\ref{ec:llg_corr}) reads: \begin{equation} \label{eq:b_eff} \bm{B}_{\mathrm{eff}}=\frac{2}{M_{\mathrm{S}}}\left[ A\nabla^{2}\bm{n}-D\bm{\hat{z}}\times\partial_{z}\bm{n}+Kn_z\bm{\hat{z}}+\frac{M_{\mathrm{S}}}{2}\bm{B}\right]. \end{equation} \begin{figure}[t!] \includegraphics[width=8cm]{fig_1.png} \caption{The magnetization field for different configurations in a monoaxial chiral magnet: at zero magnetic field the configuration corresponds to the helical state (HL) with period $L_{0}$; for a magnetic field along the chiral axis, the magnetization corresponds to the conical state (CN); if the magnetic field is applied in the direction perpendicular to the chiral axis, the magnetic state corresponds to a chiral soliton lattice (CSL), which can be conceived as a regular arrangement of chiral solitons (CS). The color code represents the $n_{y}$ component: blue (yellow) for $n_{y} = -1$ (+1). \label{fig:conf_sol}} \end{figure} The model just described possesses a rich phenomenology.
Without applied current and at zero magnetic field, the magnetization forms a helical structure (HL) with the propagation vector $\bm{q}_{0}$ aligned with the chiral axis (see Fig.~\ref{fig:conf_sol}). This means that the magnetization is contained within the $x-y$ plane but rotates around the $z$ axis. If a magnetic field is applied along the chiral axis, the helical state features a conical deformation leading to a conical state (CN) as shown in Fig.~\ref{fig:conf_sol}. By increasing the magnetic field, the system reaches a ferromagnetic state, with the magnetization pointing in the $z$ direction~\cite{Miyadai83, Ghimire13, Chapman14, Laliena16a, Laliena17a}. Instead, if a magnetic field is applied in a direction perpendicular to the chiral axis, say $\bm{B}=B\bm{\hat{y}}$, the helical state is distorted and a CSL is formed (Fig.~\ref{fig:conf_sol}). The structure of the CSL can be transformed into that of the HL if the magnetic field is gradually reduced down to zero. The density of solitons decreases with the external field $B$, so that the distance between consecutive solitons increases according to the relation \cite{togawa2012chiral,Dzyal64,izyumov1984modulated,kishine2005synthesis} \begin{equation} \label{eq:l_teo} \frac{L(B)}{L_0}=\frac{4\tilde{K}(k)\tilde{E}(k)}{\pi^{2}}, \end{equation} where $L_{0}=4\pi A/D$ is the period of the zero-field helical state, $\tilde{K}(k)$ and $\tilde{E}(k)$ are the complete elliptic integrals of the first and second kind, respectively, and $k$ solves the equation \begin{equation} \label{eq:l_teo2} \frac{k}{\tilde{E}(k)}=\sqrt{\frac{B}{B_{c}}}. \end{equation} The model described by Eq. \eqref{eq:ener_laliena} applies to a wide range of monoaxial chiral helimagnets.
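The period relation above is straightforward to evaluate numerically. The sketch below (a minimal illustration of ours) solves the modulus equation for $k$ and returns $L(B)/L_0$; note that SciPy's elliptic integrals take the parameter $m=k^2$ rather than the modulus $k$. As a cross-check it also evaluates $L_0=4\pi A/D$ and a closed form for the critical field, $B_c=(\pi/4)^2\,2Aq_0^2/M_\mathrm{S}$ with $q_0=D/2A$; this expression is our assumption, obtained by combining the reduced field $h_y$ defined in the next section with the standard chiral sine-Gordon critical value $h_y=(\pi/4)^2$, and it reproduces the values quoted below for CrNb$_3$S$_6$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipk

def csl_period_ratio(b):
    """L(B)/L0 from the period relation, with b = B/B_c in [0, 1)."""
    if b == 0.0:
        return 1.0
    # Solve k / E(k) = sqrt(B/B_c) for the modulus k; SciPy's elliptic
    # integrals take the parameter m = k**2, not the modulus k.
    k = brentq(lambda k: k / ellipe(k * k) - np.sqrt(b), 1e-12, 1.0 - 1e-12)
    return 4.0 * ellipk(k * k) * ellipe(k * k) / np.pi**2

# Cross-check with the CrNb3S6 parameters quoted in the next paragraph
A, D, Ms = 1.42e-12, 369e-6, 129e3      # J/m, J/m^2, A/m
L0 = 4 * np.pi * A / D                  # zero-field helical pitch
q0 = D / (2 * A)
# Critical field: our assumed closed form (pi/4)^2 in the reduced
# units h_y = Ms*B/(2*A*q0^2) defined in the next section
Bc = (np.pi / 4)**2 * 2 * A * q0**2 / Ms

print(L0 * 1e9, Bc * 1e3)               # ~48 nm and ~230 mT
print([round(csl_period_ratio(b), 4) for b in (0.0, 0.5, 0.9, 0.99)])
```

The period ratio grows monotonically with $B$ and diverges logarithmically as $B\to B_c$, where the solitons become infinitely dilute.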
In particular, we shall consider $A=1.42 \un{pJ/m}$, $D=369 \un{\mu J/m^{2}}$, $K=-124 \un{kJ/m^3}$ and $M_{\mathrm{S}}=129 \un{kA/m}$, which reproduce the phenomenology of the CrNb$_{3}$S$_{6}$ compound~\cite{togawa2012chiral,Dzyal64,Miyadai83,izyumov1984modulated,kishine2005synthesis}. The zero-field helical pitch $L_{0}\approx48$ nm and the critical field $B_{c}\approx 230$ mT for the transition from the chiral soliton lattice to the forced ferromagnetic state in a transverse magnetic field are well described by the previous set of parameters~\cite{victor2020dynamics,Osorio2021}. In the following, we shall study the effect of an external electric current applied along the chiral axis when the system is subjected to a magnetic field applied perpendicular to the chiral axis. Henceforth, we consider a magnetic field along the $\bm{\hat{y}}$ direction, $\bm{B}=B\bm{\hat{y}}$. \section{\label{sec:sub}Steady motion of the Chiral Soliton Lattice} Since the norm of the magnetization $\bm{n}$ is constant, there are only two degrees of freedom and it is useful to use the polar parametrization \begin{equation} \label{eq:polar_param} \bm{n} = -\sin \theta \sin \varphi \, \bm{\hat{x}} + \sin \theta \cos \varphi \, \bm{\hat{y}} + \cos \theta \, \bm{\hat{z}}, \end{equation} with the direction $\bm{\hat{z}}$ aligned with the chiral axis. Steady solutions of the LLG equation, where a magnetic texture rigidly moves at a constant velocity, exist if there is an applied electric current which delivers a torque on the magnetization. In this case the magnetic state is characterized by functions $\theta(w)$ and $\varphi(w)$ depending on $w = q_0(z-vt)$, with $v$ a constant velocity and $q_0 = D/2A$.
Setting the current to $\bm{j} = -j \bm{\hat{z}}$, the LLG equations in the steady state can be written in the form \begin{widetext} \begin{eqnarray} \theta^{\prime\prime} &=& (\varphi^{\prime\,2}-2\varphi^\prime + \kappa)\sin\theta\cos\theta - h_y\cos\theta\cos\varphi - \Omega\theta^\prime + \Gamma\sin\theta\varphi^\prime, \label{eq:stationary1} \\[4pt] \sin\theta\varphi^{\prime\prime} &=& h_y\sin\varphi - 2(\varphi^\prime-1)\cos\theta\theta^\prime - \Gamma\theta^\prime - \Omega\sin\theta\varphi^\prime, \label{eq:stationary2} \end{eqnarray} \end{widetext} where $\kappa=K/Aq_0^2$ and $h_y=M_\mathrm{S}B/2Aq_0^2$. The primes indicate derivatives with respect to the $w$ variable. The parameters $\Omega$ and $\Gamma$ are given by \begin{equation} \label{eq:Omega_Gamma} \Omega = \frac{\alpha}{v_0} \left( v - \frac{\beta}{\alpha} b_j j \right), \quad \Gamma = \frac{1}{v_0} \left(v - b_j j \right), \end{equation} with $v_0 = 2 \gamma A q_0/M_\mathrm{S}$. When the current is applied to the CSL, the steady solution is expected to be also periodic and thus the steady equations are solved for $z$ within an interval of length equal to a period, $L$. This means $w\in[-w_L,w_L]$ with $w_L=q_0L/2$, and then $\varphi(w)$ and $\theta(w)$ satisfy the boundary conditions \begin{gather} \varphi(-w_L) = 0, \;\; \varphi(w_L) = 2 \pi, \;\; \varphi'(w_L) = \varphi'(-w_L), \label{eq:BC1} \\ \theta(-w_L) = \theta(w_L), \;\;\; \theta'(w_L) = \theta'(-w_L). \label{eq:BC2} \end{gather} These conditions ensure, in a single period, a $2 \pi$ rotation of $\varphi$, periodicity of $\theta$ and continuity of their derivatives. \subsection{Determination of the steady solutions \label{subsec:steady_bvp}} Besides the model parameters and the applied magnetic field, Eqs.~\eqref{eq:stationary1} and \eqref{eq:stationary2} contain a priori two independent free parameters, $\Omega$ and $\Gamma$, or, equivalently, $j$ and $v$. 
The value of $j$ can be arbitrarily chosen since it corresponds to an external physical parameter which can be varied at will. However, we expect the velocity $v$, which has been introduced in the ansatz for the steady state solution, to be determined by the applied current. This is indeed what happens, since the boundary value problem defined by Eqs.~\eqref{eq:stationary1} and \eqref{eq:stationary2} and the boundary conditions \eqref{eq:BC1} and \eqref{eq:BC2} has a solution \textit{only if} $\Omega=0$, as shown in Appendix~\ref{sec:app-bvp}. In this way the current $j$ uniquely determines the steady state velocity $v$, which is given by \begin{equation} \label{eq:v_vs_j} v=\frac{\beta b_{j}}{\alpha}j. \end{equation} This means that the steady velocity depends linearly on the current density $j$, with a mobility $m = \beta b_j/\alpha$ which is independent of the density of solitons and of the applied field, but still depends on the Gilbert damping, the non-adiabaticity parameter, the saturation magnetization and the polarization degree of the current. Notice that the direction of the velocity vector $\bm{v}$ is opposite to the direction of the current density $\bm{j}$. Interestingly, the relation in Eq. \eqref{eq:v_vs_j} is the same as that found for the steady motion of a single CS in a monoaxial helimagnet \cite{victor2020dynamics} and of a domain wall in an anisotropic ferromagnet~\cite{Thiaville05}. Thus, it seems to be a universal feature of one-dimensional magnetic soliton dynamics. Notice that if the condition in Eq. \eqref{eq:v_vs_j} holds, $\Gamma$ is proportional to the current density: $\Gamma=(\beta/\alpha-1)b_jj/v_0$. For $\Omega=0$ the boundary value problem defined by Eqs. \eqref{eq:stationary1}, \eqref{eq:stationary2}, \eqref{eq:BC1}, and \eqref{eq:BC2} may have one or more solutions, or no solution (this happens if $j$ is large, see below).
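To get a feeling for the magnitudes involved in Eq. \eqref{eq:v_vs_j}, the back-of-envelope sketch below evaluates the mobility $m=\beta b_j/\alpha$ for the parameter values used later in the simulations ($\alpha=0.01$, $\beta=0.02$); the polarization degree $P$ is not quoted in the text, so full polarization $P=1$ is assumed here. Under these assumptions, currents of order $10^{12}\un{A/m^{2}}$ give steady velocities of order $10^{3}\un{m/s}$.

```python
# Back-of-envelope evaluation of v = (beta*b_j/alpha)*j
mu_B = 9.274e-24          # Bohr magneton, J/T
e    = 1.602e-19          # elementary charge, C
Ms   = 129e3              # saturation magnetization of CrNb3S6, A/m
P    = 1.0                # polarization degree (assumed; not quoted in the text)
alpha, beta = 0.01, 0.02  # values used in the simulation section

b_j = P * mu_B / (e * Ms)          # m^3/C
mobility = beta * b_j / alpha      # (m/s) per (A/m^2)

j = 2.0e12                         # a typical subcritical current density, A/m^2
print(b_j, mobility * j)           # steady CSL velocity in m/s
```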
For given $j$ we characterize the solutions by the magnetization tilt angle at the boundary \footnote{The reason to choose the magnetization tilt angle at the boundary instead of, for instance, at the cell center, is related to the method of solution of the boundary value problem (see appendix A).}, $\theta_L = \theta(-w_L) = \theta(w_L)$, which encodes conical deformations of the magnetic configuration. For given $B$ and low values of $|j|$ there is only one solution, but at high enough $|j|$ a second solution appears. The two solutions merge at a critical value of $|j|$, denoted by $j_c$, beyond which the boundary value problem with $\Omega=0$ has no solution. As an example, Fig.~\ref{fig:critic-sol}(a) shows the values of $\theta_L$ as a function of $j$ for $B=50\un{mT}$, with a density of solitons corresponding to the equilibrium CSL at zero current, that is, with $L$ obtained from $B$ by Eq. \eqref{eq:l_teo}. In this case, the value $j_{c}\approx2.34\times10^{12}\un{A/m^{2}}$ is obtained. The continuous blue line corresponds to stable solutions while the solutions indicated by broken red lines are unstable, as detailed in the following. To analyze the stability of the steady solutions we study the dynamics of perturbations about them. Let $\bm{n}_0$ be a steady state and let a perturbation around this state be given by \begin{equation} \label{eq:pert} \bm{n} = \bm{n}_0 + \xi_1 \bm{e}_1 + \xi_2 \bm{e}_2, \end{equation} where $\bm{e}_1$ and $\bm{e}_2$ are two orthonormal vectors perpendicular to $\bm{n}_0$, and $\xi_1$ and $\xi_2$ are the amplitudes of the perturbations. The perturbations $\xi_1$ and $\xi_2$ are functions of the three coordinates $x$, $y$, $z$, and of time, $t$, while the vectors $\bm{n}_0$, $\bm{e}_1$, and $\bm{e}_2$ are functions of the single variable $w=q_0(z-vt)$, where $v$ is given by Eq. \eqref{eq:v_vs_j}. Inserting the form of the magnetization given by Eq. 
\eqref{eq:pert} into the LLG equation and linearizing it in $\xi_1$ and $\xi_2$ we obtain a linear equation for the dynamics of the perturbations. Defining the two component column vector $\xi = (\xi_1, \xi_2)^T$, where the superscript $T$ stands for matrix transpose, the linearized LLG equation relates the time derivative of $\xi$ to a linear second order differential operator acting on $\xi$. The linear operator involves only spatial derivatives and its coefficients are functions only of $w$. Hence, it is convenient to perform a change of variables and consider $\xi$ a function of $t$, $x$, $y$ and $w$. In this form we obtain the equation \begin{equation} \partial_t \xi = \mathcal{S} \xi, \label{eq:lin_LLG} \end{equation} where the coefficients of the linear differential operator $\mathcal{S}$, which is given in Appendix~\ref{sec:app-stability}, depend only on $w$. With the ansatz $\xi = \eta e^{\nu t}$, where $\eta$ is a function of $x$, $y$, and $w$, the evolution equation is reduced to the eigenvalue problem $\mathcal{S}\eta = \nu\eta$. The steady state is stable if and only if all eigenvalues $\nu$ of $\mathcal{S}$ have non positive real part. Figure~\ref{fig:critic-sol}(b) shows the maximum of the real part of the eigenvalues of $\mathcal{S}$ corresponding to the steady solutions of Fig.~\ref{fig:critic-sol}(a). Some details on the computations are given in the Appendix~\ref{sec:app-stability}. We see that the blue branch of Fig.~\ref{fig:critic-sol}(a) represents the values of $\theta_L$ that correspond to stable steady solutions, while the steady solutions corresponding to the dashed branches are unstable. In the range $2.12\times10^{12}\un{A/m^{2}}\lesssim |j| \lesssim 2.34\times10^{12}\un{A/m^{2}}$, we find two possible stable solutions, as $\theta_{L}$ is not single valued and the corresponding eigenvalues have negative real part (see inset in Fig.~\ref{fig:critic-sol}(b)). 
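The stability test just described — discretize the linear operator $\mathcal{S}$ and inspect the sign of $\max\mathrm{Re}(\nu)$ — can be illustrated on a toy operator. The sketch below is purely illustrative (it is not the operator of Appendix~\ref{sec:app-stability}, which couples $\xi_1$ and $\xi_2$): it discretizes a damped advection-diffusion operator on a periodic grid, for which every eigenvalue has real part at most $-a$, so the corresponding steady state would be classified as stable.

```python
import numpy as np

# Toy linearized operator S = d^2/dw^2 - c d/dw - a on a periodic grid
# (illustration only, not the operator derived in the appendix)
M = 200
h = 2 * np.pi / M

I = np.eye(M)
# Periodic central first derivative and second derivative
D1 = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2 * h)
D2 = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / h**2

a, c = 0.5, 1.0
S = D2 - c * D1 - a * I

nu = np.linalg.eigvals(S)
print(nu.real.max())   # negative: every perturbation mode decays
```

For this operator the Fourier modes diagonalize $\mathcal{S}$ exactly, with $\mathrm{Re}(\nu_k)=(2\cos kh-2)/h^2-a$, so the least-damped mode is the uniform one with $\mathrm{Re}(\nu)=-a$.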
In this case, which of the two possible stable solutions is reached will depend on the initial condition. In our numerical simulations we use the CSL as the initial state and we always observe the solution corresponding to the maximum deviation from the $x-y$ plane, i.e. with $\max(|\theta_L - \pi/2|)$, corresponding to the lower (upper) blue section for positive (negative) $j$ values in Fig.~\ref{fig:critic-sol}(a). In conclusion, steady motion states exist only if the applied current density is lower than a critical current $j_c$, which depends strongly on the applied magnetic field and on the density of solitons (see Sec. \ref{sec:phase_diagram}). \begin{figure}[t!] \includegraphics[width=8cm]{fig_2a.png} \includegraphics[width=8cm]{fig_2b.png} \caption{(a) $\theta_L$ and (b) $\max\mathrm{Re}(\nu)$ (in units of $\omega_0$, see Appendix~\ref{sec:app-stability}) as a function of $j$ for $B = 50\un{mT}$. The stable branch of $\theta_L$ corresponds to $\max\mathrm{Re}(\nu) < 0$ and is indicated with a continuous blue line. Unstable branches are indicated with dashed red lines. For this value of the external field, there are no solutions beyond $j_c\approx2.34\times10^{12}\un{A/m^{2}}$. The inset in (b) shows that within the range $2.12\times10^{12}\un{A/m^{2}}\lesssim |j| \lesssim 2.34\times10^{12}\un{A/m^{2}}$ two stable solutions are found (corresponding to two different values of $\theta_{L}$ in (a)). \label{fig:critic-sol}} \end{figure} \subsection{Steady velocity-current response} The stable steady solutions are reproduced by micromagnetic numerical simulations: a steady motion state is obtained after a short transient if a polarized electric current along the chiral axis is applied to a system which is initially at equilibrium, provided the applied current density is lower than a certain critical value. We use the MuMax3 code and implement a monoaxial DMI interaction~\cite{MuMax3,Leliaert18,victor2020dynamics}.
Parameter values for CrNb$_3$S$_6$ (as mentioned in Sec.~\ref{sec:model}) were used in a one-dimensional system of size $R =500\un{nm}$, with a mesh comprising 500 cells of length $\Delta R=1\un{nm}$, and we set $\alpha=0.01$ and $\beta=0.02$ for the Gilbert damping in Eq. \eqref{ec:llg_corr} and the non-adiabaticity constant in Eq. \eqref{ec:stt_zhang_li}, respectively. We perform our simulations using periodic boundary conditions and keeping the number of chiral solitons constant at a given value $N$. The velocity of the CSL can be obtained from the simulations using the autocorrelation $\langle \bm{n}(z,0)\cdot\bm{n}(z,t)\rangle$ where $\langle\cdots\rangle=\frac{1}{R}\int_{0}^{R}\cdots dz$. From the Fourier transform of the time-dependent autocorrelation function, and using the lowest non-zero frequency $\nu_{1}$, we get the CSL velocity as $v=\frac{\nu_{1}R}{2\pi N}$ (see Appendix~\ref{sec:app-autocorrelation}). The velocity as a function of the current is shown in Fig.~\ref{fig:v_vs_j}(a), with excellent agreement between the stationary solution and numerical simulations of the full LLG equations. The fact that the velocity does not depend on the solitons' density, controlled by the external magnetic field, makes it possible to work in a wide field range without modifying the dynamical properties of the CSL. \begin{figure}[t!] \includegraphics[width=8cm]{fig_3a.png} \includegraphics[width=8cm]{fig_3b.png} \caption{(a) The CSL velocity for different numbers of CSs and at different values of magnetic field: $N=10$ and $B=50\un{mT}$, $N=13$ and $B=50\un{mT}$ and $N=13$ and $B=100\un{mT}$. The black line represents the analytical result for the velocity given by Eq.~\eqref{eq:v_vs_j}. (b) The magnetization along the chiral axis as a function of time for different square pulses of current of intensities $j$ and $B=50\un{mT}$.
\label{fig:v_vs_j}} \end{figure} \subsection{Current induced CSL deformation} \label{subsec:def} As shown in Fig.~\ref{fig:v_vs_j}(b), where the $z$ component of the net magnetization is presented, numerical simulations show that the stationary solutions are reached after a transient time of the order of a few nanoseconds. These results correspond to a case with $B = 50 \un{mT}$ and different intensities of the current $j$. It is also important to mention that besides the translational motion of the magnetic texture, the effect of the current involves a deformation of the original CSL into a state with cone-like profile, leading to a net magnetization along the chiral axis, as shown in Fig.~\ref{fig:v_vs_j}(b). At zero magnetic field, the current drives the system to a conical state analogous to the state observed in a cubic helimagnet under the same conditions~\cite{Goto08,masell2020manipulating,masell2020combing}. In this case the distortion is characterized by a uniform component of the magnetization field along the propagation vector $\bm{q}_0$. However, when a transverse magnetic field is applied, the magnetization component parallel to $\bm{q}_0$ is not uniform but exhibits a modulation along the system. Figure~\ref{fig:mz_nxnynz_esf}(a) shows how the magnetization components vary periodically along the $z$ coordinate, as found using micromagnetic simulations for $B=50\un{mT}$ and applying a current $j=1.8\times10^{12}\un{A/m^{2}}$. The distortion of the CSL is described by the form of $\theta(w)$ and $\varphi(w)$ within one period. Figures~\ref{fig:mz_nxnynz_esf}(b) and \ref{fig:mz_nxnynz_esf}(c) compare the steady solutions obtained by solving the boundary value problem and by the micromagnetic simulations for $B=50\un{mT}$. Good agreement between both results is observed. Let us discuss the form of the CSL distortion in the steady motion state.
In the absence of current, $j=0$, the polar angle has the constant value $\theta(w) = \pi/2$, which means that the magnetization lies in the $x-y$ plane. If a current is applied, $\theta(w)$ oscillates between a maximum value for $z = 0, L$ (i.e. $w = \pm w_{L}$) and a minimum value at $z=L/2$ (i.e. $w=0$), as can be appreciated in Fig.~\ref{fig:mz_nxnynz_esf}(b). This means that the tilting of the magnetization towards the chiral axis is maximum at the center of the soliton, i.e. when $n_y$ is minimum, and it is minimum when $n_y$ takes its maximum value. The variation of the angle $\varphi(w)$ indicates how the magnetization field performs the $2 \pi$ rotation, and depends on the applied current and field as shown in Fig.~\ref{fig:mz_nxnynz_esf} (c). \begin{figure}[t!] \includegraphics[width=8cm]{fig_4a.png} \includegraphics[width=4cm]{fig_4b.png} \includegraphics[width=4cm]{fig_4c.png} \includegraphics[width=3.3cm]{fig_4d.png} \includegraphics[width=3.8cm]{fig_4e.png} \caption{ (a) A snapshot of the magnetization field along the sample after the steady motion is reached ($B=50$ mT and $j=1.8\times10^{12}$ A/m$^2$). (b) Polar angle $\theta(z)$ within one period of the CSL, with a pitch $L=50\un{nm}$. (c) Rotation angle $\varphi(z)$ indicating one complete turn in a CSL period. The curve corresponding to $j=1.8\times10^{12}$ A/m$^2$ was displaced in order to present the results more clearly. The dotted lines serve as a guide for the eye and emphasize the difference between the cases with and without applied current. In (b) and (c) the circles represent the results from the micromagnetic simulations while the solid lines are the solutions for the boundary value problem in Eqs. \eqref{eq:stationary1}, \eqref{eq:stationary2}, \eqref{eq:BC1} and \eqref{eq:BC2}. (d) A spherical plot representing the magnetization field over the Bloch sphere. The thick black line represents the CSL before the current is applied.
The thin black line represents the conical state for $B=0\un{mT}$ when the current is applied. The blue line represents the magnetization field in (a). The sphere represents the Bloch sphere spanned by the set of vectors $|\bm{n}|=1$ and the color code (blue-yellow) represents the value of $n_y$: blue (yellow) corresponds to $n_{y}=-1$ $(+1)$. (e) Projection of the conical distortion in the $y-z$ plane. The orientation and opening angles, $\theta_o$ and $\theta_a$, characterizing the cone are indicated. \label{fig:mz_nxnynz_esf}} \end{figure} The distortion of the steady moving CSL can be recast as a conical deformation, akin to the one observed when a magnetic field in the $z$ direction is considered~\cite{Laliena17a,Yonemura17}. The opening of the cone depends on the intensity of the current. Large values of $j$ tend to shrink the cone, and, as a consequence, the value of the net magnetization along the chiral axis grows approximately linearly with the intensity of the current as shown in Fig. \ref{fig:v_vs_j}(b). In this case $\theta(w) < \pi/2$, indicating a conical deformation pointing in the $z$ direction. It is instructive to represent the magnetization field over the Bloch sphere as in Fig. \ref{fig:mz_nxnynz_esf}(d). From this figure it is possible to recognize the effect of the current on the structure of the CSL: its profile changes from a planar (thick black line) to a conical section (thin black and thick blue lines) when a current density is applied. For $B=0\un{mT}$ the cone axis is aligned with the $z$ direction (thin black) whilst for nonzero $B$ the orientation of the axis of the conical distortion slightly departs from the $z$ direction (thick blue).
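For an ideal cone, $\bm{n}(w)=\cos\theta_a\,\bm{\hat{a}}+\sin\theta_a(\cos w\,\bm{\hat{e}}_1+\sin w\,\bm{\hat{e}}_2)$, the two angles can be read off from the period-averaged magnetization: $\langle\bm{n}\rangle=\cos\theta_a\,\bm{\hat{a}}$, so $|\langle\bm{n}\rangle|$ gives the opening angle and the tilt of $\langle\bm{n}\rangle$ from $\bm{\hat{z}}$ gives the orientation angle. The sketch below (our reading of how $\theta_a$ and $\theta_o$ can be extracted; the procedure actually used for the figures is not spelled out in the text) recovers both angles from a synthetic cone.

```python
import numpy as np

def cone_angles(n):
    """Opening angle theta_a and orientation angle theta_o from the
    period-averaged magnetization of a cone-like texture n (shape (M, 3))."""
    m = n.mean(axis=0)   # equals cos(theta_a) * axis for an ideal cone
    theta_a = np.arccos(np.clip(np.linalg.norm(m), -1.0, 1.0))
    axis = m / np.linalg.norm(m)
    theta_o = np.arccos(np.clip(axis[2], -1.0, 1.0))  # tilt from z
    return theta_a, theta_o

# Synthetic cone: axis tilted by theta_o0 in the y-z plane, opening theta_a0
theta_a0, theta_o0 = 1.2, 0.3
ax = np.array([0.0, np.sin(theta_o0), np.cos(theta_o0)])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.cross(ax, e1)
w = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
n = (np.cos(theta_a0) * ax[None, :]
     + np.sin(theta_a0) * (np.cos(w)[:, None] * e1 + np.sin(w)[:, None] * e2))

print(cone_angles(n))
```

Applied to a simulated snapshot, the same averaging over one CSL period would track the evolution of $\theta_a$ and $\theta_o$ shown in Fig.~\ref{fig:angs}(a).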
Since the current deforms the CSL and turns its profile into an oriented-cone, key features of the magnetization dynamics can be characterized by two angles that we call $\theta_{o}$, providing information about the orientation of the cone, and $\theta_{a}$, representing the opening angle of the cone (see Fig. \ref{fig:mz_nxnynz_esf}(e)). Whenever $\theta_o >0$ the $2 \pi$ rotation of the magnetization is around the direction defined by $\theta_o$, and the cone is not perfectly oriented with the chiral axis. Figure~\ref{fig:angs}(a) presents micromagnetic simulation results showing that $\theta_a$ (red circles) and $\theta_o$ (blue diamonds) reach a steady value. It can be observed that $\theta_{o}$ grows from zero (the axis of the cone coincides with the chiral axis) to a finite value in the steady regime, that is, the axis of the cone departs from the chiral axis. On the other hand, the opening angle $\theta_{a}$ decreases with time, from $\pi/2$ to a finite value reached at the steady state. The values of $\theta_{a}$ and $\theta_{o}$ in the steady state as a function of the applied current are shown in Fig.~\ref{fig:angs}(b). We see that $\theta_{a}$ decreases while $\theta_{o}$ increases with $j$. It is important to note that $\theta_{a}$ takes a finite value when $j$ reaches $j_c$, i.e. the critical regime is reached before the cone closes. The numerical results (symbols) and the analytical results (solid lines) are in perfect agreement. A similar phenomenology appears in the helical state of cubic noncentrosymmetric ferromagnets \cite{masell2020manipulating}. \begin{figure}[t!] \includegraphics[width=8cm]{fig_5a.png} \includegraphics[width=8cm]{fig_5b.png}% \caption{ Characteristics of the conical distortion for small currents. (a) Time evolution of the orientation and opening angles, $\theta_o$ (blue diamonds) and $\theta_a$ (red circles), in the subcritical regime for $j=1.8\times10^{12}\un{A/m^2}$ and $B=50\un{mT}$. 
(b) The angles $\theta_{o}$ (in blue) and $\theta_{a}$ (in red), in the steady state, as a function of the current intensity for $B=50\un{mT}$. The circles and diamonds are the results from the micromagnetic simulations and the solid lines are the results obtained from the solution of the boundary value problem. The dotted black line signals the critical current $j_{c}\approx2.34\times10^{12}\un{A/m^{2}}$ for $B=50\un{mT}$. \label{fig:angs}} \end{figure} \section{\label{sec:critic}Destruction of the chiral soliton lattice and transient dynamics beyond the critical current} The steady states described in Section~\ref{sec:sub} are only reached if $j$ is below a critical current, since steady solutions of the LLG equation exist only if $j < j_c$, as indicated in Fig.~\ref{fig:critic-sol}. When $j > j_c$ the CSL is destabilized and the system is driven to a different state. Although it is not expected to remain an accurate description for large distortions, it is still insightful to describe the magnetization texture as an oriented cone. The time evolution of the orientation and opening angles obtained using micromagnetic simulations for $B = 50\un{mT}$ and for a current $j=3\times10^{12}\un{A/m^{2}}$, which is above $j_c$ ($j_{c}\approx2.34\times10^{12}\un{A/m^{2}}$ at $B=50\un{mT}$), is presented in Fig.~\ref{fig:esfstar}(a). The orientation angle $\theta_o$ (blue diamonds) starts increasing from zero and reaches the constant value $\theta_o = \pi/2$. Concomitantly, the value of the opening angle $\theta_a$ (red circles) decreases from $\pi/2$ to reach the constant value $\theta_a=0$. This means that the conical deformation initially oriented along the chiral axis rotates to the $y$ direction, whilst shrinking at the same time, and the final result is a ferromagnetic state ($\theta_a = 0$) oriented in the direction of the external magnetic field ($\theta_o = \pi/2$).
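To make the oriented-cone characterization concrete, the following sketch (illustrative Python, not part of the micromagnetic code; all function names are ours) builds a synthetic conical texture with prescribed $\theta_o$ and $\theta_a$, and recovers both angles from the discretized profile by estimating the cone axis from the average magnetization:

```python
import numpy as np

def conical_texture(theta_o, theta_a, n_cells=256):
    """Synthetic oriented-cone profile: the magnetization winds with a fixed
    opening angle theta_a around a cone axis tilted by theta_o from the
    chiral (z) axis, within the y-z plane (cf. Fig. 4(e))."""
    axis = np.array([0.0, np.sin(theta_o), np.cos(theta_o)])
    e1 = np.array([1.0, 0.0, 0.0])     # unit vector perpendicular to the axis
    e2 = np.cross(axis, e1)            # completes the orthonormal triad
    phi = 2.0 * np.pi * np.arange(n_cells) / n_cells
    return (np.cos(theta_a) * axis[None, :]
            + np.sin(theta_a) * (np.outer(np.cos(phi), e1)
                                 + np.outer(np.sin(phi), e2)))

def cone_angles(n):
    """Recover (theta_o, theta_a) from a unit magnetization profile n[i]:
    the cone axis is estimated as the direction of the average magnetization,
    theta_o as its tilt from z, and theta_a as the mean angle to the axis."""
    axis = n.mean(axis=0)
    axis /= np.linalg.norm(axis)
    theta_o = float(np.arccos(np.clip(axis[2], -1.0, 1.0)))
    theta_a = float(np.mean(np.arccos(np.clip(n @ axis, -1.0, 1.0))))
    return theta_o, theta_a
```

For a perfect cone the average magnetization lies exactly along the cone axis, so both angles are recovered to machine precision; for a simulated texture the same estimators give the steady-state values plotted in Fig.~\ref{fig:angs}.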
In Fig.~\ref{fig:esfstar}(b) we show a representation of the dynamical evolution of the magnetization field in the Bloch sphere for the current density and magnetic field values corresponding to Fig.~\ref{fig:esfstar}(a). It can be observed that after the application of the current the profile of the magnetization field can be pictured as a deformed cone with its axis pointing, approximately, along the chiral axis. The shape and orientation of this cone evolve with time and, after a while, the axis of the cone moves within the $y-z$ plane and its direction gradually departs from the chiral axis ($z$ axis) to finally lie along the direction of the magnetic field ($y$ axis), see Fig.~\ref{fig:esfstar}(b)i-vi. After this, the cross section of the cone starts shrinking to finally reach the ferromagnetic state along the magnetic field, see Fig.~\ref{fig:esfstar}(b)vii-viii. Notice that, as can be appreciated in Fig.~\ref{fig:angs}, the conical deformation does not fully close as $j$ approaches the critical current $j_c$ from below. Moreover, notice also that once $\theta_o > \theta_a$ the magnetization texture winds around the axis defined by $\theta_o$, but the chiral axis is no longer contained within the cone defined by $\theta_o$ and $\theta_a$ [Figs.~\ref{fig:esfstar}(b)v-vi]. It is important to mention that after the destruction of the CSL the magnetic state can be described as a ferromagnetic state with small spatial fluctuations. As shown in Fig.~\ref{fig:esfstar}, the transition from the CSL to the ferromagnetic state occurs within a few nanoseconds. When the current is not too large ($j_{c}<j\leq j_{c}^{FM}$, with $j_{c}^{FM}$ the critical current for the ferromagnetic instability discussed in Sec.~\ref{sec:phase_diagram}) the amplitude of these fluctuations decreases with time and the perfect ferromagnetic state is eventually reached.
To summarize the main results of this section: for $j>j_{c}(B)$, but $j$ not too high, and a long enough pulse of current, the system reaches a ferromagnetic steady state, and the CSL exhibits a finite lifetime. \begin{figure}[t!] \includegraphics[width=8cm]{fig_6a.png} \includegraphics[width=6.3cm]{fig_6b.png} \caption{ Destruction of the CSL in the supercritical current regime. (a) Time evolution of the orientation and opening angles, $\theta_o$ (blue diamonds) and $\theta_a$ (red circles), in the supercritical regime for $j = 3 \times 10^{12} \un{A/m^2}$. (b) Representation of the magnetization field (on the Bloch sphere) at selected times after the application of the current density pulse corresponding to (a): i) $t=0.05\un{ns}$, ii) $t=0.20\un{ns}$, iii) $t=0.45\un{ns}$, iv) $t=0.70\un{ns}$, v) $t=0.95\un{ns}$, vi) $t=1.10\un{ns}$, vii) $t=1.45\un{ns}$, viii) $t=1.80\un{ns}$. The black circle represents the initial state at $t=0\un{ns}$ and the blue line represents the magnetization at each time. \label{fig:esfstar}} \end{figure} \begin{figure}[t!] \includegraphics[width=8.0cm]{fig_7a.png} \includegraphics[width=8.0cm]{fig_7b.png} \caption{ (a) The $j$-$B$ phase diagram for a monoaxial helimagnet. The color code indicates the value of the winding number $Q$ which, due to periodic boundary conditions, only takes integer values ($0\leq Q\leq 10$ for the equilibrium state in a system of size $R=500\un{nm}$) for the final magnetization state after a $50\un{ns}$-long pulse of intensity $j$ at each value of the magnetic field $B$. The solid red line represents the analytic limit for the stability of the CSL. The dashed red line represents the analytic limit for the stability of the ferromagnetic state (which is unstable within the gray region). The dashed white line represents the critical field $B_{c}=230\un{mT}$. The green cross represents the critical current for the helical state at $B=0\un{mT}$.
Its value $j\approx2.51\times10^{12}\un{A/m^{2}}$ is very close to the value of the critical current for the stability of the ferromagnetic state ($j\approx2.54\times10^{12}\un{A/m^{2}}$). (b) The stability limit of the CSL at constant density of solitons, as indicated in the key. The dashed black line represents the stability limit for the equilibrium state (shown in (a)), in which the density of chiral solitons varies with the magnetic field. \label{fig:phase_diagram}} \end{figure} \section{\label{sec:phase_diagram}Phase diagram} Extending the previous analysis to different values of $j$ and $B$, it is possible to construct the phase diagram shown in Fig.~\ref{fig:phase_diagram}(a). From micromagnetic simulations the winding number $Q$ in the final state after a $50\un{ns}$ pulse of current is obtained. The winding number is computed as $Q=\frac{1}{2\pi}\sum_{i} \arcsin\left[(\bm{\hat{n}}_{\perp,i}\times\bm{\hat{n}}_{\perp,i+1})\cdot\bm{\hat{z}}\right]$, where the sum runs over the number of cells along the chiral axis, $\bm{\hat{n}}_{\perp,i}=\bm{n}_{\perp,i}/\lvert\bm{n}_{\perp,i}\rvert$ with $\bm{\hat{z}}\cdot\bm{n}_{\perp,i}=0$, and it counts the number of chiral solitons winding around the chiral axis in the system. It is important to note that this definition of $Q$ does not involve the evaluation of derivatives (through finite differences). This implies that the value of $Q$ is well quantized, taking integer values, and does not depend on the mesh size used in the discretization of the system. The computation of $Q$, as introduced here, resembles the method for the computation of the topological charge (or skyrmion number) in two-dimensional systems using a lattice-based approach \cite{kim2020quantifying}. The region with a gradient scale of colors from yellow to dark blue corresponds to $j<j_c$, where we find a CSL with the number of CSs decreasing from $N=10$ to $N=0$ for increasing magnetic fields.
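A minimal implementation of this lattice-based computation of $Q$ can be sketched as follows (illustrative Python, not the simulation code used here; the sum of signed rotation angles of the transverse component is divided by $2\pi$ so that full turns are counted as integers):

```python
import numpy as np

def winding_number(n):
    """Lattice winding number of a discretized texture n[i] (unit vectors):
    sum the signed rotation angle of the normalized transverse (x, y)
    component between neighbouring cells and divide by 2*pi. Periodic
    boundary conditions pair the last cell with the first; no finite
    differences are involved, so the result is well quantized."""
    n_perp = n[:, :2] / np.linalg.norm(n[:, :2], axis=1, keepdims=True)
    nxt = np.roll(n_perp, -1, axis=0)           # periodic boundary conditions
    cross_z = n_perp[:, 0] * nxt[:, 1] - n_perp[:, 1] * nxt[:, 0]
    return float(np.sum(np.arcsin(np.clip(cross_z, -1.0, 1.0))) / (2.0 * np.pi))

def helix(q, n_cells=512, tilt=1.2):
    """Synthetic conical helix with q full turns around the chiral (z) axis."""
    phi = 2.0 * np.pi * q * np.arange(n_cells) / n_cells
    return np.stack([np.sin(tilt) * np.cos(phi),
                     np.sin(tilt) * np.sin(phi),
                     np.cos(tilt) * np.ones(n_cells)], axis=1)
```

On a synthetic helix with $q$ full turns the estimator returns exactly $q$, independently of the number of cells, as long as the rotation between neighbouring cells stays below $\pi/2$.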
The region in dark blue corresponds to $Q=0$, which means that the magnetization texture does not wind around the chiral axis and eventually results in a ferromagnetic state. For $j=0$ we observe the typical behavior of a monoaxial chiral magnet in a transverse magnetic field. Since in our simulations we consider a system of size $R=500 \un{nm}$, and at zero magnetic field the period of the magnetic texture is $L_{0}\approx 48 \un{nm}$, the number of chiral solitons is thus $Q=10$. This value decreases down to $Q=0$ as the magnetic field grows and the system reaches the ferromagnetic state at $B_c$. The solid red line corresponding to $j_c=j_{c}(B)$ was obtained using the stability analysis and agrees with the results from micromagnetic simulations. It is observed that the winding number does not change with the current except at the transition point, where it drops to zero discontinuously. A change in $Q$ involves the removal of a chiral soliton, and this could occur in two ways: either through the edges of the system or by locally destroying a chiral soliton. Since we simulate infinite systems through the implementation of periodic boundary conditions, the first mechanism is forbidden due to the absence of edges. Since $Q$ is conserved as the current is increased below $j_c(B)$, the local destruction of CSs is not observed in our numerical simulations, presumably due to the topological protection of the CSL state. However, an unwinding process of individual CSs could be present at low magnetic fields \cite{masell2020manipulating}. The instability of the ferromagnetic state occurs for $j>j_{c}^{FM}(B)$ due to the current-assisted excitation of spin waves and is a well-known fact, usually encountered in different models of ferromagnets~\cite{Bazaliy98, Fernandez04, Tserkovnyak06, Masell2020}. In Fig.~\ref{fig:phase_diagram}(a) the ferromagnetic state is unstable in the gray region and the critical current $j_{c}^{FM}(B)$ is represented by the dashed red line.
Above $j_{c}^{FM}$ the magnetization field exhibits no regular spatial or temporal structure. It is important to note that for $B\lesssim 12 \un{mT}$ the CSL is driven directly to the region where the ferromagnet is unstable, without passing through a ferromagnetic state. The value at $B=0\un{mT}$ can be directly computed to obtain $j_{c}(0)\approx2.51\times10^{12} \un{A/m^{2}}$ (green cross in Fig.~\ref{fig:phase_diagram}(a)). In this region the computation time necessary to reach $j_{c}$ with micromagnetic simulations increases noticeably. Since we used a maximum time of $50\un{ns}$, the stability limit shown in Fig.~\ref{fig:phase_diagram}(a) is slightly larger than the analytical limit for $j_{c}(B)$ when $B \to 0$. Within this region, random fluctuations could also lead to an unwinding dynamical process, gradually reducing the number of CSs \cite{masell2020manipulating}. The phase diagram shown in Fig.~\ref{fig:phase_diagram}(a) corresponds to the equilibrium state, in which the density of chiral solitons minimizes the energy (at zero current) and thus varies with the magnetic field. However, due to the protection of the topologically nontrivial states, each metastable state characterized by its density of solitons has its own critical current, $j_c(B)$, which is displayed in Fig.~\ref{fig:phase_diagram}(b) for different values of the density of solitons. For comparison, the critical current corresponding to the equilibrium state is also shown (dashed black line). We see that $j_{c}(B)$ decreases both with $B$ and with the density of solitons. \section{\label{sec:conclusions}Discussion and Conclusions} We have described how the CSL responds to an applied current beyond the weak current density and weak magnetic field regimes ($B$ small compared to $B_c$). For each value of the magnetic field we find a critical current $j_{c}$ that depends on the density of solitons.
In the subcritical regime ($j<j_{c}$) the velocity-current response is linear and does not depend on the density of solitons. The steady finite-velocity regime is accompanied by a conical distortion of the CSL, similar to the one observed when applying magnetic fields with a finite $z$ component. The magnitude of the applied current governs two properties of the conical distortion: the cross section of the cone decreases with the current, while the deviation of the cone axis with respect to the chiral axis increases with the current. In the supercritical regime, $j>j_{c}$, the CSL is destabilized and the system reaches a ferromagnetic state with the magnetization oriented along the external field (except within the range $0\un{mT}\leq B \lesssim 12\un{mT}$). Even in this supercritical regime the evolution of the CSL to the ferromagnetic state can still be described, qualitatively, by an oriented conical deformation, but with strong deviations. The velocity of the CSL dragged by a spin-polarized current has already been studied in Ref.~\onlinecite{Tokushuku17}, assuming weak magnetic fields. In that article the authors find that the terminal velocity of the CSL exhibits a weak dependence on the magnetic field for $B\ll B_{c}$, which can be recast as an approximately constant velocity, in agreement with our findings. In addition, in the calculations of Refs.~\onlinecite{Kishine10} and \onlinecite{Tokushuku17} the authors considered $\theta\approx\pi/2$. We go beyond this limit by considering that the spin-polarized current can induce pronounced distortions in the structure of the CSL, in which $\theta(z)$ is allowed to depart significantly from $\theta(z)=\pi/2$. Let us end the article with a brief discussion of the practical relevance of the results reported in this work. Firstly, the stability limit of the CSL imposes a constraint on the velocity of the CSL.
That is, at a given magnetic field, $v$ cannot exceed the critical velocity $v_{c}(B)=\frac{\beta b_{j}}{\alpha}j_{c}(B)$. Since $j_{c}(B)$ is a decreasing function of $B$, $v_{c}(B)\leq v_{c}(0)$, which in turn implies for CrNb$_3$S$_6$ that the maximum velocity for a CSL is $v=v_{c}(0)\approx2600\un{m/s}$ (for $\alpha=0.01$ and $\beta=0.02$). Finally, although not shown in detail here, it is important to mention that once the ferromagnetic state is destabilized, and after turning off the current, the system evolves to a CSL with a variable number of CSs. Since the forced ferromagnet and the CSL have very different magnetoresistive responses~\cite{Togawa13,Togawa15}, the dynamics described here allows a write/erase mechanism that uses two currents, $j_w$ and $j_e$, to switch between states with high and low magnetoresistance. For instance, let us consider two current pulses of values $j_e$ and $j_w$ with $j_e < j_w$, such that $j_{c}(B)<j_e<j_{c}^{FM}(B)$ and $j_w>j_{c}^{FM}(B)$. By applying a pulse of intensity $j_e$ to the CSL, the system is driven into a ferromagnetic state which is then metastably retained when the current is removed, i.e., a low-magnetoresistive state is retained. If we then apply a pulse of intensity $j_{w}$, the system goes beyond the ferromagnetic instability and then relaxes to a CSL, which would correspond to a high-magnetoresistive state. After a sequence of $j_e$-$j_w$ current pulses the initial and final CSL would, in general, have a different number of CSs, which would imply a small difference between high-magnetoresistive states but would not drastically affect the possible observation of two well-resolved high- and low-magnetoresistive states. The results discussed here could therefore be relevant for the development of spintronic devices.
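For orientation, the velocity bound can be evaluated numerically. In the snippet below only $\alpha$, $\beta$ and $j_{c}(0)$ are taken from the text; the spin-transfer coefficient $b_{j}$ is an assumed, order-of-magnitude value inserted purely for illustration and should not be read as a parameter of this work:

```python
# Illustrative evaluation of v_c(B) = (beta * b_j / alpha) * j_c(B) at B = 0.
alpha = 0.01      # Gilbert damping (from the text)
beta = 0.02       # non-adiabatic parameter (from the text)
j_c0 = 2.51e12    # critical current at B = 0 [A/m^2] (from the text)
b_j = 5.2e-10     # spin-transfer coefficient [m^3/C]; ASSUMED value,
                  # chosen only to illustrate the order of magnitude

v_c = beta * b_j / alpha * j_c0   # maximum CSL velocity [m/s]
print(f"v_c(0) ~ {v_c:.0f} m/s")
```

With this assumed $b_{j}$ the bound comes out in the $\sim\!2600\un{m/s}$ range quoted above; any quantitative statement of course requires the material-specific value of $b_{j}$.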
\begin{acknowledgments} The authors acknowledge support by Grants No PGC-2018-099024-B-I00-ChiMag from the Ministry of Science and Innovation (MCIN) of Spain, SpINS-OTR2223 from CSIC/MCIN and DGA-M4 from the Diputación General de Aragón, Spain. This work was also supported by the Grant No. PICT 2017-0906 from the Agencia Nacional de Promoción Científica y Tecnológica, Argentina. \end{acknowledgments}
\section{Introduction} In recent years, \textsl{Neural Architecture Search}~\cite{zoph2016neural, baker2016designing, zoph2018learning, zhong2018practical, zhong2018blockqnn, liu2018progressive, real2019regularized, tan2019mnasnet, chen2019detnas}~(NAS) has received much attention in the community owing to its superior performance over human-designed architectures on a variety of tasks such as image classification \cite{tan2019mnasnet,tan2019efficientnet,guo2019single}, object detection \cite{chen2019detnas,ghiasi2019fpn} and semantic segmentation \cite{liu2019auto}. In general, most existing NAS frameworks can be summarized as a \emph{nested bilevel optimization}, formulated as follows: \begin{equation} a^{\star} = \mathop{\argmax}_{a \in \mathcal{A}} \text{Score} \left(a, \mathbf{W}_a^{\star}\right) \label{eq0} \end{equation} \begin{equation} \mbox{s.t.}\quad \mathbf{W}_a^{\star} = \mathop{\argmin}_{\mathbf{W}} \mathcal{L} \left(a, \mathbf{W}\right), \label{eq1} \end{equation} where $a$ is a candidate architecture with weights $\mathbf{W}_{a}$ sampled from the search space $\mathcal{A}$; $\mathcal{L}(\cdot)$ represents the \emph{training loss}; $\text{Score}(\cdot)$ denotes the performance indicator (e.g. accuracy in supervised NAS algorithms or \emph{pretext task} scores in unsupervised NAS frameworks \cite{liu2020labels}) evaluated on the \emph{validation set}. Briefly speaking, this NAS paradigm aims to search for the architecture that obtains the best validation performance; we therefore name it \emph{performance-based NAS} in the remainder of the text. Despite this great success, why and how \emph{performance-based NAS} works is still an open question. In particular, the mechanism by which NAS algorithms discover good architectures from a huge search space is well worth studying.
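As a schematic illustration, the nested bilevel optimization of Eq.~\ref{eq0},~\ref{eq1} can be sketched in a few lines of Python. Here \texttt{train} and \texttt{score} are caller-supplied placeholders for the inner weight optimization and the validation metric; none of the names refer to an actual NAS implementation:

```python
import random

def nested_nas(search_space, train, score, n_samples=100, seed=0):
    """Toy sketch of performance-based NAS as a nested bilevel optimization:
    for each sampled architecture a, solve the inner problem (train its
    weights to W*_a), then keep the a with the best validation score."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        a = rng.choice(search_space)   # candidate architecture a ~ A
        w_star = train(a)              # inner step: argmin_W L(a, W)
        s = score(a, w_star)           # outer step: Score(a, W*_a)
        if s > best_score:
            best_arch, best_score = a, s
    return best_arch
```

The sketch makes the cost structure of nested NAS explicit: every outer sample pays for a full inner training run, which is precisely what weight-sharing methods later amortize.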
A recent work \cite{Shu2020Understanding} analyzes the search results under cell-based search spaces and reveals that existing performance-based methods tend to favor architectures with fast convergence. Although Shu \textit{et al.}~\cite{Shu2020Understanding} further empirically find that architectures with fast convergence cannot achieve the highest generalization performance, the fast-convergence connection pattern still implies that there may exist a high correlation between architectures with fast convergence and those with high performance (named the \emph{ease-of-convergence hypothesis} for short). Inspired by this hypothesis, we propose an alternative NAS paradigm, \emph{convergence-based NAS}, as follows: \begin{equation} a^{\star} = \mathop{\argmax}_{a \in \mathcal{A}} \text{Convergence} \left(a, \mathbf{W}_{a}^{\star}\right) \label{eq2} \end{equation} \begin{equation} \mbox{s.t.}\quad \mathbf{W}_{a}^{\star} = \mathop{\argmin}_{\mathbf{W}} \mathcal{L} \left(a, \mathbf{W}\right), \label{eq3} \end{equation} where $\text{Convergence}(\cdot)$ is an indicator that measures the speed of convergence; other notations follow the same definitions as in Eq.~\ref{eq0},~\ref{eq1}. In this paper we mainly investigate \emph{convergence-based NAS} frameworks, which, to our knowledge, have rarely been \emph{explicitly} explored in previous works. First of all, we study the role of \emph{labels} in both frameworks. In \emph{performance-based NAS}, we notice that \emph{feasible labels} are critical in both search steps: for the Eq.~\ref{eq0} step, since we need to select the architecture with the highest validation performance, reasonable labels such as ground truths or at least carefully designed \emph{pretext task} labels (e.g. rotation prediction \cite{gidaris2018unsupervised}) in unsupervised NAS~\cite{liu2020labels} are required for evaluation. For the Eq.~\ref{eq1} step, such labels are also necessary in the training set to optimize the weights.
In \emph{convergence-based NAS}, by contrast, Eq.~\ref{eq2} only depends on a metric that estimates the convergence speed, which is free of labels. Though the optimization in Eq.~\ref{eq3} still needs labels, the purpose of the training is just to provide evidence for the metric in Eq.~\ref{eq2} rather than to learn accurate representations. We therefore conclude that in convergence-based NAS the requirement for labels is much weaker than in performance-based NAS. This observation motivates us to take a further step: in convergence-based NAS, \emph{can we search using only random labels, dispensing entirely with feasible labels such as ground truths or pretext-task labels?} To demonstrate this, we propose a novel convergence-based NAS framework, called \emph{Random Label NAS (RLNAS)}, which only requires random labels to search. RLNAS follows the paradigm of Eq.~\ref{eq2},~\ref{eq3}. In the Eq.~\ref{eq3} step, random labels are adopted to optimize the weights of each sampled architecture $a$; in the Eq.~\ref{eq2} step, a customized \emph{angle} metric \cite{hu2020angle} is introduced to measure the distance between the trained and initialized weights, which estimates the convergence speed of the corresponding architecture. To speed up the search procedure, RLNAS further utilizes the mechanism of \emph{One-Shot NAS} \cite{bender2018understanding,guo2019single} to decouple the nested optimization of Eq.~\ref{eq2} and Eq.~\ref{eq3} into a \emph{two-step} pipeline: first training a \emph{SuperNet} with random labels, then extracting the sub-network with the fastest convergence speed from the SuperNet using evolutionary search. We evaluate RLNAS on three popular search spaces: NAS-Bench-201~\cite{dong2020bench}, DARTS~\cite{liu2018darts} and the MobileNet-like search space~\cite{cai2018proxylessnas}.
Very surprisingly, though RLNAS does not use any feasible labels, it still achieves comparable or even better performance on multiple benchmarks than many supervised/unsupervised methods, including state-of-the-art NAS frameworks such as \emph{PC-DARTS} \cite{xu2019pc}, \emph{Single-Path One-Shot} \cite{guo2019single}, \emph{FairDARTS} \cite{chu2019fair}, \emph{FBNet} \cite{wu2019fbnet} and \emph{UnNAS} \cite{liu2020labels}. Moreover, networks discovered by RLNAS are also demonstrated to transfer well to downstream tasks such as object detection and semantic segmentation. In conclusion, the major contribution of this paper is a new convergence-based NAS framework, \emph{RLNAS}, which makes it possible to search with only random labels. We believe the potential of RLNAS includes: \vspace{-1em} \paragraph{A simple but stronger baseline.} Compared with the widely used \emph{random search} \cite{li2020random} baseline, RLNAS is much more powerful and can provide a stricter validation for future NAS algorithms. \vspace{-1em} \paragraph{Inspiring new understandings of NAS.} Since the performance of RLNAS is as good as that of many supervised NAS frameworks, on one hand, it further validates the effectiveness of the \emph{ease-of-convergence hypothesis}. On the other hand, it suggests that ground truth labels, or searching on a specific task, do not help much for current NAS algorithms, which implies that architectures found by existing NAS methods may still be suboptimal. \section{Related Work} \paragraph{Supervised Neural Architecture Search.} The supervised neural architecture search~(NAS) paradigm is the mainstream NAS setting. Looking back at its development history, from the perspective of search efficiency supervised NAS can be divided into two categories: \emph{nested NAS} and \emph{weight-sharing NAS}.
In the early stage, \emph{nested} \textsl{NAS}~\cite{zoph2016neural, baker2016designing, zoph2018learning, zhong2018practical, zhong2018blockqnn, liu2018progressive, real2019regularized, tan2019mnasnet} trains candidate architectures from scratch and updates the controller with the corresponding performance feedback iteratively. However, \emph{nested NAS} works at the cost of a surge in computation, e.g. NASNet~\cite{zoph2018learning} costs about 1350--1800 GPU days. ENAS~\cite{pham2018efficient} observes the computation bottleneck of \emph{nested NAS} and forces all candidate architectures to share weights. ENAS requires $1000\times$ less computation than \emph{nested NAS}~\cite{pham2018efficient} and proposes a new NAS paradigm named \emph{weight-sharing NAS}. A large number of works~\cite{liu2018darts, chen2019progressive, xu2019pc, bender2018understanding,brock2017smash, cai2018proxylessnas, guo2019single} follow the \emph{weight-sharing} strategy due to its superior search efficiency. This work is also carried out under the \emph{weight-sharing} strategy. Unlike most \emph{weight-sharing} approaches, however, we are not focusing on improving search efficiency. According to their optimization steps, \emph{weight-sharing} approaches can be further divided into two categories: the joint one-step optimization approach, named \emph{gradient-based NAS}~\cite{liu2018darts, chen2019progressive, xu2019pc}, and the two sequential-step optimization approach, named \emph{One-Shot NAS}~\cite{bender2018understanding,brock2017smash, cai2018proxylessnas, guo2019single}. \emph{Gradient-based NAS} relaxes the discrete search space into a continuous one with architecture parameters, which are optimized in an end-to-end paradigm. Because the angle metric is non-differentiable, we follow the mechanism of \emph{One-Shot NAS} to study \emph{convergence-based NAS}.
\paragraph{Unsupervised Neural Architecture Search.} Recently, unsupervised learning~\cite{He_2020_CVPR, chen2020simple, grill2020bootstrap} has received much attention, and the unsupervised paradigm has also appeared in the field of NAS. \cite{yan2020does} uses unsupervised architecture representations in a latent space to better distinguish network architectures with different performance. UnNAS~\cite{liu2020labels} introduces unsupervised methods~\cite{gidaris2018unsupervised, noroozi2016unsupervised, zhang2016colorful} into \emph{weight-sharing NAS} in order to ablate the role of labels. Although UnNAS does not use the labels of the target dataset, labels such as the \emph{rotation category} of the pretext tasks are still exploited. UnNAS shows that \emph{weight-sharing NAS} can still work in the absence of ground truth labels, but it is hard to conclude that labels are completely unnecessary. Different from unsupervised learning, which aims at representations, unsupervised NAS focuses on architectures. Therefore, random labels are introduced in this paper, which are completely detached from any prior supervision information and allow us to thoroughly ablate the impact of labels on NAS. \paragraph{Model Evaluation Metrics.} \cite{mellor2020neural, anonymous2021neural} develop \emph{training-free NAS}, i.e. searching directly at initialization without involving any training. They focus on investigating training-free model evaluation metrics to rank candidate architectures. \cite{mellor2020neural} uses the correlation between input Jacobians to indicate model performance. \cite{anonymous2021neural} uses the combination of NTKs and linear regions in input space to measure architecture trainability and expressivity. Although \emph{training-free NAS} has much higher search efficiency, there is still a performance gap compared with well-trained \emph{weight-sharing NAS}.
ABS~\cite{hu2020angle} introduces the angle metric to indicate model performance and mainly focuses on search space shrinking. Different from ABS, we directly search for architectures with the angle metric. \section{Methodology} As mentioned in the introduction, in order to utilize the mechanism of One-Shot NAS, we first briefly review Single Path One-Shot~(SPOS)~\cite{guo2019single} as a preliminary. Based on the SPOS framework, we then put forward our approach, Random Label NAS~(RLNAS). \subsection{Preliminary:~SPOS} \label{methodology.preliminaries} SPOS is one of the One-Shot approaches, which decouple the NAS optimization problem into two sequential steps: first training the SuperNet and then searching for architectures. Different from other One-Shot approaches, SPOS further decouples the weights of candidate architectures by training the SuperNet stochastically. Specifically, SPOS regards a candidate architecture in the SuperNet as a single path and uniformly activates a single path to optimize the corresponding weights in each iteration. Thus, the SuperNet training step can be expressed as: \begin{equation} \mathbf{W}_{a}^{\star} = \mathop{\argmin}_{\mathbf{W}} \mathbb{E}_{a \sim \varGamma(\mathcal{A}) } \mathcal{L} \left(a, \mathbf{W}\right), \label{eq4} \end{equation} where $\mathcal{L}$ is the objective function optimized on the training dataset with ground truth labels and $\varGamma(\mathcal{A})$ is a uniform distribution over $a \in \mathcal{A}$. After the SuperNet is trained to convergence, SPOS performs the architecture search as: \begin{equation} a^{\star} = \mathop{\argmax}_{a \in \mathcal{A}} \text{ACC}_{val} \left(a, \mathbf{W}_{a}^{\star}\right). \label{eq5} \end{equation} SPOS implements Eq.~\ref{eq5} by utilizing an evolution algorithm to search for architectures. With an initialized population, SPOS conducts crossover and mutation to generate new candidate architectures and uses the validation accuracy as fitness to keep the candidate architectures with top performance.
This procedure is repeated until the evolution algorithm converges to the optimal architecture. \subsection{Our approach: Random Label NAS~(RLNAS)} \label{methodology.approach} The combination of two decoupled optimization steps, a SuperNet structure consisting of single paths and an evolution search, makes SPOS simple but flexible. Following the mechanism of SPOS, we decouple the \emph{convergence-based} optimization of Eq.~\ref{eq2} and Eq.~\ref{eq3} into the following two steps. First, the SuperNet is trained with random labels: \begin{equation} \mathbf{W}_{a}^{\star} = \mathop{\argmin}_{\mathbf{W}} \mathbb{E}_{a \sim \varGamma(\mathcal{A}) } \mathcal{L} \left(a, \mathbf{W}, R\right), \label{eq6} \end{equation} where $R$ represents random labels; other notations follow the same definitions as in Eq.~\ref{eq4}. Second, an evolution algorithm with the convergence-based metric $\text{Convergence}(\cdot)$ as fitness searches for the optimal architecture in the SuperNet: \begin{equation} a^{\star} = \mathop{\argmax}_{a \in \mathcal{A}} \text{Convergence} \left(a, \mathbf{W}_{a}^{\star}\right). \label{eq7} \end{equation} In the following, we introduce the mechanism of generating random labels in Sec.~\ref{methodology.random_label} and use an angle-based metric as $\text{Convergence}(\cdot)$ to estimate model convergence speed in Sec.~\ref{methodology.angle}. \subsubsection{Random Labels Mechanism} \label{methodology.random_label} In the representation learning field, deep neural networks~(DNNs) have the capacity to fit datasets with partially random labels~\cite{zhang2016understanding}. Furthermore, \cite{maennel2020neural} tries to understand what DNNs learn when trained on natural images with entirely random labels and experimentally demonstrates that pre-training on purely random labels can accelerate the training of downstream tasks under certain conditions.
In the NAS field, although we pursue the optimal architecture rather than the model representation in the search phase, model representations are still involved in performance-based NAS. However, whether neural architecture search can work in the random-label setting is still an open question. In view of this, we study the impact of random labels on the NAS optimization problem. First, we introduce the mechanism of generating random labels. To be specific, random labels obey a discrete uniform distribution, and the number of discrete values is by default equal to the number of image categories of the dataset~(other possible choices are discussed in Sec.~\ref{sec.ablation.study}). The random labels corresponding to different images are sampled in the data pre-processing procedure, and these image-label pairs do not change during the whole model optimization process. \subsubsection{Angle-based Model Evaluation Metric} \label{methodology.angle} Recently,~\cite{Shu2020Understanding} found that architectures searched by NAS algorithms share the same pattern of fast convergence. With this rule as an entry point, we design a model evaluation metric from the perspective of model convergence. \cite{carbonnelle2018layer} first measured the convergence of a stand-alone trained model with an angle-based metric, defined as the angle between the initial model weights and the trained ones. ABS~\cite{hu2020angle} introduces this metric into the NAS community and uses it to shrink the search space progressively. Different from ABS, we focus on the optimization problem with random labels and adopt the angle-based metric to directly search for architectures rather than to shrink the search space. Before extending the angle metric to guide architecture search, we first review the angle metric in ABS~\cite{hu2020angle}.
\paragraph{Review of the Angle Metric in ABS.} The SuperNet is represented as a directed acyclic graph~(DAG) denoted as $\mathcal{A}(\mathbf{O},\mathbf{E})$, where $\mathbf{O}$ is the set of feature nodes and $\mathbf{E}$ is the set of connections~(each connection is instantiated as an alternative operation) between two feature nodes. ABS defines $\mathcal{A}(\mathbf{O},\mathbf{E})$ with a single input node $O_{in}$ and a single output node $O_{out}$. A candidate architecture sampled from the SuperNet is represented as $a(\mathbf{O},\widetilde{\mathbf{E}})$; it has the same feature nodes $\mathbf{O}$ as the SuperNet but only a subset of the edges, $\widetilde{\mathbf{E}} \subseteq \mathbf{E}$. ABS uses a weight vector $\boldsymbol{V}(a, \mathbf{W})$ to represent a model and constructs $\boldsymbol{V}(a, \mathbf{W})$ by concatenating the weights of all paths from $O_{in}$ to $O_{out}$. The distance between the initialized candidate architecture, whose weights are $\mathbf{W_{0}}$, and the trained one, with weights $\mathbf{W_{t}}$, is: \begin{equation} \text{Angle}(a) = \arccos{\left(\frac{\langle\boldsymbol{V}(a, \mathbf{W}_{0}),\boldsymbol{V}(a,\mathbf{W}_{t})\rangle}{\left\|\boldsymbol{V}(a,\mathbf{W}_{0})\right\|_{2} \cdot \left\|\boldsymbol{V}(a, \mathbf{W}_{t})\right\|_{2}}\right)}. \label{eq9} \end{equation} \paragraph{Extended Representation of the Weight Vector.} As discussed above, ABS defines the SuperNet with just one input node and one output node. However, some search spaces consist of cell structures with multiple input nodes and output nodes. For example, each cell in DARTS has two input nodes, and the output node of each cell concatenates the outputs of all intermediate nodes, which motivates us to consider all intermediate nodes as output nodes for the identification of the architecture topology. In general, we redefine the weight vector $\boldsymbol{V}(a, \mathbf{W})$ by concatenating the weights of all paths from $\mathbf{O}_{in}$ to $\mathbf{O}_{out}$.
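The angle of Eq.~\ref{eq9} is simply the arccosine of the cosine similarity between the two flattened weight vectors; a minimal pure-Python version (names are ours):

```python
import math

def angle(v0, vt):
    """Angle of Eq. 9 between the concatenated weights at initialization
    (v0) and after training (vt)."""
    dot = sum(a * b for a, b in zip(v0, vt))
    norm0 = math.sqrt(sum(a * a for a in v0))
    normt = math.sqrt(sum(b * b for b in vt))
    cos = dot / (norm0 * normt)
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp against float round-off
```

The further the trained weights rotate away from their initialization, the larger the angle, which is what makes it usable as a convergence-speed proxy: identical (or merely rescaled) weight vectors give an angle of 0, and orthogonal ones give $\pi/2$.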
\paragraph{Parameterize Non-weight Operations.} To resolve the conflict among candidate architectures that share the same learnable weights, ABS parameterizes the non-weight operations~('pool', 'skip-connect' and 'none'). The 'pool' operation~(both 'average pool' and 'max pool') is assigned a fixed tensor with dimension $[O,C,K,K]$~($O$ and $C$ represent output channels and input channels respectively, and $K$ is the kernel size) whose elements are all $1/K^{2}$. Different from ABS, which assigns 'skip-connect' an empty vector, we propose an alternative parameterization that assigns an identity tensor with dimension $[O,C,1,1]$ to the 'skip-connect' operation. We adjust the parameterization for different search spaces, e.g., empty weights are assigned to 'skip-connect' in NAS-Bench-201, while the identity tensor is used in the DARTS and MobileNet-like search spaces. The reason for the difference may be related to the complexity of the search space. The 'none' operation need not be parameterized, as in ABS; instead, it determines the number of paths that make up the weight vector~$\boldsymbol{V}$: if a path contains a 'none' operation, the weights of the operations in this path are not involved in the angle calculation. \section{Experiments} \subsection{Search Space and Training Setting} We analyze and evaluate RLNAS on three popular search spaces: NAS-Bench-201~\cite{dong2020bench}, DARTS~\cite{liu2018darts} and the MobileNet-like search space~\cite{cai2018proxylessnas}. \paragraph{NAS-Bench-201.} There are 6 edges in each cell and each edge has 5 alternative operations. Because of repeated stacking, NAS-Bench-201 consists of 15625 candidate architectures and provides the real performance of each architecture. We adopt the same training setting for the SuperNet on a single GPU across CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning} and ImageNet16-120~\cite{chrabaszcz2017downsampled}. We train the SuperNet for 250 epochs with mini-batch size 64.
We use SGD to optimize the weights with momentum 0.9 and weight decay $5e^{-4}$. The learning rate follows a cosine schedule, annealed from an initial 0.025 to 0.001. In the evolution phase, we use a population size of 100, at most 20 iterations, and keep the top-30 architectures in each iteration. All experimental results on NAS-Bench-201 are obtained over three independent runs with different random seeds. \begin{table*}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}*{Method} & \multicolumn{2}{c}{Configurations} & \multicolumn{2}{|c}{CIFAR-10~(\%)} & \multicolumn{2}{|c}{CIFAR-100~(\%)} & \multicolumn{2}{|c}{ImageNet16-120~(\%)} \\ \cline{2-9} ~ & Label type & Performance indicator & valid acc & test acc & \multicolumn{1}{c|}{valid acc} & \multicolumn{1}{c|}{test acc} & valid acc & test acc \\ \hline\hline A~(SPOS) & ground truth label & validation accuracy & 88.49 & 92.11 & 66.51 & 66.89 & 40.16 & 40.80 \\ \hline B & ground truth label & angle & \textbf{90.20} & \textbf{93.76} & 70.71 & \textbf{71.11} & 40.78 & 41.44 \\ \hline C & random label & validation accuracy & 76.47 & 80.60 & 52.48 & 52.84 & 29.58 & 28.37 \\ \hline D~(RLNAS) & random label & angle & 89.94 & 93.45 & \textbf{70.98} & 70.71 & \textbf{43.86} & \textbf{43.70} \\ \hline \end{tabular} \caption{Search performance on NAS-Bench-201 across CIFAR-10, CIFAR-100 and ImageNet16-120.} \label{tab.1} \end{table*} \paragraph{DARTS.} Different from vanilla DARTS~\cite{liu2018darts}, each intermediate node only samples two operations among the alternative operations~(except 'none') from all of its preceding nodes in the SuperNet training phase. We train the SuperNet with 8 cells on CIFAR-10 for 250 epochs; the other training settings are kept the same as in DARTS~\cite{liu2018darts}. We also train a 14-cell SuperNet with 48 initial channels on ImageNet, using 8 GPUs for 50 epochs with mini-batch size 512.
SGD with momentum 0.9 and weight decay $4e^{-5}$ is adopted to optimize the weights. The cosine learning rate schedule goes from 0.1 to $5e^{-4}$. We use the same evolution hyper-parameters as Single Path One-Shot~(SPOS)~\cite{guo2019single}. For the model evaluation phase~(retraining the searched architecture), we follow the training setting of PC-DARTS~\cite{xu2019pc} on ImageNet. \paragraph{MobileNet.} The MobileNet-like search space proposed in ProxylessNAS~\cite{cai2018proxylessnas} is adopted in this paper. The SuperNet contains 21 choice blocks and each block has 7 alternatives: 6 MobileNet blocks~(combinations of kernel size \{3,5,7\} and expansion ratio \{3,6\}) and 'skip-connect'. We keep the same experimental settings as SPOS~\cite{guo2019single} for both the search phase and the evaluation phase. \subsection{Searching Results} \subsubsection{NAS-Bench-201 Experiment Results} \label{sec.4.2.1} \paragraph{Search performance.} For the NAS-Bench-201 search space, experiments are conducted on three datasets: CIFAR-10, CIFAR-100 and ImageNet16-120. Different from other works, which only search on CIFAR-10 and look up the real performance of the found architecture on various test datasets~(e.g., test accuracy on CIFAR-100 or ImageNet16-120), we actually train a SuperNet on each target dataset and search for architectures with that SuperNet. Firstly, we construct the SuperNet based on the NAS-Bench-201 search space and train it by the uniform sampling strategy~\cite{guo2019single} with ground truth labels or random labels. Then, angle or validation accuracy is used as the fitness to perform the evolution search. According to the different method configurations, there are four possible methods, as described in Table~\ref{tab.1}. For simplicity, we denote them as methods A, B, C and D respectively. In particular, methods A and D correspond to SPOS and RLNAS. The search performance on the three datasets is reported in Table~\ref{tab.1}.
We first compare methods C and D within the random label setting, and find that angle surpasses validation accuracy by a large margin. Similar results can also be observed under the ground truth label setting, though the margin between methods A and B is not as large. This suggests that angle can evaluate models more accurately than validation accuracy. Furthermore, when angle is used as the metric, even with random labels, RLNAS obtains comparable accuracy on CIFAR-10 and CIFAR-100 and even outperforms method B by 1.26\% test accuracy on ImageNet16-120. \paragraph{Ranking correlation.} In addition to the analysis of top architectures in Table~\ref{tab.1}, we further conduct a rank correlation analysis. The first step is again to train the SuperNet with ground truth labels or random labels. Secondly, we traverse the whole NAS-Bench-201 search space and rank the architectures with the different model evaluation metrics independently. We treat the rank based on the real performance provided by NAS-Bench-201 as the ground truth rank. Finally, we compute Kendall's Tau~\cite{kendall1938new,yu2019evaluating,chu2019fairnas, hu2020angle} between the rank based on each model evaluation metric and the ground truth rank to evaluate the ranking correlation. We compare angle and validation accuracy as model evaluation metrics in both the ground truth label and random label settings across the three datasets. The ranking correlation results are shown in Table~\ref{tab.2}. The results on the different datasets show a consistent order of ranking correlation: C$<$A$<$D$<$B. It should be noted that the rank obtained by validation accuracy in the random label case has almost no correlation with the ground truth rank. To our surprise, angle still achieves a ranking correlation around 0.5 under the random label setting, which even exceeds that of validation accuracy in the ground truth label case.
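For reference, the Kendall's Tau used here is the pairwise concordance statistic between two rankings; a small sketch of the tau-a variant (without tie corrections; in practice a library routine such as `scipy.stats.kendalltau` would be used):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Tau-a between two score lists over the same set of architectures:
    (concordant - discordant) pairs divided by the total number of pairs."""
    assert len(scores_a) == len(scores_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            concordant += 1    # the pair is ordered the same way by both metrics
        elif s < 0:
            discordant += 1    # the pair is ordered oppositely
    n_pairs = len(scores_a) * (len(scores_a) - 1) / 2
    return (concordant - discordant) / n_pairs
```

Two identically ordered score lists give $\tau = 1$, fully reversed ones give $\tau = -1$, and unrelated rankings give values near 0.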
\begin{table}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c} \hline $\text{Method}^\dagger$ & {CIFAR-10} & {CIFAR-100} & {ImageNet16-120} \\ \hline\hline A~(SPOS) & 0.4239 & 0.4832 & 0.4322 \\ \hline B & \textbf{0.6671} & \textbf{0.6942} & \textbf{0.6342} \\ \hline C & 0.0874 & $-$0.0195 & $-$0.0262 \\ \hline D~(RLNAS) & 0.5059 & 0.5097 & 0.4716 \\ \hline \end{tabular} \caption{Ranking correlation on NAS-Bench-201. $^\dagger$ refer to Table~\ref{tab.1} for detailed method configurations.} \label{tab.2} \end{table} \begin{table*}[htbp] \centering \footnotesize \begin{tabular}{c|l|c|c|c|c} \hline Search type & Method & Params~(M) & FLOPs~(M)&Top-1~($\%$) &Top-5~($\%$) \\ \hline\hline \multirow{7}*{CIFAR-10}& DARTS~\cite{liu2018darts}~(\textit{sup.}) & 4.7 & 574 & 73.3 & 91.3 \\ ~& SPOS~\cite{guo2019single}~(\textit{sup, our impl.}) & 4.3 & 471 & 73.7 & 91.6 \\ ~& PC-DARTS~\cite{xu2019pc}~(\textit{sup.}) & 5.3 & 586 & 74.9 & 92.2 \\ ~& FairDARTS-B~\cite{chu2019fair}~(\textit{sup.}) & 4.8 & 541 & 75.1 & 92.5 \\ ~& P-DARTS~\cite{chen2019progressive}~(\textit{sup.}) & 4.9 & 557 & 75.6 & 92.6 \\ \cline{2-6} ~& RLNAS~(\textit{unsup.}) & 5.7 & 629 & \textbf{76.0} & \textbf{92.9} \\ ~& $\text{RLNAS}^\blacktriangledown$~(\textit{unsup.}) & 5.3 & 581 & 75.6 & 92.5 \\ \hline \multirow{4}*{ImageNet} & SPOS~\cite{guo2019single}~(\textit{sup, our impl.}) & 4.6 & 512 & 74.5 & 92.1 \\ ~ & $\text{NAS-DARTS}^{\dagger}$~\cite{liu2020labels}~(\textit{sup.}) & 5.3 & 582 & \textbf{76.0} & 92.7 \\ ~ & PC-DARTS~\cite{xu2019pc}~(\textit{sup.}) & 5.3 & 597 & 75.8 & 92.7 \\ \cline{2-6} ~ & RLNAS~(\textit{unsup.}) & 5.5 & 597 & 75.9 & \textbf{92.9} \\ \hline \end{tabular} \caption{DARTS search space results: comparison of the SOTA methods on ImageNet. The methods in the first block are searched on CIFAR-10, and those in the second block are searched on ImageNet directly.
$^\blacktriangledown$ FLOPs of the searched architecture are scaled down to within 600M by adjusting the initial channels from 48 to 46. $^\dagger$ We retrain the NAS-DARTS architecture reported in UnNAS~\cite{liu2020labels} with the PC-DARTS~\cite{xu2019pc} setting.} \label{tab.3} \end{table*} \subsubsection{DARTS Search Space Results} We conduct two types of experiments in the DARTS search space:~searching architectures with 8 cells on CIFAR-10 and then transferring them to ImageNet, and searching architectures with 14 cells on ImageNet directly. For experiments conducted on CIFAR-10, the training dataset is divided into two subsets of equal size, one of which is used to train the SuperNet, while the other is used as the validation dataset to evaluate model performance in the search phase. For experiments searched on ImageNet, 50K images are separated from the original training dataset as the validation set, and the remaining images are used as the new training dataset. \paragraph{Search architectures on CIFAR-10.} We first analyze the search performance on CIFAR-10 in Table~\ref{tab.3}. RLNAS shows strong generalization ability when transferring the searched architecture from CIFAR-10 to ImageNet. As shown in the first block of Table~\ref{tab.3}, RLNAS reaches 76.0\% top-1 accuracy, and still obtains 75.6\% within a 600M FLOPs constraint. \paragraph{Search architectures on ImageNet.} After demonstrating the transferring ability of RLNAS among classification tasks, we further verify the efficacy of our method by directly searching on ImageNet. To the best of our knowledge, this is the first time a SuperNet with \textbf{14 cells} in the DARTS search space has been trained without any SuperNet structure modification or complicated techniques. After SuperNet training, we search for candidate architectures under a 600M FLOPs constraint. The search results are shown in the second block of Table~\ref{tab.3}, where RLNAS obtains 75.9\%.
Compared with the result found on CIFAR-10, the performance of RLNAS is further improved by 0.3\%, which indicates that narrowing the gap between the training setting~(both dataset and SuperNet structure) of the search phase and that of the evaluation phase is helpful for architecture search. \paragraph{Comparison with UnNAS.} We further compare our method with UnNAS~\cite{liu2020labels}, which also searches architectures directly on ImageNet-1K, using three pretext tasks~\cite{gidaris2018unsupervised, noroozi2016unsupervised, zhang2016colorful}. For a fair comparison with UnNAS, we impose no FLOPs limit in the search phase but, after the search is completed, limit the FLOPs to within 600M by scaling the initial channels from 48 to 42. We also retrain the three architectures reported in UnNAS~\cite{liu2020labels} with the same training setting as PC-DARTS~\cite{xu2019pc}. Table~\ref{tab.4} shows that our method obtains high performance, 76.7\% without a FLOPs limit and 75.9\% within the 600M FLOPs constraint, which is comparable with UnNAS using the jigsaw task and competitive with the results obtained by the other two pretext tasks. \begin{table}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c} \hline Method & \makecell[c]{Params \\ (M)} & \makecell[c]{FLOPs \\ (M)}& \makecell[c]{Top-1 \\ ($\%$)} & \makecell[c]{Top-5 \\ ($\%$)} \\ \hline\hline UnNAS~\cite{liu2020labels}~(\textit{rotation task.}) & 5.1 & 552 & 75.8 & 92.6 \\ UnNAS~\cite{liu2020labels}~(\textit{color task.}) & 5.3 & 587 & 75.5 & 92.6 \\ UnNAS~\cite{liu2020labels}~(\textit{jigsaw task.}) & 5.2 & 560 & 76.2 & 92.8 \\ \hline RLNAS~(\textit{random label.}) & 6.6 & 724 & \textbf{76.7} & \textbf{93.1} \\ $\text{RLNAS}^\blacktriangledown$~(\textit{random label.}) & 5.2 & 561 & 75.9 & 92.8 \\ \hline \end{tabular} \caption{DARTS search space results: comparison with UnNAS on ImageNet.
The architectures of UnNAS based on the three pretext tasks are provided in \cite{liu2020labels}, and we retrain them with the PC-DARTS training setting~\cite{xu2019pc}. $^\blacktriangledown$ FLOPs of the searched architecture are scaled down to within $600$M by adjusting the initial channels from 48 to 42.} \label{tab.4} \end{table} \subsubsection{MobileNet-like Search Space Results.} To verify the versatility of our method, we further conduct experiments in the MobileNet-like search space. We train the SuperNet for 120 epochs on ImageNet as in \cite{guo2019single}. In the search phase, we limit model FLOPs to within 475M to make fair comparisons with other methods. Results are summarized in Table~\ref{tab.5}. RLNAS obtains 75.6\% top-1 accuracy and even slightly outperforms the other SOTA methods, which verifies that our strategy does not overfit to any particular search space and is generally effective. \begin{table}[htbp] \centering \footnotesize \begin{threeparttable} \begin{tabular}{l|c|c|c|c} \hline Method & \makecell[c]{Params\\(M)} & \makecell[c]{FLOPs\\(M)}& \makecell[c]{Top-1~\\(\%)} & \makecell[c]{Top-5 \\ (\%)} \\ \hline\hline FairNAS-A~\cite{chu2019fairnas}~(\textit{sup.}) & 4.6 & 388 & 75.3 & 92.4 \\ FBNet-C~\cite{wu2019fbnet}~(\textit{sup.}) & 4.4 & 375 & 74.9 & 92.1 \\ Proxyless~(GPU)~\cite{cai2018proxylessnas}~(\textit{sup.}) & 7.0 & 457 & 75.1 & 92.5 \\ FairDARTS-D~\cite{chu2019fair}~(\textit{sup.})& 4.3 & 440 & \textbf{75.6} & \textbf{92.6} \\ SPOS~\cite{guo2019single}~(\textit{sup.}) & 5.4 & 472 & 74.8 & - \\ \hline RLNAS~(\textit{unsup.}) & 5.3 & 473 & \textbf{75.6} & \textbf{92.6} \\ \hline \end{tabular} \end{threeparttable} \caption{MobileNet search space results: comparison of the SOTA methods on ImageNet.} \label{tab.5} \end{table} \subsection{Ablation Study and Analysis} \label{sec.ablation.study} In this section, we perform an ablation study, analyzing the impact of random labels and the angle metric on RLNAS.
All experiments are conducted on NAS-Bench-201. \paragraph{Methods of generating random labels.} In the above experiments, we uniformly sample random labels for images before SuperNet training, which we denote as method (1). In this subsection, we discuss three other methods for generating random labels: (2)~shuffle all ground truth labels once before SuperNet training, (3)~uniformly sample labels in each training iteration, and (4)~shuffle the ground truth labels in each training iteration. For each of these four methods, we conducted three repeated architecture search experiments across CIFAR-10, CIFAR-100 and ImageNet16-120. As Table~\ref{tab.6} shows, in general, the methods that generate random labels once outperform the methods that generate labels randomly in each iteration. Even though $\text{RLNAS}^{\dagger}$ performs better than $\text{RLNAS}^{\ast}$ and $\text{RLNAS}^{\star}$ on CIFAR-10 and CIFAR-100, its performance on ImageNet16-120 is worse by a large margin, which means that $\text{RLNAS}^{\dagger}$ is unstable and transfers poorly. As for $\text{RLNAS}^{\ast}$ and $\text{RLNAS}^{\star}$, the two methods obtain comparable test accuracy. Since $\text{RLNAS}^{\ast}$ is coupled with ground truth labels, we generate random labels as in $\text{RLNAS}^{\star}$ by default, which makes it easy to apply our algorithm to tasks without labels.
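The four label-generation variants compared above can be sketched as follows (names are ours; variants (3) and (4) simply call the same two generators inside the training loop, so every iteration sees fresh labels):

```python
import random

def uniform_labels(num_images, num_classes, rng):
    """Variants (1)/(3): draw each label from a discrete uniform distribution."""
    return [rng.randrange(num_classes) for _ in range(num_images)]

def shuffled_labels(gt_labels, rng):
    """Variants (2)/(4): permute the ground truth labels, preserving their
    overall class distribution."""
    labels = list(gt_labels)
    rng.shuffle(labels)
    return labels

rng = random.Random(0)
fixed = uniform_labels(100, 10, rng)                   # variant (1): drawn once, then frozen
permuted = shuffled_labels(list(range(10)) * 10, rng)  # variant (2): shuffled once
```

The key difference is that shuffling keeps the label marginal identical to the ground truth, while uniform sampling fixes it to a flat distribution.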
\begin{table}[htbp] \centering \footnotesize \begin{tabular}{c|c|c|c} \hline \multirow{2}*{Method} & CIFAR-10 & CIFAR-100 & ImageNet16-120 \\ \cline{2-4} ~ & test acc~(\%) & test acc~(\%) & test acc~(\%) \\ \hline\hline $\text{RLNAS}^\star$ & 93.45$\pm$0.11 & 70.71$\pm$0.36 & 43.70$\pm$1.25 \\ \hline $\text{RLNAS}^\ast$ & 93.52$\pm$0.27 & 70.25$\pm$0.25 & \textbf{43.81}$\pm$\textbf{1.12} \\ \hline $\text{RLNAS}^\dagger$ & \textbf{93.65}$\pm$\textbf{0.07} & \textbf{71.45}$\pm$\textbf{0.42} & 27.51$\pm$1.04 \\ \hline $\text{RLNAS}^\ddagger$ & 92.85$\pm$0.46 & 61.59$\pm$6.57 & 27.51$\pm$1.04 \\ \hline \end{tabular} \caption{Search results of the four random label generation methods on NAS-Bench-201:~(1)$^\star$ uniformly sample all random labels at once, (2)$^\ast$ shuffle all ground truth labels at once, (3)$^\dagger$ uniformly sample labels in each iteration, and (4)$^\ddagger$ shuffle ground truth labels in each iteration.} \label{tab.6} \end{table} \paragraph{Impact of image category.} We have shown that uniformly sampling labels for images before training is the most appropriate way to generate random labels. In this section, we further discuss the impact of the number of label categories on search performance. In detail, we sample 20 different category counts, from 10 to 200 with interval 10, for CIFAR-10, CIFAR-100 and ImageNet16-120, and train the SuperNet with random labels of each category count. After that, test accuracy and Kendall's Tau are obtained as in Sec.~\ref{sec.4.2.1}. As shown in Figure~\ref{fig.1}, test accuracy and Kendall's Tau fluctuate greatly when the number of categories on ImageNet16-120 is small~(in $[10,50]$). However, Kendall's Tau and test accuracy are not sensitive to the number of label categories in most cases. This observation implies that our method can be directly applied to tasks where the real number of image categories is unknown.
\begin{figure}[htbp] \centering \subfigure[Test accuracy]{ \includegraphics[scale=0.27]{ablate_class_num_acc.pdf}} \subfigure[Kendall's Tau]{ \includegraphics[scale=0.27]{ablate_class_num_tau.pdf}} \caption{Impact of the random label category on (a) test accuracy and (b) Kendall's Tau~(best viewed in color). CIFAR-10, CIFAR-100 and ImageNet16-120 all sample 20 different image category counts from 10 to 200 with interval 10. The red marker on each polyline represents the real number of image categories of the corresponding dataset.} \label{fig.1} \end{figure} \paragraph{Bias analysis of angle metric.} Having shown the impact of random labels on RLNAS, we next ablate the bias of the angle metric in architecture search. Specifically, we initialize two sets of SuperNet weights with the same distribution but different random seeds. Based on the untrained SuperNet, the evolution algorithm with angle as fitness is used to search for architectures. We also construct a random search baseline, which trains the SuperNet with the uniform sampling strategy and ground truth labels and then randomly samples 100 architectures from the NAS-Bench-201 search space; the top-1 architecture is selected among the sampled architectures according to validation accuracy. Table~\ref{tab.7} compares our method with the two training-free methods under different initializations and with the random search baseline. The results show that the two training-free methods are worse than random search, while RLNAS is better than random search. This means that the angle metric does not bias the search toward particular candidate architectures.
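The random search baseline amounts to the following sketch (names and the toy sampler and accuracy function are ours):

```python
import random

def random_search(sample_arch, validation_accuracy, num_samples=100, seed=0):
    """Baseline: sample architectures uniformly from the search space and
    keep the one with the best validation accuracy."""
    rng = random.Random(seed)
    candidates = [sample_arch(rng) for _ in range(num_samples)]
    return max(candidates, key=validation_accuracy)

# Toy check: with a hypothetical accuracy equal to the architecture id,
# random search returns the largest sampled id.
best = random_search(lambda rng: rng.randrange(1000),
                     validation_accuracy=lambda a: a)
```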
\begin{table}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c} \hline \multirow{2}*{Method} & CIFAR-10 & CIFAR-100 & ImageNet16-120 \\ \cline{2-4} ~ & test acc~(\%) & test acc~(\%) & test acc~(\%) \\ \hline\hline $\text{Training free}^{\dagger}$ & 90.74$\pm$1.39 & 66.97$\pm$1.86 & 38.54$\pm$2.86 \\ \hline $\text{Training free}^{\ddagger}$ & 91.55$\pm$1.34 & 66.59$\pm$2.10 & 39.03$\pm$3.91 \\ \hline Random search & 92.09$\pm$0.21 & 67.27$\pm$1.28 & 40.77$\pm$3.64 \\ \hline RLNAS & \textbf{93.45}$\pm$\textbf{0.11} & \textbf{70.71}$\pm$\textbf{0.36} & \textbf{43.70}$\pm$\textbf{1.25} \\ \hline \end{tabular} \caption{Bias analysis of angle towards architectures on NAS-Bench-201. $^{\dagger}$ and $^{\ddagger}$ initialize model weights with a normal distribution and a uniform distribution, respectively.} \label{tab.7} \end{table} \subsection{Generalization Ability} \begin{table*}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline Method & Params~(M)& FLOPs~(M) & Acc & AP &$\text{AP}_{50}$ &$\text{AP}_{75}$ & $\text{AP}_{S}$ & $\text{AP}_{M}$ & $\text{AP}_{L}$\\ \hline\hline Random search & 4.7 & 519 & 74.3 & 31.7 & 50.4 & 33.4 & 16.3 & 35.2 & 42.9 \\ \hline DARTS-v1~\cite{liu2018darts}~(\textit{sup.}) & 4.5 & 507 & 74.3 & 31.2 & 49.5 & 32.6 & 16.1 & 33.9 & 43.6 \\ DARTS-v2~\cite{liu2018darts}~(\textit{sup.}) & 4.7 & 531 & 74.9 & 31.5 & 50.3 & 33.1 & 16.9 & 34.5 & 43.0 \\ P-DARTS~\cite{chen2019progressive}~(\textit{sup.}) & 4.9 & 544 & 75.7 & 32.9 & \textbf{52.1} & 34.7 & 17.2 & 36.2 & 44.8 \\ PC-DARTS~\cite{xu2019pc}~(\textit{sup.}) & 5.3 & 582 & 75.9 & 32.9 & 51.8 & \textbf{34.8} & \textbf{17.5} & 36.3 & 43.5\\ \hline UnNAS~\cite{liu2020labels}~(\textit{rotation task.}) & 5.1 & 552 & 75.8 & 32.8 & 51.5 & 34.7 & 16.7 & 36.1 & 44.5\\ UnNAS~\cite{liu2020labels}~(\textit{color task.}) & 5.3 & 587 & 75.5 & 32.4 & 51.2 & 34.2 & 16.6 & 35.6 & 44.6\\ UnNAS~\cite{liu2020labels}~(\textit{jigsaw task.}) & 5.2 & 560 & 76.2 & \textbf{33.0} & 51.9 & 35.3 & 16.4 &
\textbf{37.2} & \textbf{45.4}\\ \hline $\text{Ours}^\dagger$~(\textit{random label.}) & 5.5 & 597 & 75.9 & 32.4 & 50.9 & 34.4 & 16.5 & 35.5 & 44.5 \\ $\text{Ours}^\ddagger$~(\textit{random label.}) & 5.2 & 561 & 75.9 & 32.9 & 51.6 & \textbf{34.8} & 16.8 & 36.7 & 44.5\\ \hline \end{tabular} \caption{Object detection results of the DARTS search space on MS COCO. $^\dagger$ search with a 600M FLOPs constraint. $^\ddagger$ search without a FLOPs constraint, then scale FLOPs to 600M.} \label{tab.8} \end{table*} \begin{table*}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline Method & Params~(M)& FLOPs~(M) & Acc & AP &$\text{AP}_{50}$ &$\text{AP}_{75}$ & $\text{AP}_{S}$ & $\text{AP}_{M}$ & $\text{AP}_{L}$\\ \hline\hline Random search~(\textit{sup.}) & 4.5 & 446 & 75.3 & 29.7 & 47.5 & 31.4 & 15.3 & 32.6 & 39.9 \\ \hline FairNAS-A~\cite{chu2019fairnas}~(\textit{sup.}) & 4.7 & 389 & 75.1 & 29.8 & 47.8 & 31.4 & 15.5 & 32.3 & \textbf{41.2} \\ Proxyless~(GPU)~\cite{cai2018proxylessnas}~(\textit{sup.}) & 7.0 & 457 & 75.5 & 29.5 & 47.5 & 30.9 & 15.5 & 32.4 & 40.8 \\ FairDARTS-D~\cite{chu2019fair}~(\textit{sup.}) & 4.4 & 477 & 74.7 & 29.6 & 47.2 & 31.1 & 14.6 & 32.5 & 40.1\\ SPOS~\cite{guo2019single}~(\textit{sup.}) & 5.4 & 472 & 75.6 & 29.8 & \textbf{48.1} & 31.1 & \textbf{16.0} & 32.6 & 40.4 \\ \hline Ours~(\textit{unsup.}) & 5.3 & 473 & 75.6 & \textbf{30.0} & 47.6 & \textbf{31.8} & 15.7 & \textbf{32.8} & 40.5\\ \hline \end{tabular} \caption{Object detection results of the MobileNet-like search space on MS COCO.} \label{tab.9} \end{table*} We evaluate the generalization ability of RLNAS on two downstream tasks:~object detection and semantic segmentation. We first retrain the models searched by different NAS methods on ImageNet, and then finetune these pre-trained models on the downstream tasks. To make fair comparisons, models searched in the same search space adopt the same training setting for the ImageNet classification task.
At the same time, models for the same downstream task also use the same training setting, no matter which search space the model comes from. \paragraph{Object detection.} We conduct experiments on MS COCO~\cite{Lin2014Microsoft} and adopt RetinaNet~\cite{2017Focal} as the detection framework. The train and test image scale is 800$\times$ resolution. We only modify the backbone of RetinaNet and train it with the default training settings of Detectron2~\cite{wu2019detectron2}. Table~\ref{tab.8} and Table~\ref{tab.9} show the comparisons of models searched in the DARTS and MobileNet-like search spaces respectively. RLNAS obtains comparable AP in the DARTS search space and surpasses the other methods by a slight margin in the MobileNet-like search space. \paragraph{Semantic segmentation.} We further test RLNAS on semantic segmentation on the Cityscapes~\cite{2016The} dataset, adopting DeepLab-v3~\cite{chen2017rethinking} as the segmentation framework. The train and test image scale is 769$\times$769, and we train DeepLab-v3 for 40k iterations. The other segmentation training settings are kept the same as in MMSegmentation~\cite{mmseg2020}. Table~\ref{tab.10} and Table~\ref{tab.11} compare models searched in the DARTS and MobileNet-like search spaces respectively. For the DARTS search space, $\text{RLNAS}^{\dagger}$ obtains 73.2\% mIoU and outperforms the other methods by a large margin. RLNAS also obtains comparable mIoU in the MobileNet search space. \paragraph{Summary.} We conclude that RLNAS achieves comparable or even superior performance across the two downstream tasks and various search spaces, without bells and whistles.
\begin{table}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c} \hline Method & \makecell[c]{Params \\ (M)}& \makecell[c]{FLOPs \\ (M)} & \makecell[c]{Acc \\ (\%)} & \makecell[c]{mIoU \\ (\%)} \\ \hline\hline Random search~(\textit{sup.}) & 4.7 & 519 & 74.3 & 72.3 \\ \hline DARTS-v1~\cite{liu2018darts}~(\textit{sup.}) & 4.5 & 507 & 74.3 & 72.7 \\ DARTS-v2~\cite{liu2018darts}~(\textit{sup.}) & 4.7 & 531 & 74.9 & 71.8 \\ P-DARTS~\cite{chen2019progressive}~(\textit{sup.}) & 4.9 & 544 & 75.7 & 71.9 \\ PC-DARTS~\cite{xu2019pc}~(\textit{sup.}) & 5.3 & 582 & 75.9 & 72.2 \\ \hline UnNAS~\cite{liu2020labels}~(\textit{rotation task.}) & 5.1 & 552 & 75.8 & 71.9 \\ UnNAS~\cite{liu2020labels}~(\textit{color task.}) & 5.3 & 587 & 75.5 & 72.0 \\ UnNAS~\cite{liu2020labels}~(\textit{jigsaw task.}) & 5.2 & 560 & 76.2 & 72.1 \\ \hline $\text{Ours}^\dagger$~(\textit{random label.}) & 5.5 & 597 & 75.9 & \textbf{73.2} \\ $\text{Ours}^\ddagger$~(\textit{random label.}) & 5.2 & 561 & 75.9 & 72.5 \\ \hline \end{tabular} \caption{Semantic segmentation results of the DARTS search space on Cityscapes. $^\dagger$ search with a 600M FLOPs constraint.
$^\ddagger$ search without a FLOPs constraint, then scale FLOPs to 600M.} \label{tab.10} \end{table} \begin{table}[htbp] \centering \footnotesize \begin{tabular}{l|c|c|c|c} \hline Method & \makecell[c]{Params \\ (M)}& \makecell[c]{FLOPs \\ (M)} & \makecell[c]{Acc \\ (\%)} & \makecell[c]{mIoU \\ (\%)} \\ \hline\hline Random search~(\textit{sup.}) & 4.5 & 446 & 75.3 & 70.6 \\ \hline FairNAS-A~\cite{chu2019fairnas}~(\textit{sup.}) & 4.7 & 389 & 75.1 & 72.0 \\ Proxyless~(GPU)~\cite{cai2018proxylessnas}~(\textit{sup.}) & 7.0 & 457 & 75.5 & 71.0 \\ FairDARTS-D~\cite{chu2019fair}~(\textit{sup.}) & 4.4 & 477 & 74.7 & \textbf{72.1} \\ SPOS~\cite{guo2019single}~(\textit{sup.}) & 5.4 & 472 & 75.6 & 71.6 \\ \hline Ours~(\textit{unsup.}) & 5.3 & 473 & 75.6 & 71.8 \\ \hline \end{tabular} \caption{Semantic segmentation results of the MobileNet-like search space on Cityscapes.} \label{tab.11} \end{table} \bibliographystyle{ieee_fullname}
The right and bottom margins will depend on whether you print on US letter or A4 paper, but all final versions must be produced for US letter size. The paper body should be set in 10~point type with a vertical spacing of 11~points. Please use Times typeface throughout the text. \subsection{Title} The paper title should be set in 14~point bold type and centered between two horizontal rules that are 1~point thick, with 1.0~inch between the top rule and the top edge of the page. Capitalize the first letter of content words and put the rest of the title in lower case. \subsection{Author Information for Submission} \label{author info} To facilitate blind review, author information must not appear. If you are using \LaTeX\/ and the \texttt{icml2015.sty} file, you may use \verb+\icmlauthor{...}+ to specify authors. The author information will simply not be printed until {\tt accepted} is an argument to the style file. Submissions that include the author information will not be reviewed. \subsubsection{Self-Citations} If your are citing published papers for which you are an author, refer to yourself in the third person. In particular, do not use phrases that reveal your identity (e.g., ``in previous work \cite{langley00}, we have shown \ldots''). Do not anonymize citations in the reference section by removing or blacking out author names. The only exception are manuscripts that are not yet published (e.g. under submission). If you choose to refer to such unpublished manuscripts \cite{anonymous}, anonymized copies have to be submitted as Supplementary Material via CMT. However, keep in mind that an ICML paper should be self contained and should contain sufficient detail for the reviewers to evaluate the work. In particular, reviewers are not required to look a the Supplementary Material when writing their review. \subsubsection{Camera-Ready Author Information} \label{final author} If a paper is accepted, a final camera-ready copy must be prepared. 
For camera-ready papers, author information should start 0.3~inches below the bottom rule surrounding the title. The authors' names should appear in 10~point bold type, electronic mail addresses in 10~point small capitals, and physical addresses in ordinary 10~point type. Each author's name should be flush left, whereas the email address should be flush right on the same line. The author's physical address should appear flush left on the ensuing line, on a single line if possible. If successive authors have the same affiliation, then give their physical address only once. A sample file (in PDF) with author names is included in the ICML2015 style file package. \subsection{Abstract} The paper abstract should begin in the left column, 0.4~inches below the final address. The heading `Abstract' should be centered, bold, and in 11~point type. The abstract body should use 10~point type, with a vertical spacing of 11~points, and should be indented 0.25~inches more than normal on left-hand and right-hand margins. Insert 0.4~inches of blank space after the body. Keep your abstract brief and self-contained, limiting it to one paragraph and no more than six or seven sentences. \subsection{Partitioning the Text} You should organize your paper into sections and paragraphs to help readers place a structure on the material and understand its contributions. \subsubsection{Sections and Subsections} Section headings should be numbered, flush left, and set in 11~pt bold type with the content words capitalized. Leave 0.25~inches of space before the heading and 0.15~inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10~pt bold type with the content words capitalized. Leave 0.2~inches of space before the heading and 0.13~inches afterward. Finally, subsubsection headings should be numbered, flush left, and set in 10~pt small caps with the content words capitalized. 
Leave 0.18~inches of space before the heading and 0.1~inches after the heading. Please use no more than three levels of headings. \subsubsection{Paragraphs and Footnotes} Within each section or subsection, you should further partition the paper into paragraphs. Do not indent the first line of a given paragraph, but insert a blank line between succeeding ones. You can use footnotes\footnote{For the sake of readability, footnotes should be complete sentences.} to provide readers with additional information about a topic without interrupting the flow of the paper. Indicate footnotes with a number in the text where the point is most relevant. Place the footnote in 9~point type at the bottom of the column in which it appears. Precede the first footnote in a column with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can appear in each column, in the same order as they appear in the text, but spread them across columns and pages if possible.} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{icml_numpapers}} \caption{Historical locations and number of accepted papers for International Machine Learning Conferences (ICML 1993 -- ICML 2008) and International Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was produced, the number of accepted papers for ICML 2008 was unknown and instead estimated.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure} \subsection{Figures} You may want to include figures in the paper to help readers visualize your approach and your results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5~points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. 
Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption {\it after\/} the graphics, with at least 0.1~inches of space before the caption and 0.1~inches after it, as in Figure~\ref{icml-historical}. The figure caption should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment {\tt figure*} in \LaTeX), but always place two-column figures at the top or bottom of the page. \subsection{Algorithms} If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic'' environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. Algorithm~\ref{alg:example} shows an example. \begin{algorithm}[tb] \caption{Bubble Sort} \label{alg:example} \begin{algorithmic} \STATE {\bfseries Input:} data $x_i$, size $m$ \REPEAT \STATE Initialize $noChange = true$. \FOR{$i=1$ {\bfseries to} $m-1$} \IF{$x_i > x_{i+1}$} \STATE Swap $x_i$ and $x_{i+1}$ \STATE $noChange = false$ \ENDIF \ENDFOR \UNTIL{$noChange$ is $true$} \end{algorithmic} \end{algorithm} \subsection{Tables} You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title {\it above\/} the table with at least 0.1~inches of space before the title and the same after it, as in Table~\ref{sample-table}. The table title should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. 
\begin{table}[t] \caption{Classification accuracies for naive Bayes and flexible Bayes on various data sets.} \label{sample-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \hline \abovespace\belowspace Data set & Naive & Flexible & Better? \\ \hline \abovespace Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\ Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\ Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\ Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\ Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\ Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\ Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\ \belowspace Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Tables contain textual material that can be typeset, as contrasted with figures, which contain graphical material that must be drawn. Specify the contents of each row and column in the table's topmost row. Again, you may float tables to a column's top or bottom, and set wide tables across both columns, but place two-column tables at the top or bottom of the page. \subsection{Citations and References} Please use APA reference format regardless of your formatter or word processor. If you rely on the \LaTeX\/ bibliographic facility, use {\tt natbib.sty} and {\tt icml2015.bst} included in the style-file package to obtain this format. Citations within the text should include the authors' last names and year. If the authors' names are included in the sentence, place only the year in parentheses, for example when referencing Arthur Samuel's pioneering work \yrcite{Samuel59}. Otherwise place the entire reference in parentheses with the authors and year separated by a comma \cite{Samuel59}. List multiple references separated by semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.' 
construct only for citations with three or more authors or after listing all authors to a publication in an earlier reference \cite{MachineLearningI}. Authors should cite their own work in the third person in the initial version of their paper submitted for blind review. Please refer to Section~\ref{author info} for detailed instructions on how to cite your own papers. Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points. The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers). \subsection{Software and Data} We strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, do not include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look a this material when writing their review. \section*{Acknowledgments} \textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. 
\section{Introduction} Recent advances in machine learning have given rise to several classes of latent variable models, allowing one to embed additional unobserved structure into a problem in order to improve results. One such method is Latent Dirichlet Allocation (LDA) \cite{OriginalLDA2003}, a generative model in which documents are assumed to contain a mixture of topics; each topic is then represented as a probability distribution over words in the corpus. A recent application of this is in Amazon review modeling \cite{715Project}. In this approach, each review text is treated as a document, and products are displayed via a word cloud containing the top $n$ words per topic. However, while work such as the Alias method \cite{LietalKDD2014}, GPU-sampling \cite{LietalANU2012}, and Parameter Server \cite{LietalNIPS2013, LietalWSDM2015} has resulted in substantial speed improvements for large-scale systems, much less has been done for more hardware-constrained settings such as smartphones running Android or iOS. Performance aside, even less has been explored with respect to obtaining interpretable and visualizable results from these models. \section{Background} \subsection{Current Amazon System} The current Amazon system for displaying reviews is divided into three sections. Quotes give a user a set of three one-line excerpts from reviews as a high-level overview of the product. A list of ``most helpful reviews'' gives the user more detailed information through the eyes of a select few reviewers.
The full review list is then available as a backup for the ambitious buyer who wishes to sift through thousands of reviews to ensure their purchase will be worthwhile. It is easy to spot a couple of potential flaws in this system -- a user is either able to view the experience of a small, select set of individuals, or they must look through the full uncurated review set. Thus, there is no aggregation of knowledge despite the availability of millions of reviews throughout the site. In addition, the system does not quickly convey a multi-faceted view of the product. For example, a user buying a smartphone might be interested in knowing the general sentiment about the phone's camera, battery life, performance (does it lag?), reliability of connection, etc. \subsection{Previous Topic Modeling Approach} The previous approach uses LDA to overcome this issue by representing each product by its topic distribution. Each topic distribution is then displayed to the user via a word cloud in order to provide multifaceted information about a product in a way that is intuitive, compact, and aggregate. The system retains completeness in the sense that the full review set is still available to the user via an interactive topic-based search feature. However, this system still falls short in several areas. Namely, the system: \begin{itemize} \item Requires that models are built on the cloud and returned to the user for display \item Is far too slow to be run on mobile operating systems \item Does not take advantage of auxiliary data such as ratings, helpfulness votes, and users when estimating topics \item Utilizes a fixed number of topics (16) regardless of the product \item Does not give rise to results that are easily displayable on a small-screen mobile device \item Uses a fixed dataset \cite{amazondata} rather than an adaptive one \end{itemize} The first point gives rise to several issues when scaling the service.
Previously, computing topics was done with MALLET \cite{MALLET}, a machine learning toolkit written in Java. Aside from the toolkit's inability to efficiently parallelize LDA training, the use of MALLET server-side results in long computation times and a system that does not scale cost-effectively. We plan to address this issue by building a system that can run efficiently in a mobile setting, thus offloading the vast majority of computation to a distributed system of clients connected by a central model cache and updating server. To do this, the proposed learning algorithm must run efficiently on a locally parallel system (2--4 cores) in an amount of time that is acceptable for a typical user. Sampling for the algorithm must be implemented from scratch; as an example of MALLET's poor performance on Android, a small topic modeling job ($\sim$350 reviews) caused the app to crash after several minutes, largely spent in garbage collection, due to MALLET's high memory consumption. Another weakness of the previous system is the omission of auxiliary data from the underlying probability model. By using standard LDA, ratings, helpfulness votes, and the user network cannot be used to directly improve results. While post-processing can help with some of these issues, it cannot effectively improve the quality of topics themselves. Thus, we present a new latent variable model, RLDA, that incorporates this information in order to improve topic quality and reduce noise. In addition, RLDA allows for a variable number of topics in order to help avoid the display of information-void topics and improve user experience. Along these same lines, the final result must be easily visualizable on a mobile device. While the previous system was easily viewable in a desktop web browser, the size constraints of a mobile device coupled with the lack of a mouse prohibit the use of many previous features.
Namely, the number of topics displayed and the size of the visualization circle are too large to display on a mobile device while maintaining visible text, and the lack of a mouse invalidates the hover-based review selection system. In later sections, we demonstrate how our revised approach overcomes these issues to allow RLDA results to be effectively visualized on small-screen devices. As a final note, the previous system utilized a fixed dataset \cite{amazondata} which did not include recent products or allow for model updating as new reviews appear. For a potential buyer, it is often useful to know if a product has a tendency to fail a few months after purchase. We address this issue by creating a system that dynamically updates models as new data become available, using efficient sampling techniques. \subsection{Pre-existing work on modeling reviews} Pre-existing latent variable models designed for analyzing reviews, such as \cite{BrodyE10, JoO11, TitovM08}, generally fall short in scalability and generality. They typically improve over LDA by using word associations and sentence context to form more representative words, and they focus specifically on a fixed number of known aspects. This severely limits the potential application of these models, and as a result they cannot capture unknown topics/aspects at a fine-grained level (e.g., a third-party charging adapter that does not work with some Apple computers). These models also ignore a large quantity of auxiliary data such as user ratings, helpfulness votes, and unhelpfulness votes. \subsection{Latent Dirichlet Allocation} Latent Dirichlet Allocation (LDA) \cite{OriginalLDA2003} is a widely used topic model in which documents are assumed to be generated from mixture distributions of language models associated with individual topics. That is, the documents are generated by the latent variable model below: \begin{figure}[ht!]
\centering \begin{tikzpicture} \node[obs] (alpha) {$\alpha$}; \node[latent, right=of alpha] (theta) {$\theta_d$}; \node[latent, right=of theta] (z) {$z_{di}$}; \node[obs, right=of z] (w) {$w_{di}$}; \node[latent, right=of w] (phi) {$\phi_k$}; \node[obs, right=of phi] (beta) {$\beta$}; \edge{alpha}{theta} ; \edge{theta}{z} ; \edge{z, phi}{w} ; \edge{beta}{phi} ; \plate {plate1} {(z)(w)} {for all $i$} ; \plate {plate2} {(theta)(plate1)} {for all $d$} ; \plate {plate3} {(phi)} {for all $k$} ; \end{tikzpicture} \end{figure} \\ The generative process is as follows: \\ \\ For each document $d$, draw a topic distribution $\theta_d$ from a Dirichlet distribution with parameter $\alpha$: \begin{align} \theta_d \sim Dir(\alpha) \end{align} For each topic $t$, draw a word distribution from a Dirichlet distribution with parameter $\beta$: \begin{align} \phi_t \sim Dir(\beta) \end{align} For each word $i \in \{1,\dots,n_d\}$ in document $d$, draw a topic from the multinomial $\theta_d$ via \begin{align} z_{di} \sim Mult(\theta_d) \end{align} Then draw a word from the multinomial $\phi_{z_{di}}$ via \begin{align} w_{di} \sim Mult(\phi_{z_{di}}) \end{align} The Dirichlet-multinomial design in this model makes inference simple due to distribution conjugacy -- we can integrate out the multinomial parameters $\theta_d$ and $\phi_k$, thus allowing one to express $p(w,z|\alpha,\beta,n_d)$ in closed form \cite{SparseLDA2009}. This yields a Gibbs sampler for drawing from $p(z_{di}|rest)$ efficiently. The conditional probability is given by \begin{align} p(z_{di}|rest) \propto \frac{(n_{td}^{-di} + \alpha_t)(n_{tw}^{-di} + \beta_w)}{n_t^{-di} + \bar{\beta}} \end{align} Here the count variables $n_{td}$, $n_{tw}$ and $n_t$ denote the number of occurrences of a particular (topic, document) pair, a particular (topic, word) pair, or a particular topic, respectively. Moreover, the superscript $\cdot^{-di}$ denotes counts computed ignoring the pair $(z_{di},w_{di})$.
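A single draw from this conditional can be sketched in Python. This is a minimal, dense $O(k)$ implementation with hypothetical names, symmetric hyperparameters, and a dictionary-of-counts layout; the sparse accelerations cited below replace the linear scan over topics.

```python
import random
from collections import defaultdict

def sample_topic(d, i, w, z, n_td, n_tw, n_t, alpha, beta, beta_bar, K, rng=random):
    """One collapsed Gibbs draw of z_di for word w at position (d, i)."""
    t_old = z[(d, i)]
    # Remove the current assignment from the counts (the "-di" superscript).
    n_td[(t_old, d)] -= 1
    n_tw[(t_old, w)] -= 1
    n_t[t_old] -= 1
    # Dense O(K) evaluation of the unnormalized conditional.
    weights = [(n_td[(t, d)] + alpha) * (n_tw[(t, w)] + beta) / (n_t[t] + beta_bar)
               for t in range(K)]
    # Draw a topic proportionally to the unnormalized weights.
    r = rng.random() * sum(weights)
    t_new = K - 1
    for t, wt in enumerate(weights):
        r -= wt
        if r <= 0:
            t_new = t
            break
    # Record the new assignment and restore the counts.
    z[(d, i)] = t_new
    n_td[(t_new, d)] += 1
    n_tw[(t_new, w)] += 1
    n_t[t_new] += 1
    return t_new
```

The count tables are assumed to be `collections.defaultdict(int)`, so that missing entries read as zero.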
For instance, $n_{tw}^{-di}$ is obtained by ignoring the (topic, word) combination at position $(d,i)$. Finally, $\bar{\beta}:=\sum_{w}\beta_w$ denotes the joint normalization. Sampling from (5) requires $O(k)$ time, since we have $k$ nonzero terms in a sum that need to be normalized. In large datasets where the number of topics may be large, this is computationally costly. However, there are many approaches for substantially accelerating sampling by exploiting topic sparsity, reducing the time complexity to $O(k_d + k_w)$ \cite{SparseLDA2009} and further to $O(k_d)$ \cite{LietalKDD2014}, where $k_d$ denotes the number of topics instantiated in a document and $k_w$ denotes the number of topics instantiated for a word across all documents. \subsection{Chital Computation Marketplace} Chital is a scalable, distributed computation marketplace designed for the efficient allocation of high-CPU, low-network-bandwidth tasks among a network of mobile devices. The five key aspects of Chital are 1) task distribution via the marketplace, 2) a credit score system for monitoring user behavior, 3) real-time matching mechanisms for maximizing user gain, 4) an optional lottery system for further incentivizing participation, and 5) an evaluation system for verifying the submitted models. Each of these is discussed in detail below. \subsubsection{Marketplace} \label{Marketplace} The marketplace is the major underlying component in Chital for task allocation. In the marketplace, each user has the option of opting in to background computation; once opted in, this user is listed as a computational seller and can be assigned modeling tasks to be run in the background. When a Vedalia user enters a query, a matching request is sent to a centralized server system -- this user is now a buyer. Assuming the buyer has sufficient computational power on their phone, the buyer is also automatically listed as a seller for the duration of their model computation.
The marketplace then matches the buyer with a pair of sellers and requests that both sellers generate a model from the supplied data. This data is then returned to the central servers, where the system determines whether model verification is necessary. Let $c_1$ and $c_2$ denote the credit of the two sellers, and $p_1$ and $p_2$ denote the perplexity of the sellers' results. Then the probability of secondary verification is defined as: \begin{eqnarray} 1 - \frac{1}{3} \Bigg[ \frac{1}{1 + e^{-(c_1 + c_2)}} + 2\frac{\min(p_1, p_2)}{\max(p_1, p_2)} \Bigg] \end{eqnarray} Thus, high seller credit scores and closely matching perplexities reduce the probability of verification, and vice versa. The best model (measured by perplexity) that passes verification is then returned to the original buyer. \subsubsection{Credit System} A zero-sum credit system is established, initialized with two zero-credit sellers; each user that subsequently joins the system as a seller also begins with zero credit. When building a model, the perplexities of the two models returned by the sellers are compared; a credit is then transferred from the worse model's seller to the better model's seller. The better model's seller is additionally awarded $t \cdot i^*$ lottery tickets, where $t$ is the number of tokens processed and $i^*$ is the number of sampling iterations performed by the best model. Assuming every seller is honest, each seller's expected credit over time is zero. However, in the event that a malicious seller attempts to provide phony results in order to acquire lottery tickets, the credit distribution shifts from bad to good users. As a result, the system becomes less likely to need to verify the results of good users, and increasingly likely to perform verification on bad users. \subsubsection{Matching Mechanisms} The core of Chital is a real-time matching system that pairs each query with two sellers.
The matching problem can be formulated as a bipartite matching problem with vertices on both sides arriving online. Each buyer vertex is required to match with two seller vertices. In addition, after a matching is established, the matched vertices become temporarily unavailable for a period of time based on the performance of the seller nodes and the task size of the buyer node, after which the matching is removed and the vertices become available again. Although online graph matching, and especially online bipartite matching, is a well-studied research area \cite{KarVazVaz90, Mehta13}, our problem setup makes it difficult to apply any existing algorithm for two reasons: our problem introduces an extra ``time dimension'', and our objective is to maximize overall user gain so as to convince users to join the system voluntarily. Based on these observations, we developed notions of strategyproofness and Nash equilibrium for this setting in another recent work of ours \cite{896Project}, and studied and created a suite of new real-time matching algorithms to achieve our goal. \subsubsection{Lottery System} To further incentivize seller participation, a lottery system can be constructed in which a portion of app advertising revenue is allocated as the prize pool. At the end of each lottery period, a user is sampled at random, with probability of winning proportional to the user's number of lottery tickets. The full lottery amount is then awarded to this winning user. Note, however, that the lottery system is entirely optional, since a rational user would voluntarily participate in the system if a good matching mechanism with strategyproofness and an empirical Nash equilibrium were used. In our empirical studies \cite{896Project}, we found that, under appropriate parameters, users always save overall computation time by a large margin in our simulations. \subsubsection{Evaluation System} Evaluation is a multi-stage system consisting of model validation, selection, and verification.
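As a concrete illustration, the secondary-verification trigger, using the probability defined in Section~\ref{Marketplace}, can be sketched as a small Python function. Names here are hypothetical; we draw $s$ uniformly and verify when $s < p_v$, so that verification occurs with probability $p_v$.

```python
import math
import random

def verification_probability(c1, c2, p1, p2):
    """p_v from the sellers' credits (c1, c2) and model perplexities (p1, p2)."""
    credit_term = 1.0 / (1.0 + math.exp(-(c1 + c2)))    # high credit -> less verification
    agreement_term = 2.0 * min(p1, p2) / max(p1, p2)    # close perplexities -> less verification
    return 1.0 - (credit_term + agreement_term) / 3.0

def needs_verification(c1, c2, p1, p2, rng=random):
    """Sample s uniformly from [0, 1]; verify with probability p_v."""
    return rng.random() < verification_probability(c1, c2, p1, p2)
```

For two brand-new sellers ($c_1 = c_2 = 0$) with identical perplexities, $p_v = 1 - \tfrac{1}{3}(0.5 + 2) = \tfrac{1}{6}$.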
In validation, basic properties of the submitted distributions are verified (e.g., that they sum to 1). Any model that fails validation is immediately rejected. In selection, the perplexity of each submitted model is computed. The model with the lower perplexity is then selected to be returned to the end user, pending verification. In verification, the marketplace first computes the probability $p_v$ of secondary verification as described in Section~\ref{Marketplace}. A value $s$ is then sampled uniformly from $[0,1]$; if $s \le p_v$, verification occurs, so that verification happens with probability $p_v$. During verification, a few additional iterations of Gibbs sampling are run on the selected model on Chital servers, and the final model perplexity is computed. If the final perplexity deviates substantially from that of the submitted model, the submitted model has not converged and is thus rejected by the system. \section{Proposed System} We propose a system that incorporates a new latent variable model, RLDA, that is well-suited for mobile devices and review analysis. It naturally extends LDA while maintaining a structure that allows the techniques introduced in SparseLDA and AliasLDA to be used to achieve high performance at any scale. Furthermore, the system is integrated with Chital for scalability, and is accessible to the end user in the form of a mobile application. \subsection{RLDA} Review-augmented Latent Dirichlet Allocation (RLDA) is an adaptation of LDA that is well-suited to modeling reviews in a mobile setting due to its high sampling performance and increased structure with respect to standard LDA. In Figure \ref{RLDA-model} we present the RLDA model in plate notation. The notation is described in Figure \ref{RLDA-vars}. \begin{figure}[ht!]
\centering \begin{tikzpicture} \node[obs] (alpha) {$\alpha$}; \node[latent, right=of alpha, xshift=6mm] (theta) {$\theta_d$}; \node[latent, above=of theta] (rcat) {$c_d$}; \node[latent, above=of rcat] (radj) {$\widetilde{r}_d$}; \node[obs, above=of radj, xshift=-12mm] (r) {$r_d$}; \node[obs, above=of radj] (b) {$b_d$}; \node[obs, above=of radj, xshift=12mm] (sigma) {$\sigma^2_d$}; \node[latent, below=of theta] (z) {$z_{di}$}; \node[obs, below=of z] (w) {$w_{di}$}; \node[latent, right=of w, xshift=6mm] (phi) {$\phi_{dk}$}; \node[latent, above=of phi] (psi) {$\psi_d$}; \node[obs, above=of psi, xshift=12mm] (h) {$h_d$}; \node[obs, above=of psi] (u) {$u_d$}; \node[obs, above=of psi, xshift=-12mm] (nu) {$\nu_d$}; \node[obs, right=of phi, xshift=6mm] (beta) {$\beta$}; \edge{alpha, rcat}{theta} ; \edge{radj}{rcat} ; \edge{r, sigma, b}{radj} ; \edge{theta}{z} ; \edge{z, phi}{w} ; \edge{beta, psi}{phi} ; \edge{u, h, nu}{psi} ; \plate {plate1} {(z)(w)} {for all $i$} ; \plate {plate2} {(phi)} {for all $k$} ; \plate {plate3} {(theta)(radj)(r)(sigma)(h)(u)(nu)(b)(plate1)(plate2)} {for all $d$} ; \end{tikzpicture} \caption{RLDA} \label{RLDA-model} \end{figure} \begin{figure}[h!] \begin{framed} $r_d$ : rating associated with review $d$ \\ $b_d$ : mean of user $d$'s rating biases, excl.\ review $d$\\ $\sigma^2_d$ : variance of user $d$'s rating biases, excl.\ review $d$\\ $\widetilde{r}_d \sim N(r_d + b_d, \sigma^2_d + 1)$ : bias-corrected review rating \\ $c_d$ : categorical distribution over ratings $\{1, 2, 3, 4, 5\}$ \\ $\nu_d$ : writing quality score for review $d$ \\ $u_d$ : unhelpfulness votes for review $d$ \\ $h_d$ : helpfulness votes for review $d$ \\ $\psi_d \sim \mathrm{Bernoulli}(\mathrm{Logistic}(\nu_d, u_d, h_d))$ : the review quality rating \end{framed} \caption{Variables in RLDA} \label{RLDA-vars} \end{figure} Note that, despite the introduction of the latent variables $\widetilde{r}_d$, $c_d$, $\psi_d$ and the per-document observed variables $r_d$, $\sigma_d$, $h_d$, $u_d$, $\nu_d$, the basic structure of LDA is maintained. However, the topic distribution of each review is now dependent on the review's bias-corrected rating. This makes sense intuitively in that we expect more negative reviews to discuss different topics than wholly positive ones; as an example, negative reviews might tend to focus on poor product quality and customer service, with positive reviews focusing on product satisfaction and example use cases. The model also incorporates a Bernoulli review quality rating $\psi_d$, which takes into account review helpfulness votes, unhelpfulness votes, and writing quality (out-of-vocabulary rate, punctuational correctness, average word length, etc.). Creating such a model while simultaneously maintaining high sampling performance can be very challenging. To our knowledge, many pre-existing latent variable models overlook this issue, albeit providing interesting accuracy results in laboratory settings for certain categories of reviews. In Section \ref{SamplingSection}, we describe efficient sampling techniques for the RLDA model that build on existing LDA sampling methods. \subsection{Model Updating} Using the Chital system, model updating follows naturally by performing sampling using the existing model with the new reviews added to the review set.
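As an illustration of this update step, the following is a minimal collapsed Gibbs sketch over plain LDA counts; the function and variable names are ours for illustration (not Vedalia's actual implementation), and the RLDA-specific auxiliary variables are omitted for brevity. New reviews are folded into the existing model by sampling topic assignments for the new documents only, reusing (and updating) the global word-topic counts learned so far.

```python
import random

def gibbs_update(new_docs, n_wk, n_k, V, K, alpha=0.1, beta=0.01, iters=20, seed=0):
    """Fold new documents into an existing model via collapsed Gibbs sampling.

    new_docs : list of documents, each a list of word ids in [0, V)
    n_wk     : V x K word-topic counts from the existing model (updated in place)
    n_k      : length-K topic counts (updated in place)
    Returns per-document topic assignments for the new documents.
    """
    rng = random.Random(seed)
    assignments = []
    for doc in new_docs:
        n_dk = [0] * K                          # document-topic counts for this review
        z = [rng.randrange(K) for _ in doc]     # random initial assignments
        for i, w in enumerate(doc):
            n_dk[z[i]] += 1; n_wk[w][z[i]] += 1; n_k[z[i]] += 1
        for _ in range(iters):                  # collapsed Gibbs sweeps
            for i, w in enumerate(doc):
                k = z[i]                        # remove current assignment
                n_dk[k] -= 1; n_wk[w][k] -= 1; n_k[k] -= 1
                # full conditional p(z_i = k | everything else)
                weights = [(n_dk[j] + alpha) * (n_wk[w][j] + beta) / (n_k[j] + V * beta)
                           for j in range(K)]
                r = rng.random() * sum(weights)
                k = 0
                while k < K - 1 and r > weights[k]:
                    r -= weights[k]
                    k += 1
                z[i] = k                        # record the new assignment
                n_dk[k] += 1; n_wk[w][k] += 1; n_k[k] += 1
        assignments.append(z)
    return assignments
```

A full recompute then corresponds to re-initialising all counts and running the same sweeps over every document in the review set.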
In this way, if a lottery system is used, the number of lottery tickets awarded to the seller is fairly determined by the amount of computation required to update the model. To avoid convergence to poor optima, we recompute the product model after every few updates. This methodology allows products to be quickly updated when new reviews become available, while maintaining model quality via occasional full recomputes. \subsection{Core Set Selection} To accommodate a variable number of topics, we first perform RLDA sampling with a fixed number of topics $k$. The number of topics can then be reduced to a smaller core set post-sampling by using the techniques in \cite{FeldmanFK11} combined with estimating the informativeness of the top words in each topic. \subsection{Visualization} Given the limited screen space available on mobile devices, the interface is designed with simplicity in mind from the ground up. The initial screen is a simple search, with a single entry box in which the user can query products. After submitting a product query, the user is provided with a list of products to select from in order to build a topic model. In contrast to \cite{715Project}, the display of topics is reduced to its essentials. We display each topic using its reviews in topic-document sorted order; however, in contrast to Amazon's system, we partition the visualization into a set of tabs, one per topic. The user can then use an intuitive SeekBar to select topics. Once a topic is selected, a topic summary is displayed, including the topic-weighted rating, the topic weight, and the top $k$ tokens of the topic listed as keywords. Above this topic summary is a review ViewPager, which can be used to quickly pan through reviews in sorted order according to their probability under the selected topic. Within the review text, each region of text corresponding to a keyword lemma is bolded in order to bring attention to regions of the review that are pertinent to the selected topic.
In this way, a user can quickly glance at select regions of a review when learning about product features that they deem important. We defer further discussion of visualization to the case studies, in which we provide examples of Quokka visualizations. \section{Implementation Details} Below we briefly describe the implementation of Vedalia, with emphasis on performance-critical details and modeling. \subsection{Architecture, Preprocessing, and Database} To accommodate the new system design, we made substantial architecture changes compared to the previous system \cite{LiRobinsonDengJing14}. We dropped the integration with parameter-server-like computing clusters, but substantially increased the power of the preprocessing clusters and databases. Furthermore, the model result selection system and the intermediate-result push-update mechanism are no longer required, and are entirely replaced by the Chital system. We schedule large-scale batch review preprocessing tasks using Apache Spark \cite{spark10} combined with Stanford CoreNLP \cite{stanfordnlp14} as soon as enough reviews are committed to the database, so as to reduce waiting time and eliminate overhead during model computation. We deployed a single-rack, multi-node Cassandra \cite{cassandra10} database for storing and streaming product information, raw reviews, and analyzed reviews, so as to achieve the best performance in fault tolerance, consistency, and reads and writes at scale. An analyzed review is a review with its preprocessing results attached in a compressed binary format. To facilitate searching, we deployed a multi-node scalable search engine, ElasticSearch \cite{elasticsearch15}, which operates alongside Cassandra and simultaneously indexes every product and review inserted into the database. ElasticSearch complements Cassandra with its higher insertion performance and much more flexible query structure, while simultaneously benefiting from Cassandra's high consistency and fault tolerance.
Web servers deployed at the front end expose APIs that process product and review queries and stream the results back to query initiators, while simultaneously providing network-level isolation and security guarantees. This architecture effectively establishes a scalable system that is not sensitive to the number of users making queries, due to Chital's ability to offload computation to the users themselves. Since reviews are preprocessed in batches at insertion time as Spark tasks, large numbers of workers are allocated only temporarily, for short periods of time. Modern cloud computing platforms such as Google Cloud Platform allow this to be done in an automated fashion, and charge only for the time allocated, on a per-minute basis. To process the entire collection of 23 million reviews in the SNAP dataset \cite{amazondata}, we used at most 100 cores and 500\,GB of memory for 2 days. \subsection{Model Views} To reduce bandwidth and protect models from outside use, we avoid sending the entire model to the end user. The initial model view is streamed to the user as a list of topic descriptions (id, probability, expected rating, expected helpfulness, expected unhelpfulness) and their associated top $n$ words. As with the previous system, we defer sending review text to the end user until it is requested. This is of particular importance in a mobile setting, where many users will be using the app on a bandwidth-limited data plan. To improve the user experience, reviews can be cached for offline viewing. \subsection{Sampling} \label{SamplingSection} Sampling is performed by a procedure that transforms the auxiliary information, along with the other latent variables, into word observations, and then samples the transformed data in an LDA-like fashion, using an adaptation of SparseLDA \cite{SparseLDA2009} sampling to estimate the model parameters.
We define the review score tiers $c_{d,t}$ as: \begin{align*} c_{d,1} &:= p(\widetilde{r}_d \leq 1.5), \\ c_{d,2} &:= p(\widetilde{r}_d \in (1.5, 2.5]), \\ c_{d,3} &:= p(\widetilde{r}_d \in (2.5, 3.5]), \\ c_{d,4} &:= p(\widetilde{r}_d \in (3.5, 4.5]), \\ c_{d,5} &:= p(\widetilde{r}_d > 4.5). \end{align*} Additionally, we need to characterize the distribution of $\psi_d$. We train a logistic regression model mapping $\{ \nu_d, u_d, h_d \} \rightarrow \text{is\_relevant}$, where $\text{is\_relevant} = 1$ if the review is relevant to the product being reviewed, and 0 otherwise. As an example, one Amazon review for a Macbook Pro says ``The product is good but I find that my neck is getting sore from using it.''; the goal of the logistic model, then, is to label this review as not relevant. While the original intent was to train a model using data collected from Amazon Mechanical Turk, we later chose to hand-label a set of reviews in order to train our classifier as a means of cutting our implementation costs. Since the vast majority of Amazon reviews come from users who have only reviewed a single product, estimating the distribution of a general user's bias-corrected rating is often impossible. In order to reduce within-topic rating variability, for a general user we assume low rating variance and approximate the rating distribution by adding the review only for the given rating. This is achieved by appending ``\_rating'' to each token within a review, then stripping out the rating suffix when displaying keywords to the user. Notice that in making this approximation we simultaneously utilize the imposed independence assumption that $\psi_d \perp c_d \mid w_{d*}$, as shown in the RLDA graphical model. Approximate weighting is performed by allocating the bottom $w_{bits}$ bits of the review-topic and word-topic counts for fractional counts. What previously would correspond to a count increment of 1 is mapped to an increment of $2^{w_{bits} + 1}$.
Fractional counts can then be approximated as an integer-rounded fraction of $2^{w_{bits} + 1}$, providing us with $\frac{1}{2^{w_{bits} + 1}}$ precision. Count sparsity can be imposed by reducing the value of $w_{bits}$ -- all fractional counts below $\frac{1}{2^{w_{bits} + 2}}$ are treated as a 0-count. \section{Case Study} In order to evaluate the effectiveness of the system, we compare the new Vedalia system to the current Amazon system. \begin{figure}[ht!] \centering \includegraphics[height=0.4\textheight]{review_good.png} \caption{Above-average rating topic} \label{fig:good_topic} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.4\textheight]{review_bad.png} \caption{Below-average rating topic} \label{fig:bad_topic} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.4\textheight]{amazon_app_reviews.png} \caption{Review representation in Amazon app for Android} \label{fig:amazon_app} \end{figure} For our case study, we examine the use of the Quokka system for the iHome iH5 Clock Radio and Speaker System for iPod (ASIN B00080FO4O). At the time of modeling, the product had 487 reviews with an average rating of approximately 3.5 stars. Upon submitting a query for this product, the user is presented with the output shown in Figures \ref{fig:good_topic} and \ref{fig:bad_topic}. In Figure \ref{fig:good_topic}, we see that the topic keywords are generally positive, with the highlighting bringing attention to the iHome's ability to charge a phone's battery and the mild (but not substantial) disappointment in the brightness of the screen when trying to sleep. In Figure \ref{fig:bad_topic}, we see more negative highlighted keywords, with emphasis on the product's sub-par build quality and unjustifiably high price. In Figure \ref{fig:amazon_app}, we see the review representation in the official Amazon app for Android.
Aside from the increased effort required simply to navigate through individual reviews, the system has no way of drawing the user's attention to any specific region of a review, leaving the user to dig through mounds of text in order to find the specific information they are looking for. For the iHome product modeled above, the time until initial results appeared was approximately 5 seconds, with final results appearing in 15 seconds. \section{Future Work} In the immediate future, we will be submitting a patch to MALLET \cite{MALLET} in order to fix the broken ParallelTopicModel parallel implementation. Many users of MALLET will note that the library runs substantially slower when using the multithreaded implementation. This slowdown is due to the ``Thread.sleep()'' calls in the inner loop of parameter estimation when using more than one thread, which appears to be a temporary hack used to correct for a concurrency bug (a non-volatile, thread-cached boolean value accessed from another thread). In designing our system, we corrected this bug and refactored the ParallelTopicModel and WorkerRunnable code in order to achieve substantially higher performance and fully utilize all cores in a multithreaded environment. Regarding future work on RLDA, we wish to continually improve the model's performance on products with a limited number of reviews. The availability of a hierarchical structure of products allows for more advanced models that utilize product categories and reviews of similar products in order to better estimate topics in low-review situations. We will also pursue the idea of computing a single model per group of related products in order to leverage similarities in topics and improve topic estimation. We would further like to investigate the performance of RLDA under classical metrics to validate its performance relative to standard LDA in the context of product review modeling.
We also plan to implement and test Chital at a larger scale and refine our user interface over the next few months so as to begin field testing with real users. As previously discussed, we will be releasing the app to the Google Play Store shortly -- as such, modeling performance, usability, and robustness in low-review situations are critical areas that require further optimization.
\section{Introduction} Inflationary cosmology stands today as an elegant and compelling explanation of why our observable universe is so large, flat and homogeneous on large scales \cite{Starobinsky:1980te,Guth:1980zm,Linde:1981mu}. Quantum fluctuations during inflation provide the seeds for structure formation, whereas the subsequent decay of the inflaton can trigger a hot thermal universe containing matter and radiation. According to the recent Planck data, single-field inflationary scenarios with plateau-like (or Starobinsky-like) potentials are strongly favoured \cite{Ade:2015lrj,Akrami:2018odb}. Two models have gained significant attention: Starobinsky's $R+R^2$ inflation \cite{Starobinsky:1980te}, which is based on the modification of gravity induced by one-loop corrections from matter quantum fields \cite{Duff:1993wm}, and Higgs inflation \cite{Bezrukov:2008ej}, where the Standard Model (SM) Higgs boson is non-minimally coupled to gravity. These two inflationary models occupy a central spot in the $(n_s,r)$ plane with the predictions \begin{equation} n_s = 1-\frac{2}{N},\qquad r= \frac{12}{N^2}\,, \label{sweetspot} \end{equation} where $n_s$ is the scalar tilt, $r$ is the tensor-to-scalar ratio and $N$ is the number of e-foldings before the end of inflation. The observational bounds from the latest Planck data \cite{Akrami:2018odb} read $n_s=0.9649\pm 0.0042$ at $68\%$ CL and $r<0.064$ at $95\%$ CL. The expressions (\ref{sweetspot}) are related to the shape of the potential during inflation, which nearly coincides in these two models \cite{Kehagias:2013mya}. The only way to distinguish between them is through the study of reheating \cite{Bezrukov:2011gp}. In recent years, it has been realized that Starobinsky-like inflation can also be induced through the spontaneous breaking of local scale invariance \cite{Kallosh:2013xya,Kallosh:2013hoa}.
This guiding principle is motivated by the approximately scale invariant power spectrum of the primordial cosmic fluctuations, which suggests a role for scale symmetry in inflationary model building. Throughout the paper, conformal symmetry is understood as a local scale symmetry, namely the action of the theory is invariant under space-time dependent transformations of the metric and the fields (cf.~eq.~(\ref{ConformtrSU5})), at variance with a global scale symmetry or dilatation symmetry.\footnote{In the literature other expressions are used as well; for example, local (global) scale invariance is dubbed local (global) Weyl symmetry, see e.g.~\cite{Ferreira:2016wem,Bars:2013yba}. Note that the conformal GUT model that we study here is different from the class of conformally invariant field theories associated with the conformal symmetry group (the extension of the Poincar\'e group) at both the classical and quantum level \cite{Armillis:2013wya,Low:2001bw}.} Global and local scale invariant models of inflation have often been adopted to explore a variety of different aspects: from inflationary perturbations, reheating, the SM electroweak symmetry breaking and the origin of the Higgs mass, to open issues in the SM like dark matter, baryogenesis and neutrino masses~\cite{GarciaBellido:2011de,Salvio:2017xul,Kannike:2015apa,Rinaldi:2014gha,Einhorn:2014gfa,Kannike:2014mia,Barrie:2016rnv,Tambalo:2016eqr,Farzinnia:2015fka,Ferreira:2016wem,Ferreira:2016vsc,Meissner:2006zh,Meissner:2008gj,Foot:2007iy,Chang:2007ki,Iso:2009ss,AlexanderNunneley:2010nw,Bars:2013yba,Carone:2013wla,Khoze:2013oga,Hambye:2013sna,Khoze:2014xha,Karam:2015jta,Khoze:2016zfi,Lewandowski:2017wov,Chankowski:2014fva,Davoudiasl:2014pya,Latosinski:2015pba,Demir:2013uja,Guo:2015lxa,Nayak:2017dwg,Croon:2019kpe,Foot:2007as,Foot:2007ay,Espinosa:2007qk,Englert:2013gz,Khoze:2013uia,Farzinnia:2013pga,Gabrielli:2013hma,Helmboldt:2016mpi,Oda:2017zul,Loebbert:2018xsd,Brdar:2019qut}.
Although the spontaneous breaking of both global and conformal symmetry is related to a non-vanishing VEV of some of the scalar fields in the theory, there is a difference regarding the appearance of a dynamical Goldstone boson. In the case of globally scale invariant models, the symmetry breaking is accompanied by the corresponding massless Goldstone boson, which is a dynamical propagating field and has an impact on the early cosmology and phenomenology of a given model \cite{Wetterich:1987fm,GarciaBellido:2011de,Ferreira:2016wem,Ferreira:2016vsc}. In the case of conformally invariant models, the auxiliary scalar field, which is also called the \textit{conformon} and comes with the wrong sign in the kinetic term, can be gauge fixed to an arbitrary value. The gauge fixing can be interpreted as a spontaneous breaking of conformal invariance due to the existence of a classical field value for the conformon; however, any dynamical property of the field is removed and no associated Goldstone boson emerges \cite{Kallosh:2013oma,Bars:2013yba,Bars:2012mt,SravanKumar:2018tgk}. In this paper, we consider Conformal GUT (CGUT) inflation~\cite{SravanKumar:2018tgk}, which has recently been developed as a conformal extension of SU(5) GUT inflation~\cite{Starobinsky:1982ee,Shafi:1983bd}. Since the scale of inflation can be as high as $10^{14}$ GeV \cite{Martin:2015dha}, it is quite appealing to consider the role of GUTs in the context of inflationary cosmology~\cite{Lyth:1998xn,Linde:2005ht,Mazumdar:2010sa,Hertzberg:2014sza,Martin:2015dha,Linde:2014nna,Elizalde:2014xva}. In CGUT inflation, conformal symmetry is introduced with two additional SU(5) singlet fields, one of them playing the role of the inflaton. In this model, inflation occurs by spontaneous breaking of the conformal and GUT symmetries. Conformal symmetry can be broken by gauge fixing the VEV of one of the singlet fields \cite{Kallosh:2013xya,Kallosh:2013hoa,Bars:2012mt,Bars:2013yba}, which corresponds to a spontaneous breaking.
In this work, we do not consider the possibility of an accompanying explicit breaking from an intrinsic mass scale, which can be generated through the renormalization procedure of the theory and the corresponding running couplings (scale anomaly) \cite{Wetterich:1987fm}. We leave this interesting possibility for future investigations. Later on, a Coleman-Weinberg (CW) potential for the inflaton field is generated through the interactions with the GUT fields, where we take the couplings of the model as frozen and do not evolve them at different energy scales. One common aspect with GUT inflation is that the inflaton rolls down to a non-zero VEV \cite{Shafi:1983bd}, which offers a rich phenomenology and plays a crucial role in the dynamical generation of mass scales at the end of inflation~\cite{Lazarides:1991wu,Lazarides:1984pq}. However, there is an important difference with the original GUT models. In CGUT inflation, the above-VEV branch of the CW potential can be stretched into a Starobinsky plateau, leading to nearly the same predictions as (\ref{sweetspot}). The inflationary paradigm encompasses two aspects through the inflaton field. First, the inflaton perturbations explain the observed nearly scale invariant density fluctuations in the CMB. Second, the inflaton may be responsible for particle production through its decays during the reheating stage at the end of inflation. In this way, the present matter and radiation content of the universe originates from a single field. Since dark matter appears to be the dominant matter component already at the time of the CMB, it is natural to think of a strong connection between the inflaton field and a dark sector. As noted, the great desert that lies between the scale of inflation ($\sim 10^{14}$ GeV) and the electroweak scale may in fact be populated by a hidden sector we have not yet observed.
Motivated by a win-win interplay between particle physics and cosmology, we further develop CGUT inflation to account for a viable dark matter particle and the electroweak symmetry breaking. The first aspect is realized by letting the inflaton decay into a dark sector made of fermions and scalar particles. We assume the dark fermion to be an inert and stable particle that accounts for the present-day relic density. The masses of the particles in the hidden sector are induced by the spontaneous breaking of the conformal and GUT symmetries, which are at the origin of the inflaton VEV as well. The second aspect is addressed through a portal coupling between the Higgs boson and the dark scalar. When the latter reaches its non-vanishing VEV, it generates a mass scale for the Higgs boson that triggers the electroweak symmetry breaking. The dark scalar can decay into Higgs boson pairs and induce the reheating of the SM sector. This latter aspect of our framework is similar to other recent implementations of SM reheating from hidden sectors \cite{Tenkanen:2016jic,Berlin:2016vnh,Paul:2018njm}. In our construction, conformal symmetry breaking is responsible for the generation of the relevant scales, from inflation down to the electroweak scale. We note in passing that a cosmological constant can also be generated by conformal symmetry breaking, as considered in refs.~\cite{Sadeghi:2015bxy,Oda:2018zth,Bloch:2019bvc}. The paper is organized as follows. In section~\ref{sec:fund_symm} we introduce the different scalar potentials appearing in cosmology and particle physics that are relevant to our framework. In section~\ref{sec:conf_infl}, the CGUT framework is briefly reviewed. We introduce the generation of masses and couplings in the dark sector in section~\ref{sec:Infl_matter}, together with a discussion of the constraints on the model parameters.
The non-thermal production of dark matter and the reheating of the SM from the hidden sector are discussed in section~\ref{sec:numerical_final}, whereas conclusions are offered in section \ref{sec:conclusion}. \section{Fundamental potentials in cosmology and particle physics} \label{sec:fund_symm} In this section we discuss how the scalar field potentials that frequently occur in our current understanding of inflationary cosmology and particle physics can be related to conformal symmetry. Let us start with particle physics. The Higgs boson is the only fundamental scalar of the SM and is responsible for the generation of the fermion and gauge boson masses via spontaneous breaking of the gauge symmetry. This is implemented through the well-known Brout–Englert–Higgs mechanism \cite{Higgs:1964pj,Higgs:1964ia,Englert:1964et}, which assumes a negative mass term, compatible with the gauge symmetry, that induces a non-trivial minimum of the potential. The Higgs potential reads \begin{equation} V_H=-\mu_H^2 H^\dagger H + \lambda_H (H^\dagger H)^2 \, , \label{higgs_0_pot} \end{equation} where $H$ is the Higgs doublet, $\mu_H^2>0$ is the Higgs mass term and $\lambda_H$ is the Higgs self-coupling. The minimization condition yields the VEV $v^2_H=\mu^2_H/\lambda_H$, and the physical Higgs mass is $m^2_H =2 \lambda_H v_H^2=2 \mu_H^2$. The negative quadratic term in (\ref{higgs_0_pot}) destabilizes the potential at the origin. Such a solution for the electroweak symmetry breaking predicts that the Higgs quartic coupling, as inferred from the measurement of the Higgs boson mass, also determines the strength of the self-interactions of the Higgs boson. Checking this prediction is very important to confirm that the electroweak symmetry breaking is induced by the potential in (\ref{higgs_0_pot}).
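For completeness, the quoted minimum follows directly from eq.~(\ref{higgs_0_pot}): parametrizing the physical direction as $H^\dagger H = h^2/2$, one has

```latex
V_H(h) = -\frac{\mu_H^2}{2}\,h^2 + \frac{\lambda_H}{4}\,h^4 \,, \qquad
V_H'(h) = h\left(-\mu_H^2 + \lambda_H h^2\right) = 0
\;\Rightarrow\; v_H^2 = \frac{\mu_H^2}{\lambda_H}\,, \qquad
m_H^2 = V_H''(v_H) = 2\lambda_H v_H^2 = 2\mu_H^2 \,.
```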
However, current measurements at the LHC still leave room for deviations induced by new physics, and the details of the symmetry breaking can perhaps be probed at the LHC upgrades and at muon and linear collider facilities \cite{Gupta:2013zza,Chiesa:2020awd,Baer:2013cma,Fuks:2017zkg,Cepeda:2019klc}. Alternatively, we may start with a scale invariant theory and set $\mu_H=0$ in (\ref{higgs_0_pot}). The idea of an underlying conformal symmetry is attractive since it allows one to abandon the arbitrary negative mass parameter for the Higgs field. However, we can still achieve spontaneous breaking of symmetries through radiative corrections and let the scalar field acquire a VEV. This is the well-known Coleman-Weinberg (CW) mechanism~\cite{Coleman:1973jx}, in which radiative corrections to the Higgs self-coupling destabilize the Higgs potential at the origin. Although the minimal conformal SM is not capable of explaining the observed particle masses~\cite{Coleman:1973jx}\footnote{In ref.~\cite{Coleman:1973jx} the top quark was not taken into account, because it had not yet been discovered at that time, which led the authors to the conclusion that a stable radiatively generated minimum can be attained. However, the predicted mass of the Higgs boson was too low. After including the top quark, it was shown that there is no stable minimum, due to the large contribution of the top-quark Yukawa coupling to the running of the Higgs self-coupling.}, this framework can be successful when some field content beyond the SM is included. The main point is that the Higgs mass term $\mu_H$ is reinterpreted in terms of the vacuum expectation value of a new scalar, coupled to the SM via a Higgs portal interaction.
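Schematically, with an illustrative hidden scalar $S$ and portal coupling $\lambda_p$ (the specific fields of our model are introduced later), a term

```latex
V \supset \lambda_{p}\, S^2\, H^\dagger H \,, \qquad \langle S \rangle = w
\;\;\Rightarrow\;\; \mu_H^2 = -\lambda_{p}\, w^2 \,,
```

so that a negative portal coupling, $\lambda_p<0$, reproduces the mass term of eq.~(\ref{higgs_0_pot}) once $S$ settles at its radiatively generated VEV.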
In this respect, several extensions of the SM have been considered, where $\mu_H$ is generated through a portal coupling with particles of a hidden sector \cite{Meissner:2006zh,Foot:2007as,Espinosa:2007qk,Foot:2007iy,AlexanderNunneley:2010nw,Englert:2013gz,Khoze:2013uia,Bars:2013yba,Farzinnia:2013pga,Gabrielli:2013hma,Lewandowski:2017wov,Chankowski:2014fva}. Since the CW potential will appear in our framework as well, we give its parametric form for a generic scalar field $\phi$ and the radiatively generated VEV $v_{\hbox{\tiny CW}}$ as follows \begin{equation} V_{\hbox{\tiny CW}} \simeq \, A_{\hbox{\tiny CW}} \phi^4 \left[ \ln\left( \frac{\phi}{v_{\hbox{\tiny CW}}} \right) -\frac{1}{4} \right] + A_{\hbox{\tiny CW}} \frac{ v_{\hbox{\tiny CW}}^4}{4} \, , \label{CW_0_pot} \end{equation} where $ A_{\hbox{\tiny CW}}$ is a dimensionless quantity. On the early universe cosmology side, single-field inflationary scenarios with plateau-like (Starobinsky-like) potentials \begin{equation} V_{\hbox{\tiny S}} = A_{\hbox{\tiny S}} \left( 1-e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{\hbox{\tiny P}}}} \right)^2\,, \label{staropot} \end{equation} are highly successful with respect to cosmological data \cite{Kehagias:2013mya,Akrami:2018odb}. In (\ref{staropot}), $\phi$ is the inflaton field, $A_{\hbox{\tiny S}}$ is a dimensionful quantity and $M_{\hbox{\tiny P}}$ is the reduced Planck mass. The above potential has been realized in various frameworks. The Starobinsky model, based on the $R+R^2$ modification of gravity, has in the Einstein frame exactly the potential (\ref{staropot}) \cite{Starobinsky:1980te,Kehagias:2013mya}. The success of the Starobinsky model with respect to the CMB data can be heuristically understood from the fact that the $R^2$ term is scale invariant. Interestingly, the potential (\ref{staropot}) can also be obtained in a two-field model with conformal symmetry, which can be spontaneously broken by gauge fixing one of the fields to its VEV \cite{Kallosh:2013xya,Kallosh:2013hoa}.
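For reference, the predictions (\ref{sweetspot}) follow from the potential (\ref{staropot}) in the slow-roll approximation. For $\phi \gg M_{\hbox{\tiny P}}$,

```latex
\epsilon \equiv \frac{M_{\hbox{\tiny P}}^2}{2}\left(\frac{V_{\hbox{\tiny S}}'}{V_{\hbox{\tiny S}}}\right)^2
\simeq \frac{4}{3}\, e^{-2\sqrt{\frac{2}{3}}\frac{\phi}{M_{\hbox{\tiny P}}}} \,, \qquad
N \simeq \frac{1}{M_{\hbox{\tiny P}}^2}\int \frac{V_{\hbox{\tiny S}}}{V_{\hbox{\tiny S}}'}\, d\phi
\simeq \frac{3}{4}\, e^{\sqrt{\frac{2}{3}}\frac{\phi}{M_{\hbox{\tiny P}}}} \,,
```

so that $r = 16\epsilon \simeq 12/N^2$ and $n_s - 1 = 2\eta - 6\epsilon \simeq -2/N$, with $\eta \simeq -1/N$ at leading order. For instance, $N=55$ gives $n_s \simeq 0.964$ and $r \simeq 4\times 10^{-3}$, consistent with the Planck bounds quoted in the introduction.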
Higgs inflation also leads to the same shape of potential (\ref{staropot}) in the scale invariant regime, namely when the Higgs field is far away from the electroweak vacuum, so that the mass term can be neglected. However, after inflation the Higgs field acquires a non-zero VEV, leading to the mass scales of the SM degrees of freedom \cite{Bezrukov:2007ep}. In this work, we develop both successful inflation and post-inflationary particle physics in one single framework by invoking conformal symmetry as our guiding principle. We start with CGUT inflation as implemented in ref.~\cite{SravanKumar:2018tgk} and extend it in order to discuss post-inflationary physics that involves both a dark sector and the visible sector. In our framework, all three potential forms in (\ref{higgs_0_pot}), (\ref{CW_0_pot}) and (\ref{staropot}) are realized, originating from symmetry breaking patterns. \section{Conformal GUT inflation} \label{sec:conf_infl} Inflationary cosmology is an effective field theory (EFT) for energy scales much below the Planck scale, i.e., below the so-called regime of quantum gravity. Here, we consider an EFT, dubbed CGUT, which describes the physics from GUT energy scales $\left( \sim 10^{16}\textrm{ GeV} \right)$ down to smaller scales.
The framework of CGUT inflation in SU(5) can be described by the following action \cite{SravanKumar:2018tgk} \begin{equation} \begin{split}S_{\hbox{\tiny CGUT}}= & \int d^{4}x\,\sqrt{-g}\Bigg[\left(\chi^{2}-\phi^{2}-\textrm{Tr}\Sigma^{2}\right)\frac{R}{12}-\frac{1}{2}\left(\partial\phi\right)\left(\partial\phi\right)+\frac{1}{2}\left(\partial\chi\right)\left(\partial\chi\right)\\ & -\frac{1}{2}\text{Tr}\left[\left(D^{\mu}\Sigma\right)^{\dagger}\left(D_{\mu}\Sigma\right)\right]-\frac{1}{4}\text{Tr}\left(F_{\mu\nu}F^{\mu\nu}\right)-V \Bigg]\,, \end{split} \label{CFTSU(5)} \end{equation} where $\left( \phi,\,\chi \right)$ are real singlets of SU(5) conformally coupled to the GUT Higgs field $\Sigma$, which belongs to the adjoint representation of SU(5), and $R$ is the Ricci scalar. The covariant derivative is defined by $D_{\mu}\Sigma=\partial_{\mu}\Sigma-ig\left[A_\mu,\,\Sigma\right]$, and $A_{\mu}$ are the 24 massless Yang-Mills fields with field strength $F_{\mu\nu}\equiv\nabla_{[\mu}A_{\nu]}-ig\left[A_{\mu},\,A_{\nu}\right]$. Here, the field $\phi$ is responsible for inflation when the conformal symmetry is broken by the field $\chi$, as we will explain shortly. The tree level potential in (\ref{CFTSU(5)}), which accounts for the interactions between the inflaton and the other fields, can be split into two contributions as follows \begin{eqnarray} V= V\left(\phi,\,\chi,\,\Sigma\right) + V(\phi,\hbox{\scriptsize SM},\hbox{\scriptsize DS}) \, , \label{pot_infl_reheat} \end{eqnarray} where $V(\phi,\hbox{\scriptsize SM},\hbox{\scriptsize DS})$ includes the interactions of the inflaton field with the dark sector (DS), which are negligible during inflation (see the discussion in section~\ref{sec:Infl_matter}). Its expression is given later in section~\ref{sec:Infl_matter} (cf.~eq.~(\ref{bosf})); it is (i) conformally invariant and (ii) responsible for the reheating process that will be discussed in section~\ref{sec:numerical_final}.
The first term in eq.~(\ref{pot_infl_reheat}), which is instead responsible for the inflationary dynamics, reads \begin{equation} V\left(\phi,\,\chi,\,\Sigma\right)=\frac{a}{4}\left(\textrm{Tr}\Sigma^{2}\right)^{2}+\frac{b}{2}\textrm{Tr}\Sigma^{4}-\frac{\lambda_{2}}{2}\phi^{2}\textrm{Tr}\Sigma^{2}f\left(\frac{\phi}{\chi}\right)+\frac{\lambda_{1}}{4}\phi^{4}f^{2}\left(\frac{\phi}{\chi}\right) \, , \label{potCFTSU5} \end{equation} where $a \approx b \approx g^{2}$, and $g$ is the gauge coupling of the GUT group with $g^{2}\simeq 0.3$ as obtained from the fine structure constant $\alpha_G = g^2/(4\pi) \simeq 1/40$ \cite{Rehman:2008qs}. The SU(5) model contains another scalar, the fundamental Higgs field $H_{5}$, which comprises the colour-triplet Higgs and the SM Higgs doublet. In eq.~(\ref{potCFTSU5}) we assume the coupling between the field $\phi$ and $H_5$ to be negligible in comparison with its coupling to the adjoint field $\Sigma$~\cite{Esposito:1992xf,SravanKumar:2018tgk}. We further assume the couplings between $\Sigma$ and $H_5$ to be very small, so that they do not play a role in the inflationary dynamics \cite{Rehman:2008qs} (we elaborate more on this point in section~\ref{sec:proton_decay}). Consistently with conformal symmetry, there is no mass scale in eq.~(\ref{potCFTSU5}). The main feature of the potential is the appearance of field-dependent couplings that depend on the ratio of the fields $(\phi,\chi)$ through $f\left( \phi/\chi \right)$~\cite{Kallosh:2013xya,Kallosh:2013hoa,Bars:2013yba}.
Given the tree-level potential in (\ref{potCFTSU5}), the action (\ref{CFTSU(5)}) is then conformally invariant under the following transformations \begin{equation} g_{\mu\nu}\to\Omega^{2}\left(x\right)g_{\mu\nu}\quad,\quad\chi\to\Omega^{-1}(x)\chi\quad,\quad\phi\to\Omega^{-1}\left(x\right)\phi\quad,\quad\Sigma\to\Omega^{-1}\left(x\right)\Sigma\,.\label{ConformtrSU5} \end{equation} In principle any generic function $f\left(\phi/ \chi \right)$ is allowed by conformal invariance. In the context of successful inflation, the following choice was found to be useful~\cite{Kallosh:2013hoa,SravanKumar:2018tgk} \begin{equation} f\left(\frac{\phi}{\chi}\right)=\left(1-\frac{\phi^{2}}{\chi^{2}}\right)\,.\label{fPchi} \end{equation} In CGUT the first symmetry breaking step is performed by letting the field $\chi$ acquire a constant value $\chi = \sqrt{6}M$, where $M$ is the mass scale associated with the conformal symmetry breaking. As discussed and implemented in refs.~\cite{Kallosh:2013xya,Kallosh:2013hoa,Bars:2013yba,SravanKumar:2018tgk}, the conformon field $\chi$ is gauge fixed to a constant value, a choice also called the \textit{c-gauge} in the context of SUGRA frameworks \cite{Bars:2013yba}, and there is no associated dynamical massless degree of freedom.\footnote{It was explicitly shown in ref.~\cite{Jackiw:2014koa} that the conformal symmetry we discuss here has no associated conserved current and thus has no dynamical role once we gauge fix the field.} The situation is different from the case of a global scale symmetry breaking, where the associated Goldstone boson, also called the dilaton, has an impact on the early cosmology \cite{GarciaBellido:2011de,Ferreira:2016vsc,Ferreira:2016wem}.
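As a cross-check of the symmetry argument, one can verify that every term in the potential has conformal weight four. The snippet below is a schematic single-field reduction ($\textrm{Tr}\Sigma^2\to\sigma^2$, $\textrm{Tr}\Sigma^4\to\sigma^4$, with group factors absorbed into $a$ and $b$); it is an illustrative check, not part of the derivation:

```python
import sympy as sp

# Schematic single-field reduction of the potential (potCFTSU5):
# TrSigma^2 -> sigma^2, TrSigma^4 -> sigma^4 (group factors absorbed in a, b).
phi, chi, sigma, Om = sp.symbols('phi chi sigma Omega', positive=True)
a, b, l1, l2 = sp.symbols('a b lambda_1 lambda_2', positive=True)

f = 1 - phi**2/chi**2                      # conformal function, eq. (fPchi)
V = a/4*sigma**4 + b/2*sigma**4 - l2/2*phi**2*sigma**2*f + l1/4*phi**4*f**2

# Under phi -> phi/Omega, chi -> chi/Omega, sigma -> sigma/Omega the
# potential must scale as V -> V/Omega^4 (weight four, no mass scales).
V_tr = V.subs([(phi, phi/Om), (chi, chi/Om), (sigma, sigma/Om)], simultaneous=True)
assert sp.simplify(V_tr - V/Om**4) == 0
```

The check passes because $f$ depends only on the ratio $\phi/\chi$ and every monomial is quartic in the fields.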
Next, we consider the GUT symmetry breaking $\textrm{SU}(5)\to\textrm{SU}(3)_{c}\times\textrm{SU}(2)_{L}\times\textrm{U}(1)_{Y}$ by\footnote{In refs.~\cite{Shafi:1983bd,Rehman:2008qs} the notation $\langle \Sigma \rangle$ is instead used in eq.~(\ref{GUTfieldVEV-1}) and it stands for a particular direction in the vector space of SU(5).} \begin{equation} \Sigma=\sqrt{\frac{1}{15}}\sigma \, \textrm{diag}\left(1,\,1,\,1,-\frac{3}{2},\,-\frac{3}{2}\right)\,.\label{GUTfieldVEV-1} \end{equation} Assuming $\lambda_{1}\ll\lambda_{2}\ll a,\,b$ and due to the coupling $-\frac{\lambda_{2}}{2}\phi^{2}\textrm{Tr}\Sigma^{2}f\left(\frac{\phi}{\sqrt{6}M}\right)$, the GUT field $\sigma$ reaches its local field-dependent minimum, which reads \begin{equation} \sigma^{2}=\frac{2}{\lambda_{c}}\lambda_{2}\phi^{2}f\left(\frac{\phi}{\sqrt{6}M}\right)\, , \label{sigma2field} \end{equation} where $\lambda_c= a+7b/15$~\cite{Rehman:2008qs}. As we can see from eq.~(\ref{sigma2field}), the field $\sigma$ tracks the behaviour of the field $\phi$.
The tree level potential for the $\left(\phi,\,\sigma\right)$ sector is given by \begin{equation} V\left(\phi, \sigma \right) =\left[\frac{\lambda_{c}}{16}\sigma^{4}-\frac{\lambda_{2}}{4}\sigma^{2}\phi^{2}f\left(\frac{\phi}{\sqrt{6}M}\right)+\frac{\lambda_{1}}{4}\phi^{4}f^{2}\left(\frac{\phi}{\sqrt{6}M}\right)\right]\,,\label{treelevel3fieldpot} \end{equation} and the effective potential for the inflaton field $\phi$ due to the radiative corrections becomes \cite{SravanKumar:2018tgk} \begin{equation} V_{\hbox{\scriptsize eff}}\left(\phi\right)= A\frac{M^4}{M_{\rm P}^4}\phi^{4}f^{2}\left(\frac{\phi}{\sqrt{6}M}\right)\left(\ln\left(\frac{\phi M \sqrt{f\left(\frac{\phi}{\sqrt{6}M}\right)}}{v_\phi M_{\rm P}}\right)-\frac{1}{4}\right)+\frac{Av_\phi^{4}}{4}\,, \label{vef-1} \end{equation} where $A \simeq \lambda_{2}^{2}/(16\pi^{2})$, $\langle \phi \rangle \equiv v_\phi$ is the VEV of the inflaton, the counterterm has been fixed to $\delta V=\frac{\delta{\lambda}_{2}}{4}\sigma^{2}\phi^{2}f^{2}\left(\frac{\phi}{\sqrt{6}M}\right)$, the normalization constant is such that $V_{\hbox{\scriptsize eff}}\left(\phi=v_\phi\right)=0$ and the corresponding vacuum energy density is $V_{0} \equiv V_{\hbox{\scriptsize eff}}\left(\phi=0\right) =A v_\phi^{4}/4$. In CGUT we can generate the Planck mass dynamically by fixing \begin{equation} v_\phi=\sqrt{6}\, M_{\rm P} \left( \frac{1}{\gamma^2} - 1 \right)^{\frac{1}{2}} \,,\label{phiVev} \end{equation} where $M_{\rm P}=2.43 \times 10^{18}$ GeV is the reduced Planck mass and $\gamma \equiv M_{\rm P}/M$, which is smaller than unity by definition in our model. We notice that the inflaton effective potential in eq.~(\ref{vef-1}) has the form of a CW potential in eq.~(\ref{CW_0_pot}).
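The field-dependent minimum in eq.~(\ref{sigma2field}) follows from extremizing the tree-level potential (\ref{treelevel3fieldpot}) at fixed $\phi$; a short symbolic check, with $f\left(\phi/\sqrt{6}M\right)$ kept generic:

```python
import sympy as sp

phi, sigma, f = sp.symbols('phi sigma f', positive=True)
lc, l1, l2 = sp.symbols('lambda_c lambda_1 lambda_2', positive=True)

# Tree-level potential (treelevel3fieldpot), with f = f(phi/sqrt(6)M) generic
V = lc/16*sigma**4 - l2/4*sigma**2*phi**2*f + l1/4*phi**4*f**2

# dV/dsigma = 0 at sigma != 0 gives the field-dependent minimum (sigma2field)
dV = sp.diff(V, sigma)
sol = sp.solve(sp.cancel(dV/sigma), sigma**2)
assert sp.simplify(sol[0] - 2*l2*phi**2*f/lc) == 0
```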
To summarize, inserting (\ref{sigma2field}) and (\ref{treelevel3fieldpot}) in the original action (\ref{CFTSU(5)}), we obtain a quantum effective action for the inflaton field $\phi$ that reads \begin{equation} S_{\phi}=\int d^4x\sqrt{-g}\Bigg[\left( 6M^2-\phi^2 \right) \frac{R}{12}-\frac{1}{2}(\partial\phi)(\partial\phi) -V_{\hbox{\scriptsize eff}}(\phi)\Bigg]\,. \label{effphi} \end{equation} \begin{figure}[t!] \centering \includegraphics[scale=0.93]{infl_pot.pdf} \caption{We plot the potential $V_E(\varphi)$ for $A= 5\times 10^{-12}$ from Table 1 of \cite{SravanKumar:2018tgk} for different values of $\gamma$. We can notice that the potential reaches a Starobinsky plateau in the limit $\phi\to \sqrt{6}M$, i.e., $\varphi\gg \sqrt{6}M_{\rm P}$.} \label{fig:potential_platau} \end{figure} In order to clearly see the Starobinsky-like inflation in this theory, we perform the conformal transformation of the action (\ref{effphi}) and canonically normalize the scalar field as $\phi=\sqrt{6}M\tanh\left(\frac{\varphi}{\sqrt{6}M_{\textrm P}}\right)$. Thus, we obtain a minimally coupled scalar field $\varphi$ in the Einstein frame with the following potential\footnote{Note that when we go to the Einstein frame we must do a rescaling of mass scales with the conformal factor as $v_\phi^2 \to \frac{6v_\phi^2 M_{\textrm{P}}^2}{\left( 6M^2-\phi^2 \right)}$. Since at the end of inflation $6M^2-v_\phi^2=6M_{\rm P}^2$, we can clearly see that the VEV of the inflaton field remains the same in the Jordan and Einstein frames.} \begin{equation} V_{E}\left(\varphi\right)=36AM^4\tanh^{4}\left(\frac{\varphi}{\sqrt{6}M_{\rm P}}\right)\left(\ln\left(\frac{\sqrt{6}M\tanh\left(\frac{\varphi}{\sqrt{6}M_{\textrm P}}\right)}{v_\phi}\right)-\frac{1}{4}\right)+\frac{Av_\phi^{4}}{4}\,,\label{varphipot} \end{equation} where the corresponding VEV of the canonically normalized scalar field is $\langle \varphi \rangle \equiv v_\varphi=\sqrt{6}M_{\rm P}\,\textrm{arctanh}\left(\frac{v_\phi}{\sqrt{6}M}\right)$.
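The Einstein-frame form of the prefactor in eq.~(\ref{varphipot}) can be verified directly: with the above normalization the conformal factor is $\Omega^2=(6M^2-\phi^2)/(6M_{\rm P}^2)$, and $\phi^4 f^2/\Omega^4$ collapses to $36M^4\tanh^4$. A symbolic sketch (the identity $6M^2-v_\phi^2=6M_{\rm P}^2$ quoted in the footnote is checked as well):

```python
import sympy as sp

x, gamma, A = sp.symbols('x gamma A', positive=True)  # x = varphi/(sqrt(6) M_P)
MP = sp.Symbol('M_P', positive=True)
M = MP/gamma                                          # gamma = M_P/M

phi = sp.sqrt(6)*M*sp.tanh(x)          # canonical normalization
f = 1 - phi**2/(6*M**2)                # equals sech^2(x)
Om2 = (6*M**2 - phi**2)/(6*MP**2)      # conformal factor Omega^2

# Jordan-frame prefactor of the log term in V_eff, eq. (vef-1)
VJ = A*M**4/MP**4 * phi**4 * f**2
# Einstein-frame potential prefactor: V_J / Omega^4 = 36 A M^4 tanh^4
assert sp.simplify(VJ/Om2**2 - 36*A*M**4*sp.tanh(x)**4) == 0

# Frame-invariant combination used in the footnote: 6M^2 - v_phi^2 = 6M_P^2
vphi2 = 6*MP**2*(1/gamma**2 - 1)
assert sp.simplify(6*M**2 - vphi2 - 6*MP**2) == 0
```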
We can now easily notice that for $\varphi \gg \sqrt{6}M_{\textrm P}$ (i.e., $\phi\to \sqrt{6}M$), the potential (\ref{varphipot}) reaches a plateau. During the inflationary regime, the shape of the potential is nearly the same as in Starobinsky and Higgs inflation, as shown in Fig.~\ref{fig:potential_platau}. The key inflationary predictions ($n_s,\,r$) in this model were computed up to leading order in the slow-roll approximation~\cite{SravanKumar:2018tgk} and turn out to be as in (\ref{sweetspot}), namely they are the same as those of Starobinsky inflation. However, there is a crucial difference since the inflaton reaches a non-zero VEV at the end of inflation, which can be seen in Fig.~\ref{fig:potential_platau}. We can fix the coupling $\lambda_2$, which enters the amplitude of the potential, by using the CMB constraint on the scalar power spectrum \cite{SravanKumar:2018tgk,Akrami:2018odb} \begin{equation} \mathcal{P}_\mathcal{R} = \frac{H_\ast^2}{6\pi^2M_{\textrm{P}}^2}N^2\approx 2.2\times 10^{-9} \,, \label{prl} \end{equation} where the value of the Hubble parameter during inflation can be read from Fig.~\ref{fig:potential_platau} as $H_\ast \approx \sqrt{\frac{1}{3M_{\textrm P}^2}V_E^*}\approx 1.5\times 10^{13}\,\textrm{GeV} $. Numerical estimates of $\lambda_2$ using eq.~(\ref{prl}) for $\gamma < 0.9$ are found to be\footnote{We read the average value of $\lambda_2$ taking $A\sim 5\times 10^{-12}$ from Table I of \cite{SravanKumar:2018tgk} since there is very mild dependence on $\gamma$.} \cite{SravanKumar:2018tgk} \begin{equation} \lambda_2^2 \approx 8\times 10^{-10}\,. \label{l2} \end{equation} We notice that the value of $\lambda_2$ is nearly the same for any VEV of the inflaton field, as studied in~\cite{SravanKumar:2018tgk}. One may understand from Fig.~\ref{fig:potential_platau} that the shape of the potential during inflation remains nearly the same for any value of the inflaton VEV.
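These numbers can be cross-checked at the order-of-magnitude level; the e-fold number $N\approx 55$ below is an assumed benchmark for illustration, not a value fixed by the text:

```python
import math

# CMB normalization cross-checks (numbers quoted in the text)
A = 5e-12                      # from Table 1 of SravanKumar:2018tgk
lambda2_sq = 16*math.pi**2*A   # A ~ lambda_2^2 / (16 pi^2)
print(f"lambda_2^2 ~ {lambda2_sq:.1e}")   # ~ 8e-10, as in eq. (l2)

MP = 2.43e18                   # reduced Planck mass [GeV]
H = 1.5e13                     # Hubble rate during inflation [GeV]
N = 55                         # e-folds (assumed benchmark)
P_R = H**2*N**2/(6*math.pi**2*MP**2)      # eq. (prl)
print(f"P_R ~ {P_R:.1e}")      # ~ 2e-9, close to the observed 2.2e-9
```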
This implies we can compute the inflaton mass at the end of inflation in terms of $\gamma$ as follows \begin{equation} M_{\Phi}= \left. \sqrt{V^E_{\varphi,\varphi}} \right|_{\varphi=\langle \varphi \rangle} \simeq 2\times 10^{-6} v_\phi \simeq 5 \times 10^{-6} \frac{M_{\textrm P}}{\gamma} \left( 1-\gamma^2\right)^{1/2} \, . \end{equation} From the last expression we see that the mass of the inflaton increases for smaller values of $\gamma$. We avoid $\gamma \ll 1$ in order not to have an inflaton mass too close to the Planck mass from the point of view of an effective theory for inflation \cite{Cheung:2007st}. In this paper we take $\gamma=0.1$ as the lowest value, for which we get $M_\Phi \simeq 1.2 \times 10^{14}$ \text{GeV}. \begin{figure}[t!] \centering \includegraphics[scale=0.65]{protondecay_2.pdf} \caption{The ratio between the proton lifetime predicted in the CGUT model and the observed lower bound is shown with a solid orange line. The forbidden region is indicated with the gray shaded area and gives $\gamma \leq 0.97$. The gray dashed and dotted lines indicate 10 and 100 times the experimental lower bound.} \label{fig:protondecay} \end{figure} \subsection{Proton decay} \label{sec:proton_decay} As one can read off from eq.~(\ref{sigma2field}), the GUT field reaches a VEV through the inflaton field. Accordingly, the GUT gauge bosons acquire a mass and they can mediate the proton decay\footnote{Since we assume the couplings between $\Sigma$ and $H_5$ to be negligible, we do not consider the color-triplet Higgs and its effect in the proton decay process. We take the proton decay as mediated by the $X, Y$ gauge bosons in this minimal framework.} (they are often labelled with $X$ and $Y$). As a general feature of GUT models, the larger the gauge boson masses, the longer the proton lifetime.
In this particular model, the VEV of the inflaton field $\phi$, the GUT gauge boson masses, and consequently the proton lifetime, depend on one free parameter $M$ that we trade for $\gamma$. The mass of the gauge bosons depends on the VEV of the GUT field as follows~\cite{Rehman:2008qs} \begin{equation} M_X = \sqrt{\frac{5}{3}} \frac{g v_\sigma}{2} \simeq\sqrt{{5} \lambda_2\left( 1-\gamma^2 \right)} M_{\textrm P}\,, \label{mx} \end{equation} where the second equality applies to our model, $\langle \sigma \rangle \equiv v_\sigma$ and we take $\lambda_c \simeq g^2$. The proton lifetime in this model can be computed as\footnote{In \cite{SravanKumar:2018tgk} the proton lifetime was computed by approximating the gauge boson mass as $M_X\sim 2 V_0^{1/4}$. We have found that this approximation is incorrect in CGUT inflation as it is only valid in the case of standard GUT inflation \cite{Rehman:2008qs}. We have provided the correct expression in eq.~(\ref{mx}) by including accordingly the dependence of the $X$ boson mass on the parameter $\gamma$. This alters the estimates of the proton lifetime as compared to \cite{SravanKumar:2018tgk}; however, the model still gives predictions well above the present lower bound, as shown in Fig.~\ref{fig:protondecay}. } \cite{Nath:2006ut,SravanKumar:2018tgk} \begin{equation} \tau_{p}\approx \frac{M_X^4}{\alpha_G^2m_{p}^5} \approx 3.2 \times 10^{-5}\frac{M_{\rm P}^4}{m_{p}^{5}} \left( 1-\gamma^2 \right)^2 \approx 2.6 \times 10^{37} (1-\gamma^2)^2 \, \hbox{yrs} \,, \label{prlifetime} \end{equation} where $m_{p}$ is the proton mass and we used the estimate of $\lambda_2$ from eq.~(\ref{l2}). The current lower bound on the proton lifetime is given by $\tau_{p,\hbox{\scriptsize exp}}>1.6\times10^{34}$ years \cite{Nishino:2009aa,Miura:2016krn} and it allows for $\gamma \lesssim 0.97 $, as one may see in Fig.~\ref{fig:protondecay}. However, in order to keep the Planck scale from completely approaching the conformal scale, we consider $\gamma < 0.9$.
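The prefactor $3.2\times 10^{-5}=25\lambda_2^2/\alpha_G^2$ and the lifetime scale in eq.~(\ref{prlifetime}) can be reproduced numerically; with the rounded $\lambda_2$ of eq.~(\ref{l2}) and $m_p\approx 0.94$ GeV the result agrees with the quoted $2.6\times 10^{37}(1-\gamma^2)^2$ yrs up to $\mathcal{O}(20\%)$ rounding:

```python
import math

MP = 2.43e18                 # reduced Planck mass [GeV]
mp = 0.938                   # proton mass [GeV]
alpha_G = 1/40               # GUT fine structure constant
lambda2 = math.sqrt(8e-10)   # from the CMB normalization, eq. (l2)

hbar = 6.58e-25              # GeV s
year = 3.156e7               # s

def tau_p(gamma):
    """Proton lifetime in years, from eqs. (mx) and (prlifetime)."""
    MX4 = (5*lambda2*(1 - gamma**2))**2 * MP**4   # M_X^4
    tau_GeV = MX4/(alpha_G**2*mp**5)              # lifetime in GeV^-1
    return tau_GeV*hbar/year

# Dimensionless prefactor: 25*lambda_2^2/alpha_G^2, as in eq. (prlifetime)
prefactor = 25*8e-10/alpha_G**2
print(f"prefactor ~ {prefactor:.1e}")             # 3.2e-5

tau_exp = 1.6e34             # experimental lower bound [yrs]
for g in (0.1, 0.9):
    print(f"gamma={g}: tau_p ~ {tau_p(g):.1e} yrs, "
          f"ratio to bound ~ {tau_p(g)/tau_exp:.0f}")
```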
In the remainder of this work, we take $0.1 \leq \gamma \leq 0.9$, for which $\tau_p/\tau_{p,\hbox{\scriptsize exp}} \simeq 1600$ and $\tau_p/\tau_{p,\hbox{\scriptsize exp}} \simeq 60$ respectively. In standard GUTs, $\Sigma$-$H_5$ interactions are needed in order to give the Higgs doublet a mass term to trigger the electroweak symmetry breaking. At the same time, the color-triplet Higgs scalars in $H_5$ also get a mass and, because they can mediate proton decay, this mass has to be large enough in order not to clash with the proton lifetime \cite{Ellis:1978xg}. This mass-scale separation between the Higgs doublet and the color-triplet Higgs, which sit in the same multiplet, gives rise to the well-known doublet-triplet splitting problem in GUTs \cite{Mohapatra:1997sp,Dimopoulos:1981zb}. Many possible solutions to address the issue have been conceived in the context of supersymmetric extensions of GUTs~\cite{Dimopoulos:1981zb,Witten:1981kv,Georgi:1981vf,Nanopoulos:1982wk,Dimopoulos:1982af,Masiero:1982fe,Inoue:1985cw,Barr:1997pt,Witten:2001bf,Kawamura:2000ev,Hall:2001pg}. In our construction of CGUT in (\ref{CFTSU(5)}) and the corresponding potential (\ref{potCFTSU5}), we assumed that the interactions between $H_5$ and $\Sigma$ are negligible and do not play a role. This means there is no relevant contribution to the mass terms of the color-triplet Higgs and the SM Higgs doublet from the VEV of the adjoint field $\Sigma$. Instead, the Higgs doublet mass scale will be generated through the scalar of the hidden sector (see the potential (\ref{bosf}) and section~\ref{sec:Higgs_ele}). As far as inflation is concerned, all we need is a coupling between $\Sigma$ and the inflaton, which generates a CW potential for the inflaton. We leave the study of a more general potential with $\Sigma$-$H_5$ and $\phi$-$H_5$ interactions fully included, together with the generalization to a supersymmetric framework, for future research on the subject.
However, we explore a possible connection between CGUT and the doublet-triplet splitting problem in appendix~\ref{sec:doublet_triplet}. Finally, our present framework of inflation can be straightforwardly extended to $\text{SO(10)}$ by promoting the fields $\left( \phi,\,\chi \right)$ to singlets of $\text{SO(10)}$ and conformally coupling them to the 45-plet adjoint field. The symmetry breaking patterns in this context are rich~\cite{Croon:2019kpe}; for example, within the two-stage pattern $\text{SO(10)}\to\text{G}_{422}=\text{SU}(4)_{c}\times\text{SU}\left(2\right)_{L}\times\text{SU}\left(2\right)_{R}\to\textrm{SU}(5)\to\textrm{SU}(3)_{c}\times\textrm{SU}(2)_{L}\times\textrm{U}(1)_{Y}$, there will be more observables associated with the intermediate group, such as the generation of primordial monopoles \cite{Senoguz:2015lba}. \section{Inflaton interactions with the dark sector} \label{sec:Infl_matter} In this section we introduce the interactions between the inflaton and the dark sector in order to describe the post-inflationary production of the bulk of the matter content in the universe. We consider the dark species to be both fermions and scalars. Since the non-vanishing VEV of the inflaton generates a mass term for each particle coupled to it, we do not allow the SM particles to have such an interaction. Indeed, we aim to keep the mass generation of the SM particles driven by the Higgs mechanism, namely a potential of the form (\ref{higgs_0_pot}). However, since we start with a conformal theory, a mass scale for the Higgs boson has to be generated, together with a way to produce the SM degrees of freedom in the post-inflationary phase. To this end, we allow a portal coupling consistent with conformal symmetry between the dark scalar and the SM Higgs, so that the energy stored in the inflaton can partly leak into the SM and produce the corresponding reheating.
In so doing, we shall see that an intermediate energy scale between the inflaton mass and the electroweak scale is found. The dark scalar shall completely decay into Higgs bosons, whereas the rest of the inflaton energy is responsible for the dark matter relic density made of stable dark fermions. The explicit form of the interactions is contained in the second term of the conformal potential in eq.~(\ref{pot_infl_reheat})\footnote{We assume the kinetic terms and non-minimal couplings of the matter sector to be negligible, hence we do not write them in the action (\ref{CFTSU(5)}). }, which reads \begin{equation} \begin{aligned} V(\phi,\hbox{\scriptsize SM},\hbox{\scriptsize DS}) \equiv &\, V (\phi,\chi,\psi,S,H)\\= &\, Y_\psi f_{\psi}\left(\frac{\phi}{\chi}\right) \phi \, \bar{\psi} \psi\, -\lambda_{S1}f_{S1}\left(\frac{\phi}{\chi}\right)\phi^2 S^{\dagger}S+\lambda_{S2}f_{S2}\left(\frac{\phi}{\chi}\right) \left( S^\dagger S \right)^2 \\ & - \lambda_{HS}\left( S^\dagger S \right)\left( H^\dagger H \right) + \lambda_H \left( H^\dagger H \right)^2\,, \label{bosf} \end{aligned} \end{equation} where $\psi$ and $S$ are the dark fermion and dark scalar fields, $H$ is the SM Higgs boson doublet, $\lambda_{HS}$ is the portal coupling between the Higgs boson and the dark scalar, and $\lambda_H$ is the Higgs self-coupling.\footnote{We take the dark sector particles to be Dirac fermions and complex scalars. This way we allow for a charge in the dark sector, even though we do not exploit or elaborate the details of such a feature in this work.} A Yukawa coupling of the form $\bar{\psi} S \psi$ is allowed by the conformal symmetry; however, we do not consider it here, in order to keep the implementation simple and not to introduce an additional parameter in the model.
Moreover, we choose not to introduce a conformal function of the form $f(\phi / \chi)$ in the terms that comprise the SM field $H$, having in mind a closer connection between the inflaton and the dark sector. The conformal symmetry of the whole action (\ref{CFTSU(5)}) with the potential (\ref{bosf}) can be preserved by implementing the following transformations on the matter fields \begin{equation} S\to\Omega^{-1}(x) S\,,\quad H\to \Omega^{-1}(x)H \, , \quad \psi \to \Omega^{-3/2}(x) \psi \,. \label{newcon} \end{equation} The functions $f_\psi, \, f_{S1},\,f_{S2} $ are conformally invariant with respect to transformations of the fields $\phi,\,\chi$ and we take them to be of the same form as in eq.~(\ref{fPchi}) \begin{equation} \begin{aligned} \quad f_\psi=\left( 1-\frac{\phi^2}{\chi^2} \right)^{\alpha} \, , \quad f_{S1} = \left( 1-\frac{\phi^2}{\chi^2} \right)^{\beta_1},\,\quad f_{S2} = \left( 1-\frac{\phi^2}{\chi^2} \right)^{\beta_2} \, . \label{conf_coupl_matter} \end{aligned} \end{equation} There is however a relevant difference given by the exponents $\alpha$, $\beta_1$ and $\beta_2$, which we take to be positive. Such a choice of the conformal couplings allows for suppressing the couplings between the inflaton and the dark particles with respect to the inflaton-GUT couplings during inflation. This is important in order to let the GUT fields be the relevant source for the effective potential of the inflaton during the slow-roll dynamics. As anticipated in section~\ref{sec:conf_infl}, we require $V\left( \phi,\,\chi,\,\Sigma \right)$ to be the dominant term in the tree level potential (\ref{pot_infl_reheat}) during inflation. This can be achieved by imposing $\alpha,\,\beta_1 \gg 2$ and $\beta_2 \gg 2$ in the couplings (\ref{conf_coupl_matter}).
Indeed, as we learned in section~\ref{sec:conf_infl}, inflation happens when $\phi \to \sqrt{6}M$, which automatically implies that the conformal couplings in (\ref{conf_coupl_matter}) are suppressed during the period of inflation. As we shall see, having $\alpha,\,\beta_1 \gg 2$ also gives dark particle masses that are (much) smaller than the inflaton mass, see section~\ref{sec:dark_masses}, whereas $\beta_2 \gg 2$ ensures that the effective self-coupling of the $S$ scalars is suppressed in comparison with the self-coupling of the $\Sigma$ field. In order to make the exponents $\alpha, \beta_1, \beta_2$ fully responsible for the strength of the couplings and reduce the number of parameters, we fix the couplings $Y_\psi$, $\lambda_{S1}$ and $\lambda_{S2}$ in (\ref{bosf}) to unity. \subsection{Dark particle masses and tree-level interactions} \label{sec:dark_masses} The functions $f_\psi$, $f_{S1}$ and $f_{S2}$ in the potential (\ref{bosf}) lead to a dynamical generation of the masses of the dark particles when the inflaton acquires its VEV, together with tree-level couplings that induce the inflaton decays into dark fermion/scalar pairs. To the best of our knowledge, the conformal couplings (\ref{conf_coupl_matter}) have not previously been exploited to determine the properties of the dark particles. In the following, we assume small perturbations around the inflaton VEV, $\phi = v_\phi + \delta \phi + \cdots \, = v_\phi + \Phi + \cdots$, where we define $\delta \phi \equiv \Phi$ to be the dynamical inflaton field. With the aim of determining the dark particle masses and the tree-level interactions for the inflaton decays, we single out the constant and linear terms in $\Phi$ from the conformal couplings in eq.~(\ref{conf_coupl_matter}) \begin{eqnarray} &&f_{\psi} = \gamma^{2 \alpha} - 2 \alpha \frac{\Phi}{\sqrt{6}M} \gamma^{2(\alpha-1)} \left( 1- \gamma^2 \right)^{\frac{1}{2}} \, , \end{eqnarray} for the dark matter fermion $\psi$.
The same holds for the scalar particle couplings when substituting the corresponding coefficients $\beta_1$ and $\beta_2$; however, we consider only the leading $\Phi$-independent term for $\beta_2$ in the following. Next, we expand the inflaton field around $v_\phi$ in the vertices involving the fermionic and bosonic matter in eq.~(\ref{bosf}), so that one finds the dark matter fermion and scalar masses to be \begin{eqnarray} m_\psi = \, \gamma^{2 \alpha} v_\phi \, , \quad \mu_S^2 = \gamma^{2\beta_1} v_\phi^2 \, . \label{m_ferm_bos} \end{eqnarray} Let us remark that, at this first stage, we can single out the symmetric potential for the dark scalar from (\ref{pot_infl_reheat}), which reads\footnote{If one assumes there is no gauge symmetry and corresponding charge carried by the $S$ field, then a $Z_2$ symmetry is the one to be broken in the potential (\ref{pot_B_0}). } \begin{equation} V_S = -\mu_S^2(S^\dagger S)+\lambda_{S} \left( S^\dagger S\right)^2\,, \label{pot_B_0} \end{equation} where $\mu_S^2 = v_\phi^2 \gamma^{2\beta_1} >0$ and $\lambda_{S} \equiv \gamma^{2\beta_2}$, and the potential for $S$ shows up in the form of (\ref{higgs_0_pot}) with an overall negative mass term. Correspondingly, the potential is destabilised and the $S$ field acquires a VEV, namely $v^2_S =\mu_S^2/ \lambda_S$, and the physical mass of the dark scalar reads $m_S=\sqrt{2}\mu_S$. For this implementation to work, we assume that the dark scalar does not get 1-loop CW quantum corrections due to its coupling with the inflaton field. Therefore, we consider the self-coupling of the dark scalar to be larger than its coupling to the inflaton field, which is easily realized for $\beta_2 \ll \beta_1$ and $\gamma<1$ (we anticipate that this condition holds when the observed Higgs boson mass is generated from the VEV of the dark scalar, see section~\ref{sec:Higgs_ele}).
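The relations $v_S^2=\mu_S^2/\lambda_S$ and $m_S=\sqrt{2}\mu_S$ follow from minimizing (\ref{pot_B_0}) with $S=(v_S+s)/\sqrt{2}$ in unitary gauge; a short symbolic check:

```python
import sympy as sp

s, vS, muS, lS = sp.symbols('s v_S mu_S lambda_S', positive=True)

# Potential (pot_B_0) with S = (v_S + s)/sqrt(2), so S^dagger S = (v_S + s)^2/2
S2 = (vS + s)**2/2
V = -muS**2*S2 + lS*S2**2

# Stationarity at s = 0 fixes the VEV: v_S^2 = mu_S^2/lambda_S
dV = sp.expand(sp.diff(V, s).subs(s, 0))
sol = sp.solve(sp.cancel(dV/vS), vS**2)
assert sp.simplify(sol[0] - muS**2/lS) == 0

# Physical mass: m_S^2 = d^2V/ds^2 at the minimum = 2 mu_S^2
mS2 = sp.diff(V, s, 2).subs(s, 0).subs(vS, muS/sp.sqrt(lS))
assert sp.simplify(mS2 - 2*muS**2) == 0
```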
From eq.~(\ref{m_ferm_bos}) and $m_S=\sqrt{2}\mu_S$, we see that the inflaton generates bosonic states which are heavier than the fermionic ones whenever the same exponent $\alpha=\beta_1$ is chosen in the conformal couplings (\ref{conf_coupl_matter}) \begin{equation} \frac{m_S}{m_\psi}= \frac{\sqrt{2}}{\gamma^{\alpha}} \, . \end{equation} We show the dark fermion and boson masses in Fig.~\ref{fig:figMass} in the parameter space $(\gamma,\alpha)$ and $(\gamma,\beta_1)$ respectively, where the coloured dashed and dotted lines correspond to benchmark mass values for the dark particles. \begin{figure}[t!] \centering \includegraphics[scale=0.57]{Mass_MF_BBN.pdf} \hspace{0.3 cm} \includegraphics[scale=0.572]{Mass_MB_BBN.pdf} \caption{\label{fig:figMass}The masses for fermion and boson particles as given in eqs.~(\ref{m_ferm_bos}) with $m_S = \sqrt{2}\mu_S$. The black solid lines stand for $2 m_S=M_\Phi$ and $2 m_\psi=M_\Phi$, and the shaded areas account for $2 m_S>M_\Phi$ and $2 m_\psi>M_\Phi$. The solid blue lines and the corresponding shaded areas implement the BBN constraint. From top to bottom, the dashed lines stand for $m_\psi=10^3, \, 10^6, \, 10^9, \, 10^{12}$ GeV, whereas the dotted lines correspond to $m_S=10^8, \, 10^{10}, \, 10^{12}$ GeV.} \end{figure} Our aim is to consider the inflaton decays into particles of the hidden sector. To this end, we need to write the tree-level vertices for the processes $\Phi \to \psi \psi$ and $\Phi \to ss$, where $s$ stands for the dynamical dark scalar field with mass $m_S=\sqrt{2}\mu_S$.
From the potential in eq.~(\ref{bosf}) we obtain, by singling out the linear terms in $\Phi$, the following interaction Lagrangian \begin{eqnarray} \mathcal{L}_{\phi,\hbox{\scriptsize DS}} = - \, \gamma^{2(\alpha-1)}\left[ \gamma^2 (2 \alpha +1) -2 \alpha \right] \bar{\psi} \psi \Phi \, - \, v_\phi \gamma^{2(\beta_1-1)} \left[ \gamma^2 (\beta_1 +1) - \beta_1 \right] s^2 \, \Phi \, , \label{int_infl_b} \end{eqnarray} where we recall that the physical scalar is the one obtained after the symmetry breaking with $S=(v_S+s)/\sqrt{2}$ in unitary gauge. In order to make the notation simpler, we define the couplings \begin{equation} y_\psi \equiv \gamma^{2(\alpha-1)}\left[ \gamma^2 (2 \alpha +1) -2 \alpha \right] \, , \quad y_S = \frac{v_\phi}{M_{\textrm{P}}} \gamma^{2(\beta_1-1)} \left[ \gamma^2 (\beta_1 +1) - \beta_1 \right] \, , \label{int_infl_c} \end{equation} and we factor out the Planck mass from the scalar coupling so as to have a dimensionless quantity. In order for the inflaton to decay perturbatively into the dark sector, two conditions have to be met. First, the kinematic conditions $M_\Phi > 2 m_\psi$ and $M_\Phi > 2 m_S$ have to hold.\footnote{In our study we consider the coupling $y_S< 36\pi^2 M_{\Phi}^2/M_{\textrm P}^2$, in which case the effects of parametric resonance are negligible \cite{Dolgov:1989us,Traschen:1990sw,Kofman:1994rk}.} This imposes a condition on $\alpha$ and $\beta_1$ for different values of $\gamma$ in the range $0.1 \leq \gamma \leq 0.9$. In both the left and right plots of Fig.~\ref{fig:figMass}, we indicate the conditions $2 m_S =M_\Phi$ and $2 m_\psi =M_\Phi$ with black solid lines. The shaded region below the solid black line is then not relevant to our study. Let us stress that the kinematic condition for the inflaton decays to happen points to values of $\alpha$ and $\beta_1$ that automatically suppress the relevance of the inflaton-dark sector coupling during inflation, as desired.
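The $\Phi$-linear coefficients in eq.~(\ref{int_infl_b}) are, up to the overall signs fixed by eq.~(\ref{bosf}), the first derivatives of $f_\psi(\phi)\phi$ and $f_{S1}(\phi)\phi^2/2$ evaluated at the inflaton VEV $v_\phi=\sqrt{6}M\sqrt{1-\gamma^2}$; a symbolic verification:

```python
import sympy as sp

phi, gamma, al, b1 = sp.symbols('phi gamma alpha beta_1', positive=True)
M = sp.Symbol('M', positive=True)

f = lambda p: (1 - phi**2/(6*M**2))**p          # conformal couplings (conf_coupl_matter)
vphi = sp.sqrt(6)*M*sp.sqrt(1 - gamma**2)       # inflaton VEV, so f(vphi) = gamma^(2p)

# Fermion vertex: coefficient of Phi in f_psi(phi)*phi expanded around v_phi
y_psi = sp.diff(f(al)*phi, phi).subs(phi, vphi)
target = gamma**(2*(al - 1))*(gamma**2*(2*al + 1) - 2*al)
assert sp.simplify(y_psi - target) == 0

# Scalar vertex: coefficient of Phi s^2 is d/dphi [f_S1(phi)*phi^2]/2 at v_phi
yS_coeff = sp.diff(f(b1)*phi**2, phi).subs(phi, vphi)/2
target_S = vphi*gamma**(2*(b1 - 1))*(gamma**2*(b1 + 1) - b1)
assert sp.simplify(yS_coeff - target_S) == 0
```

The scalar coefficient above equals $M_{\rm P}\,y_S$ once the Planck mass is factored out as in eq.~(\ref{int_infl_c}).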
The decay widths of the inflaton into fermion and boson pairs, $\Gamma_{\Phi\to \psi \psi}$ and $\Gamma_{\Phi\to ss}$, read, respectively, \begin{eqnarray} \Gamma_{\Phi\to \psi \psi} &=& \frac{M_\Phi y_\psi^2}{8 \pi} \left( 1-\frac{4m_\psi^2}{M^2_\Phi}\right)^{3/2} = \frac{M_\Phi \alpha_\psi}{2 } \left( 1-\frac{4m_\psi^2}{M^2_\Phi}\right)^{3/2} \, , \label{decay_phi_to_psi} \\ \Gamma_{\Phi\to ss} &=& \frac{M_{\textrm{P}}^2 y_S^2}{ 16 \pi M_\Phi} \left( 1-\frac{4m_S^2}{M^2_\Phi}\right)^{1/2} = \frac{M_{\textrm{P}}^2 \alpha_S}{4 \, M_\Phi } \left( 1-\frac{4m_S^2}{M^2_\Phi}\right)^{1/2} \, , \label{decay_phi to_b} \end{eqnarray} where $\alpha_S \equiv y_S^2 /(4 \pi)$ and $\alpha_\psi \equiv y_\psi^2/(4 \pi)$, and the couplings $y_\psi$ and $y_S$ have been given in eq.~(\ref{int_infl_c}). The second condition for perturbative decay, $\alpha_\psi \, , \alpha_S < 1$, is well satisfied in all of the parameter space of the model and does not pose any constraint. There is an additional important aspect that imposes a condition on the model parameters through the inflaton decay widths in eqs.~(\ref{decay_phi_to_psi}) and (\ref{decay_phi to_b}). This is the constraint given by the Big Bang nucleosynthesis (BBN) time scale. Indeed, any heavy particle in the early universe has to decay before this epoch, after which the standard thermal history is expected to hold. The decays of the inflaton into the hidden sector have to satisfy this condition as well, because the SM plasma is in turn generated from the decays of the $S$ scalars through the portal coupling $\lambda_{HS}$ in this model. Therefore, we require $\tau_{\Phi \to ss} \, , \tau_{\Phi \to \psi \psi} < 0.1 $ s, and accounting for the conversion $1\,\textrm{GeV} \simeq 1.52 \times 10^{24}\,\textrm{s}^{-1}$ we find \begin{equation} \Gamma_{\Phi\to \psi\psi} \, , \; \Gamma_{\Phi\to ss} > 6.6 \times 10^{-24} \; \hbox{GeV} \,.
\label{BBN_infl} \end{equation} In Fig.~\ref{fig:figMass}, we show the BBN constraint with a blue solid line and the corresponding blue shaded region, where the model provides inflaton decay widths not compatible with (\ref{BBN_infl}). In summary, the model remains viable in the unshaded regions. The dashed lines correspond to fixed-mass hypotheses for the dark particles, and we provide them for orientation. As far as the dark fermion is concerned, the dashed lines stand for $m_\psi=10^3, \, 10^6, \, 10^9, \, 10^{12}$ GeV, from top to bottom. Dark fermion masses down to 1 GeV, corresponding to the boundary given by the solid blue BBN line, are viable.\footnote{Such a mass range for the stable dark fermions implies that the bound on $N_{\hbox{\scriptsize eff}}$ is not affected, because these particles are heavy and cannot qualify as relativistic degrees of freedom at the BBN epoch.} The situation for the dark scalar is different due to the dependence of the decay width in eq.~(\ref{decay_phi to_b}) and the mass $m_S$ on $\gamma$ and $\beta_1$. In the right plot of Fig.~\ref{fig:figMass}, the solid blue line corresponds to $m_S=10^6$ GeV, making the allowed dark scalar masses much heavier than in the fermion case. From top to bottom, the dotted lines correspond to $m_S=10^8, \, 10^{10}, \, 10^{12}$ GeV. \subsection{Higgs mass and electroweak symmetry breaking from the dark sector} \label{sec:Higgs_ele} Starting with the scale invariant potential (\ref{pot_infl_reheat}), the inflaton VEV is responsible for the generation of the dark scalar mass term $\mu_S$, which in turn generates a Higgs mass term $\mu_H$ through the portal coupling $\lambda_{HS}$. Since we assumed a negligible coupling between the adjoint field $\Sigma$ and the fundamental $H_5$ in our CGUT framework, the generation of the Higgs mass term is entirely due to the dark scalar.
Our aim is to further constrain the model parameter space with the condition given by the observed Higgs boson mass $m_H=125.18 \pm 0.16$ GeV \cite{Tanabashi:2018oca}. Similarly to our assumption for the dark scalar in section~\ref{sec:dark_masses}, we assume the Higgs portal coupling to the $S$ field to be much smaller than the self-coupling $\lambda_H$, so as to protect the Higgs field from substantial CW corrections.\footnote{At tree level and at the electroweak scale $\lambda_H \approx m_H^2 g_2^2/(8 m_W^2) \approx 0.13$, where $m_W$ is the W boson mass and $g_2$ is the gauge coupling of the SU(2) SM gauge group.} As shown below, this assumption is going to be well satisfied, since the portal coupling $\lambda_{HS}$ turns out to be very small. Starting from the potential (\ref{bosf}) and after $\phi$ and $S$ symmetry breaking, we can consider the $s$-independent terms and write the following Higgs potential in the form of eq.~(\ref{higgs_0_pot}) \begin{equation} \begin{aligned} V_H &=-\lambda_{HS} \frac{v_S^2}{2} (H^\dagger H) + \lambda_H (H^\dagger H)^2 =-\mu_H^2 (H^\dagger H) + \lambda_H (H^\dagger H)^2 \, , \end{aligned} \end{equation} where in the second step we define $\mu_H^2 \equiv \lambda_{HS}v_S^2/2$. This way a negative mass term appears in the Higgs potential and the Higgs boson undergoes the spontaneous breaking of the electroweak symmetry. The physical Higgs mass reads $m_H=\sqrt{2}\mu_H$, and in terms of $v_S$ and $v_\phi$ we obtain \begin{equation} m_H^2=\lambda_{HS} v_S^2= \lambda_{HS} \, \gamma^{2(\beta_1-\beta_2)} v_\phi^2 \, . \label{Higgs_mass_eq} \end{equation} Finally, by using eq.~(\ref{phiVev}), we can write the Higgs mass in terms of the additional model parameters $\gamma$, $\beta_1$ and $\beta_2$ \begin{equation} m_H^2= 6 \lambda_{HS} \, \gamma^{2(\beta_1-\beta_2-1)} (1-\gamma^2)m_{\hbox{\tiny P}}^2 .
\label{higgsms} \end{equation} With the expression (\ref{higgsms}) we can now explore the model parameter space compatible with the observed Higgs mass. The four parameters $\gamma$, $\beta_1$, $\beta_2$ and $\lambda_{HS}$ are involved. In general, the Higgs boson and the dark scalar mix. However, due to the rather large masses of the dark scalar, $m_S>10^6$ GeV, considered in this work (see the discussion in section~\ref{sec:dark_masses}), we can assume a small mixing. This approximation allows us to take the observed Higgs mass $m_H$ as the mass eigenstate of the light scalar in the two-scalar mixing, and to use eq.~(\ref{higgsms}) as a constraint. We give some details on the scalar mixing in Appendix~\ref{sec:scalar_mix}. \begin{figure}[t!] \centering \includegraphics[scale=0.78]{beta2_lhb_10_7.pdf} \hspace{0.3 cm} \includegraphics[scale=0.79]{beta2_lhb_10_9.pdf} \caption{\label{fig:figMB_Higgs} Parameter space $(\lambda_{HS},\beta_2)$ compatible with the Higgs boson mass $m_H=125.18$ GeV. Two benchmark values, $m_S=10^7$ GeV and $m_S=10^9$ GeV, are considered in the left and right plots, respectively. The solid-dashed lines correspond to four values of $(\gamma,\beta_1)$, which are fixed to comply with the given dark scalar masses.} \end{figure} We orient ourselves by taking two benchmark values for the dark scalar mass, $m_S=10^7$ GeV and $m_S=10^9$ GeV. As shown in the legend of Fig.~\ref{fig:figMB_Higgs}, we obtain pairs ($\gamma, \beta_1$) that comply with the given choice of the dark scalar mass. We saw already in Fig.~\ref{fig:figMass} that, for a fixed $\gamma$, the larger $\beta_1$, the smaller $m_S$. In Fig.~\ref{fig:figMB_Higgs}, the four dashed lines show the contours in the $(\lambda_{HS},\beta_2)$ plane compatible with the Higgs mass for different values of $\gamma$. As expected, because of the large separation between the dark scalar and Higgs masses, the portal coupling is very small.
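To make the scaling of eq.~(\ref{higgsms}) explicit, one can invert it for the portal coupling at fixed $(\gamma,\beta_1,\beta_2)$. The following numerical sketch is purely illustrative: the reduced Planck mass value and the parameter choices are assumptions, not the benchmark points of Fig.~\ref{fig:figMB_Higgs}.

```python
M_P = 2.4e18   # reduced Planck mass in GeV (assumed value)
M_H = 125.18   # observed Higgs boson mass in GeV

def portal_coupling(gamma, beta1, beta2, m_h=M_H, m_p=M_P):
    """Invert eq. (higgsms): m_H^2 = 6 lambda_HS gamma^{2(b1-b2-1)} (1-gamma^2) m_P^2."""
    factor = gamma ** (2 * (beta1 - beta2 - 1)) * (1.0 - gamma**2)
    return m_h**2 / (6.0 * factor * m_p**2)

# for gamma < 1 and beta1 < beta2 the negative exponent enhances the
# gamma factor, so the required portal coupling becomes tiny
lam = portal_coupling(0.1, 3, 8)  # placeholder inputs
```

For fixed $\beta_2$, decreasing $\beta_1$ weakens the $\gamma$ suppression and must be compensated by a smaller $\lambda_{HS}$, which is the trend discussed around Fig.~\ref{fig:figMB_Higgs}.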
One may see that the allowed parameter space for $\lambda_{HS}$ shifts to smaller values (by roughly four orders of magnitude) when going from $m_S=10^7$ GeV to $m_S=10^9$ GeV. The result can be understood as follows. Larger values of the dark scalar mass correspond to smaller values of $\beta_1$. Keeping $\beta_2$ fixed, the less effective suppression from $\gamma^{2 \beta_1 -2 \beta_2 -2}$ in (\ref{higgsms}) has to be compensated by smaller values of the portal coupling. The same reasoning can be used to understand the trend of each dashed curve in both plots of Fig.~\ref{fig:figMB_Higgs}. The largest possible value for $\lambda_{HS}$ is obtained in the limiting case $m_S=10^6$ GeV, which we recall is the boundary value required to comply with BBN, and we find it to be $\lambda_{HS} \simeq 10^{-7}$. Finally, we give the expression for the decay width of the dark scalar into a Higgs boson pair. This is a key ingredient in the analysis of SM reheating that we carry out in the next section. Indeed, the inflaton is not directly coupled to the SM sector in this model. The dark scalar plays the role of a mediator that transfers some of the energy stored in the inflaton at the end of inflation into SM degrees of freedom. After the symmetry breaking of the dark sector and the electroweak symmetry, a vertex that mediates the decay $s \to hh$ is induced, and the expression for the width is \begin{equation} \Gamma_{s \to hh}\equiv \Gamma_S = \frac{\lambda_{HS}}{32 \pi} \frac{m_H^2}{m_S} \sqrt{1-\frac{4 m_H^2}{m_S^2}} + \sin^2(\theta) \Gamma_h(m_S) \, , \label{Higgs_width_1} \end{equation} where we have used the condition $m^2_H=\lambda_{HS} \, v_S^2$ to express the first term in eq.~(\ref{Higgs_width_1}). The first term in the decay width amounts to the direct decay of the dark scalar into Higgs pairs (in our model $m_S \gg 2 m_H$ and the square root can be set to unity).
The second term is triggered by the $S$--Higgs boson mixing, and we take the off-shell Higgs boson width as in refs.~\cite{Cline:2013gha,Dittmaier:2011ti}. However, we checked that its contribution is negligible in all the parameter space of interest, due to the small mixing angle, which, in the limit $\lambda_{HS} \ll \lambda_H, \lambda_S$, reads \begin{equation} \theta \simeq \frac{\lambda_{HS} v_H v_S}{m_S^2-m_H^2} = \frac{\lambda_{HS}^{3/2}}{2^{\frac{3}{2}} \lambda_S } + \cdots \label{mix_angl_0} \end{equation} The dots stand for corrections in $m_H^2/m_S^2 \ll 1$, and we used the relations between VEVs, masses and four-scalar couplings to obtain the last expression in eq.~(\ref{mix_angl_0}). An important comment is in order. We see that the conformal coupling between the inflaton and the dark scalar in (\ref{bosf}), together with the constraint from the BBN time scale, allow for massive states $m_S {\ \lower-1.2pt\vbox{\hbox{\rlap{$>$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } 10^6$ GeV. Let us assume that we remove the dark scalar and couple instead the Higgs boson directly to the inflaton field with a conformal coupling. Then, it does not seem possible to generate a scalar with a mass of order $10^2$ GeV (as the SM Higgs boson is) and be consistent with BBN at the same time. This is mainly due to the large inflaton VEV, which has to be compensated with very large exponents in the conformal couplings to generate small scalar masses. However, large exponents lead in turn to small tree-level couplings between the inflaton and the Higgs boson, and eventually induce a decay width too small to be compatible with the BBN time scale. Having delineated the main features and compelling parameter space for the model, we summarize the energy scales and symmetry breaking patterns discussed so far in Fig.~\ref{fig:Esscales}. We shall address the thermal history after CGUT inflation in section~\ref{sec:numerical_final}. \begin{figure}[t!]
\centering \includegraphics[scale=0.595]{CGUT.pdf} \caption{In this plot we depict the associated hierarchy of energy scales (from left to right) and symmetry breaking patterns in our model. We obtain Starobinsky-like inflation after Conformal Symmetry Breaking (CSB) and GUT Symmetry Breaking (GUTSB), respectively. Later on, Dark Sector Symmetry Breaking (DSSB) occurs at some intermediate scale between the inflation scale and the Electroweak Symmetry Breaking (EWSB) scale.} \label{fig:Esscales} \end{figure} \section{Dark matter relic density and SM reheating} \label{sec:numerical_final} In this section we address the production of the dark particles, both the dark fermions and dark scalars directly coupled to $\phi$, and of the SM degrees of freedom after inflation. The picture is the following. At the end of inflation, the universe has been given the initial kick for its expansion; however, it is left empty, without any particles but $\phi$. The energy stored in the inflaton field is then converted into other particles through its decays, and an equilibrium heat bath of SM particles is formed. This epoch is called reheating, and the decays of the inflaton field into SM particles can happen either directly or via mediator fields~\cite{Kofman:1994rk,Kofman:1997yn}. In order to avoid spoiling the predictions of standard BBN, the SM has to become the dominant energy density component before temperatures as low as 4 MeV~\cite{Kawasaki:2000en,Hannestad:2004px,Ichikawa:2005vw,DEBERNARDIS2008192}. In this work, we forbid a direct coupling between $\phi$ and SM particles. The inflaton only decays into the hidden sector, and a population of dark fermions and scalars is induced. The fermions are inert and stable and make up the present-day dark matter relic density $\Omega_{\hbox{\tiny DM}}h^2 = 0.1200 \pm 0.0012$ \cite{Aghanim:2018eyx}. However, we can still generate a SM thermal bath via the portal coupling $\lambda_{HS}$ between the dark scalar and the Higgs boson.
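As a side numerical illustration (not part of the analysis pipeline), the widths in eqs.~(\ref{decay_phi_to_psi}) and (\ref{decay_phi to_b}) and the BBN bound (\ref{BBN_infl}) can be sketched as below; the masses and effective couplings are placeholders, not the benchmark values of the model.

```python
import math

M_PL = 2.4e18        # reduced Planck mass in GeV (assumed value)
GAMMA_BBN = 6.6e-24  # GeV; from tau < 0.1 s with 1 GeV ~ 1.52e24 s^-1

def gamma_phi_to_psipsi(m_phi, m_psi, alpha_psi):
    """Inflaton width into a dark-fermion pair, eq. (decay_phi_to_psi)."""
    return 0.5 * m_phi * alpha_psi * (1.0 - 4.0 * m_psi**2 / m_phi**2) ** 1.5

def gamma_phi_to_ss(m_phi, m_s, alpha_s, m_pl=M_PL):
    """Inflaton width into a dark-scalar pair, eq. (decay_phi to_b)."""
    return 0.25 * m_pl**2 * alpha_s / m_phi * math.sqrt(1.0 - 4.0 * m_s**2 / m_phi**2)

def decays_before_bbn(width_gev):
    """BBN condition of eq. (BBN_infl): lifetime shorter than 0.1 s."""
    return width_gev > GAMMA_BBN
```

Here $\alpha_{\psi,S}=y_{\psi,S}^2/(4\pi)$, and the $0.1$~s lifetime bound translates into the width floor through $\hbar \simeq 6.58\times10^{-25}$ GeV\,s.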
The heavy dark scalars start to decay as soon as they are produced by the inflaton, and they act as mediators that transfer energy from the inflaton to the SM. The Higgs boson is understood to generate the SM relativistic degrees of freedom; therefore, we connect the SM temperature with the Higgs boson density, as implemented for example in refs.~\cite{Chu:2011be,Tenkanen:2016jic}. \begin{figure}[t!] \centering \includegraphics[scale=0.79]{rgamma_b5_bis.pdf} \hspace{0.25 cm} \includegraphics[scale=0.79]{rgamma_b8_bis.pdf} \caption{The instantaneous reheating temperature (\ref{SM_T_rh}) is given as a function of the dark scalar mass $m_S$ for $\gamma=0.1$ and $\gamma=0.9$, red-dashed and orange dot-dashed lines, respectively. In the left (right) plot $\beta_2=5$ $(\beta_2=8)$. The shaded area gives temperatures smaller than 4 MeV.} \label{fig:T_RH_inst} \end{figure} Although we shall solve a network of Boltzmann equations to track the evolution of the dark matter and the SM temperature in section~\ref{sec:Boltzmann_numerics}, it is useful to give an estimate of the reheating temperature for the SM in the approximation of instantaneous reheating. In short, such a version of the reheating temperature can be obtained by assuming that all the available energy stored in the decaying particle is instantaneously converted into radiation. As the SM is generated through the decays of the dark scalars, the corresponding reheating temperature reads \begin{equation} T^{\hbox{\tiny RH},i}_{\hbox{\tiny SM}} = \sqrt{\Gamma_{S} M_{\textrm{P}}} \left(\frac{90}{ \pi^2 g_{\hbox{\scriptsize eff}}(T^{\hbox{\tiny RH},i}_{\hbox{\tiny SM}})} \right)^{1/4} \, , \label{SM_T_rh} \end{equation} where the relevant decay width is given in eq.~(\ref{Higgs_width_1}) and $g_{\hbox{\scriptsize eff}}$ is the number of relativistic degrees of freedom at the reheating temperature.
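Since $g_{\hbox{\scriptsize eff}}$ in eq.~(\ref{SM_T_rh}) itself depends on the temperature, the relation is implicit and can be solved by a simple fixed-point iteration. The sketch below is illustrative only, with a crude two-step placeholder for $g_{\hbox{\scriptsize eff}}$ and a placeholder width:

```python
import math

M_P = 2.4e18  # reduced Planck mass in GeV (assumed value)

def g_eff(temp_gev):
    """Crude placeholder for the relativistic degrees of freedom."""
    return 106.75 if temp_gev > 200.0 else 10.75

def t_reheat_instant(gamma_s, m_p=M_P):
    """Instantaneous reheating temperature of eq. (SM_T_rh), in GeV."""
    t = 1.0  # initial guess
    for _ in range(20):  # fixed-point iteration for g_eff(T)
        t = math.sqrt(gamma_s * m_p) * (90.0 / (math.pi**2 * g_eff(t))) ** 0.25
    return t
```

With a step-function $g_{\hbox{\scriptsize eff}}$ the iteration converges after at most two passes; the key scaling is $T^{\hbox{\tiny RH},i}_{\hbox{\tiny SM}} \propto \sqrt{\Gamma_S}$.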
In Fig.~\ref{fig:T_RH_inst}, we show the reheating temperature as provided by eq.~(\ref{SM_T_rh}) for the two benchmark values $\gamma=0.1$ and $\gamma=0.9$, respectively the red-dashed and dot-dashed orange lines. In the left plot, we see that there is a rather limited parameter space available to comply with $T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}} \geq 4$ MeV for $\gamma=0.1$, whereas the situation improves considerably for larger values of $\gamma$. Also, increasing from $\beta_2=5$ to $\beta_2=8$ makes the $\gamma=0.1$ choice incompatible with a large enough reheating temperature. The trend can be understood again by looking at Fig.~\ref{fig:figMB_Higgs}. There we see that, for a given $m_S$, smaller values of $\gamma$ are related to smaller portal couplings $\lambda_{HS}$. Accordingly, the decay width $\Gamma_S$ entering (\ref{SM_T_rh}) is smaller, and so is the corresponding reheating temperature. In this work we assume that the dark particles do not form a thermal bath in the hidden sector. The temperature of such a sector would be defined by the radiation energy density of the dark sector. To this end, one needs thermalized relativistic degrees of freedom. However, the dark fermions are an inert component and do not interact with anything else after they are produced from the inflaton decays. Only the dark scalars could be candidates for a thermal bath in the hidden sector. We checked that the would-be instantaneous reheating temperature, as estimated from (\ref{SM_T_rh}) with $\Gamma_{\Phi \to ss}$ instead of $\Gamma_S$, is orders of magnitude smaller than $m_S$ in the whole parameter space of interest. Hence, the dark scalars cannot qualify as thermalized relativistic degrees of freedom, and we do not follow the temperature evolution in the hidden sector.\footnote{The reheating temperature is not the largest temperature obtained during the inflaton decay~\cite{Chung:1998rq,Giudice:2000ex}.
The maximum temperature peaks at very early stages and then quickly decreases. There could be a limited time range in the very early stages of $\phi$ decays where the dark scalars could behave as a thermalized plasma of relativistic particles in our model. We make an approximation by neglecting this stage.} Also, we are interested in keeping $\beta_2$ large enough to suppress the self-coupling of $S$ during inflation, and this helps in suppressing the processes that would induce thermalization in the dark scalar sector. A complementary framework, where dark scalars form a thermal bath in the hidden sector with its own temperature, is described for example in ref.~\cite{Tenkanen:2016jic}. \subsection{Boltzmann equations} \label{sec:Boltzmann_numerics} In this section we solve a network of Boltzmann equations to track the abundances of the inflaton, the dark fermions, the dark scalars and the SM radiation. Our main aim is to provide benchmark values in the model parameter space that reproduce both the observed relic density $\Omega_{\hbox{\tiny DM}}h^2 = 0.1200 \pm 0.0012$~\cite{Aghanim:2018eyx} and $T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}} \geq 4$ MeV. The dark matter fermions are stable and are responsible for the present-day relic density in this model. They are produced non-thermally from the inflaton decays and are inert ever since. The dark scalars are also produced non-thermally from the inflaton decays. However, they in turn decay into SM Higgs bosons, which we take as the source of the SM radiation component (see ref.~\cite{Tenkanen:2016jic} for a similar implementation of the generation of the SM plasma). In the remainder of the paper, we shall not refer to the instantaneous reheating temperature as given in (\ref{SM_T_rh}); rather, we obtain the temperature evolution from the Boltzmann equations, together with the extraction of the relic abundance of the dark fermions $\psi$.
Indeed, as discussed and shown in refs.~\cite{Chung:1998rq,Giudice:2000ex,Arcadi:2011ev}, the reheating temperature $T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}}$ can be properly defined when the radiation component finally scales as $T_{\hbox{\tiny SM}} \propto a^{-1}$, where $a$ is the scale factor, whereas the temperature exhibits different power laws in earlier stages. The Boltzmann equations for the various components read~\cite{Chung:1998rq,Giudice:2000ex,Arcadi:2011ev} \begin{eqnarray} && \dot{\rho}_\Phi + 3 H \rho_\Phi = - \Gamma_{\Phi} \rho_\Phi \, , \label{BE_phi} \\ &&\dot{n}_\psi + 3 H n_\psi = \mathcal{B}_\psi \Gamma_{\Phi} \frac{\rho_\Phi}{M_\Phi} \, , \label{BE_psi} \\ &&\dot{n}_S + 3 H n_S = \mathcal{B}_S \Gamma_{\Phi} \frac{\rho_\Phi}{M_\Phi} - \Gamma_{S} n_S \, , \label{BE_b} \\ && \dot{\rho}_{\hbox{\tiny SM}} + 4 H \rho_{\hbox{\tiny SM}} = E_S\, n_S \, \Gamma_S \, , \label{BE_rad} \end{eqnarray} where $\rho_\Phi=M_\Phi n_\Phi$ is the inflaton energy density, $n_\psi$ and $n_S$ are the dark fermion and dark scalar number densities, $E_S$ is the energy of the dark scalar (cf.~eq.~(\ref{energy_dark})) and $\rho_{\hbox{\tiny SM}}$ is the energy density of the SM radiation component. $\Gamma_\Phi$ is the total decay width of the inflaton, namely the sum of the decay widths in eqs.~(\ref{decay_phi_to_psi}) and (\ref{decay_phi to_b}); the latter are expressed in terms of the corresponding branching ratios $\mathcal{B}_{\psi}$ and $\mathcal{B}_{S}$ in eqs.~(\ref{BE_psi}) and (\ref{BE_b}). The left-hand side of eq.~(\ref{BE_rad}) is obtained by imposing the equation of state for the pressure and energy density of radiation, $p_{\hbox{\tiny SM}}=\rho_{\hbox{\tiny SM}}/3$. As in the usual Boltzmann approach, the terms on the right-hand side of each equation describe gain and loss terms for the given species due to the corresponding processes.\footnote{We checked that the $2 \to 2$ process $ss \to hh$ is negligible with respect to $s \to hh$ in the model parameter space.
Therefore, we do not include it in the rate equations.} We assume that the dark scalars do not equilibrate with the SM plasma, which is a rather good approximation due to the very small portal couplings realized in our framework. The four components enter the Hubble rate as follows \begin{equation} H= \left[ \left(\frac{1}{3 M_{\textrm{P}}^2} \right)\left( \rho_\Phi + \rho_\psi + \rho_S + \rho_{\hbox{\tiny SM}} \right) \right]^{\frac{1}{2}} \, , \end{equation} where we write the energy density stored in the dark particles as $\rho_\psi=E_\psi n_\psi$ and $\rho_S=E_S n_S$~\cite{Arcadi:2011ev}. The expression for the energy can be taken as~\cite{Takahashi:2007tz,Dev:2013yza} \begin{equation} E_{\psi(S)}(t) \simeq \sqrt{m^2_{\psi(S)}+\left[ \frac{M_\Phi \, \mathcal{B}_{\psi(S)}}{2} \frac{a(t_d)}{a(t)}\right]^2 } \label{energy_dark} \end{equation} which holds for $m_\psi,m_S \ll M_\Phi$, where $t_d$ is the time when the inflaton decays, which we take to be the initial time when integrating the Boltzmann equations.\footnote{One could pursue a better treatment of $E_{\psi(S)}(t)$ at the injection from inflaton decays, along the lines of the discussion in ref.~\cite{Arcadi:2011ev}. However, we checked that the contribution of the three-momentum in (\ref{energy_dark}) is practically irrelevant for the numerics of our model, and in fact the approximation $\rho_\psi=m_\psi n_\psi$ and $\rho_S=m_S n_S$ already works rather well. } In order to solve the Boltzmann equations (\ref{BE_phi})-(\ref{BE_rad}), it is convenient to switch to another set of variables \cite{Chung:1998rq} \begin{equation} \Phi = \frac{\rho_\Phi \, a^3}{M_\Phi} \, , \quad \Psi = n_\psi a^3 \, , \quad \text{S} = n_S a^3 \, , \quad R = \rho_{\hbox{\tiny SM}} a^4 \, , \label{def_var_Boltz} \end{equation} where one adopts the scale factor $a$ as the independent variable rather than time. We define $A=a M_\Phi$, or equivalently $A=a/a_I$ with $a_I= 1/ M_\Phi$.
Then, the Boltzmann equations (\ref{BE_phi})-(\ref{BE_rad}) become \begin{eqnarray} &&\frac{d \Phi}{d A}= - \frac{\sqrt{3}M_{\textrm{P}} \, }{M_\Phi^2} \frac{A^{\frac{1}{2}} \Phi \, \Gamma_\Phi}{\left( \Phi + \frac{ E_S}{M_\Phi} \text{S}+ \frac{ E_\psi}{M_\Phi} \Psi + \frac{R}{A} \right)^{1/2}} \, , \label{BE_phi_bis} \\ &&\frac{d \Psi}{d A}= \frac{\sqrt{3}M_{\textrm{P}} }{M_\Phi^2} \frac{A^{\frac{1}{2}} \Phi \, \mathcal{B}_\psi\, \Gamma_{\Phi}}{\left( \Phi + \frac{ E_S}{M_\Phi} \text{S} + \frac{ E_\psi}{M_\Phi} \Psi + \frac{R}{A} \right)^{1/2}} \, , \\ &&\frac{d \text{S}}{d A}= \frac{\sqrt{3}M_{\textrm{P}}}{M_\Phi^2} \frac{A^{\frac{1}{2}} \left( \Phi \mathcal{B}_S \, \Gamma_{\Phi} - \text{S} \, \Gamma_S \, \right)}{\left( \Phi + \frac{ E_S}{M_\Phi} \text{S} + \frac{ E_\psi}{M_\Phi} \Psi + \frac{R}{A} \right)^{1/2}} \, , \\ &&\frac{d R}{d A}= \frac{\sqrt{3}M_{\textrm{P}} \, E_S }{M_\Phi^3}\frac{A^{\frac{3}{2}} \text{S} \, \Gamma_S}{\left( \Phi + \frac{ E_S}{M_\Phi} \text{S} + \frac{ E_\psi}{M_\Phi} \Psi + \frac{R}{A} \right)^{1/2}} \, . \label{BE_rad_bis} \end{eqnarray} We solve them with the initial conditions $\text{S}(A_I)=\Psi(A_I)=R(A_I)=0$, whereas the initial energy is entirely stored in the inflaton, $\Phi(A_I)=\rho_{\Phi,I}/M_\Phi^4$. As for the initial density of the inflaton, we use the estimate $\rho_{\Phi,I} \approx M_\Phi^2 M_{\hbox{\scriptsize P}}^2$, which applies in the case of chaotic inflation. \begin{figure}[t!] \centering \includegraphics[scale=0.79]{gamma08_sol_mb_10_9.pdf} \hspace{0.3 cm} \includegraphics[scale=0.572]{gamma08_sol_mb_10_9_norm.pdf} \caption{Solution of the Boltzmann equations (\ref{BE_phi_bis})-(\ref{BE_rad_bis}). The red-dotted, purple-solid, orange-dashed and blue-solid lines correspond to the inflaton, dark fermion, dark scalar and SM radiation, respectively. The dimension of $\Phi$, $\Psi$ and S is [GeV]$^3$, whereas for $R$ one has [GeV]$^4$.
The right plot shows the same quantities, rescaled and normalized as explained in the main text. We set $\gamma=0.8$, which gives $M_\Phi=8.9 \times 10^{12}$ GeV, and then $m_S=1.0 \times 10^9$ GeV and $m_\psi=3.9 \times 10^3$ GeV.} \label{fig:gamma09_density} \end{figure} The results of the rate equations are collected in Figs.~\ref{fig:gamma09_density} and \ref{fig:gamma09_density_10_8} for two different choices of the model parameters $\beta_1$ and $\alpha$ that comply with the observed dark matter relic density, whereas $\gamma=0.8$ is kept in both cases, corresponding to $M_\Phi \simeq 8.9 \times 10^{12}$ GeV. In Fig.~\ref{fig:gamma09_density}, the mass of the dark scalar is $m_S=10^9$ GeV, whereas the mass of the dark matter fermion is $m_\psi=3.9 \times 10^3$ GeV. One may see that for a long time the inflaton density stays nearly constant (red-dotted line) and slowly populates the hidden sector with dark fermions and scalars (solid-purple and dashed-orange lines, respectively). In the meantime, the dark scalar decays trigger the formation of the SM component (solid-blue line), which is smaller than the dark particle populations at early times. The bulk of the inflaton decays happens around $A=10^{17}$ for this choice of the parameters, and one sees that the dark fermions and scalars freeze in. Their later behaviour is different, though. On the one hand, the dark fermion density stays constant ever since. On the other hand, the dark scalars exhibit a nearly constant density for a while, until their population is very effectively depleted and no further decays can occur (this corresponds to the time when $\Gamma_{S} \simeq H$). As a result, the SM radiation density also freezes to a constant value.
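For reference, eqs.~(\ref{BE_phi_bis})-(\ref{BE_rad_bis}) can be integrated with any standard ODE solver. The sketch below uses SciPy with placeholder widths and branching ratios (not the benchmark points of the figures), integrating in $\ln A$ for numerical stability:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL  = 2.4e18    # reduced Planck mass [GeV] (assumed value)
M_PHI = 8.9e12    # inflaton mass [GeV]
GAMMA_PHI = 1e-2  # total inflaton width [GeV]   (placeholder)
GAMMA_S   = 1e-20 # dark-scalar width [GeV]      (placeholder)
B_PSI = 1e-3      # branching ratio Phi -> psi psi
B_S   = 1.0 - B_PSI
E_PSI = E_S = 0.5 * M_PHI  # injected energy per quantum (m << M_Phi limit)

def rhs(ln_a, y):
    """d(Phi, Psi, S, R)/d(ln A) for eqs. (BE_phi_bis)-(BE_rad_bis)."""
    phi, psi, s, r = np.maximum(y, 0.0)
    big_a = np.exp(ln_a)
    # comoving energy combination entering the Hubble rate
    rho = phi + (E_S / M_PHI) * s + (E_PSI / M_PHI) * psi + r / big_a
    pref = np.sqrt(3.0) * M_PL / M_PHI**2 * np.sqrt(big_a) / np.sqrt(rho)
    d_phi = -pref * phi * GAMMA_PHI
    d_psi = pref * phi * B_PSI * GAMMA_PHI
    d_s = pref * (phi * B_S * GAMMA_PHI - s * GAMMA_S)
    d_r = pref * big_a * (E_S / M_PHI) * s * GAMMA_S
    return big_a * np.array([d_phi, d_psi, d_s, d_r])  # chain rule: d/dlnA = A d/dA

# initial conditions: S = Psi = R = 0 and Phi(A_I) = rho_{Phi,I}/M_Phi^4
y0 = [(M_PL / M_PHI) ** 2, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, [0.0, np.log(1e10)], y0, method="LSODA", rtol=1e-8, atol=1e-3)
```

With these placeholder inputs the inflaton decay becomes effective only near the end of the integration range; for the benchmark widths of the model, the decay completes around $A \sim 10^{17}$, as in Fig.~\ref{fig:gamma09_density}.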
In the right panel of Fig.~\ref{fig:gamma09_density}, we normalize the densities as follows: $\Phi/\Phi_I$; the $S$ scalar and $\Psi$ fermion densities are divided by the value of the dark scalar density at the onset of the freeze-in regime; finally, the radiation is normalized by its constant value after $S$ decays no longer occur. For this choice of the parameters, the branching ratio of the process $\Phi \to \psi \psi$ is $\mathcal{B}_\psi \equiv \Gamma_{\Phi\to \psi \psi}/(\Gamma_{\Phi\to \psi \psi}+\Gamma_{\Phi\to ss}) \simeq 1.2 \times 10^{-3}$. \begin{figure}[t!] \centering \includegraphics[scale=0.79]{gamma08_sol_mb_10_8.pdf} \hspace{0.3 cm} \includegraphics[scale=0.572]{gamma08_sol_mb_10_8_norm.pdf} \caption{Color-style for the densities as in Fig.~\ref{fig:gamma09_density}. The dark particles have the following masses: $m_S=10^8$ GeV and $m_\psi=59.7$ GeV.} \label{fig:gamma09_density_10_8} \end{figure} The relic density is fixed in terms of the density relative to the radiation component \cite{Chung:1998rq} \begin{equation} \frac{\Omega_{\hbox{\tiny DM}}h^2}{\Omega_{\hbox{\tiny SM}}h^2} = \frac{\rho_\psi (A_{\hbox{\tiny RH}})}{\rho_{\hbox{\tiny SM}} (A_{\hbox{\tiny RH}})} \frac{T_{\hbox{\tiny RH}}}{T_0} \, , \end{equation} where $T_{\hbox{\tiny RH}}$ is the reheating temperature, $T_0=2.37 \times 10^{-13}$ GeV is the temperature today, and $\Omega_{\hbox{\tiny SM}}h^2=2.47 \times 10^{-5}$~\cite{Aghanim:2018eyx} is the present radiation energy density. This relation holds because the dark fermions freeze in before the radiation density settles to a constant (see the solid purple and blue lines in Figs.~\ref{fig:gamma09_density} and \ref{fig:gamma09_density_10_8}).
Using the definitions (\ref{def_var_Boltz}) and in terms of the quantities extracted from the numerical solution of the Boltzmann equations (\ref{BE_phi_bis})-(\ref{BE_rad_bis}), we obtain \begin{equation} \Omega_{\hbox{\tiny DM}}h^2 = \frac{T_{\hbox{\tiny RH}}}{\hbox{GeV}} \frac{m_\psi}{M_\Phi} \frac{\Psi(A_{\hbox{\tiny RH}})}{R(A_{\hbox{\tiny RH}})} A_{\hbox{\tiny RH}} \, . \end{equation} The solutions of the Boltzmann equations for a second choice of the parameters are shown in Fig.~\ref{fig:gamma09_density_10_8}. The masses of the dark particles are $m_S = 10^8$ GeV and $m_\psi=59.7$ GeV, which are compatible with the observed relic density. A smaller dark scalar mass has a relevant impact on the form of the solutions, as one may see by comparing the orange and blue lines in Fig.~\ref{fig:gamma09_density} and in Fig.~\ref{fig:gamma09_density_10_8}. The main difference is that the dark scalar displays a constant density for a much shorter time and promptly decays as soon as the maximum abundance is reached. This is due to a larger decay width $\Gamma_S$ in eq.~(\ref{Higgs_width_1}), which follows from a smaller $m_S$ in the denominator and a larger portal coupling. Indeed, for a fixed value of $\gamma$, $\lambda_{HS}$ increases as the dark scalar mass decreases (see Fig.~\ref{fig:figMB_Higgs}). The branching ratio is $\mathcal{B}_\psi \simeq 4.2 \times 10^{-3}$ in this case. A comment is in order for the non-thermally produced dark fermions. In our model, these particles are responsible for the present-day dark matter relic density. Therefore, they need to be non-relativistic (cold or warm) at the time of matter-radiation equality to allow structure formation. We use the comoving free-streaming length $\lambda_{\text{fs}}$ in order to assess the coldness of non-thermal dark matter, as explained in refs.~\cite{Takahashi:2007tz,Dev:2013yza}.
From Lyman-$\alpha$ constraints, the comoving free-streaming length is bounded to be $\lambda_{\text{fs}} {\ \lower-1.2pt\vbox{\hbox{\rlap{$<$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } 1$ Mpc. Since the dark fermions are produced in pairs from the inflaton decays, they can have large velocities. So one has to ensure that the dark fermion three-momenta are sufficiently redshifted from the time scale of the inflaton decays to matter-radiation equality. We find that the dark fermion belongs to the cold and warm categories, as defined in ref.~\cite{Dev:2013yza}, for the model parameters considered in Fig.~\ref{fig:gamma09_density} and Fig.~\ref{fig:gamma09_density_10_8}, respectively. \begin{figure}[t!] \centering \includegraphics[scale=0.572]{gamma08_10_9_Temp_tris.pdf} \hspace{0.25 cm} \includegraphics[scale=0.572]{gamma08_10_8_Temp_tris.pdf} \caption{The SM temperature evolution as extracted from the solution of the Boltzmann equations is shown with a solid blue line. The right and left panels correspond to the two different benchmark points discussed in the main text. The purple solid line indicates the lower bound of 4 MeV, and the shaded area indicates smaller temperatures. Reheating has to happen above the $4$ MeV purple-solid line. } \label{fig:gamma09_temperature} \end{figure} Finally, we discuss the temperature of the SM sector and the corresponding reheating. In Fig.~\ref{fig:gamma09_temperature}, the SM temperature is displayed for the two benchmark scenarios that also comply with a reheating temperature larger than 4 MeV. The temperature evolution can be obtained from the radiation density $R$ as follows \begin{equation} T_{\hbox{\tiny SM}}= \left( \frac{30}{\pi^2 g_*( T_{\hbox{\tiny SM}})}\right)^{\frac{1}{4}} \frac{R^\frac{1}{4}}{A}M_\Phi \, , \end{equation} where $R$ is the solution from the rate equations. The plots show the evolution of the temperature along the different stages.
First, as originally outlined in refs.~\cite{Chung:1998rq,Giudice:2000ex}, the reheating temperature is not the maximum temperature the thermal bath can reach. The very early rise of the SM temperature peaks at roughly 20 GeV (100 GeV) in the first (second) benchmark scenario, which is much larger than the reheating temperature at the onset of the $T_{\hbox{\tiny SM}} \propto a^{-1}$ scaling, which we find to be $T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}} \simeq 19$ MeV ($T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}} \simeq 182$ MeV). We extract the reheating temperature numerically, similarly to what is done in refs.~\cite{Chung:1998rq,Giudice:2000ex,Arcadi:2011ev}. Basically, we match the $T_{\hbox{\tiny SM}} \propto a^{-3/8}$ behaviour obtained in the phase when the dark scalar acts as a large source of entropy to the $T_{\hbox{\tiny SM}} \propto a^{-1}$ scaling in the subsequent radiation-dominated regime, as clearly visible in the left plot of Fig.~\ref{fig:gamma09_temperature}. For the second choice of the parameters, the regime $T_{\hbox{\tiny SM}} \propto a^{-3/8}$ is actually absent, and the radiation stage kicks in very sharply after the dark scalars reach their maximum density and promptly decay into SM degrees of freedom. The second bumpy rise, which appears in both cases, is due to the stage when the dark scalar decays are very effective and the abundance of dark scalars is entirely depleted, namely when $\Gamma_{S} \simeq H$. The numerical results can be compared with the instantaneous reheating temperature (\ref{SM_T_rh}) for the two benchmark scenarios, which reads $T^{\hbox{\tiny RH},i}_{\hbox{\tiny SM}} \simeq 23$ MeV and 373 MeV, respectively. Alternatively, one can obtain another estimate for the SM reheating temperature as outlined in ref.~\cite{Tenkanen:2016jic}, where the corresponding expression differs from that in (\ref{SM_T_rh}) by a factor $2^{5/4}$. In this latter case, one finds 10 MeV and 156 MeV.
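The map from the comoving variables to the SM temperature and to the relic density can be sketched as follows; this is an illustrative post-processing step, with $M_\Phi$ set to the benchmark value quoted in the text and all other inputs understood as placeholders:

```python
import math

M_PHI = 8.9e12  # inflaton mass in GeV (benchmark value in the text)

def sm_temperature(r_comoving, big_a, g_star):
    """T_SM in GeV from R = rho_SM a^4, following the equation in the text."""
    return (30.0 / (math.pi**2 * g_star)) ** 0.25 * r_comoving**0.25 / big_a * M_PHI

def omega_dm_h2(t_rh_gev, m_psi, psi_rh, r_rh, a_rh, m_phi=M_PHI):
    """Relic density from the comoving yields Psi and R evaluated at A_RH."""
    return t_rh_gev * (m_psi / m_phi) * (psi_rh / r_rh) * a_rh
```

Once $R$ is constant (so that $T_{\hbox{\tiny SM}} \propto a^{-1}$), the reheating point $A_{\hbox{\tiny RH}}$ can be read off the numerical solution and fed to the second function.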
\section{Conclusions and discussion} \label{sec:conclusion} In this paper we studied the post-inflationary dynamics of a CGUT model as given in (\ref{CFTSU(5)}). With respect to the original formulation, we include additional degrees of freedom that account for a hidden sector and the SM sector, while respecting conformal invariance, see eq.~(\ref{bosf}). Dark fermions and dark scalars of the hidden sector are directly coupled to the inflaton, whereas the SM Higgs boson acts as a portal between the visible and the dark sector through a coupling shared with the dark scalars. As is typical of conformal theories, no mass scale appears in the fundamental action (or Lagrangian). A non-vanishing VEV for the inflaton field is generated through radiative corrections induced by the GUT fields after the conformal and GUT symmetries are broken. The Planck mass is dynamically generated when the inflaton reaches its VEV. The same inflaton VEV is responsible for the masses of the dark particles. Moreover, the strength of the interactions between the inflaton and the dark particles is dictated by the form of the conformal couplings in (\ref{conf_coupl_matter}). To the best of our knowledge, this framework has not been implemented before within models that involve both conformal and GUT symmetry. A first orientation in the relevant parameter space of the model is given by the proton lifetime. In CGUT inflation, the proton lifetime can be extended beyond the observed limit $\tau_{p, \hbox{\scriptsize exp}} > 10^{34}$ yrs. This provides compelling guidance through the model parameter space, and we restrict our analysis to the range where the model predicts $\tau_p {\ \lower-1.2pt\vbox{\hbox{\rlap{$>$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } 60 \, \tau_{p, \hbox{\scriptsize exp}}$ (see Fig.~\ref{fig:protondecay}). In the present model, the electroweak symmetry breaking can be traced back to the inflaton VEV.
Indeed, even though there is no direct coupling between $\phi$ and $H$ in the potential (\ref{conf_coupl_matter}), the dark scalar acts as a mass-scale mediator between the inflaton sector and the SM domain. First, the VEV of the inflaton induces a mass term $\mu_S$ for the dark scalar, which triggers the spontaneous breaking of the dark scalar's symmetric potential. As a result, a Higgs mass term $\mu_H$ is in turn induced, and the Higgs potential acquires its standard form as given in eq.~(\ref{higgs_0_pot}). Of course, in order to reproduce the observed Higgs mass, the model parameters have to be constrained accordingly. The main outcome is that very small portal couplings are consistent with the Higgs boson mass (cf.~eq.~(\ref{higgsms}) and Fig.~\ref{fig:figMB_Higgs}). In our construction, conformal symmetry breaking is at the origin of the generation of the relevant scales, from inflation down to the electroweak scale. In section~\ref{sec:numerical_final}, we studied the evolution of the inflaton field, the dark particles and the SM radiation, which originate directly and indirectly, respectively, from inflaton decays. The dark fermions are inert, stable particles: they account for the present-day relic density and are produced non-thermally by inflaton decays. The dark scalars, also produced non-thermally from the inflaton, are unstable and decay in turn into SM Higgs bosons. In this way, the energy stored in the inflaton field can also leak to the SM, even though there are no direct interactions between the inflaton and the SM. The model at hand can accommodate the observed relic density and a reheating temperature compatible with the BBN requirements. In solving the Boltzmann equations, we track the SM temperature throughout the whole post-inflationary dynamics. 
We extract the reheating temperature numerically and, for the two benchmark choices considered in this work, we are able to reproduce the observed relic density of dark matter and a reheating temperature $T^{\hbox{\tiny RH}}_{\hbox{\tiny SM}} {\ \lower-1.2pt\vbox{\hbox{\rlap{$>$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } 4$ MeV. We focused on $\gamma=0.8$ ($M_{\Phi} \simeq 8.9 \times 10^{12}$ GeV), which provides a broad window allowing for sufficiently high reheating temperatures. A more detailed study of the whole parameter space is beyond the scope of the present paper. In our model, it is likely that the hidden sector thermalizes and does not have its own temperature; we assumed that this holds at every stage of the evolution. It is not straightforward to realize both the observed relic density and a reheating temperature larger than $4$ MeV in our framework. From the analysis given in section~\ref{sec:numerical_final}, we see that larger reheating temperatures (smaller dark scalar masses) push the dark fermion mass to very small values. Given the lower bound on the dark fermion mass, $m_\psi >1$ GeV, as obtained from the inflaton decay (see Fig.~\ref{fig:figMass} and section~\ref{sec:dark_masses}), a viable dark matter candidate cannot be provided for too high reheating temperatures. On the other hand, increasing the dark scalar masses implies a less efficient energy transfer to the SM sector (because $\Gamma_{S}$ becomes smaller), which results in too low reheating temperatures. As the model is implemented in this work, the dark particles are rather hard to search for at experimental facilities. The dark scalars are heavier than $10^3$ TeV and couple feebly to the SM Higgs boson ($\lambda_{HS}{\ \lower-1.2pt\vbox{\hbox{\rlap{$<$}\lower6pt\vbox{\hbox{$\sim$}}}}\ } 10^{-7}$). The dark fermions do not interact with any field other than the inflaton. Therefore, direct, indirect and collider searches cannot be applied to our dark matter candidate. 
It is possible to consider couplings between the dark fermion and the SM Higgs boson whenever we allow for a term $\bar{\psi} \psi S$ that respects the conformal invariance of the potential. For small dark matter masses, this might open up a production channel through $s$-mediated processes like $hh \to \bar{\psi} \psi$~\cite{Chung:1998rq,Giudice:2000ex,Gelmini:2006pw,Arcadi:2011ev}, which can also provide viable channels for detection strategies for the dark matter candidate. We leave the inclusion and assessment of detectable dark matter signatures, together with gravitational wave production from the breaking of the conformal, GUT and dark sector symmetries, for future research on the subject. \section*{Acknowledgements} We thank Giorgio Arcadi for useful discussions, and Anish Ghoshal for reading the manuscript and providing useful comments. K.S.K. is supported by the Netherlands Organization for Scientific Research (NWO) grant number 680-91-119.
\section{Introduction} With the rapid development of deep learning, semantic segmentation, one of the most fundamental tasks in computer vision, has achieved significantly better performance than before~\cite{chen2017rethinking,chen2018encoder}. However, fully supervised methods are largely limited by their dependence on large datasets with pixel-wise ground-truth annotations, which require intensive manual labor. Thus, numerous efforts~\cite{dai2015boxsup,papandreou2015weakly,lee2019ficklenet,wang2019panet,tian2020prior} have been made to address this problem. Different tasks, such as weakly supervised semantic segmentation, domain adaptation in segmentation, and few-shot semantic segmentation, have been proposed. Among them, few-shot semantic segmentation is an appealing direction and has attracted much research attention \cite{shaban2017one,dong2018few,wang2019panet,tian2020prior,zhang2019canet,zhang2019pyramid}. The goal of few-shot semantic segmentation is to segment a query image, given a support set comprised of a few support images and corresponding ground-truth masks. Many recently proposed few-shot segmentation approaches~\cite{wang2019panet,tian2020prior,zhang2019canet} follow this pipeline: first extract features for both support and query images, then process the support and query features, and finally make predictions on query images based on the refined features. PL~\cite{dong2018few} and PANet~\cite{wang2019panet} learn prototypes for each class and compute cosine similarity between prototypes and features to make predictions. Another stream of works, including CANet~\cite{zhang2019canet}, PFENet~\cite{tian2020prior}, and PGNet~\cite{zhang2019pyramid}, adopts convolutional layers to process features. 
Despite their success, these methods typically use only local information when processing features, either by pixel-wise similarity computation or convolutional layers, while global relationship modelling is of vital importance for scene understanding. Motivated by this, we propose to exploit global information when processing support and query features. Recently, transformers have been proven effective in various vision tasks~\cite{dosovitskiy2021image,liu2021transformer,carion2020end,zheng2021rethinking,sun2021boosting} due to their ability to establish long-range relationships within image features. Inspired by this, we propose transformer-based few-shot semantic segmentation (TRFS). Specifically, our method (shown in Fig.~\ref{fig:framework}) contains two modules: a Global Enhancement Module (GEM) and a Local Enhancement Module (LEM). The former refines features with global receptive fields while the latter focuses on local information. The combination of both modules provides better feature refinement for segmenting the query images, guided by the support set. Our contributions are as follows. First, we demonstrate the value of global information in few-shot semantic segmentation, which is exploited by adopting transformers. Second, we show that global and local information are complementary in few-shot semantic segmentation; the combination of the two performs better than either alone. Third, we achieve state-of-the-art results on two standard benchmarks. \section{Related Work} \subsection{Semantic Segmentation} Semantic segmentation is one of the most fundamental tasks in the computer vision community. It aims to predict the semantic label of each pixel in a natural image. 
In the era of deep learning, it has achieved tremendous progress~\cite{chen2015semantic,chen2017deeplab,chen2017rethinking,chen2018encoder}, including popular approaches such as DeepLab~\cite{chen2018encoder}, DPN~\cite{liu2015semantic}, and CRF-RNN~\cite{zheng2015conditional}. Despite the promising performance obtained on several standard large-scale benchmarks, these methods need sufficient well-labeled data (pixel-wise annotations) to work well. However, pixel-wise labeled data are very expensive, limiting many real-world applications of semantic segmentation. To alleviate this problem, weakly supervised semantic segmentation and few-shot semantic segmentation have attracted more and more attention. Both directions are important, but they have different underlying goals, as well as formulations. Specifically, the former aims to replace the pixel-wise annotations with other, weaker forms of annotation, such as bounding boxes~\cite{dai2015boxsup,papandreou2015weakly}, scribbles~\cite{lin2016scribblesup}, points~\cite{bearman2016s}, and image-level labels~\cite{ahn2018learning,lee2019ficklenet,sun2020mining,liu2020leveraging}. The latter focuses on improving a model's generalizability to unseen classes with only a few well-labeled samples. In this work, we target the few-shot setting. \subsection{Few-shot Segmentation} Few-shot semantic segmentation has gained much research interest since Shaban {\em et al.~}\cite{shaban2017one} first tackled this problem by proposing to adapt the classifier for each class, conditioned on the support set. One stream of works~\cite{dong2018few,wang2019panet} involves learning prototypes. PL~\cite{dong2018few} learns prototypes for different classes, and the prediction is made by computing the cosine similarity between the features and the prototypes. PANet~\cite{wang2019panet} makes progress by learning consistent prototypes through alignment regularization. 
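The prototype-matching prediction described above can be sketched as follows. This is a hypothetical NumPy illustration of the generic idea, not PL's or PANet's exact formulation (which, e.g., also involve temperature scaling and an alignment loss):

```python
import numpy as np

def cosine_prototype_predict(feats, prototypes):
    """Assign each pixel the class of its most cosine-similar prototype.

    feats: (H, W, C) feature map; prototypes: (n_cls, C), one prototype per class.
    """
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)          # unit-norm features
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    sim = f @ p.T                    # (H, W, n_cls) cosine similarities
    return sim.argmax(axis=-1)       # per-pixel class prediction
```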
Another stream of methods~\cite{tian2020prior,zhang2019canet,zhang2019pyramid} concatenates the support and query features and lets the network figure out the relations between query and support, so that the segmentation can be conducted based on the clues given by the support set. As discussed before, existing methods merely use the local information within query-support features, while the global information is ignored. As we know, global relationship modelling is of vital importance for scene understanding in computer vision \cite{dosovitskiy2021image,liu2021transformer,wang2018non}. Motivated by this, this paper boosts few-shot semantic segmentation by adopting transformers to exploit the global information over the merged query-support features. \subsection{Transformer} Recently, the transformer, first introduced in natural language processing~\cite{vaswani2017attention,dai2019transformer,devlin2019bert,yang2019xlnet}, has attracted much research attention in the computer vision community. It relies on a multi-head self-attention (MHSA) module and a multi-layer perceptron (MLP) to model the global relationships within input sequences. For vision tasks, images or features are first converted into sequences of vectors, the global interactions within which are then modelled by the transformers. Since pioneering works such as ViT~\cite{dosovitskiy2021image} and DETR~\cite{carion2020end}, transformers have been shown to be effective in various tasks, including image classification~\cite{yuan2021tokens,wang2021pyramid,liu2021transformer}, object detection~\cite{carion2020end}, semantic/instance segmentation~\cite{zheng2021rethinking,ding2021looking}, video segmentation~\cite{wang2020end}, crowd counting~\cite{sun2021boosting,liang2021transcrowd}, depth estimation~\cite{li2020revisiting,yang2021transformers}, domain adaptation~\cite{zhang2021detr,yang2021transformer}, and virtual try-on~\cite{ren2021cloth}. 
In particular, ViT~\cite{dosovitskiy2021image} divides the image into patches and converts them to sequences of features, which are then used as the input to the transformer. In contrast, DETR~\cite{carion2020end} directly exploits CNN features as the input to the transformer for object detection. For a more complete survey of vision transformers, please refer to \cite{khan2021transformers}. However, to the best of our knowledge, there has been no exploration of transformers for few-shot semantic segmentation. In this paper, we fill this gap and demonstrate the effectiveness of global relationship modelling using transformers in this task. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{images/main_plot.png} \caption{Framework overview. The pipeline is shown for the one-shot case. The details of the feature merging unit (FMU) are shown in the bottom-left. Given a support image and the associated ground-truth mask, our framework segments the query image for the target class (plane in this example). The core of our model is the combination of the Global Enhancement Module (GEM) and the Local Enhancement Module (LEM). The former explores the global information, while the latter focuses on the local information. The synthesis of both modules enhances feature representations for few-shot semantic segmentation. Best viewed in color.} \label{fig:framework} \end{figure*} \section{Methods} In this section, we first give a formal definition of few-shot semantic segmentation. Then we introduce how to fuse support and query features. Finally, we explain our global enhancement module, local enhancement module, and the loss functions for training our model. \subsection{Problem Formulation} For few-shot semantic segmentation, all classes are divided into two disjoint class sets $\mathcal{C}_{train}$ (base classes) and $\mathcal{C}_{test}$ (unseen classes), where $\mathcal{C}_{train} \cap \mathcal{C}_{test}=\emptyset$. 
The goal of this task is to train the model on $\mathcal{C}_{train}$ and evaluate the model on the unseen classes $\mathcal{C}_{test}$. Both training and testing are conducted in episodes~\cite{vinyals2016matching,shaban2017one}. In particular, let $\mathcal{D}_{train}=\{(I_i, M_i, \{I_{i}^{S_k}, M_{i}^{S_k}\}_{k=1}^{K})\}_{i=1}^{N_{tr}}$ denote $N_{tr}$ training episodes from $\mathcal{C}_{train}$. Here, $I_i$ and $M_i$ are the $i^{th}$ query image and the corresponding ground-truth mask, which form a query set. Each query set is associated with a small ($K$-shot) support set $\{I_{i}^{S_k}, M_{i}^{S_k}\}_{k=1}^{K}$, where $I_{i}^{S_k}$ and $M_{i}^{S_k}$ are the $k^{th}$ support image and the corresponding ground-truth mask for the $i^{th}$ query image. Each training episode (query and support sets) focuses on the same class, sampled from $\mathcal{C}_{train}$. For evaluation, let $\mathcal{D}_{test}=\{(I_i, \{I_{i}^{S_k}, M_{i}^{S_k}\}_{k=1}^{K})\}_{i=1}^{N_{te}}$ denote the test episodes. For each test episode, the model needs to segment $I_i$ based on the information given by the support set $\{I_{i}^{S_k}, M_{i}^{S_k}\}_{k=1}^{K}$, \emph{i.e.}, segment the same class as the ground-truth masks of the support set. \subsection{Feature Fusion} Here, we introduce how the input features for the global and local enhancement modules are generated. Our formulation is based on a single episode of $\{(I, M, \{I^{S_k}, M^{S_k}\}_{k=1}^{K})\}$, for notational simplicity. Following previous works~\cite{wang2019panet,tian2020prior}, we start from the query features and support features encoded by the ImageNet~\cite{russakovsky2015imagenet} pre-trained backbone, whose parameters are fixed throughout the training process. Let $\Theta$, $F_Q\in \mathcal{R}^{H \times W\times C}$, and $F_{S_k}\in \mathcal{R}^{H \times W\times C}$ denote the backbone function, the query feature map, and the $k^{th}$ support feature map, respectively. 
$C$, $H$, and $W$ are the feature channel number, height, and width, respectively. We have \begin{align}\small \begin{split} &F_Q=\Theta(I),~~~F_{S_k}=\Theta(I^{S_k}). \\ \end{split} \end{align} As mentioned above, few-shot semantic segmentation aims to segment the query image based on the clues given by support images and support ground-truth masks. Hence, we obtain the support prototype $F_S$, which is used to guide the segmentation of the query image, as follows: \begin{align}\small \begin{split} &F_{S}=\frac{\sum_{k=1}^{K}GAP(F_{S_k}[M^{S_k},:])}{K}, \\ \end{split} \end{align} where $GAP$ denotes global average pooling across the spatial dimension, and the ground-truth mask $M^{S_k} \in \mathcal{R}^{H\times W}$ is already resized to the feature resolution. Intuitively, the support prototype $F_S \in \mathcal{R}^C$ is a feature vector averaged from the foreground features in the support set, which encodes the representative information of the target class. Few-shot segmentation methods differ in how they use the support prototype $F_S$ to guide the segmentation of query images. PANet~\cite{wang2019panet} directly computes similarity between query features and support prototypes. PFENet~\cite{tian2020prior} combines different features into a new feature, and then uses several convolutional layers to refine it. Since concatenating features enables further refinement and enhancement, we follow PFENet~\cite{tian2020prior} to generate input features $X\in \mathcal{R}^{H\times W \times (2C+1)}$ by concatenating query features ($\mathcal{R}^{H\times W \times C}$), expanded support prototypes ($\mathcal{R}^{H\times W \times C}$), and the prior mask ($\mathcal{R}^{H\times W \times 1}$), which will be used in the global and local enhancement modules. For details of computing the prior mask, we refer to PFENet~\cite{tian2020prior}. In brief, it pre-estimates the probability of pixels belonging to the target class, using high-level features. 
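As a concrete illustration, the support-prototype equation above can be sketched in NumPy as a masked global average pooling over the $K$ support feature maps; this is a hypothetical sketch, not the paper's implementation:

```python
import numpy as np

def support_prototype(support_feats, support_masks):
    """Average foreground features over each shot, then over the K shots.

    support_feats: list of K arrays of shape (H, W, C)
    support_masks: list of K binary arrays of shape (H, W), resized to the feature resolution
    returns: prototype F_S of shape (C,)
    """
    per_shot = []
    for F, M in zip(support_feats, support_masks):
        fg = F[M.astype(bool)]            # (N_fg, C): foreground feature vectors only
        per_shot.append(fg.mean(axis=0))  # GAP over the foreground pixels
    return np.mean(per_shot, axis=0)      # average over the K support shots
```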
\subsection{Global and Local Enhancement Modules} \noindent\textbf{Multi-scale Processing.} Inspired by the fact that object size in both support and query images can vary greatly~\cite{tian2020prior}, we design a multi-scale framework with the input feature map $X$ so that information over different scales can be utilized, as shown in Fig.~\ref{fig:framework}. To obtain features at different scales, we use adaptive average pooling. Let $R=\{R^1, R^2, ..., R^n\}$ denote the spatial resolutions after the average pooling, and we assume $R^1>R^2>...>R^n$. Feature $X_i$ with spatial size of $R^i$ can be obtained by \begin{align}\small \begin{split} &X_i={\rm GAP}_{R^i}(X), \\ \end{split} \end{align} where ${\rm GAP}_{R^i}$ indicates adaptive average pooling so that the output feature has the size of $R^i$. Hence, a feature pyramid of $\{X_1, X_2,...,X_n\}$ is obtained, each of which will be processed by both the global enhancement module and the local enhancement module. In particular, $X_i \in \mathcal{R}^{R_i\times R_i \times (2C+1)}$. \myPara{Global Enhancement Module (GEM).} Different from the previous study~\cite{tian2020prior}, which uses convolutional layers to refine the combined features, we propose to adopt transformers to enhance the features so that global information can be exploited, as shown in Fig.~\ref{fig:framework}. We first reduce the channel dimension of $X_i$ by a fully connected layer and obtain $X_i^{'}\in \mathcal{R}^{R_i\times R_i \times C}$. $X_i^{'}$ then goes through the \textbf{Feature Merging Unit (FMU)}, which merges the output feature from the branch refining $X_{i-1}$. If $i=1$, then no feature merging is performed and $X_i^{'}$ is directly output by the FMU. 
Let $Y_i \in \mathcal{R}^{R_i \times R_i \times C}$ denote the output of the FMU, given by \begin{align}\small \begin{split} Y_i= \begin{cases} {\rm Conv}_{1\times 1}({\rm Concat}(X_i^{'},T_{i-1}^{L}))+X_i^{'},& \text{if } i> 1\\ X_i^{'}, & \text{if } i= 1 \end{cases} \end{split} \end{align} where ${\rm Concat}(\cdot)$ denotes feature concatenation across channels, ${\rm Conv}_{1\times 1}$ represents $1\times 1$ convolution with output channel of $C$, and interpolation is not shown for simplicity. We reshape $Y_i$ into $\mathcal{R}^{{R_i}^2 \times C}$. Then, the obtained sequence of vectors is processed by $L$ transformer blocks to explore the global information, denoted as follows: \begin{align}\small \begin{split} &T_i^{0}=Y_i,\\ &\hat{T}_{i}^{l}={\rm MHSA}(T_i^{l-1})+T_i^{l-1}, ~~~l=1,...,L,\\ &T_{i}^{l}={\rm MLP}(\hat{T}_{i}^{l})+\hat{T}_{i}^{l}, ~~~l=1,...,L,\\ \end{split} \end{align} in which ${\rm MHSA}(\cdot)$ denotes the standard multi-head self-attention in transformers~\cite{vaswani2017attention,dosovitskiy2021image}, and ${\rm MLP}(\cdot)$ is a two-layer multi-layer perceptron. After the transformers, we obtain $T_{i}^{L} \in\mathcal{R}^{{R_i}^2 \times C}$, which is reshaped back to $\mathcal{R}^{R_i \times R_i \times C}$. In our experiments, we use $L=3$. After processing the different scales, we have $\{T_{1}^{L}, T_{2}^{L},...,T_{n}^{L}\}$. The final output feature from the global enhancement module is formed by interpolation and concatenation of the $n$ enhanced feature maps $T_{i}^{L}$, denoted as \begin{align}\small \begin{split} &T={\rm Concat}(T_{1}^{L}, T_{2}^{L},...,T_{n}^{L}),\\ \end{split} \end{align} where ${\rm Concat}(\cdot)$ indicates feature concatenation across the channel dimension, and interpolation is not shown for simplicity. $T$ is used to predict the target mask $M$. 
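For concreteness, one iteration of the transformer update above can be sketched as follows. This is a simplified, single-head version without LayerNorm (our MHSA uses 8 heads, and GELU rather than ReLU in the MLP); all weight matrices here are hypothetical stand-ins:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(T, Wq, Wk, Wv, W1, W2):
    """Self-attention then MLP, each with a residual connection.

    T: (N, C) sequence of N = R_i^2 token vectors (the reshaped Y_i).
    """
    Q, K, V = T @ Wq, T @ Wk, T @ Wv
    attn = softmax(Q @ K.T / np.sqrt(T.shape[1]))  # (N, N): every token attends to all others
    T_hat = attn @ V + T                           # hat T^l = MHSA(T^{l-1}) + T^{l-1}
    mlp = np.maximum(T_hat @ W1, 0.0) @ W2         # two-layer MLP (ReLU here for simplicity)
    return mlp + T_hat                             # T^l = MLP(hat T^l) + hat T^l
```

Stacking $L=3$ such blocks yields $T_i^L$, which is then reshaped back to the spatial grid.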
\myPara{Local Enhancement Module (LEM).} The local enhancement module follows the same pipeline as GEM. $X_i$ is processed by a fully connected layer and the FMU to generate $Y_i$. Different from GEM, which utilizes transformer blocks to process $Y_i$, LEM exploits conventional convolutions to refine $Y_i$ in order to encode the local information, which complements the global information. After LEM, let $\{Z_{1}, Z_{2},...,Z_{n}\}$ denote the output features from different scales. Similarly, the final output feature from LEM is formed by the interpolation and concatenation of $Z_{i}$, denoted as \begin{align}\small \begin{split} &Z={\rm Concat}(Z_{1}, Z_{2},...,Z_{n}),\\ \end{split} \end{align} where interpolation is omitted for simplicity. $Z$ is used to predict the target mask $M$. \subsection{Loss Functions} Both the features from GEM and LEM are used to predict the target mask of the query image, whose losses are $\mathcal{L}_{\rm GEM}$ and $\mathcal{L}_{\rm LEM}$. The final loss for the whole network is defined as \begin{align}\small \begin{split} &\mathcal{L}=\mathcal{L}_{\rm GEM}+\mathcal{L}_{\rm LEM}\\ \end{split} \end{align} Here, both $\mathcal{L}_{\rm GEM}$ and $\mathcal{L}_{\rm LEM}$ are the standard cross-entropy loss for semantic segmentation \cite{chen2015semantic,chen2017deeplab,chen2017rethinking}. During testing, the final prediction for the query image is the average of the predictions output from the global enhancement module and the local enhancement module. \section{Experiments} Experiments are conducted on two benchmark few-shot semantic segmentation datasets~\cite{shaban2017one,lin2014microsoft} to validate the effectiveness of the proposed approach. We begin this section by introducing our experimental setting, followed by state-of-the-art comparisons with previous methods. Finally, we show ablation studies to examine the effectiveness of key components of our model. 
\subsection{Experimental Setup} \noindent\textbf{Datasets.} We conduct experiments on the standard benchmarks PASCAL-5$^{i}$~\cite{shaban2017one} and COCO~\cite{lin2014microsoft} to evaluate the proposed method. PASCAL-5$^{i}$ is constructed from the PASCAL VOC 2012~\cite{everingham2010pascal} and SDS~\cite{hariharan2014simultaneous} datasets. It has 20 classes, which are evenly split into 4 groups, with 5 classes each. COCO is a more challenging dataset, having 82,783 training images and 40,504 test images. The whole 80 classes are evenly divided into 4 folds, with 20 classes each. For both datasets, the split of class groups follows previous works~\cite{wang2019panet,tian2020prior} for fair comparisons. The evaluation is done by cross-validation. Specifically, for each split, three groups of classes are used as base classes, while the remaining one is used as unseen classes. For testing, we randomly sample 5,000 query-support pairs for each fold on the PASCAL-5$^{i}$ dataset, following PFENet~\cite{tian2020prior,shaban2017one}. Since COCO has a large validation set, we sample 20,000 query-support pairs on each fold during the evaluation. \myPara{Implementation Details.} Following previous works~\cite{wang2019panet,tian2020prior}, we test our method on ImageNet~\cite{russakovsky2015imagenet} pre-trained backbones: VGG-16~\cite{simonyan2015very}, ResNet-50~\cite{he2016deep} and ResNet-101~\cite{he2016deep}. For the transformer parameters, we set the number of heads in MHSA to 8 and the MLP ratio to 4. GELU activations and LayerNorm are used in the transformer layers. For data loading, we follow the official implementation of PFENet~\cite{tian2020prior}. Specifically, data augmentations of horizontal flipping, random rotation within 10 degrees, and random cropping to $473\times 473$ are used. For optimizing the network, we use the SGD optimizer with momentum and weight decay set to 0.9 and 0.0001, respectively. 
The `poly' learning rate scheduler is used with the power parameter set to 0.9. For the PASCAL-5$^{i}$ dataset, the model is trained for 200 epochs with a batch size of 4 on a single GPU. For the COCO dataset, the model is trained for 50 epochs with a batch size of 32 on 4 GPUs. Our framework is implemented in PyTorch. The code and trained models will be released. \myPara{Evaluation Metrics.} Following previous works~\cite{wang2020few,wang2019panet,tian2020prior}, we report mean intersection over union (mIoU) on individual folds and the final averaged mIoU over all folds. Note that our results are all single-scale results without any post-processing such as multi-scale testing or DenseCRF~\cite{krahenbuhl2011efficient}. \subsection{Comparison with State-of-the-Arts} The state-of-the-art comparisons on the PASCAL-5$^{i}$ and COCO datasets are shown in Table~\ref{table:results_pascal} and Table~\ref{table:results_coco}, respectively. From the results, we have four observations. First, the proposed method achieves better performance than the compared state-of-the-art approaches, which demonstrates the effectiveness of the global and local enhancement modules. Specifically, our method outperforms PFENet~\cite{tian2020prior} by 1.3\% in terms of mean mIoU over 4 folds in the 5-shot setting on PASCAL-5$^{i}$. For specific folds, the proposed approach achieves a 3.0\% mIoU gain on fold-3 in the 5-shot setting using ResNet-50. Since previous works~\cite{zhang2019canet,tian2020prior} only consider the local information to refine the merged query-support features while ours also takes global information into account, the performance gain of ours over those methods is attributed to the GEM. Second, our approach is robust across different backbones: VGG-16, ResNet-50 and ResNet-101. For these backbones, our method achieves consistent performance gains over the corresponding state-of-the-art methods. For instance, under the 1-shot setting, the performance gains for VGG-16 and ResNet-50 are 1.0\% and 1.1\%, respectively. 
Third, the performance gain of our method over other methods is consistent across both the 1-shot and 5-shot settings. Fourth, on the challenging COCO dataset, the proposed approach also obtains promising results. Specifically, our method (VGG-16) achieves a 1.9\% gain over the current state-of-the-art method PFENet in terms of mean mIoU under the 5-shot setting. Qualitative results on novel classes are shown in Fig.~\ref{fig:qual}. Our method performs well in the 1-shot setting, where only a single support image and its ground-truth mask are given. The shown examples are challenging for several reasons: the query or support images are unclear due to bad weather or shadow; the objects in the query and support images cover very different regions; object size varies significantly between the query and support images; or the background is complex. \begin{table*}[!t] \centering \caption{Comparison with state-of-the-art methods on PASCAL-5$^{i}$ dataset. It shows that our method achieves new state-of-the-art performance on this dataset.} \label{table:results_pascal} \resizebox{0.999\textwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline\thickhline \multirow{2}{*}{Methods} & \multirow{2}{*}{Publication} & \multicolumn{5}{c|}{1-Shot} & \multicolumn{5}{c}{5-Shot} \\ \cline{3-12} & & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline \multicolumn{12}{c}{VGG-16 Backbone} \\ \hline OSLSM~\cite{shaban2017one} & BMVC17 & 33.6 & 55.3 & 40.9 & 33.5 & 40.8 & 35.9 & 58.1 & 42.7 & 39.1 & 44.0 \\ \hline co-FCN~\cite{rakelly2018conditional} & ICLRW18 & 36.7 & 50.6 & 44.9 & 32.4 & 41.1 & 37.5 & 50.0 & 44.1 & 33.9 & 41.4 \\ \hline SG-One~\cite{zhang2020sg} & TCYB20 & 40.2 & 58.4 & 48.4 & 38.4 & 46.3 & 41.9 & 58.6 & 48.6 & 39.4 & 47.1 \\ \hline AMP~\cite{siam2019adaptive} & ICCV19 & 41.9 & 50.2 & 46.7 & 34.7 & 43.4 & 41.8 & 55.5 & 50.3 & 39.9 & 46.9 \\ \hline PANet~\cite{wang2019panet} & ICCV19 & 42.3 & 58.0 & 51.1 & 41.2 & 48.1 & 51.8 & 64.6 & \textbf{59.8} & 46.5 & 
55.7 \\ \hline FWBF~\cite{nguyen2019feature} & \multicolumn{1}{c|}{ICCV19} & 47.0 & 59.6 & 52.6 & 48.3 & 51.9 & 50.9 & 62.9 & 56.5 & 50.1 & 55.1 \\ \hline RPMMs~\cite{yang2020prototype} & \multicolumn{1}{c|}{ECCV20} & 47.1 & 65.8 & 50.6 & 48.5 & 53.0 & 50.0 & 66.5 & 51.9 & 47.6 & 54.0 \\ \hline CRNet~\cite{liu2020crnet} & \multicolumn{1}{c|}{CVPR20} & - & - & - & - & 55.2 & - & - & - & - & 58.5 \\ \hline PFENet~\cite{tian2020prior} & \multicolumn{1}{c|}{TPAMI20} & 56.9 & 68.2 & 54.4 & 52.4 & 58.0 & \textbf{59.0} & 69.1 & 54.8 & 52.9 & 59.0 \\ \hline Ours & \multicolumn{1}{c|}{-} & \textbf{58.8} & \textbf{68.4} & \textbf{54.8} & \textbf{53.8} & \textbf{59.0} & 57.8 & \textbf{69.4} & 54.8 & \textbf{56.4} & \textbf{59.6} \\ \hline \multicolumn{12}{c}{ResNet-50 Backbone} \\ \hline CANet~\cite{zhang2019canet} & \multicolumn{1}{c|}{CVPR19} & 52.5 & 65.9 & 51.3 & 51.9 & 55.4 & 55.5 & 67.8 & 51.9 & 53.2 & 57.1 \\ \hline PGNet~\cite{zhang2019pyramid} & \multicolumn{1}{c|}{ICCV19} & 56.0 & 66.9 & 50.6 & 50.4 & 56.0 & 54.9 & 67.4 & 51.8 & 53.0 & 56.8 \\ \hline RPMMs~\cite{yang2020prototype} & \multicolumn{1}{c|}{ECCV20} & 55.2 & 66.9 & 52.6 & 50.7 & 56.3 & 56.3 & 67.3 & 54.5 & 51.0 & 57.3 \\ \hline CRNet~\cite{liu2020crnet} & \multicolumn{1}{c|}{CVPR20} & - & - & - & - & 55.7 & - & - & - & - & 58.8 \\ \hline PFENet~\cite{tian2020prior} & \multicolumn{1}{c|}{TPAMI20} & 61.7 & 69.5 & 55.4 & 56.3 & 60.8 & 63.1 & 70.7 & \textbf{55.8} & 57.9 & 61.9 \\ \hline Ours & \multicolumn{1}{c|}{-} & \textbf{62.9} & \textbf{70.7} & \textbf{56.5} & \textbf{57.5} & \textbf{61.9} & \textbf{65.0} & \textbf{71.2} & 55.5 & \textbf{60.9} & \textbf{63.2} \\ \hline \end{tabular} } \end{table*} \begin{table*}[!t] \centering \caption{Comparison with state-of-the-art methods on COCO dataset. 
It shows that our method achieves new state-of-the-art performance on this dataset.} \label{table:results_coco} \resizebox{0.999\textwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline\thickhline \multirow{2}{*}{Methods} & \multirow{2}{*}{Publication} & \multicolumn{5}{c|}{1-Shot} & \multicolumn{5}{c}{5-Shot} \\ \cline{3-12} & & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline \multicolumn{12}{c}{VGG-16 Backbone} \\ \hline PANet~\cite{wang2019panet} & ICCV19 & - & - & - & - & 20.9 & - & - & - & - & 29.7 \\ \hline FWBF~\cite{nguyen2019feature} & \multicolumn{1}{c|}{ICCV19} & 18.4 & 16.7 & 19.6 & 25.4 & 20.0 & 20.9 & 19.2 & 21.9 & 28.4 & 22.6 \\ \hline PFENet~\cite{tian2020prior} & \multicolumn{1}{c|}{TPAMI20} & {33.4} & 36.0 & 34.1 & {32.8} & 34.1 & {35.9} & 40.7 & {38.1} & 36.1 & 37.7 \\ \hline Ours & \multicolumn{1}{c|}{-} & \textbf{34.2} & \textbf{38.8} & \textbf{35.3} & \textbf{33.3} & \textbf{35.4} & \textbf{37.8} & \textbf{43.8} & \textbf{39.7} & \textbf{36.9} & \textbf{39.6} \\ \hline \multicolumn{12}{c}{ResNet-101 Backbone} \\ \hline FWBF~\cite{nguyen2019feature} & \multicolumn{1}{c|}{ICCV19} & 19.9 & 18.0 & 21.0 & 28.9 & 21.2 & 19.1 & 21.5 & 23.9 & 30.1 & 23.7 \\ \hline DAN~\cite{wang2020few} & \multicolumn{1}{c|}{ECCV20} & - & - & - & - & 24.4 & - & - & - & - & 29.6 \\ \hline PFENet~\cite{tian2020prior} & \multicolumn{1}{c|}{TPAMI20} & \textbf{34.3} & 33.0 & 32.3 & 30.1 & 32.4 & \textbf{38.5} & 38.6 & 38.2 & 34.3 & 37.4 \\ \hline Ours & \multicolumn{1}{c|}{-} & 31.8 & \textbf{34.9} & \textbf{36.4} & \textbf{31.4} & \textbf{33.6} & 35.4 & \textbf{41.7} & \textbf{42.3} & \textbf{36.1} & \textbf{38.9} \\ \hline \end{tabular} } \end{table*} \subsection{Ablation Study} We conduct an ablation study on the PASCAL-5$^{i}$ dataset to validate the contributions of the key components of our method. We also examine the effect of the number ($L$) of transformer layers and the number of scales on the performance of our model. 
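The mIoU metric reported in the tables above can be sketched as follows. This is a hypothetical NumPy illustration; conventions for which classes enter the average and how intersections and unions are accumulated across episodes vary between papers:

```python
import numpy as np

def mean_iou(preds, gts, class_ids):
    """mIoU over the given classes, with per-class counts accumulated over episodes.

    preds, gts: lists of (H, W) integer label maps; class_ids: classes to average over.
    """
    inter = {c: 0 for c in class_ids}
    union = {c: 0 for c in class_ids}
    for p, g in zip(preds, gts):
        for c in class_ids:
            inter[c] += int(np.logical_and(p == c, g == c).sum())
            union[c] += int(np.logical_or(p == c, g == c).sum())
    ious = [inter[c] / union[c] for c in class_ids if union[c] > 0]
    return sum(ious) / len(ious)
```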
\myPara{GEM and LEM.} We show the results of using only the global enhancement module or only the local enhancement module in Table~\ref{table:ablation_study}. GEM and LEM perform similarly in terms of the final mIoU averaged over all folds, achieving 60.6 and 60.8, respectively. However, the model using only GEM and the one using only LEM can perform very differently on a specific split. For example, using only GEM obtains an mIoU of 70.9 in fold-1, while using only LEM achieves 69.9. For fold-3, using LEM outperforms the model using GEM by 1.5\%. This suggests that GEM and LEM capture different information. When combining both GEM and LEM (our final model), we obtain better performance, demonstrating that GEM and LEM are complementary. \begin{table}[!t] \centering \caption{Ablation study on the key components and $L$ (the number of transformer blocks) on PASCAL-5$^{i}$ dataset using ResNet-50.} \label{table:ablation_study} \resizebox{0.49\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{} & \multirow{2}{*}{L} & \multicolumn{5}{c}{1-Shot} \\ \cline{3-7} & & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline +GEM & 3 & 60.4 & 70.9 & 56.3 & 54.9 & 60.6 \\ \hline +LEM & 3 & 60.9 & 69.9 & 56.1 & 56.4 & 60.8 \\ \hline +GEM+LEM & 3 & \textbf{62.9} & \textbf{70.7} & \textbf{56.5} & 57.5 & \textbf{61.9} \\ \hline +GEM+LEM & 2 & 62.1 & 70.6 & 54.1 & \textbf{58.3} & 61.3 \\ \hline +GEM+LEM & 4 & 62.3 & 71.0 & 55.3 & 58.0 & 61.6 \\ \hline \end{tabular} } \end{table} \myPara{Number of Transformer Blocks.} We also evaluate the effect of the number of transformer blocks in Table~\ref{table:ablation_study}. Our approach is robust to the choice of $L$, achieving comparable results, with the best performance observed when $L$ is set to 3. This may be because, with the small $L=2$, global information is not fully explored.
When using more transformer blocks ($L=4$), the network may overfit to the base classes and achieve slightly worse performance on novel classes. \begin{figure*}[t] \centering \includegraphics[width=0.99\linewidth]{images/qual.png} \caption{Qualitative results on novel/unseen classes on PASCAL-5$^{i}$ dataset using the ResNet-50 model. From \textit{left} to \textit{right}: query image, query prediction, support image, support ground-truth mask, and query ground-truth mask. Our method performs well on novel classes in the 1-shot setting, where a single support image and its associated mask are given to guide the segmentation.} \label{fig:qual} \end{figure*} \begin{table}[!t] \centering \caption{Ablation study on different scale combinations on PASCAL-5$^{i}$ dataset using ResNet-50.} \label{table:ablation_study2} \resizebox{0.49\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Scales} & \multicolumn{5}{c}{1-Shot} \\ \cline{2-6} & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline [60] & 59.4 & 68.3 & 54.2 & 52.4 & 58.6 \\ \hline [60,30] & 60.4 & 70.1 & 54.7 & 56.3 & 60.4 \\ \hline [60,30,15] & 61.4 & 70.4 & 54.3 & \textbf{57.7} & 61.0 \\ \hline [60,30,15,8] & \textbf{62.9} & \textbf{70.7} & \textbf{56.5} & 57.5 & \textbf{61.9} \\ \hline \end{tabular} } \end{table} \noindent\textbf{Number of Scales.} We conduct multi-scale processing on the input feature $X \in \mathcal{R}^{H\times W \times (2C+1)}$, which is the concatenation of query features ($\mathcal{R}^{H\times W \times C}$), expanded support prototypes ($\mathcal{R}^{H\times W \times C}$), and the prior mask ($\mathcal{R}^{H\times W \times 1}$). Specifically, a series of adaptive average pooling operations with output scales $R=\{R^1, R^2, ..., R^n\}$ is applied to $X$. We ablate the effect of different scale combinations in Table~\ref{table:ablation_study2}. In all our experiments, the input size is $473\times 473$. After going through the backbone, the height/width ($H/W$) is 60.
Hence, we start from a single scale of 60 and gradually add the smaller scales of 30, 15, and 8. Table~\ref{table:ablation_study2} shows that our method performs better as more scales are used. For all our results in the paper, we use $R=\{60,30,15,8\}$. \section{Conclusion} In this paper, we study the value of global information in few-shot semantic segmentation. We propose a global enhancement module (GEM) to refine the query-support features, together with a local enhancement module (LEM). GEM exploits global information via transformer layers, while LEM utilizes local information through convolutional layers. The combination of both modules helps to learn better features for segmenting query images. Our experiments show that GEM and LEM are complementary, and the proposed method combining both achieves state-of-the-art performance on two standard benchmark datasets, \emph{i.e.}, PASCAL-5$^{i}$ and COCO. Our qualitative results on novel classes show that our method provides promising segmentation masks on query images under challenging situations. For future research, it would be interesting to see whether feature interaction between the global and local enhancement modules in intermediate layers can further boost the performance. It would also be interesting to study the effect of other newly developed transformer layers in few-shot semantic segmentation. \bibliographystyle{IEEEtran}
\section{Introduction} The standard model (SM), which provides an articulate description of nature around the TeV energy scale, was completed by the discovery of the Higgs boson \cite{:2012gk,:2012gu}. However, the SM still contains several mysteries and problems that cannot be solved within the context of the SM itself. One is the so-called generation problem. The SM contains three sets of quarks and leptons, which have exactly the same quantum numbers except for their Yukawa couplings. Three generations were introduced into the Kobayashi-Maskawa theory \cite{Kobayashi:1973fv} by hand, and the origin of the generations remains unveiled. Another problem concerns the fermion mass hierarchy. The generations of the quarks and the charged leptons have exactly the same quantum numbers, yet their masses exhibit an exponential hierarchy of around $10^{5}$. In the SM, the masses are generated by the Higgs mechanism and are determined by the dimensionless Yukawa couplings; however, there is no explanation for why so large a hierarchy appears in dimensionless parameters. Because of the above circumstances, various theories beyond the SM have been explored. One possibility in the context of four-dimensional (4d) gauge theory is a scenario with non-compact gauge symmetry, which can naturally produce the fermion mass hierarchy and three generations \cite{Inoue:1994qz,Inoue:2000ia,Inoue:2003qi,Inoue:2007uj,Yamatsu:2012rj}. Another way is achieved by using extra dimensions. Extra-dimension models with magnetic flux~\cite{Cremades:2004wa} can lead to both the fermion mass hierarchy and three generations.
Magnetized orbifold models~\cite{Abe:2008fi,Abe:2008sx,Abe:2014noa,Abe:2014vza,Abe:2015yva,Fujimoto:2016zjs,Sakamura:2016kqv,Kobayashi:2016qag,Ishida:2017avx,Buchmuller:2017vho} are also fascinating settings in which to discuss the fermion flavor structure, and several achievements have been reported\footnote{Another geometric way to produce the generations can be found in Refs.~\cite{Libanov:2000uf,Frere:2001ug,Frere:2003yv,Frere:2003ye,Frere:2010ah}, in which a topological structure of a vortex on a sphere plays an important role.}. However, some parameters of these models have to be chosen suitably by hand to produce a fermion mass hierarchy. Moreover, in the case of extra-dimension models, arguments for the stability of the extra dimension have mostly been postponed. Therefore, it is worth searching for a dynamical generation mechanism of the fermion mass hierarchy while discussing the stability of the extra dimension simultaneously. In this paper, we propose a dynamical generation mechanism for the fermion mass hierarchy in a model with a single generation of fermions in five dimensions (5d). An interval extra dimension with point interactions \cite{Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka} is responsible for producing the generations.\footnote{In 5d, to the best of our knowledge, the first proposal was given by the first manuscript of the series of our works~\cite{Fujimoto:2011kf}, with a concrete example where multiple chiral zero modes are generated from one five-dimensional fermion.} Point interactions also play an important role in the discussion of the fermion mass hierarchy. In the previous model \cite{Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}, the positions of the point interactions, which affect the fermion mass hierarchy, have been chosen by hand.
In contrast, in this paper, the positions of the point interactions are determined dynamically through the minimization of the Casimir energy~\cite{Ponton:2001hq,deAlbuquerque:2003qbk} (or, in other words, the radion effective potential \cite{Garriga:2000jb,Goldberger:2000dv,Hofmann:2000cj,Abe:2014eia}). As a result, a large mass hierarchy appears dynamically in our model\footnote{See~\cite{Panico:2016ull} (also~\cite{Agashe:2016rle,DaRold:2017xdm}) for generating Yukawa hierarchies through multiple dynamical scales.}. We also discuss the stability of the extra dimension from a Casimir-energy point of view. This paper is organized as follows. In section~\ref{sec:4d spectum of a 5d $U(1)$ gauge theory on an interval}, we review the 4d spectrum of a 5d $U(1)$ gauge theory on an interval extra dimension. A general class of boundary conditions (BCs), which is important for determining the 4d spectrum of the fields and the phase structure of the symmetries, is derived for gauge, fermion, and scalar fields. Using the knowledge of the general boundary conditions, we display the 4d spectrum at low energies and the profiles of the mode functions with respect to the extra dimension. In section~\ref{sec.2-3}, we discuss the stability of the extra dimension. Evaluating the contribution of each field, we investigate the dependence of the total Casimir energy on the length of the extra dimension. In section~\ref{sec:Theory with point interactions}, a theory with point interactions is reviewed, and the 4d mass spectrum at low energies and the profiles of the mode functions are shown. In section~\ref{Semi-realistic}, using all the results, we construct an $SU(2)\times U(1)$ model, which can lead to the fermion mass hierarchy dynamically with a single generation of fermions.
The minimization of the Casimir energy determines the positions of the point interactions, which are important parameters for producing the fermion mass hierarchy, and leads to the stability of the extra dimension. We then find that a fermion mass hierarchy is realized dynamically. Section~\ref{sec:Conclusion and Discussion} is devoted to the conclusion and discussion. In Appendix~\ref{sec:appendix}, we provide a self-contained review of the formulation of wave functions of a 5d fermion in the presence of one point interaction in the bulk part of an interval. \section{4d spectrum of a 5d $U(1)$ gauge theory on an interval}\label{sec:4d spectum of a 5d $U(1)$ gauge theory on an interval} In this section, we first summarize the allowed boundary conditions for gauge, fermion, and scalar fields on an interval, which are consistent with the requirements from the action principle, the gauge invariance, and 4d Lorentz invariance. The boundary conditions are crucially important for determining the 4d mass spectrum at low energies and also the phase structure of symmetries \cite{Sakamoto:1999yk,Sakamoto:1999ym,Sakamoto:1999iv,Ohnishi:2000hs,Hatanaka:2000zq,Sakamoto:2009hb,Hatanaka:2001kq,Hatanaka:2003mn,Lim:2005rc,Lim:2007fy,Lim:2008hi,Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}. We then derive the 4d mass spectra of the gauge and fermion fields, which are necessary to evaluate Casimir energies. We further show that the scalar field can possess a coordinate-dependent vacuum expectation value (VEV) on the extra dimension \cite{Sakamoto:1999yk,Sakamoto:1999ym,Sakamoto:1999iv,Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}, which is found to be a crucial ingredient of our dynamical mechanism for generating a fermion mass hierarchy.
\subsection{Consistent BCs for the fields} In this subsection, we investigate the general class of BCs for an abelian gauge field, a fermion field, and a scalar field on an interval, respectively. \subsubsection{BCs for Abelian gauge, ghost, and anti-ghost fields} First, we start from the gauge field: \begin{align} S_{G}=\int d^4 x \int^{L}_{0}dy \Bigl[ -\frac{1}{4}F^{MN}F_{MN}-\frac{1}{2}(\partial^{\mu}A_{\mu}+\partial_{y}A_{y})^2 -i\bar{c}(\partial^{\mu}\partial_{\mu}+\partial_{y}^2)c\Bigr], \label{action_gauge} \end{align} where \begin{align} F_{MN}=\partial_{M}A_{N}(x^{\mu},y)-\partial_{N}A_{M}(x^{\mu},y),\hspace{3em}(M,N=0,1,2,3,y). \end{align} $x^{\mu}$ ($\mu=0,1,2,3$) denotes the four-dimensional Minkowski-spacetime coordinate and $y$ is the coordinate of the extra dimension with $0\leq y\leq L$. Our choice of the 5d metric is $\eta_{MN}={\rm diag}(-1,1,1,1,1)$. We introduce the second term as a gauge-fixing term and the third term as the kinetic term of the ghost fields. The general class of boundary conditions for the gauge field is obtained from the action principle: \begin{align} \delta S_{G}=0. \end{align} After taking the variation, we obtain the bulk field equation for $A_{M}$, together with the following surface term from the first term of the action: \begin{align} (\partial^{\mu}A_{y}-\partial_{y}A^{\mu})\delta A_{\mu}=0, \hspace{3em}{\rm at}\ y=0,L. \end{align} Since the boundary condition $A_{\mu}=0$ at $y=0,L$ breaks the 4d gauge symmetry explicitly, the general class of boundary conditions consistent with the 4d gauge invariance is given by the following: \begin{align} \left\{ \begin{array}{l} \partial_{y}A_{\mu}=0,\\ A_{y}=0, \end{array} \right. \hspace{3em}{\rm at}\ y=0,L. \label{Gauge_BCs} \end{align} The BRST transformation leads us to the BCs for the ghost field.
The abelian gauge field $A_{M}$ and the ghost field $c$ are related to each other through the Grassmann-odd BRST transformation ${\bm \delta}_{B}$: \begin{align} {\bm \delta}_{B}A_{M}=\partial_{M}c. \end{align} This fact implies that $\partial_{y} c$ ($c$) should obey the same boundary conditions as $A_{y}$ ($A_{\mu}$). Thus we obtain the BCs for the ghost as \begin{align} \partial_{y}c =0 \hspace{3em}{\rm at}\ y=0,L. \label{ghostBC} \end{align} The boundary condition for the anti-ghost field $\bar{c}$ can be derived from the action principle applied to the third term of the action. The variation of the third term produces the following surface term: \begin{align} \bar{c}\partial_{y}(\delta c)-(\partial_{y}\bar{c})\delta c=0 \hspace{3em}{\rm at}\ y=0,L. \end{align} Since $c(x,y)$ obeys the boundary conditions (\ref{ghostBC}), the following boundary condition should be imposed on the anti-ghost field $\bar{c}$: \begin{align} \partial_{y}\bar{c}=0 \hspace{3em}{\rm at}\ y=0,L. \label{anti-ghost_BC} \end{align} \subsubsection{BCs for fermion} Next, we consider the BCs for the fermion by adding the following action to eq.~(\ref{action_gauge}): \begin{align} S_{F}=\int d^4 x \int^{L}_{0}dy\ \overline{\Psi}(i\Gamma^{M}D_{M}+M_{F})\Psi, \label{Fermion_action} \end{align} where \begin{align} D_{M}\Psi =(\partial_{M}-ieA_{M})\Psi, \end{align} and $\Psi$ is a 5d 4-component Dirac spinor. $M_{F}$ is a bulk mass of the fermion, and we take the gamma matrices $\Gamma^{M}$ as \begin{align} \Gamma^{\mu}&=\gamma^{\mu},\\ \Gamma^{y}&=-i\gamma_{5}=\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}. \end{align} From the action principle $\delta S_F =0$, we obtain the following condition for the surface term: \begin{align} \overline{\Psi}\ \gamma_{5}\delta \Psi=0, \hspace{3em}{\rm at}\ y=0,L, \end{align} with the 5d Dirac equation, \begin{align} i \gamma^{\mu}D_{\mu}\Psi +(\gamma_{5}D_{y}+M_{F})\Psi=0.
\end{align} In terms of the chiral spinors $\Psi_{R/L}$ ($\Psi =\Psi_{R}+\Psi_{L}$), which are defined by $\gamma_{5}\Psi_{R/L}=\pm \Psi_{R/L}$, we can rewrite the above equations as \begin{align} \overline{\Psi}_{L}\delta \Psi_{R}-\overline{\Psi}_{R}\delta\Psi_{L}=0, \label{fermion_surface}\hspace{3em}{\rm at}\ y=0,L, \end{align} \begin{align} i\gamma^{\mu}D_{\mu}\Psi_{R}+(-D_{y}+M_{F})\Psi_{L}=0,\\ i\gamma^{\mu}D_{\mu}\Psi_{L}+(D_{y}+M_{F})\Psi_{R}=0. \end{align} Since boundary conditions consisting of a linear combination of $\Psi_{R}$ and $\Psi_{L}$ break the 4d Lorentz invariance, the condition~(\ref{fermion_surface}) should be reduced to the form \begin{align} \overline{\Psi}_{L}\delta\Psi_{R}=0=\overline{\Psi}_{R}\delta \Psi_{L}, \end{align} which leads to the BCs \begin{align} \Psi_{R}=0\hspace{1em}\text{or}\hspace{1em}\Psi_{L}=0\hspace{3em}\text{at}\ y=0,L. \end{align} We should note that under the BC $\Psi_{R}=0$ ($\Psi_{L}=0$) at the boundaries, the 5d Dirac equation automatically determines the BC for $\Psi_{L}$ ($\Psi_{R}$) as \begin{align} \Psi_{R}=0 &\rightarrow (-D_{y}+M_{F})\Psi_{L}=0,\label{BCRtoL}\\ \Psi_{L}=0 &\rightarrow (D_{y}+M_{F})\Psi_{R}=0. \label{BCLtoR} \end{align} Thus we have the following four choices for the fermion BCs \cite{Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}: \begin{align} \begin{array}{ll} \text{type-(I)}:\Psi_{R}(0)=0=\Psi_{R}(L),\\[0.2cm] \text{type-(II)}:\Psi_{L}(0)=0=\Psi_{L}(L),\\[0.2cm] \text{type-(III)}:\Psi_{R}(0)=0=\Psi_{L}(L),\\[0.2cm] \text{type-(IV)}:\Psi_{L}(0)=0=\Psi_{R}(L).
\end{array}\label{sec2:Fermion_BCs} \end{align} \subsubsection{BCs for scalar field} Finally, we consider the general class of boundary conditions for a scalar field: \begin{align} S_{\Phi}=\int d^4 x \int^{L}_{0}dy \left[ \Phi^{\ast}(D^{M}D_{M} -M^2)\Phi -\frac{\lambda}{2}(\Phi^{\ast}\Phi)^2\right],\label{Scalar_action} \end{align} where \begin{align} D_{M}\Phi=(\partial_{M}-ie'A_{M})\Phi, \end{align} and $\Phi(x,y)$ denotes a 5d complex scalar field. As in the previous cases, we obtain the surface term from the action principle $\delta S_{\Phi}=0$: \begin{align} \Phi^{\ast}D_{y}\delta \Phi -(D_{y}\Phi)^{\ast}\delta\Phi=0, \hspace{3em}{\rm at}\ y=0,L. \label{surface_term} \end{align} Under the infinitesimal special variation $\delta \Phi =\varepsilon \Phi$, we can rewrite the above surface term as \begin{align} |\Phi -iL_{0}D_{y}\Phi|^{2}=|\Phi+iL_{0}D_{y}\Phi|^{2}\hspace{3em}{\rm at}\ y=0,L, \end{align} where $L_{0}$ is an arbitrary non-zero real constant with mass dimension $-1$. The above equation implies that $\Phi -iL_{0}D_{y}\Phi$ and $\Phi+iL_{0}D_{y}\Phi$ differ only by a phase at the boundaries: \begin{align} &\Phi -iL_{0}(D_{y}\Phi)=e^{i\theta_{0}}(\Phi+iL_{0}D_{y}\Phi)\hspace{3em}{\rm at }\ y=0,\\ &\Phi -iL_{0}(D_{y}\Phi)=e^{i\theta_{L}}(\Phi+iL_{0}D_{y}\Phi)\hspace{3em}{\rm at }\ y=L. \end{align} With $L_{+}\equiv L_{0}\cot \frac{\theta_{0}}{2}$ and $L_{-}\equiv -L_{0}\cot\frac{\theta_{L}}{2}$, we obtain the general class of BCs for the scalar field \cite{Fujimoto:2011kf,Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}, \begin{align} \left\{ \begin{array}{l} \Phi(0)+L_{+}D_{y}\Phi(0)=0,\\ \Phi(L)-L_{-}D_{y}\Phi(L)=0, \end{array} \right.\hspace{3em}(-\infty \leq L_{\pm}\leq +\infty).\label{Robin_BCs} \end{align} These boundary conditions are known as Robin boundary conditions.
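It is worth noting that the familiar Dirichlet and Neumann conditions are recovered as limiting cases of the Robin parameters; for instance, at $y=0$,
\begin{align}
L_{+}\to 0&:\quad \Phi(0)=0 \hspace{3em}(\text{Dirichlet}),\nonumber\\
L_{+}\to \pm\infty&:\quad D_{y}\Phi(0)=0 \hspace{3em}(\text{Neumann}),\nonumber
\end{align}
and similarly for $L_{-}$ at $y=L$, so that the one-parameter family of BCs (\ref{Robin_BCs}) at each boundary continuously interpolates between the Dirichlet and Neumann cases.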
Note that the derived Robin boundary condition satisfies the condition~(\ref{surface_term}) under the assumption that $\Phi$ and $\delta \Phi$ satisfy the same boundary condition. We should emphasize that all derived boundary conditions (\ref{Gauge_BCs}), (\ref{ghostBC}), (\ref{anti-ghost_BC}), (\ref{sec2:Fermion_BCs}), (\ref{Robin_BCs}) are consistent with the 5d gauge invariance. \subsection{4d spectrum} In the previous subsection, we {investigated} the general class of BCs for each field. Now, we derive the 4d spectrum of the gauge field and the fermion field under the derived boundary conditions, respectively. For the scalar field, we only investigate the vacuum expectation value for {our} purpose. \subsubsection{4d spectrum of Abelian {gauge, ghost{,} and anti-ghost fields}} First, we start from the abelian {gauge, the ghost{,} and the anti-ghost fields}. The action and the boundary conditions are given by eq.~(\ref{action_gauge}) and eqs.~(\ref{Gauge_BCs}),~(\ref{ghostBC}),~(\ref{anti-ghost_BC}). The action $S_{G}$ can be rewritten as \begin{align} S_{G}=\int d^4 x\int^{L}_{0}dy\ \left[ \frac{1}{2}A^{\mu}(\partial^{\nu}\partial_{\nu}+\partial_{y}^2)A_{\mu}+\frac{1}{2}A_{y}(\partial^{\mu}\partial_{\mu}+\partial_{y}^2)A_{y}-i\bar{c}(\partial^{\mu}\partial_{\mu}+\partial_{y}^2)c\right].\label{action_gauge2} \end{align} To obtain the 4d spectrum, we expand the fields as follows: \begin{align} A_{\mu}(x,y)=\sum_{n}A^{(n)}_{\mu}(x)f_{n}(y),\\ A_{y}(x,y)=\sum_{n}A^{(n)}_{y}(x)g_{n}(y),\\ c(x,y)=\sum_{n}c^{(n)}(x)\Xi_{n}(y),\\ \bar{c}(x,y)=\sum_{n}\bar{c}^{(n)}(x)\Xi_{n}(y), \end{align} where $\{f_{n}(y)\}$ $\Bigl( \{g_{n}(y)\}\Bigr)$ are eigenfunctions of the Hermitian operator ${\cal D}^{\dagger}{\cal D}$ (${\cal D}{\cal D}^{\dagger}$): \begin{align} \left\{ \begin{array}{l} {\cal D}^{\dagger}{\cal D}f_{n}(y)=m_{n}^2 f_{n}(y),\\[0,3cm] {\cal D}{\cal D}^{\dagger}g_{n}(y)=m_{n}^2 g_{n}(y), \end{array} \right. 
\end{align} and we defined ${\cal D}$ and ${\cal D}^{\dagger}$ as \begin{align} {\cal D}&\equiv \partial_{y},\\ {\cal D}^{\dagger} &\equiv -\partial_{y}. \end{align} $\{\Xi_{n}(y)\}$ are eigenfunctions of the Hermitian operator $(-\partial_{y}^2)$, \begin{align} -\partial_{y}^2 \Xi_{n}(y)=m_{n}^2 \Xi_{n}(y). \end{align} Note that $\{ f_{n}\}$, $\{g_{n}\}${,} and $\{\Xi_{n}\}$ form {complete sets}, respectively, and can obey the {orthonormal} relations: \begin{align} \int^{L}_{0}dy\ f_{n}^{\ast}(y)f_{m}(y)=\delta_{n,m},\\ \int^{L}_{0}dy\ g_{n}^{\ast}(y)g_{m}(y)=\delta_{n,m},\\ \int^{L}_{0}dy\ \Xi^{\ast}_{n}(y)\Xi_{m}(y)=\delta_{n,m}. \end{align} Furthermore, $\{f_{n}\}$ and $\{g_{n}\}$ satisfy the {quantum-mechanical} supersymmetry (QM-SUSY) relations \cite{Lim:2005rc,Lim:2007fy,Lim:2008hi,Ohya:2010wf,Nagasawa:2011mu,Sakamoto:2012ew}, \begin{align} \left\{ \begin{array}{l} {\cal D}f_{n}(y)=m_{n}g_{n}(y),\\ {\cal D}^{\dagger}g_{n}(y)=m_{n}f_{n}(y). \end{array} \right. \end{align} Under the BCs~(\ref{Gauge_BCs}),~(\ref{ghostBC}),~(\ref{anti-ghost_BC}), we can derive the explicit {forms} of $\{f_{n}\}$, $\{g_{n}\}$ and $\{\Xi_{n} \}$ with the mass eigenvalue $m_{n}$ as \begin{align} \begin{array}{ll} f_{0}=\sqrt{\frac{1}{L}},&\\ f_{n}=\sqrt{\frac{2}{L}}\cos\left(\frac{n\pi}{L}y\right),&\hspace{3em}(n=1,2,3,\cdots),\\ g_{n}=-\sqrt{\frac{2}{L}}\sin\left(\frac{n\pi}{L}y\right),&\hspace{3em}(n=1,2,3,\cdots),\\ \Xi_{0}=\sqrt{\frac{1}{L}},&\\ \Xi_{n}=\sqrt{\frac{2}{L}}\cos\left(\frac{n\pi}{L}y\right),&\hspace{3em}(n=1,2,3,\cdots),\\ {{m_{n}}=\frac{n\pi}{L}},&\hspace{3em}(n=0,1,2,\cdots). \end{array} \end{align} Substituting the {above expansions} into the action~(\ref{action_gauge2}) and executing the integration with respect to the extra dimension, we obtain the following reduced action. 
\begin{align} S_{G}&=\int d^4 x \biggl[ \frac{1}{2}A^{(0)}_{\mu}\eta^{\mu\nu}(\partial^{\alpha}\partial_{\alpha})A^{(0)}_{\nu} +\sum^{\infty}_{n=1}\frac{1}{2}A^{(n)}_{\mu}\eta^{\mu\nu}(\partial^{\alpha}\partial_{\alpha}-m_{n}^2)A_{\nu}^{(n)}+\sum^{\infty}_{n=1}\frac{1}{2}A_{y}^{(n)}(\partial^{\alpha}\partial_{\alpha}-m_{n}^2)A_{y}^{(n)}\nonumber\\ &\hspace{5em}-i\bar{c}^{(0)}(\partial^{\alpha}\partial_{\alpha})c^{(0)}-i\sum^{\infty}_{n=1}\bar{c}^{(n)}(\partial^{\alpha}\partial_{\alpha}-m_{n}^2)c^{(n)}\biggr]. \end{align} A schematic figure of the 4d spectrum is depicted in Figure~\ref{fig.4d-gauge}. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{4d-gauge_v2.pdf} \caption{A schematic figure of the 4d spectrum of the abelian gauge field with ghosts on an interval. Each black oval pair indicates a QM-SUSY pair to make a mass term} \label{fig.4d-gauge} \end{center} \end{figure} \subsubsection{4d spectrum of fermion} Second, we investigate the 4d spectrum of the fermion on an interval. The action and BCs are given by eq.~(\ref{Fermion_action}) and {eq.~(\ref{sec2:Fermion_BCs}). To evaluate the 4d spectrum of the fermion, we expand the fermion as \begin{align} \Psi(x,y)&=\Psi_{R}(x,y)+\Psi_{L}(x,y)\nonumber\\ &=\sum_{n}\psi^{(n)}_{R}(x)\mathscr{ F}^{(n)}_{\psi_R}(y)+\sum_{n}\psi^{(n)}_{L}(x)\mathscr{ G}^{(n)}_{\psi_L}(y), \end{align} } where $\{ \mathscr{F}^{(n)}_{\psi_R}\}$ ($\{\mathscr{G}^{(n)}_{\psi_L}\}$) {are eigenfunctions} of the {Hermitian} operator $\mathscr{D}^{\dagger}\mathscr{D}$ ($\mathscr{D}\mathscr{D}^{\dagger}$): \begin{align} \left\{ \begin{array}{l} \mathscr{D}^{\dagger}\mathscr{D}\mathscr{F}^{(n)}_{\psi_R}(y)=m_{\psi^{(n)}}^2 \mathscr{F}^{(n)}_{\psi_R}(y),\\[0.2cm] \mathscr{D}\mathscr{D}^{\dagger}\mathscr{G}^{(n)}_{\psi_L}(y)=m_{\psi^{(n)}}^2 \mathscr{G}^{(n)}_{\psi_L}(y), \end{array} \right.\label{Fermion_eigenvalue-equation} \end{align} and {form} {complete sets}. 
In the above, the operators $\mathscr{D}$ and $\mathscr{D}^{\dagger}$ are defined as \begin{align} \mathscr{D}&\equiv \partial_{y}+M_{F},\\ \mathscr{D}^{\dagger}&\equiv -\partial_{y}+M_{F}. \end{align} Furthermore, $\{ \mathscr{F}^{(n)}_{\psi_R}\}$ and $\{\mathscr{G}^{(n)}_{\psi_L} \}$ satisfy the QM-SUSY relations: \begin{align} \left\{ \begin{array}{l} \mathscr{D}\mathscr{F}^{(n)}_{\psi_R}(y)=m_{\psi^{(n)}}\mathscr{G}^{(n)}_{\psi_L}(y),\\[0.2cm] \mathscr{D}^{\dagger}\mathscr{G}^{(n)}_{\psi_L}(y)=m_{\psi^{(n)}}\mathscr{F}^{(n)}_{\psi_R}(y). \end{array} \right.\label{eq:QM-SUSY relations} \end{align} We can obtain the explicit forms of the wavefunctions by solving the eigenvalue equations (\ref{Fermion_eigenvalue-equation}) while taking into account the BCs (\ref{sec2:Fermion_BCs}). Here, however, we concentrate on the existence of a chiral massless zero mode and the form of its wavefunction. The zero-mode solutions are obtained from the QM-SUSY relations (\ref{eq:QM-SUSY relations}) with $m_{\psi^{(0)}}=0$: \begin{align} \mathscr{D}\mathscr{F}^{(0)}_{\psi_R}&=0, \label{ZEROsolutionF}\\ \mathscr{D}^{\dagger}\mathscr{G}^{(0)}_{\psi_L}&=0.\label{ZEROsolutionG} \end{align} The solutions of the above equations are given as follows: \begin{align} \mathscr{F}^{(0)}_{\psi_R}(y)&=\sqrt{\frac{2M_{F}}{1-e^{-2M_{F}L}}}e^{-M_{F}y},\label{f0R}\\ \mathscr{G}^{(0)}_{\psi_L}(y)&=\sqrt{\frac{2M_{F}}{e^{2M_{F}L}-1}}e^{M_{F}y}.\label{g0L} \end{align} Schematic figures of the zero-mode solutions are depicted in Figure~\ref{fig.zero-modes}. The zero-mode solution $\mathscr{F}^{(0)}_{\psi_R}$ $\Bigl(\mathscr{G}^{(0)}_{\psi_L}\Bigr)$ localizes at the boundary $y=0$ ($y=L$) in the case of $M_{F}>0$ and at $y=L$ ($y=0$) in the case of $M_{F}<0$.
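As an illustrative numerical aside (the values of $M_F$ and $L$ below are chosen arbitrarily for the check), the unit normalization and the localization of the zero-mode profiles (\ref{f0R}) and (\ref{g0L}) can be verified directly:

```python
import numpy as np

def F0(y, MF, L):
    # right-handed zero mode, eq. (f0R): localized at y=0 for MF > 0
    return np.sqrt(2.0 * MF / (1.0 - np.exp(-2.0 * MF * L))) * np.exp(-MF * y)

def G0(y, MF, L):
    # left-handed zero mode, eq. (g0L): localized at y=L for MF > 0
    return np.sqrt(2.0 * MF / (np.exp(2.0 * MF * L) - 1.0)) * np.exp(MF * y)

def trapezoid(f, x):
    # simple trapezoidal rule, to avoid depending on a particular numpy version
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

MF, L = 2.0, 1.0                       # illustrative values in units where L = 1
y = np.linspace(0.0, L, 20001)

normF = trapezoid(F0(y, MF, L)**2, y)  # should be 1 (unit norm on the interval)
normG = trapezoid(G0(y, MF, L)**2, y)  # should be 1
print(normF, normG)
```

For $M_F>0$, the profile $\mathscr{F}^{(0)}_{\psi_R}$ indeed peaks at $y=0$ and $\mathscr{G}^{(0)}_{\psi_L}$ at $y=L$, in accordance with Figure~\ref{fig.zero-modes}.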
\begin{center} \begin{figure}[h] \begin{center} \begin{minipage}{0.9\textwidth} \begin{center} \scalebox{0.3}{\includegraphics{fermion-zeromode_positiveMF.pdf}} \par {\footnotesize{(i) {Schematic figures} of $\mathscr{F}^{(0)}_{\psi_R}$ and $\mathscr{G}^{(0)}_{\psi L}$ in the case of $M_{F}>0$. $\mathscr{F}^{(0)}_{\psi_R}$ ($\mathscr{G}^{(0)}_{\psi_L}$) localizes to the boundary point $y=0$ ($y=L$).}} \end{center} \end{minipage} \end{center} \begin{center} \begin{minipage}{0.9\textwidth} \begin{center} \scalebox{0.3}{\includegraphics{fermion-zeromode_negativeMF.pdf}} \par \vspace{0.0cm} {\footnotesize{(ii) {Schematic figures} of $\mathscr{F}^{(0)}_{\psi_R}$ and $\mathscr{G}^{(0)}_{\psi_L}$ in the case of $M_{F}<0$. $\mathscr{F}^{(0)}_{\psi_R}$ ($\mathscr{G}^{(0)}_{\psi_L}$) localizes to the boundary point $y=L$ ($y=0$).}} \end{center} \end{minipage} \end{center} \caption{{Schematic figures} of chiral massless {fermion} zero mode solutions.} \label{fig.zero-modes} \end{figure} \end{center} It should be emphasized that the {zero-mode} solutions $(\ref{f0R})$ $\bigl((\ref{g0L})\bigr)$ {are} consistent only with the type-(II) $\bigl($type-(I)$\bigr)$ BC given in (\ref{sec2:Fermion_BCs}) because of (\ref{BCRtoL}) and (\ref{BCLtoR}){, respectively.} Therefore we will concentrate on {the} {type-(I) and type-(II)} BCs in the following. The mass spectrum of both type-(I) and type-(II) is {given by} \begin{align} m_{\psi^{(0)}}&=0,\\ {m_{\psi^{(n)}}}&{= \sqrt{\left(\frac{n\pi}{L}\right)^2 +M_{F}^2}} , \hspace{4em}(n=1,2,3,\cdots ). 
\end{align} Inserting the mode expansions into the action and using the {orthonormal} relations of the mode functions, we have \begin{align} S_{F}=\int d^4 x \left\{ {\cal L}_{m=0}+\sum^{\infty}_{n=1}\overline{\psi^{(n)}}(x)\Bigl(i\gamma^{\mu}\partial_{\mu}+m_{n}\Bigr)\psi^{(n)}(x)\right\}, \end{align} where \begin{align} {\cal L}_{m=0}&= \left\{ \begin{array}{l} \overline{\psi^{(0)}_{L}}(x) (i\gamma^{\mu}\partial_{\mu})\psi^{(0)}_{L}(x), \hspace{4em}\text{{for}\quad type-(I)},\\[0.3cm] \overline{\psi^{(0)}_{R}}(x) (i\gamma^{\mu}\partial_{\mu})\psi^{(0)}_{R}(x), \hspace{4em}\text{{for}\quad type-(II)}, \end{array} \right. \end{align} and $\psi^{(n)}=\psi^{(n)}_R +\psi^{(n)}_L$. A typical spectrum of the fermion is depicted {in Figure~\ref{fig.chap2-4d-fermion}}. A chiral massless zero mode {exists} in the case of both type-(I) and type-(II). \begin{figure}[h] \begin{center} \begin{minipage}{0.3\textwidth} \begin{center} \scalebox{0.45}{\includegraphics{4d-fermion_type1.pdf}} \par \vspace{0.0cm} {\footnotesize{{Type-(I) case}}} \end{center} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{center} \scalebox{0.45}{\includegraphics{4d-fermion_type2.pdf}} \par \vspace{0.0cm} {\footnotesize{{Type-(II) case}}} \end{center} \end{minipage} \end{center} \caption{A typical mass spectrum of the fermion on an interval. Each black oval pair indicates a QM-SUSY pair to make a mass term.} \label{fig.chap2-4d-fermion} \end{figure} \subsubsection{Vacuum expectation value of the scalar} Finally, we comment on the vacuum expectation value of the scalar field. The action and the BCs are given by eq.~(\ref{Scalar_action}) and eq.~(\ref{Robin_BCs}). 
It was found in Refs.~\cite{Fujimoto:2011kf,Fujimoto:2012wv} that under the Robin boundary condition (\ref{Robin_BCs}), $\Phi(x,y)$ can possess a non-vanishing vacuum expectation value $\langle \Phi(x,y)\rangle=\phi(y)$ of the form \begin{align} \phi(y) =\dfrac{\frac{M}{\sqrt{\lambda}}(\sqrt{1+X}-1)^{\frac{1}{2}}}{{\rm cn}\left[M(1+X)^{\frac{1}{4}}(y-y_{0}), \sqrt{\frac{1}{2}\left(1+\frac{1}{\sqrt{1+X}}\right)}\right]}, \end{align} with \begin{align} X=\frac{4\lambda |Q|}{M^4}. \end{align} ${\rm cn}(y,a)$ is the Jacobi elliptic function, and $y_0$ and $Q$ are constants determined by the parameters $L_{\pm}$ of the Robin BCs. Choosing suitable values of $L_{\pm}$, we can approximate the form of the scalar VEV $\phi(y)$ as \begin{align} \phi(y)\sim{\cal A}e^{My},\label{y-dependentVEV} \end{align} where ${\cal A}$ is a constant with mass dimension $\frac{3}{2}$. \section{Casimir energy and stability of the extra dimension}\label{sec.2-3} In the previous section, we obtained the 4d spectrum of the fields with the specified BCs. Taking these results into account, we evaluate the Casimir energy $E[L]$ as a function of the length $L$ of the extra dimension and show that the minimization of the Casimir energy provides a mechanism to stabilize the extra dimension. For our purpose, we concentrate only on the gauge and fermion field contributions to the Casimir energy, ignoring the effect of the scalar field in this paper. We summarize the action and the BCs under consideration: \begin{align} S&=S_G +S_F,\\ S_{G}&=\int d^4 x \int^{L}_{0}dy \Bigl[ -\frac{1}{4}F^{MN}F_{MN}-\frac{1}{2}(\partial^{\mu}A_{\mu}+\partial_{y}A_{y})^2 -i\bar{c}(\partial^{\mu}\partial_{\mu}+\partial_{y}^2)c\Bigr], \\ S_{F}&=\int d^4 x \int^{L}_{0}dy\ \overline{\Psi}(i\Gamma^{\mu}\partial_{\mu}+i\Gamma^{y}\partial_{y}+M_{F})\Psi, \end{align} \begin{align} \left\{ \begin{array}{l} \partial_{y}A_{\mu}=0,\\ A_{y}=0, \end{array} \right.
\hspace{3em}{\rm at}\ y=0,L, \label{Sec2Gauge_BCs} \end{align} \begin{align} \partial_{y}c=0=\partial_{y}\bar{c} \hspace{3em}{\rm at}\ y=0,L, \label{Sec2ghost_BC} \end{align} \begin{align} \begin{array}{ll} \text{type-(I)}:\Psi_{R}(0)=0=\Psi_{R}(L),\\[0.2cm] \text{type-(II)}:\Psi_{L}(0)=0=\Psi_{L}(L).\label{Fermion_BCs} \end{array} \end{align} We focus on the situation in which a chiral massless zero mode exists. As an example, we first consider the type-(II) BC. To evaluate the Casimir energy, we examine the partition function $Z[L]$: \begin{align} Z[L]=\int [dA_{\mu}\, dA_y\, d\Psi\, d\bar{\Psi}\, dc\, d\bar{c} ]\,e^{iS}. \end{align} The gauge-field part of the partition function reads \begin{align} Z_{G}[L]&=\int [dA_\mu dA_y dc d\bar{c}]\, e^{iS_{G}}\\ &\propto{\rm exp}\left[ i\int d^4 x \Biggl\{ i\int \frac{d^4 p}{(2\pi)^4}\,\biggl({\rm ln}\, p^{\mu}p_{\mu}+\frac{3}{2}\sum^{\infty}_{n=1}{\rm ln}\, (p^{\mu}p_{\mu}+m_{n}^2)\biggr)\Biggr\}\right]. \end{align} After moving to Euclidean space, we obtain the Casimir energy of the gauge field: \begin{align} Z^{{\rm Euclid}}_{G}[L]\propto{\rm exp}\left[ -E^{\rm U(1)}[L]\int d^4 x_{E}\right], \end{align} where \begin{align} E^{\rm U(1)}[L]&= \int \frac{d^4 p_{E}}{(2\pi)^4}\left[ {\rm ln}\, p_{E}^2 +\frac{3}{2}\sum^{\infty}_{n=1}{\rm ln}\, (p_{E}^2 +m_{n}^2)\right]\nonumber\\ &= \int \frac{d^4 p_{E}}{(2\pi)^4}\left[ \frac{1}{4}\,{\rm ln}\, p_{E}^2 +\frac{3}{4}\sum^{\infty}_{n=-\infty}{\rm ln} \Bigl\{ p_{E}^2 +\left(\frac{n\pi}{L}\right)^2\Bigr\}\right], \end{align} with $p_{E}^2 =(p^0_E)^2+(p^1_E)^2+(p^2_E)^2+(p^3_E)^2$.
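The second equality above, which recasts the sum over $n\geq 1$ as a symmetric sum over all integers, can be checked numerically term by term. The following is a minimal sketch; the values of $p_E^2$, $L$, and the truncation $N$ are arbitrary illustrative choices.

```python
import math

# Arbitrary illustrative values: p2 plays the role of p_E^2, L is the interval length.
p2, L, N = 1.7, 2.3, 50

# ln p^2 + (3/2) * sum_{n>=1} ln(p^2 + (n*pi/L)^2)
lhs = math.log(p2) + 1.5 * sum(math.log(p2 + (n * math.pi / L) ** 2)
                               for n in range(1, N + 1))

# (1/4) ln p^2 + (3/4) * sum over all n in [-N, N] of ln(p^2 + (n*pi/L)^2)
rhs = 0.25 * math.log(p2) + 0.75 * sum(math.log(p2 + (n * math.pi / L) ** 2)
                                       for n in range(-N, N + 1))

assert abs(lhs - rhs) < 1e-9  # truncated sums agree to floating-point precision
```

The agreement holds for any truncation, since the $n=0$ term of the symmetric sum supplies the remaining $\frac{3}{4}\,{\rm ln}\,p_E^2$.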
To proceed, we divide $E^{\rm U(1)}[L]$ into two parts: \begin{align} E^{\rm U(1)}[L]&=E^{\rm U(1)}_{{\rm part1}}[L]+E^{\rm U(1)}_{{\rm part2}}[L],\\ E^{\rm U(1)}_{{\rm part1}}[L]&= \int \frac{d^4 p_{E}}{(2\pi)^4}\ \frac{1}{4}\,{\rm ln}\, p_{E}^2,\\ E^{\rm U(1)}_{{\rm part2}}[L]&= \int \frac{d^4 p_{E}}{(2\pi)^4}\ \frac{3}{4}\sum^{\infty}_{n=-\infty} {\rm ln}\, \biggl\{ p_{E}^2 +\left(\frac{n\pi}{L}\right)^2\biggr\}. \end{align} $E^{\rm U(1)}_{\rm part1}[L]$ has no $L$-dependence. Since our interest lies only in the $L$-dependence of the Casimir energy $E^{\rm U(1)}[L]$, we simply drop this part; doing so does not affect any of the results below. On the other hand, $E^{\rm U(1)}_{\rm part2}[L]$ does depend on $L$ and plays a crucial role when we discuss the $L$-dependence of the total Casimir energy. By using the formulas \begin{align} &-{\rm ln}\, A=\frac{d}{ds}A^{-s}\biggr|_{s=0},\label{formula1}\\ &A^{-s}=\frac{1}{\Gamma (s)}\int^{\infty}_{0} dt\ t^{s-1}e^{-At},\label{formula2}\\ &\frac{d}{ds}\frac{t^s}{\Gamma (s)}\Biggr|_{s=0}=1,\label{formula3} \end{align} with the Gamma function $\Gamma (s)=\int^{\infty}_{0}dt\, t^{s-1}e^{-t}$, we can rewrite $E^{\rm U(1)}_{\rm part2}[L]$ as \begin{align} E^{\rm U(1)}_{\rm part2}[L]=-\frac{3}{4}\cdot\frac{1}{16\pi^2}\sum^{\infty}_{n=-\infty}\int^{\infty}_{0}dt\, t^{-3} e^{-(\frac{n\pi}{L})^2 t}. \end{align} The Poisson summation formula \begin{align} \sum^{\infty}_{n=-\infty}e^{-(\frac{n\pi}{L})^2 t}=\sum^{\infty}_{w=-\infty}\frac{L}{\sqrt{\pi t}}e^{-\frac{w^2 L^2}{t}},\label{poisson} \end{align} allows us to carry out the sum over $n$. Here, the index $w$ is an integer which represents the winding number.
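The resummation (\ref{poisson}) can be checked numerically by truncating both sides at a large cutoff; in this sketch the sample values of $L$ and $t$ are arbitrary.

```python
import math

def mode_sum(L, t, N=200):
    # Left-hand side of the Poisson summation formula: Kaluza-Klein mode sum.
    return sum(math.exp(-((n * math.pi / L) ** 2) * t) for n in range(-N, N + 1))

def winding_sum(L, t, W=200):
    # Right-hand side: sum over winding numbers after Poisson resummation.
    return (L / math.sqrt(math.pi * t)) * sum(math.exp(-(w * L) ** 2 / t)
                                              for w in range(-W, W + 1))

for L, t in [(1.0, 0.3), (2.5, 1.7), (0.7, 0.05)]:
    assert abs(mode_sum(L, t) - winding_sum(L, t)) < 1e-10
```

Note the complementary convergence: the mode sum converges fast for large $t$, the winding sum for small $t$, which is what makes the resummation useful for isolating the $t\to 0$ divergence.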
By applying the Poisson summation formula, we obtain \begin{align} E^{\rm U(1)}_{\rm part2}[L]=-\frac{3L}{64\pi^{5/2}}\sum^{\infty}_{w=-\infty}\int^{\infty}_{0}dt\, {t}^{-\frac{7}{2}}e^{-\frac{w^2 L^2}{t}}, \end{align} and find that $E^{\rm U(1)}_{\rm part2}[L]$ contains a UV divergence as $t\rightarrow 0$. To remove this UV divergence, we define the regularized Casimir energy as \begin{align} \frac{1}{L}E^{\rm U(1)}_{\rm part2}[L]_{\rm reg.}\equiv \frac{1}{L}E^{\rm U(1)}_{\rm part2}[L] -\frac{1}{L}E^{\rm U(1)}_{\rm part2}[L]\Biggr|_{{L\to\infty}} . \end{align} We note that this regularization amounts simply to removing the $w=0$ mode from the Casimir energy. The $w\neq 0$ modes are winding modes and provide finite contributions to the $L$-dependence of the Casimir energy. On the other hand, $w=0$ corresponds to the unwinding mode and is responsible for the UV divergence. Since the regularized Casimir energy $E^{\rm U(1)}_{\rm part2}[L]_{\rm reg.}$ does not contain the unwinding mode, it is free of the UV divergence and finite. The explicit form of $E^{\rm U(1)}_{\rm part2}[L]_{\rm reg.}$ is \begin{align} E^{\rm U(1)}_{\rm part2}[L]_{\rm reg.}&=-\frac{3L}{32\pi^{5/2}}\sum^{\infty}_{w=1}\int^{\infty}_{0}dt\, t^{-\frac{7}{2}}e^{-\frac{w^2 L^2}{t}}\nonumber\\ &=-\frac{3L}{32\pi^{5/2}}\sum^{\infty}_{w=1}\frac{1}{w^5 L^5}\int^{\infty}_{0}dt'\ t'{}^{\frac{5}{2}-1}e^{-t'}\nonumber\\ &=-\frac{9}{128\pi^2 L^4}\zeta (5), \end{align} where we performed the substitution $t' \equiv \frac{w^2 L^2}{t}$ and used $\Gamma (\frac{5}{2}) =\frac{3\sqrt{\pi}}{4}$. From the above analysis, we obtain the regularized Casimir energy $E^{\rm U(1)}[L]_{\rm reg.}$ of the gauge field: \begin{align} E^{\rm U(1)}[L]_{\rm reg.}=-\frac{9}{128 \pi^2 L^4}\zeta (5).\label{U(1)Casimir} \end{align} In the same way, we next evaluate the Casimir energy of the fermion with the type-(II) boundary condition.
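As an aside, the algebra behind (\ref{U(1)Casimir}) — the substitution $t'=w^2L^2/t$, the value $\Gamma(\frac{5}{2})=\frac{3\sqrt{\pi}}{4}$, and the appearance of $\zeta(5)$ — can be verified numerically; this is a sketch with an arbitrary illustrative value of $L$.

```python
import math

L = 1.3  # arbitrary illustrative interval length

# Winding-mode series after the substitution t' = w^2 L^2 / t:
# -3L/(32 pi^{5/2}) * Gamma(5/2) * sum_{w>=1} 1/(w^5 L^5)
series = (-3 * L / (32 * math.pi ** 2.5)) * math.gamma(2.5) * sum(
    1.0 / (w ** 5 * L ** 5) for w in range(1, 100000))

# Closed form: -9 zeta(5) / (128 pi^2 L^4), with zeta(5) from a direct partial sum.
zeta5 = sum(1.0 / n ** 5 for n in range(1, 100000))
closed_form = -9 * zeta5 / (128 * math.pi ** 2 * L ** 4)

assert abs(series - closed_form) < 1e-12
assert closed_form < 0  # the gauge-field Casimir energy is attractive
```

The identity holds for any $L>0$, since the $L$-dependence factors out of the winding sum.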
(It is found that the type-(I) boundary condition leads to the same conclusion as the type-(II) one for the Casimir energy.) To move on, we introduce the chiral representation: \begin{align} \psi^{(n)} =\left(\begin{array}{c} \xi^{(n)}\\ 0 \end{array} \right)+\left(\begin{array}{c} 0\\ \eta^{(n)} \end{array}\right). \end{align} The gamma matrices are represented by \begin{align} \gamma^{\mu}=\left(\begin{array}{cc} 0&\bar{\sigma}^\mu\\ \sigma^\mu &0\\ \end{array}\right), \end{align} where \begin{align} \bar{\sigma}^\mu &=(1,{\bm \sigma}),\\ \sigma^{\mu}&=(1,-{\bm \sigma}), \end{align} and ${\bm \sigma}$ are the Pauli matrices. The partition function of the fermion reads \begin{align} Z_F [M_F ,L]&=\int [d\Psi d\bar{\Psi}]\ e^{iS_F}\nonumber\\ &\propto {\rm exp}\left[ i\int d^4 x \Biggl\{ -i \int\frac{d^4 p}{(2\pi)^4}\, \biggl({\rm ln}\,p^\mu p_\mu +2\sum^{\infty}_{n=1}{\rm ln} \,(p^\mu p_\mu +m_{\psi^{(n)}}^2)\biggr)\Biggr\}\right], \end{align} where the overall minus sign originates from the Grassmann nature of the fermion fields. After moving to Euclidean space, we obtain the Casimir energy of the fermion: \begin{align} Z^{\rm Euclid}_F [M_F ,L]&\propto {\rm exp}\left[ -E^{(F)}[M_F ,L]\int d^4 x_{E}\right], \end{align} where \begin{align} E^{(F)}[M_F ,L]&= -\int \frac{d^4 p_E}{(2\pi)^4}\left[ {\rm ln}\, p_E^2 +2\sum^{\infty}_{n=1}\, {\rm ln}(p_E^2 +m_{\psi^{(n)}}^2)\right]\nonumber\\ &=-\int \frac{d^4 p_E}{(2\pi)^4}\left[ {\rm ln}\,p_E^2 -{\rm ln}\, (p_E^2 +M_F^2)+\sum^{\infty}_{n=-\infty}{\rm ln}\, \biggl\{ p_E^2 +\Bigl(\frac{n\pi}{L}\Bigr)^2 +M_F^2\biggr\}\right].
\end{align} We divide $E^{(F)}[M_F ,L]$ into two parts, as in the gauge-field case: \begin{align} E^{(F)}[M_F ,L]&=E^{(F)}_{\rm part1}[M_F ,L]+E^{(F)}_{\rm part2}[M_F ,L],\\ E^{(F)}_{\rm part1}[M_F ,L]&=-\int \frac{d^4 p_E}{(2\pi)^4}\left[ {\rm ln}\, p_E^2 -{\rm ln}\, (p_E^2 +M_F^2)\right],\\ E^{(F)}_{\rm part2}[M_F ,L]&=-\int \frac{d^4 p_E}{(2\pi)^4}\sum^{\infty}_{n=-\infty}{\rm ln}\,\left[ p_E^2 +\Bigl(\frac{n\pi}{L}\Bigr)^2 +M_F^2\right]. \end{align} As in the gauge-field case, $E^{(F)}_{\rm part1}[M_F ,L]$ contains no $L$-dependence, and since we are interested only in the $L$-dependence of the Casimir energy, we again drop this part. $E^{(F)}_{\rm part2}[M_F ,L]$ can be evaluated in the same way as before. Using the formulas (\ref{formula1})-(\ref{formula3}), we can rewrite $E^{(F)}_{\rm part2}[M_F ,L]$ as \begin{align} E^{(F)}_{\rm part2}[M_F ,L]=\frac{1}{16\pi^2}\sum^{\infty}_{n=-\infty}\int^{\infty}_{0}dt\, t^{-3}e^{-\left\{ (\frac{n\pi}{L})^2+M_F^2\right\}t}. \end{align} By using the Poisson summation formula (\ref{poisson}), we obtain the following form for $E^{(F)}_{\rm part2}[M_F ,L]$: \begin{align} E^{(F)}_{\rm part2}[M_F ,L]=\frac{L}{16\pi^{5/2}}\sum^{\infty}_{w=-\infty}\int^{\infty}_{0}dt\,t^{-\frac{7}{2}}e^{-\frac{w^2 L^2}{t}-M_F^2 t}. \end{align} Since $E^{(F)}_{\rm part2}[M_F ,L]$ contains a UV divergence as $t\rightarrow 0$, we regularize it as \begin{align} \frac{1}{L}E^{(F)}_{\rm part2}[M_F ,L]_{\rm reg.}\equiv \frac{1}{L}E^{(F)}_{\rm part2}[M_F ,L]-\frac{1}{L}E^{(F)}_{\rm part2}[M_F ,L]\Biggr|_{L\rightarrow \infty}.
\end{align} The regularized Casimir energy $E^{(F)}_{\rm part2}[M_F ,L]_{\rm reg.}$ is expressed by the modified Bessel function $K_\nu (z)$ as \begin{align} E^{(F)}_{\rm part2}[M_F ,L]_{\rm reg.}=\frac{{L}}{4\pi^{5/2}}\sum^{\infty}_{w=1} \left(\frac{|M_F |}{wL}\right)^{\frac{5}{2}}K_{\frac{5}{2}}(2w |M_F|L), \end{align} where the modified Bessel function is defined by \begin{align} 2\left(\frac{A}{B}\right)^{\frac{\nu}{2}} K_\nu (2\sqrt{AB})=\int^{\infty}_{0}dt\, t^{-\nu-1}e^{-At -\frac{B}{t}}. \end{align} Moreover, the modified Bessel function $K_{\frac{D}{2}}(z)$ with $D=\text{odd integer}$ can be expressed as \begin{align} K_{\frac{D}{2}}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\sum^{\frac{D-1}{2}}_{k=0}\frac{\Bigl(\frac{D-1}{2}+k\Bigr)!}{k! \left(\frac{D-1}{2}-k\right)! (2z)^k}. \end{align} Therefore the explicit form of $E^{(F)}_{\rm part2}[M_F ,L]_{\rm reg.}$ is given by \begin{align} E^{(F)}_{\rm part2}[M_F ,L]_{\rm reg.}=\frac{|M_F|^2}{8\pi^2 L^2}\sum^{\infty}_{w=1}\frac{e^{-2w|M_F|L}}{w^3}\left(1+\frac{3}{2w|M_F|L}+\frac{3}{4w^2 (|M_F|L)^2}\right). \end{align} From the analysis, we obtain {the $L$-dependence of} the regularized total Casimir energy $E^{(F)}[M_F ,L]_{\rm reg.}$ of the fermion as \begin{align} {E^{(F)}[M_F ,L]_{\rm reg.}=\frac{|M_F|^2}{8\pi^2 L^2}\sum^{\infty}_{w=1}\frac{e^{-2w|M_F|L}}{w^3}\left(1+\frac{3}{2w|M_F|L}+\frac{3}{4w^2 (|M_F|L)^2}\right).\label{fermion-Casimir}} \end{align} {Schematic figures} of the regularized Casimir energy of the fermion $E^{(F)}[M_F ,L]_{\rm reg.}$ and its derivative $\frac{d}{dL}E^{(F)}[M_F ,L]_{\rm reg.}$ are depicted in {Figure~\ref{fig:E-fermion-0PI} and Figure~\ref{fig:dEdL-fermion-0PI}}. \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{E-fermion-0PI.pdf} \caption{{Schematic} figure {of the $L$-dependence of the Casimir energy $E^{(F)}[M_F ,L]_{\rm reg.}$}. {The blue, cyan, green, {and red lines} correspond to the {cases} of $M_{F}=3.5$, $M_{F}=2$, $M_{F}=1.1$, {and} $M_{F}=0.4$, respectively}. 
In this plot, $M_{F}$ and $L$ should be regarded as dimensionless parameters by multiplying a fundamental scale of the theory.} \label{fig:E-fermion-0PI} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{dEdL-fermion-0PI.pdf} \caption{Schematic figure of the $L$-dependence of the derivative of the Casimir energy $\frac{d}{dL}E^{(F)}[M_F ,L]_{\rm reg.}$. The blue, cyan, green, and red lines correspond to the cases of $M_{F}=3.5$, $M_{F}=2$, $M_{F}=1.1$, and $M_{F}=0.4$, respectively. In this plot, $M_{F}$ and $L$ should be regarded as dimensionless parameters by multiplying a fundamental scale of the theory.} \label{fig:dEdL-fermion-0PI} \end{center} \end{figure} Combining all the results and concentrating on the $L$-dependence of the Casimir energy, we obtain the regularized total Casimir energy $E[M_F ,L]_{\rm reg.}$ as \begin{align} E[M_F ,L]_{\rm reg.}&=E^{\rm U(1)}[L]_{\rm reg.}+E^{(F)}[M_F ,L]_{\rm reg.}\nonumber\\ &=-\frac{9}{128 \pi^2 L^4}\zeta (5) +\frac{|M_F|^2}{8\pi^2 L^2}\sum^{\infty}_{w=1}\frac{e^{-2w|M_F|L}}{w^3}\left(1+\frac{3}{2w|M_F|L}+\frac{3}{4w^2 (|M_F|L)^2}\right).\label{Casimir-U(1)model} \end{align} Schematic figures of the total Casimir energy $E[M_F ,L]_{\rm reg.}$ and its derivative $\frac{d}{dL}E[M_F ,L]_{\rm reg.}$ are depicted in Figure~\ref{fig.E} and Figure~\ref{fig.dEdL}. We find that there exists a non-trivial global minimum of the Casimir energy. Thus we can conclude that the extra dimension is stable in this setup. A comment on the above results is in order. It was discussed in Ref.~\cite{Ponton:2001hq} that, in the case of $M_{F}=0$, the $L$-dependence of $E^{(F)}[M_F ,L]_{\rm reg.}$ becomes \begin{align} E^{(F)}[M_F ,L]_{\rm reg.}\sim \dfrac{\alpha}{L^{4}}\quad (\alpha :\text{const.}), \end{align} so that no finite global minimum appears in the Casimir energy.
{In the case of $M_{F}\neq 0$, the fermion's positive contribution to the Casimir energy becomes dominant for $L\rightarrow 0$, because the fermion has more degrees of freedom than the gauge field. On the other hand, the negative contribution of the gauge field becomes dominant for $L\rightarrow \infty$, since the contribution of the fermion is exponentially suppressed by the bulk mass. Therefore, we have confirmed that the extra dimension can be stabilized if the following two conditions, pointed out in Ref.~\cite{Ponton:2001hq}, are satisfied: (i) 5d massless gauge bosons exist and all 5d fermions have nonzero bulk masses; and (ii) the degrees of freedom of fermions are sufficiently larger than those of bosons. In our interval extra dimension case, in contrast with orbifold models, a bulk mass $M_{F}$ is not forbidden by any symmetry and should be included, so that the finite global minimum of the Casimir energy can emerge. \begin{figure}[H] \begin{center} \includegraphics[width=7cm]{E-eps-converted-to.pdf} \end{center} \begin{center} \caption{Schematic figure of the total Casimir energy $E[M_F ,L]_{\rm reg.}$ as a function of the length $L$ of the extra dimension. The blue, cyan, green, and red lines correspond to the cases of $M_F =1.7$, $M_F =1.5$, $M_F =1.3$, and $M_F =1$, respectively. In this plot, $M_{F}$ and $L$ should be regarded as dimensionless parameters by multiplying a fundamental scale of the theory. We can find a non-trivial global minimum and can conclude that the extra dimension is stable.} \label{fig.E} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[width=7cm]{dEdL-eps-converted-to.pdf} \caption{Schematic figure of the derivative of the total Casimir energy $\frac{d}{dL}E[M_F ,L]_{\rm reg.}$ as a function of the length $L$ of the extra dimension.
The blue, cyan, green, and red lines correspond to the cases of $M_F =1.7$, $M_F =1.5$, $M_F =1.3$, and $M_F =1$, respectively. In this plot, $M_{F}$ and $L$ should be regarded as dimensionless parameters by multiplying a fundamental scale of the theory. We can find a non-trivial global minimum.} \label{fig.dEdL} \end{center} \end{figure} \section{Theory with point interactions}\label{sec:Theory with point interactions} In the papers \cite{Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka,Cai:2015jla}, a new way to produce generations and a mass hierarchy was proposed by introducing zero-width branes, so-called point interactions, into the extra dimension. In this section, we first briefly review a theory with point interactions. In such a theory, massless zero modes become degenerate, and a nontrivial number of generations appears from a one-generation 5d fermion (a self-contained, comprehensive review of the formulation is provided in Appendix~\ref{sec:appendix}). We then clarify the 4d mass spectrum of the theory with point interactions, which plays an important role in the calculation of the Casimir energy. \subsection{BCs and 4d mass spectrum} In a theory with point interactions, we can regard the point interactions as extra boundary points and need to impose extra boundary conditions at these points.
{Assuming} that only the fermion feels {the} {point interactions} at {$y=L_1,\, L_{2}$}, we can obtain {three-generation} chiral massless zero modes from the {{following BCs} :} \begin{align} \Psi_R (y)&=0\qquad \text{at}\qquad {y=0,\, L_1 {\pm \varepsilon},\, L_{2}{\pm \varepsilon},\, L},\label{RBC}\\ &\text{or}\nonumber\\ \Psi_L (y)&=0\qquad \text{at}\qquad {y=0,\, L_1 {\pm \varepsilon},\, L_{2}{\pm \varepsilon},\, L},\label{LBC} \end{align} {where $\varepsilon$ represents an infinitesimal positive constant.} We should emphasize that the above BCs are consistent with the 5d gauge invariance since {they} are invariant under the 5d gauge transformation: \begin{align} \Psi_R (x,y)\ \rightarrow\ \widetilde{\Psi}_R (x,y)=e^{-ig\Lambda (x,y)}\Psi_R (x,y),\\ \Psi_L (x,y)\ \rightarrow\ \widetilde{\Psi}_L (x,y)=e^{-ig\Lambda (x,y)}\Psi_L (x,y). \end{align} We expand a 5d fermion $\Psi(x,y)$ {with the {BCs (\ref{RBC}) or (\ref{LBC})}:} \begin{align} {\Psi (x,y) =\Psi_R (x,y) +\Psi_L (x,y)=\sum_n \psi^{(n)}_R (x) \mathscr{F}^{(n)}_{\psi_R} (y)+\sum_n \psi^{(n)}_L (x)\mathscr{G}^{(n)}_{\psi_L} (y).} \end{align} It was found in Refs.~\cite{Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka} that we have {three} {degenerate} zero modes {$\mathscr{G}^{(0)}_{i,\psi_L}(y)$} with {$i=1,2,3$ {$\bigl(\mathscr{F}^{(0)}_{i,\psi_R}$} with $i=1,2,3 \bigr)$} under the BC (\ref{RBC}) {$\bigl($the BC (\ref{LBC})$\bigr)$} and can obtain {three} {degenerate} massless chiral fermions {$\psi_{i,L}^{(0)}(x)$} {{$\bigl(\psi_{i,R}^{(0)}(x)\bigr)$}}: \begin{align} \Psi (x,y)&=\Psi_0 (x,y) +\sum_{n=1}^{\infty} \sum^{{3}}_{i=1}\left\{{\psi^{(n)}_{i,R}(x)} \mathscr{F}^{(n)}_{i,\psi_R} (y) +{\psi^{(n)}_{i,L}(x)} \mathscr{G}_{i,\psi_L}^{(n)} (y) \right\}, \label{ModeexpansionPIs}\\ \Psi_0 (x,y)&= \left\{ \begin{array}{l} \displaystyle \sum^{{3}}_{i=1}{\psi^{(0)}_{i,L}(x)}\mathscr{G}_{i,\psi_L}^{(0)}(y),\qquad \text{{for}}\quad\Psi_R (y)=0\quad \text{at}\quad {y=0,\, L_1 {\pm \varepsilon},\, L_{2}{\pm 
\varepsilon},\, L},\\ \displaystyle \sum^{{3}}_{i=1}{\psi^{(n)}_{i,R}(x)}\mathscr{F}_{i,\psi_R}^{(0)}(y),\qquad \text{{for}}\quad\Psi_L (y)=0\quad \text{at}\quad {y=0,\, L_1 {\pm \varepsilon},\, L_{2}{\pm \varepsilon},\, L}, \end{array}\right. \end{align} where $\mathscr{G}_{i,\psi_L}^{(0)} (y)$ {$\bigl(\mathscr{F}_{i,\psi_R}^{(0)}(y)\bigr)$} is a solution of eq.~(\ref{ZEROsolutionG}) {$\bigl($eq.~(\ref{ZEROsolutionF})$\bigr)$} under the BC~{(\ref{RBC}) with eq.~(\ref{BCRtoL})} {$\bigl($the BC{~(\ref{LBC}) with eq.~(\ref{BCLtoR})$\bigr)$}}. The explicit {forms} of $\mathscr{G}_{i,\psi_L}^{(0)}(y)$ {and $\mathscr{F}_{i,\psi_R}^{(0)}(y)$ are} {given by} \begin{align} \mathscr{G}_{i,\psi_L}^{(0)}(y)&=\sqrt{\frac{2M_F}{e^{2M_F l_i}-1}}e^{M_F (y-L_{i-1})} \Bigl[ \theta (y-L_{i-1})\theta(L_i-y)\Bigr]\\ \mathscr{F}_{i,\psi_R}^{(0)}(y)&=\sqrt{\frac{2M_F}{1-e^{-2M_F l_i}}}e^{-M_F (y-L_{i-1})} \Bigl[ \theta (y-L_{i-1})\theta(L_i-y)\Bigr] \end{align} where \begin{align} {l_i \equiv L_i -L_{i-1}\quad (i=1,2,3;\ L_3 =L, L_0=0 )},\label{segment} \end{align} and $\theta (y)$ is the step function. {Schematic figures} of the localized zero modes $\mathscr{G}_{i,\psi_L}^{(0)}(y)$ and $\mathscr{F}_{i,\psi_R}^{(0)}(y)$ {are} depicted in {Figure}~\ref{fig.G_0^i} and {Figure}~\ref{fig.F_0^i}. Each zero mode only lives in a segment and localizes to {a boundary.} \begin{figure}[h] \begin{center} \includegraphics[width=10cm]{G_0.pdf} \vspace{-0.5cm} \caption{{Schematic figures} of localized zero modes {$\mathscr{G}^{(0)}_{i,\psi_L} (y)$ {($i=1,2,3$)} with $M_F >0$}. Each zero mode {of $\mathscr{G}^{(0)}_{i,\psi_L} (y)$} only has a {non-vanishing} value within {the} segment {$L_{i-1}<y<L_{i}$ and localizes to a boundary.}} \label{fig.G_0^i} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=10cm]{F_0.pdf} \vspace{-0.5cm} \caption{{Schematic figures} of localized zero modes {$\mathscr{F}^{(0)}_{i,\psi_R} (y)$ {($i=1,2,3$)} with $M_F >0$}. 
Each zero mode $\mathscr{F}^{(0)}_{i,\psi_R}(y)$ has a non-vanishing value only within the segment $L_{i-1}<y<L_{i}$ and localizes to a boundary.} \label{fig.F_0^i} \end{center} \end{figure} After substituting eq.~(\ref{ModeexpansionPIs}) into the action (\ref{Fermion_action}) and using the orthonormality relations \begin{align} \int^{L}_{0}dy\,\bigl(\mathscr{F}^{(n)}_{i,\psi_R}(y)\bigr)^{\ast} \mathscr{F}^{(m)}_{j,\psi_R} (y)&=\delta_{n,m}\delta_{i,j},\\ \int^{L}_{0}dy\,\bigl(\mathscr{G}^{(n)}_{i,\psi_L} (y)\bigr)^{\ast}\mathscr{G}^{(m)}_{j,\psi_L} (y)&=\delta_{n,m}\delta_{i,j},\quad (i,j=1,2,3) \end{align} we obtain the 4d spectrum of the fermion, \begin{align} S_F =\int d^4 x \left\{ {\cal L}_{n=0} +\sum_{n=1}^{\infty}\sum^{3}_{i=1}\ \bar{\psi}^{(n)}_{i} (i\gamma^{\mu}\partial_{\mu} +m_{i,\psi^{(n)}} )\psi^{(n)}_{i}\right\}, \end{align} where \begin{align} {\cal L}_{n=0}=\left\{\begin{array}{l} \displaystyle \sum^{3}_{i=1}\ \psi^{(0)}_{i,L}(x)(i\gamma^\mu \partial_\mu )\psi^{(0)}_{i,L}(x) \qquad \text{for the BC (\ref{RBC})},\\ \displaystyle \sum^{3}_{i=1}\ \psi^{(0)}_{i,R}(x)(i\gamma^\mu \partial_\mu )\psi^{(0)}_{i,R}(x) \qquad \text{for the BC (\ref{LBC})},\end{array}\right. \end{align} and the 4d mass spectrum $m_{i,\psi^{(n)}}$ is given by \begin{align} m_{i,\psi^{(n)}}=\sqrt{M_F^2 +\left(\frac{n\pi}{l_i}\right)^2}\qquad (i=1,2,3;\ n=1,2,3,\cdots), \end{align} where $l_i$ is defined by eq.~(\ref{segment}). \section{Dynamical generation of fermion mass hierarchy}\label{Semi-realistic} In this section, by using the previous results, we consider an $SU(2)\times U(1)$ model with a single generation of 5d fermions, which produces three generations of 4d chiral fermions via the point interactions, and discuss whether the model can dynamically generate a fermion mass hierarchy. {To this end, we first set up the action and BCs of the model.
{The action consists of an $SU(2)$ gauge field, a $U(1)$ gauge field, a single generation $SU(2)$ doublet fermion, a single generation $SU(2)$ singlet fermion{,} and an $SU(2)$ doublet scalar field. {The contents of our model {mimic} those of the SM without the color degree of freedom{, where the $U(1)$ (hyper)charges of $Q$ and $U$ take those of the quark doublet and the up-type singlet.}}} Extra BCs via point interactions are {a} key ingredient to produce the {three} generations from one generation 5d fermion as we reviewed in Section \ref{sec:Theory with point interactions}{. The} positions of the point interactions {crucially} {affect} the fermion mass hierarchy through the overlap integrals, {as} we will see in Section \ref{subsec:Mass hierarchy}. {We will show that the positions of the point interactions can} be determined dynamically through the minimization of the Casimir energy {and then find that an exponential fermion mass hierarchy naturally {appears}}. Following the results, {we discuss the stability of the extra dimension.} {\subsection{Action and BCs}}\label{sec:Action and BCs_Semi-realistic} \noindent {{We start with} the following action {for the gauge fields and fermions}:} \begin{align} S&=S_G +S_F,\\ S_G&=\int d^4 x \int^L_0 dy \left[-\frac{1}{4}W^{aMN}W_{MN}^{a}-\frac{1}{2}(\partial^M W_M^a)^2 -i\,\bar{c}^a (\partial^M {\cal D}_M)c^a\right.\nonumber\\ & \hspace{10em}\left.{-\frac{1}{4}F^{MN}F_{MN}-\frac{1}{2}(\partial^{M}A_{M})^2 -i\bar{c}(\partial^{M}\partial_{M})c}\right],\\ S_F&=\int d^4 x\int^L_0 dy \left[\bar{Q}\Bigl(i\Gamma^M D_M^{(Q)}+M_F^{(Q)}\Bigr)Q+{\bar{U}\Bigl(i\Gamma^M \partial_M +M_F^{(U)}\Bigr)U}\right],\label{5d-fermions-action} \end{align} where \begin{align} W^a_{MN}&=\partial_M W_N^a -\partial_N W_M^a {-g\varepsilon_{abc}W_M^b W_N^c},\\ {F_{MN}}&{=\partial_{M}A_{N}-\partial_{N}A_{M},}\\ {\cal D}_M c^a &=\partial_M c^a +g\varepsilon_{abc}W^b_M c^c,\\ D_M^{(Q)} Q&=\biggl(\partial_M {-igW_M^a T_a{-ig'A_{M}}}\biggr)Q. 
\end{align} $W_M^{a}$ and $A_{M}$ denote the $SU(2)$ and $U(1)$ gauge fields, and $c^{a}$, $c$ and $\bar{c}^{a}$, $\bar{c}$ denote the corresponding ghost and anti-ghost fields, respectively. $g$ and $g'$ denote the $SU(2)$ and $U(1)$ couplings of the $SU(2)$ doublet fermion. $Q$ and $U$ indicate an $SU(2)$ doublet fermion and an $SU(2)$ singlet fermion, respectively. The bulk mass of a 5d fermion is denoted by $M_F^{(\Psi )}$ ($\Psi =Q,U$). $\varepsilon_{abc}$ is the completely antisymmetric tensor, and $T_a$ is a generator of $SU(2)$ acting on the fundamental representation, which satisfies the following algebra and orthogonality relation: \begin{align} [T_a, T_b]=i\varepsilon_{abc}T_c,\\ \text{tr}\,T_a T_b=\frac{1}{2}\delta_{a,b}. \end{align} According to the analysis given in Section \ref{sec:4d spectum of a 5d $U(1)$ gauge theory on an interval}, we choose boundary conditions for the fields as follows: \begin{align} &\left\{ \begin{array}{l} \partial_y W_\mu^{a}(x,y)=0,\\ W_y^{a}(x,y)=0, \end{array} \right. \hspace{2em}\text{at}\qquad y=0,L,\label{NABC}\\ &\left\{ \begin{array}{l} \partial_y c^{a}(x,y)=0,\\ \partial_y \bar{c}^{a}(x,y)=0, \end{array} \right. \hspace{3em}\text{at}\qquad y=0,L,\\[0.2cm] &\left\{ \begin{array}{l} \partial_y A_\mu(x,y)=0,\\ A_y(x,y)=0, \end{array} \right. \hspace{2em}\text{at}\qquad y=0,L,\label{ABC}\\ &\left\{ \begin{array}{l} \partial_y c(x,y)=0,\\ \partial_y \bar{c}(x,y)=0, \end{array} \right. \hspace{3em}\text{at}\qquad y=0,L,\\[0.2cm] &Q_R (x,y)=0\hspace{5em} \text{at}\qquad y=0,L_1 \pm \varepsilon,L_2 \pm \varepsilon ,L,\\ &U_L (x,y)=0\hspace{5em}\text{at}\qquad y=0,L_1 \pm \varepsilon,L_2 \pm \varepsilon ,L, \end{align} where $L_{1}$ and $L_{2}$ ($0<L_{1}<L_{2}<L$) denote the positions of the point interactions and $\varepsilon$ represents an infinitesimal positive constant. A schematic figure of the extra dimension is depicted in Figure~\ref{fig.semi-realistic_extra_dim}.
We introduce two point interactions at $y=L_{1}$ and $L_{2}$ for the fermions and assume, for simplicity, that all fermions feel the point interactions at the same positions. On the other hand, the gauge and ghost fields are assumed not to feel the point interactions at $y=L_{1}$ and $L_{2}$. We note that the 5d gauge symmetries remain intact under this configuration of boundary conditions. \begin{figure}[h] \begin{center} \includegraphics[width=9cm]{Semi-realistic_extra_dim.pdf} \caption{A schematic figure of the extra dimension. Only the fermions $Q$ and $U$ feel the point interactions (green dots) at $y=L_1,L_2$, while the gauge fields $W_\mu^{a}$, $W_y^{a}$, $A_{\mu}$ and $A_{y}$ do not. This setup is completely consistent with 5d gauge invariance.} \label{fig.semi-realistic_extra_dim} \end{center} \end{figure} \subsection{Determination of the positions of the point interactions}\label{sec:Determination of the positions of the point interactions} \noindent Using the results of Section~\ref{sec.2-3}, we can evaluate the Casimir energy as a function of the positions of the point interactions $\{L_{1}, L_{2}\}$: \begin{align} &E^{(F)}[M^{(Q)}_{F},M^{{(U)}}_{F},L_{1},L_{2},L]_{\rm reg.}\nonumber\\ =&\,2\cdot\frac{|M^{(Q)}_{F}|^2}{8\pi^2 L_{1}^{2}}\sum^{\infty}_{w=1}\frac{{\rm e}^{-2w|M^{(Q)}_{F}|L_{1}}}{w^3}\left(1+\frac{3}{2w|M_{F}^{(Q)}|L_{1}}+\frac{3}{4w^2 |M_{F}^{(Q)}|^2 L_{1}^2}\right)\nonumber\\ &+2\cdot \frac{|M_{F}^{(Q)}|^2}{8\pi^2 (L_{2}-L_{1})^2}\sum^{\infty}_{w=1}\frac{{\rm e}^{-2w|M_{F}^{(Q)}|(L_{2}-L_{1})}}{w^3}\left(1+\frac{3}{2w|M_{F}^{(Q)}|(L_{2}-L_{1})}+\frac{3}{4w^2 |M_{F}^{(Q)}|^2(L_{2}-L_{1})^2}\right)\nonumber\\ &+2\cdot \frac{|M_{F}^{(Q)}|^2}{8\pi^2 (L-L_{2})^2}\sum^{\infty}_{w=1}\frac{{\rm e}^{-2w|M_{F}^{(Q)}|(L-L_{2})}}{w^3}\left(1+\frac{3}{2w|M_{F}^{(Q)}|(L-L_{2})}+\frac{3}{4w^2 |M_{F}^{(Q)}|^2(L-L_{2})^2}\right)\nonumber\\ &+\frac{|M^{{(U)}}_{F}|^2}{8\pi^2 L_{1}^{2}}\sum^{\infty}_{w=1}\frac{{\rm
e}^{-2w|M^{{(U)}}_{F}|L_{1}}}{w^3}\left(1+\frac{3}{2w|M^{{(U)}}_{F}|L_{1}}+\frac{3}{4w^2 {|M^{{(U)}}_{F}|}^2 L_{1}^2}\right)\nonumber\\ &+\frac{|M^{{(U)}}_{F}|^2}{8\pi^2 (L_{2}-L_{1})^2}\sum^{\infty}_{w=1}\frac{{\rm e}^{-2w|M^{{(U)}}_{F}|(L_{2}-L_{1})}}{w^3}\left(1+\frac{3}{2w|M^{{(U)}}_{F}|(L_{2}-L_{1})}+\frac{3}{4w^2 |M^{{(U)}}_{F}|^2(L_{2}-L_{1})^2}\right)\nonumber\\ &+\frac{|M^{{(U)}}_{F}|^2}{8\pi^2 (L-L_{2})^2}\sum^{\infty}_{w=1}\frac{{\rm e}^{-2w|M^{{(U)}}_{F}|(L-L_{2})}}{w^3}\left(1+\frac{3}{2w|M^{{(U)}}_{F}|(L-L_{2})}+\frac{3}{4w^2|M^{{(U)}}_{F}|^2(L-L_{2})^2}\right). \label{Vefffreefermion-2PI} \end{align} {{With the fixed {length} $L$,} the minimization condition for the Casimir energy can determine the {values} of the parameters $\{L_{1},L_{2}\}$. The above potential {turns out to have} the {finite} global minimum at $L_{1}=\frac{L}{3}$, $L_{2}=\frac{2L}{3}$. To verify this statement, we consider the following function $I(x,y,z)$: \begin{align} &I(x,y,z)=f(x)+f(y)+f(z),\label{imitatefunction}\\ &{x,y,z> 0},\\ &x+y+z=1\label{constraintcondition}. \end{align} $I(x,y,z)$ {imitates} the function form of the fermion Casimir energy with the variables $x=\widetilde{L}_{1}$, $y=\widetilde{L}_{2}-\widetilde{L}_{1}$, $z=1-\widetilde{L}_{2}$, where $\widetilde{L}_{i}$ ($i=1,2$) is defined as $\widetilde{L}_{i}\equiv L_{i}/L$. We assume the function $f(x)$ {to be} a {monotonically} decreasing function {and also {$f' (x)\equiv \dfrac{df(x)}{dx}$} to be} a {monotonically} increasing one {with} $\displaystyle{\lim_{x\to 0}}f(x)=+\infty$. {We note that the fermion Casimir energy~(\ref{fermion-Casimir}) turns out to satisfy those assumptions (see Figures~\ref{fig:E-fermion-0PI} and \ref{fig:dEdL-fermion-0PI}).} Substituting the condition eq.~(\ref{constraintcondition}) into eq.~(\ref{imitatefunction}), we obtain \begin{align} I(x,y,1-x-y)=f(x)+f(y)+f(1-x-y). 
\end{align} To investigate an extreme value of the above function, we examine $\dfrac{\partial I}{\partial x}$ and $\dfrac{\partial I}{\partial y}$: \begin{align} \frac{\partial I}{\partial x}&=f'(x)-f'(1-x-y),\\ \frac{\partial I}{\partial y}&=f'(y)-f'(1-x-y), \end{align} From the conditions $\dfrac{\partial I}{\partial x}=0$ and $\dfrac{\partial I}{\partial y}=0$, {we obtain the result} \begin{align} f'(x)=f'(y)=f'(1-x-y){.}\label{f-condition} \end{align} Since we assumed that {$f'(x)$} is a {monotonically} increasing function, the result (\ref{f-condition}) {can be realized only when} \begin{align} x=y=z=\frac{1}{3}. \end{align} Thus we find that $I(x,y,z)$ has {an} extreme value when $x=y=z=\dfrac{1}{3}$. Moreover, the function takes a local minimum {at $x=y=z=\dfrac{1}{3}$}. {To show this}, we consider the second-order differentials with the condition $x=y=z=\dfrac{1}{3}$: \begin{align} &\frac{\partial^{2} I}{\partial x^{2}}\biggr|_{x=y=\frac{1}{3}}=2f''\Bigl(\frac{1}{3}\Bigr),\\ &\frac{\partial^{2} I}{\partial y\partial x}\biggr|_{x=y=\frac{1}{3}}=f''\Bigl(\frac{1}{3}\Bigr),\\ &\frac{\partial^{2} I}{\partial x\partial y}\biggr|_{x=y=\frac{1}{3}}=f''\Bigl(\frac{1}{3}\Bigr),\\ &\frac{\partial^{2} I}{\partial y^{2}}\biggr|_{x=y=\frac{1}{3}}=2f''\Bigl(\frac{1}{3}\Bigr). \end{align} We now consider the Hessian matrix $M$: \begin{align} M= \left( \begin{array}{cc} \dfrac{\partial^2 I}{\partial x^2}\biggl|_{x=y=\frac{1}{3}}& \dfrac{\partial^2 I}{\partial x\partial y}\biggl|_{x=y=\frac{1}{3}}\\[0.2cm] \dfrac{\partial^2 I}{\partial y\partial x}\biggl|_{x=y=\frac{1}{3}}&\dfrac{\partial^2 I}{\partial y^2}\biggl|_{x=y=\frac{1}{3}} \end{array} \right)=\left(\begin{array}{cc} 2f''\Bigl(\frac{1}{3}\Bigr)& f''\Bigl(\frac{1}{3}\Bigr)\\ f''\Bigl(\frac{1}{3}\Bigr)&2f''\Bigl(\frac{1}{3}\Bigr) \end{array}\right). \end{align} Since {$f'(x)$} is a {monotonically} increasing function, $f''(x)>0$. Thus we find that \begin{align} {\rm tr}\,M>0,\\ {\rm det}\, M>0. 
\end{align} The above results imply that the eigenvalues of the matrix $M$ are positive and hence that the point $x=y=\dfrac{1}{3}$ is a local minimum of the potential. Moreover, since there is no other stationary point, the point $x=y=\dfrac{1}{3}$ is the global minimum of the function $I(x,y,z)$. From the above discussion, we conclude that the Casimir energy (\ref{Vefffreefermion-2PI}) has a global minimum at $L_{1}=\frac{L}{3}$, $L_{2}=\frac{2L}{3}$.} \subsection{Fermion mass hierarchy}\label{subsec:Mass hierarchy} Under the above situation, we can produce the fermion mass hierarchy dynamically by introducing the Yukawa coupling to an $SU(2)$ doublet scalar field $\Phi (x,y)$, which possesses the $y$-dependent VEV\footnote{The $SU(2)$ doublet scalar may be regarded as $i\sigma_{2}H^{\ast}$ ($H$ is the Higgs field) in the standard notation.} \begin{align} \langle\Phi(y)\rangle &=\left(\begin{array}{c} \phi(y)\\ 0 \end{array}\right),\label{S-VEV}\\ \phi(y)&={\cal A}e^{My}, \end{align} as in eq.~(\ref{y-dependentVEV}), because of the Robin boundary condition (\ref{Robin_BCs}).\footnote{ {As we discussed in~\cite{Fujimoto:2012wv}, if the warped scalar VEV is provided in the Higgs doublet, a serious violation of gauge universality is expected. Therefore, in the previous works~\cite{Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}, we introduced an additional singlet scalar with the warped VEV, while the Higgs doublet has the ordinary constant VEV, and considered ``higher-dimensional'' Yukawa terms in which both the Higgs doublet and the singlet scalar appear. (Note that we prohibited the ordinary Yukawa terms by introducing a $Z_2$ discrete symmetry: odd parity for the two scalars, even parity for the others.)
In this manuscript, we assume that the Higgs doublet itself carries the warped VEV in order to display the form of the Yukawa terms in a simple way and to avoid confusion as to why two types of scalars would be introduced. This simplification only streamlines the explanation; the mass matrix takes essentially the same form in this simplified setup as in the original setup without gauge universality violation. See the related discussion in section~\ref{sec:Conclusion and Discussion}.} } The situation, in which the $i$-th generation ($i=1,2,3$) fermion lives in the segment $y\in [L_{i-1},L_{i}]$ ($L_{0}\equiv0, L_{3}\equiv L$) and the scalar field lives in every region, produces a large hierarchy among the fermion masses through the Yukawa interaction $\lambda \bar{Q}\Phi U$: \begin{align} m_{i}= \lambda \int^{L}_{0}dy \, \bigl(\mathscr{G}^{(0)}_{i,Q_{L}}(y)\bigr)^{\ast}\phi(y) \mathscr{F}^{(0)}_{i,U_{R}}(y),\qquad (i=1,2,3). \end{align} A schematic figure is depicted in Figure~\ref{fig.mass-hierarchy}. Since the minimization of the Casimir energy determines the positions of the point interactions so as to make the distances between them equal, the exponential VEV of the scalar field produces an exponential mass hierarchy: \begin{align} \frac{m_{2}}{m_{1}}=\frac{m_{3}}{m_{2}}=e^{\frac{1}{3}ML}. \end{align} Thus, a fermion mass hierarchy of around $10^{5}$ can be obtained by suitably choosing the parameter $ML$. We emphasize that this mass hierarchy appears dynamically, since the positions of the point interactions and the form of the VEV of the scalar are determined dynamically. \begin{figure}[h] \begin{center} \includegraphics[width=9cm]{mass-hierarchy-v3.pdf} \caption{Schematic figure of the zero-mode profiles of the chiral massless fermions and the VEV of the scalar field $\phi(y)$. The figure is drawn for the case $M_{F}^{(Q)}>0$ and $M_{F}^{(U)}<0$.
The positions of the point interactions are fixed by the minimization of the Casimir energy, and the $y$-dependent scalar VEV produces an exponential mass hierarchy through the overlap integrals with respect to the extra dimension.} \label{fig.mass-hierarchy} \end{center} \end{figure} \subsection{Stability of the extra dimension} \noindent We have shown that for any fixed length $L$, the positions of the point interactions are dynamically determined to be $L_{1}=\frac{L}{3}$, $L_{2}=\frac{2L}{3}$ by the minimization of the Casimir energy. In this situation, we discuss the stability of the whole extra dimension. In our model the $SU(2)$ doublet scalar field $\Phi(x,y)$ possesses the $y$-dependent VEV and breaks the gauge symmetry as $SU(2)\times U(1)\to U(1)'$. Therefore, we will discuss the stability of the extra dimension in the broken phase. As we investigated in Section 3, the extra dimension can be stabilized if the following two conditions are satisfied: (i) 5d massless gauge bosons exist and all 5d fermions have nonzero bulk masses. (ii) The degrees of freedom of the fermions are sufficiently larger than those of the bosons. The first condition (i) ensures that the Casimir energy approaches zero from negative values in the $L \to \infty$ limit, as in (3.42). The second condition (ii) ensures that the Casimir energy goes to $+\infty$ in the $L \to 0$ limit, as in (3.42). In our model, the $SU(2) \times U(1)$ gauge symmetry is broken by the VEV of the scalar but a subgroup $U(1)'$ is still unbroken. Thus, the first condition (i) is satisfied. The second condition (ii) also seems to be satisfied, because the degrees of freedom of the fermions become three times the number of 5d fermions due to the point interactions. Moreover, there is still room for introducing extra fermions by using type-(III) BCs, which do not produce any exotic chiral massless fermions.
Therefore, in our setup, the extra dimension is expected to be stabilized by the Casimir energy.\footnote{In the full SM-like setup, the gluon and the scalar contribute to the Casimir energy. To determine the value of the length $L$ of the extra dimension, we need to calculate the Casimir energy of all the fields in the gauge symmetry broken phase with the $y$-dependent VEV, which is beyond the scope of this paper. } \section{Conclusion and Discussion}\label{sec:Conclusion and Discussion} In this paper, we proposed a new mechanism to produce a fermion mass hierarchy dynamically by introducing point interactions into a 5d gauge theory on an interval. The interval extra dimension can possibly be stable, and the point interactions produce generations of fermions. The positions of the point interactions were determined by minimizing the Casimir energy of the fermions. The extra-dimension coordinate-dependent VEV of the scalar field, which is also produced dynamically under the Robin boundary condition, generates exponentially different fermion masses through the overlap integrals. We first comment on the contribution of the scalar field to the Casimir energy. In this paper, we ignored the effect of the scalar field on the Casimir energy for simplicity, because the contribution to the Casimir energy from the scalar field has no exact analytic expression due to the Robin BC. However, the inclusion of the scalar field will not change the conclusions about the stability of the whole extra dimension and the positions of the point interactions if the degrees of freedom of the fermions are sufficiently larger than those of the bosons. Next, some comments are given on the flavor mixing of the fermions. In our model, we introduced the point interactions at $y=L_{1}, L_{2}$ for both the $SU(2)$ doublet and the singlet fermions.
Here, the mass matrices are diagonal and flavor mixing cannot appear. In general, however, there is no need for the fermions to share the point interactions, so we can introduce individual point interactions for each fermion; for example, the $SU(2)$ doublet fermion feels the point interactions at $y=L_{1},L_{2}$ while the $SU(2)$ singlet fermion feels the point interactions at $y=L'{}_{1},L'{}_{2}$ \cite{Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka}. Then the mode functions of the $SU(2)$-doublet zero mode $\mathscr{G}^{(0)}_{i,Q_{L}}(y)$ and the $SU(2)$-singlet zero mode $\mathscr{F}^{(0)}_{j,U_{R}}(y)$ may have an overlap for $i\neq j$. In other words, off-diagonal components may appear in the mass matrix as \begin{align} m_{ij}=\lambda \int^{L}_{0}dy \, \Bigl(\mathscr{G}^{(0)}_{i,Q_{L}}(y)\Bigr)^{\ast}\phi(y) \mathscr{F}^{(0)}_{j,U_{R}}(y),\qquad (i,j=1,2,3) \end{align} and flavor mixing can be realized. If the minimization of the Casimir energy fixes the positions of the point interactions at $L_{1}=L'{}_{1}$, $L_{2}=L'{}_{2}$, flavor mixing does not appear, so we need a mechanism that realizes $L_{1}\neq L'{}_{1}$, $L_{2}\neq L'{}_{2}$. One way to avoid the situation $L_{i}=L'{}_{i}$ is to consider higher-loop effects on the Casimir energy, which may lead to $L_{1}\neq L'{}_{1}$, $L_{2}\neq L'{}_{2}$ through the interactions. {Another way is to introduce exotic 5d fermions, which also contribute to the Casimir energy. No chiral massless zero modes appear from such an exotic fermion when we assign a suitable choice of boundary conditions to it. Under such conditions, the low-energy matter content of the model, i.e. the Standard Model particles, remains unchanged.
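As a rough illustration of how such overlap integrals behave, the following one-dimensional numerical sketch evaluates $m_{ij}$ for shared and for shifted point-interaction positions; the exponential zero-mode profiles, the common bulk mass and all parameter values are simplifying assumptions, not the full 5d computation.

```python
import math

# Toy evaluation of the overlap integrals m_ij on an interval of length
# L = 1: exponential zero modes confined to segments overlap with the
# scalar VEV phi(y) = exp(M*y). Profiles, bulk mass and all numbers are
# illustrative assumptions.
L, M, mass = 1.0, 6.0, 3.0
N = 3000
dy = L / N
y = [j * dy for j in range(N + 1)]

def zero_mode(a, b):
    """Normalized exponential zero mode supported on the segment [a, b)."""
    psi = [math.exp(-mass * (x - a)) if a <= x < b else 0.0 for x in y]
    norm = math.sqrt(sum(p * p for p in psi) * dy)
    return [p / norm for p in psi]

def mass_matrix(L_dbl, L_sgl):
    """m_ij for doublet/singlet segment boundaries L_dbl, L_sgl."""
    G = [zero_mode(L_dbl[i], L_dbl[i + 1]) for i in range(3)]
    F = [zero_mode(L_sgl[j], L_sgl[j + 1]) for j in range(3)]
    return [[sum(g * math.exp(M * x) * f
                 for g, x, f in zip(G[i], y, F[j])) * dy
             for j in range(3)] for i in range(3)]

shared = [0.0, 1/3, 2/3, 1.0]        # equal segments, as after stabilization
m = mass_matrix(shared, shared)
print(m[0][1], m[1][0])              # 0.0 0.0: no flavor mixing
print(m[1][1] / m[0][0])             # close to exp(M*L/3), the hierarchy factor

shifted = [0.0, 1/3 + 0.1, 2/3 + 0.1, 1.0]
print(mass_matrix(shared, shifted)[1][0])  # nonzero off-diagonal entry
```

With shared positions the supports of $\mathscr{G}^{(0)}_{i}$ and $\mathscr{F}^{(0)}_{j}$ are disjoint for $i\neq j$, so the mass matrix is diagonal with the ratio $e^{ML/3}$ between successive generations; shifting the singlet positions opens up nonzero off-diagonal entries.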
If we assign a boundary condition at $y=L_{1}$ ($y=L'_{1}$) different from that at $y=L_{2}$ ($y=L'_{2}$), each segment gives a different contribution to the Casimir energy, so that flavor mixing may be produced.} A different strategy is to introduce more than two point interactions, e.g. $N-1$ point interactions for the $SU(2)$ doublet fermion and $N'-1$ point interactions for the $SU(2)$ singlet fermion, whereby we divide the interval extra dimension into more than three segments, i.e. $N$ segments for the doublet and $N'$ segments for the singlet. A combination of type-(I)$\bigl($type-(II)$\bigr)$ and type-(III) BCs can produce three massless zero modes for the $SU(2)$ doublet and singlet fermions, respectively. In this situation, the minimization of the Casimir energy determines the positions of the $N-1$ ($N'-1$) point interactions, and zero modes of the $SU(2)$ doublet (singlet) appear in three of the $N$ ($N'$) segments. A suitable choice of the segments with zero modes may produce off-diagonal components of the mass matrix, i.e. flavor mixing, even after taking into account the stabilization of the point interactions. Finally, we focus on the gauge universality. It was pointed out in Refs.~\cite{Fujimoto:2012wv,Fujimoto:2013ki,Fujimoto:2014fka} that the gauge symmetry breaking due to the $y$-dependent VEV of the scalar field would cause a violation of gauge universality. That is because the $y$-dependent VEV of the scalar modifies the flat profile of the zero-mode function of the gauge boson, and thereby the values of the 4d gauge couplings change between the generations through the overlap integrals. A way to avoid this problem is to introduce two scalar fields; one is an $SU(2)$ doublet scalar and the other is a gauge-singlet scalar field.
In the situation that the constant VEV of the $SU(2)$ doublet scalar breaks the gauge symmetry and the y-dependent VEV of the gauge-singlet scalar provides a mass hierarchy, we can {avoid} the gauge universality violation. It would be of great interest to construct a more phenomenologically {viable} model along the lines discussed in this paper. \section*{Acknowledgement} We thank Nobuhito Maru for discussions in the early stage of this work. This work is supported in part by Grants-in-Aid for Scientific Research [No.~15K05055 and No.~25400260 (M.S.)] from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan.
\section{INTRODUCTION} Graphene, the pioneer of the world of two-dimensional (2D) materials, has been the subject of intense investigation since its discovery almost 20 years ago \cite{novoselov2004,berger2004,novoselov2005,geim2007,novoselov2012}. During the last decade, the graphitization of SiC surfaces in argon atmosphere has matured into one of the major fabrication techniques for epitaxial graphene, as it offers superb and uniform layer quality on a large scale and on a semiconducting substrate \cite{emtsev2009,kruskopf2016}. Furthermore, intercalation stands out as a versatile method to modify the properties of epitaxial graphene on SiC ('epigraphene') via the sandwiching of foreign atomic species at the interface. Many intercalants have been studied by now, each of them giving different doping and proximity effects in epigraphene \cite{riedl2009,mcchesney2010,walter2011,stoehr2016,link2019,rosenzweig2019,rosenzweig2020b,forti2020,rosenzweig2020,briggs2020}. At the same time, the technique offers a way to synthesize otherwise unstable 2D triangular lattices via their dimensional confinement at the graphene/SiC heterointerface. In turn, intriguing properties emerge such as superconductivity in intercalated 2D Ga \cite{briggs2020} or---in diametrical contrast to the respective bulk crystals---the opening of a semiconducting gap for the monolayer limit of Au \cite{forti2020} and Ag \cite{rosenzweig2020}. Being a well-known superconductor even down to epitaxial ultrathin films \cite{zhang2010,brun2016} and also a heavy element with large spin-orbit coupling \cite{dil2008,yaji2010}, Pb has recently attracted a lot of attention as an intercalant for epigraphene, both experimentally \cite{yurtsever2016,chen2020,yang2021,hu2021,gruschwitz2021} and theoretically \cite{visikovskiy2018,wang2021,otheryang2021}.
In fact, proximity spin-orbit coupling has been induced in Pb-intercalated graphene, yet only on metallic Ir and Pt substrates \cite{calleja2015,klimovskikh2017,otrokov2018}. On the other hand, proximity superconductivity has been reported in quasi-freestanding monolayer graphene on SiC decorated with Pb islands \cite{cherkez2018,paschke2020}. Given the very limited extent of this lateral proximity effect into graphene, the homogeneous intercalation of Pb underneath could be a pragmatic workaround as it might even give rise to vertical proximity coupling that spreads over the entire layer. Previous experiments have focused on the induced doping level in epigraphene and the atomic structure upon Pb intercalation \cite{yurtsever2016,chen2020,yang2021,hu2021,gruschwitz2021}. Unfortunately, the interpretation was mostly complicated by the presence of multiple minority phases owing to partial, inhomogeneous intercalation in conjunction with a mixture of different graphene layer thicknesses on the pristine substrates. On the other hand, density functional theory suggests the stability of a triangular lattice monolayer of Pb in $1\times 1$ epitaxial relation to SiC \cite{visikovskiy2018,otheryang2021} but no experimental evidence could be provided so far. While the actual atomic and electronic structure of Pb confined between epigraphene and SiC remains an open question, it will be pivotal for the envisioned interlayer proximity couplings, thus reinforcing the need for a dedicated study. Here, we homogeneously intercalate Pb underneath the carbon buffer layer on SiC. Angle-resolved photoelectron spectroscopy reveals the characteristic $\pi$ bands of decoupled quasi-freestanding monolayer graphene, whose charge neutrality with a residual hole density on the order of $10^9$ cm$^{-2}$ holds great promise in terms of high carrier mobilities.
Momentum microscopy readily provides access to the entire first surface Brillouin zone and uncovers additional electronic bands originating from intercalated Pb at the graphene/SiC heterointerface. We unambiguously demonstrate the metallic character of 2D confined Pb, rendering the system a hot candidate for superconductivity in the 2D metal itself or even in epigraphene via proximity coupling. While x-ray photoelectron spectroscopy and low-energy electron diffraction corroborate the complete intercalation scenario, the latter method reveals an additional $10\times10$ Moiré periodicity relative to graphene, which is yet difficult to reconcile with the available band structure data. Our results establish Pb-intercalated epigraphene as a promising platform to harvest intriguing quantum effects connected to the rich interlayer band structure of a 2D quantum-confined heavy metal. \section{EXPERIMENT}\label{sec2} \subsection{Sample preparation} Nominally on-axis, single crystalline, $n$-doped 6H-SiC(0001) wafer pieces (SiCrystal GmbH) were used as substrates for graphene growth. In order to first remove residual polishing scratches and generate atomically flat terraces, the substrates were etched with molecular hydrogen at $1550$~$^{\circ}\mathrm{C}$ and near ambient pressure \cite{ramachandran1998,soubatch2005}. Heating to around 1465~$^{\circ}\mathrm{C}$ for $4$ min under $800$ mbar argon atmosphere (optimized parameters for the current setup) induces graphitization via Si sublimation \cite{emtsev2009}. In this way, several-$\mu$m-wide terraces develop, homogeneously covered with the $(6\sqrt{3}\times6\sqrt{3})\mathrm{R}30^\circ$ carbon buffer layer reconstruction. This so-called zerolayer graphene (ZLG) interacts covalently with the Si-terminated SiC substrate and does not yet share the electronic properties of a freestanding graphene monolayer \cite{emtsev2008,riedl2010}. 
Hydrogen etching and graphene growth were both performed \emph{ex situ} in an inductively heated reactor hosting a graphite susceptor. The ZLG/SiC samples were subsequently transferred into ultrahigh vacuum (UHV) and degassed at $700$~$^{\circ}\mathrm{C}$ for $30$ min. Pb was evaporated onto ZLG at a rate of about 4~\AA/min from a commercial Knudsen cell (OmniVac). During evaporation, the sample was kept at room temperature. In a first cycle, Pb was deposited for 10 min, followed by annealing at $200$~$^{\circ}\mathrm{C}$, $300$~$^{\circ}\mathrm{C}$, $400$~$^{\circ}\mathrm{C}$, $500$~$^{\circ}\mathrm{C}$, and $550$~$^{\circ}\mathrm{C}$ for $\approx 1$ h, respectively. As a result, Pb atoms migrate under the carbon buffer layer, saturating the Si dangling bonds and decoupling ZLG into quasi-freestanding monolayer graphene (QFMLG). However, only partial intercalation with large residual patches of ZLG was observed at this stage, while further annealing to $600$~$^{\circ}\mathrm{C}$ and $650$~$^{\circ}\mathrm{C}$ for $\approx 30$ min each already marked the onset of de-intercalation. A second cycle then included deposition and annealing steps identical to the first cycle, but only up to a temperature of $550$~$^{\circ}\mathrm{C}$. This rather long preparation protocol turned out to significantly enhance the homogeneity of Pb intercalation, also in comparison to previous efforts \cite{yurtsever2016,chen2020,yang2021,hu2021,gruschwitz2021}, ensuring at the same time the re-evaporation of excess Pb from the sample surface. All temperature measurements were performed with an infrared pyrometer (Impac IGA 140 series) assuming an emissivity of $0.63$. \subsection{Characterization} The quality of the intercalation scenario was monitored on the basis of low-energy electron diffraction (LEED), x-ray photoelectron spectroscopy (XPS) and angle-resolved photoelectron spectroscopy (ARPES).
Photoelectron spectroscopy was carried out by means of an energy-filtered photoemission electron microscope (PEEM) operating at an extractor voltage of $12$ kV in both real and $k$-space (NanoESCA, Scienta Omicron GmbH). Single-shot-type ARPES constant energy cuts were acquired in the framework of momentum microscopy \cite{kroemker2008,tusche2015,tusche2019} from a $\approx 20\times 15$ $\mu\mathrm{m}^2$ spot on the sample surface as defined by an iris aperture. The spectrometer was operated at a nominal energy resolution of $0.1$ eV and a $k$-space field of view of 4.3~{\AA$^{-1}$}, which covers the entire photoemission horizon up to the Fermi level for the employed photon energy of $21.22$ eV (non-monochromatized, unpolarized HeI $\alpha$ provided by a HIS 14 HD source, FOCUS GmbH). XPS spectra were acquired from a $\approx 60$ $\mu\mathrm{m}$ real-space field of view in energy-filtered PEEM mode at a spectrometer resolution of $0.4$ eV. Monochromatized Al K$\alpha$ radiation ($1486.6$ eV) was employed ($\mu$-FOCUS 350 monochromator, SPECS GmbH). Experiments were performed in UHV at room temperature and at a working pressure around $5\times10^{-10}$~mbar ($3\times10^{-9}$~mbar for XPS). \section{RESULTS AND DISCUSSION} \subsection{Low-energy electron diffraction} Fig.~\ref{fig1}(a) presents the LEED pattern acquired after Pb intercalation for an incident beam energy of $70$ eV. A typical intercalation scenario is clearly evident from the intense first-order graphene diffraction (red) while the first-order spots of ($1\times1$)-SiC as well as the $(6\sqrt{3}\times6\sqrt{3})\mathrm{R}30^\circ$ buffer layer reconstruction (green) are largely suppressed, cf.\ Refs.~\citenum{rosenzweig2019,link2019}. On the other hand, a Moiré pattern emerges around the first-order spots of graphene as highlighted by the zoom-in on the 6-fold symmetrized LEED pattern in Fig.~\ref{fig1}(b).
The corresponding spots do not coincide with the $6\sqrt{3}$ supercell grid of SiC and point towards a distinct ordered phase of intercalated Pb. We determine the Moiré periodicity as $10\times 10$ with respect to graphene via a 2D Lorentzian fit [Fig.~\ref{fig1}(c), corresponding line profile in Fig.~\ref{fig1}(d)]. This is consistent with earlier studies where a similar Moiré pattern was observed locally by scanning tunneling microscopy on partially Pb-intercalated epigraphene samples \cite{yurtsever2016,yang2021,hu2021,gruschwitz2021}. \begin{figure}[t] \centering \includegraphics{fig1.pdf} \caption{(a) LEED pattern of Pb-intercalated QFMLG at $70$ eV. Sector inset: $40$ eV LEED pattern of pristine ZLG with Pb islands on top. (b) Six-fold symmetrized raw data and (c) 2D Lorentzian fit around the first-order graphene spots, highlighting the induced Moiré periodicity (half of the corresponding spots encircled in grey). (d) Fitted vertical line profile through the symmetrized pattern of (b) with contributions from first-order graphene diffraction (red) and the $10\times10$ Moiré pattern (grey). (e) Side and (f) top views of the $13\times13$ graphene (red) on $\approx 9\times9$ Pb (grey) on $(6\sqrt{3}\times6\sqrt{3})\mathrm{R}30^\circ$ SiC (green) structure which approximates the $10\times10$ Moiré periodicity to better than $99$\%. Unit cells and superstructures are color-encoded in (f).} \label{fig1} \end{figure} Note that $10\times10$ graphene unit cells would perfectly coincide with a $7\times7$ supercell of intercalated Pb if the latter adopted the very same triangular lattice as the surface layer of Pb(111) islands deposited on top of epigraphene \cite{cherkez2018}. The sector inset of Fig.~\ref{fig1}(a) shows the LEED pattern of pristine ZLG with Pb on top acquired at $40$ eV.
At this energy, the elongated spots originating in the direction of graphene due to first-order diffraction from rotationally disordered Pb(111) islands are clearly visible (grey arrow) \footnote{Note the different position of, e.g., the diamond-like spot arrangement of $6\sqrt{3}$-SiC [green lines in Fig.~\ref{fig1}(a)] on the LEED screen since the size of the Ewald sphere scales with electron energy.}. Additional LEED patterns are presented in the Supplemental Material, Fig.~S1 \cite{supplement}. It is, however, hard to imagine that 2D confined Pb between graphene and SiC [Fig.~\ref{fig1}(e)] retains the exact same lattice parameter of bulk-truncated Pb islands and behaves completely independently of the underlying SiC substrate. In fact, by imposing a tensile strain of only $\approx 1.2$\% to the isolated surface plane of a Pb(111) island, $(\sqrt{3}\times\sqrt{3})\mathrm{R}30^\circ$-Pb could be matched precisely to $2\times2$-SiC and an overall $13\times 13$ graphene on $9\times 9$ Pb on $(6\sqrt{3}\times 6\sqrt{3})\mathrm{R}30^\circ$ SiC supercell would result. This means that by adopting an intermediate strain level of less than $1$\%, 2D Pb could be made commensurate with $13\times 13$ graphene ($\approx 9\times 9$-Pb), $2\times 2$ SiC ($\approx (\sqrt{3}\times\sqrt{3})\mathrm{R}30^\circ$-Pb) and the $10\times10$ Moiré periodicity ($\approx 7\times 7$-Pb) to better than $99$\% on the scale of each single supercell. Fig.~\ref{fig1}(f) provides a schematic top view of the heterostacked lattices where the individual unit cells and superstructures are indicated. The composite system could then offer mechanisms to release the strain over a longer range (potentially involving internal reconstructions, buckling in 2D Pb) while ensuring commensurability of Pb with SiC and the Moiré superstructure on large-scale average.
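The supercell matching invoked above is simple lattice arithmetic and can be checked directly; in the sketch below the lattice constants are textbook values taken as assumptions (graphene $2.46$~\AA, SiC $3.08$~\AA, Pb(111) surface $3.50$~\AA), and a common tensile strain below $1\%$ is scanned for:

```python
import math

# Lattice constants in Angstrom (textbook values, assumed here):
a_gr, a_SiC, a_Pb = 2.46, 3.08, 3.50   # graphene, SiC 1x1, Pb(111) surface

def mismatches(a):
    """Relative mismatches of 2D Pb (lattice constant a) against the
    13x13-graphene, 2x2-SiC and 10x10-Moire supercells."""
    return (abs(13 * a_gr / (9 * a) - 1),
            abs(2 * a_SiC / (math.sqrt(3) * a) - 1),
            abs(10 * a_gr / (7 * a) - 1))

# Scan tensile strains up to 1% relative to the Pb(111) surface value
# and keep the one with the smallest worst-case supercell mismatch.
best = min((max(mismatches(a_Pb * (1 + s * 1e-4))), s * 1e-4)
           for s in range(0, 101))
print(f"strain {best[1]:.2%} gives worst supercell mismatch {best[0]:.2%}")
```

For this choice of constants, a strain of about $1\%$ brings all three matchings below $1\%$ mismatch simultaneously, consistent with the better-than-$99\%$ statement above.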
However, the experimental LEED pattern lacks any spots that correspond directly to the modeled $\approx(\sqrt{3}\times\sqrt{3})\mathrm{R}30^\circ$-Pb on $2\times 2$-SiC superstructure. In particular, no first-order diffraction from the associated Pb lattice is observed throughout a $>200$ eV range of incident electron energy, in contrast to the case of epitaxial Pb islands on top of graphene [cf.\ inset of Fig.~\ref{fig1}(a)]. The model further opposes the prevalent tendency towards $1\times1$ epitaxial order of 2D heteroconfined metals relative to SiC \cite{forti2020,rosenzweig2020,briggs2020,yaji2019}. As will be discussed below, our ARPES data [Figs.\ \ref{fig2} and \ref{fig3}] rather support the latter scenario, which, on the other hand, cannot be readily reconciled with the $10\times 10$ Moiré pattern. It is therefore still unclear whether the observed Moiré phase is directly linked to the atomic arrangement of interfacial Pb and, if so, prevails throughout the entire surface or just relates to minority regions hosting a distinct atomic structure. \subsection{Momentum microscopy of quasi-freestanding monolayer graphene} Fig.~\ref{fig2}(a) shows a volumetric ARPES dataset of Pb-QFMLG acquired by means of momentum microscopy in steps of $50$ meV down to binding energies $E>5$ eV. The corresponding $\approx300$ $\mu\mathrm{m}^2$ real-space field of view is outlined in red in the underlying threshold-PEEM image. Adopting a characteristic crown shape, conically peaking at the six $\overline{\mathrm{K}}$ points of the hexagonal Brillouin zone (BZ) of graphene (dashed red overlay), the sharp ARPES intensity distribution indicates a completely decoupled, quasi-freestanding graphene monolayer very close to charge neutrality. Extracted from the very same 3D dataset, the energy-momentum cut of Fig.~\ref{fig2}(b) displays the graphene $\pi$-band dispersion along the $\overline{\Gamma\mathrm{KM}\Gamma}$ wedge in the first BZ (cf.\ inset). 
Represented by the dashed red curves, a third nearest neighbor tight binding (3rd NN TB) model \cite{classen2020} has been fitted to the spectral maxima \footnote{Dirac-point energy $E_D=-9$ meV and Fermi velocity $v_F=6.85$ eV{\AA} have been fixed according to the detailed analysis of Fig.~\ref{fig2}(e) and (f).}. Besides matching the $\approx 2.85$ eV energy separation between the Dirac point $E_D$ at $\overline{\mathrm{K}}$ and the saddle-point Van Hove singularity $E_\mathrm{VHS}$ at $\overline{\mathrm{M}}$, the model describes the overall $\pi$-band course reasonably well throughout the entire probed energy-momentum range. This goes however at the expense of a physically intuitive size and ratio of the hopping parameters ($\gamma=3.58$, $-0.16$, and $0.18$ eV for first, second, and third NN hopping, respectively) which is similar to epigraphene at higher doping levels \cite{bostwick2007}. \begin{figure*}[t] \centering \includegraphics{fig2.pdf} \caption{(a) Volume rendered 3D ARPES stack of Pb-QFMLG with the hexagonal BZ of graphene shown by dashed red lines. A region of about $20\times 15$ $\mu\mathrm{m}^2$ was probed in real space as indicated in the underlying threshold-PEEM image. (b) $\pi$ band dispersion extracted from (a) along the $\overline{\Gamma\mathrm{KM}\Gamma}$ wedge in the first BZ and fitted with a 3rd NN TB model \cite{classen2020} (dashed red curves). (c) Sequence of $k_x$-$k_y$ cuts at binding energies of $3.90$ eV, $2.85$ eV, $1.20$ eV, and at the Fermi level $E_F$. Each cut is clipped to a single $k$-space quadrant and the corresponding TB contour of QFMLG (solid green, blue, and purple curves, respectively) is overlayed to the counterclockwise adjacent quadrant for better visibility of the raw data. Spectral weight is partially suppressed due to a scattering-resonance effect (grey arrows) \cite{nazarov2013,krivenkov2017}. 
(d) Energy-momentum cut through the graphene Dirac cone at $\overline{\mathrm{K}}$, perpendicular to the $\overline{\Gamma\mathrm{K}}$ direction. Dashed red curves reproduce the TB fit according to panel (b). (e) A detailed determination of the Dirac-point binding energy $E_D$ for all six $\overline{\mathrm{K}}$ points (red squares) yields a mean value $\overline{E}_D=-9$ meV (black line) with standard deviation $\sigma=4$ meV (grey bars). (f) Averaged Dirac cone band velocity $\overline{v}$ perpendicular to $\overline{\Gamma\mathrm{K}}$ as a function of binding energy (black curve). The grey corridor gives the corresponding standard error of the mean $\sigma_{\overline{v}}$ and the dashed red curve represents the 3rd NN TB model. (g) The secondary spectral cutoff at $\overline\Gamma$ reveals a work function of $4.46$ eV for Pb-QFMLG (data: red, fit: black). A reference spectrum for MLG/SiC with a $\approx 0.4$ eV lower work function is also shown (yellow).} \label{fig2} \end{figure*} Fig.~\ref{fig2}(c) presents an overview of $k_x$-$k_y$ cuts at different binding energies relative to the Fermi level $E_F$ (as acquired directly by the NanoESCA momentum microscope). Each cut is clipped to a $k$-space quadrant and the respective 3rd NN TB contour as per the band structure fit of Fig.~\ref{fig2}(b) is overlayed to the counterclockwise adjacent quadrant (solid green, blue, and purple curves). The BZ of QFMLG is indicated by dashed red lines. An electron pocket around $\overline{\Gamma}$ (binding energy $E=3.90$ eV) transforms---via the $\overline{\mathrm{M}}$-point Van Hove singularity ($2.85$ eV)---into hole pockets around $\overline{\mathrm{K}}$ and $\overline{\mathrm{K}}$' ($1.20$ eV). The latter eventually converge into a singular, pointlike Fermi surface ($E=E_F$), reflecting the apparent charge-neutrality of Pb-QFMLG. 
For specific kinetic energies, excited photoelectrons can repeatedly scatter back and forth between graphene and its vacuum barrier in what is known as a scattering resonance \cite{nazarov2013,krivenkov2017}. Such a resonance entails a well-defined loss of spectral weight, dispersing steeply upwards from the vacuum level with near-unity effective mass and sharing the periodicity of the graphene host lattice. We thus attribute the sharp arcs of suppressed $\pi$-band intensity as highlighted by the grey arrows in the $k_x$-$k_y$ cut at $E=3.90$ eV [top left of Fig.~\ref{fig2}(c)] to repeated scattering resonances folding back into the first BZ \footnote{The scattering resonance centered around normal emission ($\overline{\Gamma}$) in the first BZ exceeds the available photoemission horizon}. Concomitantly, the sharp features confirm a clean surface of QFMLG down to the nanoscale without residual Pb clusters on top, whose presence would perturb the vacuum barrier and potentially destroy the scattering resonance condition completely via local rehybridization of graphene towards partial $sp^3$ character \cite{krivenkov2017}. The energy-momentum cut of Fig.~\ref{fig2}(d) provides a near-$E_F$ closeup of the $\pi$-band dispersion through the $\overline{\mathrm{K}}$ point, perpendicular to the $\overline{\Gamma\mathrm{K}}$ direction ($k_x$ for fixed $k_y=1.7$ \AA$^{-1}$). A single, sharp Dirac cone converging in the absolute vicinity of $E_F$ demonstrates the uniform intercalation of Pb without any admixture of differential (minority) doping levels, thus representing a significant step beyond previous studies of Pb-intercalated epigraphene \cite{yurtsever2016,gruschwitz2021}. Dashed red curves overlayed to Fig.~\ref{fig2}(d) again trace the 3rd NN TB model as fitted to the wide-range $\overline{\Gamma\mathrm{KM}\Gamma}$ data of Fig.~\ref{fig2}(b). 
The model describes the experimental Dirac-cone dispersion remarkably well also in the direction perpendicular to $\overline{\Gamma\mathrm{K}}$, although the corresponding data have not been taken into account for the fit itself. The apparent charge neutrality of Pb-QFMLG can still be quantified more precisely. Fig.~\ref{fig2}(e) shows the Dirac-point energy $E_D$, determined individually for all six $\overline{\mathrm{K}}$\textsuperscript{(}'\textsuperscript{)} points via the crossing of linear fits within $0.2$--$1.0$ eV down from $E_F$ to the extracted dispersion in the direction perpendicular to $\overline{\Gamma\mathrm{K}}$ \footnote{Values of $E_D$ have also been corrected for meV-sized variations in Fermi-edge position with momentum, resulting from the non-isochromaticity \cite{tusche2019} and putative inhomogeneities in the electronic lens system of the spectrometer}. All $\overline{\mathrm{K}}$\textsuperscript{(}'\textsuperscript{)} points reveal very small $p$-type doping with a Dirac crossing slightly above the Fermi level ($E_D<0$). The mean value turns out as $\overline{E}_D=-9$ meV with standard deviation $\sigma=4$ meV. Using the well-known relation $n=E_D^2/(\pi v_F^2)$ where the Fermi velocity $v_F$ is given in units of eV{\AA} [cf.\ Fig.~\ref{fig2}(f)], we determine a residual hole density for $p$-type doped Pb-QFMLG of only $n=(5.5\pm2.5)\times10^9$ cm$^{-2}$. Corresponding only to a single charge carrier per $\approx700,000$ C atoms, this vividly highlights the essential charge neutrality of Pb-QFMLG. The latter is interesting in terms of the overall limited carrier mobilities of epigraphene, which have often been ascribed to interactions with its substrate as evidenced by, e.g., finite doping \cite{emtsev2009,sinterhauf2020,aprojanz2020}. 
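The quoted numbers follow from elementary unit conversion of the relation above; a quick numerical cross-check (no new physics, just the stated $E_D$ and $v_F$):

```python
import math

# Back-of-the-envelope check of the quoted hole density, using
# n = E_D^2 / (pi * v_F^2) with E_D in eV and v_F in eV*Angstrom,
# so that n comes out in Angstrom^-2 (1 A^-2 = 1e16 cm^-2).
E_D, v_F = 9e-3, 6.85
n_cm2 = E_D**2 / (math.pi * v_F**2) * 1e16
print(f"hole density: {n_cm2:.1e} cm^-2")      # ~5.5e9 cm^-2

# Carriers per C atom: 2 atoms per graphene unit cell of area
# sqrt(3)/2 * a^2 with a = 2.46 Angstrom.
atoms_cm2 = 2 / (math.sqrt(3) / 2 * 2.46**2) * 1e16
print(f"one carrier per {atoms_cm2 / n_cm2:,.0f} C atoms")   # ~700,000
```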
Note that epigraphene is generally doped via (i) the spontaneous polarization for hexagonal SiC \cite{ristein2012}, (ii) excess-charge transfer for $n$-type SiC \cite{mammadov2014}, and (iii) element-specific charge transfer from intercalated layers \cite{riedl2009,mcchesney2010,walter2011,stoehr2016,link2019,rosenzweig2019,rosenzweig2020b,forti2020,rosenzweig2020,briggs2020}. Future studies will thus have to clarify whether the charge neutrality of Pb-QFMLG results from perfect screening of all substrate contributions (with simply no charge transfer from interfacial Pb onto graphene) or is merely due to coincidental compensation of all doping mechanisms (i)--(iii). In the latter case, the depletion of $n$-type SiC alone can be expected to contribute carrier densities on the order of $10^{12}$ cm$^{-2}$ \cite{mammadov2014}. A suppression of doping mechanism (ii), as anticipated well below the bulk dopants' freeze-out temperature or for semi-insulating SiC, should therefore entail a substantial upshift of $E_D$ and thus help to elucidate the origin of charge neutrality in Pb-QFMLG. Fig.~\ref{fig2}(f) shows the mean absolute band velocity $\overline{v}$ of the Dirac cone perpendicular to $\overline{\Gamma\mathrm{K}}$ as a function of binding energy, given by the smoothed derivative $\vert dE/dk\vert$ of the averaged experimental dispersion over all six BZ corners, i.e.\ 12 $\pi$-band branches (solid black curve). The grey corridor indicates the standard error of the mean $\sigma_{\overline{v}}$ as per the individual datasets and the dashed red curve represents once again the 3rd NN TB model [cf.\ Figs.~\ref{fig2}(b)--(d)]. Towards $E_F$ the inferred value of $\overline{v}= 6.9\pm0.1$ eV{\AA} turns out to be consistent with the well-established Fermi velocity of charge-neutral graphene \cite{castroneto2009}.
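The velocity extraction described for Fig.~\ref{fig2}(f) is, in essence, a smoothed numerical derivative of the measured dispersion. A minimal sketch follows (the moving-median window is an illustrative choice; the actual smoothing used for the figure is not specified):

```python
import statistics

def band_velocity(E, k, window=3):
    """|dE/dk| via finite differences followed by a moving-median smooth.

    E, k: equally spaced dispersion samples (eV, 1/Angstrom). The window
    size is illustrative, not the one used for the published figure.
    """
    dk = k[1] - k[0]
    v = [abs(E[i + 1] - E[i]) / dk for i in range(len(E) - 1)]
    half = window // 2
    return [statistics.median(v[max(0, i - half):i + half + 1])
            for i in range(len(v))]

# A linear cone E = v_F * k with v_F = 6.9 eV*Angstrom returns v_F everywhere:
k = [0.01 * i for i in range(20)]
E = [6.9 * ki for ki in k]
print(band_velocity(E, k)[0])  # ~6.9
```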
However, $\overline{v}$ is found to peak above the strictly monotonic TB curve at binding energies of $\approx 0.6$ eV and, even more prominently, at $\approx 1.1$ eV. Such renormalizations differ from the gradual changes in band velocity previously observed for graphene in response to its dielectric environment \cite{siegel2011,hwang2012}. They could instead point towards hybridization with potential interlayer electronic states of Pb, although the latter cannot be readily discerned from the dominant $\pi$ bands of QFMLG (at least for the available photon energy). Note that such hybridization effects have previously been linked to enhanced spin splitting in the Dirac cone of other intercalated graphene systems \cite{marchenko2012,marchenko2016}. Similar effects might hence also occur in Pb-QFMLG, suggesting future studies by means of spin-resolved ARPES. In order to compress the entire photoelectron hemisphere into the narrow angular acceptance cone of the NanoESCA momentum microscope, the sample is by default subject to an extractor voltage. This provides immediate access to the secondary spectral cutoff where the slowest photoelectrons pile up, barely overcoming the sample work function at vanishing parallel momentum. Fig.~\ref{fig2}(g) shows the corresponding normal emission spectrum for Pb-QFMLG (red curve) fitted with the product of an error function and an exponential decay (black curve). The horizontal axis is given in kinetic energy relative to $E_F$ so that the work function $\Phi=4.46\pm0.02$ eV of Pb-QFMLG can be directly read off from the secondary cutoff. The latter is broadened merely to the $0.1$ eV resolution limit of the spectrometer, suggesting the absence of domains with differing work functions and reinforcing the homogeneity of the intercalated sample. At the same time, the work function of Pb-QFMLG turns out to be $\approx 0.4$ eV higher than for epitaxial monolayer graphene (MLG) on SiC [cf.\ yellow reference spectrum in Fig.~\ref{fig2}(g)].
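The cutoff fit just described can be sketched as a simple model function; note that the exact parameterization used for the published fit is not given, so the functional form below (step position $\Phi$, step width, decay constant) is an assumption:

```python
import math

def cutoff_model(E, amp, phi, width, tau):
    """Secondary-cutoff model: error-function step times exponential decay.

    E, phi, width, tau are in eV (kinetic energy relative to E_F); the
    step position phi estimates the work function. The parameterization
    is an illustrative guess, not the published fit function.
    """
    step = 0.5 * (1.0 + math.erf((E - phi) / width))
    decay = math.exp(-(E - phi) / tau)
    return amp * step * decay
```

Fitting this model to the measured spectrum (e.g. by least squares) would return $\Phi$ directly as the step position.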
This work function difference is well consistent with the Dirac-point energy of Pb-QFMLG (charge neutral, $E_D\approx E_F$) relative to MLG (moderately $n$ doped, $E_D\approx 0.4$ eV) \cite{riedl2010}. The work function of Pb-QFMLG thus appears to be governed by a mere Fermi-level shift while the vacuum level itself remains unperturbed. As already indicated by the presence of sharp scattering resonances [Fig.~\ref{fig2}(c)], the rigid behavior of $\Phi$ with $E_D$ corroborates a proper carbon-honeycomb surface termination without residual Pb left behind on top. \subsection{Interlayer electronic structure of Pb} Based on the LEED Moiré pattern, which hints at an ordered phase of intercalated Pb (cf.\ Fig.~\ref{fig1}), and the experience from other intercalants \cite{forti2020,rosenzweig2020,briggs2020}, interlayer band structure formation in 2D confined Pb is well conceivable and warrants specific focus when studying the electronic structure of Pb-QFMLG. However, previous ARPES measurements of (partially) Pb-intercalated epigraphene are entirely limited to single energy-momentum cuts through the $\overline{\mathrm{K}}$ point of graphene \cite{yurtsever2016,gruschwitz2021} where the dominant $\pi$ bands easily outshine any potential interlayer electronic states associated with intercalated Pb. In this regard, momentum microscopy readily allows for a more complete band structure survey of the system as it covers the entire BZ in a single-shot-type measurement. \begin{figure*}[t] \centering \includegraphics{fig3.pdf} \caption{(a) ARPES cut along the $\overline{\mathrm{K}\Gamma\mathrm{M}}$ direction of graphene (equivalent to $\overline{\mathrm{M}\Gamma\mathrm{K}}$ of SiC) extracted from the dataset of Fig.~\ref{fig2} after 6-fold symmetrization to improve the signal-to-noise ratio of weaker spectral features. (b) Corresponding EDCs and (c) MDCs taken at selected momenta and binding energies [cf.\ colored arrows next to panel (a)].
(d)--(g) Series of $k_x$-$k_y$ cuts at $1.20$, $0.65$, $0.40$, and $0.05$ eV below the Fermi level $E_F$ with overlayed guides to the eye. The surface BZ of SiC is indicated in (g) by dashed green lines. $\pi$-band satellites (s and s$^*$, red) due to HeI $\beta$, $\gamma$ as well as Dirac cone replicas (r, red) backfolded via the surface reciprocal lattice vectors $\vec{g}$ of SiC are marked. Additional bands can be assigned to bulk SiC [(a) and (c), green] and intercalated Pb (grey). The free-electron dispersion of the scattering resonances is also indicated [(a) and (d), blue]. Counts in panels (f) and (g) have been multiplied by factors of $1.5$ and $4$, respectively.} \label{fig3} \end{figure*} Fig.~\ref{fig3}(a) shows an energy-momentum cut along the $\overline{\mathrm{K}\Gamma\mathrm{M}}$ path of graphene (corresponding to the $\overline{\mathrm{M}\Gamma\mathrm{K}}$ direction of SiC), whose color scale has been optimized so as to improve the visibility of any weaker spectral features. In order to also enhance their signal-to-noise ratio, the ARPES cut has only been extracted after 6-fold rotational symmetrization of the parent dataset of Fig.~\ref{fig2} \footnote{This procedure should be consistent with any 2D hexagonal arrangement of intercalated Pb, i.e., the LEED-based model of Fig.~\ref{fig1} or $1\times1$ epitaxial order as previously reported for other intercalants. At the same time, the threefold symmetry of bulk SiC is sacrificed.}. While the dominant $\pi$ bands of QFMLG oversaturate the applied color scale by up to a factor of ten, their weak HeI $\beta$ and $\gamma$ satellites can now be discerned, displaced by the difference in photon energy of $1.87$ eV (s) and $2.52$ eV (s$^*$), respectively. In addition, a replicated Dirac cone is made visible along the $\overline{\Gamma\mathrm{K}}$ direction, centered at $k_\parallel\approx -0.7$ \AA$^{-1}$ (labelled r). 
It can be attributed to backfolding of the opposite $\overline{\mathrm{K}}$-point region ($k_\parallel>0$) via a surface reciprocal lattice vector of SiC as discussed below. The hole-like bands centered around normal emission are bulk states of SiC. Their maximum at $E\approx 1.3$ eV turns out to be consistent with the expected binding energy of the global valence band maximum of SiC (located at $\Gamma$ in the bulk BZ) when taking into account the upward band bending as measured by XPS (cf.\ Fig.~\ref{fig4}). Beyond these distinct features, the ARPES cut of Fig.~\ref{fig3}(a) uncovers additional dispersive states which can be assigned neither to graphene nor to SiC and must therefore be related to intercalated Pb. A corresponding band starts out electron-like from $E\approx 1.3$ eV at $\overline{\Gamma}$ where it is still eclipsed by the SiC valence band maximum. It initially demonstrates a relatively flat upward dispersion in both orthogonal momentum directions until splitting up into an upper and lower branch around $\vert k_\parallel\vert \approx 0.7$ \AA$^{-1}$. The evolution of a single Pb-related spectral peak ($k_\parallel=\pm 0.6$ \AA$^{-1}$) into two distinct peaks ($k_\parallel=\pm 0.9$ \AA$^{-1}$) is specifically highlighted by the corresponding energy distribution curves (EDCs) in Fig.~\ref{fig3}(b). While the upper branch retains an electron-like dispersion also beyond the splitting region, the lower branch rather adopts a negative curvature and appears to turn into a steep downward dispersion. However, tracing the Pb-related bands becomes increasingly difficult for higher values of parallel momentum ($k_\parallel\gtrsim 1$ \AA$^{-1}$) as their spectral weight largely diminishes and in particular the lower branch can no longer be discerned. Just like the spectral weight loss of the upper branch upon approaching the Fermi level, this could be linked to a change in orbital character as Pb offers both $6s$ and $6p$ valence electrons.
Future studies with varying photon energy and polarization are therefore highly desirable in order to selectively enhance these weak Pb-related signatures close to $E_F$ and further clarify their precise dispersion. Fig.~\ref{fig3}(c) shows two momentum distribution curves (MDCs) extracted from the ARPES cut in Fig.~\ref{fig3}(a) at a binding energy of $1.6$ eV (scaled to $15$\% intensity) and right at $E_F$. Colored markers indicate contributions of graphene's $\pi$ bands including their satellites (s, s$^*$) and replicas (r) as well as SiC bulk states. Additional MDC peaks (grey markers) are due to intercalated Pb and their presence at $E_F$ (bottom MDC) clearly confirms the metallic nature retained by the intercalant. This behavior is in complete contrast to the group-$11$ noble metals, which develop a global semiconducting gap when intercalated as monolayers beneath epigraphene \cite{forti2020,rosenzweig2020}. The metallic character of intercalated Pb at the graphene/SiC heterointerface could for instance be related to a different atomic arrangement (cf.\ Fig.~\ref{fig1}) or the distinct valence electron configuration of Pb, which also comprises $p$ electrons. In any case, as 2D confined Pb on SiC remains metallic, it is well conceivable that the superconducting properties are also inherited from the bulk crystal---just like for epitaxial ultrathin films of Pb on Si(111) \cite{zhang2010,brun2016}. Our experimental findings therefore hold great promise and motivate future experiments in view of a potential superconducting transition in Pb-QFMLG. A series of 6-fold symmetrized $k_x$-$k_y$ cuts at different binding energies is shown in Fig.~\ref{fig3}(d)--(g). At $1.20$ eV below $E_F$ [Fig.~\ref{fig3}(d)], the electron pocket centered at $\overline{\Gamma}$ and demonstrating slight hexagonal warping corresponds to the flat bottom of the Pb-related band.
Likewise, the adjacent spectral weight around $k_\parallel= 0.8$ \AA$^{-1}$ towards $\overline{\mathrm{K}}_\mathrm{gr}$ ($\overline{\mathrm{M}}_\mathrm{SiC}$) can be assigned to the lower, split-off Pb branch which acquires a hole-like dispersion. In addition, replicated Dirac cones (r) also contribute to the concerned regions in $k$-space as discussed below. The Pb-related features appear to be largely suppressed outside the curved hexagonal corridor enclosed by the giant scattering resonance arcs (blue curves). For a binding energy of $0.65$ eV [Fig.~\ref{fig3}(e)], a single electron pocket is found to encircle $\overline{\Gamma}$ at a radius of $\approx 0.9$ \AA$^{-1}$ and with enhanced intensity in the direction of $\overline{\mathrm{K}}_\mathrm{gr}$. It corresponds to the upper branch of the Pb-related band in the energy-momentum cut of Fig.~\ref{fig3}(a) (for a corresponding zoom-in with enhanced contrast see Supplemental Material, Fig.~S2 \cite{supplement}). Additional features or repeated contours due to intercalated Pb are, however, not yet readily discernible at higher values of $k_\parallel$ in Fig.~\ref{fig3}(d) and (e). This is further complicated by the dominant spectral weight of graphene's $\pi$ bands and also their HeI $\beta$ satellites (red curves) towards the photoemission horizon. At $0.4$ eV below $E_F$ [Fig.~\ref{fig3}(f)] the $k_x$-$k_y$ cut turns out to be further enriched. Additional concave segments now appear towards $\overline{\mathrm{K}}_\mathrm{gr}$, almost touching the inner contour at about $1$ \AA$^{-1}$. Along the $\overline{\mathrm{KMK'}}$ line of graphene, the ARPES intensity also appears to be enhanced (centered around $\overline{\mathrm{M}}_\mathrm{gr}$). This has to be attributed largely to the intercalant as the minimal satellite-photon flux of HeI $\gamma$ alone will not be sufficient to probe the saddle-point region of QFMLG (s$^*$, red curves) at an adequate intensity.
In the direct vicinity of the Fermi level, the intercalant-related features appear somewhat sharper [Fig.~\ref{fig3}(g)]. This is, however, accompanied by an overall intensity loss, and the counts in this panel have been quadrupled for enhanced visualization. An inner hexagonal contour is now found surrounded by additional segments at $k_\parallel\approx 1$ \AA$^{-1}$, running almost parallel and close to the border of the first SiC surface BZ (the latter indicated by the dashed green overlay). These distinct contours correspond to the two Fermi level crossings marked along $\overline{\Gamma\mathrm{K}}_\mathrm{gr}$ in the bottom MDC of Fig.~\ref{fig3}(c). They are also highlighted in the energy-momentum cut of Fig.~S2 in the Supplemental Material \cite{supplement}. Note that the slightly curved segments close to the SiC surface BZ border seem to develop out of the broader, concave contours visible in Fig.~\ref{fig3}(f) and actually move towards $\overline{\Gamma}$ when tracing their dispersion coming from higher binding energies. While the energy-momentum cut of Fig.~\ref{fig3}(a) \cite{supplement} can be interpreted accordingly in the vicinity of $\overline{\mathrm{M}}_\mathrm{SiC}$, repeated counterparts of the Pb band that originates in the first BZ cannot be readily identified at higher values of $k_\parallel$ as the dominant spectral weight of the graphene $\pi$ bands and even their satellites sets in. Likewise, this holds for the $k_x$-$k_y$ cut of Fig.~\ref{fig3}(g) where potential counterparts of the Pb-related arcs ($k_\parallel\approx 1$ \AA$^{-1}$) are easily overshadowed by satellite $\pi$-pockets of QFMLG (s$^*$, red curve). Therefore, the actual BZ of the intercalant is hitherto difficult to determine based solely on the accessible dispersion of extrinsic, Pb-related bands.
Note that the enhanced spectral weight around $\overline{\mathrm{M}}_\mathrm{gr}$, which is attributed to Pb, persists in Fig.~\ref{fig3}(g) while the HeI $\gamma$ satellite pockets of graphene (s$^*$, red curve) converge towards $\overline{\mathrm{K}}_\mathrm{gr}$. As discussed above, the actual BZ geometry of intercalated Pb is difficult to deduce from the available data of Fig.~\ref{fig3} owing to the weak intensity of periodically repeated band features and the unclear dispersion of their corresponding constant-energy contours. However, weak Dirac cone replicas (r) appear at $k_\parallel\approx 0.7$ \AA$^{-1}$ along $\overline{\Gamma\mathrm{K}}_\mathrm{gr}$ as highlighted by red curves in Fig.~\ref{fig3}(d)--(f). These replicas are generated by backfolding of the opposite BZ corners of graphene via the surface reciprocal lattice vectors $\vec{g}$ of SiC as indicated in Fig.~\ref{fig3}(g). As QFMLG is electronically decoupled from the SiC substrate via intercalation, its photoelectrons should predominantly feel the periodicity of the underlying intercalant in terms of final state photoelectron diffraction. Hence, the QFMLG replicas generated by $\vec{g}_\mathrm{SiC}$ suggest a $1\times 1$ epitaxial arrangement of interfacially confined Pb relative to SiC, that is, identical surface BZs as previously observed for other intercalants \cite{forti2020,rosenzweig2020,briggs2020}. In contrast, our ARPES data do not reflect the $(\sqrt{3}\times\sqrt{3})\mathrm{R}30^\circ$-Pb on $2\times 2$-SiC superstructure \footnote{A corresponding first BZ of Pb would be oriented along graphene and have its $\overline{\mathrm{K}}$ points coinciding with $\overline{\mathrm{M}}_\mathrm{SiC}$} as deduced from the $10\times 10$ Moiré superperiodicity in LEED (cf.\ Fig.~\ref{fig1}). The same holds for the Moiré periodicity itself, counterparts of which cannot be identified in the momentum microscopy data of Fig.~\ref{fig3}.
We note, however, that the kinetic energy of the photoexcited electrons in the present ARPES measurement is substantially lower than the $70$ eV used for the LEED pattern of Fig.~\ref{fig1}. Moiré replicas might hence reemerge when probing the electronic structure at higher photon energies. To this end, synchrotron-based ARPES at tunable photon energy would be desirable, not least in view of optimizing the cross section for the intriguing interlayer bands of Pb. \subsection{X-ray photoelectron spectroscopy} \begin{figure*}[t] \centering \includegraphics{fig4.pdf} \caption{Fitted XPS spectra of (a) C $1s$, (b) Si $2p$, and (c) Pb $4f$ for pristine ZLG (top) and Pb-intercalated QFMLG (bottom). The top spectrum in (c) corresponds to nominally $\approx 8$ nm of Pb deposited onto pristine ZLG. Insets in (a) represent schematic ball-and-stick side views of the system and individual fit components are explained in the text.} \label{fig4} \end{figure*} We finally discuss the evolution of selected XPS core-level spectra, acquired from a $3600$ $\mu\mathrm{m}^2$ area before and after the intercalation of Pb \footnote{Spectra were fitted using the free software AAnalyzer\textregistered, \url{https://rdataa.com/download}. Unless explicitly mentioned otherwise, Voigt peaks with a fixed Lorentzian width of $0.1$ eV were employed.}. Fig.~\ref{fig4}(a) displays the well-established C $1s$ spectrum of pristine ZLG (top) with an intense bulk component (green curve) at a binding energy of $283.6$ eV and two surface components S1, S2 (yellow curves) at $284.6$ eV and $285.3$ eV, respectively. Recall that S1 represents the carbon atoms of ZLG covalently bonded to the SiC substrate while S2 accounts for those not interacting with the substrate (cf.\ ball-and-stick inset) \cite{riedl2010}. As about every third carbon atom of ZLG binds to the substrate \cite{riedl2010}, the area ratio S2:S1 has been fixed at 2:1 for the fit.
After the intercalation of Pb, it is now the intercalant atoms that saturate the Si dangling bonds of the substrate and the covalent interaction with the graphene layer is lifted. Consequently, S1 and S2 are replaced by a single QFMLG peak at $284.5$ eV (bottom spectrum) that has been fitted with an asymmetric Doniach-Sunjic line shape. The corresponding asymmetry parameter $\alpha$ is expected to scale with the carrier density of graphene \cite{sernelius2015,link2019,rosenzweig2019}. However, despite the charge neutrality of Pb-QFMLG, we find $\alpha=0.18$, which already exceeds the values reported for other epigraphene systems at substantially higher doping levels \cite{riedl2009,riedl2010}. Since low-energy excitations within graphene itself are largely impeded by its pointlike Fermi surface (cf.\ Fig.~\ref{fig2}), the asymmetry could instead be explained by the coupling to elementary excitations around the Fermi level in the metallic Pb layer (cf.\ Fig.~\ref{fig3}). Pb intercalation further shifts the bulk C $1s$ component of SiC by about $1$ eV to lower binding energies. Such behavior is characteristic of many different intercalants and can be attributed to the associated change in surface band bending at the SiC/graphene interface \cite{riedl2009,rosenzweig2019,rosenzweig2020}. Note that we have introduced the spectrum of pristine ZLG as an additional fit component (light blue curve), confirming a fraction of $<5$\% of residual non-intercalated regions on the sample surface. This highlights the enhanced completeness of Pb intercalation, yielding a more distinct spectral shape as compared to earlier XPS studies \cite{yurtsever2016,yang2021,gruschwitz2021}. The Si $2p$ spectrum for pristine ZLG/SiC is shown in the top part of Fig.~\ref{fig4}(b). It consists of a bulk doublet (green curve) at $101.3$ eV and a surface doublet (yellow curve) shifted by $0.35$ eV to higher binding energies due to covalent bonding of the topmost Si atoms to the carbon buffer layer.
For both doublets, spin-orbit splitting and branching ratio have been fixed at $0.62$ eV and $0.5$, respectively. Upon Pb intercalation (bottom spectrum), the Si $2p$ bulk component undergoes the same $1$ eV shift to lower binding energies as observed for bulk C $1s$, driven by an upward band bending at the substrate/intercalant interface. The surface Si component concurrently shifts by almost $2$ eV in the same direction, ending up on the low binding energy side of its bulk counterpart. This highlights the modified interfacial bonding characteristics in the intercalated system now that the surface Si dangling bonds are no longer saturated by the carbon buffer layer but by Pb. Apart from the small contribution of non-intercalated residuals (light blue curve) which slightly broadens the spectrum towards higher binding energies, note the clear shoulder emerging on the low binding energy side as indicated by the black arrow. The latter can only be properly captured by introducing a dedicated fit component (X, black curve) at a binding energy of $99.1$ eV. While it could be due to substrate defects induced during the lengthy intercalation procedure (see Sec.~\ref{sec2}), no corresponding counterpart can readily be identified in the C $1s$ spectrum of Fig.~\ref{fig4}(a). The feature therefore requires further clarification beyond the scope of the present work. A Pb $4f$ reference spectrum is displayed in the top part of Fig.~\ref{fig4}(c), acquired for nominally $\approx 8$ nm of Pb deposited onto pristine ZLG. The metallic character becomes directly evident from the asymmetric line shape, fitted by a single Doniach-Sunjic doublet. The binding energy of $136.9$ eV for the $4f_{7/2}$ level and the spin-orbit splitting of $4.86$ eV turn out fully consistent with the literature values for bulk Pb \cite{xpshandbook}. 
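The Doniach-Sunjic profile used in these fits has a standard closed form; the sketch below implements only its Lorentzian core, without the Gaussian broadening entering the Voigt-type fits mentioned above, and the parameterization is an illustrative choice rather than the one used by the fitting software:

```python
import math

def doniach_sunjic(E, E0, gamma, alpha):
    """Doniach-Sunjic line shape (Lorentzian limit, no Gaussian broadening).

    E, E0: binding energies; gamma: Lorentzian half-width; alpha: asymmetry
    parameter. For alpha = 0 this reduces to a plain (unnormalized)
    Lorentzian gamma / ((E - E0)^2 + gamma^2).
    """
    num = math.cos(math.pi * alpha / 2.0
                   + (1.0 - alpha) * math.atan((E - E0) / gamma))
    den = ((E - E0) ** 2 + gamma ** 2) ** ((1.0 - alpha) / 2.0)
    return num / den
```

A finite $\alpha$ skews the profile, producing the characteristic one-sided tail that signals metallic screening in the C $1s$ and Pb $4f$ spectra.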
After intercalation (bottom), the Pb $4f$ doublet retains its asymmetric shape and thereby confirms the metallic nature of 2D confined Pb at the epigraphene/SiC interface as unambiguously demonstrated by our ARPES results (cf.\ Fig.~\ref{fig3}). On the other hand, the doublet shifts by $0.4$ eV to lower binding energies relative to the bulk reference spectrum. This reflects the different chemical environment of intercalated Pb and its interaction (bonding) with the Si-terminated substrate, in line with the displaced Si $2p$ surface doublet [Fig.~\ref{fig4}(b)]. Distinct components of intercalated Pb cannot be resolved at this point and we therefore conclude that all intercalated atoms interact with the substrate in a comparable way, rendering a multilayer configuration of interfacial Pb unlikely. Likewise, the absence of an additional Pb $4f$ doublet at higher binding energy (cf.\ bulk reference spectrum) confirms that there is no residual Pb left on top of QFMLG, fully consistent with the ARPES results in Fig.~\ref{fig2}. \section{CONCLUSION AND OUTLOOK} In summary, we report the homogeneous intercalation of Pb underneath a single layer of epitaxial graphene on SiC. Momentum microscopy across the entire surface Brillouin zone is used to probe the sharp $\pi$ bands of the resulting quasi-freestanding graphene monolayer, which hosts a negligible hole density of only $5.5\times10^9$ cm$^{-2}$. Additional dispersive bands due to 2D quantum-confined Pb at the graphene/SiC heterointerface can also be identified and the metallic character retained by Pb upon intercalation is clearly demonstrated. Our band structure data hint towards $1\times1$ epitaxial order of intercalated Pb relative to SiC. In addition, a $10\times 10$ Moiré superperiodicity with respect to graphene emerges in low-energy electron diffraction, whose counterparts could not yet be identified in the electronic structure.
On the one hand, our findings hold great promise in view of enhanced carrier mobilities in the epigraphene layer which is rendered charge neutral on the large scale via the homogeneous intercalation of Pb. On the other hand, the demonstrated large-area synthesis of quantum-confined Pb with a metallic band structure opens a route towards 2D superconductivity and strong spin-orbit coupling at the modified epigraphene/SiC interface. These anticipated properties might not even be limited to the intercalant itself but could eventually propagate onto overhead graphene through vertical proximity coupling---boosted by the enhanced homogeneity of the intercalation scenario reported herein. Our study identifies Pb-intercalated epigraphene as a viable candidate to harvest intriguing quantum properties of a 2D-confined heavy-element superconductor. Providing the desired sample homogeneity and a proof of emerging interlayer electronic states, our results form a solid basis for, e.g., low-temperature transport measurements as well as detailed band structure studies by means of synchrotron-based spin- and angle-resolved photoelectron spectroscopy at maximal resolution. \begin{acknowledgments} The authors would like to thank Hrag Karakachian for most valuable discussions. Funding by the Deutsche Forschungsgemeinschaft (DFG) through Sta315/9-1 and FOR 5242 is gratefully acknowledged. O.B.\ was supported by the Erasmus$+$ Exchange Programme of the European Union. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} Temporal data is found in many financial, business, and scientific applications running on top of database management systems (DBMSs), hence supporting these applications through efficient temporal operator implementations is crucial. For example, Kaufmann states that there are several temporal queries among the hundred most expensive queries executed on SAP ERP~\cite{Kauf14}, many of which have to be implemented in the application layer, as the underlying infrastructure does not directly support the processing of temporal data. According to~\cite{Kauf14}, customers of SAP desperately need (advanced) temporal operators for efficiently running queries pertaining to legal, compliance, and auditing processes. Although the introduction of temporal operators into the SQL standard started with SQL:2011~\cite{DBLP:journals/sigmod/KulkarniM12}, the implementations provided so far are either far from complete or lacking in performance. There is a renewed interest in temporal data processing, and researchers and developers are busy filling the gaps. One example is join operators involving temporal predicates: there are several recent publications on overlap interval joins~\cite{bouros_forward_2017,dignos_overlap_2014,piatov_interval_2016}. However, this is not the only possible join predicate for matching (temporal) intervals. Allen defined a set of binary relations between intervals originally designed for reasoning about intervals and interval-based temporal descriptions of events~\cite{allen_maintaining_1983}. These relations have been extended for event detection by parameterizing them~\cite{helmer_iseql_2016}. Strictly speaking, all these relationships could be formulated in regular SQL \texttt{WHERE} clauses (see also the right-hand column of Table~\ref{table:allen-relations} for a formal definition of Allen's relations and extensions).
The evaluation of these predicates using the implementation present in contemporary relational database management systems (RDBMSs) would be very inefficient, though, as many inequality predicates are involved \cite{khayyat_fast_2017}. We also note that the predicates in the aforementioned overlap interval joins (\cite{bouros_forward_2017,dignos_overlap_2014,piatov_interval_2016}) only check for any form of overlap between intervals. In essence, they do not distinguish between many of the relationships defined by Allen and do not cover the \RelationName{Before} and \RelationName{Meets} relations at all. Additionally, many of the approaches so far lack parameterized versions, in which further range-based constraints can be formulated directly in the join predicate. \begin{figure}[t] \centering \begin{tikzpicture}[relations] \DrawXAxis{8}{3}{0} \DrawTuple{0}{2}{2}{$r_1$} \DrawTuple{0}{5}{1}{$r_2$} \DrawTuple{3}{7}{2}{$r_3$} \DrawTuple{7}{8}{1}{$r_4$} \end{tikzpicture} \caption{Example temporal relation \Rel r} \label{fig:employees} \vspace*{-.19cm} \end{figure} In Figure~\ref{fig:employees} we see an example relation showing which employees ($r_1$, $r_2$, $r_3$, and $r_4$) were working on a certain project during which month. The tuple validity intervals (visualized as line segments) are as follows: tuple $r_1$ is valid from time 0 to time 2 (exclusive) with a starting timestamp $T_s = 0$ and an ending timestamp $T_e = 2$, tuple $r_2$ is valid on interval $[0, 5)$ with $T_s = 0$ and $T_e = 5$, and so on. With a simple overlap interval join we can merely detect that $r_1$ and $r_2$ worked together on the project for some time (as did $r_2$ and $r_3$).
However, we may also be interested in who started working at the same time ($r_1$ and $r_2$), who started working after someone had already left ($r_3$ coming in after $r_1$ had left and $r_4$ starting after everyone else had left), or even who took over from someone else, i.e., the ending of one interval coincides with the beginning of another one ($r_3$ and $r_4$). For even more sophisticated queries, we may want to add thresholds: who worked together with someone else and then left the project within two months of the other person ($r_3$ and $r_2$). \REV{ Allen's relationships are not only used in temporal databases, but also in event detection systems for temporal pattern matching~\cite{helmer_iseql_2016,KGMS19,KGS18}. In this context, it is also important to be able to specify concrete timeframes within which certain patterns are encountered, introducing the need for parameterized versions of Allen's relationships. For instance, K{\"o}rber et al. use their TPStream framework for analyzing real-time traffic data~\cite{KGMS19,KGS18}, while we previously employed a language called ISEQL to specify events in a video surveillance context~\cite{helmer_iseql_2016}.} Event detection motivated us to develop an approach that is also applicable for event stream processing environments, meaning that our join operators are non-blocking and produce output tuples as early as logically possible, without necessarily waiting for the intervals to finish. Moreover, we demonstrate how these joins can be processed efficiently in temporal databases using a sweeping-based framework that is supported by cache-efficient data structures and by a Timeline Index --- a flexible and general temporal index --- supporting a wide range of temporal operators, used in a prototype implementation of SAP HANA \cite{kaufmann_timeline_2013}. We therefore extend the set of operations Timeline Index supports, increasing its usefulness even further. 
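The example relationships above map directly onto comparisons of the interval endpoints $T_s$ and $T_e$. The following sketch (hypothetical helper functions, not the join operators developed in this paper; intervals are half-open $[T_s, T_e)$) encodes a few of them, including one parameterized variant, and checks them against the tuples of Figure~\ref{fig:employees}:

```python
# Intervals [T_s, T_e) of the example relation: r1..r4
r1, r2, r3, r4 = (0, 2), (0, 5), (3, 7), (7, 8)

def before(a, b):            # a ends strictly before b starts
    return a[1] < b[0]

def meets(a, b):             # a's end coincides with b's start
    return a[1] == b[0]

def overlaps(a, b):          # intervals share at least one time point
    return a[0] < b[1] and b[0] < a[1]

def starts_together(a, b):   # both intervals begin at the same time
    return a[0] == b[0]

def finishes_within(a, b, delta):  # a ends at most delta after b ends
    return 0 <= a[1] - b[1] <= delta

assert starts_together(r1, r2)            # r1 and r2 started at the same time
assert before(r1, r3) and before(r1, r4)  # r3 and r4 came in after r1 had left
assert meets(r3, r4)                      # r4 took over from r3
assert overlaps(r2, r3)                   # r2 and r3 worked together...
assert finishes_within(r3, r2, 2)         # ...and r3 left within 2 months of r2
```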
In particular, we make the following contributions: \begin{itemize} \item We develop a family of plane-sweeping interval join algorithms that can evaluate a wide range of interval relationship predicates going even beyond Allen's relations. \item At the core of this framework sits one base algorithm, called interval-timestamp join, that can be parameterized using a set of iterators trav\-ersing a Timeline Index. This offers an elegant way of configuring and adapting the base algorithm for processing different interval join predicates, \REV{improving code maintainability}. \item Additionally, our algorithm utilizes the CPU cache efficiently, relying on a compact hash map data structure for managing data during the processing of the join operation. Together with the index, in many cases we can achieve linear run time. \item In an experimental evaluation, we show that our approach is faster than state-of-the-art methods: an order of magnitude faster than a direct competitor and several orders of magnitude faster than an inequality join. \end{itemize} \section{Related Work} There is a renewed interest in employing Allen's interval relations in different areas, e.g. for describing complex temporal events in event detection frameworks~\cite{helmer_iseql_2016,KGMS19,KGS18} as well as for querying temporal relationships in knowledge graphs via SPARQL~\cite{CPS19}. One reason is that it is more natural for humans to work with chunks of information, such as labeled intervals, rather than individual values~\cite{HoPe14}. \subsection{Allen's Interval Relations Joins} \label{sec:leung-muntz} Leung and Muntz worked on efficiently implementing joins with predicates based on Allen's relations in the 1990s \cite{leung_query_1989,leung_query_1990} and it turns out that their solution is still competitive today. In fact, they also apply a plane-sweeping strategy, but impose a total order on the tuples of a relation. 
Theoretically, there are four different orders tuples can be sorted in for this algorithm: $T_s$ ascending, $T_s$ descending, $T_e$ ascending, and $T_e$ descending. When joining two relations, they can be sorted in different orders independently of each other. The actual algorithm is similar to a sort-merge join. A tuple is read from one of the relations (outer or inner) and placed into the corresponding set of active tuples for that relation. Each tuple in the other relation's active set is then checked for a match with the tuple that was just read. When a matching pair is found, it is transferred to the result set. While searching for matching tuples, the algorithm also performs a garbage collection, removing tuples that will no longer be able to find a matching partner. (Not all join predicates and sort orders allow for a garbage collection, though.) A heuristic, based on tuple distribution and garbage collection statistics, decides from which relation to read the next tuple. In a follow-up paper, further strategies for parallelization and temporal query processing are discussed~\cite{leung_temporal_1992}. In contrast to our approach, in which we handle tuple starting and ending events separately (an idea also covered more generally in~\cite{Freksa92,MaHa06,Wuch09}), the algorithm of Leung and Muntz requires streams of whole tuples. A tuple is not complete until its ending endpoint $T_e$ is encountered. This has a major impact on applications such as real-time event detection. Waiting for a tuple to finish can delay the whole joining process, as tuples following it in the sort order cannot be reported yet. Chekol et al. claim to cover the complete set of Allen's relations in their join algorithm for intervals in the context of SPARQL, but the description for some relations is missing~\cite{CPS19}. It seems they are using our algorithm from~\cite{piatov_interval_2016} as a basis.
They are not able to handle parameterized versions and have to create different indexes for different relations, though. There is also research on integrating Allen's predicate interval joins in a MapReduce framework~\cite{chawda_processing_2014,pilourdault_distributed_2016}. However, these approaches focus on the effective distribution of the data over MapReduce workers rather than on effective joins. \subsection{Overlap Interval Joins} One of the earliest publications to look at performance issues of temporal joins is by Segev and Gunadhi~\cite{segev_event-join_1989,gunadhi_query_1991}, who compare different sort-merge and nested-loop implementations of their event join. They refined existing algorithms by applying an auxiliary access method called an append-only tree, assuming that temporal data is only appended to existing relations and never updated or deleted. Some of the work on spatial joins can also be applied to interval joins. Arge et~al.~\cite{arge_scalable_1998} used a sweeping-based interval join algorithm as a building block for a two-dimensional spatial rectangle join, but did not investigate it as a standalone interval join. It was picked up again by Gao et al.~\cite{gao_join_2005}, who give a taxonomy of temporal join operators and provide a survey and an empirical study of a considerable number of non-index-based temporal join algorithms, such as variants of nested-loop, sort-merge, and partitioning-based methods. The fastest partitioning join, the overlap interval partitioning (OIP) join, was developed by Dign\"os et al. \cite{dignos_overlap_2014}. The (temporal) domain is divided into equally-sized granules and adjacent granules can be combined to form containers of different sizes. Intervals are assigned to the smallest container that covers them and the join algorithm then matches intervals in overlapping containers. 
The Timeline Index, introduced by Kaufmann et~al.~\cite{kaufmann_timeline_2013}, and the supported temporal operators have also received renewed attention recently. Kaufmann et~al. showed that a single index per temporal relation supports such operations as time travel, temporal joins, overlap interval joins and temporal aggregation on constant intervals (temporal grouping by instant). The work was done in the context of developing a prototype for a commercial temporal database and later extended to support bitemporal data~\cite{kaufmann_bi-temporal_2015}. In earlier work~\cite{piatov_interval_2016}, we defined a simplified, but functionally equivalent version of the Timeline Index, called Endpoint Index (from now on we will use both terms interchangeably). We introduced a cache-optimized algorithm for the overlap interval join, based on the Endpoint Index, and showed that the Timeline Index is not only a universal index that supports many operations, but that it can also outperform the state-of-the-art specialized indexes (including~\cite{dignos_overlap_2014}). The technique of sweeping-based algorithms has recently been applied to temporal aggregation as well~\cite{BoMa18,piatov_sweeping-based_2017}, extending the set of operations supported by the Timeline Index even further. \REV{ The basic idea of Bouros and Mamoulis is to do a \emph{forward scan} on the input collection to determine the join partners, hence their algorithm is called forward scan (FS) join~\cite{bouros_forward_2017}. In contrast, our approach does a \emph{backward scan} by traversing already encountered intervals, which have to be stored in a hash table. In FS, both input relations, $\Rel r$ and $\Rel s$, are sorted on the starting endpoint of each interval and then the algorithm sweeps through the endpoints of $\Rel r$ and $\Rel s$ in order. Every time it encounters an endpoint of an interval, it scans the other relation and joins the interval with all matching intervals.
Bouros and Mamoulis introduce a couple of optimizations to improve the performance. First, consecutively swept intervals are \emph{grouped} and processed in batch (this is called gFS). Second, the (temporal) domain is split into tiles and intervals starting in such a tile are stored in a \emph{bucket} associated with this tile. While scanning for join partners for a tuple $r$, all intervals in buckets corresponding to tiles that are completely covered by the interval of $r$ can be joined without further comparisons. Combined with the previous technique, this results in a variant called bgFS. Doing a forward scan or a backward scan has certain implications. Introducing their optimizations to FS, Bouros and Mamoulis showed that forward scanning is usually more efficient than backward scanning (particularly when it comes to parallelizing the algorithm). However, there is also a downside: forward scanning needs to have access to the complete relations to work, while backward scanning considers only already encountered endpoints, i.e., backward scanning can be utilized in a streaming context (for forward scanning this is not possible). } \subsection{Generic Inequality Joins} \REV{As we will see in Table~\ref{table:allen-relations}, most of the interval joins can be broken down into inequality joins, which become very inefficient as soon as more than one inequality predicate is involved: Khayyat et al.~\cite{khayyat_fast_2017} point out that these joins are handled via naive nested-loop joins in contemporary RDBMSs. They develop a more efficient inequality join (IEJoin), which first sorts the relations according to the join attributes. For the sake of simplicity, we just consider two inequality predicates here, i.e., for every relation \Rel r, we have two versions, \Rel {r^1} and \Rel {r^2}, sorted by the two join attributes, which helps us to find the values satisfying an inequality predicate more efficiently.
(The connections between tuples from \Rel {r^1} and \Rel {r^2} are made using a permutation array.) Some additional data structures, offset arrays and bit arrays, help the algorithm to take shortcuts, but essentially the basic join algorithm still consists of two nested loops, leading to a quadratic run-time complexity (albeit with a performance that is an order of magnitude better than a naive nested-loop join).} \section{Background} \paragraph*{Interval Data:\,} We define a \emph{temporal tuple} as a relational tuple containing two additional attributes, $T_s$ and $T_e$, denoting the start and end of the half-open tuple validity interval $T = [T_s, T_e)$. We will use a period (.) to denote an attribute of a tuple, e.g. $r.T_s$ or $s.T$. The length of the tuple validity interval, $\Size r$, is therefore $r.T_e - r.T_s$. We use the terms \emph{interval} and \emph{tuple} interchangeably. With $r$ and $s$ we denote the left-hand-side and the right-hand-side tuples in a join, respectively. \REV{We use integers for the timestamps to simplify the explanations. Our approach would work with any discrete time domain: we require a total order on the timestamps and a fixed granularity, i.e., given a timestamp we have to be able to unambiguously determine the following one.} \paragraph*{Interval Relations:\,} As intervals accommodate the human perception of time-based patterns much better than individual values~\cite{HoPe14}, intervals and their relationships are a well-known and widespread approach to handle temporal data~\cite{Jensen99}. Here we look at two different ways to define binary relationships between intervals: Allen's relations~\cite{allen_maintaining_1983} and the Interval-based Surveillance Event Query Language (ISEQL) \cite{bettini_2017,helmer_iseql_2016}. Allen designed his framework to support reasoning about intervals and it comprises thirteen relations in total. 
The seven basic \emph{Allen's relations} are shown in the top half of Table~\ref{table:allen-relations}. For example, interval $r$ \RelationName{meets} interval $s$ when $r$ finishes immediately before $s$ starts. This is illustrated by the doodle in the table. We will use smaller versions of the doodles also in the text: (\Doodle{\Meets}). The first six relations in the table also have an inverse counterpart (hence thirteen relations). For example, relation ``$r$ \RelationName{inverse meets} $s$'' describes $s$ immediately finishing before $r$ starts: (\Doodle{\Reverse{\Meets}}). \begin{table} \caption{Allen's and ISEQL interval relations} \centering\small \vspace{-1.5mm} \newcommand*{\JoinName}{\JoinName} \begin{tabular}{@{}>{\raggedright}p{6em}p{20mm}p{12em}@{}} \toprule Relation & \ Doodle & Formal definition \\ \midrule \JoinName{overlaps} & \Doodle[table]{\LeftOverlap} &$r.T_s < s.T_s < r.T_e < s.T_e$\\ \addlinespace \JoinName{during} & \Doodle[table]{\During} &$s.T_s < r.T_s \land r.T_e<s.T_e$\\ \addlinespace \JoinName{before} & \Doodle[table]{\Before} &$r.T_e < s.T_s$ \\ \addlinespace \JoinName{meets} & \Doodle[table]{\Meets} &$r.T_e = s.T_s$ \\ \addlinespace \JoinName{equals} & \Doodle[table]{\Equals} &$r.T_s = s.T_s \land r.T_e=s.T_e$\\ \addlinespace \JoinName{starts} & \Doodle[table]{\Starts} &$r.T_s = s.T_s \land r.T_e<s.T_e$\\ \addlinespace \JoinName{finishes} & \Doodle[table]{\Finishes} &$s.T_s < r.T_s \land r.T_e=s.T_e$\\ \midrule \JoinName{start preceding} & \Doodle[table]{\StartPreceding} & $r.T_s \le s.T_s < r.T_e $\newline $s.T_s - r.T_s \le \delta $\\ \addlinespace \JoinName{end following} & \Doodle[table]{\EndFollowing} & $r.T_s < s.T_e \le r.T_e$\newline $r.T_e - s.T_e \le \varepsilon $\\ \addlinespace \JoinName{before} & \Doodle[table]{\Before} & $r.T_e \le s.T_s$\newline $s.T_s - r.T_e \le \delta $\\ \addlinespace \JoinName{left overlap} & \Doodle[table]{\LeftOverlap} & $r.T_s \le s.T_s < r.T_e \le s.T_e$\newline $s.T_s - r.T_s \le \delta 
$\newline $s.T_e - r.T_e \le \varepsilon $\\ \addlinespace \JoinName{during} & \Doodle[table]{\During} & $s.T_s \le r.T_s \land r.T_e \le s.T_e$ \newline $r.T_s - s.T_s \le \delta $\newline $s.T_e - r.T_e \le \varepsilon $\\ \bottomrule \end{tabular} \label{table:allen-relations} \end{table} ISEQL originated in the context of complex event detection and covers a different set of requirements. The list of the five basic ISEQL relations is presented in the bottom half of Table~\ref{table:allen-relations}; each of them has an inverse counterpart. Additionally, ISEQL relations are parameterized. The parameters control additional constraints and allow a much more fine-grained definition of join predicates. This is similar to the simple temporal problem (STP) formalism, which defines an interval that restricts the temporal distance between two events~\cite{AFC13,DMP91}. Let us consider the \RelationName{before} ISEQL relation (\Doodle{\Before}). It has one parameter $\delta$, which controls the maximum allowed distance between the intervals (events). When $\delta = 0$, this relation is equivalent to Allen's \RelationName{meets} (\Doodle{\Meets}). When $\delta > 0$, it is a disjunction of Allen's \RelationName{meets} and \RelationName{before}, and the maximum allowed distance between the events is $\delta$ timepoints. Any ISEQL relation parameter can be \emph{relaxed} (set to infinity), which removes the corresponding constraint. \paragraph*{Joins on Interval Relations:\,} For each binary interval relation (Allen's or ISEQL) we define a predicate $P(r, s)$ as its indicator function: its value is `true' if the argument tuples satisfy the relation and `false' otherwise. From now on we will use the terms `predicate' and `binary interval relation' interchangeably. We treat ISEQL relation parameters, if not relaxed, as part of the definition of $P$ (e.g. $P_{\delta}$).
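To make the parameterization concrete, the following Python sketch (our own illustration, not part of the paper's implementation; intervals are represented as half-open $(T_s, T_e)$ pairs with integer timestamps) checks that the \RelationName{before} predicate with $\delta = 0$ coincides with Allen's \RelationName{meets}, while $\delta > 0$ gives the bounded disjunction of \RelationName{meets} and \RelationName{before}:

```python
def iseql_before(r, s, delta=float("inf")):
    """ISEQL 'before' with distance parameter delta:
    r must end at or before s starts, at most delta timepoints apart."""
    return r[1] <= s[0] and s[0] - r[1] <= delta

def allen_meets(r, s):
    return r[1] == s[0]      # r ends exactly where s starts

def allen_before(r, s):
    return r[1] < s[0]       # strict gap between r and s

# Intervals are half-open (T_s, T_e) pairs with integer endpoints.
pairs = [((0, 1), (1, 3)), ((0, 1), (3, 4)), ((1, 3), (3, 4)), ((2, 5), (3, 4))]

# delta = 0 reproduces Allen's 'meets' ...
assert all(iseql_before(r, s, 0) == allen_meets(r, s) for r, s in pairs)
# ... and delta > 0 is the disjunction of 'meets' and 'before', bounded by delta.
assert all(iseql_before(r, s, 2) ==
           ((allen_meets(r, s) or allen_before(r, s)) and s[0] - r[1] <= 2)
           for r, s in pairs)
```

Relaxing $\delta$ (leaving it at infinity) simply drops the distance conjunct, as described above.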
We also define a \emph{temporal relation} (not to be confused with binary relations described before) as a set of temporal tuples: $\Rel r = \{r_1, r_2, \dots, r_n\}$. Let us take a generic relational predicate $P(r, s)$. We define the $P$-\emph{join} of two relations $\Rel r$ and $\Rel s$ as an operator that returns all pairs of $\Rel r$ and $\Rel s$ tuples that satisfy predicate $P$. We can express this in pseudo-SQL as ``\texttt{SELECT * FROM \Rel r, \Rel s WHERE $P(r, s)$}''. For example, we define the ISEQL \JoinName{left overlap join} as ``\texttt{SELECT * FROM \Rel r, \Rel s WHERE $r$ LEFT OVERLAP $s$}''. If the predicate is parameterized, the join operator will also be parameterized. \begin{example} As an example (see Figure~\ref{fig:example-relations}), let us assume that we have two temporal relations: $\Rel r = \Set{ r_1 = [0, 1), r_2 = [1, 3), r_3 = [2, 5)}$ and $\Rel s = \Set{ s_1 = [1, 3), s_2 = [3, 4)}$. \begin{figure}[ht!] \begin{tikzpicture}[relations, scale=0.6] \DrawISEQLExampleRelations \end{tikzpicture} \caption{Example relations \Rel r and \Rel s} \label{fig:example-relations} \vspace*{-.2cm} \end{figure} Let us now take the ISEQL \JoinName{before join} (\Doodle{\Before}) with the parameter $\delta = 1$. Its result consists of two pairs $\Tuple{r_1, s_1}$ and $\Tuple{r_2, s_2}$, because only they satisfy this particular join predicate. If we relax the parameter (set $\delta$ to $\infty$), then an additional third pair $\Tuple{r_1, s_2}$ would be added to the result of the join. \end{example} By replacing the relational predicate by its formal definition~(Table~\ref{table:allen-relations}), we can implement an interval join in any relational database. However, such an implementation results in a relational join with inequality predicates, which is not efficiently supported by RDBMSs: they have to fall back on the nested-loop implementation in this case \cite{khayyat_fast_2017}. 
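For illustration, the example above can be replayed with a deliberately naive nested-loop $P$-join in Python (our own sketch of the fallback baseline, not the efficient operator proposed later; intervals are half-open $(T_s, T_e)$ pairs):

```python
def p_join(R, S, P):
    """Naive nested-loop P-join: all (r, s) name pairs satisfying predicate P."""
    return [(rn, sn) for rn, r in R.items() for sn, s in S.items() if P(r, s)]

def before(delta=float("inf")):
    # ISEQL 'before': r ends at or before s starts, at most delta timepoints apart.
    return lambda r, s: r[1] <= s[0] and s[0] - r[1] <= delta

# The example relations, intervals as half-open (T_s, T_e) pairs.
R = {"r1": (0, 1), "r2": (1, 3), "r3": (2, 5)}
S = {"s1": (1, 3), "s2": (3, 4)}

assert p_join(R, S, before(1)) == [("r1", "s1"), ("r2", "s2")]
# Relaxing delta admits the additional pair (r1, s2).
assert sorted(p_join(R, S, before())) == [("r1", "s1"), ("r1", "s2"), ("r2", "s2")]
```

This quadratic evaluation is exactly what the interval-timestamp join of the next section is designed to avoid.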
\section{Formalizing our Approach} \label{sec:formalization} We opt for a relational algebra representation to be able to make formal statements (e.g. proofs) about the different operators. Before fully formalizing our approach, we first introduce a map operator ($\chi$) as an addition to the standard selection, projection, and join operators in traditional relational algebra. This operator is used for materializing values as described in \cite{BMG93}. We also introduce our new interval-timestamp join ($\JoinByS$), allowing us to replace costly non-equi-join predicates with an operator that, as we show later, can be implemented much more efficiently. \begin{definition} The map operator $\Map{a}{e}(\Rel r)$ evaluates the expression $e$ on each tuple of $\Rel r$ and concatenates the result to the tuple as attribute $a$: \[ \Map{a}{e}(\Rel r) = \SetBuilder{ r \circ [a:e(r)] }{ r \in \Rel r }. \] If the attribute $a$ already exists in a tuple, we instead overwrite its value. \end{definition} \begin{definition} The interval-timestamp join $\Rel r \JoinByS \Rel s$ matches the intervals in the tuples of relation $\Rel r$ with the timestamps of the tuples in $\Rel s$. It comes in two flavors, depending on the timestamp chosen for $s$, i.e., $T_s$ or $T_e$. So, the interval-starting-timestamp join is defined as \[ \Rel r \JoinByS_{\mathrm{start}}^\theta \Rel s = \SetBuilder{ r \times s }{ r \in \Rel r, s \in \Rel s: r.T_s \;\theta\; s.T_s < r.T_e } \] with $\theta \in \{<, \leq\}$, whereas the interval-ending-timestamp join boils down to \[ \Rel r \JoinByS_{\mathrm{end}}^\theta \Rel s = \SetBuilder{ r \times s }{ r \in \Rel r, s \in \Rel s: r.T_s < s.T_e \;\theta\; r.T_e } \] with $\theta \in \{<, \leq\}$. \end{definition} Now we are ready to formulate the joins on interval relations shown in Table~\ref{table:allen-relations} in relational algebra extended by our new join operator $\JoinByS$.
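Read operationally, the two set-builder definitions translate one-to-one into a (quadratic, purely illustrative) Python sketch; the representation of intervals as half-open $(T_s, T_e)$ pairs is our own, and the results below match the worked example from the background section:

```python
import operator

def its_join_start(R, S, theta=operator.le):
    """Interval-starting-timestamp join: pairs with r.T_s theta s.T_s < r.T_e."""
    return [(r, s) for r in R for s in S if theta(r[0], s[0]) and s[0] < r[1]]

def its_join_end(R, S, theta=operator.le):
    """Interval-ending-timestamp join: pairs with r.T_s < s.T_e theta r.T_e."""
    return [(r, s) for r in R for s in S if r[0] < s[1] and theta(s[1], r[1])]

# The example relations, as half-open (T_s, T_e) pairs.
R = [(0, 1), (1, 3), (2, 5)]
S = [(1, 3), (3, 4)]

# theta = '<=' yields 'start preceding' with relaxed delta ...
assert its_join_start(R, S) == [((1, 3), (1, 3)), ((2, 5), (3, 4))]
# ... and 'end following' with relaxed epsilon, respectively.
assert its_join_end(R, S) == [((1, 3), (1, 3)), ((2, 5), (1, 3)), ((2, 5), (3, 4))]
```

The efficient, sweep-based evaluation of this operator is the subject of the framework section.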
We first cover the non-parameterized versions (i.e., setting $\delta$ and $\varepsilon$ to infinity) and then move on to the parameterized ones. Table~\ref{table:mappingtora} gives an overview of the relational algebra formulations. \iftechreport{ Proof for the correctness of our rewrites can be found in Appendix~\ref{sec:rewrites}. } \ifnottechreport{ Proof for the correctness of our rewrites can be found in Appendix~A of our technical report~\cite{ourtechreport}. } \begin{table*} \caption{Mapping of interval relations to relational algebra} \centering\small \vspace{-1.5mm} \newcommand*{\JoinName}{\JoinName} \begin{tabular}{llll} \toprule & Relation & \ Doodle & Relational algebra expression \\ \midrule & \JoinName{start preceding} & \Doodle[table]{\StartPreceding} & $\Rel r \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s$ \\ \addlinespace & \JoinName{end following} & \Doodle[table]{\EndFollowing} & $\Rel r \JoinBySsm_{\mathrm{end}}^{\leq} \Rel s$ \\ \addlinespace \multirow{5}{*}{\rotatebox{90}{non-parameterized}} & \JoinName{left overlap} & \Doodle[table]{\LeftOverlap} & $\sigma_{\Rel r.T_e \leq \Rel s.T_e} (\Rel r \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s)$ \\ \addlinespace & \JoinName{during} & \Doodle[table]{\Reverse{\During}} & \REV{$\sigma_{\Rel r.T_e \leq \Rel s.T_e} (\Rel s \JoinBySsm_{\mathrm{start}}^{\leq} \Rel r)$} \\ \addlinespace & \JoinName{before} & \Doodle[table]{\Before} & $\Map{T_e}{\infty}(\Map{T_s}{T_e}(\Rel r))\JoinBySsm_{\mathrm{start}}^{\leq}\Rel s$ \REV{(ISEQL); $\Map{T_e}{\infty}(\Map{T_s}{T_e}(\Rel r))\JoinBySsm_{\mathrm{start}}^{<}\Rel s$ (Allen)}\\ \addlinespace & \JoinName{meets} & \Doodle[table]{\Meets} & $\Map{T_e}{T_e + 1}(\Map{T_s}{T_e}( \Rel r ))\JoinBySsm_{\mathrm{start}}^{\leq} \Rel s$ \\ \addlinespace & \JoinName{equals} & \Doodle[table]{\Equals} & $\sigma_{\Rel r.T_e' = \Rel s.T_e} (\Map{T_e}{T_s + 1}(\Map{T_e'}{T_e} (\Rel r)) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s)$ \\ \addlinespace & \JoinName{starts} & \Doodle[table]{\Starts} 
& $\sigma_{\Rel r.T_e' < \Rel s.T_e} (\Map{T_e}{T_s + 1}(\Map{T_e'}{T_e} (\Rel r)) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s)$ \\ \addlinespace & \JoinName{finishes} & \Doodle[table]{\Finishes} & $\sigma_{\Rel s.T_s < \Rel r.T_s'} (\Map{T_s}{T_e - 1}(\Map{T_s'}{T_s} (\Rel r)) \JoinBySsm_{\mathrm{end}}^{\leq} \Rel s)$ \\ \midrule & \JoinName{start preceding} & \Doodle[table]{\StartPreceding} & $\Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Rel r) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s$ \\ \addlinespace \multirow{5}{*}{\rotatebox{90}{parameterized}} & \JoinName{end following} & \Doodle[table]{\EndFollowing} & $\Map{T_s}{\max(T_s, T_e - \varepsilon - 1)} (\Rel r) \JoinBySsm_{\mathrm{end}}^{\leq} \Rel s$ \\ \addlinespace & \JoinName{before} & \Doodle[table]{\Before} & $\Map{T_e}{T_e + \delta + 1}( \Map{T_s}{T_e}( \Rel r )) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s$ \\ \addlinespace & \JoinName{left overlap} & \Doodle[table]{\LeftOverlap} & $\sigma_{\Rel r.T'_e \leq \Rel s.T_e \leq \Rel r.T'_e + \varepsilon} (\Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Map{T'_e}{T_e} (\Rel r)) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel s)$ \\ \addlinespace & \JoinName{during} & \Doodle[table]{\Reverse{\During}} & \REV{$\sigma_{\Rel s.T'_e - \varepsilon \leq \Rel r.T_e \leq \Rel s.T'_e} (\Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Map{T'_e}{T_e} (\Rel s)) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel r)$} \\ \bottomrule \end{tabular} \label{table:mappingtora} \end{table*} \subsection{Non-parameterized Joins} The non-parameterized joins include all the Allen's relations and the ISEQL joins with relaxed parameters. The \RelationName{equals}, \RelationName{starts}, \RelationName{finishes}, and \RelationName{meets} predicates could be evaluated using a regular equi-join. Nevertheless, we formulate them via interval-timestamp joins, which are better suited to streaming environments and can be processed more quickly for low numbers of matching tuples. 
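To see how the map operator replaces an equality check, the \RelationName{meets} rewrite from Table~\ref{table:mappingtora} can be verified exhaustively on small integer intervals with a short Python sketch (our own check, relying on the integer-timestamp assumption from the background section):

```python
def its_join_start(R, S):
    """Interval-starting-timestamp join with theta = '<=':
    pairs (k, s) where the mapped interval iv of k satisfies iv.T_s <= s.T_s < iv.T_e."""
    return {(k, s) for (k, iv) in R for s in S if iv[0] <= s[0] < iv[1]}

# All half-open intervals with integer endpoints in [0, 5].
ivals = [(a, b) for a in range(6) for b in range(a + 1, 6)]

# chi_{T_s <- T_e} followed by chi_{T_e <- T_e + 1}: shrink r to [T_e, T_e + 1).
mapped = [(r, (r[1], r[1] + 1)) for r in ivals]

rewrite = its_join_start(mapped, ivals)
direct = {(r, s) for r in ivals for s in ivals if r[1] == s[0]}  # Allen's 'meets'
assert rewrite == direct
```

With integer timestamps the join condition on the shrunk unit interval, $r.T_e \leq s.T_s < r.T_e + 1$, can only hold when $s.T_s = r.T_e$, which is exactly the \RelationName{meets} condition.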
We present the joins roughly in the order of their complexity, i.e., how many other different operators we need to define them. \paragraph*{ISEQL Start Preceding Join:\,} \label{sec:start-preceding-idea} The \RelationName{start preceding} predicate (\Doodle{\StartPreceding}) joins two tuples $r$ and $s$ if they overlap and $r$ does not start after $s$ (we relax the parameter $\delta$ here). This can be expressed in our extended relational algebra in the following way: \[ \Rel r \JoinByS_{\mathrm{start}}^{\leq} \Rel s. \] \paragraph*{ISEQL End Following Join:\,} \label{sec:end-following-idea} As before, we first consider the ISEQL \JoinName{end following} predicate (\Doodle{\EndFollowing}) with a relaxed $\varepsilon$ parameter, meaning that tuples $r$ and $s$ should overlap and $r$ is not allowed to end before $s$. In relational algebra this boils down to \[ \Rel r \JoinByS_{\mathrm{end}}^{\leq} \Rel s. \] \paragraph*{Overlap Join:\,} If we look at the \RelationName{left overlap} join (\Doodle{\LeftOverlap}), we notice that it looks very similar to a \RelationName{start preceding} join. The main difference is that it has one additional constraint: the $\Rel r$ tuple has to end before the $\Rel s$ tuple. Formulated in relational algebra this is equal to \[ \sigma_{\Rel r.T_e \leq \Rel s.T_e} (\Rel r \JoinByS_{\mathrm{start}}^{\leq} \Rel s). \] For the \RelationName{right overlap} join (\Doodle{\Reverse{\LeftOverlap}}), we could just swap the roles of $\Rel r$ and $\Rel s$, or we could use an \RelationName{end following} join combined with a selection predicate stating that the $\Rel s$ tuple has to start before the $\Rel r$ tuple: \[ \sigma_{\Rel s.T_s \leq \Rel r.T_s} (\Rel r \JoinByS_{\mathrm{end}}^{\leq} \Rel s). 
\] For the stricter Allen's \RelationName{left overlap} and \RelationName{inverse overlap} joins we use ``$<$'' for the $\theta$ of the join and the selection predicate (or, alternatively, the parameterized version of the \RelationName{overlap} join, which is introduced later). \REV{ \paragraph*{During Join:\,} For the \RelationName{during} join (\Doodle{\During}), we have to swap the roles of $\Rel r$ and $\Rel s$. Formulated with the help of a \RelationName{start preceding} join, it becomes \[ \sigma_{\Rel r.T_e \leq \Rel s.T_e} (\Rel s \JoinBySsm_{\mathrm{start}}^{\leq} \Rel r) \] or, alternatively, with an \RelationName{end following} join we get \[ \sigma_{\Rel r.T_s \geq \Rel s.T_s} (\Rel s \JoinByS_{\mathrm{end}}^{\leq} \Rel r). \] A \RelationName{reverse during} join maps more naturally to a \RelationName{start preceding} or \RelationName{end following} join, i.e., we do not have to swap the roles of $\Rel r$ and $\Rel s$. } For the Allen relation we use ``$<$'' for the $\theta$ of the join and the selection predicate (or the parameterized version of the \RelationName{during} join). \paragraph*{Before Join:\,} \label{sec:before-idea} In the case of the \RelationName{before} predicate (\Doodle{\Before}), the tuples should not overlap at all. We achieve this by converting the ending events of $\Rel r$ into starting ones and setting the ending events to infinity (see Figure~\ref{fig:allen-before-join}, the dashed lines are the original tuples, the solid lines the newly created ones). Formulated in relational algebra we get \REV{for the ISEQL version of \RelationName{before}:} \[ \Map{T_e}{\infty}( \Map{T_s}{T_e}(\Rel r) ) \JoinByS_{\mathrm{start}}^{\leq} \Rel s. \] For the Allen relation we use $\theta = $``$<$'' for the join.
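Under the same integer-timestamp assumption, this rewrite can also be checked exhaustively; the Python sketch below (our own illustration, using a float infinity) confirms that $\theta = $ ``$\leq$'' yields the relaxed ISEQL \RelationName{before} ($r.T_e \leq s.T_s$) and $\theta = $ ``$<$'' the Allen version:

```python
INF = float("inf")

def its_join_start(R, S, strict=False):
    # R holds (key, interval) pairs; theta is '<' when strict, '<=' otherwise.
    ok = (lambda a, b: a < b) if strict else (lambda a, b: a <= b)
    return {(k, s) for (k, iv) in R for s in S if ok(iv[0], s[0]) and s[0] < iv[1]}

# All half-open intervals with integer endpoints in [0, 4].
ivals = [(a, b) for a in range(5) for b in range(a + 1, 5)]

# chi_{T_s <- T_e} followed by chi_{T_e <- infinity}.
mapped = [(r, (r[1], INF)) for r in ivals]

# theta '<=' yields the relaxed ISEQL 'before' ...
assert its_join_start(mapped, ivals) == \
    {(r, s) for r in ivals for s in ivals if r[1] <= s[0]}
# ... and theta '<' the Allen 'before'.
assert its_join_start(mapped, ivals, strict=True) == \
    {(r, s) for r in ivals for s in ivals if r[1] < s[0]}
```

Since the mapped interval is $[r.T_e, \infty)$, the join condition $r.T_e \;\theta\; s.T_s < \infty$ collapses to $r.T_e \;\theta\; s.T_s$, i.e., precisely the non-overlap condition.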
\begin{figure}[htb] \centering \begin{tikzpicture}[relations] \renewcommand*{\DrawRTuple}[4] { \DrawTuple[old tuple]{#1}{#2}{#3}{ } \DrawTuple {#2}{8 }{#3}{#4} } \DrawISEQLExampleRelations \ResetDrawRTuple \end{tikzpicture} \caption{Formulating Allen's \JoinName{before join}} \label{fig:allen-before-join} \end{figure} \paragraph*{Meets Join:\,} \label{sec:meets-idea} For the \RelationName{meets} predicate (\Doodle{\Meets}) each tuple of relation $\Rel r$ should only be active for a short interval of length one when it ends. We achieve this by converting the end events of tuples in $\Rel r$ into start events and adding a new end event that shifts the old end event by one. Expressed in relational algebra this looks as follows. \[ \Map{T_e}{T_e + 1}(\Map{T_s}{T_e}( \Rel r )) \JoinByS_{\mathrm{start}}^{\leq} \Rel s. \] \paragraph*{Equals Join:\,} For the \RelationName{equals} predicate (\Doodle{\Equals}) we check that starting events match via an interval-time\-stamp join and then add a selection to check the ending events: \[ \sigma_{\Rel r.T_e' = \Rel s.T_e} (\Map{T_e}{T_s + 1}(\Map{T_e'}{T_e}(\Rel r)) \JoinByS_{\mathrm{start}}^{\leq} \Rel s). \] \paragraph*{Starts Join:\,} For a \RelationName{starts} predicate (\Doodle{\Starts}) we first check that the starting events are the same; the ending events are again compared in a selection afterwards. Although one of the predicates uses a comparison based on inequality, this happens in the selection, not the join: \[ \sigma_{\Rel r.T_e' < \Rel s.T_e} (\Map{T_e}{T_s + 1}(\Map{T_e'}{T_e}(\Rel r)) \JoinByS_{\mathrm{start}}^{\leq} \Rel s). \] \paragraph*{Finishes Join:\,} The \RelationName{finishes} predicate (\Doodle{\Finishes}) works similarly to a \RelationName{starts} predicate.
We use an interval-ending-timestamp join and swap the roles of the starting and ending events (due to the different definition of the $\JoinByS_{\mathrm{end}}^{\leq}$ join, we also have to shift the timestamps by one): \[ \sigma_{\Rel s.T_s < \Rel r.T_s'} (\Map{T_s}{T_e - 1}(\Map{T_s'}{T_s}(\Rel r)) \JoinByS_{\mathrm{end}}^{\leq} \Rel s). \] \subsection{Parameterized Joins} We now move on to the parameterized versions of the ISEQL join operators found in Table~\ref{table:allen-relations}. \paragraph*{Start Preceding Join with $\delta$:\,} Defining a value for the parameter $\delta$ for the ISEQL \RelationName{start preceding} join (\Doodle{\StartPreceding}) means that the $\Rel s$ tuple has to start no earlier than the $\Rel r$ tuple, no more than $\delta$ time units after the start of the $\Rel r$ tuple, and before the $\Rel r$ tuple ends. Basically, we shorten the long tuples. Expressed in relational algebra, this becomes \[ \Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Rel r) \JoinByS_{\mathrm{start}}^{\leq} \Rel s. \] \paragraph*{End Following Join with $\varepsilon$:\,} For the parameterized \RelationName{end following} (\Doodle{\EndFollowing}) join we have to make sure that the $\Rel s$ tuple ends within $\varepsilon$ time units of the end of the $\Rel r$ tuple (but after the start of the $\Rel r$ tuple). The formal definition in relational algebra is \[ \Map{T_s}{\max(T_s, T_e - \varepsilon - 1)} (\Rel r) \JoinByS_{\mathrm{end}}^{\leq} \Rel s.
\] \paragraph*{The Before Join with $\delta$:\,} For the parameterized \RelationName{before} join we have to make sure that the $\Rel s$ tuple starts within a time window of length $\delta$ after the $\Rel r$ tuple ends: \begin{eqnarray*} \Map{T_e}{T_e + \delta + 1}( \Map{T_s}{T_e}( \Rel r )) \JoinByS_{\mathrm{start}}^{\leq} \Rel s \end{eqnarray*} \paragraph*{Overlap Join with $\delta$ and $\varepsilon$:\,} Similar to the non-parameterized version we use the \RelationName{start preceding} join to define the parameterized \RelationName{left overlap} join: \[ \sigma_{\Rel r.T'_e \leq \Rel s.T_e \leq \Rel r.T'_e + \varepsilon} (\Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Map{T'_e}{T_e} (\Rel r)) \JoinByS_{\mathrm{start}}^{\leq} \Rel s). \] For the parameterized \RelationName{right overlap} join we can either swap the roles of $\Rel r$ and $\Rel s$ or use the \RelationName{end following} join: \[ \sigma_{\Rel r.T'_s - \delta \leq \Rel s.T_s \leq \Rel r.T'_s} (\Map{T_s}{\max(T_s, T_e - \varepsilon - 1)} (\Map{T'_s}{T_s} (\Rel r)) \JoinByS_{\mathrm{end}}^{\leq} \Rel s). \] \REV{ \paragraph*{During Join with $\delta$ and $\varepsilon$:\,} We can use a parameterized \RelationName{start preceding} or \RelationName{end following} join as a building block for a parameterized \RelationName{during} join, swapping the roles of $\Rel r$ and $\Rel s$. With a \RelationName{start preceding} join we get \[ \sigma_{\Rel s.T'_e - \varepsilon \leq \Rel r.T_e \leq \Rel s.T'_e} (\Map{T_e}{\min(T_e, T_s + \delta + 1)} (\Map{T'_e}{T_e} (\Rel s)) \JoinBySsm_{\mathrm{start}}^{\leq} \Rel r), \] \noindent whereas with an \RelationName{end following} join it boils down to \[ \sigma_{ \Rel s.T'_s \leq \Rel r.T_s \leq \Rel s.T'_s + \delta }( \Map{T_s}{\max(T_s, T_e - \varepsilon - 1)} (\Map{T'_s}{T_s} (\Rel s)) \JoinByS_{\mathrm{end}}^{\leq} \Rel r ). \] } \iffalse In this section we outline the basic idea behind our implementation of the joins supporting Allen's and ISEQL predicates. 
We start off by providing the implementation details needed for the whole family of algorithms. \medskip Even though logically the ISEQL \RelationName{start preceding} and the \RelationName{end following} predicates are combinations of two ISEQL or six Allen's predicates, computationally they are one of the simplest. This can be noticed by comparing the formal definitions in Table \ref{table:allen-relations}: they have to consider fewer endpoints. Therefore, we start explaining our approach focusing on ISEQL \JoinName{start preceding join}, \JoinName{end following join}, and \JoinName{before join}. \subsection{The ISEQL Start Preceding Join} \label{sec:start-preceding-idea} According to the definition in Table~\RefPage{table:allen-relations}, the \RelationName{start preceding} predicate (\Doodle{\StartPreceding}) joins two tuples $r$ and $s$ if they overlap and $r$ does not start after $s$. For the moment, we relax the parameter $\delta$, setting it to infinity. We enumerate the endpoints $T_s$ and $T_e$ of the tuples of both relations in chronological order. For the example in Figure~\ref{fig:example-relations}, we would get: $r_1$ starts, $r_1$ ends, $r_2$ starts, $s_1$ starts, $r_3$ starts, and so on. We explain how to break ties in a moment. Basically, we sweep through the endpoints of the tuples from left to right. When we encounter the starting endpoint of tuple $r_i$, we load this tuple and place it into the set of active \Rel r tuples. When we encounter the ending endpoint of tuple $r_i$, we remove it from the active tuple set. This way the active tuple set contains all \Rel r tuples that are currently active. While doing so, when we encounter the starting endpoint of tuple $s_i$, we produce the Cartesian product between the set of active $\Rel r$ tuples and $s_i$ as the output of the algorithm. 
\begin{example} For the example relations in Figure~\ref{fig:example-relations}, we first handle the starting endpoint of $r_1$ and add the tuple to the set of active tuples. Then we handle the ending endpoint of $r_1$ and remove the tuple from the active tuple set. After that we encounter the starting endpoint of $r_2$ and add $r_2$ to the active tuple set. Next we handle the starting endpoint of $s_1$. At this point we output all pairs of active \Rel r tuples (which is $\{r_2\}$ at the moment) with $s_1$: i.e., one pair: $\Tuple{r_2, s_1}$. When two tuples $r$ and $s$ start at the same time, we handle the $r$ tuple first: this way it will be in the active tuple set when we deal with the start of tuple $s$ and thus produce an output pair (satisfying the condition of the \RelationName{start preceding} predicate, which allows the argument tuples to start simultaneously). The algorithm will continue by adding $r_3$ to the active tuples set, removing $r_2$ from it and, finally, producing the second and last output pair $\Tuple{r_3, s_2}$. For this predicate, we can ignore the ending endpoints of \Rel s. \end{example} The original definition of the \JoinName{start preceding} predicate with relaxed $\delta$ (``$r$ and $s$ should overlap, with $r$ not starting after $s$'') can be reformulated as: ``$r$ should be active or start at the same time when $s$ starts''. This is exactly what this algorithm does: for every started tuple $s$ it produces an output pair with those $r$ tuples that are active or start at the same time as~$s$. \subsection{The ISEQL End Following Join} \label{sec:end-following-idea} As before, we first consider the ISEQL \JoinName{end following} predicate (\Doodle{\EndFollowing}) with a relaxed $\delta$ parameter (Table~\RefOnPage{table:allen-relations}), meaning that tuples $r$ and $s$ should overlap and $r$ is not allowed to end before $s$. The algorithm works similarly to the ISEQL \JoinName{start preceding join}, with two main differences. 
First, the output is triggered by the \emph{ending} endpoint of an \Rel s tuple, not the starting one. Second, when two tuples $r$ and $s$ end at the same time (e.g., $r_2$ and $s_1$ in Figure~\ref{fig:example-relations}), we have to process $s$ first before removing $r$ from the active tuple set, as such a pair should be reported in the output. In our example (Figure~\RefOnPage{fig:example-relations}), when we encounter the ending endpoint of $s_1$, the active tuple set contains $r_2$ and $r_3$. Therefore, the algorithm outputs $\Tuple{r_2, s_1}$ and $\Tuple{r_3, s_1}$. At the moment $s_2$ finishes, the active tuple set consists of $r_3$ only, resulting in the output $\Tuple{r_3, s_2}$. \subsection{The ISEQL Before Join} \label{sec:before-idea} In the case of the ISEQL \RelationName{before} predicate (\Doodle{\Before}), the tuples should not overlap at all. However, we are able to express it in terms of the \RelationName{start preceding} predicate and thus reduce the \JoinName{before join} to the \JoinName{start preceding join}, which we already know how to compute. In contrast to the joins described above, the parameter $\delta$ is crucial for the implementation of the \JoinName{before join}. The main idea is to change the interval of every outer relation tuple $r.T = [T_s, T_e)$ to $[T_e, T_e + \delta + 1)$, and then join the relations using the \JoinName{start preceding join}. (We can still set $\delta$ to $\infty$ in this generalized version, thus relaxing the constraint.) Note that we use the expression $T_e + \delta + 1$ as the new $T_e$; the general form is $[T_e, T_e + \delta + \epsilon)$, where $\epsilon$ is the smallest unit of the timeline, so that for $\delta = 0$ the expression $[T_e, T_e + \epsilon)$ is a valid interval of minimal possible length. Since we assume integer timestamps here, we use $\epsilon = 1$.
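The reduction can be checked mechanically: with integer timestamps, the condition $T_e \le s.T_s \le T_e + \delta$ coincides with the relaxed \RelationName{start preceding} condition evaluated on the transformed interval. The following Python sketch illustrates this; the predicate definitions are our reading of the ISEQL semantics and the sample intervals are illustrative assumptions:

```python
def transform(r, delta):
    """Map r.T = [Ts, Te) to [Te, Te + delta + 1); epsilon = 1 for
    integer timestamps."""
    ts, te = r
    return (te, te + delta + 1)

def start_preceding(r, s):
    """Relaxed start preceding: r is active (or starting) when s starts."""
    return r[0] <= s[0] < r[1]

def before(r, s, delta):
    """ISEQL before (our reading): r ends at most delta before s starts."""
    return r[1] <= s[0] <= r[1] + delta

# The reduction holds for every pair of sample intervals:
for r in [(0, 1), (1, 3), (2, 5)]:
    for s in [(1, 3), (3, 4)]:
        assert before(r, s, 2) == start_preceding(transform(r, 2), s)
```

The equivalence rests on the integer timeline: $s.T_s \le T_e + \delta$ is the same as $s.T_s < T_e + \delta + 1$.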
Figure~\ref{fig:before-join} shows the modified \Rel r tuples of our example relations for $\delta = 2$ (the original intervals are shown as gray dashed lines). If we now join both relations using the \JoinName{start preceding join}, we get the correct output for the \JoinName{before join} with $\delta = 2$: $\Tuple{r_1, s_1}$, $\Tuple{r_1, s_2}$, and $\Tuple{r_2, s_2}$. \begin{figure}[h] \centering \begin{tikzpicture}[relations] \renewcommand*{\DrawRTuple}[4] { \DrawTuple[old tuple]{#1}{#2 }{#3}{ } \DrawTuple {#2}{#2+3}{#3}{#4} } \DrawISEQLExampleRelations \ResetDrawRTuple \end{tikzpicture} \caption{Working principle of the ISEQL \JoinName{before join}, $\delta = 2$} \label{fig:before-join} \end{figure} At first, we thought about introducing a simpler change, such as incrementing the $T_e$ value of each \Rel r tuple by $\delta$ or decrementing the $T_s$ value of each \Rel s tuple. However, these simpler solutions can produce false positives if the original tuples overlap, which they are not allowed to do for the \RelationName{before} predicate. \fi \section{Our Framework} After introducing the interval joins formally, we now turn to their efficient implementation. We develop a framework that expresses the different interval joins with the help of just one core join algorithm. The framework also includes an index and several iterators for scanning through sets of intervals to increase performance and flexibility. \subsection{The Endpoint Index} \label{sec:endpoint-index} We can gain a considerable speed-up by sweeping through the interval endpoints in chronological order using an Endpoint Index, which is a simplified version of the Timeline Index \cite{kaufmann_timeline_2013}. The idea of the \emph{Endpoint Index} is that intervals, which can be seen as points in a two-dimensional space, are mapped onto one-dimensional \emph{endpoints} or \emph{events}. Let $\mathbf r$ be an interval relation with tuples $r_i$, where $1\leq i\leq n$.
A tuple $r_i$ in an Endpoint Index is represented by two events of the form $e = \Tuple{\mathit{timestamp},\allowbreak \mathit{type},\allowbreak \mathit{tuple\_id}}$, where $\mathit{timestamp}$ is the $T_s$ or $T_e$ of the tuple, $\mathit{type}$ is either a $\mathrm{start}$ or $\mathrm{end}$ flag, and $\mathit{tuple\_id}$ is the tuple identifier, i.e., the two events for a tuple $r_i$ are $\langle r_i.T_s,\allowbreak \mathrm{start},\allowbreak i\rangle$ and $\langle r_i.T_e,\allowbreak \mathrm{end},\allowbreak i\rangle$. For instance, for $r_3.T = [2,\allowbreak 5)$, the two events are $\langle 2,\allowbreak \mathrm{start},\allowbreak 3\rangle$ and $\langle 5,\allowbreak \mathrm{end},\allowbreak 3\rangle$, which can be read as ``at time 2 tuple 3 started'' and ``at time 5 tuple 3 ended''. We impose a total order among events: primarily by $\mathit{timestamp}$, with ties broken by $\mathit{type}$. In our case of half-open intervals, the order of $\mathit{type}$ values is: $\mathrm{end} < \mathrm{start}$. Endpoints with equal timestamps and types but different tuple identifiers are considered equal. An Endpoint Index for interval relation $\mathbf r$ is built by first extracting the interval endpoints from the relation and then creating the list of events $\left[e_1, e_2, \dots, e_{2n}\right]$ sorted in ascending order. In the case of event detection, the endpoints (events) can be taken directly from the event stream and we do not even have to construct an index. Consider the exemplary interval relation \Rel r from Figure~\ref{fig:example-relations}.
The Endpoint Index for it is $[\langle 0,\allowbreak \mathrm{start},\allowbreak 1\rangle$, $ \langle 1,\allowbreak \mathrm{end}, \allowbreak 1\rangle$, $ \langle 1,\allowbreak \mathrm{start},\allowbreak 2\rangle$, $ \langle 2,\allowbreak \mathrm{start},\allowbreak 3\rangle$, $ \langle 3,\allowbreak \mathrm{end}, \allowbreak 2\rangle$, $ \langle 5,\allowbreak \mathrm{end}, \allowbreak 3\rangle]$. \subsection{Endpoint Iterators} \label{sec:iterators} Before continuing with the join algorithms, we introduce the concept of the Endpoint Iterator, upon which our family of algorithms is based. An \emph{Endpoint Iterator} represents a cursor that allows forward traversal of a list of endpoints (e.g., an Endpoint Index). More formally, it is an abstract data type (an interface) that supports three operations: \begin{itemize} \item \texttt{getEndpoint}: returns the endpoint to which the iterator currently points (initially, the first endpoint in the list); \item \texttt{moveToNextEndpoint}: advances the cursor to the next endpoint; \item \texttt{isFinished}: returns \texttt{true} if the cursor is pointing beyond the last endpoint of the list, \texttt{false} otherwise. \end{itemize} \noindent More details on the implementation of Endpoint Iterators can be found in Appendix~\ref{appendix:iterators}. The basic implementation of the Endpoint Iterator is the \emph{Index Iterator}, which provides an Endpoint Iterator interface to a physical Endpoint Index. Given an instance of the index, such an iterator traverses all Endpoint Index elements using the index's native access method. In the text and in the algorithm descriptions we use the terms ``Endpoint Index'' and ``Index Iterator'' interchangeably, i.e., we create an Index Iterator for an Endpoint Index implicitly if needed. There are also \emph{wrapping} iterators.
Such iterators do not have direct access to an Endpoint Index, but modify, filter, and/or combine the output of one or several \emph{source} Endpoint Iterators. In software design-pattern terminology, such iterators are called \emph{decorators}. We conclude this subsection by introducing one such wrapping iterator; we introduce more of them later, as needed. The simplest wrapping Endpoint Iterator is the \emph{Filtering Iterator}. Upon construction, it receives a source Endpoint Iterator and an endpoint type ($\mathrm{start}$ or $\mathrm{end}$). It then traverses only the endpoints of the specified type. \subsection{The Core Algorithm \texttt{JoinByS}} \label{sec:corealg} We are now ready to define the core algorithm, which forms the basis of all our joins. This algorithm receives the relations \Rel r and \Rel s to be joined, Endpoint Iterators for them, a comparison predicate, and a callback function that is called for each result pair. The algorithm performs an interleaved scan of the two Endpoint Iterators. While doing so, it maintains the set of active \Rel r tuples. \emph{Every} endpoint for relation \Rel s triggers the output---the Cartesian product of the corresponding tuple $s$ and the set of active \Rel r tuples. The comparison predicate defines the order in which equal endpoints of different relations are handled (``equal'' meaning endpoints with the same timestamp and type). The pseudocode for the core algorithm \emph{JoinByS} is presented in Algorithm~\ref{alg:join-by-s}. The algorithm starts by initializing an active \Rel r tuple set, implemented via a map (an associative array) of tuple identifiers to tuples.
\begin{algorithm2e}[htb] \caption{JoinByS(\Rel r, \Rel s, \textsf{itR}, \textsf{itS}, \textsf{comp}, \textsf{consumer})} \label{alg:join-by-s} % \KwData {argument relations \Rel r and~\Rel s, corresponding Endpoint Iterators \ItR and~\ItS, endpoint comparison predicate \Comp (`$<$' or `$\leq$'), function \Consumer{$r, s$} for result pairs} % \Var \ActiveR$ \leftarrow$ \New Map of tuple identifiers to tuples\; % \While{\Not \ItR.\IsFinished \And \Not \ItS.\IsFinished} { \eIf{\Comp{\ItR.\GetEndpoint, \ItS.\GetEndpoint}} { \tcp{handle an \Rel r endpoint (maintain active \Rel r tuples)} % $\mathit{tid} \leftarrow \ItR.\GetEndpoint.\mathit{tuple\_id}$\; \eIf{\ItR.\GetEndpoint.$\mathit{type} = \mathrm{start}$} { $r \leftarrow \Rel r[\mathit{tid}]$\tcp*{load the tuple} \ActiveR.\Insert{$\mathit{tid}$, $r$}\; } { \ActiveR.\Remove{$\mathit{tid}$}\; } % \ItR.\MoveToNextEndpoint\; } { \tcp{handle an \Rel s endpoint (trigger output)} $\mathit{s} \leftarrow \Rel s[\ItS.\GetEndpoint.\mathit{tuple\_id}]$\tcp*{load tuple $s$} \ForEach(\tcp*[f]{with every active tuple $r$}){r $\in$ \ActiveR} { \Consumer{$r$, $s$}\tcp*{produce output pair $\Tuple{r, s}$} } \ItS.\MoveToNextEndpoint\; } } \end{algorithm2e} The main loop (line 2) and the main ``if'' (line 3) implement the interleaved scan of the endpoint indices (like in a sort-merge join). The tricky part here is that instead of a hardwired comparison operator (`$<$' or `$\leq$'), we use the function \textsf{comp}, which we pass as an argument to the algorithm. In the case of the \JoinName{start preceding join}, for instance, if both current endpoints of \Rel r and \Rel s are equal, we have to handle the \Rel r endpoint first (Section~\RefOnPage{sec:start-preceding-idea}), and thus we have to use the `$\leq$' predicate.
In the case of the \JoinName{end following join}, on the other hand, if both current endpoints of \Rel r and \Rel s are equal, we have to handle the \Rel s endpoint first (Section~\RefOnPage{sec:end-following-idea}), and thus we have to use the `$<$' predicate. Passing the predicate as an argument allows us to choose the required predicate when using the algorithm, which prevents code duplication.\footnote{Note that the comparison function is not the same as the parameter $\theta$ of the interval-timestamp join. \REV{The comparison operator in JoinByS makes sure that the events are processed in the right order.}} The rest of the algorithm consists of two parts. The first part (lines 4--10) handles an \Rel r endpoint and manages the active \Rel r tuple set. When a tuple starts, the algorithm loads it from the relation by the tuple identifier stored in the endpoint and puts the tuple into the map, using the identifier as the key. When a tuple ends, the algorithm removes it from the active tuple map, again using the tuple identifier as the key. The second part (lines 12--15) handles an \Rel s endpoint. It first loads the corresponding tuple $s$ from the relation. Then it iterates through all elements in the active \Rel r tuple map. For every active tuple $r$, the algorithm outputs the pair $\Tuple{r, s}$ by passing it to the $\mathit{consumer}$ function, which is another function-type argument of the algorithm. \REV{In some cases, the consumer has to do additional work, such as evaluating a selection predicate. We call these consumers $\mathit{filteringConsumers}$. If they have access to the full tuple, they can check the predicate and immediately output a result tuple.
In a streaming environment, we do not have access to the end events immediately, which means that a filteringConsumer also needs to buffer data until these events become available.} \section{Assembling the Parts} We now show how to construct the different interval joins using our JoinByS operator and iterators. We start with the expressions from Section~\ref{sec:formalization} that do not include map operators, followed by those that do. \subsection{Expressions Without Map Operators} \paragraph*{Start Preceding and End Following Joins:\,} These two join predicates are the easiest to implement, as they can be mapped directly to the JoinByS operator. For the \JoinName{start preceding join} (\Doodle{\StartPreceding}) we have to keep track of the active \Rel r tuples and trigger the output by the \emph{start} of an \Rel s tuple. If two tuples start at the same time, we have to handle the \Rel r tuple first. Therefore, we call the JoinByS function, passing to it only the starting \Rel s endpoints. This is achieved by using a Filtering Iterator (Section~\RefOnPage{sec:iterators}). We also have to pass the `$\leq$' predicate as the comparison function. A \JoinName{start preceding join} then boils down to a single call of JoinByS (see Algorithm~\ref{alg:start-preceding-join-base}). \begin{algorithm2e} \caption{StartPrecedingJoin(\Rel r, \Rel s, \textsf{itR}, \textsf{itS}, \textsf{consumer})} \label{alg:start-preceding-join-base} % JoinByS(\Rel r, \Rel s, \ItR, FilteringIterator(\ItS, $\mathrm{start}$), `$\leq$', \Consumer)\; \end{algorithm2e} The algorithm StartPrecedingJoin receives iterators to the Endpoint Indexes. When using this algorithm with Endpoint Indexes, we simply wrap each index in an Index Iterator---an operation that, as noted before, we consider implicit. We define the algorithm for the \JoinName{end following join} (\Doodle{\EndFollowing}) similarly, but filter the \emph{ending} endpoints of \Rel s and pass `$<$' as the comparison function.
The pseudocode of the EndFollowingJoin is presented in Algorithm~\ref{alg:end-following-join-base}. \begin{algorithm2e} \caption{EndFollowingJoin(\Rel r, \Rel s, \textsf{itR}, \textsf{itS}, \textsf{consumer})} \label{alg:end-following-join-base} % JoinByS(\Rel r, \Rel s, \ItR, FilteringIterator(\ItS, $\mathrm{end}$), `$<$', \Consumer)\; \end{algorithm2e} \paragraph*{Overlap Joins:\,} The \JoinName{left overlap join} (\Doodle{\LeftOverlap}) can be implemented using the StartPrecedingJoin algorithm with an additional constraint $r.T_e \leq s.T_e$. The pseudocode is shown in Algorithm~\ref{alg:left-overlap-join}. The \JoinName{right overlap join} (\Doodle{\Reverse\LeftOverlap}) is implemented along similar lines using the EndFollowingJoin algorithm and the selection predicate $s.T_s \leq r.T_s$. \begin{algorithm2e} \caption{LeftOverlapJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, \textsf{consumer})} \label{alg:left-overlap-join} \textsf{filteringConsumer} $\leftarrow$ \Function{$(r, s)$} { \lIf{$r.T_e \leq s.T_e$} { \Consumer{$r$, $s$} } } StartPrecedingJoin(\Rel r, \Rel s, \IdxR, \IdxS, \textsf{filteringConsumer}) \end{algorithm2e} For the Allen versions of the overlap joins, we use strict versions of Algorithms \ref{alg:start-preceding-join-base} and \ref{alg:end-following-join-base}, \emph{StartPrecedingStrictJoin} and \emph{EndFollowingStrictJoin}, which do not allow a tuple $r$ to start at the same time as a tuple $s$ or a tuple $s$ to end at the same time as a tuple $r$, respectively. They are just simple variations: StartPrecedingStrictJoin merely replaces the `$\leq$' in Algorithm~\ref{alg:start-preceding-join-base} with `$<$', and EndFollowingStrictJoin replaces the `$<$' in Algorithm~\ref{alg:end-following-join-base} with `$\leq$'. Additionally, we change the `$\leq$' in the selection predicates in the \textsf{filteringConsumer} functions to `$<$'.
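The composition pattern used above, i.e., wrapping the original consumer in a \textsf{filteringConsumer}, can be sketched in Python as follows. For brevity, a nested-loop join over the start-preceding predicate stands in for the sweep-based StartPrecedingJoin, and the sample intervals are illustrative assumptions:

```python
def start_preceding_join(R, S, consumer):
    # Stand-in for the sweep-based StartPrecedingJoin; it emits every
    # pair with r.Ts <= s.Ts < r.Te (half-open intervals [Ts, Te)).
    for r in R:
        for s in S:
            if r[0] <= s[0] < r[1]:
                consumer(r, s)

def left_overlap_join(R, S, consumer):
    # The extra constraint r.Te <= s.Te is checked in a wrapped consumer,
    # mirroring the filteringConsumer of LeftOverlapJoin.
    def filtering_consumer(r, s):
        if r[1] <= s[1]:
            consumer(r, s)
    start_preceding_join(R, S, filtering_consumer)

R = [(0, 4), (2, 3)]          # assumed sample intervals
S = [(1, 5)]
out = []
left_overlap_join(R, S, lambda r, s: out.append((r, s)))
print(out)                    # -> [((0, 4), (1, 5))]
```

Only $(0,4)$ joins with $(1,5)$: it starts no later and ends no later, while $(2,3)$ starts after $(1,5)$.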
\REV{ \paragraph*{During Joins:\,} Implementing the \JoinName{during} join (\Doodle{\During}) is similar to Algorithm~\ref{alg:left-overlap-join}: we just have to swap the arguments for $\Rel r$ and $\Rel s$ (alternatively, we could also use the \JoinName{end following} variant). For the Allen version of \JoinName{during} joins, we replace the StartPrecedingJoin, EndFollowingJoin, and selection predicates with their strict counterparts. If we simply call an algorithm with swapped arguments, the elements of the result pairs appear in a different order, i.e., $\Tuple{s, r}$ instead of the expected $\Tuple{r, s}$. If this is an issue, we can swap them back using a lambda function as the consumer. Putting everything together, we get Algorithm~\ref{alg:during-join}. \begin{algorithm2e} \caption{DuringJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, \textsf{consumer})} \label{alg:during-join} \textsf{reversingConsumer} $\leftarrow$ \Function{$(s, r)$} { \Consumer{$r$, $s$}\; } \textsf{filteringConsumer} $\leftarrow$ \Function{$(s, r)$} { \lIf{$r.T_e \leq s.T_e$} { \textsf{reversingConsumer}($s$, $r$) } } StartPrecedingJoin(\Rel s, \Rel r, \IdxS, \IdxR, \textsf{filteringConsumer}) \end{algorithm2e} } \subsection{Expressions With Map Operators} In order to avoid physically changing tuple values or even the Endpoint Index, we apply the changes made by the map operators virtually with an iterator. While performing an interleaved scan of two Endpoint Indexes, instead of simply comparing the two endpoints $r^e$ and $s^e$ (as in $r^e < s^e$), we shift the timestamp of one of them when comparing: $r^e + \delta < s^e$. In this way the algorithm performs an interleaved scan of the indexes as if we had shifted all \Rel r tuples in time by $+\delta$. 
During an interleaved scan, instead of forcing the iterators of the two Endpoint Indexes (for the relations \Rel r and \Rel s) to move synchronously as in all the operators so far, one of the iterators now lags behind by a constant offset. This behavior can easily be incorporated into our framework by using a special Endpoint Iterator that shifts the timestamp of every endpoint it returns on the fly. There is a second issue: the new \emph{starting} endpoint is often actually a shifted \emph{ending} endpoint, or vice versa. Consequently, we have to change the endpoint type as well. With the help of our \emph{Shifting Iterator}, we can shift timestamps and also change endpoint types. As input parameters, a Shifting Iterator receives a source Endpoint Iterator, the shifting distance, and an endpoint type (start or end). The final issue is \REV{separately} shifting the starting and ending endpoints by different amounts. We solve this by having independent iterators for the starting and ending endpoints and merging them on the fly in an interleaved fashion. The input parameters of the \emph{Merging Iterator} are two other iterators, the events of which it merges. See Appendix \ref{appendix:iterators} for more details. \paragraph*{Before and Meets Joins:\,} We are now ready to create a \emph{GeneralBeforeJoin} (see Algorithm~\ref{alg:before-join} and \REV{Figure~\ref{fig:schematics} for a schematic representation}); we already handle the parameterized version here as well. This algorithm performs a virtual three-way sort-merge join of the two Endpoint Indexes. One pointer traverses the Endpoint Index for relation \Rel s, and two pointers traverse the Endpoint Index for relation \Rel r, all three pointers moving synchronously, but at different positions. This is why we had to (implicitly) create two Index Iterators for the same index (lines 4 and 6)---each of them represents a separate physical pointer into the same Endpoint Index.
\begin{algorithm2e} \SetInd{0pt}{1.5em} \caption{GeneralBeforeJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, $\beta$, $\delta$, \textsf{consumer})} \label{alg:before-join} % StartPrecedingJoin(\Rel r,\Rel s,\\\Indp MergingIterator(\\\Indp \tcp{$T_e + \beta \rightarrow T_s$} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{end}$), $\beta$, $\mathrm{start}$),\\\Indm \tcp{$T_e + \delta + 1\rightarrow T_e$} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{end}$), $\delta + 1$, $\mathrm{end}$)),\\\Indm\Indm IndexIterator(\IdxS),\\ \Consumer)\; \end{algorithm2e} \REV{ \begin{figure}[htb] \centering \begin{tikzpicture} \draw (0,0.6) node (A) {\Rel r}; \draw (0,0) node (B) {\Rel s}; \draw (1.5,0.8) node (C) {filter}; \draw (1.5,0.4) node (D) {filter}; \draw (2,0) node (E) {index}; \draw (3,0.8) node (F) {shift}; \draw (3,0.4) node (G) {shift}; \draw (4.5,0.6) node (H) {merge}; \draw (4,0) node (I) {filter}; \draw (6,0.4) node (J) {JoinByS}; \draw[-latex] (A.east) -- (C.west); \draw[-latex] (A.east) -- (D.west); \draw[-latex] (C.east) -- (F.west); \draw[-latex] (D.east) -- (G.west); \draw[-latex] (F.east) -- (H.west); \draw[-latex] (G.east) -- (H.west); \draw[-latex] (H.east) -- (J.west); \draw[-latex] (B.east) -- (E.west); \draw[-latex] (E.east) -- (I.west); \draw[-latex] (I.east) -- (J.west); \end{tikzpicture} \caption{Schematic representation of \emph{GeneralBeforeJoin}} \label{fig:schematics} \end{figure} } We express Allen's \JoinName{before join} (\Doodle{\Before}) by substituting 1 and $+\infty$ for $\beta$ and $\delta$, respectively; Allen's \JoinName{meets join} (\Doodle{\Meets}) by substituting 0 for both; and the ISEQL \JoinName{before join} by substituting 0 for $\beta$ and using $\delta$ only for the parameterized version. The parameter $\beta$ distinguishes between the strict (Allen) and non-strict (ISEQL) versions of the operator.
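The iterator stack of GeneralBeforeJoin can be mimicked with Python generators, where \texttt{heapq.merge} plays the role of the Merging Iterator (it assumes each input stream is already sorted). The $T_e$ values below are assumptions matching our running example; with $\beta = 0$ and $\delta = 2$, the resulting stream corresponds to the virtual intervals $[1,4)$, $[3,6)$, and $[5,8)$ of Figure~\ref{fig:before-join}:

```python
import heapq

END, START = 0, 1        # end events sort before start events on ties

def shifting(events, distance, new_type):
    """Sketch of the Shifting Iterator: shift every timestamp and
    rewrite the endpoint type on the fly."""
    for t, _, tid in events:
        yield (t + distance, new_type, tid)

# Assumed ending endpoints of r1..r3 from the running example.
r_ends = [(1, END, 1), (3, END, 2), (5, END, 3)]
beta, delta = 0, 2

virtual = list(heapq.merge(
    shifting(r_ends, beta, START),       # T_e + beta          -> new T_s
    shifting(r_ends, delta + 1, END)))   # T_e + delta + 1     -> new T_e
print(virtual)
# -> [(1,1,1), (3,1,2), (4,0,1), (5,1,3), (6,0,2), (8,0,3)]:
#    the endpoint stream of the virtual intervals [1,4), [3,6), [5,8)
```

Feeding this merged stream (together with the \Rel s index) into the start preceding join yields the before-join result, exactly as in Algorithm~\ref{alg:before-join}.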
\paragraph*{Equals and Starts Joins:\,} For the \JoinName{equals join} (\Doodle{\Equals}) we keep the original starting endpoints of \Rel r, use the starting endpoints shifted by one as ending endpoints, and then execute a StartPrecedingJoin. This matches tuples from \Rel r and \Rel s with the same starting endpoints. We check that we have matching ending endpoints in the \textsf{filteringConsumer} function, which receives the actual tuples as input and thus has access to the timestamp attributes of the original tuples (see Algorithm~\ref{alg:equals-join} for the pseudocode). \begin{algorithm2e} \SetInd{0pt}{1.5em} \caption{EqualsJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, \textsf{consumer})} \label{alg:equals-join} % \textsf{filteringConsumer} $\leftarrow$ \Function{$(r, s)$} { \lIf{$r.T_e = s.T_e$} { \Consumer{$r$, $s$} } } StartPrecedingJoin(\Rel r,\Rel s,\\\Indp MergingIterator(\\\Indp \tcp{keep the original \Rel r starting endpoints} FilteringIterator(\IdxR, $\mathrm{start}$), \\ \tcp{$T_s + 1\rightarrow T_e$} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{start}$), $1$, $\mathrm{end}$)),\\\Indm\Indm IndexIterator(\IdxS),\\ \textsf{filteringConsumer})\; \end{algorithm2e} For a \JoinName{starts join} (\Doodle{\Starts}) we just have to change the predicate in the \textsf{filteringConsumer} function from `$=$' to `$<$'. \paragraph*{Finishes Join:\,} For the tuples in \Rel r, we derive starting events from the ending events shifted back by one, keep the original ending events, and then join them to the tuples in \Rel s via an EndFollowingJoin (Algorithm~\ref{alg:end-following-join-base}). Finally, we check that the tuple from \Rel s started before the one from \Rel r. For the pseudocode of the \JoinName{finishes join} (\Doodle{\Finishes}), see Algorithm~\ref{alg:finishes-join}.
\begin{algorithm2e} \SetInd{0pt}{1.5em} \caption{FinishesJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, \textsf{consumer})} \label{alg:finishes-join} % \textsf{filteringConsumer} $\leftarrow$ \Function{$(r, s)$} { \lIf{$s.T_s < r.T_s$} { \Consumer{$r$, $s$} } } EndFollowingJoin(\Rel r,\Rel s,\\\Indp MergingIterator(\\\Indp \tcp{\REV{$T_e - 1 \rightarrow T_s$}} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{end}$), \REV{$-1$}, $\mathrm{start}$),\\\Indm \tcp{\REV{$T_e \rightarrow T_e$}} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{end}$), \REV{$0$}, $\mathrm{end}$)),\\\Indm\Indm IndexIterator(\IdxS),\\ \textsf{filteringConsumer})\; \end{algorithm2e} \paragraph*{Parameterized Start Preceding Join:\,} We now turn to the parameterized variant of the \JoinName{start preceding join} (\Doodle{\StartPreceding}), which has the parameter $\delta$ constraining the maximum distance between tuple starting endpoints. The basic idea is to take the starting endpoints of relation \Rel r, shift them by $\delta + 1$, change their type to ending endpoints, and add these virtual endpoints to the original endpoints of \Rel r. This way, each $r$ tuple will be represented by three endpoints: the original starting and ending endpoints and the virtual ending endpoint. Then the parameterless StartPrecedingJoin algorithm (Algorithm~\ref{alg:start-preceding-join-base}) is applied to both streams of \Rel r and \Rel s endpoints. When the merged iterator encounters the second ending endpoint of a tuple, this endpoint can simply be ignored if the corresponding tuple can no longer be found in the active tuple set (see Appendix~\ref{sec:firstend}). Algorithm~\ref{alg:parametrized-start-preceding-join} depicts the pseudocode.
\begin{algorithm2e} \SetInd{0pt}{1.5em} \caption{PStartPrecedingJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, $\delta$, \textsf{consumer})} \label{alg:parametrized-start-preceding-join} % StartPrecedingJoin(\Rel r,\Rel s,\\\Indp \REV{FirstEndIterator}(\\\Indp MergingIterator(\\\Indp \tcp{keep the original \Rel r endpoints} IndexIterator(\IdxR),\\ \tcp{$T_s + \delta + 1 \rightarrow T_e$} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{start}$), $\delta + 1$, $\mathrm{end}$))),\\\Indm\Indm IndexIterator(\IdxS),\\ \Consumer)\; \end{algorithm2e} \paragraph*{Parameterized End Following Join:\,} The analogous parameterized \JoinName{end following join} (\Doodle{\EndFollowing}) is more complicated. The problem here is that each $r$ tuple has to be represented by two starting endpoints. The algorithm must consider a tuple activated only if \emph{both} starting endpoints (and no ending endpoint) have been encountered. We achieve this by introducing an iterator, called the \emph{Second Start Iterator}, that stores in a hash set the tuple identifiers for which only one starting endpoint has been encountered so far \REV{(see Appendix~\ref{sec:secondstart})}. Only upon the second starting endpoint of a tuple does the iterator return a starting event. The pseudocode for the parameterized \JoinName{end following join} is shown in Algorithm~\ref{alg:parametrized-end-following-join}.
\begin{algorithm2e} \SetInd{0pt}{1.5em} \caption{PEndFollowingJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, $\varepsilon$, \textsf{consumer})} \label{alg:parametrized-end-following-join} % EndFollowingJoin(\Rel r,\Rel s,\\\Indp SecondStartIterator(\\\Indp MergingIterator(\\\Indp \tcp{keep the original \Rel r endpoints} IndexIterator(\IdxR),\\ \tcp{$T_e - \varepsilon - 1 \rightarrow T_s$} ShiftingIterator(\\\Indp FilteringIterator(\IdxR, $\mathrm{end}$), $- \varepsilon - 1$, $\mathrm{start}$))),\\\Indm\Indm IndexIterator(\IdxS),\\ \Consumer)\; \end{algorithm2e} \paragraph*{Parameterized Overlap Join:\,} Now that we have an algorithm for the parameterized StartPrecedingJoin, we can define the parameterized \JoinName{left overlap join} (\Doodle{\LeftOverlap}) by combining PStartPrecedingJoin with a \textsf{filteringConsumer} function, similarly to what we have done for the non-parameterized overlap join. Algorithm~\ref{alg:parameterized-left-overlap-join} shows the pseudocode. Alternatively, we can use a PEndFollowingJoin and then check the predicate for the starting endpoint of the $s$ tuple in the \textsf{filteringConsumer} function. \begin{algorithm2e} \caption{PLeftOverlapJoin(\Rel r, \Rel s, \textsf{idxR}, \textsf{idxS}, $\delta$, $\varepsilon$, \textsf{consumer})} \label{alg:parameterized-left-overlap-join} \textsf{filteringConsumer} $\leftarrow$ \Function{$(r, s)$} { \lIf{$r.T_e \leq s.T_e \leq r.T_e + \varepsilon$} { \Consumer{$r$, $s$} } } PStartPrecedingJoin(\Rel r, \Rel s, \IdxR, \IdxS, $\delta$, \textsf{filteringConsumer}) \end{algorithm2e} The \JoinName{right overlap join} (\Doodle{\Reverse\LeftOverlap}) uses a PEndFollowingJoin with the corresponding predicate in the \textsf{filteringConsumer} function. 
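The Second Start Iterator used by PEndFollowingJoin can be sketched as a generator; the tuple-based event encoding below is an illustrative assumption:

```python
END, START = 0, 1        # end events sort before start events on ties

def second_start_iterator(source):
    """Swallow the first starting endpoint of each tuple; emit a start
    event only when the second one arrives. End events pass through."""
    pending = set()                     # ids with exactly one start seen
    for t, typ, tid in source:
        if typ == START and tid not in pending:
            pending.add(tid)            # first start: suppress it
            continue
        pending.discard(tid)
        yield (t, typ, tid)             # second start, or an end event

# For r.T = [1, 5) and epsilon = 1, the virtual start lies at Te - 2 = 3:
stream = [(1, START, 1), (3, START, 1), (5, END, 1)]
print(list(second_start_iterator(stream)))   # -> [(3, 1, 1), (5, 0, 1)]
```

The tuple thus becomes visible to the downstream join only from its second (i.e., later) starting endpoint onwards, which is exactly the activation condition stated above.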
\REV{ \paragraph*{Parameterized During Join:\,} The parameterized \JoinName{during join} (\Doodle{\During}) looks similar to Algorithm~\ref{alg:parameterized-left-overlap-join}; we apply changes along the lines of those shown in the paragraph for the non-parameterized \JoinName{during join}. (There is also an alternative version using a PEndFollowingJoin.) } \subsection{Correctness of Algorithms} \label{sec:correctness} Showing the correctness of our algorithms boils down to illustrating that we handle the map operators correctly and demonstrating the correctness of the StartPreceding and EndFollowing joins, as our algorithms either are StartPreceding or EndFollowing joins or are built on top of them. \paragraph*{Iterators and Map Operators:\,} Here we show how to implement map operators with the help of iterators. Instead of materializing the result (e.g.\ on disk), we make the corresponding changes to an event as it passes through an iterator. If we still need a copy of the old event later on, we feed this event through another iterator and merge the two event streams using a Merging Iterator. \paragraph*{StartPreceding Join:\,} We have to show that all tuples created by Algorithm~\ref{alg:start-preceding-join-base} satisfy the predicate $r.T_s \le s.T_s < r.T_e$. A Filtering Iterator removes all the ending events from \Rel s, so we only have to deal with starting events from \Rel s and with both types of events from \Rel r. As comparison operator we use `$\le$'. This determines the order in which events are dealt with. First, let us look at the case that both upcoming events in \textsf{itR} and \textsf{itS} are starting events. If $r.T_s \le s.T_s$, then $r$ will be inserted into the active tuple set before $s$ is processed, meaning that the (later) arrival of $s$ will trigger the join with $r$. If $r.T_s > s.T_s$, then $s$ will be processed first, not encountering $r$ in the active tuple set, meaning that the two will not join.
Second, if the next event in \Rel r is an ending event and the next event in \Rel s a starting event, then the two events can never be equal. Even if they have the same timestamp, the ending endpoint of $r$ will always be considered less than the starting endpoint of $s$. Therefore, if $r.T_e \le s.T_s$, $r$ will be removed first, so $r$ and $s$ will not join, and if $r.T_e > s.T_s$, $s$ will still join with $r$. In summary, all the tuples generated by Algorithm~\ref{alg:start-preceding-join-base} satisfy the predicate $r.T_s \le s.T_s < r.T_e$. For a StrictStartPreceding join we run Algorithm~\ref{alg:start-preceding-join-base} with the comparison operator `$<$', yielding output tuples that satisfy the predicate $r.T_s < s.T_s < r.T_e$. If both upcoming events in \textsf{itR} and \textsf{itS} are starting events, we get the correct behavior: $r.T_s < s.T_s$ will lead to a join, $r.T_s \ge s.T_s$ will not. If the $r$ event is an ending event and the $s$ event is a starting one, we also get the correct behavior: $r.T_e \le s.T_s$ will not join the $r$ and $s$ tuples, $r.T_e > s.T_s$ will (the ending event of $r$ is always less than the starting event of $s$). \paragraph*{EndFollowing Join:\,} We show that all tuples created by Algorithm~\ref{alg:end-following-join-base} satisfy the predicate $r.T_s < s.T_e \le r.T_e$. This time a Filtering Iterator removes all the starting events from \Rel s, so we only have to deal with ending events from \Rel s and with both types of events from \Rel r. The comparison operator used for the non-strict version is `$<$'. First, assume that the next event in \textsf{itR} is a starting event and the next event in \textsf{itS} is an ending event. As an ending event takes precedence over a starting event, if $r.T_s = s.T_e$, the $s$ event will come first. In turn this means that if $r.T_s < s.T_e$, $r$ is added to the active set first, resulting in a join, and if $r.T_s \ge s.T_e$, $s$ is processed first, meaning there is no join.
Second, we now look at the case that both events are ending events. Due to the comparison operator `$<$', the events are handled in the right way: if $r.T_e < s.T_e$, we remove $r$ first, so there is no join, and if $r.T_e \ge s.T_e$ we handle $s$ first, resulting in a join. For a StrictEndFollowing join we run Algorithm~\ref{alg:end-following-join-base} with `$\le$' as comparison operator to obtain tuples that satisfy the predicate $r.T_s < s.T_e < r.T_e$. Let us first look at a starting event for \Rel r and an ending event for \Rel s. As ending events are processed before starting events with the same timestamp, we get: if $r.T_s < s.T_e$, then $r$ is added first, resulting in a join, and if $r.T_s \ge s.T_e$, then $s$ is removed first, meaning there is no join. Finally, we investigate the case that both events are ending events: if $r.T_e \le s.T_e$, then $r$ is removed first, i.e., no join, and if $r.T_e > s.T_e$, then $s$ is processed first, joining $r$ and $s$. \section{Implementation Considerations} In this section we look at techniques to implement our framework efficiently, in particular how to represent an active tuple set in a way that utilizes contemporary hardware. We also investigate the overhead caused by our heavy use of abstractions (such as iterators). \subsection{Managing the Active Tuple Set} \label{sec:active-tuple-maps} For managing the active tuple set we need a data structure into which we can {\tt insert} key-value pairs, {\tt remove} them, and quickly enumerate (scan) one by one all the values contained in the data structure via the operation {\tt getnext}. In our case, the keys are tuple identifiers and the values are the tuples themselves. The data structure of choice here is a map or associative array. The most efficient implementation of a map optimizing the {\tt insert} and {\tt remove} operations is a hash table (with $O(1)$ time complexities for these operations). However, hash tables are not well-suited for scanning.
The \emph{std::unordered\_map} class in the C++ Standard Template Library and the \emph{java.util.HashMap} in the Java Class Library, for instance, scan through all the buckets of a hash table, making the performance of a scan operation linear with respect to the capacity of the hash table and not to the actual number of elements in it. In order to achieve an $O(1)$ complexity for {\tt getnext}, the elements in the hash table can be connected via a doubly-linked list (see Figure~\ref{fig:linked-hash-map}). The hash table stores pointers to elements, which in turn contain a key, a value, two pointers for the doubly-linked list (\emph{list prev} and \emph{list next}) and a pointer for chaining elements of the same bucket for collision resolution (pointer \emph{bucket next}). This approach is employed in the \emph{java.util.LinkedHashMap} in the Java Class Library. \begin{figure}[htb] \centering \includegraphics[width=6.1cm]{images/linked-hash-map} \vspace*{-.15cm} \caption{Linked hash map} \label{fig:linked-hash-map} \end{figure} While this data structure offers a constant complexity for {\tt getnext}, the execution times of different calls of {\tt getnext} can vary widely in practice, depending on the memory footprint of the map. After a series of insertions and deletions the elements of the linked list become randomly scattered in memory, which \REV{has an impact on caching: sometimes the next list element is still in the cache (resulting in fast retrieval), sometimes it is not (resulting in slow random memory accesses). Additionally, the pointer structure makes it hard for a prefetcher to determine where the next elements are located.} However, for our approach it is crucial that {\tt getnext} can be executed very efficiently, as it is typically called much more often than \texttt{insert} and \texttt{remove}. We will see in Section~\ref{sec:map} how to implement a hash map more efficiently.
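The linked-hash-map layout can be sketched with standard containers. This is a minimal illustration using a \texttt{std::list} plus an index of iterators rather than the intrusive pointers of Figure~\ref{fig:linked-hash-map}; the type and method names are ours, not the paper's code:

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

// Minimal linked hash map: O(1) insert/remove via the hash index, while a
// scan walks only the elements actually stored (the list), not all buckets
// of the underlying hash table.
struct LinkedHashMap {
    std::list<std::pair<int, int>> elems;  // (key, value) in insertion order
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index;

    void insert(int key, int value) {
        elems.emplace_back(key, value);
        index[key] = std::prev(elems.end());
    }
    void remove(int key) {
        auto it = index.find(key);
        if (it == index.end()) return;
        elems.erase(it->second);  // O(1) unlink from the list
        index.erase(it);
    }
    // getnext-style enumeration of all values, O(1) per element.
    std::vector<int> scan_values() const {
        std::vector<int> out;
        for (const auto& kv : elems) out.push_back(kv.second);
        return out;
    }
};
```

As in the text, the scan costs constant time per element, but the list nodes end up scattered in memory after many insertions and deletions, which hurts caching.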
\subsection{Lazy Joining of the Active Tuple Set} \label{sec:lazyjoining} The fastest {\tt getnext} operations are actually those that are not executed. We modify our algorithm to boost its performance by significantly reducing the number of {\tt getnext} operations needed to generate the output. We illustrate our point using the example setting in Figure~\ref{fig:example-data}. Assume we have just encountered the left endpoint of $s_1$, which means that our algorithm now scans the tuple set $active^r$, which contains $r_1$ and $r_2$. After that we scan it again and again when encountering the left endpoints of $s_2$, $s_3$, and $s_4$. However, since no endpoints of $\mathbf r$ were encountered during that time, we scan the same version of $active^r$ four times. We can reduce this to one scan if we keep track of the tuples $s_1$, $s_2$, $s_3$, and $s_4$ in a (contiguous) buffer, delaying the scan until there is about to be a change in $active^r$. \begin{figure}[htb] \includegraphics[width=\linewidth]{images/example-data} \vspace*{-.15cm} \caption{Example interval relations} \label{fig:example-data} \end{figure} Concretely, we collect all consecutively encountered \Rel s tuples in a small buffer that fits into the L1 cache. Scanning the active tuple set when producing the output now requires only one traversal. Thanks to the design of our join algorithms we can incorporate this optimization into the whole framework by modifying JoinByS. The optimized version is shown in Algorithm~\ref{alg:lazy-join-by-s}. This technique has been introduced for overlap joins in~\cite{piatov_interval_2016}; here we generalize it to the JoinByS algorithm. We recommend using a buffer capacity $c$ smaller than the size of the L1d CPU cache (usually 32~KB) for this method to be effective.
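The core of the lazy-buffering idea can be sketched as follows. The flat event encoding and the names are illustrative, not the framework's actual iterator interfaces, and the sketch assumes the event stream is already sorted with the desired tie-breaking:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of lazy buffering: instead of scanning the active r tuples once per
// s event, consecutive s tuples (uninterrupted by r events) are collected in
// a small buffer and joined with a single scan of the active set.
struct Event { int time; bool start; int tid; bool from_r; };

// Emits (r_id, s_id) pairs for a start-preceding-style join.
std::vector<std::pair<int, int>> lazy_join(const std::vector<Event>& events,
                                           std::size_t buf_capacity) {
    std::vector<int> active_r;  // identifiers of active r tuples
    std::vector<int> buffer;    // buffered s tuple identifiers
    std::vector<std::pair<int, int>> out;

    auto flush = [&]() {        // one scan of active_r per filled buffer
        for (int r : active_r)
            for (int s : buffer) out.emplace_back(r, s);
        buffer.clear();
    };
    for (const Event& e : events) {
        if (e.from_r) {
            flush();            // active_r is about to change
            if (e.start)
                active_r.push_back(e.tid);
            else
                active_r.erase(std::find(active_r.begin(), active_r.end(), e.tid));
        } else if (e.start) {   // ending s events have been filtered out
            buffer.push_back(e.tid);
            if (buffer.size() == buf_capacity) flush();
        }
    }
    flush();
    return out;
}
```

Flushing before every change to the active set preserves correctness: every buffered $s$ tuple is joined with exactly the version of the active set that was valid when it arrived.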
\begin{algorithm2e} \caption{LazyJoinByS(\Rel r, \Rel s, \textsf{itR}, \textsf{itS}, \textsf{comp}, \textsf{consumer})} \label{alg:lazy-join-by-s} % \KwData {argument relations \Rel r and~\Rel s, corresponding Endpoint Iterators \ItR and~\ItS, endpoint comparison predicate \Comp (`$<$' or `$\leq$'), function \Consumer{$r, s$} for result pairs} % \Var \ActiveR$ \leftarrow$ \New Map of tuple identifiers to tuples\; \Var \Buffer$ \leftarrow$ \New array of capacity $c$\; % \While{\Not \ItR.\IsFinished \And \Not \ItS.\IsFinished} { \eIf{\Comp{\ItR.\GetEndpoint, \ItS.\GetEndpoint}} { \tcp{handle an \Rel r endpoint (maintain active \Rel r tuples)} % $\mathit{tid} \leftarrow \ItR.\GetEndpoint.\mathit{tuple\_id}$\; \eIf{\ItR.\GetEndpoint$ = \mathrm{start}$} { $r \leftarrow \Rel r[\mathit{tid}]$\tcp*{load the tuple} \ActiveR.\Insert{$\mathit{tid}$, $r$}\; } { \ActiveR.\Remove{$\mathit{tid}$}\; } % \ItR.\MoveToNextEndpoint\; } { \tcp{get sequence of $s$ tuples uninterrupted by $r$ events} \Repeat{\ItS.\IsFinished \Or \Comp{\ItR.\GetEndpoint, \ItS.\GetEndpoint} \Or \Buffer.\IsFull} { $\mathit{s} \leftarrow \Rel s[\ItS.\GetEndpoint.\mathit{tuple\_id}]$\; \Buffer.\Insert{$s$}\; \ItS.\MoveToNextEndpoint\; } \tcp{produce Cartesian product with active \Rel r tuples} \ForEach(\tcp*[f]{scan the active $r$ tuples once}){r $\in$ \ActiveR} { \ForEach(\tcp*[f]{the inner loop, in L1 cache}){s $\in$ \Buffer} { \Consumer{$r$, $s$}\tcp*{produce output pair} } } \Buffer.\Clear\; } } \end{algorithm2e} For the sake of simplicity, we only refer to the JoinByS algorithm in the following section. It can be replaced by the LazyJoinByS algorithm without any change in functionality. \subsection{Features of Contemporary Hardware} Before describing further optimizations, we briefly review mechanisms employed by contemporary hardware to decrease main memory latency. This latency can have a huge impact, as fetching data from main memory may easily use up more than a hundred CPU cycles. 
\paragraph*{Mechanisms:\,} Usually, there is a hierarchy of caches, with smaller, faster ones closer to CPU registers. Cache memory has a far lower latency than main memory, so a CPU first checks whether the requested data is already in one of the caches (starting with the L1 cache, working down the hierarchy). Not finding data in a cache is called a {\em cache miss} and only in the case of cache misses on all levels is main memory accessed. In practice an algorithm with a small memory footprint runs much quicker: in the ideal case, when an algorithm's data (and code) fits into the cache, main memory only has to be accessed once at the very beginning, loading the data (and code) into the cache. Besides the size of a memory footprint, the access pattern also plays a crucial role, as contemporary hardware contains {\em prefetchers} that speculate on which blocks of memory will be needed next and preemptively load them into the cache. The easier the access pattern can be recognized by a prefetcher, the more effective it becomes. Sequential access is a pattern that can be picked up by prefetchers very easily, while random access effectively renders them useless. Also, programs do not access physical memory directly, but through a virtual memory manager, i.e., virtual addresses have to be mapped to physical ones. Part of the mapping table is cached in a so-called {\em translation lookaside buffer} (TLB). As the size of the TLB is limited, a program with a high level of locality will run faster, as all look-ups can be served by the TLB. {\em Out-of-order execution} (also called {\em dynamic execution}) allows a CPU to deviate from the original order of the instructions and run them as the data they process becomes available. Clearly, this can only be done when the instructions are independent of each other and can be run concurrently without changing the program logic.
Finally, certain properties of DRAM (dynamic random access memory) chips also influence latency. Accessing memory using fast page mode or a similar mechanism means accessing data stored within the same page or bank without incurring the overhead of selecting it. This mechanism favors memory accesses with a high level of locality. \paragraph*{Performance Numbers:\,} We provide some numbers to give an impression of the performance of currently used hardware. For contemporary processors, such as ``Core'' and ``Xeon'' by Intel\footnote{We use the cache and memory latencies obtained for the Sandy Bridge family of Intel CPUs using the SiSoftware Sandra benchmark, \url{http://www.sisoftware.net/?d=qa\&f=ben\_mem\_latency}.}, one random memory access within the L1 data (\emph{L1d}) cache (32~KB per core) takes 4 CPU cycles. Within the L2 cache (256~KB per core) one random memory access takes 11--12 cycles. Within the L3 cache (3--45~MB) one random memory access takes 30--40 CPU cycles. Finally, one random physical RAM access takes around 70--100~ns (200--300 processor cycles). It follows that the performance gap between an L1 cache access and a main memory access is huge: two orders of magnitude. \subsection{Implementation of the Active Tuple Set} \label{sec:map} As we will see later in an experimental evaluation, managing the active tuple set efficiently in terms of memory accesses is crucial for the performance of the join algorithm. Otherwise we run the risk of starving the CPU while processing a join. Our goals are to store the active tuple set as compactly as possible and to access it sequentially, allowing the hardware to get the data to the CPU in an efficient manner. We store the elements of our hash map in a contiguous memory area. For the {\tt insert} operation this means that we always append a new element at the end of the storage area. Removing the last element from the storage area is straightforward.
If the element to be removed is not the last in the storage area, we swap it with the last element and then remove it. When doing so, we have to update all references to the moved element. Scanning involves stepping through the contiguous storage area sequentially. We call our data structure a \emph{gapless hash map} (see Figure~\ref{fig:gapless-hash-map}). \begin{figure}[htb] \centering \includegraphics[width=6.1cm]{images/unordered-hash-map-separated} \vspace*{-.15cm} \caption{Gapless hash map} \label{fig:gapless-hash-map} \end{figure} We also separate the tuples from the elements, storing them in a different contiguous memory area in corresponding locations. \REV{Assuming fixed-size records}, all basic element operations (append and move) are mirrored for the corresponding tuples. This slightly increases the cost of inserting and removing tuples. However, scanning the tuples is as fast as possible, because we do not need to read any metadata, only tuple information. The hash table stores pointers to elements, which contain a key, a pointer for chaining elements of the same bucket when resolving collisions (pointer \emph{bucket next}, solid arrows), and a pointer \emph{bucket prev} to a hash table entry or an element (whichever holds the forward pointer to this element, dashed arrows). The latter is used for updating the reference to an element when changing the element position. The main difference to the random memory access of a linked hash map (Fig.~\ref{fig:linked-hash-map}) is the allocation of all elements in a contiguous memory area, allowing for fast sequential memory access when enumerating the values. \begin{example} Assume we want to remove tuple 7 from the structure depicted in Figure~\ref{fig:gapless-hash-map}. First of all, the bucket-next pointer of the element with key 5 is set to NULL. Next, the last element in the storage area (tuple 2) is moved to the position of the element with key 7.
Following the bucket-prev pointer of the just moved element we find the reference to the element in the hash table and update it. Finally, the variable \emph{tail} is decremented to point to the element with key 9. \end{example} \subsection{Overhead for Abstractions} All the abstractions we use (iterators, predicates passed as function arguments, and lambda functions) allow us to express all joins by means of a single function, which is extremely practical, as it greatly simplifies the implementation and subsequent maintenance of the code. In this section we explain why the impact of this architecture on the performance is minimal for C++ and not significant for Java. We compare our implementation empirically to a manual rewrite of a selected join algorithm without abstractions. Here, we show the results for our most complicated implementation, Algorithm~\ref{alg:before-join}. We compare its performance to a version that was fully inlined manually into a single leaf function. We did so for both C++ and Java. We then launched each one of the four versions separately using the synthetic dataset of $10^6$ tuples with an average number of active tuples equal to $10$ (see Section~\ref{sec:setup} for the dataset). Each version executed the join several times sequentially to allow the JVM to perform all necessary optimizations. The results are shown in Figure~\ref{fig:experiments-compilers}. We see that the C++ version is several times faster than the Java version. Moreover, we see that the C++ compiler was able to optimize our abstracted code so well that its performance is indistinguishable from the manually optimized version. The situation with Java is more complicated; in the end the manually optimized version was ${\sim}10\%$ faster.
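The abstraction style whose overhead is measured here can be illustrated with a small sketch: the comparison predicate and the result consumer are template parameters (functors or lambdas), so the compiler can inline them into the loop, in contrast to a virtual-call design. The function below is a simplified stand-in for the actual join, with hypothetical names:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Zero-overhead abstraction sketch: Compare and Consumer are template
// parameters, so calls to comp() and consume() can be fully inlined,
// unlike virtual method calls through a base-class pointer.
template <typename Compare, typename Consumer>
void merge_endpoints(const std::vector<int>& a, const std::vector<int>& b,
                     Compare comp, Consumer consume) {
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (comp(a[i], b[j])) consume(a[i++]);
        else                  consume(b[j++]);
    }
    while (i < a.size()) consume(a[i++]);
    while (j < b.size()) consume(b[j++]);
}
```

A caller passes, e.g., \texttt{std::less<int>()} as the comparator and a lambda as the consumer; with optimization enabled, the generated code is equivalent to a hand-inlined merge loop.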
\begin{figure}[t] \centering \begin{tikzpicture}[baseline] \ReadTable{normal-vs-inlined.txt} \begin{axis} [ global plot style, xlabel = {Iteration sequential number}, ylabel = {Execution time, s}, xmin = 1, xmax = 13, xtick = data, ymin = 0, ymax = 1, legend columns = 2, height = 2.8cm, width = 7cm, ] \AddPlot[]{x = run, y = java-n} \addlegendentry{Java normal} \AddPlot[]{x = run, y = java-i} \addlegendentry{Java inlined} \AddPlot[]{x = run, y = cpp-n} \addlegendentry{C++ normal} \AddPlot[]{x = run, y = cpp-i} \addlegendentry{C++ inlined} \end{axis} \end{tikzpicture} \caption{Overhead of the abstractions used in the algorithms} \label{fig:experiments-compilers} \end{figure} \paragraph*{C++:\,} This language was designed to support penalty-free abstractions. Not all abstractions in C++ are penalty-free, though. We first implemented the family of Endpoint Iterators as a hierarchy of virtual classes and found that the compilers we used (GCC and Clang) were not able to inline virtual method calls (even though they had all the required information to do so). We then rewrote the code using templates and functors, each iterator becoming a non-virtual class, passed into the join algorithm as a template argument. The comparator used by the core algorithm was a functor \texttt{std::less} or \texttt{std::less\_equal}. The consumers were defined as C++11 lambda-functions, also passed as a template parameter. This time both compilers were able to inline all method calls and generate very optimized code with all variables (including iterator fields) kept in CPU registers. \paragraph*{Java:\,} We face a different situation with Java, as the optimization is not performed by the compiler, but by the Java virtual machine (JVM) at run-time. The JVM (in particular, the standard Oracle HotSpot implementation) compiles, optimizes and recompiles the code while executing it.
It can potentially apply a wider range of optimizations (e.g., speculative optimization) than a C++ compiler can, as it actively learns about the actual workload, but in the case of Java we have limited control over this process. As we show in Figure~\ref{fig:experiments-compilers}, Java does in fact optimize the code with abstractions. Not as well as C++, but the performance difference is very small compared to a manually rewritten join. \REV{ \subsection{Parallel Execution} While parallelization is not a main focus of this paper, we know how to parallelize our scheme and have implemented a parallel version of our earlier EBI-Join operator~\cite{piatov_interval_2016}. We give a brief description here: the tuples in both input relations, \Rel r and \Rel s, are sorted by their starting time and then partitioned in a round-robin fashion, i.e., the $i$-th tuple of a relation is assigned to partition $(i \bmod k)$ of that relation, where $k$ is the number of partitions. By assigning close neighbors to different partitions, we lower the size of the active tuple sets, which is a crucial parameter for the performance of our algorithm. We then perform pairwise joins between all partitions of \Rel r and all partitions of \Rel s. As all partitions are disjoint, the joins can run in parallel independently of each other. A downside of this approach is that we need $k^2$ processes. Nevertheless, we achieved an average speed-up of 2.7, 4.3, and 5.3 for $k = $ 2, 3, and 4, respectively, on a machine with two CPUs (eight cores each). One major difference between JoinByS and EBI-Join is that JoinByS maintains only one active tuple set (for \Rel r), whereas EBI-Join maintains two (one for \Rel r and one for \Rel s). So, in order to keep the active tuple set small, for JoinByS we only need to partition \Rel r, resulting in one process for each of the $k$ partitions. The tuples in \Rel s are fed to each of these processes.
} \section{Theoretical Analysis} \label{sec:theoretical} \paragraph*{\REV{One-dimensional Overlap:\,}} Our approach is related to finding all the intersecting line segments, or intervals, given a set of $n$ segments in one-dimensional space. The optimal (plane-sweeping) algorithm for doing so has complexity $\ensuremath{O}(n \log n + k)$, where $k$ is the number of intersecting segments~\cite{Cormen09}. \REV{In the worst case, when we have a large number of intersecting segments, the complexity becomes $\ensuremath{O}(n \log n + n^2)$. In this case, the run time of the algorithm is dominated by the output cardinality.} Each segment $s_i$ is split up into a left starting event $\langle l_i, \mathrm{start} \rangle$ and a right ending event $\langle r_i, \mathrm{end} \rangle$. Afterwards the events of all segments are sorted, which takes $\ensuremath{O}(n \log n)$ time. We then traverse the sorted list of events. When encountering a left endpoint, we insert the corresponding segment into a data structure $D$, which keeps track of the currently active segments. When encountering a right endpoint, we remove the segment from $D$ and join it with all the segments currently stored in $D$. If we use a balanced search tree for $D$ (e.g., a red-black tree), then inserting and removing an endpoint will cost us $\ensuremath{O}(\log n)$. As we have $2n$ endpoints, we arrive at a total of $\ensuremath{O}(2n \log 2n) = \ensuremath{O}(n \log n)$. Generating all the output will take $\ensuremath{O}(k)$. If we use a hash table, insertion and removal of endpoints can be done in $\ensuremath{O}(1)$, for a total of $\ensuremath{O}(n)$. As long as we make sure that the entries in the hash table are linked or packed compactly (as in our gapless hash map), generating the output still takes $\ensuremath{O}(k)$ overall.
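The sweep described above can be sketched as follows. The tie-breaking at touching endpoints is an assumption of the sketch (touching intervals are not reported as intersecting), and a plain \texttt{std::unordered\_set} stands in for the compactly packed active-set structure:

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_set>
#include <utility>
#include <vector>

// Plane-sweep sketch for 1D interval intersection: split each interval into
// two events, sort them, maintain an active set, and report each
// intersecting pair exactly once when an interval's ending event is reached.
struct Interval { int id, lo, hi; };

std::vector<std::pair<int, int>> intersecting_pairs(const std::vector<Interval>& xs) {
    struct Ev { int pos; bool start; int id; };
    std::vector<Ev> events;
    for (const Interval& x : xs) {
        events.push_back({x.lo, true, x.id});
        events.push_back({x.hi, false, x.id});
    }
    // Sort by position; at ties, process ending events first so that
    // touching intervals [a,b] and [b,c] are not reported as intersecting.
    std::sort(events.begin(), events.end(), [](const Ev& a, const Ev& b) {
        return a.pos != b.pos ? a.pos < b.pos : (!a.start && b.start);
    });
    std::unordered_set<int> active;
    std::vector<std::pair<int, int>> out;
    for (const Ev& e : events) {
        if (e.start) {
            active.insert(e.id);
        } else {
            active.erase(e.id);
            for (int other : active)  // report at removal, as in the text
                out.emplace_back(e.id, other);
        }
    }
    return out;
}
```

Sorting costs O(n log n), insertion and removal O(1) each with a hash-based active set, and the reporting loop costs O(k) in total.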
Nevertheless, the joins can be mapped onto orthogonal line segment intersection, which is a special case of two-dimensional line segment intersection that can also be done in $\ensuremath{O}(n \log n + k)$, \REV{with $k = n^2$ in the worst case}, using a plane-sweeping algorithm that traverses the segments sorted by one dimension~\cite{Cormen09}. This also explains why there have been no further developments for interval joins recently, as the state-of-the-art algorithms already achieve this complexity. However, when generating the output, we cannot just join a segment with all active ones; we need to check additional constraints: two segments can overlap on the x-axis, but may or may not do so on the y-axis. As we will see shortly, this has implications for the data structure $D$. \paragraph*{\REV{Complexity of Different Join Predicates:\,}} Let us now have a closer look at the different join predicates. For all of them, we need the relations $\Rel r$ and $\Rel s$ to be sorted. Either we keep them in a Timeline Index or operate in a streaming environment, in which they are already sorted, or we need to sort them in $\ensuremath{O}(n \log n)$. The non-parameterized and parameterized versions of \JoinName{start preceding}, \JoinName{end following}, and \JoinName{before} (which includes \JoinName{meets} in its parameterized version) are not hard to analyze. They all have a complexity of $\ensuremath{O}(n \log n + k)$. For \JoinName{start preceding}, we maintain the active tuple set of $\Rel r$ in a gapless hash map, which means $\ensuremath{O}(1)$ for the insertion and removal of a single tuple, or $\ensuremath{O}(2n) = \ensuremath{O}(n)$ in total. Additionally, whenever we encounter a starting event of $\Rel s$, we generate result tuples, resulting in a total of $\ensuremath{O}(k)$ for generating all the output. For the parameterized version, we merely shift the endpoints of the tuples in $\Rel r$.
\JoinName{end following} is very similar; the only differences are that we generate output when encountering ending events of $\Rel s$ and that for the parameterized variant, we shift the starting points of $\Rel r$. \JoinName{before} is not much different: we shift both events of tuples in $\Rel r$ and whenever we encounter starting events of $\Rel s$, we generate the output. We now turn to \JoinName{overlap} and \JoinName{during} joins, which we implement using \JoinName{start preceding} (or \JoinName{end following}) joins; the same reasoning also holds for our implementation of the \JoinName{equals}, \JoinName{starts}, and \JoinName{finishes} joins. \REV{Processing a \JoinName{left overlap} or a \JoinName{reverse during} join}, we cannot just output the results in a straightforward way when encountering a starting event in $\Rel s$ as before, as \REV{at this point we cannot determine whether two intervals are in a left-overlap or reverse-during relationship: the starting events look the same in both cases; we need to see the ending events to make a final decision}. A similar argument holds for implementing \JoinName{overlap} and \JoinName{during} joins with \JoinName{end following} joins: the roles of the starting and ending events are switched in this case. The textbook solution is to keep the intervals sorted by ending events, e.g., in a tree. We can then search quickly for the qualifying tuples in this tree and generate the output, resulting in an overall complexity of $\ensuremath{O}(n \log n + k)$.\footnote{Assuming that insertion and removal costs us $\ensuremath{O}(\log n)$.} However, it is more difficult to do this in a cache-friendly manner, as a tree traversal entails more random I/O than a sequential scan. \REV{Using a gapless hash map instead, we go through {\em all} the tuples in the active tuple set.
Compared to the tree data structure, the processing of the join generates a larger intermediate result, as we join all intervals that satisfy an \JoinName{overlap} or \JoinName{during} join predicate. Afterwards, we filter out the tuples satisfying the predicate we are not interested in with a selection operator. Consequently, our approach has an overall complexity of $\ensuremath{O}(n \log n + k')$ for \JoinName{overlap} and \JoinName{during} joins, with $k' \geq k$. However, we utilize a sequential scan during the processing and as we will see in the experimental evaluation, introducing random I/O into the traversal of the active tuple set (like in a tree data structure) starves the CPU and slows down the whole process by two orders of magnitude.} On paper, our approach looks worse, but in practice it outperforms the allegedly better method. \section{Experimental Evaluation} \label{sec:experimental-evaluation} \subsection{Setup} \label{sec:setup} \paragraph*{Environment:\,} All algorithms were implemented in-memory in C++ by the same author and compiled with GCC~4.9.4 to 64-bit binaries using the \texttt{\rule{0pt}{1ex}-O3} optimization flag. We executed the code on a machine with two Intel Xeon E5-2667 v3 processors under Linux. All experiments used 12-byte tuples containing two 32-bit timestamp attributes ($T_s$ and $T_e$) and a 32-bit integer payload. All experiments were repeated (also with bigger tuple sizes) on a seven-year-old Intel Xeon X5550 processor and on a notebook processor i5-4258U, showing a similar behavior. \paragraph*{Algorithms:\,} We compare our approach with the Leung-Muntz family of sweeping algorithms \cite{leung_query_1989,leung_query_1990} and with an algorithm for generic inequality joins, IEJoin \cite{khayyat_fast_2017}. We implemented the Leung-Muntz algorithms in the most effective way, i.e., performing all stages of the algorithm simultaneously, as recommended by the authors.
For a fair comparison, we stored the set of started tuples in a Gapless List, adapting the Gapless Hash Map technique (Section~\ref{sec:map}) to the Leung-Muntz algorithms to boost their performance. We implemented IEJoin using all optimizations from the original paper. Our algorithms were implemented as described before, i.e., using abstractions and lambda-functions. The workload for all algorithms consisted of accumulating the sum of $T_s$ attributes of the joined tuples. For benchmarking, we implemented the tuples as structures and the relations as \texttt{std::vector} containers. The Endpoint Index was implemented analogously, using structures for the endpoints and a vector for the index. \paragraph*{Synthetic Datasets:\,} \label{sec:synthetic-datasets} To show particular performance aspects of the algorithms, we create synthetic datasets with uniformly distributed starting points of the intervals in the range of $[1,10^6]$. The duration of the intervals is distributed exponentially with rate parameter $\lambda$ (with an average duration $1/\lambda$). To perform a join, both relations in an individual workload follow the same distribution, but are generated independently with different seeds. In the experiments, for a specific value of $\lambda$, we varied the cardinality of the generated relations. \paragraph*{Real-World Datasets:\,} \label{sec:rw-datasets} We use five real-world datasets that differ in size and data distribution. Their main properties are summarized in Table~\ref{table:rw-datasets}. Here $n$ is the number of tuples, $|r.T|$ is the tuple interval length, ``$r.T_s$ and $r.T_e$ domain'' is the size of the time domain of the dataset and ``$r.T_s$ and $r.T_e$ \#distinct'' is the number of distinct time points in the dataset.
\begin{table} \caption{Real-world dataset statistics} \label{table:rw-datasets} \centering \vspace{-1.5mm} \begin{tabular}{@{}lrrrrrr@{}} \toprule & & \multicolumn{3}{c}{$|r.T|$} & \multicolumn{2}{c@{}}{$r.T_s$ and $r.T_e$} \\ \cmidrule(lr){3-5} \cmidrule(l){6-7} dataset\!\!\!\!\! & $n$ & min & avg & max & domain & \#distinct \\ \midrule flight & 58\,k & 61 & 8\,k & 86\,k & 812\,k & 10\,k \\ inc & 84\,k & 2 & 184 & 574 & 9\,k & 2.7\,k \\ web & 1.2\,M & 1 & 60\,M & 352\,M & 352\,M & 110\,k \\ feed & 3.7\,M & 1 & 432 & 8.5\,k & 8.6\,k & 5.6\,k \\ basf & 5.3\,M & 1 & 127\,k & 16\,M & 16\,M & 760\,k \\ \bottomrule \end{tabular} \end{table} The \emph{flight} dataset~\cite{behrend_flight_data_2014} is a collection of international flights for November 2014; start and end of the intervals represent plane departure and arrival times with minute precision. The Incumbent (\emph{inc}) dataset~\cite{GendranoSSY98} records the history of employees assigned to projects over a sixteen year period at a granularity of days. The \emph{web} dataset~\cite{webkit} records the history of files in the SVN repository of the Webkit project over an eleven year period at a granularity of seconds. The valid times indicate the periods in which a file did not change. The \emph{feed} dataset records the history of measured nutritive values of animal feeds over a 24 year period at a granularity of days; a measurement remains valid until a new measurement for the same nutritive value and feed becomes available \cite{dignos_overlap_2014}. Finally, rather than using time as a domain, the dataset \emph{basf} contains NMR spectroscopy data describing the resonating frequencies of different atomic nuclei \cite{Hel07}. As these frequencies can shift, depending on the bonds an atom forms, they are defined as intervals. For the experiments we used self-joins of these datasets; the only exceptions are the ``wi'' and ``fi'' workloads, where we joined the ``web'' and ``feed'' datasets with ``incumbent''.
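The synthetic workload described above can be sketched as a small generator. The tuple layout mirrors the 12-byte tuples used in the experiments, but the field names and the generator interface are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Generator sketch for the synthetic datasets: starting points uniform in
// [1, 10^6], durations exponentially distributed with rate lambda
// (average duration 1/lambda), rounded up to at least one time unit.
struct Tuple { std::uint32_t ts, te; std::int32_t payload; };  // 12 bytes

std::vector<Tuple> make_relation(std::size_t n, double lambda, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<std::uint32_t> start(1, 1000000);
    std::exponential_distribution<double> duration(lambda);
    std::vector<Tuple> rel(n);
    for (std::size_t i = 0; i < n; ++i) {
        std::uint32_t ts = start(gen);
        std::uint32_t d = 1 + static_cast<std::uint32_t>(duration(gen));
        rel[i] = {ts, ts + d, static_cast<std::int32_t>(i)};
    }
    return rel;
}
```

The two relations of a workload would be produced by calling \texttt{make\_relation} twice with the same $n$ and $\lambda$ but different seeds.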
\subsection{Experiments and Results} \subsubsection{Cache Efficiency} First, we look at the impact of improving the cache efficiency of the data structure used for maintaining the active tuple set. We investigate the average latency of a \texttt{getnext} operation, which is crucial for generating the result tuples. We compare a linked hash map (Section~\ref{sec:active-tuple-maps}), a gapless hash map (Section~\ref{sec:map}), and a tree structure (mentioned in Section~\ref{sec:theoretical}). The tree was implemented using a red-black tree (\texttt{std::map}) from the C++ Standard Library. We filled the data structures with various numbers of 32-byte tuples, then randomly added and removed tuples to simulate the management of an active tuple set. Afterwards, we performed several scans of the data structures. Figure~\ref{fig:linked-vs-gapless-getnext} shows the average latency of a \texttt{getnext} operation depending on the number of tuples (note the double-logarithmic scale). \begin{figure}[htb] \centering \begin{tikzpicture} \begin{loglogaxis} [ list performance plot, ylabel = {\texttt{getnext} latency, ns}, ] \addplot table[x=size, y=t]\dataPLists; \addlegendentry{Tree} \addplot table[x=size, y=l]\dataPLists; \addlegendentry{Linked hash map} \addplot table[x=size, y=u]\dataPLists; \addlegendentry{Gapless hash map} \end{loglogaxis} \end{tikzpicture} \vspace*{-.15cm} \caption{Latency of \texttt{getnext} operation} \label{fig:linked-vs-gapless-getnext} \end{figure} We see that the latency of a \texttt{getnext} operation is not constant but grows depending on the memory footprint of the tuples. In order to find the cause of this, we used the Performance Application Programming Interface (PAPI) library to read out the CPU performance counters \cite{MoRa11}. When looking at the average number of stalled CPU cycles (PAPI-RES-STL) per {\tt getnext} operation, we get a very similar picture (see Figure~\ref{fig:linked-vs-gapless-getnext-res-stl}).
Therefore, the latency is clearly caused by the CPU memory subsystem. \begin{figure}[htb] \centering \begin{tikzpicture} \begin{loglogaxis} [ list performance plot, ylabel = {Stalled cycles}, ymin = 0.3, ymax = 1000, ] \addplot table[x=size, y=t-PAPI-RES-STL]\dataPLists; \addlegendentry{Tree} \addplot table[x=size, y=l-PAPI-RES-STL] \dataPLists; \addlegendentry{Linked hash map} \addplot table[x=size, y=u-PAPI-RES-STL] \dataPLists; \addlegendentry{Gapless hash map} \end{loglogaxis} \end{tikzpicture} \vspace*{-.15cm} \caption{Stalled CPU cycles per \texttt{getnext} operation} \label{fig:linked-vs-gapless-getnext-res-stl} \end{figure} \begin{figure*}[htb] \begin{tabular}{ccc} \begin{tikzpicture} \begin{loglogaxis} [ list performance plot small, ymode=normal, yticklabel=, ylabel = {L1d cache misses}, ymax = 4, ] \addplot table[x=size, y=t-PAPI-L1-DCM] \dataPLists; \addplot table[x=size, y=l-PAPI-L1-DCM] \dataPLists; \addplot table[x=size, y=u-PAPI-L1-DCM] \dataPLists; \end{loglogaxis} \end{tikzpicture} & \hspace*{-2em}\begin{tikzpicture} \begin{loglogaxis} [ list performance plot small, ymode=normal, yticklabel=, ylabel = {L2 cache misses}, ymax = 4, ] \addplot table[x=size, y=t-PAPI-L2-TCM] \dataPLists; \addlegendentry{Tree} \addplot table[x=size, y=l-PAPI-L2-TCM] \dataPLists; \addlegendentry{Linked hash map} \addplot table[x=size, y=u-PAPI-L2-TCM] \dataPLists; \addlegendentry{Gapless hash map} \end{loglogaxis} \end{tikzpicture}\hspace*{-3em} & \begin{tikzpicture} \begin{loglogaxis} [ list performance plot small, ymode=normal, yticklabel=, ylabel = {L3 cache misses}, ymax = 4, ] \addplot table[x=size, y=t-PAPI-L3-TCM]\dataPLists; \addplot table[x=size, y=l-PAPI-L3-TCM]\dataPLists; \addplot table[x=size, y=u-PAPI-L3-TCM]\dataPLists; \end{loglogaxis} \end{tikzpicture} \\ \end{tabular} \vspace*{-.15cm} \caption{Cache misses per \texttt{getnext} operation} \label{fig:cache-miss} \end{figure*} In Figure~\ref{fig:linked-vs-gapless-getnext} we can easily identify three 
distinct transitions. For a small number of tuples, all of them fit into the L1d CPU cache (32~KB \REV{per core}) and we have a low latency. For the tree and linked hash map, as the tuple count grows towards \REV{500 tuples}, we start using the L2 cache (256~KB \REV{per core}) with a greater latency. When we increase the number of tuples further and start reaching \REV{4000 tuples}, the data is mostly held in the L3 cache (20~MB \REV{in total, shared by all cores}) and, finally, after arriving at a tuple count of around \REV{300\,000}, the tuples are mostly located in RAM.\footnote{\REV{All CPUs have 32~KB and 256~KB per core for the L1d and L2 cache, respectively. The L3 cache for the Xeon X5550 is 8~MB and for the i5-4258u 3~MB, which means that they reach the last phase earlier.}} We make a few important observations. First, due to the more compact storage scheme of the gapless hash map, the transitions set in later \REV{(at 5000, 10\,000, and 600\,000 tuples, respectively)}. Second, the gains of the gapless hash map are considerable and can be measured in orders of magnitude (note the logarithmic scale). Third, the latency of a \texttt{getnext} operation for the gapless hash map plateaus at around 2.7~ns, while the latency for the linked hash map and the tree reaches 100~ns. Cache misses alone do not explain all of the latency. Figure~\ref{fig:cache-miss} shows the average number of cache misses for the L1d (PAPI-L1-DCM), the L2 (PAPI-L2-TCM), and the L3 cache (PAPI-L3-TCM). While in general the average number of cache misses per {\tt getnext} operation is lower for the gapless hash map, the factor between the data structures in terms of stalled CPU cycles is disproportionately higher (please note the double-logarithmic scale in Figure~\ref{fig:linked-vs-gapless-getnext-res-stl}). Also, the cache misses do not explain the left-most part of Figure~\ref{fig:linked-vs-gapless-getnext-res-stl}, in which there are no cache misses at all.
The additional performance boost stems from out-of-order execution. Examining the different (slightly simplified) versions of the machine code generated for {\tt getnext} makes this clear. For the gapless hash map, the code looks like this: \begin{scriptsize} \begin{verbatim} loop: add rax, [rdx] add rdx, 32 ; pointer += 32 (increment) cmp rcx, rdx jne loop \end{verbatim} \end{scriptsize} \noindent while for the linked hash map we have the following picture (we omit the code for the tree, as it is much more complex): \begin{scriptsize} \begin{verbatim} loop: add rax, [rdx] mov rdx, [rdx + 32] ; pointer = pointer->field (dereference) cmp rcx, rdx jne loop \end{verbatim} \end{scriptsize} \noindent When scanning through a gapless hash map, we add a constant to the pointer, which means that there is no data dependency between loop iterations. Consequently, the CPU is able to predict the instructions that will be executed in the future and can already start preparing them out-of-order (i.e., issue cache misses up front for the referenced data) while some of the instructions are still waiting for data from the L1 cache. For the linked hash map and the tree the CPU has to wait until a pointer to the next item has been dereferenced. In summary, multiple parallel cache misses in a sequential access pattern are processed much faster than isolated requests to random memory locations. We made another observation: there were no L1 instruction (L1i) cache misses. The increase of L1d cache misses for the linked hash map and the tree for large numbers of tuples is caused by TLB cache misses. We obtained very similar results for different CPUs on different machines (the diagrams shown here are for an Intel Xeon E5-2667 v3 processor), which led us to the conclusion that the techniques we employ will generally improve the performance on CPU architectures with a cache hierarchy, prefetching, and out-of-order execution. 
For the remainder of the experiments we only consider the gapless hash map, as it clearly outperforms the linked hash map. \subsubsection{Lazy Joining} For every tuple in $\mathbf s$, the basic JoinByS algorithm (Section~\ref{sec:corealg}) scans the current set of active tuples in $\mathbf r$. Using the improved LazyJoinByS algorithm from Section~\ref{sec:lazyjoining}, we can reduce the number of scans considerably. As long as we only encounter starting events of tuples in $\mathbf s$ and no events caused by tuples in $\mathbf r$, we can delay the scanning of the active tuple set of $\mathbf r$. \paragraph*{Analyzing the Data:$\;$} We now take a closer look at how frequently such uninterrupted sequences of events of one relation appear. Figure~\ref{fig:srf} shows this data for the table ``Incumbent'' from the real-world datasets when joining it with itself. On the x-axis we have the length of uninterrupted sequences of starting events and on the y-axis their relative frequency of appearance. In 60\% of the cases we have sequences of length ten or more, meaning that our lazy joining technique can avoid a considerable number of scans on active tuple sets. We found that starting events of intervals are generally not uniformly distributed in real-world datasets, but tend to cluster around certain time points. This can be recognized by looking at the number of distinct points in Table~\ref{table:rw-datasets}. For example, for the ``Incumbent'' dataset, employees are usually not assigned to new projects on random days; the assignments tend to happen at the beginning of a week or month. For the ``Feed'' dataset, multiple measurements (which are valid until the next one is made) are taken in the course of a day, resulting in a whole batch of intervals starting at the same time. The clustering is not just due to the relatively coarse granularity (one day) of these two datasets.
The ``Webkit'' repository dataset, which looks at intervals in which files are not modified, has a granularity measured in milliseconds. Still we observe a clustering of starting events: a commit usually affects and modifies several files. The ``Flight'' dataset, which has a granularity of minutes, also exhibits a similar pattern in the form of batched departure times. Even for the frequency data of the ``BASF'' dataset, the values for the start and end points of the intervals seem to be clustered around multiples of one hundred. \begin{figure}[htb] \centering \begin{tikzpicture} \begin{axis} [ height=4cm, width=8cm, ymin=0, ymax=100, symbolic x coords={1,2,3,4,5,6,7,8,9,10+}, xtick={1,2,3,4,5,6,7,8,9,10+}, ybar, xlabel={Uninterrupted endpoint sequence length}, ylabel={Relative frequency}, yticklabel=\pgfmathprintnumber{\tick}\,\%, nodes near coords={\pgfmathprintnumber{\pgfplotspointmeta}\,\%}, nodes near coords align={vertical}, every node near coord/.append style={rotate=90, anchor=west, font=\scriptsize} ] \addplot[fill=blue,draw=black] coordinates {(1,11) (2,6.4) (3,5.3) (4,4.4) (5,3.8) (6,2.6) (7,2.4) (8,2.3) (9,1.6) (10+,60)}; \end{axis} \end{tikzpicture} \vspace*{-.15cm} \caption{Distribution of uninterrupted sequence lengths for self-join of the ``inc'' dataset} \label{fig:srf} \end{figure} \paragraph*{Reduction Factor:$\;$} The real performance implication is that LazyJoinByS executes fewer {\tt getnext} operations than JoinByS in such a scenario. The actual reduction depends not only on the clusteredness of the events, but also on the size of the corresponding active tuple set and the buffer capacity reserved in LazyJoinByS. We define a \emph{{\tt getnext} operation reduction factor} ($\mathit{GNORF}$), changing the cost for scanning through active tuple sets for the LazyJoinByS to $\sfrac{k \cdot c_{getnext}}{\mathit{GNORF}}$, where $k$ is the cardinality of the result set. 
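Both the sequence-length statistic behind Figure~\ref{fig:srf} and this cost model can be made concrete with a short, hedged sketch (our own illustration; names and numbers are hypothetical, not the paper's implementation). The first function counts maximal runs of events coming from the same relation when two sorted streams of starting timestamps are merged; the second applies the $\mathit{GNORF}$ reduction to the scan cost:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count the lengths of maximal runs of events stemming from the same
// relation when two sorted streams of starting timestamps are merged.
// Ties are resolved in favor of the first stream, mirroring the idea of
// handling all equal-timestamp events of one relation together.
std::vector<int> run_lengths(const std::vector<int>& a,
                             const std::vector<int>& b) {
    std::vector<int> runs;
    std::size_t i = 0, j = 0;
    int current = -1, len = 0;  // current source: 0 = stream a, 1 = stream b
    while (i < a.size() || j < b.size()) {
        int src = (j >= b.size() || (i < a.size() && a[i] <= b[j])) ? 0 : 1;
        if (src == 0) ++i; else ++j;
        if (src == current) {
            ++len;
        } else {
            if (len > 0) runs.push_back(len);
            current = src;
            len = 1;
        }
    }
    if (len > 0) runs.push_back(len);
    return runs;
}

// Cost model: scanning the active tuple sets costs k * c_getnext for
// JoinByS; LazyJoinByS divides this by the reduction factor GNORF.
double lazy_scan_cost(double k, double c_getnext, double gnorf) {
    return k * c_getnext / gnorf;
}
```

The longer the runs returned by \texttt{run\_lengths}, the fewer active-tuple-set scans LazyJoinByS has to perform, and the larger the resulting $\mathit{GNORF}$.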
For the self-join of the ``Incumbent'' dataset and a buffer capacity of 32, the $\mathit{GNORF}$ is equal to 23.6, which corresponds to huge savings in terms of run time. We also calculated this statistic for self-joins of other real-world datasets (``feed'': 31.2, ``web'': 9.73, ``flight'': 7.14, ``basf'': 11.2). Even for self-joins we get a considerable reduction factor: when encountering multiple starting events with the same timestamp, we first deal with all those of one relation before those of the other. \paragraph*{Join Performance:$\;$} Next we investigate the relative performance of an actual join operation, employing JoinByS and LazyJoinByS for an overlap join. Figure~\ref{fig:basic-vs-cache-optimized-rw} depicts the results for the ``Incumbent'' (inc), ``Webkit'' (web), ``Feed'' (feed), ``flight'', and ``basf'' datasets, showing that LazyJoinByS outperforms JoinByS by up to a factor of eight. Therefore, we only consider LazyJoinByS from here on. \begin{figure}[htb] \centering \begin{tikzpicture} \begin{axis} [ real-world join bar plot small = {inc}, xlabel = {}, xticklabel=\empty, name=first, legend columns=-1 ] \pgfplotstablegetelem{0}{2u}\of\dataRW \AddPlotCoord{inc}; \addlegendentry{JoinByS, gapless} \pgfplotstablegetelem{0}{4u}\of\dataRW \AddPlotCoord{inc}; \addlegendentry{LazyJoinByS, gapless} \end{axis} \begin{axis} [ real-world join bar plot small = {web}, xlabel = {}, xticklabel=\empty, /pgf/number format/precision=0, anchor=north west, at=(first.below south west), name=second, ] \pgfplotstablegetelem{2}{2u}\of\dataRW \AddPlotCoord{web}; \pgfplotstablegetelem{2}{4u}\of\dataRW \AddPlotCoord{web}; \end{axis} \begin{axis} [ real-world join bar plot small = {feed}, xlabel = {}, xticklabel=\empty, /pgf/number format/precision=0, anchor=north west, at=(second.below south west), name=third, ] \pgfplotstablegetelem{3}{2u}\of\dataRW \AddPlotCoord{feed}; \pgfplotstablegetelem{3}{4u}\of\dataRW \AddPlotCoord{feed}; \end{axis} \begin{axis} [ real-world join
bar plot small = {flight}, xlabel = {}, xticklabel=\empty, /pgf/number format/precision=3, anchor=north west, at=(third.below south west), name=fourth, ] \pgfplotstablegetelem{1}{2u}\of\dataRW \AddPlotCoord{flight}; \pgfplotstablegetelem{1}{4u}\of\dataRW \AddPlotCoord{flight}; \end{axis} \begin{axis} [ real-world join bar plot small = {basf}, /pgf/number format/precision=0, anchor=north west, at=(fourth.below south west), ] \pgfplotstablegetelem{4}{2u}\of\dataRW \AddPlotCoord{basf}; \pgfplotstablegetelem{4}{4u}\of\dataRW \AddPlotCoord{basf}; \end{axis} \end{tikzpicture} \vspace*{-.15cm} \caption{JoinByS vs LazyJoinByS, real-world data} \label{fig:basic-vs-cache-optimized-rw} \end{figure} \newcommand*{\AddSpeedupPlot}[1] { \ReadTable{#1} \AddPlot[]{x = dataset, y expr = (\thisrow{lm} / \thisrow{danila}) } } \newcommand*{\AddSpeedupPlotTreeRed}[1] { \ReadTable{#1} \AddPlot[dashed,red,mark=*,mark size = 1.92]{x = dataset, y expr = (\thisrow{tree} / \thisrow{danila}) } } \newcommand*{\AddSpeedupPlotTreeBlue}[1] { \ReadTable{#1} \AddPlot[dashed,blue,mark=square*,mark size = 1.66]{x = dataset, y expr = (\thisrow{tree} / \thisrow{danila}) } } \newcommand*{\AddSpeedupPlotTreeGreen}[1] { \ReadTable{#1} \AddPlot[dashed,MyGreen,mark=triangle*,mark size = 2.13]{x = dataset, y expr = (\thisrow{tree} / \thisrow{danila}) } } \newcommand*{\AddSpeedupPlotTree}[1] { \ReadTable{#1} \AddPlot[dashed]{x = dataset, y expr = (\thisrow{tree} / \thisrow{danila}) } } \begin{figure*} \centering \ref{legend-exp}\\ \begin{tikzpicture} \begin{groupplot} [ three plots, xmode = log, xlabel = {Relation cardinalities}, ylabel = {Rel. 
perf., times faster}, ymin = 0.5, ymax = 10, xmin = 1e3, xmax = 1e6, legend columns = 3, enlarge x limits = 0.1, enlarge y limits = 0.1, ymode = log, log basis y = 2, log y ticks with fixed point base 2, ] \nextgroupplot[ legend to name = legend-exp, title = {\JoinName{Inverse During Join}} ] \AddSpeedupPlot{exp-reverse-during-w1e6.txt} \addlegendentry{Long tuples (gapless)} \AddSpeedupPlot{exp-reverse-during-w1e4.txt} \addlegendentry{Medium tuples (gapless)} \AddSpeedupPlot{exp-reverse-during-w1e2.txt} \addlegendentry{Short tuples (gapless)} \AddSpeedupPlotTreeRed{exp-reverse-during-w1e6.txt} \addlegendentry{Long tuples (tree)} \AddSpeedupPlotTreeBlue{exp-reverse-during-w1e4.txt} \addlegendentry{Medium tuples (tree)} \AddSpeedupPlotTreeGreen{exp-reverse-during-w1e2.txt} \addlegendentry{Short tuples (tree)} \nextgroupplot[ multiple plots middle, title = {\JoinName{Start Preceding Join}} ] \AddSpeedupPlot{exp-start-preceding-w1e6.txt} \AddSpeedupPlot{exp-start-preceding-w1e4.txt} \AddSpeedupPlot{exp-start-preceding-w1e2.txt} \nextgroupplot[ multiple plots right, title = {\JoinName{Before Join}} ] \AddSpeedupPlot{exp-before-join-w1e6.txt} \AddSpeedupPlot{exp-before-join-w1e4.txt} \AddSpeedupPlot{exp-before-join-w1e2.txt} \end{groupplot} \end{tikzpicture} \caption{Performance of our solution w.r.t.\ Leung-Muntz algorithm, synthetic data} \label{fig:exp} \end{figure*} \renewcommand*{\AddSpeedupPlot}[1] { \ReadTable{#1} \AddPlot[]{x = dataset, y expr = (\thisrow{ie} / \thisrow{danila}) } } \begin{figure*}[htb] \centering \ref{legend-ie-exp} \begin{tikzpicture} \begin{groupplot} [ three plots, xmode = log, xlabel = {Relation cardinalities}, ylabel = {Rel. 
perf., times faster}, ymin = 1, ymax = 1e5, xmin = 1e3, xmax = 1e6, legend columns = 3, enlarge x limits = 0.1, ymode = log, ] \nextgroupplot[ legend to name = legend-ie-exp, title = {\JoinName{Inverse During Join}} ] \AddSpeedupPlot{ie-exp-reverse-during-w1e6.txt} \addlegendentry{Long tuples} \AddSpeedupPlot{ie-exp-reverse-during-w1e4.txt} \addlegendentry{Medium tuples} \AddSpeedupPlot{ie-exp-reverse-during-w1e2.txt} \addlegendentry{Short tuples} \nextgroupplot[ multiple plots middle, title = {\JoinName{Start Preceding Join}} ] \AddSpeedupPlot{ie-exp-start-preceding-w1e6.txt} \AddSpeedupPlot{ie-exp-start-preceding-w1e4.txt} \AddSpeedupPlot{ie-exp-start-preceding-w1e2.txt} \nextgroupplot[ multiple plots right, title = {\JoinName{Before Join}} ] \AddSpeedupPlot{ie-exp-before-join-w1e6.txt} \AddSpeedupPlot{ie-exp-before-join-w1e4.txt} \AddSpeedupPlot{ie-exp-before-join-w1e2.txt} \end{groupplot} \end{tikzpicture} \caption{Performance of our solution w.r.t.\ IEJoin, synthetic data} \label{fig:ie-exp} \end{figure*} \subsubsection{Scalability} To test the scalability of our algorithms we compared them to the Leung-Muntz and IEJoin algorithms while varying the cardinality of the synthetic datasets. For the IEJoin we used the algebraic expressions in Table~\ref{table:allen-relations}. Due to space constraints, we limit ourselves to three characteristic joins (join-only, join with selection, and parameterized join with map operators): \JoinName{start preceding} (\Doodle{\StartPreceding}), \JoinName{inverse during} (\Doodle{\Reverse{\During}}), and \JoinName{before} join (\Doodle{\Before}), where $\delta$ is set to the average tuple length in the outer relation. We tested the algorithms using short, medium-length and long tuples with average lengths of $0.5 \cdot 10^2$, $0.5 \cdot 10^4$, and $0.5 \cdot 10^6$ time points, respectively. The speedup of our approach compared to Leung-Muntz and IEJoin is shown in Figures~\ref{fig:exp} and~\ref{fig:ie-exp}, respectively. 
We see that our solution quickly becomes faster than the Leung-Muntz algorithms and that the difference grows with increasing numbers of tuples and their lengths. Only for small relations and short tuples is Leung-Muntz faster. \REV{Leung-Muntz is a simpler algorithm, so for light workloads, i.e., small relations and short intervals (resulting in smaller result sets), it shows good performance as it does not have an initialization overhead. However, it does not scale as well as our algorithm.} In the left-most diagram of Figure~\ref{fig:exp} (\JoinName{inverse during join}), we also show the difference between using gapless hash maps and trees for managing the active tuple set. We only do so for the \JoinName{inverse during join}, as for the other joins a tree would only add overhead without any benefits. For the \JoinName{inverse during join}, with a tree we generate only valid tuples, meaning that we do not need a selection operator as needed for the gapless hash map. While the tree-based active tuple set seems to pay off for long tuples (meaning larger active tuple sets), for shorter tuples the situation is not that clear. Consequently, we propose to use gapless hash maps, as this avoids the implementation and integration of an additional data structure that is only useful for some interval relations and even then does not always show superior performance. For the remainder of Section~\ref{sec:experimental-evaluation}, we restrict ourselves to gapless hash maps. For the IEJoin, the performance differs by one to several orders of magnitude depending on relation cardinalities and tuple lengths. Even though the IEJoin is highly optimized, it still has quadratic complexity and cannot compete with specialized algorithms. Therefore, we drop it from further investigation. Because the Leung-Muntz and the IEJoin algorithms do not scale well, we stopped running experiments for larger relation cardinalities when they took a few hours to conduct.
We also restrict ourselves to the \JoinName{inverse during} join from now on, since Leung-Muntz showed the best performance for it. \subsubsection{Real-World Workloads} \begin{figure}[t] \begin{tikzpicture} \begin{axis} [ global plot style, real world bar plot, ylabel = {Rel. perf., times faster}, ymin = 1, ymax = 128, ymode = log, log basis y = 2, log ticks with fixed point, ] \ReadTable{rw-reverse-during.txt} \AddPlot[]{x = dataset, y expr = (\thisrow{lm} / \thisrow{danila}) } \addlegendentry{\JoinName{during}} \ReadTable{rw-start-preceding.txt} \AddPlot[]{x = dataset, y expr = (\thisrow{lm} / \thisrow{danila}) } \addlegendentry{\JoinName{start preceding}} \ReadTable{rw-before-join.txt} \AddPlot[]{x = dataset, y expr = (\thisrow{lm} / \thisrow{danila}) } \addlegendentry{\JoinName{before}} \end{axis} \end{tikzpicture} \caption{Performance of our solution w.r.t.\ Leung-Muntz, real-world data} \label{fig:rw} \end{figure} We repeated the experiments on the real-world workloads. The speedup is shown in Figure~\ref{fig:rw}. Here the performance difference is two orders of magnitude in some cases. On the one hand, this is due to the lazy joining cache optimization, which is more effective for the real-world datasets (cf.~\cite{piatov_interval_2016}). On the other hand, the heuristics used in the Leung-Muntz algorithm work worse for real-world workloads, especially those where the relation cardinalities differ substantially. \REV{ \subsubsection{Comparison with bgFS} Our approach and bgFS~\cite{bouros_forward_2017} follow different paradigms for processing the data: backward scanning versus forward scanning. The iterator framework we utilize has been geared completely towards the backward scanning paradigm, allowing us to introduce changes to endpoint timestamps on the fly. This makes it challenging to integrate bgFS into our iterator framework effectively. Clearly, we can add shifted intervals to the tuples in the relations before executing a bgFS join.
However, this requires an additional sweep over the relations, eating up the efficiency gained by forward scanning the relations. On top of that, bgFS will start producing output tuples only at the very end of the processing time. Figure~\ref{fig:comparisonwithfs} shows the run time of processing a (general) \JoinName{before} join using bgFS and our approach (this was run on an i7-4870HQ CPU with four cores, 32~KB and 256~KB per core for the L1d and L2 cache, respectively, and 6~MB L3 cache). } \begin{figure}[htb] \centering \begin{tikzpicture} \begin{axis} [ real-world join bar plot small = {inc}, xlabel = {}, xticklabel=\empty, name=first, legend columns=-1 ] \pgfplotstablegetelem{0}{JoinByS}\of\dataFS \AddPlotCoord{inc}; \addlegendentry{JoinByS} \pgfplotstablegetelem{0}{bgFS}\of\dataFS \AddPlotCoord{inc}; \addlegendentry{bgFS} \end{axis} \begin{axis} [ real-world join bar plot small = {web}, xlabel = {}, xticklabel=\empty, /pgf/number format/precision=0, anchor=north west, at=(first.below south west), name=second, ] \pgfplotstablegetelem{3}{JoinByS}\of\dataFS \AddPlotCoord{web}; \pgfplotstablegetelem{3}{bgFS}\of\dataFS \AddPlotCoord{web}; \end{axis} \begin{axis} [ real-world join bar plot small = {inc-web}, xlabel = {}, xticklabel=\empty, anchor=north west, at=(second.below south west), name=third, ] \pgfplotstablegetelem{1}{JoinByS}\of\dataFS \AddPlotCoord{inc-web}; \pgfplotstablegetelem{1}{bgFS}\of\dataFS \AddPlotCoord{inc-web}; \end{axis} \begin{axis} [ real-world join bar plot small = {web-inc}, xticklabel=\empty, anchor=north west, at=(third.below south west), name=fourth, ] \pgfplotstablegetelem{2}{JoinByS}\of\dataFS \AddPlotCoord{web-inc}; \pgfplotstablegetelem{2}{bgFS}\of\dataFS \AddPlotCoord{web-inc}; \end{axis} \end{tikzpicture} \vspace*{-.15cm} \caption{Comparison of JoinByS with bgFS, real-world data} \label{fig:comparisonwithfs} \end{figure} \subsubsection{Selection Efficiency} To explore the source of the performance difference between the algorithms,
we tested the selectivity of the selection operation that is applied after the join in the Leung-Muntz algorithms and, in some cases, in ours. The results for the \JoinName{inverse during join} are shown in Figure~\ref{fig:selectivities}. The other joins exhibit a similar behavior. We see that both algorithms are keeping the sizes of the working sets similar. Our algorithm is slightly more effective, but insufficiently so to explain the performance difference. We look at the real cause in the next section. \begin{figure}[t] \begin{tikzpicture} \begin{axis} [ global plot style, real world bar plot, ylabel = {Selectivity}, ymin = 0, ymax = 1.1, ] \ReadTable{counters-rw-reverse-during.txt} \AddPlot[]{x = dataset, y = my-sel} \addlegendentry{Our solution} \AddPlot[]{x = dataset, y = lm-sel} \addlegendentry{Leung-Muntz join} \end{axis} \end{tikzpicture} \caption{Selectivity of the filtering selection operator after the main join, \JoinName{inverse during join}} \label{fig:selectivities} \end{figure} \subsubsection{Comparison Count} In this experiment we measured the number of tuple endpoint comparison operations (e.g., ``$r.T_s < s.T_s$''). The results for the \JoinName{inverse during join} are shown in Figure~\ref{fig:comparisons}. We see that the difference in the number of comparisons is huge. The Leung-Muntz algorithm performs many more comparisons because it has to heuristically estimate the next tuple to be read and to perform the garbage collection of the outdated tuples. The selection operation of the Leung-Muntz algorithm requires two comparisons. Our algorithm, on the other hand, requires a single comparison in the selection operation for the \JoinName{inverse during join}, and no tuple comparisons at all for the \JoinName{before} and \JoinName{start preceding join}. 
\begin{figure} \begin{tikzpicture} \begin{axis} [ global plot style, real world bar plot, ylabel = {Comparison count}, ymode = log, ] \ReadTable{counters-rw-reverse-during.txt} \AddPlot[]{x = dataset, y = my-comp} \addlegendentry{Our solution} \AddPlot[]{x = dataset, y = lm-comp} \addlegendentry{Leung-Muntz join} \end{axis} \end{tikzpicture} \caption{Join comparison counts, real-world data} \label{fig:comparisons} \end{figure} \subsubsection{Latency} In this experiment we measure the delay in producing output tuples of the Leung-Muntz algorithms. A low latency is an important feature for event detection systems. While our algorithm generates output as soon as possible, i.e., when all of the required endpoints have been spotted, the Leung-Muntz algorithm has a delay caused by the fact that it requires streams of complete and ordered tuples as input. The average latency (expressed in average tuple lengths, as the different data sets have vastly different granularities) is shown in Figure~\ref{fig:latency}. Depending on the workload, the differences in latency can in some cases reach ten or even a hundred times the average tuple length. \begin{figure} \begin{tikzpicture} \begin{axis} [ global plot style, real world bar plot, ylabel = {\scriptsize{Avg latency in avg tuple length}}, ymin = 0.01, ymax = 100, ymode = log, log origin = infty, log ticks with fixed point, ] \ReadTable{latency.txt} \AddPlot[]{x = dataset, y expr = \thisrow{latency-2-avg} / \thisrow{r-avg-len} } \addlegendentry{Leung-Muntz} \end{axis} \end{tikzpicture} \caption{Algorithm reporting latency, \RelationName{reverse during join}, real-world data} \label{fig:latency} \end{figure} \section{Conclusions and Future Work} \label{sec:conclusions} We developed a family of effective, robust and cache-opti\-mized plane-sweeping algorithms for interval joins on different interval relationship predicates such as Allen's or parameterized ISEQL relations.
The algorithms can be used in temporal databases, exploiting the Timeline Index, which made its way into a prototype of a commercial temporal RDBMS as the main universal index supporting temporal joins, temporal aggregation and time travel. We thus extend the set of operations supported by this index. Our solution is based on a flexible framework that allows combining its components in different ways to elegantly and efficiently express any interval join in terms of a single core function. Additionally, our approach makes good use of the features of contemporary hardware, utilizing the cache infrastructure well. We compared the performance of our solution with the state-of-the-art in interval joins on Allen's predicates---the Leung-Muntz and the IEJoin algorithms. The results show that our solution is several times faster, scales better and is more stable. Another major advantage of our approach is that it can be directly applied to real-time stream event processing, as it will report the results as soon as logically possible for the applied predicate, without necessarily waiting for intervals to finish. The Leung-Muntz algorithm has to wait for the tuples to finish before processing them. Moreover, the requirement for tuples to be processed chronologically allows any unfinished tuple to block the processing of all following tuples. The IEJoin is also not suitable for a streaming environment: it needs the complete relations to work. For future work, we want to explore the possibilities of embedding our solution into a real-time complex event processing framework. In particular, we want to combine the results of multiple joins to detect patterns within $n$ streams of events of different types.
\REV{Additional research directions are working out the details of parallelizing our approach and handling multiple predicates in a join operation.} \section{Correctness of Rewrites} \label{sec:rewrites} Joining the tuples in the relations \Rel r and \Rel s such that they satisfy the Allen and ISEQL relations shown in Table~\ref{table:allen-relations} results in a tuple set $ RS_{name} = \SetBuilder{ r \times s }{ r \in \Rel r, s \in \Rel s: P(r,s) } $ where $name$ is the name of the relation in the first column and $P(r,s)$ is the formal definition in the third column. We follow the same order as in Section~\ref{sec:formalization}. \subsection{Non-parameterized Joins} \paragraph*{ISEQL Start Preceding and End Following Joins:\,} These follow directly from the definition of our interval-timestamp joins $\JoinBySsm_{\mathrm{start}}^\theta$ and $\JoinBySsm_{\mathrm{end}}^\theta$. \paragraph*{Overlap and During Joins:\,} For the \JoinName{left overlap join}, the tuples have to satisfy $P(r,s) = r.T_s \le s.T_s < r.T_e \le s.T_e$. The inequalities $r.T_s \le s.T_s < r.T_e$ are enforced by the join $\JoinBySsm_{\mathrm{start}}^\le$; the inequality $r.T_e \le s.T_e$ by the selection predicate. For the \JoinName{right overlap join}, this works analogously using the join $\JoinBySsm_{\mathrm{end}}^\le$ and a selection. For the (reverse) \JoinName{during join}, only the inequality enforced by the selection operator changes. \paragraph*{Before Join:\,} By replacing $T_s$ with $T_e$ and $T_e$ with $\infty$ in every tuple in \Rel r and then running a $\JoinBySsm_{\mathrm{start}}^\le$ join, we get tuples that satisfy the predicate $r.T_e \le s.T_s < \infty$, which is equivalent to the predicate for the \JoinName{before} relation. For the Allen relation we use $\JoinBySsm_{\mathrm{start}}^<$ instead. \paragraph*{Meets Join:\,} Applying the map operators and running a $\JoinBySsm_{\mathrm{start}}^\le$ join, $P(r,s)$ becomes $r.T_e \le s.T_s < r.T_e + 1$.
Since we use integer timestamps, this means that $r.T_e = s.T_s$. \paragraph*{Equals and Starts Joins:\,} For the \JoinName{equals join} the $\JoinBySsm_{\mathrm{start}}^\le$ join enforces the predicate $r.T_s \le s.T_s < r.T_s + 1$, which becomes $r.T_s = s.T_s$ due to the integer timestamps, while the selection does so for $r.T_e = s.T_e$. For the \JoinName{starts join} the predicate enforced by the selection operator changes accordingly. \paragraph*{Finishes Join:\,} For the \JoinName{finishes join} we use the $\JoinBySsm_{\mathrm{end}}^\le$ operator arriving at $r.T_e - 1 < s.T_e \le r.T_e$, which means that $r.T_e = s.T_e$. Together with the selection predicate of $s.T_s < r.T_s$, we complete the predicate for the \JoinName{finishes join}. \subsection{Parameterized Joins} \paragraph*{ISEQL Start Preceding and End Following Joins:\,} The $\JoinBySsm_{\mathrm{start}}^\le$ join (together with the map operators) guarantees us that $r.T_s \le s.T_s < \min(r.T_e, r.T_s + \delta + 1)$. Assume that the minimum is $r.T_e$, which means that $r.T_e \le r.T_s + \delta + 1$. The first part of $P(r,s)$, $r.T_s \le s.T_s < r.T_e$ follows directly. We also know that $s.T_s < r.T_e \le r.T_s + \delta + 1$ and due to integer timestamps can conclude that $s.T_s - r.T_s \le \delta$. For the other case, the minimum is $r.T_s + \delta + 1$, which means $r.T_s + \delta + 1 \le r.T_e$. Due to integer timestamps, it immediately follows that $s.T_s \le r.T_s + \delta$. We also know that $r.T_s \le s.T_s < r.T_s + \delta + 1 \le r.T_e$ and therefore $r.T_s \le s.T_s < r.T_e$. The proof for the \JoinName{end following join} follows along similar lines. \paragraph*{Before Join:\,} The $\JoinBySsm_{\mathrm{start}}^\le$ join enforces $r.T_e \le s.T_s < r.T_e + \delta + 1$, which due to integer timestamps becomes $r.T_e \le s.T_s \le r.T_e + \delta$ (which is equivalent to $r.T_e \le s.T_s \wedge s.T_s - r.T_e \le \delta$). 
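The integer-timestamp equivalences used in these proofs (e.g., $r.T_e \le s.T_s < r.T_e + \delta + 1 \Leftrightarrow r.T_e \le s.T_s \wedge s.T_s - r.T_e \le \delta$) are easy to confirm by brute force. The following sketch (our own illustration, not part of the implementation) checks the rewritten \JoinName{meets} and parameterized \JoinName{before} predicates against their direct definitions over a small integer range:

```cpp
#include <cassert>

// Meets: the rewritten predicate r.Te <= s.Ts < r.Te + 1 must coincide
// with r.Te == s.Ts on integer timestamps.
bool meets_rewritten(int rTe, int sTs) { return rTe <= sTs && sTs < rTe + 1; }
bool meets_direct(int rTe, int sTs) { return rTe == sTs; }

// Parameterized before: r.Te <= s.Ts < r.Te + delta + 1 must coincide
// with r.Te <= s.Ts and s.Ts - r.Te <= delta.
bool before_rewritten(int rTe, int sTs, int delta) {
    return rTe <= sTs && sTs < rTe + delta + 1;
}
bool before_direct(int rTe, int sTs, int delta) {
    return rTe <= sTs && sTs - rTe <= delta;
}

// Exhaustively compare the rewritten and direct predicates over a small
// range of integer timestamps and delta values.
bool equivalences_hold(int lo, int hi, int max_delta) {
    for (int a = lo; a <= hi; ++a)
        for (int b = lo; b <= hi; ++b) {
            if (meets_rewritten(a, b) != meets_direct(a, b)) return false;
            for (int d = 0; d <= max_delta; ++d)
                if (before_rewritten(a, b, d) != before_direct(a, b, d))
                    return false;
        }
    return true;
}
```

The same half-open-to-closed conversion (`$< x + 1$` becomes `$\le x$` on integers) underlies all of the rewrites in this section.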
\paragraph*{Overlap and During Joins:\,} Here we only show the proof for the \JoinName{left overlap join}; the proofs for the remaining operators follow the same pattern. With the $\JoinBySsm_{\mathrm{start}}^\le$ join we already make sure that $r.T_s \le s.T_s$ and with the selection operator that $r.T_e \le s.T_e$ and $s.T_e \le r.T_e + \varepsilon \Leftrightarrow s.T_e - r.T_e \le \varepsilon$. If $r.T_e \le r.T_s + \delta + 1$, then $s.T_s < r.T_e$ follows from the $\JoinBySsm_{\mathrm{start}}^\le$ as well, and because $s.T_s < r.T_e \le r.T_s + \delta + 1$, we know that $s.T_s < r.T_s + \delta + 1$, which is equivalent to $s.T_s - r.T_s \le \delta$ for integer intervals. If $r.T_s + \delta + 1 \le r.T_e$, then $s.T_s < r.T_s + \delta + 1 \Leftrightarrow s.T_s - r.T_s \le \delta$ (for integer intervals) follows from the $\JoinBySsm_{\mathrm{start}}^\le$ and because $s.T_s < r.T_s + \delta + 1 \le r.T_e$, we also know that $s.T_s < r.T_e$. } \section{Implementation of Iterators} \label{appendix:iterators} In this section we illustrate the inner workings of the different iterators and show how they can be implemented.
If we have an instance of an Endpoint Iterator, \texttt{iterator}, then to traverse all endpoints it represents, we can use the following pseudocode:
\begin{footnotesize}
\begin{verbatim}
while not iterator.isFinished do
  output(iterator.getEndpoint)
  iterator.moveToNextEndpoint
end
\end{verbatim}
\end{footnotesize}
\vspace{-.5cm} \subsection{Index Iterator} We use the \texttt{std::vector} container of the C++ Standard Template Library (STL) as the implementation of the Endpoint Index, resulting in the following code: \begin{itemize} \footnotesize \item \texttt{IndexIterator(endpointIndex)}:\\ \hspace*{1em}\texttt{this.it = endpointIndex.begin();}\\ \hspace*{1em}\texttt{this.end = endpointIndex.end();} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{return *it;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{++it;} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return it == end;} \end{itemize} \vspace{-.5cm} \subsection{Filtering Iterator} \begin{itemize} \footnotesize \item \texttt{FilteringIterator(iterator, type)}:\\ \hspace*{1em}\texttt{this.iterator = iterator;}\\ \hspace*{1em}\texttt{this.type = type;}\\ \hspace*{1em}\texttt{while not isFinished and}\\ \hspace*{1em}\texttt{\phantom{while }getEndpoint.type $\ne$ type do}\\ \hspace*{1em}\texttt{\hspace{2em}moveToNextEndpoint;} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{return iterator.getEndpoint;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{do}\\ \hspace*{1em}\texttt{\hspace{2em}iterator.moveToNextEndpoint;}\\ \hspace*{1em}\texttt{while not isFinished and}\\ \hspace*{1em}\texttt{\phantom{while }getEndpoint.type $\ne$ type;} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return iterator.isFinished;} \end{itemize} \vspace{-.5cm} \subsection{Shifting Iterator} \begin{itemize} \footnotesize \item\texttt{ShiftingIterator(iterator, delta, type)}:\\ \hspace*{1em}\texttt{this.iterator = iterator;}\\ \hspace*{1em}\texttt{this.delta = delta;}\\ \hspace*{1em}\texttt{this.type = type;} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{var
endpoint = iterator.getEndpoint;}\\ \hspace*{1em}\texttt{endpoint.timestamp += delta;}\\ \hspace*{1em}\texttt{endpoint.type = type;}\\ \hspace*{1em}\texttt{return endpoint;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{iterator.moveToNextEndpoint;} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return iterator.isFinished;} \end{itemize} \vspace{-.5cm} \subsection{Merging Iterator} \begin{itemize} \footnotesize \item \texttt{MergingIterator(iterator1, iterator2)}:\\ \hspace*{1em}\texttt{this.it1 = iterator1;}\\ \hspace*{1em}\texttt{this.it2 = iterator2;}\\ \hspace*{1em}\texttt{moveToNextEndpoint;} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{return this.endpoint;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{if it2.isFinished or not it1.isFinished}\\ \hspace*{1em}\texttt{\phantom{if }and it1.getEndpoint < it2.getEndpoint}\\ \hspace*{1em}\texttt{then}\\ \hspace*{1em}\texttt{\hspace{2em}this.endpoint = it1.getEndpoint;}\\ \hspace*{1em}\texttt{\hspace{2em}it1.moveToNextEndpoint;}\\ \hspace*{1em}\texttt{else}\\ \hspace*{1em}\texttt{\hspace{2em}this.endpoint = it2.getEndpoint;}\\ \hspace*{1em}\texttt{\hspace{2em}it2.moveToNextEndpoint;}\\ \hspace*{1em}\texttt{end} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return it1.isFinished and it2.isFinished;} \end{itemize} \vspace{-.5cm} \REV{ \subsection{First End Iterator} \label{sec:firstend} \begin{itemize} \footnotesize \item \texttt{FirstEndIterator(iterator)}:\\ \hspace*{1em}\texttt{this.iterator = iterator;}\\ \hspace*{1em}\texttt{this.hs = new HashSet;} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{return iterator.getEndpoint;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{do}\\ \hspace*{1em}\texttt{\hspace{2em}iterator.moveToNextEndpoint;}\\ \hspace*{1em}\texttt{\hspace{2em}if getEndpoint.type = end then}\\ \hspace*{1em}\texttt{\hspace{4em}if getEndpoint.tuple\_id $\not\in$ hs then}\\ \hspace*{1em}\texttt{\hspace{6em}insert getEndpoint.tuple\_id into hs;}\\
\hspace*{1em}\texttt{\hspace{6em}break;}\\ \hspace*{1em}\texttt{\hspace{4em}else}\\ \hspace*{1em}\texttt{\hspace{6em}remove getEndpoint.tuple\_id from hs;}\\ \hspace*{1em}\texttt{while not isFinished;} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return iterator.isFinished;} \end{itemize} } \vspace{-.5cm} \subsection{Second Start Iterator} \label{sec:secondstart} \begin{itemize} \footnotesize \item \texttt{SecondStartIterator(iterator)}:\\ \hspace*{1em}\texttt{this.iterator = iterator;}\\ \hspace*{1em}\texttt{this.hs = new HashSet;}\\ \hspace*{1em}\texttt{while getEndpoint.type = start and}\\ \hspace*{1em}\texttt{\phantom{while }getEndpoint.tuple\_id $\not\in$ hs}\\ \hspace*{1em}\texttt{do}\\ \hspace*{1em}\texttt{\hspace{2em}moveToNextEndpoint;} \item \texttt{getEndpoint}:\\ \hspace*{1em}\texttt{return iterator.getEndpoint;} \item \texttt{moveToNextEndpoint}:\\ \hspace*{1em}\texttt{do}\\ \hspace*{1em}\texttt{\hspace{2em}iterator.moveToNextEndpoint;}\\ \hspace*{1em}\texttt{\hspace{2em}if getEndpoint.type = start then}\\ \hspace*{1em}\texttt{\hspace{4em}if getEndpoint.tuple\_id $\in$ hs then}\\ \hspace*{1em}\texttt{\hspace{6em}remove getEndpoint.tuple\_id from hs;}\\ \hspace*{1em}\texttt{\hspace{6em}break;}\\ \hspace*{1em}\texttt{\hspace{4em}else}\\ \hspace*{1em}\texttt{\hspace{6em}insert getEndpoint.tuple\_id into hs;}\\ \hspace*{1em}\texttt{while not isFinished;} \item \texttt{isFinished}:\\ \hspace*{1em}\texttt{return iterator.isFinished;} \end{itemize}
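To illustrate how the iterators compose, the following sketch (with hypothetical endpoint indexes \texttt{idxR} and \texttt{idxS}; it is not part of the join algorithms above) merges the start endpoints of \Rel r with the start endpoints of \Rel s shifted by \texttt{delta}, and traverses the result with the generic loop from the beginning of this section:

```latex
\begin{footnotesize}
\begin{verbatim}
itR    = FilteringIterator(IndexIterator(idxR), start)
itS    = ShiftingIterator(
           FilteringIterator(IndexIterator(idxS), start),
           delta, start)
merged = MergingIterator(itR, itS)
while not merged.isFinished do
  output(merged.getEndpoint)
  merged.moveToNextEndpoint
end
\end{verbatim}
\end{footnotesize}
```

Assuming each endpoint index is sorted by timestamp, the merged stream is again produced in timestamp order, which is the property that sweep-based evaluation over endpoints relies on.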
\section{Introduction and Main Result} \bigskip Let $G$ be an additive finite abelian group. For a (multiplicatively written) sequence $S = g_1 \cdot \ldots \cdot g_l$ over $G$, $|S| = l$ is called the length of $S$, and $S$ is said to be zero-sum free if $\sum_{i \in I}g_i \ne 0$ for every nonempty subset $I \subset [1, l]$. Let $\mathsf d (G)$ denote the maximal length of a zero-sum free sequence over $G$. Then $\mathsf d (G) + 1$ is the Davenport constant of $G$, a classical constant from Combinatorial Number Theory (for surveys and historical comments, the reader is referred to \cite{Ga-Ge06b}, \cite[Chapter 5]{Ge-HK06a}, \cite{Ge09a}). In general, the precise value of $\mathsf d (G)$ (in terms of the group invariants of $G$) and the structure of the extremal sequences are unknown; see \cite{Le-Sc07a, Bh-SP07a, Ra-Sr-Th08, Gi09b, Ga-Ge-Gr09a, Sc09e, Su09a} for recent progress. \smallskip Group algebras $R[G]$, over suitable commutative rings $R$, have turned out to be powerful tools for a great variety of questions from combinatorics and number theory, among them the Davenport constant. We recall the definition of an invariant (involving group algebras) which has been used in the investigation of the Davenport constant since the 1960s. \smallskip For a commutative ring $R$, let \ $\mathsf d (G, R) \in \mathbb N \cup \{\infty\}$ \ denote the supremum of all $l \in \mathbb N$ having the following property: \begin{enumerate} \item[] There is some sequence \ $S = g_1 \cdot \ldots \cdot g_l$ \ of length $l$ over $G$ such that \[ (X^{g_1} - a_1) \cdot \ldots \cdot (X^{g_l} - a_l) \ne 0 \in R[G] \quad \text{for all} \quad a_1, \ldots , a_l \in R \setminus \{0\} \,. \] \end{enumerate} \smallskip \noindent If $S$ is zero-sum free, $R$ is an integral domain, $a_1, \ldots, a_l \in R \setminus \{0\}$ and \[ f \ = \ (X^{g_1} - a_1) \cdot \ldots \cdot (X^{g_l} - a_l) \ = \ \sum_{g \in G} c_g X^g \ \,, \] then $c_0 \ne 0$; indeed, since $S$ is zero-sum free, the only contribution to the coefficient of $X^0$ comes from choosing the constant term $-a_i$ in every factor, whence $c_0 = (-1)^l a_1 \cdot \ldots \cdot a_l \ne 0$ in the integral domain $R$.
Hence $f \ne 0$, and it follows that \[ \mathsf d (G) \le \mathsf d (G, R)\,. \] The following \,Theorem {\bf A}\, was proved by P. van Emde Boas, D. Kruyswijk and J.E. Olson in the 1960s (they did not explicitly define the invariants $\mathsf d (G, K)$ but obtained these results implicitly; historical remarks and proofs in the present terminology may be found in \cite[Section 2.2]{Ge09a} and \cite[Theorem 5.5.9]{Ge-HK06a}, see also \cite{Ga-Ge-HK09}). \medskip \noindent {\bf Theorem A.} {\it Let $G$ be a finite abelian group with $\exp (G) = n \ge 2$. \begin{enumerate} \item Let $K$ be a splitting field of $G$ with $\text{\rm char} (K) \nmid \exp (G)$. Then \[ \mathsf d (G, K) \le (n-1) + n \log \frac{|G|}{n} \,. \] \smallskip \item If $G$ is a $p$-group, then $\mathsf d (G) = \mathsf d (G, \mathbb Z/ p \mathbb Z)$. \end{enumerate} } \smallskip Note that for a cyclic group $G$ of order $n$, the above upper bound implies that $\mathsf d (G) = \mathsf d (G, K) = n-1$, since $\mathsf d( \mathsf{C} _n) \ge n-1$ is easily seen. Only recently, W. Gao and Y. Li showed that $\mathsf d ( \mathsf{C} _2 \oplus \mathsf{C} _{2n}) = \mathsf d ( \mathsf{C} _2 \oplus \mathsf{C} _{2n}, K)$ (\cite[Theorem 3.3]{Ga-Li09a}). We extend their result, but we also show that Conjecture 3.4 in \cite{Ga-Li09a}, stating that $\mathsf d (G) = \mathsf d (G, K)$ for all groups $G$, does not hold. Here is the main result of the present paper. \medskip \begin{thm} \label{thm:main} Let $G = \mathsf{C} _p \oplus \mathsf{C} _{pn}$ with $p \in \mathbb{P} $, $n \in \mathbb{N} $ and let $K$ be a splitting field of $G$. \begin{enumerate} \smallskip \item If $p \le 3$, then $\mathsf d (G) = \mathsf{d} (G,K)$. \smallskip \item If $p \ge 5$ and $n \ge 2$, then $\mathsf d (G) < \mathsf{d} (G,K)$.
\end{enumerate} \end{thm} \bigskip \section{Preliminaries} \bigskip Let $\mathbb N$ denote the set of positive integers, $ \mathbb{P} \subset \mathbb{N} $ the set of prime numbers, and let $\mathbb N_0 = \mathbb N \cup \{ 0 \}$. For real numbers $a, b \in \mathbb R$, we set $[a, b] = \{ x \in \mathbb Z \mid a \le x \le b\}$. For $n \in \mathbb N$ and $p \in \mathbb{P} $, let $C_n$ denote a cyclic group with $n$ elements, $\mathsf v_p (n) \in \mathbb{N} _0$ the $p$-adic valuation of $n$ with $\mathsf v_p (p) = 1$, and $\mathbb F_p = \mathbb{Z} / p \mathbb{Z} $ the finite field with $p$ elements. \smallskip Let $G$ be an additive finite abelian group. Suppose that $G \cong C_{n_1} \oplus \ldots \oplus C_{n_r}$ with $1 < n_1 \t \ldots \t n_r$. Then $r = \mathsf r (G)$ is the {\it rank} of $G$, $n_r = \exp (G)$ is the {\it exponent} of $G$, and we define $\mathsf d^* (G) = \sum_{i=1}^r (n_i-1)$. If $|G| = 1$, then the exponent $\exp (G) = 1$, the rank $\mathsf r (G) = 0$, and we set $\mathsf d^* (G) = 0$. If $A, B \subset G$ are nonempty subsets, then $A + B = \{ a + b \mid a \in A, b \in B \}$ is their sumset. We will make use of the Cauchy-Davenport Theorem, which runs as follows (for a proof see \cite[Cor. 5.2.8.1]{Ge-HK06a}). \medskip \begin{lemma} \label{2.1} Let $G$ be a cyclic group of order $p \in \mathbb{P} $ and let $A,B \subset G$ be nonempty subsets. Then $\card{A+B} \ge \min\{ \card{A}+\card{B}-1, p \}$. \end{lemma} \medskip {\bf Sequences over groups.} Let $\mathcal F(G)$ be the (multiplicatively written) free abelian monoid with basis $G$. The elements of $\mathcal F(G)$ are called \ {\it sequences} \ over $G$. We write sequences $S \in \mathcal F (G)$ in the form \[ S = \prod_{g \in G} g^{\mathsf v_g (S)}\,, \quad \text{with} \quad \mathsf v_g (S) \in \mathbb N_0 \quad \text{for all} \quad g \in G \,. \] We call \ $\mathsf v_g (S)$ the \ {\it multiplicity} \ of $g$ in $S$, and we say that $S$ \ {\it contains} \ $g$ \ if \ $\mathsf v_g (S) > 0$.
A sequence $S_1 $ is called a \ {\it subsequence} \ of $S$ \ if \ $S_1 \, | \, S$ \ in $\mathcal F (G)$ \ (equivalently, \ $\mathsf v_g (S_1) \le \mathsf v_g (S)$ \ for all $g \in G$). If a sequence $S \in \mathcal F(G)$ is written in the form $S = g_1 \cdot \ldots \cdot g_l$, we tacitly assume that $l \in \mathbb N_0$ and $g_1, \ldots, g_l \in G$. For a sequence \[ S \ = \ g_1 \cdot \ldots \cdot g_l \ = \ \prod_{g \in G} g^{\mathsf v_g (S)} \ \in \mathcal F(G) \,, \] we call \[ |S| = l = \sum_{g \in G} \mathsf v_g (S) \in \mathbb N_0 \qquad \text{the \ {\it length} \ of \ $S$ \ and} \] \[ \sigma (S) = \sum_{i = 1}^l g_i = \sum_{g \in G} \mathsf v_g (S) g \in G \qquad \text{the \ {\it sum} \ of \ $S$}\,. \] The sequence \ $S$ \ is called a {\it zero-sum sequence} if $\sigma (S) = 0$, and it is called {\it zero-sum free} if $\sum_{i \in I} g_i \ne 0$ for all $\emptyset \ne I \subset [1, l]$ (equivalently, if there is no nontrivial zero-sum subsequence). We denote by \begin{itemize} \item $\mathsf D (G)$ the smallest integer $l \in \mathbb{N} $ such that every sequence $S$ over $G$ of length $|S| \ge l$ has a nontrivial zero-sum subsequence; \item $\mathsf d (G)$ the maximal length of a zero-sum free sequence over $G$. \end{itemize} Then $\mathsf D (G)$ is called the {\it Davenport constant} of $G$, and we have trivially that \[ \mathsf d^* (G) \le \mathsf d (G) = \mathsf D (G) - 1 \,. \] We will use without further mention that equality holds for $p$-groups and for groups of rank $\mathsf r (G) \le 2$ (\cite[Theorems 5.5.9 and 5.8.3]{Ge-HK06a}) (equality holds for further groups, but not in general \cite[Corollary 4.2.13]{Ge09a}). \medskip {\bf Group algebras and characters.} Let $R$ be a commutative ring \ (throughout, we assume that $R$ has a unit element $1 \ne 0$) \ and $G$ a finite abelian group. 
The \ {\it group algebra} \ $R [G]$ \ of $G$ over $R$ is a free $R$-module with basis $\{ X^g \mid g \in G\}$ \ (built with a symbol $X$), where multiplication is defined by \[ \Bigl( \sum_{g \in G} a_g X^g \Bigr) \Bigl( \sum_{g \in G} b_g X^g \Bigr) = \sum_{ g \in G} \Bigl( \sum_{h \in G} a_h b_{g-h} \Bigr) X^g \,. \] We view $R$ as a subset of $R [G]$ by means of $a = a X^0$ for all $a \in R$. An element of $R$ is a zero-divisor [\,a unit\,] of $R[G]$ if and only if it is a zero-divisor [\,a unit\,] of $R$. \smallskip Let $K$ be a field, \ $G$ a finite abelian group with $\exp(G) = n \in \mathbb N$ \ and \ $\mu_n(K) = \{\zeta \in K \mid \zeta^n =1\}$ the group of $n$-th roots of unity in $K$. An $n$-th root of unity $\zeta$ is called \ {\it primitive} \ if $\zeta^m \ne 1$ \ for all $m \in [1,n-1]$, and we denote by $\mu_n^* (K) \subset \mu_n (K)$ the subset of all primitive $n$-th roots of unity. We denote by \ ${\rm Hom} (G, K^{\times}) = {\rm Hom} (G, \mu_n(K))$ \ the \ {\it character group of \ $G$ \ with values in $K$} (whose operation is given by pointwise multiplication with the constant $1$ function as identity), and we briefly set $\widehat G = {\rm Hom} (G, K^{\times})$ if there is no danger of confusion. Every character \ $\chi \in \widehat G$ \ has a unique extension to a $K$-algebra homomorphism \ $\chi \colon K[G] \to K$ (again denoted by $\chi$) acting by means of \[ \chi \Bigl( \sum_{g \in G} a_g X^g \Bigr) = \sum_{g \in G} a_g \chi (g) \,. \] We call $K$ a \ {\it splitting field} \ of $G$ if \ $|\mu_n(K)| = n$. Let $K$ be a splitting field of $G$ and $\widehat G = {\rm Hom} (G, K^{\times})$. We gather the properties needed for the sequel (for details see \cite[Section 5.5]{Ge-HK06a} and \cite[\S 17]{Cu-Re81}). 
We have \ ${\rm char} (K) \nmid \exp (G)$, \ $|G| = |G| \,1_K \in K^{\times}$, \ $G \cong {\rm Hom} (G, K^{\times})$, and the map \[ {\rm Hom} (G, K^{\times}) \times G \to K^\times\,, \quad \text{defined by} \quad (\chi, \, g) \ \mapsto \ \chi(g)\,, \] is a non-degenerate pairing (that is, if $\chi (g) = 1$ for all $\chi \in \widehat G$, then $g = 0$, and if $\chi (g) = 1$ for all $g \in G$, then clearly $\chi = 1$, the constant $1$ function). Furthermore, the Orthogonality Relations hold (\cite[Proposition 5.5.2]{Ge-HK06a}), and for every $f \in K[G]$, we have (see \cite[Proposition 5.5.2]{Ge-HK06a}) $$f = 0\in K[G]\;\mbox{ if and only if }\chi (f) = 0\;\mbox{ for every }\;\chi \in {\rm Hom} (G, K^{\times}).$$ Moreover, if $\chi(f) \ne 0$ for all $\chi \in {\rm Hom}(G,K^\times)$, then $f \in K[G]^\times$; explicitly, a simple calculation using the Orthogonality Relations shows that \[ f^{-1} = \frac 1 {|G|} \sum_{g \in G} \Bigl( \sum_{\chi \in {\rm Hom}(G,K^\times) } \frac{\chi(-g)}{\chi(f)} \,\Bigr)\, X^g \ \,. \] For a subgroup $H \subset G$, we set \[ H^\perp = \{ \chi \in \widehat G \mid \chi (h) = 1 \ \text{for all} \ h \in H \} \,. \] We clearly have a natural isomorphism $H^\perp \cong \widehat{G/H}$. \bigskip \section{Proof of the Theorem} \bigskip We fix our notation, which will remain valid throughout this section. Let $G = \mathsf{C} _m \oplus \mathsf{C} _{mn}$ with $m \in \mathbb{N} _{\ge 2}$ and $n \in \mathbb{N} $, and let $e_1,e_2 \in G$ be such that $G = \langle e_1 \rangle \oplus \langle e_2 \rangle$, $\ord(e_1)=m$ and $\ord(e_2)=mn$. Furthermore, let $K$ be a splitting field of $G$, $\zeta \in \mu^*_{mn}(K)$, and let $\psi,\varphi \in \widehat G$ be defined by $\psi(e_1)=\zeta^n$, $\psi(e_2)=1$ and $\varphi(e_1)=1$, $\varphi(e_2)=\zeta$. Then $\ord(\psi)=m$, $\ord(\varphi)=mn$ and $\widehat G = \langle\psi\rangle \oplus\langle\varphi\rangle$.
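To make the notation concrete, consider a small instance (an illustration only, not needed later): take $m = n = 2$, so that $G = \mathsf{C}_2 \oplus \mathsf{C}_4$, and $K = \mathbb{C}$ with $\zeta = i \in \mu_4^*(\mathbb{C})$. Then

```latex
\[
\psi(e_1) = \zeta^2 = -1, \quad \psi(e_2) = 1,
\qquad
\varphi(e_1) = 1, \quad \varphi(e_2) = i \,,
\]
```

so indeed $\ord(\psi) = 2 = m$ and $\ord(\varphi) = 4 = mn$, and the eight characters $\psi^a \varphi^b$ with $a \in [0,1]$, $b \in [0,3]$ exhaust $\widehat G \cong G$.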
Note that, in the case $m=p \in \mathbb{P} $, \[ \theta \colon \begin{cases} \mathbb F_p \times \langle\psi,\varphi^n\rangle &\to \langle\psi,\varphi^n\rangle\\ (k+p \mathbb{Z} , \chi) &\mapsto \chi^k, \end{cases} \] is an $\mathbb F_p$-vector space structure on $(\langle\psi,\varphi^n\rangle, \cdot)$. Whenever $\langle\psi,\varphi^n\rangle$ is considered as an $\mathbb F_p$-vector space, it is done so with respect to $\theta$. The following Lemmas \ref{lemma:reduce-multiples} and \ref{lemma:G0} will allow us to restrict ourselves to sequences consisting of certain special elements in the proof of Theorem \ref{thm:main}.1. Lemma \ref{lemma:G0} is a generalization of a statement used by W. Gao and Y. Li in their proof of the case $m=2$ \cite{Ga-Li09a}. \medskip \begin{lemma} \label{lemma:reduce-multiples} Let $R$ be a commutative ring, $g_1\cdot\ldots\cdot g_l \in \mathcal{F}(G)$ a sequence over $G$, and let $a_1,\ldots,a_l \in R\setminus \{0\}$ be such that $(X^{g_1} - a_1) \cdot\ldots\cdot (X^{g_l} - a_l) = 0 \in R[G]$. Then, for any $k_1,\ldots,k_l \in \mathbb{N} $, also $(X^{k_1 g_1} - a_1^{k_1}) \cdot\ldots\cdot (X^{k_l g_l} - a_l^{k_l}) = 0 \in R[G]$. \end{lemma} \begin{proof} For all $i \in [1,l]$, \[ X^{k_i g_i} - a_i^{k_i} = (X^{g_i} - a_i) \sum_{j=0}^{k_i-1} X^{j g_i} (a_i)^{k_i - 1 -j}, \] from which the lemma immediately follows. \end{proof} \medskip \begin{lemma} \label{lemma:G0} Let $R$ be a commutative ring and \[ G_0 = \{ e_1 \} \cup \Big\{ k e_1 + \prod_{p \in \mathbb{P} ,\, p \mid m} p^{u_p} e_2 \mid k \in [0,m-1], u_p \in \mathbb{N} _0 \Big\}. \] Let $M \in \mathbb{N} $ be such that, for every sequence $S=g_1 \cdot\ldots\cdot g_{M+1} \in \mathcal{F}(G_0)$, there exist $a_1,\ldots,a_{M+1} \in R\setminus \{0\}$ such that \[ f = (X^{g_1} - a_1) \cdot\ldots\cdot (X^{g_{M+1}} - a_{M+1}) = 0 \in R[G]. \] Then $ \mathsf{d} (G,R) \le M$.
\end{lemma} \begin{proof} By Lemma \ref{lemma:reduce-multiples} and the definition of $ \mathsf{d} (G,R)$, it is sufficient to show that every element $g \in G$ is a multiple of an element in $G_0$. Let $g=k e_1 + l e_2$ with $k \in [0,m-1]$ and $l \in [0,mn-1]$. If $l=0$, $g$ is obviously a multiple of $e_1$. Consider the case $l \ne 0$. Then $l = \prod_{p \in \mathbb{P} , p \mid m} p^{\val_p(l)} \cdot q$ with $q \in [1,mn-1]$ and $\gcd(q,m)=1$. Therefore there exists an $a \in [1,m-1]$ with $qa \equiv 1 \mod m$. From $\ord(e_1)=m$, it follows that $g = q(ak e_1 + \prod_{p \in \mathbb{P} , p \mid m} p^{\val_p(l)} e_2)$. Choosing $k' \in [0,m-1]$ such that $k' \equiv ak \mod m$, we obtain $g = q(k' e_1 + \prod_{p \in \mathbb{P} , p \mid m} p^{\val_p(l)} e_2)$, which is a multiple of an element in $G_0$. \end{proof} \medskip \begin{lemma} \label{lemma:orthogonal} Let $g \in G$ and $\chi, \chi' \in \widehat G$. Then $\chi'(g)=\chi(g)$ if and only if $\chi' \in \chi \langle g\rangle^\perp$. Also \begin{enumerate} \item \label{el:ke1+e2} $\langle k e_1 + e_2 \rangle^\perp = \langle \psi\varphi^{-nk} \rangle$ for $k \in [0,m-1]$; \item \label{el:ke1+mle2} $\langle \varphi^n \rangle\subset\langle k e_1 + m l e_2 \rangle^\perp$ for $k \in [0,m-1]$ and $l \in [0,n-1]$. \end{enumerate} \end{lemma} \begin{proof} Clearly $\chi'(g) = \chi(g)$ if and only if $\chi^{-1} \chi'(g) = 1$, i.e., $\chi' \in \chi \langle g \rangle^\perp$. \smallskip 1. From $\psi^{-1}(k e_1 + e_2) = \zeta^{-nk} = \varphi^{-nk}(k e_1 + e_2)$, it follows that $\langle \psi\varphi^{-nk} \rangle \subset \langle k e_1 + e_2 \rangle^\perp$. Then $\ord(k e_1 + e_2) = mn$ and $\langle k e_1 + e_2 \rangle^\perp \cong \widehat{ G/\langle k e_1+e_2 \rangle }$ imply $\card{\langle k e_1 + e_2 \rangle^\perp}=m$, from which $\langle k e_1 + e_2 \rangle^\perp = \langle \psi\varphi^{-nk} \rangle$ follows. \smallskip 2. 
Observe that $\varphi^n(ke_1 + ml e_2) = \zeta^{nml} = (\zeta^{nm})^l = 1$ implies $\langle \varphi^n \rangle \subset \langle k e_1 + ml e_2 \rangle^\perp$. \end{proof} \medskip \begin{lemma} \label{lemma:cover-equiv} Let $H \subset \widehat G$ and $S=g_1\cdot\ldots\cdot g_l \in \mathcal{F}(G)$. Then the following statements are equivalent\,{\rm :} \begin{enumerate} \item[(a)] There exist $a_1,\ldots,a_l \in K^\times$ such that $\chi\big( \prod_{i=1}^l (X^{g_i}-a_i)\big)=0$ for all $\chi \in H$. \item[(b)] There exist $s \in [0, l]$ and $\chi_1,\ldots,\chi_s \in H$ such that $H \subset \bigcup_{i=1}^s \chi_i \langle g_i \rangle^\perp$. \item[(c)] $H=\emptyset$ or there exist $\chi_1,\ldots,\chi_l \in H$ such that $H \subset \bigcup_{i=1}^l \chi_i \langle g_i \rangle^\perp$. \end{enumerate} \end{lemma} \begin{proof} For $H=\emptyset$ all statements are trivially true. Let $H \neq \emptyset$. (a) $\Rightarrow$ (b) The extension of $\chi \in \widehat G$ onto $K[G]$ is a $K$-algebra homomorphism, and thus $$\chi\big( \prod_{i=1}^l (X^{g_i}-a_i) \big) =0$$ if and only if there is an $i \in [1,l]$ with $\chi(X^{g_i}-a_i)=0$, i.e., $\chi(g_i)=a_i$. Let \[ s = \card{ \{ i \in [1,l] \mid \text{ there exists a $\chi \in H$ such that } \chi(g_i) = a_i \} } \in [0,l]. \] Without restriction let $g_1,\ldots,g_s$ and $a_1,\ldots,a_s$ be such that there exist $\chi_i \in H$ with $\chi_i(g_i)=a_i$ for $i \in [1,s]$. Let $\chi \in H$. Then, by assumption, $\chi(g_i)=a_i$ for some $i \in [1,s]$. Therefore $\chi_i^{-1} \chi(g_i) = 1$, i.e. $\chi \in \chi_i \langle g_i\rangle^\perp$. (b) $\Rightarrow$ (a) Let $a_i=\chi_i(g_i)$ for $i \in [1,s]$ and let $a_{s+1}=\ldots=a_l=1$. Let $\chi \in H$. Then, by assumption, there exists an $i \in [1,s]$ such that $\chi \in \chi_i \langle g_i \rangle^\perp$, i.e., $\chi(g_i)=\chi_i(g_i)=a_i$. Hence $\chi(X^{g_i}-a_i)=0$. (b) $\Leftrightarrow$ (c) Obvious. 
\end{proof} \medskip Note that, in particular, $ \mathsf{d} (G,K)$ is the supremum of all $l \in \mathbb{N} _0$ such that there exists a sequence $S=g_1\cdot\ldots\cdot g_l \in \mathcal{F}(G)$ with \[ \widehat G \supsetneq \bigcup_{i=1}^l \chi_i \langle g_i \rangle^\perp \] for any choice of $\chi_1,\ldots,\chi_l \in \widehat G$. Or, equivalently, $ \mathsf{d} (G,K)+1$ is the minimum of all $l \in \mathbb{N} _0$ such that, for any sequence $S=g_1\cdot\ldots\cdot g_l \in \mathcal{F}(G)$, there exist $\chi_1,\ldots,\chi_l \in \widehat G$ such that $\widehat G$ can be covered as above: \[ \widehat G = \bigcup_{i=1}^l \chi_i \langle g_i \rangle^\perp. \] Consider $m=p \in \mathbb{P} $. Our strategy for finding an upper bound on $ \mathsf{d} (G,K)$ will be to subdivide $\widehat G$ into cosets modulo $\langle\psi,\varphi^n\rangle$ and cover each of these cosets individually. Lemma \ref{lemma:G0} allows us to restrict ourselves to certain special elements $g \in G$ in doing so, and from Lemma \ref{lemma:orthogonal}, we see that for these elements the subgroups $\langle g \rangle^\perp$ contain (or are) $1$-dimensional subspaces, i.e., lines of the $2$-dimensional $\mathbb F_p$-vector space $\langle \psi,\varphi^n \rangle$. Then, for $\chi \in \langle\psi,\varphi^n\rangle$, $\chi \langle g\rangle^\perp$ is an affine line in $\langle \psi,\varphi^n \rangle$ containing the ``point'' $\chi$, and our task essentially boils down to covering $n$ copies of $\langle\psi,\varphi^n\rangle$ by such lines (where the slopes are fixed by $S$). Before we do so, we study some simple configurations in Lemmas \ref{lemma:parallel} and \ref{lemma:star}. The main part of the proof for the cases $m \in \{2,3\}$ then follows in Lemma \ref{lemma:residue-seqs}. It is based on the proof by Gao and Li of the case $m=2$, but is stated in terms of group characters instead of working with the group algebra directly.
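In coordinates, this picture reads as follows (an illustrative restatement; the precise version is Lemma \ref{lemma:lines} below): writing the elements of $\langle\psi,\varphi^n\rangle$ as $\psi^u\varphi^{nv}$ with $u,v \in [0,p-1]$, an element $g = k e_1 + e_2$ contributes, for $\chi = \varphi^{ns}$, the affine line

```latex
\[
\chi \, \langle g \rangle^\perp
  \;=\;
  \bigl\{\, \psi^u \varphi^{nv} \;\bigm|\; u,v \in [0,p-1]
      \ \text{with} \ k u + v \equiv s \mod p \,\bigr\} \,,
\]
```

that is, a line of ``slope'' $-k$ through the point with coordinates $(0,s)$, while the elements $g \in \{ k e_1 + p l e_2 \mid l \in \mathbb{N}_0 \}$ contribute cosets of the ``vertical'' line $\langle \varphi^n \rangle$ by Lemma \ref{lemma:orthogonal}.2.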
\medskip \begin{lemma} \label{lemma:parallel} Let $s \in [0,m]$ and let $S=g_1\cdot\ldots\cdot g_{s+(m-s)m} \in \mathcal{F}(G)$ such that either $g_1=\ldots=g_s=ke_1+e_2$ with $k \in [0,m-1]$ or $g_1,\ldots,g_s \in \{ k e_1 + m l e_2 \mid k \in [0,m-1], l \in \mathbb{N} _0 \}$. Then there exist $\chi_1,\ldots,\chi_{s+(m-s)m}$ such that $\langle\psi,\varphi^n\rangle \subset \bigcup_{i=1}^{s+(m-s)m} \chi_i \langle g_i \rangle^\perp$. \end{lemma} \begin{proof} Let $L=\langle\psi\varphi^{-nk}\rangle$ in the case $g_1=\ldots=g_s=ke_1+e_2$, and let $L=\langle\varphi^n\rangle$ otherwise. Since $L$ is a subgroup of $\langle\psi,\varphi^n\rangle$ and has cardinality $\card{L}=m$, there exist $\tau_1,\ldots,\tau_m \in \langle\psi,\varphi^n\rangle$ such that $\langle\psi,\varphi^n\rangle = \biguplus_{i=1}^m \tau_i L$. By Lemma \ref{lemma:orthogonal}, $L \subset \langle g_i \rangle^\perp$ for $i \in [1,s]$. Then \[ \langle\psi,\varphi^n\rangle \; \subset \; \bigcup_{i=1}^s \tau_i \langle g_i \rangle^\perp \cup \biguplus_{i=s+1}^m \tau_i L. \] For $j \in [s+1,s+(m-s)m]$, let $\chi'_j \in \langle g_j \rangle^\perp$, and let $L=\{\lambda_1,\ldots,\lambda_{m}\}$. Then, for $i \in [s+1,m]$, \[ \tau_i L = \{ \tau_i \lambda_j \mid j \in [1,m] \} \subset \bigcup_{j=1}^{m} \tau_i \lambda_j \chi_{s+(i-(s+1))m+j}'^{-1} \langle g_{s+(i-(s+1))m+j} \rangle^\perp \,. \qedhere \] \end{proof} \medskip \begin{lemma} \label{lemma:star} Let $m=p \in \mathbb{P} $, $g \in \{ k e_1 + p l e_2 \mid k \in [0,p-1], l \in \mathbb{N} _0 \}$ and $S=\prod_{i=0}^{p-1} (i e_1 + e_2) g$. Then $\langle\psi,\varphi^n\rangle \subset \bigcup_{i=0}^{p-1} \langle i e_1 + e_2\rangle^\perp \cup \langle g \rangle^\perp$. \end{lemma} \begin{proof} By Lemma \ref{lemma:orthogonal}, \[ \bigcup_{i=0}^{p-1} \langle\psi\varphi^{-ni}\rangle \cup \langle\varphi^n\rangle \; \subset \; \bigcup_{i=0}^{p-1} \langle i e_1 + e_2\rangle^\perp \cup \langle g \rangle^\perp. 
\] Let $\psi^k\varphi^{nl} \in \langle\psi,\varphi^n\rangle$ with $k,l\in[0,p-1]$. In the case $k=0$, clearly $\varphi^{nl} \in \langle\varphi^n\rangle$. Otherwise, there exists an $i \in [0,p-1]$ such that $-ik \equiv l \mod p$. Hence $\psi^k\varphi^{nl} = (\psi\varphi^{-ni})^k \in \langle\psi\varphi^{-ni}\rangle$. \end{proof} \medskip \begin{lemma} \label{lemma:residue-seqs} Let $m=p \in \mathbb{P} $, $G_1 = \{ e_1 \} \cup \{ k e_1 + p^u e_2 \mid k \in [0,p-1], u \in \mathbb{N} \}$, and \[ G_0 = \{ e_1 \} \cup \{ k e_1 + p^u e_2 \mid k \in [0,p-1], u \in \mathbb{N} _0 \} = \{ k e_1 + e_2 \mid k \in [0,p-1] \} \uplus G_1. \] If, for all sequences $T=h_1 \cdot\ldots\cdot h_{rp-1} \in \mathcal{F}(G_0)$ with $r \in [2,\min{ \{ p-1, n+1 \} }]$ and $\val_g(T) < p$ for all $g \in G_0$ as well as $\sum_{g \in G_1} \val_g(T) < p$, there exist $\chi_1,\ldots,\chi_{rp-1} \in \widehat G$ such that $\bigcup_{i=0}^{r-2} \varphi^i \langle\psi,\varphi^n\rangle \subset \bigcup_{i=1}^{rp-1} \chi_i \langle h_i \rangle^\perp$, then $ \mathsf{d} (G,K)= \mathsf{d} ^*(G)$. \end{lemma} \begin{proof} Since $ \mathsf{d} ^*(G) \le \mathsf{d} (G) \le \mathsf{d} (G,K)$ always holds, it is sufficient to show that $ \mathsf{d} (G,K) \le \mathsf{d} ^*(G) = (pn-1) + (p-1) = (n+1)p - 2$. By Lemma \ref{lemma:G0}, it is sufficient to show that, for any sequence $S=g_1\cdot\ldots\cdot g_{(n+1)p -1} \in \mathcal F(G_0)$, there exist $a_1,\ldots,a_{(n+1)p-1} \in K^\times$ such that \[ f = \prod_{i=1}^{(n+1)p-1} (X^{g_i} - a_i) = 0 \in K[G]. \] To see this, we use Lemma \ref{lemma:cover-equiv} and show that there exist $\chi_1,\ldots, \chi_{(n+1)p-1}$ such that \[ \widehat G = \biguplus_{i=0}^{n-1} \varphi^i \langle\psi,\varphi^n\rangle \subset \bigcup_{i=1}^{(n+1)p-1} \chi_i \langle g_i \rangle^\perp. 
We group the elements of $S$ into as many $p$-tuples of the forms $(e_2,\ldots,e_2)$, $(e_1 + e_2,\ldots, e_1 + e_2)$, \ldots, $( (p-1)e_1 + e_2, \ldots, (p-1) e_1 + e_2)$ and $(g'_1,\ldots,g'_p) \in G_1^p$ as possible to obtain $l \in [0,n]$ such tuples. Without restriction, let these $p$-tuples be $(g_1,\ldots,g_p)$, \ldots, $(g_{(l-1)p+1},\ldots, g_{lp})$. For each $i \in [1,l]$, the tuple $(g_{(i-1)p+1},\ldots,g_{ip})$ fulfills the conditions of Lemma \ref{lemma:parallel} with $s=p$. Therefore, there exist $\chi_{(i-1)p+1},\ldots,\chi_{ip}$ such that \[ \varphi^{n-i} \langle\psi,\varphi^n\rangle \; \subset\; \bigcup_{j=(i-1)p+1}^{ip} \chi_{j} \langle g_{j} \rangle^\perp. \] It remains to be shown that $\chi_{lp+1},\ldots,\chi_{(n+1)p-1}$ can be chosen such that \[ \bigcup_{i=0}^{n-l-1} \varphi^i \langle\psi,\varphi^n\rangle\; \subset\; \bigcup_{j=lp+1}^{(n+1)p-1} \chi_{j}\langle g_{j}\rangle^\perp. \] In the case $l \geq n$, this holds trivially, and therefore it is sufficient to consider $l \leq n-1$. By $T=g_{lp+1}\cdot\ldots\cdot g_{(n+1)p-1}$ we denote the subsequence of $S$ consisting of the remaining elements. We have $\length{T} = \length{S} - lp = (n+1-l)p - 1$. In the process of creating $p$-tuples, we partitioned the elements of $G_0$ into $p+1$ different types. If there were at least $p$ elements of one type, we could create another tuple, in contradiction to the maximal choice of $l$. Thus we must have $\val_g(T) < p$ for all $g \in G_0$, $\sum_{g \in G_1} \val_g(T) < p$, and $\length{T} \le (p+1)(p-1) = p^2 - 1$, which implies $n+1-l \le p$. Altogether, we have $n+1-l \in [2,p]$. In the case $n+1-l \le p-1$, we set $r=n+1-l \in [2,\min{\{p-1,n+1\}}]$.
Then, by assumption, $\chi_{lp+1},\ldots,\chi_{(n+1)p-1}$ can be chosen such that $$\bigcup_{i=0}^{r-2} \varphi^i \langle \psi, \varphi^n \rangle\; \subset\; \bigcup_{j=lp+1}^{(n+1)p-1} \chi_{j}\langle g_{j}\rangle^\perp.$$ Since $r-2 = n-l-1$, this already means $\widehat G \subset \bigcup_{i=1}^{(n+1)p-1} \chi_i \langle g_i \rangle^\perp$. In the case $n+1-l = p$, we have $\length{T}=p^2 - 1=(p+1)(p-1)$. This can only happen if each of the $p+1$ different types of elements occurs exactly $p-1$ times. Therefore $$T=\prod_{j=0}^{p-1} (je_1+e_2)^{p-1} \cdot \prod_{i=0}^{p-2} h_i = \prod_{i=0}^{p-2} \Big( \prod_{j=0}^{p-1} (j e_1 + e_2) \cdot h_i \Big)$$ with $h_0,\ldots,h_{p-2} \in G_1$. Without restriction, for $i \in [0,p-2]$, let $g_{lp+i(p+1)+1}\cdot\ldots\cdot g_{lp+i(p+1)+(p+1)} = \prod_{j=0}^{p-1} (j e_1 + e_2) \cdot h_i$. For every $i \in [0,p-2]$, we set $\chi_{lp+i(p+1)+1}=\ldots=\chi_{lp+i(p+1)+(p+1)}=\varphi^i$. Then, from Lemma \ref{lemma:star}, it follows that $\varphi^i \langle \psi,\varphi^n \rangle \subset \bigcup_{j=i(p+1)+1}^{i(p+1)+(p+1)} \chi_{lp+j} \langle g_{lp+j} \rangle^\perp$. Due to $n-l-1=p-2$, this again implies $\widehat G \subset \bigcup_{i=1}^{(n+1)p-1} \chi_i \langle g_i \rangle^\perp$. \end{proof} \bigskip \begin{proof}[\textbf{\emph{Proof of Theorem \ref{thm:main}.1}}] For $p=2$, i.e., $G = \mathsf{C} _2 \oplus \mathsf{C} _{2n}$, this follows trivially from Lemma \ref{lemma:residue-seqs}, since the range $[2,\min \{ p-1, n+1 \}] = [2,1]$ is empty and there are thus no admissible sequences $T$. Consider $p=3$, i.e., $G = \mathsf{C} _3 \oplus \mathsf{C} _{3n}$. Let $G_1 = \{ e_1 \} \cup \{ k e_1 + 3^u e_2 \mid k \in [0,2], u \in \mathbb{N} \}$ and $G_0 = \{ e_2, e_1+e_2, 2e_1 + e_2 \} \uplus G_1$.
Then, by Lemma \ref{lemma:residue-seqs}, it is sufficient to show that, for $T=h_1\cdot\ldots \cdot h_5 \in \mathcal{F}(G_0)$, we can choose $\chi_1,\ldots,\chi_5 \in \widehat G$ such that $\langle \psi,\varphi^n \rangle \; \subset \; \chi_1 \langle h_1 \rangle^\perp \cup \ldots \cup \chi_5 \langle h_5 \rangle^\perp$. We divide the elements into four types: $e_2$, $e_1+e_2$, $2e_1+e_2$ and elements from $G_1$. Since $\length{T}=5$, one of these types must occur at least twice. Without restriction, let $h_1$ and $h_2$ be of the same type. Thus we have either $h_1=h_2=k e_1+e_2$ for some $k \in [0,2]$ or $h_1,h_2 \in G_1$. Then $T$ fulfills the conditions of Lemma \ref{lemma:parallel} with $s=2$, and it follows that $\chi_1,\ldots,\chi_5$ can be chosen such that $\langle \psi,\varphi^n\rangle \subset \bigcup_{i=1}^5 \chi_i \langle h_i \rangle^\perp$. \end{proof} The following Lemma \ref{lemma:lines} recapitulates a few simple facts, which are well known in the context of affine lines, and will be used extensively in the construction of a counterexample in the case $p \ge 5$ and $n \ge 2$. \medskip \begin{lemma} \label{lemma:lines} Let $m=p \in \mathbb{P} $, $g_1 = k_1 e_1 + e_2$, $g_2 = k_2 e_1 + e_2$ with $k_1,k_2 \in [0,p-1]$, $\chi \in \widehat G$ and $\chi_1,\chi_2 \in \chi \langle\psi,\varphi^n\rangle$. \begin{enumerate} \item $\chi^{-1} \chi_i \langle g_i \rangle^\perp = \varphi^{n s_i} \langle g_i \rangle^\perp$ with $s_i \in [0,p-1]$ for $i \in \{1,2\}$. \item $\chi^{-1} \chi_i \langle g_i \rangle^\perp = \{ \psi^u \varphi^{nv} \mid u,v \in [0,p-1] \text{ with } k_i u + v \equiv s_i \mod p \}$ for $i \in \{1,2\}$. \item \begin{enumerate} \item $\card{\chi_1 \langle g_1 \rangle^\perp \cap \chi_2 \langle g_2 \rangle^\perp} = 1$ if and only if $g_1 \neq g_2$. \item $\card{\chi_1 \langle g_1 \rangle^\perp \cap \chi_2 \langle g_2 \rangle^\perp} = 0$ if and only if $g_1 = g_2$ and $s_1 \neq s_2$. 
\item $\card{\chi_1 \langle g_1 \rangle^\perp \cap \chi_2 \langle g_2 \rangle^\perp} = p$ if and only if $g_1 = g_2$ and $s_1 = s_2$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} 1. Let $i \in \{1,2\}$ and $\chi^{-1} \chi_i=\psi^{u_i}\varphi^{n v_i}$ with $u_i,v_i \in [0,p-1]$. By Lemma \ref{lemma:orthogonal}.1, $\langle g_i \rangle^\perp = \langle\psi\varphi^{-n k_i} \rangle$. Therefore $\varphi^{-n(k_i u_i + v_i)} \chi^{-1} \chi_i = \psi^{u_i} \varphi^{-n k_i u_i} \in \langle g_i \rangle^\perp$, and hence $\chi^{-1} \chi_i \langle g_i \rangle^\perp = \varphi^{n s_i} \langle g_i \rangle^\perp$ with $s_i \in [0,p-1]$ chosen such that $s_i \equiv k_i u_i + v_i \mod p$. \smallskip 2. In view of Lemma \ref{lemma:orthogonal}.1, we have, for $u,v \in [0,p-1]$, \, $\psi^u \varphi^{nv} \in \chi^{-1} \chi_i \langle g_i \rangle^\perp = \varphi^{n s_i} \langle \psi \varphi^{-n k_i}\rangle$ if and only if $\psi^u \varphi^{nv} = \psi^w \varphi^{n(s_i - k_i w)}$ for some $w \in [0,p-1]$. This is the case if and only if $u \equiv w \mod p$ and $v \equiv s_i - k_i w \mod p$, i.e., if and only if $u \equiv w \mod p$ and $k_i u + v \equiv s_i \mod p$ (recall by Lemma \ref{lemma:orthogonal}.1 that $\langle g_i\rangle^ \perp\subset \langle \psi,\varphi^n\rangle$). \smallskip 3. By 2, we have $\chi^{-1} \chi_1 \langle g_1 \rangle^\perp \cap \chi^{-1} \chi_2 \langle g_2 \rangle^\perp = \{ \psi^u \varphi^{nv} \mid u,v \in [0,p-1] \text{ with } k_1 u + v \equiv s_1 \mod p \text{ and } k_2 u + v \equiv s_2 \mod p \}$. Since \[ \card{\chi^{-1} \chi_1 \langle g_1 \rangle^\perp \cap \chi^{-1} \chi_2 \langle g_2 \rangle^\perp} = \card{\chi_1 \langle g_1 \rangle^\perp \cap \chi_2 \langle g_2 \rangle^\perp}, \] it is sufficient to consider the number of solutions of the linear system \[ k_1 u + v \equiv s_1 \mod p \qquad\textrm{ and }\qquad k_2 u + v \equiv s_2 \mod p \] for $u,v\in[0,p-1]$ over $\mathbb F_p$. 
In the case $g_1 \neq g_2$, i.e., $k_1 \ne k_2$, it possesses a unique solution. In the case $g_1 = g_2$, it possesses no solution for $s_1 \neq s_2$. For $s_1=s_2$, the two equations coincide, and we obtain $p$ solutions. \end{proof} In the construction of the counterexamples, we use the same characterization of $ \mathsf{d} (G,K)$, derived from Lemma \ref{lemma:cover-equiv}, as in the proof of Theorem \ref{thm:main}.1---except now we show that it is not possible to cover $\widehat G$ with such subsets. To do so, we first consider a special type of sequence in Lemma \ref{lemma:l-triple}, which will turn out to be the only one which cannot be discarded with simpler combinatorial arguments, as will be given in the proof of Theorem \ref{thm:main}.2 that follows the lemma. \medskip \begin{lemma} \label{lemma:l-triple} Let $m=p \in \mathbb{P} $, $p \ge 5$ and $k_1,k_2,k_3 \in [0,p-1]$ be distinct. Let $l \in [2,p-1]$, \[ T = (k_1 e_1 + e_2)^l (k_2 e_1 + e_2)^l (k_3 e_1 + e_2)^l \in \mathcal{F}(G), \] and $\chi \in \widehat G$. For $i \in [1,3]$ and $j \in [1,l]$, let $\chi_{i,j} \in \widehat G$. Then \[ \lrcard{ \Big( \bigcup_{i=1}^3 \bigcup_{j=1}^l \chi_{i,j} \langle k_i e_1+e_2 \rangle^\perp \Big) \cap \chi \langle \psi,\varphi^n\rangle } < l (3p - 2l). \] \end{lemma} \begin{proof} We set $g_i=k_i e_1 + e_2$ for $i \in [1,3]$. Let $i \in [1,3]$ and $j \in [1,l]$. We can assume $\chi_{i,j} \in \chi \langle\psi,\varphi^n\rangle$ since otherwise $\chi_{i,j} \langle g_i\rangle^\perp \cap \chi \langle\psi,\varphi^n\rangle = \emptyset$ (due to $\langle g_i\rangle^\perp = \langle\psi\varphi^{-nk_i} \rangle \subset \langle\psi,\varphi^n\rangle$). Using Lemma \ref{lemma:lines}.1, we can furthermore assume $\chi^{-1} \chi_{i,j}=\varphi^{ns_{i,j}}$ with $s_{i,j} \in [0,p-1]$. And we can then also assume, without restriction, $s_{i,j} \neq s_{i,j'}$ for $j' \in [1,l]\setminus \{j\}$, since otherwise $\chi_{i,j}\langle g_i\rangle^\perp = \chi_{i,j'} \langle g_i \rangle^\perp$. 
For $i \in [1,3]$, let $E_i = \bigcup_{j=1}^l \chi_{i,j} \langle g_i \rangle^\perp$. Then \[ \Big( \bigcup_{i=1}^3 \bigcup_{j=1}^l \chi_{i,j} \langle g_i \rangle^\perp \Big) \cap \chi \langle \psi,\varphi^n\rangle = E_1 \cup E_2 \cup E_3 \] and \[ \card{E_1 \cup E_2 \cup E_3} = \sum_{i=1}^3 \card{E_i} - \sum_{1 \le i < i' \le 3} \card{E_i \cap E_{i'}} + \card{E_1 \cap E_2 \cap E_3}. \] For $i,i' \in [1,3]$ distinct, we show $\card{E_i}=lp$, $\card{E_i \cap E_{i'}} = l^2$ and $\card{E_1 \cap E_2 \cap E_3} < l^2$. Then $\card{E_1 \cup E_2 \cup E_3} < 3lp - 3l^2 + l^2 = l(3p - 2l)$. Let $i \in [1,3]$. By Lemma \ref{lemma:lines}.3b, $\chi_{i,j}\langle g_i \rangle^\perp \cap \chi_{i,j'}\langle g_i\rangle^\perp = \emptyset$ for $j,j' \in [1,l]$ with $j \neq j'$, and $\card{\langle g_i \rangle^\perp}=\card{\langle \psi\varphi^{-n k_i}\rangle} = p$ (by Lemma \ref{lemma:orthogonal}.1). Therefore $\card{E_i}=lp$. Let $i,i' \in [1,3]$ be distinct. For $j,j' \in [1,l]$ distinct, we have $\chi_{i,j} \langle g_i \rangle^\perp \cap \chi_{i,j'} \langle g_i \rangle^\perp = \emptyset$ and $\chi_{i',j} \langle g_{i'} \rangle^\perp \cap \chi_{i',j'} \langle g_{i'} \rangle^\perp = \emptyset$ (by Lemma \ref{lemma:lines}.3b). This implies that, in \[ E_i \cap E_{i'} = \big( \bigcup_{j=1}^l \chi_{i,j} \langle g_i\rangle^\perp \big) \cap \big( \bigcup_{j'=1}^l \chi_{i',j'} \langle g_{i'}\rangle^\perp \big) = \biguplus_{j=1}^l \biguplus_{j'=1}^l ( \chi_{i,j} \langle g_i\rangle^\perp \cap \chi_{i',j'} \langle g_{i'} \rangle^\perp ), \] the union is disjoint. By Lemma \ref{lemma:lines}.3a, $\card{\chi_{i,j} \langle g_i\rangle^\perp \cap \chi_{i',j'} \langle g_{i'} \rangle^\perp} = 1$ for $j,j' \in [1,l]$, and therefore $\card{E_i \cap E_{i'}} = l^2$. Assume $\card{E_1 \cap E_2 \cap E_3} \ge l^2$. Then, since $E_1 \cap E_2 \cap E_3 \subset E_1 \cap E_2$ and $\card{E_1 \cap E_2} = l^2$, we obtain $\card{E_1 \cap E_2 \cap E_3} = l^2$. For $a \in \mathbb{Z} $, let $ \overline a = a + p \mathbb{Z} \in \mathbb F_p$. Let $u,v \in [0,p-1]$.
By Lemma \ref{lemma:lines}.2, $\chi \psi^u\varphi^{nv} \in E_1 \cap E_2 \cap E_3$ if and only if there are $b_i \in \{ s_{i,1}, \ldots, s_{i,l} \}$, for $i \in [1,3]$, such that \begin{align*} \overline {k_1} \overline u + \overline v &= \overline {b_1} \\ \overline {k_2} \overline u + \overline v &= \overline {b_2} \\ \overline {k_3} \overline u + \overline v &= \overline {b_3}. \end{align*} Since $ \overline {k_1}, \overline {k_2}$ and $ \overline {k_3}$ are pairwise distinct, $( \overline {k_1}, \overline 1)$, $( \overline {k_2}, \overline 1)$ and $( \overline {k_3}, \overline 1)$ are pairwise $\mathbb F_p$-linearly independent. For $i \in [1,3]$, we define $\Phi_i: \chi \langle\psi,\varphi^n\rangle \to \mathbb F_p$ by $\Phi_i(\chi \psi^u\varphi^{nv}) = \overline {k_i} \overline {u} + \overline {v}$. Then the linear independence of $( \overline {k_1}, \overline 1)$ and $( \overline {k_2}, \overline 1)$ implies that $\Phi=(\Phi_1,\Phi_2): \chi \langle\psi,\varphi^n\rangle \to \mathbb F_p^2$ is bijective. We have $\Phi(E_1\cap E_2\cap E_3) \subset \{ s_{1,1},\ldots,s_{1,l} \} \times \{ s_{2,1}, \ldots, s_{2,l} \}$, and due to $l^2 = \card{E_1 \cap E_2 \cap E_3} \le \card{\{ s_{1,1},\ldots,s_{1,l} \} \times \{ s_{2,1}, \ldots, s_{2,l} \} } = l^2$, equality holds. In particular, $\Phi_1(E_1\cap E_2\cap E_3)=\{ s_{1,1},\ldots,s_{1,l} \}$ and $\Phi_2(E_1\cap E_2\cap E_3)=\{ s_{2,1},\ldots,s_{2,l} \}$. Because $( \overline {k_1}, \overline 1)$, $( \overline {k_2}, \overline 1)$ and $( \overline {k_3}, \overline 1)$ are pairwise $\mathbb F_p$-linearly independent, there exist $x,y \in \mathbb F_p^\times$ such that $( \overline {k_3}, \overline 1) = x ( \overline {k_1}, \overline 1) + y ( \overline {k_2}, \overline 1)$. Hence $\Phi_3 = x \Phi_1 + y \Phi_2$. Now $\card{x \Phi_1(E_1\cap E_2\cap E_3)} = \card{y \Phi_2(E_1\cap E_2\cap E_3)} = l$. 
Also, since $x,\,y\neq 0$, we have (similar to $\Phi$) that $(x\Phi_1,y\Phi_2): \chi \langle\psi,\varphi^n\rangle \to \mathbb F_p^2$ is a bijective map. Thus, in view of $\card{x \Phi_1(E_1\cap E_2\cap E_3)} = \card{y \Phi_2(E_1\cap E_2\cap E_3)} = l$, $|E_1\cap E_2\cap E_3|=l^2$ and the pigeonhole principle, we see that $$\Phi_3(E_1\cap E_2\cap E_3)=x\Phi_1(E_1\cap E_2\cap E_3)+y\Phi_2(E_1\cap E_2\cap E_3),$$ whence, from the Cauchy-Davenport Theorem (Lemma \ref{2.1}), it follows that $\card{\Phi_3(E_1\cap E_2\cap E_3) } \ge \min{ \{ 2l-1, p \} } > l$, a contradiction, since $\Phi_3(E_1\cap E_2\cap E_3) \subset \{ s_{3,1},\ldots,s_{3,l} \}$. \end{proof} \bigskip \begin{proof}[\textbf{\emph{Proof of Theorem \ref{thm:main}.2}}] Consider $m=p \in \mathbb{P} _{\geq 5}$ and $n \ge 2$. Let $k_1,\ldots,k_4 \in [0,p-1]$ be pairwise distinct and set $g_i = k_i e_1 + e_2 \in G$ for $i \in [1,4]$. Furthermore, set $m_1 = (n-2)p+(p-1)$, $m_2=m_3=p-1$ and $m_4=2$. We consider the sequence \[ S = \prod_{i=1}^4 g_i^{m_i} \in \mathcal{F}(G) \] and, for any choice of $\chi_{i,j} \in \widehat G$ for $i \in [1,4]$ and $j \in [1,m_i]$, show that \[ \bigcup_{i=1}^4 \bigcup_{j=1}^{m_i} \chi_{i,j} \langle g_i \rangle^\perp \subsetneq \widehat G. \] Then, by Lemma \ref{lemma:cover-equiv} and the definition of $ \mathsf{d} (G,K)$, \[ \mathsf{d} (G,K) \geq \length{S} = p + pn -1 > p + pn - 2 = \mathsf{d} ^*(G). \] Let $\chi_{i,j} \in \widehat G$ for $i \in [1,4]$ and $j \in [1,m_i]$ be arbitrary. Assume, to the contrary, $\bigcup_{i=1}^4 \bigcup_{j=1}^{m_i} \chi_{i,j} \langle g_i \rangle^\perp = \widehat G$. For $i \in [1,4]$ and $j,j' \in [1,m_i]$ distinct, we can without restriction assume $\chi_{i,j} \langle g_i \rangle^\perp \ne \chi_{i,j'} \langle g_i \rangle^\perp$. For any permutation $\sigma \in \mathfrak{S}_n$ (which will be fixed later), \[ \widehat G = \biguplus_{\nu=1}^n \varphi^{\sigma(\nu)} \langle\psi,\varphi^n\rangle. 
\] For given $i \in [1,4]$ and $j \in [1,m_i]$, we have by Lemma \ref{lemma:orthogonal} that $\chi_{i,j} \langle g_i \rangle^\perp \subset \varphi^{\sigma(\nu)} \langle\psi,\varphi^n\rangle$ for a uniquely determined $\nu \in [1,n]$. For $i \in [1,4]$ and $\nu \in [1,n]$, we can therefore define \[ B_i^{(\nu)} = \big\{ \chi_{i,j} \mid j \in [1,m_i] \text{ with } \chi_{i,j} \langle g_i \rangle^\perp \subset \varphi^{\sigma(\nu)} \langle\psi,\varphi^n\rangle \big\}. \] We also define $n^{(\nu)} = \max{ \{ \card{B_i^{(\nu)}} \mid i \in [1,4] \} }$ as well as $l^{(\nu)} = \sum_{i=1}^4 \card{ B_i^{(\nu)} }$, for $\nu \in [1,n]$. Let $\nu \in [1,n]$. By assumption, \[ \varphi^{\sigma(\nu)} \langle\psi,\varphi^n\rangle = \bigcup_{i=1}^4 \bigcup_{\chi \in B_i^{(\nu)}} \chi \langle g_i \rangle^\perp. \] Thus, since $\card{ \langle\psi,\varphi^n\rangle } = p^2$ and $\card{\langle g_i \rangle^\perp}=p$ for all $i \in [1,4]$, we have $l^{(\nu)} \ge p$. On the other hand, $n^{(\nu)} \le p$ because otherwise there would exist $i \in [1,4]$ and $j, j' \in [1,m_i]$ distinct such that $\chi_{i,j} \langle g_i \rangle^\perp \cap \chi_{i,j'} \langle g_i \rangle^\perp \neq \emptyset$, but this would already imply $\chi_{i,j} \langle g_i \rangle^\perp = \chi_{i,j'} \langle g_i \rangle^\perp$, contrary to assumption. Fix $\sigma \in \mathfrak{S}_n$ so that there is a $k \in \mathbb{N} _0$ such that $n^{(1)},\ldots,n^{(k)} < p$ and $n^{(k+1)}=\ldots=n^{(n)}=p$. Since $m_i<p$ for $i\geq 2$, we see (for $\nu \in [1,n]$) that $n^{(\nu)} = p$ is only possible if $\card{B_1^{(\nu)}}=p$. Due to $m_1 = (n-2)p + (p-1)$, this is possible for at most $n-2$ different $\nu \in [1,n]$. Thus $k \ge 2$. 
We can also estimate $\lrcard{ \bigcup_{i=1}^4 \bigcup_{\chi \in B_i^{(\nu)}} \chi \langle g_i \rangle^\perp }$ in a different way: Assume for the purpose of showing \eqref{teetime} (the other cases are argued identically) that $n^{(\nu)} = \card{B_1^{(\nu)}} \ge \card{B_2^{(\nu)}} \ge \card{B_3^{(\nu)}} \ge \card{B_4^{(\nu)}}$. Each of the characters $\chi \in B_1^{(\nu)}$ contributes $\chi \langle g_1 \rangle^\perp$, and therefore exactly $p$ characters, to the union. Each of the characters $\chi \in B_2^{(\nu)}$ contributes at most $p - \card{B_1^{(\nu)}}$ characters, since $\card{\chi_1 \langle g_1 \rangle^\perp \cap \chi \langle g_2 \rangle^\perp} = 1$ for all $\chi_1 \in B_1^{(\nu)}$. Similarly, each of the characters $\chi \in B_3^{(\nu)}$ contributes at most $p - \max\{ \card{B_1^{(\nu)}} , \card{B_2^{(\nu)}} \} = p - \card{B_1^{(\nu)}}$ characters, since $\card{\chi_1 \langle g_1 \rangle^\perp \cap \chi \langle g_3 \rangle^\perp} = 1$ for all $\chi_1 \in B_1^{(\nu)}$ and $\card{\chi_2 \langle g_2 \rangle^\perp \cap \chi \langle g_3 \rangle^\perp} = 1$ for all $\chi_2 \in B_2^{(\nu)}$. Continuing this thought for $B_4^{(\nu)}$, we obtain \begin{eqnarray} p^2 =\nonumber \lrcard{\bigcup_{i=1}^4 \bigcup_{\chi \in B_i^{(\nu)}} \chi \langle g_i \rangle^\perp} & \leq & p \card{B_1^{(\nu)}} + (p - \card{B_1^{(\nu)}})(\sum_{i=2}^4 \card{B_i^{(\nu)}}) \\ & = &p n^{(\nu)} + (p - n^{(\nu)})(l^{(\nu)} - n^{(\nu)})\nonumber. \end{eqnarray} Therefore \begin{equation}\label{teetime} (n^{(\nu)} - (l^{(\nu)} - p))(n^{(\nu)} - p) = p n^{(\nu)} + (p - n^{(\nu)})(l^{(\nu)} - n^{(\nu)}) - p^2 \ge 0. \end{equation} Thus either $n^{(\nu)} \ge p$ (and therefore already $n^{(\nu)}=p$) or $n^{(\nu)} \le l^{(\nu)} - p$. For $\nu \in [1,k]$, we obtain $n^{(\nu)} \le l^{(\nu)} - p$. Due to $\card{B_4^{(\nu)}} \le m_4=2$, we also have $l^{(\nu)} = \sum_{i=1}^4 \card{B_i^{(\nu)}} \le 3n^{(\nu)} + 2$. 
Then \[ 3 l^{(\nu)} \geq 3n^{(\nu)} + 3p = 3 n^{(\nu)} + 2 + 3p - 2 \geq l^{(\nu)} + 3p - 2, \] and hence $l^{(\nu)} \geq \frac{3}{2} p - 1$ for all $\nu \in [1,k]$. Because of $\sum_{\nu=1}^n l^{(\nu)} = \length{S} = pn + (p-1)$ and $l^{(\nu)} \geq n^{(\nu)} = p$ for all $\nu \in [k+1,n]$, we have $l^{(1)} + \ldots + l^{(k)} \le pk + (p-1)$. For the remainder of the argument, we consider $\nu\in [1,k]$. Then, by the above, $\sum_{i=1, i\neq\nu}^{k} l^{(i)} \geq (k-1) (\frac{3}{2} p - 1)$, and hence \[ (k-1)\left(\frac{3}{2} p - 1\right) + l^{(\nu)} \leq pk + (p-1), \] which implies \[ \begin{split} l^{(\nu)} & \leq pk + (p-1) - (k-1)\left(\frac{3}{2}p - 1\right) = pk + p - 1 - \frac{3}{2} kp + k + \frac{3}{2} p - 1\\ &= \frac{3}{2} p + (p - 2) + k - \frac{1}{2} pk = \frac{3}{2} p + (p - 2) - \frac{k}{2}(p - 2). \end{split} \] Hence, since $k \ge 2$, it follows that $l^{(\nu)} \le \lfloor \frac{3}{2}p \rfloor$. Together with $l^{(\nu)} \ge \lceil \frac{3}{2}p - 1 \rceil$, this implies $l^{(\nu)} = \frac{3}{2}p - \frac{1}{2}$. Since $\card{B_4^{(1)}} + \ldots + \card{B_4^{(k)}} \leq m_4 = 2$ and $k \ge 2$, there exists a $\nu \in [1,k]$ with $\card{B_4^{(\nu)}} \le 1$. Then \[ \card{B_1^{(\nu)}},\ldots,\card{B_3^{(\nu)}} \leq n^{(\nu)} \leq l^{(\nu)} - p = \frac{1}{2} (p-1), \] $\card{B_4^{(\nu)}} \leq 1$ and $\sum_{i=1}^4 \card{B_i^{(\nu)}} = l^{(\nu)} = \frac{3}{2}(p-1)+1$. Therefore we must have $$\card{B_1^{(\nu)}}=\card{B_2^{(\nu)}}=\card{B_3^{(\nu)}} = n^{(\nu)}=\frac{1}{2}(p-1)$$ and $\card{B_4^{(\nu)}} = 1$. With the help of Lemma \ref{lemma:l-triple}, we show that this leads to a contradiction. Consider $T=g_1^{\frac{1}{2}(p-1)} g_2^{\frac{1}{2}(p-1)} g_3^{\frac{1}{2}(p-1)} \in \mathcal{F}(G)$. Then, by Lemma \ref{lemma:l-triple} (with $l=\frac{1}{2}(p-1)$ and $\chi=\varphi^{\sigma(\nu)}$), \[ \lrcard{ \bigcup_{i=1}^3 \bigcup_{\chi' \in B_i^{(\nu)}} \chi' \langle g_i \rangle^\perp } < \frac{1}{2}(p-1)(2p+1).
\] Thus, with $B_4^{(\nu)} = \{ \tau \}$, \[ \begin{split} p^2 &= \lrcard{ \Big( \bigcup_{i=1}^3 \bigcup_{\chi' \in B_i^{(\nu)}} \chi' \langle g_i \rangle^\perp \Big) \cup \tau \langle g_4 \rangle^\perp } \le \lrcard{ \bigcup_{i=1}^3 \bigcup_{\chi' \in B_i^{(\nu)}} \chi' \langle g_i \rangle^\perp } + (p - n^{(\nu)}) \\ &< \frac{1}{2}(p-1)(2p + 1) + \frac{1}{2}(p+1) = p^2, \end{split} \] a contradiction. \end{proof} \section{Acknowledgements} I am indebted to Alfred Geroldinger for his constant feedback and help during the creation of this paper. I would also like to thank David Grynkiewicz and G{\"u}nter Lettl for their comments on preliminary versions of this paper. In particular, G. Lettl suggested the use of the Cauchy-Davenport Theorem in Lemma \ref{lemma:l-triple}, which significantly shortened the proof compared to an earlier version.
\section*{Introduction}\label{sec:introduction} A fundamental open question in the classification of Riemann surfaces, known as the \textit{type problem}, is: {\centerline{does a Riemann surface $S$ support a Green's function?}} \noindent Behind this question is the classical problem of uniformization (see \emph{e.g.} \cite{Hilbert}*{Problem 22}, \cite{Abi}*{p. 578}): given a Riemann surface $S$, find all domains $\widetilde{S}$ of the Riemann sphere $\widehat{\mathbb{C}}$ and holomorphic functions $f: \widetilde{S}\to S$ such that, at each point $p\in S$, the map $f$ is a local uniformizing variable at $p$. The answer to this celebrated problem is given by the Poincar\'e-Koebe uniformization theorem (see \emph{e.g.} \cite{Abi}*{p. 588}). In the context of simply connected Riemann surfaces, the Riemann sphere $\hat{\mathbb{C}}$ and the Poincar\'e disc $\Delta$ admit Green's functions, while the complex plane $\mathbb{C}$ does not. In general, Riemann surfaces are classified according to the following definition: \begin{definition}[\cite{Makoto}*{Section 2} \cite{Bear1}*{p.164}]\label{typeclass} We call a Riemann surface $S$ \textbf{elliptic} if and only if $S$ is compact (equivalently, closed). An open Riemann surface $S$ is said to be of \textbf{parabolic type}, or simply \textbf{parabolic}, if $S$ does not carry a negative non-constant subharmonic function. Every open Riemann surface which is not parabolic is called \textbf{hyperbolic}. \end{definition} Recall that every compact Riemann surface admits a Green's function \cite{FarKra}*{p. 19}. To elaborate on this definition (see \cite{MR0264064}, \cite{MR3186310}, \cite{MR0414898}), we recall that the Dirichlet problem on the surface $S$ deals with the construction of a harmonic function $u$ on a region $\Omega$ which coincides with a given function $f$ on $\partial \Omega$.
For this purpose, we are required to construct a continuous function $u$ on $\bar{\Omega}=\Omega \cup \partial \Omega$ which coincides with $f$ on $\partial \Omega$ and is harmonic on $\Omega$. By the maximum principle, if such a function exists, then it is uniquely determined. This uniqueness implies that $S$ does not admit a negative nonconstant subharmonic function. Equivalently, such a function exists if and only if there is no Green's function on $S$ (see \cite{AhlSar}*{p. 204}). The type problem thus becomes equivalent to answering the question {\centerline{which open Riemann surfaces are of parabolic type?}} \noindent This problem has captured the attention of the mathematical community for almost a hundred years. The first partial answers to this question can be found in the theorems of Section 6 of \cite{AhlSar} (see p. 204). Since then, numerous characterizations of parabolic surfaces, each equivalent to the type problem, have been produced from potential theory, function theory, dynamics and the geometry of surfaces, among other areas. The following is a short list of known characterizations of Riemann surfaces of parabolic type: if the Riemann surface $S$ is the quotient of the Poincar\'e unit disk $\Delta$ by a Fuchsian group $\Gamma$, then $S$ is parabolic if and only if \begin{itemize} \item[$\bullet$] The series $\sum\limits_{\gamma\in \Gamma}\left(\frac{1-\Vert \gamma(\textbf{0})\Vert}{1+\Vert \gamma(\textbf{0})\Vert}\right)$ diverges \cite{Nicho}*{Theorem 5.2.1}; \item[$\bullet$] The geodesic flow on the unit tangent bundle of $S$ is ergodic \cite{Nicho}*{Theorem 8.3.4}; \item[$\bullet$] Mostow rigidity holds for $\Gamma$ \cite{Agard}, \cite{AstZin1990}, \cite{Tukia}; \item[$\bullet$] The group $\Gamma$ has Bowen's property \cite{AstZin1990}, \cite{Bishop}.
\end{itemize} \medskip \noindent {\bf Parabolicity of zero-twist tight flute surfaces.} Recall that a \emph{flute surface} is the unique infinite-type surface, up to homeomorphism, of genus zero and with space of ends homeomorphic to the ordinal $\omega + 1$ (\S \ref{Subsec:TopologicalSurfaces}). The class formed by all Riemann flute surfaces is one of the simplest families in which one can pose the parabolicity problem. However, even in this class the problem of parabolicity remains wide open (see \textit{e.g.} \cite{MR}, \cite{BasHakSa}). A more manageable family within the class of Riemann flute surfaces is that of the so-called \emph{tight flute surfaces}. Such surfaces are hyperbolic surfaces obtained by starting with a geodesic pair of pants $P_0$ with two punctures and one boundary geodesic and consecutively gluing geodesic pairs of pants $P_n$ ($n\geq 1$) with one cusp and two boundary geodesics in an infinite chain (\S \ref{definition:tight_flute_surface}). A tight flute surface is determined by its \emph{Fenchel-Nielsen parameters} (\S \ref{Subsec:FenchelNielsen}) $(\{l_n,t_n\})$, where $l_n$ and $t_n$ are the length and twist parameter of the boundary closed geodesic $\alpha_n$ of the surface obtained after gluing $n$ pairs of pants. This surface is denoted by $S=S(\{l_{n},t_n\})_{n\in \mathbb{N}}$. If $t_n=0$ for all $n\in \mathbb{N}$, then we say that $S=S(\{l_{n},0\})_{n\in\mathbb{N}}$ is a \emph{zero-twist flute surface}. In this article, we present a new way of describing zero-twist flute surfaces; this description differs from the Fenchel-Nielsen parameters.
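As a concrete illustration of the Fenchel-Nielsen description above (the specific values here are ours and serve only as an example), the constant data $l_n=1$ and $t_n=0$ for every $n\in \mathbb{N}$ determine the zero-twist flute surface \[ S=S(\{1,0\})_{n\in\mathbb{N}}, \] obtained by gluing, without twisting, an infinite chain of geodesic pairs of pants all of whose boundary geodesics $\alpha_n$ have length one.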
Our precise contribution is the following: \begin{theorem}\label{Teo:Parametrization-ZTFS} Given a zero-twist flute surface $S=S(\{l_{n},0\})_{n\in\mathbb{N}_{0}}$, there exist a unique sequence ${\mathbf{x}}=(x_n)_{n\in \mathbb{N}_{0}}$ of positive real numbers and a Fuchsian group $\Gamma_{\mathbf{x}}$ (completely determined by $\mathbf{x}$) generated by the M\"obius transformations % \begin{equation}\label{eq:maps_g} g_n(z):=\dfrac{\left(1+\dfrac{2s_{n-1}}{s_n-s_{n-1}}\right)z-2s_{n-1}\left(1+\dfrac{s_{n-1}}{s_n-s_{n-1}}\right)}{-\dfrac{2}{s_n-s_{n-1}}z+\left(1+\dfrac{2s_{n-1}}{s_n-s_{n-1}}\right)}, \quad \end{equation} with $n\in\mathbb{N}_{0}$ and $s_{n}=\sum\limits_{i=0}^{n}x_{i}$, such that the convex core of the surface $\mathbb{H}^{2}/\Gamma_{\mathbf{x}}$ is isometric to $S$. Moreover, $\Gamma_{\mathbf{x}}$ is of the first kind if and only if the series $\sum x_n$ diverges. In this case, $\mathbb{H}^{2}/\Gamma_{\mathbf{x}}$ coincides, up to isometry, with $S$ and is complete. \end{theorem} The construction above is inspired by Basmajian's work \cite{Bas93}. There, he gave a realization theorem showing that $(\{l_n\},\{t_n\})$ are the Fenchel-Nielsen coordinates of a tight flute surface if and only if $L_n> \ln 2$ for all $n$ odd, and $L_n< \ln 2$ for all $n$ even, where $L_n:=\sum_{i=1}^n(-1)^{i+1}l_i$, see \cite{Bas93}*{Theorem 2}. Observe that these conditions do not depend on the twist parameters. In contrast, our construction of a zero-twist tight flute surface in terms of the sequence $\mathbf{x}=(x_n)_{n\in \mathbb{N}_0} \in \mathbb{R}_{+}^{\mathbb{N}_0}$ does not require any additional condition on the sequence. Additionally, we can recover the Fenchel-Nielsen parameters of $S=S(\{l_n,0\})$ using \begin{equation}\label{Eq:LengthParameterZTFS} l_n=\ln\left(\frac{s_{n-1}+s_{n}+2\sqrt{s_{n-1}s_{n}}}{s_{n-1}+s_{n}-2\sqrt{s_{n-1}s_{n}}} \right).
\end{equation} Let $\mathbb{R}_{+}^{\mathbb{N}_0}$ be the space of all sequences of positive real numbers indexed by the set $\mathbb{N}_{0}$, endowed with the compact-open topology, and denote by $\mathcal{F}$ the set of all zero-twist flute surfaces, that is, $\mathcal{F}:=\{S=S(\{l_{n},0\})_{n\in\mathbb{N}_{0}}\}$. By Theorem \ref{Teo:Parametrization-ZTFS}, there is a bijective map $\sigma: \mathbb{R}_{+}^{\mathbb{N}_0} \rightarrow \mathcal{F}$ which associates to each sequence in $\mathbb{R}_{+}^{\mathbb{N}_0}$ a unique zero-twist flute surface. We denote by $S_{\mathbf{x}}$ the image of $\mathbf{x}\in \mathbb{R}_{+}^{\mathbb{N}_0}$ under $\sigma$.\\ Recently, A. Basmajian, H. Hakobyan and D. {\v{S}}ari{\'c} \cite{BasHakSa}*{Theorems 1.1 and 1.2} gave sufficient conditions on the Fenchel-Nielsen parameters of a surface to determine parabolicity. In the particular case of zero-twist flute surfaces, they were able to characterize parabolicity, see \cite{BasHakSa}*{Theorem 1.5}. Using this characterization we obtain the following result. \begin{corollary}\label{Cor:ParabolicZTFS} A zero-twist flute surface $S_{\mathbf{x}}$ is of parabolic type if and only if $\sum x_n$ diverges. \end{corollary} Theorem \ref{Teo:Parametrization-ZTFS} gives an explicit set of generators for $\Gamma_{\mathbf{x}}$. If $\mathbf{x} \in \{1,2\}^{\mathbb{N}_0}$, then $\Gamma_{\mathbf{x}}$ is a subgroup of ${\rm PSL}(2,\mathbb{Z})$: in this case every $s_n$ is an integer and every difference $s_n-s_{n-1}=x_n$ divides $2$, so the coefficients in \eqref{eq:maps_g} are integers, and a direct computation shows that the corresponding matrix has determinant one. This proves the following: \begin{corollary} The set of all zero-twist tight flute surfaces uniformized by subgroups of ${\rm PSL}(2,\mathbb{Z})$ contains a homeomorphic copy of the Cantor set. \end{corollary} \medskip \noindent {\bf An uncountable family of hyperbolic Loch Ness Monsters}. Recall that the Loch Ness Monster is the only surface (up to homeomorphism) which has infinite genus and only one end (\S \ref{Subsec:TopologicalSurfaces}).
In \cite{ALCA} the authors introduced a Fuchsian group uniformizing the Loch Ness Monster. We generalize their construction and find an uncountable family of hyperbolic surfaces such that each one of these surfaces is homeomorphic to the Loch Ness Monster. Denote by $\mathcal{N}$ the set of all sequences $\mathbf{y}:=(y_n)_{n\in \mathbb{Z}}$ of elements $y_n=(a_{n},b_{n},c_{n},d_{n},e_{n}) \in \mathbb{R}^5$ satisfying \begin{equation}\label{Eq:Condition} a_{n}<b_{n}<c_{n}<d_{n}<e_{n} \mbox{ and } e_{n}\leq a_{n+1}. \end{equation} Let $\mathbf{y}:=(y_n)_{n\in \mathbb{Z}} \in \mathcal{N}$. For each $n\in \mathbb{Z}$, let $f_{n}$ and $g_{n}$ be the M\"obius transformations mapping $\sigma_{n}$ onto $\tilde{\sigma}_{n}$ and $\rho_{n}$ onto $\tilde{\rho}_{n}$, respectively, where $\sigma_{n}$, $\rho_{n}$, $\tilde{\sigma}_{n}$ and $\tilde{\rho}_{n}$ are the half-circles in the hyperbolic plane $\mathbb{H}^2$ depicted in Figure \ref{Fig:half_circleIntro}. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-3.3,-0.5) rectangle (5.3,3.5); \draw [blue, line width=1pt] (-1,0) arc(0:180:1); \draw [red, line width=1pt] (0,0) arc(0:180:0.5); \draw [blue, line width=1pt] (3,0) arc(0:180:1.5); \draw [red, line width=1pt] (5,0) arc(0:180:1); \draw[dashed, color=red!60, thick, <-] (3.6,1) arc (0:180:2); \draw[dashed, color=blue!60, thick, <-] (0.6,1.3) arc (0:180:1.3); \draw [dashed, line width=1pt, black!30](-3,0) -- (-3,3); \draw [dashed, line width=1pt, black!30](5,0) -- (5,3); \node at (-3,-0.3) {\tiny{$a_{n}$}}; \node at (-1,-0.3) {\tiny{$b_{n}$}}; \node at (0,-0.3) {\tiny{$c_{n}$}}; \node at (3,-0.3) {\tiny{$d_{n}$}}; \node at (5,-0.3) {\tiny{$e_{n}$}}; \node at (-2.2,1.2) {\small{{\color{blue}$\sigma_{n}$}}}; \node at (-0.5,0.7) {\small{{\color{red}$\rho_{n}$}}}; \node at (1.5,1.8) {\small{{\color{blue}$\tilde{\sigma}_{n}$}}}; \node at (4,1.3) {\small{{\color{red}$\tilde{\rho}_{n}$}}}; \node at
(-0.7,2.9) {\small{$f_{n}$}}; \node at (1.5,3.2) {\small{$g_{n}$}}; \draw [->, >=latex, black!30](-3.3,0) -- (5.3,0); \end{scope} \end{tikzpicture} \caption{\emph{Half-circles $\sigma_{n}$, $\rho_{n}$, $\tilde{\sigma}_{n}$ and $\tilde{\rho}_{n}$.}} \label{Fig:half_circleIntro} \end{center} \end{figure} Define \begin{equation} G_\mathbf{y}:= \langle f_n,g_n:\, n\in \mathbb{Z} \rangle \leq \mathrm{Isom}^+(\mathbb{H}^2). \end{equation} The result we establish is the following: \begin{theorem}\label{t:LNM} For each $\mathbf{y}\in \mathcal{N}$, $G_{\mathbf{y}}$ is a Fuchsian group such that $S_\mathbf{y}:=\mathbb{H}^{2}/G_{\mathbf{y}}$ is topologically equivalent to the Loch Ness Monster. Moreover, if $G_\mathbf{y}$ is of first kind, then $e_n=a_{n+1}$ for all $n\in \mathbb{Z}$, $\lim\limits_{n\to \infty} e_n=\infty$ and $\lim\limits_{n\to -\infty} a_n=-\infty$. \end{theorem} \noindent {\bf Organization of the paper}. In Section \ref{sec:preliminaries} we compile some classic results and concepts of the theory of surfaces necessary for the development of this work. Section \ref{sec:proof-ZTFS} is dedicated to the proof of Theorem \ref{Teo:Parametrization-ZTFS} and Corollary \ref{Cor:ParabolicZTFS}. In Section \ref{sec:proof-LNM} we give the proof of Theorem \ref{t:LNM}. \section{Preliminaries} \label{sec:preliminaries} \subsection{Topological surfaces}\label{Subsec:TopologicalSurfaces} \noindent A \emph{topological surface} $S$ is a topological space, connected, Hausdorff, second countable and locally homeomorphic to $\mathbb{R}^2$. In this text we are only concerned with \emph{orientable} surfaces. Topological orientable surfaces are classified, up to homeomorphisms, by their genus $g(S)\in \mathbb{N}_{0}\cup\{\infty\}$, and a pair of nested topological spaces ${\rm Ends}_{\infty}(S)\subseteq {\rm Ends}(S)$ homeomorphic to a pair of nested closed subsets of the Cantor space. 
The spaces ${\rm Ends}(S)$ and ${\rm Ends}_{\infty}(S)$ are called the \emph{end space} and the \emph{non-planar end space} of $S$ (the latter consists of the ends accumulated by genus), respectively. Moreover, any pair of nested closed subsets of the Cantor set can be realized as the space of ends of a connected, orientable topological surface. For more details, we refer the reader to \cite{Ian}. \begin{theorem}[Classification of topological surfaces, \cite{Ker}*{\S 7}, \cite{Ian}*{Theorem 1}]\label{Thm:ClassificationOfSurfaces} Two orientable surfaces $S_1$ and $S_2$ having the same genus are topologically equivalent if and only if there exists a homeomorphism $f: {\rm Ends}(S_1)\to {\rm Ends}(S_2)$ such that $f( {\rm Ends}_{\infty}(S_1))= {\rm Ends}_{\infty}(S_2)$. \end{theorem} A topological surface is of \emph{infinite type} if its fundamental group is infinitely generated. We now recall the definitions of the two infinite-type surfaces discussed in this text; they are among the simplest infinite-type surfaces. We call the isolated elements of the end space \emph{punctures} of the surface. \begin{definition}[\cite{Bas93}*{p. 423}] The \textbf{flute surface} is the unique topological surface, up to homeomorphism, of genus zero and with end space homeomorphic to the ordinal number $\omega +1$, see Figure \ref{Fig:topo_tight_flute}. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-0.5,-0.4) rectangle (11,3.4); \draw [line width=1pt] (0.5,0) -- (9.8,0); \draw [line width=1pt] (9.5,1) -- (9.8,1.01); \draw [dashed, line width=1pt] (0.5,0.3) ellipse (1mm and 3mm); % \draw [line width=1pt] (0.5,0.6) to[out=0,in=-90] (1.5,2.5); % \draw [dashed, line width=1pt] (2,2.5) ellipse (5mm and 2mm); \node at (2,3.1) {$\vdots$}; \draw [line width=1pt] (2.5,2.5) to[out=-90,in=180] (3.5,1); \draw [line width=1pt] (3.5,1) to[out=0,in=-90] (4.5,2.5); \draw [dashed, line width=1pt] (5,2.5) ellipse (5mm and 2mm); \node at (5,3.1) {$\vdots$}; \draw [line width=1pt] (5.5,2.5) to[out=-90,in=180] (6.5,1); \draw [line width=1pt] (6.5,1) to[out=0,in=-90] (7.5,2.5); % \draw [dashed, line width=1pt] (8,2.5) ellipse (5mm and 2mm); \node at (8,3.1) {$\vdots$}; \draw [line width=1pt] (8.5,2.5) to[out=-90,in=180] (9.5,1); \node at (10.3,0.5) {$\ldots$}; \draw [dashed, line width=0.6pt] (3.5,0.5) ellipse (2mm and 5mm); \draw [line width=1pt] (3.5,1) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=0.6pt] (6.5,0.5) ellipse (2mm and 5mm); \draw [line width=1pt] (6.5,1) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=0.6pt] (9.5,0.5) ellipse (2mm and 5mm); \draw [line width=1pt] (9.5,1) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \end{scope} \end{tikzpicture} \end{center} \caption{\emph{A flute surface.}} \label{Fig:topo_tight_flute} \end{figure} \end{definition} \begin{definition}[\cite{Val}*{Definition 2}] The \textbf{Loch Ness Monster} is the unique infinite-type surface, up to homeomorphism, of infinite genus with exactly one end, see Figure \ref{Fig:LNM}. From the historical point of view as shown in \cite{AyC}, this nomenclature is due to A. Phillips and D. Sullivan \cite{PSul}. \begin{figure}[h!] 
\centering \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.45] \clip (-5.8,-5) rectangle (12,3.9); \draw[line width=1pt] (-3,0) arc (45:315:1.5 and 1); \draw [line width=1pt] (-4.8,-0.55) arc [ start angle=180, end angle=360, x radius=6mm, y radius =3mm ] ; \draw [line width=1pt] (-3.82,-0.8) arc [ start angle=-20, end angle=200, x radius=4mm, y radius =2mm ] ; \draw [dashed, line width=0.8pt] (-3,-1.4) arc [ start angle=-90, end angle=90, x radius=3mm, y radius =7mm ] ; \draw [dashed, line width=0.8pt] (-3,0) arc [ start angle=90, end angle=270, x radius=3mm, y radius =7mm ] ; \draw [line width=1pt] (-3,0) to[out=0,in=-90] (-2,1); \draw [dashed, line width=0.8pt] (-0.6,1) arc [ start angle=0, end angle=180, x radius=7mm, y radius =3mm ] ; \draw [dashed, line width=0.8pt] (-2,1) arc [ start angle=180, end angle=360, x radius=7mm, y radius =3mm ] ; \draw[line width=1pt] (-0.6,1) arc (-45:225:1 and 1.5); \draw [line width=1pt] (-1.15,2.7) arc [ start angle=90, end angle=270, x radius=3mm, y radius =6mm ] ; \draw [line width=1pt] (-1.35,2.5) arc [ start angle=100, end angle=-100, x radius=2mm, y radius =4mm ] ; \draw [line width=1pt] (-0.6,1) to[out=-90,in=180] (0.4,0); \draw [dashed, line width=0.8pt] (0.4,-1.4) arc [ start angle=-90, end angle=90, x radius=3mm, y radius =7mm ] ; \draw [dashed, line width=0.8pt] (0.4,0) arc [ start angle=90, end angle=270, x radius=3mm, y radius =7mm ] ; \draw [line width=1pt] (0.4,-1.4) to[out=0,in=90] (1.4,-2.4); \draw [dashed, line width=0.8pt] (2.8,-2.4) arc [ start angle=0, end angle=180, x radius=7mm, y radius =3mm ] ; \draw [dashed, line width=0.8pt] (1.4,-2.4) arc [ start angle=180, end angle=360, x radius=7mm, y radius =3mm ] ; \draw [line width=1pt] (2.8,-2.4) to[out=90,in=180] (3.8,-1.4); \draw [dashed, line width=0.8pt] (3.8,0) arc [ start angle=90, end angle=270, x radius=3mm, y radius =7mm ] ; \draw [dashed, line width=0.8pt] (3.8,0) arc [ start angle=90, end angle=-90, x radius=3mm, y 
radius =7mm ] ; \draw [line width=1pt] (2.8,-2.4) arc (45:-225:1 and 1.5); \draw [line width=1pt] (2.2,-3) arc [ start angle=90, end angle=270, x radius=3mm, y radius =6mm ] ; \draw [line width=1pt] (2,-3.2) arc [ start angle=100, end angle=-100, x radius=2mm, y radius =4mm ] ; \draw [line width=1pt](-3,-1.4) -- (0.4,-1.4); \draw [line width=1pt](0.4,0) -- (3.8,0); \node at (11.5,-0.7) {$\ldots$}; \draw [line width=1pt] (3.8,0) to[out=0,in=-90] (4.8,1); \draw[line width=1pt] (6.2,1) arc (-45:225:1 and 1.5); \draw [line width=1pt] (5.6,2.7) arc [ start angle=90, end angle=270, x radius=3mm, y radius =6mm ] ; \draw [line width=1pt] (5.4,2.5) arc [ start angle=100, end angle=-100, x radius=2mm, y radius =4mm ] ; \draw [dashed, line width=0.8pt] (6.2,1) arc [ start angle=0, end angle=180, x radius=7mm, y radius =3mm ] ; \draw [dashed, line width=0.8pt] (6.2,1) arc [ start angle=0, end angle=-180, x radius=7mm, y radius =3mm ] ; \draw [line width=1pt] (6.2,1) to[out=-90,in=180] (7.2,0); \draw [line width=1pt](3.8,-1.4) -- (7.2,-1.4); \draw [dashed, line width=0.8pt] (7.2,0) arc [ start angle=90, end angle=270, x radius=3mm, y radius =7mm ] ; \draw [dashed, line width=0.8pt] (7.2,0) arc [ start angle=90, end angle=-90, x radius=3mm, y radius =7mm ] ; \draw [line width=1pt] (7.2,-1.4) to[out=0,in=90] (8.2,-2.4); \draw [line width=1pt] (9.6,-2.4) arc (45:-225:1 and 1.5); \draw [line width=1pt] (9,-3) arc [ start angle=90, end angle=270, x radius=3mm, y radius =6mm ] ; \draw [line width=1pt] (8.8,-3.2) arc [ start angle=100, end angle=-100, x radius=2mm, y radius =4mm ] ; \draw [dashed, line width=0.8pt] (9.6,-2.4) arc [ start angle=0, end angle=180, x radius=7mm, y radius =3mm ] ; \draw [dashed, line width=0.8pt] (9.6,-2.4) arc [ start angle=0, end angle=-180, x radius=7mm, y radius =3mm ] ; \draw [line width=1pt] (9.6,-2.4) to[out=90,in=180] (10.6,-1.4); \draw [line width=1pt](7.2,0) -- (10.6,0); \draw [dashed, line width=0.8pt] (10.6,0) arc [ start angle=90, end 
angle=270, x radius=3mm, y radius =7mm ] ; \draw [dashed, line width=0.8pt] (10.6,0) arc [ start angle=90, end angle=-270, x radius=3mm, y radius =7mm ] ; \end{scope} \end{tikzpicture} \caption{\emph{The Loch Ness monster.}} \label{Fig:LNM} \end{figure} \end{definition} In Section \ref{sec:proof-LNM} we use the following result to prove the first part of Theorem \ref{t:LNM}. \begin{lemma}[\cite{SPE}*{\S 5.1., p. 320}]\label{lemma:spec} A surface $S$ has exactly one end if and only if for every compact subset $K \subset S$ there is a compact subset $K^{'}\subset S$ such that $K\subset K^{'}$ and $S \setminus K^{'}$ is connected. \end{lemma} We say that a simple closed curve on a surface $S$ is \emph{essential} if it does not bound a disk or a punctured disk. Recall that a \emph{topological pair of pants} is a topological surface homeomorphic to a thrice-punctured sphere. \begin{definition}\label{Def:PairPantsDecomposition} We say that a collection of pairwise disjoint essential curves in a surface $S$ is a \textbf{pair of pants decomposition} of $S$ if it decomposes the surface into a disjoint union of pairs of pants. \end{definition} Observe that any topological surface admits a pair of pants decomposition \cite{Alvarez2004}. \subsection{Fenchel-Nielsen parameters}\label{Subsec:FenchelNielsen} We assume that the reader is familiar with the basic tools of hyperbolic geometry, which can be found in \cite{KS}, \cite{Bear1}. In this subsection we recall the definition of the \emph{Fenchel-Nielsen parameters} of a hyperbolic surface. We refer the reader to \cite{FaMa2012}*{Chapter 10} for the definition of the Fenchel-Nielsen parameters in the context of finite-type surfaces. We also recommend consulting \cite{AlLiPaWeSun2011}, where the authors studied the Fenchel-Nielsen parameters for surfaces of infinite type.
A \emph{geodesic pair of pants} is a complete surface $S$ (with respect to its hyperbolic structure) of finite hyperbolic area whose interior is homeomorphic to a pair of pants and at least one of whose boundary components is a closed geodesic, see Figure \ref{Fig:GeodesicPairPants}. A \emph{tight pair of pants} is a geodesic pair of pants that has exactly one puncture, see Figure \ref{Fig:GeodesicPairPants}-b. Similarly, a collection of pairwise disjoint essential geodesic curves in a surface $S$ is a \emph{geometric pair of pants decomposition} of $S$ if it decomposes the surface into geodesic pairs of pants. In \cite{AlLiPaWeSun2011}*{Theorem 4.5} the authors gave sufficient conditions under which a topological pair of pants decomposition of a surface can be straightened to a geometric pair of pants decomposition. \begin{figure}[h!] \centering \begin{tabular}{ccc} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-4.4,-0.9) rectangle (0.4,2.6); \draw [line width=1pt] (-4,0) ellipse (2mm and 5mm); \draw [line width=1pt] (-2,2.3) ellipse (5mm and 2mm); \draw [line width=1pt] (0,0) ellipse (2mm and 5mm); \draw [line width=1pt](-4,-0.5) to[out=0,in=-180] (0,-0.5); \draw [line width=1pt] (-4,0.5) to[out=0,in=-90] (-2.5,2.3); \draw [line width=1pt] (-1.5,2.3) to[out=-90,in=180] (0,0.5); \end{scope} \end{tikzpicture} & \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-0.4,-0.9) rectangle (4.4,2.6); \draw [line width=1pt] (0,0) ellipse (2mm and 5mm); \draw [line width=1pt] (4,0) ellipse (2mm and 5mm); \draw [line width=1pt](0,-0.5) to[out=0,in=-180] (4,-0.5); \draw [line width=1pt] (0,0.5) to[out=0,in=-90] (1.98,2.5); \draw [line width=1pt] (2.02,2.5) to[out=-90,in=180] (4,0.5); \end{scope} \end{tikzpicture} & \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-0.4,-0.9) rectangle (4.4,2.6); \draw [line width=1pt] (0,1) ellipse (2mm and 5mm); \draw [line width=1pt]
(0,1.5) to[out=0,in=-180] (2.5,2.5); \draw [line width=1pt] (0,0.5) to[out=0,in=180] (2.5,-0.5); \draw [line width=1pt] (2.5,2.5) to[out=-160,in=160] (2.5,-0.5); \end{scope} \end{tikzpicture} \\ a) & b) & c) \\ \end{tabular} \caption{\textit{All possible representations of geodesic pairs of pants. In particular, b) represents a tight pair of pants.}} \label{Fig:GeodesicPairPants} \end{figure} \subsubsection{Length and twist parameters} Let $P$ be a geodesic pair of pants and fix a boundary curve $\alpha$ of $P$. We choose a \emph{marked point} $x$ in $\alpha$ in the following way: let $\beta$ be either a boundary component (different from $\alpha$) or a puncture of $P$, and let $\gamma$ be the orthogeodesic between $\alpha$ and $\beta$. Then the marked point $x$ in $\alpha$ is defined as the intersection point of $\alpha$ with $\gamma$. Figure \ref{Fig:Orthogeodesic} shows such a marked point $x$ in $\alpha$, obtained by drawing the orthogeodesic $\gamma$ connecting the boundary component $\alpha$ with $\beta$. The hyperbolic length of the geodesic $\alpha$ is denoted by $l(\alpha)$. \begin{figure}[h!]
\centering \begin{tabular}{cc} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-4.4,-1.1) rectangle (0.4,2.6); \draw [red, line width=1pt] (-4,0) ellipse (2mm and 6mm); \draw [line width=1pt] (-2,2.3) ellipse (6mm and 2mm); \draw [line width=1pt] (0,0) ellipse (2mm and 6mm); \draw [line width=1pt](-4,-0.6) to[out=0,in=-180] (0,-0.6); \draw [blue, line width=1pt] (-4,0.6) to[out=0,in=-90] (-2.6,2.3); \draw [line width=1pt] (-1.4,2.3) to[out=-90,in=180] (0,0.6); \draw [blue, line width=1pt] (-3.5,0.5)--(-3.6,0.66); \draw [blue, line width=1pt] (-3.85,0.43)--(-3.5,0.5); \node at (-4,0.9) {{\small $x$}}; \node at (-4,-0.9) {{\small {\color{red}$\alpha$}}}; \node at (-1,2.3) {{\small $\beta$}}; \node at (-2.6,1) {{\small {\color{blue}$\gamma$}}}; \node at (-4,0.57) {{\small $\bullet$}}; \node at (-2,0.1) {{\small $P$}}; \end{scope} \end{tikzpicture} & \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-0.4,-1.1) rectangle (4.4,2.6); \draw [red, line width=1pt] (0,0) ellipse (2mm and 6mm); \draw [line width=1pt] (4,0) ellipse (2mm and 6mm); \draw [line width=1pt](0,-0.6) to[out=0,in=-180] (4,-0.6); \draw [blue, line width=1pt] (0,0.6) to[out=0,in=-90] (1.98,2.5); \draw [line width=1pt] (2.02,2.5) to[out=-90,in=180] (4,0.6); \draw [blue, line width=1pt] (0.5,0.46)--(0.4,0.66); \draw [blue, line width=1pt] (0.15,0.4)--(0.5,0.46); \node at (0,0.9) {\small{$x$}}; \node at (0,-0.9) {\small{ {\color{red}$\alpha$}}}; \node at (2.5,2.3) {\small{$\beta$}}; \node at (1.8,1) {{\small {\color{blue}$\gamma$}}}; \node at (0,0.57) {{\small $\bullet$}}; \node at (2,0.1) {{\small $P$}}; \end{scope} \end{tikzpicture}\\ \end{tabular} \caption{\emph{The point $x \in \alpha$ is the intersection of $\alpha$ with the orthogeodesic $\gamma$.}} \label{Fig:Orthogeodesic} \end{figure} Let $P^\prime$ be another geodesic pair of pants with a boundary geodesic $\alpha^\prime$ and suppose that 
$l(\alpha)=l(\alpha^\prime)$. Let $\gamma$ and $\gamma'$ be the geodesics in $P$ and $P'$, respectively, that define the marked points $x$ in $\alpha$ and $x'$ in $\alpha'$ as above. We identify $\alpha$ and $\alpha^\prime$, gluing by an isometry $T$, to obtain a complete hyperbolic surface $S$ from the two pairs of pants.
This isometric identification is determined by the pair $\{l(\alpha),t(\alpha)\}$, where the \emph{length parameter} $l(\alpha)$ is the hyperbolic length of $\alpha$, and the \emph{twist parameter} $t(\alpha) \in \left(-\frac{1}{2},\frac{1}{2}\right]$ records the relative position of the marked points $x$ in $\alpha$ and $x'$ in $\alpha'$, as follows: \begin{itemize} \item[$\ast$] If $x=T(x')$, then $t(\alpha)=0$. \item[$\ast$] If $x\neq T(x')$, then $\vert t(\alpha)\vert \leq \frac{1}{2}$ is the hyperbolic length of $\sigma$, the shorter of the two arcs of $\alpha \smallsetminus \{x,T(x')\}$, divided by $l(\alpha)$. If $\vert t(\alpha)\vert = \frac{1}{2}$, then $t(\alpha)=\frac{1}{2}$. If $\vert t(\alpha) \vert < \frac{1}{2}$, then we orient $\alpha$ with the orientation induced from $P$: if $\sigma$ runs from $x$ to $T(x^\prime)$, then $t(\alpha)=\vert t(\alpha) \vert$; otherwise, $t(\alpha)=-\vert t(\alpha) \vert$. See Figure \ref{Fig:TwistParameter}.a). \end{itemize} \begin{figure}[h!]
\centering \begin{tabular}{ccc} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=1] \clip (-2.4,-0.7) rectangle (3.4,3.4); \draw [dashed, line width=1pt] (-2,-0.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (-2,0.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=1pt] (-2,1.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (-2,2.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (-0.5,1) ellipse (2mm and 15mm); \draw [<-<, >=latex, red, line width=1pt] (-0.373,2.15) arc [ start angle=50, end angle=147, x radius=2mm, y radius =15mm ] ; \draw [line width=1pt](-2,-0.5) -- (-0.5,-0.5); \draw [line width=1pt] (-2,0.5) to[out=0,in=0] (-2,1.5); \draw [line width=1pt] (-2,2.5) -- (-0.5,2.5); \draw [blue, line width=1pt] (-2.2,1.84) -- (-0.66,1.84); \draw [blue, line width=0.7pt] (-0.9,1.7) -- (-0.67,1.7); \draw [blue, line width=0.7pt] (-0.9,1.7) -- (-0.9,1.84); \node at (-0.66,1.8) {{\small$\bullet$}}; \node at (-0.9,2.1) {{\small $x$}}; \node at (-0.37,2.15) {{\small $\bullet$}}; \node at (1.4,2.15) {{\small $\bullet$}}; \node at (1.2,2.5) {{\small $x'$}}; \node at (0.2,2.5) {{\small $T(x')$}}; \node at (-0.6,2.8) {{\color{red}$\sigma$}}; \node at (-1.3,1.5) {{\color{blue}$\gamma$}}; \node at (2.3,1.8) {{\color{blue}$\gamma'$}}; \node at (-1,1) {{\small $\alpha$}}; \node at (-1.3,0) {{\small $P$}}; \node at (2,1) {{\small $\alpha'$}}; \node at (2.3,0) {{\small $P'$}}; \draw [dashed, blue, line width=1pt] (1.4,2.15) -- (2.8,2.15); \draw [blue, line width=0.7pt] (1.35,2) -- (1.55,2); \draw [blue, line width=0.7pt] (1.55,2) -- (1.55,2.15); \draw [<-, >=latex, line width=1pt] (-0.2,2.15) -- (1.2,2.15); \draw [line width=1pt] (1.5,1) ellipse (2mm and 15mm); \draw [line width=1pt] (3,-0.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw 
[dashed, line width=1pt] (3,0.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (3,1.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=1pt] (3,2.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt](3,-0.5) -- (1.5,-0.5); \draw [line width=1pt] (3,0.5) to[out=180,in=180] (3,1.5); \draw [line width=1pt] (3,2.5) -- (1.5,2.5); \end{scope} \end{tikzpicture}& & \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=1] \clip (-2.4,-0.7) rectangle (1.4,3.4); \draw [dashed, line width=1pt] (-2,-0.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (-2,0.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=1pt] (-2,1.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (-2,2.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [red, dashed, line width=0.3pt] (-0.5,-0.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =15mm ] ; \draw [red, line width=1pt] (-0.5,2.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =15mm ] ; \draw [line width=1pt](-2,-0.5) -- (-0.5,-0.5); \draw [line width=1pt] (-2,0.5) to[out=0,in=0] (-2,1.5); \draw [line width=1pt] (-2,2.5) -- (-0.5,2.5); \draw [blue, line width=1pt] (-2.2,1.84) -- (-0.66,1.84); \draw [blue, line width=0.7pt] (-0.9,1.7) -- (-0.67,1.7); \draw [blue, line width=0.7pt] (-0.9,1.7) -- (-0.9,1.84); \node at (-0.66,1.8) {{\small$\bullet$}}; \node at (-0.9,2.1) {{\small $x$}}; \node at (-0.4,2.1) {{\small $x'$}}; \node at (-1.3,1.5) {{\color{blue}$\gamma$}}; \node at (0.3,1.5) {{\color{blue}$\gamma'$}}; \node at (-1,1) {{\small {\color{red}$\alpha$}}}; \node at (-1.3,0) {{\small $P$}}; \node at (-0.49,1) {{\small {\color{red}$\alpha'$}}}; \node at (0.3,0) {{\small $P'$}}; \draw [blue, line width=1pt] 
(-0.67,1.84) -- (1.2,1.84); \draw [blue, line width=0.7pt] (-0.67,1.7) -- (-0.4,1.7); \draw [blue, line width=0.7pt] (-0.4,1.7) -- (-0.4,1.84); \draw [line width=1pt] (1,-0.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=1pt] (1,0.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt] (1,1.5) arc [ start angle=-90, end angle=90, x radius=2mm, y radius =5mm ] ; \draw [dashed, line width=1pt] (1,2.5) arc [ start angle=90, end angle=270, x radius=2mm, y radius =5mm ] ; \draw [line width=1pt](-0.5,-0.5) -- (1,-0.5); \draw [line width=1pt] (1,0.5) to[out=180,in=180] (1,1.5); \draw [line width=1pt] (1,2.5) -- (-0.5,2.5); \end{scope} \end{tikzpicture}\\ \emph{a). Twist parameter.}& &\emph{b). If $t(\alpha)=0$},\\ &&\emph{then $\gamma \cup \gamma'$ is orthogonal to $\alpha$.}\\ \end{tabular} \caption{\emph{The twist parameter.}} \label{Fig:TwistParameter} \end{figure} \begin{remark}\label{remark:BiinfiniteGeodesic} If the twist parameter $t(\alpha)=0$, then $\gamma \cup \gamma'$ is a geodesic in $S$ that is orthogonal to the closed geodesic $\alpha$, see Figure \ref{Fig:TwistParameter}.b). \end{remark} \begin{definition} Let $S$ be a surface obtained by gluing countably many geodesic pairs of pants in the way described above. These gluings generate a geometric pair of pants decomposition $\{\alpha_i: i \in \mathbb{N}\}$ of $S$. Let $l_{i}$ and $t_{i}$ be the length and twist parameters of $\alpha_i$, respectively. The collection of pairs \begin{equation} (\{l_{i},t_{i}\})_{i\in\mathbb{N}}, \end{equation} is called the \textbf{Fenchel-Nielsen parameters} of $S$. Since the hyperbolic metric on $S$ is uniquely determined by its Fenchel-Nielsen parameters $(\{l_{i},t_{i}\})_{i\in\mathbb{N}}$, we write \[ S:=S(\{l_{i},t_{i}\})_{i\in\mathbb{N}}. \] \end{definition} The surface $S$ might not be complete in the induced hyperbolic metric. V. \'Alvarez and J. M.
Rodr\'iguez in \cite{Alvarez2004} showed that the boundary of the metric completion of $S$ consists of simple closed geodesics and bi-infinite simple geodesics. Moreover, they proved that by attaching funnels to the closed geodesics and attaching geodesic half-planes to the bi-infinite geodesics of the boundary of the metric completion of $S$, we obtain a surface $\hat{S}$ homeomorphic to $S$ with a geodesically complete hyperbolic metric such that the inclusion $i: S \hookrightarrow \hat{S}$ is an isometric embedding. Conversely, any geodesically complete surface is obtained by attaching funnels and half-planes to the convex core of the surface, see also \cite{BasmajianSaric}. \medskip We are now in a position to define the so-called \emph{tight flute surfaces}. \begin{definition}[\cite{Bas93}*{p. 423}]\label{definition:tight_flute_surface} A \textbf{tight flute surface} $S$ is a surface obtained by starting with a geodesic pair of pants $P_0$ with two punctures and then consecutively gluing tight pairs of pants $P_n$, $n\geq 1$, as shown in Figure \ref{Fig:TightFluteSurface}. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.9] \clip (0,-0.2) rectangle (11,4); \draw [line width=1pt] (0,0) -- (10,0); \draw [line width=1pt] (0,0.01) to[out=10,in=-90] (2,3); \draw [line width=1pt] (2.01,3) to[out=-90,in=180] (3.5,1.1); \draw [line width=1pt] (3.5,1.1) to[out=0,in=-100] (4.5,2.5); \draw [blue, dashed, line width=1pt] (3.5,0.55) ellipse (1.5mm and 5.5mm); \draw [blue, line width=1pt] (3.5,1.1) arc [ start angle=90, end angle=270, x radius=1.5mm, y radius =5.5mm ] ; \draw [line width=1pt] (4.51,2.5) to[out=-90,in=180] (5.5,1); \draw [line width=1pt] (5.5,1) to[out=0,in=-100] (7,3.3); \draw [blue, dashed, line width=1pt] (5.5,0.5) ellipse (1.3mm and 5mm); \draw [blue, line width=1pt] (5.5,1) arc [ start angle=90, end angle=270, x radius=1.3mm, y radius =5mm ] ; \draw [line width=1pt] (7,3.32) to[out=-80,in=180] (8,2); \draw [line width=1pt] (8,2) to[out=0,in=-90] (9,3.7); \draw [blue, dashed, line width=1pt] (8,1) ellipse (3mm and 10mm); \draw [blue, line width=1pt] (8,2) arc [ start angle=90, end angle=270, x radius=3mm, y radius =10mm ] ; \draw [line width=1pt] (9.02,3.7) to[out=-90,in=180] (10,1.8); \draw [dashed, blue, line width=1pt] (10,0.9) ellipse (3mm and 9mm); \draw [blue, line width=1pt] (10,1.8) arc [ start angle=90, end angle=270, x radius=3mm, y radius =9mm ] ; \node at (3,0.5) {{\small {\color{blue}$\alpha_{1}$}}}; \node at (5,0.5) {{\small {\color{blue}$\alpha_{2}$}}}; \node at (7.4,1) {{\small {\color{blue}$\alpha_{3}$}}}; \node at (9.4,1) {{\small {\color{blue}$\alpha_{4}$}}}; \node at (10.8,0.9) {$\ldots$}; \node at (2,0.5) {{\small $P_{0}$}}; \node at (4.3,0.5) {{\small $P_{1}$}}; \node at (6.8,0.5) {{\small $P_{2}$}}; \node at (8.9,0.5) {{\small $P_{3}$}}; \end{scope} \end{tikzpicture} \end{center} \caption{\emph{A tight flute surface.}} \label{Fig:TightFluteSurface} \end{figure} \end{definition} If $S$ is a tight flute surface, we denote by $\alpha_n$ the closed geodesic of
the boundary of the surface obtained after gluing $n$ geodesic pairs of pants, and let $l_n$ and $t_n$ be the length and twist parameters associated to $\alpha_n$. In terms of Fenchel-Nielsen parameters, this surface is denoted by $S:=S(\{l_n, t_n\})_{n\in\mathbb{N}}$. \begin{definition} If all twist parameters of a tight flute surface are zero, that is, $S:=S(\{l_n,0\})_{n\in\mathbb{N}}$, then we call it a \textbf{zero-twist tight flute surface}. \end{definition} The problem of parabolicity is fully understood for zero-twist tight flute surfaces. \begin{theorem}[\cite{BasHakSa}*{Theorem 1.5}]\label{Teo:zero_twist_tight_flute} A zero-twist tight flute surface $S(\{l_{n}, 0\})_{n\in\mathbb{N}}$ is parabolic if and only if one of the following holds: \begin{enumerate} \item The surface $S(\{l_{n}, 0\})_{n\in\mathbb{N}}$ is complete. \item The series $\sum\limits_{n=1}^{\infty}e^{-\frac{l_{n}}{2}}$ diverges. \end{enumerate} \end{theorem} \section{Proof of Theorem \ref{Teo:Parametrization-ZTFS}}\label{sec:proof-ZTFS} We begin by associating to each zero-twist flute surface $S=S(\{l_n,0\})_{n\in\mathbb{N}_{0}}$ a sequence $\textbf{x}$ of positive real numbers and a Fuchsian group $\Gamma_{\mathbf{x}}$ such that the convex core of $\mathbb{H}^2/\Gamma_{\mathbf{x}}$ is isometric to $S$. We do this by explicitly constructing a hyperbolic polygon $\mathcal{P}$, obtained by cutting $S$ along an infinite collection of bi-infinite geodesics, which serves as a fundamental domain for $\Gamma_{\mathbf{x}}$. \medskip Let $S=S(\{l_n,0\})_{n\in\mathbb{N}_{0}}$ be a zero-twist flute surface, which comes from gluing the geodesic pairs of pants of the family $\{P_{n}\}_{n\in\mathbb{N}_{0}}$ (see Figure \ref{Fig:tight_flute_1}). For each $n\in\mathbb{N}$, let $\alpha_{n}$ be the closed geodesic in $S$ which comes from gluing the geodesic pairs of pants $P_{n-1}$ and $P_{n}$. Let $\textbf{0}$ and $s_{0}$ be the punctures of $P_0$, and for $n\geq 1$, let $s_{n}$ be the unique puncture of $P_n$.
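\begin{remark} The criterion of Theorem \ref{Teo:zero_twist_tight_flute} is easy to apply to concrete length parameters. For instance, if $l_{n}=2\log(n+1)$ for every $n\in\mathbb{N}$, then \begin{equation} \sum\limits_{n=1}^{\infty}e^{-\frac{l_{n}}{2}}=\sum\limits_{n=1}^{\infty}\frac{1}{n+1}=\infty, \end{equation} so the zero-twist tight flute surface $S(\{l_{n},0\})_{n\in\mathbb{N}}$ is parabolic. If instead $l_{n}=4\log(n+1)$, then $\sum\limits_{n=1}^{\infty}e^{-\frac{l_{n}}{2}}=\sum\limits_{n=1}^{\infty}\frac{1}{(n+1)^{2}}<\infty$, and $S(\{l_{n},0\})_{n\in\mathbb{N}}$ is parabolic if and only if it is complete. \end{remark}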
Now we describe how to obtain an ideal hyperbolic polygon $\mathcal{P}$ by removing a collection $\{\gamma_n\}_{n\in \mathbb{N}_0}$ of geodesics in $S$ having endpoints at the punctures of $S$. Let $\gamma_0 \subset P_0$ be the geodesic connecting the two punctures of $P_0$. For $n\geq 1$, let $\gamma_n$ be the geodesic connecting the punctures $s_{n-1}$ and $s_n$ which is orthogonal to the closed geodesic $\alpha_{n-1}$, see Remark \ref{remark:BiinfiniteGeodesic}. Finally, we draw on $S$ the geodesic ray $\beta$ orthogonal to each $\alpha_{n}$ and having one endpoint at $\textbf{0}$. Figure \ref{Fig:tight_flute_1} shows $S$ with the closed geodesics $\alpha_n$, the bi-infinite geodesics $\gamma_n$ and the geodesic ray $\beta$. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=1] \clip (0,-0.5) rectangle (11.5,5.5); \draw [green, line width=1pt] (0.1,0) to[out=0,in=180] (4.5,0); \draw [green, line width=1pt] (4.5,0) to[out=0,in=180] (7.5,0); \draw [green, line width=1pt] (7.5,0) to[out=0,in=180] (10.5,0); \node at (0.1,0.3) {$\textbf{0}$}; \node at (2.98,5.2) {$s_{0}$}; \node at (5.98,5.2) {$s_{1}$}; \node at (8.98,5.2) {$s_{2}$}; \draw [line width=1pt] (0.1,0.03) to[out=10,in=-90] (2.98,5); \draw [red, line width=1pt] (0.1,0.01) to[out=-1,in=-88] (3,5); \node at (2.9,2) {{\small {\color{red} $\gamma_{0}$}}}; \draw [line width=1pt] (3.02,5) to[out=-90,in=180] (4.5,2); \draw [line width=1pt] (4.5,2) to[out=10,in=-90] (5.98,5); \draw [red, line width=1pt] (3,5) to[out=-90,in=180] (4.17,1.5); \draw [red, line width=1pt] (4.17,1.5) to[out=0,in=-90] (6,5); \node at (5.8,2) {{\small {\color{red}$\gamma_{1}$}}}; \draw [line width=1pt] (6.02,5) to[out=-90,in=180] (7.5,2); \draw [line width=1pt] (7.5,2) to[out=0,in=-90] (8.98,5); \draw [red, line width=1pt] (6,5) to[out=-90,in=180] (7.15,1.5); \draw [red, line width=1pt] (7.15,1.5) to[out=0,in=-86] (9,5); \node at (8.9,2.2) {{\small{\color{red}$\gamma_{2}$}}}; \draw
[line width=1pt] (9.02,5) to[out=-90,in=180] (10.5,2); \draw [red, line width=1pt] (9,5) to[out=-90,in=180] (10.15,1.5); \node at (10,1.8) {{\small {\color{red}$\gamma_{3}$}}}; \draw [blue, dashed, line width=1pt] (4.5,1) ellipse (3.5mm and 10mm); \draw [blue, line width=1pt] (4.5,2) arc [ start angle=90, end angle=270, x radius=3.5mm, y radius =10mm ] ; \node at (3.8,1) {{\small {\color{blue}$\alpha_{1}$}}}; \draw [blue, dashed, line width=1pt] (7.5,1) ellipse (3.5mm and 10mm); \draw [blue, line width=1pt] (7.5,2) arc [ start angle=90, end angle=270, x radius=3.5mm, y radius =10mm ] ; \node at (6.8,1) {{\small {\color{blue}$\alpha_{2}$}}}; \draw [blue, dashed, line width=1pt] (10.5,1) ellipse (3.5mm and 10mm); \draw [blue, line width=1pt] (10.5,2) arc [ start angle=90, end angle=270, x radius=3.5mm, y radius =10mm ] ; \node at (9.8,1) {{\small {\color{blue}$\alpha_{3}$}}}; \node at (11.3,1) {$\ldots$}; \node at (2.9,1) {{\small$P_{0}$}}; \node at (5.8,1) {{\small$P_{1}$}}; \node at (5.8,-0.3) {{\small {\color{green}$\beta$}}}; \node at (8.8,1) {{\small$P_{2}$}}; \end{scope} \end{tikzpicture} \end{center} \caption{\emph{A zero-twist tight flute surface.}} \label{Fig:tight_flute_1} \end{figure} Now, we cut $S$ along the geodesics $\gamma_{n}$. Then $S$ turns into an ideal hyperbolic polygon $\mathcal{P}$ with infinitely many sides and infinitely many ideal vertices. More precisely, the edges and vertices of $\mathcal{P}$ are given as follows: \begin{itemize} \item[$\ast$] For each $n\geq 0$, let $\gamma_{n}^{+}$ and $\gamma_{n}^{-}$ be the sides of $\mathcal{P}$ coming from cutting $S$ along $\gamma_n$. \item[$\ast$] For each $n\geq 0$, let $s_n^+$ and $s_n^-$ be the ideal vertices of $\mathcal{P}$ coming from the puncture $s_n$, see Figure \ref{Fig:HyperbolicPolygonP}. \end{itemize} \begin{figure}[h!] 
\begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-6,-2.6) rectangle (3,2.6); \draw [green, line width=1.2pt](-4,0) -- (3,0); \draw [red, line width=1pt] (-4,0) to[out=10,in=-90] (-3,2); \draw [red, line width=1pt] (-2.97,2) to[out=-90,in=180] (-2,0.8); \draw [red, line width=1pt] (-2,0.8) to[out=0,in=-90] (-1,2); \draw [red, line width=1pt] (-0.97,2) to[out=-90,in=180] (0,0.8); \draw [red, line width=1pt] (0,0.8) to[out=0,in=-90] (1,2); \draw [red, line width=1pt] (1.03,2) to[out=-90,in=180] (2,0.8); \draw [red, line width=1pt] (-4,0) to[out=-10,in=90] (-3,-2); \draw [red, line width=1pt] (-2.97,-2) to[out=90,in=-180] (-2,-0.8); \draw [red, line width=1pt] (-2,-0.8) to[out=0,in=90] (-1,-2); \draw [red, line width=1pt] (-0.97,-2) to[out=90,in=-180] (0,-0.8); \draw [red, line width=1pt] (0,-0.8) to[out=0,in=90] (1,-2); \draw [red, line width=1pt] (1.03,-2) to[out=90,in=-180] (2,-0.8); \node at (-4.4,0) {\small{$\textbf{0}$}}; \node at (-3,2.3) {\small{$s_{0}^{-}$}}; \node at (-3,-2.3) {\small{$s_{0}^{+}$}}; \node at (-1,2.3) {\small{$s_{1}^{-}$}}; \node at (-1,-2.3) {\small{$s_{1}^{+}$}}; \node at (1,2.3) {\small{$s_{2}^{-}$}}; \node at (1,-2.3) {\small{$s_{2}^{+}$}}; \node at (-3.5,1) {\small{{\color{red}$\gamma_{0}^{-}$}}}; \node at (-3.5,-1) {\small{{\color{red}$\gamma_{0}^{+}$}}}; \node at (-2,1.3) {\small{{\color{red}$\gamma_{1}^{-}$}}}; \node at (-2,-1.3) {\small{{\color{red}$\gamma_{1}^{+}$}}}; \node at (0,1.3) {\small{{\color{red}$\gamma_{2}^{-}$}}}; \node at (0,-1.3) {\small{{\color{red}$\gamma_{2}^{+}$}}}; \node at (2.5,0.2) {\small{{\color{green}$\beta$}}}; \end{scope} \end{tikzpicture} \caption{\emph{Ideal hyperbolic polygon $\mathcal{P}$ obtained by cutting $S(\{l_n,0\})_{n\in\mathbb{N}_{0}}$ along the geodesics $\gamma_n$.}} \label{Fig:HyperbolicPolygonP} \end{center} \end{figure} \begin{remark}\label{Rem:PolygonWithIdentifications} For each $n\geq 0$, the sides $\gamma_{n}^{+}$ and 
$\gamma_{n}^{-}$ of the ideal hyperbolic polygon $\mathcal{P}$ are identified by a M\"obius transformation $g_{n}$. If we identify the sides $\gamma_{n}^{+}$ and $\gamma_{n}^{-}$ using $g_{n}$, then we recover the zero-twist tight surface $S(\{l_n,0\})_{n\in\mathbb{N}_{0}}$. \end{remark} The ideal hyperbolic polygon $\mathcal{P}$ can be realized in the hyperbolic plane $\mathbb{H}^2$ so that it satisfies the following properties (see Figure \ref{Fig:RealizedPolygonP}): \begin{itemize} \item[$\ast$] Its vertices lie on the real axis. Up to a real translation, we can assume that the vertex $\textbf{0}$ of $\mathcal{P}$ is the complex number zero and that the geodesic ray $\beta$ coincides with the imaginary axis. The collection of vertices $(s_{n}^{+})_{n\in\mathbb{N}_{0}}$ defines a strictly increasing sequence of positive real numbers, and the collection of vertices $(s_{n}^{-})_{n\in\mathbb{N}_{0}}$ defines a strictly decreasing sequence of negative real numbers. Since $\mathcal{P}$ has a reflective symmetry fixing the ray $\beta$, the vertices $s_{n}^{+}$ and $s_{n}^{-}$ are symmetric with respect to the imaginary axis; that is, $s_n^{-}=-s_n^{+}$ for all $n\geq 0$. \item[$\ast$] For each $n\in\mathbb{N}_{0}$, the edges $\gamma_{n}^{+}$ and $\gamma_{n}^{-}$ of $\mathcal{P}$ are half-circles having the same radius and endpoints $s_{n-1}^{+}$ and $s_{n}^{+}$, and $s_{n-1}^{-}$ and $s_{n}^{-}$, respectively. Thus, any two of these edges are either disjoint or meet at a single point of the real line. Moreover, if $\gamma$ is one of these half-circles, then all the other half-circles lie in the exterior of $\gamma$. \end{itemize} \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.9] \clip (-6,-0.8) rectangle (8,3.5); \draw [red, line width=1pt] (0.5,0) arc(0:180:0.5); \draw [red, line width=1pt] (1.5,0) arc(0:180:0.5) \draw [red, line width=1pt] (3.5,0) arc(0:180:1) \draw [red, line width=1pt] (-0.5,0) arc(0:180:1) \draw [red, line width=1pt] (4.5,0) arc(0:180:0.5); \draw [red, line width=1pt] (-2.5,0) arc(0:180:0.5); \draw [dashed, blue, line width=1pt] (1,0.5) arc(20:160:0.5); \draw [dashed, blue, line width=1pt] (2.6,1) arc(20:160:2.2); \draw [dashed, blue, line width=1pt] (4.1,0.5) arc(20:160:3.8); \node at (1,0.5) {{\color{red}\tiny{$<$}}}; \node at (2.5,1) {{\color{red}\tiny{$<$}}}; \node at (4,0.5) {{\color{red}\tiny{$<$}}}; \node at (0,0.5) {{\color{red}\tiny{$>$}}}; \node at (-1.5,1) {{\color{red}\tiny{$>$}}}; \node at (-3,0.5) {{\color{red}\tiny{$>$}}}; \node at (-4,0.2) {$\ldots$}; \node at (5,0.2) {$\ldots$}; \node at (1.2,0.8) {{\small{\color{red}$\gamma_{0}^{+}$}}}; \node at (2.8,1.3) {{\small{\color{red}$\gamma_{1}^{+}$}}}; \node at (4.3,0.8) {{\small{\color{red}$\gamma_{2}^{+}$}}}; \node at (-0.2,0.8) {{\small{\color{red}$\gamma_{0}^{-}$}}}; \node at (-1.7, 1.3) {\small{\color{red}{$\gamma_{1}^{-}$}}}; \node at (-3.3,0.8) {\small{\color{red}{$\gamma_{2}^{-}$}}}; \node at (0.5,1.1){\small{\color{blue}{$\alpha_{0}$}}}; \node at (0.5,2.65){\small{\color{blue}{$\alpha_{1}$}}}; \node at (0.5,3.2){\small{\color{blue}{$\alpha_{2}$}}}; \node at (0.5,-0.3) {\tiny{$\mathbf{0}$}}; \node at (1.5,-0.3) {\tiny{$s_{0}^{+}$}}; \node at (3.5,-0.3) {{\tiny$s_{1}^{+}$}}; \node at (4.5,-0.3) {\tiny{$s_{2}^{+}$}}; \node at (-0.5,-0.3) {\tiny{$s_{0}^{-}$}}; \node at (-2.5,-0.3) {\tiny{$s_{1}^{-}$}}; \node at (-3.5,-0.3) {\tiny{$s_{2}^{-}$}}; \draw [->](-4,0) -- (5,0); \draw[green] (0.5,0) -- (0.5,3.5); \node at (0.7, 1.7){\small{\color{green}{$\beta$}}}; \end{scope} \end{tikzpicture} \caption{\emph{Realization of the ideal hyperbolic polygon 
$\mathcal{P}$ in $\mathbb{H}^2$.}} \label{Fig:RealizedPolygonP} \end{center} \end{figure} To the zero-twist flute surface $S$ we associate the sequence $\mathbf{x}=(x_{n})_{n\in\mathbb{N}_{0}}$ of positive real numbers that satisfies \begin{equation} s_{n}^{+}=\sum\limits_{i=0}^{n}x_{i}. \end{equation} Thus, to the sequence $\textbf{x}$ we associate the group $\Gamma_{\mathbf{x}}$ given by \begin{equation} \Gamma_{\textbf{x}}=\langle g_{n}: n\in\mathbb{N}_0 \rangle. \end{equation} Now we prove that $\Gamma_{\mathbf{x}}$ satisfies the properties described in Theorem \ref{Teo:Parametrization-ZTFS}. \noindent For each $n\in\mathbb{N}_{0}$, let $s_n:=s_n^+$, and set $s_{-1}:=0$. On the hyperbolic plane $\mathbb{H}^{2}$, we draw the half-circles $\gamma_{0}^{+}$ and $\gamma_{0}^{-}$ having endpoints $\textbf{0}$ and $s_{0}$, and $\textbf{0}$ and $-s_{0}$, respectively. For each $n\geq 1$, we draw the half-circles $\gamma_{n}^{+}$ and $\gamma_{n}^{-}$ having endpoints $s_{n-1}$ and $s_{n}$, and $-s_{n-1}$ and $-s_{n}$, respectively. See Figure \ref{Fig:RealizedPolygonP}. We remark that the half-circles $\gamma_{n}^{+}$ and $\gamma_{n}^{-}$ are symmetric with respect to the imaginary axis. Then the M\"obius transformation $g_{n}$ is given by \begin{equation}\label{Ecu:General} g_n(z):=\dfrac{\left(1+\dfrac{2s_{n-1}}{s_n-s_{n-1}}\right)z-2s_{n-1}\left(1+\dfrac{s_{n-1}}{s_n-s_{n-1}}\right)}{-\dfrac{2}{s_n-s_{n-1}}z+\left(1+\dfrac{2s_{n-1}}{s_n-s_{n-1}}\right)}, \end{equation} which sends the half-circle $\gamma_{n}^{+}$ onto the half-circle $\gamma_{n}^{-}$, for each $n\geq 0$. Observe that $g_{0}$ is parabolic and, for each $n\geq 1$, $g_{n}$ is hyperbolic with trace equal to \[ \left\vert 2\left(1+\frac{2s_{n-1}}{s_n-s_{n-1}}\right)\right\vert>2. \] Since $\Gamma_{\textbf{x}}$ is obtained by pairing the sides of the convex ideal polygon $\mathcal{P}$, it is a non-elementary group and all of its generators are non-elliptic elements.
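As a quick numerical sanity check of \eqref{Ecu:General}, one can verify that $g_n$ sends the endpoints of $\gamma_{n}^{+}$ to those of $\gamma_{n}^{-}$, and that the trace distinguishes the parabolic case $n=0$ (where $s_{-1}=0$) from the hyperbolic cases $n\geq 1$. This is a hedged sketch; the sequence values and function names are ours:

```python
import numpy as np

def g(z, s_prev, s_n):
    # Moebius transformation of equation (Ecu:General):
    # z -> (a z + b) / (c z + a), a normalized matrix [[a, b], [c, a]]
    delta = s_n - s_prev
    a = 1 + 2 * s_prev / delta
    b = -2 * s_prev * (1 + s_prev / delta)
    c = -2 / delta
    return (a * z + b) / (c * z + a)

def trace(s_prev, s_n):
    # |tr g_n| = |2 (1 + 2 s_{n-1} / (s_n - s_{n-1}))|
    return abs(2 * (1 + 2 * s_prev / (s_n - s_prev)))

# arbitrary increasing endpoints: s_{-1} := 0 < s_0 = 1 < s_1 = 3
s = [0.0, 1.0, 3.0]

# g_n sends gamma_n^+ (endpoints s_{n-1}, s_n) onto gamma_n^- (endpoints -s_{n-1}, -s_n)
for n in (1, 2):
    assert np.isclose(g(s[n - 1], s[n - 1], s[n]), -s[n - 1])
    assert np.isclose(g(s[n], s[n - 1], s[n]), -s[n])

# g_0 is parabolic (|trace| = 2); g_n for n >= 1 is hyperbolic (|trace| > 2)
assert np.isclose(trace(s[0], s[1]), 2.0)
assert trace(s[1], s[2]) > 2.0
```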
Thus, $\Gamma_{\textbf{x}}$ is a Fuchsian group (see \cite{Bear1}*{Theorem 8.3.1, p. 198}), with fundamental domain $\mathcal{P}$ for its action on $\mathbb{H}^2$. In what follows we describe $\mathcal{P}$ more precisely. Recall that if $\gamma$ is a half-circle in $\mathbb{H}^{2}$ with center $z\in\mathbb{R}$ and radius $r>0$, then $\mathbb{H}^{2}\setminus \gamma$ has two connected components. The connected component of $\mathbb{H}^2\setminus \gamma$ equal to the set $\{w\in\mathbb{H}^{2}:\vert z-w\vert>r\}$ is called the \emph{exterior of $\gamma$} and is denoted by ${\rm Ext}(\gamma)$. The complement in $\mathbb{H}^2$ of the closure of ${\rm Ext}(\gamma)$ is called the \emph{interior of $\gamma$} and is denoted by ${\rm Int}(\gamma)$, see Figure \ref{Fig:int_ext}. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-3,-0.8) rectangle (3.7,2.9); \draw [blue, line width=1pt] (2,0) arc(0:180:2); \node at (-2,-0.3) {\tiny{$z-r$}}; \node at (0,-0.3) {\tiny{$z$}}; \node at (2.1,-0.3) {\tiny{$z+r$}}; \node at (0,2.5) {\small{${\rm{Ext}}(\gamma)$}}; \draw [->, >=latex, black!30](-3,0) -- (3,0); \node at (0,0.9) {\small{${\rm{Int}}(\gamma)$}}; \node at (-1.8,1.5) {\small{{\color{blue}$\gamma$}}}; \end{scope} \end{tikzpicture} \caption{\emph{Exterior and interior of a half-circle $\gamma$ in $\mathbb{H}^{2}$.}} \label{Fig:int_ext} \end{center} \end{figure} The limit $\lim\limits_{n\to \infty} s_n$ is well defined because $(s_n)_{n\in\mathbb{N}_{0}}$ is the increasing sequence of partial sums of the positive sequence $\mathbf{x}$. Let $a$ denote this limit, which is either a positive real number or infinity.
If $a$ is infinity, then $\mathcal{P}$ is equal to the closure (in $\mathbb{H}^2$) of \[D:=\bigcap_{n\in \mathbb{N}_{0}} \left({\rm{Ext}}(\gamma_n^+)\cap {\rm{Ext}}(\gamma_n^-)\right)\subset \mathbb{H}^2.\] Otherwise, if $a$ is a positive real number, then we denote by $\gamma$ the half-circle in $\mathbb{H}^2$ having $a$ and $-a$ as endpoints. In this case $\mathcal{P}$ is given by \[\overline{D}\cap {\rm{Int}}(\gamma) \subset \mathbb{H}^2.\] Notice that $\overline{D}$ is a fundamental domain for $\Gamma_{\mathbf{x}}$ and $\overline{D}/\Gamma_{\mathbf{x}}$ is a complete hyperbolic surface. Therefore, if $a$ is infinity, then $\mathcal{P}/\Gamma_{\mathbf{x}}=\overline{D}/\Gamma_{\mathbf{x}}$ is isometric to $S$ and $\Gamma_{\mathbf{x}}$ is a Fuchsian group of first kind. Otherwise, if $a$ is finite, then $\mathcal{P}/\Gamma_{\mathbf{x}}$ is isometric to $S$ but is not a complete hyperbolic surface, because the projections of the points of $\gamma$ are limit points which do not belong to the surface. However, $\mathcal{P}/\Gamma_{\mathbf{x}}$ is the convex core of $\overline{D}/\Gamma_{\mathbf{x}}$. This finishes the proof of Theorem \ref{Teo:Parametrization-ZTFS}.\qed \section{Proof of Theorem \ref{t:LNM}}\label{section:Loch_ness_monster}\label{sec:proof-LNM} From the introduction we recall that $\mathcal{N}$ denotes the set of all sequences $\mathbf{y}:=(y_n)_{n\in \mathbb{Z}}$ of elements $y_n=(a_{n},b_{n},c_{n},d_{n},e_{n}) \in \mathbb{R}^5$ satisfying \begin{equation}\label{Eq:Condition} a_{n}<b_{n}<c_{n}<d_{n}<e_{n} \mbox{ and } e_{n}\leq a_{n+1}. \end{equation} Let $\mathbf{y}:=(y_n)_{n\in \mathbb{Z}} \in \mathcal{N}$.
For each $n\in \mathbb{Z}$, let $f_{n}$ and $g_{n}$ be the M\"obius transformations mapping $\sigma_{n}$ onto $\tilde{\sigma}_{n}$ and $\rho_{n}$ onto $\tilde{\rho}_{n}$, respectively, where $\sigma_{n}$, $\rho_{n}$, $\tilde{\sigma}_{n}$ and $\tilde{\rho}_{n}$ are the half-circles in the hyperbolic plane $\mathbb{H}^2$ depicted in Figure \ref{Fig:half_circleIntro}. Let \begin{equation} G_\mathbf{y}:= \langle f_n,g_n:\, n\in \mathbb{Z} \rangle \leq \mathrm{Isom}^+(\mathbb{H}^2). \end{equation} To begin the proof of Theorem \ref{t:LNM}, we require the following lemma. \begin{lemma}\label{lemma:hyperbolic_moebius_map} Let $\sigma$ and $\tilde{\sigma}$ denote the half-circles having endpoints $a$ and $b$, and $c$ and $d$, respectively, where $a<b<c<d$ (see Figure \ref{Fig:half_circles_1}). Then the M\"obius transformation $f$ which sends the half-circle $\sigma$ onto the half-circle $\tilde{\sigma}$ is hyperbolic. \end{lemma} \begin{proof} By hypothesis, the half-circles $\sigma$ and $\tilde{\sigma}$ are disjoint, and each of them is contained in the exterior of the other. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[baseline=(current bounding box.north)] \begin{scope}[scale=0.8] \clip (-3.3,-0.5) rectangle (3.6,3.3); \draw [blue, line width=1pt] (-1,0) arc(0:180:1); \draw [blue, line width=1pt] (3,0) arc(0:180:1.5); \draw[dashed, color=blue!60, thick, <-] (0.6,1.3) arc (0:180:1.3); \node at (-3,-0.3) {\tiny{$a$}}; \node at (-1,-0.3) {\tiny{$b$}}; \node at (0,-0.3) {\tiny{$c$}}; \node at (3,-0.3) {\tiny{$d$}}; \node at (-2,-0.3) {\tiny{$O$}}; \node at (-2.1,0.5) {\small{$r$}}; \draw [->, >=latex](-2,0) -- (-1.8,0.99); \node at (1.5,-0.3) {\tiny{$\tilde{O}$}}; \node at (1.4,0.75) {\small{$\tilde{r}$}}; \draw [->, >=latex](1.5,0) -- (1.8,1.48); \node at (-2.2,1.2) {\small{{\color{blue}$\sigma$}}}; \node at (1.5,1.8) {\small{{\color{blue}$\tilde{\sigma}$}}}; \node at (-0.7,2.9) {\small{$f$}}; \draw [->, >=latex, black!30](-3.3,0) -- (3.5,0); \end{scope} \end{tikzpicture} \caption{\emph{Half-circles $\sigma$ and $\tilde{\sigma}$.}} \label{Fig:half_circles_1} \end{center} \end{figure} Explicitly, they have centers at the real numbers $O=a+\frac{b-a}{2}$ and $\tilde{O}=c+\frac{d-c}{2}$, respectively, and radii $r=\frac{b-a}{2}$ and $\tilde{r}=\frac{d-c}{2}$, respectively. Then the M\"obius transformation $f$ is given by \begin{equation}\label{eq:hyperbolic_moebius_map} f(z)=\frac{-r\tilde{r}}{z-O} + \tilde{O}. \end{equation} Let us note that the fixed points of $f$ are \begin{equation}\label{eq:fix_points} z=\frac{(O+\tilde{O})\pm \sqrt{\delta}}{2}, \end{equation} where $\delta=(O+\tilde{O})^2-4(O\tilde{O}+r\tilde{r})=(O-\tilde{O})^2-4r\tilde{r}$. Since $0< r+\tilde{r} < \vert \tilde{O}-O \vert$, we have $$0\leq (r-\tilde{r})^2= (r+\tilde{r})^2-4r\tilde{r} <(\tilde{O}-O)^2-4r\tilde{r}=\delta.$$ Hence $\delta>0$, so the two fixed points of $f$ are distinct real numbers. Therefore, $f$ is hyperbolic. \end{proof} \medskip \noindent \emph{The group $G_{\mathbf{y}}$ is Fuchsian}. By Lemma \ref{lemma:hyperbolic_moebius_map}, $G_{\mathbf{y}}$ is generated by hyperbolic elements.
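The explicit map \eqref{eq:hyperbolic_moebius_map} can also be checked numerically: $f$ sends the endpoints $a,b$ of $\sigma$ to the endpoints of $\tilde{\sigma}$, and the fixed-point equation $(z-O)(z-\tilde{O})=-r\tilde{r}$ has two distinct real roots. A minimal sketch, with arbitrary endpoint values of our choosing:

```python
import math

# endpoints a < b < c < d of the half-circles sigma and sigma-tilde
a, b, c, d = 0.0, 1.0, 2.0, 4.0
O, r = (a + b) / 2, (b - a) / 2          # center and radius of sigma
Ot, rt = (c + d) / 2, (d - c) / 2        # center and radius of sigma-tilde

def f(z):
    # equation (eq:hyperbolic_moebius_map)
    return -r * rt / (z - O) + Ot

# f maps sigma onto sigma-tilde: endpoints go to endpoints
assert math.isclose(f(a), d) and math.isclose(f(b), c)

# the fixed points ((O + Ot) +/- sqrt(delta)) / 2 are distinct real numbers
delta = (O - Ot) ** 2 - 4 * r * rt
assert delta > 0
z1 = ((O + Ot) + math.sqrt(delta)) / 2
z2 = ((O + Ot) - math.sqrt(delta)) / 2
assert math.isclose(f(z1), z1) and math.isclose(f(z2), z2)
```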
So, by Theorem 8.2.1 in \cite{Bear1}, $G_{\mathbf{y}}$ is a Fuchsian group. By construction, the half-circles in the collection $\mathcal{C}=\{\sigma_{n}, \tilde{\sigma}_{n}, \rho_{n} ,\tilde{\rho}_{n}:n\in\mathbb{Z}\}$ are pairwise disjoint, and each of them is contained in the exterior of all the others. From Theorem 3.3.5 in \cite{KS}, a fundamental region $D(G_{\mathbf{y}})$ for the Fuchsian group $G_{\mathbf{y}}$ is given by \begin{equation}\label{eq:fundamental_domain_LNM} D(G_{\mathbf{y}})=\bigcap\limits_{n\in\mathbb{Z}}\left({\rm Ext}(\sigma_{n})\cap {\rm Ext}(\tilde{\sigma}_{n})\cap {\rm Ext}(\rho_{n})\cap {\rm Ext}(\tilde{\rho}_{n})\right)\subset \mathbb{H}^{2}. \end{equation} Since the intersection of any two different elements of $\mathcal{C}$ is either empty or occurs only at infinity, that is, at a common point of the real line, the group $G_{\mathbf{y}}$ acts freely and properly discontinuously on the whole of $\mathbb{H}^{2}$. Hence, the quotient space $S:=\mathbb{H}^{2}/G_{\mathbf{y}}$ is a complete Riemann surface.\qed \medskip \noindent In order to prove that $S$ is topologically equivalent to the Loch Ness monster, we must verify that $S$ has only one end and infinite genus. The proof uses the same ideas appearing in \cite{AyC}. \medskip \noindent \emph{The surface $S$ has only one end}. By Lemma \ref{lemma:spec}, it is enough to prove that for any compact subset $K$ of $S$ there exists a compact subset $K'$ of $S$ such that $K\subset K'$ and the space $S\setminus K'$ is connected. Let $K$ be a compact subset of $S$. Observe that $\tilde{K}:=\pi^{-1}(K)$ is a compact subset of $\overline{D(G_{\mathbf{y}})}$, where $\pi:\overline{D(G_{\mathbf{y}})}\rightarrow S= \overline{D(G_{\mathbf{y}})}/G_{\mathbf{y}}$ is the quotient map. Thus, $\tilde{K}$ is a compact subset of $\mathbb{H}^2$.
Therefore, there exist closed intervals $I_x$ and $I_y$ in the real and imaginary axes, respectively, such that $\pi_x(\tilde{K}) \subset I_x$ and $\pi_y(\tilde{K}) \subset I_y$, where $\pi_{x}:\mathbb{H}^{2}\to\mathbb{R}$ is the standard projection map onto the real axis and $\pi_{y}:\mathbb{H}^{2}\to (0,+\infty)$ is the projection map onto the imaginary axis. By construction of the fundamental domain for $G_{\mathbf{y}}$, the set $\tilde{K}':=\overline{D(G_{\mathbf{y}})}\cap (I_{x}\times I_{y})$ is a compact subset of $\overline{D(G_{\mathbf{y}})}$ such that $\tilde{K}\subset \tilde{K}'$. Thus, $K':=\pi(\tilde{K}')$ is a compact subset of $S$ which contains $K$. Finally, one can verify that $S\smallsetminus K'$ is connected. \noindent \emph{The surface $S$ has infinite genus}. For each $n\in\mathbb{Z}$, we define the strip \[ \tilde{S}_{n}:=D(G_{\mathbf{y}})\cap \{z\in\mathbb{H}^2: a_{n}<{\rm Re}(z)<e_{n} \}\subset \mathbb{H}^2.\] Observe that $\tilde{S}_{n}/\langle f_n,g_n \rangle$ defines, under the projection map $\pi$, a subsurface $S_{n}$ of $S$ topologically equivalent to a torus minus a disk. By construction, for any two different integers $m\neq n$ the subsurfaces $S_{n}$ and $S_{m}$ are disjoint, because $\tilde{S}_n \cap \tilde{S}_m =\emptyset$. This shows that $S$ has infinite genus. \medskip \noindent \emph{If $G_\mathbf{y}$ is of first kind, then $e_n=a_{n+1}$ for all $n\in \mathbb{Z}$, $\lim\limits_{n\to \infty} e_n=\infty$ and $\lim\limits_{n\to -\infty} a_n=-\infty$.} Suppose that $G_{\mathbf{y}}$ is of first kind but that the conclusion is false. In each case it is possible to find an interval $I\subset \mathbb{R}$ not contained in the limit set of $G_{\mathbf{y}}$, which leads to a contradiction. Indeed, if $e_n\neq a_{n+1}$ for some $n\in \mathbb{Z}$, then $I=(e_n,a_{n+1})$ is such an interval. If $\lim\limits_{n\to \infty} e_n=r<\infty$, then $I=(r,\infty)$. Finally, if $\lim\limits_{n\to -\infty} a_n=R> -\infty$, then $I=(-\infty,R)$.
\section{Some open questions} We conclude with some further open questions: \begin{problem} Determine the points $\mathbf{y}$ of $\mathcal{N}$ which define Fuchsian groups $G_{\mathbf{y}}$ of first kind. \end{problem} \begin{problem} Which points $\mathbf{y}$ of $\mathcal{N}$ define Fuchsian groups uniformizing Loch Ness monsters of parabolic type? \end{problem} \begin{problem} Find a sequence $\mathbf{x} \in \mathbb{R}_{+}^{\mathbb{N}_0}$ such that the associated zero-twist tight flute surface is of parabolic type but not complete. \end{problem} \section*{Acknowledgements} Camilo Ram\'irez Maluendas was partially supported by UNIVERSIDAD NACIONAL DE COLOMBIA, SEDE MANIZALES. He has dedicated this work to his beautiful family: Marbella and Emilio, in appreciation of their love and support.
\section{EM algorithm} \section{Initialization choices} The EM algorithm requires an initialization of the parameters $\theta_{old}$. Since the likelihood function we are considering is non-convex, the parameter initialization is crucial for finding the optimal solution. In particular, given the cluster number $K$, we have to initialize four parameters: the transition matrix $A$, the initial probabilities $\+\pi$, and the Gaussian distribution parameters $\+\Theta$ and $\+\mu$. In our implementation \footnote{\texttt{link-omitted-for-anonymity}} we provide several initialization choices: \begin{itemize} \item \textbf{the transition matrix $A$ and the initial probabilities $\+\pi$} can either be initialized with equal probabilities $\frac{1}{K}$ for each state, or by randomly sampling from a uniform distribution $\mathcal U(0,1)$ or a Dirichlet distribution Dir$(\+1)$, subject to the constraint that each row has to sum to one. \item \textbf{the Gaussian distribution parameters $\+\Theta$ and $\+\mu$}: we start by computing an initial subdivision into clusters. In the literature the most common way to do this is through K-means. Given the close relationship between HMMs and Gaussian mixture models (GMMs), we also allow initialization with a GMM. Both GMM and K-means are non-convex and thus, depending on the initialization, may lead to different solutions as well. Given the initial subdivision of the dataset, we compute the corresponding empirical covariances and means. Finally, we run the graphical lasso to compute the corresponding precision matrices.
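A minimal sketch of these initialization choices (for illustration only: a plain Lloyd iteration replaces the K-means/GMM library call, a regularized matrix inverse stands in for the graphical-lasso step, and all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, N = 3, 4, 300
X = rng.normal(size=(N, d))              # toy data standing in for the observations

# A and pi: rows sampled from a flat Dirichlet automatically sum to one
A = rng.dirichlet(np.ones(K), size=K)
pi = rng.dirichlet(np.ones(K))

# initial cluster subdivision: a few Lloyd (K-means) iterations
centers = X[rng.choice(N, size=K, replace=False)]
for _ in range(10):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([X[labels == k].mean(0) if np.any(labels == k) else centers[k]
                        for k in range(K)])

# empirical means and covariances per cluster; a regularized inverse
# stands in here for the graphical-lasso step of the actual implementation
mus = centers
Thetas = np.stack([np.linalg.inv(np.cov(X[labels == k].T) + 1e-6 * np.eye(d))
                   for k in range(K)])

assert np.allclose(A.sum(axis=1), 1.0) and np.isclose(pi.sum(), 1.0)
```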
\end{itemize} \section{Model selection} Our model has two hyper-parameters to cross-validate: \begin{enumerate} \item the number of finite states $K$; \item the regularization parameter $\lambda$, which controls the sparsity of the precision matrices $\Theta_k$. \end{enumerate} To estimate these two hyper-parameters we employ two different model selection criteria, the Bayesian Information Criterion (BIC) and the Stability of Clusters (SoC); when cross-validating one hyper-parameter, we assume that the other is fixed. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{BIC} \caption{Cross validation} \label{CV} \end{figure} Based on our experiments, the best performance was obtained using the BIC to estimate $K$ and the SoC for $\lambda$. To verify that this combination of methods is suitable for estimating the hyper-parameters of our model, we generated a multivariate time series with $d=10$ and $K=5$ and cross-validated $K$ and $\lambda$ over the sets $K\in\{3,\dots,8\}$ and $\lambda\in$ numpy.linspace$(18,25,10)$. The results are shown in Figure \ref{CV}: the procedure recovers the value of $K$ from which the data were generated. \subsection{Bayesian Information Criterion (BIC)} To determine the number of hidden states we use the BIC \cite{schwarz1978}, which has the form \[ \text{BIC}(m)= \ln\+p(\+X|m,\theta)-\frac{\nu}{2}\ln(n), \] where $\nu$ is the number of free parameters and $m$ the model under consideration.
In our case the number of free parameters can be computed as follows: \begin{itemize} \item the initial probabilities $\pi$ have dimension $K$ with one constraint, so $\nu_{\pi}=K-1$; \item the transition matrix $A$ has dimension $K\times K$, but each row has a constraint, so $\nu_{A}=K(K-1)$; \item the means $\mu$ are $K$ vectors of dimension $d$ without any constraint, so $\nu_{\mu}=Kd$; \item the precision matrices $\Theta$ are $K$, one for each state, of dimension $d\times d$, but they are constrained by the graphical lasso; therefore, for each state, $\nu_{\Theta} = \sum_{i\geq j} e_{i,j}$, where $e_{i,j}= 0$ if $\hat{\Theta}_{i,j}=0$ and $e_{i,j}= 1$ otherwise, and $\hat{\Theta}$ is the estimated precision matrix. \end{itemize} Putting everything together, the total number of free parameters $\nu$ is \[ \nu = \nu_{\pi}+\nu_{A}+\nu_{\mu}+\nu_{\Theta} =(K-1)(K+1)+Kd+\sum_{k=1}^K\nu_{\Theta_k} . \] \subsection{Stability of clusters} The choice of the number of states $K$ can also be made by looking at the stability of the inferred clusters \cite{brunet2004metagenes}. Such an analysis is performed by fitting the algorithm several times with different initializations. For each repetition we compute the \emph{connectivity matrix} $C$, \textit{i.e.}, a matrix with $C_{ij} =1$ if the states $i$ and $j$ belong to the same cluster and $0$ otherwise. We then average the connectivity matrices into a consensus matrix $\bar{C}$, whose entries estimate the probability that the states $i$ and $j$ belong to the same cluster. On this matrix we compute the dispersion coefficients, which provide a measure of how stable the clustering is \cite{kim2007sparse}. \section{Synthetic dataset generation} The behaviour of real-world time series depends strongly on the order in which the observations appear, and the relations among the series can change over time.
For these reasons we generate synthetic data which are tied together by a Markov process controlling the probability of remaining in the same state or of moving from one state to another. To avoid sudden transitions between states, we define the variable $S_t$, the \textit{smooth transition} variable, which counts the number of steps necessary to go from state $i$ to state $j$, with $i\not = j$ and $i,j\in \{1,\dots,K\}$. The synthetic data generation comprises the following steps: \begin{enumerate} \item we fix suitable values for: \begin{enumerate} \item \textbf{N}: the number of observations; \item \textbf{K}: the number of states; \item \textbf{d}: the number of multivariate time series; \end{enumerate} \item for every state $k\in \{1,\dots,K\}$, our implementation allows for several combinations of distributions to generate the observations. The idea is to study the behavior of the algorithm in several characteristic cases where the labels are known, and from this experience to interpret its behavior on real datasets, where the labels are unknown. \begin{enumerate} \item The means can be drawn in two ways: \begin{enumerate} \item to generate means close to zero, as in the case of stock returns, one can pick the \textit{normal distribution} option, with $\+\mu_k\sim \mathcal{N}(\+0, \mathcal{I})$, where $\mathcal{I}$ is the identity matrix; \item otherwise, the \textit{uniform distribution} option, with $\+\mu_k\sim \mathcal{U}(a,b)$, $a,b \in \mathbb{R}$ and $a<b$. If $a\ll b$, the generated clusters are more likely to be well separated from each other.
\end{enumerate} \item The covariance matrix $\Sigma_k$ can be set in three ways: \begin{enumerate} \item by generating the precision matrix similarly to what was proposed in \cite{meinshausen2006high,yuan2012discussion}: fixing a certain maximum degree $d$ for each node, we randomly select its neighbours and deterministically set the weights of the edges to $0.98/d$ to ensure positive definiteness of the resulting precision matrix; \item from the tool \textbf{scikit-learn.datasets}, which generates a random symmetric, positive-definite matrix; \item by generating the precision matrix with stressed links between nodes: starting from the identity matrix, we randomly place ones in the off-diagonal entries, respecting the symmetry constraint. In this way we generate precision matrices with either strong links between nodes or no links at all. This case is interesting because the networks corresponding to the different states $k$ are then very different from one another, analogously to the case of means that are far apart. \end{enumerate} \end{enumerate} \item each row of the transition matrix $A$ is generated from a Dirichlet distribution Dir$(\+\alpha)$, where $\+\alpha\in \mathbb{R}_+^{K}$. In particular, to avoid overly frequent transitions from one state to another, we impose $\alpha_i= \kappa\cdot \alpha_j$ for $i\not=j$, where $i$ is the index of the transition row we are drawing and the $\alpha_j$ are all the other elements of $\+\alpha$. The constant $\kappa$ is also known as the \textit{force} constant, in the sense that the larger $\kappa$ is, the more likely state $i$ is with respect to the others; \item at each step $n$ the state $k$ is drawn according to the transition matrix $A$, and then the data are drawn from the normal distribution $\+X_n\sim\mathcal{N}(\+\mu_k,\Sigma_k)$. \end{enumerate} From the above description it is possible to note that the introduction of the variable $S_t$ gives the freedom to create different kinds of datasets according to how they transition between states.
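The core of the generation procedure (steps 3 and 4, without smooth transitions, i.e. $S_t=1$) can be sketched as follows; identity covariances and all variable names are our simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, d, kappa = 500, 3, 4, 20.0

# step 3: each row i of A is drawn from Dir(alpha) with alpha_i = kappa * alpha_j,
# so that remaining in the current state i is favoured
A = np.stack([rng.dirichlet(np.where(np.arange(K) == i, kappa, 1.0)) for i in range(K)])

# state parameters: means from N(0, I); identity covariances for brevity
mus = rng.normal(size=(K, d))
Sigmas = np.stack([np.eye(d) for _ in range(K)])

# step 4: simulate the hidden chain and emit the observations
states = np.empty(N, dtype=int)
X = np.empty((N, d))
states[0] = rng.integers(K)
for n in range(N):
    if n > 0:
        states[n] = rng.choice(K, p=A[states[n - 1]])
    X[n] = rng.multivariate_normal(mus[states[n]], Sigmas[states[n]])
```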
Moreover, if we fix $S_t = \Bar{S}_t$ during the transition from state $i$ to state $j$, the data will be drawn from a weighted sum of the parameters which characterise the two states. If during this transition a third state is drawn, then the weights are recalibrated over the three states, and so on. Our datasets thus differ according to $S_t$ and to how the transition weights are defined. In particular, we have: \begin{enumerate} \item the \textbf{sudden transition} dataset: $S_t=1$ is fixed and therefore there is no smooth transition; \item the \textbf{fixed smooth transition} dataset: $S_t=\Bar{S}_t\in\mathbb{Z}_+$ is fixed and the weighted sum takes into account how many consecutive steps towards the destination state have been taken so far; \item the \textbf{random smooth transition} dataset: $S_t=\Bar{S}_t\sim \mathcal{U}(a,b)$, with $a,b>1$ and $b>a$. The weights are set as in 2., but if another state is drawn during a transition, the new $\Bar{S}_t$ replaces the previous one and the weights are recomputed accordingly; \item the \textbf{random smooth transition and random weights} dataset: the setting is as in 3., but the weights are drawn randomly from Dir$(\+\alpha)$, where $\+\alpha$ is a vector of ones. \end{enumerate} \section{Evaluation metrics} We use a metric score for each of the following aspects: \begin{enumerate} \item \textbf{clustering performance}: we compare the clustering results in terms of the V-measure \cite{rosenberg2007v}, which returns a value $v\in[0,1]$, where $v=0$ means that the cluster labels are assigned completely randomly, while $v=1$ means a perfect match between the true labels and the ones found by the models. \item \textbf{network inference performance}: in order to evaluate the performance of the methods, we need to identify a map between the true clusters and the inferred ones so as to compare the underlying graphs.
Such a map is obtained by taking the maximum per row of the contingency table of the true and predicted labels. We then consider the true and inferred graphs as binary classes ($0$: no edge identified, $1$: edge identified) and compute the Matthews correlation coefficient (MCC) \cite{matthews1975comparison}, which returns a value in the interval $[-1,1]$, where $0$ corresponds to chance. \item \textbf{forecasting performance}: we use as score the Mean Absolute Error (MAE), which measures the error between the true value of the next point and the predicted one. Since we are predicting $d$ values for each future time point, we compute the mean MAE across the entries of the vector: $$ \text{MAE} = \frac1N\sum_{n=1}^N \bigg(\frac1d\sum_{j=1}^d |x_{nj} - \hat{x}_{nj} |\bigg). $$ \end{enumerate} \section{Higher order extension: Memory Time Adaptive Gaussian model (MemTAGM)} In many real world applications events depend on their past realizations. We can therefore exploit more information from the data if we consider a higher-order Markov process in which the probability of the state $\+z_n$ depends not only on $\+z_{n-1}$ but also on the previous $r$ states, according to the choice of $r$. TAGM can be extended to such higher-order sequential relationships. We consider a homogeneous Markov process of order $r\in\mathbb{Z}^+$ over a finite state set $\{1,\dots,K\}$ with hidden sequence $\{\+z_n\}_{n=1}^N$. This stochastic process satisfies \[ p(\+z_n|\{\+z_\ell\}_{\ell<n})=p(\+z_n|\{\+z_\ell\}_{\ell=n-r}^{n-1}), \] that is, $\+z_n$ depends only on the $r$ previous hidden states; we assume that the process is homogeneous, i.e., the transition probabilities are independent of $n$. To be as general as possible, we allow the emission probability of $\+x_n$ to depend not only on $\+z_n$ but also on the previous $m\in\mathbb{Z}^+$ states: \[ p(\+x_n|\{\+x_\ell\}_{\ell<n},\{\+z_\ell\}_{\ell\leq n})= p(\+x_n|\{\+z_\ell\}_{\ell =n-(m-1)}^n).
\] Each observation is thus conditionally independent of the previous observations and of the earlier state-sequence history, given the current and the preceding $m-1$ states. The idea is to transform the high-order hidden Markov model (HHMM) into a first order hidden Markov model (HMM). This can be done via the following two propositions; we omit the proofs, which can be found in \cite{hadar2009high}. \begin{proposition} Let $\+Z_n = [\+z_n,\+z_{n-1},\dots,\+z_{n-(\nu-1)}]^\top$. The process $\{\+Z_n\}$ is a first order homogeneous Markov process for any $\nu\geq r$, taking values in $\mathcal{S}^\nu$. \end{proposition} \begin{proposition}\label{prop2} Let $\nu= \max\{ r,m\}$. The state sequence $\{ \+Z_n\}$ and the observation sequence $\{ \+x_n\} $ satisfy \[ p(\+x_n|\{\+x_\ell\}_{\ell<n},\{\+z_\ell\}_{\ell\leq n})= p(\+x_n|\+Z_n) \] and thus constitute a first order HMM. \end{proposition} We can thus reformulate the HHMM as a first order HMM with $K^\nu$ states, where $\nu = \max\{r,m\}$. Since the first $\nu-1$ entries of $\+Z_n$ are equal to the last $\nu-1$ entries of $\+Z_{n+1}$, a transition from state $i$ to state $j$ (with $\+Z_n$ and $\+Z_{n+1}$ encoded as the integers $i$ and $j$) is possible only if $\lfloor i/K\rfloor=j-\lfloor j/K^{\nu-1}\rfloor K^{\nu-1}$, and thus \[ A_{i,j}=0\quad\text{if }\Big\lfloor \frac{i}{K}\Big\rfloor\not = j-\Big\lfloor \frac{j}{K^{\nu-1}}\Big\rfloor K^{\nu-1}. \] Therefore we can use the EM algorithm to find the optimal parameters, changing the number of states to $K^\nu$. The posteriors $\gamma(z_{n,j})$ that contribute to the M step quantities of state $i$ are those with $j$ in the set \[ \mathcal{I}_m(i)=\Bigg\{\Big\lfloor\frac{i}{K^{\nu-m}}\Big\rfloor K^{\nu-m},\Big\lfloor\frac{i}{K^{\nu-m}}\Big\rfloor K^{\nu-m}+1,\dots,\Big(\Big\lfloor\frac{i}{K^{\nu-m}}\Big\rfloor +1\Big)K^{\nu-m}-1\Bigg\}.
\] Therefore the means become \[ \mu_i=\frac{\sum_{n=1}^N\+x_n\sum_{j\in\mathcal{I}_m(i)}\gamma(z_{n,j})}{\sum_{n=1}^N\sum_{j\in\mathcal{I}_m(i)}\gamma(z_{n,j})}, \] and the empirical covariances to substitute into the graphical lasso equation are \[ \Tilde{S}_i= \frac{\sum_{n=1}^N(\+x_{n}-\+\mu_i)(\+x_{n}-\+\mu_i)^\top\sum_{j\in\mathcal{I}_m(i)}\gamma(z_{n,j})}{\sum_{n=1}^N\sum_{j\in\mathcal{I}_m(i)}\gamma(z_{n,j})}, \] with the hyper-parameter $\tilde{\lambda}_i=\frac{\lambda}{\sum_{n=1}^N\sum_{j\in\mathcal{I}_m(i)}\gamma(z_{n,j})}$. The transition probability matrix becomes \[ A_{i,j}= \begin{cases} &\frac{\sum_{n=2}^N\sum_{k\in\mathcal{I}_r(i)}\xi(z_{n-1,k},z_{n,\lfloor\frac{k}{K}\rfloor+\lfloor\frac{j}{K^{\nu-1}}\rfloor K^{\nu-1}})}{ \sum_{l=1}^{K^\nu}\sum_{n=2}^N\sum_{k\in\mathcal{I}_r(i)}\xi(z_{n-1,k},z_{n,l})}\quad\text{if }\lfloor\frac{i}{K}\rfloor=j-\lfloor\frac{j}{K^{\nu-1}}\rfloor K^{\nu-1},\\ &0\quad\text{otherwise}. \end{cases} \] Finally, the initial state probabilities are given by \[ \pi_i=\gamma(z_{1,i}). \] Therefore, if we increase the number of states to $K^{\nu}$ and replace the TAGM M step formulas with the ones just derived, we obtain MemTAGM. We test the performance of MemTAGM in subsection 9.2, where we compare it with TAGM. \section{On-line learning: Incremental Time Adaptive Gaussian model (IncTAGM)} In many applications it is important to update the TAGM parameters almost instantaneously every time a new observation arrives. Since TAGM does not allow such a quick response, we extend it to an incremental version, IncTAGM. It starts as a standard TAGM, trained on an initial set of observations, and then updates its current parameters $\pi,A, \mu, \+\Theta$ as new data arrive. After the standard TAGM has finished training on its observation set, the revised $\alpha, \beta, \xi$ and $\gamma$ variables are calculated from the new set of observations.
To update the model incrementally, we need recursive equations for $\alpha$ and $\beta$ which depend only on past values. Notice that the $\alpha$ recursion is already of this form, while the $\beta$ recursion needs an approximation to be cast in that form. In fact, if we assume that $\beta(z_{T,i})\simeq \beta(z_{T,j})$ for every $i\not=j$, the $\beta$ recursion becomes \begin{equation} \beta(\+z_{T+1}) =\frac{\beta(\+z_{T}) }{\sum_{\+z_{T+1}}p(\+x_{T+1}|\+z_{T+1})p(\+z_{T+1}|\+z_T)}. \end{equation} The M step optimal parameters are updated as follows: the initial state probabilities $\+\pi$, \begin{equation} \pi_k' =\gamma(z_{1,k}), \end{equation} and the transition matrix $\+A$, \begin{align} A^{T+1}_{j,k} &= \frac{ \sum_{n=2}^T\xi(z_{n-1,j},z_{n,k})+\xi(z_{T,j},z_{T+1,k})}{\sum_{l=1}^K\sum_{n=2}^T\xi(z_{n-1,j},z_{n,l})+\sum_{l=1}^K\xi(z_{T,j},z_{T+1,l})}\notag\\ &= \frac{ \sum_{n=2}^T\xi(z_{n-1,j},z_{n,k})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})} + \frac{\xi(z_{T,j},z_{T+1,k})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})}\notag\\ &= \frac{ \sum_{n=2}^{T}\gamma(z_{n-1,j})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})} \frac{ \sum_{n=2}^T\xi(z_{n-1,j},z_{n,k})}{\sum_{n=2}^{T}\gamma(z_{n-1,j})} + \frac{\xi(z_{T,j},z_{T+1,k})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})}\notag\\ &= \frac{ \sum_{n=2}^{T}\gamma(z_{n-1,j})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})}A^{T}_{j,k} + \frac{\xi(z_{T,j},z_{T+1,k})}{\sum_{n=2}^{T+1}\gamma(z_{n-1,j})}.\label{recA} \end{align} Note that if we sum over $k\in\{1,\dots,K\}$, the row normalization still holds.
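The rescaling in Eq.~\eqref{recA} can also be checked numerically: updating $A$ from the running $\gamma$ sums reproduces the batch estimate exactly. A minimal numpy sketch (variable names and the small responsibilities are illustrative, not the authors' implementation):

```python
import numpy as np

def update_A(A_prev, gamma_sum_prev, xi_new):
    """One incremental step of the transition-matrix update.

    A_prev         : (K, K) previous estimate A^T
    gamma_sum_prev : (K,)   running sums sum_{n=2}^{T} gamma(z_{n-1,j})
    xi_new         : (K, K) xi(z_{T,j}, z_{T+1,k}) for the new observation
    """
    gamma_T = xi_new.sum(axis=1)               # gamma(z_{T,j}) = sum_k xi(., k)
    gamma_sum_new = gamma_sum_prev + gamma_T
    A_new = ((gamma_sum_prev / gamma_sum_new)[:, None] * A_prev
             + xi_new / gamma_sum_new[:, None])
    return A_new, gamma_sum_new

# illustrative responsibilities for two consecutive steps
xi1 = np.array([[0.3, 0.1], [0.2, 0.4]])
xi2 = np.array([[0.1, 0.1], [0.3, 0.1]])
A_batch1 = xi1 / xi1.sum(axis=1, keepdims=True)      # batch estimate, step 1
A_inc, _ = update_A(A_batch1, xi1.sum(axis=1), xi2)  # incremental, step 2
A_batch2 = (xi1 + xi2) / (xi1 + xi2).sum(axis=1, keepdims=True)
```

Here `A_inc` coincides with the batch estimate `A_batch2`, and its rows remain normalized by construction.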
Similarly, we obtain the formula for the means, \begin{equation} \mu_k^{T+1}=\frac{ \sum_{n=1}^{T}\gamma(z_{n,k})}{\sum_{n=1}^{T+1}\gamma(z_{n,k})}\mu_k^T+\frac{\gamma(z_{T+1,k})\+x_{T+1}}{\sum_{n=1}^{T+1}\gamma(z_{n,k})}, \end{equation} and for the empirical covariances, \begin{equation} \Tilde{S}^{T+1}_k=\frac{ \sum_{n=1}^{T}\gamma(z_{n,k})}{\sum_{n=1}^{T+1}\gamma(z_{n,k})}\Tilde{S}^{T}_k+ \frac{\gamma(z_{T+1,k})(\+x_{T+1}-\+\mu_k^{T+1})(\+x_{T+1}-\+\mu_k^{T+1})^\top}{\sum_{n=1}^{T+1}\gamma(z_{n,k})}, \end{equation} with the hyper-parameter $\tilde{\lambda}_k=\frac{\lambda}{\sum_{n=1}^{T+1}\gamma(z_{n,k})}$. We test IncTAGM performance in Subsection 9.2, where we compare it with TAGM. \subsection{Slide Incremental Time Adaptive Gaussian model (S-IncTAGM)} The training of new data points results in the accumulation of an increasingly large observation set. As a result, if the time sequence considered is long and the first point lies far in the past with respect to the last one, the initial training observations may become outdated after many updates, no longer carrying useful information for analyzing the more recent points. Therefore, the new addition to IncTAGM is a fixed sliding window that effectively analyzes the data (appropriately discarding the outdated observations) whilst updating the model parameters. The estimation of the $\alpha, \beta, \xi$ and $\gamma$ variables remains the same as in the IncTAGM algorithm. What changes are the $A$, $\+\mu$ and $\+\Theta$ updates.
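The incremental mean and covariance updates above follow the same rescaling pattern as the transition matrix. Note that the running weighted mean is exact, while the covariance recursion keeps the old terms centered at the old means and is therefore only an approximation of the batch estimate. A minimal numpy sketch (names and data are illustrative):

```python
import numpy as np

def update_mean_cov(mu_prev, S_prev, gsum_prev, gamma_new, x_new):
    # gsum_prev: running sum_{n=1}^{T} gamma(z_{n,k}) for one state k
    gsum = gsum_prev + gamma_new
    mu = (gsum_prev / gsum) * mu_prev + (gamma_new / gsum) * x_new
    diff = x_new - mu                      # centered at the *updated* mean
    S = (gsum_prev / gsum) * S_prev + (gamma_new / gsum) * np.outer(diff, diff)
    return mu, S, gsum

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                # 5 observations, d = 3
g = rng.random(5) + 0.1                    # illustrative responsibilities

mu, S, gsum = np.zeros(3), np.zeros((3, 3)), 0.0
for x_new, gamma_new in zip(X, g):
    mu, S, gsum = update_mean_cov(mu, S, gsum, gamma_new, x_new)

mu_batch = (g[:, None] * X).sum(axis=0) / g.sum()  # batch weighted mean
```

After the loop, `mu` matches the batch weighted mean `mu_batch` exactly, while `S` is the approximate running covariance fed to the graphical lasso.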
Using the \textit{simple moving average} (SMA) definition \begin{equation} \mathrm{sma}=\frac{x_1+x_2+\dots+x_n+x_{n+1}-x_1}{n}= \mathrm{ave} +\frac{x_{n+1}}{n}-\frac{x_1}{n}, \end{equation} where $\mathrm{ave}= \frac{x_1+x_2+\dots+x_n}{n}$, we update $A$, $\+\mu$ and $\+\Theta$ in the following way: \begin{align} A^{T+1}_{j,k} &= \frac{\sum_{n=3}^{T+1}\xi(z_{n-1,j},z_{n,k})}{\sum_{n=3}^{T+1}\gamma(z_{n-1,j})},\\ \mu_k^{T+1} &=\frac{ \sum_{n=2}^{T+1}\gamma(z_{n,k})\+x_n}{\sum_{n=2}^{T+1}\gamma(z_{n,k})},\\ \Tilde{S}^{T+1}_k&=\frac{ \sum_{n=2}^{T+1}\gamma(z_{n,k})(\+x_{n}-\+\mu_k^{T+1})(\+x_{n}-\+\mu_k^{T+1})^\top}{\sum_{n=2}^{T+1}\gamma(z_{n,k})}, \end{align} with $\tilde{\lambda}_k=\frac{\lambda}{\sum_{n=2}^{T+1}\gamma(z_{n,k})}$. \subsection{Higher order and on-line extension experiments} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{incremental_comparison.pdf} \includegraphics[width=0.5\textwidth]{HHMM_results.pdf} \caption{Comparison of TAGM with its extensions in terms of V-measure and MCC. On the left panel we plot the V-measure and MCC ratios of IncTAGM to TAGM as the number of batch training data increases. On the right panel we plot the V-measure and MCC of MemTAGM and TAGM as the memory of the hidden Markov process increases. } \label{fig:incremental} \end{figure} Using the same setting to generate synthetic data as in Section 5 of the main document, we perform two types of experiments to compare the higher order and the incremental extensions with the batch model described in the main document. \paragraph{TAGM vs IncTAGM} For this experiment we generated a synthetic dataset with $K=5$ states, $d=10$ dimensions and $N=2000$ observations. We want to assess the behaviour of IncTAGM w.r.t. TAGM as the percentage $\hat{N} = \%N$ of data given as initial input to IncTAGM increases.
The results are shown on the left panel of Figure~\ref{fig:incremental}, where we can see that IncTAGM asymptotically tends to the performance of TAGM as the percentage of input data reaches 100\%, but the on-line model's performance decreases very quickly as more of the data are processed through on-line parameter updates. \paragraph{TAGM vs MemTAGM} For this experiment we generated a synthetic dataset with $K=3$ states, $d=10$ dimensions and $N=2000$ observations. In this case, however, we let the memory of the hidden Markov process vary. In this way we are able to evaluate the behaviour of MemTAGM w.r.t. TAGM as the memory of the hidden Markov process increases. The results are shown on the right panel of Figure~\ref{fig:incremental}, where we can see that the performance of MemTAGM in terms of V-measure and MCC is slightly better than that of TAGM as the hidden Markov process memory increases. Recall that the time complexity of the forward-backward algorithm is $O(K^2N)$; since the number of MemTAGM states is $K^\nu$, where $\nu$ is the Markov process memory, the performance improvement does not justify the additional computational time. \subsection{Hidden Markov Model} HMMs are statistical models widely applied to sequential data. HMMs assume that the series of observations is generated by a given number $K$ of (hidden) internal states which follow a Markov process (see left panel of Figure~\ref{fig:hmm}) \cite{baum1966statistical}. Given the $N$ sequential (temporal) observations, we pair each of them with a hidden (latent) state $\mathbf{z}_n = \sum_{i=1}^K\mathds{1}_{\{i=k\}} \+e_i$, where $\+e_i$ is the $i$-th vector of the $K$-dimensional natural basis, which has a non-zero component only at position $i$, and $k$ is the \emph{cluster} label of observation $n$. We use the notation $z_{n,k}$ to indicate the $k$-th positional value of the vector $\+z_{n}$.
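As a toy illustration of this generative process (a Markov chain over hidden states, with one emission per step), consider the following minimal sketch; all sizes and parameters are illustrative, and unit-covariance Gaussian emissions are assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

K, N, d = 2, 6, 3                          # illustrative sizes
pi = np.array([0.5, 0.5])                  # initial state probabilities
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # transition matrix
means = np.array([[0.0, 0.0, 0.0],         # one emission mean per state
                  [5.0, 5.0, 5.0]])

# sample the hidden Markov chain, then emit one observation per step
z = np.zeros(N, dtype=int)
z[0] = rng.choice(K, p=pi)
for n in range(1, N):
    z[n] = rng.choice(K, p=A[z[n - 1]])
X = means[z] + rng.normal(size=(N, d))     # unit-covariance Gaussian emissions
```

The integer labels in `z` correspond to the one-hot vectors $\+z_n$ of the text, and `X` plays the role of the observed sequence.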
The observations $\mathbf{x}_n$ and $\mathbf{x}_{n+1}$ become independent given their hidden states (see Figure~\ref{fig:hmm}, left panel). The hidden states, on the other hand, follow a Markov chain which satisfies the conditional independence property $ \+z_{n+1}\independent \+z_{n-1}|\+z_n $. \\ The HMM joint distribution on both the observations $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N)$ and the latent variables $\mathbf{Z} = (\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_N)$ is then given by \begin{equation}\label{eq:probability} p(\+X,\+Z |\+\pi,A,\+\phi)= p(\+z_1 |\+\pi)\Big[\prod_{n=2}^Np(\+z_n|\+z_{n-1}, A)\Big]\prod_{n=1}^Np(\+x_n|\+z_n, \+\phi). \end{equation} The probability $p(\+z_1|\+\pi)=\prod_{k=1}^K\pi_k^{z_{1,k}} \quad\text{with}\quad\sum_k\pi_k=1$ is the probability of the initial latent node $\+z_1$, which differs from the other states as it has no parent node. Thus, its marginal distribution is embodied by a vector of probabilities $\+\pi$ whose elements $\pi_k \equiv p(z_{1,k} = 1)$ represent the probability that the first observation belongs to the $k$-th state.\\ The probability $p(\+z_n|\+z_{n-1},A)=\prod_{k=1}^K\prod_{j=1}^KA_{j,k}^{z_{n-1,j}z_{n,k}}$ is the \textit{transition probability} of moving from one state to another, where $A \in [0,1]^{K \times K}$ is the \textit{transition matrix}, which we assume to be constant in time. It is defined as $ A_{j,k} = p(z_{n,k} = 1|z_{n-1,j}=1)$ with $0\leq A_{j,k}\leq 1$ and $ \sum_k A_{j,k}=1$. \\ Lastly, $p(\+x_n|\+z_n,\+\phi) = \prod_{k=1}^Kp(\+x_n|\phi_k)^{z_{n,k}}$ are the \textit{emission probabilities}, where $\+\phi=\{\phi_1,\dots,\phi_K\}$ is a set of $K$ different parameters governing the distributions, one for each of the $K$ possible states. \subsection{Gaussian Graphical Models} GGMs are typically employed in the analysis of multivariate problems where one seeks to understand the relationships among variables.
A GGM is a probability distribution which factorizes according to an undirected graph whose set of edges uniquely determines a multivariate normal distribution $\mathcal{N}(\mu, \Sigma)$. Indeed, the \emph{precision matrix} $\Theta = \Sigma^{-1}$ encodes the conditional independence between pairs of variables, and $\Theta(i,j) =0$ implies the absence of an edge in the graph. Thus, $\Theta$ is the weighted adjacency matrix of the graph (see right panel of Figure~\ref{fig:hmm}) \cite{lauritzen1996graphical}.\\ Given $M$ observations of $d$ variables \(\mathbf{X} \in \mathbb{R}^{M\times d}\), we aim at inferring the underlying graph corresponding to the precision matrix $\Theta$. In order to perform such inference we need to assume that the underlying graph is sparse. This is due to the combinatorial nature of the problem, which must be constrained in order to have identifiability guarantees \cite{friedman2008sparse}. The most common way of inferring such a graph is through the \emph{Graphical Lasso} (GL) \cite{friedman2008sparse}, a penalized Maximum Likelihood Estimation (MLE) method that solves the following problem \begin{equation}\label{GLexpr} \underset{\Theta \succ 0}{\text{argmin}} ~~\text{tr}(\Theta S) - \text{log det}(\Theta) + \lambda\|\Theta\|_{1, od} \end{equation} where $S$ is the empirical covariance matrix of the input data, defined as $S = \frac{1}{M} X^\top X$, $\text{tr}(\Theta S)~-~\text{log det}(\Theta) $ is the negative log likelihood of the multivariate normal distribution and $\|\cdot\|_{1, od}$ is the off-diagonal $\ell_1$-norm that imposes sparsity on the precision matrix $\Theta$ without considering the diagonal elements. \subsection{E step} An efficient algorithm to evaluate the quantities $\gamma(\+z_n)$ and $\xi(\+z_{n-1},\+z_n)$ is the \textit{forward-backward} algorithm described in \cite{baum1970maximization}. Here we want to emphasize the most relevant formulas which characterize the algorithm.
If the reader is interested in more details and in how to derive them, we refer to \cite{bishop2006pattern}. Using Bayes’ theorem, we have \begin{equation}\label{gammazn} \gamma(\+z_n)=\frac{p(\+X|\+z_n)p(\+z_n)}{p(\+X)}= \frac{p(\+x_1,\dots,\+x_n,\+z_n)p(\+x_{n+1},\dots,\+x_N|\+z_n)}{p(\+X)}= \frac{\alpha(\+z_n)\beta(\+z_n)}{p(\+X)}, \end{equation} where we have defined \begin{align*} \alpha(\+z_n)&=p(\+x_1,\dots,\+x_n,\+z_n)\\ \beta(\+z_n)&=p(\+x_{n+1},\dots,\+x_N|\+z_n). \end{align*} $\alpha(\+z_n)$ is also called the \emph{forward process}, while $\beta(\+z_n)$ is the \emph{backward process}. It is possible to prove that, by the conditional independence assumption $\+z_{n+1}\independent \+z_{n-1}|\+z_n$, the forward and backward processes are characterized by two recursive equations: \begin{align} \alpha(\+z_n)&=p(\+x_n|\+z_n)\sum_{\+z_{n-1}}\alpha(\+z_{n-1})p(\+z_n|\+z_{n-1}),\label{alp}\\ \beta(\+z_n)&=\sum_{\+z_{n+1}}\beta(\+z_{n+1})p(\+x_{n+1}|\+z_{n+1})p(\+z_{n+1}|\+z_{n}),\label{bet} \end{align} with the initial conditions \begin{align*} \alpha(\+z_1)&=\prod_{k=1}^K\{\pi_k\mathcal{N}(\+x_1|\mu_k,\Theta^{-1}_k)\}^{z_{1,k}}\\ \beta(\+z_N)&=(1,\dots,1). \end{align*} If we sum both sides of \eqref{gammazn} over $\+z_n$, and use the fact that the left-hand side is a normalized distribution, we obtain \[ p(\+X)=\sum_{\+z_n}\alpha(\+z_n)\beta(\+z_n)=\sum_{\+z_N}\alpha(\+z_N). \] Using similar arguments we have \[ \xi(\+z_{n-1},\+z_n)=\frac{\alpha(\+z_{n-1}) p(\+x_{n}|\+z_{n})p(\+z_n|\+z_{n-1})\beta(\+z_n)}{p(\+X)}. \] For moderate chain lengths, the values of $\alpha(\+z)$ can go to zero exponentially quickly. We therefore work with re-scaled versions of $\alpha(\+z)$ and $\beta(\+z)$ whose values remain of order unity. The corresponding scaling factors cancel out when we use these re-scaled quantities in the EM algorithm.
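To make the recursions \eqref{alp} and \eqref{bet} and the rescaling concrete, here is a minimal numpy sketch of the scaled forward-backward pass on a toy chain; the emission likelihoods $p(\+x_n|\+z_n)$ are precomputed into a matrix, and all numbers are illustrative. The brute-force enumeration at the end is only a sanity check of $p(\+X)=\prod_n c_n$:

```python
import numpy as np

def forward_backward(pi, A, B):
    """Scaled forward-backward pass.

    pi : (K,)   initial state probabilities
    A  : (K, K) transition matrix, A[j, k] = p(z_n = k | z_{n-1} = j)
    B  : (N, K) emission likelihoods, B[n, k] = p(x_n | z_n = k)
    """
    N, K = B.shape
    alpha = np.zeros((N, K)); beta = np.ones((N, K)); c = np.zeros(N)
    a = pi * B[0]
    c[0] = a.sum(); alpha[0] = a / c[0]
    for n in range(1, N):                      # scaled forward recursion
        a = B[n] * (alpha[n - 1] @ A)
        c[n] = a.sum(); alpha[n] = a / c[n]
    for n in range(N - 2, -1, -1):             # scaled backward recursion
        beta[n] = (A @ (B[n + 1] * beta[n + 1])) / c[n + 1]
    gamma = alpha * beta                       # posteriors p(z_n | X)
    return alpha, beta, c, gamma

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.2], [0.1, 0.8], [0.5, 0.5]])
alpha, beta, c, gamma = forward_backward(pi, A, B)

# sanity check: p(X) = prod_n c_n equals brute-force enumeration over paths
pX_brute = sum(pi[s0] * B[0, s0] * A[s0, s1] * B[1, s1] * A[s1, s2] * B[2, s2]
               for s0 in range(2) for s1 in range(2) for s2 in range(2))
```

Each row of `alpha` and of `gamma` sums to one, which is exactly the numerically well-behaved property the rescaling is designed to give.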
We define a normalised version of $\alpha(\+z)$ as \[ \hat{\alpha}(\+z_n) = p(\+z_n| \+x_1, \dots, \+x_n) = \frac{\alpha(\+z_n)}{p( \+x_1, \dots, \+x_n)}, \] which we expect to be numerically well behaved because it is a probability distribution over $K$ states for any value of $n$. In order to relate it to the original $\alpha(\+z)$ variables, we introduce the scaling factors \[ c_n = p( \+x_n| \+x_1, \dots, \+x_{n-1}), \] so that \[ p( \+x_1, \dots, \+x_n) = \prod_{m=1}^nc_m. \] From the $\alpha$ and $\beta$ recursions \eqref{alp} and \eqref{bet}, the scaled recursion for $\hat{\alpha}(\+z)$ becomes \[ \hat{\alpha}(\+z_n) = \frac{p(\+x_n |\+z_n) \sum_{\+z_{n-1} }\hat{\alpha}(\+z_{n-1})p(\+z_n |\+z_{n-1})}{c_n} \] and, similarly, \[ \hat{\beta}(\+z_n) = \frac{\sum_{\+z_{n+1} }\hat{\beta}(\+z_{n+1})p(\+x_{n+1} |\+z_{n+1}) p(\+z_{n+1} |\+z_{n})}{c_{n+1}}. \] Note that for the computation of the $\beta$'s we can reuse the scaling factors computed in the $\alpha$ phase. We can also notice that the probability distribution of $\+X$ becomes \[ p(\+X) = \prod_{n=1}^N c_n \] and that \begin{align*} \gamma(\+z_n) &= \hat{\alpha}(\+z_n)\hat{\beta}(\+z_n)\\ \xi(\+z_{n-1},\+z_n ) &= c^{-1}_n \hat{\alpha}(\+z_{n-1})p(\+x_n |\+z_n)p(\+z_n|\+z_{n-1})\hat{\beta}(\+z_{n}). \end{align*} \subsection{M step} Given $\gamma(\+z_n)$ and $\xi(\+z_{n-1},\+z_n)$ computed in the E step, the M step finds the optimal parameters $\+\theta$. To maximize with respect to $\+\pi$ and $\+A$, we keep only the addends of $Q(\theta,\theta_{\text{old}})$ in which these parameters appear, using the appropriate Lagrange multipliers to take the constraints into account.
\begin{align*} \frac{\partial Q(\pi_k)}{\partial \pi_k}&= \frac{\partial \Big(\sum_{k=1}^K\gamma(z_{1,k})\ln(\pi_k)+ \lambda(1-\sum_{k=1}^K\pi_k)\Big)}{\partial \pi_k}= \frac{\gamma(z_{1,k})}{\pi_k}-\lambda=0. \end{align*} We multiply both sides by $\pi_k$, sum over all $k\in \{1,\dots,K\}$, and obtain $\lambda= \sum_{j=1}^K\gamma(z_{1,j})$, using that $\sum_{j=1}^K\pi_j=1$. Therefore \[ \pi_k = \frac{\gamma(z_{1,k})}{ \sum_{j=1}^K\gamma(z_{1,j})}. \] Analogously for $\+A$, \begin{align*} \frac{\partial Q(A_{j,k})}{\partial A_{j,k}}&= \frac{\partial \Big(\sum_{n=2}^N\sum_{j=1}^K\sum_{k=1}^K \xi(z_{n-1,j},z_{n,k})\ln A_{j,k}+ \lambda(1-\sum_{k=1}^KA_{j,k})\Big)}{\partial A_{j,k}}\\& = \frac{\sum_{n=2}^N\xi(z_{n-1,j},z_{n,k})}{ A_{j,k}}-\lambda=0. \end{align*} Similarly, we multiply both sides by $A_{j,l}$, sum over all $l\in \{1,\dots,K\}$, and obtain $\lambda= \sum_{l=1}^K\sum_{n=2}^N\xi(z_{n-1,j},z_{n,l})$, using that $\sum_{l=1}^K A_{j,l}=1$. Therefore \[ A_{j,k} = \frac{\sum_{n=2}^N\xi(z_{n-1,j},z_{n,k})}{ \sum_{l=1}^K\sum_{n=2}^N\xi(z_{n-1,j},z_{n,l})}. \] When we differentiate with respect to $\mu_k$ we obtain the same equation as in the Gaussian mixture model, so we directly state the result: \[ \mu_k=\frac{\sum_{n=1}^N\gamma(z_{n,k})\+x_n}{\sum_{n=1}^N\gamma(z_{n,k})}. \] The maximization over $\Theta_k$ is performed in Section 3 of the main paper. For a detailed explanation of all the properties of this algorithm we refer to \cite{neath2013convergence}. The E step corresponds to evaluating the expected value of the log-likelihood at $\+\theta$.
Given the values $\theta_{\text{old}}$, \textit{i.e.}, the parameter values at the previous iteration, the expectation is defined by the function \begin{align}\label{Q} \begin{split} Q(\theta,\theta_{\text{old}})&=\sum_{k=1}^K\gamma(z_{1,k})\ln\pi_k+\sum_{n=2}^N\sum_{j=1}^K\sum_{k=1}^K \xi(z_{n-1,j},z_{n,k})\ln A_{j,k} \\ &+\sum_{n=1}^N \sum_{k=1}^K \gamma(z_{n,k})\ln \mathcal{N}(\+x_n| \mu_k, \Theta_k^{-1})-\frac{\lambda}{2}\sum_{k=1}^K||\Theta_k||_{1,od}. \end{split} \end{align} In this function, $\gamma(\+z_n)= p(\+z_n|\+X,\+\theta_{\text{old}})$ and $\xi(\+z_{n-1},\+z_n)= p(\+z_{n-1},\+z_n|\+X,\+\theta_{\text{old}})$ are the posterior expectations of the latent variables, and they are typically computed by a \emph{forward-backward} algorithm \cite{bishop2006pattern}.\\ Once the expectation has been computed, the M step consists in finding the $\+\theta$ values which maximize the function $Q$. The derivation of the new parameters $\pi, A$ and $\mu$ is straightforward from the literature \cite{bishop2006pattern}. The maximization of Equation~\eqref{Q} w.r.t. $\+\Theta$ instead requires further attention: it can be shown that, given the imposition of the Laplacian prior, it reduces to the Graphical Lasso (Equation~\eqref{GLexpr}) after a few algebraic manipulations. Furthermore, the maximization over $\+\Theta$ can be performed separately for each $\Theta_k$, as the maximizations for different $k$ are independent of each other.
Thus, if we denote by $\bar{\+x}_n=\+x_n-\+\mu_k$ the centered observations belonging to cluster $k$, the joint log-likelihood at fixed $k$ writes out as \begin{align} \small &\sum_{n=1}^N\gamma(z_{n,k})\ln\mathcal{N}(\mathbf{x}_n|\mu_k,\Theta_k^{-1})-\frac{1}{2}\lambda || \Theta_k||_{1,od}\notag\\ =& \sum_{n=1}^N\frac{\gamma(z_{n,k})}{2}\ln\text{det}\Theta_k-\frac{1}{2}\text{tr}\Big(\sum_{n=1}^N\gamma(z_{n,k})\big(\bar{\+x}_n\bar{\+x}_n^\top\big)\Theta_k\Big)-\frac{1}{2}\lambda || \Theta_k||_{1,od}\notag\\ =& \frac{\sum_{n=1}^N\gamma(z_{n,k})}{2}\Big[\ln\text{det}\Theta_k-\text{tr}\Big(\Tilde{S}_k\Theta_k\Big)-\Tilde{\lambda}_k|| \Theta_k||_{1,od}\Big]\label{Qthk} \end{align} where $\Tilde{S}_k=\frac{\sum_{n=1}^N\gamma(z_{n,k})\,\bar{\mathbf{x}}_n\bar{\mathbf{x}}_n^\top}{\sum_{n=1}^N\gamma(z_{n,k})}$ is the weighted empirical covariance matrix, $\Tilde{\lambda}_k=\frac{\lambda}{\sum_{n=1}^N\gamma(z_{n,k})}$, and \begin{equation}\label{sigmastarlasso} \Theta_k= \arg\max_{\Theta_k}\Big\{ \ln\text{det}\Theta_k-\text{tr}\Big(\Tilde{S}_k\Theta_k\Big)-\Tilde{\lambda}_k|| \Theta_k||_{1,od} \Big\}, \end{equation} which is equivalent to a Graphical Lasso \cite{friedman2008sparse}. Equation~\eqref{sigmastarlasso} is a convex problem with guarantees of reaching a global optimum. In contrast, Equation~\eqref{Q} is non-convex and, depending on the initialization, different local optima may be reached. \input{extensions} \section{TAGM extensions} The proposed approach can benefit from two types of extensions. The first one consists of higher-order Markov relationships among the latent states. This can be achieved following \cite{hadar2009high}, with the main drawback of a much higher computational time for learning, as the transition matrix and the corresponding initial state dimensions increase. The second extension consists of an on-line learning version that allows employing the model in more applied settings where we may deal with high-frequency data \cite{chis2015adapting}.
Indeed, in a situation where new observations arrive at a high rate (every second or even millisecond), we want to be able to fine-tune the model on-line in order to take such observations into account instantaneously. This allows us to promptly gain insights on the data and possibly predict the next time point. The weakness of this extension is that it requires an approximation which makes the updated parameters less accurate with respect to the batch (original) version. We discuss these two extensions in further detail in Sections 8 and 9 of the Supplementary material. In particular, we present two experiments comparing them with TAGM, which show their potential weaknesses. \subsection{Higher order extension: Memory Time Adaptive Gaussian model (MemTAGM)} Real-world applications sometimes involve events which rely on their past realizations. We can therefore exploit more information from the data if we consider a higher-order Markov process, in which the probability of $\+z_n$ depends not only on $\+z_{n-1}$ but on the $r$ most recent past states. TAGM can be extended to such higher-order sequential relationships. We consider a homogeneous Markov process of order $r\in\mathbb{Z}^+$ over a finite state set $\{1,\dots,K\}$ with hidden sequence $\{\+z_n\}_{n=1}^N$. This stochastic process satisfies \[ p(\+z_n|\{\+z_\ell\}_{\ell<n})=p(\+z_n|\{\+z_\ell\}_{\ell=n-r}^{n-1}) \] or, in other words, $\+z_n$ depends on the $r$ previous hidden states; we assume that the process is homogeneous, i.e., that the transition probability is independent of $n$. To be as general as possible, we allow the emission probability of $\+x_n$ to depend not only on $\+z_n$ but on the $m\in\mathbb{Z}^+$ most recent states, \[ p(\+x_n|\{\+x_\ell\}_{\ell<n},\{\+z_\ell\}_{\ell\leq n})= p(\+x_n|\{\+z_\ell\}_{\ell =n-(m-1)}^n).
\] \section{Introduction} The inference of temporal networks has become a common topic in the last few years \cite{guo2011joint,danaher2014joint, hallac2015network, hallac2017network, hallac2017toeplitz, chang2019graphical, tomasi2019temporal, tomasi2018latent}. Current methods approach the problem by taking a multivariate time series and dividing it into chunks \cite{foti2016sparse,hallac2017network,tomasi2018latent}. Each chunk is assumed to cover a short enough period of time that all its points are independently and identically sampled from a unique distribution. Such a distribution, when the variables are continuous, can be represented as a Gaussian Graphical Model (GGM), where the conditional independence patterns of the variables are encoded as edges of graphs that evolve in time. These approaches show good performance, but the assumption that the time points in each chunk are i.i.d. most often does not hold. In this paper, we propose the Time Adaptive Gaussian Model (TAGM), a combination of GGMs with Hidden Markov Models (HMMs) \cite{baum1966statistical} that allows us to easily consider the sequence of single time points and relax the chunk assumption.
It also allows us to obtain clusters of time points as well as repeated evolving patterns of graphs that may be impossible to obtain with current state-of-the-art methods. A schematic representation of the model is presented in Figure~\ref{fig:hmm}. Here, we consider an HMM with 2 states which, looking at the left panel, presents the sequence 1, 2, 2, 1. Given the latent states, observations $x_1$ and $x_4$ belong to one underlying distribution, while $x_2$ and $x_3$ belong to another. Given the Markov chain that connects the latent states, we can assume that all these observations are independent and, thus, use them to infer two GGMs that model the probability distributions of cluster 1 and cluster 2 (right panel of Figure~\ref{fig:hmm}). The inferred GGMs provide more information on how, within each latent state, the variables depend on each other. \begin{figure}[t] \includegraphics[width=1\textwidth]{model_schema.png} \caption{TAGM schematic representation. Each temporal observation is associated with a state. Each state is characterized by an underlying distribution, which is represented by a graphical model. } \label{fig:hmm} \end{figure} Our approach, given a multivariate time series, is able to \begin{enumerate} \item \textbf{cluster temporal data points considering sequentiality}. Note that TAGM does not intend to group time series based on their morphology. Rather, it \emph{clusters} the observations within the time series based on their similarities and the order in which they appear. Such an approach is, in the literature, either formulated as a standard clustering problem on the time points or as a longitudinal clustering on the time series \cite{macqueen1967some, everitt2014finite, ng2002spectral}.
\item \textbf{infer time-dependent conditional dependencies among variables}: TAGM relaxes the chunk assumption common to the inference methods available in the literature \cite{hallac2017network, tomasi2018latent}, thus inferring a time-varying network that adapts at each observation. Note that TAGM assumes a sequentiality of the states, while in \cite{lotsi2013high,everitt2014finite} the authors combined GGMs with Gaussian Mixture Models. That approach is more prone to clustering the points into classes and can be seen as an unsupervised extension of the Joint Graphical Lasso \cite{danaher2014joint}. \end{enumerate} We show on synthetic data that the model performs better than current state-of-the-art methods on both tasks (1) and (2). We argue that the general characteristics of TAGM make it a trustworthy model that may be applied in a variety of application domains. \paragraph{Related work} For time series clustering, to our knowledge, the HMM \cite{baum1966statistical} is the only clustering method which also considers sequentiality, as state-of-the-art methods typically cluster points based on feature differences \cite{macqueen1967some, everitt2014finite, ng2002spectral}. The inference of time-varying networks has been tackled recently in the literature \cite{foti2016sparse,hallac2017network,tomasi2018latent} by dividing the time series into chunks, with the strong assumption that all the time points within a chunk are i.i.d. TAGM overcomes this assumption by proposing a more elegant way of inferring evolving networks as well as their pattern of evolution. By inferring $K$ different graphs we reach a deeper level of understanding of the states, which allows us to gain insights on the system under analysis \cite{hallac2017toeplitz}.
Lastly, state-of-the-art prediction methods for time series \cite{sims1980macroeconomics, hochreiter1997long, 8569801, vert2004primer} commonly assume that the relations between the past values of the variables and the present value of each variable are constant in time. An idea similar to TAGM was proposed with Gaussian Mixture Models (GMMs) \cite{everitt2014finite}, in which GMMs were combined with GGMs \cite{lotsi2013high}. The use of GMMs, though, does not allow one to explicitly consider sequentiality, and it is therefore not suited to the analysis of time series. In the literature, we found two examples that explicitly consider non-stationarity in a setting similar to ours. \section{Preliminaries} \input{background} \section{Time Adaptive Gaussian Model} \input{contribution} \section{Conclusions} \input{conclusions} \bibliographystyle{plain}
\section{Introduction}\label{Sone} The aim of the Thomas-Fermi (\lq\lq TF") statistical multi-quark model, introduced in Ref.\cite{wilcox}, is to explore many-quark baryon states including strange matter\cite{witten,farhi}. It is a useful new tool for the quark physicist to quickly assess the possibility of bound and resonance states in preparation for much more detailed and expensive lattice QCD calculations. The TF semi-classical model combines Coulombic quark interactions with a bag model type spherical confinement assumption. The model was seen to be versatile in that a number of widely varying physics scenarios, including non-relativistic, extreme relativistic, massive gluon, or color-flavor locking, can be addressed. The development in Ref.\cite{wilcox}, however, was limited to an investigation of a single quark wave function representing an equal number of $N_f$ mass-degenerate flavors. We would like to build upon the previous results to show how wave functions for unequal numbers of degenerate flavors or non-degenerate masses can be determined through the coupled differential equations. Although the model is designed to be most reliable for many-quark states, we find surprisingly that it may be used to fit the low energy spectrum of baryons. Note that this model is designed only to look for true many-quark bound states and resonances rather than bound nucleonic states like the deuteron. In this sense, the model complements calculations of loosely bound states as in Ref.\cite{meissner}. The traditional TF model has no explicit spin interactions since the spin states are treated as degenerate. In order to produce a realistic spectrum, the model was extended to include an explicit spin splitting term. We introduce spin as a generalized \lq\lq flavor" in our non-relativistic model. This change keeps the quark differential equations unmodified and simply induces an overall scale change in the energy expressions. 
The non-relativistic nature of the model is a shortcoming, but we view our work as the appropriate initial step toward developing a more realistic relativistic version of the model. However, we note that the quark mass parameters dominate the overall baryon masses and that the overall fits are reasonably good, so the model is self-consistent. The TF quark model is not meant to compete with more fundamental lattice QCD investigations but to explore phenomenological regions where lattice QCD cannot yet go. Lattice calculations are limited by volume considerations for many-quark systems and such applications are computationally expensive. This model attempts to \lq\lq set the stage" for such lattice calculations by examining possible scenarios where interesting existence and structural questions can be addressed. Our goal is not to produce final answers but to uncover physical systematics associated with many-quark states. In the next section we will describe some of the changes in the formalism necessary to introduce quark spin as a generalized flavor. We will then present the differential equations and the generalized expressions for the quark energies in Section \ref{Sthree} for states with two inequivalent TF wave functions. More of the formalism for the consideration of spin will then be presented in Section \ref{Sfour}. We will describe the computer program that solves the TF wave function differential equations and present our fit to the baryon spectrum in Section \ref{Sfive}. The baryonic charge radii obtained are compared with lattice and other model calculations. One interesting initial finding is that the flavors in a baryon are naturally separated, even if degenerate in mass. As an application, we will attempt such a search in Section \ref{SSone} for the hypothesized {\it H}-dibaryon state\cite{jaffe}, apply the formalism to high multi-quark strange states in Section \ref{SSthree}, and make connection to nucleon-nucleon resonances in Section \ref{SStwo}.
\section{Mathematical Preliminaries}\label{Stwo} Let us recall how the TF quark model is formulated. The original flavor and color number, $n^I_i(p_{F})$, and non-interacting quark kinetic energy, ${\cal E}^I_i(p_{F})$, densities were \begin{equation} n^I_i(p_{F})=2\int^{p_F} \frac{d^3p^I_i}{(2\pi \hbar)^3}=\frac{((p_F)^I_i)^3}{3\pi^2\hbar^3}\label{n}, \end{equation} and \begin{equation} {\cal E}^I_i(p_{F})=2\int^{p_F} \frac{d^3p^I_i}{(2\pi \hbar)^3}\frac{(p^I_i)^2}{2m}=\frac{(3\pi^2\hbar^3n^I_i(p_F))^{5/3}}{10\pi^2\hbar^3m},\label{E} \end{equation} where $(p_F)^I_i$ is the Fermi momentum, the \lq\lq $I$" superscript stands for flavor and the \lq\lq $i$" subscript stands for color. Notice the explicit degenerate spin factors of $2$. Such an assumption of degeneracy is excellent for atomic physics, marginal for nuclear physics, but unrealistic for hadronic physics. The addition of a spin term in the TF Hamiltonian requires the introduction of unequal spin densities. Therefore, we will define new quantities distinguished by calligraphic superscripts, \begin{equation} n^{\cal I}_i(p_{F})\equiv \int^{p_F} \frac{d^3p^{\cal I}_i}{(2\pi \hbar)^3}=\frac{((p_F)^{\cal I}_i)^3}{6\pi^2\hbar^3}\label{n2}, \end{equation} \begin{equation} {\cal E}^{\cal I}_i(p_{F})\equiv \int^{p_F} \frac{d^3p^{\cal I}_i}{(2\pi \hbar)^3}\frac{(p^{\cal I}_i)^2}{2m_{\cal I}}=\frac{(6\pi^2\hbar^3n^{\cal I}_i(p_{F}))^{5/3}}{20\pi^2\hbar^3m_{\cal I}},\label{E2} \end{equation} where the spin specification is now included in an augmented flavor-spin index, ${\cal I}$. (For notational simplicity, the arguments of $n^{\cal I}_i$ and ${\cal E}^{\cal I}_i$ have been simplified from $((p_F)^{\cal I}_i)$ to $p_F$.) We are allowing for different masses, $m_{\cal I}$, and different Fermi momenta for each flavor, $p_F^{\cal I}$.
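These definitions can be cross-checked symbolically. The following sketch (illustrative only) integrates the phase-space measure with the spin factor of 2 removed and confirms the density and kinetic-energy density of Eqs.(\ref{n2}) and (\ref{E2}):

```python
import sympy as sp

# Symbolic check: integrating d^3p/(2 pi hbar)^3 up to p_F, without the spin
# factor of 2, reproduces the spin-split density and kinetic-energy density.
p, pF, hbar, m, n = sp.symbols('p p_F hbar m n', positive=True)
measure = 4 * sp.pi * p**2 / (2 * sp.pi * hbar)**3    # isotropic d^3p measure

# number density: n = p_F^3 / (6 pi^2 hbar^3)
n_expr = sp.integrate(measure, (p, 0, pF))
assert sp.simplify(n_expr - pF**3 / (6 * sp.pi**2 * hbar**3)) == 0

# kinetic-energy density, then eliminate p_F in favor of n
E_expr = sp.integrate(measure * p**2 / (2 * m), (p, 0, pF))
E_in_n = E_expr.subs(pF, (6 * sp.pi**2 * hbar**3 * n) ** sp.Rational(1, 3))
target = (6 * sp.pi**2 * hbar**3 * n) ** sp.Rational(5, 3) / (20 * sp.pi**2 * hbar**3 * m)
assert sp.simplify(E_in_n - target) == 0
print("density and kinetic-energy density verified")
```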
After variation of the densities in the energy functional, Eq.(21) of Ref.\cite{wilcox} still holds: \begin{equation} \frac{(p_F^{\cal I})^2} {2m_{\cal I}}=-\lambda^{\cal I} + \frac{3\times\frac{4}{3}g^2}{(3A-1)}\left(\frac{N_{\cal I}-1}{N_{\cal I}} \int^{r_{max}}\!\!d^3r' \frac{n^{\cal I}(r')}{|{\vec r}-{\vec r}\,'|}+\sum_{{\cal J}\ne {\cal I}}\int^{r_{max}}\!\!d^3r' \frac{n^{\cal J}(r')}{|{\vec r}-{\vec r}\,'|} \right). \label{Equation} \end{equation} We have assumed as before that the color densities are the same: $n^{\cal I}\equiv n^{\cal I}_1=n^{\cal I}_2=n^{\cal I}_3$. The Fermi energy, $(p_F^{\cal I})^2 /2m_{\cal I}$, is given in terms of the particle density through Eq.(\ref{n2}). However, the previous relationship between the TF spatial wave function and the density, \begin{equation} f^{I}(r) \equiv \frac{ra}{2\times\frac{4}{3}\alpha_s}(3\pi^2 n^{I}(r))^{2/3},\label{FinN1} \end{equation} is now \begin{equation} f^{\cal I}(r) \equiv \frac{ra}{2\times\frac{4}{3}\alpha_s}(6\pi^2 n^{\cal I}(r))^{2/3}.\label{FinN2} \end{equation} The fundamental scale $a$ is \begin{equation} a\equiv \frac{\hbar}{m_1c}, \end{equation} where $m_1$ is the lightest quark mass. Rather than changing the TF differential equations, it turns out that all one needs to do to accommodate the new approach is to make a change in the overall spatial scale factor.
The previous definition \begin{equation} r = Rx, \quad\quad R \equiv \left(\frac{a}{2\times\frac{4}{3}\alpha_s}\right)\left[\frac{3\pi A}{4} \right]^{2/3},\label{peq1} \end{equation} is now replaced by \begin{equation} r = {\cal R}x, \quad\quad{\cal R} \equiv \left(\frac{a}{2\times\frac{4}{3}\alpha_s}\right)\left[\frac{3\pi A}{2} \right]^{2/3},\label{peq2} \end{equation} which is equivalent to a simple change in the underlying scale factor: \begin{equation} a\longrightarrow 2^{2/3}a.\label{scalechange} \end{equation} The form of the differential TF equations in Ref.\cite{wilcox} and the wave function normalization, \begin{equation} \int_0^{x_{max}} dx \sqrt{x}(f^{\cal I}(x))^{3/2}=\frac{N_{\cal I}}{3A},\label{norm} \end{equation} are unchanged from before. \section{Generalized Model for Two Wave Functions}\label{Sthree} In Ref.\cite{wilcox}, a non-relativistic model with equal numbers of $N_f$ degenerate flavors was considered in a case study. However, in order to do realistic spectrum calculations, we have to generalize to unequal numbers of flavors. In this section, we explain how to solve for systems with two inequivalent TF wave functions. This can arise either from flavors with different masses or from systems with two equal masses but different numbers of particles. We will see this is equivalent to the introduction of two quark flavor-degeneracy factors, $(N_f)_1\equiv g_1$ and $(N_f)_2\equiv g_2$.
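The rescaling in Eq.(\ref{scalechange}) can be verified directly; the following one-line check (illustrative) confirms that replacing $3\pi A/4$ by $3\pi A/2$ in the scale factor is the same as $a\rightarrow 2^{2/3}a$:

```python
import sympy as sp

# Ratio of the new scale factor R_cal to the old R; the common prefactor
# a/(2 * (4/3) * alpha_s) cancels, leaving (2)^(2/3).
A, a, alpha_s = sp.symbols('A a alpha_s', positive=True)
pref = a / (2 * sp.Rational(4, 3) * alpha_s)
R_old = pref * (3 * sp.pi * A / 4) ** sp.Rational(2, 3)
R_new = pref * (3 * sp.pi * A / 2) ** sp.Rational(2, 3)
assert sp.simplify(R_new / R_old - 2 ** sp.Rational(2, 3)) == 0
print("scale change a -> 2^(2/3) a verified")
```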
The differential form of the TF quark equations for this case, which involves two distinct wave functions, is \begin{eqnarray} \alpha_{\cal I}\frac{d^2f^{\cal I}(x)}{dx^2}=-\frac{A}{A-\frac{1}{3}}\frac{1}{\sqrt{x}}\left(\frac{N_{\cal I}-1}{N_{\cal I}}\left(f^{\cal I}(x)\right)^\frac{3}{2}+\sum_{{\cal J}\ne {\cal I}}\left(f^{\cal J}(x)\right)^\frac{3}{2}\right).\label{generalform} \end{eqnarray} Here $\alpha_{\cal I}$ is the mass ratio of the lightest mass quark, $m_1$, to quark flavor $\cal I$, \begin{eqnarray} \alpha_{\cal I}=\frac{m_1}{m_{\cal I}},\label{alpha} \end{eqnarray} and $N_{\cal I}$ is the number of quarks with generalized flavor ${\cal I}$. (Note we are using subscripts on $N_{\cal I}$, $\alpha_{\cal I}$ and $m_{\cal I}$ so that they will not be mistaken for powers. In Ref.\cite{wilcox} all these quantities were denoted with superscripts.) For $g_{1}$ flavors each with $N_1$ quarks, taken to be the state with the larger radius, $x_1\ge x_2$, and $g_{2}$ flavors each with $N_2$ quarks, we have \begin{equation} g_{1}N_1+g_{2}N_2=3A, \label{add} \end{equation} for quark quantum numbers, and the generalized two wave function equations are: \begin{eqnarray} \alpha_{1}\frac{d^2f^1(x)}{dx^2} =-\frac{A}{A-\frac{1}{3}}\frac{1}{\sqrt{x}}\left[\left(\frac{N_1-1}{N_1}+g_{1}-1\right)\left(f^1(x)\right)^{3/2}+g_{2}\left(f^2(x)\right)^{3/2}\right],\label{one}\\ \alpha_{2}\frac{d^2f^2(x)}{dx^2} =-\frac{A}{A-\frac{1}{3}}\frac{1}{\sqrt{x}}\left[\left(\frac{N_2-1}{N_2}+g_{2}-1\right)\left(f^2(x)\right)^{3/2}+g_{1}\left(f^1(x)\right)^{3/2}\right].\label{two} \end{eqnarray} In the spirit of the TF atomic model, we assume there is only one truly independent wave function. In this two wave function case, we assume a linear relation between $f^1(x)$ and $f^2(x)$: $f^1(x)=kf^2(x)$.
The equations (\ref{one}) and (\ref{two}) now become: \begin{eqnarray} \alpha_{1}k\frac{d^2f^2(x)}{dx^2} =-\frac{A}{A-\frac{1}{3}}\frac{1}{\sqrt{x}}\left[\left(\frac{N_1-1}{N_1}+g_{1}-1\right)k^{3/2}+g_{2}\right]\left(f^2(x)\right)^{3/2},\label{18}\\ \alpha_{2}\frac{d^2f^2(x)}{dx^2} =-\frac{A}{A-\frac{1}{3}}\frac{1}{\sqrt{x}}\left[\left(\frac{N_2-1}{N_2}+g_{2}-1\right)+g_{1}k^{3/2}\right]\left(f^2(x)\right)^{3/2}.\label{19} \end{eqnarray} Requiring these two equations to be consistent with each other implies that \begin{eqnarray} \alpha_{1}g_{1}k^{5/2}-\alpha_{2}\left(\frac{N_1-1}{N_1}+g_{1}-1\right)k^{3/2}+\alpha_{1}\left(\frac{N_2-1}{N_2}+g_{2}-1\right)k-\alpha_{2}g_{2}=0.\label{consistent} \end{eqnarray} We solve this condition numerically for the $k$ value for each particle parameter set. Remember we have two normalization conditions: \begin{equation} \left\{ \begin{array}{ll} \int_{0}^{x_{2}}\sqrt{x}\left(f^2(x)\right)^{3/2}dx=(N_2/3A), \\ \int_{0}^{x_{1}}\sqrt{x}\left(f^1(x)\right)^{3/2}dx=(N_1/3A). \end{array} \right.\label{five} \end{equation} With Eq.(\ref{five}), we can express our normalization conditions in the form of boundary conditions: \begin{equation} \left\{ \begin{array}{ll} \left(x\frac{df^2(x)}{dx}-f^2(x)\right)|_{x_{2}}=-\frac{A}{A-\frac{1}{3}}\left[g_{1}k^{3/2}+\left(\frac{N_2-1}{N_2}+g_{2}-1\right)\right]\frac{N_2}{3A\alpha_{2}}, \\ \left(x\frac{df^1(x)}{dx}-f^1(x)\right)|_{x_{1}}=-\frac{1}{\alpha_{1}}. \end{array} \right. \end{equation} The $f^1(x)=k f^2(x)$ condition must be consistent with the normalization conditions. We will see that this in general implies that the $f^2$ TF wave function has an internal discontinuity or discontinuities. The position or positions of these are not determined by the TF quark differential equations, but by energy minimization. Because of the attractive nature of the system, it is a natural assumption that the wave functions are continuously connected to the origin.
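In practice, Eq.(\ref{consistent}) is a quintic in $\sqrt{k}$ and is easily solved numerically. The sketch below is a hypothetical implementation with illustrative parameter values (the full boundary conditions and the energy minimization described above are not imposed): it finds the positive real root for $k$ and integrates the reduced equation for $f^2$ outward from the origin with a fourth-order Runge-Kutta step.

```python
import numpy as np

def solve_k(a1, a2, g1, N1, g2, N2):
    """Positive real roots of the consistency condition, via t = sqrt(k):
    a1*g1 t^5 - a2*c1 t^3 + a1*c2 t^2 - a2*g2 = 0."""
    c1 = (N1 - 1) / N1 + g1 - 1
    c2 = (N2 - 1) / N2 + g2 - 1
    roots = np.roots([a1 * g1, 0.0, -a2 * c1, a1 * c2, 0.0, -a2 * g2])
    return sorted(r.real ** 2 for r in roots
                  if abs(r.imag) < 1e-8 and r.real > 0)

def integrate_f2(k, a2, g1, N2, g2, A, f0, slope, x_max=10.0, n=40000):
    """RK4 for a2 f'' = -(A/(A-1/3)) [ (N2-1)/N2 + g2 - 1 + g1 k^{3/2} ] f^{3/2}/sqrt(x),
    started just off the (integrable) singularity at x = 0."""
    C = (A / (A - 1.0 / 3.0)) * ((N2 - 1) / N2 + g2 - 1 + g1 * k ** 1.5) / a2
    rhs = lambda x, f: -C * max(f, 0.0) ** 1.5 / np.sqrt(x)
    xs = np.linspace(1e-8, x_max, n)
    h = xs[1] - xs[0]
    f, fp, fs = f0, slope, []
    for x in xs:
        fs.append(f)
        if f <= 0.0:                       # wave function edge reached
            break
        k1f, k1p = fp, rhs(x, f)
        k2f, k2p = fp + 0.5 * h * k1p, rhs(x + 0.5 * h, f + 0.5 * h * k1f)
        k3f, k3p = fp + 0.5 * h * k2p, rhs(x + 0.5 * h, f + 0.5 * h * k2f)
        k4f, k4p = fp + h * k3p, rhs(x + h, f + h * k3f)
        f += h * (k1f + 2 * k2f + 2 * k3f + k4f) / 6.0
        fp += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return xs[:len(fs)], np.array(fs)

# Proton-like configuration (u up, u up, d down): g1=1, N1=2; g2=1, N2=1;
# mass-degenerate quarks a1 = a2 = 1 and baryon number A = 1.
ks = solve_k(1.0, 1.0, 1, 2, 1, 1)
k = ks[0]                                  # unique positive root, k ~ 1.23
xs, fs = integrate_f2(k, 1.0, 1, 1, 1, A=1.0, f0=1.0, slope=-0.3)
```

A production calculation would instead shoot on the starting value and slope until the boundary conditions at $x_1$ and $x_2$ are met.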
That is, we assume no voids in the particle interiors. Although this is a natural assumption, we have not proven these configurations always have the lowest energies. With these modified boundary conditions, we can derive the expressions for the kinetic, $T$, and potential, $U$, energies. For the kinetic energy one starts with \begin{eqnarray} T=\sum_{{\cal I},i}{\cal E}^{\cal I}_i(p_{F}^{\cal I})=3\sum_{{\cal I}}{\cal E}^{\cal I}(p_{F}^{\cal I}), \end{eqnarray} where ${\cal E}^{\cal I} \equiv {\cal E}^{\cal I}_1={\cal E}^{\cal I}_2={\cal E}^{\cal I}_3$. Using Eqs.(\ref{n2}), (\ref{E2}) and (\ref{FinN2}) then gives \begin{eqnarray} T=\sum_{{\cal I}}\frac{12}{5\pi}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a}\alpha_{\cal I} \int_{0}^{x_{{\cal I}}}\frac{\left(f^{\cal I}(x)\right)^{5/2}}{\sqrt{x}}dx. \end{eqnarray} In our case with $g_1$ flavors with $N_1$ particles and $g_2$ flavors with $N_2$ particles we have \begin{eqnarray} T=\frac{12}{5\pi}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a}\left[ g_{1}\alpha_1\int_{0}^{x_{1}}\frac{\left(f^1(x)\right)^{5/2}}{\sqrt{x}}dx+g_{2}\alpha_2\int_{0}^{x_{2}}\frac{\left(f^2(x)\right)^{5/2}}{\sqrt{x}}dx \right ].
\end{eqnarray} Using the wave function differential equations, the consistency condition for $k$, and the boundary conditions allows one to relate the integrals to wave function values and derivatives on the discontinuous surfaces: \begin{eqnarray} \lefteqn{T=\frac{12}{5\pi}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a}\left[-\frac{5}{7}g_{2}\frac{df^2(x)}{dx}|_{x_{2}}\frac{N_2}{3A}\left(\alpha_2-\frac{kg_{1}\alpha_1}{\frac{N_1-1}{N_1}+g_{1}-1}\right)\right.}\nonumber \\ & &\left.-\frac{5}{7}\frac{\frac{A-\frac{1}{3}}{A}g_{1}\alpha_1}{\frac{N_1-1}{N_1}+g_{1}-1}\frac{df^1(x)}{dx}|_{x_{1}}+\frac{4}{7}g_{2}\alpha_2\left(f^2(x_{2})\right)^{5/2}\sqrt{x_{2}}+\frac{4}{7}g_{1}\alpha_1\left(f^1(x_{1})\right)^{5/2}\sqrt{x_{1}}\right].\label{fullkinetic} \end{eqnarray} Likewise for the potential energy we begin with \begin{eqnarray} U= -\frac{ 9\times\frac{4}{3}g^2}{2(3A-1)}\left[\sum_{\cal I} \frac{N_{\cal I}-1}{N_{\cal I}} \int^{r_{\cal I}}\int^{r_{\cal I}}d^3r\,d^3r'\frac{n^{\cal I}(r)n^{\cal I}(r')}{|{\vec r}-{\vec r}\,'|}\right. \nonumber \\ +\left.\sum_{{\cal I}\ne{\cal J}}\int^{r_{\cal I}}\int^{r_{\cal J}}d^3r\,d^3r'\frac{n^{\cal I}(r)n^{\cal J}(r')}{|{\vec r}-{\vec r}\,'|}\right]. \end{eqnarray} Reducing this to the TF wave functions, $f^{\cal I}(x)$, gives \begin{eqnarray} \lefteqn{U=-\frac{2}{\pi}\frac{A}{A-\frac{1}{3}}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a} } \nonumber \\ &&\times\left[ \sum_{\cal I}\frac{N_{\cal I}-1}{N_{\cal I}}\left[\int_{0}^{x_{\cal I}}dx\frac{\left(f^{\cal I}(x)\right)^{3/2}}{\sqrt{x}}\int_{0}^{x}dx'\sqrt{x'}(f^{\cal I}(x'))^{3/2}+\int_{0}^{x_{\cal I}}dx\sqrt{x}(f^{\cal I}(x))^{3/2}\int_{x}^{x_{\cal I}}dx'\frac{\left(f^{\cal I}(x')\right)^{3/2}}{\sqrt{x'}}\right]\right. \nonumber \\ & &+\left.
\sum_{\cal I \neq \cal J}\left[\int_{0}^{x_{\cal I}}dx\frac{\left(f^{\cal I}(x)\right)^{3/2}}{\sqrt{x}}\int_{0}^{x}dx'\sqrt{x'}(f^{\cal J}(x'))^{3/2}+\int_{0}^{x_{\cal I}}dx\sqrt{x}(f^{\cal I}(x))^{3/2}\int_{x}^{x_{\cal J}}dx'\frac{\left(f^{\cal J}(x')\right)^{3/2}}{\sqrt{x'}}\right]\right]. \end{eqnarray} In the ($g_1, N_1$), ($g_2, N_2$) case this gives \begin{equation} U=-\frac{2}{\pi}\frac{A}{A-\frac{1}{3}}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a} \left[ g_1(\frac{N_1-1}{N_1}+g_1-1)K_1 +g_2(\frac{N_2-1}{N_2}+g_2-1)K_2 +2g_1g_2 K_{12}\right], \end{equation} where \begin{equation} K_1 \equiv \int_{0}^{x_1}dx\frac{\left(f^1(x)\right)^{3/2}}{\sqrt{x}}\int_{0}^{x}dx'\sqrt{x'}(f^1(x'))^{3/2}+\int_{0}^{x_1}dx\sqrt{x}(f^1(x))^{3/2}\int_{x}^{x_1}dx'\frac{\left(f^1(x')\right)^{3/2}}{\sqrt{x'}}, \end{equation} \begin{equation} K_2 \equiv \int_{0}^{x_2}dx\frac{\left(f^2(x)\right)^{3/2}}{\sqrt{x}}\int_{0}^{x}dx'\sqrt{x'}(f^2(x'))^{3/2}+\int_{0}^{x_2}dx\sqrt{x}(f^2(x))^{3/2}\int_{x}^{x_2}dx'\frac{\left(f^2(x')\right)^{3/2}}{\sqrt{x'}}, \end{equation} and \begin{equation} K_{12} \equiv \int_{0}^{x_1}dx\frac{\left(f^1(x)\right)^{3/2}}{\sqrt{x}}\int_{0}^{x}dx'\sqrt{x'}(f^2(x'))^{3/2}+\int_{0}^{x_1}dx\sqrt{x}(f^1(x))^{3/2}\int_{x}^{x_2}dx'\frac{\left(f^2(x')\right)^{3/2}}{\sqrt{x'}}. \end{equation} (A nontrivial point is that switching $1 \rightleftarrows 2$ above does not change the $K_{12}$ integral.) 
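The claimed $1 \rightleftarrows 2$ symmetry of $K_{12}$ can also be checked numerically: writing $h_a(x)=\sqrt{x}\,(f^a(x))^{3/2}$, both orderings of the double integral equal $\int\!\!\int dx\,dx'\,h_1(x)h_2(x')/\max(x,x')$. The following sketch (with arbitrary illustrative profiles, not solutions of the TF equations) confirms the symmetry by quadrature:

```python
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y(x), starting from zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

def K(ha, xa, hb, xb, n=40001):
    """K with h_a as the outer profile:
    int_0^{xa} dx ha(x) [ (1/x) int_0^x hb + int_x^{xb} dx' hb(x')/x' ],
    where hb is understood to vanish beyond xb."""
    x = np.linspace(1e-9, max(xa, xb), n)
    outer = np.where(x <= xa, ha(x), 0.0)
    inner = np.where(x <= xb, hb(x), 0.0)
    Hb = cumtrapz(inner, x)              # int_0^x hb
    cb = cumtrapz(inner / x, x)
    tail = cb[-1] - cb                   # int_x^{xb} hb(x')/x' dx'
    return cumtrapz(outer * (Hb / x + tail), x)[-1]

# Arbitrary smooth profiles with supports x1 = 2 >= x2 = 1 (illustrative only)
h1 = lambda x: x * np.exp(-x)
h2 = lambda x: x * (1.0 + x)
K12 = K(h1, 2.0, h2, 1.0)
K21 = K(h2, 1.0, h1, 2.0)
# the two orderings agree to quadrature accuracy
```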
Again, doing the integrals as above gives the amazingly compact result, \begin{eqnarray} \lefteqn{U=-\frac{2}{\pi}\left(\frac{3\pi A}{2}\right)^{1/3}\frac{\frac{4}{3}g^{2} \times \frac{4}{3}\alpha_{s}}{a} \times \left[\frac{N_{2}}{3A}\frac{df^{2}(x)}{dx}|_{x_{2}}\left(-\frac{12}{7}\alpha_{2}g_{2}+\frac{12}{7}\frac{g_{1}g_{2}k\alpha_{1}}{\frac{N_{1}-1}{N_{1}}+g_{1}-1}\right)\right.} \nonumber \\ & &+\left.\frac{df^{1}(x)}{dx}|_{x_{1}} \left(-\frac{g_{1}g_{2}\alpha_{1}}{\frac{N_{1}-1}{N_{1}}+g_{1}-1}\frac{N_{2}}{3A}-k^{3/2}g_{1}\alpha_{1}\frac{N_{2}}{3A}-\frac{5}{7}\frac{\frac{A-1/3}{A}\alpha_{1}g_{1}}{\frac{N_{1}-1}{N_{1}}+g_{1}-1}\right) \right.\nonumber \\ & &+\left.\frac{\frac{A-1/3}{A}\alpha_{1}g_{1}}{\frac{N_{1}-1}{N_{1}}+g_{1}-1}-\frac{N_{2}}{3A}\alpha_{1}g_{1}k^{3/2}-\frac{g_{1}g_{2}\alpha_{1}}{\frac{N_{1}-1}{N_{1}}+g_{1}-1}\frac{N_{2}}{3A}\right. \nonumber \\ & &+\left.\frac{4}{7}g_{2}\alpha_{2}\left(f^{2}(x_{2})\right)^{5/2}\sqrt{x_{2}}+\frac{4}{7}g_{1}\alpha_{1}\left(f^{1}(x_{1})\right)^{5/2}\sqrt{x_{1}}\right],\label{fullpotential} \end{eqnarray} which likewise reduces $U$ to values and derivatives of the TF wave functions at the surfaces. Of course, one is not limited to two different wave functions in this model, and the above equations can be generalized. However, we will see that such a generalization is not necessary for the low energy baryon spectrum for mass degenerate light quarks. \section{More on Spin}\label{Sfour} As outlined above, the traditional TF model simply assumes a spin degeneracy factor of 2 in Eqs.(\ref{n}) and (\ref{E}). We initially attempted to develop such a model based upon Ref.\cite{wilcox}. In this traditional approach there is an intrinsic splitting between states like the nucleon and $\Delta^{++}$ (or $\Delta^{-}$) already just from the different TF flavor wave functions, exclusive of spin, with the correct ordering of states. 
That is, there is a degeneracy splitting from the different quark flavor content, $uud$ versus $uuu$, for example. However, the nucleon-delta splitting turns out to be much smaller than the actual splitting for reasonable parameter values. In our attempts to fit the low energy baryon spectrum, we found that our $\alpha_s$ and $m_1$ parameters (the latter sets the overall scale) were driven to unrealistic values to try to account for such splittings. The model was incapable of producing a realistic low energy spectrum. It is necessary to add an explicit spin interaction term. This introduces a problem of course because products of the spin \lq\lq up" and \lq\lq down" (usually taken along the z-axis) in the semi-classical TF model do not combine to form \lq\lq good" total angular momentum states. Non-relativistic baryon states such as the proton and neutron have total angular momentum $j=\frac{1}{2}$ and magnetic quantum number, $m$. To explain our approach to incorporating spin, consider the appropriately symmetrized non-relativistic proton flavor-spin $j=\frac{1}{2}$, $m=\frac{1}{2}$ wave function: \begin{eqnarray} |P,+\rangle\equiv |uud\rangle \left(2|++-\rangle-|+-+\rangle-|-++\rangle\right)/(3\sqrt{2})+ {\rm cyclic\, permutations}.\label{proton} \end{eqnarray} The TF quark model is incapable of reproducing this linear combination. Instead given this wave function the TF model simply considers the probabilities of certain configurations determined by projections. In the proton world, the possible spin up configurations are: \begin{eqnarray*} (u^{\uparrow}u^{\uparrow}d^{\downarrow}),\quad (u^{\uparrow}u^{\downarrow}d^{\uparrow}). \label{pconfig1} \end{eqnarray*} The proton is then said to be in the TF configuration: \begin{eqnarray*} \frac{2}{3}(u^{\uparrow}u^{\uparrow}d^{\downarrow})+\frac{1}{3}(u^{\uparrow}u^{\downarrow}d^{\uparrow}). \label{pconfig2} \end{eqnarray*} (We assign no meaning to the flavor or spin sequential ordering.) 
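The projection just described can be reproduced mechanically by squaring amplitudes and grouping terms by their (flavor, spin) content. The following sketch (illustrative) starts from the wave function of Eq.(\ref{proton}) and recovers the $\frac{2}{3}$ and $\frac{1}{3}$ weights:

```python
from fractions import Fraction
from collections import Counter

# Terms of the proton wave function: flavor uud with spin amplitudes
# (2|++-> - |+-+> - |-++>)/(3 sqrt(2)), plus cyclic permutations applied to the
# flavor and spin labels together.  Each amplitude is coef/(3 sqrt(2)), so each
# term contributes coef^2/18 to the probability of its configuration.
base_flavors = ('u', 'u', 'd')
base_terms = [((+1, +1, -1), 2), ((+1, -1, +1), -1), ((-1, +1, +1), -1)]

def rotate(t, r):
    return t[r:] + t[:r]

probs = Counter()
for r in range(3):                         # the three cyclic permutations
    flavors = rotate(base_flavors, r)
    for spins, coef in base_terms:
        # a configuration is the multiset of (flavor, spin) labels
        config = frozenset(Counter(zip(flavors, rotate(spins, r))).items())
        probs[config] += Fraction(coef * coef, 18)

# Two distinct configurations survive: {u up, u up, d down} with weight 2/3
# and {u up, u down, d up} with weight 1/3.
for config, p in probs.items():
    print(sorted(config), p)
assert sum(probs.values()) == 1
```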
We call this procedure the \lq\lq TF projection". It is the configurations which have distinct masses in the TF quark model. Thus, the TF spin model deals with probabilities of certain projected configurations rather than spin amplitudes. We assume the mass and other properties of the physical state are the probability-weighted average of the configurations. Using the flavor degeneracy factors introduced in the last section, these two configurations are classified as: \begin{eqnarray*} & (u^{\uparrow}u^{\uparrow}d^{\downarrow}):& \, g_1=1, N_1=2;\, g_2=1, N_2=1.\\ & (u^{\uparrow}u^{\downarrow}d^{\uparrow}):& \, g_1=3, N_1=1; N_2=0. \label{pconfig3} \end{eqnarray*} That is, the first configuration has two identical particles (the $u$ quarks) and a second generalized flavor ($d$), whereas the second configuration simply has three non-identical but mass-degenerate quarks, with no second set of particle labelings necessary; note how the different spin $u$ quarks are treated as different flavors. We will have more to say on the violation of rotational symmetry later in this section. Having introduced spin classifications into the model, we need to introduce a spin-splitting term. The classical interaction of a magnetic moment in an external magnetic field is given by \begin{equation} (H_{m}^{class})_{ij}=-{\vec m}_i\cdot {\vec B}_j, \end{equation} where ${\vec m}_i$ is the magnetic moment of particle $i$ and ${\vec B}_j$ is the external magnetic field of particle $j$. A magnetic dipole field is given by \begin{equation} {\vec B}^{d}=\frac{3\hat{r}(\hat{r}\cdot {\vec m})-{\vec m}}{r^3} +\frac{8\pi}{3}{\vec m}\,\delta({\vec r}), \end{equation} where $\delta({\vec r})$ is a Dirac delta function. In our case, the spherical symmetry of all integrations requires the interaction to be of the form, \begin{equation} (H_{m}^{class})_{ij}=-\frac{8\pi}{3}{\vec m}_i\cdot {\vec m}_j\,\delta({\vec r}).
\end{equation} These considerations lead us to postulate the form of the color magnetic interaction in our model: \begin{equation} (H_{m})_{ij}=-\frac{8\pi}{3}\gamma_i\gamma_j(S_z^{\cal I})_i(S_z^{\cal J})_j({\vec q}_i\cdot{\vec q}_j)\, \delta({\vec r}),\label{spininteraction} \end{equation} where the superscripts ${\cal I}$ and ${\cal J}$ are flavor labels and $i, j$ are particle number labels which will take on values from 1 to $N_{\cal I}$, $N_{\cal J}$. We define \begin{equation} \gamma_{i}\equiv \frac{\rm g}{2m_ic}, \end{equation} where $\rm g$ is the color gyromagnetic factor, which takes on a value of 1 classically and a value of 2 to lowest order in QED perturbation theory for the electron and muon. The problem of recovering a large enough spin-splitting in light mesons and baryons in non-relativistic models and even lattice QCD for large quark masses is well known. In this work, we will consider the g-factor to be an adjustable parameter due to relativistic and higher-order strong interaction effects. Other non-relativistic treatments keep the tree-level g-factor but instead introduce an adjustable wave function overlap factor\cite{dR} or use an extended interaction potential\cite{barnes}. Ultimately, the problem is only solved in a fully relativistic context at physical quark masses in lattice QCD\cite{lattice}. Here we hope to only reasonably model such effects. However, we expect spin effects to become smaller for increasing baryon number $A$, as will be seen in Section \ref{SSthree}. Note that the specialization to spins along the z-axis in Eq.(\ref{spininteraction}) is appropriate for our individual spin basis.
For two quarks in a colorless system, \begin{equation} \vec{q}_i\cdot\vec{q}_j=\left\{ \begin{array}{l} \frac{4}{3}g^2, \,\,{\rm same\,\,color} \\ -\frac{2}{3}g^2, \,\,{\rm different\,\,color} \end{array}\right.\label{color} \end{equation} (Note the notational distinction between \lq\lq$\rm g$" the color gyromagnetic factor and \lq\lq$g$" the color coupling constant and that $\alpha_s = g^2/(\hbar c)$.) The treatment of potential energy terms is explained in Ref.\cite{wilcox}. We sum over same flavor and different flavor contributions, using a single particle normalization for the particle densities ($\hat{n}^{\cal I} (r)= 3 n^{\cal I}(r)/N_{\cal I}$). We then average over the color interactions in Eq.(\ref{spininteraction}) using (\ref{color}) and the color probabilities, $P_{ij}$. The result for the magnetic spin interaction energy, $E_m$, may be written: \begin{eqnarray} E_m=\frac{8\pi}{3}\frac{9\times\frac{4}{3}g^2}{(3A-1)}\left[ \left(\frac{\hbar}{2}\right)^2 \sum_{\cal I} \gamma_{\cal I}^2 \left( \frac{N_{\cal I}-1}{2N_{\cal I}} \right) \int d^3r\, \left(n^{\cal I}(r)\right)^2 \right. \nonumber \\ + \left. \sum_{{\cal I}<{\cal J}} \gamma_{\cal I} \gamma_{\cal J} \sum_{i=1}^{N_{\cal I}}\sum_{j=1}^{N_{\cal J}} (S_z^{\cal I})_i(S_z^{\cal J})_j \int d^3r\, \left( \frac{n^{\cal I}(r)n^{\cal J}(r)}{N_{\cal I}N_{\cal J}} \right) \right]. \label{spinenergy1} \end{eqnarray} Switching to the TF wave function, $f^{\cal I}(r)$ defined in Eq.(\ref{FinN2}), and using the $x$ variable defined in Eq.(\ref{peq2}) now gives \begin{eqnarray} E_m= \frac{16\hbar c}{3\pi^2 (3A-1)}\frac{(\frac{4}{3}\alpha_s)^4}{a}\left(\frac{\rm g}{2}\right)^2\left[ \sum_{\cal I}\alpha_{\cal I}^2\frac{N_{\cal I}-1}{2N_{\cal I}} \int dx\, \frac{(f^{\cal I}(x))^3}{x} \right. \nonumber\\ + \left.
\sum_{{\cal I}<{\cal J}} \alpha_{\cal I} \alpha_{\cal J} \sum_{i=1}^{N_{\cal I}}\sum_{j=1}^{N_{\cal J}} (\hat{S}_z^{\cal I})_i(\hat{S}_z^{\cal J})_j \int dx \frac{1}{x} \frac{(f^{\cal I}(x))^{3/2}}{N_{\cal I}} \frac{(f^{\cal J}(x))^{3/2}}{N_{\cal J}}\right],\label{spinenergy2} \end{eqnarray} where the unit-normalized spins have $(\hat{S}_z^{\cal J})_j = 1$ for spin up and $(\hat{S}_z^{\cal J})_j = -1$ for spin down along z. As pointed out above, the introduction of spin labeling as an extended flavor attribute has the obvious shortcoming of loss of rotational symmetry. One way of stating the issue is that there is a conflict between the configurations used to determine flavor content, which use an individual spin basis with good values of $J^{\alpha}_z$, where $\alpha$ labels the particles, and the rotationally invariant total spin states which have good total angular momentum quantum numbers. This problem will not affect multi-quark total spin $0$ states, which are rotationally invariant, or spin $\frac{1}{2}$ states, whose $m= \pm \frac{1}{2}$ states have TF projections with the same mass. For example, the proton $m= \frac{1}{2}$ and $m= -\frac{1}{2}$ states project to the configurations $\frac{2}{3}(u^{\uparrow}u^{\uparrow}d^{\downarrow})+\frac{1}{3}(u^{\uparrow}u^{\downarrow}d^{\uparrow})$ and $\frac{2}{3}(u^{\downarrow}u^{\downarrow}d^{\uparrow})+\frac{1}{3}(u^{\uparrow}u^{\downarrow}d^{\downarrow})$, respectively, which have the same mass. However, the spin $\frac{3}{2}$ $m=\frac{1}{2}$ state, \begin{eqnarray} \frac{1}{\sqrt{3}}(|-++\rangle+|+-+\rangle+|++-\rangle), \nonumber \end{eqnarray} produces a different mass than the $m=\frac{3}{2}$ state when projected into the individual spin basis states. The $m=\frac{1}{2}$ state corresponds to a $g_1=1, N_1=2; g_2=1, N_2=1$ configuration in our model, whereas the $m=\frac{3}{2}$ state corresponds to $g=1, N=3$. These have different masses.
\begin{table} \caption{The Clebsch-Gordan coefficients $\langle j,1;m,0|j,1;j',m\rangle$} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\hline & $j=0$ &$j=1$ & $j=1$ & $j=2$ & $j=2$ & $j=3$\\ & $j'=0$ & $j'=0$ & $j'=2$ & $j'=1$ & $j'=3$ & $j'=2$\\ \hline \vspace{-.3cm} & & & & & & \\ $m=3$ & 0 & 0 & 0 &0 &0 &0\\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=2$ & 0 & 0 & 0 &0 &$\sqrt{\frac{5}{15}}$ &$-\sqrt{\frac{5}{21}}$ \\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=1$ & 0 & 0 &$\sqrt{\frac{3}{6}}$ &$-\sqrt{\frac{3}{10}}$ &$\sqrt{\frac{8}{15}}$ &$-\sqrt{\frac{8}{21}}$\\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=0$ & 1 &$-\sqrt{\frac{1}{3}}$ &$\sqrt{\frac{4}{6}}$ &$-\sqrt{\frac{4}{10}}$ &$\sqrt{\frac{9}{15}}$ &$-\sqrt{\frac{9}{21}}$ \\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=-1$ & 0 & 0 &$\sqrt{\frac{3}{6}}$ &$-\sqrt{\frac{3}{10}}$ &$\sqrt{\frac{8}{15}}$ &$-\sqrt{\frac{8}{21}}$\\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=-2$ & 0 & 0 & 0 &0 &$\sqrt{\frac{5}{15}}$ &$-\sqrt{\frac{5}{21}}$ \\ \vspace{-.3cm} & & & & & & \\ \hline \vspace{-.3cm} & & & & & & \\ $m=-3$ & 0 & 0 & 0 &0 &0 &0\\ & & & & & & \\\hline \hline \end{tabular} \end{center} \label{CGtable} \end{table} We believe the best projection is always performed in the maximum $m$ state, $|j,m=j\rangle$. The reason has to do with what we will call the \lq\lq maximum compatibility" of the total spin and product spin basis states. As an example, let us consider the spin $|\frac{3}{2},\frac{3}{2}\rangle$ three-quark state. Here there is only one way to build the state, namely $|\frac{1}{2}\rangle_1|\frac{1}{2}\rangle_2|\frac{1}{2}\rangle_3$ in the product spin basis. This state is also an eigenstate of total $\vec{J}^{\,2}$ and $J_z$.
Thus we have ($\alpha=\{1,2,3\}$ is the particle label) \begin{eqnarray} [\vec{J}^{\,2},J^{\alpha}_z] |\frac{3}{2},\frac{3}{2}\rangle = 0, \end{eqnarray} for the commutator of $\vec{J}^{\,2}$ and the $J^{\alpha}_z$, expressing that the total and product quantum numbers are compatible. On the other hand, for lower $m$ values the commutator of $\vec{J}^{\,2}$ with an individual $J^{\alpha}_z$ has nonvanishing matrix elements, as evidenced by: \begin{eqnarray} \langle \frac{1}{2},\frac{1}{2}| [\vec{J}^{\,2},J^{\alpha}_z] |\frac{3}{2},\frac{1}{2}\rangle = -3\hbar^2 \langle\frac{1}{2},\frac{1}{2}|J^{\alpha}_z|\frac{3}{2},\frac{1}{2}\rangle \\ \nonumber = -3\hbar^2 \langle\frac{3}{2},1;\frac{1}{2},0|\frac{3}{2},1;\frac{1}{2},\frac{1}{2}\rangle\langle\alpha,\frac{1}{2}||\vec{J}||\alpha,\frac{3}{2}\rangle . \end{eqnarray} The first multiplicative factor on the right is the Clebsch-Gordan (\lq\lq CG") coefficient, \begin{equation} \langle\frac{3}{2},1;\frac{1}{2},0|\frac{3}{2},1;\frac{1}{2},\frac{1}{2}\rangle=-\sqrt{\frac{1}{3}}, \nonumber \end{equation} \noindent and $\langle\alpha,\frac{1}{2}||\vec{J}||\alpha,\frac{3}{2}\rangle$ is the reduced matrix element. Clearly, the $j=3/2,m=3/2$ state is to be preferred over the $j=3/2,m=1/2$ state in the projection process. As a more general example, consider the $j=0,1,2,3$ states produced by coupling 6 quarks in zero angular momentum states. The $j=0$ states will have a rotationally invariant projection, whereas the $j=3,m=3$ state will be maximally compatible with the states with good individual $J^{\alpha}_z$ ($\alpha=\{1,2,3,4,5,6\}$). The other maximal $m$ states, $j=2,m=2$ and $j=1,m=1$, will also be the most compatible states to use with $j=2$ and $j=1$, although now the commutators with higher $j$ states will no longer vanish. This is evidenced by the CG coefficients in Table \ref{CGtable}.
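The quoted coefficient, and the entries of Table~\ref{CGtable}, can be cross-checked with a computer algebra system. A brief check (illustrative) using SymPy's Clebsch-Gordan class:

```python
import sympy as sp
from sympy.physics.quantum.cg import CG

# CG(j1, m1, j2, m2, j3, m3) is the coefficient <j1 m1; j2 m2 | j3 m3>.
half = sp.Rational(1, 2)

# The coefficient quoted in the text: <3/2,1/2; 1,0 | 1/2,1/2> = -sqrt(1/3)
c = CG(sp.Rational(3, 2), half, 1, 0, half, half).doit()
assert sp.simplify(c + sp.sqrt(sp.Rational(1, 3))) == 0

# Two entries of the table: (j=2, j'=1, m=1) and (j=2, j'=3, m=2)
assert sp.simplify(CG(2, 1, 1, 0, 1, 1).doit() + sp.sqrt(sp.Rational(3, 10))) == 0
assert sp.simplify(CG(2, 2, 1, 0, 3, 2).doit() - sp.sqrt(sp.Rational(5, 15))) == 0
print("Clebsch-Gordan entries verified")
```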
Note the zeros at the outside edge for the $(j=3,j'=2)$, $(j=2,j'=1)$ and $(j=1,j'=0)$ cases, expressing the fact that there is no lower $j'$ state to couple to for the highest $j,m$ value. In addition, the magnitude of the entries in each column is always smallest at the larger $|m|$ values, and largest for the $m=0$ states. Again, we conclude that the maximal $m$ states are to be preferred in the projection process. \section{Ground State Baryon Fits}\label{Sfive} In order for the model we are presenting to be predictive, we need to fix the phenomenological parameters. These 5 parameters are: \\ \begin{align*} &B{\rm :\, \lq\lq Bag"\, constant}\\ &m_1{\rm :\, light\, quark\, mass}\\ &m_s{\rm : \,strange\, quark\, mass}\\ &\alpha_s{\rm :\, strong\, coupling\, constant}\\ &{\rm g: \,color\, gyromagnetic\, factor}\\ \end{align*} First, we must understand that the model itself is not well designed to fit low quark number states, just as the atomic version would be poorly constructed to fit the low atomic binding energy states. In fact, applying this model to a \lq\lq gas" of three quarks would seem to be impossible. However, just as for the atomic model, a surprising aspect of the mathematics of the TF model is that such a fit nevertheless turns out to be entirely reasonable. An aspect that helps is that in the hadronic case both the octet and decuplet ground-state energies are available to help make the fit more robust; that is, there are more states available than parameters which need to be fit. There are eight types of base configurations and thirteen associated TF wave functions that are necessary to fit all the octet and decuplet states. Table \ref{picturetable} lists the wave functions present in the ground state hadrons as well as the particles which partly or wholly share that wave function. They are given in the spin \lq\lq up" state. In five cases there are two TF wave functions for each base configuration.
The double subscript on these functions designates first the base configuration, and second the quark-type. The base configuration label runs from 1 through 8. The quark-type label takes on three possible values: light (\lq\lq $l$"), double light (\lq\lq $ll$") or strange (\lq\lq $s$"). For example, $f_{2,ll}$ is the configuration type in the second row of the Table associated with the double \lq\lq up" light sector, either $u^{\uparrow}u^{\uparrow}$ or $d^{\uparrow}d^{\uparrow}$. \begin{table} \caption{TF configuration wave functions.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\hline Base configuration (spin \lq\lq up")&Particle(s) &Wave function & $g_{1}$ & $N_{1}$ &$g_{2}$ &$N_{2}$ &$g_{s}$ &$N_{s}$\\ \hline \vspace{-.3cm} & & & & & & & &\\ $u^{\uparrow}u^{\downarrow}d^{\uparrow},d^{\uparrow}d^{\downarrow}u^{\uparrow}$ &$P,N$ &$f_{1}$ &3 &1 &-- &-- &-- &-- \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $u^{\uparrow}u^{\uparrow}d^{\downarrow},u^{\uparrow}u^{\uparrow}d^{\uparrow},d^{\uparrow}d^{\uparrow}u^{\downarrow},d^{\uparrow}d^{\uparrow}u^{\uparrow}$ &$P,N,\Delta^{+}$ &$f_{2,ll},f_{2,l}$ &1 &2 &1 &$1^{+}$ &-- &-- \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $u^{\uparrow}u^{\uparrow}u^{\uparrow},d^{\uparrow}d^{\uparrow}d^{\uparrow}$ &$\Delta^{++}$ &$f_{3}$ &1 &3 &-- &-- &-- &-- \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $u^{\uparrow}u^{\downarrow}s^{\uparrow},d^{\uparrow}d^{\downarrow}s^{\uparrow}$ & & & & & & & & \\ $u^{\uparrow}d^{\uparrow}s^{\downarrow},u^{\uparrow}d^{\downarrow}s^{\uparrow}$ &$\Sigma^{+},\Sigma^{0},\Sigma^{*0},\Lambda$ &$f_{4,l},f_{4,s}$ &2 &1 &-- &-- &$1^{+}$ &1 \\ $u^{\downarrow}d^{\uparrow}s^{\uparrow},u^{\uparrow}d^{\uparrow}s^{\uparrow}$ & & & & & & & & \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ 
$u^{\uparrow}u^{\uparrow}s^{\downarrow},u^{\uparrow}u^{\uparrow}s^{\uparrow},d^{\uparrow}d^{\uparrow}s^{\downarrow},d^{\uparrow}d^{\uparrow}s^{\uparrow}$ &$\Sigma^{+},\Sigma^{*+}$ &$f_{5,l},f_{5,s}$ &1 &2 &-- &-- &1 &$1^{+}$ \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $s^{\uparrow}s^{\downarrow}u^{\uparrow},s^{\uparrow}s^{\downarrow}d^{\uparrow}$ &$\Xi^{0}$ &$f_{6,l},f_{6,s}$ &$1^{+}$ &1 &-- &-- &2 &1 \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $s^{\uparrow}s^{\uparrow}u^{\downarrow},s^{\uparrow}s^{\uparrow}u^{\uparrow},s^{\uparrow}s^{\uparrow}d^{\downarrow},s^{\uparrow}s^{\uparrow}d^{\uparrow}$ &$\Xi^{0},\Xi^{*0}$ &$f_{7,l},f_{7,s}$ &1 &$1^{+}$ &-- &-- &1 &2 \\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $s^{\uparrow}s^{\uparrow}s^{\uparrow}$ &$\Omega^{-}$ &$f_{8}$ &-- &-- &-- &-- &1 &3 \\ & & & & & & & & \\ \hline \hline \end{tabular} \end{center} \label{picturetable} \end{table} For the five configurations listed in Table \ref{picturetable} other than the three with either $g_1=3$ ($P,N$), $N_1=3$ ($\Delta^{++}$) or $N_s=3$ ($\Omega^-$), the consistency condition, Eq.(\ref{consistent}), does not formally have a solution when all the $g$ or $N$ values are integer. For each of the five configurations this occurs when the ($N_1, N_2$), ($N_1, N_s$) or ($g_1, g_s$) sector takes on a value of either (2,1) or (1,2). These cases are indicated in the Table with the designation $1^+$, meaning that the solution to (\ref{consistent}) is defined by an infinitesimal approach to that parameter from above.
For example, in the case of the $f_{2,ll},f_{2,l}$ wave functions, which contribute to the energies of the $P$, $N$ and $\Delta^+$ states, we have the robust numerical condition \begin{equation} f_{2,ll}(x)=1.23162 f_{2,l}(x),\label{f1,f2} \end{equation} from the consistency condition, Eq.(\ref{consistent}), for the relationship between $f^1(x)$ ($f_{2,ll}(x)$) and $f^2(x)$ ($f_{2,l}(x)$) for $0< x< x_2$ in the limit $N_2\longrightarrow 1^+$. We use this limit to {\it define} the value of the model for $N_2=1$ exactly. (Note that $f_{2,l}(x)=0$ for $x_2< x< x_1$ since it is the TF wave function with the smaller radius.) Table \ref{picturetable2} gives both the base particle configuration, to be used in conjunction with Table \ref{picturetable}, and the actual formulas for the spin splittings from Eq.(\ref{spinenergy2}) above. Besides the proton and neutron, note that a number of other particles share the same base particle energies before magnetic spin splitting terms are added, because we assume the $u$ and $d$ quark masses are degenerate. These degeneracies are: ($\Sigma^{+}, \Sigma^-$), ($\Xi^{0}, \Xi^-$), ($\Delta^{++}, \Delta^-$), ($\Delta^{+}, \Delta^0$), ($\Sigma^{*+}, \Sigma^{*-}$), ($\Xi^{*0}, \Xi^{*-}$), and the ($\Lambda$, $\Sigma^0$, $\Sigma^{*0}$) particles. Because of these degeneracies, we list only the $P$, $\Sigma^{+}$, $\Xi^0$, $\Delta^{++}$, $\Delta^{+}$, $\Sigma^{*+}$, and $\Xi^{*0}$ particles in Table \ref{picturetable2}. The $\Lambda$, $\Sigma^0$ and $\Sigma^{*0}$ all acquire different masses after the spin interaction and are listed separately. \begin{table} \caption{Base particle assignments and spin energy contributions of ground state baryons.
The various $f$ functions used here are from Table \ref{picturetable}.} \begin{center} \begin{tabular}{|c|c|} \hline\hline & \\ Base particle configuration & Magnetic Energy Splitting ($C \equiv \frac{16 \hbar c}{3 \pi^{2}(3A-1)}\frac{(\frac{4}{3}\alpha_{s})^{4}}{a}(\frac{g}{2})^{2}$) \\ (spin \lq\lq up", maximal $m$) & \\ \hline \vspace{-.3cm} & \\ $P=\frac{1}{3}\left(u^{\uparrow}u^{\downarrow}d^{\uparrow}\right)+\frac{2}{3}\left(u^{\uparrow}u^{\uparrow}d^{\downarrow}\right)$ &$E^{P}_{m}=C\left\{\frac{1}{3}\left(-\int dx\frac{(f_{1})^{3}}{x}\right)+\frac{2}{3}\left(\frac{1}{4}\int dx\frac{(f_{2,ll})^{3}}{x}-\int dx\frac{(f_{2,ll}f_{2,l})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm} & \\ \hline \vspace{-.3cm} & \\ $\Lambda=u^{\uparrow}d^{\downarrow}s^{\uparrow}$ &$E^{\Lambda}_{m}=C\left\{-\int dx \frac{(f_{4,l})^{3}}{x}\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm} & \\ $\Sigma^{+}=\frac{1}{3}\left(u^{\uparrow}u^{\downarrow}s^{\uparrow}\right)+\frac{2}{3}\left(u^{\uparrow}u^{\uparrow}s^{\downarrow}\right)$ &$E^{\Sigma^{+}}_{m}=C\left\{\frac{1}{3}\left(-\int dx\frac{(f_{4,l})^{3}}{x}\right)+\frac{2}{3}\left(\frac{1}{4}\int dx\frac{(f_{5,l})^{3}}{x}-\alpha_{str} \int dx\frac{(f_{5,l}f_{5,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm}& \\ $\Sigma^{0}=\frac{1}{3}\left(u^{\uparrow}d^{\downarrow}s^{\uparrow}\right)+\frac{2}{3}\left(u^{\uparrow}d^{\uparrow}s^{\downarrow}\right)$ &$E^{\Sigma^{0}}_{m}=C\left\{\frac{1}{3}\left(-\int dx\frac{(f_{4,l})^{3}}{x}\right)+\frac{2}{3}\left(\int dx\frac{(f_{4,l})^{3}}{x}-2\alpha_{str} \int dx\frac{(f_{4,l}f_{4,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm} & \\ \hline \vspace{-.3cm}& \\ $\Xi^{0}=\frac{1}{3}\left(s^{\uparrow}s^{\downarrow}u^{\uparrow}\right)+\frac{2}{3}\left(s^{\uparrow}s^{\uparrow}u^{\downarrow}\right)$ &$E^{\Xi^{0}}_{m}=C\left\{\frac{1}{3}\left(-\alpha^{2}_{str}\int dx\frac{(f_{6,s})^{3}}{x}\right)+\frac{2}{3}\left(\frac{1}{4}\alpha^{2}_{str}\int 
dx\frac{(f_{7,s})^{3}}{x}-\alpha_{str} \int dx\frac{(f_{7,l}f_{7,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm} & \\ $\Delta^{++}=u^{\uparrow}u^{\uparrow}u^{\uparrow}$ &$E^{\Delta^{++}}_{m}=C\left\{\frac{1}{3}\int dx \frac{(f_{3})^{3}}{x}\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm}& \\ $\Delta^{+}=u^{\uparrow}u^{\uparrow}d^{\uparrow}$ &$E^{\Delta^{+}}_{m}=C\left\{\frac{1}{4}\left(\int dx\frac{(f_{2,ll})^{3}}{x}\right)+\left(\int dx\frac{(f_{2,ll}f_{2,l})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm} & \\ $\Sigma^{*+}=u^{\uparrow}u^{\uparrow}s^{\uparrow}$ &$E^{\Sigma^{*+}}_{m}=C\left\{\frac{1}{4}\left(\int dx\frac{(f_{5,l})^{3}}{x}\right)+\alpha_{str}\left(\int dx\frac{(f_{5,l}f_{5,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm}& \\ $\Sigma^{*0}=u^{\uparrow}d^{\uparrow}s^{\uparrow}$ &$E^{\Sigma^{*0}}_{m}=C\left\{\left(\int dx\frac{(f_{4,l})^{3}}{x}\right)+2\alpha_{str}\left(\int dx\frac{(f_{4,l}f_{4,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm}& \\ \hline \vspace{-.3cm}& \\ $\Xi^{*0}=u^{\uparrow}s^{\uparrow}s^{\uparrow}$ &$E^{\Xi^{*0}}_{m}=C\left\{\frac{1}{4}\alpha^{2}_{str}\left(\int dx\frac{(f_{7,s})^{3}}{x}\right)+\alpha_{str}\left(\int dx\frac{(f_{7,l}f_{7,s})^{3/2}}{x}\right)\right\}$ \\ \vspace{-.3cm} & \\ \hline \vspace{-.3cm}& \\ $\Omega^{-}=s^{\uparrow}s^{\uparrow}s^{\uparrow}$ &$E^{\Omega^{-}}_{m}=C\left\{\frac{1}{3}\int dx \frac{(f_{8})^{3}}{x}\right\}$ \\ & \\ \hline\hline \end{tabular} \end{center} \label{picturetable2} \end{table} There is a subtlety concerning the $\Xi^{0,-}$ and $\Xi^{*0,-}$ particles which prevents us from including them in our numerical results. All these particles contain two strange quarks and a light quark.
In our numerical simulations we find that the two strange quark wave functions, $f_{6,s}$ or $f_{7,s}$, have a smaller radius than the single light quark wave functions, $f_{6,l}$ or $f_{7,l}$, which in Table \ref{picturetable} are characterized as either $g_1=1^+, N_1=1$ or $g_1=1, N_1=1^+$. This implies that the right hand side of Eq.(\ref{one}) is zero for the region $x_2< x< x_1$, where $f^2(x)=0$, and that the light quark is \lq\lq free" with no restoring force. The equations can still be formally solved and the normalization condition for $f^1$ in Eq.(\ref{five}) can still be fulfilled, but we reject this solution as unrealistic. Thus, although the $\Xi^{0,-}$ and $\Xi^{*0,-}$ wave functions are formally calculable, we don't believe the model can give correct wave functions in this case, and they are omitted from our numerical results. This is a case of a gas of a single particle not being properly calculable. A computer program in {\it Mathematica} has been developed and the parameters which fit the low energy baryon spectrum have been determined by explicit numerical energy minimization. The first step in this process is the numerical solution of the $f^2(x)$ (inner) function differential equation, Eqs.(\ref{18}) or (\ref{19}), beginning with a guess for the initial slope, followed by the reconstruction/solution of the $f^1(x)$ (outer) function for a given set of external parameters and normalization conditions. Once this is done for all particle states, a chi-square minimization is carried out among the hadron masses. The values assumed for the experimental masses in Table \ref{table3} are rounded to the nearest $MeV$. In addition in Table \ref{table3}, the $\Sigma^+$ row actually lists the average experimental mass of the $\Sigma^+$ and $\Sigma^-$, the $\Sigma^{*+}$ row actually lists the average experimental mass of the $\Sigma^{*+}$ and $\Sigma^{*-}$, and a nominal value of 1232 $MeV/c^2$ is used for all the $\Delta$ particles. 
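Schematically, the fitting loop just described has the following shape. This is an illustrative sketch only: \texttt{model\_masses} stands in for the full TF solve of the inner and outer wave function equations (here replaced by a toy quadratic stand-in), and the grid range is invented for the example rather than the one actually used:

```python
import itertools

# experimental masses (MeV/c^2), rounded as described in the text
exp_mass = {"P": 939, "Delta": 1232, "Lambda": 1116,
            "Sigma": 1193, "Sigma*": 1385, "Omega": 1672}

def chi_square(model):
    """Unweighted chi-square between predicted and experimental masses."""
    return sum((model[p] - exp_mass[p]) ** 2 for p in exp_mass)

def grid_search(model_masses, grids):
    """Scan a parameter grid and return (chi2, point) at the minimum.
    `model_masses` maps a parameter tuple to predicted masses; in the
    real fit it would wrap the numerical solution for f^2(x) and f^1(x)."""
    best = None
    for point in itertools.product(*grids):
        chi2 = chi_square(model_masses(point))
        if best is None or chi2 < best[0]:
            best = (chi2, point)
    return best

# toy stand-in solver: a quadratic bowl with its minimum at B^(1/4) = 84
toy = lambda p: {k: v + (p[0] - 84) ** 2 for k, v in exp_mass.items()}
print(grid_search(toy, [range(80, 90)]))  # (0, (84,))
```

In the actual fit the grid runs over all five parameters, with the spin splittings added perturbatively afterwards.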
Using a grid search algorithm we have found a best fit with the values: $B^{1/4}=84.4$ $MeV$, $m_1=290$ $MeV/c^2$, $m_s= 507$ $MeV/c^2$ (corresponding to $\alpha_{str}=0.572$ as the mass ratio in Eq.(\ref{alpha})), $\alpha_s=0.430$, $\rm g=7.09$. Here we are fitting nine data points with five parameters. The spin splitting terms are treated as perturbations and are not included in the numerical energy minimizations to simplify the calculations. In addition, the color gyromagnetic value, g, was determined as a separate chi-squared minimization for each set of $B, m_1, m_s, \alpha_s$ values. These values are phenomenologically appropriate and reasonable. The bag constant is lower than in the original application\cite{bag}, which means the non-relativistic fit is actually more self-consistent. The strong coupling constant is consistent with a bag model designed for heavy-light systems, where center of mass motion is less of an issue\cite{wilcox2}. The color gyromagnetic factor, $\rm g$, is large compared to the electrodynamic case for electrons ($\sim 2$), but not unreasonably so. \begin{table} \caption{The full set of calculable octet and decuplet particle energies in the TF quark model.
Particles which have degenerate mass in the $m_u=m_d$ limit are not listed.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\hline Particle &Bag Radius &Rest Mass & T &U &Spin &Volume &Total &Exp \\ & $(fm)$ &($MeV/c^2$) & ($MeV/c^2$) &($MeV/c^2$) &($MeV/c^2$) & ($MeV/c^2$) &($MeV/c^2$) &($MeV/c^2$)\\ \hline \vspace{-.3cm}& & & & & & & & \\ $P$ &1.48 (1.37) &870 &219.6 &-145.3 &-70.9 &83.7 &957.1 &939 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Delta^{+}$ &1.48 &870 &237.6 &-140.3 &151.4 &89.6 &1208 &1232 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Delta^{++}$ &1.68 &870 &250.6 &-124.0 &104.5 &132.2 &1233 &1232 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Lambda$ &1.29 &1087 &200.0 &-151.9 &-81.0 &59.5 &1114 &1116 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Sigma^{+}$ &1.40 (1.29) &1087 &233.2 &-152.4 &-41.3 &70.9 &1197 &1193 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Sigma^{0}$ &1.29 &1087 &200.0 &-151.9 &-43.7 &59.5 &1151 &1193 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Sigma^{*+}$ &1.40 &1087 &249.8 &-152.7 &144.3 &76.6 &1405 &1385 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Sigma^{*0}$ &1.29 &1087 &200.0 &-151.9 &187.1 &59.5 &1382 &1384 \\ \vspace{-.3cm}& & & & & & & & \\ \hline \vspace{-.3cm}& & & & & & & & \\ $\Omega^{-}$ &1.45 &1521 &196.0 &-146.5 &31.3 &83.9 &1686 &1672 \\ & & & & & & & & \\ \hline\hline \end{tabular} \end{center} \label{table3} \end{table} \begin{figure} \begin{center} \leavevmode \includegraphics*[trim=00 000 0 0, clip,scale=1.0]{picture1bb.pdf} \caption{The density profile of the $g=3, N=1$ proton or neutron TF wave function in terms of the dimensionless variable $x$.} \label{fig1} \vspace{2cm} \includegraphics*[trim=00 000 0 0, clip,scale=1.0]{picture2bb.pdf} \caption{The density profile of 
the $N_1=2,g_1=1;N_2=1,g_2=1 $ part of the proton or neutron TF wave function in terms of the dimensionless variable $x$. The top $f$ function represents $f_{2,ll}$ and the bottom represents $f_{2,l}$ from Table \ref{picturetable}. These wave functions are also relevant to the $\Delta^{+,0}$ particles.} \label{fig2} \end{center} \end{figure} Table \ref{table3} gives a full accounting of the various particle energies in our model. Notice that the final particle energies involved are consistent with the non-relativistic assumption partly because there are large cancellations between the kinetic and potential energies. One can see that the model overestimates the strong isospin breaking effects from the particle wave functions for the differently charged $\Delta$, $\Sigma$ and $\Sigma^*$ particles. We note that the $\Delta^{++,-}$ and $\Delta^{+,0}$ particles actually originate in different base configurations; similarly for the $\Sigma^{+,-}$ and $\Sigma^{0}$ as well as the $\Sigma^{*+,-}$ and $\Sigma^{*0}$. There should actually be such isospin breaking effects in nature from the exclusion principle, but clearly they are not as large as seen here. As far as the authors know, lattice QCD data has not been examined for such effects. Although the fit is not as good as for a typical non-relativistic quark model, we would not expect it to be. As explained above, the known ground state baryons badly violate the many-quark semi-classical assumption. Nevertheless, we are encouraged by the overall reasonableness of the fit. Some states will be well represented while others will not. It is especially encouraging that the masses of states such as the $\Lambda$, $\Omega$ and $\Delta^{++}$ seem to be accurate. These states are \lq\lq purest" in terms of their lack of mixings. The $\Lambda$ is the only baryon built out of a partially anti-symmetric flavor-spin combination, and the $\Omega$ and $\Delta^{++}$ have especially simple unmixed flavor-spin wave functions. 
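As a rough internal check on Table \ref{table3}, one can average the absolute deviations of the fitted masses from experiment, averaging isospin partners first. The grouping below is our transcription of the table entries:

```python
# (model, experiment) masses in MeV/c^2 transcribed from Table 3;
# isospin partners are grouped and averaged before comparison
groups = {
    "N":      ([957.1],       [939]),
    "Delta":  ([1208, 1233],  [1232]),
    "Lambda": ([1114],        [1116]),
    "Sigma":  ([1197, 1151],  [1193]),
    "Sigma*": ([1405, 1382],  [1385, 1384]),
    "Omega":  ([1686],        [1672]),
}

def avg(xs):
    return sum(xs) / len(xs)

deviations = {name: abs(avg(model) - avg(exp))
              for name, (model, exp) in groups.items()}
mean_dev = avg(list(deviations.values()))
print(f"mean |M_model - M_exp| = {mean_dev:.1f} MeV/c^2")
```

This yields a mean deviation of roughly 12 $MeV/c^2$, consistent with the isospin-averaged goodness-of-fit figure quoted in the model comparison that follows.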
The $\Lambda$ is especially important because of the possibility of strange matter formation, which will be studied in Sections \ref{SSone} and \ref{SSthree}. We think it is informative to compare the ground-state baryon mass fit achieved here with two historically important hadronic models. The first column of Table \ref{table6} lists the total TF quark model masses again from Table \ref{table3}, the second column gives the original fit of the MIT bag model\cite{bag} and the third column gives the fit from the Isgur and Capstick relativized quark model\cite{Isgur}. (Note that the $\Xi$ and $\Xi^{*}$ TF masses have not been listed in Table \ref{table6} for the reason pointed out earlier in this section.) We evaluate the average absolute mass difference between the computed masses and the experimental masses as a rough measure of goodness of fit. The bag model average mass difference is 10.8 $MeV/c^2$, while the Isgur/Capstick model average mass difference is 13.2 $MeV/c^2$. The isospin splittings in the TF case make a direct comparison impossible. However, if the isospin states are first averaged before the mass difference is calculated, we obtain 12.2 $MeV/c^2$ for the overall average. On the other hand, taking the average of the separate high and low mass differences gives 13.5 $MeV/c^2$. In this sense the goodness of fit of the TF model is approximately the same as that of the other two models. \begin{table} \caption{Comparison among different models. Various charge states are identified with superscripts for the present TF quark model.
The experimental results represent averages over the charge states.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline\hline Particle &TF quark model &MIT Bag model\cite{bag} & Relativized model\cite{Isgur} &\quad Exp \quad \\ & ($MeV/c^2$) &($MeV/c^2$) & ($MeV/c^2$) &\quad($MeV/c^2$) \quad \\ \hline \vspace{-.3cm}& & & & \\ $P,N$ &957 &938 &960 & 939 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Delta$ &$1233^{++,-}, 1208^{+,0}$ &1233 &1230 &1232 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Lambda$ & 1114 &1105 &1115 &1116 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Sigma$ &$1197^{+,-}, 1151^{0}$ &1144 &1190 &1193 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Xi $ & - & 1289 & 1305 & 1318 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Sigma^{*}$ &$1405^{+,-}, 1382^{0} $ &1382 &1370 & 1385 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Xi^{*}$ & - & 1529 & 1505 & 1533 \\ \vspace{-.3cm}& & & & \\ \hline \vspace{-.3cm}& & & & \\ $\Omega$ &1686 & 1672 &1635 &1672 \\[1ex] \hline \end{tabular} \end{center} \label{table6} \end{table} The radii listed in Table \ref{table3} are actually the bag radii of the wave functions. In two cases, the $P$ and $\Sigma^+$, two radii are given since there are two configurations involved; see Table \ref{picturetable2}. The larger radius in both cases is associated with the configuration with the larger particle number $N_1$, which is exactly as one would expect from the exclusion principle. As an example of the form of the TF wave functions, Figures \ref{fig1} and \ref{fig2} show the two parts of the proton wave function. Note that $x=1$ corresponds to a distance of 1.67\,$fm$ from Eq.(\ref{peq2}). 
Also notice an unusual aspect of our model due to its statistical nature: there is a clean spatial separation of the quark flavors for the ($u,d$) quarks in the outer part of the $\Delta^{+,0}$ TF wave functions (see Fig.\ref{fig2}), as well as for the (light, strange) sectors in the $\Sigma^{*+,-}$ particles. (This would be true as well for the $\Xi^{*0,-}$ particles had they been constructed.) \begin{table} \caption{Calculated baryonic electromagnetic squared charge radii (in $fm^2$) compared to various results and models.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline\hline {\bf Particle} &{\bf TF model} & {\bf lattice}\cite{Leinweber} & {\bf HB$\chi$PT}\cite{ramsey} & {\bf Chiral}\cite{berger} & {\bf Expt.}\cite{PDG} \\ \hline $P$ &1.18 & 0.685$\pm$ 0.066 & 0.735 & 0.82 & 0.769$\pm$ 0.009\\ \hline $N$ & -0.11 & -0.158$\pm$ 0.033 & -0.113 & -0.13 & -0.1161$\pm$ 0.0022\\ \hline $\Delta^{++}$ &1.62 & - & - & 0.43 & -\\ \hline $\Delta^{+}$ &1.25 & - & - & 0.43 & -\\ \hline $\Delta^{0}$ &-0.17 & - & - & 0.0 & - \\ \hline $\Delta^{-}$ &1.62 & - & - & 0.43 & -\\ \hline $\Lambda$ &0.11 & 0.010$\pm0.009$ & -0.284 & 0.03 & -\\ \hline $\Sigma^{+}$ &1.21 & 0.749$\pm$ 0.072 & 1.366 & 1.13 & - \\ \hline $\Sigma^{-}$ &0.90 & 0.657$\pm$ 0.058 & & 0.72 & 0.61$\pm$ 0.15 \\ \hline $\Sigma^{0}$ &0.12 & - & - & 0.20 & - \\ \hline $\Sigma^{*+}$ &1.27 & - & 0.798 & 0.42 & -\\ \hline $\Sigma^{*-}$ &0.93 & - & - & 0.37 & - \\ \hline $\Sigma^{*0}$ &0.11 & - & - & 0.03 & -\\ \hline $\Omega^{-}$ &1.16 & - & - & 0.29 & - \\ \hline \hline \end{tabular} \end{center} \label{table5} \end{table} To obtain radii comparable to experiment, we also calculate the electromagnetic squared charge radii from our charge distributions in Table \ref{table5}.
We calculate the squared charge radius for a particle $P$ with total charge $Q$ from \begin{equation} \langle r^2\rangle_P = \frac{R^2}{Q} \sum_{B,q} P_B Q_q \langle x^2\rangle_{B,q}.\label{rsquared} \end{equation} The sum is over base configurations ($B=1,\dots ,8$) and quark types ($q=l,ll,s$) in Table \ref{picturetable}. $R$ is given by Eq.(\ref{peq2}), $P_B$ are configuration weightings read off from Table \ref{picturetable}, and $Q_q$ are the individual quark charges in units of the magnitude of the electron charge. When the total charge of the particle is zero, we must replace $Q$ with 1. We also have from Eq.(\ref{norm}) \begin{equation} \langle x^2\rangle_{B,q} = \frac{3A}{N_{q}}\int_0^{x_{max}} dx\, x^{5/2} (f_{B,q}(x))^{3/2}. \end{equation} Then for the proton for example, \begin{equation} \langle r^2\rangle_P = R^2\left(\frac{1}{3}\langle x^2\rangle_1 +\frac{2}{3}\left(\frac{4}{3}\langle x^2\rangle_{2,ll}-\frac{1}{3}\langle x^2\rangle_{2,l}\right)\right),\label{r2proton} \end{equation} whereas for the neutron \begin{equation} \langle r^2\rangle_N = R^2\frac{2}{3}\left(-\frac{2}{3}\langle x^2\rangle_{2,ll}+\frac{2}{3}\langle x^2\rangle_{2,l}\right).\label{r2neutron} \end{equation} The $\langle x^2\rangle_1$ term corresponding to the $B=1$ configuration is absent here since the same function weights both the positive and negative charges. The Table \ref{table5} radii are relatively large compared, for example, to measured charged particle electromagnetic radii. This is a result of the fit with a smaller value of $B^{1/4}$ than most standard bag model phenomenologies. For comparison, we have also listed in this Table recent results from a lattice calculation\cite{Leinweber}, heavy baryon chiral perturbation theory\cite{ramsey}, a chiral constituent quark model\cite{berger}, and the three known results from experiment\cite{PDG}. (Note that Ref.\cite{bag} gives the charge radii squared of the proton and neutron as 0.53\,$fm^2$ and 0, respectively. 
Ref.\cite{Isgur} did not calculate this quantity.) Our proton and $\Sigma^-$ squared charge radii are too large. However, the more extended $d$ quark TF wave function seen in Fig.\ref{fig2} results in a negative neutron squared charge radius, comparable to experiment. We now turn to applying the phenomenology developed here to quark configurations in a number of sectors. \section{Three Case Studies}\label{Ssix} In this section we will apply the fit found in the previous section to begin to explore some of the interesting phenomenology of high number multi-quark states. In particular, we will examine three possibilities: the {\it H}-dibaryon, high multi-quark strange states, and nucleon-nucleon 6 quark resonances. \subsection{{\it H}-Dibaryon Considerations}\label{SSone} The {\it H}-dibaryon, which is an isospin $I=0$, total angular momentum $J=0$ $uuddss$ flavor state, is an interesting application of the model. Although there are many lattice results\cite{detmold,inoue,sakai,wetzorke,luo,inuoe2,beane,beane2,inuoe3,inuoe4}, there are systematic issues in the calculations due to lattice volume and quark mass effects. There are experimental results\cite{takahashi,yoon} which limit the bound state energy to shallow values, if indeed it exists at all\cite{trattner}. The lattice results must be extrapolated in quark mass, and the results of this procedure are still rather uncertain\cite{detmold}. Our model is designed to evaluate states for further lattice study, so it is interesting to study this possible bound state with the new tool we now possess, even though we are doing this study \lq\lq post-lattice" at this point. We may form a total $I=0$, $j=0$ combination by combining two particles in the ($\Lambda$$\Lambda$), ($\Sigma$$\Sigma$) or ($\Xi N$) systems\cite{meissner}. The individual particle isospins are $I_{\Lambda}=0$, $I_{\Sigma}=1$, $I_{N}=\frac{1}{2}$ and $I_{\Xi}=\frac{1}{2}$.
One can look up the Clebsch-Gordan coefficients to couple these systems to a $|I_{tot}=0\rangle$ state. In quantum mechanics the states mix with one another, giving a coupled channel problem\cite{inoue}. In principle, one would proceed here exactly as one would for the quantum mechanical problem: assume a general linear combination and minimize with respect to the parameters to find the ground state. Such considerations are beyond the purview of the present application. However, given the quantum wave functions, each of these combinations may be examined separately for the lowest mass. The possible TF quark model $uuddss$ $j=0$ combinations are only three in number (we do not list some redundant configurations in the $m_u=m_d$ isospin limit): \begin{eqnarray} \lefteqn{\quad(u^{\uparrow}u^{\uparrow}d^{\downarrow}d^{\downarrow}s^{\uparrow}s^{\downarrow}): N_1=2, g_1=2; N_s=1,g_s=2,} \nonumber \\ &&(u^{\uparrow}u^{\uparrow}d^{\uparrow}d^{\downarrow}s^{\uparrow}s^{\uparrow}): N_1=2, g_1=1; N_2=1, g_2=2; N_s=2, g_s=1,\nonumber \\ &&(u^{\uparrow}u^{\downarrow}d^{\uparrow}d^{\downarrow}s^{\uparrow}s^{\downarrow}): N_1=1, g_1=4; N_s=1, g_s=2.\nonumber \end{eqnarray} Unfortunately, the $u, d$ asymmetric second combination, $(u^{\uparrow}u^{\uparrow}d^{\uparrow}d^{\downarrow}s^{\uparrow}s^{\uparrow})$, cannot be formed here because it involves {\it three} separate and different quark wave functions. This limits us to investigation of the $\Lambda\Lambda$ state only, although this is also likely the lightest state from Pauli blocking considerations. We emphasize that the technical limitation here is due to the software developed so far, not to the model itself. When fully reduced, the bare TF configuration for the $\Lambda\Lambda$ $I=0,j=0$ state is given by: \begin{equation*} \frac{1}{2}(u^{\uparrow}u^{\uparrow}d^{\downarrow}d^{\downarrow}s^{\uparrow}s^{\downarrow}) +\frac{1}{2}(u^{\uparrow}u^{\downarrow}d^{\uparrow}d^{\downarrow}s^{\uparrow}s^{\downarrow}).
\end{equation*} The spin energy term is also easily calculable and is included in our results. Our result for the $\Lambda\Lambda$ state is 2314 MeV/c$^2$; the component energies and bag radius (not charge radius) are given in Table \ref{multiquark}. This is 86 MeV/c$^2$ above the two-$\Lambda$ threshold of 2228 MeV/c$^2$. Our present results thus indicate that the {\it H}-dibaryon state is not bound in the TF quark model. \subsection{High Multi-Quark Strange States}\label{SSthree} We have studied the possibility of a bound {\it H}-dibaryon in Section \ref{SSone}. Lattice results have already begun to accumulate for this system. We can get ahead of the lattice results by beginning to investigate the possibility of bound states with many more quarks. We have investigated the possibility of additional $I=0$ states constructed from two $\Lambda$ wave functions. Such states can be thought of as multi-combinations of {\it H}-dibaryons with $12, 18, 24\dots$ quarks. Such states would be very difficult if not impossible for the lattice to simulate. Here, it is a simple matter of changing the initial simulation parameters to accommodate these new states. In this model, we formally have a maximum of 4 light quark flavors. We have investigated the 12 quark configuration, \begin{eqnarray} N_1=2, g_1=4; N_s=2, g_s=2,\nonumber \end{eqnarray} the 18 quark configuration, \begin{eqnarray} N_1=3, g_1=4; N_s=3, g_s=2,\nonumber \end{eqnarray} and the 24 quark configuration, \begin{eqnarray} N_1=4, g_1=4; N_s=4, g_s=2.\nonumber \end{eqnarray} Higher quark number states are obvious generalizations. We find the following energies and masses in our investigation:\\ \noindent $\bullet$ For the 12 quark case, we find a mass of 4690 MeV/c$^2$, 234 MeV/c$^2$ more than the 4 $\Lambda$ threshold and 62 MeV/c$^2$ more than twice the TF {\it H}-dibaryon mass.
\\ \noindent $\bullet$ For the 18 quark case, we find a mass of 7100 MeV/c$^2$, 416 MeV/c$^2$ more than the 6 $\Lambda$ threshold and 158 MeV/c$^2$ more than three times the TF {\it H}-dibaryon mass.\\ \noindent $\bullet$ Finally, for the 24 quark case, we find a mass of 9510 MeV/c$^2$, 598 MeV/c$^2$ more than the 8 $\Lambda$ threshold and 254 MeV/c$^2$ more than four times the TF {\it H}-dibaryon mass.\\ These states are moving further away from the bound particle thresholds. Interestingly, the systems remain relatively non-relativistic, with a smaller percentage of the component energy being kinetic as more quarks are added. In addition, the spin energies become a smaller system component as well. These results are listed in Table \ref{multiquark}. Again, note that the radius quoted here is the bag radius of the system rather than the charge radius. \begin{table} \caption{Multi-quark configuration energies.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline\hline Particle &Bag Radius &Rest Mass &{\it T} &{\it U} &Spin &Vol.
Energy &Total &Threshold \\ & $(fm)$ & $(MeV/c^2)$ & $(MeV/c^2)$ &$(MeV/c^2)$ &$(MeV/c^2)$ & $(MeV/c^2)$ &$(MeV/c^2)$ &$(MeV/c^2)$ \\ \hline \vspace{-.3cm} & & & & & & & & \\ $H-{\rm dibaryon}$ &1.58 (1.45) &2174 &386.4 &-280.4 &-61.7 &96.2 &2314 &2228\\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $12-{\rm quark}$ &1.88 &4348 &640.8 &-458.8 &-22.8 &183.2 &4690 &4456\\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $18-{\rm quark}$ &2.18 &6522 &932.5 &-627.3 &-13.9 &286.3 &7100 &6684\\ \vspace{-.3cm} & & & & & & & & \\ \hline \vspace{-.3cm} & & & & & & & & \\ $24-{\rm quark}$ &2.41 &8696 &1223 &-789.0 &-10.0 &390.0 &9510 &8912\\ & & & & & & & & \\ \hline\hline \end{tabular} \end{center} \label{multiquark} \end{table} \subsection{Nucleon-Nucleon Resonances}\label{SStwo} When bound states are not present in a channel, our model is useful in predicting multi-quark baryon resonances. In Table \ref{table4} we present a \lq\lq scan" of the configuration energies for the light, two wave function case for nucleon-nucleon scattering. Note these are the configuration energies, not the particle energies. To obtain particle resonance energies, we must proceed by projecting the quantum state into the particle configurations. Linear combinations of these configuration energies will be the actual resonance energies. It turns out there are 8 different configurations that we can reach with our two inequivalent wave functions given that $g_1 N_1+g_2 N_2=6$. There are actually 4 light flavors available here, $u^{\uparrow},u^{\downarrow},d^{\uparrow},d^{\downarrow}$, so that the maximum value of $g_1+g_2$ is 4. Some missing cases are equivalent to those listed. For example, the $(N_1=3, g_1=1; N_2=3,g_2=1)$ case is obviously equivalent to the $(N_1=3, g_1=2)$ case. The explicit possibilities are listed in Table \ref{table4} from highest mass ($N_1=6,g_1=1$) to lowest mass ($N_1=2,g_1=3$). 
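As a cross-check of the counting, the inequivalent six-quark configurations can be enumerated by brute force. The constraints below (at most four light flavor-spin species, $g_1N_1+g_2N_2=6$, wave function 1 carrying the larger occupancy, and $N_1=N_2$ cases merged into a single wave function) encode our reading of the rules stated in the text:

```python
def six_quark_configs(max_species=4, n_quarks=6):
    """Enumerate inequivalent (N1, g1, N2, g2) with g1*N1 + g2*N2 = n_quarks.
    Wave function 1 carries the larger occupancy (N1 > N2); N2 = g2 = 0
    denotes a single-wave-function configuration, and g1 + g2 is capped
    by the number of available light flavor-spin species."""
    configs = set()
    for N1 in range(1, n_quarks + 1):
        for g1 in range(1, max_species + 1):
            if g1 * N1 == n_quarks:          # single wave function
                configs.add((N1, g1, 0, 0))
            for N2 in range(1, N1):          # two inequivalent wave functions
                for g2 in range(1, max_species + 1 - g1):
                    if g1 * N1 + g2 * N2 == n_quarks:
                        configs.add((N1, g1, N2, g2))
    return sorted(configs, reverse=True)

configs = six_quark_configs()
print(len(configs), configs)  # the 8 configurations of Table 4
```

The eight tuples returned match the entries of Table \ref{table4} (ordered lexicographically here rather than by mass); equivalent cases such as $(N_1=3,g_1=1;N_2=3,g_2=1)$ collapse automatically onto $(N_1=3,g_1=2)$.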
The results are largely as one would expect from the exclusion principle in that states with a greater $N_1$ or $N_2$ are sequentially heavier. We see that all of the states are greater than the two-nucleon threshold and that there are no elemental six-quark nucleon-nucleon bound states in our model. These could be contributing to the known resonances in both $pp$ and $np$ elastic scattering above the two-nucleon threshold. Of course, it is impossible to tell if the resonances are from mesons and baryons or true six-quark states. \begin{table} \caption{The eight six-quark configuration energies containing light quarks in the TF quark model. Note that $g_1N_1+g_2N_2=6$.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline\hline \vspace{-.2cm}& & & & \\ $N_{1}$ &$g_{1}$ &$N_{2}$ & $g_{2}$ &$Mass$ \\ & & & &$(MeV/c^{2})$ \\ \hline 6 &1 &0 &0 &2309 \\ \hline 5 &1 &1 &1 &2170 \\ \hline 4 &1 &2 &1 &2154 \\ \hline 4 &1 &1 &2 &2125 \\ \hline 3 &2 &0 &0 &2104 \\ \hline 3 &1 &1 &3 &2068 \\ \hline 2 &2 &1 &2 &2005 \\ \hline 2 &3 &0 &0 &2003 \\ \hline \end{tabular} \end{center} \label{table4} \end{table} However, there are two surprising aspects to our results. First, the lowest mass state is not the one we expect. One would expect the $(N_1=2, g_1=2; N_2=1,g_2=2)$ state to be lower in mass than the $(N_1=2, g_1=3)$ state from the exclusion principle, since we are spreading 6 quarks across the maximum number of light flavors. Notice the mass difference here is quite small, $\sim 2$ MeV/c$^2$. Second, the rest energy for the lightest six-quark state is only 89 MeV/c$^2$ above the two-nucleon threshold, using the calculated nucleon mass, 957 MeV/c$^2$, from Table \ref{table3}. This is comparable to the threshold energy found for the {\it H}-dibaryon, which most investigators would expect to be closer to the bound threshold. Note that we only calculate the bare mass here in this quick scan of configuration energies.
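The counting of configurations described above is easy to check mechanically. The following sketch (our own illustration, not code from the paper) enumerates occupancy/degeneracy pairs with $N_1>N_2$, $g_1N_1+g_2N_2=6$ and $g_1+g_2\le 4$, and recovers exactly the eight configurations of Table \ref{table4}:

```python
# Enumerate the two-wave-function configurations for six light quarks:
# occupancies N1 > N2 with degeneracies g1, g2, subject to
#   g1*N1 + g2*N2 = 6   (six quarks in total) and
#   g1 + g2 <= 4        (at most 4 light flavors: u/d with spin up/down).
# N1 > N2 is required strictly, since equal occupancies merge into a
# single wave function with a larger degeneracy.
def six_quark_configurations():
    configs = []
    for N1 in range(1, 7):
        for g1 in range(1, 5):
            rem = 6 - g1 * N1
            if rem < 0:
                continue
            if rem == 0:
                configs.append((N1, g1, 0, 0))  # single wave function
                continue
            for N2 in range(1, N1):            # inequivalent: N2 < N1
                for g2 in range(1, 5 - g1):    # enforce g1 + g2 <= 4
                    if g2 * N2 == rem:
                        configs.append((N1, g1, N2, g2))
    return configs

print(len(six_quark_configurations()))  # 8
```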
\section{Conclusions}\label{seven} We have presented the first numerical results in the application of the TF model to strong interaction phenomenology. In this paper we have begun to develop the TF quark model as a relatively uncomplicated tool in strong interaction dynamics. Our results are presently restricted to non-relativistic aspects and to two inequivalent TF wave functions. We have shown how to introduce spin into the formalism and found parameters which effectively fit the low energy baryon states. We have begun to understand the internal structure required by the differential equations. In \cite{wilcox} we saw discontinuities at the edge of TF wave functions due to external pressure. We also now see internal flavor discontinuities, even for degenerate masses, simply due to energy minimization. Such a spatial separation does not occur in traditional non-relativistic models and is a consequence of the statistical nature of the model. The results presented here show how to construct the necessary wave functions in the two wave function multi-flavor case with configuration quantum numbers $N_1,g_1,N_2,g_2$. The parameterization found with 5 fit parameters ($\alpha_s, B, m_u =m_d, m_{str}, {\rm g}$) is relatively good, but the isospin violations are too large for baryon multiplets. The squared charge radii of the particles found with our non-relativistic approach are in general too large, but the systematics resulting in a good value of the neutron squared charge radius are encouraging. We have undertaken three case studies using the low energy phenomenological fits: the {\it H}-dibaryon, nucleon-nucleon resonances and high multi-quark strange states. Our results for strange quark states show growing systems which are remaining non-relativistic and whose spin interactions are a decreasing percentage of the total energy. There are no hints that the systems can be bound. 
Indeed, the differences between the mass and threshold values in Table \ref{multiquark} are growing as a percentage of the threshold. Strange quark matter scenarios look very unlikely from this point of view. We have also confirmed a phenomenology in nucleon-nucleon scattering consistent with known low energy hadronic physics. One surprising finding is that the lightest six-quark light-sector mass actually has a threshold value comparable to the equivalent strange sector involving the $\Lambda$. Of course, these findings should be regarded as initial indications rather than robust results. However, in these applications we are consistently finding that the semi-classical statistical point of view does not encourage the idea of bound many-quark baryonic states. The phenomenological fits and findings can be viewed in different ways. On one hand, the fact that a many-particle theory can accommodate states of three particles is extremely non-trivial. On the other hand, one has to keep in mind that one is required to do the fitting in exactly the wrong place for such a model. This is similar to the situation of the $1/N_c$ expansion in strong interaction field theory when $N_c=3$. Keeping this in mind, we are emphasizing the emerging {\it systematics} in the fits rather than the numerical results. Hopefully, lattice calculations can be extended to investigate this picture more completely. We are just at the beginning point in the application of this model to known strong interaction phenomenology. There are many remaining areas in which to continue the development of the TF quark model. Besides removing the technical restriction to two inequivalent TF flavor wave functions here, we note that the model may be extended:\\ \noindent$\bullet$ To be relativistic \\ \noindent$\bullet$ To include anti-quarks\\ \noindent$\bullet$ To include central heavy quarks\\ \noindent$\bullet$ To examine exotic forms of matter\\ \noindent All these matters can be addressed within this model.
The relativistic equation to be solved is given in Ref.\cite{wilcox}, although the system energy formulas are not yet developed. We cannot strongly defend our non-relativistic assumption other than stating that the present model should be regarded as a work in progress. The overly simplistic non-relativistic wave functions used in the spin projection process are leading to unrealistically large isospin splittings within multiplets. This is surely a signal that higher component wave functions should be developed. This would most naturally be done in a relativistic context. The inclusion of anti-quarks is straightforward and would simply be accomplished by introducing a different color coupling for such particles and averaging over the color couplings in the model. Such an extension would allow us to investigate exotic states such as a pentaquark $(udud\bar{s})$ state\cite{LEPS} or a four-quark $(\bar{c}c\bar{d}u)$ state\cite{BELLE}. Heavy central quarks can be examined using Coulombic sources much like atomic TF systems. Perhaps most interestingly, the color-flavor locking scenario, involving quark Cooper pairs and massive gluons, can also be explored\cite{cflock}. We believe that the TF quark model can be an effective theoretical tool in delineating the systematics of interesting many-quark baryonic states. \newpage
\section{Monitoring the position} Time-continuous position measurement can be understood as an idealization of a sequence of discrete unsharp position measurements carried out consecutively on a single copy of a quantum particle \cite{Dio88}. The notion of unsharp measurement is instrumental here. Such an unsharp measurement of the position $\hq$ can be realized as an indirect von Neumann measurement; instead of measuring the particle's position directly, an ancilla system is scattered off the particle and then the ancilla is measured \cite{Neu55,CavMilb87, Busch95, BrePet02,JacSte06}. The observed results yield limited information on the position $\hq$ of the scatterer. In a simple description, a single unsharp measurement of resolution $\sigma$ collapses the wave function onto a neighborhood with characteristic extent $\sigma$ of a random value $\bq$: \begin{equation} \psi(q)\longrightarrow \frac{1}{p(\bq)}\sqrt{G_\sigma(q-\bq)}\psi(q)\,, \label{Gpsi} \end{equation} where $G_\sigma(q)=(1/\sqrt{2\pi\sigma^2})\exp(-q^2/2\sigma^2)$ is a central Gaussian function. The random quantity $\bq$ is the measured position which determines the collapse, i.e., the weighted projection, of the wave function. The probability to obtain the measurement result $\bq$ - which also plays the role of the normalization factor of the post-measurement wave function - reads: \begin{equation} p(\bq)= \int G_\sigma(q-\bq)\vert\psi(q)\vert^2 dq\,. \label{pbq} \end{equation} As a matter of fact, sharp (direct) von Neumann position measurements are the idealized special case, while unsharp measurements - though not necessarily with the Gaussian profile - are the ones which we encounter in practice and which suit a tractable theory of real-time monitoring of the position of a single quantum particle.
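A single unsharp measurement step, Eqs.~(\ref{Gpsi}) and (\ref{pbq}), can be sketched on a discretized spatial grid as follows (an illustrative implementation of the collapse rule, not the code used for the simulations below; here the post-measurement state is renormalized explicitly rather than via the factor $1/p(\bq)$, and all parameter choices are arbitrary):

```python
import numpy as np

def unsharp_measurement(psi, q, sigma, rng):
    """Single unsharp position measurement of resolution sigma.

    Samples an outcome qbar with probability p(qbar) on the grid q and
    returns (qbar, collapsed and renormalized wave function)."""
    dq = q[1] - q[0]
    G = lambda x: np.exp(-x**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    # p(qbar) = int G_sigma(q - qbar) |psi(q)|^2 dq, evaluated on the grid
    prob = np.abs(psi)**2 * dq
    p = np.array([np.dot(G(qb - q), prob) for qb in q])
    p /= p.sum()
    qbar = rng.choice(q, p=p)
    # collapse: multiply by sqrt(G_sigma(q - qbar)), then renormalize
    psi_new = np.sqrt(G(q - qbar)) * psi
    psi_new /= np.sqrt(np.sum(np.abs(psi_new)**2) * dq)
    return qbar, psi_new
```

When $\sigma$ exceeds the width of $\psi$, a single such step only narrows $|\psi|^2$ slightly, in line with the discussion above.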
In our discretized model of monitoring a single particle, we assume an unknown initial wave function $\psi_0(q)$ and we perform consecutive unsharp position measurements of resolution $\sigma$ at times $t=\tau,2\tau,\dots$, yielding the corresponding sequence $\bq_t$ of measurement outcomes. Between two consecutive unsharp measurements the wave function evolves according to its Schr\"odinger equation (self-dynamics). The resolution $\sigma$ and the frequency $1/\tau$ of unsharp measurements should be chosen in such a way as not to heavily distort the self-dynamics of the particle. It turns out that the relevant parameter is $\sigma^2\tau$; we call \begin{equation} \gamma=\frac{1}{\sigma^2\tau} \label{gamma} \end{equation} the strength of position monitoring. If $\sigma_\psi$ stands for the characteristic extension of the wave function, then $\gamma\sigma_\psi^2$ is the average decoherence rate at which our monitoring distorts the monitored particle's self-dynamics. We should keep this rate modest compared to the rate of the Schr\"odinger evolution due to the Hamiltonian $\hH$ of the monitored particle. Low values of the strength $\gamma$ may, however, result in low efficiency of position monitoring and slow convergence of our method of wave function estimation, cp. Sec.~\ref{numSim}. The above constraints on $\sigma^2\tau$ can in general be matched with further ones - see Sec.\ IV - that assure the applicability of the continuous limit and its analytic equations. \section{Monitoring the wave function} While it seems plausible that after a sufficiently long time $t$ the sequence of unsharp position measurements provides enough data to estimate $\vert\psi_t(q)\vert^2$, it may come as a surprise that position measurements enable a faithful monitoring of the full wave function $\psi_t(q)$ as well. Let us just outline the reason.
Measuring the position $\hq$ at times $t=\tau,2\tau,\dots$ on a system with evolving Schr\"odinger wave function $\psi_t$ is equivalent to consecutive measurements of the Heisenberg observables $\hq_t=\exp(it\hH)\hq\exp(-it\hH)$ on a system with static wave function $\psi_0$. The set of Heisenberg coordinates $\{\hq_t\}$ will exhaust a sufficiently large space of incompatible observables so that their measurements will lead to a faithful determination of $\psi_0$ and - this way - to our faithful determination of $\psi_t$ for long enough times $t$. In the degenerate case $\hH=0$, monitoring turns out to be trivial: for long enough times, a large number $t/\tau$ of unsharp position measurements of resolution $\sigma$ is equivalent to a single sharp measurement of resolution $\sigma/\sqrt{t/\tau}$; position monitoring thus yields just the preparation of a static, sharply localized wave function - an approximate `eigenstate' of $\hq$. Our monitoring of the wave function means a real-time estimation of it, where the quality of monitoring depends on the fidelity of the estimation. We start from a certain initial estimate $\psi_0^e$ and simulate its evolution according to the self-dynamics of the particle, which is assumed to be known, until time $t=\tau$. Immediately after we have learned the first position $\bq_\tau$ from the first measurement on the particle, we update the estimate according to the same rule (\ref{Gpsi}) as the actual wave function of the particle and renormalize it: \begin{equation} \psi^e_\tau(q)\longrightarrow \mbox{normalization}\times\sqrt{G_\sigma(q-\bq_\tau)}\psi^e_\tau(q)\,. \label{Gpsie} \end{equation} This update resembles the Bayes principle of non-parametric statistical estimation. We repeat this procedure for $t=2\tau,3\tau,\dots$ and expect that the estimate $\psi^e_t$ and the true wave function $\psi_t$ will converge! A rigorous proof of convergence is missing.
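In the degenerate case $\hH=0$ the scheme reduces to repeated Gaussian reweighting of both the true wave function and its estimate by the same outcomes. The following self-contained sketch (our illustration; grid, widths, resolution, and the number of steps are arbitrary choices) shows the update rule (\ref{Gpsie}) driving the estimate toward the true state:

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.linspace(-10.0, 10.0, 801)
dq = q[1] - q[0]
sigma = 5.0                           # resolution of a single unsharp measurement

def normalize(psi):
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dq)

def gaussian(center, width):
    return normalize(np.exp(-(q - center)**2 / (4.0 * width**2)) + 0j)

def fidelity(a, b):
    return abs(np.sum(np.conj(a) * b) * dq)

psi = gaussian(-1.0, 1.0)             # true state; H = 0, so no self-dynamics
psi_e = gaussian(2.0, 2.0)            # initial (wrong) estimate

f_start = fidelity(psi, psi_e)
for _ in range(200):
    # outcome qbar: a position sampled from |psi|^2, blurred by resolution sigma
    dens = np.abs(psi)**2 * dq
    q_true = rng.choice(q, p=dens / dens.sum())
    qbar = q_true + sigma * rng.standard_normal()
    w = np.exp(-(q - qbar)**2 / (4.0 * sigma**2))  # sqrt of G_sigma, up to a constant
    psi = normalize(w * psi)          # collapse of the true state
    psi_e = normalize(w * psi_e)      # identical update of the estimate
f_end = fidelity(psi, psi_e)
```

Since both states receive the same multiplicative weights, the common Gaussian factor eventually dominates the different initial envelopes and the two wave functions coincide.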
In the continuous limit, nonetheless, convergence has been proved for the general case \cite{DioKonSchAud06} - not excluding the lack of convergence in specific degenerate cases when the set of Heisenberg observables $\{\hq_t\}$ remains too narrow to determine $\psi_0$. This is the case, for example, for a two-dimensional separable dynamics in two coordinates $\hat x,\hat y$, where only the coordinate $\hat x$ is monitored. Rather than pursuing the rigorous theoretical conditions of convergence, cf.~\cite{HanMab05}, we turned to numerical tests of continuous measurements that have convincingly confirmed our method. \section{Numerical Simulations}\label{numSim} We simulated the evolution of a single hydrogen atom subject to continuous measurements in several potentials. However, the conclusions of our discussion are not restricted to hydrogen atoms; similar results can be expected for atoms with higher masses in appropriately scaled potentials. The coupled evolutions of the wave function, the measurement readout and the estimated wave function were simulated numerically by discretizing the corresponding stochastic differential equations (cp. Sec.~\ref{Methods}). For this purpose we employed a scheme of Kloeden and Platen which is accurate up to second order in the time step of the discretization \cite{KLoPla92,BrePet02}. In order to study the relation between the evolution of the wave function of the particle on the one hand and the evolution of its estimate on the other, we first restrict ourselves to one-dimensional spatial motion. In this case the graphical representation is the simplest and thus gives a clear picture of the convergence between real and estimated wave functions. As an example we consider a hydrogen atom situated in a quartic double well potential with a shape as depicted in Fig.~\ref{sequence}. We assumed a continuous measurement of the position of the hydrogen atom with strength $\gamma=9.9856 /(\mu\mbox{m})^2\mbox{s}$.
In order to get an impression of the dimensions of the measurement, let us invoke Eq.~(\ref{gamma}) to note that this value of $\gamma$ may correspond, e.g., to single Gaussian measurements with spatial resolution of $\sigma= 1.4$mm repeated at time periods $\tau=50$ns. The spatial resolution of the single weak measurements \cite{Dio06} is thus $140$ times poorer than the width $\sigma=10 \mu$m of the initial Gaussian wave function of the atom. \begin{figure}[ttt] \centering \includegraphics[width=16cm]{papersequev3} \caption{Time sequence of the real probability density of atomic position $|\psi|^2$ (blue solid line) and the estimated probability density of position $|\psi^e|^2$ (red dashed line). The solid black line represents the double well potential as a function of position. Its minima are 189 $\mu$m apart and the height of the central maximum is given by $1\times 10^{-13}$ eV.} \label{sequence} \end{figure} In Fig.~\ref{sequence} we have depicted snapshots recorded at different times of the spatial probability density $|\psi(x)|^2$ of the H-atom (blue solid line) and the squared modulus $|\psi^e(x)|^2$ of the estimated wave function (red dashed line). Initially both probability densities assume the form of Gaussians which differ in width and location within the double-well potential. The sequence of pictures demonstrates the convergence of the densities in the course of a continuous measurement. The real probability density $|\psi(x)|^2$ of the H-atom, which possesses initially a slightly higher mean energy than the middle peak of the potential, oscillates back and forth between the sides of the potential. The oscillatory motion of the centre of $|\psi(x)|^2$ would also be expected qualitatively without measurements - as well as for a classical particle of the same mass moving in the double well. However, the real probability density does not spread as it would do without measurements.
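The quoted correspondence is a direct application of Eq.~(\ref{gamma}) and can be checked by hand; note that the rounded $\sigma=1.4$ mm reproduces the quoted $\gamma$ only approximately (the exact value would correspond to $\sigma\approx1.42$ mm):

```python
# gamma = 1/(sigma^2 * tau), with sigma in micrometres and tau in seconds
sigma = 1.4e3                       # 1.4 mm expressed in micrometres
tau = 50e-9                         # 50 ns
gamma = 1.0 / (sigma**2 * tau)      # in 1/((micrometre)^2 s); approx 10.2

# resolution of a single measurement vs. 10 micrometre initial packet width
width_ratio = sigma / 10.0          # = 140
```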
This localisation effect caused by the continuous unsharp position measurement adapts the motion of the H-atom to that of a classical particle which is perfectly localised at each instant. This illustrates the influence of the measurements and points to a particular kind of control of the wave function that can be exercised by means of unsharp position measurements. A smaller measurement strength would lead to less disturbance of the unitary motion in the potential but also to a lower speed of convergence between the real and estimated probability density. The estimated probability density $|\psi^e(x)|^2$, which is centered initially on the right-hand side of the potential, follows the real wave function until after approximately one oscillation period the corresponding probability densities coincide and evolve identically thereafter. But not only do the probabilities to find the atom at a certain position converge; in fact the complete wave function $\psi(x)$ and its estimate $\psi^e(x)$ coincide after a sufficiently long period! It can be proved analytically that the estimation fidelity $F=|\langle \psi\ket{\psi^e}|$, which measures the overlap between the wave functions $\psi$ and $\psi^e$, when averaged over many realisations of the continuous measurement, increases with time and converges to $1$ \cite{DioKonSchAud06}. Numerical simulations for the double-well potential show that in a typical realisation with initial values as described above and measurement strength $\gamma=9.9856 /(\mu\mbox{m})^2\mbox{s}$ the fidelity amounts to more than $95\%$ after $1.5$ oscillation periods. Fig.~\ref{fidstrength} shows the evolution of the estimation fidelity of a continuously measured H-atom moving in a plane under the influence of a Mexican hat potential. The latter is a rotationally symmetric version of the double well in two spatial dimensions.
For the sake of simplicity we assumed that both position coordinates are simultaneously and independently measured with the same strength $\gamma$. Such a continuous measurement of both coordinates typically yields evolutions of the estimation fidelities which are shown in Fig.~\ref{fidstrength} for several values of the measurement strength $\gamma$. In all depicted cases the fidelity comes very close to one within a period of $5$ms, i.e., estimated and real wave function then coincide. Thereafter the dynamics of the wave function, including the influence of the measurement, can thus be monitored with perfect fidelity. \begin{figure}[t] \includegraphics[width=9.4cm]{paperfidstrengthv3} \caption{The fidelity of estimation $F$ is plotted as a function of the duration of the continuous measurement of a H-atom moving in a Mexican hat potential for different values of the measurement strength $\gamma$ in units of $10/(\mu \mbox{m})^2\mbox{s}$. The height of the Mexican hat's central peak situated at the origin of the reference frame is given by $1.07\times 10^{-12}$eV, its minima lie on a concentric circle with radius $40\mu$m around the origin. The wave function $\psi(x)$ and its estimate $\psi^e(x)$ are initially Gaussians centered at ($-55\mu$m,$ -14.8\mu$m) and ($-103.6\mu$m,$ -103.6\mu$m) with widths of $10\mu$m and $5\mu$m, respectively. The plots demonstrate that perfect fidelity is reached eventually for all considered measurement strengths. However, the convergence time decreases with increasing $\gamma$.} \label{fidstrength} \end{figure} One might doubt that our monitoring remains efficient for highly complex wave functions like those developing in classically chaotic systems. Instead of the integrable Mexican hat potential, this time we study the chaotic H\'enon-Heiles potential, which depends on the radius $r$ as well as on the azimuth $\phi$: \begin{equation} V(x,y)=A\left[r^4+ar^2+br^3\cos(3\phi)\right]\,.
\label{Henon} \end{equation} For this potential we simulated continuous position measurement and monitoring with the following results. The saturation of the fidelity is reassuring: the estimate converges to the real wave function (Fig.~\ref{fig:henonconv}), with convergence setting in on a just slightly longer time scale than in the integrable Mexican hat potential, even though the wave functions show an apparently irregular, complex structure. Fig.~\ref{fig:henon2Dplot} shows the estimated and the real wave functions with an average overlap (i.e.\ a fidelity) of 91.58\%. While this value already indicates an accurate overall estimation of the real wave function, in small areas of the potential the corresponding probability densities still differ by up to 35\% of their highest peak. In particular, Fig.~\ref{fig:henon2Dplot} indicates that faithful monitoring is not only possible when the shape of the wave function of the particle is close to a Gaussian but also for rather complex shapes. \begin{figure}[t] \centering \includegraphics[width=10cm]{paperhenfidv2} \caption[Convergence for the H\'enon-Heiles potential]{The fidelity of estimation $F$ is plotted as a function of the duration of the continuous measurement at strength $\gamma=12.351/(\mu \mbox{m})^2\mbox{s}$ of a H-atom in the non-integrable H\'enon-Heiles potential (\ref{Henon}) with parameters $A=5.44\times 10^{-17}\mbox{eV}/(\mu \mbox{m})^4, a=13.09\mu \mbox{m}^2$ and $b=36.18\mu \mbox{m}$. The wave function $\psi(x)$ and its estimate $\psi^e(x)$ are initially Gaussians centered at ($-14.8\mu$m,$ -29.6\mu$m) and ($-29.6\mu$m,$ -29.6\mu$m), both with widths of $10\mu$m. We find that the fidelity converges to 1 and therefore our estimate becomes a good approximation of the real wave function. } \label{fig:henonconv} \end{figure} Monitoring, i.e., continuous unsharp observation, has a further specific merit: its robustness against unexpected external perturbations.
To demonstrate this robustness, we assumed that close to saturation of the estimation fidelity, as in Fig.~\ref{fig:henon2Dplot}, our atom in the H\'enon-Heiles potential is suddenly perturbed, e.g., by a collision with an environmental particle (here another hydrogen atom). For simplicity, we assume a momentum kick $p_x$ along the x-direction, which implies a multiplication of the real wave function by the complex function $\exp(ip_x x/\hbar)$; hence the estimated wave function has to start a new cycle of convergence. We map the momentum kick $p_x$ to a temperature by $k_BT=p_x^2/m$, as if it had a thermal origin, just to give a hint of its strength. Numerical results (Fig.~\ref{fig:kick}) show that the estimation fidelity recovers from these momentum perturbations (cp.\ movie \cite{movie}). In reality, repeated random perturbations might prevent perfect monitoring and the fidelity will saturate at less than $1$. This case is beyond the scope of our present work; its study will be of immediate interest, since real systems are subject to various noises that are not measured at all. The monitoring theory at non-optimum efficiency has been outlined earlier \cite{DioKonSchAud06}. \begin{figure}[t] \centering \includegraphics[width=12cm]{interference} \caption[HH2D]{The real (blue) and estimated (red) spatial probability densities $|\psi|^2$ and $|\psi^e|^2$, whichever is the bigger one, are depicted in the H\'enon-Heiles potential after time $3.15$ms, at fidelity $0.9158$, for the same initial states as in Fig.~\ref{fig:henonconv}. } \label{fig:henon2Dplot} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=14cm]{paperscatterfid3} \caption[Robustness of estimation scheme] {At $t=3.15$ms and fidelity $0.9158$, exactly when the snapshot of Fig.~\ref{fig:henon2Dplot} was taken, the H-atom is hit by another thermal H-atom which causes an immediate drop of fidelity.
We supposed a single momentum transfer $p_x=\sqrt{mk_BT}$ in the $x-$direction only, at different temperatures. The fidelity recovers and converges to $1$ after some time which depends on the temperature, i.e., on the strength of the momentum kick.} \label{fig:kick} \end{figure} \section{The Ito-method} \label{Methods} The discrete sequence of unsharp measurements (\ref{Gpsi},\ref{pbq}) and wave function updates (\ref{Gpsie}) possesses a continuous limit \cite{Bar86} if we take $\tau\rightarrow0$ and $\sigma^2\rightarrow\infty$ at $1/\gamma=\tau\sigma^2=\mbox{const}$. In this `continuous limit' both the true wave function $\psi_t(q)$ and the estimated wave function $\psi^e_t(q)$ become continuous stochastic processes, such that they are tractable by two stochastic differential equations, respectively. The position measurement outcomes $\bq_t$ do not yield a continuous stochastic process themselves. It is their time-integral $Q_t$, specified below, that becomes a continuous stochastic process. Let us consider the discrete increment of the true wave function during the period $\tau$, cf. (\ref{Gpsi}). In Dirac formalism we get: \begin{equation} \Delta\ket{\psi}=\exp(-i\tau\hH)\frac{1}{p(\bq)}\sqrt{G_\sigma(\hq-\bq)}\ket{\psi}-\ket{\psi}\,. \label{Dpsi} \end{equation} For simplicity, we omit the notation of the time dependence $t$. The symbol $\ave{\hq}$ stands for $\bra{\psi_t}\hq\ket{\psi_t}$. In the continuous limit, Eq.~(\ref{Dpsi}) transforms into the following Ito-stochastic differential equation \cite{Dio88}: \begin{eqnarray} d\ket{\psi}&=&\left(-i\hH-\frac{\gamma}{8}(\hq-\ave{\hq})^2\right)dt\ket{\psi}\nonumber\\ &{}&{} +\frac{\sqrt{\gamma}}{2}(\hq-\ave{\hq})(dQ-\ave{\hq}dt)\ket{\psi}\label{SSE} \end{eqnarray} The equation for the discrete increment $\Delta\ket{\psi^e}$ of the estimate assumes the same form as Eq.~(\ref{Dpsi}) for $\Delta\ket{\psi}$, except that the normalization factor differs from $1/p(\bq)$, cf. (\ref{Gpsie}).
Yet, it yields the same Ito-stochastic differential equation as the equation above. The estimated state $\ket{\psi_t^e}$ must be evolved according to the same non-linear differential equation (\ref{SSE}) that describes the evolution of the monitored particle's state $\ket{\psi_t}$. These two equations are coupled via the stochastic process $Q$ whose discrete increment is defined by $\Delta Q=\bq\tau$; in the continuous limit this means formally $Q_t=\int_0^t \bq_s\, ds$, where $\bq_s$ is the measured position at time $s$. In reality, the random process $Q_t$ is obtained from the measured data $\{\bq_t\}$. If the measurement is just simulated, as in our work, then in the continuous limit $\Delta Q$ transforms into the Ito-differential $dQ$ whose random evolution can be generated by the standard Wiener process $W$ via $dQ=\ave{\hq}dt+\gamma^{-1/2}dW$. Of course, $Q_t$ breaks the symmetry between the stochastic processes $\psi_t$ and $\psi^e_t$ because $dQ/dt$ fluctuates around $\bra{\psi_t}\hq\ket{\psi_t}$ and not around $\bra{\psi^e_t}\hq\ket{\psi^e_t}$. The stochastic differential equation (\ref{SSE}) - combined with the same one for $\ket{\psi^e}$ - is a suitable approximation of our discrete model (Secs. II-III) under two conditions: (i) a single measurement does not resolve any particular structure of the wave function, i.e., $\sigma\gg\sigma_\psi$, where $\sigma_{\psi}$ is the width of the spatial area on which $\psi$ is not negligibly small. Thus $\sigma_\psi$ can, e.g., be of the order of magnitude of the available width of the confining potential. (ii) The length of the time period $\tau$ between two consecutive measurements is small compared to the timescale of the self-dynamics generated by the Hamiltonian $\hH$. Then the discrete model of position monitoring and wave function estimation becomes tractable by the time-continuous equation (\ref{SSE}) depending on the single parameter $1/\gamma=\tau\sigma^2$, cf. also Eq.~(\ref{gamma}).
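The structure of the coupled update can be made explicit in a first-order Euler-Maruyama discretization on a spatial grid (an illustrative sketch only, with arbitrary parameters; the simulations of Sec.~\ref{numSim} use the second-order Kloeden-Platen scheme, and the first-order step below restores the norm explicitly):

```python
import numpy as np

def sse_step(psi, psi_e, q, dq, gamma, dt, apply_H, rng):
    """One Euler-Maruyama step of the coupled Ito equations.

    The common increment dQ is driven by the TRUE state:
    dQ = <q>_psi dt + gamma^(-1/2) dW, and the same dQ feeds the
    update of the estimate psi_e."""
    dW = np.sqrt(dt) * rng.standard_normal()
    q_psi = np.real(np.sum(q * np.abs(psi)**2) * dq)
    dQ = q_psi * dt + dW / np.sqrt(gamma)

    def step(phi):
        q_exp = np.real(np.sum(q * np.abs(phi)**2) * dq)
        dphi = (-1j * apply_H(phi) - (gamma / 8.0) * (q - q_exp)**2 * phi) * dt \
             + (np.sqrt(gamma) / 2.0) * (q - q_exp) * (dQ - q_exp * dt) * phi
        phi = phi + dphi
        return phi / np.sqrt(np.sum(np.abs(phi)**2) * dq)  # restore the norm

    return step(psi), step(psi_e)
```

With `apply_H` set to zero this reproduces pure localization: for Gaussian states the conditional variance of the true state shrinks as $V(t)=V(0)/(1+\gamma V(0)t)$, while the estimate is pulled toward the true state through the shared increment $dQ$.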
\section{Summary} We numerically simulated continuous position measurements carried out on a single quantum particle in one- and two-dimensional potentials. In order to monitor the evolution of an initially unknown state of the particle in a known potential, we estimated its wave function and updated the estimate continuously employing the measurement results. Our simulations show that for all considered potentials the overlap between the estimated and the real wave function comes close to $1$ after a finite period of measurement - guaranteeing thereafter precise knowledge of the particle's state and a real-time monitoring of its further evolution with high fidelity. The power of our method is indicated by the ability to monitor even the motion of a particle in a classically chaotic potential subject to continuous position measurement. We thus demonstrated that monitoring the complete state of a quantum system with an infinite-dimensional state space is feasible by continuously measuring a single observable on a single copy of the system. Moreover, the simulations indicate that our monitoring method is robust against sudden external perturbations such as occasional random momentum kicks. How much and what kinds of external noise this monitoring scheme tolerates is important for its applicability in control and error correction tasks, and might be the object of future research. \section{Acknowledgements} We gratefully acknowledge support by the Bilateral Hungarian-South African R\&D Collaboration Project, Hungarian OTKA grant 49384 and South African NRF Focus Area grant 65579. We thank J.\ Audretsch and A.\ Scherer for discussions. In particular, we are grateful to Ronnie Kosloff for his idea to monitor chaotic dynamics as well.
\section{Introduction} \label{sec:introduction} Consider the following process on a rooted tree with $n$~vertices. Pick an edge uniformly at random and ``cut'' it, separating the tree into a pair of rooted trees; the tree containing the root of the original tree retains its root, while the tree not containing the root of the original tree is rooted at the vertex adjacent to the edge that was cut. In the \emph{one-sided} variant of the problem the tree not containing the original root is discarded and the process is continued recursively until the original root is isolated. In the \emph{two-sided} variant the process is continued recursively on each of the rooted trees. Assume that the cost incurred for selecting an edge and splitting a tree of size~$n$ is $t_n$. In this paper we derive the limiting distribution of the total cost accrued when the tree is a random \emph{very simple tree} (defined below) and $t_n = n^\alpha$ for fixed $\alpha \geq 0$, for both the two-sided variant (Theorems~\ref{thm:dist12}, \ref{thm:lt12}, and~\ref{thm:moments-half}) and the one-sided variant (Theorem~\ref{theo1}). A salient feature of the limiting distributions obtained (after normalizing in a family-specific manner) is that they depend only on~$\alpha$. In the one-sided variant, the case $t_n \equiv 1$ (i.e., $\alpha = 0$) corresponds to the number of cuts required to disconnect the tree. For this random variable, Meir and Moon~\cite{MR44:1598} derived the mean and variance for Cayley trees; Chassaing and Marchand~\cite{chassaing_marchand_cuts} derived the limiting distribution for Cayley trees. Panholzer obtained limiting distributions for non-crossing trees~\cite{MR2042393} and very simple families of trees~\cite{panholzer:2003}. Recently Janson extended these results to all simply generated families~\cite{janson-167}.
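The one-sided variant is straightforward to simulate directly; the sketch below (illustrative only: it takes an arbitrary rooted tree as a parent array rather than generating a random very simple tree) accrues cost $n^\alpha$ each time a tree of current size $n$ is cut:

```python
import random

def has_ancestor(u, v, parent):
    """True if v lies on the path from u up to the root (including u itself)."""
    while u is not None:
        if u == v:
            return True
        u = parent[u]
    return False

def one_sided_cost(parent, alpha, rng):
    """One-sided destruction of a rooted tree.

    parent[v] is the parent of vertex v, with parent[0] = None for the
    root. Repeatedly pick a uniform random edge, discard the part not
    containing the root, and pay n**alpha for cutting a tree of size n.
    Returns (total cost, number of cuts)."""
    alive = set(range(len(parent)))
    cost, cuts = 0.0, 0
    while len(alive) > 1:
        n = len(alive)
        cost += n ** alpha
        cuts += 1
        # a uniform random edge corresponds to a uniform non-root vertex v,
        # namely the edge (v, parent[v])
        v = rng.choice(sorted(u for u in alive if u != 0))
        # discard the whole subtree rooted at v
        alive = {u for u in alive if not has_ancestor(u, v, parent)}
    return cost, cuts
```

On a star (every vertex a child of the root) each cut removes exactly one leaf, so the number of cuts is always $n-1$, and for $\alpha=1$ the total cost is $n+(n-1)+\dots+2$ regardless of the random choices.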
The interest in the two-sided variant stems from the fact that when the very simple family is Cayley trees, the process is equivalent to a probabilistic model (the ``random spanning tree model'') involved in the Union--Find (or equivalence-finding) algorithm. Knuth and Sch{\"o}nhage~\cite{MR81a:68049} derived the expected value of the cost in the cases (among others) $t_n \sim a \sqrt{n}$ and $t_n = n/2$. These results were later extended~\cite{MR89i:05024} to the cases $t_n = n^\alpha$ when $\alpha > 1/2$ and $t_n = O(n^{\alpha})$ when $\alpha < 1/2$. (Some of these expected values were rederived using singularity analysis in~\cite{FFK}.) In~\cite{math.PR/0406094}, Chassaing and Marchand derive limit laws for the costs considered by Knuth and Sch{\"o}nhage. We treat both variants of the destruction process using singularity analysis~\cite{MR90m:05012}, a complex-analytic technique that relates asymptotics of sequences to singularities of their generating functions. We rely on applicability of singularity analysis to the Hadamard product (the term-by-term product) of sequences~\cite{FFK} and the amenability of the generalized polylogarithm to singularity analysis~\cite{MR2000a:05015}. The organization of the paper is as follows. In Section~\ref{sec:very-simple-trees} we define families of very simple trees, noting the key ``randomness-preservation'' property that is crucial for the application of our methods. Section~\ref{sec:preliminaries} establishes notation and other preliminaries that will be used in the subsequent proofs. In Section~\ref{sec:toll-nalpha} the two-sided variant is considered, and Section~\ref{sec:one-sided-destr} deals with the one-sided variant. \medskip \noindent \emph{Notation.} In the sequel we will use~$\ln$ to denote natural logarithms and~$\log$ when the base of the logarithm does not matter. 
\section{Very simple trees} \label{sec:very-simple-trees} An \emph{ordered tree} is a rooted tree in which the order of the subtrees of each given node is relevant. Given a sequence $(\phi_i)_{i \geq 0}$ of nonnegative numbers (called a degree generating sequence) with $\phi_0 = 1$, a \emph{simply generated family}~$\mathcal{F}$ of trees is obtained by assigning each ordered tree $T$ the weight \begin{equation*} w(T) := \prod_{v \in T} \phi_{d(v)}, \end{equation*} where $d(v)$ is the outdegree of the node $v$. Let $\mathcal{F}_n$ denote the set of trees in $\mathcal{F}$ with~$n$ nodes, and let $T_n$ denote the weighted number of trees in $\mathcal{F}_n$, i.e., \begin{equation*} T_n := \sum_{T \in \mathcal{F}_n} w(T). \end{equation*} A \emph{random simply generated tree} of size~$n$ is obtained by assigning probability~$w(T)/T_n$ to the tree $T \in \mathcal{F}_n$. Many combinatorially interesting families such as (unweighted) ordered trees, Cayley trees, Motzkin trees, and $d$-ary trees are simply generated. It is also well known that simply generated trees correspond to certain conditioned Galton--Watson trees; see the introductory section of~\cite{janson-167} for the precise connection. It is well-known that the generating function $T(z) := \sum_{n \geq 1} T_n z^n$ satisfies the functional equation \begin{equation*} T(z) = z \Phi(T(z)), \end{equation*} where $\Phi(t) := \sum_{k \geq 0} \phi_k t^k$ is the degree generating function of the family. For further background on simply generated trees we refer the reader to~\cite{MR80k:05043}. In this paper we consider the subclass of simply generated families, called \emph{very simple families}, that, among simply generated families, are characterized by the following property. \medskip \begin{center} \parbox[c]{.9\linewidth}{% Choose a random simply generated tree from the family~$\mathcal{F}_n$ and then one of its $n-1$ edges uniformly at random. 
Cutting this edge produces a pair of trees of sizes~$k$ (the one that contains the root) and $n-k$, as described in Section~\ref{sec:introduction}. Then the subtrees themselves are random simply generated trees from the families~$\mathcal{F}_k$ and~$\mathcal{F}_{n-k}$, respectively. } \end{center} \medskip It is clear that the ``randomness-preservation'' property of very simple trees allows a simple recursive formulation [see~\eqref{eq:1} and~\eqref{eqno1}] of the total cost of destroying such a tree. Panholzer~\cite[Lemma~1]{panholzer:2003} characterized the degree generating functions of very simple trees; the relevant constraints are summarized in Table~\ref{tab:sp}. \subsection{Singular expansions} \label{sec:singular-expansions} As is usual in the treatment of simply generated families, let $\tau$ denote the unique root of $t \Phi'(t) = \Phi(t)$ with $0 < t < R$, where $R$ is the radius of convergence of the series~$\Phi$. Let $\rho := \tau/\Phi(\tau)$. Let $Z := 1-\rho^{-1}z$, and let $\mathcal{A}$ denote a generic power series in $Z$, possibly different at each occurrence. Then as $z \to \rho$, the dominant singularity for~$T(z)$, a singular expansion for $T$ is~\cite[Theorem~VII.2]{flajolet:_analy_combin} \begin{equation} \label{eq:7} T(z) \sim \tau - b\rho^{1/2}Z^{1/2} + Z\mathcal{A} + Z^{3/2}\mathcal{A}, \end{equation} where $b := \Phi(\tau)\sqrt{\frac{2}{\tau \Phi''(\tau)}}$. It follows immediately from singularity analysis that \begin{equation} \label{eq:15.1} T_n \sim c \rho^{-n} n^{-3/2}(1 + n^{-1}\mathcal{N}), \end{equation} where $c := b\rho^{1/2}/(2\sqrt\pi) = [\Phi(\tau)/2\pi\Phi''(\tau)]^{1/2}$ and $\mathcal{N}$ denotes a generic power series in $1/n$, possibly different at each occurrence. In the sequel we will also use \begin{equation} \label{eq:15} \sigma^2 := \tau^2 \frac{\Phi''(\tau)}{\Phi(\tau)}. \end{equation} Differentiating the expansion~\eqref{eq:7} term-by-term~\cite[Theorem~6]{FFK} we get \begin{equation*} T'(z) \sim \frac{b}2 \rho^{-1/2}Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A}.
\end{equation*} Since $z = \rho - \rho Z$, \begin{equation} \label{eq:8} z T'(z) \sim \frac{b}{2} \rho^{1/2} Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A}. \end{equation} The constants~$a_0$ and~$a_1$ described by Table~\ref{tab:sp} are fundamental constants for our analysis; see especially~\eqref{eq:p_nk}. Using~\eqref{eq:7} and~\eqref{eq:8} we get \begin{equation} \label{eq:9} 1 + 2 a_0 T(z) + a_1 z T'(z) \sim a_1 \rho^{1/2} \frac{b}{2} Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A} \end{equation} and \begin{equation*} 1 - a_1 T(z) \sim (1- a_1\tau) + a_1 b \rho^{1/2} Z^{1/2} + Z\mathcal{A} + Z^{3/2}\mathcal{A}. \end{equation*} It is easily verified that for each very simple family $1 - a_1\tau = 0$ (this fact will be used numerous times in subsequent calculations), so that the constant term vanishes in the singular expansion of $1 - a_1 T(z)$. This leads to \begin{equation*} z[1 - a_1 T(z)] \sim \rho^{3/2} a_1 b Z^{1/2} + Z\mathcal{A} + Z^{3/2}\mathcal{A} \end{equation*} and consequently \begin{equation} \label{eq:10} z^{-1}[1 - a_1 T(z)]^{-1} \sim \rho^{-3/2} a_1^{-1} b^{-1} Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A}. \end{equation} We will also need \begin{equation} \label{eq:27} \frac1{T(z)} \sim {\tau}^{-1} + \frac{b\rho^{1/2}}{\tau^2} Z^{{1}/{2}} + Z \mathcal{A} + Z^{{3}/{2}} \mathcal{A}, \end{equation} which follows from~\eqref{eq:7}. \section{Preliminaries} \label{sec:preliminaries} Throughout, $\stackrel{\mathcal{L}}{=}$ denotes equality in law (or distribution) and $\xrightarrow{\mathcal{L}}$ denotes convergence in law. Recall that the \emph{Hadamard product} of two power series $f$ and $g$, denoted by $f \odot g$, is the power series defined by \begin{equation*} ( f \odot g)(z) \equiv f(z) \odot g(z) := \sum_{n} f_n g_n z^n, \end{equation*} where \begin{equation*} f(z) = \sum_{n} f_n z^n \quad \text{ and } \qquad g(z) = \sum_{n} g_n z^n. 
\end{equation*} \subsection{Two-sided destruction} \label{sec:two-sided-destr} The cost of cutting down a very simple tree of size~$n$, call it $X_n$, satisfies the distributional recurrence \begin{equation} \label{eq:1} X_n \stackrel{\mathcal{L}}{=} X_{K_n} + X_{n-K_n}^* + t_n, \quad n \geq 2; \qquad X_1 = t_1, \end{equation} where $t_n$, for $n \geq 2$, is the toll for cutting an edge from a tree of size~$n$. Here $K_n$, the (random) size of the tree containing the root, is independent of $(X_j)_{j \geq 1}$ and $(X_j^{*})_{j \geq 1}$, which are independent copies of each other. The \emph{splitting probabilities} are given by \begin{equation} \label{eq:p_nk} \Pr[ K_n = k ] =: p_{n,k} = (a_1 k + a_0) \frac{T_k T_{n-k}}{(n-1) T_n}, \quad k=1,\ldots,n-1. \end{equation} Table~\ref{tab:sp} gives the constants~$a_1$ and~$a_0$ for each type of very simple family; see~(14)--(16) in~\cite{panholzer:2003}. Here $\alpha_i := \phi_{i+1}/\phi_i$, $i=0,1$, where $(\phi_i)_{i \geq 0}$ is the degree generating sequence of the simply generated tree. \begin{table}[htbp] \centering \begin{tabular}{ccccc} Family & Generating function & Constraints & $a_1$ & $a_0$ \\ \hline A & $e^{\alpha_0t}$ & & $\alpha_0$ & 0 \\ B & $(1 + \frac{\alpha_0t}d)^d$ & $d \geq 2$ & $\alpha_0 \frac{d-1}{d}$ & $\frac{\alpha_0}d$ \\ C & $[1 - (2\alpha_1 - \alpha_0)t]^{-\frac{\alpha_0}{2\alpha_1-\alpha_0}}$ & $2\alpha_1 - \alpha_0 > 0$ & $2\alpha_1$ & $-(2\alpha_1 - \alpha_0)$ \end{tabular} \caption{Generating functions for very simple families. For each family, $\alpha_0 > 0$ is also a constraint.} \label{tab:sp} \end{table} It is easy to check that family~A is Cayley trees, family~B is $d$-ary trees, and family~C contains unweighted ordered trees. (As it turns out, the distributional recurrence for Cayley trees is identical to the one obtained for the Union--Find process studied in~\cite{MR81a:68049,MR89i:05024,FFK}---see Remark~\ref{rem:symmetry} below.) Define $\mun{s} := \E{X_n^s}$.
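As a quick consistency check, the splitting probabilities~\eqref{eq:p_nk} must sum to $1$ over $k = 1, \ldots, n-1$; for family~A this amounts to an Abel-type identity, which can be verified exactly in rational arithmetic. The following sketch is illustrative only, for family~A with $\alpha_0 = 1$, so that $T_n = n^{n-1}/n!$, $a_1 = 1$, and $a_0 = 0$:

```python
from fractions import Fraction
from math import factorial

def T(n):
    # family A (Cayley trees) with alpha_0 = 1: Phi(t) = e^t, T_n = n^(n-1)/n!
    return Fraction(n ** (n - 1), factorial(n))

def p(n, k):
    # splitting probabilities p_{n,k} = (a_1 k + a_0) T_k T_{n-k} / ((n-1) T_n)
    # with a_1 = 1 and a_0 = 0 (Cayley trees)
    return k * T(k) * T(n - k) / ((n - 1) * T(n))

# exact verification that (p_{n,k})_k is a probability distribution
ok = all(sum(p(n, k) for k in range(1, n)) == 1 for n in range(2, 13))
```

For instance, $p_{3,1} = 1/3$ and $p_{3,2} = 2/3$: after a uniformly random cut of a random $3$-vertex Cayley tree, the root side is twice as likely to contain two vertices as one.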
Taking $s$th powers of both sides of~\eqref{eq:1} and taking expectations by conditioning on~$K_n$, we get \begin{equation} \label{eq:2} \mun{s} = \sum_{k=1}^{n-1} p_{n,k} ( \mu_k^{[s]} + \mu_{n-k}^{[s]} ) + r_n^{[s]}, \quad n \geq 2, \end{equation} where \begin{equation} \label{eq:29} r_n^{[s]} := \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1,s_2,s_3} t_n^{s_1} \sum_{k=1}^{n-1} p_{n,k} \mu_{k}^{[s_2]} \mu_{n-k}^{[s_3]}, \end{equation} and $\mu_1^{[s]} = t_1^s$. Define generating functions \begin{equation*} \muz{s} := \sum_{n \geq 1} \mun{s} T_n z^n, \qquad t(z) := \sum_{n \geq 1} t_n z^n, \qquad T(z) := \sum_{n \geq 1} T_n z^n. \end{equation*} [Observe that $\mu^{[0]}(z) = T(z)$.] Multiply~\eqref{eq:2} by $(n-1)T_n z^n$ and sum over $n \geq 2$. The resulting left side is \begin{equation*} \sum_{n \geq 2} (n-1) T_n \mun{s} z^n = \sum_{n \geq 1} (n-1) T_n \mun{s} z^n = z \partial_z \muz{s} - \muz{s}, \end{equation*} where $\partial_z$ denotes derivative with respect to~$z$. Similarly, the resulting first term on the right side is \begin{equation*} a_1\left[ z \left( \partial_z \muz{s} \right) T(z) + z T'(z)\muz{s} \right] + 2 a_0 \muz{s} T(z). \end{equation*} The resulting second term on the right side is \begin{align} & r^{[s]}(z) := \sum_{n \geq 2} (n-1) T_n r_n^{[s]} z^n = \sum_{n \geq 1} (n-1) T_n r_n^{[s]} z^n \notag \\ \label{eq:3} & = \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1,s_2,s_3} t^{\odot s_1}(z) \odot \left[ a_1 z \left( \partial_z \muz{s_2} \right) \muz{s_3} + a_0 \muz{s_2} \muz{s_3} \right]. 
\end{align} Thus~\eqref{eq:2} translates to \begin{multline*} z \partial_z \muz{s} - \muz{s} \\ = a_1\left[ z \left( \partial_z \muz{s} \right) T(z) + z T'(z)\muz{s} \right] + 2 a_0 \muz{s} T(z) + r^{[s]}(z), \end{multline*} i.e., \begin{equation} \label{eq:3.1} \partial_z \muz{s} + p(z) \muz{s} = g^{[s]}(z), \end{equation} where \begin{equation} \label{eq:4} p(z) := - \frac{1 + 2a_0 T(z) + a_1 z T'(z)}{z[1 - a_1T(z)]} \end{equation} and \begin{equation} \label{eq:5} g^{[s]}(z) := \frac{r^{[s]}(z)}{z[1-a_1T(z)]}, \end{equation} with $\mu^{[s]}(0) = 0$. By variation of parameters (see, for example,~\cite[2.1-(22) and Problem~2.1.21]{boyce86:_elemen_differ_equat}), the general solution to the first-order linear differential equation~\eqref{eq:3.1} is given by \begin{equation} \label{eq:6.1} \muz{s} = A^{[s]}(z) \exp\left[ - \int_{z_0}^z p(t)\,dt \right], \end{equation} where \begin{equation} \label{eq:6} A^{[s]}(z) := \int_0^z g^{[s]}(t) \exp\left[ \int_{z_0}^t p(u)\,du \right]\,dt + \beta_s, \end{equation} with $z_0$ chosen as follows and $\beta_s$ an arbitrary constant. The integrand~$p(z)$ defined at~\eqref{eq:4} and appearing in~\eqref{eq:6.1}--\eqref{eq:6} is asymptotic to~$-1/z$ as $z \to 0$ and has~[see~\eqref{eq:38} below] another singularity at~$z=\rho$. In~\mbox{\eqref{eq:6.1}--\eqref{eq:6}} we may choose (and fix) $z_0$ arbitrarily from the punctured disc of radius~$\rho$ centered at the origin.
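As an aside, the recurrence~\eqref{eq:2} admits a simple exact sanity check before any asymptotics: with the constant toll $t_n = 1$ for $n \geq 2$ and $t_1 = 0$, the two-sided cost simply counts the $n-1$ cuts, so $\mu_n^{[1]} = n-1$ deterministically. A short computation in rational arithmetic (a sketch for Cayley trees, i.e., family~A with $\alpha_0 = 1$) confirms that the $s = 1$ case of~\eqref{eq:2} reproduces this:

```python
from fractions import Fraction
from math import factorial

def T(n):
    # Cayley trees (family A, alpha_0 = 1): T_n = n^(n-1)/n!
    return Fraction(n ** (n - 1), factorial(n))

def p(n, k):
    # splitting probabilities with a_1 = 1, a_0 = 0
    return k * T(k) * T(n - k) / ((n - 1) * T(n))

def two_sided_means(N, toll):
    # exact first moments mu_n^{[1]} from the s = 1 case of the recurrence:
    # mu_n = sum_k p_{n,k} (mu_k + mu_{n-k}) + t_n, with mu_1 = t_1
    mu = {1: Fraction(toll(1))}
    for n in range(2, N + 1):
        mu[n] = sum(p(n, k) * (mu[k] + mu[n - k]) for k in range(1, n)) + toll(n)
    return mu

# with t_1 = 0 and t_n = 1 for n >= 2, mu_n must equal n - 1 exactly
mu = two_sided_means(12, lambda n: 0 if n == 1 else 1)
```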
Then, in~\eqref{eq:6}, as $t \to 0$ we have \begin{equation*} \begin{split} \exp \left[ \int_{z_0}^t p(u)\,du \right] &= \exp \left[ \int_{z_0}^t \left(-\frac1u\right)\,du + \int_{z_0}^t \left[ p(u) + \frac1u \right]\, du \right] \\ &= \exp \left[ -\ln t + \ln z_0 + \int_{z_0}^t \left[ p(u) + \frac1u \right]\, du \right] \\ &\sim z_0 e^a t^{-1}, \text{ where } a := \int_{z_0}^0 \left[ p(u) + \frac1u \right]\, du, \end{split} \end{equation*} whereas, using~\eqref{eq:5} and~\eqref{eq:3}, \begin{equation*} g^{[s]}(t) \sim \frac{r^{[s]}(t)}t \sim T_2 r_2^{[s]} t; \end{equation*} thus the integrand in~\eqref{eq:6} has no singularity at~$t=0$. Now we obtain the particular solution of interest, using the boundary condition $\mu^{[s]}(z) \sim t_1^s T_1 z$ as $z \to 0$. We find that the constant~$\beta_s$ is specified in terms of~$z_0$ as \begin{equation} \label{eq:beta_s} \beta_s = z_0 e^a t_1^s T_1. \end{equation} \begin{remark} \label{rem:two-sided-closed} One can check for each very simple family that \begin{equation*} \Phi'(t) = \frac{a_{0} + a_{1}}{1+a_{0}t} \Phi(t), \end{equation*} and for any simply generated family that \begin{equation*} T'(z) = \frac{\Phi(T(z))}{1-\frac{\Phi'(T(z)) T(z)}{\Phi(T(z))}}. \end{equation*} Thus \begin{equation*} p(z) = - \frac{\Phi(T(z))}{T(z) [1-a_{1}T(z)]} \left[1+2a_{0} T(z) + \frac{a_{1} T(z)} {1-\frac{(a_{0}+a_{1})T(z)}{1+a_{0}T(z)}}\right].
\end{equation*} This leads to \begin{align*} \int_{z_0}^{z} p(t)\,dt &= - \int_{T(z_0)}^{T(z)} \frac1{T (1-a_{1}T)}{\left[1+2a_{0} T + \frac{a_{1} T} {1-\frac{(a_{0}+a_{1})T}{1+a_{0}T}}\right] \left[1-\frac{(a_{0}+a_{1})T}{1+a_{0}T}\right]}\,dT \\ & = -\int_{T(z_0)}^{T(z)} \Big(\frac{1}{T} + \frac{a_{0}}{1+a_{0}T} + \frac{a_{1}}{1-a_{1} T}\Big)\,dT \\ &= \ln\left[\frac{1-a_{1} T(z)}{T(z) (1+a_{0}T(z))}\right] - \ln \left[ \frac{1-a_1T(z_0)}{T(z_0)(1+a_0T(z_0))} \right] \end{align*} and finally, again using the boundary conditions on $\mu^{[s]}(z)$ as $z \to 0$, to the following explicit form of~\eqref{eq:6.1}: \begin{equation*} \muz{s} = \frac{T(z) [1+a_{0} T(z)]}{1-a_{1}T(z)} \left\{ \int_0^{z} g^{[s]}(t) \frac{1-a_{1}T(t)}{T(t) [1+a_{0}T(t)]}\,dt + t_1^s \right\}. \end{equation*} \end{remark} \subsection{One-sided destruction} \label{sec:one-sided-destr-1} Here, the cost of cutting down a very simple tree of size~$n$, call it $Y_n$, satisfies the distributional recurrence \begin{equation} Y_n \stackrel{\mathcal{L}}{=} Y_{K_n} + t_n, \quad n \geq 2; \qquad Y_1 = t_1, \label{eqno1} \end{equation} where $t_n$, for $n \geq 2$, is the toll for cutting an edge from a tree of size~$n$ and the splitting probabilities are given by $p_{n,k}$ at~\eqref{eq:p_nk}. Defining $\mu_{n}^{[s]} := \E Y_{n}^{s}$, one obtains from equation \eqref{eqno1} by conditioning on $K_{n}$ the recurrence relation \begin{equation} \label{eqno2} \mu_{n}^{[s]} = \sum_{k=1}^{n-1} p_{n,k} \mu_{k}^{[s]} + r_{n}^{[s]}, \quad n \ge 2, \end{equation} where \begin{equation*} r_{n}^{[s]} := \sum_{\substack{s_{1}+s_{2}=s, \\ s_{2} < s}} \binom{s}{s_{1}} t_{n}^{s_{1}} \sum_{k=1}^{n-1} p_{n,k} \mu_{k}^{[s_{2}]}, \end{equation*} and $\mu_{1}^{[s]} = t_{1}^{s}$. 
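The one-sided recurrence~\eqref{eqno2} can be iterated exactly in the same way. The following sketch (again for Cayley trees, with $t_1 = 0$ and $t_n = 1$ for $n \geq 2$, so that $Y_n$ counts the cuts needed to isolate the root) computes the first few means:

```python
from fractions import Fraction
from math import factorial

def T(n):
    # Cayley trees (family A, alpha_0 = 1): T_n = n^(n-1)/n!
    return Fraction(n ** (n - 1), factorial(n))

def p(n, k):
    # splitting probabilities with a_1 = 1, a_0 = 0
    return k * T(k) * T(n - k) / ((n - 1) * T(n))

def one_sided_means(N, toll):
    # exact first moments mu_n^{[1]} of Y_n from the s = 1 case of the
    # one-sided recurrence: mu_n = sum_k p_{n,k} mu_k + t_n, mu_1 = t_1
    mu = {1: Fraction(toll(1))}
    for n in range(2, N + 1):
        mu[n] = sum(p(n, k) * mu[k] for k in range(1, n)) + toll(n)
    return mu

# with t_1 = 0 and t_n = 1 for n >= 2, mu_n is the expected number of cuts;
# the first values come out as mu_2 = 1, mu_3 = 5/3, mu_4 = 35/16
mu = one_sided_means(10, lambda n: 0 if n == 1 else 1)
```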
Using the same notation as in Section~\ref{sec:two-sided-destr}, we obtain the following differential equation by multiplying~\eqref{eqno2} by $(n-1)T_{n}z^{n}$ and summing over $n \ge 2$: \begin{equation*} z \partial_{z} \mu^{[s]}(z) - \mu^{[s]}(z) = T(z) \big(a_{1} z \partial_{z} \mu^{[s]}(z) + a_{0} \mu^{[s]}(z)\big) + r^{[s]}(z), \end{equation*} where \begin{equation} \label{eq:28} r^{[s]}(z) := \sum_{\substack{s_{1}+s_{2}=s, \\ s_{2} < s}} \binom{s}{s_{1}} t^{\odot s_{1}}(z) \odot \big[ T(z) \big(a_{1} z \partial_{z} \mu^{[s_{2}]}(z) + a_{0} \mu^{[s_{2}]}(z)\big)\big]. \end{equation} This can be written as \begin{equation} \partial_{z} \mu^{[s]}(z) + p(z) \mu^{[s]}(z) = g^{[s]}(z), \label{eqno3} \end{equation} with \begin{equation} \label{eq:26} p(z) := - \frac{1+a_{0} T(z)}{z[1-a_{1} T(z)]} \quad \text{and} \quad g^{[s]}(z) := \frac{r^{[s]}(z)}{z [1-a_{1} T(z)]}. \end{equation} One can check for each very simple family that $p(z) = - \partial_{z} \ln T(z)$, so that the general solution of the first-order linear differential equation~\eqref{eqno3} is \begin{equation*} \mu^{[s]}(z) = T(z) \int_{0}^{z} \frac{g^{[s]}(t)}{T(t)}\,dt + C \, T(z). \end{equation*} Matching the initial condition $\left.\partial_{z} \mu^{[s]}(z)\right|_{z=0} = T_{1} \mu_{1}^{[s]} = T_{1} t_{1}^{s}$ shows that the integration constant is $C=t_{1}^{s}$. Therefore, we get \begin{equation} \label{eqno4} \mu^{[s]}(z) = T(z) \int_{0}^{z} \frac{g^{[s]}(t)}{T(t)}\,dt + t_{1}^{s} T(z). \end{equation} \section{Two-sided destruction} \label{sec:toll-nalpha} We begin by obtaining a singular expansion for $p(z)$ at~\eqref{eq:4}. Using~\eqref{eq:9} and~\eqref{eq:10} in~\eqref{eq:4} we get \begin{equation} \label{eq:38} p(z) \sim - \frac{\rho^{-1}}{2} Z^{-1} + Z^{-1/2}\mathcal{A} + \mathcal{A}.
\end{equation} Integrating this singular expansion term-by-term~\cite[Theorem~7]{FFK}, \begin{equation*} \int_{z_0}^z p(t)\,dt \sim - \frac{1}{2} \ln Z^{-1} + \mathcal{A} + Z^{1/2}\mathcal{A}. \end{equation*} Thus \begin{equation} \label{eq:11} \exp\left[ - \int_{z_0}^z p(t)\,dt \right] \sim \xi Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A}, \end{equation} where \begin{equation} \label{eq:12} \xi := (1 - \rho^{-1}z_0)^{1/2} \exp\left[ - \int_{z_0}^\rho \left[ p(t) + \frac{\rho^{-1}}2(1-\rho^{-1}t)^{-1} \right]\,dt \right]. \end{equation} Taking the reciprocal of~\eqref{eq:11} gives \begin{equation} \label{eq:13} \exp \left[ \int_{z_0}^z p(t)\,dt \right] \sim \xi^{-1} Z^{1/2} + Z\mathcal{A} + Z^{3/2}\mathcal{A}. \end{equation} Let us now consider two-sided destruction with the toll $t_n = n^\alpha$, with $\alpha > 0$. (Notice that the case $\alpha=0$ is trivial since then the total cost of destruction is simply the number of edges in the tree, which is always~$n-1$.) The toll generating function $t(z)$ is the generalized polylogarithm~$\Li_{-\alpha,0}(z)$, which is amenable to singularity analysis~\cite[Theorem~1]{MR2000a:05015}. \subsection{Expectation} \label{sec:mean} Now we obtain a singular expansion for $r^{[1]}(z)$ defined at~\eqref{eq:3}, recalling that $\mu^{[0]}(z) = T(z)$: \begin{equation} \label{eq:31} r^{[1]}(z) = t(z) \odot [ a_1 zT'(z) T(z) + a_0 T^2(z) ]. \end{equation} Using~\eqref{eq:7} we conclude that \begin{equation*} T^2(z) \sim \mathcal{A} + Z^{1/2}\mathcal{A}, \end{equation*} and using~\eqref{eq:8} that \begin{equation} \label{eq:13.1} a_1 z T'(z) T(z) + a_0 T^2(z) \sim \rho^{1/2} \frac{b}{2} Z^{-1/2} + \mathcal{A} + Z^{1/2}\mathcal{A}. \end{equation} We will use the Zigzag algorithm of~\cite{FFK} to obtain a singular expansion for $r^{[1]}(z)$. We recall the use of the notation~$\mathcal{N}$ to denote a generic power series in $1/n$, possibly different at each occurrence. 
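As an aside, the basic transfer of singularity analysis used repeatedly below sends a singular term $Z^{-\beta}$ to coefficients $\rho^{-n} n^{\beta-1}/\Gamma(\beta)$, up to lower-order corrections. This is easy to check numerically against the exact coefficients $[z^n](1-z)^{-\beta} = \binom{n+\beta-1}{n}$ (a sketch with $\rho = 1$, using log-gamma to avoid overflow):

```python
from math import exp, lgamma, log

def exact_coeff(beta, n):
    # [z^n] (1 - z)^(-beta) = Gamma(n + beta) / (Gamma(beta) Gamma(n + 1))
    return exp(lgamma(n + beta) - lgamma(beta) - lgamma(n + 1))

def transfer(beta, n):
    # leading-order singularity-analysis approximation n^(beta-1)/Gamma(beta)
    return exp((beta - 1) * log(n) - lgamma(beta))

# the ratio exact/approximation is 1 + O(1/n)
n = 10 ** 6
ratios = [exact_coeff(b, n) / transfer(b, n) for b in (0.5, 1.5, 2.5)]
```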
By singularity analysis, \begin{equation} \label{eq:33} \rho^n [z^n] [ a_1 z T'(z) T(z) + a_0 T^2(z) ] \sim \rho^{1/2} \frac{b}2 \frac{n^{-1/2}}{\sqrt\pi} + n^{-3/2} \mathcal{N}. \end{equation} Thus \begin{equation} \label{eq:23} \rho^n [z^n] r^{[1]}(z) \sim \rho^{1/2} \frac{b}2 \frac{n^{\alpha-\frac12}}{\sqrt\pi} + n^{\alpha-\frac32} \mathcal{N}. \end{equation} Until further notice, suppose $\alpha \not\in \{ \frac12, \frac32, \ldots \}$. Then a compatible singular expansion for $r^{[1]}(z)$ at~\eqref{eq:31} is obtained as \begin{equation} \label{eq:30} r^{[1]}(z) \sim \rho^{1/2} b \frac{\Gamma(\alpha+\frac12)}{2 \sqrt\pi} Z^{-\alpha-\frac12} + Z^{-\alpha+\frac12}\mathcal{A} + \mathcal{A}. \end{equation} Recalling~\eqref{eq:5} and~\eqref{eq:10} we have \begin{equation} \label{eq:32} g^{[1]}(z) \sim \frac{\tau}{\rho} \frac{\Gamma(\alpha+\frac12)}{2\sqrt\pi} Z^{-\alpha-1} + Z^{-\alpha-\frac12}\mathcal{A} + Z^{-\alpha}\mathcal{A} + Z^{-1/2}\mathcal{A} + \mathcal{A}. \end{equation} Using this expansion and~\eqref{eq:13}, \begin{equation*} g^{[1]}(z) \exp\left[ \int_{z_0}^z p(t)\,dt \right] \sim \frac{\tau}{\xi \rho} \frac{\Gamma(\alpha+\frac12)}{2\sqrt\pi} Z^{-\alpha-\frac12} + Z^{-\alpha}\mathcal{A} + Z^{-\alpha+\frac12}\mathcal{A} + \mathcal{A} + Z^{1/2}\mathcal{A}. \end{equation*} By~\eqref{eq:6} and Theorem~7 of~\cite{FFK}, we may integrate this expansion term-by-term to get a complete singular expansion for~$A^{[1]}$. If $\alpha \not\in \{1, 2, \dots\}$, we have \begin{equation*} A^{[1]}(z) \sim \frac{\tau}{\xi} \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha + \frac12} + L_0 + Z^{-\alpha+1}\mathcal{A} + Z^{-\alpha + \frac32}\mathcal{A} + Z\mathcal{A} + Z^{3/2}\mathcal{A}, \end{equation*} where $L_0$ is a constant. [The value of~$L_0$ is immaterial unless $0 < \alpha < 1/2$, in which case see~\eqref{eq:40}.]
On the other hand, if $\alpha \in \{1, 2, \dots \}$, a logarithmic term appears upon integration, so that \begin{equation*} A^{[1]}(z) \sim \frac{\tau}{\xi} \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha + \frac12} + Z^{-\alpha+1}\mathcal{A} + Z^{-\alpha + \frac32}\mathcal{A} + K_0 \ln Z^{-1}, \end{equation*} where $K_0$ is a constant. Combining these expansions with~\eqref{eq:11}, we finally obtain [recalling~\eqref{eq:6.1}] \begin{equation} \label{eq:14} \muz{1} \sim \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha} + L_0 \xi Z^{-1/2} + Z^{-\alpha+\frac12}\mathcal{A} + Z^{-\alpha+1}\mathcal{A} + \mathcal{A} + Z^{1/2}\mathcal{A} \end{equation} when $\alpha \not\in \{1,2,\dots\}$ and \begin{equation} \label{eq:14a} \muz{1} \sim \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha} + Z^{-\alpha+\frac12}\mathcal{A} + Z^{-\alpha+1}\mathcal{A} + Z^{-1/2}(\ln Z^{-1})\mathcal{A} + (\ln Z^{-1})\mathcal{A} \end{equation} when $\alpha \in \{1,2, \dots\}$. Note that the remainder in~\eqref{eq:14a} is $O(|Z|^{-\alpha + \frac12})$ unless $\alpha=1$, in which case it is $O(|Z|^{-\frac12} \ln Z^{-1}) = O(|Z|^{-\frac12 - \epsilon})$ for any $\epsilon > 0$. When $\alpha > 1/2$ and $\alpha \not\in \{1,2,\dots\}$, by singularity analysis we have \begin{equation*} \rho^n \mun{1} T_n \sim \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi\Gamma(\alpha)} n^{\alpha-1} + n^{\alpha-\frac32}\mathcal{N} + n^{\alpha-2}\mathcal{N} + n^{-1/2}\mathcal{N}, \end{equation*} so that, recalling~\eqref{eq:15.1} and~\eqref{eq:15}, \begin{equation*} \mun{1} \sim \sigma \frac{\Gamma(\alpha-\frac12)}{\sqrt2\Gamma(\alpha)} n^{\alpha+\frac12} + n^\alpha\mathcal{N} + n^{\alpha-\frac12}\mathcal{N} + n\mathcal{N}. 
\end{equation*} When $\alpha \in \{1, 2, \dots \}$, starting from~\eqref{eq:14a} and the note following that display, we can similarly derive the expansion \begin{equation*} \mun{1} = \sigma \frac{\Gamma(\alpha-\frac12)}{\sqrt2\Gamma(\alpha)} n^{\alpha+\frac12} + O(n^{\alpha}) + O(n \log n). \end{equation*} When $0 < \alpha < 1/2$, a similar computation yields \begin{equation} \label{eq:16} \mun{1} \sim \frac{L_0 \xi}{c\sqrt\pi} n + \sigma \frac{\Gamma(\alpha-\frac12)}{\sqrt2\Gamma(\alpha)} n^{\alpha+\frac12} + n^\alpha\mathcal{N} + n^{\alpha-\frac12}\mathcal{N} + \mathcal{N}, \end{equation} where \begin{equation} \label{eq:40} L_0 := \int_0^\rho g^{[1]}(t) \exp\left[ \int_{z_0}^t p(u)\,du \right] \,dt + \beta_1, \end{equation} with $p$ and $g^{[1]}$ defined at~\eqref{eq:4} and~\eqref{eq:5}, respectively, and $\xi$ and~$\beta_1$ at~\eqref{eq:12} and~\eqref{eq:beta_s}, respectively. When $\alpha \in \{\frac32, \frac52, \dots \}$, one can check that logarithmic terms appear in the singular expansion compatible with~\eqref{eq:23} but the lead-order term and asymptotic order of the remainder are unchanged. Indeed, now \begin{equation} \label{eq:24} \muz{1} = \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha} + O(|Z|^{-\alpha + \frac12}) \end{equation} and consequently \begin{equation*} \mun{1} = \sigma\frac{\Gamma(\alpha-\frac12)}{\sqrt2\Gamma(\alpha)} n^{\alpha+\frac12} + O(n^{\alpha}). \end{equation*} Finally we consider $\alpha=1/2$. Now, a compatible singular expansion for~\eqref{eq:23} is \begin{equation*} r^{[1]}(z) \sim \frac{\rho^{1/2}b}{2\sqrt\pi} Z^{-1} + (\log Z)\mathcal{A} + \mathcal{A}. 
\end{equation*} Proceeding as in the case $\alpha \ne 1/2$ we have the singular expansions \begin{equation*} \begin{split} g^{[1]}(z) \sim \frac{\tau}{2 \rho \sqrt\pi} Z^{-3/2} + Z^{-1}\mathcal{A} + (Z^{-1/2} \log Z)\mathcal{A} + Z^{-1/2}\mathcal{A} + (\log Z)\mathcal{A}, \\ g^{[1]}(z) \exp\left[ \int_{z_0}^z p(t)\,dt \right] \sim \frac{\tau}{\xi\rho} \frac1{2\sqrt\pi} Z^{-1} + Z^{-\frac12}\mathcal{A} + (\log Z)\mathcal{A} + \mathcal{A} + (Z^{1/2}\log Z)\mathcal{A}, \\ A^{[1]}(z) \sim \frac{\tau}{\xi} \frac{1}{2\sqrt\pi} \ln Z^{-1} + L_1 + Z^{1/2}\mathcal{A} + Z\mathcal{A} + (Z \log Z)\mathcal{A} + (Z^{3/2}\log Z)\mathcal{A}, \end{split} \end{equation*} where \begin{equation} \label{eq:39} L_1 := \int_0^\rho \left\{ g^{[1]}(t) \exp\left[ \int_{z_0}^t p(u)\,du \right] - \frac{\tau}{\xi\rho} \frac1{2\sqrt\pi} (1-\rho^{-1}t) \right\} \,dt + \beta_1. \end{equation} This leads to \begin{equation*} \muz{1} \sim \frac{\tau}{2\sqrt\pi} Z^{-1/2} \ln Z^{-1} + \xi L_1 Z^{-1/2} + (\log Z)\mathcal{A} + \mathcal{A} + (Z^{1/2} \log Z)\mathcal{A} + Z^{1/2}\mathcal{A}, \end{equation*} so that by singularity analysis and~\eqref{eq:15.1} we have \begin{equation} \label{eq:35} \begin{split} \mun{1} \sim \frac{\sigma}{\sqrt{2\pi}} n \ln n + \left[ \frac{\xi L_1}{c \sqrt\pi} + \frac{\sigma}{\sqrt{2\pi}}(\gamma + 2\ln 2) \right] n \\ {} + (n^{1/2} \log n) \mathcal{N} + (\log n)\mathcal{N} + \mathcal{N}, \end{split} \end{equation} where $\sigma$ is defined at~\eqref{eq:15}. \subsection{Higher moments and limiting distributions} \label{sec:higher-moments} We proceed to higher moments. We will consider separately the cases $\alpha > 1/2$, $\alpha < 1/2$, and $\alpha = 1/2$. We present the details for $\alpha > 1/2$ and sketch the main ideas for the other cases. Throughout $\alpha' := \alpha + \frac12$. \begin{proposition} \label{prop:gt12} Let $\alpha > 1/2$ and $\epsilon > 0$. 
Then \begin{equation*} \muz{s} = C_s Z^{-s\alpha' + \frac12} + O(|Z|^{-s\alpha' + \frac12 + q}), \end{equation*} where \begin{equation*} q := \begin{cases} \min\{\alpha-\frac12,\frac12\} & \text{\textup{if} $\alpha \ne 1$} \\ \frac12 - \epsilon & \text{\textup{if} $\alpha = 1$} \end{cases} \end{equation*} with \begin{equation*} C_1 = \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi}, \end{equation*} and, for $s \geq 2$, \begin{equation} \label{eq:17} C_s = \rho^{-1/2}b^{-1} \left[ \frac{1}{s\alpha'-1} \sum_{k=1}^{s-1} \binom{s}{k} \left(k\alpha'-\frac12 \right) C_{k} C_{s-k} + s \tau \frac{\Gamma(s\alpha'-1)}{\Gamma((s-1)\alpha'-\frac12)} C_{s-1}\right]. \end{equation} \end{proposition} \begin{proof} The proof is by induction on $s$. The claim is true for $s=1$ by~\eqref{eq:14}, \eqref{eq:14a}, and~\eqref{eq:24}. Suppose $s \geq 2$. We analyze each term in the sum for $r^{[s]}(z)$ at~\eqref{eq:3}. If both $s_2$ and $s_3$ are nonzero, then by the induction hypothesis, \begin{equation*} z\partial_z \muz{s_2} = C_{s_2}\left(s_2\alpha' - \frac12\right) Z^{-s_2\alpha' - \frac12} + O(|Z|^{-s_2\alpha' - \frac12 + q}), \end{equation*} so that \begin{equation*} z\left( \partial_z \muz{s_2} \right) \muz{s_3} = C_{s_2} C_{s_3} \left(s_2\alpha' - \frac12\right) Z^{-(s_2+s_3)\alpha'} + O(|Z|^{-(s_2 + s_3)\alpha' + q}). \end{equation*} Also, $\muz{s_2} \muz{s_3} = O(|Z|^{-(s_2+s_3)\alpha'+1}).$ Hence \begin{multline*} a_1 z \left(\partial_z \muz{s_2} \right) \muz{s_3} + a_0 \muz{s_2} \muz{s_3} \\ = a_1 C_{s_2} C_{s_3} \left(s_2\alpha' - \frac12 \right) Z^{-(s_2+s_3)\alpha'} + O(|Z|^{-(s_2+s_3)\alpha' + q}). 
\end{multline*} Taking the Hadamard product of this expansion with $t^{\odot s_1}(z)$ (using the Zigzag algorithm again) gives the contribution of such terms to $r^{[s]}(z)$ as \begin{equation*} \binom{s}{s_1,s_2,s_3} a_1 C_{s_2} C_{s_3} \left(s_2 \alpha' - \frac12 \right) \frac{\Gamma(s\alpha' - \frac{s_1}2)}{\Gamma((s_2+s_3)\alpha')} Z^{-(s\alpha' - \frac{s_1}{2})} + O(|Z|^{-(s\alpha' - \frac{s_1}{2}) + q}). \end{equation*} Notice that if $s_1 \ne 0$ the contribution is $O(|Z|^{-s\alpha'+\frac12})$. Next consider the case when $s_2$ is nonzero but $s_3 = 0$. By the induction hypothesis and the singular expansion of $T$ at~\eqref{eq:7}, \begin{equation*} z\left( \partial_z \muz{s_2} \right) T(z) = \tau C_{s_2} \left(s_2\alpha'-\frac12\right) Z^{-s_2\alpha'-\frac12} + O(|Z|^{-s_2\alpha'-\frac12+q}). \end{equation*} Also $\muz{s_2}T(z) = O(|Z|^{-s_2\alpha' + \frac12})$. Hence \begin{multline*} a_1z\left( \partial_z \muz{s_2} \right) T(z) + a_0 \muz{s_2}T(z)\\ = C_{s_2} \left(s_2\alpha'-\frac12\right) Z^{-s_2\alpha'-\frac12} + O(|Z|^{-s_2\alpha'-\frac12+q}). \end{multline*} Taking the Hadamard product of this singular expansion with $t^{\odot s_1}(z)$ we get that the contribution to $r^{[s]}(z)$ from such terms is \begin{equation*} \binom{s}{s_1} C_{s_2} \left(s_2 \alpha'-\frac12\right) \frac{\Gamma(s\alpha'-\frac{s_1}2+\frac12)}{\Gamma(s_2\alpha'+\frac12)} Z^{-(s\alpha'-\frac{s_1}2+\frac12)} + O(|Z|^{-(s\alpha'-\frac{s_1}2+\frac12)+q}). \end{equation*} Notice that $s_1 \geq 1$ and that when $s_1 > 1$ the contribution of such terms is $O(|Z|^{-s\alpha'+\frac12})$. We move on to the case when $s_2=0$ but $s_3$ is nonzero. By the induction hypothesis,~\eqref{eq:7}, and~\eqref{eq:8}, we have $T(z) \muz{s_3} = O(|Z|^{-s_3\alpha'+\frac12})$ and $zT'(z) \muz{s_3} = O(|Z|^{-s_3\alpha'})$. Thus \begin{equation*} a_1 zT'(z) \muz{s_3} + a_0 T(z) \muz{s_3} = O(|Z|^{-s_3\alpha'}). 
\end{equation*} Taking the Hadamard product with $t^{\odot s_1}(z)$ we see (recalling $s_1 \geq 1$) that the contribution to $r^{[s]}(z)$ from these terms is $O(|Z|^{-s\alpha'+\frac12})$. Finally we consider the case when $s_2=s_3=0$. In this case, using~\eqref{eq:13.1} it is easy to verify that the contribution to $r^{[s]}(z)$ from this term is $O(|Z|^{-s\alpha'+\frac12})$. Summing all the contributions we see that \begin{equation} \label{eq:37} r^{[s]}(z) = D_s Z^{-s\alpha'} + O(|Z|^{-s\alpha'+q}), \end{equation} where \begin{equation*} D_s := a_1 \left[ \sum_{k=1}^{s-1} \binom{s}{k} \left(k\alpha'-\frac12 \right) C_{k} C_{s-k} + s \tau \frac{[(s-1)\alpha' - \frac12]\Gamma(s\alpha')}{\Gamma((s-1)\alpha'+\frac12)} C_{s-1} \right] \end{equation*} Thus, using~\eqref{eq:5} and~\eqref{eq:10}, \begin{equation*} g^{[s]}(z) = \rho^{-3/2} a_1^{-1} b^{-1} D_s Z^{-s\alpha'-\frac12} + O(|Z|^{-s\alpha'-\frac12+q}), \end{equation*} whence, using~\eqref{eq:13}, \begin{equation*} g^{[s]}(z) \exp\left[ \int_{z_0}^z p(t)\,dt \right] = \xi^{-1} \rho^{-3/2} a_1^{-1} b^{-1} D_s Z^{-s\alpha'} + O(|Z|^{-s\alpha'+q}). \end{equation*} To get $A^{[s]}(z)$ at~\eqref{eq:6} we integrate this singular expansion, noting that since $s \geq 2$ and $\alpha' > 1$, we have $s\alpha' > 2$. Hence \begin{equation*} A^{[s]}(z) = \xi^{-1} \rho^{-1/2} a_1^{-1} b^{-1} \frac{D_s}{s\alpha'-1} Z^{-s\alpha'+1} + O(|Z|^{-s\alpha'+1+q}). \end{equation*} Now by~\eqref{eq:6.1} and~\eqref{eq:11}, \begin{equation*} \muz{s} = \rho^{-1/2} a_1^{-1} b^{-1} \frac{D_s}{s\alpha'-1} Z^{-s\alpha'+\frac12} + O(|Z|^{-s\alpha'+\frac12+q}). 
\end{equation*} Taking \begin{align*} C_s &= \rho^{-1/2} a_1^{-1} b^{-1} \frac{D_s}{s\alpha'-1} \\ &= \frac{\rho^{-1/2}b^{-1}}{s\alpha'-1} \left[ \sum_{k=1}^{s-1} \binom{s}{k} \left(k\alpha'-\frac12 \right) C_{k} C_{s-k} + s \tau \frac{[ (s-1)\alpha' - \frac12]\Gamma(s\alpha')}{\Gamma((s-1)\alpha'+\frac12)} C_{s-1} \right] \\ &= \rho^{-1/2}b^{-1} \left[ \frac{1}{s\alpha'-1} \sum_{k=1}^{s-1} \binom{s}{k} \left(k\alpha'-\frac12 \right) C_{k} C_{s-k} + s \tau \frac{\Gamma(s\alpha'-1)}{\Gamma((s-1)\alpha'-\frac12)} C_{s-1}\right] \end{align*} completes the proof. \end{proof} Using singularity analysis we can now derive asymptotics for the moments~$\mun{s}$. \begin{theorem} \label{thm:gt12} Let $\alpha > 1/2$. Then, as $n \to \infty$, \begin{equation*} \sigma^{-s}n^{-s\alpha'}\mun{s} \to m_s, \end{equation*} where $\sigma^2 := \tau^2 \frac{\Phi''(\tau)}{\Phi(\tau)}$ and $m_s$ (which does not depend on the very simple family) is given by \begin{equation*} m_1 = \frac{\Gamma(\alpha-\frac12)}{\sqrt2 \Gamma(\alpha)} \end{equation*} and, for $s \geq 2$, \begin{equation} \label{eq:18} m_s = \frac{1}{4\sqrt\pi} \sum_{k=1}^{s-1} \binom{s}{k} \frac{\Gamma(k\alpha'-\tfrac12) \Gamma((s-k)\alpha'-\tfrac12)}{\Gamma(s\alpha'-\frac12)} m_k m_{s-k} + \frac{s\Gamma(s\alpha'-1)}{\sqrt2\Gamma(s\alpha'-\frac12)} m_{s-1}. \end{equation} \end{theorem} \begin{proof} Using singularity analysis and Proposition~\ref{prop:gt12}, \begin{equation*} \rho^n \mun{s} T_n = C_s \frac{n^{s\alpha'-\frac32}}{\Gamma(s\alpha'-\frac12)} + O(n^{s\alpha'-\frac32-q}), \end{equation*} and using the asymptotics of $T_n$ at~\eqref{eq:15.1}, \begin{equation*} \mun{s} = \frac{C_s}{c \Gamma(s\alpha'-\frac12)} n^{s\alpha'} + O(n^{s\alpha'-q}). \end{equation*} Then \begin{equation*} \sigma^{-s} n^{-s\alpha'} \mun{s} \to m_s, \end{equation*} where \begin{equation} \label{eq:19} m_s := \sigma^{-s} \frac{C_s}{c \Gamma(s\alpha'-\frac12)}. 
\end{equation} Thus, using $2\sqrt\pi c \sigma = \sqrt2\tau$, \begin{equation*} m_1 = \frac{C_1}{c\sigma\Gamma(\alpha)} = \frac{\Gamma(\alpha-\frac12)}{\sqrt2 \Gamma(\alpha)}. \end{equation*} Using~\eqref{eq:17},~\eqref{eq:19}, and the identities \begin{equation*} c\rho^{-1/2}b^{-1} = \frac{1}{2\sqrt{\pi}}, \qquad \sigma^{-1} \tau \rho^{-1/2} b^{-1} = \frac1{\sqrt2}, \qquad \Gamma(x+1) = x\Gamma(x), \end{equation*} we obtain the following recurrence for $m_s$: \begin{multline} \label{eq:20} m_s = \frac{1}{2\sqrt\pi} \sum_{k=1}^{s-1} \binom{s}{k} \frac{\Gamma(k\alpha'+\tfrac12) \Gamma((s-k)\alpha'-\tfrac12)}{(s\alpha'-1)\Gamma(s\alpha'-\frac12)} m_k m_{s-k} \\ + \frac{s\Gamma(s\alpha'-1)}{\sqrt2\Gamma(s\alpha'-\frac12)} m_{s-1}. \end{multline} To obtain the form of the recurrence in~\eqref{eq:18}, write~\eqref{eq:20} in the form \begin{equation*} m_s = \frac{1}{2\sqrt\pi} \sum_{k=1}^{s-1} e_{s,k} + \tilde{e}_s = \frac{1}{4\sqrt\pi} \sum_{k=1}^{s-1} (e_{s,k} + e_{s,s-k}) + \tilde{e}_s \end{equation*} and simplify. \end{proof} \begin{remark} \label{rem:symmetry} In going from~\eqref{eq:20} to~\eqref{eq:18} we symmetrized by collecting coefficients of $m_k m_{s-k}$. We might also have symmetrized from the start by choosing the splitting probabilities as \begin{equation*} \tilde{p}_{n,k} := \frac12( p_{n,k} + p_{n,n-k} ). \end{equation*} In the particular case of Cayley trees this leads to the same splitting probabilities as for the Union--Find recurrence studied in~\cite{MR81a:68049,MR89i:05024,FFK}. \end{remark} We can now show convergence in distribution via the method of moments. \begin{theorem} \label{thm:dist12} Let $\alpha > 1/2$. Define $\sigma^2 := \tau^2\Phi''(\tau)/\Phi(\tau)$ and $\alpha' := \alpha + \frac12$.
Then, as $n \to \infty$, \begin{equation*} \sigma^{-1} n^{-\alpha'} X_n \xrightarrow{\mathcal{L}} X^{(\alpha)}, \end{equation*} with convergence of all moments, where $X^{(\alpha)}$ has the unique distribution whose $s$th moment $m_s \equiv m_s(\alpha)$ is given by \begin{equation*} m_1 = \frac{\Gamma(\alpha-\frac12)}{\sqrt2 \Gamma(\alpha)} \end{equation*} and for $s \geq 2$ by the recurrence~\eqref{eq:18}. \end{theorem} \begin{proof} One need only check that the $m_k$'s satisfy Carleman's condition. This has already been established in~\cite{fill03:_limit_catal}. \end{proof} \begin{remark} \label{rem:curious} It is curious that $\sigma^{-1} n^{-\alpha'} X_n$ has the same limiting distribution as \begin{equation*} \sigma n^{-\alpha'} \sum_{v \in T} |T_v|^\alpha. \end{equation*} Here $T$ is a random simply generated tree and $|T_v|$ denotes the size of the tree rooted at a node~$v$. This was established in~\cite{sgtechreport}. \end{remark} For the case $0 < \alpha < 1/2$ it is convenient instead to consider the random variable \begin{equation*} \widetilde{X}_n := X_n - \mu n,\qquad \mu := \frac{L_0\xi}{c\sqrt\pi}. \end{equation*} [Note that, by~\eqref{eq:16}, $\mu n$ is the lead term in the asymptotics of $\E X_n$ when $\alpha < 1/2$.] Using~\eqref{eq:1}, \begin{equation} \label{eq:21} \widetilde{X}_n \stackrel{\mathcal{L}}{=} \widetilde{X}_{K_n} + \widetilde{X}_{n-K_n}^* + t_n, \quad n \geq 2; \qquad \widetilde{X}_1 = t_1 - \mu. \end{equation} Define $\tilde{\mu}_n^{[s]} := \E{\widetilde{X}_n^s}$ and \begin{equation*} \tilde{\mu}^{[s]}(z) := \sum_{n \geq 1} \tilde{\mu}_n^{[s]} T_n z^n. \end{equation*} Then, in analogous fashion,~\eqref{eq:3}--\eqref{eq:6} hold with $\muz{s}$ replaced by $\tilde{\mu}^{[s]}(z)$. Observe that, by~\eqref{eq:14}, \begin{equation} \label{eq:22} \tilde{\mu}^{[1]}(z) \sim \tau \frac{\Gamma(\alpha-\frac12)}{2\sqrt\pi} Z^{-\alpha} + (Z^{-\alpha+\frac12} + Z^{-\alpha+1} + 1 + Z^{1/2})\mathcal{A}. 
\end{equation} We can use~\eqref{eq:22} and~\eqref{eq:3}--\eqref{eq:6} to show that Proposition~\ref{prop:gt12} holds for $\alpha < 1/2$ with $\muz{s}$ replaced by $\tilde{\mu}^{[s]}(z)$ and $q$ changed to $2\alpha-\epsilon$, for sufficiently small $\epsilon>0$. It follows then that $X_n - \mu n$ has (after scaling) a limiting distribution. \begin{theorem} \label{thm:lt12} Let $\alpha < 1/2$. Define $\sigma^2 := \tau^2\Phi''(\tau)/\Phi(\tau)$ and $\alpha' := \alpha + \frac12$. Then, as $n \to \infty$, \begin{equation*} \sigma^{-1} n^{-\alpha'} (X_n - \mu n) \xrightarrow{\mathcal{L}} X^{(\alpha)}, \end{equation*} with convergence of all moments, where $X^{(\alpha)}$ has the unique distribution whose $s$th moment $m_s \equiv m_s(\alpha)$ is given for $s=1$ by \begin{equation*} m_1 = \frac{\Gamma(\alpha-\frac12)}{\sqrt2 \Gamma(\alpha)} \end{equation*} and for $s \geq 2$ by the recurrence~\eqref{eq:18}. \end{theorem} Finally we turn our attention to the case $\alpha=1/2$. Now, we define \begin{equation*} \widetilde{X}_n := X_n - \frac{\sigma}{\sqrt{2\pi}} n \ln n - \delta n \text{ with } \delta := \frac{\xi L_1}{c \sqrt\pi} + \frac{\sigma}{\sqrt{2\pi}} (\gamma + 2 \ln 2), \end{equation*} with $L_1$ defined at~\eqref{eq:39}. Then [cf.~\eqref{eq:1}] \begin{equation*} \widetilde{X}_n \stackrel{\mathcal{L}}{=} \widetilde{X}_{K_n} + \widetilde{X}_{n-K_n}^* + t_{n,K_n}, \quad n \geq 2, \end{equation*} with $\widetilde{X}_1 = 1 - \delta$ and \begin{equation*} t_{n,k} := \frac{\sigma}{\sqrt{2\pi}}\left[ k \ln k + (n-k) \ln{(n-k)} - n \ln n + \frac{\sqrt{2\pi}}\sigma n^{1/2}\right]. 
\end{equation*} As in the case $\alpha < 1/2$, it is easily checked that~\eqref{eq:3}--\eqref{eq:6} hold with $\muz{s}$ replaced by $\tilde{\mu}^{[s]}(z)$ and $r_n^{[s]}$ at~\eqref{eq:29} replaced by \begin{equation} \label{eq:36} \tilde{r}_n^{[s]} := \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1,s_2,s_3} \sum_{k=1}^{n-1} p_{n,k} t_{n,k}^{s_1} \tilde{\mu}_{k}^{[s_2]} \tilde{\mu}_{n-k}^{[s_3]}. \end{equation} The limiting distribution is given by the following result. \begin{theorem} \label{thm:moments-half} As $n \to \infty$, \begin{equation*} \sigma^{-s} n^{-s} \tilde{\mu}_n^{[s]} \to m_s, \end{equation*} where $m_0 = 1$, $m_1 = 0$, and for $s \geq 2$, \begin{equation*} m_s = \frac{\Gamma(s-1)}{2\sqrt\pi\Gamma(s-\frac12)} \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1, s_2, s_3} \left( \frac{1}{\sqrt{2\pi}} \right)^{s_1} m_{s_2} m_{s_3} J_{s_1, s_2, s_3}, \end{equation*} with \begin{equation*} J_{s_1, s_2, s_3} := \int_0^1 [ x \ln x + (1-x) \ln(1-x)]^{s_1} x^{s_2 - \frac12} (1-x)^{s_3 - \frac32} \, dx. \end{equation*} Consequently \begin{equation*} \sigma^{-1} n^{-1} \widetilde{X}_{n} \xrightarrow{\mathcal{L}} X^{(1/2)}, \end{equation*} where $X^{(1/2)}$ has the unique distribution whose $s$th moment is given by $m_s$. \end{theorem} \begin{proof}[Proof sketch] We provide an outline of the proof, leaving the details to the reader. We claim that it is sufficient to show that \begin{equation} \label{eq:34} \rho^n \tilde{\mu}_n^{[s]} T_n = [C_s + o(1)]n^{s-\frac32}, \end{equation} with $C_0 = c$, $C_1 = 0$, and for $s \geq 2$, \begin{equation*} C_s = \frac{1}{b \sqrt\rho}\frac{\Gamma(s-1)}{\Gamma(s-\frac12)} \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1, s_2, s_3} \left(\frac{\sigma}{\sqrt{2\pi}} \right)^{s_1} C_{s_2} C_{s_3} J_{s_1, s_2, s_3}. \end{equation*} Indeed, defining $m_s := \sigma^{-s} c^{-1} C_s$ and proceeding as in Theorem~\ref{thm:gt12} yields the claim. To show~\eqref{eq:34}, we proceed by induction. 
The case~$s=0$ is easily checked, and the case $s=1$ follows from~\eqref{eq:35}. For $s \geq 2$ we use the induction hypothesis and approximation of sums by Riemann integrals in~\eqref{eq:36} to get \begin{equation*} \rho^n (n-1) T_n \tilde{r}_n^{[s]} \sim D_s n^{s-1}, \end{equation*} where \begin{equation*} D_s := a_1 \sum_{\substack{s_1+s_2+s_3=s\\ s_2, s_3 < s}} \binom{s}{s_1, s_2, s_3} \left(\frac{\sigma}{\sqrt{2\pi}} \right)^{s_1} C_{s_2} C_{s_3} J_{s_1, s_2, s_3}. \end{equation*} Since we know a priori that $\tilde{r}^{[s]}(z)$ is amenable to singularity analysis it follows that [cf.~\eqref{eq:37}] \begin{equation*} \tilde{r}^{[s]}(z) \sim \Gamma(s){D_s} Z^{-s} \end{equation*} and completing the computations as in the proof of Proposition~\ref{prop:gt12} yields the proof of~\eqref{eq:34}. \end{proof} \section{One-sided destruction} \label{sec:one-sided-destr} \subsection{Expectation\label{seco1}} We study equation \eqref{eqno4} for the toll $t_{n} = n^{\alpha}$ with $\alpha \ge 0$ and start by establishing a singular expansion for the expectation $\mu^{[1]}(z)$. Since $\mu^{[0]}(z) = T(z)$, we have from~\eqref{eq:28} that \begin{equation*} r^{[1]}(z) = t(z) \odot \big[a_{1} z T'(z) T(z) + a_{0} T^{2}(z)\big], \end{equation*} which has already been considered in Section~\ref{sec:mean}. In the remaining part of Section~\ref{seco1}, we suppose now $\alpha \not\in \{ \frac12, \frac32, \ldots \} \cup \{0,1,2,\ldots \}$. (The complementary cases are covered in the proof of Theorem~\ref{theo1}.) Then a compatible singular expansion for $r^{[1]}(z)$ is available at~\eqref{eq:30}. This leads to the expansion~\eqref{eq:32} for $g^{[1]}(z)$ and consequently, using~\eqref{eq:27}, to \begin{equation*} \frac{g^{[1]}(z)}{T(z)} \sim \frac{\Gamma(\alpha+\frac{1}{2})}{2 \rho \sqrt{\pi}} Z^{-\alpha-1} + Z^{-\alpha-\frac{1}{2}} \mathcal{A} + Z^{-\alpha} \mathcal{A} + Z^{-{1}/{2}} \mathcal{A} + \mathcal{A}. 
\end{equation*} Integrating the last expression gives the singular expansion \begin{equation*} \int_{0}^{z} \frac{g^{[1]}(t)}{T(t)} dt \sim \frac{\Gamma(\alpha+\frac{1}{2})}{2 \alpha \sqrt{\pi}} Z^{-\alpha} + Z^{-\alpha+\frac{1}{2}} \mathcal{A} + Z^{-\alpha+1} \mathcal{A} + \mathcal{A} + Z^{{1}/{2}} \mathcal{A}. \end{equation*} Now using~\eqref{eqno4}, we obtain easily the desired expansion for $\mu^{[1]}(z)$: \begin{align} \mu^{[1]}(z) & = T(z) \int_{0}^{z} \frac{g^{[1]}(t)}{T(t)} dt + t_{1} T(z) \notag \\ & \sim \frac{\tau \Gamma(\alpha+\frac{1}{2})}{2 \alpha \sqrt{\pi}} Z^{-\alpha} + Z^{-\alpha+\frac{1}{2}} \mathcal{A} + Z^{-\alpha+1} \mathcal{A} + \mathcal{A} + Z^{{1}/{2}} \mathcal{A}. \label{eq:25} \end{align} Via singularity analysis, we thus get the following expansion for the coefficients: \begin{equation*} \rho^n [z^{n}] \mu^{[1]}(z) = \rho^{n} \mu_{n}^{[1]} T_{n} \sim \frac{\tau \Gamma(\alpha+\frac{1}{2})}{2 \sqrt{\pi} \, \Gamma(\alpha+1)} n^{\alpha-1} + n^{\alpha-\frac{3}{2}} \mathcal{N} + n^{\alpha-2} \mathcal{N} + n^{-{3}/{2}} \mathcal{N}, \end{equation*} which together with~\eqref{eq:15.1} yields the full asymptotic expansion \begin{equation} \mu_{n}^{[1]} \sim \frac{\sigma \Gamma(\alpha+\frac{1}{2})}{\sqrt{2} \, \Gamma(\alpha+1)} n^{\alpha+\frac{1}{2}} + n^{\alpha} \mathcal{N} + n^{\alpha-\frac{1}{2}} \mathcal{N} + \mathcal{N}, \label{eqno5} \end{equation} with $\sigma$ defined at~\eqref{eq:15}. \subsection{Higher moments and limiting distributions} We state the main result of this section: \begin{theorem} \label{theo1} Let $\alpha \geq 0$. Define $\sigma := \tau \sqrt{\frac{\Phi''(\tau)}{\Phi(\tau)}}$ and $\alpha' := \alpha + \frac{1}{2}$. Then, for toll function $t_{n} = n^{\alpha}$, the moments $\mu_n^{[s]} := \E Y_{n}^{s}$ satisfy the following asymptotic expansion as $n \to \infty$: \begin{equation*} \mu_{n}^{[s]} = \frac{s! 
\sigma^{s}}{2^{{s}/{2}}} \prod_{j=1}^{s}\frac{\Gamma(j \alpha')}{\Gamma(j\alpha'+\frac{1}{2})} n^{s \alpha'} + O\big(n^{s \alpha'-q }\big), \end{equation*} with \begin{equation*} q := \begin{cases} \frac12 - \epsilon & \text{\textup{if} $\alpha \in \{0, 1/2\}$} \\ \min\{\alpha, 1/2\} & \textup{otherwise}, \end{cases} \end{equation*} where $\epsilon > 0$ is arbitrarily small. Thus the normalized random variable~$Y_{n}$ converges weakly to a random variable~$Y^{(\alpha)}$: \begin{equation*} \sigma^{-1} n^{-\alpha'} Y_{n} \xrightarrow{\mathcal{L}} Y^{(\alpha)}, \end{equation*} where $Y^{(\alpha)}$ has the unique distribution with (for $s \geq 1$) $s$th moment \begin{equation*} m_{s} = \frac{s!}{2^{{s}/{2}}} \prod_{j=1}^{s} \frac{\Gamma(j \alpha')}{\Gamma(j\alpha'+\frac{1}{2})}. \end{equation*} In particular when $\alpha = 0$ (i.e., $t_n \equiv 1$), $\sigma n^{-1/2} Y_n$ converges weakly to a standard Rayleigh distributed random variable~$Y^{(0)}$ with density \begin{equation*} f(y) = y e^{-y^2/2}, \quad y \geq 0. \end{equation*} In this case the asymptotics of $\mun{s}$ can be sharpened to \begin{equation*} \mu_{n}^{[s]} = \frac{s! \sigma^s \sqrt\pi}{2^{s/2} \Gamma(\frac{s+1}{2})} n^{{s}/{2}} \, \left[1 + O\left(\frac{\log n}{\sqrt{n}}\right) \right]. \end{equation*} \end{theorem} \begin{proof} We use induction on~$s$. We begin with $\alpha > 0$. Observe that it is sufficient to show that the generating functions~$\mu^{[s]}(z)$ admit the asymptotic expansions~\eqref{eqno6} around their dominant singularities at $z=\rho$. Then, using singularity analysis, the claim follows. What we will show is that \begin{equation} \label{eqno6} \mu^{[s]}(z) = \frac{s! \sigma^{s-1} \tau}{2^{\frac{s+1}{2}} (s \alpha'-\frac{1}{2}) \sqrt{\pi}} \gamma_s Z^{-s \alpha'+\frac{1}{2}} + O\big(|Z|^{-s\alpha'+ \frac12 + q}\big), \end{equation} where \begin{equation*} \gamma_s := \frac{\prod_{j=1}^{s} \Gamma(j \alpha')}{\prod_{j=1}^{s-1} \Gamma(j \alpha'+\frac{1}{2})}. 
\end{equation*} First we consider $s=1$, where we immediately obtain from the full expansion~\eqref{eq:25} that~\eqref{eqno6} is true for all $\alpha \not\in \{ \frac12, \frac32, \ldots \} \cup \{1,2,\ldots \}$. If, on the other hand, $\alpha \in \{\frac{1}{2}, \frac32, \ldots \} \cup \{1,2,\ldots \}$, then, repeating the computations of Section~\ref{seco1}, it is easily seen that logarithmic terms appear in the expansion of $\mu^{[1]}(z)$. But apart from the case $\alpha=\frac{1}{2}$, they do not affect the main term or the asymptotic growth order of the second-order term. If $\alpha = \frac{1}{2}$, one observes that the general formula for the main term holds, but the bound for the remainder term is different:\ $O(|\log Z^{-1}|)$, not $O(1)$. Summarizing these cases, the expansion~\eqref{eqno6} holds for $s=1$. Next we assume that \eqref{eqno6} holds for all $1 \le s_{2} < s$ with a given $s > 1$. From~\eqref{eqno6} follows the expansion \begin{equation*} \partial_{z} \mu^{[s_{2}]}(z) = \frac{s_{2}! \sigma^{s_{2}-1} \tau} {2^{\frac{s_{2}+1}{2}} \sqrt{\pi} \rho} \gamma_{s_2} Z^{-s_{2} \alpha' - \frac{1}{2}} + O\big(|Z|^{-s_{2} \alpha' - \frac{1}{2} + q }\big), \end{equation*} which holds for all $1 \le s_{2} < s$. Together with $\mu^{[0]}(z) = T(z)$ and $a_1 \tau = 1$, this gives the following singular expansion: \begin{multline*} T(z) \big(a_{1} z \partial_{z} \mu^{[s_{2}]}(z) + a_{0} \mu^{[s_{2}]}(z)\big) = \\ \begin{cases} \frac12 b \rho^{1/2} Z^{-{1}/{2}} + O(1), & s_{2}=0 \\ \frac{s_{2}! \sigma^{s_{2}-1} \tau}{2^{(s_{2}+1)/{2}} \sqrt{\pi}} \gamma_{s_2} Z^{-s_{2} \alpha' - \frac{1}{2}} + O\big(|Z|^{-s_{2} \alpha' - \frac12 + q}\big), & 1 \leq s_2 < s.
\end{cases} \end{multline*} Under the assumptions $s_{1}+s_{2}=s$ and $s_{2}<s$, we get via singularity analysis the expansion \begin{multline*} \rho^{n} [z^{n}] \binom{s}{s_{1}} t^{\odot s_{1}}(z) \odot \big[ T(z) \big(a_{1} z \partial_{z} \mu^{[s_{2}]}(z) + a_{0} \mu^{[s_{2}]}(z)\big)\big] = \\ \begin{cases} c n^{s \alpha - \frac{1}{2}} + O\big(n^{s \alpha - 1}\big), & s_{2} = 0 \\ \binom{s}{s_{1}} \frac{s_{2}! \sigma^{s_{2}-1} \tau}{2^{(s_{2}+1)/{2}} \sqrt{\pi}} \prod_{j=1}^{s_{2}}\frac{\Gamma(j \alpha')}{\Gamma(j \alpha' + \frac{1}{2})} n^{s \alpha + \frac{s_{2}-1}{2}} + O\big(n^{s \alpha + \frac{s_{2}-1}{2}- q}\big), & 1 \le s_{2} < s. \end{cases} \end{multline*} Thus under the assumptions given above, the dominant contribution to~$r^{[s]}(z)$ is obtained when $s_{2} = s-1$ and $s_{1}=1$, giving the expansion \begin{equation*} \rho^{n} [z^{n}] r^{[s]}(z) = \frac{s! \sigma^{s-2} \tau}{2^{{s}/{2}} \sqrt{\pi}} \prod_{j=1}^{s-1} \frac{\Gamma(j \alpha')}{\Gamma(j \alpha' + \frac{1}{2})} n^{s\alpha'-1} + O\big(n^{s \alpha' - 1 - q}\big), \end{equation*} which in turn yields the following singular expansion for $r^{[s]}(z)$: \begin{equation} r^{[s]}(z) = \frac{s! \sigma^{s-2} \tau}{2^{{s}/{2}} \sqrt{\pi}} \gamma_s Z^{-s \alpha'} + O\big(|Z|^{-s \alpha'+ q}\big). \end{equation} Immediately from~\eqref{eq:26} and~\eqref{eq:10} follow the expansions \begin{equation*} g^{[s]}(z) = \frac{s! \sigma^{s-1} \tau}{2^{({s+1})/{2}} \sqrt{\pi} \rho} \gamma_s Z^{-s \alpha' - \frac{1}{2}} + O\big(|Z|^{-s \alpha' - \frac12 + q}\big) \end{equation*} and \begin{equation*} \frac{g^{[s]}(z)}{T(z)} = \frac{s! \sigma^{s-1}}{2^{({s+1})/{2}} \sqrt{\pi} \rho} \gamma_s Z^{-s \alpha' - \frac{1}{2}} + O\big(|Z|^{-s \alpha' - \frac12 + q}\big). \end{equation*} Integrating leads to \begin{equation*} \int_{0}^{z} \frac{g^{[s]}(t)}{T(t)} dt = \frac{s! \sigma^{s-1}}{2^{({s+1})/{2}} \sqrt{\pi} (s \alpha' - \frac{1}{2})} \gamma_s Z^{-s \alpha' + \frac{1}{2}} + O\big(|Z|^{-s \alpha'+ \frac12 + q}\big). 
\end{equation*} Using~\eqref{eqno4} and~\eqref{eq:7}, we obtain~\eqref{eqno6} and Theorem~\ref{theo1} is proved for $\alpha > 0$. The case $\alpha=0$ has already been proved in \cite{panholzer:2003}, where the distribution has been characterized by its moments. Therefore we describe only very briefly how to obtain this result with the present approach. One need only show by induction the singular behavior \begin{multline} \label{eqno7} \mu^{[s]}(z) \\= \begin{cases} \frac{\tau}{2} \ln Z^{-1} + \mathcal{A} + O\big(|Z^{{1}/{2}} \log Z^{-1}| \big), & s=1, \\ \sqrt{2} \sigma \tau Z^{-{1}/{2}} + O\big(|\log Z^{-1}|^2\big), & s=2, \\ \frac{\tau}{\sqrt{\pi}} \sigma^{s-1} 2^{({s-1})/{2}} \Gamma(\frac{s}{2}+1) \Gamma(\frac{s-1}{2}) Z^{-({s-1})/{2}} + O\big(|Z^{-\frac{s}{2}+1} \log Z^{-1}|\big), & s \ge 3. \end{cases} \end{multline} The desired result then follows by applying singularity analysis and the duplication formula for the $\Gamma$-function. To begin the proof of~\eqref{eqno7}, first we remark that for $s=1$ one proceeds as in Section~\ref{seco1} and gets the full expansion \begin{equation*} \mu^{[1]}(z) = \frac{\tau}2 \ln Z^{-1} + \mathcal{A} + (Z^{{1}/{2}} \log Z^{-1}) \mathcal{A} + Z^{{1}/{2}} \mathcal{A} + (Z \log Z^{-1}) \mathcal{A}, \end{equation*} which of course gives~\eqref{eqno7} in that case. Assuming that~\eqref{eqno7} holds for $1 \le s_{2} < s$ with a given $s \ge 2$, we have the singular expansion \begin{equation*} \begin{split} \partial_{z} \mu^{[s_{2}]}(z) = \frac\tau{\sqrt\pi\rho}{\sigma^{s_{2}-1} 2^{({s_{2}-1})/{2}} \Gamma\left(\frac{s_{2}}{2}+1\right) \Gamma\left(\frac{s_{2} + 1}{2}\right)} Z^{-({s_{2}+1})/{2}} \\ {}+ O\big(|Z^{-\frac{s_2}{2}} \log Z^{-1}|\big).
\end{split} \end{equation*} Under the restrictions $s_{1}+s_{2}=s$ and $s_{2}<s$, we obtain via singularity analysis the expansions \begin{multline*} \rho^{n} [z^{n}] \left\{\binom{s}{s_{1}} t^{\odot s_{1}}(z) \odot \big[T(z) \big(a_{1} z \partial_{z} \mu^{[s_{2}]}(z) + a_{0} \mu^{[s_{2}]}(z)\big) \big]\right\} \\ = \binom{s}{s_{1}} \frac{\tau}{\sqrt\pi} \sigma^{s_{2}-1} 2^{\frac{s_{2}-1}{2}} \Gamma\left(\frac{s_{2}}{2}+1\right) n^{\frac{s_{2}-1}{2}} + O\big(n^{\frac{s_{2}}{2}-1} \log n\big) \end{multline*} for $1 \le s_{2} < s$. [For $s_2 = 0$, an expansion is already available at~\eqref{eq:33}.] Under the given restrictions, the dominant contribution to $r^{[s]}(z)$ is obtained when $s_{2}=s-1$ and we obtain the following singular behavior of $r^{[s]}(z)$: \begin{equation*} r^{[s]}(z) = \frac\tau{\sqrt\pi} {s \sigma^{s-2} 2^{\frac{s}{2}-1} \Gamma\left(\frac{s+1}{2}\right) \Gamma \left( \frac{s}{2} \right)} Z^{-{s}/{2}} + O\big(|Z^{-\frac{s-1}{2}} \log Z^{-1}|\big). \end{equation*} We proceed with \begin{equation*} \frac{g^{[s]}(z)}{T(z)} = \frac{1}{\rho\sqrt\pi} {s \sigma^{s-1} 2^{\frac{s-3}{2}} \Gamma\left(\frac{s+1}{2}\right) \Gamma\left(\frac{s}{2}\right)} Z^{-\frac{s+1}{2}} + O\big(|Z^{-{s}/{2}} \log Z^{-1}|\big), \end{equation*} and, due to~\eqref{eqno4} and~\eqref{eq:7}, integrating gives~\eqref{eqno7} for $s \geq 2$ and completes the proof. \end{proof} \bibliographystyle{habbrv}
\section{Conclusion} In this paper, we present an updated discussion of commonly used data compression schemes in in-memory column databases. This study contributes the following messages to our community: \begin{itemize} \item It is still beneficial to apply data compression in in-memory databases. A clear benefit is that compression reduces the memory requirement. More importantly, some compression schemes (e.g., dictionary encoding, run-length encoding and bitmap encoding) can answer queries directly over compressed data, yielding high performance. \item We give insights regarding how to use data compression in memory databases to maximize performance, as shown in Figure~\ref{fig:decision}. If a column is integer-type (if not, we recommend applying word-aligned dictionary encoding first to transform it to integer-type) and sorted, we recommend using vsb-RLE. Otherwise, depending on the domain size, we can use compressed bitmap encoding (small domain size) or bit-aligned dictionary encoding (large domain size). \item Huffman encoding is not recommended for in-memory databases due to its slow decompression performance. \end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=1\columnwidth]{decision2.pdf} \caption{A decision tree for selecting proper compression schemes}\label{fig:decision} \end{figure} \section{How to compress in memory \\databases?} \label{sec:compression} In this section, we (i) introduce the widely used database compression schemes, (ii) analyze whether they can be applied in memory databases, and (iii) propose our optimizations (if any) to further improve performance. \subsection{Dictionary encoding (DICT)} \noindent\textbf{Compression description.} Dictionary encoding (DICT)~\cite{chen2001query} is widely used for string-type columns. It maps each original string value to a 32-bit integer (a.k.a.\ \textbf{word-DICT}) according to a global dictionary table.
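The word-DICT mapping and the predicate rewriting it enables can be sketched as follows (a minimal, illustrative Python sketch with made-up names; this is not how the cited systems are implemented):

```python
# Hypothetical sketch of word-aligned dictionary encoding (word-DICT):
# each distinct string gets a 32-bit integer code via a global table,
# and query predicates are rewritten into integer comparisons.

def build_dictionary(column):
    """Map each distinct string to an integer code, in first-seen order."""
    table = {}
    for value in column:
        if value not in table:
            table[value] = len(table) + 1  # codes start at 1, as in the example
    return table

def encode_column(column, table):
    """Replace every string by its integer code."""
    return [table[v] for v in column]

def rewrite_equality_predicate(table, literal):
    """Rewrite WHERE col = 'literal' into an integer comparison."""
    return table[literal]

states = ["Alabama", "Alaska", "Alaska", "Arizona"]
table = build_dictionary(states)
encoded = encode_column(states, table)

# "count the customers in Alaska" runs entirely on the compressed column:
code = rewrite_equality_predicate(table, "Alaska")
count = sum(1 for c in encoded if c == code)
```

The point is that the string comparison happens exactly once, when the predicate is rewritten; the scan itself compares only integers.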
For example, in a \col{State} column, ``Alabama'' is mapped to ``1'' and ``Alaska'' is mapped to ``2''. Note that one underlying assumption is that the global dictionary table is small enough to fit in memory. \noindent\textbf{Is it useful in memory databases?} Dictionary encoding schemes can be applied in memory databases, since they can answer queries directly on the compressed data by encoding query conditions using the same global dictionary table. For example, assume the \col{State} column of a \col{Customer} table is encoded by word-DICT; then the SQL query counting the total number of customers in Alaska (shown in Figure~\ref{fig:dictionary_example}(a)) is rewritten to the query in Figure~\ref{fig:dictionary_example}(b) by converting the query condition ``Alaska'' to ``2''. \begin{figure}[h!] \centering \includegraphics[width=1\columnwidth]{dictionary_example} \caption{\textbf{Example query and its rewritten query}}\label{fig:dictionary_example} \end{figure} The rewritten query can be performed directly over the compressed column, which is much faster than running it over the uncompressed data, because it reduces expensive string comparisons to cheap integer comparisons. \noindent\textbf{Our optimizations.} Existing dictionary encoding schemes mainly focus on string-type columns; however, we want to emphasize that there are optimization opportunities for integer-type columns, which are common for foreign keys.\footnote{Usually, if a foreign key column is string-type, then a word-aligned dictionary encoding can be applied to convert it to an integer-type column.} Integer values can be mapped to smaller codes using the minimal number of bits needed to distinguish the original integers. More formally, let $D$ be the domain size; then each original value can be encoded in $\lceil \log_{2} D\rceil$ bits, a scheme we call \textbf{bit-DICT}.
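The bit-width computation behind bit-DICT can be sketched as follows (an illustrative sketch; the function names are ours, and the codes are shown as bit strings for readability rather than packed into machine words):

```python
import math

# Sketch of bit-aligned dictionary encoding (bit-DICT) for integer columns:
# a value from a domain of size D is stored in ceil(log2 D) bits.

def bit_width(domain_size):
    """Minimal number of bits needed to distinguish domain_size values."""
    return math.ceil(math.log2(domain_size))

def bit_dict_encode(value, domain_size):
    """Binary code of `value`, zero-padded to the minimal width."""
    return format(value, "0{}b".format(bit_width(domain_size)))

width = bit_width(50)               # 6 bits suffice for a domain of size 50
code_8 = bit_dict_encode(8, 50)     # '001000'
code_22 = bit_dict_encode(22, 50)   # '010110'
```

No global table is needed here: the padded binary expansion itself is the one-to-one mapping.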
For example, suppose the domain size is $D=50$; then each value can be expressed in $\lceil \log_{2} 50\rceil = 6$ bits, so the numbers 8 and 22 are encoded as 001000 and 010110, respectively.\footnote{The most significant 0's are padding 0's.} It is worth mentioning that the global dictionary table is no longer needed, because the mapping from an integer to its binary expression (with padding 0's) is one-to-one. Note that maintaining the global table is a limitation of existing dictionary encoding schemes when applied to string-type columns, because the dictionary table is big when the domain size is large. \noindent\textbf{Remark.} For in-memory column databases, word-DICT is a reasonable choice for string-type columns with a small domain size (say, less than $50,000$~\cite{abadi2006integrating}). Our bit-DICT is highly recommended for integer-type columns (e.g., foreign key columns). \subsection{Run-length encoding (RLE)} \noindent\textbf{Compression description.} Run-length encoding (RLE)~\cite{abadi2006integrating} is an attractive approach for compressing data in column databases. It compresses runs of duplicated values into a compact representation, which implies that it is applicable only to sorted columns. Traditional RLE expresses the repeats of the same value as pairs (\textbf{v}alue, run-\textbf{l}ength); we call it \textbf{vl-RLE}. For example, if the value ``Alaska'' appears 10000 times consecutively in the \col{State} column, it can simply be expressed as (Alaska, 10000) instead of storing 10000 duplicates. \noindent\textbf{Is it useful in memory databases?} The answer is absolutely positive. vl-RLE is efficient for queries that operate on the compressed column. For instance, the query in Figure~\ref{fig:dictionary_example}(a) can be answered directly on the compressed column, i.e., the answer is 10000 in this case. Not only can vl-RLE be applied in memory databases, but so can its two variants: \textbf{vsl-RLE} and \textbf{vs-RLE}.
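The direct answering of a count query on vl-RLE pairs, as described above, can be sketched like this (a toy illustration with made-up names; the run lengths are summed without ever expanding the runs):

```python
# Toy sketch of vl-RLE: a sorted column is stored as (value, run_length)
# pairs, and a COUNT query is answered by summing run lengths directly.

def rle_compress(column):
    """Compress a sorted column into (value, run_length) pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, length) for v, length in runs]

def count_equal(runs, target):
    """COUNT rows with the given value, touching only the run metadata."""
    return sum(length for v, length in runs if v == target)

column = ["Alabama"] * 3 + ["Alaska"] * 5 + ["Arizona"] * 2
runs = rle_compress(column)
answer = count_equal(runs, "Alaska")
```

The query cost depends on the number of runs, not the number of rows, which is why RLE is so attractive for sorted columns.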
vsl-RLE uses a (\textbf{v}alue, \textbf{s}tart-position, run-\textbf{l}ength) scheme to represent the repeats of the same value. The added \textit{start-position} records the row id at which the run of repeated values starts. The set of queries that can be executed directly on the compressed data with vsl-RLE is a superset of those supported by vl-RLE, since vsl-RLE also supports queries accessing other columns. Consider a table R(A, B) with A encoded by vsl-RLE; the encoding for a value $t\in A$ is $(t, s, l)$, where $s$ is the start row id and $l$ is the number of repeats of $t$. The answer to the query $\pi_B\sigma_{A=t}R$ is the list of values in column B from the $s$-th row to the $(s+l-1)$-th row. vs-RLE uses a (\textbf{v}alue, \textbf{s}tart-position) scheme to represent the repeats of the same value. It has the same capability as vsl-RLE for answering queries directly over compressed data, but needs less memory. Suppose the table R(A, B) has A encoded by vs-RLE, and the encodings for the values $t\in A$ and $t+1\in A$ are $(t, s)$ and $(t+1, s')$, respectively. The answer to the query $\pi_B\sigma_{A=t}R$ is the list of values in column B from the $s$-th row to the $(s'-1)$-th row. \noindent\textbf{Our optimizations.} We claim that for integer-type columns, vs-RLE can be further compressed by bit-DICT; we call the result \textbf{vsb-RLE}. For the sake of query processing performance, only the values (not the start positions) are further encoded by bit-DICT. Note that, as we described before, bit-DICT does not hurt performance; thus vsb-RLE has performance similar to vs-RLE and vsl-RLE but requires less space. Table~\ref{table:RLE} reports the space cost of each RLE scheme. \begin{table}[h!]
\small \centering \renewcommand{\tabcolsep}{2.8mm} \begin{tabular}{c|c|c|c|c}\hline\hline Uncompressed & vl-RLE & vsl-RLE & vs-RLE & vsb-RLE \\\hline 4GB & 8MB & 12MB & 8MB & \textbf{6.5MB}\\\hline \hline \end{tabular} \caption{\textbf{Space cost of different RLE schemes on an integer-type column with 1 billion rows and 1 million distinct values}} \label{table:RLE} \end{table} \noindent\textbf{Remark.} For sorted columns, vsb-RLE is highly recommended, because it requires less space than the other schemes while offering similar performance. \subsection{Bitmap encoding} \noindent\textbf{Compression description.} Another well-studied encoding method in column databases is bitmap encoding~\cite{antoshenkov1995byte,wu2006optimizing}. Each distinct value $t$ is associated with a bit-vector indicating the occurrences of $t$ in the column. The default values in the bit-vector are zeros, while the $i$-th position of the bit-vector is set to 1 if the $i$-th row of the original column has value $t$. The length of each bit-vector equals the number of rows in the table, which is extremely large in large-scale databases. To reduce the space overhead of the bitmap index, run-length encoding is employed to compress the continuous 1's and 0's on bit-vectors. Two representative compressed bitmap encodings are BBC (byte-aligned bitmap code)~\cite{antoshenkov1995byte} and WAH (word-aligned hybrid code)~\cite{wu2006optimizing}. Compared with BBC, WAH is about 12 times faster but uses about 60\% more space~\cite{wu2006optimizing}. Though both WAH and BBC can reduce the size of each bit-vector, the number of bit-vectors cannot be reduced. Therefore, compressed bitmap schemes are applicable only to columns with a small domain size, e.g., less than 50~\cite{abadi2006integrating}. \noindent\textbf{Is it useful in memory databases?} Bitmap encoding can be applied in memory databases, since it allows logical bitwise operations directly on compressed bitmaps~\cite{wu2006optimizing}.
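This direct bitwise evaluation can be sketched as follows (a toy sketch on plain, uncompressed bit-vectors backed by Python integers; WAH and BBC apply the same bitwise logic to run-compressed representations):

```python
# Toy sketch of bitmap selection: one bit-vector per distinct value,
# range predicates answered by OR-ing bit-vectors together.

def build_bitmaps(column):
    """One integer-backed bit-vector per distinct value; bit i marks row i."""
    bitmaps = {}
    for row_id, v in enumerate(column):
        bitmaps[v] = bitmaps.get(v, 0) | (1 << row_id)
    return bitmaps

def rows_where_at_most(bitmaps, bound):
    """Row ids satisfying column <= bound, via bitwise OR of bit-vectors."""
    combined = 0
    for v, bits in bitmaps.items():
        if v <= bound:
            combined |= bits
    return [i for i in range(combined.bit_length()) if combined >> i & 1]

a_column = [1, 3, 2, 2, 4, 1]
bitmaps = build_bitmaps(a_column)
matching_rows = rows_where_at_most(bitmaps, 2)  # rows with A <= 2
```

Only one bitwise OR per qualifying value is needed, regardless of how many rows match.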
For example, consider the query: \smallskip ~~~~~~~\textbf{SELECT} B \textbf{FROM} R \textbf{WHERE} A $\leq$ 2 \smallskip The main operation of the query is to get the row ids satisfying $\sigma_{A\leq2}R$, which can be answered by performing a bitwise logical operation $b_1~OR~b_2$, where $b_1$ and $b_2$ are the compressed bit-vectors for values 1 and 2, respectively. \noindent\textbf{Remark.} It is recommended to use compressed bitmaps for a column in memory column databases only when the domain size is small. \subsection{Huffman encoding} \noindent\textbf{Compression description.} Huffman encoding~\cite{Huffman52} is a representative variable-length encoding, which is widely used in many areas. It is based on the frequency of occurrence of data values. The principle is to use fewer bits to encode values that occur more frequently. \noindent\textbf{Is it useful in memory databases?} Unfortunately, Huffman encoding cannot be applied in memory databases, since it needs to decompress the whole column to answer queries, which is time-consuming. This is because Huffman encoding does not support partial decompression due to its variable-length structure. We conducted experiments to test the decompression performance. The decompression time is $3.2$ minutes for a column with 1 million rows (domain size is 100 and data follows a Zipf distribution). \noindent\textbf{Remark.} We do not recommend using Huffman encoding for in-memory column databases. \section{Introduction} Data compression is a well-known optimization in traditional disk-resident database systems~\cite{abadi2006integrating,chen2001query,IdreosGNMMK12}. The most important benefit of compression is to reduce the expensive disk I/O cost significantly by reducing the data size. The CPU decompression overhead (if needed) is inarguably negligible compared to the I/O time saving because of the giant performance gap between disks and CPUs.
As a result, modern database systems all embrace compression, e.g., Microsoft SQL Server, IBM DB2, Oracle, and PostgreSQL. However, recent years have witnessed an explosion of main memory databases, e.g., MonetDB~\cite{IdreosGNMMK12} and SAP HANA~\cite{FarberMLGMRD12}. This trend is fueled by several fundamental technology trends. (1) Memory is $5\sim6$ orders of magnitude faster than disk-based storage, which is crucial to high-performance big data analytics. (2) The capacity of memory is continuously increasing while the price (in \$/GB) is dropping. As a result, today's commodity server machines have more than 32~GB of memory. (3) The breakthrough in non-volatile main memory, exemplified by STT-RAM, PCM, and Re-RAM \cite{zhang2015}, makes it possible to extend the DRAM capacity cost-effectively. That is because NVMM is cheaper than DRAM (in \$/GB) while still being competitive in terms of performance. (4) The significant advances in networking also make it feasible to extend DRAM capacity by connecting many machines together through ultra-fast networks; e.g., RAMCloud \cite{OusterhoutGGKLM15} is such a memory system. With everything in memory and disks removed from the critical path, the decompression overhead stands out and can adversely affect the performance of in-memory databases. Thus, a natural question is: \textbf{\emph{is it still reasonable to apply data compression for in-memory databases}?} If the answer is yes, then the follow-up question is \textbf{\textit{how to apply data compression schemes for in-memory databases in order to maximize the performance?}} Existing works provide insufficient discussion of the above two questions. Therefore, in this work, we want to bridge the gap. In particular, we want to answer the two questions with a focus on OLAP (Online Analytical Processing) workloads\footnote{The counterpart is OLTP (Online Transaction Processing) workloads.} due to their importance in decision support and data mining applications.
OLAP workloads usually require reading a large portion of data from particular columns. Therefore, it is preferred to use column-oriented databases to answer such queries efficiently.\footnote{It has been proven that column-oriented databases outperform row-oriented databases on OLAP workloads by $1\sim2$ orders of magnitude~\cite{abadi2006integrating}.} Hence, this work mainly discusses data compression for OLAP workloads over large-scale in-memory column databases. \section{To compress or not in memory databases?} In this section, we discuss whether it still makes sense to use data compression in memory-based databases. An obvious advantage of data compression in memory databases is the reduced memory requirement. Next, we explain more about system performance. Intuitively, if the entire data is stored in memory, then data compression will slow down performance due to the additional decompression overhead. This is indeed true for some compression schemes, e.g., Huffman encoding~\cite{Huffman52}. However, some other compression schemes (e.g., dictionary encoding~\cite{chen2001query}, run-length encoding~\cite{abadi2006integrating} and bitmap encoding~\cite{wu2006optimizing}), as we will show in Section~\ref{sec:compression}, allow queries to be answered directly over compressed data without any decompression. More importantly, performing queries directly over compressed data is even faster than over uncompressed raw data, since less data is processed. Therefore, whether a data compression scheme is useful in memory databases depends on whether it can answer queries directly without decompression. If yes, it is strongly recommended to use that compression in memory databases, because it ``kills two birds with one stone'': it not only reduces the memory requirement but also improves query performance.
Otherwise, it is not recommended, since the decompression time can become a dominant cost when the data is huge, although it is up to the application to balance the tradeoff between the decompression overhead and the memory savings.
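To make the distinction concrete, the following minimal sketch (the encoding routine, column data, and query below are all invented for illustration) shows how an aggregate query can be evaluated directly over a run-length-encoded column, touching one run rather than one row per value:

```python
# Hypothetical sketch: answering a query directly over a
# run-length-encoded (RLE) column, with no decompression step.

def rle_encode(values):
    """Compress a column into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def count_matching(runs, predicate):
    """SELECT COUNT(*) WHERE predicate(value): evaluated per run,
    so a run of n identical values costs one predicate check."""
    return sum(n for v, n in runs if predicate(v))

column = ["US", "US", "US", "DE", "DE", "FR", "FR", "FR", "FR"]
runs = rle_encode(column)  # [("US", 3), ("DE", 2), ("FR", 4)]
print(count_matching(runs, lambda v: v == "FR"))  # 4
```

Because the predicate is checked once per run instead of once per row, the query over the compressed column does strictly less work than a scan over the raw values, which is the effect discussed above.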
\section{References}\vspace{-.1\baselineskip}} \newcommand{\spara}[1]{\smallskip\noindent\textbf{#1}} \newcommand{\mpara}[1]{\medskip\noindent\textbf{#1}} \newcommand{\para}[1]{\noindent\textbf{#1}} \newcommand{\ensuremath{\mathbf{NP}}}{\ensuremath{\mathbf{NP}}} \newcommand{{{\np}-hard}\xspace}{{{\ensuremath{\mathbf{NP}}}-hard}\xspace} \newcommand{{\ensuremath{k}}-{\sc Edge\-Addition}\xspace}{{\ensuremath{k}}-{\sc Edge\-Addition}\xspace} \newcommand{{\ensuremath{k}}-{\sc Edge\-Addition\-Expectation}\xspace}{{\ensuremath{k}}-{\sc Edge\-Addition\-Expectation}\xspace} \newcommand{{\ensuremath{k}}-{\sc Edge\-Addition}{\ensuremath{_{0}}}\xspace}{{\ensuremath{k}}-{\sc Edge\-Addition}{\ensuremath{_{0}}}\xspace} \newcommand{\pagerank}[2]{{\ensuremath{\mathrm{pr}_{#1}({#2})}}\xspace} \newcommand{{\sc Vertex\-Cover}\xspace}{{\sc Vertex\-Cover}\xspace} \newcommand{{\sc metis}\xspace}{{\sc metis}\xspace} \newcommand{\ensuremath{\text{\sc RWC}}\xspace}{\ensuremath{\text{\sc RWC}}\xspace} \newcommand{\ensuremath{\text{\sc ROV}}\xspace}{\ensuremath{\text{\sc ROV}}\xspace} \newcommand{\ensuremath{\text{\sc ROV-AP}}\xspace}{\ensuremath{\text{\sc ROV-AP}}\xspace} \newcommand{c\xspace}{c\xspace} \newcommand{s\xspace}{s\xspace} \newcommand{w\xspace}{w\xspace} \newenvironment {squishlist} {\begin{list}{$\bullet$} { \setlength{\itemsep}{0pt} \setlength{\parsep}{3pt} \setlength{\topsep}{3pt} \setlength{\partopsep}{0pt} \setlength{\leftmargin}{1.5em} \setlength{\labelwidth}{1em} \setlength{\labelsep}{0.5em} } } {\end{list}} \begin{document} \copyrightyear{2017} \acmYear{2017} \setcopyright{acmlicensed} \acmConference{WebSci '17}{June 25-28, 2017}{Troy, NY, USA}\acmPrice{15.00}\acmDOI{http://dx.doi.org/10.1145/3091478.3091486} \acmISBN{978-1-4503-4896-6/17/06} \title{{{The Effect of Collective Attention\\on Controversial Debates on Social Media}}} \author{Kiran Garimella} \affiliation{% \institution{Aalto University} \city{Helsinki} \country{Finland} } \email{kiran.garimella@aalto.fi} 
\author{Gianmarco De~Francisci~Morales} \affiliation{% \institution{Qatar Computing Research Institute} \city{Doha} \country{Qatar} } \email{gdfm@acm.org} \author{Aristides Gionis} \affiliation{% \institution{Aalto University} \city{Helsinki} \country{Finland} } \email{aristides.gionis@aalto.fi} \author{Michael Mathioudakis} \affiliation{% \institution{Aalto University} \city{Helsinki} \country{Finland} } \email{michael.mathioudakis@aalto.fi} \renewcommand{\shortauthors}{Garimella, De Francisci Morales, Gionis, Mathioudakis} \begin{abstract} We study the evolution of long-lived controversial debates as manifested on Twitter from 2011 to 2016. Specifically, we explore how the \emph{structure of interactions} and \emph{content of discussion} vary with the level of collective attention, as evidenced by the number of users discussing a topic. Spikes in the volume of users typically correspond to external events that increase the public attention on the topic -- as, for instance, discussions about `gun control' often erupt after a mass shooting. This work is the first to study the dynamic evolution of polarized online debates at such a scale. By employing a wide array of network and content analysis measures, we find consistent evidence that increased collective attention is associated with increased network polarization and network concentration within each side of the debate; and overall more uniform lexicon usage across all users. \end{abstract} \maketitle \section{Introduction} Social media are a major venue of public discourse today, hosting the opinions of hundreds of millions of individuals. Due to their prevalence, they have become an invaluable instrument in the study of social phenomena and a fundamental subject of \emph{computational social science}. In this work, we study discussions around issues that are deemed important at a societal level --- and in particular, ones that are controversial.
This work is a step towards understanding how the discussion about controversial topics on social media evolves, and more broadly how these topics shape the discussion at a societal and political level~\citep{abramowitz2005can,highton2011long,lindaman2002issue}. We study how online discussions around controversial topics change as interest in them increases and decreases. We are motivated by the observation that interest in enduring controversial issues is re-kindled by external events, e.g., when a major related story is reported. One typical example is the gun control debate in the U.S., which is revived whenever a mass shooting occurs.\footnote{See, e.g., \url{http://slate.me/1NswLLD}.} The occurrence of such an event commonly causes an increase in collective attention, e.g., in the volume of related activity on social media. Given a controversial topic, our focus is to analyze the interactions among users involved in the discussion, and quantify how certain structural properties of the interaction network vary with the change in volume of activity. Our main finding is that the polarization reflected in the network structure of online interactions is correlated with the increase in the popularity of a topic. Differently from previous studies, we study the \emph{dynamic} aspects of controversial topics on social media. While the evolution of networks and polarization on social media have been studied in the past~\citep{conover2011political,leskovec2005graphs}, they have not been studied in conjunction before. In addition, we seek to understand the \emph{response} of social media to stimuli that cause increased interest in the topics, an issue that only very recently has seen some attention~\citep{romero2016social}. We take a longitudinal approach and collect data from Twitter that covers approximately five years.
This dataset gives us a very fine-grained view of the activity on social media, including the structure of the interactions among users, and the content they produced during this period. We track four topics of discussion that are controversial in the U.S., are recurring, and have seen considerable attention during the 2016 U.S. elections. Our methodology relies on recent advances in quantifying controversy on social media~\citep{garimella2016quantifying}. We build two types of networks: an \emph{endorsement} network from the retweet information on Twitter, and a \emph{communication} network from the replies. We aggregate the data at a daily level, thus giving rise to a time series of interaction graphs. Then, we identify the sides of a controversy via graph clustering, and find the \emph{core} of the network, i.e., the users who are consistently participating in the online discussion about the topic. Finally, we employ a wide array of measures that characterize the discussion about a topic on social media, both from the point of view of the network structure and of the actual content of the posts. Apart from our main result --- an increase in polarization linked to increased interest --- we also report on several other findings. We find that most of the interactions during events of interest happen within the different controversy sides, and replies do not cross sides very often, in line with previous observations~\citep{smith2013role}. In addition, increased interest does not alter the fundamental structure of the endorsement network, which is hierarchical, with a disproportionately large fraction of edges linking the periphery to the core. This finding suggests that most casual users, who seldom participate in the discussion, endorse opinions from the core of the side they belong to.
When looking at the content of the posts on the two sides of a controversy, we find a consistent trend of \emph{convergence}, as the lexicons become both more uniform and more similar to each other. This result indicates that, while the discussion is still controversial, both sides of the debate focus over the same fundamental issues brought under the spotlight by the event at hand. Conversely, we do not find a consistent long-term trend in the polarization of discussions, which contradicts the common narrative that our society is becoming more divided over time. Finally, we perform similar measurements for a set of topics that are non-political and non-controversial, and highlight differences with the results for controversial discussions.\footnote{A limited subset of our results appeared in a poster at ICWSM 2017~\cite{ebbandflow2017}.} \section{Related Work} \label{sec:related} A few studies exist on the topic of controversy in online news and social media. In one of the first papers, \citet{adamic2005political} study linking patterns and topic coverage of political bloggers, focusing on blog posts on the U.S.\ presidential election of 2004. They measure the degree of interaction between liberal and conservative blogs, and provide evidence that conservative blogs are linking to each other more frequently and in a denser pattern. These findings are confirmed by a more recent study of \citet{conover2011political}, who focus on political communication regarding congressional midterm elections. Using data from Twitter, they identify a highly segregated partisan structure (present in the retweet graph, but not in the mention graph), with limited connectivity between left- and right-leaning users. In another recent work, \citet{mejova2014controversy} consider discussions of controversial and non-controversial news over a span of $7$ months. They find a significant correlation between controversial issues and the use of negative affect and biased language. 
More recently, \citet{garimella2016quantifying} show that controversial discussions on social media have a well-defined structure, when looking at the \emph{endorsement} network. They propose a measure based on random walks (\ensuremath{\text{\sc RWC}}\xspace), which is able to identify controversial topics, and \emph{quantify} the level of controversy of a given discussion via its network structure alone. The aforementioned studies focus on static networks, which are a snapshot of the underlying dynamic networks. Instead, we are interested in network dynamics and, specifically, in how they respond to increased collective attention to the controversial topic. Several studies have looked at how networks evolve, and proposed models of network formation~\cite{leskovec2005graphs,leskovec2008microscopic}. Densification over time is a pattern often observed~\cite{leskovec2005graphs}, i.e., social networks gain more edges as the number of nodes grows. A change in the scaling behavior of the degree distribution has also been observed~\cite{ahn2007analysis}. \citet{newman2011structure} offer a comprehensive review. Most of these studies focus on social networks, and in particular, on the friendship relationship. In our work, we are interested in studying an \emph{interaction} network, which has markedly different characteristics. There is a large amount of literature devoted to studying the evolution of networks. For an overview, see the book by~\citet{dorogovtsev2013evolution}. However, none of these previous studies has devoted much attention to the evolution of interaction networks for controversial topics, especially when tracking topics for a long period of time. \citet{difonzo2014network} report on a user study that shows how the network structure affects the formation of stereotypes when discussing controversial topics. They find that segregation and clustering lead to a stronger ``echo chamber'' effect, with higher polarization of opinions.
Our study examines a similar correlation between polarization and network structure, although in a much wider context, and focusing on the influence of external events. \citet{garimella2017long} study polarization on Twitter over a long period of time, using content and network-based measures for polarization, and find that over the past decade, polarization has increased. We find no consistent trend among the topics we study. Perhaps the closest work to this paper is the work by \citet{smith2013role}, who study the role of social media in the discussion of controversial topics. They try to understand how positions on controversial issues are communicated via social media, mostly by looking at user-level features such as retweet and reply rates, URL sharing behavior, etc. They find that users spread information faster if it agrees with their position, and that Twitter debates may not play a big role in deciding the outcome of a controversial issue. However, there are differences with our work: (i) they study one local topic (California ballot), over a short period of time, while we study a wide range of popular topics, spanning multiple years; and (ii) their analysis is mostly user centric, whereas we take a global viewpoint, constructing and analyzing networks of user interaction. \spara{The effect of external events on social networks.} A few studies have examined the effects of events on social networks. \citet{romero2016social} study the behavior of a hedge-fund company via the communication network of their instant messaging systems. They find that in response to external shocks, i.e., when stock prices change significantly, the network ``turtles up,'' strong ties become more important, and the clustering coefficient increases. In our case, we examine both a communication network and an endorsement network, and we focus on controversial issues. Given the different setting, many of our findings are quite different.
Other works, such as the ones by~\citet{Lehmann2012} and~\citet{Wu2014}, examine how collective attention focuses on individual topics or items and evolves over time. \citet{Lehmann2012} examine spikes in the frequency of hashtags and whether most frequency volume appears before or after the spike. They find that the observed patterns point to a classification of hashtags that agrees with whether the hashtags correspond to topics that are endogenously or exogenously driven. \citet{Wu2014}, on the other hand, examine items posted on digg.com and how their popularity decreases over time. \citet{morales2015measuring} study polarization over time for a single event, the death of Hugo Chavez. Our analysis has a broader scope, as we establish common trends across several topics, and find strong signals linking the volume of interest to the degree of polarization in the discussion. \citet{andris2015rise} study the partisanship of the U.S.\ congress over a long period of time. They find that partisanship (or non-cooperation) in the U.S.\ congress has been increasing dramatically for over 60 years. Our study suggests that increased controversy is linked to an increase in attention on a topic, whereas we do not see a global trend over time. \section{Dataset} \label{sec:dataset} Our study uses data collected from Twitter. Using the repositories of the Internet Archive,\footnote{\small\url{https://archive.org/details/twitterstream}} we collect a $1\%$ sample of tweets from September 2011 to August 2016,\footnote{\small{To be precise, we have data for $57$ months from that period.}} for four topics of discussion, related to `Obamacare', `Abortion', `Gun Control', and `Fracking'. These topics constitute long-standing controversial issues in the U.S.\footnote{\small{According to \small\url{http://2016election.procon.org}.}} and have been used in previous work~\cite{lu2015biaswatch}.
For each topic, we use a keyword list as proposed by~\citet{lu2015biaswatch} (shown in Table~\ref{tab:keywords}), and extract a base set of tweets which contain at least one topic-related keyword. To enrich this original dataset, we use the Twitter REST API to obtain all tweets of users who have participated in the discussion at least once.\footnote{\small Up to \num{3200} due to limits imposed by the Twitter API.} Admittedly, this dataset might suffer from sampling bias; however, the topics are specific enough that the distortion should be negligible~\citep{morstatter2013sample}. There might also be recency bias due to the addition of the latest tweets of the users. However, the data does not show any clear trend in this sense (see Figure~\ref{fig:timeline}). In addition, given that we rely on detecting volume peaks, the trend does not affect our analysis. Table~\ref{tab:keywords} shows the final statistics for the dataset. \begin{table}[t] \centering \small \caption{Keywords for the controversial topics.} \label{tab:keywords} \begin{tabular}{l>{\raggedright}p{10em}rr} \toprule Topic & Keywords & \#Tweets & \#Users \\ \midrule Obamacare & obamacare, \#aca & \num{866484} & \num{148571} \smallskip\\ Abortion & abortion, prolife, prochoice, anti-abortion, pro-abortion, planned parenthood & \num{1571363} & \num{327702} \smallskip\\ Gun Control & gun control, gun right, pro gun, anti gun, gun free, gun law, gun safety, gun violence & \num{824364} & \num{224270} \smallskip\\ Fracking & fracking, \#frack, hydraulic fracturing, shale, horizontal drilling & \num{2117945} & \num{170835} \\ \bottomrule \end{tabular} \end{table} We infer two types of interaction network from the dataset: ($i$) a retweet network --- a directed endorsement network of users, where there is an edge between two users ($u\,{\rightarrow}\,v$) if $u$ retweets $v$, and ($ii$) a reply network --- a directed communication network of users, where an edge ($u\,{\rightarrow}\,v$) indicates that user $u$ has
replied to a tweet by user~$v$. Note that replies are characterized by a tweet starting with \texttt{`@username'} and do not include mentions and retweets.\footnote{\small See also \url{https://support.twitter.com/articles/14023} for terminology related to different types of Twitter messages.} Polarized networks, especially the ones considered here, can be broadly characterized by two opposing \emph{sides}, which express different opinions on the topic at hand. It is commonly understood that retweets indicate endorsement, and endorsement networks for controversial topics have been shown to have a bi-clustered structure~\cite{conover2011political,garimella2016quantifying}, i.e., they consist of two well-separated clusters that correspond to the opposing points of view on the topic. Conversely, replies can indicate discussion, and several studies have reported that users tend to use replies to talk across the sides of a controversy~\cite{bessi2014social,liu2014twitter}. These two types of network capture different dynamics of activity, and allow us to tease apart the processes that generate these interactions. In this paper, we build upon the observation that the clustering structure of retweet networks reveals the opposing sides of a topic. In particular, following an approach from previous work~\citep{garimella2016quantifying}, we collapse all retweets contained in the dataset of each topic into a single large static retweet network. Then, we use the METIS clustering algorithm~\cite{karypis1995metis} to identify two clusters that correspond to the two opposing sides. This process allows us to identify more consistent sides for the topic. We evaluate the sides by manual inspection of the top retweeted users, URLs, and hashtags. The results are consistent and accurate, and can be inspected online.\footnote{\small\url{https://mmathioudakis.github.io/polarization/}\label{footnote:website}} Let us now consider the temporal dynamics of these interaction networks. 
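Before moving on, the side-identification step above can be illustrated with a toy stand-in. The paper runs METIS on the collapsed retweet graph; the sketch below substitutes a naive local-search bisection (node names and edges are invented) that keeps the split balanced and swaps node pairs across it while the cut size shrinks:

```python
# Toy stand-in for the METIS partitioning step: bisect a graph into two
# balanced sides by greedily swapping node pairs while the cut decreases.

def cut_size(edges, side):
    """Number of edges whose endpoints fall on different sides."""
    return sum(1 for u, v in edges if side[u] != side[v])

def greedy_bisect(nodes, edges):
    side = {u: i % 2 for i, u in enumerate(nodes)}  # arbitrary balanced split
    while True:
        base, best = cut_size(edges, side), None
        for u in [n for n in nodes if side[n] == 0]:
            for v in [n for n in nodes if side[n] == 1]:
                side[u], side[v] = 1, 0          # try swapping u and v
                cut = cut_size(edges, side)
                side[u], side[v] = 0, 1          # undo the trial swap
                if cut < base and (best is None or cut < best[0]):
                    best = (cut, u, v)
        if best is None:                          # local optimum reached
            return side
        _, u, v = best
        side[u], side[v] = 1, 0                   # commit the best swap

# two tightly knit "sides" joined by a single bridge edge (c, d)
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
side = greedy_bisect(list("abcdef"), edges)      # recovers the two cliques
```

On this toy graph the local search recovers the two well-separated clusters, which is the structure the clustering step exploits in the real retweet networks; METIS achieves the same goal far more efficiently at scale.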
Given the traditional daily news reporting cycle, we build the time series of networks with the same daily granularity. This high resolution allows us to easily discern the level of interest in the topic, and possibly identify spikes of interest linked to real world external events, as shown in Figure~\ref{fig:timeline}. These spikes usually correspond to external newsworthy events, as shown by the annotations. These results support the observation that Twitter is used as an \emph{agor\'{a}} to discuss the daily matters of public interest~\citep{deFrancisciMorales2012trex}. As shown in Figure~\ref{fig:timeline}, the size of the active network for each day varies significantly. There is, however, a \emph{hard core} set of active users who are involved in the discussion of these controversial topics most of the time. Therefore, to understand the role of these more engaged users, we define the `core network' as the one induced by users who are active for more than $\sfrac{3}{4}$ of the observation time. Specifically, to build a \emph{core} set of users, we first identify two subsets --- one consisting of those users who generated or received a retweet at least once per month for $45$ months; and another one defined similarly for replies. We define the core set of users as the union of the aforementioned two sets. Nodes of a network that do not belong to the core are said to belong to the \emph{periphery} of that network. The size of the core ranges from around \num{600} to \num{2800} nodes for the four topics. For any given day, the core accounts for at most around $10\%$ of the active users. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{obamacare_volume} \includegraphics[width=\textwidth]{abortion_volume} \includegraphics[width=\textwidth]{guncontrol_volume} \includegraphics[width=\textwidth]{fracking_volume} \caption{Daily trends for number of active users for the four controversial topics under study. 
Clear spikes occur at several points in the timeline. Manually chosen labels describing related events reported in the news on the same day are shown in blue for some of the spikes.} \label{fig:timeline} \end{figure*} \subsection{Notation} The set of retweets that occur within a single day $d$ gives rise to one retweet network $N^\mathit{rt}_d$. Each user associated with a retweet is represented with one node in the network. There is a directed edge from user $u$ to user $v$ only when user $u$ has retweeted at least one tweet authored by user $v$. Correspondingly, the set of replies that occur within a single day gives rise to a reply network $N^\mathit{re}_d$. In addition, each node $u$ in the network is associated with a binary attribute $c\xspace(u) \in\{\text{\tt true}, \text{\tt false}\}$ that indicates whether the node is part of the core, and an attribute $s\xspace(u) \in \{1, 2\}$ that represents the side the node belongs to. In some cases, we consider undirected versions of the networks defined above. In such cases, we write $G^\mathit{rt}_d$, $G^\mathit{re}_d$ to denote the undirected graphs corresponding to $N^\mathit{rt}_d$, $N^\mathit{re}_d$, respectively. Besides these two types of network, for each day we consider the set of tweets that were generated on that day. Every tweet $m$ is associated with an attribute $s\xspace(m) \in \{1, 2\}$ that indicates the side its author belongs to. Moreover, every tweet $m$ is associated with the list of words $w\xspace(m)$ that occur in its text. This information gives rise to two unigram distributions $W^1_d$ and $W^2_d$, one for each side. Each distribution expresses the number of times each word appears in the tweets of nodes from each side. \section{Measures} \label{sec:measures} For each day $d$, we employ a set of measures on the associated networks $N^\mathit{rt}_d$, $N^\mathit{re}_d$, and unigram distributions $W^1_d$ and $W^2_d$. We describe them below.
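First, the core attribute $c\xspace(u)$ introduced above can be made concrete with a small sketch. As described in the Dataset section, a user belongs to the retweet-based (or reply-based) core set when they are active in at least a threshold number of months, and the overall core is the union of the two sets; the activity records and names below are made-up toy data:

```python
# Sketch of the core-user bookkeeping: a user joins the core when they
# are active in at least `threshold` distinct months in either the
# retweet or the reply activity stream (the paper uses 45 of 57 months).
from collections import defaultdict

def active_months(records):
    """records: iterable of (user, month) activity pairs."""
    months = defaultdict(set)
    for user, month in records:
        months[user].add(month)
    return months

def core_users(retweet_records, reply_records, threshold=45):
    core = set()
    for records in (retweet_records, reply_records):
        for user, months in active_months(records).items():
            if len(months) >= threshold:
                core.add(user)
    return core

# toy demo with a lowered threshold: only alice reaches three active months
core = core_users([("alice", m) for m in (1, 2, 3)], [("bob", 1)], threshold=3)
```

All remaining nodes fall into the periphery, matching the core/periphery split used by the measures that follow.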
\spara{Polarization.} We quantify the polarization of a network $N_d$ by using the random-walk controversy (\ensuremath{\text{\sc RWC}}\xspace) score introduced in previous work~\cite{garimella2016quantifying}. Intuitively, the score captures whether the network consists of two well-separated clusters. \spara{Clustering coefficient.} In an undirected graph, the clustering coefficient $\mathit{cc}(u)$ of a node $u$ is defined as the fraction of closed triangles in its immediate neighborhood. Specifically, let $k$ be the degree of node $u$, and $T$ be the number of closed triangles involving $u$ and two of its neighbors; then \[ \mathit{cc}(u) = \frac{2T}{k (k - 1)}. \] In our case, we consider the undirected graph $G_d$ and compute the average clustering coefficient of all nodes that belong to each \emph{side} -- then take the mean of the two averages as the clustering coefficient of the network. In order to control for scale effects, i.e., correlation between the size of the network (as determined by the volume of users active on day $d$) and the clustering coefficient, we employ a normalizer for the score. In more detail, we use an Erd\H{o}s-R\'{e}nyi graph as null model (with edges drawn at random among pairs of nodes), and normalize the score by the expected value for a null-model graph of the same size. Unless otherwise specified, we apply the same type of normalization for all the measures defined below. \spara{Tie strength.} For each node $u$ in a graph $G_d$, we consider all nodes $v$ it is connected to across all days, and order them decreasingly by the number of occurrences $|\{ d : (u, v) \in G_{d} \}|$. That is, the node $v$ at the top of the list for $u$ is the node to which $u$ connects most consistently throughout the time span of the dataset. Then, we define the \emph{strong ties} of a node $u$ as the top $10\%$ of the nodes ordered as described above.
For a given day $d$, we define the tie strength of a node as the number of strong ties it is connected to in the corresponding graph $G_d$. The tie-strength measure for the day is defined as the average tie strength for all nodes on either side. As described for the previous measure above, we normalize the reported measure by the expected value for a random graph with the same number of nodes and edges. \spara{Cross--side openness.} This measure reports the number of edges that connect nodes from opposing sides, and captures the inter-side interaction happening in the network on a given day. Formally, it is defined as \[ \mathit{CSO} = | \{ (u,v) \in G_d : s\xspace(u) \neq s\xspace(v) \} |. \] We apply the same normalization as described above. \spara{Sides edge composition.} For a given network, we distinguish two types of edges: \emph{within-sides}, where both adjacent nodes belong to the same side, and \emph{across-sides}, where the adjacent nodes belong to different sides. For each day and network, we track the fraction of the two types of edges. \spara{Core--periphery openness.} This measure is defined as the number of edges that connect a node from the core to the periphery. It captures the amount of interaction between the hard core users and the casual ones. Formally, \[ \mathit{CPO} = | \{ (u,v) \in G_d : c\xspace(u) \wedge \neg c\xspace(v) \} |. \] \spara{Bimotif.} For a network $N_d$, we define the bimotif measure as the number of directed edges $(u, v) \in N_d$ for which the opposite edge $(v, u)$ also appears in the network \[ \mathit{Bimotif} = | \{ (u,v) \in N_d : (v,u) \in N_d \} |. \] This measure captures the mutual interactions happening within the network. It is also known as `reciprocity' in the literature. \spara{Core Density.} This measure captures the number of edges that connect exclusively members of the core \[ \mathit{CoreDens} = | \{ (u,v) \in N_d : c\xspace(u) \wedge c\xspace(v) \} |. 
\] \spara{Core--periphery edge composition.} For a given network, we distinguish three types of edges: \emph{core--core}, where both adjacent nodes belong to the core we have identified, \emph{core--periphery}, where one node belongs to the core and one to the periphery, and \emph{periphery--periphery}, where both nodes belong to the periphery. For each day and network, we track the fraction of each type of edges. \spara{Cross--side content divergence.} This measure captures the difference between the word distributions $W^1_d$ and $W^2_d$, and is based on the Jensen-Shannon divergence~\cite{lin1991divergence}. Since the empirical word distributions contain many zero entries, we smooth them by adding Laplace counts $\beta = 10^{-5}$ before computing the divergence. The traffic volume on a given day can increase the vocabulary size, and thus induce an unwanted bias in the measure. In order to counter this bias, we employ a sampling procedure similar to bootstrapping from the two distributions. For each smoothed distribution $W^1_d$ and $W^2_d$, we sample with replacement $k = \num{10000}$ words at random, and compute the Jensen-Shannon divergence of these equal-sized samples. We repeat the process $100$ times and report the average sample Jensen-Shannon divergence as the `cross-side content divergence' for day $d$. Intuitively, the higher its value, the more different the word distributions across the two sides. \spara{Within-side entropy.} This measure captures how `concentrated' each of the two distributions $W^1_d$ and $W^2_d$ is. For each side, we compute the entropy for each distribution. The higher its value, the more widely spread is the corresponding distribution. We use the same bootstrap sampling method described above to avoid bias due to activity volume.
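The smoothing-plus-bootstrap procedure shared by the last two measures can be sketched as follows. The word counts are toy data, and the sample size and number of repetitions are scaled down from the paper's $\beta = 10^{-5}$, $k = \num{10000}$, and $100$ repetitions:

```python
# Sketch of the cross-side content divergence: Laplace-smoothed word
# distributions, bootstrap samples of fixed size, and the mean
# Jensen-Shannon divergence (base-2 logs, so values lie in [0, 1]).
import math, random

def smooth(counts, vocab, beta=1e-5):
    total = sum(counts.get(w, 0) + beta for w in vocab)
    return {w: (counts.get(w, 0) + beta) / total for w in vocab}

def js_divergence(p, q):
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in a if a[w] > 0)
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def bootstrap_jsd(c1, c2, k=10000, reps=100, seed=0):
    rng = random.Random(seed)
    vocab = sorted(set(c1) | set(c2))
    p, q = smooth(c1, vocab), smooth(c2, vocab)
    scores = []
    for _ in range(reps):
        # equal-sized samples with replacement from each smoothed side
        s1 = rng.choices(vocab, weights=[p[w] for w in vocab], k=k)
        s2 = rng.choices(vocab, weights=[q[w] for w in vocab], k=k)
        e1 = smooth({w: s1.count(w) for w in vocab}, vocab)
        e2 = smooth({w: s2.count(w) for w in vocab}, vocab)
        scores.append(js_divergence(e1, e2))
    return sum(scores) / len(scores)
```

Identical word distributions yield a score near zero, while disjoint vocabularies push it towards one; the fixed sample size is what keeps the score comparable across days with very different traffic volumes.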
\spara{Topic variance.} This measure captures, to some extent, \emph{what} is being talked about on the two sides of the discussion. We extract a large number of topics by using Latent Dirichlet Allocation ($k=100$) on the complete tweet corpus. We then compute the distribution of topics in each bucket. This distribution gives an estimate of which of the 100 topics are being talked about in the bucket. We report the variance of this distribution. If the distribution is focused on a small number of topics, the variance is high. Conversely, a low variance indicates a uniform distribution of topics. \spara{Sentiment variance.} This measure captures the variance of sentiment valence (positive versus negative) in all the tweets of one day $d$~\cite{garimella2016quantifying}. \spara{Psychometric analysis.} To understand if there are behavioral changes in the content generated and shared by users as activity increases, we use the Linguistic Inquiry and Word Count (LIWC) dictionary,\footnote{\small\url{http://liwc.net}} which identifies emotions in words~\cite{kramer2014experimental}. We measure the fraction of tweets containing the LIWC categories: anger, sadness, posemo, negemo, and anxiety. \subsection{Analysis} We explore how the aforementioned measures vary with the number of active users in the networks, which is a proxy for the amount of collective attention the topic attracts. We sort the time series of networks by volume of active users, and partition it into ten quantiles (each having an equal number of days), so that days of bucket $i$ are associated with smaller volume than those of bucket $j$, for $i < j$. For each bucket, we report the mean and standard deviation of the values for each measure, and observe the trend from lower to higher volume. Note that the measures presented in this section are carefully defined so that their expected value does not depend on the volume of underlying activity (i.e., number of network nodes and edges or vocabulary size).
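The bucketing scheme just described can be sketched in a few lines; the per-day records below are invented toy values, and the number of buckets is a parameter (the paper uses ten):

```python
# Sketch of the quantile analysis: sort days by volume of active users,
# split them into equal-sized buckets, and report per-bucket mean and
# standard deviation of any measure of interest.
import statistics

def volume_buckets(days, n_buckets=10):
    """days: list of (volume, measure) pairs, one per day."""
    ordered = sorted(days, key=lambda d: d[0])
    size = len(ordered) // n_buckets
    return [ordered[i * size:(i + 1) * size] for i in range(n_buckets)]

def bucket_stats(days, n_buckets=10):
    stats = []
    for bucket in volume_buckets(days, n_buckets):
        values = [m for _, m in bucket]
        stats.append((statistics.mean(values), statistics.pstdev(values)))
    return stats

# toy demo: the measure rises from the low- to the high-volume bucket
daily = [(120, 0.30), (80, 0.25), (300, 0.42), (60, 0.22), (250, 0.40), (40, 0.20)]
print(bucket_stats(daily, n_buckets=3))
```

A monotone trend in the per-bucket means across the buckets is exactly the kind of signal reported in the Findings section.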
\section{Findings} \label{sec:findings} In what follows, we report our findings on the measures defined in Section~\ref{sec:measures} --- starting from the ones related to the retweet and reply networks (Section~\ref{sec:network_findings}), then proceeding to the ones related to content (Section~\ref{sec:content_findings}) and network cores (Section~\ref{sec:core_findings}). We provide additional analysis for the periods around the spikes in interest (Section~\ref{sec:local_findings}), as well as for the evolution of measures over time (Section~\ref{sec:time_findings}). \subsection{Network} \label{sec:network_findings} We observe a significant correlation between RWC score and interest in the topic. Figure~\ref{fig:rwc} shows the RWC score as a function of the quantiles of the network by retweet volume (as explained in the previous section). There is a clear increasing trend, which is consistent across topics. This trend suggests that increased interest in the topic is correlated with an increase in controversy of the debate, and increased polarization of the retweet networks for the two sides. Conversely, reply networks are sparser and more disconnected; thus, the RWC score is not meaningful in this case (not shown due to space constraints). This difference is expected, and was already observed in the work that introduced RWC~\citep{garimella2016quantifying}. A similar result can be observed for the clustering coefficient, as shown in Figure~\ref{fig:cc-rt}. As the interest in the topic increases, the two sides tend to \emph{turtle up}, and form a more close-knit retweet network. This result suggests that the \emph{echo chamber} phenomenon gets stronger when the discussion flares up. Our finding is also consistent with results by~\citet{romero2016social}. As for the previous measure, the clustering coefficient does not show a significant pattern for the reply networks.
Replies are often linked to dyadic interactions, while the clustering coefficient measures triadic ones, so we expect such a difference between the two types of network. In line with the above results, tie strength is correlated with retweet volume, as indicated by Figure~\ref{fig:ts-rt}. When the discussion intensifies, users tend to endorse the opinions of their closest friends, or their trusted sources of information. Again, this observation indicates a closing up of both sides when the debate gets heated. Interestingly, a similar trend is present for the reply network, as shown in Figure~\ref{fig:ts-re}. Unlike previous work, we find an increase in communication of users with their strong ties, rather than with weak ties or users of the opposing side. We also observe an increase in back-and-forth communication, indicating a dialogue between users of the same side. Figure~\ref{fig:bi-re} shows an increase in bimotifs in the reply network when the discussion intensifies. This measure is inconclusive for the retweet network, for the reasons mentioned above. Finally, when calculating the fractions of \emph{within-side edges} and \emph{across-side edges} for the \emph{across-side edge composition} measure, we find that reply networks typically contain higher proportions of across-side activity compared to retweet networks, consistent with earlier work. In fact, for retweet networks, almost all edges are classified as \emph{within-side edges}. Interestingly, we also find that these proportions do not change significantly as the volume increases. The same is true for the \emph{cross-side openness} measure (not shown).
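Two of the structural measures used above, the average clustering coefficient and the bimotif (reciprocated-edge) fraction, can be sketched on toy graphs as follows; these are simplified stand-ins for the paper's exact definitions.

```python
from collections import defaultdict

def clustering_coefficient(edges):
    """Average local clustering coefficient of an undirected graph."""
    nbrs = defaultdict(set)
    for u, v in edges:
        if u != v:
            nbrs[u].add(v)
            nbrs[v].add(u)
    coeffs = []
    for u, ns in nbrs.items():
        k = len(ns)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count the edges among the neighbours of u
        links = sum(1 for v in ns for w in ns if v < w and w in nbrs[v])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs) if coeffs else 0.0

def bimotif_fraction(directed_edges):
    """Fraction of directed (reply) edges that are reciprocated."""
    es = set(directed_edges)
    return sum((v, u) in es for u, v in es) / len(es)

triangle = [("a", "b"), ("b", "c"), ("a", "c")]   # fully clustered
replies = [("a", "b"), ("b", "a"), ("a", "c")]    # one back-and-forth pair
```

In the paper's setting, each measure would be computed separately on the two sides of the daily retweet or reply networks.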
\begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_retweets_rwc.pdf}} \end{minipage}% \caption{RWC score as a function of the activity in the retweet network. An increase in interest in the controversial topic corresponds to an increase in the controversy score of the retweet network.} \label{fig:rwc} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_retweets_clust_coeff.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_retweets_clust_coeff.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_retweets_clust_coeff.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_retweets_clust_coeff.pdf}} \end{minipage}% \caption{Average clustering coefficient as a function of the activity in the retweet network.
Spikes in interest correspond to an increase in the clustering coefficient on both sides of the discussion, which indicates the retweet networks tend to close up.} \label{fig:cc-rt} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_retweets_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_retweets_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_retweets_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_retweets_tie_str.pdf}} \end{minipage}% \caption{Tie strength as a function of the activity in the retweet network. Spikes in activity correspond to more interaction with stronger ties, which indicates a closing up of the retweet network.} \label{fig:ts-rt} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_replies_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_replies_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_replies_tie_str.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_replies_tie_str.pdf}} \end{minipage}% \caption{Tie strength as a function of the activity in the reply network. 
Users tend to communicate proportionally more with closer ties when interest spikes, which reveals a further closing up of the network.} \label{fig:ts-re} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_replies_bimotif.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_replies_bimotif.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_replies_bimotif.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_replies_bimotif.pdf}} \end{minipage}% \caption{Bimotifs as a function of the activity in the reply network. Users tend to reciprocate the communication more as the discussion intensifies.} \label{fig:bi-re} \end{figure} \subsection{Content} \label{sec:content_findings} Let us now switch our attention to the content measures. Recall that for these measures we do not distinguish between retweet and reply networks, but only between the two sides of the discussion. The main observation is that the Jensen-Shannon divergence between the two sides decreases, as shown by Figure~\ref{fig:js}. This decrease indicates that the lexicon of the two sides tends to converge. The cause of this phenomenon might be the participation of casual users in the discussions, who contribute a more general lexicon. Alternatively, the cause might lie in the event that sparks the discussion, which leads the whole network to adopt a similar lexicon to speak about it, i.e., there is an event-based convergence. To further examine the cause of this lexical convergence, we report the entropy of the unigram distribution.
Figure~\ref{fig:entropy1} shows that the entropy for one of the sides increases as interest increases (results for the other side show similar trends). Thus, we find that the lexicon is more uniform and less skewed, which supports the hypothesis that a larger group of users brings a more general lexicon to the discussion, rather than the alternative hypothesis of event-based convergence. To investigate \emph{what} causes the lexicon to become more general, we compute the variance of the topic distribution for each bucket. As we see from Figure~\ref{fig:topic_variance}, the variance decreases with increased activity, meaning that the topic distribution becomes more uniform\footnote{\small The term `fracking' is also sometimes used as an expletive, which might explain why the effects we measure are not as pronounced for this topic as the other ones. See, e.g., \url{https://twitter.com/KitKat0122/status/19820978435522561}}. This result provides evidence that users do indeed discuss a wider range of topics when there is a spike in activity. Finally, we also examine how the sentiment and other linguistic cues change with interest. We measure the variance in sentiment and the fraction of tweets containing various LIWC categories, such as anger, sadness, positive and negative emotion, and anxiety. Previous work shows that sentiment variance is a measure able to separate controversial from non-controversial topics~\citep{garimella2016quantifying}, and that linguistic patterns of communication change during shocks~\citep{romero2016social}. However, we do not see any consistent trend. We hypothesize that this might be due to the noise in language (slang, sarcasm, short text, etc.) on social media.
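The two content measures used here, the unigram entropy and the Jensen-Shannon divergence between the two sides' lexicons, can be sketched as follows; the word counts are toy data, not from the actual corpus.

```python
import math

def normalize(counts):
    """Turn raw word counts into a unigram probability distribution."""
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def entropy(dist):
    """Shannon entropy (base 2) of a unigram distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two unigram distributions (base 2, in [0, 1])."""
    words = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in words}
    def kl(a, b):
        return sum(pa * math.log2(pa / b[w]) for w, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

side1 = normalize({"tax": 4, "care": 3, "bill": 3})
side2 = normalize({"tax": 2, "repeal": 5, "bill": 3})
```

A decreasing `js_divergence` across volume buckets, together with an increasing `entropy`, corresponds to the converging and widening lexicon reported above.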
\begin{figure}[t] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_none_js_div.pdf}} \end{minipage}% \caption{Jensen-Shannon divergence of the lexicon between the two sides as a function of network activity. As the interest in the topic rises, the lexicon used by the two sides tends to converge.} \label{fig:js} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_none_entropy.pdf}} \end{minipage}% \caption{Entropy of the distribution over the lexicon for one side of the discussion as a function of the activity in the network (the other side shows similar patterns). 
As the interest increases, the entropy increases, thus indicating the use of a wider lexicon.} \label{fig:entropy1} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_none_topic_variance.pdf}} \end{minipage}% \caption{Variance of the topic distribution. As the interest increases, variance decreases, indicating that a wider range of topics are being discussed.} \label{fig:topic_variance} \end{figure} \subsection{Core} \label{sec:core_findings} Looking at the fractions of the different types of edges (core--core, core--periphery, and periphery--periphery) across the volume buckets in Figures~\ref{fig:retweet_triplets} and~\ref{fig:reply_triplets}, we see that the composition of edges does not change significantly with increased collective attention. This result suggests that the discussion grows in a self-similar way. A disproportionately large fraction of edges links the periphery to the core, when taking into account the core size, as seen in Figure~\ref{fig:retweet_triplets}. During a spike in interest, most casual users, who seldom participate in the discussion, endorse opinions from the core of the side they belong to (red bars). For replies, we see a similar trend with respect to activity volume in Figure~\ref{fig:reply_triplets}. In general, the core is less prevalent in the discussion, as shown by the lower fraction of core-periphery edges (green bars).
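A minimal sketch of the edge-composition and core--periphery openness computations follows. The core set and the graph are illustrative, and the null model here is a simple uniform-random edge placement, which may differ in detail from the one used in the paper.

```python
def edge_composition(edges, core):
    """Fractions of core-core, core-periphery, and periphery-periphery edges."""
    counts = {"core-core": 0, "core-periphery": 0, "periphery-periphery": 0}
    for u, v in edges:
        in_core = (u in core) + (v in core)  # 0, 1, or 2 endpoints in the core
        key = ["periphery-periphery", "core-periphery", "core-core"][in_core]
        counts[key] += 1
    total = len(edges)
    return {k: c / total for k, c in counts.items()}

def cp_openness(edges, core, n_nodes):
    """Observed core-periphery edges over the number expected at random."""
    n_core = len(core)
    n_per = n_nodes - n_core
    observed = sum((u in core) != (v in core) for u, v in edges)
    pairs_cp = n_core * n_per                       # core-periphery node pairs
    pairs_all = n_nodes * (n_nodes - 1) // 2        # all node pairs
    expected = len(edges) * pairs_cp / pairs_all    # uniform-random placement
    return observed / expected if expected else 0.0

edges = [(1, 2), (1, 3), (1, 4), (2, 5), (3, 4)]
core = {1, 2}
```

Values of `cp_openness` above 1 indicate more core--periphery edges than the null model predicts, i.e., the propensity of periphery nodes to connect with the core discussed below.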
However, when looking at the \emph{core--periphery openness} (Figures~\ref{fig:cpo-rt} and~\ref{fig:cpo-re}), we see that the \emph{normalized} number of edges between core and periphery increases, i.e., the number of edges between core and periphery increases compared to the expected number based on a random-graph null model. To interpret this result, note that when the network grows, given that the periphery is much larger than the core, most edges for the null model are among periphery nodes. Therefore, the interaction networks show a clear hierarchical structure when growing. \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_core_retweets_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_core_retweets_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_core_retweets_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_core_retweets_edge_comp.pdf}} \end{minipage}% \caption{Edge composition as a function of network activity in the retweet network. 
As the interest increases, there are no major changes in the fractions of core-core (blue), core-periphery (green), and periphery-periphery (red) edges.} \label{fig:retweet_triplets} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_core_replies_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_core_replies_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_core_replies_edge_comp.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_core_replies_edge_comp.pdf}} \end{minipage}% \caption{Edge composition as a function of network activity in the reply network. As the interest increases, there are no major changes in the fractions of core-core (blue), core-periphery (green), and periphery-periphery (red) edges.} \label{fig:reply_triplets} \end{figure} \afterpage{ \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_core_retweets_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_core_retweets_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_core_retweets_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_core_retweets_cpo.pdf}} \end{minipage}% \caption{Core--periphery openness as a function of activity in the retweet network. As the interest increases, the number of core-periphery edges, normalized by the expected number of edges in a random network, increases. 
This suggests a propensity of periphery nodes to connect with the core nodes when interest increases.} \label{fig:cpo-rt} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{obamacare_core_replies_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{abortion_core_replies_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{guncontrol_core_replies_cpo.pdf}} \end{minipage}% \begin{minipage}{.25\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{fracking_core_replies_cpo.pdf}} \end{minipage}% \caption{Core-periphery openness as a function of activity in the reply network. As the interest increases, the number of core-periphery edges, normalized by the expected number of edges in a random network, increases for most topics. This suggests a propensity of periphery nodes to connect with the core nodes when interest increases.} \label{fig:cpo-re} \end{figure} } \subsection{Local analysis} \label{sec:local_findings} So far, we have analyzed global trends across the time series. We now focus on local trends, to drill down on what happens around the spikes, and look at local variations of the measures just before and after the spike. We mark a day in the time series as a spike if the volume of active users is at least two standard deviations above the mean. Table~\ref{tab:time} shows the Pearson correlation between various measures and network activity, one week before and after the spike. The trends observed globally still hold. There is a positive correlation of RWC with activity, which adds more evidence to our finding that polarization increases during spikes. The trends for bimotif, tie strength, and content divergence also persist, and are much stronger locally. 
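The spike-marking rule (volume at least two standard deviations above the mean) and the local correlation with volume can be sketched as follows, on synthetic data; the toy measure is constructed to track volume exactly, so its local Pearson correlation is 1.

```python
import statistics

def spike_days(volumes):
    """Days whose volume of active users is at least two std devs above the mean."""
    mu = statistics.mean(volumes)
    sd = statistics.pstdev(volumes)
    return [d for d, v in enumerate(volumes) if v >= mu + 2 * sd]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

volumes = [100, 110, 95, 105, 100, 500, 120, 100, 98, 102]
spikes = spike_days(volumes)

# Correlate a measure with volume in a +-7-day window around the (single) spike.
d = spikes[0]
window = range(max(0, d - 7), min(len(volumes), d + 8))
measure = [v * 0.01 + 1 for v in volumes]  # toy measure that tracks volume
r = pearson([volumes[i] for i in window], [measure[i] for i in window])
```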
In addition to the previous measures, we also analyze other content features, such as the fraction of retweets, replies, mentions, and URLs around the spike. Interestingly, we find strong positive correlation of retweets, mentions, and URLs with volume, which indicates that discussion and endorsement increase during a spike. This finding is consistent with the ones by~\citet{smith2013role}, who find that users tend to add URLs to their tweets when discussing controversial topics. Note that these additional content measures are only indicative for the local analysis, and do not produce consistent results at the global level. \begin{table}[tb] \centering \small \caption{Pearson correlation of various measures with volume one week before, during and after a spike in interest. All values except those marked with an asterisk (*) are significant at $p < 0.05$.} \label{tab:time} \begin{tabular}{lcccc} \toprule Measure & Obamacare & Gun Control & Abortion & Fracking \\ \midrule RWC & 0.20 & 0.21 & 0.19 & 0.23 \\ Openness & -0.09* & 0.81 & 0.23 & 0.08 \\ Bimotif & 0.27 & 0.36 & 0.33 & 0.23 \\ Tie Strength & 0.96 & 0.98 & 0.95 & 0.86 \\ JSD & -0.66 & -0.86 & -0.63 & -0.46 \\ Entropy & 0.42 & 0.46 & 0.67 & 0.26 \\ Frac. RT & 0.15* & 0.6 & 0.59 & 0.56 \\ Frac. Men. & 0.20 & 0.71 & 0.54 & 0.51 \\ Frac. URL & 0.32 & 0.36 & 0.39 & 0.40 \\ \bottomrule \end{tabular} \end{table} \subsection{Evolution over time} \label{sec:time_findings} \begin{figure} \centering \includegraphics[width=\linewidth]{time_rwcV2} \caption{Long-term trends of RWC (controversy) score in our dataset. No consistent trend can be observed, which contradicts the narrative that social media is making our society more divided.} \label{fig:time-trends} \end{figure} Let us now focus on how the measures change throughout time. The longitudinal span of the dataset of five years allows us to track the long-term evolution of discussion on controversial topics. 
A common point of view holds that social media is aggravating the polarization of society and exacerbating the divisions in it~\citep{benkler2006wealth}. At the same time, the political debate (in the U.S.) itself has become more polarized in recent years~\citep{andris2015rise}. However, we do not find conclusive evidence for this argument in our analysis of this dataset. Figure~\ref{fig:time-trends} shows the long-term trends of the RWC measure for the four topics. The trend is downwards for `abortion' and `fracking', while it is upwards for `obamacare' and `gun control'. One could argue that the latter topics are more politically linked to the current administration in the U.S., and for this reason have received increasing attention with the elections approaching. However, the only safe conclusion that can be drawn from this dataset is that there is no clear signal. The figure suggests that social media, and in particular Twitter, are better suited to capturing the `twitch' response of the public to events and news. In addition, while our dataset covers quite a long period for typical social media studies, it is still much shorter than those typically used in social science (coming from, e.g., census, polls, congress votes). This limitation is intrinsic to the tool, given that social media have risen in popularity only relatively recently (e.g., Twitter is 10 years old). \subsection{Non-controversial topics} For comparison, we perform measurements over a set of non-controversial topics, defined by the hashtags {\it \#ff}, standing for `Follow Friday', used every Friday by users to recommend interesting accounts to follow; {\it \#nba} and {\it \#nfl}, used to discuss sports games; {\it \#sxsw}, used to comment on the {\it South-by-South-West} conference; {\it \#tbt}, standing for `Throwback Thursday', used every Thursday by users to share memories (news, pictures, stories) from the past.
We find that several structural measures, namely \emph{clustering coefficient}, \emph{tie strength}, and \emph{bimotif}, behave similarly to the controversial topics, in that they obtain increased values for increased volume of activity. This result is in accordance with the findings of \citet{romero2016social}. Conversely, the values of the \ensuremath{\text{\sc RWC}}\xspace measure typically remain in ranges that indicate low presence of controversy, even as the volume of activity spikes (Figure~\ref{fig:noncontrrwc}). Additionally, with the definition of `core' introduced above, we could only identify a negligibly small core for these topics (i.e., we found very few users who were consistently active on these topics). Finally, in terms of content measures we find that, as for the controversial topics, the entropy of the lexicon increases with volume (Figure~\ref{fig:noncontrentropy}). Topic variance also decreases with volume in most cases, meaning that a wider range of topics are discussed (Figure~\ref{fig:noncontrtopicvariance}). In contrast, the Jensen-Shannon divergence stays at relatively constant values across volume levels (Figure~\ref{fig:noncontrjsdiv}). It thus behaves differently compared to controversial topics (Figure~\ref{fig:js}). This result is to be expected, as the two `sides' identified by METIS on the networks of non-controversial topics are not as well defined as they are in the case of controversial topics.
\begin{figure}[tb] \centering \begin{minipage}{.3\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{ff_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.3\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{tbt_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.3\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{sxsw_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.3\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nfl_retweets_rwc.pdf}} \end{minipage}% \begin{minipage}{.3\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nba_retweets_rwc.pdf}} \end{minipage}% \caption{Non-controversial topics: RWC score as a function of the activity in the retweet network.} \label{fig:noncontrrwc} \end{figure} \begin{figure}[t] \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{ff_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{tbt_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{sxsw_none_js_div.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nfl_none_js_div.pdf}} \end{minipage}% \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nba_none_js_div.pdf}} \end{minipage}% \caption{Non-controversial topics: Jensen-Shannon divergence of the lexicon between the two sides as a function of network activity. 
As the interest in the topic rises, the lexicon used by the two sides tends to converge.} \label{fig:noncontrjsdiv} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{ff_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{tbt_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{sxsw_none_entropy.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nfl_none_entropy.pdf}} \end{minipage}% \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nba_none_entropy.pdf}} \end{minipage}% \caption{Non-controversial topics: Entropy of the distribution over the lexicon for one side of the discussion as a function of the activity in the network (the other side shows similar patterns). } \label{fig:noncontrentropy} \end{figure} \begin{figure}[tb] \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{ff_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{tbt_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{sxsw_none_topic_variance.pdf}} \end{minipage}% \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nfl_none_topic_variance.pdf}} \end{minipage}% \centering \begin{minipage}{.33\linewidth} \centering \subfloat{\label{}\includegraphics[width=\textwidth]{nba_none_topic_variance.pdf}} \end{minipage}% \caption{Non-controversial topics: Variance of the topic distribution. 
As the interest increases, variance decreases, indicating that a wider range of topics are being discussed.} \label{fig:noncontrtopicvariance} \end{figure} \section{Conclusion} \label{sec:conclusions} The evolution of networks is a well-studied phenomenon in social sciences, physics, and computer science. However, the evolution of \emph{interaction} networks has received substantially less attention so far. In particular, interaction networks related to discussions of controversial topics, which are important from a sociological point of view, have not been analyzed before. This study is a first step towards understanding this important social phenomenon. We analyzed four highly controversial topics of discussion on Twitter for a period of five years. By examining the endorsement and communication networks of users involved in these discussions, we found that spikes in interest correspond to an increase in the controversy of the discussion. This result is supported by a wide array of network analysis measures, and is consistent across topics. We also found that interest spikes correspond to a convergence of the lexicon used by the opposite sides of a controversy, and a more uniform lexicon overall. The code and datasets used in the paper are available on the project website.\textsuperscript{\ref{footnote:website}} Implications of this work relate to the understanding of how our society evolves via continuous debates, and how culture wars develop~\citep{abramowitz2005can,highton2011long,lindaman2002issue}. It is often argued that technology, and social media in particular, is having a negative impact on our ability to relate to the unfamiliar~\citep{benkler2006wealth}, due to the ``echo chamber'' and ``filter bubble'' effects. 
However, while we found an instantaneous, temporary increase in controversy in relation to external events, our study did not find evidence of a long-term increase in polarization of the discussions, either after these events or as a general longitudinal trend. At the same time, investigating how to reduce the polarization of these discussions on controversial topics is a research-worthy problem~\citep{garimella2016connecting,garimella2017mary}, and taking into account the dynamics of the process is a promising direction to explore. Our observations pave the way to the development of models of evolution for controversial interaction networks, similarly to how studies about measuring the Web and social media were the stepping stone to developing models for them. A logical next step for this line of work is to investigate how to use early signals from social media network structure and content to predict the impact of an event. Equally of interest is whether the observations made in this study translate to other social media besides Twitter, for instance, Facebook or Reddit. Finally, while we did not find any consistent long-term trend in the polarization of the discussions, it is worth continuing this line of investigation, as the effects of increased polarization might not be easily discoverable from social-media analysis alone. \spara{Acknowledgements.} This work has been supported by the Academy of Finland project ``Nestor'' (286211) and the EC H2020 RIA project ``SoBigData'' (654024). \bibliographystyle{ACM-Reference-Format}
\section{Introduction} This paper is mainly concerned with the following system of reaction-diffusion equations \begin{equation}\label{eqA}\begin{array}{l} \partial_t a_i-\nabla\cdot(D_i\nabla a_i)=Q_i(a),\qquad i\in\{1,2,3,4\},\ t\geq 0,\ x\in\mathbb R^N, \\[.3cm] Q_i(a)=(-1)^{i+1}(a_2a_4-a_1a_3), \end{array}\end{equation} with initial condition \begin{equation}\label{eqAci} a\big|_{t=0}=a^0=(a_1^0,a_2^0,a_3^0,a_4^0). \end{equation} This system arises in chemistry where four species interact according to the reactions \[A_1 + A_3 \rightleftharpoons A_2+A_4,\] the unknowns $(t,x)\mapsto a_i(t,x)$ in \eqref{eqA} being the local mass concentrations of the species labelled by $i\in\{1,2,3,4\}$: $\int_{\mathbb R^N} a_i(t,x)\, {\mathrm{d}} x$ is interpreted as the mass of the constituent $i$ at time $t$. It is thus physically relevant to consider initial data $a_i^0$ which are non-negative integrable functions. The reactants are subjected to space diffusion and the diffusion coefficients depend on the considered species. In full generality, $D_i$ can be a function of the space variable with values in the space of $N\times N$ matrices. Throughout this paper, we restrict to the case of scalar and constant matrices \[D_i(x)=d_i \mathbb I,\qquad \textrm{$d_i>0$ constant}\] with coefficients that satisfy \begin{equation} \label{hypcoef} \textrm{ $ 0<\delta_\star\leq d_i \leq \delta^\star.$} \end{equation} Assuming that the initial data are smooth, say $a_i^0\in C^\infty(\mathbb R^N)$, existence-uniqueness of smooth and non-negative solutions for \eqref{eqA}--\eqref{eqAci} can be justified, at least on a small time interval, by a standard fixed-point argument (see for instance \cite[Proposition~A.2]{GoVa} or \cite[Lemma~1.1]{Pie}). Global existence of weak solutions is established in \cite{DFPV}. We address the question of the boundedness of the solutions, which will imply that solutions are globally defined and remain infinitely smooth \cite[Proposition~A.1]{GoVa}.
The difficulty comes from the fact that we are dealing with different diffusion coefficients. As already noticed in \cite{GoVa}, the question becomes trivial when all the $D_i$'s vanish: in this case, we are concerned with a mere system of ODEs which clearly satisfies a maximum principle. The answer is also immediate when all the diffusion coefficients are equal to the same constant $d_i=\delta_\star$. Indeed, in this situation, the total mass \[ M(t,x)=\displaystyle\sum_{i=1}^4 a_i(t,x)\] satisfies the heat equation $\partial_t M=\delta_\star\Delta M$, which, again, easily leads to a maximum principle. In the general situation, one may wonder whether or not the system has the explosive behavior of non linear heat equations \cite{We}. Counterexamples of systems with polynomial non linearities presented in \cite{PiSc2} show that this question is relevant and non-trivial, see also \cite[Theorem~4.1]{Pie}. We refer the reader to the survey \cite{Pie} for a general presentation of the problem, further references, and many deep comments on the mathematical difficulties raised by such systems. \\ Two properties are crucial for the analysis of the problem. First, system \eqref{eqA} conserves mass \begin{equation}\label{masscons} \displaystyle\frac{\, {\mathrm{d}}}{\, {\mathrm{d}} t}\displaystyle\sum_{i=1}^4 \displaystyle\int_{\mathbb R^N} a_i\, {\mathrm{d}} x=0. \end{equation} Second, it dissipates entropy: \begin{equation}\label{ent_d} \displaystyle\sum_{i=1}^4 Q_i(a)\ln (a_i)=-(a_2a_4-a_1a_3)\ln\Big(\displaystyle\frac{a_2a_4}{a_1a_3}\Big)\leq 0. \end{equation} These properties suggest considering more general systems, involving more reactants and possibly more intricate non linearities.
To be more specific, we extend the discussion to systems that read \begin{equation}\label{eqA2}\begin{array}{l} \partial_t a_i-\nabla\cdot(D_i\nabla a_i)=Q_i(a),\qquad i\in\{1,...,p\},\ t\geq 0,\ x\in\mathbb R^N, \\[.3cm] Q_i:\mathbb R^p\longrightarrow \mathbb R, \end{array}\end{equation} endowed with the initial condition \begin{equation}\label{eqA2ci} a\big|_{t=0}=a^0=(a_1^0,...,a_p^0), \end{equation} where the reaction term fulfils the following conditions \begin{enumerate} \item [h1)] there exists $\mathscr Q>0$ and $q>0$ such that for any $a\in \mathbb R^p$ and $i\in \{1,...,p\}$, we have $|\nabla_a Q_i(a)|\leq \mathscr Q |a|^{q-1}$, \item [h2)] for any $i\in \{1,...,p\}$, if $a_i\leq 0$ then $Q_i(a)\geq 0$, \item [h3)] $\displaystyle\sum_{i=1}^p Q_i(a)=0,$ \item [h4)] $\displaystyle\sum_{i=1}^p Q_i(a)\ln(a_i)\leq 0.$ \end{enumerate} Assumption h1) governs the growth of the non linearity. In what follows, we will be concerned with quadratic and super-quadratic growth: $q\geq2$ (but $q$ is not necessarily assumed to be an integer). Assumption h2) ensures the preservation of non negativity of the solutions, and it is thus physically relevant. Assumptions h3) and h4) imply mass conservation and entropy dissipation, respectively. Note that the entropy dissipation actually provides an estimate on (nonlinear) derivatives of the unknown since it leads to \begin{equation}\label{ent_diss} \displaystyle\frac{\, {\mathrm{d}}}{\, {\mathrm{d}} t}\displaystyle\sum_{i=1}^p \displaystyle\int_{\mathbb R^N} a_i\ln(a_i)\, {\mathrm{d}} x +4\delta_\star\displaystyle\sum_{i=1}^p \displaystyle\int_{\mathbb R^N} |\nabla\sqrt {a_i}|^2\, {\mathrm{d}} x\leq 0.
\end{equation} In view of h3) and h4), it is thus natural to consider initial data such that \begin{equation}\label{hyp_ci}\begin{array}{l} a_i^0:x\in \mathbb R^N\longmapsto a_i^0(x)\geq 0, \\[.3cm] \displaystyle\sup_{i\in\{1,...,p\}} \displaystyle\int_{\mathbb R^N} a_i^0(1+\ln (a_i^0)+ |x|)\, {\mathrm{d}} x=\mathscr M^0<\infty. \end{array} \end{equation} We refer the reader to Proposition~\ref{P1} below for a more precise statement in terms of a priori estimate. It means that the initial concentrations have finite mass and entropy. The moment condition controls the spreading of the mass. However, while \eqref{ent_diss} has a clear physical meaning, it does not provide enough estimates for the analysis of the problem: note that with $u, u\ln (u) \in L^1$ and $\nabla\sqrt u\in L^2$, it is still not clear how the nonlinear term $Q(u)$ can make sense in $\mathscr D'$~! For this reason, a notion of \emph{renormalized} solutions is introduced in \cite{Fis}, and existence of solutions in this framework can be established. \\ In the specific quadratic and two-dimensional case ($q=2$, $N=2$) the question is fully answered in \cite{GoVa}: starting from $L^\infty\cap C^\infty$ initial data, the solution remains bounded and smooth and the problem is globally well-posed. In fact \cite{GoVa} proves a \emph{regularizing} effect: with data satisfying \eqref{hyp_ci} only, the solution becomes \emph{instantaneously} bounded and smooth, which implies global well-posedness. The proof in \cite{GoVa} relies on De Giorgi's approach \cite{DeG}; it uses entropy dissipation, see \eqref{ent_diss}, to get a non linear control on level sets of the solution, which eventually leads to the $L^\infty$ bound. 
The result is extended for higher space dimensions in \cite{CDF} which handles the quadratic case when the diffusion coefficients are close enough to the same constant (how small the distance between the $d_j$'s should be depends on the space dimension, in an explicit way), and in \cite{CaVa}, which handles subquadratic non linearities ($q<2$ in h1), not necessarily an integer). Two ingredients are crucial in the approach of \cite{CaVa}: \begin{itemize} \item First, \cite{CaVa} uses systematically rescaled quantities \begin{equation}\label{scal} a^{(\epsilon)}_i(s,y)=\epsilon^{2/(q-1)}\ a_i(t+\epsilon^2 s,x+\epsilon y) \end{equation} with $\epsilon>0$: $a^{(\epsilon)}$ satisfies the same evolution equation as $a$. Note that in the quadratic case ($q=2$), for $N=2$, the rescaling leaves invariant the natural norms of the problem $\|a\|_{L^\infty(0,\infty;L^1(\mathbb R^2))}$ and $\|\nabla\sqrt a\|_{L^2((0,\infty)\times\mathbb R^2)}$. \item Second, the parabolic regularity is obtained by adapting De Giorgi's techniques, and by working with a certain norm of the rescaled unknown which becomes small as $\epsilon\to 0$. It turns out that the necessary estimate holds in a weak sense. Namely, one has to consider the set of distributions \[\textrm{$T\in \mathscr D'((0,T)\times\mathbb R^N)$ such that $T=\Delta \Phi$, with $\Phi\in L^\infty((0,\infty)\times\mathbb R^N)$}.\] The corresponding rescaled norm behaves like $\mathscr O(\epsilon^{(4-2q)/(q-1)})$, which indeed tends to 0 as $\epsilon \to 0$ for subquadratic non linearities $q<2$. The idea of using such a weak norm also appeared in the regularity analysis for the Navier-Stokes equation \cite{Va}. We also refer the reader to \cite{CafVa, Va0} for further applications of De Giorgi's techniques to the analysis of fluid mechanics systems and to \cite{AAG, GU} for the study of models for population dynamics governed by ``chemotactic-like'' mechanisms.
This approach is also useful for the analysis of the preservation of bounds by numerical schemes when solving non linear convection-diffusion systems \cite{CMV}. In the reasoning adopted in \cite{CaVa}, a special role is played by the total mass $M=\sum_{i=1}^p a_i$ which satisfies the diffusion equation \begin{equation}\label{eqMass} \begin{array}{l} \partial_t M - \Delta (dM)=0, \\[.3cm] d(t,x)=\displaystyle\frac{\displaystyle\sum_{i=1}^p d_i a_i(t,x)}{\displaystyle\sum_{i=1}^p a_i(t,x)}, \end{array}\end{equation} where, by virtue of \eqref{hypcoef}, the diffusion coefficient $d$ satisfies \[0<\delta_\star\leq d(t,x)\leq \delta^\star.\] \end{itemize} This relation can be used to establish, through an elegant duality argument, an estimate in $L^2((0,T)\times\mathbb R^N)$, see \cite{PiSc2} and \cite{DFPV}. This estimate is a key ingredient in proving the global existence of weak solutions for the quadratic problem \eqref{eqA}--\eqref{eqAci} in \cite{DFPV}: indeed, it is worth pointing out that with this $L^2$ estimate the right hand side $Q_i(a)$ in \eqref{eqA} makes sense, while the estimates based on the mass conservation and entropy dissipation were not enough. However, the $L^2$ estimate does not shrink the rescaled solutions $a^{(\epsilon)}$ as $\epsilon\to 0$ and it is thus not enough to provide global boundedness and regularity. This is where we can take advantage of using a weak norm. In the present work, we wish to fill the gap in the boundedness theory and to provide a complete answer for the quadratic case in \emph{any} dimension. In fact, our analysis also covers higher non linearities, but with a non explicit condition on the growth exponent. Our main results read as follows. \begin{theo}\label{theo_main} Let $N\in\mathbb N$, with $N\geq 3$.
For any initial data $a^0=(a^0_1,a^0_2,a^0_3,a^0_4)$ in $\big(C^\infty(\mathbb{R}^N)\cap L^\infty(\mathbb{R}^N)\big)^4$ such that $a_i^0(x)\geq0$ for any $x\in \mathbb{R}^N$ and $i\in\{1,...,4\}$, there exists a unique, globally defined, solution $a= (a_1,a_2,a_3,a_4)$ to \eqref{eqA}--\eqref{eqAci} which is non negative, bounded on $[0,T]\times\mathbb R^N$ for any $0<T<\infty$, and $C^\infty$-smooth. \end{theo} \begin{theo}\label{Theo_principal} Let $N\in\mathbb N$, with $N\geq 3$. Consider a system \eqref{eqA2} verifying h1)-h4). There exists $\nu_0>0$ depending on $N$, $\delta_\star$ and $\delta^\star$ such that if h1) holds with $2\leq q\leq 2+\nu_0\leq 2\frac{N+1}{N}$, then for any non negative $a^0 \in C^{\infty}(\mathbb{R}^N;\mathbb{R}^p)\cap L^{\infty}(\mathbb{R}^N;\mathbb{R}^p)$, there exists a unique, globally defined, solution $a$ to \eqref{eqA2}--\eqref{eqA2ci} which is non negative, bounded on $[0,T]\times\mathbb R^N$ for any $0<T<\infty$, and $C^\infty$-smooth. \end{theo} Theorem \ref{theo_main} thus appears as a consequence of Theorem \ref{Theo_principal}. The extra power $\nu_0$ allowed on the nonlinearities depends on $N$, $\delta_\star$ and $\delta^\star$ in a non explicit way and our method does not provide any precise estimate. It seems unlikely that it can correspond to a physically relevant threshold. The problem of regularity remains open for higher nonlinearities. The proof still follows the De Giorgi strategy, and relies on a refinement of the weak norm estimate obtained in \cite{CaVa} (which, though, remains a crucial ingredient of the proof). To be more specific, we are going to upgrade the $L^\infty$ estimate to a $C^\alpha$ estimate, working with the set of distributions \[\textrm{$T\in \mathscr D'((0,T)\times\mathbb R^N)$ such that $T=\Delta \Phi$, with $\Phi\in L^\infty(0,\infty;C^\alpha(\mathbb R^N))$}\] for a certain regularity coefficient $0<\alpha\leq 1$.
This is combined with a $L^{(N+1)/N}$ estimate on the total mass, obtained through a duality argument. This argument is directly inspired by the derivation of elliptic estimates by Fabes and Stroock \cite{FaSt} and it appears as a dual version of the Alexandrov-Bakelman-Pucci-Krylov-Tso (ABPKT) estimate \cite{Ale,Bak,Puc,Kry,Tso}. We point out that, contrary to the approach in \cite{CaVa}, we do not use here the bounds derived from the entropy dissipation \eqref{ent_diss}. The paper is organized as follows. In Section \ref{S:main}, we give an overview of the main steps of the proof. Section \ref{S:3} is concerned with the weak estimate on the total mass. It relies on a H\"olderian regularity analysis for parabolic equations. This is combined with a duality argument which uses crucially the non negativity of the solution. Section \ref{S:4} is devoted to a complementary estimate in a suitable Lebesgue space, which, again, relies on a duality approach. Section \ref{S:fin} explains how the arguments combine to end the proof of the main results. \section{Main steps of the proof} \label{S:main} \subsection{A priori estimates; boundedness, global existence and regularity of the solutions} In what follows, we are going to establish several a priori estimates satisfied by the solutions of \eqref{eqA2}. To this end, we will perform various manipulations such as integrations by parts, interchanges of integrals and derivatives, etc. These manipulations apply to the smooth solutions of the problem that can be shown to exist on a small enough time interval, see \cite[Proposition~A.2]{GoVa}. They equally apply to solutions of suitable approximations of the problem \eqref{eqA2}. The construction of such an approximation --- by regularizing data and coefficients, cutting off the non linearities... --- can be a delicate issue: one should preserve the structural features of the original equation, while ensuring that the approximation admits a globally defined smooth solution.
We refer the reader to \cite{DFPV} on this issue. As it will be clear in the forthcoming discussion, the estimates we are going to derive do not depend on the regularization parameter, but only on $N$, $\delta_\star$, $\delta^\star$, and $\mathscr Q$, $p$, $q$ (see h1)), which, eventually, allows us to conclude by getting rid of the regularization parameter. The very first estimate is a direct consequence of the mass conservation and entropy dissipation properties of the system. The following claim, see \cite[Proposition~2.1]{GoVa}, applies without any restriction on the number of species $p$, the degree of non linearity $q$ nor on the space dimension $N$. \begin{proposition}[\cite{GoVa}]\label{P1} Assume h1)-h4). Let $a^0=(a_1^0,...,a_p^0)$, with non negative components, satisfy \eqref{hyp_ci}. Then, for any $0<T<\infty$, there exists $0<C(T)<\infty$ such that \[ \begin{array}{l} \displaystyle\sup_{0\leq t\leq T}\Big\{\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb R^N} a_i\big(1+|x|+|\ln(a_i)|\big)(t,x) \, {\mathrm{d}} x\Big\} \\ \qquad\qquad + \displaystyle\sum_{i=1}^{p}\displaystyle\int_0^T\displaystyle\int_{\mathbb R^N} \big|\nabla \sqrt{a_i}\big|^2(s,x) \, {\mathrm{d}} x\, {\mathrm{d}} s + \displaystyle\sum_{i=1}^{p}\displaystyle\int_0^T\displaystyle\int_{\mathbb R^N} Q_i(a)\ln (a_i) \, {\mathrm{d}} x\, {\mathrm{d}} s \leq C(T).\end{array} \] \end{proposition} The entropy dissipation \eqref{ent_diss} tells us that $\sum_{i=1}^{p}\int_{\mathbb R^N} a_i\ln(a_i)(t,x) \, {\mathrm{d}} x$ is a non increasing function of the time variable. However, this quantity has no sign. To turn this information into a useful estimate, involving the non negative quantities $a_i|\ln(a_i)|$, we need a control on the first order space moments $\int_{\mathbb R^N} |x| a_i(t,x) \, {\mathrm{d}} x $. We refer the reader to \cite{GoVa} for details.
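Let us briefly indicate, for completeness, the two formal computations behind Proposition~\ref{P1}; full details can be found in \cite{GoVa}. First, multiplying \eqref{eqA2} by $\ln(a_i)+1$, integrating and summing over $i$, the contribution of the constant $1$ vanishes by h3) and by integration of the diffusion term, while an integration by parts gives \[ \displaystyle\int_{\mathbb R^N} \ln(a_i)\, d_i\Delta a_i \, {\mathrm{d}} x =-d_i\displaystyle\int_{\mathbb R^N} \displaystyle\frac{|\nabla a_i|^2}{a_i} \, {\mathrm{d}} x =-4d_i\displaystyle\int_{\mathbb R^N} |\nabla \sqrt{a_i}|^2 \, {\mathrm{d}} x\leq -4\delta_\star\displaystyle\int_{\mathbb R^N} |\nabla \sqrt{a_i}|^2 \, {\mathrm{d}} x; \] together with h4), this yields \eqref{ent_diss}. Second, the negative part of the entropy is absorbed by the moments: splitting according to the size of $a_i$, and using the elementary inequality $u\ln(1/u)\leq \frac2e\sqrt u$ for $0<u\leq 1$, we get, with $\ln_-(u)=\max(-\ln(u),0)$, \[ a_i\ln_-(a_i)\leq \displaystyle\frac{|x|}{2}\, a_i+\displaystyle\frac{2}{e}\, e^{-|x|/4}, \] by distinguishing the regions $\{a_i\geq e^{-|x|/2}\}$ and $\{a_i< e^{-|x|/2}\}$. Since $a_i|\ln(a_i)|=a_i\ln(a_i)+2a_i\ln_-(a_i)$, integrating in space bounds $\int_{\mathbb R^N} a_i|\ln(a_i)|\, {\mathrm{d}} x$ by the dissipated entropy, the first order moment, and a universal constant.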
This estimate will not be used in our reasoning; nevertheless the entropy dissipation still has a crucial role in the proof of Theorems~\ref{theo_main} and~\ref{Theo_principal}. Note in passing that the counter-examples of systems that produce blow-up in \cite{PiSc2} very likely do not satisfy the entropy dissipation property. \\ As said above, for data in $C^\infty\cap L^\infty(\mathbb R^N)$, we can construct a $C^\infty$ and bounded solution defined on a small enough time interval. Let $T_{\mathrm{max}}$ be the lifespan of such a solution. Standard bootstrapping arguments tell us that if $T_{\mathrm{max}}<\infty$ then we have $$ \limsup_{t\to T_{\mathrm{max}}}\| a (t,\cdot)\|_{L^\infty(\mathbb{R}^N)}=+\infty. $$ In what follows, we are going to obtain a uniform bound satisfied by $\|a(t,\cdot)\|_{L^\infty(\mathbb{R}^N)}$ on the time interval $[0,T_{\mathrm{max}})$, depending only on $T_{\mathrm{max}}$ and the assumptions on the data, which thus contradicts the occurrence of a blow-up of the solution in finite time. Therefore, the $L^\infty$ estimate implies that the lifespan of the solutions of \eqref{eqA2}--\eqref{eqA2ci} is infinite. Moreover, boundedness also implies the regularity of the solution, by a bootstrap argument, see \cite[Proposition~A.1]{GoVa}. \subsection{The key intermediate statements} The main ingredient consists in showing that the local boundedness can be obtained from a local estimate in $L^r$, with $r>1$, see \cite[Proposition~4]{CaVa}. We thus work on balls \[B_\rho=\big\{x\in\mathbb R^N,\ |x|\leq \rho\big\} .\] \begin{lemma}[De Giorgi type Lemma, \cite{CaVa}]\label{lemm_de giorgi} We suppose that $2\leq q< \frac{2(N+1)}{N}$. We also suppose that h1)-h4) hold. Let $a $ be a non negative solution to \eqref{eqA2} on $(-1,0)\times {B}_1$.
Then, for any $r>1$, there exists a universal constant $\delta_r>0$ such that, if $a=(a_1,\dots, a_p)$ verifies $$ \displaystyle\sum_{i=1}^p \|a_i\|_{L^{ r}((-1,0)\times B_1)}\leq \delta_r,$$ then, $0\leq a_i(0,0)\leq 1$, for $i\in\{1,...,p\}$. \end{lemma} The proof relies on De Giorgi's techniques \cite{DeG} (see also \cite{Alikakos} for a related approach). For the sake of completeness we describe the main steps in Appendix~\ref{a:L3}; it is also important to detail this proof since this is where the entropy dissipation plays a central role. At first sight this information does not look very useful since the natural estimates for \eqref{eqA2}--\eqref{eqA2ci} in Proposition~\ref{P1} do not involve $L^r$ norms for an exponent $r$ larger than 1. However, we will be able to identify further estimates that shrink for the rescaled solutions \eqref{scal} as $\epsilon\to 0$. Thus, for $\epsilon$ small enough the rescaled solution fulfils the criterion in Lemma~\ref{lemm_de giorgi}. \begin{lemma}\label{lemm_de giorgi2} There exists $\epsilon_0>0$ and $\nu_0>0$ depending on $N$, $\delta_\star$ and $\delta^\star$ such that if h1) holds with $2\leq q \leq 2+\nu_0\leq 2\frac{N+1}{N}$, then for all $0<\epsilon\leq\epsilon_0$ we have $$\sum_{i=1}^{p} \|{ a}^{(\epsilon)}_i\|_{L^{(N+1)/N}((-1,0)\times B_1)}\leq \delta$$ with $\delta=\delta_{(N+1)/N}$ as defined in Lemma~\ref{lemm_de giorgi}. \end{lemma} \noindent Coming back to the original variables, we obtain the $L^\infty$ estimate. \begin{coro}\label{unibound} Let $\epsilon_0$ be defined in Lemma~\ref{lemm_de giorgi2}. Then, for all $\frac{T_{\mathrm{max}}}{2}<t<T_{\mathrm{max}}$, we have $$\sum_{i=1}^{p} \|a_i(t,\cdot)\|_{L^{\infty}(\mathbb{R}^N)}\leq \epsilon_0^{-2/(q-1)}.$$ \end{coro} \noindent {\bf Proof.} Pick $x_0$ in $\mathbb{R}^N$ and $t_0\in (\frac{T_{\mathrm{max}}}{2},T_{\mathrm{max}})$.
Applying Lemma~\ref{lemm_de giorgi} to $a^{(\epsilon_0)}$ yields $$0\leq \sum_{i=1}^{p} a_i(t_0,x_0)=\epsilon_0^{\frac{-2}{q-1}}\sum_{i=1}^{p} a_i^{(\epsilon_0)}(0,0)\leq \epsilon_0^{\frac{-2}{q-1}}.$$ \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par Having this statement at hand allows us to conclude the proof of Theorem~\ref{Theo_principal}. Let $2\leq q\leq 2+\nu_0\leq 2\frac{N+1}{N}$. Let $a=(a_1,..., a_p)$ be a solution to~\eqref{eqA2}--\eqref{eqA2ci}, and let $T_{\mathrm{max}}$ be the lifespan of $a$. Assume that $T_{\mathrm{max}}$ is finite. Then, for each $i\in \{1,...,p\}$, Corollary~\ref{unibound} tells us that $a_i(t,\cdot)$ is uniformly bounded for all $\frac{T_{\mathrm{max}}}{2}<t<T_{\mathrm{max}}$ and thus the sup norm does not blow up as $t\to T_{\mathrm{max}}$. This contradicts the fact that $T_{\mathrm{max}}$ is the maximal time of existence of a smooth solution of \eqref{eqA2}--\eqref{eqA2ci}. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par Therefore the cornerstone of the proof consists in proving Lemma~\ref{lemm_de giorgi2} and identifying the specific role played by the norm $L^{(N+1)/N}$. The argument is two-fold and it uses the diffusion equation \eqref{eqMass} satisfied by the total mass $M(t,x)=\sum_{i=1}^p a_i(t,x)$. On the one hand, we shall show that the norm $L^{(N+1)/N}$ of $M$ can be controlled by means of the norm $L^\infty(0,\infty;L^1(\mathbb R^N))$. On the other hand, we shall obtain a new estimate on a \emph{weak norm} of $M$, which will allow us to conclude that \[\displaystyle\lim_{\epsilon\to 0} \|M^{(\epsilon)}\|_{L^\infty(0,\infty;L^1(\mathbb R^N))} =0,\qquad \textrm{with $M^{(\epsilon)}(s,y)= \epsilon^{2/(q-1)}\ M(t+\epsilon^2 s,x+\epsilon y)$}.\] This analysis is based on duality arguments and regularization properties of parabolic equations. Accordingly, we can conclude that the $L^{(N+1)/N}$ norm of the rescaled solutions shrinks as $\epsilon\to 0$.
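The exponent bookkeeping behind the rescaling \eqref{scal} and the weak norm estimates can be checked mechanically. The snippet below (plain Python; the function names are ours and purely illustrative) records the rates quoted above and in the discussion of \cite{CaVa}.

```python
from fractions import Fraction

def l1_scaling_exponent(q, N):
    # ||a^(eps)(s, .)||_{L^1(R^N)} = eps**e * ||a(t + eps^2 s, .)||_{L^1(R^N)}:
    # prefactor eps^{2/(q-1)} from the rescaling, and y = (x' - x)/eps gives eps^{-N}.
    return Fraction(2) / (q - 1) - N

def weak_norm_exponent(q):
    # Phi^(eps) = eps^{2/(q-1) - 2} Phi, i.e. the O(eps^{(4-2q)/(q-1)}) rate
    # of the L^infty weak norm recalled in the introduction.
    return Fraction(2) / (q - 1) - 2

def improved_exponent(q, alpha):
    # Extra gain eps^alpha once Phi is C^alpha in the space variable.
    return alpha + weak_norm_exponent(q)

# Quadratic case in dimension N = 2: the L^1 norm is scale invariant.
assert l1_scaling_exponent(Fraction(2), 2) == 0
# The plain weak norm shrinks only in the subquadratic range q < 2 ...
assert weak_norm_exponent(Fraction(2)) == 0
assert weak_norm_exponent(Fraction(3, 2)) > 0
# ... while the Holder gain keeps a positive rate for q up to 1 + 2/(2 - alpha).
alpha = Fraction(1, 2)
assert improved_exponent(Fraction(2), alpha) == alpha
assert improved_exponent(1 + Fraction(2) / (2 - alpha), alpha) == 0
```

In particular, the H\"older gain keeps the rescaled mass shrinking slightly beyond the quadratic threshold, which is, roughly speaking, the room exploited to allow for a positive $\nu_0$.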
\subsection{Preliminary comments} The De Giorgi approach leads us to construct sequences, based on energy-entropy estimates, where the parameter of the sequence controls level sets of the solution and space-time localization. Roughly speaking, we obtain a non linear control of the $k$th level by the $(k-1)$th level. We can finally deduce a local property of the solution by using the following simple result. \begin{lemma}\label{2bete} Let $\big( u_n\big)_{n\in\mathbb N}$ be a sequence of non negative real numbers. We suppose that it satisfies, for any $n\in \mathbb N \setminus\{0\}$, $$u_n\leq \Lambda ^n u_{n-1}^{\gamma}$$ where $\Lambda, \gamma>1$. Then, there exists $\kappa>0$ such that, if $0\leq u_0\leq \kappa$, then $\lim_{n\to \infty} u_n=0$. \end{lemma} \noindent {\bf Proof.} We set $v_n=\ln (u_n)$ which satisfies \[v_n\leq n\ln(\Lambda)+ \gamma v_{n-1},\] and thus \[ v_n\leq \ln(\Lambda)\displaystyle\sum_{j=0}^n j\gamma^{n-j} +v_0 \gamma^n \leq \gamma^n\ln (\Lambda^{F(\gamma)} u_0) \] with $$F(\gamma)=\displaystyle\frac1\gamma\displaystyle\sum_{j=0}^\infty j\Big(\displaystyle\frac1\gamma\Big)^{j-1} =\displaystyle\frac1\gamma \ \displaystyle\frac{\, {\mathrm{d}}}{\, {\mathrm{d}} x}\Big(\displaystyle\frac{1}{1-x}\Big)\Big|_{x=1/\gamma}= \displaystyle\frac1\gamma\Big(\displaystyle\frac{1}{1-1/\gamma}\Big)^2. $$ Therefore $v_n$ tends to $-\infty$, and $u_n$ tends to 0, as $n\to \infty$ provided $u_0$ is small enough. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \section{Weak norm estimates on the total mass and shrinking of the rescaled total mass} \label{S:3} Our approach relies on the following statement.
\begin{proposition}\label{p:1} Let $\Phi:(0,T)\times \mathbb R^N\rightarrow \mathbb R$ such that \begin{itemize} \item[a)] $\Phi$ lies in $L^\infty((0,T)\times\mathbb R^N)$; \item[b)] $\Delta \Phi=M\geq 0$; \item[c)] $\Phi$ satisfies $\partial_t \Phi-d\Delta \Phi=0$ on $(0,T)\times \mathbb R^N$, with a coefficient $d:(0,T)\times \mathbb R^N\rightarrow \mathbb R$ verifying $0< \delta_\star \leq d(t,x)\leq \delta^\star<\infty$ for a.\,e.\,$(t,x)\in (0,T)\times \mathbb R^N$. \end{itemize} Then, there exists $\alpha\in (0,1]$ such that $\Phi\in C^{[\alpha/2,\alpha]}([t_0,T]\times \mathbb R^N)$ for any $t_0>0$, which means that we can find $C>0$ such that, for any $(t,x)\in [t_0,T]\times \mathbb R^N$ and $(\tau,h)\in \mathbb R\times \mathbb R^N$ with $t+\tau \geq t_0$, we have \[ \displaystyle\frac{|\Phi(t+\tau,x+h)-\Phi(t,x)|}{|\tau|^{\alpha/2}+|h|^\alpha}\leq C \|\Phi\|_{L^\infty}.\] \end{proposition} \noindent This H\"older regularity estimate for non conservative parabolic equations dates back to Krylov-Safonov \cite{SK1,SK2}. In fact, the result of \cite{SK1,SK2} does not need the sign property b). However, as it will be explained below, this sign property naturally appears for the system under consideration, and it plays a further crucial role throughout the analysis. Let us explain the interest of this statement for our purpose. As said above the total mass $M$ satisfies the diffusion equation \eqref{eqMass}. Of course, by definition, $M$ is a non negative function which lies in $L^\infty(0,\infty;L^1(\mathbb R^N))$. Let $\Phi$ satisfy $\Delta \Phi =M\geq 0$. Since $d(t,x)$ is bounded above by $\delta^\star$, $\Phi$ also satisfies the evolution equation \[ \partial_t \Phi-\delta^\star \Delta \Phi=(d-\delta^\star)\Delta \Phi=(d-\delta^\star)M\leq 0.\] This observation is the cornerstone of the analysis performed in \cite{CaVa}. In particular, we will make use of the following crucial property established in \cite[Proposition~11 \& Corollary~12]{CaVa}. 
\begin{proposition}\label{P:CaVa} Let $N\in\mathbb N$, with $N\geq 3$. Let $\Phi=\Delta^{-1}M$ with $M$ the total mass associated to a solution of \eqref{eqA2}. Then, we have \[ \|\Phi\|_{L^\infty((0,T)\times\mathbb R^N)}\leq \|\Phi(0,\cdot)\|_{L^\infty(\mathbb R^N)}\leq K_N\ \| M(0,\cdot)\|_{L^\infty(\mathbb R^N)}^{1-2/N}\ \| M(0,\cdot)\|_{L^1(\mathbb R^N)}^{2/N} ,\] where $K_N>0$ is a certain universal constant, which only depends on the space dimension. \end{proposition} Proposition~\ref{p:1} thus strengthens \cite{CaVa}'s results in the sense that it provides, beyond the $L^\infty$ estimate on $\Phi$, a H\"older-regularity estimate. Since the estimate in Proposition~\ref{P:CaVa} is not evident at first sight, we give the main steps of the proof in Appendix~\ref{app:CaVa} for the sake of completeness. We shall use the following consequence of Proposition~\ref{p:1}, which is precisely the estimate that allows us to go beyond the subquadratic non linearities dealt with in \cite{CaVa}. \begin{lemma}\label{l:L1shr} Let $M$ be a non negative solution of \eqref{eqMass}, and let $\Phi=\Delta^{-1}M$. Let $t\geq t_0>0$ and $x\in\mathbb R^N$. For $\epsilon>0$, we set $M^{(\epsilon)}(s,y)=\epsilon^{2/(q-1)}M(t+\epsilon^2 s,x+\epsilon y)$. We suppose that $M^{(\epsilon)}$ lies in $L^\infty(-4,0;L^1(\mathbb R^N))$. Then, there exists $c>0$ and $0<\alpha\leq 1$, depending only on $N$, $\delta_\star$ and $\delta^\star$, such that for any $0<\epsilon\leq \sqrt {t_0}/2$, \[ \displaystyle\sup_{-4\leq s\leq 0}\displaystyle\int_{B_2} M^{(\epsilon)}(s,y)\, {\mathrm{d}} y\leq c\ \|\Phi\|_{L^\infty}\ \epsilon^{\alpha-2+2/(q-1)}\ .\] \end{lemma} \noindent {\bf Proof.} Let $\zeta\in C^\infty_c(\mathbb R^N)$ be such that $\mathrm{supp}(\zeta)\subset B_2$ and $\zeta(x)=1$ for any $x\in B_1$. 
Since $M^{(\epsilon)}\geq 0$, we get \[\begin{array}{lll} \displaystyle\int_{B_1} M^{(\epsilon)}(s,y)\, {\mathrm{d}} y&\leq& \displaystyle\int_{B_2} \zeta M^{(\epsilon)}(s,y)\, {\mathrm{d}} y= \displaystyle\int_{B_2} \zeta \Delta \Phi^{(\epsilon)}(s,y)\, {\mathrm{d}} y \\ [.3cm] &=&\displaystyle\int_{B_2}\Delta \zeta(y) \big( \Phi^{(\epsilon)}(s,y)-\Phi^{(\epsilon)}(0,0)\big) \, {\mathrm{d}} y, \end{array}\] where we have integrated by parts twice and used $\int \Delta\zeta\, {\mathrm{d}} y=0$ to subtract the constant. By virtue of Proposition~\ref{p:1}, we can write \[\begin{array}{lll}\displaystyle\int_{B_1} M^{(\epsilon)}(s,y)\, {\mathrm{d}} y &\leq& \epsilon^{-2+2/(q-1)}\displaystyle\int_{B_2}\Delta \zeta(y) \big( \Phi(t+\epsilon^2 s,x+\epsilon y)-\Phi(t,x)\big) \, {\mathrm{d}} y \\[.3cm] &\leq& C\|\zeta\|_{W^{2,\infty}(\mathbb R^N)} \|\Phi\|_{L^\infty} \epsilon^{\alpha-2+2/(q-1)} \end{array}\] for any $s\in (-4,0)$ and $0<\epsilon^2<t_0/4$. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par As indicated above the H\"older estimate in Proposition~\ref{p:1} is due to \cite{SK1,SK2}. For the sake of completeness, we provide here an alternative proof, which, however, uses the additional assumption b). The interest of this proof is that it entirely relies on energy estimates and De Giorgi's methods. Since the result stated in Proposition~\ref{p:1} is standard, the remainder of this section can be safely skipped by the reader not interested in such an alternative proof (the original proof relies on a probabilistic interpretation of the equation and uses arguments from the theory of diffusion processes).\\ Here and below, given $\rho>0$, with $B_\rho$ the ball $\{x\in\mathbb R^N, |x|\leq \rho\}$, we denote \[Q_\rho=(-\rho^2,0)\times B_\rho.\] In fact, we shall work within $Q_2$, considered as a reference domain. From an equation satisfied on $Q_2$ we wish to establish qualitative properties on a smaller domain, say $Q_1$ or $Q_{1/2}$.
It is also convenient to introduce the domain \[\widetilde Q=(-9/4,-1)\times B_1.\] We refer the reader to Fig.~\ref{Fig:TimeShift}; having the picture of the subdomains of $Q_2$ might be helpful in following the arguments. \\ The argument for proving Proposition \ref{p:1} relies on a technical lemma that controls oscillations. From now on, for a function $\varphi$ defined on $\Omega\subset \mathbb R^d$, we set \[ \mathrm{osc}(\varphi,\ \Omega)=\displaystyle\sup_{x\in \Omega} \varphi(x)-\displaystyle\inf_{x\in \Omega} \varphi(x) .\] \begin{lemma}[Decay of oscillations]\label{l:osc} Let $\Phi$ satisfy the assumptions of Proposition~\ref{p:1}. There exists $\lambda\in (0,1)$, which depends only on $N$ and $\delta_\star$, such that \[ \mathrm{osc}\big(\Phi,\ Q_{1/2}\big)\leq \lambda\ \mathrm{osc}\big(\Phi, \ Q_2\big).\] \end{lemma} Let us assume temporarily that Lemma \ref{l:osc} holds true. We pick $(t,x)\in (t_0,T)\times \mathbb R^N$, where $0<t_0<T<\infty$, and we set \[\Phi_k(s,y)=\Phi(t+2^{-2k}s,x+2^{-k}y),\] where $k\in \mathbb N$ is large enough so that the time variable remains larger than $t_0$ when $-4\leq s\leq 0$; namely, we have $k\geq k_0=\ln\big(\frac{t-t_0}{4}\big)\frac{1}{2\ln(1/2)}$. The function $\Phi_k$ is defined on $Q_2$ and it satisfies \[ \partial_s \Phi_k = d_k \Delta_y \Phi_k\] where \[ d_k(s,y)=d(t+2^{-2k}s,x+2^{-k}y).\] Moreover, up to replacing $\Phi$ by $\Phi/\|\Phi\|_{L^\infty}$, we still have $-1\leq\Phi_k(s,y)\leq +1$.
Applying Lemma \ref{l:osc} yields \[\mathrm{osc}\big(\Phi_k, \ Q_{1/2}\big)\leq \lambda\ \mathrm{osc}\big(\Phi_k, \ Q_2\big) \] which rephrases as \[\mathrm{osc}\big(\Phi(t+\cdot,x+\cdot), \ Q_{2^{-k-1}}\big)\leq \lambda\ \mathrm{osc}\big(\Phi(t+\cdot,x+\cdot), \ Q_{2^{-k+1}}\big).\] We deduce that \[\mathrm{osc}\big(\Phi(t+\cdot,x+\cdot), \ Q_{2^{-k}}\big)\leq \sqrt \lambda^k\times C_0,\qquad C_0=\displaystyle\frac{2}{\sqrt\lambda^{k_0}}\|\Phi\|_{L^\infty}.\] (We should bear in mind the fact that $C_0$ depends on $t_0$ through the definition of $k_0$ and it is proportional to $\|\Phi\|_{L^\infty}$.) Let $x'\in \mathbb R^N$ and $t'>t_0$; there exists a unique $k\in\mathbb N$ such that $x'-x\in B_{2^{-k+1}}\setminus B_{2^{-k}} $, $2^{-2k}\leq| t'-t|\leq 2^{-2(k-1)}$. It follows that \[ \displaystyle\frac{|\Phi(t',x')-\Phi(t,x)|}{|t'-t|^{\alpha/2}+|x'-x|^\alpha} \leq \displaystyle\frac{C_0}{\sqrt\lambda}\ \big(\sqrt\lambda 2^\alpha)^k.\] If $0<\sqrt\lambda \leq 1/2$, the right hand side remains obviously bounded, uniformly with respect to $k$, for any $0<\alpha\leq 1$; otherwise we choose \[0<\alpha=\displaystyle\frac{\ln(1/\sqrt\lambda)}{\ln(2)}<1.\] Hence Proposition ~\ref{p:1} follows from Lemma~\ref{l:osc}. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \noindent We are thus left with the task of proving Lemma~\ref{l:osc}. To this end, we shall apply the following statement. 
\begin{proposition}\label{osc} Let $(t,x)\mapsto v(t,x)$ satisfy \begin{itemize} \item the differential inequality $\partial_t v-\delta^\star\Delta v\leq 0$ on $Q_2$; \item $-1\leq v(t,x)\leq +1$ on $Q_2$; \item $\mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ v(t,x)\leq 0\big\}\big)\geq \mu\ \mathrm{meas}( \widetilde Q)$, for some $\mu>0$. \end{itemize} Then, there exists $0<\eta<1$ such that \[v(t,x)\leq \eta\qquad \textrm{on $Q_{1/2}$}.\] \end{proposition} The function \[\widetilde \Phi(t,x)=\displaystyle\frac{2}{\mathrm{osc}(\Phi, Q_2)}\Big( \Phi(t,x)-\displaystyle\frac{\sup_{Q_2}\Phi+\inf_{Q_2}\Phi}{2}\Big) \] satisfies the first two assumptions of Proposition \ref{osc}. Suppose that \[\mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ \widetilde\Phi(t,x)\leq 0\big\}\big)\geq \displaystyle\frac{\mathrm{meas}(\widetilde Q)}{2}.\] (Otherwise, we shall apply the same reasoning to $-\widetilde\Phi$.) Proposition~\ref{osc}, applied with $\mu=1/2$, tells us that $\widetilde \Phi(t,x)\leq \eta$ on $Q_{1/2}$, which yields $\mathrm{osc}(\widetilde \Phi,Q_{1/2})\leq 1+\eta$ (since $\inf_{Q_{1/2}}\widetilde \Phi\geq -1$), and thus \[\mathrm{osc}(\Phi, Q_{1/2})\leq \displaystyle\frac{1+\eta}{2}\ \mathrm{osc}(\Phi, Q_{2}).\] This justifies Lemma \ref{l:osc}, with $\lambda=\frac{1+\eta}{2}\in (0,1)$. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \noindent The proof of Proposition~\ref{osc} relies on a series of intermediate statements. \begin{lemma}\label{distrib} Let $-\infty<a,b<\infty$ and let $\Omega$ be a smooth bounded domain in $\mathbb R^N$. We denote $Q=(a,b)\times \Omega$. \begin{enumerate}[label=(\alph*)] \item Let $u\in L^\infty(a,b;L^2(\Omega))\cap L^2(a,b;H^1(\Omega))$ such that $$\partial_t u-\delta^\star\Delta u+\mu=0$$ holds in $\mathscr D'(Q)$, with $\mu$ a non negative measure on $Q$. Let $F:\mathbb R\rightarrow \mathbb R$ be a non decreasing convex function. We assume that $F(0)=0$ and $F\in W^{1,\infty}_{\mathrm{loc}}(\mathbb R)$.
Then, there exists a non negative measure $\nu$ such that $v=F(u)$ satisfies $\partial_t v-\delta^\star\Delta v+\nu=0$ in $\mathscr D'(Q)$. \item Let $v\in L^\infty((a,b)\times \Omega)\cap L^2(a,b;H^1(\Omega))$ be a non negative solution of $\partial_t v-\delta^\star\Delta v+\nu=0$, with $\nu$ a non negative measure on $Q$. Then, for any trial function $\varphi\in C^\infty_c(\Omega)$ there exists $C>0$, which depends only on $\delta^\star$, $\|v\|_{L^\infty}$ and $\varphi$, such that, for a.~e.~$a<s<t<b$, the following energy inequality holds $$\begin{array}{l} \displaystyle\frac12\displaystyle\int_\Omega v^2(t,x)\varphi^2(x)\, {\mathrm{d}} x +\delta_\star\displaystyle\int_s^t\displaystyle\int_\Omega| \nabla(\varphi v)|^2(\tau, x)\, {\mathrm{d}} x\, {\mathrm{d}} \tau \\ [.3cm] \qquad\qquad\qquad\qquad \leq \displaystyle\frac12\displaystyle\int_\Omega v^2(s,x)\varphi^2(x)\, {\mathrm{d}} x + C(t-s). \end{array}$$ \end{enumerate} \end{lemma} \noindent {\bf Proof.} Note that $v=F(u)$ also lies in $L^\infty(a,b;L^2(\Omega))\cap L^2(a,b;H^1(\Omega))$, see e.~g.~\cite[Prop.~IX.5]{Brez}. Item (a) follows from the following computation $$ \partial_t F(u)= -F'(u)\mu + \delta^\star F'(u)\Delta u =\underbrace{-F'(u)\mu- \delta^\star F''(u)|\nabla u|^2}_{\leq 0} + \delta^\star\Delta F(u). $$ The argument can be made rigorous by working on the weak variational formulation of the equation, with suitable approximation of the solution $u$. \noindent For proving item (b), we compute \[ \begin{array}{lll}\displaystyle\frac12\partial_t (v^2\varphi^2)&=&\delta^\star\varphi^2v\nabla\cdot\nabla v-\nu \varphi^2 v \\ &=&\delta^\star\nabla\cdot(\varphi^2 v\nabla v)-\nu \varphi^2 v-\delta^\star\nabla v\cdot \nabla(\varphi^2 v) \\ & =& \delta^\star\nabla\cdot(\varphi^2 v\nabla v)-\nu \varphi^2 v-\delta^\star|\nabla (\varphi v)|^2 +\delta^\star \ v^2\ |\nabla\varphi|^2.
\end{array}\] The second and third terms of the right hand side are non positive; the integral of the last term is dominated by $\delta^\star\|v\|_{L^\infty(Q)}^2 \|\varphi\|_{H^1(\Omega)}^2.$ Again, a full justification proceeds through an approximation argument. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par For proving Proposition~\ref{osc}, we shall work with several subdomains of $Q_2$, as indicated in Fig.~\ref{Fig:TimeShift}, which might help to follow the arguments. \begin{lemma}\label{l:3sets} Let $u$ satisfy $\partial_t u-\delta^\star\Delta u\leq 0$ and $-1\leq u(t,x)\leq +1$ in $Q_2$. Let us set \[\begin{array}{l} \mathscr A=\big\{(t,x)\in Q_1,\ u(t,x)\geq 1/2\big\}, \\ \mathscr B=\big\{(t,x)\in \widetilde Q,\ u(t,x)\leq 0\big\}, \\ \mathscr C=\big\{(t,x)\in Q_1\cup \widetilde Q,\ 0<u(t,x)<1/2\big\}.\end{array} \] There exists $\alpha>0$ such that if $\mathrm{meas}(\mathscr A)\geq \eta$ and $\mathrm{meas}(\mathscr B)\geq \frac12\mathrm{meas}(\widetilde Q)$, then $\mathrm{meas}(\mathscr C)\geq \alpha$.
\end{lemma} \begin{figure}[!ht] \begin{center} \begin{tikzpicture} \draw [very thin,->] (0,-5) -- (0,5); \draw (0,5) node[above] {$x$} ; \draw [very thin,->] (-9,0) -- (1,0); \draw (-9,0) node[below] {$t$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (1.5,2) -- (1.5,-2)node [black,midway,xshift=20pt] {\small $B_1$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (3.5,4) -- (3.5,-4)node [black,midway,xshift=20pt] {\small $B_2$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (2.5,3) -- (2.5,-3)node [black,midway,xshift=20pt] {\small $\ B_{3/2}$}; \draw [very thick] (-8,-4) -- (-8,4) ; \draw [very thick] (-8,4) -- (0,4) ; \draw [very thick] (0,4) -- (0,-4) ; \draw [very thick] (-8,-4) -- (0,-4) ; \draw [dotted] (-8,-4) -- (-8,-5); \draw [dotted] (-4.5,0) -- (-4.5,-5); \draw [dotted] (-2,0) -- (-2,-5); \draw [dotted] (0,-3) -- (2.5,-3); \draw [dotted] (0,3) -- (2.5,3); \draw [dashed, very thick] (-8,-3) -- (0,-3); \draw [dashed, very thick] (-8,3) -- (0,3); \draw [dotted] (0,-4) -- (3.5,-4); \draw [dotted] (0,4) -- (3.5,4); \draw [dotted] (0,-2) -- (1.5,-2); \draw [dotted] (0,2) -- (1.5,2); \draw (-8.5,-5) node[below] {{\small {$t=-4$}}}; \draw (-4.5,-5) node[below] {{\small {$t=-9/4$}}}; \draw (-2,-5) node[below] {{\small {$t=-1$}}}; \draw (0.5,-5) node[below] {{\small {$t=0$}}}; \draw [very thick] (-2,-2) -- (-2,2) ; \draw [very thick] (-2,2) -- (0,2) ; \draw [very thick] (0,2) -- (0,-2) ; \draw [very thick] (-2,-2) -- (0,-2) ; \draw [dashed] (-4.5,-2) -- (-4.5,2) ; \draw [dashed] (-4.5,2) -- (0,2) ; \draw [dashed] (0,2) -- (0,-2) ; \draw [dashed] (-4.5,-2) -- (0,-2) ; \coordinate[] (A) at (-2,-2); \coordinate[] (B) at (0,-2); \coordinate[] (C) at (0,2); \coordinate[] (D) at (-2,2); \coordinate[] (E) at (-4.5,2); \coordinate[] (F) at (-4.5,-2); \fill[gray, opacity=0.5, draw=black] (A) -- (B) -- (C) -- (D) -- cycle; \fill [pattern=north east lines,draw=black] (D) -- (E) -- (F) -- (A) 
-- cycle; \end{tikzpicture} \caption{ The domains $Q_2$ (the largest box), $\widetilde Q$ (the dashed box) and $Q_1$ (the grey box)} \label{Fig:TimeShift}\end{center} \end{figure} \noindent {\bf Proof.} We argue by contradiction, assuming that a sequence $\big(u_k\big)_{k\in\mathbb N}$ of solutions of $\partial_t u_k-\delta^\star\Delta u_k\leq 0$ in $Q_2$ satisfies $-1\leq u_k(t,x)\leq +1$ and \begin{equation}\label{hyp_neg}\begin{array}{l} \mathrm{meas}(\mathscr A_k)\geq \eta,\ \textrm{ with $\mathscr A_k= \big\{(t,x)\in Q_1,\ u_k(t,x)\geq 1/2\big\}$}, \\[.3cm] \mathrm{meas}(\mathscr B_k)\geq \displaystyle\frac12\mathrm{meas}(\widetilde Q),\ \textrm{ with $\mathscr B_k= \big\{(t,x)\in \widetilde Q,\ u_k(t,x)\leq 0\big\}$}, \\[.3cm] \mathrm{meas}(\mathscr C_k)\leq \displaystyle\frac1k,\ \textrm{ with $\mathscr C_k= \big\{(t,x)\in Q_1\cup \widetilde Q,\ 0<u_k(t,x)<1/2\big\}$}. \end{array}\end{equation} We focus our interest on the positive part $ v_k=[u_k]_+$, with $[z]_+=\max(z,0)$, which is still uniformly bounded: $0\leq v_k(t,x)\leq 1$, By virtue of Lemma~\ref{distrib}-(a), it satisfies \begin{equation}\label{eqvk} \partial_t v_k-\delta^\star \Delta v_k+\mu_k=0, \end{equation} with $\mu_k$ a non negative measure. The strategy can be recapped as follows. We shall establish the compactness of $v_k$ in the reduced domain $(-4,0)\times B_{3/2}$. It allows us to assume that $v_k$ converges to a certain function $v$. Roughly speaking, we are going to show that $v(s,x)$ vanishes on $B_1$ for certain times $-3/2<s<-1$, which will imply that $v$ vanishes over $Q_1$. It will eventually lead to a contradiction by considering the behavior of the sets $\mathscr A_k$, $\mathscr B_k$, $\mathscr C_k$ as $k\to \infty$. \\ Let us pick a trial function $\zeta\in C^\infty_c(B_2)$ such that $\zeta(x)=1$ for any $x\in B_{3/2}$ and $0\leq \zeta(x)\leq 1$ for any $x\in \mathbb R^N$. 
By using Lemma~\ref{distrib}-(b), we get for $-4<t_1<t_2<0$ \begin{equation}\label{energ_k_2} \displaystyle\int \zeta^2 |v_k|^2(t_2,x)\, {\mathrm{d}} x + \delta^\star\displaystyle\int_{t_1}^{t_2}\displaystyle\int |\nabla (\zeta v_k)|^2(s,x)\, {\mathrm{d}} x\, {\mathrm{d}} s \leq \displaystyle\int \zeta^2 v_k^2(t_1,x)\, {\mathrm{d}} x + C(t_2-t_1),\end{equation} for a certain constant $C>0$. In particular, $\big(\zeta v_k\big)_{k\in\mathbb N}$ is bounded in $L^\infty(-4,0;L^2(B_2))\cap L^2(-4,0;H^1(B_2))$. Going back to \eqref{eqvk}, since $\mu_k\geq 0$, $v_k\geq 0$, we observe that \[\begin{array}{lll} 0\leq \displaystyle\int_{t_1}^{t_2}\displaystyle\int_{B_{3/2}} \mu_k \, {\mathrm{d}} x\, {\mathrm{d}} s &\leq & \displaystyle\int_{t_1}^{t_2}\displaystyle\int_{B_2} \zeta \mu_k \, {\mathrm{d}} x\, {\mathrm{d}} s \\[.3cm] &\leq& \displaystyle\int_{B_2} \zeta v_k(t_1,x)\, {\mathrm{d}} x - \delta^\star\displaystyle\int_{t_1}^{t_2} \displaystyle\int_{B_2} \nabla v_k\cdot\nabla \zeta\, {\mathrm{d}} x\, {\mathrm{d}} s \\ [.3cm] &\leq& \|\zeta\|_{L^1}+ 2 \delta^\star \|\nabla v_k\|_{L^2(Q_2)} \|\nabla\zeta\|_{L^2(B_2)} \end{array}\] is bounded uniformly with respect to $k$. Coming back to \eqref{eqvk}, we deduce that $\big(\partial_t v_k\big)_{k\in\mathbb N}$ is bounded in $\mathscr M^1((-4,0)\times B_{3/2})+L^2(-4,0;H^{-1}(B_{3/2}))$. By virtue of Aubin-Lions-Simon's lemma \cite{JS} (in fact we use the extended version \cite[Theorem~1]{Moussa} which allows us to deal with measure valued time derivatives), we conclude that $\big( v_k\big)_{k\in\mathbb N}$ is compact in $L^2((-4,0)\times B_{3/2})$. We can thus assume that $v_k$ (possibly relabelling the sequence) converges to some $v$ in $L^2((-4,0)\times B_{3/2})$. 
Bienaym\'e-Tchebyschev's inequality yields \[ \mathrm{meas}\big(\big\{(t,x)\in ((-4,0)\times B_1),\ |v_k(t,x)-v(t,x)|\geq \epsilon\big\}\big) \leq \displaystyle\frac{\| v_k-v\|^2_{L^2((-4,0)\times B_1)}}{\epsilon^2}\xrightarrow[k\to \infty]{} 0,\] for any $\epsilon>0$. Let $(t,x)\in (-4,0)\times B_1$ be such that $\epsilon\leq v(t,x)\leq 1/2-\epsilon$. Then we distinguish the following two cases: either $|v-v_k|(t,x)\geq \epsilon$, or $0<v(t,x)-|v-v_k|(t,x)\leq v_k(t,x)\leq v(t,x)+|v-v_k|(t,x)< 1/2$. It follows that \[\begin{array}{l} \mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ \epsilon\leq v(t,x)\leq 1/2-\epsilon \big\}\big) \\[.3cm] \qquad \leq \mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ |v-v_k|(t,x)\geq\epsilon \big\}\big) \\[.3cm] \qquad\qquad + \underbrace{\mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ 0< v_k(t,x)<1/2 \big\}\big)}_{\mathrm{meas}(\mathscr C_k)} \\[.3cm] \qquad \leq \mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ |v-v_k|(t,x)\geq\epsilon \big\}\big) + \displaystyle\frac 1k, \end{array}\] by using \eqref{hyp_neg}. Letting $k$ go to $\infty$ yields \[\mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ \epsilon\leq v(t,x)\leq 1/2-\epsilon \big\}\big)=0.\] Since this property holds for any $\epsilon$, the monotone convergence property leads to \[\mathrm{meas}\big(\big\{(t,x)\in Q_1\cup \widetilde Q,\ 0< v(t,x)< 1/2 \big\}\big)=0.\] Therefore, we have \begin{equation}\label{dich} \textrm{for a.\ e.\ $t\in (-9/4,0)$, either $v(t,x)=0$ or $v(t,x)\geq 1/2$ in $B_1$}.\end{equation} Similarly, let $(t,x)\in (-4,0)\times B_1$ be such that $ v_k(t,x)=0$. We distinguish the following two cases: either $|v-v_k|(t,x)\geq \epsilon$, or $0\leq v(t,x)=(v-v_k)(t,x)\leq |v-v_k|(t,x) \leq \epsilon$.
Coming back to \eqref{hyp_neg}, we get \[\begin{array}{l} \displaystyle\frac 12 \ \mathrm{meas}(\widetilde Q) \leq \mathrm{meas}(\mathscr B_k) \\ [.3cm] \qquad\qquad\qquad \leq \mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ |v-v_k|(t,x)\geq \epsilon \big\}\big) + \mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ v(t,x)\leq \epsilon \big\}\big). \end{array}\] Letting $k$ go to $\infty$ we obtain \[ \displaystyle\frac 12 \ \mathrm{meas}(\widetilde Q) \leq \mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ v(t,x)\leq \epsilon \big\}\big). \] By monotone convergence, as $\epsilon\to 0$, we arrive at \[ \displaystyle\frac 12 \ \mathrm{meas}(\widetilde Q) \leq \mathrm{meas}\big(\big\{(t,x)\in \widetilde Q,\ v(t,x) =0 \big\}\big). \] Consequently, we can find a non negligible set of times $s\in (-3/2,-1)$ such that $v(s,x)=0$ holds for a.\ e.\ $x\in B_1$. Letting $k$ go to $\infty$ in \eqref{eqvk}, we obtain $\partial_t v-\delta^\star\Delta v+\nu=0$ on $(-4,0)\times B_{3/2}$, with $\nu$ a non negative measure. Let $\zeta\in C^\infty_c(B_{3/2})$ be a non negative trial function such that $\zeta(x)=1$ for any $x\in B_1$. We apply Lemma~\ref{distrib}-(b), and we obtain for a.~e.~$t\in (s,0)$, $$ \displaystyle\int_{B_1} v^2(t,x)\, {\mathrm{d}} x\leq \displaystyle\int_{B_{3/2}} v^2(t,x)\zeta^2(x)\, {\mathrm{d}} x\leq \displaystyle\int_{B_{3/2}} v^2(s,x)\zeta^2(x)\, {\mathrm{d}} x + C(t-s)=C(t-s),$$ where, owing to \eqref{dich}, we also know that the left hand side is either null or larger than $\frac{\mathrm{meas}(B_1)}{4}$. We deduce that, actually, $v$ vanishes on $Q_1$. We are going to show that this contradicts \eqref{hyp_neg}. Indeed, let us consider $(t,x)\in Q_1$ such that $v_k(t,x)\geq 1/2$. Then, for any $\epsilon>0$, either $|v-v_k|(t,x) \geq \epsilon$ or $v(t,x)=v_k(t,x)+(v-v_k)(t,x)\geq v_k(t,x) -|v-v_k|(t,x) \geq 1/2-\epsilon$.
With the first property in \eqref{hyp_neg}, it follows that \[\begin{array}{lll} \eta\leq \mathrm{meas}(\mathscr A_k)&\leq& \mathrm{meas}\big(\big\{(t,x)\in Q_1,\ |v-v_k|(t,x) \geq \epsilon\big\}\big) \\[.3cm] &&\qquad + \mathrm{meas}\big(\big\{(t,x)\in Q_1,\ v(t,x) \geq 1/2-\epsilon\big\}\big). \end{array}\] Letting $k$ go to $\infty$ yields \[\eta\leq \mathrm{meas}\big(\big\{(t,x)\in Q_1,\ v(t,x) \geq 1/2-\epsilon\big\}\big).\] Since this inequality holds for any $\epsilon>0$, we conclude, by monotone convergence, that \[\eta\leq \mathrm{meas}\big(\big\{(t,x)\in Q_1,\ v(t,x) \geq 1/2\big\}\big) \] holds, a contradiction. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \noindent {\bf Proof of Proposition~\ref{osc}.} We consider $(t,x)\mapsto v(t,x)$ such that $-1\leq v(t,x)\leq +1$, $\mathrm{meas}\big(\big\{ (t,x)\in \widetilde Q,\ v(t,x)\leq 0\big\}\big)\geq \mu\ \mathrm{meas}(\widetilde Q)$, and $v$ satisfies $\partial_t v-\delta^\star \Delta v\leq 0$ in $Q_2$. The proof splits into two steps. \noindent {\it \underline{Step 1.}} \noindent For $k\in\mathbb N$, set $$v_k(t,x)=2^k(v(t,x)-(1-1/2^k)).$$ We shall show that the integral \[ \displaystyle\iint_{Q_1} [v_k]^2_+\, {\mathrm{d}} x\, {\mathrm{d}} t\] can be made as small as we wish, by choosing $k$ large enough. Observe that $$v_k=2^k(v-1)+1=2v_{k-1}-1,$$ which implies that $v_k\leq 1$ and $$\big\{(t,x)\in \widetilde Q,\ v(t,x)\leq 0\big\}\subset \big\{(t,x)\in \widetilde Q,\ v(t,x)\leq 1-1/2^k\big\}= \big\{(t,x)\in \widetilde Q,\ v_k(t,x)\leq 0\big\}.$$ Thus, by assumption on $v$, we have $$ \mathrm{meas}\big(\big\{ (t,x)\in \widetilde Q,\ v_k(t,x)\leq 0\big\}\big)\geq \mathrm{meas}\big(\big\{ (t,x)\in \widetilde Q,\ v(t,x)\leq 0\big\}\big)\geq \mu\ \mathrm{meas}(\widetilde Q).$$ Let us suppose that, for any $k\in\mathbb N$, \[ \displaystyle\iint_{Q_1} [v_k]_+^2\, {\mathrm{d}} x\, {\mathrm{d}} t\geq \delta \] holds for a certain $\delta >0$.
Since this integral is dominated by $$ \mathrm{meas}\big(\big\{ (t,x)\in Q_1,\ v_k(t,x)\geq 0\big\}\big)= \mathrm{meas}\big(\big\{ (t,x)\in Q_1,\ v_{k-1}(t,x)\geq 1/2\big\}\big) $$ we infer \[\mathrm{meas}\big(\big\{ (t,x)\in Q_1,\ v_{k-1}(t,x)\geq 1/2\big\}\big)\geq \delta\] independently of $k$. Applying Lemma~\ref{l:3sets} yields \[\mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ 0<v_{k-1}(t,x)< 1/2\big\}\big)\geq \alpha,\] still independently of $k$. It follows that \[\begin{array}{l} \mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ v_{k}(t,x)\leq 0\big\}\big) \\[.3cm] \qquad =\mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ 2v_{k-1}(t,x)-1\leq 0\big\}\big) \\[.3cm] \qquad =\mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ v_{k-1}(t,x)\leq 0\big\}\big) \\ [.3cm] \qquad\qquad+\mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ 0<v_{k-1}(t,x)\leq 1/2\big\}\big) \\[.3cm] \qquad \geq \mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ v_{k-1}(t,x)\leq 0\big\}\big)+\alpha. \end{array}\] Since $\mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ v_{0}(t,x)\leq 0\big\}\big) \geq \mathrm{meas}\big(\big\{ (t,x)\in \widetilde Q,\ v_{0}(t,x)\leq 0\big\}\big)\geq \mu\ \mathrm{meas}(\widetilde Q)$, this recursion formula leads to \[ \mathrm{meas}\big(\big\{ (t,x)\in Q_1\cup \widetilde Q,\ v_{k}(t,x)\leq 0\big\}\big)\geq \mu\ \mathrm{meas}(\widetilde Q)+k\alpha . \] However, this cannot hold for every $k$, since the left hand side is bounded by $\mathrm{meas}(Q_2)$. We conclude that, given $\delta >0$, there exists $k_\star\in\mathbb N$ such that \[\displaystyle\iint_{Q_1} [v_{k_\star}]_+^2\, {\mathrm{d}} x\, {\mathrm{d}} t\leq \delta.\] \\ \noindent {\it\underline {Step 2.}} \noindent The second step relies on De Giorgi's analysis. Let us set $w(t,x)=v_{k_\star}(t,x)$. We shall show that, provided $\delta$ is small enough (which means $k_\star$ large enough), $w(t,x)\leq 1/2$ on $Q_{1/2}$.
To this end, let us set, for $\ell\in \mathbb N$, \[\begin{array}{l} m_\ell=\displaystyle\frac12\Big(1-\displaystyle\frac{1}{2^\ell}\Big), \\[.3cm] w_\ell(t,x)=[w(t,x)-m_\ell]_+, \\[.3cm] r_\ell= \displaystyle\frac12\Big(1+\displaystyle\frac{1}{2^\ell}\Big), \qquad t_\ell= -r_\ell^2=- \displaystyle\frac14\Big(1+\displaystyle\frac{1}{2^\ell}\Big)^2. \end{array}\] We are going to work in the domains $Q_{1/2}\subset Q_{r_\ell}\subset Q_1$ that shrink to $Q_{1/2} $ as $\ell\to \infty$, see Fig.~\ref{Fig:Qell}. \begin{figure}[!ht] \begin{center} \begin{tikzpicture} \draw [very thin,->] (0,-5) -- (0,5); \draw (0,5) node[above] {$x$} ; \draw [very thin,->] (-9,0) -- (1,0); \draw (-9,0) node[below] {$t$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (1.5,2) -- (1.5,-2)node [black,midway,xshift=20pt] {\small $B_{1/2}$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (3.5,4) -- (3.5,-4)node [black,midway,xshift=20pt] {\small $\ B_{1}$}; \draw [very thick] (-8,-4) -- (-8,4) ; \draw [very thick] (-8,4) -- (0,4) ; \draw [very thick] (0,4) -- (0,-4) ; \draw [very thick] (-8,-4) -- (0,-4) ; \draw [dotted] (-8,-4) -- (-8,-5); \draw [dashed, very thick] (-6.5,-3.5) -- (0,-3.5); \draw [dashed, very thick] (-6.5,3.5) -- (0,3.5); \draw [dashed, very thick] (-6.5,-3.5) -- (-6.5,3.5); \draw [dashed, very thick] (-5,-3) -- (0,-3); \draw [dashed, very thick] (-5,3) -- (0,3); \draw [dashed, very thick] (-5,-3) -- (-5,3); \draw [dashed, very thick] (-3.5,-2.5) -- (0,-2.5); \draw [dashed, very thick] (-3.5,2.5) -- (0,2.5); \draw [dashed, very thick] (-3.5,-2.5) -- (-3.5,2.5); \draw [dashed, very thick] (-2.4,-2.2) -- (0,-2.2); \draw [dashed, very thick] (-2.4,2.2) -- (0,2.2); \draw [dashed, very thick] (-2.4,-2.2) -- (-2.4,2.2); \draw [dotted] (0,-4) -- (3.5,-4); \draw [dotted] (0,4) -- (3.5,4); \draw [dotted] (0,-2) -- (1.5,-2); \draw [dotted] (0,2) -- (1.5,2); \draw [dotted] (-2,-2) -- (-2,-5); \draw (-8.5,-5) node[below] {{\small 
{$t=-1$}}}; \draw (-2,-5) node[below] {{\small {$t=-1/4$}}}; \draw (0.5,-5) node[below] {{\small {$t=0$}}}; \draw [very thick] (-2,-2) -- (-2,2) ; \draw [very thick] (-2,2) -- (0,2) ; \draw [very thick] (0,2) -- (0,-2) ; \draw [very thick] (-2,-2) -- (0,-2) ; \coordinate[] (A) at (-2,-2); \coordinate[] (B) at (0,-2); \coordinate[] (C) at (0,2); \coordinate[] (D) at (-2,2); \fill[gray, opacity=0.5, draw=black] (A) -- (B) -- (C) -- (D) -- cycle; \end{tikzpicture} \caption{ The domains $Q_1$, $Q_{r_\ell}$ and $Q_{1/2}$ (the grey box)} \label{Fig:Qell}\end{center} \end{figure} We consider a sequence of functions $\zeta_\ell \in C_c^\infty(B_{r_{\ell-1}})$ such that $0\leq \zeta_\ell(x)\leq 1$ on $B_{r_{\ell-1}}$ and $\zeta_\ell(x)= 1$ on $B_{r_{\ell}}$. We shall use the basic estimate \[ |\nabla\zeta_\ell(x)|\leq C2^\ell,\qquad \displaystyle\frac{1}{t_\ell-t_{\ell-1}}\leq C2^{2\ell}.\] We already know that $0\leq w_\ell(t,x)\leq 1$, by definition. We can apply the energy estimate in Lemma ~\ref{distrib}, which reads \begin{equation}\label{energ00} \begin{array}{l} \displaystyle\frac12\displaystyle\int_{B_1} w_\ell^2(t,x)\zeta_\ell^2(x)\, {\mathrm{d}} x +\delta^\star\displaystyle\int_s^t\displaystyle\int_{B_1}| \nabla(\zeta_\ell w_\ell)|^2(\tau, x)\, {\mathrm{d}} x\, {\mathrm{d}} \tau \\ [.3cm] \qquad\qquad\qquad\qquad \leq \displaystyle\frac12\displaystyle\int_{B_1} w_\ell^2(s,x)\zeta_\ell^2(x)\, {\mathrm{d}} x +\delta^\star \displaystyle\int_s^t\displaystyle\int_{B_1} w_\ell^2| \nabla\zeta_\ell |^2 (\tau, x)\, {\mathrm{d}} x\, {\mathrm{d}} \tau. \end{array}\end{equation} for $-1<s< t_\ell < t<0$ (note that here we keep explicit the integral in the right hand side that is roughly estimated by a constant in Lemma~\ref{distrib}). Averaging over $s\in (t_{\ell-1},t_\ell)$ (and using the fact that the integral of a positive quantity over $(s,t)$ is thus bounded below --- resp. above --- by the integral over $(t_\ell,t)$ --- resp. 
$(t_{\ell-1},t)$) yields $$ \begin{array}{l} \displaystyle\frac12\displaystyle\int_{B_1} w_\ell^2(t,x)\zeta_\ell^2(x)\, {\mathrm{d}} x +\delta^\star\displaystyle\int_{t_{\ell}}^t\displaystyle\int_{B_1}| \nabla(\zeta_\ell w_\ell)|^2(\tau, x)\, {\mathrm{d}} x\, {\mathrm{d}} \tau \\ [.3cm] \qquad\qquad\qquad\qquad \leq (1/2+\delta^\star)C 2^{2\ell} \displaystyle\int_{t_{\ell-1}}^0 \displaystyle\int_{\mathrm{supp}(\zeta_\ell)} |w_\ell|^2(\tau ,x) \, {\mathrm{d}} x\, {\mathrm{d}} \tau. \end{array}$$ Let us set \[\mathscr U_\ell=\displaystyle\int_{t_\ell}^0\displaystyle\int_{B_{r_\ell}} |w_\ell|^2(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t,\] and \[\mathscr E_\ell=\displaystyle\sup_{t_\ell\leq t\leq 0}\displaystyle\int_{B_1} w_\ell^2(t,x)\zeta_\ell^2(x)\, {\mathrm{d}} x +\displaystyle\int_{t_{\ell}}^0\displaystyle\int_{B_1}| \nabla(\zeta_\ell w_\ell)|^2(\tau, x)\, {\mathrm{d}} x\, {\mathrm{d}} \tau. \] We wish to establish a non linear recursion for $\mathscr U_\ell$, which will allow us to justify that it tends to 0 as $\ell\to \infty$. On the one hand, since \[w_\ell\leq w_{\ell-1} \qquad\textrm{and} \qquad \mathrm{supp}(\zeta_\ell)\subset B_{r_{\ell-1}},\] we note that \eqref{energ00} yields \[ \mathscr E_\ell \leq (2+1/\delta^\star) (1/2+\delta^\star)C 2^{2\ell} \mathscr U_{\ell-1}. \] On the other hand, we observe that $$\begin{array}{lll} \mathscr U_\ell&\leq& \displaystyle\int_{t_\ell}^0\displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^2(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t \\ [.3cm]&\leq& \left( \displaystyle\int_{t_\ell}^0\displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^{2(N+2)/N}(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t \right)^{N/(N+2)}\\ [.3cm]&& \qquad\qquad\qquad \times \left(\mathrm{meas}\big(\big\{(t,x)\in (t_\ell,0)\times B_{r_\ell},\ \zeta_\ell w_\ell>0\big\}\big) \right)^{2/(N+2)} ,\end{array}$$ by using H\"older's inequality.
Remark that \[w-m_{\ell-1}=w-m_\ell+\displaystyle\frac{1}{2^{\ell+1}}, \] which leads to \[\begin{array}{l} \mathrm{meas}\big(\big\{(t,x)\in (t_\ell,0)\times B_{r_\ell},\ \zeta_\ell w_\ell>0\big\}\big) \\[.3cm] \qquad \leq \mathrm{meas}\big(\big\{(t,x)\in (t_{\ell-1},0)\times B_{r_{\ell-1}},\ w_{\ell-1}>2^{-\ell-1}\big\}\big) \\[.3cm] \qquad \leq 2^{2\ell+2} \mathscr U_{\ell-1}, \end{array}\] by virtue of the Bienaym\'e-Tchebyschev inequality. Next, we use the Gagliardo-Nirenberg-Sobolev inequality, see \cite[Theorem p.~125]{Nir} \[ \left( \displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^{2N/(N-2)}(t,x)\, {\mathrm{d}} x\right)^{(N-2)/N} \leq C_S \displaystyle\int_{B_{r_\ell}} |\nabla(\zeta_\ell w_\ell)|^{2}(t,x)\, {\mathrm{d}} x.\] Mind that we have integrated with respect to the space variable only. We can write \[ \displaystyle\frac{2(N+2)}{N}=\theta\displaystyle\frac{2N}{N-2}+2(1-\theta), \qquad \theta=\displaystyle\frac{N-2}{N}\in(0,1),\] so that \[\begin{array}{l} \displaystyle\int_{t_\ell}^0 \displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^{2(N+2)/N}(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] \qquad\leq \displaystyle\int_{t_\ell}^0 \left( \displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^{2N/(N-2)}(t,x)\, {\mathrm{d}} x \right)^{\theta} \underbrace{ \left( \displaystyle\int_{B_{r_\ell}} |\zeta_\ell w_\ell|^{2}(t,x) \, {\mathrm{d}} x \right)^{1-\theta}}_{\leq \mathscr E_\ell^{1-\theta}} \, {\mathrm{d}} t \\[.3cm] \qquad \leq C_S \mathscr E_\ell^{2-\theta}. \end{array} \] Therefore, gathering all this information together, we obtain \[ \mathscr U_\ell\leq \Lambda ^\ell \mathscr U_{\ell-1}^{1+2/(N+2)}, \] for a certain constant $\Lambda >1$. Owing to Lemma~\ref{2bete}, we deduce that $\lim_{\ell\to \infty} \mathscr U_\ell=0$ provided $\mathscr U_0$ is small enough. The smallness condition on $\mathscr U_0$ is precisely ensured by the definition $w=v_{k_\star}$ coming from Step~1.
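For the reader's convenience, here is a sketch of the fast-convergence mechanism behind Lemma~\ref{2bete}, assuming it takes the standard form (the precise statement and constants are those of the lemma itself). Write $\beta=2/(N+2)$ and try the ansatz $\mathscr U_\ell\leq \mathscr U_0\,\Lambda^{-\ell/\beta}$ by induction:

```latex
\mathscr U_\ell
  \leq \Lambda^\ell\,\mathscr U_{\ell-1}^{1+\beta}
  \leq \Lambda^\ell\big(\mathscr U_0\,\Lambda^{-(\ell-1)/\beta}\big)^{1+\beta}
  = \big(\mathscr U_0^{\,\beta}\,\Lambda^{(1+\beta)/\beta}\big)\,
    \mathscr U_0\,\Lambda^{-\ell/\beta},
% since \ell-(1+\beta)(\ell-1)/\beta = (1+\beta)/\beta - \ell/\beta.
```

Hence the induction closes, and $\mathscr U_\ell\to 0$, as soon as $\mathscr U_0^{\,\beta}\Lambda^{(1+\beta)/\beta}\leq 1$, that is $\mathscr U_0\leq \Lambda^{-(1+\beta)/\beta^2}=\Lambda^{-(N+2)(N+4)/4}$.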
Since $$\displaystyle\frac{1}{|t_{\ell}|}\displaystyle\int_{t_\ell}^0\displaystyle\int_{B_{r_\ell}} |w_\ell|^2(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t\leq 4\,\mathscr U_\ell$$ (recall that $1/4<|t_\ell|\leq 1$), we conclude, by applying Fatou's lemma, that $$ 4\displaystyle\iint_{Q_{1/2}} [w-1/2]_+^2(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t\leq \liminf_{\ell\to\infty}\displaystyle\frac{1}{|t_{\ell}|}\displaystyle\int_{t_\ell}^0\displaystyle\int_{B_{r_\ell}} |w_\ell|^2(t,x)\, {\mathrm{d}} x\, {\mathrm{d}} t=0,$$ so that, finally, $w(t,x)\leq 1/2$ holds a.~e.~on $Q_{1/2}$. Coming back to the change of unknown, $w(t,x)=v_{k_\star}(t,x)=2^{k_\star}(v(t,x)-(1-1/2^{k_\star}))\leq 1/2$ becomes $$ v(t,x)\leq 1+\displaystyle\frac{1}{2^{k_\star+1}}-\displaystyle\frac{1}{2^{k_\star}}=1-\displaystyle\frac{1}{2^{k_\star+1}}<1. $$ \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \section{$L^{(N+1)/N}$ estimate on the total mass} \label{S:4} This Section is devoted to the proof of the following statement. \begin{proposition}\label{Fabes} There exists a constant $K>0$ such that, for any solution $M\geq 0$ of \eqref{eqMass} in $Q_2$, we have \[ \|M\|_{L^{(N+1)/N}(Q_1)}\leq K \ \displaystyle\sup_{-4\leq t\leq 0} \displaystyle\int_{B_2} M(t,x)\, {\mathrm{d}} x .\] \end{proposition} \noindent {\bf Proof.} Let $f$ be in $C^\infty_c(Q_1)$ such that $$\|f\|_{L^{N+1}(Q_1)}\leq 1.$$ We consider the solution of the final-value problem \begin{equation}\label{equ}\begin{array}{ll} \partial_t u +d\Delta u=f \qquad & \textrm{in $(0,T)\times B_2$}, \\ u(T,x)=0,\qquad& u\big|_{\partial B_2}=0. \end{array}\end{equation} We start by recalling the Alexandrov-Bakelman-Pucci-Krylov-Tso (ABPKT) inequality \cite{Ale,Bak,Puc,Kry,Tso}: there exists a constant $\mathscr C>0$ such that \begin{equation}\label{ABP} \displaystyle\sup_{(t,x)\in Q_2}|u(t,x)| \leq \mathscr C\ \|f\|_{L^{N+1}(Q_2)}.
\end{equation} In order to obtain an estimate on the $L^{(N+1)/N}(Q_1)$ norm of $M$, solution of \eqref{eqMass}, we proceed by duality, bearing in mind the definition \[ \|M\|_{L^{(N+1)/N}(Q_1)}= \sup \left\{ \Big|\displaystyle\iint_{Q_1} Mf\, {\mathrm{d}} x\, {\mathrm{d}} t\Big|,\ f\in C^\infty_c(Q_1),\ \|f\|_{L^{N+1}(Q_1)}\leq 1\right\}.\] \\ Let $\zeta$ be a cut-off function: $\zeta\in C^\infty_c(B_{3/2})$, $\zeta(x)=1$ for any $x\in B_1$, and $0\leq \zeta(x)\leq 1$ for any $x\in\mathbb R^N$. Remark that \[ \displaystyle\iint_{Q_2} \zeta M f\, {\mathrm{d}} x\, {\mathrm{d}} t=\displaystyle\iint_{Q_1} Mf \, {\mathrm{d}} x\, {\mathrm{d}} t,\] since $\mathrm{supp}(f)\subset Q_1$. We compute this integral by using \eqref{equ} \[ \begin{array}{lll} \displaystyle\iint_{Q_2} \zeta M f\, {\mathrm{d}} x\, {\mathrm{d}} t &=& \displaystyle\iint_{Q_2} \zeta M(\partial_t u+d\Delta u) \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] &=&\displaystyle\int_{-4}^0 \displaystyle\frac{\, {\mathrm{d}}}{\, {\mathrm{d}} t}\left( \displaystyle\int_{B_2} \zeta Mu \, {\mathrm{d}} x\right)\, {\mathrm{d}} t -\displaystyle\iint_{Q_2} \zeta u\Delta (dM) \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm]&&+\displaystyle\iint_{Q_2} \zeta Md\Delta u \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] &=& \displaystyle\int_{B_2} \zeta Mu(0,x) \, {\mathrm{d}} x -2\displaystyle\iint_{Q_2}dM \nabla\zeta \cdot \nabla u\ \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm]&&-\displaystyle\iint_{Q_2} udM \Delta \zeta \, {\mathrm{d}} x\, {\mathrm{d}} t. \end{array}\] We have used several integrations by parts where the boundary terms vanish owing to the fact that $\mathrm{supp}(\zeta)\subset B_{3/2}\subset B_2$. The integrand of the penultimate term in the right hand side can be rewritten as $\sqrt {dM}\nabla u\cdot \sqrt {dM}\nabla\zeta$, and then we use the Cauchy-Schwarz inequality and the Young inequality $ab=\sqrt{\kappa} a\frac{b}{\sqrt {\kappa}}\leq \frac12(\kappa a^2+ \frac{b^2}{\kappa})$.
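Concretely (a sketch of the bookkeeping; the splitting of the constant is ours), the Cauchy-Schwarz and Young inequalities applied to this term give

```latex
2\left|\iint_{Q_2} dM\,\nabla\zeta\cdot\nabla u\, {\mathrm{d}} x\, {\mathrm{d}} t\right|
\leq 2\iint_{Q_2} \sqrt{dM}\,|\nabla u|\;\sqrt{dM}\,|\nabla\zeta|\, {\mathrm{d}} x\, {\mathrm{d}} t
\leq \kappa\iint_{Q_2} dM\,|\nabla u|^2\, {\mathrm{d}} x\, {\mathrm{d}} t
 + \frac{1}{\kappa}\iint_{Q_2} dM\,|\nabla\zeta|^2\, {\mathrm{d}} x\, {\mathrm{d}} t,
```

which produces exactly the two middle terms of the estimate \eqref{est0} below.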
We thus arrive at the following estimate \begin{equation}\label{est0} \begin{array}{lll} \left|\displaystyle\iint_{Q_2} \zeta f M\, {\mathrm{d}} x\, {\mathrm{d}} t\right| &\leq& \left| \displaystyle\int_{B_2} \zeta Mu(0,x) \, {\mathrm{d}} x\right| \\[.3cm]&&+ \kappa \displaystyle\iint_{Q_2}dM | \nabla u|^2\ \, {\mathrm{d}} x\, {\mathrm{d}} t +\displaystyle\frac 1\kappa \displaystyle\iint_{Q_2}dM | \nabla \zeta|^2\ \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm]&&+\left|\displaystyle\iint_{Q_2} udM \Delta \zeta \, {\mathrm{d}} x\, {\mathrm{d}} t\right|, \end{array}\end{equation} where $\kappa\in(0,1)$ is a parameter that will be determined later on. Inspired by \cite[proof of Theorem~2.1]{FaSt}, in order to estimate the second integral in the right hand side, we use the elementary relation \[ |\nabla u|^2=\displaystyle\frac12 \Delta (u^2)-u\Delta u .\] Going back to \eqref{equ}, we are thus led to \[d|\nabla u|^2=\displaystyle\frac d 2 \Delta (u^2)+ \displaystyle\frac12\partial_t (u^2) - uf.\] The advantage of this formulation lies in the fact that, denoting by $\nu$ the outward unit normal on $\partial B_2$, \[u\big|_{\partial B_2}=u^2\big|_{\partial B_2}=0,\qquad \nabla u^2\cdot \nu\big|_{\partial B_2} =2u\nabla u\cdot \nu\big|_{\partial B_2}=0,\] which allows us to perform further integration by parts.
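The elementary relation is nothing but the expansion of the Laplacian of a square:

```latex
\Delta(u^2)=\nabla\cdot\big(2u\nabla u\big)=2\,|\nabla u|^2+2\,u\Delta u,
\qquad\text{whence}\qquad
|\nabla u|^2=\frac12\Delta(u^2)-u\Delta u .
```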
We get \[\begin{array}{l} \displaystyle\iint_{Q_2}dM | \nabla u|^2\ \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] \qquad = \displaystyle\frac12\displaystyle\iint_{Q_2}dM \Delta ( u^2)\ \, {\mathrm{d}} x\, {\mathrm{d}} t + \displaystyle\frac12 \displaystyle\iint_{Q_2} M \partial_t (u^2)\ \, {\mathrm{d}} x\, {\mathrm{d}} t - \displaystyle\iint_{Q_2}M u f \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] \qquad = -\displaystyle\frac12\displaystyle\iint_{Q_2}\nabla(dM)\cdot \nabla ( u^2)\ \, {\mathrm{d}} x\, {\mathrm{d}} t + \displaystyle\frac12 \displaystyle\int_{B_2} M u^2(0,x)\ \, {\mathrm{d}} x \\[.3cm] \qquad\qquad- \displaystyle\frac12 \displaystyle\iint_{Q_2}\Delta(dM) u^2 \, {\mathrm{d}} x\, {\mathrm{d}} t - \displaystyle\iint_{Q_2}M u f \, {\mathrm{d}} x\, {\mathrm{d}} t \\[.3cm] \qquad = \displaystyle\frac12\displaystyle\int_{B_2}M u^2(0,x)\ \, {\mathrm{d}} x - \displaystyle\iint_{Q_2}M u f \, {\mathrm{d}} x\, {\mathrm{d}} t. \end{array}\] For the last term, since $\mathrm{supp}(f)\subset Q_1$, the integral actually reduces over $Q_1$ only. The H\"older inequality then yields \[\begin{array}{lll} \left| \displaystyle\iint_{Q_2}M u f \, {\mathrm{d}} x\, {\mathrm{d}} t\right| &=& \left| \displaystyle\iint_{Q_1}M u f \, {\mathrm{d}} x\, {\mathrm{d}} t\right| \leq \|u\|_{L^\infty(Q_1)} \|M\|_{L^{(N+1)/N}(Q_1)} \|f\|_{L^{N+1}(Q_1)} \\[.3cm]&\leq&\mathscr C\ \|f\|^2_{L^{N+1}(Q_1)} \|M\|_{L^{(N+1)/N}(Q_1)}, \end{array}\] by using \eqref{ABP}. 
Besides, still by using \eqref{ABP} and $\mathrm{supp}(f)\subset Q_1$, we get \[\begin{array}{lll} \displaystyle\frac12\displaystyle\int_{B_2}M u^2(0,x) \, {\mathrm{d}} x &\leq& \displaystyle\frac12 \|u\|_{L^\infty(Q_2)}^2\|M\|_{L^\infty(-4,0;L^1(B_2))} \\[.3cm] & \leq&\mathscr C^2\ \|f\|^2_{L^{N+1}(Q_1)} \|M\|_{L^\infty(-4,0;L^1(B_2))}.\end{array}\] The last two terms in the right hand side of \eqref{est0} are estimated as follows: we get \[ \displaystyle\iint_{Q_2}dM | \nabla \zeta|^2\ \, {\mathrm{d}} x\, {\mathrm{d}} t \leq 4\delta^\star \|\zeta\|^2_{W^{1,\infty}(B_2)}\|M\|_{L^\infty((-4,0);L^1(B_2))},\] and \[ \begin{array}{lll} \left|\displaystyle\iint_{Q_2} udM \Delta \zeta \, {\mathrm{d}} x\, {\mathrm{d}} t\right| &\leq& 4\delta^\star \|\zeta\|_{W^{2,\infty}(B_2)}\ \|u\|_{L^\infty(Q_2)} \|M\|_{L^\infty((-4,0);L^1(B_2))} \\[.3cm] &\leq& 4\delta^\star \|\zeta\|_{W^{2,\infty}(B_2)}\ \mathscr C \|f\|_{L^{N+1}(Q_1)} \|M\|_{L^\infty((-4,0);L^1(B_2))}. \end{array}\] The first integral in the right hand side of \eqref{est0} is dominated by \[ \|u\|_{L^\infty(Q_2)}\|M\|_{L^\infty(-4,0;L^1(B_2))} \leq \mathscr C \|f\|_{L^{N+1}(Q_2)} \|M\|_{L^\infty(-4,0;L^1(B_2))}. \] Finally, we have found a constant $C>0$ such that for any $f\in C^\infty_c(Q_1)$, with $\|f\|_{L^{N+1}(Q_1)}\leq 1$, we have \[ \left|\displaystyle\iint_{Q_1} fM\, {\mathrm{d}} x\, {\mathrm{d}} t \right| \leq C\Big(\Big(1+\kappa + \displaystyle\frac1\kappa\Big) \|M\|_{L^\infty(-4,0;L^1(B_2))} + \kappa \|M\|_{L^{(N+1)/N}(Q_1)} \Big).\] Taking the supremum over such $f$'s makes the dual norm $L^{(N+1)/N}(Q_1)$ appear. We choose $\kappa$ small enough, so that $1-\kappa C>0$, and we conclude that \[ \|M\|_{L^{(N+1)/N}(Q_1)}\leq \displaystyle\frac{C(1+\kappa +1/\kappa)}{1-\kappa C} \|M\|_{L^\infty(-4,0;L^1(B_2))} \] holds.
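For instance (an explicit choice of ours, not the one made in the text), assuming, as we may, $C\geq 1$ and taking $\kappa=\frac{1}{2C}\in(0,1)$, we get $1-\kappa C=\frac12$ and

```latex
\|M\|_{L^{(N+1)/N}(Q_1)}
\leq 2C\Big(1+\frac{1}{2C}+2C\Big)\,\|M\|_{L^\infty(-4,0;L^1(B_2))}
=\big(4C^2+2C+1\big)\,\|M\|_{L^\infty(-4,0;L^1(B_2))},
```

so that one may take $K=4C^2+2C+1$ in Proposition~\ref{Fabes}.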
\mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \section{End of proof of Theorem \ref{Theo_principal}: proof of Lemma~\ref{lemm_de giorgi2}} \label{S:fin} \noindent Let $0<\epsilon_0<\sqrt{T_{\mathrm{max}}/2}$. For each component $a^{(\epsilon)}_i$, Proposition~\ref{Fabes} gives \begin{equation}\label{lemmadeg1} \|a^{(\epsilon)}_i\|_{L^{(N+1)/N}(Q_1)}\leq \|M^{(\epsilon)}\|_{L^{(N+1)/N}(Q_1)}\leq K \ \|M^{(\epsilon)}\|_{L^\infty(-4,0;L^1(B_2))}. \end{equation} Next, Lemma~\ref{l:L1shr} yields \begin{equation}\label{lemmadeg2} \|M^{(\epsilon)}\|_{L^\infty(-4,0;L^1(B_2))}\leq c\|\Phi\|_{L^\infty} \epsilon^{\alpha-2+2/(q-1)} . \end{equation} Combining \eqref{lemmadeg1} and \eqref{lemmadeg2} with Proposition~\ref{P:CaVa} leads to \begin{equation}\label{lemmadeg3} \sum_{i=1}^{p} \|a^{(\epsilon)}_i\|_{L^{(N+1)/N}(Q_1)}\leq \mathscr K\ \|a^0\|_{L^\infty(\mathbb R^N)}^{1-2/N} \|a^0\|_{L^1(\mathbb R^N)}^{2/N}\ \epsilon^{\alpha-2+2/(q-1)} \end{equation} for a constant $\mathscr K$ which depends on $p$ and $N$. This information is useful as far as the degree of the non linearity is such that the exponent remains non negative, which means $q\leq 2+\frac{\alpha}{2-\alpha}$. This ends the proof of Lemma~\ref{lemm_de giorgi2}. As explained in Section~\ref{S:main}, having at hand this property of the rescaled solution, we go back to the original unknown, and we deduce the $L^\infty$ bound of the solution, see Corollary~\ref{unibound}. Theorem~\ref{Theo_principal}, and therefore Theorem~\ref{theo_main} too, is fully justified. \mbox{}\hfill \raisebox{-0.2pt}{\rule{5.6pt}{6pt}\rule{0pt}{0pt}} \medskip\par \begin{rmk} The estimates discussed above differ from \cite[see in particular Corollary~14 \& Lemma~15]{CaVa}; notably, the smallness condition on $\epsilon_0$ does not involve the initial entropy \eqref{hyp_ci}. \end{rmk}
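The threshold on $q$ simply records when the exponent of $\epsilon$ in \eqref{lemmadeg3} is non negative: for $0<\alpha<2$ and $q>1$,

```latex
\alpha-2+\frac{2}{q-1}\geq 0
\;\Longleftrightarrow\;
\frac{2}{q-1}\geq 2-\alpha
\;\Longleftrightarrow\;
q\leq 1+\frac{2}{2-\alpha}
= 2+\frac{\alpha}{2-\alpha}.
```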
\section{Introduction} \input{Introduction.tex} \section{Definitions and Background} \input{Definitions.tex} \section{Finiteness of Once-Unclean Arcs} \input{Finiteness.tex} \section{Monodromies with Clean or Once-Unclean Arcs} \input{Monodromies.tex} \section{Crossing Changes} \input{CrossingChanges.tex} \bibliographystyle{gtart} \subsection{Manifolds} We can use Lemmas \ref{lemma:FactorizationClean} and \ref{lemma:FactorizationUnclean} to give a link-surgery description of every GOF-knot with a clean arc or once-unclean arc, and describe the manifolds in which the GOF-knots sit. Let $GOF(0;k,\ell)$ and $GOF(\pm1;m)$ be GOF-knots with monodromy $D_\beta^\ell\circ D_\alpha^k$ and $(D_\partial)^{\pm1/2} \circ D_\alpha^m$, respectively, and let $M(0;k,\ell)$ and $M(\pm1;m)$, respectively, be the manifolds in which they sit. We will describe each resulting knot complement as the result of particular Dehn surgeries on the trefoil knot in $S^3$. With disjoint arcs $\alpha$ and $\beta$ in a once-punctured torus $F$, the monodromy of the trefoil knot is represented by $D_\alpha\circ D_\beta$. Namely, the exterior of the trefoil knot is homeomorphic to the manifold obtained from $F\times [0,1]$ by identifying the two points $(x,1)$ and $(h(x),0)$, and the meridian corresponds to $y\times [0,1]$ for a point $y$ in $\partial F$, see Figure \ref{fig:TrefoilFiber}. For a loop $c$ in $F\times \{*\}$, we consider $\left(\frac{n\ell_c+1}{n}\right)$-surgery along $c$, where $\ell_c$ is the linking number of $c$ with a loop parallel to $c$ in $F\times \{*\}$. This surgery corresponds to the operation of cutting the fiber bundle along $F\times \{*\}$ and gluing it again after twisting $n$ times along $c$. Then the resulting manifold is a new once-punctured torus bundle, whose monodromy is changed by $D_c^n$ from the original one, where $D_c$ is the Dehn twist along $c$.
We will use this method multiple times to give surgery descriptions of $GOF(0;k,\ell)$ in $M(0;k,\ell)$ and $GOF(\pm1;m)$ in $M(\pm1;m)$. Note that $\ell_c=1$ if $c=c_\alpha\times\{*\}$ or $c=c_\beta\times\{*\}$. In the case of $GOF(0;k,\ell)$ in $M(0;k,\ell)$, which has monodromy $h=D_\beta^\ell\circ D_\alpha^k$, we use two loops $c_1=c_\alpha\times\left\{\frac{1}{3}\right\}$ and $c_2=c_\beta\times\left\{\frac{2}{3}\right\}$ to provide a surgery description. The surgery coefficients are $\frac{k}{k-1}$ and $\frac{\ell}{\ell-1}$ for $c_1$ and $c_2$, respectively. Then the resulting manifold is a once-punctured torus bundle with the monodromy $D_\beta^{\ell-1}\circ D_\alpha^{k}\circ D_\beta$, which is conjugate to $h$. Let $L$ be the $\alpha$-loop for the fiber $F \times \set{0}$. By Kirby calculus, we have a surgery description of $GOF(0;k,\ell)$, together with the $\alpha$-loop, $L$; see Figure \ref{fig:surgerydescriptionGOF(0;k,l)}. The manifold $M(0;k,\ell)$ is homeomorphic to $L(-\ell,1)\, \sharp\, L(-k,1)$. In particular, \begin{center} $M(0;k,\ell)=\begin{cases} L(-k,1)&(\ell=\pm1)\\ L(2,1)\, \sharp\, L(-k,1)&(\ell=\pm2).\\ \end{cases}$ \end{center} In the case of $GOF(1;m)$ in $M(1;m)$, which has monodromy $h=(D_\partial)^{1/2} \circ D_\alpha^m$, we use three loops $c_1=c_\alpha\times\left\{\frac{1}{4}\right\}$, $c_2=c_\beta\times\left\{\frac{1}{2}\right\}$, and $c_3=c_\alpha\times\left\{\frac{3}{4}\right\}$ for a surgery description. The surgery coefficients are $2$, $2$, and $\frac{m+3}{m+2}$ for $c_1$, $c_2$, and $c_3$, respectively. Then the resulting manifold is a once-punctured torus bundle with the monodromy $D_\alpha^{m+2}\circ D_\beta\circ D_\alpha^2\circ D_\beta$, which is conjugate to $h=(D_\alpha\circ D_\beta)^3\circ D_\alpha^m$. Let $L$ be the $\alpha$-loop for the fiber $F \times \set{ \frac{3}{4} }$. By Kirby calculus, we have a surgery description of $GOF(1;m)$, together with the $\alpha$-loop, $L$; see Figure \ref{fig:surgerydescriptionGOF(1;m)}.
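As a quick arithmetic sanity check (an illustration of ours, not code from the paper), the surgery coefficient $\frac{n\ell_c+1}{n}$ from the previous paragraph, specialized to $\ell_c=1$, reproduces each of the coefficients listed above; the function name \verb|coeff| and the sample values of $k,\ell,m$ are ours.

```python
# Illustrative check (not from the paper): the surgery coefficient
# (n*l_c + 1)/n for an n-fold twist along c, specialized to l_c = 1,
# reproduces the coefficients quoted above for GOF(0;k,l) and GOF(1;m).
from fractions import Fraction

def coeff(n, l_c=1):
    """Surgery coefficient realizing an n-fold twist along c."""
    return Fraction(n * l_c + 1, n)

# GOF(0;k,l): a (k-1)-twist along c_1 and an (l-1)-twist along c_2
# insert D_alpha^(k-1) and D_beta^(l-1) into the trefoil monodromy.
k, l = 5, 3
assert coeff(k - 1) == Fraction(k, k - 1)
assert coeff(l - 1) == Fraction(l, l - 1)

# GOF(1;m): 1-twists along c_1, c_2 and an (m+2)-twist along c_3.
m = 4
assert coeff(1) == 2
assert coeff(m + 2) == Fraction(m + 3, m + 2)
```

In particular, a coefficient of the form $\frac{n+1}{n}$ always corresponds to a single $n$-fold twist along a loop in the fiber.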
As the exterior of the $(2, 4)$-torus link is a Seifert fibered space over the annulus with one exceptional fiber of multiplicity $2$, and the regular fibers intersect the $0$- and $(-m)$-slopes 2 and $m+2$ times, respectively, the result of Dehn filling is the Seifert fibered space over the sphere with three exceptional fibers, having Seifert invariants $\left(-1; (2, 1), (2, 1), (m+2, 1)\right)$, see also the Appendix. In particular then (see \cite{BalChiMacNiOchVafPMRP}), \begin{center} $M(1;m) = \begin{cases} L(4,-1)&(m=-1)\\ L(2,1)\, \sharp\, L(2,1)&(m=-2)\\ L(4,1)&(m=-3)\\ \mbox{a prism manifold}&(\mbox{otherwise}). \end{cases}$ \end{center} In the case of $GOF(-1;m)$ in $M(-1;m)$, which has a monodromy $h=(D_\partial)^{-1/2} \circ D_\alpha^m$, we have a surgery description by taking the mirror image of the case of $GOF(1;-m)$ in $M(1;-m)$. The manifold $M(-1;m)$ is homeomorphic to the Seifert fibered space having Seifert invariants $(1; (2, -1), (2, -1), (m-2, 1))$. \vspace{5mm} \begin{figure}[h] \begin{center} \includegraphics[scale=1]{TrefoilFiber-eps-converted-to.pdf} \begin{picture}(400,0)(0,0) \put(120,70){$\alpha$} \put(160,70){$\beta$} \put(215,115){$c_\alpha$} \put(155,115){$c_\beta$} \end{picture} \caption{The trefoil in $S^3$ has monodromy $D_\alpha \circ D_\beta$.} \label{fig:TrefoilFiber} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=.6]{SurgeryDescriptionforGOF0-k-lsurface-eps-converted-to.pdf} \begin{picture}(400,0)(0,0) \put(70,135){\scriptsize $L$} \put(130,170){\textcolor{blue}{\scriptsize $c_1(\frac{k}{k-1})$}} \put(70,105){\textcolor{green}{\scriptsize $c_2(\frac{\ell}{\ell-1})$}} \put(240,120){\textcolor{blue}{\scriptsize $\frac{k}{k-1}$}} \put(220,120){\textcolor{green}{\scriptsize $\frac{\ell}{\ell-1}$}} \put(240,30){\textcolor{blue}{\scriptsize $-k$}} \put(220,30){\textcolor{green}{\scriptsize $-\ell$}} \put(95,45){\textcolor{blue}{\scriptsize $-k$}} \put(120,85){\textcolor{green}{\scriptsize
$-\ell$}} \put(180,90){\scriptsize $(-1)$-twist along \textcolor{blue}{$c_1$} and \textcolor{green}{$c_2$} $\downarrow $} \put(185,135){$\sim$} \put(185,45){$\sim$} \end{picture} \caption{$GOF(0;k,\ell)$ with monodromy $D_\beta^\ell\circ D_\alpha^k $ and an $\alpha$-loop in $M(0;k,\ell)$.} \label{fig:surgerydescriptionGOF(0;k,l)} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=.6]{SurgeryDescriptionforGOF1-msurface-eps-converted-to.pdf} \begin{picture}(400,0)(0,0) \put(75,305){\scriptsize $L$} \put(160,310){\textcolor{blue}{\scriptsize $c_1(2)$}} \put(85,285){\textcolor{green}{\scriptsize $c_2(2)$}} \put(150,280){\textcolor{red}{\scriptsize $c_3(\frac{m+3}{m+2})$}} \put(295,300){\textcolor{blue}{\scriptsize $2$}} \put(250,283){\textcolor{green}{\scriptsize $2$}} \put(270,270){\textcolor{red}{\scriptsize $\frac{m+3}{m+2}$}} \put(255,180){\textcolor{blue}{\scriptsize $1$}} \put(230,200){\textcolor{green}{\scriptsize $1$}} \put(270,190){\textcolor{red}{\scriptsize $-(m+3)$}} \put(100,200){\textcolor{blue}{\scriptsize $0$}} \put(75,190){\textcolor{red}{\scriptsize $-(m+4)$}} \put(95,110){\textcolor{blue}{\scriptsize $0$}} \put(70,100){\textcolor{red}{\scriptsize $-(m+4)$}} \put(240,100){\textcolor{blue}{\scriptsize $0$}} \put(270,110){\textcolor{red}{\scriptsize $-m$}} \put(240,13){\textcolor{blue}{\scriptsize $0$}} \put(255,30){\textcolor{red}{\scriptsize $-m$}} \put(120,85){\textcolor{blue}{\scriptsize $0$}} \put(140,50){\textcolor{red}{\scriptsize $-m$}} \put(180,300){$\sim$} \put(250,265){ $\downarrow$} \put(185,265){\scriptsize $(-1)$-twist along \textcolor{red}{$c_3$}} \put(175,220){ $\longleftarrow$} \put(155,210){\scriptsize $(-1)$-twist along \textcolor{green}{$c_2$}} \put(120,170){$\wr$} \put(175,135){ $\longrightarrow$} \put(165,125){\scriptsize $1$-twist along \textcolor{blue}{$c_1$}} \put(255,90){$\wr$} \put(185,50){$\sim$} \end{picture} \caption{$GOF(1;m)$ with monodromy $D_\partial^{1/2}\circ D_\alpha^m$ and an 
$\alpha$-loop in $M(1;m)$.} \label{fig:surgerydescriptionGOF(1;m)} \end{center} \end{figure} \begin{thm} \label{thm:Manifolds} \begin{enumerate} \item Every once-punctured torus bundle with a clean arc is the complement of a GOF-knot in $L(n, 1)$ for some $n \in\mathbb{Z}$. \item Every once-punctured torus bundle with a once-unclean arc is the complement of a GOF-knot in $L(2, 1)\, \sharp\, L(n,1)$ for some $n\in \mathbb{Z}$, $L(4,\pm1)$, or a prism manifold. \end{enumerate} \end{thm} \subsection{Crossing Changes} We recall that a \emph{crossing circle} for a knot (or link) $K$ is a circle $L$ that bounds a disk intersecting $K$ in two points with opposite orientations. We refer to the disk as a \emph{crossing disk}. Then, a \emph{generalized crossing change along $L$ of order $q$} is a $- \frac{1}{q}$ Dehn surgery on $L$, with $q \in \mathbb{Z} \smallsetminus \set{0}$. Since $L$ bounds a disk, the ambient manifold does not change, but the knot may. When $q = \pm 1$, this is just an ordinary \emph{crossing change}. Also, $\chi(K)$ refers to the maximal Euler characteristic of all Seifert surfaces for $K$, and a Seifert surface $S$ for $K$ is said to be \emph{taut} if its Euler characteristic realizes $\chi(K)$. \begin{lem} \label{lem:AllCrossingChangesAlongArcs} If $K$ is a GOF-knot with fiber $F$, $L$ is a crossing circle for $K$, and the result of an order $q$ generalized crossing change ($q$-twist) along $L$ is another GOF-knot, then $L$ bounds a disk that intersects $F$ in a single arc $\alpha$. Moreover, one of the following holds: \begin{enumerate} \item $q=\pm 2$, $\alpha$ is clean and alternating (not fixed) with respect to the monodromy of $F$, or \item $q=\pm 1$, $\alpha$ is once-unclean (and alternating) with respect to the monodromy of $F$. \end{enumerate} \end{lem} \begin{proof} Our method is similar to the proofs in $S^3$ from \cite{KalLinKAGET} and \cite{SchThoLGCM}, relying on an important result of Gabai in \cite{GabFT3MII}. 
Evidently, $\chi(K) = -1$. Suppose that $S$ is a taut surface bounded by $K$ in the complement of $L$. From the local picture, the crossing disk must intersect $S$ in a single arc. Let $K'$ and $S'$ be the images of $K$ and $S$, respectively, after the generalized crossing change, and note that $\chi(S') = \chi(S)$. By Corollary 2.4 of \cite{GabFT3MII}, at least one of $S$ or $S'$ is taut for $K$ or $K'$. But then they both realize $\chi(K) = \chi(K')$, so, in particular, $S$ must be taut for $K$. From a classic result of Hatcher and Floyd \cite{FloHatISPTB} (stated for Anosov homeomorphisms, but true in general), there are no Euler characteristic $-1$ surfaces in a once-punctured torus bundle besides the fiber, so $S=F$, and the first part of the statement is established. Now, in exactly the same way as obtaining Theorem 5 from Theorem 3 in \cite{BucIshRatShiBSCCBFL}, we have that if a crossing disk intersects a fiber surface in an arc, and the result of the generalized crossing change is another fiber bundle, then one of the two cases in the statement of the lemma occurs, or the arc is clean and non-alternating. However, Lemma \ref{lemma:Alternating} excludes the latter possibility. \end{proof} Hence, it suffices to look at clean or once-unclean (and alternating) arcs. In other words, non-classical generalized crossing changes (resp., classical crossing changes) between GOF-knots must occur at $\alpha$-loops for arcs $\alpha$ that are clean and alternating (resp., once-unclean and alternating). Lemma \ref{lem:AllCrossingChangesAlongArcs}, then, provides a corollary to Theorem \ref{thm:Manifolds}. \begin{cor} \begin{enumerate} \item Every GOF-knot with a non-classical generalized crossing change resulting in another GOF-knot is in $L(n, 1)$ for some $n \in \mathbb{Z}$. 
\item Every GOF-knot with a classical crossing change resulting in another GOF-knot is in $L(2, 1) \, \# \, L(n, 1)$ for some $n \in \mathbb{Z}$, $L(4,\pm1)$, or a prism manifold.\end{enumerate} \end{cor} Since a (generalized) crossing change taking one GOF-knot to another must be around a crossing circle bounding a disk that intersects the fiber in an arc, the crossing change is a Dehn surgery along the curve formed by the union of the arc and its image. Then, Proposition 1.4 of \cite{NiDSKPM} describes the way that the monodromy must change when the crossing change is performed. Combining this with Lemma \ref{lemma:FactorizationUnclean}, we have the following. \TheoremCrosingChanges* \begin{proof} The three curves $r$, $s$, and $t$ obtained by resolving the intersections of $\mu \cup h(\mu)$ are precisely the three curves used in \cite{NiDSKPM} to describe the effect of the Dehn surgery on the monodromy. In the first case of Lemma \ref{lemma:FactorizationUnclean}, we have $t$ is trivial, and $r = s = c_\delta$, so the monodromy changes by post-composition with $D_{\delta}^{\pm 4}$. As $\mu$ will be right-veering with a negative crossing (respectively, left-veering with a positive crossing) precisely when the monodromy is of the form $D_\delta^2 \circ D_\mu^m$ (respectively, $D_\delta^{-2} \circ D_\mu^m$), the result of the crossing change is $D_\delta^{-4} \circ D_\delta^2 \circ D_\mu^m = D_\delta^{-2} \circ D_\mu^m$ (respectively, $D_\delta^{4} \circ D_\delta^{-2} \circ D_\mu^m = D_\delta^{2} \circ D_\mu^m$). In the second case of Lemma \ref{lemma:FactorizationUnclean}, note that $t$ is boundary parallel, and $r = s$ is actually $c_\mu$, so the monodromy changes by post-composition with $D_{\mu}^{\mp 4} \circ D_\partial^{\pm 1} = D_\mu^{\mp 4} \circ ((D_\partial)^{1/2})^{\pm2}$. 
By investigating all of the monodromies from Table \ref{table:RedoneMonodromies}, one finds the only once-unclean arc that is right-veering with a positive crossing (respectively, left-veering with a negative crossing) to be $\alpha$ in the monodromy $(D_\partial)^{1/2} \circ D_\alpha^m$ (respectively, $(D_\partial)^{-1/2} \circ D_\alpha^m$), so the result of the crossing change is $D_\alpha^{4} \circ (D_\partial)^{-1} \circ (D_\partial)^{1/2} \circ D_\alpha^m = (D_\partial)^{-1/2} \circ D_\alpha^{m + 4}$ (respectively, $D_\alpha^{-4} \circ (D_\partial) \circ (D_\partial)^{-1/2} \circ D_\alpha^m = (D_\partial)^{1/2} \circ D_\alpha^{m - 4}$). \end{proof} \subsection{Surface bundles, open book decompositions, and monodromy maps} Let $F$ be a compact, connected surface with boundary. Suppose $\alpha$ is an arc properly embedded in the surface $F$ with boundary, and $h$ is a homeomorphism $h : F \to F$ so that the restriction of $h$ to the boundary is the identity. As $h$ fixes the boundary pointwise, $\alpha$ and $h(\alpha)$ necessarily share their endpoints. For this reason, whenever we say that two arcs $\alpha$ and $\beta$ properly embedded in a surface $F$ are \emph{disjoint}, we shall mean that they are disjoint on their interiors. Thus, an arc $\alpha$ is said to be \emph{clean} (with respect to $h$) if $\alpha$ and $h(\alpha)$ are disjoint, (i.e. $int(\alpha) \cap int(h(\alpha)) = \emptyset$). We will also say that $\alpha$ is \emph{once-unclean} (with respect to $h$) if $|int(\alpha) \cap int(h(\alpha))| = 1$. Assume that $\alpha$ and $h(\alpha)$ have been isotoped (rel $\partial$) to intersect minimally. In general, $\alpha \cup h(\alpha)$ will be a curve in $F$ with self-intersections. We may move the endpoints $\partial \alpha = \partial h(\alpha)$ slightly into the interior of $F$ to obtain a curve immersed in the interior of $F$. Choose an orientation on $F$, and choose an orientation on $\alpha$. 
There is an induced orientation on $h(\alpha)$ so that $\alpha \cup h(\alpha)$ has a coherent orientation that agrees with the orientation of $\alpha$. Then the initial point of $\alpha$ is the terminal point of $h(\alpha)$ and vice versa. Say that $\alpha$ is \emph{right-veering} if the orientations induced by the tangent vectors to $h(\alpha)$ then $\alpha$ are opposite the orientation on $F$ at both endpoints of the arcs. We say that $\alpha$ is \emph{left-veering} if these orientations agree with the orientation on $F$ at both endpoints of the arcs. In either case, we say that the arc $\alpha$ is \emph{alternating}, as $h(\alpha)$ approaches $\alpha$ on alternate sides at the endpoints. Otherwise, we say that $\alpha$ is \emph{non-alternating}. See Figure \ref{figure:OrientationsAlternatingNonAlternating}. \begin{figure}[h] \begin{center} \begin{minipage}{.23\textwidth} \begin{center} \begin{picture}(80,40)(0,0) \put(33,15){$\alpha$} \linethickness{0.2mm} \qbezier[20](30,0)(30,20)(30,40) \put(0,20){$h(\alpha)$} \qbezier(30,0)(45,10)(60,10) \qbezier(30,40)(15,30)(0,30) \linethickness{0.5mm} \put(0,0){\line(1,0){60}} \put(0,40){\line(1,0){60}} \thicklines \put(30,3){\vector(0,-1){1}} \put(30, 30){\vector(0,-1){1}} \put(40,4){\vector(4,2){1}} \put(17, 3){$\ominus$} \put(25,36){\vector(4,2){1}} \put(32, 32){$\ominus$} \end{picture} \end{center} \end{minipage} \begin{minipage}{.23\textwidth} \begin{center} \begin{picture}(80,40)(0,0) \put(33,15){$\alpha$} \linethickness{0.2mm} \qbezier[20](30,0)(30,20)(30,40) \put(0,15){$h(\alpha)$} \qbezier(30,0)(15,10)(0,10) \qbezier(30,40)(45,30)(60,30) \linethickness{0.5mm} \put(0,0){\line(1,0){60}} \put(0,40){\line(1,0){60}} \thicklines \put(20,6){\vector(-3,2){1}} \put(35,37){\vector(-3,2){1}} \put(30, 3){\vector(0,-1){1}} \put(30, 30){\vector(0,-1){1}} \put(33, 3){$\oplus$} \put(17, 32){$\oplus$} \end{picture} \end{center} \end{minipage} \begin{minipage}{.23\textwidth} \begin{center} \begin{picture}(80,40)(0,0)
\put(33,15){$\alpha$} \linethickness{0.2mm} \qbezier[20](30,0)(30,20)(30,40) \put(0,20){$h(\alpha)$} \qbezier(30,0)(15,10)(0,10) \qbezier(30,40)(15,30)(0,30) \linethickness{0.5mm} \put(0,0){\line(1,0){60}} \put(0,40){\line(1,0){60}} \thicklines \put(30,3){\vector(0,-1){1}} \put(30, 30){\vector(0,-1){1}} \put(20,6){\vector(-3,2){1}} \put(32, 2){$\oplus$} \put(25,36){\vector(4,2){1}} \put(32, 32){$\ominus$} \end{picture} \end{center} \end{minipage} \caption{The orientation induced by the tangent vectors to $h(\alpha)$ and then $\alpha$ either disagree with the orientation of $F$ at both endpoints (alternating, right-veering), agree with the orientation of $F$ at both endpoints (alternating, left-veering), or agree at one and disagree at the other endpoint (non-alternating).} \label{figure:OrientationsAlternatingNonAlternating} \end{center} \end{figure} Further, we will refer to a self-intersection point of $\alpha \cup h(\alpha)$, as a \emph{crossing}. We say that the crossing is \emph{positive} if the orientation induced by the tangent vectors to $h(\alpha)$ and then $\alpha$ agrees with the orientation on $F$, and \emph{negative} if this orientation disagrees with that of $F$. See Figure \ref{figure:OreintationsCrossings}. 
\begin{figure}[h] \begin{center} \begin{minipage}{.23\textwidth} \begin{center} \begin{picture}(80,40)(0,0) \put(33,30){$\alpha$} \linethickness{0.3mm} \qbezier[20](30,0)(30,20)(30,40) \put(0,20){$h(\alpha)$} \qbezier(0,15)(30,15)(60,15) \linethickness{0.5mm} \put(0,0){\line(1,0){60}} \put(0,40){\line(1,0){60}} \thicklines \put(20,15){\vector(-1,0){1}} \put(30, 5){\vector(0,-1){1}} \put(31, 20){$\oplus$} \end{picture} \end{center} \end{minipage} \begin{minipage}{.23\textwidth} \begin{center} \begin{picture}(80,40)(0,0) \put(33,30){$\alpha$} \linethickness{0.3mm} \qbezier[20](30,0)(30,20)(30,40) \put(0,20){$h(\alpha)$} \qbezier(0,15)(30,15)(60,15) \linethickness{0.5mm} \put(0,0){\line(1,0){60}} \put(0,40){\line(1,0){60}} \thicklines \put(40,15){\vector(1,0){1}} \put(30, 5){\vector(0,-1){1}} \put(31, 20){$\ominus$} \end{picture} \end{center} \end{minipage} \caption{A crossing is positive or negative depending on whether the orientation induced by tangent vectors to $h(\alpha)$ and then to $\alpha$ agree or disagree with the orientation on $F$, respectively.} \label{figure:OreintationsCrossings} \end{center} \end{figure} Let $I$ be the unit interval $[0, 1]$. Given a homeomorphism $h: F \to F$ as above, we can form $(F \times I)/\sim$, where $(x, 0) \sim (h(x), 1)$ for all $x \in F$, the \emph{surface bundle over $S^1$}. The map $h$ is called the \emph{monodromy} of the bundle, and the bundle can also be denoted $(F \times I)/h$. Each copy of $F$ arising from $F \times \set{y}$ is called a \emph{fiber}. The resulting manifold is well-defined up to conjugation of $h$ in the mapping class group of $F$, and Dehn-twisting along curves in $F$ parallel to boundary components of $F$. The surface bundle formed above has a toroidal boundary component arising from each boundary component of $F$. 
If we fill each toral boundary component with a solid torus so that each loop in the torus arising from $(\set{x} \times I) / h$ bounds a disk in the solid torus, where $x \in \partial F$, the result is a closed $3$-manifold, $M$. The union of the cores of all so-filled solid tori forms a link in this $3$-manifold. This link is often referred to as a \emph{fibered link} in $M$. In this language, each copy of the surface $F$ is again called a \emph{fiber}. Alternatively, the link is called the \emph{binding} of an \emph{open book decomposition} of $M$. In this language, each copy of the surface $F$ is called a \emph{page}. For the purposes of this paper, we will largely use the terms interchangeably, often preferring the language of fibrations or surface bundles for ease of exposition. Given a particular page $F_0$ in an open book decomposition, and an arc $\alpha$ properly embedded in $F_0$, let $n(\alpha)$ denote a neighborhood of $\alpha$ in the manifold. Then there is a unique loop $L$ in $\partial n(\alpha)$ that bounds a disk in the manifold intersecting the page $F_0$ in exactly the arc $\alpha$. We will call $L$ an \emph{$\alpha$-loop for the page $F_0$}. \begin{defn}[see \cite{GabDFLS3}] \label{def:Murasugisum} Let $F_i \subset M_i$, for $i = 1, 2$, be compact oriented surfaces in the closed, oriented 3-manifolds $M_i$. Then $F \subset M_1 \# M_2 = M$ is a \emph{Murasugi sum} of $F_1$ and $F_2$ if $$M = (M_1 \smallsetminus int(B_1)) \cup_{S^2} (M_2 \smallsetminus int(B_2)), \, \, \, \mbox{for 3-balls } B_i \mbox{ with } S^2 = \partial B_1 = \partial B_2,$$ and for each $i$, $$S^2 \cap F_i \mbox{ is a 2}n\mbox{-}gon, \, \, \, \, \, \mbox{ and }\, \, \, \, \, (M_i \smallsetminus int(B_i)) \cap F = F_i.$$ When $n=2$, this is known as a \emph{plumbing} of $F_1$ and $F_2$. Further, when $n=2$ and one of the surfaces, say $F_2$, is a Hopf annulus, and the corresponding manifold $M_2 = S^3$, this is known as a \emph{Hopf banding}.
The inverse operation is called \emph{Hopf de-banding}. In the language of open book decompositions, Hopf banding is also known as \emph{stabilization}, and its inverse as \emph{destabilization}. \end{defn} It is well known (see \cite{GabFT3M}, \cite{SakMDCSSUO}, and \cite{CowLacUGOK}) that if $F'$ is a Seifert surface of a link in a manifold $M$, and $F$ is the result of a Hopf banding of $F'$, then $F$ is a fiber of a fibration of $M$ if and only if $F'$ is. It is also well-known that a fiber surface $F$ is the result of a Hopf banding of $F'$ if and only if there is an arc properly embedded in $F'$ which is clean and alternating with respect to the monodromy of the fibration by $F'$. In other words, intersection behavior about arcs in the fiber surface and their images under the monodromy correspond exactly with certain geometric information about the fiber surface. In \cite{BucIshRatShiBSCCBFL}, Buck, Shimokawa and the current authors generalized this idea, introducing the notion of a \emph{generalized Hopf banding} or \emph{generalized stabilization}. Generalized Hopf bandings also respect fibration structures. Further, a generalized Hopf banding that is not an actual Hopf banding was shown to correspond exactly to arcs properly embedded in a fiber which are once-unclean and non-alternating with respect to the monodromy of the fibration. \subsection{The arc complex and isometric actions on $\mathbb{H}^2$} The \emph{arc complex} $\mathcal{A}(F)$ of a surface $F$ is a simplicial complex whose vertices correspond to the isotopy classes (rel $\partial$) of (essential) arcs properly embedded in $F$, and whose vertices span a simplex if the vertices correspond to isotopy classes of arcs that can be made pairwise disjoint (on their interiors) in $F$. Let $F$ be a once-punctured torus. In this case, $\mathcal{A}(F)$ is two-dimensional. 
In fact, by shrinking the boundary of $F$, isotopy classes of essential arcs in $F$ are in one-to-one correspondence with essential simple closed curves in the torus, which, in turn, are in one-to-one correspondence with $\mathbb{Q} \cup \{\infty\}$, the set of slopes on the torus. Further, two arcs in the punctured torus $F$ can be isotoped to intersect minimally in $n$ points (in their interiors) if and only if their corresponding ratios $p/q$ and $p'/q'$ (in lowest terms, or $\infty = 1/0$) satisfy $|pq' - qp'| = n+1$ (see, for instance, \cite{HatThuIS2BKC}). It is well known that the $1$-skeleton of the arc complex of a once-punctured torus is the \emph{Farey graph}, and that the complex $\mathcal{A}(F)$ has a very useful embedding into $\overline{\mathbb{H}^2}$, the Gromov compactification of the hyperbolic plane. Each $2$-dimensional simplex embeds as an ideal triangle, and each $1$-simplex embeds as a geodesic line. There is also an associated dual tree $\mathcal{T}$, which embeds in $\mathbb{H}^2$ by taking a vertex at the orthocenter of each triangle of $\mathcal{A}(F)$, and joining two vertices arising from triangles in $\mathcal{A}(F)$ sharing an edge. Further, an orientation-preserving homeomorphism of $F$ induces an automorphism of $\mathcal{A}(F)$, an automorphism of $\mathcal{T}$, and an orientation-preserving isometry of $\mathbb{H}^2$ which extends to a continuous map of $\overline{\mathbb{H}^2}$, agreeing with the actions on $\mathcal{A}(F)$ and $\mathcal{T}$. So, in particular, the monodromy map $h: F \to F$ induces an isometry $\widetilde{h}: \mathbb{H}^2 \to \mathbb{H}^2$. By a slight (and common) abuse of notation, we will refer to both the isometry on $\mathbb{H}^2$ and its extension to $\overline{\mathbb{H}^2}$ by $\widetilde{h}$.
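The slope formula above admits a direct computational phrasing. The following sketch is our own illustration (the encoding of slopes as coprime pairs $(p,q)$, with $\infty = 1/0$ as $(1,0)$, and the function name are assumptions of ours, not notation from the paper):

```python
# Sketch (assumption: slopes are coprime pairs (p, q), infinity = (1, 0)):
# by the formula |p q' - q p'| = n + 1 quoted above, two essential arcs in
# the once-punctured torus with distinct slopes p/q and p'/q' meet in
# |p q' - q p'| - 1 interior points when isotoped to intersect minimally.
def interior_intersections(s1, s2):
    (p1, q1), (p2, q2) = s1, s2
    return abs(p1 * q2 - q1 * p2) - 1

assert interior_intersections((0, 1), (1, 0)) == 0  # disjoint: Farey neighbours
assert interior_intersections((1, 1), (1, 0)) == 0
assert interior_intersections((1, 2), (1, 0)) == 1  # a once-unclean pair
assert interior_intersections((2, 5), (1, 2)) == 0  # |2*2 - 5*1| = 1
```

In this encoding, the edges of the Farey graph are exactly the pairs of slopes for which the count returns $0$.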
By the classification of hyperbolic isometries, $\widetilde{h}$ falls into one of three classes: (1) elliptic, (2) parabolic, or (3) loxodromic, which correspond exactly to $h$ being (1) periodic, (2) reducible, or (3) pseudo-Anosov. Suppose $\alpha$ is once-unclean with respect to the monodromy $h$. Let $\beta = h(\alpha)$. Now, consider a neighborhood $N$ of $\alpha \cup \beta$. Then $N$ is a pair of pants with one boundary curve properly embedded in the interior of $F$, so that each of the other boundary curves intersects $\partial F$ in a single arc. The frontier of $N$ consists of the one curve properly embedded in the interior of $F$ and two essential arcs in $F$, say $\nu_1$ and $\nu_2$. Hence, there exist two disjoint essential arcs $\nu_1$ and $\nu_2$ properly embedded in $F$, each of which is also disjoint from $\alpha$ and $\beta$. Thus, in the arc complex, $\mathcal{A}(F)$, there is a simplex $\Delta_\alpha$, whose vertices correspond to $\alpha, \nu_1$, and $\nu_2$, and there is a simplex $\Delta_\beta$, whose vertices correspond to $\beta, \nu_1$, and $\nu_2$. In particular, $\Delta_\alpha$ and $\Delta_\beta$ share an edge (the edge between the vertices corresponding to $\nu_1$ and $\nu_2$). In this case, we say that the vertices corresponding to $\alpha$ and $\beta$ are \emph{simplex-adjacent}, and call the edge between the vertices corresponding to $\nu_1$ and $\nu_2$ the \emph{common edge}. (Equivalently, we could say that there exist $2$-simplices associated with $\alpha$ and $\beta$ whose corresponding vertices in $\mathcal{T}$ are adjacent. In this case, the edge between these vertices in $\mathcal{T}$ corresponds to the common edge in $\mathcal{A}(F)$.) Let $A$ be a directed edge in $\overline{\mathbb{H}^2}$, directed from endpoint $A_-$ to $A_+$, both in $\partial \mathbb{H}^2$.
Following \cite{CowLacUGOK}, say that two distinct vertices of $\mathcal{A}(F)$ are on the \emph{same side} of $A$ if their corresponding vertices in $\partial \mathbb{H}^2$ are not interleaved with the endpoints of $A$. If $x_1$ and $x_2$ are distinct points of $\partial \mathbb{H}^2$ on the same side of $A$, then $x_1 < x_2$ if $\{x_1, A_+\}$ and $\{x_2, A_-\}$ are interleaved. This defines a total order on the points of one side of $A$. \subsection{Automorphisms of the once-punctured torus, oriented arcs, and half-twists} \label{subsection:AutomorphismsOrientedArcsHalfTwists} Let $F$ be a once-punctured torus. For any essential arc $\alpha$ properly embedded in $F$, there is a uniquely determined essential loop $c_\alpha$ disjoint from $\alpha$. Let $D_\alpha$ denote the right-handed Dehn twist along the curve $c_\alpha$. Observe that $D_\alpha$ is a reducible automorphism, fixing the arc $\alpha$ (and the loop $c_\alpha$), and the action of $\widetilde{D_\alpha}$ sends each edge of $\mathcal{A}(F)$ incident to the vertex corresponding to $\alpha$ to the next such edge, preserving a cyclic ordering of the edges incident to this vertex. If $\beta$ is any properly embedded arc in $F$ disjoint from $\alpha$, then by Alexander's Method, an automorphism $h$ of $F$ is determined up to free isotopy by the images of $\alpha$ and $\beta$ under $h$. Next, observe that the once-punctured torus is a double cover of the disk, branched over three points. The covering involution, $\tau$, is called the hyper-elliptic involution. One result of this is that $(D_\alpha \circ D_\beta \circ D_\alpha) = (D_\beta \circ D_\alpha \circ D_\beta)$, for any disjoint arcs $\alpha$ and $\beta$. Following notation from the braid group on three strands, we will call this automorphism $\Delta_{\alpha, \beta}$. (We suppress the subscripts when unnecessary or implied.)
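On the level of first homology these identities can be checked by hand: in a basis in which $D_\alpha$ and $D_\beta$ act by the standard $SL(2,\mathbb{Z})$ twist matrices, the braid relation holds, and $\Delta^2 = (D_\alpha \circ D_\beta)^3$ acts by $-I$, matching the hyper-elliptic involution. The following is a sketch of ours (the matrix choices are the usual conventions, assumed here rather than taken from the paper):

```python
# Sketch: action of the Dehn twists on H_1(F) for the once-punctured torus F,
# using standard SL(2,Z) representatives (a conventional choice, assumed here).
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]   # D_alpha on H_1
B = [[1, 0], [-1, 1]]  # D_beta on H_1

# Braid relation: D_alpha D_beta D_alpha = D_beta D_alpha D_beta.
assert mul(mul(A, B), A) == mul(mul(B, A), B)

# Delta^2 = (D_alpha D_beta)^3 acts by -I on homology, like the
# hyper-elliptic involution; its square, the full boundary twist,
# acts trivially on H_1.
AB = mul(A, B)
AB3 = mul(mul(AB, AB), AB)
assert AB3 == [[-1, 0], [0, -1]]
assert mul(AB3, AB3) == [[1, 0], [0, 1]]
```

Of course, this only verifies the relations on homology; the statements in the text hold at the level of the mapping class group.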
Another result of this clarifies an important distinction between monodromy maps of the once-punctured torus and the induced automorphisms on the arc complex of the once-punctured torus. The arc complex $\mathcal{A}(F)$ considers unoriented arcs. The oriented arc complex $\mathcal{A}^O(F)$ has vertices represented by oriented arcs in $F$. Then $\mathcal{A}^O(F)$ double-covers $\mathcal{A}(F)$, and there is a short exact sequence relating the automorphism groups, \[ \set{1} \to \mathbb{Z}_2 \to Aut(\mathcal{A}^O(F)) \stackrel{\pi}{\to} Aut(\mathcal{A}(F)) \to \set{1} .\] The hyper-elliptic involution, $\tau$, is a non-trivial automorphism, even up to free isotopy, but preserves all free isotopy classes of arcs set-wise, reversing their orientations. Thus, the induced action of $\tau$ on $\mathcal{A}^O(F)$ generates the kernel of $\pi$. This involution does not fix the boundary of $F$, so it is not a monodromy map. However, it is freely isotopic to the monodromy map $(D_\alpha \circ D_\beta)^{\pm 3}$, where $\alpha$ and $\beta$ are a pair of disjoint essential arcs. We will denote $(D_\alpha \circ D_\beta)^{\pm 3} = \Delta^{\pm 2}$ by $(D_{\partial})^{\pm1/2}$, as it is a square root of a full twist around the boundary. Any two \emph{monodromy} maps that are freely isotopic differ by some power of $(D_{\partial})^{1/2}$. \section{The Angles of Arcs and Intersections} For two points $a$ and $b$ on $\partial F$, we call an (essential) oriented arc properly embedded in $F$ with orientation from $a$ to $b$ an {\em $(a,b)$-arc}. For $(a,b)$-arcs, there are two types of isotopies: {\em strong-isotopies}, which keep the endpoints on $\{a,b\}$ at all times, and {\em weak-isotopies}, which can move the endpoints along $\partial F$. We say that two $(a,b)$-arcs $\gamma$ and $\gamma'$ are {\em strongly-isotopic} if they are equivalent by strong-isotopies. A strong-isotopy class of $\gamma$ is denoted by $\langle\gamma\rangle$.
We say that an $(a,b)$-arc $\gamma$ and an $(a',b')$-arc $\gamma'$ are {\em weakly-isotopic} if they are equivalent as oriented arcs by weak-isotopies. A weak-isotopy class of $\gamma$ is denoted by $[\gamma]$. By the definitions, if two arcs are strongly-isotopic, then they are weakly-isotopic. Note that two arcs which are the same as sets but have opposite orientations are \emph{not} weakly-isotopic. For a small positive number $\varepsilon>0$, put $\widetilde{F}=\mathbb{R}^2-int(B_{\varepsilon})$, where $B_{\varepsilon}$ is the union $\cup_{(m,n)\in \mathbb{Z}^2} B_{\varepsilon}(m,n)$ of closed balls $B_{\varepsilon}(m,n)$ with center $(m,n)\in\mathbb{Z}^2$ and radius $\varepsilon$. The quotient, $\widetilde{F}/\mathbb{Z}^2$, is homeomorphic to the once-punctured torus $F$. By identifying $\widetilde{F}/\mathbb{Z}^2$ and $F$, we take the projection $p: \widetilde{F}\to F$, which is a covering map. Take two points $a=p(\varepsilon,0)$ and $b=p(-\varepsilon,0)$ on $\partial F$. Then for any $(a,b)$-arc $\gamma$, there is a unique lift $\widetilde{\gamma}$ which starts from $(\varepsilon,0)$. For some integers $p_{\gamma}$ and $q_{\gamma}$, $\widetilde{\gamma}$ arrives at $(q_{\gamma}-\varepsilon,p_{\gamma})$. Note that each lift $\widetilde{\gamma}_{m,n}$ of $\gamma$ which starts from $(m+\varepsilon,n)$ arrives at $(m+q_{\gamma}-\varepsilon,n+p_{\gamma})$. Since $\gamma$ is properly embedded, $p_{\gamma}$ and $q_{\gamma}$ are relatively prime. Note that not only strong-isotopies but also weak-isotopies preserve $p_{\gamma}$ and $q_{\gamma}$, so we sometimes use the notations $p_{\langle\gamma\rangle}$ and $q_{\langle\gamma\rangle}$, or $p_{[\gamma]}$ and $q_{[\gamma]}$, instead. Let $\theta_{\varepsilon}$ be the polar angle of the end point $(q_{\gamma}-\varepsilon,p_{\gamma})$ of $\widetilde{\gamma}$ when we regard the polar angle of the starting point $(\varepsilon,0)$ of $\widetilde{\gamma}$ as $0$ and follow along $\widetilde{\gamma}$.
Then we define the {\em angle of} $\gamma$, denoted by $\theta(\gamma)$, as the limit of $\theta_{\varepsilon}$ as $\varepsilon$ approaches $0$. Note that strong-isotopies preserve the angles, but weak-isotopies change them by $2\pi$ times an integer, so we sometimes use the notation $\theta(\langle\gamma\rangle)$. We have the following immediately from the definitions: \begin{center} $\tan(\theta(\gamma))=p_{\gamma}/q_{\gamma}$, \end{center} where we consider $\tan(m\pi/2)=\infty$ for any odd integer $m$. \begin{lem}\label{lem:angle} For two $(a,b)$-arcs $\alpha$ and $\beta$ in the once-punctured torus $F$, the following hold: (1) If $p_{\alpha}=p_{\beta}$ and $q_{\alpha}=q_{\beta}$, then $\alpha$ and $\beta$ are weakly-isotopic. (2) If $\theta(\alpha)=\theta(\beta)$, then $\alpha$ and $\beta$ are strongly-isotopic. (3) $i_{\partial}(\alpha,\beta)=1$ if and only if $\theta(\alpha)<\theta(\beta)$. (4) $i_{total}(\alpha,\beta)=\lfloor \frac{\theta(\alpha)-\theta(\beta)}{\pi} \rfloor+|p_{\alpha}q_{\beta}-q_{\alpha}p_{\beta}|$, where $\lfloor x \rfloor$ indicates the floor function, that is, the largest integer not greater than $x$. \end{lem} \section{Sticking-arc Complex} \label{section:StickingarcComplex} The \emph{sticking-arc complex} $\widehat{\mathcal{A}(F)}$ of a surface $F$ is a simplicial complex whose vertices correspond to strong-isotopy classes of $(a,b)$-arcs in $F$, and whose vertices span a simplex if the corresponding strong-isotopy classes of arcs can be made pairwise disjoint on their interiors in $F$. By regarding $(a,b)$-arcs as non-oriented arcs, there is a natural projection $\pi:\widehat{\mathcal{A}(F)}\to\mathcal{A}(F)$. For each vertex $v$ of $\mathcal{A}(F)$, there exists a real number $\theta_v$ such that $\theta(\pi^{-1}(v))=\{\theta_v+n\pi \ |\ n\in\mathbb{Z}\}$.
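Parts (3) and (4) of Lemma \ref{lem:angle} are directly computable from $(q_\gamma, p_\gamma)$ and a choice of lift of the angle. A minimal Python sketch (the helper names and the branch convention $\theta = \operatorname{atan2}(p,q) + 2\pi n$ are our assumptions; the functions simply transcribe the formulas of the lemma):

```python
import math

def angle(q, p, n=0):
    # Angle of an (a,b)-arc with tan(theta) = p/q; the integer n picks
    # the lift (weakly-isotopic representatives differ by 2*pi*n).
    return math.atan2(p, q) + 2 * math.pi * n

def i_boundary(a, b):
    # Lemma (3): i_boundary(alpha, beta) = 1 iff theta(alpha) < theta(beta).
    return 1 if angle(*a) < angle(*b) else 0

def i_total(a, b):
    # Lemma (4): floor((theta(alpha) - theta(beta))/pi) + |p_a q_b - q_a p_b|.
    (qa, pa, na), (qb, pb, nb) = a, b
    ta, tb = angle(qa, pa, na), angle(qb, pb, nb)
    return math.floor((ta - tb) / math.pi) + abs(pa * qb - qa * pb)

# A (1,0)-arc and a (0,1)-arc on the same branch: disjoint interiors in one
# order, one intersection in the other (the formula is not symmetric).
assert i_total((1, 0, 0), (0, 1, 0)) == 0
assert i_total((0, 1, 0), (1, 0, 0)) == 1
```

The asymmetry of $i_{total}$ in its arguments reflects the orientations of the arcs and the boundary intersection counted in (3).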
By Lemma \ref{lem:angle}, we have the following: \begin{lem} (1) For two vertices $v$ and $w$ of $\widehat{\mathcal{A}(F)}$, $v$ and $w$ span a $1$-simplex if and only if $|\theta(v)-\theta(w)|<\pi$ and $|p_{v}q_{w}-q_{v}p_{w}|=1$. (2) For two vertices $v$ and $w$ of $\widehat{\mathcal{A}(F)}$ which span a $1$-simplex with $\theta(v)<\theta(w)<\theta(v)+\pi$, there exist three vertices $x_-,x_0$ and $x_+$ such that $v,w,x_{*}$ span a $2$-simplex with $\theta(x_-)<\theta(v)<\theta(x_0)<\theta(w)<\theta(x_+)=\theta(x_-)+\pi<\theta(v)+\pi$. Moreover, $(q_{x_*},p_{x_*})$ are determined from $(q_{v},p_{v})$ and $(q_{w},p_{w})$ as follows: $(q_{x_0},p_{x_0})=(q_{v}+q_{w},p_{v}+p_{w})$, $(q_{x_+},p_{x_+})=(-q_{v}+q_{w},-p_{v}+p_{w})$, $(q_{x_-},p_{x_-})=(q_{v}-q_{w},p_{v}-p_{w})$. \end{lem} \section{Monodromies with Total Intersection Number at Most Two} \label{section:MonodromieswithTotalIntersctionNumberatMostTwo}
\section{Introduction} The Polyakov model, Yang-Mills theory with an adjoint Higgs scalar on ${\mathbb R}^3$, is one of the cornerstones in the study of confinement in gauge theories \cite{Polyakov:1976fu}. Abelian duality is used to show the emergence of a mass gap, and to exhibit linear confinement via the proliferation of the monopoles in the vacuum. Another theory which realizes confinement and a mass gap similarly, i.e., via the proliferation of the flux (or monopoles), is compact lattice QED$_3$. These are two different microscopic theories with a different set of symmetries at the cut-off scale. However, at long distances, they are gapped, and they flow to the same theory, constituting a non-perturbative long distance duality. Although we do not know whether the Polyakov model is relevant in Nature, lattice QED$_3$ with fermionic fields appears in two-dimensional spin systems, in the spin liquid approach to high $T_c$ superconductivity and in the phase fluctuation model of the cuprate superconductors (see the reviews \cite{carlson-2002, lee-2004} and \cite{franz-2002-66}). Therefore, the deconfined versus confined long-distance behavior of 2+1 dimensional lattice QED with fermionic matter is experimentally relevant. An important question in this context is the existence (or non-perturbative stability) of the spin liquids, the non-magnetic Mott insulators with no broken symmetries. In QED$_3$, this question translates into whether the strongly coupled fermions and gauge fluctuations remain massless at long distances, when the non-perturbative effects (consistent with microscopic symmetries) are taken into account. If so, this implies deconfinement and stability. In the literature, permanent confinement and instability were argued in \cite{sachdev-2002-298, herbut-2003-91,herbut-2003-68}. Refs.
\cite{hermele-2004-70, hermele-2005-72} showed that, at least in a large $n_f$ limit where the $SU(2)$ spin symmetry is generalized to $SU(n_f)$, there are some spin liquids which are stable. For the small numbers of fermionic flavors which are experimentally most interesting, this is still an unsettled matter. In this work, we discuss a variety of related gauge theories, each of which needs to be distinguished very carefully via its microscopic symmetries. For example, consider non-compact continuum QED$_3$ minimally coupled to $2n_f$ flavors of fundamental fermions, and assume one wishes to incorporate the compactness of the gauge field. We show that common bottom-up arguments which claim to account for the compactness of the gauge fields are ill-defined, due to the non-uniqueness of this procedure. In the continuum, a standard way to obtain compact QED$_3$ is via the gauge ``symmetry breaking" $SU(N) \rightarrow U(1)^{N-1} $ in a parent Yang-Mills adjoint Higgs system. We show that there are at least two classes of parent theories which differ in the topological structure of their adjoint Higgs field (compact versus noncompact), yet both lead to the desired gauge symmetry breaking and reduce (necessarily) to continuum compact QED$_3$. Although indistinguishable in perturbation theory, the non-perturbative behavior of these theories is strikingly opposite: In the theory with a non-compact adjoint Higgs scalar (the Polyakov model with massless fermions), we demonstrate \begin{eqnarray} \underbrace{SU(N)}_{\rm with \; noncompact \; scalar} \; \; \; \underbrace{\longrightarrow}_{\rm Higgsing} \; \; \; \underbrace{[U(1)]^{N-1}}_{ {\rm compact \; QED}_3} \; \; \; \underbrace{\longrightarrow}_{\rm nonperturbative} \qquad \underbrace{U(1)}_{ \rm CFT \; or \; free \; photon}\, , \label{pattern1} \end{eqnarray} the existence of a massless photon at long distances, and the absence of confinement.
Of course, the dramatic behavior here is the appearance of a conformal field theory (CFT) in certain cases, to be discussed below. In the theory with a compact adjoint Higgs field, the gauge structure reduces at longer distances as (for a moderately small number of flavors) \begin{eqnarray} \underbrace{SU(N)}_{\rm with \; compact \; scalar} \;\;\; \underbrace{\longrightarrow}_{\rm Higgsing} \; \; \; \underbrace{[U(1)]^{N-1}}_{ {\rm compact \; QED}_3} \; \; \; \underbrace{\longrightarrow}_{\rm nonperturbative} \qquad \underbrace{\rm nothing}_{\rm gapped \; gauge \; bosons} \, . \label{pattern2} \end{eqnarray} The photon gains a mass, and the theory confines. As opposed to the common assertions in the literature, the presence or absence of monopoles has nothing to do with the confining or deconfining behavior of a {\it generic} gauge theory (see Table~\ref{default}). We introduce a sharp (topological) symmetry characterization to describe the long distance limits (deconfined versus confined, and more delicate refinements) of gauge theories on ${\mathbb R}^3$ and small $S^1 \times {\mathbb R}^3$. \footnote{ Naively, (\ref{pattern1}) seems to be in accord with Refs.~\cite{hermele-2004-70, hermele-2005-72}, and (\ref{pattern2}) seems to coincide with the results of \cite{sachdev-2002-298, herbut-2003-91,herbut-2003-68}. This is not quite correct. The references \cite{sachdev-2002-298, herbut-2003-91,herbut-2003-68, hermele-2004-70, hermele-2005-72} study a spin Hamiltonian which, in the $\pi$-flux state, maps into a compact lattice QED$_3$ with fermions. The global symmetries of this lattice theory are different (although related; see $\S$\ref{sec:lat}) from the continuum discussion above.
Despite these differences, we will establish precise non-perturbative long distance dualities between spin systems and Polyakov models with massless fermions in certain cases.} We first discuss the question of confinement in the Polyakov model with massless fermions, in either real or complex representations. The answer is known for one Dirac fermion in the (real) adjoint representation \cite{Affleck:1982as}. The fermion number symmetry breaks down spontaneously, and there is a gapless Nambu-Goldstone boson (the dual photon). The masslessness of the dual photon is protected by the symmetry-breaking order, i.e., the Goldstone theorem, and the adjoint fermion acquires a mass. For complex representation fermions, the infrared is more interesting. There are strongly coupled gauge fluctuations and fermions which remain massless in the infrared. The answer entails a different mechanism to keep fermions and a boson massless. It is referred to as quantum order (or non-symmetry-breaking order) in condensed matter physics \cite{wen-2002-65, wen-2002-66}. The appearance of quantum order in the Polyakov model is new. In the former, the spontaneous breaking of a global symmetry generates and protects a massless boson; in the latter, the unbroken symmetry implies the existence of a massless boson and fermions. The main concept behind the deconfinement in the Polyakov model with massless fermions is a $U(1)_{*}$ {\bf topological symmetry}. This symmetry arises at long distances and protects only one dual photon from acquiring a mass. It relies on the Jackiw-Rebbi zero modes and the index theorem of Callias \cite{Jackiw:1975fn,Callias:1977kg}. Due to the index theorem, a $U(1)_A$ symmetry of the high energy theory transmutes into a shift symmetry for the dual photon.
For complex representation fermions, the combination of the topological symmetry and the other global symmetries is very powerful, and they severely restrict any perturbative or non-perturbative relevant or marginal operators that may destabilize the masslessness of the strongly interacting photon and fermions. In particular, theories with $N_f \geq 4 $ fundamental fermions are quantum critical due to the absence of relevant or marginal operators which may destabilize their masslessness. We argue that the strong correlation physics of the fermions and gauge boson at long distance produces a scale-invariant, conformal field theory (CFT). In three dimensional non-abelian gauge theories, the earlier examples of infrared strongly coupled CFTs are mostly among extended supersymmetric theories \cite{Intriligator:1996ex, Kapustin:1999ha}. The nonsupersymmetric gauge theories discussed in this paper provide an infinite class of infrared CFTs which interpolate between weak and strong coupling as the number of flavors is varied, $4 \leq N_f < \infty$, with a dimensionless coupling constant $ \sim \frac{1}{\sqrt {N_f}}$. The $N_f=2$ theory turns out to be non-critical, due to the presence of a relevant, non-perturbatively generated flux operator with fermion zero mode insertions. The existence of the continuous $U(1)_{*}$ topological shift symmetry is the {\bf necessary and sufficient} condition to prove that the photon remains massless in the Polyakov model with massless complex fermions.\footnote{For a real massless Majorana fermion in the adjoint representation, there is no $U(1)_{*}$ symmetry. Such theories on ${\mathbb R}^3$ do indeed confine \cite{Affleck:1982as}.} In fact, the fundamental distinction between the theories in (\ref{pattern1}) and (\ref{pattern2}) is that, in the latter, the continuous topological shift symmetry for the dual photon is replaced by a discrete one.
As opposed to a continuous shift symmetry, a discrete shift symmetry cannot prohibit the appearance of a mass term for the scalar. Thus, the photons in the latter case should acquire mass according to symmetry considerations. However, there is the possibility that the monopole fugacity may become irrelevant at large distances in the renormalization group sense. In this case, the long distance theory will exhibit an enhanced topological symmetry relative to the microscopic theory. This implies that the presence of the discrete topological symmetry is necessary, but not {\bf sufficient}, for confining behavior. Finally, equipped with the understanding of the Polyakov models, we turn to the discussion of spin systems. As stated earlier, the spin systems can be mapped into lattice gauge theories in the slave fermion mean field theory. We investigate the relation between the Polyakov model and lattice QED$_3$, both with massless fermions, in the long distance limit. These are theories with distinct microscopic symmetries. But perhaps the most significant distinguishing features of lattice QED$_3$ relative to the continuum Polyakov models are the absence of an analog of the Callias index theorem in lattice QED$_3$, as shown by Marston \cite{Marston:1990bj}, and the absence of an analog of the global $U(1)_A$ symmetry in the lattice model. The first is not as severe as it sounds, despite the concerns raised in the literature \cite{kim-1999-272}. In fact, the latter is the main problem. We will show that, were the global $U(1)_A$ a symmetry of the spin Hamiltonian, the topological symmetry would indeed arise in the infrared despite the absence of an index theorem. If this were the case, we could have carried out a precise analogy with the Polyakov model even at small $N_f$. Unfortunately, only in the sufficiently large $N_f$ limit can we make a reliable statement about the infrared structure of the lattice theory.
In particular, we are not able to improve the discussion given in \cite{hermele-2004-70, hermele-2005-72}. In the Polyakov model with massless fermions, we are able to side-step the renormalization group and large $N_f$ analysis of Hermele et al. \cite{hermele-2004-70}. In lattice QED$_3$, this analysis seems inevitable. Thus, there is a long distance duality between the spin liquids and Polyakov models with massless fermions in the large $N_f$ limit, where both theories flow into the same interacting CFT. \section{Gauge theories in three dimensions} We consider $SU(N)$ Yang-Mills gauge theory with a noncompact adjoint Higgs scalar on ${\mathbb R}^3$ (also known as the Georgi-Glashow model) in the presence of massless fermions. The fermions are chosen in complex and real representations, such as fundamental (F) and adjoint (adj). We will label these theories as P(F) and P(adj), respectively. Before discussing them, it is useful to review the basics of the pure Polyakov model \cite{Polyakov:1976fu} and set the notation. \subsection{Polyakov model} \label{sec:Pol} The action of $SU(2)$ gauge theory with an adjoint scalar is \begin{eqnarray} S= \int_{{\mathbb R}^3} \; \frac{1}{g_3^2} &&{\rm tr} \Big[ \frac{1}{4} F_{ \mu \nu}^2 + \coeff 12 (D_{\mu} \Phi)^2 + V [\Phi] \Big] \label{lagrangian} \end{eqnarray} Here $\Phi$ is a Lie-algebra-valued non-compact scalar Higgs field, $F_{\mu \nu}$ is the non-abelian field strength, and $\mu, \nu=1,2,3$. The classical potential $V[\Phi]$ is chosen such that, at tree level, the theory is in its Higgs regime, $SU(2) \rightarrow U(1)$. At long distances, only the abelian components are operative. To all orders in perturbation theory, the infrared is a free (non-interacting) Maxwell theory. The Gaussian fixed point is destabilized due to nonperturbative instanton (monopole) effects.
This instability is easiest to see in a dual formulation where the gauge boson is dualized to a scalar, $F= *d \sigma$.\footnote{Our discussion mostly relies on symmetries. Therefore, to lessen the clutter of expressions, we set the dimensionful parameters (e.g. $g_3$) to one. These parameters will be restored if necessary.} Since an instanton has a finite action, instantons will proliferate due to entropic effects. This generates nonperturbative $e^{-S_0}$ effects in the long distance Lagrangian \begin{equation} L = \coeff 12 (\partial \sigma)^2 - e^{-S_0} (e^{i \sigma} + e^{-i \sigma}) \label{dual} \end{equation} The $\cos \sigma$ term is a relevant operator which alters the IR physics drastically, and leads to a mass gap $\sim e^{-S_0/2}$. It is worth noting that the dual of the free Maxwell theory, i.e., in the absence of monopoles, described by $ L = \coeff 12 (\partial \sigma)^2, $ has a continuous shift symmetry \begin{equation} U(1)_{\rm flux}: \; \sigma\rightarrow \sigma - \beta \label{U1J} \end{equation} which protects $\sigma$ from acquiring a mass. The current associated with the shift symmetry is $ {\cal J}_{\mu}= \partial_{\mu} \sigma= \coeff 12 \epsilon_{\mu \nu \rho} F_{\nu \rho} = F_{\mu} $, and its divergence is zero, $ \partial_{\mu} {\cal J}_{\mu}= \partial_{\mu} F_{\mu}=0 $, reflecting the absence of monopoles and the conservation of magnetic flux, hence the name $U(1)_{\rm flux}$. In the $U(1)$ gauge theory with monopoles, the current ${\cal J}_{\mu} $ is not conserved. Its divergence is $ \partial_{\mu} {\cal J}_{\mu}= \nabla^2 \sigma = \partial_{\mu} F_{\mu}= \rho_m(x) $ where $\rho_m(x) $ is the monopole charge density. Since the $U(1)_{\rm flux}$ is no longer a symmetry, there is no symmetry reason for the $\sigma$ field to remain massless. Indeed, $\sigma$ acquires a mass as shown in (\ref{dual}).
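The size of the quoted gap follows from expanding (\ref{dual}) around the minimum $\sigma=0$; a one-line worked check (our bookkeeping, in the units $g_3=1$ used above):

```latex
L = \coeff 12 (\partial \sigma)^2 - 2 e^{-S_0} \cos \sigma
  \simeq - 2 e^{-S_0} + \coeff 12 (\partial \sigma)^2 + e^{-S_0} \sigma^2 + O(\sigma^4) \, ,
\qquad
m_\sigma^2 = 2\, e^{-S_0} \;\; \Rightarrow \;\; m_\sigma = \sqrt{2}\, e^{-S_0/2} \sim e^{-S_0/2} \, .
```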
{$\bf SU(N)$:} More generally, let the $SU(N)$ gauge symmetry be broken down to $U(1)^{N-1}$ via an adjoint Higgs vacuum expectation value \begin{eqnarray} \langle \Phi \rangle = {\rm Diag}( a_1, \ldots, a_N) \end{eqnarray} where $ a_1 < a_2 < \ldots< a_N$. There are $N-1$ photons which remain massless to all orders in perturbation theory. Let us dualize them into $(F_1, \ldots , F_{N-1})= * d ( \sigma_1, \ldots, \sigma_{N-1})$. Non-perturbatively, there are $N-1$ types of elementary monopoles associated with this pattern, which we label by their magnetic charges $\{{\bm \alpha}_1, \ldots, {\bm \alpha}_{N-1}\}$, where each ${\bm \alpha}_i$ is an $(N-1)$-vector of charges under $U(1)^{N-1}$. The antimonopoles carry opposite charges. The monopole operator in a theory without fermions is $e^{-S_0} e^{i {\bm \alpha}_i {\bm \sigma}}$, and the sum over all elementary monopole effects induces $ e^{-S_{0}} \sum_{j=1}^{N-1} \cos ( {\bm \alpha}_j {\bm \sigma}) $, rendering all $N-1$ varieties of photons massive. \footnote{We assume, for simplicity, $S_{0,i} \equiv \frac{4 \pi}{g_3^2}|a_{i+1} -a_{i}| = S_0$ for the elementary monopoles by tuning the potential. This can be relaxed if desired.} \subsubsection{Introducing complex representation fermions} \label{introcomp} Our goal is to construct the non-perturbative long distance description of the Polyakov models with massless fermions. The long distance effective field theory must respect all the (non-anomalous) symmetries of the underlying microscopic theory. In other words, the (perturbative or non-perturbative) operators that can be generated are severely restricted by the microscopic symmetries. Therefore, it is useful to clearly state the symmetries of the microscopic P(F) model. This will also ease the comparison of the microscopic and enhanced (emergent) macroscopic global flavor and spacetime symmetries of the theory.
Consider the addition of massless fermions in the fundamental representation of the gauge group to the Polyakov model. (The generalization to other complex representation fermions is possible.) We use interchangeably a four-component Dirac spinor or two two-component Dirac spinors $\psi_1$ and $\psi_2$ related to each other via \begin{equation} \Psi^a= \left(\begin{array}{l} \psi_1^a \\ \bar \psi_2^a \end{array} \right), \qquad \overline \Psi_a= \left(\begin{array}{l} \psi_2^a \\ \bar \psi_1^a \end{array} \right), \label{split} \end{equation} We consider the theories with $N_f=2n_f$ two-component Dirac spinors, or equivalently, $n_f$ four-component spinors. Here $a=1, \ldots, n_f$ and the subscripts $(1,2)$ are flavor indices. In our conventions, the representations of the two-component fermions under the $SU(N)$ gauge group are $(\psi_1^a, \bar \psi_2^a) \in (\Box, \Box)$, where $\Box$ denotes the fundamental representation. These combinations and our subsequent Dirac $\gamma$ matrix choices are for later convenience, and will make the Callias index analysis slightly simpler. \footnote{In Euclidean space, $\psi_a$ and $\bar \psi_a$ should be viewed as independent variables.
In particular, they are not related to each other by conjugation.} The fermions couple to gauge fields and adjoint scalars as \begin{equation} L_{\rm F}= i \overline \Psi^a \Big( \gamma_{\mu} (\partial_{\mu} + i A_{\mu} ) + i \gamma_4 \Phi \Big) \Psi_a \end{equation} where the Euclidean $\gamma$ matrices are given by \begin{comment} \footnote{Equivalently, \begin{equation} \gamma_M= \left( \begin{array}{cc} 0 & \bar \sigma_M \cr \sigma_M & 0 \end{array} \right), \qquad \bar \sigma_M = (\sigma_{\mu}, -iI), \qquad \sigma_M = (\sigma_{\mu}, iI), \end{equation} } \end{comment} \begin{equation} \gamma_{\mu} = \sigma_1 \otimes \sigma_\mu, \; \; \gamma_4= \sigma_2 \otimes I \qquad \{\gamma_{M}, \gamma_{N}\}= 2 \delta_{MN}, \qquad M, N=1,\ldots, 4 \end{equation} It is also convenient to define $$ \overline \sigma_M = (\sigma_{\mu}, -iI) \equiv (\sigma_{\mu}, \sigma_4), \qquad \sigma_M = (\sigma_{\mu}, iI)\equiv (\sigma_{\mu}, -\sigma_4),$$ where $\sigma_{\mu}$ are the Pauli matrices. The explicit form of the Dirac-like operator in this basis is \begin{eqnarray} \gamma_M D_M = \gamma_{\mu} D_{\mu} + \gamma_4 (i \Phi) = \left[ \begin{array}{cc} 0& \sigma_\mu(\partial_\mu +i A_{\mu}) + \sigma_4 (i \Phi) \cr \sigma_\mu(\partial_\mu +i A_{\mu}) - \sigma_4 (i \Phi) & 0 \end{array} \right] \end{eqnarray} and consequently, \begin{eqnarray} L_{\rm F}= && i \bar \psi_1^a ( \sigma_\mu(\partial_\mu +i A_{\mu}) + i \sigma_4 \Phi )\psi_1^a \; + \; i \psi_2^a ( \sigma_\mu(\partial_\mu +i A_{\mu}) - i \sigma_4 \Phi ) \bar \psi_2^a \label{fl} \end{eqnarray} In this representation, it is easier to see the global symmetries of the theory.
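The Clifford algebra $\{\gamma_M, \gamma_N\} = 2\delta_{MN}$ quoted above can be verified numerically for this explicit basis; a stdlib-only Python sketch (the matrix helpers are ours):

```python
# Check {gamma_M, gamma_N} = 2 delta_MN * 1_4 for
# gamma_mu = sigma_1 (x) sigma_mu (mu = 1,2,3) and gamma_4 = sigma_2 (x) I.
I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def kron(A, B):
    # Kronecker (tensor) product of two 2x2 matrices -> 4x4
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def anticomm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(len(A))] for i in range(len(A))]

gammas = [kron(s1, s) for s in (s1, s2, s3)] + [kron(s2, I2)]

for M in range(4):
    for N in range(4):
        AC = anticomm(gammas[M], gammas[N])
        for i in range(4):
            for j in range(4):
                expected = 2 if (M == N and i == j) else 0
                assert abs(AC[i][j] - expected) < 1e-12
```

The check exploits $\{\sigma_\mu,\sigma_\nu\}=2\delta_{\mu\nu}$ at the level of the $2\times 2$ blocks, but runs entirely on the explicit $4\times 4$ matrices.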
Besides the $SO(3)_L $ Euclidean Lorentz symmetry and the $C, P, T$ discrete charge conjugation, parity and (Euclidean) time reversal symmetries, the theory possesses a discrete ${\mathbb Z}_2$ symmetry \begin{equation} {\mathbb Z}_2: \qquad \Phi \rightarrow - \Phi, \qquad \psi_1 \rightarrow \bar \psi_2, \qquad \psi_2 \rightarrow \bar \psi_1 \end{equation} and the following global (flavor) symmetries \begin{eqnarray} \begin{array}{lll} SU(n_f)_1: \qquad &\psi_1 \rightarrow U \psi_1, \;\;\qquad & \bar \psi_2 \rightarrow \bar \psi_2 , \cr \cr SU(n_f)_2: \qquad & \psi_1 \rightarrow \psi_1, \;\; &\bar \psi_2 \rightarrow V \bar \psi_2, \cr \cr U(1)_V: \;\;\; \qquad & \psi_1 \rightarrow e^{i \delta} \psi_1, \;\; & \bar \psi_2 \rightarrow e^{ i \delta} \bar \psi_2\cr \cr U(1)_A: \;\;\; \;\; \qquad & \psi_1 \rightarrow e^{i \beta} \psi_1, \;\;& \bar \psi_2 \rightarrow e^{- i \beta} \bar \psi_2 \end{array} \label{symfun} \end{eqnarray} Note that the gauge covariant term possesses a larger global $SU(2n_f)$ symmetry group. Were the Yukawa couplings not present in the theory, the $SU(n_f)_1 \times SU(n_f)_2 \times U(1)_A$ global symmetry would be enhanced to $SU(2n_f)$. However, the relative sign difference between the covariant derivative and the Yukawa couplings prevents this enhancement in the microscopic theory. Since there is no chiral anomaly in $d=3$ dimensions, the $U(1)_A$ symmetry is a true symmetry of the theory. The discrete $P$ and ${\mathbb Z}_2$ symmetries and the continuous flavor symmetries prohibit a fermion mass term. To summarize, the full microscopic symmetry group ${\cal G_M}_{,\rm P(F)}$ of the theory is \begin{eqnarray} {\cal G_M}_{,\rm P(F)} = SO(3)_L \times C \times P \times T \times {\mathbb Z}_2 \times U(1)_V \times U(1)_A \times SU(n_f)_1 \times SU(n_f)_2 \label{master1} \end{eqnarray} \subsubsection{Real representation fermions} We restrict attention to the adjoint representation fermion.
Since the adjoint representation is real, two-component (complex) Dirac spinors are appropriate in all circumstances. Thus, $N_f=n_f$. The coupling of the fermions to the gauge boson and the adjoint scalar is \begin{eqnarray} L_{\rm adj}= && i {\rm tr} \; \left[ \bar \psi_a \Big( \sigma_\mu(\partial_\mu +i [A_{\mu}, \; ] ) + \sigma_4 [i \Phi, \; ] \Big) \psi_a \right] \label{f2} \end{eqnarray} The global flavor symmetries of the theory are given by \begin{eqnarray} && SU(n_f):\;\;\; \psi \rightarrow U \psi, \cr && U(1)_A: \;\;\; \;\; \psi \rightarrow e^{i \beta} \psi. \label{symadj} \end{eqnarray} Note that, in this case, $U(1)_A$ may be viewed as a fermion number symmetry. However, since it does not have the same interpretation in the theories with complex representation fermions, we will not use this nomenclature. Thus, the full symmetry group ${\cal G_M}_{,\rm P(adj)}$ of the microscopic theory is \begin{eqnarray} {\cal G_M}_{,\rm P(adj)} = SO(3)_L \times C \times P \times T \times U(1)_A \times SU(n_f) \label{master2} \end{eqnarray} {\bf Remark on QCD:} At the classical level, the flavor symmetry group of the Polyakov models with fermions on ${\mathbb R}^3$ is the same as the flavor symmetry group of the corresponding QCD on ${\mathbb R}^4$ or $S^1 \times {\mathbb R}^3$. However, in QCD in four dimensions, the analog of the symmetry that we referred to as $U(1)_A$ in (\ref{symfun}) and (\ref{symadj}) is anomalous. In odd dimensions, there is no chiral anomaly, and $U(1)_A$ is a true symmetry of the Polyakov model with massless fermions. In four dimensions, due to instanton effects, only a discrete ${\mathbb Z}_{2h}$ subgroup of $U(1)_A$ survives quantization, where $2h$ is the number of fermionic zero modes in the background of a four dimensional instanton. The microscopic $U(1)_A$ symmetry will play a major role in the characterization of deconfinement in P(${\cal R}$) theories.
\subsubsection{Perturbative operators and flux operators} \label{setup} In all the P$({\cal R})$ theories, we assume that the theory is always maximally Higgsed, and the long distance physics is dictated by the maximal abelian subgroup. There are massless bosons whose masslessness is protected to all orders in perturbation theory. Also, there are fermionic zero modes which interact with gauge fluctuations at long distances. Our interest is to determine the stability of such massless fields. There are two categories of operators which may be generated and alter the long distance physics. These are, following \cite{hermele-2004-70}: \begin{itemize} {\item perturbative operators (without flux), naturally incorporated in terms of the original variables;} {\item nonperturbative operators (flux operators), or topological excitations, naturally incorporated in terms of the dual photon.} \end{itemize} For example, in the pure Polyakov model, a would-be operator of the first category is the relevant Chern-Simons term, \begin{equation} \frac{ i n}{4 \pi} \int \epsilon^{ \mu \nu \rho} A_{\mu} \partial_{\nu} A_\rho \end{equation} which would induce a mass term for the photon. However, this operator does not get generated at one loop order (or at any order in perturbation theory), because the microscopic theory is parity invariant and the Chern-Simons term is parity odd. Thus, this type of instability does not occur. An operator in the second category is the monopole operator. Indeed, it is allowed by all symmetries and generates the $e^{-S_0} (e^{i \sigma} + \rm c.c.)$ interaction, which, in the deep infrared, is a mass term for the dual photon. This is the type of instability that we will look for in the Polyakov models with massless fermions and some related gauge theories. We will see that the microscopic symmetries $ {\cal G_M} $ and a topological shift symmetry, which arises as a natural consequence of the Callias index theorem, very severely restrict the types of operators that can be generated.
In some circumstances, the infrared theory is quantum critical, in the sense that there exist no perturbative or nonperturbative operators which may destabilize the masslessness of the photons and fermions, and some such theories become conformal field theories. \subsection{Callias index theorem and (continuous) topological symmetry} In the presence of massless (or light) fermions, the monopoles may carry fermionic zero modes attached to them \cite{Jackiw:1975fn}. The number of the fermionic insertions is determined uniquely by the Callias index theorem \cite{Callias:1977kg} and the matter content of the theory. \footnote{ For the relation between the more familiar Atiyah-Singer index theorem and the Callias index theorem in QCD-like gauge theories on small $S^1 \times {\mathbb R}^3$, see page 37 of \cite{Shifman:2008ja}.} Let ${\cal I}_{\bm \alpha_i}$ denote the index associated with the monopole with charge ${\bm \alpha_i}$. The typical form of the monopole operator in the theory with fermions is \begin{equation} e^{-S_0} e^{ \pm i \bm \alpha_i \bm \sigma} O_{\rm fermions}. \end{equation} The number of fermion insertions of each flavor/type, say $\psi_1^a$, in $O_{\rm fermions}$ is determined by the index ${\cal I}_{\bm \alpha_i}$, given by the difference of the dimensions of the spaces of zero energy eigenstates: \begin{eqnarray} {\cal I}_{\bm \alpha_i} = ( \dim \ker {\rlap{\raise 1pt \hbox{$\>/$}}D}_{ \bm \alpha_i } - \dim \ker \overline {\rlap{\raise 1pt \hbox{$\>/$}}D}_{ \bm \alpha_i} ) \end{eqnarray} Here, ${\rlap{\raise 1pt \hbox{$\>/$}}D}_{ \bm \alpha_i } = [\sigma_{\mu} (\partial_{\mu} + i A_{\mu})+ \sigma_4 (i\Phi)]_{\bm \alpha_i} $ is the Dirac-like operator in $d=3$ dimensions in the background of the monopole $\bm \alpha_i$. In our conventions, the $O_{\rm fermions}$ in the monopole operator has only $\psi_a$ insertions, and an anti-monopole operator can only have $\bar \psi_a$ insertions.
\begin{equation} e^{-S_0} e^{ + i \bm \alpha_i \bm \sigma} O_{\rm fermions}( \psi ) , \qquad e^{-S_0} e^{ - i \bm \alpha_i \bm \sigma} O_{\rm fermions}( \bar \psi ) \end{equation} This was indeed the reason for the peculiar spinor decomposition (\ref{split}). For an adjoint fermion, the index is equal to ${\cal I}_{\bm \alpha_i} =2$. In the presence of fundamental fermions, the index is ${\cal I}_{\bm \alpha_i} =\delta_{i, \hat i}$, where ${\hat i}$ labels the monopole on which the zero mode is localized. This holds for each flavor of two-component Dirac fermion. Since we have an even number of fundamental fermions, the number of fermionic zero mode insertions in $O_{\rm fermions}$ is always even. More precisely, for fermions in complex representations, we have two Dirac-like operators, as seen in (\ref{fl}), and two conjugates, \begin{eqnarray} && {\rlap{\raise 1pt \hbox{$\>/$}}D}^{(1)} = \overline \sigma_M D_M= \sigma_{\mu} (\partial_{\mu} + i A_{\mu})+ \sigma_4 (i\Phi), \qquad \overline {\rlap{\raise 1pt \hbox{$\>/$}}D}^{(1)} = \sigma_M D_M= \sigma_{\mu} (\partial_{\mu} - i A_{\mu})+ \sigma_4 (i\Phi), \cr \cr && {\rlap{\raise 1pt \hbox{$\>/$}}D}^{(2)} = \overline \sigma_M D_M= \sigma_{\mu} (\partial_{\mu} - i A_{\mu})- \sigma_4 (i\Phi), \qquad \overline {\rlap{\raise 1pt \hbox{$\>/$}}D}^{(2)} = \sigma_M D_M= \sigma_{\mu} (\partial_{\mu} + i A_{\mu}) - \sigma_4 (i\Phi), \qquad \qquad \, \end{eqnarray} The total number of fermion zero modes associated with a monopole ${\bm \alpha_i}$ is $ n_f( {\cal I}_{\bm \alpha_i}^{(1)} + {\cal I}_{\bm \alpha_i}^{(2)} )= 2n_f {\cal I}_{\bm \alpha_i} $. {\bf Symmetry transmutation:} The microscopic Polyakov Lagrangian with massless fermions has a global $U(1)_A$ symmetry, given in (\ref{symfun}) and (\ref{symadj}), regardless of whether the fermions are in a real or complex representation. Since it is a non-anomalous symmetry, it must be a symmetry of the long distance theory.
The $U(1)_A$ transformation, \begin{equation} \psi \rightarrow e^{i\beta} \psi , \;\; \; \bar \psi \rightarrow e^{- i\beta} \bar \psi \label{sym} \end{equation} implies $ O_{\rm fermions} \rightarrow e^{i N_f {\cal I}_{\bm \alpha_i} \beta } O_{\rm fermions} $. Therefore, the invariance of the monopole operator under (\ref{sym}) necessitates a continuous shift for the dual photons: \begin{equation} U(1)_{*} : \; \bm \alpha_i \bm \sigma \rightarrow \bm \alpha_i \bm \sigma - N_f {\cal I}_{\bm \alpha_i} \beta \label{topsym} \end{equation} Since this symmetry originates from the topological index theorem, we will call it a topological shift symmetry, or simply, a {\bf topological symmetry}, and refer to it as $U(1)_{*}$. Just like the abelian duality transform \cite{Polyakov:1976fu}, the topological shift symmetry requires going to sufficiently long distances. In the IR, the $U(1)_A$ symmetry of the original theory intertwines with the shift symmetry for the dual photons (\ref{U1J}). This phenomenon pervades the physics of all P$({\cal R})$ theories. More precisely, recall that in the absence of fermions and monopoles, the free Maxwell theory is dual to a free scalar theory with a continuous shift symmetry $U(1)_{\rm flux}$ (\ref{U1J}). The presence of monopoles (in the absence of fermions) spoils this symmetry completely. However, in the presence of fermions, the $U(1)_{*}$ linear combination of $U(1)_A$ and $U(1)_{\rm flux}$, \begin{equation} U(1)_{*} : U(1)_A - N_f {\cal I}_{\bm \alpha_i} U(1)_{\rm flux} \label{topsym2} \end{equation} remains a true symmetry of the theory. \footnote{If there were no dual photon field to soak up the phase of the fermionic zero modes, this would indeed imply that $U(1)_A$ must be anomalous, which is incorrect on ${\mathbb R}^3$. Compare this with one flavor QCD on ${\mathbb R}^4$. The instanton vertex also has two fermion insertions and no extra structure to soak up the $U(1)_A$ chiral rotation.
Indeed, there is a chiral anomaly on ${\mathbb R}^4$ and the $U(1)_A$ is anomalous. Only a ${\mathbb Z}_2$ subgroup of it is anomaly-free. } A continuous shift symmetry can protect a scalar from acquiring a mass. Since there is only one parameter in the transformation (\ref{topsym}), only one dual photon is protected by the topological symmetry. At a conceptual level, this shows that one gauge degree of freedom remains massless in the IR of the P(${\cal R}$) theory regardless of any other detail, so long as the microscopic theory possesses the $U(1)_A$ symmetry. We may call this phase deconfined, since a gauge boson remains infinite ranged. Although this is true, it is a crude characterization. A more refined categorization of the deconfined phases, which can distinguish a free infrared theory (free photon) from a strongly or weakly coupled conformal field theory (CFT), is needed, and will be discussed. \subsection{Revisiting P(adj): Dual scalar as a Nambu-Goldstone boson} Consider the $SU(2)$ one flavor P(Adj). (Below is a review and slight refinement of Affleck et al.\ \cite{Affleck:1982as}.) We assume the long distance gauge structure reduces down to $U(1)$. Perturbatively, we have a photon and a neutral fermion, described by \begin{equation} L= \frac{1}{4 g_3^2} F_{\mu \nu}^2 + i \bar \psi \sigma_{\mu} \partial_{\mu} \psi \end{equation} a free field theory. Parity forbids relevant perturbative operators such as $\bar \psi \psi$ from being generated \cite{Affleck:1982as}. Nonperturbatively, there is only one type of elementary monopole (and its anti-monopole). The index is ${\cal I}_{\alpha_1}=2$ for adjoint fermions. Thus, by (\ref{sym}) and (\ref{topsym}), we have \begin{equation} \psi \rightarrow e^{i \beta} \psi , \qquad \sigma \longrightarrow \sigma - N_f {\cal I}_{\alpha_1} \beta = \sigma- 2 \beta \label{symadj2}.
\end{equation} There is only one relevant ${\cal G_M}_{,\rm P(adj)}$ singlet that one can construct, and it gets induced nonperturbatively: \begin{eqnarray} \Delta L^{\rm non-pert.} = e^{-S_0} e^{i \sigma} \psi \psi + e^{-S_0} e^{- i\sigma} \; \bar\psi \bar \psi \end{eqnarray} There is also a large class of ${\cal G_M}_{,\rm P(adj)}$ singlet but irrelevant multi-monopole operators of the form $ ( e^{- S_0} e^{i \sigma} \psi \psi )^k$, where $k$ is some integer. The continuous shift symmetry (\ref{symadj2}) forbids any kind of potential (such as $e^{i\sigma}+{\rm c.c.}$), i.e., a mass term for the dual photon. This proves that the photon must remain massless nonperturbatively. Affleck et al.\ showed that, by expanding the $\sigma$ field around, say, zero, the $U(1)_{*}$ symmetry is spontaneously broken and the photon is the Nambu-Goldstone boson. The fermion acquires a mass $\sim e^{-S_0}$ due to $U(1)_{*}$ breaking. This is the conventional way to have gapless scalars in a gauge field theory. For a fuller discussion, see Refs.~\cite{Affleck:1982as, Unsal:2007vu}. For $SU(N)$ and multi-flavor generalizations, see \cite{Unsal:2007jx}. It is useful to think of the Noether current associated with the symmetry (\ref{symadj2}) in the $n_f$ flavor theory. It is \begin{equation} K_{\mu}= \bar \psi \sigma_{\mu} \psi - n_f {\cal I}_{\alpha_1} \partial_{\mu} \sigma = \bar \psi \sigma_{\mu} \psi- n_f {\cal I}_{\alpha_1} {\cal J}_{\mu} = j_{\mu} - n_f {\cal I}_{\alpha_1} {\cal J}_{\mu} \end{equation} Recall from $\S$.\ref{sec:Pol} that the current associated with $U(1)_{\rm flux}$ satisfies $ {\cal J}_{\mu}= \partial_{\mu} \sigma = \coeff 12 \epsilon_{\mu \nu \rho} F^{\nu \rho} = F_{\mu}$, where $F_{\mu}$ is the magnetic field.
Using $\partial_{\mu} F_{\mu} = \nabla^2 \sigma= \rho_m(x)$, where $ \rho_m(x)$ is the magnetic charge density, the local current conservation can be re-expressed as \begin{equation} \partial_{\mu} K_{\mu} = \partial_{\mu} (j_{\mu} - n_f {\cal I}_{\alpha_1} {\cal J}_{\mu}) =0 \Longrightarrow \partial_{\mu} j_{\mu} (x)= n_f {\cal I}_{\alpha_1} \rho_m(x) \end{equation} which implies the conservation of the $U(1)_{*}$ current as stated in (\ref{topsym2}). The final form is the local version of the Callias index theorem, which ties the $U(1)_A$ charge to the $U(1)_{\rm flux}$ charge. Namely, in the presence of $n_f$ adjoint fermions, \begin{eqnarray} Q_{*} &=& Q_A - n_f {\cal I}_{\alpha_1} Q_{\rm flux} \cr &=&N_\psi - N_{\bar \psi} - n_f {\cal I}_{\alpha_1} (N_{\rm monopole} -N_{\rm anti-monopole}) \end{eqnarray} is a conserved charge, where $N_X$ counts the number of the $X$ excitations. This means that any perturbative or non-perturbative interaction vertex in the long distance theory preserves $Q_{*}$. However, the $U(1)_{*}$ is spontaneously broken by the vacuum, and the photon is a Goldstone boson. {$\bf SU(N)$:} It is also useful to review the $SU(N)$ generalization of this theory, since it carries important lessons on the interplay of symmetry and dynamics. Due to gauge symmetry breaking down to $U(1)^{N-1}$, there exist $N-1$ photons and $N-1$ massless fermions, the components along the Cartan subalgebra. The infrared Lagrangian in perturbation theory is, therefore, \begin{eqnarray} L^{\rm pert. theory} = \coeff 12 (\partial \bm \sigma)^2 +i \bm {\bar \psi} \sigma_{\mu} \partial_{\mu} \bm \psi, \qquad \bm \sigma \equiv (\sigma_1, \ldots, \sigma_{N-1}), \; \; \bm \psi \equiv (\psi_1, \ldots, \psi_{N-1}). \end{eqnarray} What makes this system simpler than the complex representation theories studied in the subsequent section is the electric neutrality of the zero-mode fermions.
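The bookkeeping behind the conserved charge $Q_{*}$ can be made very explicit. The following Python sketch (the function name and counting conventions are ours, purely for illustration) verifies that the one-flavor monopole vertex $e^{-S_0} e^{i\sigma}\psi\psi$ and its conjugate carry vanishing $Q_{*}$, while a bare flux operator $e^{i\sigma}$, a would-be photon mass term, does not and is therefore forbidden.

```python
# Check Q_* = Q_A - n_f * I * Q_flux for operators in the n_f = 1 P(adj)
# theory, with the adjoint Callias index I = 2. Counting conventions
# (q_star, field-insertion counts) are illustrative, not from the paper.

def q_star(n_psi, n_psibar, n_mono, n_antimono, n_f=1, index=2):
    """Topological charge Q_* of an operator, counting field insertions."""
    q_a = n_psi - n_psibar            # U(1)_A charge
    q_flux = n_mono - n_antimono      # U(1)_flux (magnetic) charge
    return q_a - n_f * index * q_flux

# Monopole vertex e^{i sigma} psi psi: two zero modes on one monopole.
assert q_star(n_psi=2, n_psibar=0, n_mono=1, n_antimono=0) == 0
# Conjugate vertex on the anti-monopole: also neutral.
assert q_star(n_psi=0, n_psibar=2, n_mono=0, n_antimono=1) == 0
# Bare flux operator e^{i sigma} (photon mass term): charged, forbidden.
assert q_star(n_psi=0, n_psibar=0, n_mono=1, n_antimono=0) == -2
```

Any vertex built by multiplying $Q_{*}$-neutral operators remains neutral, which is why the multi-monopole operators $(e^{-S_0}e^{i\sigma}\psi\psi)^k$ are also allowed.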
In perturbation theory, there are no relevant or marginal operators which respect the underlying symmetries of the original theory and which may be generated perturbatively. Thus, the Gaussian fixed point is stable to all orders in perturbation theory. However, there exists a plethora of relevant nonperturbative (flux) operators. The index is ${\cal I}_{\bm \alpha_i}= 2 $ for all $i=1, \ldots, N-1 $. The $N-1$ monopole operators are $e^{-S_0} e^{i \bm \alpha_i \bm \sigma} {\bm \alpha}_i \bm \psi \bm \alpha_i \bm \psi $, none of which generates a mass term for the dual photons. Notice that each term is manifestly invariant under the topological $U(1)_*$ symmetry (\ref{sym}), (\ref{topsym}). In the $e^{-S_0}$ expansion, at order $e^{-2S_0}$, there are $N-2$ linearly independent relevant operators, $e^{-2S_0} e^{i ({\bm \alpha}_j-{\bm \alpha}_{j+1}) \bm \sigma} $, which get generated. Even though there is no fermion zero mode attached to these topological objects, since they are essentially bound states of a monopole (with charge $\bm \alpha_j$) and an anti-monopole (with charge $-\bm \alpha_{j \pm1}$), their invariance under the $U(1)_{*}$ topological symmetry is also manifest. \footnote{ A monopole and an antimonopole in the presence of massless adjoint fermions interact logarithmically at large distances in Euclidean ${\mathbb R}^3$, rather than via the Coulomb law. (Also see \cite{PhysRevB.39.8988,Marston:1990bj} for $U(1)$ QED, but one needs to be really careful here. See formula (\ref{mmint}) and the discussion around it.) The $\log|x-y|$ potential marginally binds a monopole to its antimonopole. The combined state is magnetically neutral, and cannot lead to Debye screening. (A monopole-antimonopole pair is a dipole, and at long distances the dipole-dipole interaction is $1/r^3$, hence the absence of Debye screening.)
In P(adj) with $N\geq 3$, the presence of the fermion zero modes also leads to $N-2$ bound states of a monopole with charge $\bm \alpha_j$ and an antimonopole with charge $-\bm \alpha_{j\pm1}$. The combined topological excitation has a nonzero magnetic charge $\bm \alpha_j - \bm \alpha_{j \pm 1}$ and at large distances interacts via the Coulomb potential, $1/r$. These excitations are referred to as magnetic bions \cite{Unsal:2007jx}. The magnetic bions render $N-2$ of the $N-1$ photons massive. In QCD(adj) on $S^1 \times {\mathbb R}^3$ discussed in Ref.\cite{Unsal:2007jx}, due to an extra elementary monopole, one can form $N-1$ magnetic bions, and the gauge sector is fully gapped. This also has a nice symmetry interpretation. The $U(1)_{*}$ continuous topological shift symmetry turns into a $({\mathbb Z}_N)_{*}$ discrete shift symmetry on small $S^1 \times {\mathbb R}^3$. The discrete shift symmetry cannot prohibit a mass term for scalars. } Thus, the combined nonperturbative effects up to order $e^{-3S_0}$ are given by \begin{eqnarray} \Delta L^{\rm non-pert.} = e^{-S_0} \sum_{j=1}^{N-1} e^{i \bm \alpha_j {\bm \sigma}} \bm \alpha_j \bm \psi \bm \alpha_j \bm \psi + e^{-2S_0} \sum_{j=1}^{N-2} e^{i (\bm \alpha_j-\bm \alpha_{j+1}) {\bm \sigma}} + \; (\rm conjugates) \end{eqnarray} This renders $N-2$ of the photons massive, leaving the one which is protected by the shift symmetry. As in the $N=2$ case, the $U(1)_{*}$ is spontaneously broken and there exists only one Goldstone boson. The higher order terms in the $e^{-S_0}$ expansion do not alter this conclusion. This application shows that the existence of the $U(1)_{*}$ symmetry provides a characterization of the absence of a mass gap in the gauge sector and of the absence of confinement. The $U(1)_{*}$ does not imply the absence of monopoles or the irrelevance of monopole operators. Nor does the presence of elementary monopoles, or of magnetically charged bound states of monopoles, imply confinement.
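The $U(1)_{*}$ selection rule that organizes this operator list can be encoded in a few lines. The sketch below (our own illustrative encoding, not the paper's notation) assigns to a flux operator $e^{i\sum_i q_i \bm\alpha_i\bm\sigma}\times(2k$ zero modes$)$ the net phase it acquires under the shift $\bm\alpha_i\bm\sigma \rightarrow \bm\alpha_i\bm\sigma - 2n_f\beta$, and checks that monopoles and bions are allowed while a bare monopole (a photon mass term) is not.

```python
# U(1)_* selection rule in SU(N) P(adj) with n_f flavors: the operator
# e^{i sum_i q_i alpha_i.sigma} x (2k fermion zero modes) picks up the
# phase beta * (2k - 2 n_f sum_i q_i); invariance requires it to vanish.
# Conventions (u1_star_charge, q) are illustrative only.

def u1_star_charge(q, n_zero_modes, n_f=1):
    """Net U(1)_* phase (in units of beta) of a flux operator.
    q: integer coefficients q_i of the magnetic charge sum_i q_i alpha_i."""
    return n_zero_modes - 2 * n_f * sum(q)

N = 4  # illustrative choice of gauge group SU(4)
# Elementary monopole e^{i alpha_j.sigma} (alpha_j.psi)(alpha_j.psi): allowed.
for j in range(N - 1):
    q = [1 if i == j else 0 for i in range(N - 1)]
    assert u1_star_charge(q, n_zero_modes=2) == 0
# Magnetic bion e^{i (alpha_j - alpha_{j+1}).sigma}: no zero modes, allowed.
for j in range(N - 2):
    q = [0] * (N - 1)
    q[j], q[j + 1] = 1, -1
    assert u1_star_charge(q, n_zero_modes=0) == 0
# Bare monopole e^{i alpha_j.sigma} (photon mass term): forbidden.
assert u1_star_charge([1, 0, 0], n_zero_modes=0) != 0
```

The bion operators pass the test precisely because their net coefficient $\sum_i q_i$ vanishes, which is the algebraic reason their invariance is manifest despite carrying no zero modes.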
\subsection{Complex representation fermions, masslessness and quantum criticality} \label{sec:F} Let us consider an $SU(2)$ Yang-Mills noncompact adjoint Higgs system with $N_f= 2n_f$ two component fundamental Dirac fermions on ${\mathbb R}^3$, the P(F) theory. The theory possesses the symmetries (\ref{master1}). As always, we assume the $SU(2)$ gauge structure reduces down to $U(1)$ at long distances. The off-diagonal gauge degrees of freedom ($W$-bosons), one component of each fermion $SU(2)$ gauge doublet, and the scalars acquire masses and decouple from the long distance physics. In perturbation theory, the infrared theory is described by the abelian QED$_3$ action \begin{eqnarray} S^{\rm P(F)}_{\rm pert.}= \int_{{\mathbb R}^3} \; \Big[ \frac{1}{4 g_3^2} F^{ 2}_{\mu \nu} \; + \; i \bar \Psi^a \gamma_{\mu} (\partial_{\mu} + i A_{\mu}) \Psi_a \Big] \label{QED} \end{eqnarray} The action possesses an enhanced (accidental) $SU(2n_f)$ flavor symmetry group, and a $U(1)_V$ symmetry which is the global part of the gauge symmetry. This enhancement is expected in perturbation theory, because the Higgs scalar acquires a mass and disappears from the long distance description. Since the disparity between the gauge-kinetic term and the Yukawa term in (\ref{fl}) was the source of the lower symmetry, and since there are no Yukawas in the long distance limit, there is an enhanced symmetry in perturbation theory. The non-perturbative effects may in principle be aware of the lower symmetry of the high energy theory, and indeed, they are. Let us first take $N_f=2$. As in P(Adj), there is one type of monopole. The index theorem tells us that for each fundamental flavor, the monopole has ${\cal I}_{\bm \alpha_1}= 1$ zero mode.
There is one relevant ${\cal G}_{\cal M}$ singlet operator which is induced nonperturbatively: \begin{eqnarray} {\rm Relevant} \; {\cal G}_{\cal M} \; {\rm singlets} : \;\;\; e^{-S_0} e^{i \sigma} \psi_1 \psi_2 + e^{-S_0} e^{- i\sigma} \; \bar\psi_1 \bar \psi_2 \end{eqnarray} The two fermions and the dual photon transform under $U(1)_{*}$ as \begin{eqnarray} && U(1)_{*}: \psi_1 \rightarrow e^{i \beta } \psi_1, \qquad \psi_2 \rightarrow e^{i \beta } \psi_2, \qquad \sigma \longrightarrow \sigma - 2 \beta \; . \end{eqnarray} The continuous shift symmetry forbids any kind of mass term for the dual photon. In particular, it forbids the $e^{-S_0} (e^{i \sigma} +e^{-i \sigma} ) $ operator. Thus, the photon must remain massless nonperturbatively. In the multi-flavor case $N_f=2n_f \geq 4$, the simplest monopole operator has $2n_f$ insertions of the fermionic zero modes, $$e^{-S_0} e^{i \sigma} \left[ (\psi_1^1 \psi_2^1)\ldots (\psi_1^{n_f} \psi_2^{n_f}) + {\rm permutations} \right]$$ The equality of the number of $\psi_1^a$ insertions with the number of $\psi_2^a$ insertions is a consequence of the Callias index theorem and the $U(1)_V$ symmetry, i.e., electric charge neutrality. Making the $SU(n_f)_1 \times SU(n_f)_2 $ symmetry of the monopole operator manifest gives \begin{eqnarray} {\cal G}_{\cal M} \; {\rm singlets} : \;\;\; e^{-S_0} e^{i \sigma} \det_{a, b} \psi_1^a \psi_2^b + e^{-S_0} e^{- i\sigma} \; \det_{a, b} \bar\psi_1^a \bar \psi_2^b \end{eqnarray} where $a, b=1, \ldots, n_f$ are flavor indices. The invariance of the vertex under the $U(1)_A$ symmetry necessitates the dual photon to transform as $\sigma \longrightarrow \sigma - 2n_f \beta$ under $U(1)_{*}$. We have identified a distinction between the behavior of the $N_f=2$ and $N_f \geq 4$ theories. In the $e^{-S_0}$ expansion, the leading non-perturbatively generated flux operator is classically relevant in the $N_f=2$ case, and irrelevant in the $N_f \geq 4$ cases.
Therefore, the latter class of theories is quantum critical, and will exhibit an enhanced $SU(2n_f)$ symmetry at long distance. For the $N_f=2$ case, there is one relevant direction and no enhancement of flavor symmetry takes place. It is again useful to study the Noether currents in the effective long distance theory. Unlike P(Adj), there are two types of conserved $U(1)$ currents in the Polyakov model with $n_f$ complex representation fermions. One is associated with the $U(1)_V$ symmetry, and the other is a linear combination of $U(1)_A$ and $U(1)_{\rm flux}$. These are, in the conventions of $\S$.\ref{introcomp}, \begin{eqnarray} &&J_{\mu}= j_{1, \mu} + j_{2, \mu} = \bar \psi^a_{1} \sigma_{\mu} \psi_{1,a} + \psi^a_{2} \sigma_{\mu} \bar \psi_{2,a} \cr \cr &&K_{\mu}= j_{1, \mu} - j_{2, \mu} - 2n_f {\cal I}_{\alpha_1} \partial_{\mu} \sigma = \bar \psi^a_{1} \sigma_{\mu} \psi_{1,a} - \psi^a_{2} \sigma_{\mu} \bar \psi_{2,a} - 2n_f {\cal I}_{\alpha_1} \partial_{\mu} \sigma \end{eqnarray} The conserved charge associated with the $U(1)_V$ current $J_{\mu}$ is \begin{equation} (N_{\psi_1} - N_{\bar \psi_1} )+ (N_{\bar \psi_2} - N_{\psi_2} ) \end{equation} and the conserved charge associated with $U(1)_{*}$ is \begin{eqnarray} Q_{*} &=& Q_A - 2 n_f {\cal I}_{\alpha_1} Q_{\rm flux} \cr &=& (N_{\psi_1} - N_{\bar \psi_1} )- (N_{\bar \psi_2} - N_{ \psi_2} ) - 2 n_f {\cal I}_{\alpha_1} (N_{\rm monopole} -N_{\rm anti-monopole}) \end{eqnarray} Clearly, these symmetries are in accord with the monopole operators and their zero mode structures. In fact, the conservation of the $U(1)_{*}$ current, $ \partial_{\mu}K_{\mu} =0$, is the local reincarnation of the Callias index theorem. We will discuss the infrared limit of these theories after generalizing the basic essentials to $SU(N)$ gauge theory. {$\bf SU(N)$:} The difference in long distance physics between $N_f=2$ and $N_f\geq 4$ is not special to the $SU(2)$ P(F) theory.
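As a quick consistency check of the charge assignments above (a Python sketch with our own hypothetical function name; the charge formulas themselves are from the text), one can verify that the $N_f=2$ monopole vertex $e^{-S_0}e^{i\sigma}\psi_1\psi_2$ is neutral under both $U(1)_V$ and $U(1)_{*}$:

```python
# Charges of operators in the N_f = 2 (n_f = 1) P(F) theory, with index I = 1:
#   Q_V = (N_psi1 - N_psibar1) + (N_psibar2 - N_psi2)
#   Q_* = (N_psi1 - N_psibar1) - (N_psibar2 - N_psi2) - 2 n_f I Q_flux

def charges(n_psi1, n_psibar1, n_psi2, n_psibar2, q_flux, n_f=1, index=1):
    """Return (Q_V, Q_*) for an operator, counting field insertions."""
    q_v = (n_psi1 - n_psibar1) + (n_psibar2 - n_psi2)
    q_s = (n_psi1 - n_psibar1) - (n_psibar2 - n_psi2) - 2 * n_f * index * q_flux
    return q_v, q_s

# Monopole vertex e^{i sigma} psi_1 psi_2: one zero mode of each type
# on a unit-flux monopole -> neutral under both symmetries.
assert charges(1, 0, 1, 0, q_flux=1) == (0, 0)
# Conjugate vertex on the anti-monopole: also neutral.
assert charges(0, 1, 0, 1, q_flux=-1) == (0, 0)
# A bare flux operator e^{i sigma} violates Q_* and is forbidden.
assert charges(0, 0, 0, 0, q_flux=1) == (0, -2)
```

The same bookkeeping, with index ${\cal I}=1$ replaced by the appropriate zero-mode count, reproduces the neutrality of the determinant-type vertices in the multi-flavor case.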
The infrared limit of the $N\geq 3$ $SU(N)$ gauge theory with $N_f$ massless fermion flavors turns out to be rather similar to the $N_f$ flavor $SU(2)$ theory, as a consequence of the non-perturbative dynamics. We assume the gauge structure reduces as $SU(N) \rightarrow [U(1)]^{N-1}$ at long distances. In perturbation theory, the infrared has $N-1$ types of massless photons, and $2 n_f$ massless fermions. The other fields acquire masses and decouple from the long distance physics. There are $N-1$ varieties of elementary monopoles. Their Callias indices are given by ${\cal I}_{\bm \alpha_i} = \delta_{i, 1} = (1, 0, \ldots, 0)_i, \; i=1, \ldots, N-1 $, where, without loss of generality, we assume that the fermion zero mode is localized into the monopole with charge $\bm \alpha_1$. Thus, the $U(1)_{*}$ shift symmetry reads \begin{eqnarray} && \bm \alpha_1{\bm \sigma} \rightarrow \bm \alpha_1{\bm \sigma} - (2n_f) \beta , \qquad \cr && \bm \alpha_j{\bm \sigma} \rightarrow \bm \alpha_j {\bm \sigma}, \qquad j=2, \ldots N-1 \end{eqnarray} The symmetries do not forbid the $N-2$ types of monopole operators which do not carry any fermionic zero modes. The first monopole operator has $2n_f $ fermion insertions and is irrelevant for $2n_f \geq 4$. The list of all the flux operators invariant under the symmetries of the microscopic theory up to order $e^{-2S_0}$ is \begin{eqnarray} \; {\cal G_M} \; {\rm singlets} : \left\{ \; \; e^{-S_0} e^{i \bm \alpha_1 {\bm \sigma}} \det_{a,b} \psi_1^a \psi_2^b, \; \; e^{-S_0} e^{i \bm \alpha_2 {\bm \sigma}}, \ldots, e^{-S_0} e^{ i \bm \alpha_{N-1} {\bm \sigma} } \; \; \right\} + { \rm c.c.} \end{eqnarray} Hence, $N-2$ out of $N-1$ photons acquire masses due to relevant monopole induced effects. Thus, the $SU(N)$ P(F) theory undergoes changes in its gauge structure as we consider longer and longer length scales.
The first change is perturbative, $SU(N) \longrightarrow [U(1)]^{N-1} $, and the second is non-perturbative, $[U(1)]^{N-1} \longrightarrow U(1)$, as shown in (\ref{pattern1}). The very long distance $U(1)$ theory is quantum critical due to the absence of any relevant or marginal perturbations which may destabilize its masslessness. We will comment on the effects of strong (non-compact) gauge fluctuations in the next section. Note that regardless of the value of the rank $N$ in the original gauge theory, the deep IR of the P(F) theory always reduces to an abelian $U(1)$ QED$_3$ theory with $2n_f$ flavors. Below, we discuss the long distance limit of this theory. \subsection{Conformal field theories (CFTs) at long distances} $\bf 2 n_f \geq 4:$ The $U(1)_{*}$ topological symmetry combined with symmetries such as parity, Lorentz and flavor symmetries forbids any relevant instability that may occur in the infrared limit of our theory. The monopole operators such as $e^{i \sigma} $, or $e^{i \sigma} ({\rm fermion \; bilinears}) $, where $\sigma$ is the dual of the final $U(1)$ factor, are forbidden. This means that, in the compact continuum QED$_3$ theory obtained as described above, {\bf there are no relevant flux (monopole) operators in the original ``electric" theory. } Thus, the non-perturbative lagrangian is the same as the perturbative one, \begin{eqnarray} S^{\rm P(F)}_{\rm non pert.}= S^{\rm P(F)}_{\rm pert.} \; + \; \ldots \label{QED2} \end{eqnarray} where the ellipsis stands for irrelevant perturbations consistent with the microscopic symmetries of the underlying theory. This is QED$_3$ with charged massless fermions, and with an enhanced (accidental) $SU(2n_f)$ flavor symmetry. The theory (\ref{QED2}) has no dimensionless coupling constant. The expansion parameter is $\frac{g_3^2}{k }$, where $k$ is some Euclidean momentum scale. Thus, perturbative techniques are not useful at low energies.
The low energy limit is a strongly correlated system of fermions and gauge fluctuations whose masslessness is protected by $U(1)_{*}$. A logical possibility for the infrared theory is a weakly or strongly coupled conformal field theory (CFT), depending on the number of flavors. In order to see this, let us calculate the correction to the photon propagator at one loop order in perturbation theory. Integrating out the fermions produces the non-analytic correction to the gauge kinetic term \begin{equation} \frac{1}{g_3^2} F_{\mu \nu}^2 \rightarrow \frac{1}{g_3^2} \left(F_{\mu \nu}^2 + \frac{g_3^2 n_f }{8} F_{\mu \nu} \frac{1}{\sqrt \Box } F_{\mu \nu} \right) \; . \end{equation} In the large $n_f$ limit, the higher order effects in perturbation theory are suppressed by powers of $1/n_f$ and the one loop result becomes reliable \cite{PhysRevLett.60.2575}. The low energy limit is the same as taking $g_3^2$ to $\infty$. These changes in the photon propagator can be summarized as \begin{equation} \frac{g_3^2}{k^2} \underbrace{\longrightarrow}_{\rm one-loop} \frac{g_3^2}{k^2 + \frac{g_3^2}{8} n_f k} \underbrace{\longrightarrow}_{\rm low \; energy} \frac{8}{n_f k} \label{oneloop} \end{equation} Thus, we are left with a theory without any scale in the IR, with a gauge boson propagator $\sim \frac{1}{ k}$. Using the canonical normalization for the gauge kinetic term, the Lagrangian can be expressed as \begin{equation} L \sim F_{\mu \nu} \Box^{-1/2} F_{\mu \nu} + i \bar \Psi^a \gamma_{\mu} (\partial_{\mu} + i \frac{1}{\sqrt {n_f}} A_{\mu}) \Psi_a \label{deepIR} \end{equation} with a dimensionless expansion parameter $1/\sqrt {n_f}$. This is a remarkable change in the dynamics. To appreciate this, let us measure the potential between two external electric charges located at ${\bf x, y} \in {\mathbb R}^2$. The Coulomb potential between the two test charges is $V_{\rm Coulomb} ({\bf |x-y|}) = \log {\bf |x-y|}$ in two spatial dimensions, and hence marginally confining.
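The interpolation (\ref{oneloop}) is easy to verify numerically. The sketch below (with purely illustrative values of $g_3^2$ and $n_f$; the function names are ours) checks that the one-loop dressed propagator approaches $8/(n_f k)$ for $k \ll g_3^2 n_f$ and the free propagator $g_3^2/k^2$ for $k \gg g_3^2 n_f$:

```python
# Numerical illustration of the one-loop photon propagator interpolation:
# g^2/k^2  ->  g^2/(k^2 + (g^2 n_f / 8) k)  ->  8/(n_f k) at low momentum.
# The values of g3sq and n_f below are illustrative only.

g3sq, n_f = 1.0, 4

def prop_one_loop(k):
    """One-loop dressed photon propagator (scalar part)."""
    return g3sq / (k**2 + (g3sq * n_f / 8.0) * k)

def prop_ir(k):
    """Scale-free deep-IR limit of the propagator."""
    return 8.0 / (n_f * k)

# Deep IR (k << g^2 n_f): dressed propagator approaches 8/(n_f k).
k = 1e-4
assert abs(prop_one_loop(k) / prop_ir(k) - 1.0) < 1e-3
# UV (k >> g^2 n_f): dressed propagator approaches the free one, g^2/k^2.
k = 1e4
assert abs(prop_one_loop(k) / (g3sq / k**2) - 1.0) < 1e-3
```

The crossover momentum is set by $g_3^2 n_f$, which is why the deep-IR behavior is independent of $g_3^2$: the coupling has been traded for the scale at which conformal behavior sets in.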
The non-perturbative dynamics of the pure Polyakov model alters this potential into a linearly confining one. In the infrared of the theory with massless fundamental fermions, the potential is dictated by conformal behavior. Thus, \begin{equation} V_{\rm non-pert.}({\bf |x-y|}) \sim \left\{ \begin{array}{ll} {\bf |x-y|} & \qquad {\rm pure \; \; Polyakov \; or \; with \; \; heavy \; fermions} \cr \cr {\bf |x-y|}^{-1} & \qquad {\rm with \; \; massless \; \; fundamental \; \; fermions} , \cr\cr \log {\bf |x-y|} & \qquad {\rm with \; \; massless \; \; adjoint \; \; fermions} , \end{array} \right. \label{CFT-confine} \end{equation} In some sense, the long distance behavior of the Polyakov model with massless fermions is modified more drastically than that of the Polyakov model {\it per se}. This example also shows that the presence of a single massless fermion can completely alter the confining property of the gauge theory! However, the main concept here is not really the presence or absence of a fermionic species. Rather, it is the nature (continuous versus discrete) of the topological symmetry, as we will discuss in more detail, especially in connection with the QCD* theory. The microscopic symmetries of the P(F) theory given in (\ref{master1}) enhance and transmute into \begin{eqnarray} {\cal G}_{\rm IR, P(F) } \sim ( {\rm conformal \; symmetry} ) \times C \times P \times T \times U(1)_V \times U(1)_{\rm flux} \times SU(2n_f) \cr \label{master1conf} \end{eqnarray} at long distances. In the $2n_f \geq 4$ cases, the relevant $U(1)_{*}$-respecting operators also individually respect $U(1)_A$ and $U(1)_{\rm flux}$. The $U(1)_A$ is part of $SU(2n_f)$, and $U(1)_{\rm flux}$ is the symmetry associated with the conservation of magnetic flux. In the $2n_f=2$ case, only the $U(1)_{*}$ combination is a symmetry. Eq.(\ref{master1conf}) is indeed the symmetry group of the algebraic spin liquid discussed in \cite{hermele-2005-72}.
The P(F) theory, just like the spin liquids, undergoes an enormous space-time and flavor symmetry enhancement. [Compare the long distance symmetries with the short distance ones, (\ref{master1}).] Interestingly, very different microscopic theories (one is a lattice spin system in the $\pi$-flux or staggered flux state and the other is the continuum P(F) theory) both flow to the same interacting long distance CFT. \footnote{Recently, the gauge/string (AdS/CFT) correspondences are receiving much attention as a way to model QCD-like gauge theories in 4d and lower dimensional condensed matter systems. Although there is currently no complete matching which captures both microscopic and macroscopic aspects of the most interesting gauge theories (such as the ones appearing in Nature), it certainly makes sense to model the infrared CFTs, or whatever infrared behavior the strongly coupled system exhibits, by using a gravitational dual. Such constructions have computational utility at strong coupling. It may be useful to construct the gravitational duals of the spin liquids.} Thus, the multi-flavor QED$_3$ theories which descend from the Polyakov model are generically quantum critical. A recent work discusses the finite temperature limit of this class of CFTs \cite{kaul-2008}. It is not completely clear what occurs for fewer flavors. A logical possibility is that the weakly coupled CFT may interpolate into a strongly coupled CFT. For $2n_f \geq 4$, there is some evidence from large scale lattice studies that no chiral symmetry breaking occurs in this theory \cite{Hands:2002dv}. These lattice simulations of {\it non-compact} QED$_3$ are relevant to our discussion only because the effect of compactness of the gauge boson in our theories with $n_f\geq 2$ is irrelevant in the renormalization group sense. Also, the inequality in Ref.~\cite{Appelquist:1999hr} suggests that the $SU(2n_f)$ global symmetry should be unbroken for $n_f \geq 2$.
Ref.~\cite{Appelquist:1999hr} also argues that an earlier bound at larger values ($3 < n_f<4 $) \cite{PhysRevLett.60.2575} is an overestimate arising from the truncated Schwinger-Dyson equations. $\bf 2 n_f =2 :$ In the $n_f=1$ case, the nonperturbative infrared Lagrangian of P(F) is \begin{equation} L^{\rm P(F)}_{\rm non pert.}= L^{\rm P(F)}_{\rm pert} + e^{-S_0} e^{i \sigma} \psi_1 \psi_2 + e^{-S_0} e^{- i\sigma} \; \bar\psi_1 \bar \psi_2 + \; \ldots \label{PF} \end{equation} where the ellipsis again refers to perturbations such as $(e^{-S_0} e^{i \sigma} \psi_1 \psi_2)^k$ with $k\geq 2$, which are allowed by symmetries but irrelevant in the renormalization group sense. In this case, it is not possible to consult the Monte-Carlo studies of noncompact lattice QED$_3$, because the effect of compactness is a relevant perturbation of the non-compact QED$_3$ dynamics. However, it is certain that, due to the topological $U(1)_{*}$ symmetry, the photon remains gapless. The strong coupling dynamics in the IR, combined with the existence of a relevant monopole operator, makes the determination of the long distance physics hard, and this is left as an open problem. To conclude, for $n_f \geq 2$, the combination of the topological symmetry and the irrelevance of operators which may lead to the breaking of the global symmetries not only protects the dual photon (scalar) from acquiring a mass, it also protects the fermions. The mechanism of gaplessness is different from the Nambu-Goldstone mechanism. In particular, it relies on unbroken symmetry. Protection of masslessness due to unbroken symmetry appeared previously in the context of strongly coupled gauge theories (see chapter 6 of \cite{Peskin:1982mu} for a review, and references therein). More recently, refinements and generalizations of this idea appeared in the condensed matter context as quantum order \cite{wen-2002-65, wen-2002-66}.
The appearance of quantum order in the Polyakov model with fermions is new, and is one of the main results of this work. \section{Topology of adjoint Higgs field and QCD*} \label{sec:top} There is a way to trick the Polyakov model with massless fermions, and get confinement! In particular, we will present gauge theories which reduce to (\ref{QED}) in perturbation theory, but are gapped non-perturbatively. \begin{FIGURE}[t] { \parbox[c]{\textwidth} { \begin{center} \includegraphics[width=6in]{topology.pdf} \caption{The vacuum expectation value of the noncompact versus compact adjoint Higgs field. Both lead to $SU(4) \rightarrow U(1)^3$ gauge symmetry breaking. On ${\mathbb R}^3$, when two nearest neighbor eigenvalues become degenerate, the gauge symmetry is partially restored and there is an associated elementary monopole $\bm \alpha_i$. The theory with compact field space topology has an extra elementary monopole, $\bm \alpha_4$, which ``moves in" from infinity as the compactification radius is reduced. When the $|a_1({\rm image}) -a_4|$ separation becomes comparable with the other eigenvalue separations, the extra monopole attains the same fugacity (or action) as the rest of the monopoles, and contributes equally to the dynamics. The fundamental distinction between the two models is more pronounced in the presence of massless fermions, as a continuous versus discrete topological symmetry. } \end{center} } \label{fig:compact} } \end{FIGURE} Let us consider the ``identical"-looking action (\ref{lagrangian}), but alter the topology of the field space into a compact one. Let $\Phi$ be a {\it compact} adjoint Higgs field, with a vacuum expectation value $\langle \Phi \rangle ={\rm Diag} (a_1, \ldots a_N)$. These eigenvalues live on a circle (rather than on a line), and $a_N$ is the nearest neighbor of $a_1$. (See figure \ref{fig:compact}).
Naturally enough, this vacuum expectation value will induce the very same gauge symmetry breaking as in the previous sections, $SU(N) \rightarrow U(1)^{N-1}$. However, due to the change in the topology of the field space, there will be an extra elementary monopole beyond the ones previously mentioned, $\{\bm \alpha_1, \ldots, \bm \alpha_{N-1}\} $. The extra monopole stems from the fact that the eigenvalues $a_N$ and $a_1$ are now nearest neighbors, and if they become degenerate in real space, that corresponds to the extra monopole with charge $\bm \alpha_N=-\sum_{i=1}^{N-1} \bm \alpha_i$. The $\bm \alpha_N$ is the affine root of the $SU(N)$ algebra. This monopole is on the same footing as the rest of the elementary monopoles; in particular, for $\langle \Phi \rangle$ backgrounds with cyclic ${\mathbb Z}_N$ symmetry, the extra monopole has the same action $ S_{0, N}=S_{0, i} =S_0$ as the rest. Leaving these secondary issues aside, let us pose the main question: What did we really change? First, we turned a perturbatively superrenormalizable theory (the case with non-compact adjoint Higgs) into a nonrenormalizable field theory. The latter is in need of a UV completion. Such UV completions indeed exist, and they are locally four dimensional QCD-like theories on small $S^1 \times {\mathbb R}^3 $. We assert that all the Yang-Mills compact adjoint Higgs theories, with or without fermions, on ${\mathbb R}^{1,2}$ have their UV completion in QCD-like gauge theories (with judiciously chosen matter content) on small $S^1\times {\mathbb R}^{1,2}$.
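The affine root relation $\bm\alpha_N = -\sum_{i=1}^{N-1}\bm\alpha_i$ can be verified explicitly in the standard embedding of the $SU(N)$ simple roots, $\bm\alpha_i = \bm e_i - \bm e_{i+1}$ in ${\mathbb R}^N$. The following sketch (illustrative, using $N=4$ as in the figure) also checks that the affine root has the same length as the simple roots, consistent with the extra monopole being on equal footing with the others:

```python
# Check the affine root relation alpha_N = -sum_i alpha_i for SU(N),
# using the standard embedding alpha_i = e_i - e_{i+1} in R^N.

N = 4  # illustrative, matching the SU(4) example of the figure

def simple_root(i, N):
    """i-th simple root of SU(N) in the R^N embedding (0-indexed)."""
    v = [0] * N
    v[i], v[i + 1] = 1, -1
    return v

simple = [simple_root(i, N) for i in range(N - 1)]
affine = [-sum(a[j] for a in simple) for j in range(N)]

# The affine root is e_N - e_1, i.e., it connects a_N back to a_1.
expected = [0] * N
expected[0], expected[-1] = -1, 1
assert affine == expected
# Same length^2 = 2 as the simple roots: the extra monopole is on
# equal footing with the N-1 elementary monopoles.
assert sum(x * x for x in affine) == 2
assert all(sum(x * x for x in a) == 2 for a in simple)
```

Geometrically, the affine root pairs the eigenvalues $a_N$ and $a_1$, which are nearest neighbors only on the circle; on the line this root, and with it the extra monopole, is absent.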
Here, however, without concerning ourselves with the UV completion, we will only state that the Yang-Mills {\it compact} Higgs system on ${\mathbb R}^3$ can be obtained by adding a center stabilizing deformation potential to the YM action in the small $S^1$ regime, \begin{eqnarray} S^{\rm YM^*} &=& S^{\rm YM} + \int_{{\mathbb R}^3 \times S^1} P[\Phi] \cr &=& \int_{{\mathbb R}^3 \times S^1} \Big[ \frac{1}{4 g^2} {\rm tr} F_{MN}^2 + \frac{1}{L^4} \sum_{n=1}^{[N/2]} a_n |{\rm tr} U^n({\bf x})|^2 \Big] \end{eqnarray} and considering the low energy dynamics of the resultant theory.\footnote{In condensed matter language, this center stabilizing double trace deformation may be viewed as a {\it frustration} of the Polyakov loop. Without the deformation, in the small $S^1$ regime of YM theory, $ \langle {\rm tr} U \rangle \neq 0$. At sufficiently large deformation, $\langle {\rm tr} U \rangle =0 $ even at arbitrarily small $S^1$. A perfect analogy in spin systems is an anti-ferromagnet which upon frustration becomes a paramagnet, as in (\ref{spin2}). For fruitful applications of this idea to YM and QCD, see \cite{Unsal:2008ch, Shifman:2008ja}. In QCD(adj) formulated on ${\mathbb R}^{2,1} \times S^1$, where $S^1$ is a spatial circle endowed with a periodic spin connection for fermions, these deformations are not necessary, because the quantum fluctuations (as opposed to thermal fluctuations, which are absent in this setup) prefer a center symmetric vacuum \cite{Unsal:2007vu, Unsal:2007jx}. This theory is the motivation behind the double trace deformations. } Here, $\Phi({\bf x}) \equiv A_{4}({\bf x}) $ is the reduction of the gauge field along the short direction, which is periodic by construction. If we label the holonomy along $S^1$ as $U({\bf x})= P e^{i \int_0^L A_4({\bf x}, x_4) dx_4} \approx e^{i L \Phi ({\bf x})} $, where the last relation holds for smooth fields, the resulting theory can be brought into the form (\ref{lagrangian}).
Here, $[N/2]$ is the integer part of $N/2$ for the $SU(N)$ gauge group. The deformation terms with sufficiently large coefficients $a_n$, where $n$ goes all the way to $[N/2]$, are necessary to achieve maximal gauge symmetry breaking. Similarly, in theories with fermions, this procedure will produce a QCD* theory, whose action is \begin{equation} S^{\rm QCD^*} = S^{\rm QCD} + \int_{{\mathbb R}^3 \times S^1} P[\Phi] \end{equation} The study of such deformations of YM and QCD-like theories is relatively recent. The main advantage of this construction is that some of these deformed theories are solvable in the same sense as the Polyakov model. For example, in YM*, the existence of the mass gap and linear confinement can be shown analytically \cite{Unsal:2008ch}, despite the theory being locally four dimensional. The relevance of this class of theories to our discussion is that, in perturbation theory, the long distance limits of these theories are indistinguishable from the appropriate Polyakov models, and reduce to the $[U(1)]^{N-1}$ QED$_3$ on ${\mathbb R}^3$. Thus, they constitute an alternative way to embed compact QED$_3$ into a continuum gauge theory, different from Polyakov's original constructions \cite{Polyakov:1976fu}. Remarkably, the non-perturbative aspects of some of these theories are the opposite of those of the Polyakov model with massless fermions. Their gauge sectors are gapped, as shown in the pattern (\ref{pattern2}), for moderate numbers of flavors. \subsection{Discrete topological symmetry and mass gap} How can such a ``small" change in the topology of the field space alter the IR properties so drastically? The simplest answer is, as always, through symmetries. As explained above, the compact adjoint Higgs theories descend from locally 4d QCD-like theories.
As is well known, there are chiral anomalies in locally $d=4$ dimensional theories, and since anomalies are a short distance property, they will clearly distinguish a theory whose base space is ${\mathbb R}^3$ from another one whose base space is secretly $S^1 \times {\mathbb R}^3$ (even if its Lagrangian is expressed on ${\mathbb R}^3$). Thus, the true symmetry structure of the P$({\cal R})$ theory must be different from that of QCD$({\cal R})$*. In, for example, one flavor theories, the $U(1)_A$ symmetry of the P$({\cal R})$ theories is replaced by the ${\mathbb Z}_{2h}$ discrete chiral symmetry of the locally four dimensional theory. Here, $h=1$ for a fundamental and $h=N$ for an adjoint fermion. The Callias index theorem is still valid in the formulation on small $S^1 \times {\mathbb R}^3$, and its precise relation to the Atiyah-Singer index theorem is well understood \cite{Shifman:2008ja}. In the presence of massless fermions, the ${\mathbb Z}_{2h}$, which is the discrete chiral symmetry of the microscopic theory, intertwines with the ${\mathbb Z}_{h}$ discrete subgroup of the $U(1)_{\rm flux}$, schematically as \begin{equation} \psi \rightarrow e^{i \frac{2 \pi}{2h}} \psi, \qquad \sigma \rightarrow \sigma - \frac{2 \pi}{h} \end{equation} such that the monopole operators (e.g. $e^{i \sigma} \psi \psi$) remain invariant. Clearly, this discrete symmetry does not forbid operators such as $e^{i h\sigma}$ from being generated, but forbids $e^{i h' \sigma}$ if $h' \neq 0 \; ({\rm mod}\; h)$. The topological $U(1)_{*}$ symmetries of P$({\cal R})$ theories reduce to a discrete symmetry in QCD$({\cal R})^*$, \begin{equation} \underbrace{ U(1)_{*}}_{{\rm non-compact\; Higgs \; or \; P}({\cal R})} \longrightarrow \underbrace{ ({\mathbb Z}_{h})_* }_{{ \rm compact \; Higgs \; or \; QCD}({\cal R})^* }\; .
\end{equation} We identified the fundamental distinction between non-compact and compact adjoint Higgs systems as a change in their microscopic, and consequently, topological symmetry: As emphasized in the discussion of P(${\cal R}$), the continuous $U(1)_{*}$ shift symmetry is able to prohibit a mass term for the dual photon. On the other hand, the $({\mathbb Z}_{h})_{*}$ symmetry, which is a { \bf discrete topological symmetry}, is incapable of forbidding a mass term for the dual photon.\footnote{ The discrete topological shift symmetry has a representation dependence. It is ${\mathbb Z}_N$ for adjoint, ${\mathbb Z}_{N+2}$ for symmetric, ${\mathbb Z}_{N-2}$ for anti-symmetric, and the trivial group ${\mathbb Z}_{1}$ for fundamental fermions. These are also valid for multi-flavor cases.} The discrete symmetries can at best postpone the emergence of the mass term in the $e^{-S_0}$ expansion \cite{Unsal:2007vu, Unsal:2007jx}, but can never forbid it. Thus, in the theory with a compact adjoint Higgs field, there is no {\it symmetry} reason for the photon to remain massless, and a mass term is generated. At this stage we are conceptually done. But in order to come full circle with the first paragraph of $\S$.\ref{sec:top}, which emphasized the topological structure of the field space, let us discuss an example, given in \cite{Shifman:2008ja}. \subsection{Application: QCD(F)* with $n_f=1$} Consider the analog of the gauge theory in $\S$.\ref{sec:F}, and let $2n_f=2$. Due to the change in the topology of the field space, there are two types of elementary monopoles. Their magnetic and topological charges $ \left( \int_{S^2} F, \; \int_{{\mathbb R}^3 \times S^1} F \widetilde F\right)$ are given by ${\cal M}_1: ( +1, \coeff 12), \; {\cal M}_2 :(-1, \coeff 12)$ for the monopoles, and $\overline{\cal M}_1: ( -1, -\coeff 12),\; \overline{\cal M}_2 :(+1, -\coeff 12)$ for the anti-monopoles.
The Callias index for the monopole with quantum numbers $( +1, \coeff 12)$ is one, and the index for the $(-1, \coeff 12)$ monopole is zero. Note that the product ${\cal M}_1 {\cal M}_2$ is the four dimensional instanton vertex, and the monopoles can be viewed as constituents of the 4d instanton. The zero modes localize onto one of the constituent monopoles following the ``Higgs regime'' criterion in the statement of Callias's theorem \cite{Callias:1977kg}. For a nice lattice realization of the localization property, see Bruckmann et al. \cite{Bruckmann:2003ag}. The monopole operators are \begin{eqnarray} && {\cal M}_1(x) = e^{-S_0} e^{i { \sigma}} \psi_1 \psi_2, \qquad {\cal M}_2(x) = e^{-S_0} e^{- i \sigma} \cr && \overline {\cal M}_1(x) = e^{-S_0} e^{- i \sigma} \bar \psi_1 \bar \psi_2, \qquad \overline {\cal M}_2(x) = e^{-S_0} e^{+ i \sigma} \label{monopole} \end{eqnarray} which only respect the $U(1)_V \times ({\mathbb Z}_2)_A$ symmetries of the microscopic QCD(F)* theory. Unlike the case with non-compact adjoint Higgs fields (\ref{PF}), the dynamics and symmetries of the compact Higgs theory admit a relevant monopole operator without a fermion zero mode insertion: \begin{eqnarray} \Delta L^{\rm non pert. } \sim e^{-S_0} \cos \sigma + e^{-S_0} \left( e^{i \sigma} \psi_1 \psi_2 + e^{- i \sigma} \bar \psi_1 \bar \psi_2 \right) \qquad \label{cQED2} \end{eqnarray} The mass for the dual scalar is $\sim e^{-S_0/2}$ and arises from the extra $ {\cal M}_2(x) + \overline{\cal M}_2(x) $ monopole effect $e^{-S_0} \cos \sigma$. This potential pins the scalar $\sigma$ at its minimum. Expanding $ {\cal M}_1(x) + \overline {\cal M}_1(x)$ at the minimum of $\sigma$ yields $e^{-S_0} ( \psi_1 \psi_2 + \bar \psi_1 \bar \psi_2)$, a mass for the fermion proportional to $e^{-S_0}$, much smaller than the photon mass.
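As a quick consistency check (our own rewriting of the statement above, not an additional result), one can act with the surviving discrete chiral symmetry, i.e. the $h=1$ case of the intertwined transformation given earlier, on the vertices (\ref{monopole}):

```latex
% Intertwined discrete symmetry for a fundamental fermion, h = 1:
%   \psi \to e^{i\pi} \psi , \qquad \sigma \to \sigma - 2\pi .
\mathcal{M}_1 = e^{-S_0} e^{i\sigma}\, \psi_1 \psi_2
\;\longrightarrow\;
e^{-S_0} e^{i(\sigma - 2\pi)} \big( e^{i\pi}\psi_1 \big)\big( e^{i\pi}\psi_2 \big)
= \mathcal{M}_1 \, ,
\qquad
\mathcal{M}_2 = e^{-S_0} e^{-i\sigma}
\;\longrightarrow\;
\mathcal{M}_2 \, .
```

Since the shift of $\sigma$ is by $2\pi$, i.e. trivial, the pure flux operator $\mathcal{M}_2$ is left completely unconstrained; this is precisely why it can appear in (\ref{cQED2}) and gap the dual photon.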
Thus, the dynamical pattern of the theory is \begin{eqnarray} SU(2) \underbrace{\longrightarrow}_{\rm Higgsing} U(1) \underbrace{\longrightarrow}_{\rm nonperturbative} {\rm no \; massless \; modes}\; \label{pattern3} \end{eqnarray} which is a special case of (\ref{pattern2}). For a fuller discussion of one-flavor QCD-like theories with two index representation fermions, we refer the reader to a joint work with M. Shifman \cite{Shifman:2008ja}. Note that the important conceptual distinction relative to the P(F) theory discussed in $\S$.\ref{sec:F} is the absence of the $U(1)_{*}$ symmetry in QCD(F)*. In P(F), $U(1)_{*}$ forbids the appearance of all the flux operators without fermion zero mode insertions, such as $(e^{-S_0} e^{i \sigma})^k$ for any integer $k$. In QCD(F)*, such operators are allowed by the symmetries. A consequence of the presence versus absence of a continuous topological symmetry is reflected in the interactions between topological excitations. In P(F) on Euclidean ${\mathbb R}^3$, the long distance interactions of monopoles with anti-monopoles are necessarily logarithmic, whereas in QCD(F)*, the ${\cal M}_1(x)$ and $\overline{\cal M}_1(y)$ interaction is logarithmic, but ${\cal M}_2(x)$ and $\overline{\cal M}_2(y)$ interact according to Coulomb's law, as can be seen by inspecting the leading connected correlator of the monopole operators: \begin{eqnarray} &&V_{1 \bar 1} (x-y)= - \log \langle {\cal M}_1(x) \overline{\cal M}_1(y) \rangle \sim 4 \log |x-y| - \frac{1}{|x-y|}, \cr &&V_{2 \bar 2} (x-y)= - \log \langle {\cal M}_2(x) \overline{\cal M}_2(y) \rangle \sim - \frac{1}{|x-y|} \, . \label{mmint} \end{eqnarray} The $ {\cal M}_2(x), \overline{\cal M}_2(y) $ type monopoles in QCD(F)* are sufficient to trigger the usual Debye mechanism, and generate a mass gap for the dual photon.
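The asymptotics in (\ref{mmint}) can be read off from the free long distance propagators; the following is a schematic bookkeeping (with a positive order-one constant $c$ inserted by hand, absorbing canonical normalizations):

```latex
% d = 3 building blocks:
%   fermion:      \langle \psi(x) \bar\psi(y) \rangle \sim |x-y|^{-2} ,
%   dual photon:  \langle e^{i\sigma(x)} e^{-i\sigma(y)} \rangle \sim e^{\,c/|x-y|} .
\langle \mathcal{M}_1(x) \overline{\mathcal{M}}_1(y) \rangle
\sim
\underbrace{\frac{1}{|x-y|^{2}} \cdot \frac{1}{|x-y|^{2}}}_{\text{two zero-mode propagators}}
\; e^{\,c/|x-y|}
\;\;\Longrightarrow\;\;
V_{1\bar 1} \sim 4 \log |x-y| - \frac{c}{|x-y|} \, .
```

The four-fermion structure of $\mathcal{M}_1\overline{\mathcal{M}}_1$ is what converts the Coulomb interaction into a logarithm, while $\mathcal{M}_2\overline{\mathcal{M}}_2$, carrying no zero modes, retains only the Coulomb piece.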
\subsection{Remark on accidental continuous topological symmetry} \label{accidental} Evidently, the presence of a discrete $({\mathbb Z}_{h})_{*}$ topological symmetry is a necessary criterion for the presence of a mass gap in the gauge sector. If a mass term for the dual photon is not forbidden by a symmetry, it will surely be generated. However, it is also possible that a term allowed by all the symmetries may be irrelevant in the renormalization group sense. Thus, the presence of the discrete topological symmetry is not sufficient to conclude that the theory has a mass gap and confines. Consider QCD(F)*, a theory defined on small $S^1 \times {\mathbb R}^3$ by construction, as a function of the number of flavors. Assume the number of flavors is large, but not very large, so that the four dimensional coupling at the compactification scale remains small. Indeed, a monopole operator is allowed, and hence is generated. However, a monopole operator may become irrelevant if there is a sufficiently large number of flavors. The classical scaling dimension of the monopole fugacity is $+3$. The presence of the massless fermions alters the quantum scaling dimension of the monopole operator in a significant way for large numbers of flavors $(\sim n_f)$, as shown in \cite{Borokhov:2002ib}. The continuum analysis for such QCD(F)* mimics the analysis of Hermele et al. for lattice QED$_3$ at large $n_f$ \cite{hermele-2004-70}. In both cases, the monopole operator does scale down to zero at long distances \cite{hermele-2004-70} due to its large scaling dimension, showing the irrelevance of monopoles and the emergence of an accidental $U(1)_{\rm flux}$ symmetry associated with the conservation of gauge flux. Strictly speaking, there are some important differences between lattice QED$_3$ and QCD(F)*, to be explained after the discussion of spin liquids. However, those are immaterial for the above argument.
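The renormalization group statement underlying this remark can be summarized in one line (a sketch with an order-one constant $c$; the precise quantum dimension $\Delta_{\rm mon}$ is computed in \cite{Borokhov:2002ib}):

```latex
% A monopole operator of quantum scaling dimension \Delta_{\rm mon} in d = 3
% is irrelevant iff \Delta_{\rm mon} > 3.  Its dimensionless fugacity flows as
\hat\zeta(\mu) \sim \hat\zeta(\Lambda)
\left( \frac{\mu}{\Lambda} \right)^{\Delta_{\rm mon} - 3}
\;\xrightarrow{\;\mu \to 0\;}\; 0
\qquad \text{for} \qquad \Delta_{\rm mon} \simeq c \, n_f > 3 \, .
```

For small $n_f$ the fugacity grows towards the infrared and gaps the dual photon; beyond a critical $n_f$ it scales to zero, and the accidental $U(1)_{\rm flux}$ symmetry emerges.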
Thus, in the QCD(F)* theory, there must be a critical window for the number of flavors in which the theory is a three dimensional interacting conformal field theory. It is desirable to understand the relation between these fixed points and the perturbative Banks-Zaks fixed point \cite{Banks:1981nn}. Plausibly, they may be smoothly connected within QCD(F)*. \section{Compact lattice QED$_3$ and $U(1)$ spin liquids} We have arrived at a very interesting situation. There are at least two ways to obtain ``compact QED with fermions'' in $d=3$ dimensions by using {\bf continuum} field theories. We referred to the theories with a non-compact adjoint Higgs field as P($\cal R$) and the ones with compact adjoint Higgs fields as QCD($\cal R$)*. In both cases, the resultant QED$_3$ is compact by necessity, because both are realized via gauge symmetry breaking down to the (compact) maximal torus $ [U(1)]^{N-1} $. It is well-known that pure compact QED$_3$ confines even at arbitrarily weak coupling. A controversial question is what happens to confinement if one introduces massless fermions. This question is of practical importance in the context of the stability of the $U(1)$ spin liquids in two dimensions, a phase which {\it may be} a neighbor of the $d$-wave superconducting phase in cuprates. \footnote{It is not certain that spin liquids play a role in cuprates. However, the question of whether doping a spin liquid by charge generates a $d$-wave superconductor is sensible and interesting, and its answer may give insights into the structure of the pseudo-gap regime. } Regardless of the relevance of spin liquids for cuprates, the stability of the spin liquid is associated with the concept of fractionalization, which does not arise in any naive way from a collection of electrons, but which may exist due to strong-correlation physics. Therefore, this is a conceptually interesting and experimentally relevant question.
Refs.\cite{sachdev-2002-298, herbut-2003-91,herbut-2003-68} argued that the monopole effects always render the $U(1)$ spin liquids unstable. Ref.\cite{hermele-2004-70} showed that there are at least some stable spin liquids, with gapless fermions and $U(1)$ gauge fluctuations. These works refer to a particular ``3d lattice QED with massless fermions'', with a specific set of microscopic symmetries (sometimes called the projective symmetry group (PSG)). In the large $n_f$ limit, Ref.\cite{hermele-2004-70} shows, by relying on the microscopic symmetries of the lattice theory and a sophisticated RG analysis which addresses the light electric and magnetic degrees of freedom simultaneously, that there are no relevant perturbative or non-perturbative instabilities which may render the photon and fermions massive. Our work shows that compact QED$_3$ with fermions may arise in at least two different ways, as in (\ref{pattern1}) and (\ref{pattern2}), via a non-compact versus a compact adjoint Higgs field. (Moreover, it can also arise from a compact lattice formulation.) The change in the topological structure of the field space produces drastically distinct physics in the IR, gapless versus gapped gauge sectors in some cases. Thus, the question of the presence or absence of a deconfined phase in compact QED$_3$ in the {\it continuum} formulation is an {\bf ill-defined} question unless one states the symmetries of the cut-off scale (microscopic) theories clearly. (The importance of symmetries is also emphasized in the lattice formulation of Ref.\cite{hermele-2004-70}.) The analysis of Ref.\cite{hermele-2004-70} carefully incorporates all possible symmetry singlet operators that can be generated perturbatively, or nonperturbatively via flux (monopole) operators, in a continuum language, by remaining loyal to the symmetries of the microscopic theory.
This is a basic principle in any effective field theory construction, as stated in $\S$.\ref{setup}, either in the continuum limit of a lattice gauge theory or in the long distance description of a gauge theory in which the gauge structure changes over length scales. By a careful renormalization group analysis, Ref.\cite{hermele-2004-70} shows that in the large $n_f$ limit, the quantum effects turn the monopole operator, which has engineering dimension $ +3 $, into an irrelevant operator. The essence of this argument is that at the IR fixed point, the quantum scaling dimension of the monopole operator is large $\sim n_f$ \cite{Borokhov:2002ib} and forces the monopole operator to scale down to zero at long distances. The irrelevance of monopoles is equivalent to the conservation of magnetic flux, and there is an emergent topological $U(1)_{\rm flux}$ symmetry which characterizes the deconfined nature of this fixed point. (For the details, see Ref.\cite{hermele-2004-70}.) In our analysis of the continuum QED$_3$ which descends from the Polyakov model P(F), we did not need such a renormalization group analysis to show the irrelevance of flux (monopole) operators such as $e^{-S_0} e^{i q \sigma}$ with $q\geq 1$, because they are forbidden to begin with, due to the $U(1)_{*}$ topological symmetry. Since this symmetry is independent of the rank $N$ and the number of flavors $ n_f $, the assertion that the P(F) theory is always in the deconfined phase did not require a large $n_f$ limit either. In the next sections, we will discuss whether the P(F) theory or the QCD(F)* theory has anything to do with the $U(1)$ compact lattice QED$_3$ with massless fermions. The lattice theory of interest is the one which arises in the $SU(n_f)$ spin systems, which we review next. \subsection{From $SU(n_f)$ quantum spin model to lattice QED$_3$} It is useful to briefly review the route from the spin models to lattice QED$_3$ with massless fermions, and to identify the symmetries carefully.
\footnote{This section is a review of known results in quantum spin systems; see \cite{lee-2004}, and references therein. } The Hamiltonian of a $d=2$ dimensional spin model on a square lattice is given by \begin{eqnarray} H= && J \sum_{\langle \bm r, \bm r' \rangle} {\rm tr} \left[ {\bm S}({\bm r}). {\bm S}({\bm r'}) \right] + \ldots \cr \equiv && J \sum_{a=1}^{\rm dim(adj)} \sum_{\langle \bm r, \bm r' \rangle} S_{\bm r}^a S_{\bm r'}^a + \ldots \label{spin} \end{eqnarray} where $ J >0 $ is the antiferromagnetic exchange, and the ellipsis denotes higher order terms which may facilitate the frustration of the magnetic order. Such terms may be due to geometric frustration or some other microscopic mechanism. Here, $\bm r, \bm r' $ are points on a two dimensional (square) lattice and ${\langle \bm r, \bm r' \rangle}$ indicates the nearest neighbor interactions. The Hamiltonian has a global $SU(n_f)_D$ spin rotation symmetry group acting by conjugation \begin{equation} {\bm S}({\bm r}) \rightarrow U {\bm S}({\bm r})U^{\dagger}, \qquad U \in SU(n_f)_D \; . \label{globalSUnf} \end{equation} The subscript $D$ stands for diagonal, for reasons to be explained in $\S$ \ref{sec:lat}. The ellipsis terms are assumed to be singlets under the $SU(n_f)_D$ symmetry and the other symmetries of the lattice. The description of an ordered phase in terms of the mean field approximation is well known. A more non-trivial aspect in higher dimensional systems is whether the mean field approach can be usefully applied to a phase which refuses to order. The answer to this question is relatively recent \cite{PhysRevB.37.580, PhysRevB.37.3774}, and eventually leads to the emergence of gauge structure (and 2+1 dimensional gauge theories) in spin systems in two spatial dimensions. A microscopic Hamiltonian which may have a non-magnetic ground state is a double-trace deformation of (\ref{spin}) \begin{eqnarray} H= && \sum_{\langle \bm r, \bm r' \rangle} \left[ J {\rm tr} \left[ {\bm S}({\bm r}).
{\bm S}({\bm r'}) \right] + \frac{J'}{n_f} ({\rm tr} \left[ {\bm S}({\bm r}). {\bm S}({\bm r'}) \right] )^2 \right] \label{spin2} \end{eqnarray} For sufficiently large positive $J'$, despite the leading anti-ferromagnetic term, no long range magnetic order will appear. The double trace deformation is the same as a frustration of the spin order parameter. To proceed, the local spin operators $ {\bm S}_{\bm r}$ are expressed as local composites of the fermionic spinon operators $f_{\bm r, \beta}$ \begin{equation} S_{\bm r}^{a} = f^{\dagger}_{\bm r, \alpha} T^a_{\alpha \beta} f_{\bm r, \beta}, \qquad {\rm or } \;\; {\bm S}_{\alpha \beta} = (S_{\bm r}^{a}T^a)_{\alpha \beta} = f^{\dagger}_{\bm r, \alpha} f_{\bm r, \beta} - \frac{1}{2n_f} \delta_{\alpha \beta} \label{spinon} \end{equation} Supplemented with the constraint that each site must have occupation number $n_f/2$ (with $n_f$ even), \begin{equation} \sum_{\alpha=1}^{n_f} f^{\dagger}_{\bm r, \alpha} f_{\bm r, \alpha} =n_f/2 \,, \label{constraint} \end{equation} this is an exact description of the original spin Hamiltonian. This procedure of breaking the spin into fermionic spinons is called slave fermion mean field theory, and (\ref{spinon}) should be viewed as the definition of the lattice spinons, $f_{\bm r, \alpha}$. The spinons obey canonical anti-commutation relations, $\{ f_{\bm r, \alpha}, f^{\dagger}_{\bm r', \alpha'} \} = \delta_{\bm r \bm r'}\, \delta_{\alpha \alpha'}$, with all other anti-commutators vanishing. Clearly, the Hilbert space of the theory without the constraint is vastly larger. There is an apparent gauge redundancy $ f_{\bm r, \alpha} \longrightarrow e^{i \theta({\bm r}) } f_{\bm r, \alpha}$ built into the definition of the spinon operator. The local constraint (\ref{constraint}) guarantees that the quartic Hamiltonian in terms of the spinon operators is the same as the original Hamiltonian in terms of the spin operators.
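The algebra behind (\ref{spinon}) can be checked numerically. The following minimal sketch (our own illustration, not from the spin liquid literature; all names are ours) builds two Jordan-Wigner fermion modes on a single site for $n_f=2$ and verifies that $S^a = f^{\dagger}_{\alpha} T^a_{\alpha \beta} f_{\beta}$ reproduces the $su(2)$ spin algebra $[S^a, S^b] = i \epsilon_{abc} S^c$:

```python
import numpy as np
from functools import reduce

# Pauli matrices and the fermionic lowering operator on a single mode.
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sy = np.array([[0., -1j], [1j, 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
sm = np.array([[0., 1.], [0., 0.]], dtype=complex)  # lowering operator
I2 = np.eye(2, dtype=complex)

# Two fermionic modes (flavor alpha = 1, 2) on one site, with a
# Jordan-Wigner string so that modes anticommute with each other.
f = [np.kron(sm, I2), np.kron(sz, sm)]

# su(2) generators T^a = sigma^a / 2 acting on the flavor index.
T = [sx / 2, sy / 2, sz / 2]

def spin(a):
    """S^a = sum_{alpha,beta} f†_alpha T^a_{alpha beta} f_beta."""
    return sum(T[a][i, j] * f[i].conj().T @ f[j]
               for i in range(2) for j in range(2))

def comm(A, B):
    return A @ B - B @ A

def acomm(A, B):
    return A @ B + B @ A

# Canonical anticommutators {f_alpha, f†_beta} = delta_{alpha beta}.
for i in range(2):
    for j in range(2):
        target = np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(acomm(f[i], f[j].conj().T), target)

# [S^x, S^y] = i S^z and cyclic permutations.
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(spin(a), spin(b)), 1j * spin(c))

print("spinon bilinears reproduce the su(2) spin algebra")
```

The same construction extends to larger $n_f$ by adding more Jordan-Wigner modes; the commutator check is insensitive to the occupation constraint, which only selects the physical subspace.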
Exploiting the gauge redundancy provides the connection between purely bosonic spin models and lattice theories with gauge fluctuations and fermions. The spin Hamiltonian (\ref{spin}) in terms of the spinon operators is quartic. The $U(1)$ lattice QED$_3$ arises in describing the fluctuations of this system around the $\pi$-flux ($\pi$F) and the staggered flux (sF) states \cite{PhysRevB.37.3774}. Here, we only review the $\pi$-flux state. Let a mean field ansatz be denoted by \begin{equation} \overline \chi_{\bm r \bm r'}= \langle f^{\dagger}_{\alpha}(\bm r) f_{\alpha}(\bm r') \rangle \; . \end{equation} The $\pi$-flux state is the configuration of $\overline \chi$ with flux $\pi$ through each plaquette on the square lattice, \begin{equation} \prod_{\partial p} \overline \chi [\partial p ]=e^{i \pi} =-1 \end{equation} where $p$ denotes an elementary plaquette and $\partial p $ is its oriented boundary. It is clear that $\chi_{\bm r \bm r'}$ transforms gauge covariantly, as a connection on the lattice. For low energy considerations, only the phase fluctuations of the ansatz are important. Hence, the term in the Hamiltonian incorporating the fluctuations, the spinon hopping term, takes the form \begin{equation} H \sim J \sum_{\langle \bm r, \bm r' \rangle} \bar \chi_{\bf r' \bf r} f^{\dagger}_{\bm r, \alpha} e^{i a_{\bm r, \bm r'}} f_{\bm r', \alpha} + \rm h.c. \label{latticeQED3} \end{equation} which is the fermionic term of lattice QED$_3$ \cite{PhysRevB.37.3774}. Even though the Maxwell term is not present above, it will be produced by the renormalization group, when one integrates out a thin momentum-shell of fermions. Hence, we can add it to the above Hamiltonian.
\footnote{ The reader familiar with staggered fermions (or Kogut-Susskind fermions) in lattice QCD will realize immediately that the spinons are the analogs of the staggered fermions \cite{Kogut:1974ag}, and by construction, we are guaranteed to get a relativistic dispersion relation, and Lorentz invariance (in a naive continuum limit). The Dirac algebra and spinors of the continuum theory translate into the $\pi$-flux relation and Grassmann valued operators in the (reverse) Kogut-Susskind construction.} The resulting theory is the {\it compact} lattice QED$_3$ theory with minimally coupled fermionic matter. QED$_3$ also appears in the more phenomenological proposal of Franz et al. \cite{franz-2001-87, franz-2002-66} and \cite{herbut-2002-66} within the phase fluctuation model, in order to describe the pseudo-gap region of cuprate superconductors. The relation between this approach and the more microscopic spin liquid approach to the underdoped cuprate superconductors, and in particular, a relation between the lattice spinons and nodal quasi-particles, is currently not clear. \subsection{Reverse engineering of lattice spinons and twisting} \label{sec:lat} It is useful to understand the relation between the symmetries of the compact lattice QED$_3$ (\ref{latticeQED3}) and the continuum QED$_3$ with Lagrangian (\ref{QED}). In particular, considering the important role played by the $U(1)_{A}$ symmetry and the Callias index theorem in the Polyakov model, it is desirable to understand whether analogs of these may arise in the lattice formulations. For ease of presentation, we relabel the $2n_f$ fermionic continuum fields as \begin{eqnarray} \{ \psi_{1, a} \; ,\; \bar \psi_{2,a} \} \rightarrow \{ \lambda_{1, a} \; , \; \lambda_{2,a} \} \equiv \{ \lambda_{1}, \ \lambda_{2}, \ldots, \lambda_{2n_f} \}, \qquad a=1, \ldots , n_f \end{eqnarray} where the Lorentz indices are suppressed.
The continuum Lagrangian in terms of the $\lambda$ fields reads \begin{eqnarray} {\cal L} = \frac{1}{4 g_3^2} F^{ 2}_{\mu \nu} \; + \; \sum_{b=1}^{2n_f} \>\> i \bar \lambda^b \sigma_{\mu} (\partial_{\mu} + i A_{\mu}) \lambda_b \label{conQED} \end{eqnarray} The continuum theory has a $U(1)_V \times SU(2n_f)$ global symmetry, where $U(1)_V$ is the global part of the gauge symmetry and $SU(2n_f)$ is a global flavor symmetry. In the Polyakov model embedding or ``regularization'' of the compact version of this theory, only \begin{equation} U(1)_V \times SU(n_f)_1 \times SU(n_f)_2 \times U(1)_A \; \subset\; U(1)_V \times SU(2n_f) \end{equation} is present, where we loosely view the inverse $W$-boson mass as the lattice spacing. Let us now reverse engineer the lattice QED$_3$ theory starting with the continuum formulation. This will be useful in understanding what the lattice symmetries mean in the continuum, and will ease the comparison with Polyakov's model. Consider the continuum QED$_3$ theory in the Hamiltonian formulation on ${\mathbb R}^{1,2}$ and latticize ${\mathbb R}^2$. Let us consider the $SU(2) \times SU(n_f)_D $ subgroup of the $SU(2n_f)$ flavor symmetry. Since we are in the Hamiltonian formulation, we split the Lorentz symmetry into $SO(2)$ and continuous time translations. The fermions are in the two dimensional spinor representation of $SO(2)$, the two dimensional spinor representation of $ SU(2)$, and the fundamental representation of $SU(n_f)_D$. Now, we wish to discuss a well defined procedure, called {\it twisting}, which intertwines the Lorentz and flavor symmetries such that the continuum spinors are mapped into Grassmann valued operators residing on the lattice sites (the lattice spinons). In spin systems, $SU(n_f)_D$ corresponds to the global rotation symmetry (\ref{globalSUnf}) of the spin. It is also the diagonal subgroup of the $SU(n_f)_1 \times SU(n_f)_2$ decomposition which appeared in the Polyakov model.
The $SU(n_f)_D$ will have no impact on our discussion, so we suppress it. The fermion $\lambda_{ \alpha, a }$ transforms as $\lambda \rightarrow O \lambda U^{\dagger}$ under $ O \in SO(2)_L$ and $U \in SU(2)$ flavor. We can write every two by two matrix such as $\lambda_{\alpha, a}$ in a basis spanned by the identity and the Pauli matrices $(1, \sigma_{\mu}, \sigma_{\mu \nu}= i \coeff 12 \epsilon_{\mu \nu} \sigma_{\mu}\sigma_{\nu} )$. Thus, \begin{equation} \lambda_{\alpha, a} = (f 1 + f_{\mu} \sigma_{\mu} + \coeff 12 f_{\mu \nu} \sigma_{\mu \nu})_{ \alpha,a} \qquad \alpha=1,2, \; \; a=1, 2 \label{form} \end{equation} This is to say that under the diagonal $SO(2)_D = {\rm Diag} (SO(2)_L \times SU(2)) $ subgroup, the spinor becomes a collection of $p$-forms: one scalar, one vector, and one antisymmetric two form tensor, which we label as $(f, f_{\mu}, f_{\mu \nu}) $. On the lattice, a $p$-form is naturally associated with a $p$-cell: a zero form with sites, a one form with links, and a two form with faces. This twist is also sometimes referred to as the ``Dirac-K\"ahler'' construction in lattice gauge theory \footnote{This type of decomposition is one of the cornerstones of the recent progress in supersymmetric lattices; see, for example, \cite{Kaplan:2003uh, Catterall:2005eh}.} and is known to be equivalent to staggered fermions. We can map these fermions onto a lattice with half the spacing. The mapping takes the single component Grassmanns $f, f_1, f_2, f_{12}$ onto the sites $(0,0), (1, 0) , (0,1), (1,1)$ in a unit cell, respectively. The new lattice repeats itself with periods $(2,0)$ and $(0,2)$ in the $x$ and $y$ directions. The twisting procedure is the reverse engineering of the appendix of Ref.\cite{hermele-2004-70}.
To see this, rewrite (\ref{form}) in the component language: \begin{equation} (\lambda_{ \alpha, a}) = \left( \begin{array}{cc} f+ f_{12} & f_1+ if_2 \cr f_1- if_2 & f- f_{12} \end{array} \right) \label{form2} \end{equation} This is indeed the relation between the lattice spinons and continuum spinors given in Ref.\cite{hermele-2004-70}, modulo a minor renaming of the fields. \footnote{This is also the reason why fields that transform in a single valued representation of the lattice point group symmetry map into the double valued spinor representations under the continuum Lorentz symmetry. This clearly does not make any sense without the twisting idea, which mixes the Lorentz symmetry and some global symmetry. This is in fact a recurring and fruitful theme in diverse fields of theoretical physics. It appeared initially in staggered (Kogut-Susskind) fermions \cite{Kogut:1974ag}, and most recently in supersymmetric lattice constructions \cite{Kaplan:2003uh, Catterall:2005eh}. It also arises naturally in ``topologically'' twisted versions of the supersymmetric theories, where under the diagonal subgroup of space-time and some flavor symmetry, the spinors decompose as $p$-forms, single valued representations \cite{Witten:1988ze}. Apparently, such structures are also ubiquitous in spin systems, in particular, the $\pi$-flux and staggered flux states \cite{PhysRevB.37.3774}. } The discrete rotational symmetries of the QED$_3$ lattice action discussed in Ref.\cite{hermele-2004-70} are in fact a subgroup $ G_{\rm discrete} \subset SO(2)_D = {\rm Diag} (SO(2)_L \times SU(2)) $. In the continuum, when $SO(2)_D$ is restored, one can always undo the twist. This reverse procedure gives the so-called emergent flavor $SU(2)$ subgroup of $SU(2n_f)$ for free.
To summarize, the compact lattice QED$_3$ possesses \begin{equation} {\cal G}_{{\rm QED}_3 } \sim G_{\rm discrete} \times C \times P \times T \times U(1)_V \times SU(n_f)_D \label{match1} \end{equation} This is indeed the symmetry structure of the spin system in the gauge theory formulation in the $\pi$-flux state. This needs to be compared with the much larger microscopic symmetry (\ref{master1}) of the P(F) theory. The analog of the $U(1)_A$ symmetry in the P(F) theory is part of the $SU(2) \subset SU(2n_f)$ symmetry in the continuum of QED$_3$. Unfortunately, in the $\pi$-flux state of the spin system, and in the specific lattice regularization described above, the continuous $U(1)_A$ does not survive at the cut-off scale. Only a discrete subgroup of it is hidden in $G_{\rm discrete}$. However, $G_{\rm discrete}$ is practically useless (like any other discrete symmetry) for forbidding generic flux operators in lattice QED$_3$. This is the significant difference between the P(F) theory, the QCD(F)* theory, and lattice QED$_3$. The P(F) theory has the $U(1)_A$ symmetry at short distances, and this transmutes into a continuous topological symmetry in the IR, preventing a mass term for the photon for any number of flavors. In QCD(F)*, the short distance theory only has a ${\mathbb Z}_{2}$ discrete chiral symmetry, which again transmutes into a trivial $({\mathbb Z}_{1})_{*}$ topological symmetry, which cannot prohibit a mass term for the dual photon. For small numbers of flavors, the theory exhibits a mass gap in the gauge sector. At sufficiently large numbers of flavors, an accidental $U(1)_{*}$ may arise, as discussed in $\S$.\ref{accidental}. In the lattice versus continuum QED$_3$, the critical target theory has a $U(1)_A$ symmetry embedded into $SU(2n_f)$ for {\it any} $n_f$. However, the lattice Hamiltonian does not respect it. This makes this problem different from, and harder than, the previous two problems that we have discussed.
\subsection{The emergent topological $U(1)_{*}$ symmetry} It is not {\it a priori} clear whether there is a relation between the P(F) theory and the lattice QED$_3$ studied in Ref.\cite{hermele-2004-70}. Clearly, the continuum P(F) theory is a theory with a scalar and with a larger set of symmetries than the lattice QED$_3$. However, the infrared physics of these two theories seems to coincide, at least in the large $n_f$ limit. It is in principle plausible that different microscopic theories may flow to the same theory in their long distance limits. In our opinion, the most important physical issue is associated with the topological $U(1)_{*}$ symmetry. In P(F), the origin of $U(1)_{*}$ is clear. It is a natural consequence of the $U(1)_A$ symmetry combined with the Callias index theorem. In large $n_f$ lattice QED$_3$, the $U(1)_{*}$ symmetry is referred to as an emergent topological symmetry of the IR theory \cite{hermele-2004-70}. The reason it may be considered emergent is twofold: One is that the analog of $U(1)_A$ is not present in the spin system and the resulting lattice QED$_3$. The second is that the analog of the Callias index theorem for lattice QED$_3$ does not exist, as shown by Marston \cite{Marston:1990bj}. The result of Ref.~\cite{Marston:1990bj} looks discouraging, as stated in \cite{kim-1999-272}. However, the more severe issue is the absence of the $U(1)_A$ symmetry in lattice QED$_3$, or the spin system. Below we will prove the following assertion: If $U(1)_A$ is a symmetry of the cut-off (lattice) QED$_3$ theory, then despite the absence of the Callias index theorem, the topological $U(1)_{*}$ symmetry will emerge at long distances even at {\it small} $n_f$. Let us see how this works. The result of Ref.~\cite{Marston:1990bj} does not tell us that monopole-multifermion type operators are excluded. It only states that in a monopole operator of the form $e^{i \sigma} O_{\rm fermion}$, the structure of $O_{\rm fermion}$ is not dictated by an index theorem.
$O_{\rm fermion}$ may be $\{1, \; (2 \; {\rm fermions}),\; (4 \; {\rm fermions}), \ldots \}$, a plethora of (even) numbers of fermion insertions allowed by the other symmetries of the lattice. Let us list a set of operators which may be induced nonperturbatively \begin{equation} \{ e^{i \sigma}, e^{i \sigma} \lambda_{1,a} \bar \lambda_{2,a}, \;\; e^{i \sigma} (\lambda_{1,a} \bar \lambda_{2,a} )^2, \ldots, e^{2 i \sigma}, \; e^{2 i \sigma} \lambda_{1,a} \bar \lambda_{2,a}, \; e^{2 i \sigma} (\lambda_{1,a} \bar \lambda_{2,a})^2, \ldots \} \label{fluxop} \end{equation} where we suppress the Lorentz indices. This is the set of monopole operators and the composites of monopoles with the fermion fields. By assumption, the $U(1)_A$, under which $\lambda_{1,a} \bar \lambda_{2,a} \rightarrow e^{2i \beta} \; \lambda_{1,a} \bar \lambda_{2,a} $, is a symmetry of the cut-off theory. Our goal here is to show that the absence of an index theorem by itself does not imply that the continuous $U(1)_A$ symmetry cannot be transmuted into a shift symmetry of the dual photon. Let $e^{i \sigma} (\lambda_{1,a} \bar \lambda_{2,a})^{q} $ be the lowest dimensional flux operator with multiple fermion insertions allowed by the lattice symmetries. Since $U(1)_A$ is a symmetry of the cut-off theory, it must be a symmetry of the long distance theory. As before, this can be accomplished by intertwining $U(1)_A$ with $U(1)_{\rm flux}$, the shift symmetry of the dual photon, in the infrared. The invariance of $e^{i \sigma} (\lambda_{1,a} \bar \lambda_{2,a})^q$ under $U(1)_A$ demands that the dual photon have the shift symmetry $\sigma \rightarrow \sigma- 2 q \beta$. Thus, reconciling the $U(1)_A$ symmetry with the long distance physics forbids any operator in the list except $[e^{i \sigma} (\lambda_{1,a} \bar \lambda_{2,a})^q]^k$. Most importantly, it forbids the monopole operator $e^{i \sigma} $ and other pure flux operators such as $e^{2 i \sigma} $, regardless of the value of $q \geq 1$.
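The selection rule just stated can be made explicit in one line (our own rewriting of the argument above, with $n$ and $m$ labeling a generic entry of (\ref{fluxop})):

```latex
% Under the intertwined transformation
%   \lambda_{1,a}\bar\lambda_{2,a} \to e^{2i\beta}\,\lambda_{1,a}\bar\lambda_{2,a},
%   \sigma \to \sigma - 2 q \beta ,
% a generic flux operator picks up the phase
e^{i n \sigma} \big( \lambda_{1,a}\bar\lambda_{2,a} \big)^{m}
\;\longrightarrow\;
e^{\,2 i (m - q n)\beta}\; e^{i n \sigma} \big( \lambda_{1,a}\bar\lambda_{2,a} \big)^{m} \, ,
```

so invariance requires $m = qn$: the pure flux operators ($m=0$, $n \neq 0$) are excluded, while $[e^{i\sigma}(\lambda_{1,a}\bar\lambda_{2,a})^{q}]^{k}$ survives for any integer $k$, in agreement with the statement above.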
This implies that relevant monopole operators (which render the photon massive) may be forbidden by the {\it accidental} $U(1)_{*}$ pure flux forbidding symmetry even at small $n_f$ if the cut-off theory has the $U(1)_A$ symmetry. Unfortunately, the spin system does not have the analog of $U(1)_A$ symmetry. The flux operators such as $e^{i \sigma}$ which are not forbidden by symmetry will be generated. Under the given circumstances, the {\it only} way that such operators will not generate a mass for the dual photon is if they are irrelevant at long distances in the renormalization group sense. We reach the conclusion that, for spin systems in a $\pi$-flux phase, unlike the P(F) theories, the renormalization group and large $n_f$ analysis are unavoidable \cite{hermele-2004-70}. \section{Conclusions and prospects} \begin{table}[htdp] \begin{center} \begin{tabular}{| p{2.5cm} |p{4cm}|p{2cm}|c|p{2cm}|} \hline Theory & Description & Topological symmetry, microscopic precursor & Gauge sector & Long \qquad distances \\ \hline P \cite{Polyakov:1976fu} & noncompact $\Phi$, & none, none & gapped & confined \\ \hline P(adj) \cite{Affleck:1982as} & noncompact $\Phi$, real ${\cal R}$, complex fermions & $U(1)_{*}, U(1)_A$ & gapless & deconfined, free photon \\ \hline P(F) & noncompact $\Phi$, complex ${\cal R}$, complex fermions & $U(1)_{*}, U(1)_A$ & gapless & deconfined, CFT \\ \hline YM* \cite{Unsal:2008ch}& compact $\Phi$ & none, none & gapped & confined \\ \hline QCD(adj)* \cite{Unsal:2007jx} & compact $\Phi$, real ${\cal R}$, complex fermions, $n_f$ small & $({\mathbb Z}_{N})_{*}$, ${\mathbb Z}_{2N}$ axial & gapped & confined \\ \hline QCD(F)* \cite{Shifman:2008ja} & compact $\Phi$, complex ${\cal R}$, complex fermions, $n_f$ small & none, ${\mathbb Z}_2$ & gapped & confined \\ \hline compact lattice QED$_3$ \cite{Polyakov:1976fu} & compact gauge fluctuations & none, none & gapped & confined \\ \hline compact lattice QED$_3$ with fermions \cite{hermele-2004-70} &
complex ${\cal R}$, complex fermions, $N_f \gg 1$ & emergent $U(1)_{*}$, none & gapless & deconfined, CFT \\ \hline \end{tabular} \end{center} \caption{The role of topological symmetry in the determination of the deconfined/confined long distance behavior. It is worth emphasizing that all the theories in the list have magnetic monopoles in a semi-classically tractable regime. Thus, the presence or absence of the magnetic monopoles does not tell us much about the infrared property of the theory. A more refined characterization is through the topological symmetry.} \label{default} \end{table}% {\bf Topological symmetry and classification of gauge theories:} In this paper, we discussed a large class of gauge theories formulated on ${\mathbb R}^3$ and $S^1 \times {\mathbb R}^3$ whose long distance gauge structure is described by abelian $U(1)^{N-1}$. Examples are $SU(N)$ continuum P(${\cal R}$) on ${\mathbb R}^3$, $SU(N)$ continuum QCD(${\cal R}$)*, and $U(1)^{N-1}$ lattice QED$_3$ in three dimensions. We arrived at sharp topological symmetry realizations which distinguish the zero temperature phases of such gauge theories, such as confined versus deconfined. \footnote{In QCD(${\cal R}$)*, the small $S^1 \times {\mathbb R}^3$ should be viewed as a spatial (not thermal) compactification, along which fermions are endowed with periodic boundary conditions. Its Minkowski space continuation is $S^1 \times {\mathbb R}^{2,1}$. If one wishes to study these gauge theories at finite temperature, a thermal circle should be formed out of the temporal direction on ${\mathbb R}^{2,1}$.} \begin{itemize} {\item[$\bf 1)$]The existence of continuous $U(1)_{*}$ topological symmetry is the necessary and sufficient condition to demonstrate the absence of a mass gap in the gauge sector and provides an unambiguous characterization of de-confinement. } \begin{itemize} {\item[$\bf 1.a)$] If the $U(1)_{*}$ symmetry is spontaneously broken, then there is a Goldstone boson.
The infrared theory is the free scalar (which is the same as a photon on ${\mathbb R}^3$).} {\item[$\bf 1.b)$] If the $U(1)_{*}$ symmetry is unbroken, the unbroken $U(1)_{*}$ protects the masslessness of the dual scalar. In some cases, the infrared theory flows into an interacting CFT. } \end{itemize} {\item[$\bf 2)$] The existence of a discrete topological symmetry is necessary, but not sufficient to exhibit confinement.} \begin{itemize} {\item[$\bf 2.a)$] If the monopole (or other flux) operators are irrelevant at large distances, then there is an emergent topological $U(1)_{\rm flux}$ symmetry. This class of theories will deconfine, and some will flow into interacting CFTs.} {\item[$\bf 2.b)$] If the monopole (or other flux) operator is relevant at large distances, then a mass gap and confinement will occur. Showing the relevance of flux operators is a sufficient criterion to exhibit a mass gap and confinement. } \end{itemize} \end{itemize} Some examples of these classes are tabulated in Table~\ref{default} along with useful references. I wish to point out that some of these necessary and sufficient conditions are not completely novel. An example of class $1.a)$ was discussed long ago by Affleck, Harvey and Witten \cite{Affleck:1982as}, and the statement of $2.a)$ appears in the work of Hermele et al.\ \cite{hermele-2004-70} on stable spin liquids, but it applies more generally to gauge theories. The totality of these criteria is new. \footnote{See also Refs.~\cite{Hands:2006dh, Di Giacomo:1999fa} which use disorder operators to probe confinement. These works also attempt to provide a symmetry realization for confinement. An application to a QCD-like theory with adjoint fermions is given in \cite{Cossu:2008wh}. It may be useful to perform the lattice simulations on an asymmetric lattice, which mimics ${\mathbb R}^3 \times S^1$ where $S^1$ is endowed with periodic spin connection for fermions.
The theoretical analysis shows that the small $S^1$ regime must exhibit confinement without chiral symmetry breaking \cite{Unsal:2007vu, Unsal:2007jx}. It would be interesting to test this on the lattice.} There are many interesting questions concerning the generalizations of these criteria. The most obvious is whether the topological symmetry characterization can be generalized to cases where the long distance dynamics is non-abelian. Another one is whether the abelian CFTs discussed in this paper have non-abelian counterparts. Assuming this is the case, are they dual to non-abelian spin liquids at large distances? Can we make use of this topological characterization towards the decompactification ${\mathbb R}^4$ limit of QCD(${\cal R}$)*? We leave these questions for future work. {\bf Ambiguity in defining compact QED$_3$ in continuum and resolution:} There are at least two continuum gauge theories which produce compact QED$_3$ in perturbation theory via gauge symmetry breaking in P$({\cal R})$ and QCD$({\cal R})$*. These flow into opposite IR theories, such as a CFT versus a theory with a mass gap in some cases, as shown in Table~\ref{default}. {\bf Spin liquid and P(F) duality:} We demonstrated that the $SU(N)$ Polyakov model with $2n_f$ massless fundamental fermions and $SU(n_f)_D$ spin systems in the $n_f\gg 1$ limit flow into the same interacting conformal field theory. This is to some extent surprising due to the absence of the Callias index theorem in lattice QED$_3$ \cite{Marston:1990bj}, and the very distinct symmetries of the spin Hamiltonian and P(F) model. Both theories are quantum critical in the sense that there are no relevant perturbative or non-perturbative operators consistent with the symmetries of the microscopic theory. Thus, these theories flow into interacting conformal field theories at long distances. As the number of flavors is reduced, the long distance limit of $2n_f \geq 4$ P(F) theory interpolates between the weakly and strongly coupled CFTs.
What happens with lattice QED$_3$ at a small number of flavors is still ambiguous. Given the long distance duality between the spin liquids and P(F) gauge theory, a sensible question is the meaning of the doping of spin liquids by holons on the gauge theory side. Clearly, compactification of the field space brings in new excitations (flux operators) from infinity, and generates a QCD* type of theory, with a mass gap in its gauge sector. It is desirable to understand the relation, if any, between the QCD* theories and the $d$-wave superconducting phase of high $T_c$ cuprates. \acknowledgments I am grateful to Eun-Ah Kim, B. Marston, M. Shifman for multiple useful explanations. I also would like to thank M. Headrick, S. Kachru, M. Mulligan, \"O. Oktel, T. Senthil, Piljyin Yi for related discussions. This work was supported by the U.S.\ Department of Energy Grant DE-AC02-76SF00515. \bibliographystyle{JHEP}
\section{Introduction} \label{sec: intro} Nuclear matter, an ideal infinite system made of strongly-interacting nucleons, is currently subject to intense study from multiple perspectives, due to its connections to the nuclear physics of finite nuclei \cite{rocamaza2018,Piekarewicz2019NeutronSkin,Burgio2020}, the astrophysics of neutron stars and gravitational waves \cite{HaenselNeutronStars,Burgio2021,Piekarewicz2022}, and the physics of cold Fermi gases \cite{BulgacUnitaryGas,Gandolfi2015}. Nuclear matter has been studied theoretically both within \textit{ab initio} theory and Density Functional Theory (DFT). In short, \textit{ab initio} or first-principles methods aim at finding an exact or systematically improvable solution to the many-body problem starting from a Hamiltonian that describes the interactions among the constituent nucleons \cite{computational_nuclear,Hergert2020}. DFT, on the other hand, maps the many-particle problem to a single-particle (s.p.) self-consistent (s.c.) problem that is based on the concept of an Energy Density Functional (EDF), i.e. on expressing the total energy of a generic system as a functional of its (generalized) densities \cite{Schunck2019,colo2020,Martin2020}. DFT is in principle an exact theory, but the EDFs which are currently used rely heavily on phenomenology \cite{colo2020}. The equation of state (EOS), i.e. (at zero temperature) the energy per particle as a function of the neutron and proton densities, is the fundamental ground state (g.s.) property of homogeneous matter and has been the main target of most works, see the reviews Refs. \cite{OertelOES,rocamaza2018,Burgio2021}. Another line of research has focused on inhomogeneous nuclear matter \cite{Gandolfi2015}, motivated by the fact that the inner crust of neutron stars is not uniform \cite{HaenselNeutronStars} and by the attractive possibility of constraining specific terms of the nuclear EDFs (see e.g. Refs. \cite{Maris2013,Forbes2014,Rrapaj2016,Shen2019}).
Neutron and neutron-proton drops, i.e. nuclear matter confined by an external trap, have been studied e.g. in Refs. \cite{Pudliner1996,gandolfi2011,Maris2013,Boulet2018}. The problem of the response of nuclear matter subject to a weak periodic perturbation has also been tackled. The dynamical response function has been determined for rather general EDFs numerically \cite{Riz2020TimeDependent} and analytically (see Refs. \cite{PASTORE20151,Pastore2021} and references therein). Recently, Gezerlis and collaborators \cite{Gezerlis2016,Gezerlis2017,Gezerlis2021}, extending techniques used for the electron gas \cite{Moroni1995,SenatoreBook,Dornheim2017Density} and cold atoms \cite{Gandolfi2014Unitary}, have attacked the problem of the neutron matter static response \textit{ab initio} with the Auxiliary Field Diffusion Monte Carlo (AFDMC) method \cite{carlson2015,Tews2020}. While the EOS and the static and dynamic response can be studied directly in the thermodynamic limit (TL) in the framework of DFT \cite{rocamaza2018,PASTORE20151}, most \textit{ab initio} methods simulate infinite matter by employing a finite number of particles (see e.g. Refs. \cite{computational_nuclear,LietzCompNucl,Hagen2014,Barbieri2017,Arthuis2022,Piarulli2020}). In fact, they are limited to a few tens of fermions at most, which implies that \textit{ab initio} results are affected by finite-size (FS) effects. In this context, developing a finite-$\rm{A}$ DFT formalism for nuclear matter is important for two reasons. First, very large numbers of particles can be studied in DFT due to its low computational cost, which provides a playground for understanding and handling FS effects. In Refs. \cite{Gezerlis2021,Gezerlis2022Skyrme}, for example, \textit{ab initio} simulations of perturbed matter were extrapolated to the TL with the aid of DFT calculations. Second, the finite-$\rm{A}$ DFT approach is instrumental in our program of constructing \textit{ab initio}-based EDFs started in Ref.
\cite{Marino2021}, since it paves the way to matching \textit{ab initio} and DFT calculations with the same number of particles in a consistent manner. The EOS of uniform matter has already been employed in a local density approximation scheme \cite{Marino2021,Riz2020TimeDependent} to link the EDF to microscopic theory. Full-fledged EDFs, however, must incorporate surface terms that can act exclusively in non-uniform systems. Perturbed nuclear matter, in this respect, is a promising candidate for setting constraints on the EDF surface contribution (see e.g. Refs. \cite{Dalfovo1995,Gandolfi2014Unitary,Gandolfi2015}). This work is devoted to a detailed description of the solution of the DFT problem for nuclear matter under the effect of an external perturbation for Skyrme-like EDFs. Our approach is based on simulating nuclear matter using a finite number of nucleons in a box on which periodic boundary conditions are imposed. The formalism for pure neutron matter (PNM) and symmetric nuclear matter (SNM), together with its numerical implementation, is presented; a careful analysis of the treatment of spin-orbit is provided. The static response problem is then tackled with this method and the effect of the perturbation on the energies, densities and level structure of the system is investigated. This paper is structured as follows. Section \ref{sec: dft formalism} is devoted to a detailed description of the finite-$\rm{A}$ nuclear DFT formalism and of its numerical implementation. Section \ref{sec: static response theory} reviews the theory of the static response of homogeneous matter. Results are presented in Sec. \ref{sec: results}. Lastly, Section \ref{sec: conclusions} summarizes our work and presents future developments. \section{Nuclear DFT formalism} \label{sec: dft formalism} \subsection{Overview of nuclear DFT} \label{sec: overview dft} We give a brief overview of nuclear DFT \cite{colo2020,Schunck2019}. Details are given in our previous work Ref.
\cite{Marino2021} and references therein. We consider quasi-local (or Skyrme-like) EDF models \cite{Schunck2019} for time-reversal-invariant systems, such as spin-saturated nuclear matter, and neglect pairing. We adopt the Kohn-Sham (KS) scheme \cite{Martin2020}, in which a representation in terms of s.p. orbitals $\psi_j(\mathbf{x})$ is introduced and the kinetic energy term is equal to that of a non-interacting Fermi system. Then, the total energy of a generic system is written as a functional of the number density $\rho_t(\mathbf{x})$, kinetic density $\tau_t(\mathbf{x})$ and spin-orbit density $\mathbf{J}_t(\mathbf{x})$ (see App. \ref{app: dens orbitals} for their definition) with $t=0,1$ labelling isoscalar ($\rho_0=\rho_n+\rho_p$) and isovector ($\rho_1=\rho_n-\rho_p$) quantities, and has the following structure: \begin{equation} \label{eq: edf basic structure} E = \int d\mathbf{x}\, \mathcal{E}(\mathbf{x}) = E_{kin} + E_{pot} + E_{ext} \end{equation} which comprises the kinetic energy, a nuclear potential energy term and possibly an external potential contribution, \begin{align} &E_{kin} = \int d\mathbf{x}\, \mathcal{E}_{kin}(\mathbf{x}) = \int d\mathbf{x}\, \frac{\hbar^2}{2m} \tau_0(\mathbf{x}) , \\ &E_{pot} = \int d\mathbf{x}\, \mathcal{E}_{pot}(\mathbf{x}), \\ &E_{ext} = \sum_{t=0,1} \int d\mathbf{x}\, \rho_t(\mathbf{x}) v_{t}(\mathbf{x}). \end{align} Throughout this work $\mathcal{E}_{pot}$ has the form \cite{Marino2021} \begin{align} & \mathcal{E}_{pot}(\mathbf{x}) = \sum_{t=0,1} \, \bigg( \sum_\gamma \left( c_{\gamma,0} + c_{\gamma,1} \beta^2 \right) \rho_0^{\gamma+1} \\ & + C_t^{\tau} \rho_t \tau_t + C_t^{\Delta \rho} \rho_t \Delta \rho_t + C_{t}^{J} \mathbf{J}_{t}^2 + C_t^{\nabla J} \rho_t \nabla \cdot \mathbf{J}_t \bigg) \nonumber \end{align} with $\beta=\rho_1/\rho_0$ being the isospin asymmetry. The KS-DFT equations are found by minimizing the EDF w.r.t.\ the s.p.
orbitals $\psi_j^{*}(\mathbf{x})$ and read for protons and neutrons ($q=n,p$) \cite{Schunck2019} \begin{align} \label{eq: skyrme HF eqs} \bigg[ & - \nabla \cdot \frac{\hbar^2}{2m^*_q(\mathbf{x})} \nabla + U_q(\mathbf{x}) + v_q(\mathbf{x}) + \\ & \mathbf{W}_q(\mathbf{x}) \cdot \left( -i\right) \left( \nabla \cross \mathbf{\sigma} \right) \bigg] \psi_j(\mathbf{x}) = \epsilon_j \psi_j(\mathbf{x}) \nonumber \end{align} where the fields entering the equations are defined as \begin{equation} \label{eq: def mwan field} U_q = \fdv{E}{\rho_q} \qquad \frac{\hbar^2}{2m_q^*} = \fdv{E}{\tau_q} \qquad \mathbf{W}_q = \fdv{E}{\mathbf{J}_q}. \end{equation} $m_q^{*}(\mathbf{x})$, $U_q(\mathbf{x})$ and $\mathbf{W}_q(\mathbf{x})$ are called effective mass, mean field and spin-orbit potential, respectively. \subsection{Infinite nuclear matter} \label{sec: intro inf matter} Nuclear matter is an infinite system of nucleons that interact through the strong interaction only \cite{rocamaza2018}. (The Coulomb interaction is neglected.) In the following we concentrate on zero-temperature and spin-unpolarized matter. Moreover, we limit ourselves to the limiting cases of SNM ($\rho_n=\rho_p=\rho_0/2$) and PNM ($\rho_p=0$, $\rho_n=\rho_0$), although extensions are straightforward. The fundamental quantity that characterizes homogeneous matter is the EOS $e(\rho,\beta) = E(\rho,\beta)/A$, where $E$ is the total energy of the system and $e$ the energy per nucleon. We also recall that in homogeneous matter both the gradients of the density and the spin-orbit density vanish \cite{Marino2021}. Some theoretical approaches attack nuclear matter directly in the TL. These include nuclear DFT \cite{PASTORE20151,rocamaza2018} and, e.g., self-consistent Green's functions \cite{Rios2020}. Most \textit{ab initio} methods, though, simulate infinite matter by using a finite number of particles (see e.g. Refs. \cite{LietzCompNucl,Hagen2014,Piarulli2020}).
Among them is AFDMC \cite{Tews2020}, which has been used extensively not only for the nuclear matter EOS, but also for inhomogeneous matter, namely neutron drops \cite{Maris2013}, as well as for neutron matter response \cite{Gezerlis2017}. DFT, too, can be formulated with a finite nucleon number, as proposed in Ref. \cite{Gezerlis2022Skyrme}. The standard technique adopted in most studies \cite{LietzCompNucl,Tews2020} involves considering $\rm{A}$ fermions enclosed in a cubic box of size $L$ and volume $\Omega=L^3$ and imposing periodic boundary conditions (PBCs) on the wave function. The cell size is chosen such that the density of the system is fixed at the constant value $\rho_0=\rm{A}/\Omega$. In this framework, the TL corresponds to the limit in which both $\rm{A}$ and $L$ go to infinity while keeping $\rho_0$ fixed \cite{Fetter}. The free gas (FG), which is the starting point for studying interacting matter, is described in terms of s.p. plane-wave orbitals $e^{i\mathbf{k} \cdot \mathbf{x}}/\sqrt{\Omega}$ with wave number $\mathbf{k}$ and kinetic energy $\frac{\hbar^2 \mathbf{k}^2}{2m}$. As a consequence of PBCs, the momenta $\mathbf{k}$ are quantized, i.e. $\mathbf{k} = \frac{2\pi}{L} \mathbf{n}$ where $\mathbf{n}$ is a three-component vector of integer numbers. Since the energy depends on $\mathbf{k}^2$ and thus on $\mathbf{n}^2$, a ``momentum space'' shell structure emerges, with different energy levels being labelled by $n^2$ and being degenerate. The first few momentum space ``magic numbers'' are given by $\rm{A}/g=1, 7, 19, 27, 33$, etc. \cite{LietzCompNucl}, where $g$ is the spin/isospin degeneracy (2 for spin-saturated PNM, 4 for spin-saturated SNM). Typically, the number of fermions in a calculation is selected so as to correspond to a shell closure of the FG in both homogeneous and perturbed matter. As we discuss below, this choice is fundamental when calculating the EOS with finite-$\rm{A}$ methods.
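The shell closures quoted here can be reproduced by directly counting integer vectors $\mathbf{n}$ grouped by $n^2$ (a short sketch of ours; the function name is our own choice):

```python
from collections import Counter
from itertools import product

def shell_closures(nshells=5, nmax=3):
    """Cumulative numbers A/g of plane-wave states at closed shells of the
    free gas in a periodic box, obtained by grouping integer vectors n by n^2."""
    counts = Counter(nx*nx + ny*ny + nz*nz
                     for nx, ny, nz in product(range(-nmax, nmax + 1), repeat=3))
    filled, closures = 0, []
    for n2 in sorted(counts)[:nshells]:
        filled += counts[n2]        # degeneracy of the shell with this n^2
        closures.append(filled)
    return closures

print(shell_closures())  # [1, 7, 19, 27, 33]
```

Multiplying by $g=2$ (PNM) or $g=4$ (SNM) gives the particle numbers $\rm{A}$ at which the FG has a closed shell.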
\subsection{Solution of DFT in a periodic box} \label{sec: skyrme box} We discuss in detail the solution of the DFT problem for a finite number of nucleons enclosed in a cubic box with PBCs. We focus on spin-saturated PNM and SNM, which are the most important cases for nuclei and neutron stars \cite{rocamaza2018}. Moreover, SNM and PNM can be treated as two-component (spin up/down) fermionic systems in a unified way. The case of asymmetric matter ($\rho_n \ne \rho_p$, $\rm{N}\ne \rm{Z}$) would require some limited extensions of the formalism and is left for future studies. From now on, for the sake of simplicity in the notation the isospin labels ($q$ or $t$) are suppressed. We consider an external potential $v(z)$ that is a function of the $z$ coordinate only. Thus, translational invariance is broken in the $z$ direction, but still holds in the $xy$ plane. In order to respect PBCs, $v(z)$ must be periodic as well. Moreover, we adopt the spin- and isospin-independent sinusoidal potential \begin{equation} \label{eq: vz} v(z) = 2 v_q \cos \left( q z \right) \end{equation} with $q$ being an integer multiple of $q_{min}=2\pi/L$. The s.p. wave functions (in 2-spinor notation), then, have the following structure: \begin{equation} \label{eq: orbitals init} \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \frac{e^{i k_x x}}{\sqrt{L}} \frac{e^{i k_y y}}{\sqrt{L}} \, \begin{pmatrix} \phi_{\mathbf{n},\lambda}(z,\uparrow) \\ \phi_{\mathbf{n},\lambda}(z,\downarrow) \end{pmatrix} \end{equation} PBCs imply that $k_x$ and $k_y$ are quantized in units of $2\pi/L$, i.e. $k_x = \frac{2\pi}{L} n_x$ and $k_y = \frac{2\pi}{L} n_y$, and $\phi_{\mathbf{n},\lambda}(z)$ is periodic, i.e. $\phi_{\mathbf{n},\lambda}(z+L)=\phi_{\mathbf{n},\lambda}(z)$. The states are labelled by the three integer numbers $\mathbf{n}$, plus a spin quantum number $\lambda=\pm 1$ whose precise meaning will be discussed below. The general DFT equations \eqref{eq: skyrme HF eqs} are now specialized to our case. 
We first note that the fields are functions of the $z$ coordinate only: $m^{*}=m^{*}(z)$, $U=U(z)$ and $\mathbf{W}= W(z) \hat{\mathbf{z}}$. (The detailed expressions of the EDF and the fields are reported in App. \ref{app: expr nuclear edf}.) For later convenience, we define the transverse momentum as \begin{align} \label{eq: def transverse momentum} \mathbf{k}_{n_x n_y} = k_x \hat{\mathbf{x}} + k_y \hat{\mathbf{y}} = \frac{2\pi}{L} \left( n_x \hat{\mathbf{x}} + n_y \hat{\mathbf{y}} \right) \end{align} having magnitude \begin{align} k_{n_x n_y} = \sqrt{k_x^2 + k_y^2} = \frac{2\pi}{L} \sqrt{n_x^2 + n_y^2}. \end{align} Now, we discuss the spin-orbit term of Eq. \eqref{eq: skyrme HF eqs} with the help of $\pdv{\psi_{\mathbf{n},\lambda}}{x} = ik_x \psi_{\mathbf{n},\lambda}$ and $\pdv{\psi_{\mathbf{n},\lambda}}{y} = ik_y \psi_{\mathbf{n},\lambda}$: \begin{align} \label{eq: spin orbin intermediate} & \mathbf{W}(\mathbf{x}) \cdot \left( -i\right) \left( \nabla \cross \mathbf{\sigma} \right) \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \\ & W(z) \left( -i\right) \left( \partial_x \sigma_y - \partial_y \sigma_x \right) \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \nonumber \\ & W(z) \left( k_x \sigma_y - k_y \sigma_x \right) \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \nonumber \\ & W(z) K_{n_x,n_y} \psi_{\mathbf{n},\lambda}(\mathbf{x}) \nonumber \end{align} In the last equality, we have introduced the spin matrix $K_{n_x,n_y}=k_x \sigma_y - k_y \sigma_x $, which reads explicitly as \begin{align} K_{n_x,n_y} = \begin{pmatrix} 0 & -i (k_x + i k_y) \\ i (k_x - i k_y) & 0 \end{pmatrix} . \end{align} Since $K_{n_x,n_y}$ is not diagonal, it is clear that the states $\psi_{\mathbf{n},\lambda}$ cannot be eigenstates of $\sigma_z$. While one possibility would be to solve the coupled DFT equations for the spin-up and -down components, a better choice is to take the $\psi$'s to be eigenstates of $K_{n_x,n_y}$, as suggested in Ref. \cite{SemiInfMatter2008}.
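As a quick numerical cross-check of this choice (a sketch of ours, written with the standard Pauli-matrix convention, so overall spinor phases may differ from the explicit expressions used in the text), one can verify that $K_{n_x,n_y}$ always has eigenvalues $\pm k_{n_x n_y}$ and position-independent eigenspinors carrying equal weight on the two spin components:

```python
import numpy as np

# Standard Pauli matrices (phase conventions are ours)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])

rng = np.random.default_rng(1)
for _ in range(50):
    kx, ky = rng.normal(size=2)           # random transverse momentum
    K = kx * sigma_y - ky * sigma_x       # the spin matrix K_{nx,ny}
    kperp = np.hypot(kx, ky)
    vals, vecs = np.linalg.eigh(K)        # Hermitian: real eigenvalues, ascending
    assert np.allclose(vals, [-kperp, kperp])   # eigenvalues +/- |k_perp|
    assert np.allclose(np.abs(vecs)**2, 0.5)    # equal up/down weight, 1/sqrt(2)
```

Both helicity branches are independent of position, which is what allows the factorized ansatz into a spatial orbital times a constant spinor used next.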
It is easy to verify that $K_{n_x,n_y}$ has eigenvalues $\pm k_{n_x n_y}$. Thus we impose \begin{align} \label{eq: helicity final} K_{n_x,n_y} \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \lambda k_{n_x n_y} \psi_{\mathbf{n},\lambda}(\mathbf{x}), \end{align} where $\lambda = \pm 1$. Importantly, since $K_{n_x,n_y}$ is independent of the position, Eq. \eqref{eq: helicity final} implies that the orbitals \eqref{eq: orbitals init} can be decomposed into the product of a single spatial orbital and a constant spinor, namely \begin{equation} \label{eq: orbitals} \psi_{\mathbf{n},\lambda}(\mathbf{x}) = \frac{e^{i k_x x}}{\sqrt{L}} \frac{e^{i k_y y}}{\sqrt{L}} \, \phi_{\mathbf{n},\lambda}(z) \, \chi_{n_x, n_y, \lambda}. \end{equation} The spinors $\chi_{n_x,n_y,\lambda}$ satisfy \begin{equation} \label{eq: helicity eigenstate} K_{n_x,n_y} \chi_{n_x,n_y,\lambda} = \lambda k_{n_x n_y} \chi_{n_x,n_y,\lambda}, \end{equation} and are given explicitly by \begin{equation} \chi_{n_x, n_y, \lambda} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \lambda e^{i\phi} \end{pmatrix} \ . \end{equation} In the last expression, the angle $\phi$ is given by $\phi = \arctan\left( n_y / n_x \right)$. Physically, the states $\psi_{\mathbf{n},\lambda}$ have a definite spin projection in the direction of the transverse momentum \eqref{eq: def transverse momentum}, which is not fixed but depends on the numbers $n_x$, $n_y$. The label $\lambda$ can thus be interpreted as a spin projection or helicity quantum number. The kinetic term can be manipulated along the same lines and is discussed in App. \ref{app: kinetic term}. Finally, applying Eqs. \eqref{eq: kin term}, \eqref{eq: spin orbin intermediate} and \eqref{eq: helicity final} to Eq.
\eqref{eq: skyrme HF eqs}, we find the following one-dimensional equations for the spatial orbital $\phi_{\mathbf{n},\lambda}(z)$: \begin{align} \label{eq: skyrme final eqs} & - \frac{d}{dz} \left( \frac{\hbar^2}{2m^{*}(z)} \phi_{\mathbf{n},\lambda}'(z) \right) + \\ & \left( U(z) + v(z) + \lambda k_{n_x n_y} W(z) + \frac{\hbar^2}{2m^{*}(z)} k_{n_x n_y}^2 \right) \phi_{\mathbf{n},\lambda}(z) = \nonumber \\ & \epsilon_{\mathbf{n},\lambda} \phi_{\mathbf{n},\lambda}(z) \nonumber . \end{align} These are s.p. state-dependent Schr\"{o}dinger equations that must be solved self-consistently due to the density-dependence of the fields. For a given set of quantum numbers $n_x$, $n_y$ and $\lambda$, $n_z$ labels the eigensolutions ordered by increasing s.p. energies $\epsilon$. The $z$ coordinate is restricted to the symmetric interval $\left[ -\frac{L}{2}, \frac{L}{2} \right]$. We note that due to time-reversal invariance, which holds if we consider the spin-independent potential \eqref{eq: vz}, the eigenvalues $\epsilon_{\mathbf{n},+1}$ and $\epsilon_{\mathbf{n},-1}$ are degenerate, while in general the $\lambda=\pm 1$ spatial orbitals are different. In the special case of homogeneous matter ($v=0$ and $\rho(z)=\rho_0$), though, the spin-orbit field $W(z)$ vanishes [see Eq. \eqref{eq: spin orbit field}], and thus the equations for the spin-orbit partners $\lambda=\pm 1$ are identical and so are the orbitals, namely $\phi_{\mathbf{n},+1}=\phi_{\mathbf{n},-1}$. As a consequence, the spin-orbit density vanishes too [Eq. \eqref{eq: spin orbit density final}] and thus uniform matter is insensitive to spin-orbit. In passing, we also observe that the energy of a spin-saturated and closed-shell system is invariant when the sign of the spin-orbit coefficient is flipped, $C^{\nabla J} \longrightarrow - C^{\nabla J}$. Indeed, the effect of this transformation is that of swapping the $\lambda=1$ and $\lambda=-1$ states in Eq.
\eqref{eq: skyrme final eqs} and, if an equal number of spin states is occupied, all the densities, including $J(z)$, remain unchanged, and so does the total energy. We shall describe how the Schr\"{o}dinger equation \eqref{eq: skyrme final eqs} is solved, how the many-particle g.s. of the system is constructed, and how the s.c. loop is dealt with. Due to the intrinsic periodicity of the systems under study, expanding Eq. \eqref{eq: skyrme final eqs} in the plane-wave basis (see e.g. Refs. \cite{IzaacComputationalQM,Martin2020}) allows one to solve the problem very efficiently. A few tens of plane waves are typically enough to find converged results even for moderately strong perturbations; by contrast, the finite-difference approach used in Ref. \cite{Gezerlis2022Skyrme} requires a mesh of at least several hundred points and a much more time-consuming diagonalization. The orbitals are Fourier-expanded as $\phi(z)= \frac{1}{\sqrt{L}} \sum_k c_k e^{ikz}$ where again $k=\frac{2\pi}{L} n$ and the Schr\"{o}dinger equation is recast into matrix form, namely \begin{align} \label{eq: eig plane waves} \sum_{k'} \left( \tilde{h}_{\mathbf{n},\lambda} \right)_{k,k'} c_{k'} = \epsilon_{\mathbf{n},\lambda} c_k, \end{align} where $\left( \tilde{h}_{\mathbf{n},\lambda} \right)_{k,k'}$ is the Hamiltonian matrix in the plane-wave basis and is derived in App. \ref{app: hamiltonian in plane waves}. \begin{comment} is compactly written as the sum of the Fourier transforms of the kinetic and potential terms, namely \begin{equation} \label{eq: hamiltonian tilde} \tilde{h}_{k,k'} = k' \tilde{B}(k-k') k + \tilde{U}(k-k'). \end{equation} with $\tilde{B}(k-k')$ and $\tilde{U}(k-k')$ defined as \begin{align} & \tilde{B}(k-k') = \frac{1}{L} \int_{-L/2}^{L/2} dz\, e^{-i(k-k')z} \frac{\hbar^2}{2m^{*}(z)} , \\ & \tilde{U}(k-k') = \frac{1}{L} \int_{-L/2}^{L/2} dz \, e^{-i(k-k')z} \\ &\left( U(z) + v(z) + \lambda k_{n_x n_y} W(z) + \frac{\hbar^2}{2m^{*}(z)} k_{n_x n_y}^2 \right) .
\nonumber \end{align} \end{comment} Nuclear DFT is based on an independent-particle picture and the many-particle g.s. configuration is found by occupying the first $\rm{A}$ energy levels of the system. In order to determine them, Eqs. \eqref{eq: skyrme final eqs} are solved for several different combinations $(n_x,n_y)$, and separately for the two spin states $\lambda$ \cite{Gezerlis2022Skyrme}. Then, the solutions are collated and the lowest-energy states are filled up with $\rm{A}/2$ spin-up and $\rm{A}/2$ spin-down particles. (The discussion is limited to spin-saturated systems.) Energy levels are degenerate, since $n_x$ and $n_y$ only enter Eq. \eqref{eq: skyrme final eqs} through $k_{n_x n_y} \propto \sqrt{n_x^2+n_y^2}$, so that inverting the sign of $n_x$, $n_y$ or both, or exchanging the two numbers, leaves the equation invariant. Such degeneracy $g_{n_x,n_y}$ can be exploited to reduce the computational load of the method, since we can restrict ourselves to the pairs $(n_x,n_y)$ with $0 \le n_x \le n_y \le n_{max}$. It is good practice to choose a large value for $n_{max}$ at first, though the following argument, which generalizes that of Ref. \cite{Gezerlis2022Skyrme}, allows one to stop the search over the $(n_x,n_y)$ pairs sooner. Indeed, we observe that $k_{n_x n_y}$ enters Eq. \eqref{eq: skyrme final eqs} in the combination $\lambda k_{n_x n_y} W(z) + \frac{\hbar^2}{2m^{*}(z)} k_{n_x n_y}^2 $. This contribution is positive when $k_{n_x n_y}$ satisfies the inequality \begin{align} \label{eq: kxy condition} k_{n_x n_y} > \Bar{k}_{n_x n_y} = \max_z \left( - \lambda \frac{ 2m^{*}(z) W(z) }{\hbar^2} \right). \end{align} Then, provided that $k_{n_x n_y} > \Bar{k}_{n_x n_y}$, the lowest eigenvalue of Eqs. \eqref{eq: skyrme final eqs} increases as $k_{n_x n_y}$ increases.
Now, while one is iterating over the combinations $(n_x,n_y)$ (which must have been sorted according to increasing values of $n_x^2+n_y^2$), and separately for $\lambda=+1$ and $-1$, one checks whether the lowest eigenvalue $\epsilon_{n_x,n_y,0,\lambda}$ is greater than the highest of the $\rm{A}/2$ lowest s.p. energies found so far. In that case, the cycle can be stopped, since we are guaranteed by Eq. \eqref{eq: kxy condition} that the many-nucleon g.s. does not receive contributions from higher $n_x^2+n_y^2$. Once the occupied orbitals and the corresponding s.p. energies have been found, the total energy and the densities (App. \ref{app: dens orbitals}) of the system are computed. \begin{comment} Number density, kinetic density and spin-orbit density may be computed from their definitions as functions of the occupied orbitals \cite{Schunck2019} applied to the wave functions \eqref{eq: orbitals}. Eqs. \eqref{eq: nabla psi} and \eqref{eq: laplacian psi} are also used to find \begin{align} \rho(z) &= \sum_j \abs{\psi_j(\mathbf{x})}^2 = \, \frac{1}{L^2} \sum_{\mathbf{n},\lambda} \abs{\phi_{\mathbf{n},\lambda}}^2(z) \\ \tau(z) &= \sum_j \abs{\nabla \psi_j (\mathbf{x})}^2 \\ &= \frac{1}{L^2} \sum_{\mathbf{n},\lambda} \left( \abs{\phi_{\mathbf{n},\lambda}'}^{2} + k_{n_x n_y}^2 \abs{\phi_{\mathbf{n},\lambda}}^2 \right) \nonumber \\ J_z(z) &= \sum_j \psi_j^*(\mathbf{x}) \left( -i\right) \left( \nabla \cross \mathbf{\sigma} \right)_3 \psi_j(\mathbf{x}) \\ &= \sum_{\mathbf{n},\lambda} \psi_{\mathbf{n},\lambda}^*(\mathbf{x}) K \psi_{\mathbf{n},\lambda}(\mathbf{x}) \nonumber \\ &= \frac{1}{L^2} \sum_{\mathbf{n},\lambda} \lambda k_{n_x n_y} \abs{\phi_{\mathbf{n},\lambda}(z)}^2 \nonumber \end{align} where only the z component of $\mathbf{J}$ matters and Eq. \eqref{eq: helicity eigenstate} has been used. \end{comment} The total energy is evaluated in two ways, i.e.
as an integral of the energy density, \begin{equation} \label{eq: e integral} E = L^2 \int_{-L/2}^{L/2} dz \mathcal{E}(z), \end{equation} and by means of \begin{equation} \label{eq: energy erea} E = \frac{1}{2} \left( T + \sum_j \epsilon_j \right) + E_{rea}. \end{equation} The rearrangement energy $E_{rea}$ and the energy density $\mathcal{E}(z)$ are given in App. \ref{app: expr nuclear edf}. The expressions \eqref{eq: e integral} and \eqref{eq: energy erea} must match when they are evaluated on the g.s. and this provides a strong check on the correctness of the method and on its convergence to the exact g.s. A crucial aspect of DFT is that the potential is itself a functional of the densities. Therefore, a s.c. solution to the problem must be sought \cite{Schunck2019}. At each iteration $i$ of the s.c. loop, the densities are determined for the current values of the fields, as described above. Then new fields are generated by linearly mixing the old fields with the ones evaluated on the newly obtained densities $\rho^{(i)}$ \cite{SkyrmeRpa}, namely \begin{align} U^{(i+1)} = \alpha U^{(i)} + \left( 1-\alpha \right) U\left[ \rho^{(i)} \right] \end{align} and similar relations for $W$ and $\hbar^2/(2m^*)$. $\alpha$ is a mixing parameter; in order to achieve convergence, it is safe to be rather conservative, e.g. we choose $\alpha=0.8-0.9$ at the beginning and then gradually decrease it as iterations go by. At the beginning ($i=0$), the densities are initialized at the uniform matter values $\rho(z)=\rho_0$, $\tau(z) = \frac{3}{5} \rho_0 q_F^2$ and $J(z)=0$ and the fields are determined accordingly. The s.c. procedure is stopped if two conditions are met: the energies between iterations $i$ and $i-1$ and, at the same time, the two formulas \eqref{eq: energy erea} and \eqref{eq: e integral} for the energy at iteration $i$, agree within a chosen tolerance. Thresholds of the order of 0.1-1 keV per nucleon are usually reached within a few tens of iterations.
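The s.c. loop with linear mixing can be sketched as follows (a minimal illustration under assumed interfaces, not the code actually used in this work: `solve` returns the densities and energy for given fields, and `fields_of_rho` evaluates the fields on the densities):

```python
def scf_loop(solve, fields_of_rho, fields, alpha=0.9, tol=1e-6, max_iter=200):
    """Self-consistency loop with linear mixing of the fields."""
    e_old = None
    for it in range(max_iter):
        rho, e = solve(fields)
        # Convergence test on the energy change between iterations.
        if e_old is not None and abs(e - e_old) < tol:
            return rho, e, it
        e_old = e
        new = fields_of_rho(rho)
        # U^(i+1) = alpha U^(i) + (1 - alpha) U[rho^(i)], and similarly
        # for the other fields; alpha is relaxed as iterations go by.
        fields = [alpha * f + (1.0 - alpha) * n for f, n in zip(fields, new)]
        alpha = max(0.5, 0.99 * alpha)
    raise RuntimeError("self-consistency not reached")
```

In the full method a second condition is imposed as well, namely the agreement of the two energy formulas above; the sketch keeps only the energy-change criterion for brevity.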
Combining linear mixing and two convergence conditions makes our approach rather robust. \section{Theory of the static response} \label{sec: static response theory} The theory of the response of homogeneous matter to an external static perturbation is summarized. In-depth discussions can be found in Refs. \cite{giuliani_vignale_2005,SenatoreBook,Lundqvist}. Consider a system with uniform g.s. density $\rho_0$, described either by a Hamiltonian $\hat{H}$ or an EDF. A static potential $v(\mathbf{x})$ coupled to the total density is then turned on. $v(\mathbf{x})$ is periodic so as to respect the PBCs. The density and energy of the g.s. of the perturbed system are called $\rho_v(\mathbf{x})$ and $E[v]$, respectively. If the external potential is weak enough, its effect can be treated perturbatively (see e.g. Refs. \cite{Fetter,giuliani_vignale_2005}). The density fluctuation induced by $v(\mathbf{x})$, in particular, is linear in the external potential and is written as follows: \begin{align} \label{eq: delta rho real space} \delta\rho(\mathbf{x}) = \rho_v(\mathbf{x}) - \rho_0 = \int d\mathbf{x}' \chi(\mathbf{x},\mathbf{x}') v(\mathbf{x}'). \end{align} The static response function $\chi(\mathbf{x},\mathbf{x}')$ has been introduced and we stress that it depends exclusively on the properties of the unperturbed system. The response of homogeneous matter, in particular, is a function only of $\mathbf{x}-\mathbf{x}'$, i.e. $\chi(\mathbf{x},\mathbf{x}')=\chi(\mathbf{x}-\mathbf{x}')$. While a generic periodic function $v(\mathbf{x})$ is a superposition of plane waves, in the following we consider without loss of generality a monochromatic potential oscillating at a given wave number $\mathbf{q}$, namely \begin{align} \label{eq: periodic v} v(\mathbf{x}) = v_q e^{i\mathbf{q} \cdot \mathbf{x}} + c.c. = 2 v_q \cos\left( \mathbf{q} \cdot \mathbf{x} \right).
\end{align} Thus the density fluctuation induced by the perturbation \eqref{eq: periodic v} is monochromatic too and is given by \begin{align} \label{eq: delta rho harmonic} \delta \rho (\mathbf{x}) = 2 \rho_q \cos\left( \mathbf{q} \cdot \mathbf{x} \right), \end{align} where the amplitude $\rho_q$ is linear in $v_q$, i.e. \begin{align} \label{eq: rhoq v} \rho_q = \chi(q) v_q \end{align} and $\chi(q)$ is the Fourier transform of $\chi(\mathbf{x},\mathbf{x}')$, see Eq. \eqref{eq: chi fourier transform}. The energy of the perturbed system, instead, is quadratic in the external potential. In App. \ref{app: static response theory}, we derive that the energy per particle is given by \cite{SenatoreBook} \begin{equation} \label{eq: ev quadratic} \delta e_v = e_v - e_0 = \frac{\chi(q)}{\rho_0} v_q^2. \end{equation} \begin{comment} $\chi(\mathbf{x},\mathbf{x}')$ is the static response function, that we stress depends exclusively on the properties of the unperturbed system. By the previous equation, $\chi$ can be defined as $\chi(\mathbf{x},\mathbf{x}') = \fdv{\rho(\mathbf{x})}{v(\mathbf{x}')} \eval_{v=0}$. The change in the energy, instead, is quadratic in $v(\mathbf{x})$, as it can be verified by expanding $E[v]$ (understood as a functional of $v$) around the unperturbed system $v=0$, namely \cite{SenatoreBook} \begin{align} \label{eq: delta E func v} & E[v] - E[0] = \int d\mathbf{x} v(\mathbf{x}) \rho_0 + \\& \frac{1}{2} \int d\mathbf{x} \int d\mathbf{x}' \chi(\mathbf{x},\mathbf{x}') v(\mathbf{x}) v(\mathbf{x}') \nonumber, \end{align} and noticing that the first-order term vanishes, $v$ being periodic. (A more general argument is presented in Ref. \cite{giuliani_vignale_2005}). That the static response enters Eq. \eqref{eq: delta E func v} can be verified by using $\rho(\mathbf{x}) = \fdv{E}{\rho(\mathbf{x})}$ and the identities \begin{align} \secondfdv{E[v]}{v(\mathbf{x})}{v(\mathbf{x}')} = \fdv{\rho(\mathbf{x})}{v(\mathbf{x}')} = \chi(\mathbf{x},\mathbf{x}'). 
\end{align} Then one can transform Eq. \eqref{eq: delta E func v} to momentum space inserting the Fourier expansions \begin{align} \label{eq: fourier exp} \delta \rho(\mathbf{x}) &= \sum_\mathbf{k} \rho_\mathbf{k} e^{i\mathbf{k} \cdot \mathbf{x}}, \quad v(\mathbf{x}) = \sum_\mathbf{k} v_\mathbf{k} e^{i\mathbf{k} \cdot \mathbf{x}} \\ \label{eq: chi fourier transform} \chi(\mathbf{x} - \mathbf{x}') &= \frac{1}{\Omega} \sum_\mathbf{k} \chi(\mathbf{k}) e^{i\mathbf{k} \cdot (\mathbf{x}- \mathbf{x}') }. \end{align} Thus one finds that \begin{align} \label{eq: delta E fourier} E[v] - E[0] = \frac{\Omega}{2} \sum_\mathbf{k} v_\mathbf{k} \chi(\mathbf{k}) v_{-\mathbf{k}}. \end{align} While a generic periodic function $v(\mathbf{x})$ is a superposition of harmonics [Eq. \eqref{eq: fourier exp}], one can consider a monochromatic potential oscillating at a given wave number $\mathbf{q}$, namely \begin{align} \label{eq: periodic v} v(\mathbf{x}) = v_q e^{i\mathbf{q} \cdot \mathbf{x}} + c.c. = 2 v_q \cos\left( \mathbf{q} \cdot \mathbf{x} \right). \end{align} Substituting the previous potential in Eq. \eqref{eq: delta E fourier} and using the relations $\rho_0=A/\Omega$ and $\chi=\chi(\abs{\mathbf{q}})$ that hold for uniform matter, we find that the energy per particle of the perturbed system is given by \cite{SenatoreBook} \begin{equation} \label{eq: ev quadratic} \delta e_v = e_v - e_0 = \frac{\chi(q)}{\rho_0} v_q^2. \end{equation} Moreover, the density fluctuation induced by the perturbation \eqref{eq: periodic v} is monochromatic too and is given by \begin{align} \label{eq: delta rho harmonic} \delta \rho (\mathbf{x}) = 2 \rho_q \cos\left( \mathbf{q} \cdot \mathbf{x} \right), \end{align} where the amplitude $\rho_q$ satisfies \begin{align} \label{eq: rhoq v} \rho_q = \chi(q) v_q. \end{align} \end{comment} The formalism we have outlined is valid both in the TL and in finite systems, and both for DFT and for Hamiltonian-based methods. 
The question is now how to compute the response function in practice. For generalized Skyrme EDFs \cite{PASTORE20151} and Gogny and Nakada EDFs \cite{Pastore2021}, for example, the response in the TL can be determined analytically (App. \ref{app: edf tl resp}). An alternative for studying $\chi(q)$ is provided by exploiting Eqs. \eqref{eq: rhoq v} or \eqref{eq: ev quadratic}. The strategy to determine $\chi(q)$ for a uniform system at a given density $\rho_0$, and with a given particle number, is the following. For a given (quantized) momentum $q$, multiple calculations of the g.s. of the perturbed system are performed for different values of the strength $v_q$ of the external potential \eqref{eq: periodic v}. Then $\chi(q)$ can be extracted from the amplitude of the density fluctuations [Eq. \eqref{eq: rhoq v}] or from the energies [Eq. \eqref{eq: ev quadratic}] as a function of $v_q$, for sufficiently small $v_q$. This strategy has been applied in several contexts, e.g. Refs. \cite{SenatoreBook,Gezerlis2017,Dornheim2017,Stringari2019}, and provides a relatively straightforward way to determine the static response function numerically. We will interpolate energies using the more general formula \cite{Gezerlis2017,Dornheim2017} \begin{align} \label{eq: fourth order ev} \delta e_v = e_v - e_0 = \frac{\chi(q)}{\rho_0} v_q^2 + C_4 v_q^4 \end{align} which takes into account higher-order contributions. Second-order perturbation theory, or equivalently the spectral representation of the dynamical density response $\chi(\mathbf{q},\omega)$, can be employed to derive a formula that relates $\chi(q)$ to the excited states of the homogeneous system \cite{Fetter,giuliani_vignale_2005}. 
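Extracting $\chi(q)$ from the fit \eqref{eq: fourth order ev} is a linear least-squares problem in $v_q^2$ and $v_q^4$; a minimal sketch via the normal equations (illustrative only, with made-up variable names):

```python
def chi_from_energies(v_list, de_list, rho0):
    """Least-squares fit of de = (chi/rho0) v^2 + C4 v^4; returns (chi, C4)."""
    # Normal equations for the two even monomials v^2 and v^4.
    s22 = sum(v ** 4 for v in v_list)
    s24 = sum(v ** 6 for v in v_list)
    s44 = sum(v ** 8 for v in v_list)
    r2 = sum(de * v ** 2 for v, de in zip(v_list, de_list))
    r4 = sum(de * v ** 4 for v, de in zip(v_list, de_list))
    det = s22 * s44 - s24 * s24
    a = (r2 * s44 - r4 * s24) / det    # a = chi / rho0
    c4 = (r4 * s22 - r2 * s24) / det
    return a * rho0, c4
```

For sufficiently small $v_q$ the quartic term is a small correction, and the fitted $\chi(q)$ converges to the linear-response value.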
For the case of the spin- and isospin-saturated $\rm{A}$-fermion FG, the response $\chi_{0,\rm{A}}$ at zero temperature is given by \cite{Dornheim2017,giuliani_vignale_2005} \begin{comment} \begin{equation} \chi_{0,A}(q) = \frac{1}{\Omega} \sum_{\mathbf{k},\sigma} \frac{n_{\mathbf{k}+\mathbf{q},\sigma}-n_{\mathbf{k},\sigma} }{\epsilon_{\mathbf{k}+\mathbf{q}}-\epsilon_{\mathbf{k}}} \end{equation} with $\sigma$ labels the spin projection and $n_{\mathbf{k},\sigma}$ and $\epsilon_{\mathbf{k}}=\frac{\hbar^2\mathbf{k}^2}{2m}$ are the occupation factors and the s.p. energies of the momentum eigenstates, respectively. At zero temperature and for spin/isospin saturated systems, the previous formula leads to \end{comment} \begin{equation} \label{eq: chi0N analytic} \chi_{0,\rm{A}}(q) = - \frac{4mg}{\hbar^2 \Omega} \sum_{\mathbf{k}\,\rm{occ}} \frac{1}{\left( \mathbf{k}+\mathbf{q}\right)^2 - \mathbf{k}^2 }, \end{equation} where the sum extends over the occupied momentum states and terms with vanishing denominator can be safely neglected. Consistently with the assumptions of Sec. \ref{sec: dft formalism}, we write $\mathbf{k} = \frac{2\pi}{L} \mathbf{n}$ and take $\mathbf{q}$ quantized and parallel to the $z$ direction, i.e. $\mathbf{q}=q \hat{\mathbf{z}} = \frac{2\pi}{L} p \, \hat{\mathbf{z}}$, with $p$ integer. Then Eq. \eqref{eq: chi0N analytic} is expressed as \begin{align} \label{eq: chi0N analytic final} \chi_{0,\rm{A}}(q) = - \frac{mg}{L \pi^2 \hbar^2} \sum_{\mathbf{n}\,\rm{occ}}\frac{1}{p^2 + 2 p n_z}. \end{align} This formula is straightforward to evaluate: we determine the occupied states of the $\rm{A}$-particle FG g.s. once and then, for each value of $q$, we simply perform a sum over these states.
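Both Eq. \eqref{eq: chi0N analytic final} and its TL limit, Eqs. \eqref{eq: lindhard tl}-\eqref{eq: fk lind}, are simple to code; a minimal sketch (function and variable names are ours, not taken from an actual implementation):

```python
import math

def chi0_finite(p, occ, m, hbar, g, L):
    """Eq. (chi0N analytic final): exact FG response at q = 2*pi*p/L.

    occ is the list of occupied (nx, ny, nz) triplets of the A-particle
    FG ground state; p is the integer labelling the momentum q.
    """
    s = 0.0
    for nx, ny, nz in occ:
        d = p * p + 2 * p * nz
        if d != 0:  # terms with vanishing denominator drop out
            s += 1.0 / d
    return -m * g / (L * math.pi ** 2 * hbar ** 2) * s

def chi0_lindhard(q, qF, m, hbar, g):
    """TL limit: static Lindhard function at zero frequency."""
    k = q / (2.0 * qF)
    if k == 1.0:
        f = 0.5  # the logarithmic term vanishes at q = 2 qF
    else:
        f = 0.5 * (1.0 + (1.0 - k * k) / (2.0 * k)
                   * math.log(abs((1.0 + k) / (1.0 - k))))
    return -g * m * qF / (2.0 * (hbar * math.pi) ** 2) * f
```

The finite-$\rm{A}$ function is evaluated once per momentum as a plain sum over the occupied states, exactly as described above.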
In the TL, $n_\mathbf{k}=\theta(q_F-k)$, $\frac{1}{\Omega} \sum_\mathbf{k} \longrightarrow \int \frac{d\mathbf{k}}{(2\pi)^3}$ \cite{Fetter} and the static response becomes the well-known Lindhard function at zero frequency \cite{Lindhard} \begin{align} \label{eq: lindhard tl} & \chi_0(q) = - g \frac{mq_F}{2 (\hbar \pi)^2} f\left( \frac{q}{2q_F} \right) \\ \label{eq: fk lind} & f(k) = \frac{1}{2} \left( 1 + \frac{1-k^2}{2k} \log \abs { \frac{1+k}{1-k } } \right). \end{align} \section{Results} \label{sec: results} The method described in Sec. \ref{sec: dft formalism} is applied to calculate the EOS and the static response. The popular SLy4 EDF \cite{Chabanat_Sly} is used when not stated otherwise, and examples of perturbed matter calculations are typically performed at a reference density of $\rho_0$= 0.16 fm$^{-3}$. DFT energies are converged within a tolerance of 1 keV per nucleon. Perturbation strengths are measured in units of the Fermi energy of the corresponding system ($v_q/E_F$). We plot the static response function in the form $-\chi(q)/\rho_0$ (in MeV$^{-1}$), which is everywhere positive. Momenta are reported either in units of the Fermi momentum ($q/q_F$) or as integer multiples of the minimum allowed momentum ($q_{min}=2\pi/L$). \subsection{EOS} \label{sec: results eos} As a first application, the EOS is studied in both SNM (Fig. \ref{fig: eos_sly4_snm}) and PNM (Fig. \ref{fig: eos_sly4_pnm}). The TL EOS is shown as a solid line, while calculations with $\rm{A}$=132, 16676 nucleons and $\rm{N}$=66, 8338 neutrons, respectively, are reported as symbols. Multiples of 33 particles are commonly used in infinite matter studies, because the kinetic energy per particle of a FG made of 33$g$ particles is rather close to the TL FG energy (see Ref. \cite{Gezerlis2017}, Fig. 1).
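The closed-shell particle numbers quoted in this work can be checked by counting the momentum states inside a shell (a small illustrative helper, not part of the actual code):

```python
import math

def closed_shell_states(n2_max):
    """Number of momentum states (nx, ny, nz) with nx^2+ny^2+nz^2 <= n2_max."""
    n = math.isqrt(n2_max)
    return sum(1 for nx in range(-n, n + 1)
                 for ny in range(-n, n + 1)
                 for nz in range(-n, n + 1)
                 if nx * nx + ny * ny + nz * nz <= n2_max)
```

With $g=2$ ($g=4$) the 33 states with $n^2 \le 4$ give the $\rm{N}$=66 ($\rm{A}$=132) systems, while filling all shells up to $n^2=100$ gives the 4169 states per spin/isospin state used for the large-$\rm{A}$ reference.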
As a prototypical large-$\rm{A}$ system, we use a number of nucleons equal to 4169 times the spin/isospin degeneracy $g$, which corresponds to filling up all the momentum shells of the FG up to $n^2=n_x^2+n_y^2+n_z^2=100$. Indeed, the results for these numbers of nucleons turn out to be practically indistinguishable from the TL curve and provide a strong check on the correctness of the numerical calculations. It can also be appreciated that the $\rm{N}$=66 and $\rm{A}$=132 EOS give energies very close to the TL EOS, so that the special usefulness of these ``magic numbers'' is confirmed also for DFT calculations. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{eos_sly4_snm.pdf} \caption{SNM EOS computed with the SLy4 EDF in the TL (line) and with a finite number of particles (symbols). } \label{fig: eos_sly4_snm} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{eos_sly4_pnm.pdf} \caption{Same as Fig. \ref{fig: eos_sly4_snm}, but for PNM. } \label{fig: eos_sly4_pnm} \end{minipage} \end{figure*} \begin{comment} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{extrap_eos_sly4_snm.pdf} \caption{ SNM EOS in the TL (line), computed with $\rm{A}$=76 nucleons (filled symbols) and extrapolated from $\rm{A}$=76 to the TL (hollow symbols). See text. } \label{fig: extrap_eos_sly4_snm} \end{figure} \end{comment} \subsection{Free response} \label{sec: results free response} A second study concentrates on the static response of the FG. The exact formula for $\chi_{0,\rm{N}}$ [Eq. \eqref{eq: chi0N analytic}] is applied in Fig. \ref{fig:free resp cinv} for different numbers of neutrons and compared to the TL response \eqref{eq: lindhard tl}. FS effects are rather strong at small or moderate momenta and manifest themselves as a non-monotonic behaviour of $\chi_{0,\rm{N}}(q)$ at finite $\rm{N}$, while the TL response function is strictly decreasing in magnitude.
For $q>2q_F$, instead, the oscillations tend to disappear and the curves match rather well for all particle numbers. This qualitative change of behaviour is due to geometric reasons, see e.g. the calculation of $\chi_0(q)$ in Ref. \cite{Fetter}: essentially, for $q>2q_F$ any occupied momentum state can be scattered from the g.s. (the Fermi sphere) to an empty state, and thus shell effects, which strongly affect the results at small $q$, are ineffective. The special role of $q=2q_F$ is also signalled by the fact that the TL Lindhard function \eqref{eq: lindhard tl} is non-analytic at that point. Moreover, we note that the convergence to the TL as $\rm{N}$ is increased is relatively slow and mild oscillations continue to persist up to very large $\rm{N}$. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth} \includegraphics[width=\textwidth]{free_resp_tl_rho0.16.pdf} \caption{Dashed lines: free response function $-\chi_{0,N}(q)/\rho_0$ in PNM at $\rho_0=$0.16 fm$^{-3}$ as a function of $q/q_F$ for different numbers of neutrons. Full line: response in the TL (Lindhard function). } \label{fig:free resp cinv} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\textwidth]{chi0_rho0.16_N66.pdf} \caption{ Static response $-\chi_{0,\rm{N}}(q)/\rho_0$ of the FG as a function of $q/q_F$ in PNM at a density $\rho_0=0.16$ fm$^{-3}$. The exact response (filled squares) and the response determined by a fit to the Mathieu energies (empty diamonds) are shown for $\rm{N}$=66 neutrons. For comparison, the TL response (Lindhard function) is also plotted. } \label{fig:resp mathieu N66} \end{minipage} \end{figure*} Then, the free response is computed numerically and compared to the analytical results. In particular, the FG response is determined by solving the Mathieu problem \cite{Gezerlis2017}, i.e.
the independent-particle problem of fermions subject to the external potential \eqref{eq: vz} (with the EDF potential terms turned off), for different momenta $q$ and for strengths $v_q/E_F$ between 0.01 and 0.1 (with a step of 0.01). Then the energy differences $\delta e_v$ are interpolated with the quartic formula \eqref{eq: fourth order ev} at each $q$. In Fig. \ref{fig:resp mathieu N66}, a comparison is drawn in the case of PNM with $\rm{N}$=66 neutrons between the exact response (filled squares) and the values obtained through the fitting procedure (empty diamonds). An almost perfect agreement is obtained, with a modest discrepancy only at the lowest momentum ($q/q_F \approx$ 0.5). In order to better understand this deviation, in Fig. \ref{fig:ratio mathieu N66} we consider the ratio between the energy variation $\delta e_v$ and the square of the perturbation strength $v_q$ as a function of $v_q/E_F$. The exact response is shown as a hollow symbol at $v_q=0$. If linear response theory were exact, at least in a certain range of small $v_q$, the ratio $\delta e_v/v_q^2$ would be constant. This is indeed verified for $q/q_{min}>1$ over the whole interval considered, but at $q/q_{min}=1$ a slight underestimation of the response is observed at all finite perturbations. This highlights that modest non-linear (fourth-order) contributions are present in the behaviour of the system. Importantly, though, the ratio correctly converges to the exact response [ $\delta e_v/v_q^2 \longrightarrow \chi_{0,\rm{N}}(q)/\rho_0$] as $v_q \longrightarrow 0$. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth} \includegraphics[width=\textwidth]{ratio_N66.pdf} \caption{ Ratio between the energy variation $-\delta e_v$ and the square of the perturbation strength $v_q$ for the first four allowed momenta ($q/q_{min}$ between 1 and 4) for the same system as Fig. \ref{fig:resp mathieu N66}. Hollow symbols at $v_q=0$ represent the exact value of -$\chi_{0,\rm{N}}(q)/\rho_0$.
Dashed lines are guides to the eye.} \label{fig:ratio mathieu N66} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\textwidth]{conv_nk_pnm_N66.pdf} \caption{ Energy per particle of PNM with $\rm{N}$=66 at $\rho_0=0.16$ fm$^{-3}$ obtained with the SLy4 EDF as a function of the number of plane waves. Results are shown for the lowest momentum ($q=q_{min}$) for two different strengths of the external potential. } \label{fig: conv_nk_pnm_N66} \end{minipage} \end{figure*} \subsection{Perturbed nuclear matter} \label{sec: results perturbed matter} Perturbed matter is now studied with the SLy4 EDF. First, a preliminary analysis of the convergence of the calculations with respect to the number of plane waves included in the basis is presented. Fig. \ref{fig: conv_nk_pnm_N66}, which reports calculations performed with $\rm{N}$=66 neutrons at $q/q_{min}=1$ for a small ($v_q/E_F$=0.1) and a moderate ($v_q/E_F$=0.25) perturbation strength, shows that in this case as few as 8 plane waves are sufficient to find energies converged within 0.1 keV or less. As a general rule, though, the number of plane waves required increases as a function of the momentum $q$ of the perturbation, and in practice we have found that a basis of 40 waves always yields converged results for 66 or 132 nucleons. When thousands of particles are considered, we raise the cutoff to 60 plane waves. Calculations remain very fast (a few seconds) even on a single processor. Then, the densities $\rho(z)$ as well as their Fourier components are shown in Figs. \ref{fig: densities_pnm_N66} and \ref{fig: fourier_pnm_N66}, respectively, for three perturbations that differ in strength and periodicity ($q/q_{min}$=1 with strengths $v_q/E_F$=0.1, 0.25 and $q/q_{min}$=2 with $v_q/E_F$=0.1).
From the real space representation, one can appreciate that the densities closely resemble cosine functions that oscillate around the unperturbed density with the same periodicity as that of the external perturbation [see Eq. \eqref{eq: delta rho harmonic}]. The Fourier analysis confirms that the response is essentially harmonic, as in all cases a single component at momentum $q$ is clearly dominant with rather modest contributions beyond the linear regime. So far, we have always used particle numbers that correspond to a shell closure of the free Fermi gas and implicitly assumed that they are magic numbers for the perturbed system as well. This hypothesis proves true in general for weak potentials. Actually, its violation is a sign that the very picture of a small perturbation of the homogeneous system is breaking down. In Fig. \ref{fig: levels} the neutron level scheme of $\rm{N}$=66 PNM (same case as Fig. \ref{fig: densities_pnm_N66}) is shown at two different perturbation strengths (both with momentum $q/q_{min}=1$). We recall that the $\lambda = \pm 1$ energy eigenvalues are degenerate and we plot the s.p. energies only for $\lambda=+1$. The quantum numbers $\mathbf{n}=(n_x,n_y,n_z)$ ($0\le n_x \le n_y$), and the number of nucleons corresponding to shell closures, are reported next to each level. Among the latter, magic numbers of the FG are circled. In the case of the weaker potential, the effect of the perturbation is to partially lift the degeneracy of the free gas levels (as well as to lower the s.p. energies), as can be seen from the triplets or doublets of neighbouring levels. The overall structure of the homogeneous system, though, is preserved and indeed all the FG magic numbers up to 33 are found in the perturbed system too. A markedly different picture appears for the stronger perturbation, where the level ordering of the FG is severely altered. One consequence is that a shell closure is found not for 33 nucleons but for 35.
We suggest that the sudden changes in the slope of the energy as a function of the perturbation mentioned in Ref. \cite{Gezerlis2022Skyrme} may be a side-effect of such ``shell-opening'' effects. The key message is that care must be taken when studying perturbed finite-$\rm{A}$ matter: not only global properties (energy, density), but also the shell structure must be inspected. For example, we warn that, if DFT or Mathieu orbitals are used to construct a reference state for Quantum Monte Carlo \cite{Gezerlis2017,Lynn2019}, it is crucial to check that it be a closed-shell state before embarking on expensive calculations. \begin{figure*} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{densities_pnm_N66.pdf} \caption{ Densities $\rho(z)$ as a function of $z/L$ in PNM ($\rm{N}$=66 neutrons) at a reference density $\rho_0=0.16$ fm$^{-3}$ (dashed horizontal line). Densities for three perturbations, differing in strength and momentum (see legend), are shown as symbols. } \label{fig: densities_pnm_N66} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{fourier_pnm_N66.pdf} \caption{ Fourier components $\rho_q$ of the density fluctuations in the same cases as Fig. \ref{fig: densities_pnm_N66}. } \label{fig: fourier_pnm_N66} \end{minipage} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{levels.pdf} \caption{ Level structure of $\rm{N}$=66 PNM. Two perturbation strengths (at momentum $q/q_{min}=1$) are shown. The quantum numbers $\mathbf{n}=(n_x,n_y,n_z)$ of each level and the number of particles up to that shell are reported. Momentum-shell magic numbers of the FG are circled. } \label{fig: levels} \end{figure} Next, the static response function is discussed. The TL response of nuclear EDFs is known exactly \cite{PASTORE20151} (App. \ref{app: edf tl resp}) and is now compared to the finite-$\rm{A}$ calculations in both SNM (Fig. \ref{fig:resp sly4 snm}) and PNM (Fig.
\ref{fig:resp sly4 pnm}). The numerical response functions for the large-$\rm{A}$ system are in very good agreement with the analytical predictions. The convergence to the TL is thus verified and we can appreciate by comparing to Fig. \ref{fig:free resp cinv} that it is definitely faster (as a function of the number of nucleons) in the interacting (DFT) system than for the FG. The small-$\rm{A}$ response, instead, is characterized by a non-monotonic behaviour that is reminiscent of that of the free response, with marked fluctuations with respect to the TL function for $q<2 q_F$. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{response_sly4_snm.pdf} \caption{Static response of SNM at $\rho_0=0.16$ fm$^{-3}$ obtained with the SLy4 EDF. The solid line represents the TL response, while symbols denote calculations for a finite number of particles ($\rm{A}$=132 and 16676). } \label{fig:resp sly4 snm} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{response_sly4_pnm.pdf} \caption{ Same as Fig. \ref{fig:resp sly4 snm}, but for PNM. Calculations are performed with $\rm{N}$=66 and 8338 neutrons (symbols) and in the TL. } \label{fig:resp sly4 pnm} \end{minipage} \end{figure*} Lastly, we would like to understand the impact of the spin-orbit terms on the static response. Spin-orbit was neglected in Ref. \cite{Gezerlis2022Skyrme} and its inclusion is one of the novelties of our work. The response computed with the full SLy4 EDF and for SLy4 with spin-orbit neglected, i.e. with $C^{\nabla J}$ set to zero, is reported for SNM (Fig. \ref{fig:spinorbit_sly4_snm}) and PNM (Fig. \ref{fig:spinorbit_sly4_pnm}) both in the TL and for the usual $\rm{A}$=132 and $\rm{N}$=66 numbers of particles, respectively. 
One can appreciate that spin-orbit has the main effect of lowering the magnitude of $\chi(q)$ at all momenta, both in the TL and in the finite systems; while in SNM it constitutes a small correction, in PNM it is a significant effect. While the qualitative picture of Ref. \cite{Gezerlis2022Skyrme} is not altered in a fundamental way, quantitative results may change noticeably. In particular, it is important to incorporate spin-orbit terms if one aims at constraining the EDF parameters using \textit{ab initio} information. \begin{figure*}[t] \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{spinorbit_sly4_snm.pdf} \caption{SNM static response obtained in the TL and for $\rm{A}$=132 nucleons with the full SLy4 EDF and SLy4 with spin-orbit terms neglected (``no spin-orbit'' in the legend). } \label{fig:spinorbit_sly4_snm} \end{minipage} \begin{minipage}[t]{\columnwidth} \includegraphics[width=\columnwidth]{spinorbit_sly4_pnm.pdf} \caption{ Same as Fig. \ref{fig:spinorbit_sly4_snm}, but for PNM with $\rm{N}$=66 neutrons. } \label{fig:spinorbit_sly4_pnm} \end{minipage} \end{figure*} \section{Conclusions and perspectives} \label{sec: conclusions} To sum up, in this work we have studied nuclear matter under the effect of an external potential within the DFT framework. Our approach is based on simulating nuclear matter with a finite number of nucleons enclosed in a box and subject to PBCs, and the theoretical formalism and numerical implementation have been presented in detail for PNM and SNM for Skyrme-like EDFs. We have discussed carefully how to treat spin-orbit terms and, in particular, we have shown that, although in the presence of spin-orbit the DFT orbitals are not eigenstates of the spin projection operator, single-component equations can still be derived. Then, the problem of the response of nuclear matter to static density perturbations has been analyzed with our technique.
Our method has been validated successfully by comparing the numerical results with analytical formulas for the EDF EOS, the free gas response (both for finite-$\rm{A}$ and TL systems) and the TL EDF response. The power of DFT is demonstrated by the fact that systems of thousands of particles can be computed in an extremely fast and reliable way, and the convergence to the thermodynamic limit has been verified numerically. Moreover, the validity of linear response for weak perturbations, as well as the deviations occurring for stronger external potentials, have been investigated by looking at energies, densities and level structures. We point out that the momentum space magic numbers of uniform matter do not necessarily correspond to shell closures of the perturbed system. Therefore, care must be taken when the finite-$\rm{A}$ DFT approach is used in conjunction with \textit{ab initio} methods, for example when DFT or Mathieu orbitals \cite{Gezerlis2017} are used as a reference state in Quantum Monte Carlo. Moreover, we have found that spin-orbit contributes significantly to the PNM response, and to a lesser extent to the SNM response. In future studies of inhomogeneous matter, therefore, spin-orbit terms should be incorporated. This work represents an intermediate step in the program of developing \textit{ab initio}-based EDFs started in Ref. \cite{Marino2021}. Indeed, inhomogeneous systems are to be studied in order to gain information about the gradient terms of the EDF. Our efforts are currently devoted to the \textit{ab initio} response of both SNM and PNM, aiming at constraining the nuclear EDF by matching DFT and \textit{ab initio} results. In particular, our strategy involves tuning the EDF parameters on the \textit{ab initio} energies obtained with the same number of particles, so as to keep FS effects under control. Results will be presented in a forthcoming publication \cite{in_preparation}.
Moreover, while here we have focused on PNM and SNM and presented results for density perturbations only, the formalism can be easily extended to isospin-asymmetric matter, as well as (introducing time-odd densities in the theory \cite{Schunck2019}) to spin-polarized matter and to spin/isospin perturbations. \section{Acknowledgements} We thank Alessandro Lovato and Francesco Pederiva for useful discussions. F.M. acknowledges the use of CINECA Galileo100 computing resources through the AbINEF ISCRA-B grant.
\section{Introduction} Relativistic magnetohydrodynamics is an important part of modern cosmic plasma physics and fluid dynamics, in the context of applications to the theory of matter accretion onto rotating neutron stars and black holes, and to the theory of the structure of the magnetospheres of pulsars and of the Sun (see, e.g., \cite{L1}-\cite{MHD8} for references and a description of the main problems). The canonic formalism of relativistic magnetohydrodynamics, described in the famous book of Andr\'e Lichnerowicz \cite{L1}, is constantly being extended to solve new astrophysical and cosmological problems. For instance, in the excellent paper of Massimo Giovannini \cite{MHD4} the reader can find the applications of anomalous magnetohydrodynamics to relativistic domains with extreme characteristics. In that paper one can also find a new element of fluid dynamics, namely, the theory of scalar/pseudoscalar fields interacting with electromagnetic fields. In the paper \cite{MHD2} the authors consider fluids with chiral properties. The interest in these problems is not accidental; in fact, we are on the threshold of the formulation and active use of relativistic axion magnetohydrodynamics, which deals with the interaction of the cosmic axionic dark matter with magnetohydrodynamic flows. We consider relativistic axion magnetohydrodynamics as an essential part of the relativistic theory of axionically active plasma (see, e.g., \cite{AAP1} - \cite{AAP5} for some specific results for such plasma). We have introduced two new elements into the theory of axionically active systems. The first element is connected with the nonlinear approach to the description of the axion - photon coupling. The standard idea is to introduce into the Lagrangian the term $\frac14 \phi F^{*}_{mn}F^{mn}$, as was done by the pioneers of the axion physics \cite{ax1} - \cite{ax9}.
The pseudoscalar (axion) field $\phi$ enters this term linearly in front of the pseudo-invariant of the electromagnetic field, presented by the convolution of the Maxwell tensor $F_{mn}$ with its dual $F^{*}_{mn}$. This term is invariant with respect to the discrete symmetry transformation $\phi \to \phi + 2\pi n$ ($n$ is an integer), since the residual term $2\pi n F^{*}_{mn}F^{mn}$ is a total divergence and thus can be removed from the action functional. If one uses an arbitrary nonlinear function $f(\phi)$ instead of the linear function $\phi$, this symmetry is lost. Clearly, such a function has to be periodic, $f(\phi{+}2\pi n)= f(\phi)$, odd, and has to tend to $\phi$ when $\phi$ is small. One can choose, for instance, $f(\phi) = \sin{\phi}$. But we went further and applied Jackson's SO(2) symmetry intrinsic to electrodynamics \cite{Jackson}, thus obtaining the new unified invariant \begin{equation} {\cal I} = \frac14 \left(\cos{\phi} F_{mn}F^{mn} + \sin{\phi} F^{*}_{mn}F^{mn} \right) \,, \label{000} \end{equation} with the necessary periodicity \cite{B1,B2}. When $\phi=0$, we deal with the standard Lagrangian of the electromagnetic field $\frac14 F_{mn}F^{mn}$; when $\phi$ is small, the new linear term is $\frac14 \phi F^{*}_{mn}F^{mn}$, typical for classical axion electrodynamics. Since the multiplier $\sin{\phi}$ in front of the pseudo-invariant $F^{*}_{mn}F^{mn}$ is an odd function, the second term remains a true scalar; the first term, which contains the even function $\cos{\phi}$, is also a true scalar. As was shown in \cite{B1,B2}, this sine/cosine extension of the theory proved fruitful in application to cosmology, in particular, for the description of axionically induced anomalous electric flares in the magnetized early Universe. The next step in the extension of the axion theory was the study of models nonlinear with respect to the Maxwell tensor.
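As a quick symbolic cross-check (an illustrative sketch outside the formalism itself, with the placeholder symbols `Fsq` and `FFdual` standing for the two field invariants $F_{mn}F^{mn}$ and $F^{*}_{mn}F^{mn}$), one can verify that the unified invariant (\ref{000}) is indeed invariant under the discrete shift $\phi \to \phi + 2\pi n$ and reduces to the classical axion-photon coupling for small $\phi$:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
n = sp.symbols('n', integer=True)
# placeholders for the invariants F_{mn}F^{mn} and F*_{mn}F^{mn}
Fsq, FFdual = sp.symbols('Fsq FFdual', real=True)

# unified invariant, eq. (1): I = (cos(phi)*Fsq + sin(phi)*FFdual)/4
I = sp.Rational(1, 4)*(sp.cos(phi)*Fsq + sp.sin(phi)*FFdual)

# the discrete symmetry phi -> phi + 2*pi*n survives the sine/cosine extension
shift = sp.simplify(sp.expand_trig(I.subs(phi, phi + 2*sp.pi*n) - I))

# small-phi limit: the Maxwell term Fsq/4 plus the linear axion coupling phi*FFdual/4
small = sp.expand(sp.series(I, phi, 0, 2).removeO())
print(shift, small)
```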
As an interesting case, we consider the nonlinear term ${\cal H}({\cal I})$ in the Lagrangian \cite{B2}, which tends to ${\cal I}$ for small values of the argument. In this work we follow this line and formulate relativistic axion magnetohydrodynamics nonlinear both in the axion field and in the Maxwell tensor. The work contains the general formalism of relativistic nonlinear axion fluid dynamics, as well as truncated magnetohydrodynamic models of two types. The model of the first type is based on the standard assumption that the electric conductivity $\sigma$ is very large, $\sigma \to \infty$; in this case one supposes that the electric field in the medium has to vanish so that the electric current remains finite. For the models of the second (principally new) type we assume that the axion field tends to the value $\phi \to \frac{\pi}{2}+ 2\pi n$ and the product $\sigma \cos{\phi}$, formally of the type $\infty \times 0$, remains finite, while the spatial part of the electric current now vanishes. Such models are characterized by the anomalous growth of the axionically induced electric field in at least two cases: first, when there exists an axionically induced magnetic conductivity; second, when the fluid flow possesses vorticity of the velocity four-vector. In this paper we formulate the general formalism and a prospective program of investigations; in the near future we hope to apply this formalism to the analysis of magnetohydrodynamic flows in cosmological and astrophysical systems. \section{The formalism} \subsection{The structure of the action functional} The total action functional consists of four elements \begin{equation} S_{(\rm tot)} = \int d^4x \sqrt{-g} \left\{ \frac{R+ 2\Lambda}{2\kappa} + {\cal L}_{(\rm EMA)} + L_{(\rm axion)} + L_{(\rm matter)} \right\} \,.
\label{F0} \end{equation} Here $g$ is the determinant of the metric tensor; $R$ is the Ricci scalar; $\Lambda$ is the cosmological constant; $\kappa = 8 \pi G$ is the Einstein constant ($c=1$). The Lagrangian of the electromagnetic field interacting nonlinearly with the pseudoscalar (axion) field $\phi$, indicated as ${\cal L}_{(\rm EMA)}$, is presented as an appropriate (linear or nonlinear) function \begin{equation} {\cal L}_{(\rm EMA)} = {\cal H}({\cal I}) \label{F1} \end{equation} of the unified invariant (\ref{000}). As usual, $F_{mn}$ is the Maxwell tensor, and $F^{*mn} = \frac12 \epsilon^{mnpq}F_{pq}$ is its dual; the Levi-Civita tensor $\epsilon^{mnpq}= \frac{E^{mnpq}}{\sqrt{-g}}$ is defined by the equality $E^{0123}=1$. When the dimensionless pseudoscalar $\phi$ vanishes, we obtain from (\ref{000}) the standard invariant of the electromagnetic field $\frac14 F_{mn} F^{mn}$. When the pseudoscalar field is nonvanishing but tends to zero, $\phi \to 0$, the unified invariant converts into the term \begin{equation} {\cal I} \to \frac14 \left[F_{mn} F^{mn} + \phi \ F^{*}_{mn} F^{mn} \right] \,, \label{F3} \end{equation} which is typical for axion electrodynamics. The Lagrangian of the pure pseudoscalar (axion) field \begin{equation} L_{(\rm axion)} = \frac12 \Psi^2_{0} \left[V(\phi) - \nabla_m \phi \nabla^m \phi \right] \label{F4} \end{equation} contains the periodic axion potential \begin{equation} V(\phi) = 2m^2_{\rm A} \left(1- \cos{\phi}\right) \,, \label{F5} \end{equation} which is invariant with respect to the discrete symmetry transformation and converts into the potential $V=m^2_{\rm A} \phi^2 $ when $\phi$ is small. The parameter $m_{\rm A}$ describes the rest mass of the axion, and the parameter $\Psi_0$ is connected with the axion-photon coupling constant $g_{A \gamma \gamma}$ as follows: $\frac{1}{\Psi_0}=g_{A \gamma \gamma}$.
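A similar one-line check (again an illustrative sketch, not part of the formalism) confirms that the potential (\ref{F5}) is invariant under the discrete shift and reduces to the mass term $m^2_{\rm A}\phi^2$ for small $\phi$:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
m_A = sp.symbols('m_A', positive=True)
n = sp.symbols('n', integer=True)

# periodic axion potential, eq. (F5): V = 2 m_A^2 (1 - cos(phi))
V = 2*m_A**2*(1 - sp.cos(phi))

# invariance under the discrete shift phi -> phi + 2*pi*n
shift = sp.simplify(sp.expand_trig(V.subs(phi, phi + 2*sp.pi*n) - V))

# small-phi expansion: the standard mass term m_A^2 phi^2
small = sp.series(V, phi, 0, 4).removeO()
print(shift, small)
```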
The Lagrangian of the matter $L_{(\rm matter)}$ is not presented explicitly and is a subject of phenomenological modeling. \subsection{Master equations} \subsubsection{Master equations for the electromagnetic field} The Maxwell tensor $F_{ik}$ is connected with the potential of the electromagnetic field $A_k$ by the well-known relationship \begin{equation} F_{ik} = \nabla_{i}A_k - \nabla_k A_i \,. \label{21} \end{equation} As a consequence of this definition one obtains the first series of the Maxwell equations \begin{equation} \nabla_k F^{*ik}=0 \,, \label{20} \end{equation} which converts into an identity when (\ref{21}) holds. Variation of the total action functional (\ref{F0}) with respect to the electromagnetic potential $A_i$ gives the equations \begin{equation} \nabla_k H^{ik} = {\cal J}^i \,, \label{F9} \end{equation} where \begin{equation} {\cal J}^i = - \frac{\delta L_{(\rm matter)}}{\delta A_i} \label{F8} \end{equation} is the electric current, and \begin{equation} H^{ik} ={\cal H}^{\prime}({\cal I})\left[\cos{\phi} F^{ik} + \sin{\phi} F^{*ik} \right] \label{18} \end{equation} plays the role of the nonlinear tensor of electromagnetic induction. Since the following identity holds \begin{equation} \nabla_i \nabla_k H^{ik} = 0 \,, \label{F10} \end{equation} one has to add the equation \begin{equation} \nabla_k {\cal J}^k = 0 \label{F11} \end{equation} to the total set of Master equations of the model. The term ${\cal J}^k$ is a subject of phenomenological modeling.
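The identity (\ref{F10}) rests on the antisymmetry of $H^{ik}$ combined with the symmetry of repeated derivatives. A flat-space sketch of this cancellation (using six arbitrary component functions; in curved space the same mechanism works through $\nabla_k H^{ik} = \frac{1}{\sqrt{-g}}\partial_k(\sqrt{-g}\,H^{ik})$ for antisymmetric tensors):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)

# arbitrary antisymmetric H^{ik} built from six free functions h_{ik}(x)
h = [[sp.Function(f'h{i}{k}')(*X) for k in range(4)] for i in range(4)]
H = sp.zeros(4, 4)
for i in range(4):
    for k in range(i + 1, 4):
        H[i, k] = h[i][k]
        H[k, i] = -h[i][k]

# flat-space version of the identity (F10): d_i d_k H^{ik} = 0,
# since a symmetric pair of derivatives is contracted with an
# antisymmetric tensor
div2 = sum(sp.diff(H[i, k], X[i], X[k]) for i in range(4) for k in range(4))
print(sp.simplify(div2))
```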
\subsubsection{Master equation for the axion field} Variation with respect to the pseudoscalar field $\phi$ yields \begin{equation} g^{mn} \nabla_m \nabla_n \phi + \frac12 \frac{dV}{d \phi} = - \frac{1}{ \Psi_0^2}\left\{ \frac14 {\cal H}^{\prime}({\cal I})\left[- \sin{\phi} \ F_{mn} F^{mn} + \cos{\phi} \ F^{*}_{mn} F^{mn} \right] + {\cal G} \right\} \,, \label{F12} \end{equation} where the pseudoscalar source ${\cal G}$ appears formally as the variational derivative of the matter Lagrangian \begin{equation} {\cal G} = \frac{\delta L_{(\rm matter)}}{\delta \phi} \,. \label{F13} \end{equation} The term ${\cal G}$ is also a subject of phenomenological modeling. \subsubsection{Master equations for the gravitational field} Variation with respect to the metric gives the equations of the gravitational field \begin{equation} R_{pq} - \frac12 g_{pq} R - \Lambda g_{pq}= \kappa T^{(\rm tot)}_{pq} \,, \label{F14} \end{equation} where $R_{pq}$ is the Ricci tensor. The total (effective) stress-energy tensor $T^{(\rm tot)}_{pq}$ consists of three terms \begin{equation} T^{(\rm tot)}_{pq} = T^{(\rm EMA)}_{pq} + T^{(\rm axion)}_{pq} + T^{(\rm matter)}_{pq} \,. \label{F15} \end{equation} The stress-energy tensor $T^{(\rm EMA)}_{pq}$, associated with the nonlinear electromagnetic field coupled nonlinearly to the axion field, is of the form \begin{equation} T^{(\rm EMA)}_{ik} = {\cal H}^{\prime}({\cal I}) \cos{\phi} \left[\frac14 g_{ik} F_{mn}F^{mn} - F_{im}F_k^{\ m}\right] + g_{ik} \left[{\cal H}({\cal I}) - {\cal I} \cdot {\cal H}^{\prime}({\cal I}) \right] \,. \label{17} \end{equation} It coincides with the standard stress-energy tensor of the electromagnetic field when $\phi=0$ and ${\cal H}({\cal I})= {\cal I}$. The trace of the tensor (\ref{17}) \begin{equation} T^{(\rm EMA)}_{ik} g^{ik} = 4 \left[{\cal H}({\cal I}) - {\cal I} \cdot {\cal H}^{\prime}({\cal I}) \right] \label{170} \end{equation} is not equal to zero when we use the nonlinear version of the theory.
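The trace formula (\ref{170}) relies on the Maxwell-type bracket in (\ref{17}) being traceless. This can be illustrated numerically (a sketch, not from the paper, with a random antisymmetric $F_{mn}$ in flat space), confirming that the trace of $T^{(\rm EMA)}_{ik}$ comes entirely from the $g_{ik}\left[{\cal H} - {\cal I}\cdot{\cal H}^{\prime}\right]$ term:

```python
import numpy as np

rng = np.random.default_rng(7)
g = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-), so U^k U_k = 1

# random antisymmetric F_{mn}; for this diagonal metric, g^{-1} = g
A = rng.normal(size=(4, 4))
F_low = A - A.T                        # F_{mn}
F_up = g @ F_low @ g                   # F^{mn}

Fsq = np.einsum('mn,mn->', F_low, F_up)          # F_{mn} F^{mn}

# Maxwell-type bracket of eq. (17), with the factor H'(I) cos(phi) stripped:
# (1/4) g_{ik} F_{mn}F^{mn} - F_{im} F_k^{ m}
T_bracket = 0.25*g*Fsq - np.einsum('im,km->ik', F_low, F_low @ g)

# its trace g^{ik} T_{ik} vanishes identically (up to round-off)
trace = np.einsum('ik,ik->', g, T_bracket)
print(trace)
```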
The stress-energy tensor of the pure pseudoscalar field is presented as \begin{equation} T^{(\rm axion)}_{ik}= \Psi^2_0 \left\{\nabla_i \phi \nabla_k \phi + \frac12 g_{ik}\left[V(\phi) - \nabla_p \phi \nabla^p \phi \right]\right\} \,. \label{16} \end{equation} The stress-energy tensor of the matter is presented formally as \begin{equation} T^{(\rm matter)}_{pq} = \frac{(-2)}{\sqrt{-g}} \frac{\delta}{\delta g^{pq}} \left\{\sqrt{-g} L_{(\rm matter)} \right\} \,. \label{F151} \end{equation} It requires an algebraic decomposition and phenomenological decoding. \subsubsection{Conservation law and balance equations} The Bianchi identities require that \begin{equation} \nabla_{k} T^{(\rm tot)ik} =0 \,, \label{F16} \end{equation} i.e., the total energy and momentum are conserved. In order to simplify the balance equations for the matter quantities, we present some auxiliary calculations. First, we consider the divergence of the axion stress-energy tensor (\ref{16}) on solutions to the equation (\ref{F12}): \begin{equation} \nabla_k T^{ik (\rm axion)} = - \nabla^i \phi \left[\frac14 {\cal H}^{\prime}({\cal I})F^{mn} \left(-\sin{\phi}F_{mn} + \cos{\phi}F^*_{mn} \right) + {\cal G}\right] \,; \label{B1} \end{equation} second, we calculate the divergence of the electromagnetic stress-energy tensor (\ref{17}) on solutions to the equations (\ref{F9}) and (\ref{18}): \begin{equation} \nabla_k T^{ik (\rm EMA)} = F^{ik}{\cal J}_k + \frac14 \nabla^i \phi {\cal H}^{\prime}({\cal I})F^{mn} \left(-\sin{\phi}F_{mn} + \cos{\phi}F^*_{mn} \right) \,; \label{B2} \end{equation} and obtain finally \begin{equation} \nabla_k T^{ik (\rm matter)} = {\cal G} \nabla^i \phi - F^{ik} {\cal J}_k \,.
\label{B3} \end{equation} \subsection{Phenomenology} \subsubsection{Macroscopic velocity four-vector and irreducible decomposition of its covariant derivative} The phenomenological approach requires an appropriate velocity four-vector $U^k$ to be defined as the starting point for the decomposition of the necessary quantities. We follow Eckart's approach \cite{Eckart} and consider the timelike unit velocity four-vector $U^k$ to be defined as follows: \begin{equation} N^k= n U^k \,, \quad U^kU_k =1 \,, \quad n = \sqrt{N_k N^k} = N^k U_k \,, \label{F133} \end{equation} where $N^k$ is the four-vector of the particle number flux, and $n$ is the particle number density scalar. Generally, the plasma is a multi-component system, and thus $N^k = \sum\limits_{(a)}N^k_{(a)}$, where $(a)$ indicates the sort of particles. With this four-vector we decompose all the tensor quantities into the so-called longitudinal and transversal components. In particular, the covariant derivative can be decomposed as follows: \begin{equation} \nabla_k = U_k D + {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \,, \quad D = U^s \nabla_s \,, \quad {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k = \Delta_k^j \nabla_j \,, \quad \Delta_k^j = \delta^j_k - U^j U_k \,. \label{F134} \end{equation} Here $\Delta_k^j$ is the projector.
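As an elementary numerical illustration (a sketch, not part of the text), one can check that the projector $\Delta^j_k$ of (\ref{F134}) is idempotent and transversal to $U^k$ for a boosted observer in flat space:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-)

# a boosted timelike unit vector: U^k U_k = 1
v = np.array([0.3, -0.2, 0.5])
gamma = 1.0 / np.sqrt(1.0 - v @ v)
U = gamma * np.array([1.0, *v])
U_low = g @ U
print(U @ U_low)                        # normalization, equals 1

# projector Delta^j_k = delta^j_k - U^j U_k, eq. (F134)
Delta = np.eye(4) - np.outer(U, U_low)

# idempotent and transversal: Delta.Delta = Delta, Delta^j_k U^k = 0
print(np.allclose(Delta @ Delta, Delta), np.allclose(Delta @ U, 0.0))
```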
The covariant derivative $\nabla_k U_j$ can be decomposed into the standard sum \begin{equation} \nabla_k U_j = U_k DU_j + \sigma_{kj} + \omega_{kj} + \frac13 \Delta_{kj} \Theta \,, \label{F54} \end{equation} where the acceleration four-vector $DU_j$, the symmetric traceless shear tensor $\sigma_{kj}$, the skew-symmetric vorticity tensor $\omega_{kj}$ and the expansion scalar $\Theta$ are presented by the well-known formulas \begin{equation} DU_j = U^s \nabla_s U_j \,, \quad \sigma_{kj} = \frac12 \left({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U_j + {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_j U_k \right) - \frac13 \Delta_{kj} \Theta \,, \quad \omega_{kj} = \frac12 \left({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U_j - {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_j U_k \right) \,, \quad \Theta = \nabla_kU^k \,. \label{F541} \end{equation} The four-vector of the electric current can also be decomposed with respect to $U^k$: \begin{equation} {\cal J}^k = \rho U^k + {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k \,, \quad \rho = {\cal J}^m U_m \,, \quad {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k = \Delta^k_j {\cal J}^j \,. \label{F154} \end{equation} \subsubsection{Decomposition of the Maxwell tensor and its dual} We use the standard definitions of the electric field four-vector $E^k$ and of the magnetic induction four-vector $B^k$ \cite{L1} \begin{equation} E^k = F^{km} U_m \,, \quad B^k = F^{*km} U_m \ \ \rightarrow E^k U_k = 0 \,, \quad B^k U_k = 0 \,, \label{F5421} \end{equation} which give the standard decompositions \begin{equation} F^{mn} = E^m U^n - E^n U^m - \eta^{mns} B_s \,, \quad F^{*mn} = B^m U^n - B^n U^m + \eta^{mns} E_s \,. \label{F542} \end{equation} Here the totally antisymmetric tensor with three indices is defined as $\eta^{mns}= \epsilon^{mnsl}U_l$; it is orthogonal to the velocity four-vector, $\eta^{mns}U_s=0$.
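The decompositions above can be checked numerically. The following sketch (an illustration only, in flat space with a rest-frame observer, assuming the conventions $E^{0123}=1$ and signature $(+,-,-,-)$ used in this paper) reconstructs $F^{mn}$ from given $E^k$, $B^k$ and verifies that $E^k = F^{km}U_m$ and $B^k = F^{*km}U_m$ are recovered:

```python
import numpy as np
from itertools import permutations

g = np.diag([1.0, -1.0, -1.0, -1.0])

# Levi-Civita tensor eps^{mnpq} with E^{0123} = 1 (flat space, sqrt(-g) = 1)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])   # sign of the permutation

# rest-frame observer; E^k and B^k are spatial, hence orthogonal to U^k
U = np.array([1.0, 0.0, 0.0, 0.0])
E = np.array([0.0, 0.4, -0.7, 0.2])
B = np.array([0.0, 1.1, 0.3, -0.5])
U_low, B_low = g @ U, g @ B

# eta^{mns} = eps^{mnsl} U_l (orthogonal to U by antisymmetry)
eta = np.einsum('mnsl,l->mns', eps, U_low)

# decomposition F^{mn} = E^m U^n - E^n U^m - eta^{mns} B_s
F = np.outer(E, U) - np.outer(U, E) - np.einsum('mns,s->mn', eta, B_low)

# recover E^k = F^{km} U_m and B^k = F*^{km} U_m
F_low = g @ F @ g
Fstar = 0.5 * np.einsum('kmpq,pq->km', eps, F_low)
E_rec = F @ U_low
B_rec = Fstar @ U_low
print(np.allclose(E_rec, E), np.allclose(B_rec, B))
```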
\subsubsection{Equation of the magnetic flux balance and the Faraday equation} Now we apply the decompositions (\ref{F542}) to (\ref{20}) and consider the convolution $U_i \nabla_k F^{*ik}=0$. This procedure yields the scalar balance equation \begin{equation} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B^k = \eta^{kmn}E_k \ \omega_{mn} \,. \label{F551} \end{equation} Clearly, when the vorticity of the medium flow is absent, $\omega_{mn} = 0$, we deal with the standard conservation law of the magnetic flux. Similarly, the convolution $\Delta_i^l \nabla_k F^{*ik}=0$ gives the Faraday law \begin{equation} \Delta^l_k D B^k + \eta^{lmn} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_m E_n = - \frac23 \Theta B^l + B_k \left(\sigma^{kl} + \omega^{kl} \right) - \eta^{lmn} E_m D U_n + \Delta^l_s \epsilon^{smkn} \omega_{kn} E_m \,, \label{F552} \end{equation} where the source terms on the right-hand side of this equation are produced by the non-uniformity and inhomogeneity of the medium flow. \subsubsection{Axionic extension of the Gauss law for nonlinear electrodynamics} The equations (\ref{F551}) and (\ref{F552}) do not contain information about the pseudoscalar (axion) field. The function $\phi$ and its gradient appear in the equations (\ref{F9}) due to the structure of (\ref{18}) and (\ref{000}).
Convolution of (\ref{F9}) with the velocity four-vector gives the nonlinear axionic extension of the Gauss law \begin{equation} {\cal H}^{\prime}({\cal I}) \left\{ \cos{\phi} \left[{\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k E^k + \eta^{mpq} B_m \omega_{pq} + B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right] - \sin{\phi}E^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right\} + \label{G1} \end{equation} $$ + {\cal H}^{\prime \prime}({\cal I})\left(E^k \cos{\phi} + B^k \sin{\phi} \right)\left\{\cos{\phi}\left[E_m B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi + E^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k E_m - B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B_m \right] + \right. $$ $$ \left. +\sin{\phi}\left[B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k E_m + E^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B_m + \frac12 \left(B^m B_m - E^m E_m \right) {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right]\right\} = -\rho \,, $$ where $\rho = {\cal J}^m U_m$ is the charge density scalar. When $\phi=0$ and we deal with linear electrodynamics, this equation reduces to the Gauss equation in a moving medium \begin{equation} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k E^k = - \rho - \eta^{mpq} B_m \omega_{pq} \,.
\label{G11} \end{equation} \subsubsection{Axionic extension of the Ampere law for nonlinear electrodynamics} Convolution of (\ref{F9}) with the projector $\Delta_i^l$ gives the equation, which can be indicated as the nonlinear axionic extension of the Ampere law \begin{equation} {\cal H}^{\prime}({\cal I}) \left\{ \cos{\phi} \left[\Delta^l_k DE^k {-} \eta^{lkm} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_kB_m {+} B^l D\phi {+} \eta^{klp} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi E_p {+} \frac23 E^l \Theta {-} E_k (\sigma^{kl} {+} \omega^{kl}) {-} \eta^{lmn} B_m DU_n {-} \Delta^l_i \epsilon^{ikmn}B_m \omega_{kn} \right] {+} \right. \label{G5} \end{equation} $$ \left. {+} \sin{\phi}\left({-} E^l D \phi {+} \eta^{lkp} B_p {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right)\right\} {+} {\cal H}^{\prime \prime}({\cal I})\left[\left(E^l \cos{\phi} {+} B^l \sin{\phi} \right) D {\cal I} {+} \eta^{lkp}\left(\sin{\phi} E_p {-} \cos{\phi} B_p \right) {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k {\cal I} \right] = {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^l \,, $$ where the following auxiliary notations are used: \begin{equation} D {\cal I} = \cos{\phi} \left[E^m DE_m - B^m D B_m + E_m B^m D\phi \right] + \sin{\phi} \left[\frac12 (B^m B_m - E^m E_m) D \phi + B^m DE_m + E^m DB_m \right]\,, \label{G57} \end{equation} \begin{equation} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k {\cal I} = \cos{\phi} \left[E^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k E_m {-} B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B_m {+} E_m B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right] {+} \sin{\phi} \left[\frac12 (B^m B_m {-} E^m E_m) {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi {+} B^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k 
E_m {+} E^m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B_m \right]\,. \label{G58} \end{equation} When we deal with linear electrodynamics and $\phi=0$, we obtain from (\ref{G5}) the equation \begin{equation} \Delta^l_k DE^k {-} \eta^{lkm} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_kB_m {+} \frac23 E^l \Theta {-} E_k (\sigma^{kl} {+} \omega^{kl}) {-} \eta^{lmn} B_m DU_n {-} \Delta^l_i \epsilon^{ikmn}B_m \omega_{kn} = {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^l \,, \label{G599} \end{equation} which can be regarded as the Ampere equation in a moving medium. \subsubsection{Decomposition of the electric current} The equation $\nabla_k {\cal J}^k = 0$ can now be rewritten as \begin{equation} D \rho + \rho \Theta = {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k DU_k - {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k \,, \label{F1542} \end{equation} and can be considered as the evolutionary equation for the charge density $\rho$. In standard relativistic magnetohydrodynamics the transversal component of the current four-vector is of the form \begin{equation} {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k_{(\rm standard)} = \sigma E^k \,, \label{F138} \end{equation} where $\sigma$ is the conductivity scalar. In the general case we present the transversal component of the current four-vector as a series \begin{equation} {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k = {\cal J}^k_{(1)} + {\cal J}^k_{(2)} + ... \label{F23} \end{equation} organized with respect to the number of derivatives, as Effective Field Theory advises \cite{EFT}. In this sense the term ${\cal J}^k_{(1)}$ contains only one derivative of the first order; the term ${\cal J}^k_{(2)}$ contains compositions of two first-order derivatives; second derivatives are omitted.
We restrict ourselves to the first-order terms and obtain only five appropriate ones: \begin{equation} {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k = \sigma E^k \cos{\phi} + \tilde{\sigma} B^k \sin{\phi} + \nu_1 \sin{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^k \phi + \nu_2 \cos{\phi} D U^k + \nu_3 \eta^{kpq} \sin{\phi}\ \omega_{pq} \,. \label{F24} \end{equation} The construction $\tilde{\sigma} \sin{\phi}$ plays the role of a magnetic conductivity associated with the chirality introduced by the axion field into the electrodynamic system; this term describes the current directed along the magnetic induction four-vector. The term $\nu_2 \cos{\phi} D U^k$ has a classical analog; it describes the electric current caused by the acceleration of the conductor. The term $\nu_1 \sin{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^k \phi$ describes a current along the gradient of the axion field. The last term in (\ref{F24}), rewritten as $2\nu_3 \omega^k \sin{\phi}$ with the help of the angular velocity four-vector $\omega^k = \frac12 \eta^{kpq} \omega_{pq}$, can be attributed to the current provoked by the rotation of the chiral medium. \subsubsection{Modification of the equation of the axion field} Taking into account the representation of the electromagnetic equations, we can now rewrite the equation for the axion field as follows: \begin{equation} g^{mn} \nabla_m \nabla_n \phi + m^2_{A} \sin{\phi} = - \frac{1}{ \Psi_0^2}\left\{ {\cal H}^{\prime}({\cal I})\left[ \frac12 \sin{\phi} (B^m B_m -E_m E^m ) + \cos{\phi} E^m B_m \right] + {\cal G} \right\} \,.
\label{0F12} \end{equation} The pseudoscalar source ${\cal G}$ can be phenomenologically decomposed similarly to the electric current four-vector; collecting the terms of the zeroth, first and second orders in derivatives, we obtain \begin{equation} {\cal G} =\omega_1 \Theta \sin{\phi} +\omega_2 \cos{\phi} D \phi +\omega_3 \Theta \cos{\phi} D \phi +\omega_4 DU^k \cos{\phi}{\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi + \label{F149} \end{equation} $$ +\omega_5 E^k \cos{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi +\omega_6 \sin{\phi} E^k DU_k +\omega_7 \cos{\phi} \eta^{kpq} E_k \omega_{pq} + $$ $$ +\omega_8 \cos{\phi} B^k DU_k +\omega_9 \sin{\phi} B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi +\omega_{10}\cos{\phi} \eta^{kpq} B_k \omega_{pq} \,. $$ \subsubsection{Decomposition of the stress-energy tensor of the matter and the evolutionary equation for the velocity four-vector} The algebraic decomposition of the stress-energy tensor of the matter is well known: \begin{equation} T^{ik(\rm matter)} = W U^i U^k + U^i q^k + U^k q^i - \Delta^{ik} P + \Pi^{ik} \,, \label{03} \end{equation} where $W$ is the scalar of the matter energy density, $q^i$ is the heat-flux four-vector, $P$ is the Pascal equilibrium pressure, and $\Pi^{ik}$ is the tensor of non-equilibrium pressure. The rate of evolution of the scalar $W$, i.e., the quantity $DW$, can be found from the equation $U_i \nabla_k T^{ik(\rm tot)}=0$ accounting for (\ref{B3}): \begin{equation} DW + (W+P) \Theta = q^k DU_k - \nabla_k q^k + \Pi^{ik}\left(\sigma_{ik}+ \frac13 \Delta_{ik} \Theta \right) + {\cal G} D \phi + E_k {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k \,.
\label{B12} \end{equation} Convolution of (\ref{B3}) with the projector $\Delta^l_i$ gives the equation for the macroscopic velocity of the medium dynamics \begin{equation} (W+P) DU^l = {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l P - \Delta^l_i Dq^i - q^l \Theta - q^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U^l - \Delta^l_i \nabla_k \Pi^{ik} - \rho E^l + {\cal G} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l \phi + \eta^{lks} B_s {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}_k \,. \label{B121} \end{equation} Let us add that if we follow Eckart's approach we have to keep in mind that the heat-flux four-vector \begin{equation} q^i = \lambda \left({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^i T {-} T DU^i \right) \label{Eck1} \end{equation} contains the phenomenological constant $\lambda$ describing the heat conductivity, the spatial gradient of the temperature $T$, and the acceleration four-vector $DU^k$. As for the anisotropic pressure tensor, which satisfies the relationships \begin{equation} \Pi_{ik} = \Pi_{(0)ik} + \Pi \Delta_{ik} \,, \quad \Pi_{(0)ik} g^{ik} = 0 \,, \quad \Pi = \frac13 \Pi_{ik} g^{ik} \,, \label{Eck2} \end{equation} it can be presented using two phenomenological constants, the shear viscosity $\eta$ and the bulk viscosity $\zeta$, as follows: \begin{equation} \Pi_{(0)ik} = \eta \sigma_{ik} \,, \quad \Pi = 3 \zeta \Theta \,. \label{Eck3} \end{equation} Also, we assume that $W$ and $P$ are connected by the two-parameter equation of state \begin{equation} W = W(n,T) \,, \quad P = P(n,T) \,, \label{Eck4} \end{equation} and by the compatibility condition \begin{equation} n \frac{\partial W}{\partial n} + T \frac{\partial P}{\partial T} = W+P \,.
\label{Eck5} \end{equation} In the particular case when $W$ is linear in $n$, i.e., $W=n e(T)$, we obtain immediately from (\ref{Eck5}) that $P=f(n) T$ with an arbitrary function $f(n)$, and we can recover the well-known equation for the pressure of a relativistic perfect gas, $P= nk_B T$ ($k_B$ is the Boltzmann constant). At the end of this Section we would like to emphasize that up to now we have considered the general formalism, which is appropriate for a moving electromagnetically active fluid. In the next Section we start to discuss magnetohydrodynamic models with a specific ansatz concerning the electric conductivity of the system. \section{Two examples of truncated sets of equations of the axion magnetohydrodynamics} \subsection{Classical approach: Approximation of infinite electric conductivity} \subsubsection{Auxiliary equations} The zeroth-order approximation of classical magnetohydrodynamics is based on the assumption that $\sigma \to \infty$. In this situation one assumes that the electric field four-vector has to tend to zero, $E^k \to 0$, so that the product $\sigma E^k$ remains finite. In fact, we have to decompose the four-vector of the electric field in a power series with respect to the small parameter $\frac{1}{\sigma}$, and in the zeroth-order approximation put $E^i =0$ in all the Master equations. As for the Ampere equation (\ref{G5}), it now converts into an equation for the electric field $$ E^k = \frac{1}{\sigma \cos{\phi}}\left\{ - \tilde{\sigma} B^k \sin{\phi} - \nu_1 \sin{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^k \phi - \nu_2 \cos{\phi} D U^k - \nu_3 \eta^{kpq} \sin{\phi}\ \omega_{pq} + \right. $$ \begin{equation} \left.
+ {\cal H}^{\prime}({\cal I}) \left[ \cos{\phi} \left( {-} \eta^{lkm} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_kB_m {+} B^l D\phi {-} \eta^{lmn} B_m DU_n {-} \Delta^l_i \epsilon^{ikmn}B_m \omega_{kn} \right) {+} \sin{\phi} \ \eta^{lkp} B_p {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right]+ \right. \label{electric} \end{equation} $$ \left. {+} \frac12 {\cal H}^{\prime \prime}({\cal I})\left\{ B^l \sin{\phi} \left[\sin{\phi} B^m B_m D \phi {-} \cos{\phi} D (B^m B_m)\right] {-} \eta^{lkp}B_p \cos{\phi} [\sin{\phi} B^m B_m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi\ {-} \cos{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k (B^m B_m )] \right\} \right\}\,. $$ The Gauss equation (\ref{G1}) can be now considered as the definition of the charge density scalar \begin{equation} - \rho = {\cal H}^{\prime}({\cal I}) \cos{\phi} \left(\eta^{mpq} B_m \omega_{pq} + B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right) + \frac12 {\cal H}^{\prime \prime}({\cal I})B^k \sin{\phi} \left[\sin{\phi} B^m B_m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi -\cos{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k (B^m B_m) \right] \,. \label{0G177} \end{equation} \subsubsection{Equations for the magnetic field} The equations (\ref{F551}) and (\ref{F552}) can be now written in the form \begin{equation} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B^k = 0\,, \label{01} \end{equation} \begin{equation} \Delta_{kl} D B^k = B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U_l - B_l \Theta \,. 
\label{02} \end{equation} If we consider the case when the particle number $n$ is conserved, and thus \begin{equation} \nabla_k (n U^k) = 0 \ \ \rightarrow Dn + n \Theta = 0 \,, \label{027} \end{equation} we can rewrite the equation (\ref{02}) in the form \begin{equation} \Delta^l_{k} {\pounds}_U \left(\frac{B^k}{n}\right) = 0 \,, \label{029} \end{equation} where ${\pounds}_U$ is the Lie derivative calculated along the four-vector $U^j$. The equation (\ref{029}) is known as the frozen-in condition for the magnetic field lines. Formally speaking, this condition does not depend on the axion field and does not include information about the nonlinearity of the axion electrodynamics. \subsubsection{Equation for the axion field} The Master equation for the axion field is \begin{equation} g^{mn} \nabla_m \nabla_n \phi + \sin{\phi} \left[m^2_{A} + \frac{1}{ 2\Psi_0^2}{\cal H}^{\prime}({\cal I}) B^m B_m \right]= \label{1axion00} \end{equation} $$ - \frac{1}{ \Psi_0^2}\left[\cos{\phi}\left(\omega_2 D \phi +\omega_3 \Theta D \phi +\omega_4 DU^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi +\omega_8 B^k DU_k +\omega_{10} \eta^{kpq} B_k \omega_{pq} \right) + \sin{\phi}\left(\omega_1 \Theta +\omega_9 B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right) \right]\,.
$$ \subsubsection{Equations for the velocity four-vector} The equations for the velocity four-vector (\ref{B121}) can be reconstructed as follows: we have to work with the equation \begin{equation} (W+P) DU^l = {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l P {-} \lambda \Delta^l_i D({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^i T {-}T DU^i) {-} \lambda \Theta ({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^lT {-} T DU^l) {-} \lambda ({\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^kT {-} T DU^k) {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U^l {-} \Delta^l_i \nabla_k \Pi^{ik} {+} {\cal G} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l \phi {+} \eta^{l}_{\cdot js} B^s {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^j \,, \label{B121j} \end{equation} where ${\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^j$ should be replaced by the term \begin{equation} {\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^j = {\cal H}^{\prime}({\cal I}) \left[ \cos{\phi} \left({-} \eta^{jkm} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_kB_m {+} B^j D\phi {-} \eta^{jmn} B_m DU_n {-} \Delta^j_i \epsilon^{ikmn}B_m \omega_{kn} \right) {+} \sin{\phi} \eta^{jkp} B_p {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right] + \label{G5j} \end{equation} $$ {+} \frac12 {\cal H}^{\prime \prime}({\cal I})\left\{B^j \sin{\phi} \left[\sin{\phi} B^m B_m D \phi - \cos{\phi} D (B^m B_m) \right] {-} \eta^{jkp} \cos{\phi} B_p \left[\sin{\phi} B^m B_m {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi - \cos{\phi} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k (B^m B_m ) \right] \right\} \,, $$ \subsubsection{The case of linear electrodynamics} When ${\cal H}({\cal I}) = {\cal I}$, we can simplify the equations (\ref{0G177}) and (\ref{electric}) as follows: 
\begin{equation} \rho = - \cos{\phi} \left(\eta^{mpq} B_m \omega_{pq} + B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi \right) \,, \label{0G1} \end{equation} $$ E^k = \frac{1}{\sigma }\left[\tan{\phi} \left( \eta^{lkp} B_p {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \phi - \tilde{\sigma} B^k - \nu_1 {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^k \phi - \nu_3 \eta^{kpq} \omega_{pq} \right) + \right. $$ \begin{equation} \left. + \left( B^l D\phi - \nu_2 D U^k {-} \eta^{lkm} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_kB_m {-} \eta^{lmn} B_m DU_n {-} \Delta^l_i \epsilon^{ikmn}B_m \omega_{kn} \right) \right] \,. \label{electric2} \end{equation} \subsection{New versions of axion magnetohydrodynamics} \subsubsection{Anomalous regime in the presence of axionically induced magnetic conductivity} As was shown in \cite{B1,B2}, the nonlinear axion electrodynamics admits the existence of an anomalous regime, which is characterized by the electric field $E^k \propto \tan{\phi} B^k$. When $\phi \to \frac{\pi}{2} + 2\pi n$, the electric field grows without bound. In the model under consideration we see that the term $\sigma \cos{\phi} E^k$ in (\ref{F24}) remains finite at $\sigma \to \infty$, if $\cos{\phi} \to 0$, i.e., $\phi \to \frac{\pi}{2} + 2\pi n$. In other words, we can remove the requirement that $E^k \to 0$, and the electric field remains finite. There are two interesting phenomenological situations based on this idea. First, when $\tilde{\sigma} \neq 0$, but $\nu_3=0$, we can connect the electric and magnetic fields by the relationship \begin{equation} E^k = -\frac{\tilde{\sigma}}{\sigma} B^k \tan{\phi} \,, \label{electric5} \end{equation} keeping in mind that the large value of the conductivity parameter in the denominator is compensated by the large value of the function $\tan{\phi}$. 
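As a side illustration, outside the analytical development of the text, the finiteness of $\sigma \cos{\phi}\, E^k$ in this anomalous regime can be checked numerically in a few lines; all parameter values below are arbitrary placeholders.

```python
import math

# Illustrative check (all values arbitrary) of the anomalous regime:
# with E given by Eq. (electric5), E = -(sigma_t/sigma)*tan(phi)*B,
# the combination sigma*cos(phi)*E stays finite as phi -> pi/2,
# although tan(phi) itself diverges.
sigma, sigma_t, B = 1.0e6, 0.3, 2.0

for eps in (1e-2, 1e-4, 1e-6):
    phi = math.pi / 2 - eps
    E = -(sigma_t / sigma) * math.tan(phi) * B
    combo = sigma * math.cos(phi) * E        # analytically: -sigma_t*sin(phi)*B
    assert abs(combo + sigma_t * B) < 1e-3   # tends to the finite value -sigma_t*B
```

The combination tends to $-\tilde{\sigma}\sin{\phi}\,B^k \to -\tilde{\sigma}B^k$, independent of $\sigma$, as stated above.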
In order to obtain the corresponding truncated set of equations we can put \begin{equation} \phi \to \frac{\pi}{2} - \psi \,, \quad |\psi| \ll 1 \,, \quad \sin{\phi} \to \cos{\psi} \,, \quad \cos{\phi} \to \sin{\psi}\,, \quad \tan{\phi} \to \cot{\psi} \,, \quad E^k \to -\frac{\tilde{\sigma}}{\sigma} B^k \cot{\psi} \to -\frac{\tilde{\sigma}}{\sigma \psi} B^k \,, \label{psi1} \end{equation} in all the equations obtained in Section II. One can present the result of this procedure as follows. Equation (\ref{F551}) converts into \begin{equation} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k B^k = -\frac{\tilde{\sigma}}{\sigma \psi} \eta^{kmn}B_k \ \omega_{mn} \,, \label{psi2} \end{equation} the Faraday law (\ref{F552}) takes the form \begin{equation} \Delta^l_k D B^k + \Theta B^l - B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U^l = \frac{\tilde{\sigma}}{\sigma \psi} \left[ \eta^{lmn} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_m B_n + \eta^{lmn} B_m D U_n - \Delta^l_s \epsilon^{smkn} \omega_{kn} B_m + \eta^{lmn} B_n \frac{ {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_m \psi}{\psi} \right]\,. \label{psi3} \end{equation} The leading-order version of the axionically modified Gauss law (\ref{G1}) can now be written as \begin{equation} B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \left[{\cal H}^{\prime}({\cal I})\right] = - \rho \,, \quad {\cal I} = - \frac{\tilde{\sigma}B^m B_m}{\sigma \psi} \,. \label{psi6} \end{equation} The leading-order version of the axionically modified Ampere law (\ref{G5}) can be presented in the form \begin{equation} B_p \left(g^{lp} D + \frac{\tilde{\sigma}}{\sigma \psi}\eta^{lpk} {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \right) {\cal H}^{\prime}({\cal I}) = 0 \,. 
\label{psi9} \end{equation} In the leading-order approximation with respect to $\psi$, the equation (\ref{0F12}) for the axion field requires the function $\psi$ to be found from the equation \begin{equation} \Psi_0^2 m^2_{A} + \omega_1 \Theta = {\cal I} {\cal H}^{\prime}({\cal I}) \frac{\sigma \psi}{2\tilde{\sigma}} \left[1 - \left(\frac{\tilde{\sigma}}{\sigma \psi} \right)^2 \right] - \frac{\omega_6 \tilde{\sigma}}{\sigma \psi} B^k DU_k \,. \label{psi11} \end{equation} In particular, when $\omega_1=\omega_6 =0$, and the function ${\cal H}({\cal I})$ is logarithmic, i.e., ${\cal H}({\cal I}) = \frac{1}{\nu} \log{\frac{{\cal I}}{{\cal I}_*}}$ ($\nu$ and ${\cal I}_*$ are some constants), we see that $\psi$ takes a constant value \begin{equation} \psi = \frac{\tilde{\sigma}}{\sigma} \left[\nu \Psi^2_0 m^2_{A} \pm \sqrt{\nu^2 \Psi^4_0 m^4_{A}+1} \right] \,. \label{psi12} \end{equation} The final remark concerns the equation for the velocity four-vector (\ref{B121}). Since now ${\mathop {\rule{0pt}{0pt}{J}}\limits^{\bot}}\rule{0pt}{0pt}^k=0$, this equation transforms into \begin{equation} (W+P) DU^l - {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l P + \Delta^l_i Dq^i + q^l \Theta + q^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k U^l + \Delta^l_i \nabla_k \Pi^{ik} = \label{B121} \end{equation} $$ = - \frac{\tilde{\sigma}}{\sigma \psi} B^l B^k {\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}_k \left[{\cal H}^{\prime}({\cal I})\right] + \left(\omega_1 \Theta - \frac{\omega_6 \tilde{\sigma}}{\sigma \psi} B^k DU_k \right){\mathop {\rule{0pt}{0pt}{\nabla}}\limits^{\bot}}\rule{0pt}{0pt}^l \phi \,. 
$$ \subsubsection{Anomalous regime in the presence of rotation in the magnetohydrodynamic flow} The second interesting case appears if $\tilde{\sigma}=0$, $\nu_3 \neq 0$, and there exists a rotation of the medium, i.e., the angular velocity of the medium flow rotation is nonvanishing, $\omega^k = \frac12 \eta^{kpq} \omega_{pq} \neq 0$. The most interesting application of this case would be a model of the plasma dynamo. Based on the arguments presented above, in this second case we can put (instead of (\ref{electric5})) \begin{equation} E^k = -\frac{2\nu_3}{\sigma} \omega^{k} \tan{\phi} \to -\frac{2\nu_3}{\sigma \psi} \omega^{k} \label{electric6} \end{equation} into the axionically extended Faraday, Gauss and Ampere equations, as well as into the equations for the pseudoscalar field and for the velocity four-vector. This procedure is analogous to the one presented in the previous subsection. \section{Outlook} In this work we presented the mathematical formalism of relativistic nonlinear axion magnetohydrodynamics. The term nonlinear means that both fields, the pseudoscalar (axion) field and the electromagnetic field, are described by nonlinear equations. In fact, we have only crossed the threshold of a new programme, which can formally be divided into two sectors. \noindent 1. In the first sector of the future work we plan to extend the general formalism; to be more precise, we plan to do the following. \noindent 1.1. We plan to consider the axionically extended Born-Infeld, Euler-Heisenberg, etc. models, which are nonlinear in the first and second invariants of the electromagnetic field. \noindent 1.2. We plan to supplement the hydrodynamic part of the established theory, which is now based on the Eckart approach (models with bulk and shear viscosity and heat conductivity), with elements of the Israel-Stewart theory \cite{IS}, which deals with transient irreversible (second-order) relativistic thermodynamics. \noindent 1.3. 
We plan to prepare a justification of nonlinear axion magnetohydrodynamics on the basis of an extension of the relativistic kinetic theory of axionically active plasma, using the axionic modification of the Lorentz force. \noindent 2. In the second sector of the future work we plan to study the anomalous regimes of magnetohydrodynamic flows; for this purpose we hope to do the following. \noindent 2.1. We hope to consider new models of anomalous accretion of magnetized matter onto rotating neutron stars, black holes and dyons, as well as instabilities of a new type caused by interactions with the axion field. \noindent 2.2. We hope to study new anomalous models of dynamo, jet production and turbulent flows. \noindent 2.3. We hope to analyze solutions describing magnetohydrodynamic waves and shock waves, based on the introduction of correspondingly generalized Reynolds numbers. \acknowledgments{The work was supported by the Russian Foundation for Basic Research (Grants No. 20-02-00280 and 20-52-05009).} \vspace{1cm} \section*{References}
\section{Introduction} Recently, we have used the technique of quantum thermal geodesics confined behind black hole horizons \cite{K1,K2,K3} in order to derive a quasi-classical spectrum of masses for rotating Kerr and BTZ black holes \cite{K2', K-BTZ}. For the Kerr black hole, we have found the following relation between the orbital momentum $J$ and the mass $M$ \cite{K2'}: \begin{equation}\label{Kerr-J} J = \frac{2\sqrt{l}}{l+1}\,M_{{\textsc{\scriptsize Kerr}}}^2, \end{equation} where we put the gravitational constant equal to unity ($G=1$). The parameter $l$ takes the values of \begin{equation}\label{Kerr-l} l=\{1,\, \textstyle{\frac{3}{2}},\, 2,\, 3,\,\infty\}, \end{equation} representing the ratio of the external horizon area to the internal one, $$ l=\frac{{\cal A}_+}{{\cal A}_-}, $$ as follows from the consistency of mapping the analytically continued space for radial geodesics completely confined behind the horizons. Therefore, with respect to rotation and radial motion, the Kerr black hole has got two quantum numbers: the first is the integer or, generically, half-integer momentum $J$, while the second is the `loop number' $l$. It is important to emphasize that the quantum spectrum of the Kerr black hole possesses a loop-duality: the spectrum is invariant under the action of the duality transform $$ l \leftrightarrow \frac{1}{l}. $$ The extremal black hole corresponds to $l=1$. The limit of $l\to\infty$ gives the Schwarzschild black hole, so that $J\to 2M^2/\sqrt{l}\to 0$, which indicates a breakdown of this quantization method in the case of the Schwarzschild black hole. The same note concerns the BTZ black hole \cite{BTZ}, for which we have found the following spectrum \cite{K-BTZ}: \begin{equation}\label{BTZ-J} J=\frac{2 k}{k^2+1}\, M_{{\textsc{\scriptsize btz}}}\ell, \end{equation} where $\ell$ is the curvature radius of the AdS$_3$ space-time, while $k$ is the loop number for the BTZ black hole. 
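The loop-duality of the spectra (\ref{Kerr-J}) and (\ref{BTZ-J}) can be verified directly, since $2\sqrt{l}/(l+1)$ and $2k/(k^2+1)$ are manifestly invariant under inversion of the loop number. The following numerical sketch (with arbitrary illustrative masses and AdS radius) makes this explicit:

```python
import math

# Illustrative check that the spectra (Kerr-J) and (BTZ-J) are invariant
# under the loop-duality l -> 1/l (k -> 1/k).  The mass and AdS-radius
# values below are arbitrary placeholders.
def J_kerr(l, M=1.7):
    return 2.0 * math.sqrt(l) / (l + 1.0) * M**2

def J_btz(k, M=0.9, ell=3.0):
    return 2.0 * k / (k**2 + 1.0) * M * ell

for x in (1.5, 2.0, 3.0, 10.0):
    assert abs(J_kerr(x) - J_kerr(1.0 / x)) < 1e-12
    assert abs(J_btz(x) - J_btz(1.0 / x)) < 1e-12
```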
The loop-duality $$ k\leftrightarrow \frac{1}{k} $$ leaves the spectrum invariant. Again, the non-rotating limit of $k\to \infty$ is not covered by the quantization in the form of (\ref{BTZ-J}). Therefore, for non-rotating black holes we need a consistent quantization procedure supplementary to the case of $J\neq 0$. In section II we use the quasi-classical method for periodic motion in purely imaginary time, which corresponds to the thermodynamical ensemble, that is, the case of geodesics confined behind the horizons. The proposed approach is applied to the Schwarzschild black hole in section III and to the BTZ black hole in section IV. We compare the procedure of quantization at $J=0$ with that at $J\neq 0$ and clarify the difference. We make notes on the connection of the quantization with quasi-normal modes (see reviews in \cite{QNM-rev}). Several remarks are devoted to the comparison with another quantization procedure, developed in \cite{BarKust}. Our results are summarized in the Conclusion. \section{Quasi-classical method and thermodynamical ensemble} Let us start with a system possessing a single dynamical degree of freedom, the generalized coordinate $q$, moving periodically. Then, the quasi-classical quantization rule is the following: \begin{equation}\label{quasi-1} \oint p\,{\rm d}q =2\pi\hbar\,n,\qquad n\in \mathbb N, \end{equation} where $p$ is the momentum canonically conjugate to $q$, while $n\gg 1$ is the quantum number. In (\ref{quasi-1}) we have neglected a possible shift of $n$ due to reflections at turning points. The density of levels in energy, ${\rm d}n/{\rm d}E$, can easily be derived by differentiating (\ref{quasi-1}) with respect to the energy $E$, so that \begin{equation}\label{quasi-2} \frac{{\rm d}n}{{\rm d}E}=\frac{1}{2\pi\hbar}\oint\frac{\partial p}{\partial E}\,{\rm d}q. \end{equation} The Hamilton equations give \begin{equation}\label{quasi-3} \frac{\partial E}{\partial p}=\dot q, \end{equation} where $\dot q={\rm d}q/{\rm d}t$ is the velocity of motion. 
Therefore, \begin{equation}\label{quasi-4} \frac{{\rm d}n}{{\rm d}E} =\frac{\tau(E)}{2\pi\hbar}, \qquad \tau(E)=\oint{\rm d}t, \end{equation} where $\tau$ is the period of motion depending on the energy. Introducing the phase frequency $$ \omega(E)=\frac{2\pi}{\tau(E)}, $$ we get \begin{equation}\label{quasi-5} \frac{{\rm d}n}{{\rm d}E}=\frac{1}{\hbar\omega(E)}. \end{equation} Thus, we have recalled the standard result for the spacing between energy levels: $\Delta E =\hbar\omega(E)\,\Delta n$. Summing up the number of levels in a given interval of energy, we get \begin{equation}\label{quasi-6} n\hbar= \int \frac{{\rm d}E}{\omega(E)}, \end{equation} or equivalently \begin{equation}\label{quasi-7} 2\pi\hbar\,n=\int \tau(E)\,{\rm d}E. \end{equation} Next, in thermodynamical equilibrium, a system moves periodically in purely imaginary time, so that the period is fixed by the inverse temperature $\beta=1/T$. Therefore, we make the substitution \begin{equation}\label{quasi-t1} \tau(E)\mapsto-{\rm i}\hbar\,\beta({\cal E}),\qquad E\mapsto {\rm i}\,{\cal E}, \end{equation} with $\cal E$ denoting the Euclidean energy. So, we derive \begin{equation}\label{quasi-t2} \int \beta({\cal E})\,{\rm d}{\cal E}=2\pi\,n,\qquad n\in \mathbb N. \end{equation} In the framework of quantum thermal geodesics confined behind the horizon, the black hole is represented by definite microstates, namely, the system of particles on geodesics in the analytically continued space defined behind the horizon. Each particle is ascribed a winding number $n_W$ determining the number of cycles per period, i.e. $\beta_{n_W}=\beta/n_W$ \cite{K1,K2,K3}. Summing contributions over microstates is equivalent to summing over the particles and cycles: \begin{equation}\label{quasi-bh1} \sum\limits_{micro.}\int \beta({\cal E})\,{\rm d}{\cal E}\Big|_{micro.}\equiv \sum\limits_{part.}\sum\limits_{cycle}^{n_W}\int \frac{\beta({\cal E})}{n_W}\,{\rm d}{\cal E}. 
\end{equation} Since $$ \sum\limits_{cycle=1}^{n_W}\frac{\beta}{n_W}=\beta, $$ we get the same overall period, while summing over particles gives the total energy of the black hole, i.e. its mass \cite{K3}. Therefore, for a non-rotating black hole with a single external characteristic, the mass $M$, the quantization rule of (\ref{quasi-t2}) takes the form \begin{equation}\label{quasi-bh2} \int\beta(M)\,{\rm d}M=2\pi\,n,\qquad n\in \mathbb N. \end{equation} Using the thermodynamical relation $$ {\rm d}M= T\,{\rm d}\mathcal S, $$ where $\mathcal S$ is the entropy, we arrive at \begin{equation}\label{quasi-bh3} \mathcal S=2\pi\,n. \end{equation} Thus, the quasi-classical quantization of a non-rotating black hole results in the equidistant quantization of its entropy\footnote{We do not consider charged black holes either, since the derivation is based on the black hole having a single dynamical quantity, its mass.}, which is in agreement with the argumentation by J.~Bekenstein in his pioneering paper on the quantum spectrum of the black hole area \cite{Bekenstein}, as well as with further developments in \cite{BM}. \section{Kerr black hole: $J=0$ and $J\neq 0$} \subsection{Schwarzschild black hole} A non-rotating black hole satisfies the condition of a single dynamical variable, the black hole mass. Therefore, we straightforwardly get the quasi-classical spectrum \begin{equation}\label{Schwarz-1} \mathcal S_n =4\pi M_n^2=2\pi\,n\quad\Rightarrow\quad M_n^2=\frac{n}{2}. \end{equation} This result coincides with the quasi-classical limit of the spectrum obtained in \cite{BarKust} in the framework of quantizing the effective dynamical system of the black hole in terms of its global external characteristics. Then, after a canonical transformation, one gets a quantum system equivalent to a harmonic oscillator. This transform uses the canonically conjugate pair of the black hole mass $M$ and a periodic angle-like variable $P$, as conjectured by the authors of \cite{BarKust}. 
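As a side check, the rule (\ref{quasi-bh2}) for the Schwarzschild case can be verified numerically: with $\beta(M)=8\pi M$, the integral gives $\mathcal S = 4\pi M^2$, and imposing $\mathcal S = 2\pi n$ yields $M_n^2 = n/2$. A minimal sketch in units $G=c=\hbar=1$ (the quantum number is an arbitrary illustrative choice):

```python
import math

# Numerical check (G = c = hbar = 1) that the quantization rule
# (quasi-bh2), with the Schwarzschild inverse temperature beta(M) = 8*pi*M,
# reproduces S = 4*pi*M^2 and hence M_n^2 = n/2.
def beta(M):
    return 8.0 * math.pi * M

def entropy(M, steps=100000):
    # trapezoidal integration of beta(M') dM' from 0 to M
    h = M / steps
    s = 0.5 * (beta(0.0) + beta(M))
    for i in range(1, steps):
        s += beta(i * h)
    return s * h

n = 7                                   # illustrative quantum number
M_n = math.sqrt(n / 2.0)                # spectrum (Schwarz-1)
S = entropy(M_n)
assert abs(S - 4.0 * math.pi * M_n**2) < 1e-6
assert abs(S - 2.0 * math.pi * n) < 1e-6
```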
However, we can point out a problem related to the construction of a Hermitian phase operator conjugate to the occupation number of the oscillator in quantum mechanics, as reviewed in \cite{CarrNieto}. Nevertheless, the problem is irrelevant in the quasi-classical approximation, but it becomes important for exact quantization. In addition, the oscillator-like spectrum in \cite{BarKust} yields the entropy, not the total energy, which could change the situation. Thus, the result of this section is in agreement with that obtained by the method of \cite{BarKust}. Similar ideas were used by H.~Kastrup in \cite{Kastrup}. However, the spectrum obtained there, $M_n^2=n/4$, includes an additional factor of one half, which reflects a systematic error caused by heuristically identifying the energy multiplied by the imaginary-time period with the quantized adiabatic invariant. We have improved on this guess with a more rigorous argumentation. Ref. \cite{Kastrup} contains a discussion of black hole entropy and thermodynamics, too. \subsection{Quasi-normal modes and quantum spectrum} Remarkably, one could use an independent determination of the phase frequency $\omega(E)$ in order to apply the quantization rule in the form of (\ref{quasi-6}), which is a formal expression of Bohr's correspondence principle: classical frequencies reproduce increments of energy between the levels at high quantum numbers, i.e. $\Delta E =\hbar\omega(E)\,\Delta n$. Such classical frequencies correspond to quasi-normal modes \cite{QNM-rev}. These modes have both real and imaginary parts (see original evaluations in \cite{QNM-original}; recent analytical results were obtained in \cite{Schiappa,Schiappa2}). In \cite{QNM-adiabat,Setare} the authors use the real parts in order to quantize the black hole spectrum. However, as we have argued in section II, the thermodynamical system is inherently periodic in imaginary time. This fact is exactly reproduced by the tower of imaginary parts of the quasi-normal modes. 
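For the Schwarzschild case this matching can be made explicit: the known asymptotic quasi-normal spectrum, $\omega_n \simeq \ln 3/(8\pi M) - {\rm i}(n+1/2)/(4M)$, has imaginary parts spaced by exactly $2\pi T_H$, i.e. one quantum per thermodynamical period $\beta$. A minimal numerical sketch (units $G=c=\hbar=1$; the mass value is arbitrary):

```python
import math

# Sketch (G = c = hbar = 1) of the asymptotic Schwarzschild quasi-normal
# spectrum: the spacing of the imaginary parts, 1/(4M), equals
# 2*pi*T_H, matching the thermodynamical period beta = 8*pi*M.
M = 2.5                                  # illustrative mass
T_H = 1.0 / (8.0 * math.pi * M)          # Hawking temperature
beta = 1.0 / T_H

def omega_im(n):
    return -(n + 0.5) / (4.0 * M)        # asymptotic imaginary part

spacing = abs(omega_im(11) - omega_im(10))
assert abs(spacing - 2.0 * math.pi * T_H) < 1e-12
assert abs(spacing * beta - 2.0 * math.pi) < 1e-12
```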
So, we insist that the procedure based on quasi-normal modes is consistent with the quantization performed above only if one uses the imaginary parts of the classical frequencies, which are universal for classical fields of various spins, while the substitution of the real parts of the quasi-normal modes into the quantization rule (\ref{quasi-6}) seems to be misleading. Nevertheless, the real parts of the quasi-normal frequencies could contain some other physical information. This fact could be especially important in connection with attempts to use Bohr's correspondence principle in the framework of Loop Quantum Gravity (see recent reviews in \cite{LQG} and references therein) in order to fix both the Immirzi parameter and the quantum spacing of the black hole horizon area \cite{Hod,Dreyer}. In that case one exploits the real parts of the quasi-normal modes to relate them to the area law and the minimal value of spin in the network. Unfortunately, in our opinion, the substitution of the real parts of the quasi-normal frequencies into Bohr's correspondence principle is again misleading in the context of black hole thermodynamics. In addition, the authors of \cite{Schiappa,Schiappa2} argue that the real parts of quasi-normal modes cannot be straightforwardly applied to black holes other than the simplest case of the Schwarzschild black hole in the context of Loop Quantum Gravity. Such argumentation invalidates some proposals in \cite{Dreyer}. Moreover, \cite{Schiappa2} contains a discussion of why the real parts of asymptotic quasi-normal modes cannot be used in semi-classical considerations of Loop Quantum Gravity, as suggested in \cite{Hod,Dreyer}. \subsection{Kerr black hole} At $J\neq 0$ we have two dynamical variables, the mass $M$ and the orbital momentum $J$, so one has to take into account the quantization of the angular motion. Moreover, it is important to pay attention to both horizons, the external and the internal one. 
Indeed, the angular velocities of the horizons are equal to \begin{equation}\label{Kerr-1} \Omega_\pm=\frac{a}{r_\pm^2+a^2}, \end{equation} where $a=J/M$, and $r_\pm$ are the radii of the horizons. The quantization of the horizon-area ratio leads to the strict relation between the mass and the orbital momentum shown in (\ref{Kerr-J}). Then, $M$ and $J$ are not independent at a fixed loop $l$ of (\ref{Kerr-l}), which makes the single-variable quantization of (\ref{quasi-6}), (\ref{quasi-7}) or (\ref{quasi-bh2}), (\ref{quasi-bh3}) inapplicable. Under relation (\ref{Kerr-J}) we get \begin{equation}\label{Kerr-2} \Omega_+=\frac{1}{2\sqrt{l}M},\qquad \Omega_-=l\,\Omega_+. \end{equation} The temperatures at the horizons are given by \begin{equation}\label{Kerr-3} \beta_+=8\pi M\,\frac{l}{l-1},\qquad \beta_-=\frac{\beta_+}{l}. \end{equation} So, the self-dual angle of rotation per thermodynamical period is equal to \begin{equation}\label{Kerr-4} \Delta\phi=\beta_+\Omega_+=\beta_-\Omega_-=4\pi\,\frac{\sqrt{l}}{l-1}. \end{equation} The corresponding winding numbers for the ground state at the horizons are given by \begin{equation}\label{Kerr-5} n_W^+=\frac{2l}{l-1},\qquad n_W^-=\frac{n_W^+}{l}=\frac{2}{l-1}. \end{equation} Let us introduce a common period consistent with both horizons, \begin{equation}\label{Kerr-6} \tau=\sqrt{\beta_+\beta_-}=\frac{\beta_+}{\sqrt{l}}, \end{equation} which gives the following rotation angles: \begin{equation}\label{Kerr-7} \begin{array}{ccc} \Delta\phi_+ & =\tau \Omega_+ & \displaystyle=2\pi\,\frac{2}{l-1}=2\pi\,n_W^-, \\[4mm] \Delta\phi_- & =\tau \Omega_- & \displaystyle=2\pi\,\frac{2l}{l-1}=2\pi\,n_W^+. \end{array} \end{equation} Therefore, at both horizons the Kerr black hole rotates by angles that are multiples of $2\pi$ per the specified time period. The multiplication factors are identical to the winding numbers. Thus, the complete periodicity, with rotation taken into account, takes place at $\tau=\beta_+/\sqrt{l}$. 
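All of these relations follow from the standard Kerr horizon formulae, $r_\pm = M \pm \sqrt{M^2-a^2}$, ${\cal A}_\pm = 8\pi M r_\pm$ and $\beta_\pm = 4\pi(r_\pm^2+a^2)/(r_+-r_-)$, once the spectrum (\ref{Kerr-J}) is imposed. A numerical sketch (units $G=c=1$; the values of $l$ and $M$ are illustrative):

```python
import math

# Numerical sketch (G = c = 1) checking that the standard Kerr horizon
# formulae reproduce Eqs. (Kerr-2), (Kerr-3), (Kerr-4), (Kerr-6) and
# (Kerr-7) once the spectrum (Kerr-J) is imposed.
l, M = 3.0, 1.0                                  # illustrative loop and mass

J = 2.0 * math.sqrt(l) / (l + 1.0) * M**2        # spectrum (Kerr-J)
a = J / M
r_plus = M + math.sqrt(M**2 - a**2)
r_minus = M - math.sqrt(M**2 - a**2)
assert abs(r_plus / r_minus - l) < 1e-9          # l = A_+/A_- = r_+/r_-

Omega_plus = a / (r_plus**2 + a**2)              # Eq. (Kerr-1)
Omega_minus = a / (r_minus**2 + a**2)
assert abs(Omega_plus - 1.0 / (2.0 * math.sqrt(l) * M)) < 1e-9   # Eq. (Kerr-2)
assert abs(Omega_minus - l * Omega_plus) < 1e-9

beta_plus = 4.0 * math.pi * (r_plus**2 + a**2) / (r_plus - r_minus)
beta_minus = 4.0 * math.pi * (r_minus**2 + a**2) / (r_plus - r_minus)
assert abs(beta_plus - 8.0 * math.pi * M * l / (l - 1.0)) < 1e-9  # Eq. (Kerr-3)
assert abs(beta_minus - beta_plus / l) < 1e-9

dphi = beta_plus * Omega_plus                    # Eq. (Kerr-4), self-dual angle
assert abs(dphi - beta_minus * Omega_minus) < 1e-9
assert abs(dphi - 4.0 * math.pi * math.sqrt(l) / (l - 1.0)) < 1e-9

tau = math.sqrt(beta_plus * beta_minus)          # Eq. (Kerr-6)
assert abs(tau - beta_plus / math.sqrt(l)) < 1e-9
assert abs(tau * Omega_plus - 4.0 * math.pi / (l - 1.0)) < 1e-9   # Eq. (Kerr-7)
assert abs(tau * Omega_minus - 4.0 * math.pi * l / (l - 1.0)) < 1e-9
```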
Note that, due to $$ T\,{\rm d}\mathcal S={\rm d}M-\Omega_+{\rm d}J $$ we can deduce \begin{equation}\label{Kerr-S} \int\limits_0^M\tau(M){\rm d}M\,\left(1-\Omega_+\,\frac{{\rm d}J}{{\rm d}M}\right) =\frac{\mathcal S}{\sqrt{l}}=2\pi\,J, \end{equation} which provides the correct quantization of the entropy $\mathcal S$, as obtained in \cite{K3}. Thus, we should modify the quantization rule of (\ref{quasi-bh3}) to \begin{equation}\label{QR} \frac{\tau}{\beta}\,\mathcal S=2\pi\,n, \end{equation} valid in the case of rotation, though $n$ could run over a subset of the integers. In the quasi-classical approach, the horizon area spectrum is given by \begin{equation}\label{Kerr-8} \mathcal A = 8\pi\,\sqrt{l}\,J. \end{equation} In the same limit, formula (\ref{Kerr-8}) reproduces the spectrum obtained in \cite{BarKust}, provided one puts the loop $l=1$. In our opinion, the reason for such a correspondence is transparent: if one ignores the dynamics at the inner horizon (as in \cite{BarKust}), one gets a consistent quantization by supposing a coherent rotation of both horizons, i.e. by putting $l=1$. Finally, it is interesting to note that, combining the cases of $J=0$ and $J\neq 0$ at $l=1$, one could ascribe the spectrum of $M^2=n/2$ to points of `daughter trajectories' of the main trajectory $J=M^2$ in the plane of $\{M^2,J\}$. \section{BTZ black hole: $J=0$ and $J\neq 0$} At $J=0$ we use the quantization rule of (\ref{quasi-bh3}) to deduce \begin{equation}\label{BTZ-1} \mathcal S_n= 2\pi\ell\sqrt{\frac{M_n}{2G}}=2\pi\,n,\quad M_n=2G\,\frac{n^2}{\ell^2}. \end{equation} The horizon area spectrum $\mathcal A_0=4G\,\mathcal S_n$ is also equidistant. 
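The non-rotating BTZ spectrum (\ref{BTZ-1}) can be cross-checked against the standard horizon formulae, $r_+ = \ell\sqrt{8GM}$ and $\mathcal S = 2\pi r_+/(4G)$ for $J=0$. A minimal sketch (the values of $G$, $\ell$ and $n$ are illustrative):

```python
import math

# Sketch checking the non-rotating BTZ spectrum, Eq. (BTZ-1): for J = 0
# the horizon radius is r_+ = ell*sqrt(8*G*M) and the entropy is
# S = 2*pi*r_+/(4*G); imposing S = 2*pi*n then gives M_n = 2*G*n^2/ell^2.
G, ell = 1.0, 4.0                       # illustrative units / AdS radius
n = 5

M_n = 2.0 * G * n**2 / ell**2           # spectrum (BTZ-1)
r_plus = ell * math.sqrt(8.0 * G * M_n)
S = 2.0 * math.pi * r_plus / (4.0 * G)
assert abs(S - 2.0 * math.pi * ell * math.sqrt(M_n / (2.0 * G))) < 1e-9
assert abs(S - 2.0 * math.pi * n) < 1e-9
```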
At $J\neq 0$, after taking into account the spectrum of (\ref{BTZ-J}), we find that the horizons rotate with the angular velocities \begin{equation}\label{BTZ-2} \Omega_+ =\frac{1}{\ell k},\qquad \Omega_-=k^2\Omega_+, \end{equation} while the corresponding temperatures are given by \begin{equation}\label{BTZ-3} \beta_+=\pi \ell\, \frac{k}{k^2-1}\sqrt{\frac{k\ell}{GJ}},\qquad \beta_-=\frac{\beta_+}{k}. \end{equation} The two horizons consistently rotate by angles that are multiples of $2\pi$ over the period \begin{equation}\label{BTZ-4} \tau=4\pi\ell\,\frac{k}{k-1}, \end{equation} so that the angles are determined by the winding numbers of the ground state at the horizons, \begin{equation}\label{BTZ-5} \begin{array}{ccc} \Delta\phi_+ & =\tau \Omega_+ & \displaystyle=2\pi\,\frac{2}{k-1}=2\pi\,n_W^-, \\[4mm] \Delta\phi_- & =\tau \Omega_- & \displaystyle=2\pi\,\frac{2k^2}{k-1}=2\pi\,k\,n_W^+. \end{array} \end{equation} The ratio \begin{equation}\label{BTZ-6} \frac{\tau}{\beta_+}=4(k+1)\sqrt{\frac{GJ}{k\ell}} \end{equation} gives \begin{equation}\label{BTZ-7} \frac{\tau}{\beta_+}\,\mathcal S=2\pi J\,(2k+2), \end{equation} which is consistent with (\ref{QR}). The obtained result disagrees with the consideration in \cite{Setare}. \section{Conclusion} In the present paper we have used the periodic motion of a thermodynamical ensemble in imaginary time in order to formulate the quasi-classical quantization rule for a black hole as a single-variable dynamical system, i.e. a non-rotating black hole. The rule has given the equidistant quantization of the entropy. The application of the method to the Schwarzschild and BTZ black holes has been considered. We have emphasized the difference from the treatment in terms of quasi-normal modes: in our opinion, the use of the real parts of the frequencies is misleading in the problem under study, while the imaginary parts of the quasi-normal modes reproduce our result. 
This fact renders irrelevant the treatment of the quantum spectrum of black holes in terms of Loop Quantum Gravity, as suggested in \cite{Hod,Dreyer}, as well as in the quasi-classical framework of \cite{QNM-adiabat}. Nevertheless, the real parts of the quasi-normal frequencies could have another physical meaning. We have clarified the difference between the single-variable approach and the case of rotating black holes. There, one has to take into account the consistent multiple winding of the rotation at both horizons. This consistency has required rescaling the full period of motion for the black hole as a whole. This rescaling has resulted in a modified quantization rule, which guarantees the appropriate equidistant quantization of the rescaled entropy. The mentioned consistency, adjusting the rotation of both horizons, was not generically taken into account in the approach of \cite{BarKust}, which is therefore theoretically sound only at $l\to 1$ in the quasi-classical limit, since the quantization of a periodic phase has some fundamental problems \cite{CarrNieto}. This work is partially supported by the grant of the president of the Russian Federation for scientific schools NSc-1303.2003.2, and the Russian Foundation for Basic Research, grant 04-02-17530. \newpage
\section{Correlated formation of Black Hole and Bulges in CDM and MOND} While appearing in a wide range of shapes, sizes and luminosities, galaxies have very regular properties. E.g., the terminal rotation speed $V_{cir}$ of a spiral galaxy is tightly correlated with its total baryonic mass $M$, following a simple TFMM power-law $V_{cir}^4/M \sim 0.02\, ({\rm km}\,{\rm s}^{-1})^{4}{M_\odot}^{-1}$, a formula proposed by Tully \& Fisher (1977) for high-surface brightness galaxies, generalized by Milgrom (1983), and tested by McGaugh (2005) for gas-rich low-surface brightness galaxies. There is no evidence for any significant scatter, and the relation seems to apply independently of galaxy formation history (Gentile et al. 2007). This power-law also applies, with significant scatter, to elliptical galaxies and bulges if one replaces $V_{cir}$ with $\sim 1-2$ times the typical stellar dispersion $\sigma$ (Faber \& Jackson 1976). Nevertheless, a much tighter relation exists for the central black holes of these nearly spherical systems. The formation of central black holes (BHs) in galaxies is likely to be a rapid process, since most quasars have already formed at redshift $z>2$. There is a tight correlation between the BH mass and the mass of the spheroidal (bulge) component, or even better its velocity dispersion (Ferrarese \& Merritt 2000; Gebhardt et al. 2000; Tremaine et al. 2002). The correlations $M_{bulge} \propto M_{BH} \propto \sigma^4$ are so tight that they are hard to explain unless bulges form as fast as BHs, with the growth of both controlled simultaneously by some mechanism. At the present epoch, the BH accretion rate is both small and completely decoupled from the bulge growth. The likely window to couple the two is at high redshift, during phases of rapid growth and violent feedback. 
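The quoted normalization of the power-law can be reproduced numerically, assuming the MOND-like scaling $V^4 = GMa_0$ with $a_0 \approx 1.2\times10^{-10}\,{\rm m\,s^{-2}}$ (a sketch, not part of the paper's derivation):

```python
# Numerical sketch of the baryonic Tully-Fisher normalization, assuming
# the MOND-like scaling V^4 = G*M*a0 with a0 ~ 1.2e-10 m/s^2.
G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # m s^-2
M_sun = 1.989e30       # kg

ratio = G * a0 * M_sun           # V^4 per solar mass, in m^4 s^-4
ratio_kms = ratio / (1.0e3)**4   # in (km/s)^4 per M_sun
assert 0.01 < ratio_kms < 0.03   # consistent with the quoted ~0.02
```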
Many previous discussions emphasize that the feedback from the central supermassive black hole, which interacts with the surrounding environment in a self-regulated way, is the key to forming the correlations (Silk \& Rees 1998; King 2003; Wyithe \& Loeb 2003; Murray, Quataert \& Thompson 2005; Begelman \& Nath 2005; Cen 2007). On the other hand, the starburst activities peak at similar redshifts to the quasars as a whole. In a co-evolution scenario of starbursts and a central BH, the central BH accretes at a high accretion rate during the main star formation (SF) phase (Alexander 2005). To make the starbursts, it was proposed that bulges can form by a rapid collapse due to the radial instability of isothermal gas. This proposal has the nice feature of forming bulges before disks (Xu, Wu, \& Zhao 2007). Inspired by these works in a Newtonian framework, we model the criteria of bulge formation, assuming a more general mixture of gas and a non-isotropic stellar component embedded in a constant external gravity provided by either CDM halos or the effective halos of a MOND scalar field. Observations show that most of the local and distant starburst galaxies are rich in gas and dust (Heckman, Armus, \& Miley 1990; Meurer et al. 1995; Sanders \& Mirabel 1996; Adelberger \& Steidel 2000). Photons from newly-formed stars and the BH's accretion disk, with luminosities $L_{SF}$ and $L_{BH}$, would diffuse out of the gas sphere. While keeping the gas nearly isothermal, the photons exert a pressure due to dust opacity. 
The momentum deposit rate of photons from an Eddington-accreting BH, $(2/c)L_{BH} = (M_{BH}/{M_\odot}) \times 4.3 \times 10^{-8}\, {\rm m\, s}^{-2}\, {M_\odot} = (M_{BH}/1.2 \times 10^8{M_\odot}) \times 10^{31}$ Newton, might drive a feedback force on the gas (King \& Pounds 2003) to balance the inward momentum deposit of the SF, $(2/c)L_{SF}=(L_{SF}/3.7 \times 10^{12}L_\sun) \times 10^{31}$ Newton; the latter could also drive an outward force to balance, say, half of the gravitational force on the gas. So we have roughly \begin{equation} (10^{-10} {\rm m s}^{-2}) (430 M_{BH}) \sim {2\over c} L_{BH} \sim {2 \over c} L_{SF} \sim 0.5\bar{g} M_{g} \end{equation} where $\bar{g}$ is the mass-averaged gravity on the gas sphere. Rewriting $\bar{g}=10^{-10}\,{\rm m s}^{-2}\, g_{-10}$, we find a short star formation time scale $(0.001 c^2)M_g/L_{SF} \sim 0.2 g_{-10}^{-1}$ Gyr if we adopt the usual SF efficiency of about 0.001 (Leitherer et al. 1999; Bruzual \& Charlot 2003). In a starburst, we expect radial collapse and violent feedback, so the short timescale of SF should be comparable to the free-fall time scale. Assuming that all the gas is eventually turned into stars, we recover a Magorrian et al. (1998) relation ${M_{BH} \over M_*} \sim 0.002 g_{-10}$. The key point here is that the observed properties of BHs, SF and bulges can all be realised {\it if $g_{-10} \sim 2$ with little scatter universally}. Presently there are two paradigms within which galaxy structure and formation are studied. By introducing only two speculative dark components of the universe, the paradigm of Cold Dark Matter (CDM) plus the cosmological constant $\Lambda \sim (10^{-9}\,{\rm m/s^2})^2$ makes it possible to simulate the large-scale universe realistically with General Relativity. Many properties of model galaxies can be predicted, although not all are in agreement with observations, especially for low-surface brightness galaxies. 
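The order-of-magnitude numbers entering the momentum-balance estimate of Eq.~(1) can be reproduced in a few lines, assuming the standard Eddington luminosity $L_{Edd} = 4\pi G M m_p c/\sigma_T$ (a sketch with SI constants; the choice $\bar g = 2\times10^{-10}\,{\rm m\,s^{-2}}$ is the illustrative $g_{-10}\sim2$ of the text):

```python
import math

# Order-of-magnitude sketch of the momentum-balance estimate, using
# the Eddington luminosity L_Edd = 4*pi*G*M*m_p*c/sigma_T (SI units).
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_p = 1.673e-27        # kg
sigma_T = 6.652e-29    # m^2 (Thomson cross-section)
M_sun = 1.989e30       # kg

# (2/c) L_Edd per solar mass, expressed as an acceleration
L_edd_per_Msun = 4.0 * math.pi * G * M_sun * m_p * c / sigma_T   # W
accel = 2.0 * L_edd_per_Msun / (c * M_sun)                       # m s^-2
assert 4.0e-8 < accel < 4.6e-8        # quoted value: 4.3e-8 m s^-2

# force for M_BH = 1.2e8 M_sun, compared with the quoted 1e31 N
force = accel * 1.2e8 * M_sun
assert 0.9e31 < force < 1.1e31

# balancing (2/c) L_BH ~ 0.5 * gbar * M_g with gbar = 2e-10 m/s^2
# gives M_BH/M_g ~ 2e-3, the Magorrian-type ratio of the text
gbar = 2.0e-10
ratio = 0.5 * gbar / accel
assert 1.5e-3 < ratio < 3.0e-3
```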
The Modified Newtonian Dynamics (MOND) paradigm is able to match properties of high and low surface brightness galaxies with amazing accuracy by simply introducing a fundamental scale $a_0 \sim 1.2 \times 10^{-10}m/s^2$ in the space-time metric gradient. However, it is generally underdeveloped in terms of simulating the processes of structure and galaxy formation. Nevertheless, it was realised that disks in MOND become unstable once above a certain central surface brightness ${a_0 \over 2\pi G} \sim 130 {M_\odot}{\rm pc}^{-2}$, so bright regions of galaxies tend to exist in spheroidal form and are in strong gravity $g\ge a_0$ (Sanders \& McGaugh 2002); throughout the paper we use $a_0$ as the dividing scale for strong vs. weak gravity. The classical MOND gravitational theory (Bekenstein \& Milgrom 1984) is now furnished with several co-variant versions. These have different constructions using a vector field plus (optional) scalar fields (Bekenstein 2004, Sanders 2005, Halle, Zhao, \& Li 2008, Zhao \& Li 2008); the scalar field(s) can always be absorbed (e.g., into the modulus of the vector field), but can be useful to make the expressions of the theories simpler. In these theories there is a constant scale $a_0 \sim 1.2 \times 10^{-10}{\rm m}\ {\rm s}^{-2}$; in regions or epochs of more gradual variation of the space-time metric, the dominant source of Einstein's tensor switches from normal matter/radiation fields to the new vector or scalar fields. With the same scale the most recent co-variant model (Zhao 2007) can explain how the expansion of the universe switches from decelerating to accelerating, and explains the amplitude of the cosmological constant $\Lambda \sim (8 a_0)^2$, rather than invoking it arbitrarily as in $\Lambda$CDM. In principle, cosmological structure formation all the way down to galaxies is well-defined in some of these co-variant theories (Halle, Zhao \& Li 2008). 
In the original proposal for TeVeS (Bekenstein 2004), there is a discontinuity in the Lagrangian for the scalar field, which makes the evolution from cosmological linear perturbations to quasi-static galaxies problematic. This is overcome in the proposed Lagrangian of Zhao \& Famaey (2006). It is theoretically possible to seed and simulate galaxy formation from the linear perturbation simulations of the Cosmic Microwave Background (Skordis et al. 2005). Work has already started on galaxy formation modeling in MOND (Sanders 2008, Sanders \& Land 2008) and there are many constraints on the theory from gravitational lensing tests (Feix et al. 2007, 2008, Shan, Feix, Famaey, \& Zhao 2008, Natarajan \& Zhao 2008, Tian, Hoekstra, Zhao 2008). Here we speculate upon the properties of galaxies formed in the TeVeS framework, and contrast them with the $\Lambda$CDM framework. While generally speaking the two paradigms are mutually exclusive, here we illustrate some similarities, in the context of the formation of high surface brightness bulges and elliptical galaxies. There are, however, important differences. \section{Scatter of the halo gravity in CDM} Let us consider the Cold Dark Matter (CDM) galaxy formation framework. In CDM models baryons fall into the potential well of CDM, cool, and condense into stars. The background dark matter distribution is often described by the NFW density distribution for dark matter (Navarro, Frenk \& White 1997, Navarro et al. 2004), which decreases smoothly from an $r^{-1}$ density cusp inside to an $r^{-3}$ tail outside a scale radius $r_s$, defined to be where the logarithmic density slope is $-2$ exactly. The density satisfies $ r \rho_{NFW} \approx \rho_s r_s$, insensitive to radius inside the cusp region $r<r_s$, where $\rho_s$ is a density scale. 
The Newtonian gravity of the halo, determined by the halo mass enclosed inside radius $r$, is given by \begin{equation} g_{NFW}(r) = {G M_{NFW}(r) \over r^2} = {(2 \pi G) (\rho_s r_s)} F\left({r \over r_s}\right), \end{equation} where the function \begin{equation} F(y) \equiv {2 \over y^2} \left[\ln(1+y)-{y \over 1+y}\right] \sim (1+y)^{-1.475}. \end{equation} The approximation is valid to 10 percent for $0<y<20$ and gives $F(0)=1$. For a halo of virial mass $M_{vir}$, the halo scale parameters satisfy \begin{eqnarray} r_s &=& (17 {\rm kpc/h}) M_{vir,12}^{0.46}, \\ r_s \rho_s &=& 130 {M_\odot}{\rm pc}^{-2} \times \Xi, \\ \Xi & =& 2 M_{vir,12}^{0.17} \sim 2 \left({r_s \over 17/h {\rm kpc}}\right)^{0.34}. \end{eqnarray} where $h \equiv H_0/100$ is the dimensionless Hubble constant, and the halo virial mass $M_{vir}$ is related to the baryonic mass $M_b$ of a galaxy by $M_{vir} \sim 8 M_b$. The parameters are insensitive to the resolution of the simulation and to whether the cusp is truly finite or infinite at the origin. The numerical values above and the data points shown in Fig.~\ref{scatter} are taken from Navarro et al. (1996) and Table 3 of the simulations of Navarro et al. (2004). They agree with the scalings for the concentration vs mass, $c \equiv {r_{vir} \over r_s} \sim 13.4 M_{vir,12}^{-0.13} (1+z)^{-1}$, as given in Bullock et al. (2001), where $c$ is the ratio between the virial radius $r_{vir}$ and the scale radius $r_s$. It is interesting that in the cusp, $F \sim 1$, and the NFW halo's self-gravity $g_{NFW}(r)$ is insensitive to radius and halo parameters, i.e., \begin{equation} g_{NFW}(r) \sim (1.2 \pm 0.6) \Xi \times 10^{-10} m/s^2, \end{equation} hence $g_{NFW}(r)$ of NFW cusps is nearly a universal constant for galaxies of the same baryonic total mass. A factor of two scatter in $g_{NFW}(r)$ is estimated from Table 3 and Fig.1a of Navarro et al. 
(2004); the galaxy clusters of $10^{14}{M_\odot}$ have nearly the same $r^{-1}$ extrapolated inner density as $10^{12}{M_\odot}$ galaxy halos (although the cluster simulations do not yet have enough resolution to be confidently extrapolated to small radii of a few kpc). More precisely, $r_s^{0.66} \rho_s \sim cst$, insensitive to halo mass and simulation resolution, i.e., it applies to the $r^{-1.5}$ Moore profile as well as to the cored profile of Navarro et al. (2004) as long as $r_s$ has the model-insensitive definition of the radius where the logarithmic density slope is $-2$. The cluster density is a factor of two higher, and the dwarfs are a factor of four lower due to the $\Xi$ factor. The above is roughly in agreement with Xu et al. (2007), who noted that in the central region containing the galaxy bulge, NFW halos have a density scaling relation $r \rho \sim r_s \rho_s \sim 130 {M_\odot}{\rm pc}^{-2} \Xi$, and $\Xi(M_{vir},z,c) \sim M_{vir,12}^{0.072} \sim 1$. While the details differ, in both cases $\Xi$ is a shallow function of the halo virial mass $M_{vir}$, the redshift $z$, and the concentration $c$. \subsection{Effects on the CDM cusp if bulges and ellipticals grow adiabatically} The CDM density is highly compressible by gravitational interaction with baryons. The stellar distribution in elliptical galaxies is often described by a Sersic profile in projected light (see Graham \& Driver 2005), with an underlying volume density of the approximate form $r^{-1+0.6/n} \exp(-r^{1/n})$ (Prugniel \& Simien 1997); for the Sersic index $n \sim 4$, we have a nearly $1/r$ cusp, which suggests a nearly constant Newtonian gravity in the center. 
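The two quantitative claims of this section, that the fitting form $(1+y)^{-1.475}$ tracks $F(y)$ to 10 percent for $0<y<20$, and that the cusp self-gravity $2\pi G(\rho_s r_s)$ is $\sim 1.2\times10^{-10}\,{\rm m\,s^{-2}}$ per unit $\Xi$, are easy to verify numerically. The following sketch is illustrative, using standard SI constants.

```python
import math

def F(y):
    """Exact NFW shape function F(y) = (2/y^2) [ln(1+y) - y/(1+y)]."""
    return 2.0 / y**2 * (math.log(1.0 + y) - y / (1.0 + y))

# Worst-case fractional deviation of the fit (1+y)^-1.475 over 0 < y < 20:
ys = [0.01 * (i + 1) for i in range(2000)]          # y = 0.01 ... 20.00
worst = max(abs(F(y) / (1.0 + y) ** -1.475 - 1.0) for y in ys)
print(worst)  # ~0.09, i.e. the fit is good to ~10 per cent, as quoted

# Cusp self-gravity 2 pi G (rho_s r_s) for rho_s r_s = 130 Msun/pc^2:
G = 6.674e-11                     # [m^3 kg^-1 s^-2]
M_sun, pc = 1.989e30, 3.086e16    # [kg], [m]
g_cusp = 2.0 * math.pi * G * 130.0 * M_sun / pc**2
print(g_cusp)  # ~1.1e-10 m/s^2 per unit Xi, close to a0
```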
Here we use the simpler Hernquist profile for the enclosed mass $M_b(r)$, hence we find that the central Newtonian gravity has a spatially constant value, \begin{equation}\label{Hernquist} g_N(r)={G M_b(r) \over r^2}= {G M_b \over (r+r_h)^2} \sim {G M_b \over r_h^2}, \end{equation} for $r \rightarrow 0$, where $M_b$ and $r_h$ are the baryon (total) mass and scale length. The scale length and stellar mass (luminosity) of elliptical galaxies are typically correlated with some scatter, e.g., Chen \& Zhao (2008) find that \begin{equation} \log {G M_b r_h^{-2} \over 350 \times 10^{-10}ms^{-2} } = -1.52 \log \left({M_b \over 1.5 \times 10^{11}{M_\odot}}\right) \pm 0.5. \end{equation} where $1.5 \times 10^{11}M_{\sun}$ is the characteristic turn-over mass scale in the Schechter stellar mass function of galaxies, found by fitting SDSS galaxy samples in the range $10^8-10^{12}{M_\odot}$ (Panter et al. 2004), assuming a Hubble parameter of $70$km/s/Mpc. This would imply that near the Hernquist cusp the Newtonian gravity in units of $10^{-10}m/s^2$ is $g_{N-10} = 350 \times \left({M_b \over 1.5 \times 10^{11}{M_\odot}}\right)^{-1.52}$. This illustrates that cores of ellipticals have $g \gg a_0$, i.e., they are strong gravity environments (Milgrom \& Sanders 2003). \footnote{As an alternative model-insensitive check, we estimate from Faber et al. (1997, their Eq.3-4, and Fig.4) that the typical observed Newtonian gravity near the core or the half-mass radii of cored giant ellipticals (with $L_{10} \equiv L/10^{10}{L_\odot} > 1$) is $g_{N-10} \sim 100 L_{10}^{-0.6}$, and for cusped dwarf ellipticals (with $L_{10}<1$) is $g_{N-10} \sim 1000 L_{10}^{+0.6}$.} {\it If} in elliptical galaxies baryons condense into the center adiabatically, this process would further increase the value of $r \rho_{NFW}$ or $\Xi$. 
In the most extreme case, a galaxy might start with a $f_b:(1-f_b)=1:7$ mix of gas and CDM particles all distributed on circular orbits with an NFW distribution of the radius $r_i$, and the gas part condenses adiabatically into the present stellar mass distribution $M_b(r)$. Following the standard recipe (e.g., Klypin, Zhao, Somerville 2002), the initial CDM halo mass $(1-f_b)M_{NFW}(r_i)$ is squeezed into a radius of $r$, conserving mass. From conservation of the angular momentum $J$ of circular orbits, we have \begin{equation} J^2= r_i GM_{NFW}(r_i) = r \left[(1-f_b) GM_{NFW}(r_i) + GM_b(r) \right]. \end{equation} By Taylor expanding near $r \sim 0$, we find that the contraction factor $C \equiv {r_i \over r}$ satisfies \begin{equation} C^3 \approx (1-f_b) C^2 + {G M_b r_h^{-2} \over g_{NFW}(0) } \sim 350 \left({M_b \over 1.5 \times 10^{11}{M_\odot}}\right)^{-1.52} \!\!\!\!\!\! \gg 1, \end{equation} where we considered only the dominant 2nd term and applied the approximation of Chen \& Zhao (2006). By contracting the same halo mass inside a factor of $C$ smaller radii, one keeps the $r^{-1}$ profile of the halo cusp, but increases the halo central density and self-gravity by a factor $C^2$, i.e., at the center \begin{equation} g_{CDM} = 2 \pi G r \rho_{CDM} = C^2 g_{NFW} = C^2 \Xi \cdot 10^{-10}m/s^2 \gg a_0. \end{equation} The above is likely an over-estimate, since more realistic formation of ellipticals will involve mergers of gas-rich spiral galaxies. In any case, there is likely a {\it large upward scatter} around the naive analytical prediction $g_{CDM} \sim 10^{-10}m/s^2$ due to different scenarios of formation and different halo masses and concentrations, etc. \section{A universal gravity scale: effective halos in simple MOND} In the co-variant theories of MOND, the only sources of gravity are the stars and the gas (i.e., the baryons). 
In the context of its co-variant version, the Tensor-Vector-Scalar theory of Bekenstein (2004), there should be a scalar field $\phi_s$, such that in the spherical case ${\mathbf g}_s = -\nabla \phi_s$ gives a dark-halo-like gravity. Here we call the scalar field the Effective Dark Matter (EDM), $g_{EDM}=g_s=|{\mathbf g}_s|$; it is related to the Newtonian gravity $g_N=|{\mathbf g}_N|$ and the actual acceleration (or gravity) $g=|{\mathbf g}|=|\mathbf{g}_N+\mathbf{g}_s|$. The Poisson equation is modified as \begin{equation} \nabla \cdot (\mu_s \mathbf{g}_s ) = \nabla \cdot (\mu {\mathbf g}) = \nabla \cdot {\mathbf g}_N = -4 \pi G \rho, \end{equation} where $\mu_s$ and $\mu$ are modification functions, which give identical descriptions in the case of a spherical mass distribution, and reduce to Newtonian dynamics when $g_N \gg a_0$. Consider the modification functions as proposed in Angus et al. (2006), \begin{equation} \mu_s = {g_s \over (a_\alpha - g_s)\alpha},\qquad a_\alpha \equiv {a_0 \over \alpha}, \end{equation} where $a_\alpha$ is the fixed scale of the theory with $\alpha$ being a theory constant; the $\alpha=1$ ``simple'' model is the most popular special case. Reexpressing $g_s$ in terms of a rescaled Newtonian baryonic gravity $y$, we find that \footnote{Here $\mu = 1- \left[ {g +a_\alpha\over 2 a_\alpha} + \sqrt{\left({g -a_\alpha\over 2a_\alpha}\right)^2 + {g \over\alpha a_\alpha}} \right]^{-1}$. The combined gravity of effective DM and baryonic matter is the actual acceleration ${d\Phi \over dr} =g(r) = a_\alpha \left[\theta_s(y)+ \alpha^{-1} y\right]$, where $\theta_s(y) \equiv {2 \over 1 + \sqrt{1 + 4 y^{-1}} }$. 
Note $\theta_s(y) \sim 1$ if $y \gg 1$.} around a gas plus stellar sphere there will be an effective DM halo gravity $g_{EDM}$ or the scalar field $g_s$, \begin{equation}\label{Phi} g_{EDM} \equiv g_s = a_\alpha \theta_s(y), \qquad y \equiv { G (M_g + M_*) \alpha \over r^2 a_\alpha } \end{equation} where $M_g+M_*$ is the gas plus star mass inside radius $r$. A remarkable result of the general class of $\mu$-functions is that there is a maximum to the scalar field gravity \begin{equation} g_s \le g_{s,max} = a_\alpha. \end{equation} This means that when the Newtonian gravity, $g_N$, is the strongest (as in the centers of bright galaxies), the scalar field $g_s \rightarrow a_0 /\alpha$, i.e., approaches a universal constant plateau. This breaks down only if $\alpha=0$, corresponding to Bekenstein's toy function. Also if $\alpha \rightarrow \infty$, the dynamics becomes purely Newtonian with a zero scalar field. The standard $\mu(x)=x/\sqrt{1+x^2}$, which fits observations well, can be approximated by $\alpha \sim 3$ in terms of the sharpness of the transition from strong to weak gravity. One can set $\alpha \sim 1-3$ to be consistent with galaxy data. \footnote{ Based on theoretical arguments and matching observed rotation curves, Zhao \& Famaey (2006) advocated the ``simple'' function with $\alpha =1, \qquad \mu_s = {g_s \over a_0-g_s}, \qquad \mu = {g \over a_0+g}.$ This $\mu$-function is also supported by the Milky Way kinematics data and by the SDSS extragalactic satellite velocity distribution (Angus et al. 2007), and is consistent with the recent rotation curve fittings of Famaey et al. (2007a,b), Zhao \& Famaey (2006), Sanders \& Noordermeer (2007), and McGaugh (2008). } The density profile of the EDM can be derived from the Poisson equation as \begin{equation} \rho_{\rm EDM}(r)={1 \over 4\pi Gr^2}{d \over dr}(r^2 g_{EDM}). \label{poisson}
\end{equation} Taking $g_{EDM} = a_\alpha \theta_s$ and the Newtonian gravity of the baryonic Hernquist profile (Eq.~\ref{Hernquist}), we find \begin{equation} \rho_{EDM} = {a_\alpha \over 2 \pi \alpha G r} \theta_1, \qquad \theta_1 \equiv \left(\theta_s - {d \theta_s \over d \ln y} {r \over r+r_h}\right). \end{equation} For small radii ($\sim$ kpc) $\rho_{EDM}$ has a roughly $1/r$ cusp because $\theta_1 \sim \theta_s \sim 1$ at the centers of bright galaxies, where $g_N \sim GM/r_h^2 \sim 350 (M/1.5 \times 10^{11}{M_\odot})^{-1.52} a_0 \gg a_0$. In fact near the center $g_N \gg a_0 \sim a_\alpha$ even if we adopt a scale length three times bigger than that implied by the mean scaling of Chen \& Zhao (2006), and/or use a Sersic profile for the stellar distribution. The strong gravity implies a saturated scalar field in the center: $g_s$ reaches its maximum $a_\alpha$. This is confirmed numerically at least for the $\alpha=1$ case (cf. Fig.~\ref{fig:daming}). In fact near the centers of ellipticals the model predicts \begin{equation} r \rho_{EDM} = {\Pi \over \alpha}, \qquad \Pi \equiv {a_0 \over 2 \pi G} \sim 130{M_\odot}{\rm pc}^{-2}. \end{equation} This is rather similar to the case of NFW halos, but $r\rho_{EDM}$ has {\it virtually no scatter} for the MONDian scalar field in the bright centers of ellipticals. Fig.~\ref{fig:daming} shows that the scalar field is very rigid, \footnote{The scalar field becomes half-saturated as soon as the overall gravity exceeds $a_0/\alpha$, where $g_s=g_N=0.5a_0/\alpha$. In the Milky Way this translates to a galactocentric radius of about $(220{\rm km}{\rm s}^{-1})^2 \alpha/(a_0) \sim 13 \alpha$kpc, which is the edge of the galaxy disk.} $g_s =(0.5-1)a_0/\alpha$, nearly incompressible on scales of 1-10 kpc, i.e., it cannot be increased significantly by compressing the baryonic material and increasing the Newtonian gravity. 
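The saturation behaviour can be made concrete with the interpolating function $\theta_s(y) = 2/(1+\sqrt{1+4/y})$ of Eq.~(\ref{Phi}). The short sketch below (purely illustrative, with no fitted numbers) confirms that $g_s = a_\alpha\theta_s(y)$ never exceeds $a_\alpha$ and is already half-saturated at $y=1/2$.

```python
import math

def theta_s(y):
    """Interpolating function theta_s(y) = 2 / (1 + sqrt(1 + 4/y))."""
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 / y))

# g_s = a_alpha * theta_s(y) is bounded by a_alpha for any Newtonian gravity:
assert all(theta_s(10.0 ** k) < 1.0 for k in range(-4, 9))

print(theta_s(0.5))   # -> 0.5 exactly: half-saturation at y = 1/2
print(theta_s(1e8))   # -> ~1: the scalar field saturates at g_s = a_alpha
```

For $\alpha=1$ one has $y = g_N/a_0$, so half-saturation at $y=1/2$ reproduces the condition $g_s = g_N = 0.5 a_0$ quoted in the footnote above.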
We can also define a maximum central pressure of the scalar field for later use \footnote{Zhao (2007) noted that this pressure $P_\alpha$ ultimately relates to the cosmological constant or the vacuum energy density, which has the same order of magnitude as $P_\alpha$.} \begin{equation} P_\alpha \equiv {a_\alpha^2 \over 4 \pi G}, \qquad a_\alpha=a_0/\alpha. \end{equation} It is remarkable that the centers of ellipticals are all immersed in strong gravity, hence a universal constant scalar field, perhaps extending all the way to the central black holes of these systems. One ponders {\it the consequences of such a remarkable uniformity and universality for the central kpc regions of all bright galaxies}. One wonders, in particular, whether the uniformity would lead to very tight correlations, such as the $M_{BH}-\sigma$ relation. \section{Modeling spheroid formation in a constant background gravity $a_\alpha$} As a demonstration of the consequences of a universal scale, we apply it to galaxy formation and scaling relations. Unless stated otherwise we make the approximation \begin{equation} g_{NFW} \sim g_{EDM} = g_s = a_\alpha \theta_s \sim a_\alpha = cst. \end{equation} This approximation means that baryons experience a rigid uniform extra field $a_\alpha$ on top of the Newtonian self-gravity of the stars and gas, \begin{equation} g(r) = a_\alpha + { G (M_g + M_*) \over r^2}. \end{equation} \begin{itemize} \item If the origin of the extra field is actually NFW-cusped CDM, then it should be understood that $a_\alpha$ would have a significant scatter between galaxies, with a trend for bigger $a_\alpha$ for bigger galaxies. \item If the origin is the scalar field in a co-variant version of MOND, then $a_\alpha$ should be viewed as a universal fundamental constant intrinsic to a theory. It is equal to $a_0/\alpha$ with zero scatter, although the value of $\alpha \sim (1-3)$ is not precisely determined at present. 
\end{itemize} Spatial non-uniformity of the background field is studied in the Appendix; the effect can be treated crudely as a small spatial variation of $\alpha$. \subsection{Maximum stable gas mass} First we construct analytical spherical models of gas and star equilibrium in MOND, and study the condition for the gas sphere to remain stable. It is found that as we increase the core density up to a critical value, the total gas mass increases. Once the gas core density exceeds this critical value, the total gas mass starts to decrease. Assume a quasi-static equilibrium of a gas sphere $\rho_g(r)$ of sound speed $c_g(r)$ and a stellar sphere $\rho_*(r)$ of radial velocity dispersion $\sigma_*$ and an anisotropy parameter $\beta \equiv 1-{\sigma_{*\phi}^2 \over \sigma_*^2}$. To model the tangential velocity dispersion and non-isothermal profile we introduce a velocity dispersion measure $\sigma_1$, \begin{equation} \sigma_1 \equiv \xi_* \sigma_*, \qquad \xi_*^2 \equiv {d\log \rho_*\sigma_*^2 \over d\log \rho_*} -{d\log r \over d\log \rho_*}\beta. \end{equation} Define $\xi$ to be the ratio of the thermal pressure $\rho_g\sigma_g^2$ to the opacity-induced pressure $(\rho_g \sigma_g^2)/\xi$ on the dusty gas sphere (not the stellar sphere), countering the gravity. The velocity dispersion $\sigma_g(r)$ is related to the sound speed by $c_g^2=\sigma_g^2 {d \log \rho_g \sigma_g^2 \over d\log \rho_g}$. In equilibrium the gas-stars mixture satisfies the equations \begin{eqnarray} g(r) &=& { c_g^2 \over r} \left[{-d\ln[\rho_g(r)] \over d\ln r}\right] (1+\xi^{-1})\\\nonumber &=& {\sigma_1^2 \over r}\left[-{d\ln[\rho_*(r)] \over d\ln r}\right],\\ 4\pi r^2 &=& {dM_g(r) \over \rho_g(r) dr} = {dM_*(r) \over \rho_*(r) dr}. \end{eqnarray} The conversion of gas into stars also needs to be modeled to be realistic. The simplest solution of the above equations
would be a model where {\it stars trace the gas radial distribution}, so we have \begin{equation} {\rho_*(r) \over \rho_g(r)}={M_*(r) \over M_g(r)} = {f_*(t) \over 1-f_*(t)}, \end{equation} where the position-independent factor $f_*(t)$ is the fraction of gas formed into stars at time $t$. Such a solution is possible if the feedback ratio $\xi^{-1}$ is regulated by star formation, and related to the temperature of the gas by \begin{equation} (1+\xi^{-1})c_g^2 = \xi_*^2 \sigma_*^2 = \sigma_1^2 =cst. \end{equation} Note that the feedback parameter $\xi$ and the gas sound speed $c_g$ are not required to be rigorously independent of radius, although a certain combination of the two is. We do not require the stellar component to be exactly isothermal or isotropic either. We compute the gas equilibrium for a range of core pressures $p(0)$ (see Fig.~\ref{fig:gasden}) after rewriting the equations in terms of the dimensionless mass $m$, radius $u(m)$ and rescaled density $p(m)$ (see Appendix). We find that the gas density generally falls off monotonically with radius or mass. All models have finite mass out to infinite radius. The density drops steeply with radius due to the deep linear potential well of the background gravity, hence the mass converges quickly. Curiously there is also a maximum $m_{max} \approx 4.3$ in the total rescaled mass as $p(0)$ increases. This happens at a critical core density or pressure \begin{equation} p(0) = {\rho_0 \sigma_1^2 \over P_\alpha} \approx 30 \end{equation} above which the gas density $\rho_g$ of a parcel of gas $dM_g$ no longer increases monotonically with an increase of the central pressure, and in fact the total mass will decrease with increasing $p(0)$ after it reaches the maximum value. These limits on the gas central pressure and total mass are examples of the instability first discussed by Elmegreen (1999). 
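These numbers can be reproduced by direct integration of the hydrostatic equations. With the rescalings $u = r a_\alpha/\sigma_1^2$, $p = \rho_g \sigma_1^2/P_\alpha$ and $m = M_g a_\alpha G/\sigma_1^4$ (our reconstruction of the Appendix's dimensionless variables, so the details are an assumption), the conditions $\sigma_1^2\, d\ln\rho_g/dr = -g$ and $g = a_\alpha + GM_g/r^2$ reduce to $dp/du = -p(1+m/u^2)$ and $dm/du = p u^2$. A minimal sketch:

```python
# Sketch: integrate the dimensionless hydrostatic equations
#   dp/du = -p (1 + m/u^2),   dm/du = p u^2
# outward from the centre and record the total rescaled mass.

def total_mass(p0, u_max=30.0, du=2e-3):
    """Fourth-order Runge-Kutta integration for central pressure p0."""
    u, p, m = 1e-6, p0, p0 * 1e-18 / 3.0   # m ~ p0 u^3/3 near the centre
    rhs = lambda u, p, m: (-p * (1.0 + m / u**2), p * u**2)
    while u < u_max:
        k1 = rhs(u, p, m)
        k2 = rhs(u + du/2, p + du/2 * k1[0], m + du/2 * k1[1])
        k3 = rhs(u + du/2, p + du/2 * k2[0], m + du/2 * k2[1])
        k4 = rhs(u + du, p + du * k3[0], m + du * k3[1])
        p += du/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        m += du/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        u += du
    return m

masses = {p0: total_mass(p0) for p0 in (1, 3, 10, 30, 100)}
print(masses)  # the rescaled mass rises with p0 and peaks at m_max ~ 4.3
```

The mass converges quickly with radius because of the exponential cut-off imposed by the linear external potential, so a modest integration range suffices.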
A gas sphere above a certain critical mass \begin{equation} M_{max} \approx 4.3 \left({\sigma_1^4 \over a_\alpha G}\right), \end{equation} or above a critical central gas density or pressure, does not have stable solutions; adding a tiny amount of gas would lead to collapse. It is interesting to speculate that bulge formation originates from such a gas instability. For a sphere of gas plus stars at the critical mass, we integrate the density numerically, and find the central surface mass density \begin{equation}\label{I0} S(0) \sim 2 a_\alpha/G \sim 1500 \alpha^{-1} {M_\odot}{\rm pc}^{-2},\end{equation} insensitive to the initial gas velocity dispersion $\sigma_g$ and the feedback ratio $\xi^{-1}$. We also fit our numerical solution of the gas mass profile by a Sersic-like distribution. The gas mass profile is related to the dimensionless volume density profile $j(u)$ via \begin{equation} {M_g(r,t) \over (1-f_*(t)) M_{max}} = \int_0^u j(u) (4 \pi u^2 du), ~ u \equiv {r \over \sigma_1^2 a_\alpha^{-1}}. \end{equation} Prugniel \& Simien (1997, eq B6) suggested an approximate form $r^{-1+0.6/n}\exp(-r^{1/n})$ (typically valid to 5\% accuracy) for the deprojected volume density of a Sersic profile. We fit $j(u)$ numerically to such a Sersic profile of volume density, and find that an $n=1.2$ profile works well, i.e., \begin{equation} \qquad j(u) \approx 0.14 u^{-1/2} \exp(-1.6 u^{1/1.2}). \end{equation} The normalisation is such that $\int_0^\infty 4\pi j(u) u^2 du=1$. \subsection{Properties of the stellar component formed} In the simplest scenario gas might turn adiabatically into stars while maintaining the density profile at the critical density and mass. Eventually, when the gas is exhausted by star formation ($f_*=1$), we have a stellar system with a nearly Sersic profile of high central surface brightness. The density slope is shallower/steeper than isothermal inside/outside the radius $r \sim 0.3 \sigma_1^2 a_\alpha^{-1}$. 
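The normalisation of $M_{max}$ is easy to evaluate for illustrative dispersions. In the sketch below the constants are standard SI values and the chosen $\sigma_1$ and $a_\alpha$ are examples only (with $\sigma_1 = c_g$ for a feedback-free isothermal gas sphere); the first case reproduces the critical mass adopted in the SPH test described below, and the second the Faber-Jackson normalisation $4.3\,\xi_*^4\alpha\times10^{11}{M_\odot}$ quoted later in the text.

```python
# Evaluating M_max = 4.3 sigma1^4 / (a_alpha G) in solar masses.
G = 6.674e-11       # [m^3 kg^-1 s^-2]
M_sun = 1.989e30    # [kg]

def M_max(sigma1, a_alpha):
    """Critical (maximum stable) gas mass, in solar masses."""
    return 4.3 * sigma1 ** 4 / (a_alpha * G) / M_sun

# sigma1 = 83 km/s with a_alpha = 1e-10 m/s^2 (the SPH setup below):
print(M_max(83e3, 1.0e-10))    # ~1.5e10 Msun

# sigma1 = 200 km/s with a_alpha = a_0 = 1.2e-10 m/s^2 (alpha = xi_* = 1):
print(M_max(200e3, 1.2e-10))   # ~4.3e11 Msun, the Faber-Jackson normalisation
```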
Our model also resembles real spheroids since it satisfies the Faber-Jackson-like relation $M_*^\infty \sim \sigma_*^4$ between the total mass and the stellar velocity dispersion (Faber \& Jackson 1976), \begin{equation}\label{FJ} M_*^\infty = {4.3 \sigma_1^4 \over a_\alpha G } = 4.3 \xi_*^4 \alpha \times 10^{11} {M_\odot} \left({\sigma_* \over 200{\rm km}{\rm s}^{-1}}\right)^4. \end{equation} For small deviations from an isothermal and isotropic ($\beta \sim 0$) stellar distribution, $\sigma_* = \sigma_1/\xi_*$ can be treated effectively as a mean projected stellar dispersion. Note that our result differs in detail from the MOND virial theorem $M_* = {81 \over 4G a_0}\sigma_*^4$ (Sanders \& McGaugh 2002), which applies to an isotropic isothermal stellar system in deep-MOND (see Appendix). Our bulges and ellipticals are clearly in a mild or high acceleration regime. \subsection{Correlations of Black Hole, Stellar Spheroid and Starburst} As in the introduction, we assume that the momentum deposit rate of photons, ${2 \over c}( L_{SF} + L_{BH} )$, drives a force acting on the gas. Assume that the BH grows exponentially by Eddington accretion from a seed before reaching the balance $L_{BH} = L_{SF}$. After this point, the BH mass stops growing and we set $L_{BH}=0$; the gas mass and $L_{SF}$ are then exhausted exponentially as the starburst continues. Assuming that at the turning point the BH and SF each provide half of the total feedback force, which is a fraction $(1+\xi)^{-1} \sim 1$ of the radially-pointing gravity $g(r)$, we obtain \begin{equation} {2 \over c} L_{BH} = {2 \over c} L_{SF} = {\bar{g} \over 2 (1+\xi)} M_{g} \end{equation} where $M_g =(1-f_*) M_{max}$ is the total gas mass, and the mass-averaged gravity \begin{equation} \bar{g} \equiv M_g^{-1} \int g dM_g \sim 2a_\alpha \end{equation} is computed by an integration of mass for the density model with the critical mass. 
This result is robust for a range of the polytropic index of the gas (see Appendix). Observations reveal that some gas-rich local spirals have $f_* \sim 0.5 \pm 0.2$. The stellar fraction is $f_* \sim 1$ for local bulges and elliptical galaxies. This is much higher than in their $z \sim 3$ progenitor starburst galaxies, where $f_* \sim 0.02$ (Alexander et al. 2005; Genzel et al. 2006; Kennicutt 1998, Solomon \& Vanden Bout 2005). We shall adopt $M_g =(1-f_*) M_{max}=(0.5-1)M_*^\infty$, and $\bar{g} \sim 2 a_\alpha \sim (1-2) \times 10^{-10}{\rm m s}^{-2}$, and $(1+\xi)^{-1} \sim 1$ as long as the feedback $1/\xi$ is not too small (Murray, Quataert \& Thompson 2005). In the end we obtain \begin{equation} {M_{BH} \over 2 \times 10^8{M_\odot}} = { L_{SF} \over 6 \times 10^{12}{L_\odot} } = {M_*^\infty \xi' \over \alpha \times 10^{11}{M_\odot}} = \left({\xi_*' \sigma_* \over 200~{\rm km}{\rm s}^{-1}}\right)^4 , \end{equation} where $\xi'=(1-f_*)/(1+\xi) \sim 0.5-1$, consistent with the observed Magorrian relation $M_{BH} \sim 0.005 M_{\rm bulge}$, and $\xi_*' = (4.3 \xi')^{1/4} \xi_* \sim 1$. The scatter in $\xi_*'^4$ is about a factor of a few each way, consistent with the narrow scatter seen in the observed relation $M_{BH} \sim (1-4)\times 10^8{M_\odot} ({\sigma_*/200{\rm km}{\rm s}^{-1}})^4$ (Ferrarese \& Merritt 2000; Gebhardt et al. 2000; Tremaine et al. 2002). \subsection{Hydro simulations of gas collapse in a fixed background potential} To confirm the analytical results on the gas instability, we investigate the threshold of gas collapse numerically. We consider the gravitational collapse of gas within a rigid dark matter halo using a hydrodynamic simulation. The gas which forms the bulge is embedded in a uniform external field from the dark matter potential. The simulation represents a simple treatment of the CDM halo gravity or the MOND scalar field. The simulation was performed using a Lagrangian fluids code based on Smoothed Particle Hydrodynamics (SPH). 
The code is described in Bate, Bonnell \& Price (1995), and is based on the version of Benz et al. (1990). The simulation uses $10^5$ particles, which are initially distributed randomly within a sphere of radius 15 kpc. The gas is initially at rest, with a temperature of $10^6$ K. The mean molecular weight is $1.2$, thus the sound speed of the gas is 83 km s$^{-1}$. The dark matter halo is included by way of a fixed linear external potential. The potential per unit mass is $a_\alpha r$, such that there is a universal acceleration of $a_\alpha=10^{-10}$ m s$^{-2}$ towards the centre of the galaxy. The gas is subject to both the external potential and self-gravity during the calculations. With these parameters, the critical mass is $1.54 \times 10^{10}$~M$_{\odot}$. We first set the total mass of the sphere to 0.0065 M$_{max}$ ($10^8$~M$_{\odot}$), so the gas should be stable against gravitational collapse. We then evolved the gas for 12 crossing times, during which the gas settled into equilibrium in the dark matter potential. The mass was then doubled (by doubling the mass of each particle) and the calculation resumed. By 5 crossing times, the gas had again settled into equilibrium. This process was repeated, doubling the mass each time the gas reached equilibrium, until runaway gravitational collapse occurred. Fig.~\ref{fig:snap} shows snapshots of the radial profile of the gas pressure just below (upper panels) and just above (lower panels) the critical mass. A collapse of the gas is evident from the density cusp in the final state. In Fig.~\ref{fig:hydro}, a 1D projection of the column density (along the $x$ axis) is plotted versus radius. During the first run, the distribution of gas changes from a uniform profile to a profile corresponding to the dark matter halo. Over the subsequent runs, there is little change in the distribution whilst the total mass remains less than the critical mass. 
However, when the mass exceeds the critical mass, the cloud is gravitationally unstable. Gravitational collapse occurs and the density at the centre of the cloud continuously increases. The calculation stops, with the final profile shown as the thickest line in Fig.~\ref{fig:hydro}; the slightly subcritical gas can be approximated by a Sersic law with $n =1$ (i.e., exponential). Galaxy formation clearly involves more than just spherical collapse. The rapid spherical collapse phase might produce the bulge and the black hole. This phase is likely followed by a gradual phase of minor mergers, during which the dense galaxy will likely acquire a diffuse stellar halo of higher angular momentum material. One might expect a shallower outer profile than $n=1$ in the end, depending on the amount of stars accreted. Indeed real galaxies have a range in Sersic index, which is correlated with the stellar mass of the galaxy (Caon et al. 1993, Desroches et al. 2007). Bulges and pseudo-bulges typically have profiles with $4 \ge n \ge 1$, and the most massive elliptical galaxies have profiles with $n \ge 4$. Some discrepancy with observations is expected since a constant scalar field is a poor assumption at large radii. \section{Summary} In short, combining the results of analytical models and numerical simulations, we argue that gas collapse and BH feedback inside the effective DM halos of MOND can produce galaxies with realistic scaling relations. Better than in Newtonian NFW halos, the Faber-Jackson and BH mass-velocity dispersion scaling relations are recovered with narrow scatter, thanks to the fact that there is a redshift-insensitive and luminosity-insensitive universal scale of gravity in the high-z gas-rich starburst galaxies and present-day ellipticals. 
Many feedback processes (e.g., Silk \& Rees 1998, King \& Pounds 2003, Cen 2007, Xu, Wu, Zhao 2007) invoked in the CDM context could be important in the MOND context as well, and could change the predictions of the state-of-the-art MOND N-body simulation (Nipoti et al. 2008) and hydro simulations (Tiret \& Combes 2008). In the context of a simplified picture of TeVeS, a nearly-isothermal gas sphere can collapse and trigger a starburst if the gas central pressure is above a universal threshold. This condition is likely synchronised throughout the universe, consistent with the observed epoch of starbursts. We also recover the $M_{BH}-\sigma_*$ relation, if the gas collapse is regulated or resisted by the feedback from radiation from the central BH. While the pristine CDM halos give accelerations typically of the order $(1-3)a_0$, there are two important differences from the universal scale in MOND. First, in CDM this scale would exhibit a large variation due to scatter in halo concentration (Milgrom 2002). Secondly, as the galaxy bulge forms, the central part becomes baryon-dominated in Newtonian gravity and the CDM is adiabatically compressed to higher densities without a theoretical upper limit, unless DM is made of neutrinos or their sterile partners (cf. Angus 2008, Zhao 2008). The collapse threshold could rise as the background gravity $a_\alpha$ increases. In contrast, in the TeVeS picture the scalar field $g_s$ within all bright galaxies stays close to $a_\alpha$ throughout galaxy formation. This value $a_\alpha$ is a universal constant in MOND. Hence we expect a universal threshold and synchronised formation of galaxy spheroids at high redshift, perhaps producing starburst galaxies in TeVeS. The mechanism would work less well in the CDM paradigm. Speculating beyond our models, elliptical galaxies are often triaxial in their stellar distribution and potential. 
If collapse happened above the threshold described here, the uneven distribution of angular momentum would lead to a triaxial equilibrium potential. This expectation is consistent with N-body simulations of collapse in CDM halos (Navarro et al. 1996) and in MOND (Nipoti et al. 2007). Indeed, elliptical galaxies with a mild $r^{-1}$ cusp in stellar light are allowed to exist in self-consistent triaxial configurations in CDM (Capuzzo-Dolcetta et al. 2007) and in MOND (Wang et al. 2008). Corrections are expected for low surface brightness galaxies, since the scalar field in these systems is far from being saturated. So $\bar{g} \sim w a_\alpha$, where $w \le 0.3$, and we expect the faint dwarf spheroidal mass and their central BH mass to be reduced by factors of $w$ and $w^2$, respectively, compared to a blind application of the Faber-Jackson and $M_{BH}-\sigma$ relations. No correction is expected for M32-like compact dwarf ellipticals, which are observed to have bright stellar nuclei and BHs. Our prediction that the BH mass per unit stellar luminosity $M_{BH}/L_*$ is lower in dwarf spheroidals than in dwarf ellipticals could be checked by sensitive searches for (central tracers of) massive BHs in dwarf spheroidals, which is challenging observationally. Environmental effects could lead to corrections to the Faber-Jackson relation. Member galaxies in the centre of a rich galaxy cluster have orbital accelerations of $\sim 2 a_0$, which makes it easier to saturate their MOND scalar field due to the external field effect; any gas-rich system would be more vulnerable to radial collapse into a high surface brightness triaxial galaxy in the centre of a galaxy cluster than in the surroundings or in the field (Wu et al. 2007, 2008). The external field effect is similar to making $a_\alpha$ smaller. 
Interestingly, eqs.~(\ref{I0}) and~(\ref{FJ}) can be combined to form \begin{equation} {M_*^2 \over L_*^2} \sim {\sigma_*^4 \over I_* L_*}{ 9\xi_*^4 \over G^2} \sim \left({\sigma_* \over 200\,{\rm km}\,{\rm s}^{-1}}\right)^4 \left({1500L_\sun{\rm pc}^{-2} \over I_*}\right){4.3\,\xi_*^4 \times 10^{11} L_\sun \over L_*}, \end{equation} where $I_* = S_*/(M/L)$ is the surface brightness of a stellar distribution of surface density $S_*$. This relation is very similar to the fundamental plane relation of ellipticals (e.g. Binney \& Merrifield 1998), \begin{equation} L_* \propto \left({\sigma_*^4 \over I_*}\right)^{0.7}, \end{equation} if we accept that $M_*/L_* \sim L_*^{0.2}$. Equivalently, we can write the galaxy size as $r_e \propto S_*^{-1} \sigma^2$, which is very close to the lensing-mass fundamental plane $S_* \propto \sigma^2/r_e$; GR-based strong lensing gives $S_*^{0.93} \propto \sigma^{1.93}/r_e$ (Bolton et al. 2007). The MONDian corrections to the lensing mass within the Einstein radius are generally mild (Zhao, Bacon, Taylor, Horne 2006, Shan, Feix, Famaey, Zhao 2008). On the other hand, environmental effects would not correct the BH-velocity dispersion relation, because \begin{equation} M_{BH} \propto \sigma_*^4, \end{equation} which is independent of the parameter $a_\alpha$. The BH-bulge relation may have been frozen at high redshift; some scatter might be caused by gas-rich mergers, and by minor feeding of the BH through a rotating gas-rich bar later on, which might evolve secularly into a bulge (Tiret \& Combes 2007). Disk galaxies could have built up their disks from high-angular-momentum gas well after the bright bulges formed by radial collapse. The formation of the disk will not fuel the central BH significantly. 
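The consistency of these exponents can be checked directly. A minimal sketch of our own bookkeeping (not part of the original derivation): combining $M_*/L_* \sim L_*^{0.2}$ with $M_*^2/L_*^2 \sim \sigma_*^4/(I_* L_*)$ gives $L_*^{0.4} \sim \sigma_*^4 I_*^{-1} L_*^{-1}$, hence $L_* \sim (\sigma_*^4/I_*)^{1/1.4}$.

```python
# Exponent check for the fundamental-plane comparison above:
# L*^{0.4} ~ sigma*^4 I*^{-1} L*^{-1}  =>  L* ~ (sigma*^4 / I*)^{1/1.4}
exponent = 1.0 / 1.4
print(round(exponent, 2))  # 0.71, close to the observed slope of 0.7
```

The agreement to within rounding supports reading the combined relation as the fundamental plane.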
This may explain why the BH mass is tightly related to the bulge and its velocity dispersion rather than to the total baryonic mass and the terminal circular velocity in disk galaxies; the latter two are themselves correlated by the Tully-Fisher relation, which is built into the Lagrangian of covariant MOND because virtually no scatter from this relation is observed in field galaxies (cf. Wu et al. 2007). \acknowledgments We acknowledge the support of NSFC fund (No.10473001 and No.10525313) and the RFDP Grant (No.20050001026) to Xuebing Wu and BXX, and partial support from NSFC Grant to HSZ (No. 10233040) and PPARC. HSZ thanks Da-ming Chen and Xufen Wu especially for technical assistance, and Xue-Bing Wu, Benoit Famaey, Gianfranco Gentile and Ralf Klessen, Keith Horne, Simon Driver, Ian Bonnell for discussions. CLD's work is conducted as part of the award `The formation of stars and planets: Radiation hydrodynamical and magnetohydrodynamical simulations', made under the European Heads of Research Councils and European Science Foundation EURYI (European Young Investigator) Awards scheme, and supported by funds from the Participating Organisations of EURYI and the EC Sixth Framework Programme.
\section{Introduction} It is well established that galaxies contain massive black holes (MBHs) in their nuclei \citep[e.g.,][]{kr95, richstoneetal98}. Observations of the host elliptical galaxy or spiral bulge show relations between the mass of the MBH and both the galactic spheroid luminosity and the stellar velocity dispersion \citep{magorrianetal98,fm00,gebhardtetal00a}. Such relationships, as well as their small intrinsic scatter \citep{tremaineetal02,gultekinetal09b}, hint at a connection between the formation of the MBH and the formation of the spheroid. In the context of the favored cold dark matter cosmology, hierarchical assembly of galaxies and protogalaxies with MBHs at their centers naturally leads to the formation of MBH binaries \citep{bbr80}. If the MBH binary can achieve a small enough separation through stellar dynamical hardening or through viscous gas dynamical drag, gravitational radiation from the binary becomes significant: gravitational waves carry away energy from the binary system, driving it to merger. If the black holes have unequal masses or misaligned spins, anisotropic gravitational radiation will impart a net momentum flux on the binary, causing the center of mass to recoil. Until recently, the magnitude of the recoil velocity was uncertain. For non-spinning black holes the maximum recoil has now been calculated to be $v_{\rm recoil,max} \approx 200\,{\rm km\,s^{-1}}$ \citep{bakeretal06}, and a similar range is expected for black holes with low spins, or with spins (anti-)aligned with the binary orbital angular momentum. Several studies have also found consistent results for the maximum recoil of spinning black holes, $v_{\rm recoil,max} = 1000$--$4000\,{\rm km\,s^{-1}}$ \citep{bakeretal07,gonzalezetal07,herrmannetal07, koppitzetal07,campanellietal07,sb07}. 
Such velocities are significant because they can eject the merged MBH from the galaxy, since most galaxies have escape speeds $ < 1000\,{\rm km\,s^{-1}}$. \cite{vhg2008} studied the effect of recoil on the MBH occupation fraction in nearby galaxies. They assumed that the relative orientations between the orbital angular momentum of the binary and the spins of the two MBHs were isotropically distributed. This configuration can result in high recoil velocities, up to thousands of ${\rm km\,s^{-1}}$. However, \cite{brm07} proposed that MBHs orbiting in gaseous circumnuclear discs, such as those expected in advanced stages of gas-rich galaxy mergers \citep{mayer2007}, align their spins with the orbital angular momentum of the binary. This configuration leads to small recoils for the MBH remnant. The accreting gas exerts gravito-magnetic torques that suffice to align the spins of both MBHs with the angular momentum of the large-scale gas flow. \cite{dotti2009b} have quantified the efficiency of this alignment process through the analysis of high resolution {\it N}-body Smoothed Particle Hydrodynamics (SPH) simulations. They apply the algorithm presented in \cite{perego2009} to evolve the masses, and the magnitudes and orientations of the spins, of the two MBHs. We extend the investigation of \cite{vhg2008} by considering different degrees of alignment between the spins and the orbital angular momentum of the MBHs. We either use isotropic spin and angular momentum configurations or the `quasi-aligned' distributions of spin orientations from \cite{dotti2009b}. These two cases bracket all possible values of the recoil velocity. We apply these distributions to Monte--Carlo realizations of galaxy merger trees, and we study the effect of gravitational recoil on the occupation fraction of MBHs in galaxies at different redshifts. 
\section{Recoil Velocities} \subsection{Fitting formulae} The recoil velocity may be broken down into components arising solely from mass asymmetry, $v_m$, which is perpendicular to the orbital angular momentum vector $\vecb{L}_{\rm pair}$, and components arising from spin asymmetry, $v_\perp$ and $v_\parallel$, which are perpendicular and parallel to $\vecb{L}_{\rm pair}$, respectively: \beq {v}_{\rm recoil} = \sqrt{v_m^2 + v_{\perp}^2+2 v_m v_{\perp} \cos(\xi)+ v_{\parallel}^2}, \label{eq:v_total} \eeq where $\xi$ is the angle between $v_m$ and $v_\perp$ in the orbital plane. We assume $\xi=145^{\circ}$, as suggested by \cite{lz2009}. \cite{campanellietal07} and \citet[][fit CL]{lz2009} propose the following fitting formulae for the recoil components: \begin{eqnarray} v_m &=& A \eta^2 \sqrt{1 - 4 \eta}\, (1 + B \eta), \label{eq:v_mass}\\ v_{\perp} &=& H \eta^2(1+q)^{-1}\left( a_1^{\parallel} - q a_2^{\parallel} \right), \label{eq:v_perp}\\ v_{\parallel} &=& K \eta^2(1+q)^{-1}\,\cos(\Theta-\Theta_0)\left|\vecb{a}_1^{\perp} - q \vecb{a}_2^{\perp}\right|, \label{eq:v_parallel} \end{eqnarray} where $q = M_2 / M_1 \le 1$ is the mass ratio of the black holes and $\eta\equiv q/(1+q)^2$ is the symmetric mass ratio. The spins of the two MBHs are broken into projections parallel and perpendicular to $\vecb{L}_{\rm pair}$, with magnitudes $a^{\parallel} = a\cos(\theta)$ and $a^{\perp} = a\sin(\theta)$. In the isotropic case, $\cos(\theta_1)$ and $\cos(\theta_2)$ are distributed uniformly between $-1$ and $1$. We compare the results from the isotropic case with those obtained assuming quasi-aligned spin-orbit configurations. In this case we assume that $\theta_1$ and $\theta_2$ are distributed according to \citet[][see Section~\ref{sec:angles}]{dotti2009b}. $\Theta$ is the angle between $(\vecb{a}_2^{\perp} - q \vecb{a}_1^{\perp})$ and the separation vector at coalescence, and $\Theta_0$ depends on the initial separation between the holes. 
We assume a uniform distribution of $\Theta-\Theta_0$ between 0 and $2\pi$. Note that for $q\vecb{a}_1^{\perp} = \vecb{a}_2^{\perp}$ (including $a_1^{\perp} = a_2^{\perp} = 0$), $v_{\parallel}$ vanishes and there is no recoil out of the orbital plane. In the absence of spins, the recoil is $v_m$, which is maximized for $q = (3 - \sqrt{5})/2 \approx 0.38$. The best-fit parameters of fit CL \citep{lz2009} are $A = 1.2 \times 10^4 \,{\rm km\,s^{-1}}$, $B = -0.93$, $H = 6900\,{\rm km\,s^{-1}}$, and $K = 6.0 \times 10^4 \,{\rm km\,s^{-1}}$. \citet[referred to as fit B]{bakeretal2008} found an alternative form for $v_{\parallel}$ that scales as $\eta^3$: \beq v_{\parallel} = K \eta^3(1+q)^{-1}\,\left(a_1^{\perp} \cos(\Phi_1)- q a_2^{\perp} \cos(\Phi_2)\right), \eeq where $\Phi_1$ and $\Phi_2$ are the differences between two angles that depend on the system of reference. We assume $\Phi_1=\Phi_2$, uniformly distributed between 0 and $2\pi$ \citep[see][]{dotti2009b}. The best-fit parameters from \citet{bakeretal2008} are $A = 1.35 \times 10^4 \,{\rm km\,s^{-1}}$, $B = -1.48$, $H = 7540 \,{\rm km\,s^{-1}}$, and $K = 2.4 \times 10^5 \,{\rm km\,s^{-1}}$. \citet[fit H in the following]{herrmannetal07} parameterize the recoil velocity in terms of $\theta_{\rm H}$, the angle between $\vecb{L}_{\rm pair}$ and \begin{equation} {\mathbf{\Sigma}}=\left(M_1 + M_2\right)\left(\frac{\vecb{J}_1}{M_1} - \frac{\vecb{J}_2}{M_2} \right), \end{equation} where the $\vecb{J}$ are the spin angular momenta of the holes. Taking $\vecb{L}_{\rm pair}$ to be in the $z$ direction, they find the Cartesian components of the recoil velocity to be: \begin{eqnarray} \nonumber V_x &=& C_0 H_x \cos(\theta_{\rm H}), \\ \nonumber V_y &=& C_0 H_y \cos(\theta_{\rm H}), \\ V_z &=& C_0 K_z \sin(\theta_{\rm H}), \end{eqnarray} where $C_0=\Sigma q^2 (M_1 + M_2)^{-2} (1 + q)^{-4}$, with best-fit parameters $H_x = 2.1\times 10^3$, $H_y = 7.3\times 10^3 $, and $K_z = 2.1\times 10^4$. 
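To illustrate how the fit-CL components combine in Eq.~(\ref{eq:v_total}), the following sketch evaluates Eqs.~(\ref{eq:v_mass})--(\ref{eq:v_parallel}) with the best-fit parameters quoted above; the function name and the sample inputs are our own illustration, not part of the original analysis.

```python
import numpy as np

# Fit-CL best-fit parameters as quoted in the text (km/s where dimensional).
A, B = 1.2e4, -0.93
H, K = 6.9e3, 6.0e4
XI = np.radians(145.0)  # assumed angle between v_m and v_perp in the orbital plane

def recoil_cl(q, a1_par, a2_par, a1_perp, a2_perp, dtheta):
    """Recoil speed in km/s for mass ratio q = M2/M1 <= 1 (fit CL)."""
    eta = q / (1.0 + q)**2                                  # symmetric mass ratio
    v_m = A * eta**2 * np.sqrt(1.0 - 4.0*eta) * (1.0 + B*eta)
    v_perp = H * eta**2 / (1.0 + q) * (a1_par - q*a2_par)
    v_par = K * eta**2 / (1.0 + q) * np.cos(dtheta) * abs(a1_perp - q*a2_perp)
    return np.sqrt(v_m**2 + v_perp**2 + 2.0*v_m*v_perp*np.cos(XI) + v_par**2)

# Equal masses, equal aligned spins: every component vanishes.
print(recoil_cl(1.0, 0.9, 0.9, 0.0, 0.0, 0.0))   # 0.0
# Non-spinning holes near the optimal mass ratio: pure mass-asymmetry kick.
print(recoil_cl(0.38, 0.0, 0.0, 0.0, 0.0, 0.0))  # ~175 km/s
```

The second call shows why non-spinning binaries rarely exceed the escape speed of a massive galaxy.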
\begin{figure} \includegraphics[width=0.48\textwidth]{recoil_3fit_iso.ps} \includegraphics[width=0.48\textwidth]{recoil_3fit_align.ps} \caption{Recoil velocity of the MBH remnant as a function of $q$, for MBH spins isotropically distributed (top) or for aligned configurations (bottom). The upper, middle, and lower panels refer to fit H, fit CL, and fit B, respectively. The thick lines show the average values of $v_{\rm recoil}$ ($\overline{v}_{\rm recoil}$), and $\overline{v}_{\rm recoil} \pm \sigma$.} \label{fig:comparison} \end{figure} In Figure~\ref{fig:comparison} we plot recoil velocities from fits H, CL, and B as a function of $q$ for a sample of isotropically distributed MBH spins and for aligned configurations. This figure demonstrates the agreement of the three formulae for different spin-orbit configurations. For each value of $q$ between 0.1 and 1, we produce 500\,000 realizations of the two MBH spins with magnitudes uniformly distributed between 0 and 1. For the isotropic case we also assign isotropically distributed spin directions. We do not show results for $q<0.1$ because there the recoil velocity drops below the typical escape velocity from galaxies. Fit B decreases faster with decreasing $q$ than fits CL and H, because the component of the recoil velocity parallel to $\vecb{L}_{\rm pair}$ scales as $\eta^3$ in fit B, and as $\eta^2$ in fits CL and H. The average values of $v_{\rm recoil}$ obtained with the different prescriptions for isotropically distributed spins are consistent within $1\sigma$ (i.e. within a factor of $\sim 2$). Assuming perfect alignment between the two spins and $\vecb{L}_{\rm pair}$, the results from fits CL, H, and B are consistent within a factor of 2.5 for any $q$. For the aligned case with $q \approx 0.1$, the difference between fits CL and B disappears because, for perfect alignment, $a=a^{\parallel}$ and $v_{\parallel}=0$, resulting in perfect agreement between the two fits. 
\subsection{Quasi-aligned configurations}\label{sec:angles} As mentioned in the Introduction, gravito-magnetic torques exerted by accreting flows onto the MBHs tend to align $\vecb{a}_1$ and $\vecb{a}_2$ with $\vecb{L}_{\rm pair}$. The distributions of the relative angles between the two MBH spins, and between each spin and $\vecb{L}_{\rm pair}$, after the formation of a MBH binary have been computed in \citet[][see Figure~2 therein]{dotti2009b}. Their results stem from the analysis of high resolution {\it N}-body/SPH simulations. In the suite of runs discussed in that paper, the authors varied the thermodynamical properties of the gas. In particular, they assumed two different polytropic equations of state, with polytropic index $\gamma=5/3$ (`hot' runs) and $7/5$ (`cold' runs), respectively. The hot case corresponds to an adiabatic monoatomic gas, as if radiative cooling were completely suppressed during the merger. This case mimics gas radiatively heated by an AGN \citep{mayer2007}. The cold case, instead, has been shown to provide a good approximation to a gas of solar metallicity heated by a starburst \citep[e.g.,][]{ss2000}. In each run, \citet{dotti2009b} found a significant degree of alignment already present before the formation of a binary. However, their runs always start with an equal-mass MBH pair. Because the timescale of the dynamical evolution of the two MBHs is shorter than the Salpeter time, the masses of the two MBHs do not change significantly, and at the end of the runs a nearly equal-mass binary forms. Here we discuss the dependence of the spin-orbit configurations found by \citet{dotti2009b} on the masses of the MBHs, by comparing the dynamical timescale ($\tau_{\rm dyn}$) of binary formation with the alignment timescale ($\tau_{\rm align}$). Before we consider the ratio of these two timescales, we look at each in slightly more detail in the context of our simulations; for a full discussion see \citet{dotti2009b}. 
Dynamical friction is the process that drives the dynamical evolution of the pair. Since the longer timescale is what determines the time until coalescence, $\tau_{\rm dyn}$ is the dynamical friction timescale of the less massive MBH, which depends on a number of factors: \beq \tau_{\rm dyn} \propto K\, M_{\rm BH}^{-1}, \label{eq:tdyn} \eeq where $K$ is a function of the properties of the circumnuclear disc. In the runs of \citet{dotti2009b}, $M_\mathrm{BH} = 4 \times 10^6\;\,{\rm M_\odot}$, and $\tau_{\rm dyn}= 4.5$--$7.5$ Myr, depending on the initial orbital parameters of the BHs and on the effective equation of state used to evolve the gas thermodynamics. Here we take the circumnuclear disc parameters of \citet{dotti2009b} as fiducial. Considering different values is beyond the scope of this paper, but doing so is likely to change only the details and not our qualitative results. The alignment timescale also depends on the BH spin, $a$, and may be expressed in terms of the Eddington fraction (see equation~43 in Perego et al. 2009): \beq \tau_{\rm align} \propto a^{5/7} M_{\rm BH}^{-2/35} f_{\rm Edd}^{-32/35}. \label{eq:talign} \eeq For $M_\mathrm{BH} = 4 \times 10^6\;\,{\rm M_\odot}$, in the simple case of a fixed orbital plane of the outer disc, i.e., coherent accretion, $\tau_{\mathrm{align}} \approx 10^5\;\mathrm{yr}$. In the simulations of \citet{dotti2009b}, however, the orbital plane of the accreted particles is never constant, and thus the alignment timescale is longer: $\tau_\mathrm{align} \approx 1$--$4\;\mathrm{Myr}$, depending on the initial orbital parameters of the BHs and the effective equation of state used. Since both black hole spins must achieve alignment with the disc angular momentum for the spins to be aligned with each other, the alignment timescale we are interested in is the longer of the two. In most cases, this will be the timescale of the less massive MBH. 
Before the formation of a binary, the more massive of the two MBHs always accretes more mass, and has the shorter alignment timescale \citep{dotti2009a}. Thus, we may use $M_{\rm BH} = M_2$ in the expression for $\tau_{\rm align}$. The situation after the formation of a hard binary, however, is slightly different. In this case, the torques exerted by the binary onto the gas carve out a low-density region in the middle of the disc, and the secondary MBH, being closer to the outer and denser gas region, can grow faster than the primary \citep{hayasaki2007,cuadra2009}, further aligning its spin with $\vecb{L}_{\rm pair}$. For our purposes, we conservatively assume that after the formation of a binary the spin-orbit configuration does not evolve significantly. The ratio of the two timescales is obtained by dividing Eq.~(\ref{eq:talign}) by Eq.~(\ref{eq:tdyn}), and shows the dependence on the accretion behaviour of the two MBHs: \beq \tau_{\rm align}/\tau_{\rm dyn} \sim a^{5/7} M_2^{33/35} f_\mathrm{Edd}^{-32/35}. \eeq For a fixed value of $a$, and rewriting in terms of $f_\mathrm{Edd} \sim \dot{M} M_2^{-1}$, this expression becomes \beq \tau_{\rm align}/\tau_{\rm dyn} \sim \dot{M}_2^{-32/35} M_2^{13/7}. \eeq If the secondary is accreting at the Bondi rate, $\dot{M}_2\propto M_2^2$, and consequently $\tau_{\rm align}/\tau_{\rm dyn} \propto M_2^{1/35}$. In this case the ratio is almost independent of the mass of the secondary, and the assumption that the two MBH spins are almost aligned is always correct. If, instead, the secondary is accreting at the Eddington limit, $\dot{M}_2\propto M_2$. 
From \citet{dotti2009a}, we know that for $M_2 \approx 4\times 10^6\;\,{\rm M_\odot}$, $\tau_{\rm align}/\tau_{\rm dyn} \approx 0.3$.\footnote{We took the cold, prograde case as our fiducial case, but the results are not very different for the other cases.} Using this as our normalization, we get $\tau_{\rm align}/\tau_{\rm dyn} \approx 0.1 \, M_{2,6}^{33/35}$, where $M_{2,6}$ is the mass of the secondary in units of $10^6 \,{\rm M_\odot}$. This scaling implies that a binary can form before the two MBHs align their spins to $\vecb{L}_{\rm pair}$ only for very massive secondaries ($M_2 \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 2\times 10^7 \,{\rm M_\odot}$) and rapidly accreting pairs. To estimate the relevance of such mergers, we consider the prevalence of the \emph{secondary} MBH having mass $M_2 \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 2\times 10^7 \,{\rm M_\odot}$. Assuming the scaling of MBH masses with the velocity dispersion of the host determined for nearby galaxies \citep{gultekinetal09b}, the host of a MBH with mass $M_2 \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 2 \times 10^7 \,{\rm M_\odot}$ has $\sigma \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 130 \,{\rm km\,s^{-1}}$, and an escape velocity $\mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 1000 \,{\rm km\,s^{-1}}$. The recoil velocity has a significant probability of being $>1000\,{\rm km\,s^{-1}}$ only for nearly equal-mass mergers ($q>0.3$). Such mergers are extremely rare for the heaviest MBHs \citep[see section 4;][]{volonteri07,GW3}. Furthermore, as mentioned above, further accretion can increase the degree of alignment of the binary system. As a consequence, in this study we may neglect the dependence of the spin-orbit configurations on the MBH masses and adopt the same distributions of relative angles between the two MBH spins for all MBH mergers in our sample. 
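The exponent bookkeeping behind these scalings can be checked mechanically. A minimal sketch of our own, using exact fractions for the $M_2$ exponent of $\tau_{\rm align}/\tau_{\rm dyn} \sim a^{5/7} M_2^{33/35} f_{\rm Edd}^{-32/35}$ and the Eddington-case normalization $0.1\,M_{2,6}^{33/35}$:

```python
from fractions import Fraction as F

# M_2 exponents in the timescale ratio before substituting f_Edd ~ Mdot / M_2.
m2 = F(33, 35)
fedd = F(-32, 35)

# Substituting f_Edd -> Mdot * M_2^{-1} shifts the M_2 exponent to 13/7.
assert m2 - fedd == F(13, 7)
print(m2 - fedd + 2 * fedd)  # Bondi, Mdot ~ M_2^2: 1/35 (nearly mass-independent)
print(m2 - fedd + fedd)      # Eddington, Mdot ~ M_2: 33/35

# Eddington-limited normalization at the quoted threshold M_2 ~ 2e7 Msun:
print(round(0.1 * 20.0 ** (33.0 / 35.0), 2))  # ~1.7, i.e. alignment lags binary formation
```

The last line shows explicitly why only very massive, rapidly accreting secondaries can form a binary before their spins align.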
Note, also, that the isotropic distribution case we test below provides an upper limit to the ejection probability at all masses. \section{Ejections and halo bias} In models of structure formation based on gravitational instabilities of Gaussian primordial fluctuations, the number density and bias properties of a halo can be expressed as a function of the rms fluctuation of the linear density field at redshift $z$, $\sigma(M_h,z)$, and of the threshold density for collapse of a homogeneous spherical perturbation at redshift $z$, $\delta_c(z)$. In the \citet{ps74} formalism, the comoving number density of haloes of mass between $M$ and $M+dM$ can be expressed as: \beq \frac{dn}{dM}=\sqrt{\frac{2}{\pi}}\, \frac{\rho_m}{M}\, \frac{-d(\ln \sigma)}{dM}\,\nu_c\, e^{-\nu_c^2/2}\ , \eeq where $\nu_c=\delta_{\rm c}(z)/\sigma(M,z)$ is the number of standard deviations that the critical collapse overdensity represents on mass scale $M$. At any redshift we can identify the characteristic mass (i.e., $\nu_c=1$) and its multiples. The higher $\nu_c$, the more massive and rarer the halo, and the higher its bias and clustering strength. Although this formalism is derived from a Press-Schechter analysis, it agrees fairly well with the results of {\it N}-body simulations. \cite{mw01} suggested that clustering analyses of quasar samples can be deconvolved to yield the typical mass of haloes hosting quasars, and their typical $\nu_c$. \citet{mw01} and subsequent investigations \citep{shenetal07,myersetal07,pmn04} have found that high-redshift quasars are highly biased objects with respect to the underlying matter, and that their $\nu_c$ increases with $z$. \cite{shenetal07} and \cite{myersetal07} find that the bias increases from $\nu_c\simeq 3$ at $z=2$, to $\nu_c\simeq 3.5$ at $z=3$, and $\nu_c\simeq 5.5$ at $z=5$. 
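To make the rarity of such high-$\nu_c$ peaks concrete, one can evaluate the exponential factor of the mass function above; a small illustration of our own:

```python
import math

def ps_multiplicity(nu):
    """Press-Schechter multiplicity factor f(nu) = sqrt(2/pi) * nu * exp(-nu^2/2)."""
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

for nu in (1.0, 3.0, 5.5):
    print(nu, ps_multiplicity(nu))
# f drops from ~0.48 at nu=1 to ~2.7e-2 at nu=3 and ~1.2e-6 at nu=5.5:
# the haloes hosting z~5 quasars are exceedingly rare density peaks.
```

This steep suppression is what makes the clustering-based $\nu_c$ estimates above so constraining.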
We evaluate the ejection probabilities, using the techniques described in Section 2 with fit CL, for haloes representing peaks of the density fluctuations with $\nu_c$=1, 2, 3, 4, 5, 6, as a function of redshift in a concordance $\Lambda$CDM cosmology \citep{spergeletal07short}. For every halo mass we estimate the ejection probability by comparing the recoil velocity ($v_{\rm recoil}$) to the escape velocity from the dark matter halo potential well (truncated at the virial radius). We model the halo potential with a \cite{nfw97} density profile, where the halo properties evolve as suggested by \cite{bullocketal01}. The ejection probabilities thus defined are a lower limit to the probability that the recoil voids a galaxy of its central MBH: if the recoil velocity is lower than the escape velocity but higher than the velocity dispersion of a halo, the timescale for the recoiled MBH to return to the centre under the effect of dynamical friction is likely longer than the Hubble time \citep[and references therein]{mq04,Gualandris2008,vm2008,Devecchi2009,Guedes2009}. \begin{figure} \includegraphics[width=1.0\columnwidth]{ejvsz.eps} \caption{Ejection probability as a function of redshift for different $\nu_c=\delta_{\rm c}(z)/\sigma(M,z)$ peaks of the density fluctuation field. Line styles indicate the assumed spin of the merging black holes: {\it Solid lines:} $\hat a=0.9$; {\it Dashed lines:} $\hat a=0.6$; {\it Dotted lines:} $\hat a=0$. Line colors indicate the assumed distribution of spin orientations: {\it Green:} isotropically distributed; {\it Red:} orientation distribution from the hot simulations by Dotti et al.\ (2009); {\it Blue:} orientation distribution from the cold simulations by Dotti et al.\ (2009). All probabilities assume $q = 0.1$.} \label{fig:ej_peaks} \end{figure} With this caveat in mind, we can estimate lower limits to the probability that MBH binaries are ejected or displaced by the gravitational recoil in $\nu_c$-peak haloes (Figure \ref{fig:ej_peaks}). 
The redshift at which the probability drops to 50\% increases with increasing $\nu_c$ for all binary mass ratios and spins. The probability of ejection for MBH binaries in haloes which host very high redshift quasars ($\nu_c\simeq 5.5$ at $z=5$) drops to 0 at $z \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$<$}} 19$ if MBHs are non-spinning, while for spinning MBHs the ejection probability is significant down to $z \mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$<$}} 13$ in the absence of any alignment process. Lower $\nu_c$ peaks remain dangerous for merging binaries for longer. A $\nu_c\simeq 3$ peak has an ejection probability of 100\% until $z=13$. If {\it all} $\nu_c\simeq 3$ haloes host a MBH, and {\it all} experience a major merger before $z=13$, then no MBHs are left for further evolution. In this scenario, MBHs must form at $z<11$, in order to evolve all the way to $z=3$ and host the observable quasars. However, the $\nu_c$-peak formalism does not allow a clear evaluation of the merger history of haloes. A halo which represents a $\nu_c$-peak at a given redshift $z$ would be incorporated into a halo representing a lower peak at $z-\Delta z$. A MBH in a given halo would then belong to different $\nu_c$-peaks in its lifetime. In order to evaluate the number of MBH mergers that a halo experienced, it is necessary to use techniques that can trace the whole merger history of a given halo as a function of mass and time. \section{Individual halo histories} We now turn to evaluate the merger histories of different galaxy haloes. We adopt a statistical approach, based on merger histories extracted from full $\Lambda$CDM merger trees. This work has a more general approach than \citet{vhg2008}, and addresses in an astrophysical context the analysis by \citet{schnittman07}. 
While \citet{vhg2008} looked at the role of ejections in the merger histories of galaxies that were in clusters but not in the `main trunk', here we examine merger histories with a focus on the main halo of the merger tree. We evaluate the average ejection probability as a function of cosmic epoch, and consider how it depends on the environmental bias. We consider here the halo merger histories leading to the formation of haloes with masses $M_0=4\times10^{13}\,{\rm M_\odot}$, $M_0=2\times10^{12}\,{\rm M_\odot}$, and $M_0=2\times10^{11}\,{\rm M_\odot}$ at $z=0$. We average our results over 20 realizations of the same mass, to account for cosmic variance. Our technique and cosmological framework are similar to those described in \cite{VHM}. We track the dynamical evolution of MBHs ab initio and follow their assembly down to $z = 0$. Several theoretical arguments indicate that MBH formation occurs at very high redshift, and probably in biased haloes \citep[e.g.,][]{mr01,vr06}. Additional parameters, such as the angular momentum of the gas, its ability to cool, and its metal enrichment, are likely to set the exact efficiency of MBH formation and the redshift range in which the mechanism operates \citep[for a thorough discussion see][]{VLN2008}. We consider the effect of varying the initial conditions by assuming either a frequent or a rare MBH formation process. As a reference model, we assume that MBH formation is effective in all haloes with $\nu_c>3$ at $z>20$ (corresponding to masses $>10^6 \,{\rm M_\odot}$). This model is based on a scenario where MBHs form as remnants of the first generation of metal-free stars (Population III). The main features of the hierarchical assembly of MBHs left over by the first stars in a $\Lambda$CDM cosmology have been discussed by \cite{VHM}, \cite{vr06} and \cite{VN09}. 
A model similar to the one presented in this paper has been shown to reproduce observational constraints on MBH evolution \citep[luminosity function of quasars and Soltan's argument, $M_{\rm BH}$--$\sigma$ relationship at $z=0$, mass density in MBHs at $z=0$;][]{VLN2008}. We also analysed a very different case, where, whenever a halo grows above a given mass threshold ($>10^{10} \,{\rm M_\odot}$) and does not already contain a MBH in its centre, it forms one (in case the previous MBH had been displaced or ejected, a new MBH materializes), regardless of redshift. This model is not based on a specific physical scenario, but is meant to be used for comparison with existing numerical simulations. MBH formation mechanisms define when and how often haloes are populated with MBHs; they provide the initial occupation fraction. Additionally, we have to follow the dynamical evolution of haloes and embedded MBHs all the way to $z=0$ in order to determine the effect of recoils on the occupation fraction of MBHs. Since the magnitude of the recoil depends on the mass ratio of the merging MBHs, we have to model the mass growth of MBHs and the merger efficiency. We base our model of MBH growth on the empirical correlations found between MBH masses and the properties of their hosts, and on the suggestion that these correlations are established during galaxy mergers that fuel MBH accretion and form bulges. {We therefore assume that after every merger between two galaxies with a mass ratio larger than $1:10$, their MBHs climb onto the relation with the velocity dispersion of the host that is seen today \citep{gultekinetal09b}: \beq M=1.3\times10^8 M_\odot \left(\frac{\sigma}{200 \,{\rm km\,s^{-1}}} \right)^{4.24}. 
\eeq We link the correlation between the black hole mass and the central stellar velocity dispersion of the host with the empirical correlation between the central stellar velocity dispersion and the asymptotic circular velocity ($V_{\rm c}$) of galaxies (Ferrarese 2002; see also Pizzella et al. 2005; Baes et al. 2003): \beq \sigma=200 \,{\rm km\,s^{-1}} \left(\frac{V_{\rm c}}{304 \,{\rm km\,s^{-1}}}\right)^{1.19}. \eeq The latter is a measure of the total mass of the dark matter halo of the host galaxy. A halo of mass $M_{\rm h}$ collapsing at redshift $z$ has a circular velocity \beq V_{\rm c}= 142 \,{\rm km\,s^{-1}} \left[\frac{M_{\rm h}}{10^{12} \,{\rm M_\odot}}\right]^{1/3} \left[\frac {{\Omega_m}}{{\Omega_m^{\,z}}}\ \frac{\Delta_{\rm c}} {18\pi^2}\right]^{1/6} (1+z)^{1/2}, \eeq where $\Delta_{\rm c}$ is the over-density at virialization relative to the critical density. For a WMAP5 cosmology we adopt here the fitting formula (Bryan \& Norman 1998) $\Delta_{\rm c}=18\pi^2+82 d-39 d^2$, where $d\equiv {\Omega_m^{\,z}}-1$ is evaluated at the collapse redshift, so that $ {\Omega_m^{\,z}}={{\Omega_m} (1+z)^3}/({{\Omega_m} (1+z)^3+{\Omega_{\Lambda}}+{\Omega_k} (1+z)^2})$. Therefore, if a dark matter halo of mass $M_{\rm h}$ at redshift $z$ hosts a MBH, we can derive the MBH mass via $V_{\rm c}$ and $\sigma$ \citep[see also][]{VHM,Rhook2005, Croton2009}. The relationship between black hole and dark matter halo mass is: \beq M=1.3\times10^{8} M_\odot \left[\frac{M_h}{9.4\times10^{12} M_\odot} \right]^{5/3} \left[\frac {{\Omega_m}}{{\Omega_m^{\,z}}}\ \frac{\Delta_{\rm c}} {18\pi^2}\right]^{5/6}(1+z)^{5/2}, \eeq in good agreement with the recent result by \cite{Bandara2009}. We further assume that MBHs merge within the merger timescale of their host haloes ($t_{\rm merge}$), which is a likely assumption for MBH binaries formed after gas-rich galaxy mergers \citep[and references therein]{dotti07}. 
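Chaining the three relations above gives the black hole mass expected for a given halo; the following sketch (our own, with an illustrative function name) takes $z = 0$ and drops the bracketed overdensity factors for simplicity:

```python
def mbh_from_halo(M_h):
    """M_BH from halo mass via V_c -> sigma -> M_BH, as in the text,
    with the overdensity factors dropped (z = 0 assumed)."""
    V_c = 142.0 * (M_h / 1e12) ** (1.0 / 3.0)   # km/s
    sigma = 200.0 * (V_c / 304.0) ** 1.19       # km/s
    return 1.3e8 * (sigma / 200.0) ** 4.24      # Msun

# A ~1e13 Msun halo hosts a ~1.3e8 Msun black hole in this model.
print('%.2e' % mbh_from_halo(1e13))
```

The combined exponent, $4.24 \times 1.19 / 3 \approx 1.68$, is the $M \propto M_h^{5/3}$ scaling quoted in the closed-form relation.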
We adopt the relations suggested by \cite{Taffoni2003} for the orbital decay of merging haloes. We treat as MBH mergers at a given $z$ those MBH binaries that are expected to merge during that specific timestep. If two haloes start interacting at $z_{\rm in}$, corresponding to a Hubble time $t_H(z_{\rm in})$ with a merger timescale $t_{\rm merge}$, then we consider these MBHs merged at $z_{\rm fin}$ corresponding to $t_H(z_{\rm fin})=t_H(z_{\rm in})+t_{\rm merge}$. Although we follow the dynamical evolution of each MBH along cosmic time, we note that dynamical friction appears to be efficient for mergers with mass ratio of the progenitors larger than $1:10$. Small satellites suffer severe mass losses by the tidal perturbations induced by the gravitational field of the primary halo. This progressive mass loss increases the decay time that can be of order the Hubble time when the mass ratio is smaller than $1:10$. Finally, the spin of MBHs is fixed to be $\hat a=0.9$, in order to obtain upper limits to the recoil consequences. } This set of models, though simple, provides a wide range of histories that can be considered to bracket some extreme behaviours. We find that the environment of the MBH population plays an important role. Along the merger history of massive galaxies, the fraction of ejected MBHs decreases more rapidly with decreasing redshift, dropping below 50\% by $z\sim 7$ for $M_0=4\times10^{13}\,{\rm M_\odot}$. The 50\% threshold is reached at later times in the `trees' of less massive haloes: $z\sim5$ for $M_0=2\times10^{12}\,{\rm M_\odot}$ and $z\sim2$ for $M_0=2\times10^{11}\,{\rm M_\odot}$. It is particularly instructive to examine the ejection history of the tree's main halo, which is the galaxy we would see today (Figure~\ref{fig:ejectionave_5}). The main halo is usually among the most massive haloes in the tree at any time, implying the largest escape velocities. The fraction of ejected MBHs is therefore lower. 
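The merger-redshift bookkeeping described above, locating $z_{\rm fin}$ such that $t_H(z_{\rm fin})=t_H(z_{\rm in})+t_{\rm merge}$, can be sketched as follows. The analytic age formula assumes a flat matter-plus-$\Lambda$ cosmology, and the parameter values ($H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m=0.26$) are illustrative choices of ours, not part of the model specification.

```python
import math

OMEGA_M, OMEGA_L = 0.26, 0.74       # flat matter + Lambda (assumed)
H0 = 70.0                           # km/s/Mpc (assumed)
KM_S_MPC_TO_INV_GYR = 1.0 / 977.8   # 1 km/s/Mpc ~ 1/977.8 Gyr^-1

def hubble_time(z):
    """Age of a flat matter+Lambda universe at redshift z, in Gyr."""
    h0 = H0 * KM_S_MPC_TO_INV_GYR
    a = 1.0 / (1.0 + z)
    x = math.sqrt(OMEGA_L / OMEGA_M) * a ** 1.5
    return (2.0 / (3.0 * h0 * math.sqrt(OMEGA_L))) * math.asinh(x)

def merger_redshift(z_in, t_merge):
    """Redshift z_fin at which t_H(z_fin) = t_H(z_in) + t_merge [Gyr] (bisection)."""
    target = hubble_time(z_in) + t_merge
    if target >= hubble_time(0.0):
        return 0.0                  # binary has not merged by today
    lo, hi = 0.0, z_in              # t_H decreases monotonically with z
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hubble_time(mid) > target:
            lo = mid                # mid is at too low a redshift
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

MBH pairs whose $z_{\rm fin}$ falls within a given timestep are then the ones counted as mergers at that step.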
The central MBH in a large galaxy has a very small probability of being lost after $z=5$. Since most of the growth of MBHs happens between $z=3$ and $z=1$, corresponding to the peak in quasar activity, we argue that, if accretion is responsible for creating the correlations between MBHs and their hosts \citep{sr98,1999MNRAS.308L..39F,2003ApJ...596L..27K}, then MBHs hosted in large galaxies will sit close to the expected correlation. The central MBH in a small galaxy (e.g., $M_0=2\times10^{11}\,{\rm M_\odot}$) has instead a large probability ($\sim 20$\%) of being ejected all the way to today. If such a low-redshift ejection happens, it is not immediately clear if the galaxy can re-acquire a MBH {\it and} if the latter can grow to the hypothetical mass that the correlation with the host would suggest. The alternative case of MBH formation (haloes with $>10^{10} \,{\rm M_\odot}$) gives quantitatively similar results for large galaxies, e.g., $M_0=4\times10^{13}\,{\rm M_\odot}$, and qualitatively similar results for small galaxies. Black holes form much later in this model (as the emergence of more massive galaxies must wait until $z \simeq 5-7$), and this pushes the ejection fraction to first rise steeply, as more and more MBHs are available to merge, and then decrease as galaxies grow in mass deepening the potential wells. \begin{figure} \includegraphics[width=0.45\textwidth,angle=0]{ej_frac.ps} \caption{Probability of MBH ejections as a function of redshift along the merger history of the main halo in each merger tree. Each row corresponds to different halo masses $M_0$ at $z=0$ as indicated in the left panels. Each column as indicated corresponds to the different fitting function used to calculate recoil speed. 
The {\it green, vertically hatched} regions are for an isotropic distribution of spin orientations; the {\it red, horizontally hatched} regions are for the `hot' simulations by Dotti et al.\ (2009); and the {\it blue, diagonally hatched} regions are for the `cold' simulations by Dotti et al.\ (2009). In every panel the MBH spins are fixed at $\hat a=0.9$. The trends and results are in strong qualitative agreement with the results from the simpler simulations presented in Fig.~\ref{fig:ej_peaks}. There is also general agreement among the different fitting functions.} \label{fig:ejectionave_5} \end{figure} It is most interesting that the cosmic ejection rate is very similar for fit B compared to fits CL and H, notwithstanding the different $\eta$ dependence ($\eta^3$ vs $\eta^2$). The reason is simple. From Figure~\ref{fig:comparison} it is evident that the mass-ratio dependence picks up at $q<0.1$ (where, incidentally, recoil velocities become much smaller than escape velocities from galaxies). However, mergers with $q\ll1$ are rather uncommon \citep{VHM,Sesanaetal2005}, due to dynamical effects. During a galactic merger, it is dynamical friction that drags in the satellite, along with its central MBH, towards the center of the more massive progenitor. When the orbital decay is efficient, the satellite hole moves towards the center of the more massive progenitor, leading to the formation of a bound MBH binary. The efficiency of dynamical friction decreases with the mass ratio of the merging galaxies: only nearly equal mass galaxy mergers (`major mergers', mass ratio larger than $\simeq 1:10$) lead to efficient MBH binary formation within timescales shorter than the Hubble time \citep{Taffoni2003}. These effects must be convolved with the mass-ratio probability distribution. As the mass function of haloes (and galaxies) is steep, the probability of halo mergers decreases with increasing mass ratio. 
That is, dynamically efficient major mergers are rare, and minor mergers are common but inefficient at forming MBH binaries. We can now derive how strong or mild recoils influence the frequency of MBHs in galaxies. We analyse here the $\nu_c>3$ case only, since by construction the alternative case (mass threshold $>10^{10} \,{\rm M_\odot}$) has an occupation fraction of unity for all haloes above the threshold: if a MBH is lost, a new one immediately materialises. We first derive a control case, in which we ignore recoils altogether. This control case is shown in the bottom panel of Figure~\ref{fig:OF}. The MBH frequency is calculated above a given minimum halo mass ($>10^{10}\,{\rm M_\odot}$; $>10^{11}\,{\rm M_\odot}$; $>10^{12}\,{\rm M_\odot}$). As discussed by \cite{Menou2001} and \cite{VHM}, the frequency of MBHs in haloes above a fixed mass threshold initially decreases with cosmic time as lower-mass haloes lacking MBHs become more massive than the assumed threshold. Eventually, the occupation fraction starts to increase as the total number of individual haloes drops. Given that different fitting formulae give very similar results, we show here the results that we obtain using fit CL. The worst-case scenario, the case with the largest number of ejections, is the isotropic case, which leads to the largest recoils (see Figure~\ref{fig:comparison}). The top panel of Figure~\ref{fig:OF} indicates that recoils can decrease the overall frequency of MBHs in small galaxies to $\sim60$\%, while they have little effect on the frequency of MBHs in large galaxies (at most a 20\% effect). The middle panel depicts the cold aligned case. We consider the latter the more realistic situation for gas-rich mergers, which are expected for high-redshift galaxies, at least. It is reassuring to note that recoils are not dangerous for most galaxies.
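The occupation fractions discussed here amount to a conditional count over the halo population at each output redshift. A toy sketch follows; the data structure and the numbers in it are our own stand-ins for the haloes of a merger tree, not the actual simulation output.

```python
def occupation_fraction(haloes, m_min):
    """Fraction of haloes with mass > m_min [M_sun] hosting a central MBH.

    `haloes` is a list of (mass, has_mbh) pairs, a toy stand-in for the
    halo population of a merger tree at one output redshift.
    """
    selected = [has_mbh for mass, has_mbh in haloes if mass > m_min]
    if not selected:
        return float("nan")     # no haloes above the threshold
    return sum(selected) / len(selected)

# Toy population: small haloes often lack a MBH (never seeded, or ejected)
population = [(3e10, False), (8e10, True), (5e11, True),
              (2e12, False), (7e12, True), (4e13, True)]
```

For instance, with the toy population above, `occupation_fraction(population, 1e12)` returns $2/3$.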
This is the result of a combination of effects: on the one hand MBHs hosted in small haloes, with shallow potential wells and low escape velocities, have a high ejection probability (Figure~\ref{fig:ejectionave_5}), on the other hand the MBH merger rate is very low along their galaxy formation merger hierarchy: MBH formation processes are inefficient in such shallow potential wells, and the anti-hierarchical nature of the galaxy assembly implies that not much action happens in low-bias systems at high-redshift. This is exemplified in the right panels of Figure~\ref{fig:OF}, in which we show the frequency of MBH close binaries for the same mass thresholds we used to calculate the MBH frequency (note the different y-axis scales). {Na\"\i vely}, one would expect the frequency of double MBHs to scale as the square of the MBH frequency, but the frequency of close binaries is suppressed with respect to the frequency of pairs because of the requirement of efficient orbital decay. Our simulations show that gravitational recoil is not expected to be efficient at ejecting MBHs with mass $\simeq10^6\,{\rm M_\odot}$. This mass range is where the {\it Laser Interferometric Space Antenna} ({\it LISA}) gravitational wave observatory will be most sensitive to extreme mass ratio inspiral (EMRI) events \citep{gair2009}. Thus the event rate for EMRIs is not likely to be affected by recoils. Summarising, the larger ejection probability that MBHs hosted in low-bias haloes have is counteracted by their lower merger probability, thus leading to an overall small change in the MBH frequency as a function of the recoil strength. 
\begin{figure} \includegraphics[width=0.45\textwidth,angle=0]{OF_BIN.ps} \caption{ Left: frequency of MBHs in galaxies as a function of redshift for haloes above three different mass thresholds: $M>10^{10} \,{\rm M_\odot}$ (\emph{gray, vertically hatched}), $M >10^{11} \,{\rm M_\odot}$ (\emph{magenta, diagonally hatched}), and $M>10^{12}\,{\rm M_\odot}$ (\emph{cyan, horizontally hatched}), for isotropic spin orientations (\emph{top}) and spin orientations from the `cold' simulations of Dotti et al.~(2009) (\emph{middle}). The bottom panel shows the black hole occupation fraction when the recoil is arbitrarily set to zero at each MBH merger. Right: binary MBH frequency for the same models and mass thresholds.} \label{fig:OF} \end{figure} \section{Discussion} Ejection of MBHs may explain the unusual case of NGC~3115. NGC~3115 is an S0 galaxy with a MBH measured to have mass $M=1\times10^9\,{\rm M_\odot}$ \citep{kr92}, yet the surface-brightness profile of the central bulge has a small break radius $r_b\sim2\units{pc}$ with a steep inner slope consistent with a power law of index $\gamma=0.52$ \citep{laueretal07b}. This contradicts theoretical expectations for a galaxy of this size ($M_{h} \approx 10^{13}\,{\rm M_\odot}$). A galaxy as massive as NGC~3115 is expected to have merged frequently enough to have had a binary black hole at some point in its history \citep{vmh03}. According to the standard picture of core scouring \citep{bbr80, 1997AJ....114.1771F, 2003ApJ...596..860M, laueretal07b}, the binary black hole would eject stars on elongated orbits as orbital energy is transferred from the binary to kinetic energy in the stars. This process shrinks the binary separation and flattens the stellar number density profile, so that the surface brightness profile appears as a core at the center. The galaxy's core can be described by its deficit in stellar mass, i.e., the mass in stars ejected from what was previously a power-law profile.
The most recent numerical simulations of black hole mergers find that for nearly equal masses, the mass deficit should scale with the total mass of the binary \citep{2006ApJ...648..976M}; but for mass ratios far from unity, the mass deficit should scale with the mass of the secondary \citep{2008ApJ...686..432S}. Observationally, the mass deficit is found to scale linearly with existing $M_\mathrm{BH}$ \citep{laueretal07,2009ApJ...691L.142K}. Regardless of its magnitude, the mass deficit is expected to persist after the binary coalesces because the remnant black hole acts as a `guardian' that prevents dynamical refilling of the core \citep{laueretal07}. The observations of NGC~3115, however, reveal that this is not what happened. There is an existing MBH at its center that would have acted as the core's guardian, but the surface brightness profiles show no core. There are several possible explanations. First, core scouring may be greatly diminished if the final major merger had a significant amount of gas \citep{Kormendy2009}. The gas drag speeds up the process of MBH merger before the MBH binary can eject many stars. Even if core scouring occurred, a subsequent gas-rich merger could re-fill the core by depositing gas that would form stars. Here, we suggest another possibility. The final MBH binary is ejected by gravitational recoil \citep[or three-body encounters,][]{gmh04, gmh06,hl07}, but the MBH is replaced in a subsequent merger. Taking into consideration the typical mass ratios of merging MBHs as a function of galaxy bias and cosmic time, the probability that a MBH in NGC~3115 has been ejected at redshift $z<5$ is 10\% in the case of a `dry merger', but less than 1\% in the case of a `wet merger'. This is because at late cosmic times the binary mass-ratio distribution becomes shallower, with $q\ll1$ becoming less probable \citep{volonteri07}.
In general, in order to have a probability larger than 10\% that a MBH recoils with a velocity larger than the escape velocity of the progenitor of NGC~3115 at $z\simeq5$, the mass ratio of the merging MBHs must be $q>0.3$ for the isotropic and hot cases (assuming spins $\hat a=0.9$). For the cold case, even for $q=1$ and $\hat a=0.9$, the probability is 0.8\%. In the ejection scenario NGC~3115 should have had at least one major merger that involved two MBH-hosting galaxies where substantial star formation or AGN feedback kept the gas pressurized (corresponding to our hot simulations). \section{Summary} We assessed the effects of spin alignment on the strength of the gravitational recoil along the cosmic build-up of galaxies and MBHs. We determined ejection probabilities for mergers in gas-poor galaxies, where the MBH binary coalescence is driven by stellar dynamical processes, and the spin-orbit configuration is expected to be isotropically distributed. We contrast this with the case of gas-rich mergers, where we expect MBH spins to align with the orbital angular momentum. This is because in gas-rich environments MBHs accrete gas, which exerts gravito-magnetic torques that align the spins of both MBHs with the angular momentum of the large-scale gas flow. We find that for aligned configurations the ejection probability is strongly suppressed (by at least a factor of 2). Along the merger history of a large elliptical, the ejection probability becomes negligible at $z<5$, while small galaxies have ejection probabilities of order 20\% even today. However, the MBH merger rate is very low along the merger hierarchy of small galaxies: MBH formation processes are likely inefficient in such shallow potential wells. The occupation fraction of MBHs, intimately related to halo bias and MBH formation efficiency, therefore plays a crucial role in increasing the retention fraction.
Recoils can effectively decrease the overall frequency of MBHs in small galaxies to $\sim60$\%, while they have little effect on the frequency of MBHs in large galaxies (at most a 20\% effect). \section*{Acknowledgements} MV acknowledges support from a Rackham faculty grant.
\section{Introduction} \label{s1} The main idea of this paper is to present an elementary practical classification of electromagnetic fields (see also \cite{Mitsk06a}) equally applicable in both relativities and having deep physical roots. In fact, there already exist classifications of stress-energy tensors (essentially, those of Segr\`e and Pleba\'nski, \cite{Pleb,Segre,Exact2}) and, specifically, of electromagnetic fields \cite{SyngeSR}. However, one does not encounter there a direct relation to such field properties as the Poynting vector and the velocities of propagation of concrete configurations of these fields (in particular, with respect to the reference frames co-moving with the corresponding fields when these velocities are $<c=1$ in the natural units used here; for a detailed discussion of the Li\'enard--Wiechert example see our recent article \cite{Mitsk06a} and a preprint with complete simple deduction of that solution \cite{Mitsk05}). Now, our aim is also to give here natural and simple prescriptions for calculation of such physical characteristics of every concrete electromagnetic field ({\it cf.} \cite{MisWh,Rain1,Whee}). We consider electromagnetic fields in a vacuum (but electric currents may be present, not the media with dielectric and magnetic properties), this theory being treated here on the classical (not quantum theoretical) basis. The state of motion (in general, inhomogeneous) of the electromagnetic field, which we call its propagation, is that of a concrete (preferably, exact) solution of the system of dynamical equations (of Maxwell in special theory and Einstein--Maxwell in general theory of relativity), but not the propagation of perturbations on the background of these solutions, and not the propagation of discontinuities, which belong to other problems of the field theory not considered in this paper (their treatment is already well developed and does not need immediate revision). 
Any motion is, naturally, relative (that occurring with the velocity of light is also relative, at least in the sense of its direction: the light aberration effect), while motions with under-luminal velocities always permit us to introduce co-moving reference frames with respect to which the fields are ``at rest'' everywhere in the four-dimensional region of the frame determination, which includes the requirement that redistributions of the field (``deformations'') should be ``caused'' by deformation, acceleration and rotation of the respective co-moving reference frame, and {\it vice versa}. In this connection, we recall that the physical reference frame is an idealized image of a distribution and motion of observers, changing with time (we do not make this concept more precise here), together with their observational and measuring devices, idealized primarily in the sense of being test objects, {\it i.e.} they should not practically perturb the characteristics of the objects (including fields and spacetime) in the classical (non-quantum) theory. We do not touch here upon the problem of the quantum theoretical description of reference frames: this question still seems not to be adequately considered in physics, although the first step in this direction was already made by Bohr and Rosenfeld \cite{BohrRos}. \subsection{A preview of the paper's structure} \label{s1.1} The reader will notice that in this paper the original conclusions are enlarged with a modernized review of already known facts to make the exposition more self-sufficient. In the next section the notations and definitions used in this paper are given. In section \ref{s3} we present condensed information on the description of reference frames and its application to electromagnetic fields. The short section \ref{s4} is dedicated to the classification of these fields in terms of their two invariants, $I_1$ and $I_2$.
In section \ref{s5} we introduce the concept of the propagation velocity of electromagnetic fields in a vacuum, and it is shown that the absolute value of the three-velocity of all pure null fields is equal to unity. The pure electromagnetic fields (when $I_2=0$) are considered in section \ref{s6}, yielding for the pure electric and magnetic types a simple elimination of the magnetic or electric field, respectively, by the corresponding choices of reference frame (subsection \ref{s6.1}), and a specific r\^ole of the Doppler effect (with its inevitable generalization) for the pure null type in subsection \ref{s6.2}. An approach to constructing exact Einstein--Maxwell solutions in the same 4-geometry as that of any exact seed Einstein--Maxwell solution one would arbitrarily choose (with the exception of pure null fields) is developed in sections \ref{s7} and \ref{s8}. Section \ref{s9} is dedicated to the treatment of the impure subtype ($I_2\neq 0$) of all three types of fields, leading to the parallelization of the electric and magnetic vectors in the adequate (canonical) reference frames. In sections \ref{s10} and \ref{s11} we consider the application of the methods developed in the preceding sections to some exact solutions of the Einstein--Maxwell equations in general relativity and Maxwell's equations in special relativity: in subsection \ref{s10.1}, to the well-known Kerr--Newman (KN) solution (involving, as we show, in different regions different types of electromagnetic field whose electric and magnetic vectors are always collinear and radially directed), and in subsection \ref{s11.1}, to the Li\'enard--Wiechert solution (we show that it belongs to the pure electric type and that there always exists a global non-degenerate reference frame co-moving with this electromagnetic field, so that the velocity of propagation of this field in a vacuum is everywhere less than that of light, with the exception of the future null infinity).
In subsection \ref{s10.2} ``new'' exact solutions with pure electric and pure magnetic fields in the standard Kerr-Newman black hole geometry are presented, together with a similar black hole with impure-null-type electromagnetic field. In subsection \ref{s11.2}, it is shown that a superposition of plane harmonic electromagnetic wave and homogeneous magnetic field has strictly sub-luminal velocity of its propagation in a vacuum. In the final section \ref{s12} the obtained results are summed up and concluding remarks are given. \section{Mathematical preliminaries} \label{s2} \setcounter{equation}{0} Everything will be considered in four spacetime dimensions. We use the spacetime signature $+,-,-,-$, Greek indices being four-dimen\-sional (running from 0 to 3), and Latin ones, three-dimensional, with the Einstein convention of summation over dummy indices. However, in the reference frame formalism, all indices usually are Greek, and the splitting into physical spacelike and timelike objects only means that the former ones are in all free indices orthogonal to the timelike monad vector (projected onto the physical three-space of the reference frame), while the physical timelike parts represent contractions with the monad in the indices which hence become absent (a change of the root-letter notation is then advisable). The indices put into individual parentheses belong to tetrad components. For the sake of convenience and writing and reading economy, the Cartan exterior forms formalism is frequently used. In it, the coordinated basis is the set of four covectors (1-forms) $dx^0, \dots,dx^3$, and the orthonormal tetrad basis similarly is $\theta^{(0)}, \dots,\theta^{(3)}$. Every such basis 1-form, ({\it e.g.}, $dx^2$, $\theta^{(3)}$), itself represents an individual four-dimensional covector. 
The exterior (wedge) product simply is a skew-symmetrized tensorial product (antisymmetrization is also denoted by Bach's square brackets which embrace the indices, while factor $\frac{1}{(\textnormal{their number})!}$ is supposed to be included in this definition). It is clear that the rank of a form can be from 0 (a scalar) to 4, inclusively; all forms of higher ranks vanish identically (in $D=4$). The scalar product of four-vectors or covectors is denoted by a central dot, if these vectors are written without indices ($A\cdot B$), but {\it with} such indices this has a wider meaning, for example, $dx^\mu\cdot dx^\nu=g^{\mu\nu}$: literally, this means that the scalar product of two coordinated-basis covectors equals a contravariant component of the metric tensor with the same indices as those of these factors. The dual conjugation in the sense of components (their indices) is denoted by an asterisk over the corresponding subindices, or under upper indices; the Hodge star stands for dual conjugation of a form written more abstractly, and is denoted by an asterisk before the form; it is convenient to have in mind that, after all, it applies to the form's basis, though this is equivalent to a similar dual conjugation of the form's components ({\it not both at once!}). An application of a pair of Hodge stars does not change an odd-rank form and results in the change of the sign of an even-rank form (for example, the electromagnetic field 2-form $F$). 
By this {\it definition}, \begin{equation} \label{dual} \ast(dx^{\alpha_1} \wedge\dots\wedge dx^{\alpha_p}):=\frac{1}{(4-p)!}{E^{\alpha_1 \dots\alpha_p}}_{\beta_1\dots\beta_{4-p}}(dx^{\beta_1}\wedge\dots \wedge dx^{\beta_{4-p}}) \end{equation} for a basis $p$-form, where \begin{equation} \label{E} E_{\kappa\lambda\mu \nu}:=\sqrt{-g}\epsilon_{\kappa\lambda\mu \nu}, ~ ~ E^{\kappa \lambda \mu\nu}:=-\frac{1}{\sqrt{-g}}\epsilon_{\kappa\lambda\mu \nu} \end{equation} are covariant and contravariant components of the axial Levi-Civit\`a tensor, and the usual Levi-Civit\`a symbol is defined as \begin{equation} \label{LC} \epsilon_{\kappa\lambda\mu \nu}= \epsilon_{[\kappa\lambda\mu \nu]}, ~ ~ \epsilon_{0123}=+1 \end{equation} (always with the {\it sub}\,indices: this is a {\it symbol}, though simultaneously representing components of a contravariant axial tensor density of the weight $-1$ {\it and} a covariant axial tensor density of the weight $+1$). See some details in the beginning of the introductory chapter of \cite{Mitsk06}. Finally, coming back to a formula at the end of the second paragraph of this section, we have $\ast(dx^\mu\wedge\ast dx^\nu)=-dx^\mu\cdot dx^\nu=-g^{\mu\nu}$. Of course, the r\^ole of metric properties of spacetime is somewhat hidden in the Hodge notations, as one can see from the formulae (\ref{dual}) and (\ref{E}). \section{Algebra of reference frames; applications to the electromagnetic field} \label{s3} \setcounter{equation}{0} The central point of our paper is the use of algebraic considerations, other applications being here only of auxiliary significance, for example, the exterior differentiation operator $d=\theta^{(\alpha)} \nabla_{X_{(\alpha)}}\wedge\equiv dx^\alpha\nabla_{\partial_\alpha}\wedge$.
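As a quick numerical check of the rule stated above, that a pair of Hodge stars flips the sign of an even-rank form, the following Python sketch (our own illustration, not part of the formalism) applies the component dual of (\ref{dual}) twice to a generic 2-form in inertial Minkowski coordinates, where $\sqrt{-g}=1$ and $E_{\kappa\lambda\mu\nu}=\epsilon_{\kappa\lambda\mu\nu}$:

```python
import itertools
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric; its own inverse

# Levi-Civita symbol with eps[0, 1, 2, 3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    inversions = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1.0) ** inversions

def dual(F):
    """Component dual (*F)_{mu nu} = 1/2 E_{mu nu alpha beta} F^{alpha beta}."""
    F_up = eta @ F @ eta                  # raise both indices
    return 0.5 * np.einsum("mnab,ab->mn", eps, F_up)

# A generic antisymmetric 2-form in component form
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
F = A - A.T

assert np.allclose(dual(dual(F)), -F)     # ** = -1 on 2-forms (Lorentzian)
```

Raising the indices with the metric automatically supplies the relative minus sign between the covariant and contravariant Levi-Civit\`a components in (\ref{E}).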
In the physical sense, a concrete reference frame (see \cite{Mitsk06}) has only to do with a state of motion (a timelike world lines' congruence, or, equivalently, its unitary tangent vector field, the monad $\tau$) of a swarm of test observers together with their test measuring devices. Moreover, one additional ingredient, the metric tensor $g$, is needed to construct the projector $b:=g-\tau\otimes\tau$ which at the same time serves as the (formally, four-dimensional) metric tensor on the three-dimensional local subspace orthogonal to the monad field $\tau$; $b_{\mu\nu}\tau^\nu\equiv 0$, $\det b\equiv 0$. $b$ has the signature $0,-, -,-$, so that the ``three-dimensional'' scalar product of two vectors is \begin{equation} \label{bullet} A\bullet B:=- b_{\alpha\beta}A^\alpha B^\beta\equiv\ast[(\tau\wedge A)\wedge\ast (\tau\wedge B)] \end{equation} where these vectors are also automatically projected onto the local subspace mentioned above. If such vectors did already belong to the subspace, they usually are boldfaced: $\mathbf{A}^\mu =b^\mu_\nu A^\nu$. The ``three-dimensional'' axial vector product of two vectors now reads \begin{equation} \label{times} A\times B:=\ast(A\wedge\tau\wedge B). \end{equation} These algebraic operations are locally equivalent to the usual three-dimensio\-nal scalar and vector products, so we denote them by essentially the same symbols. In fact, in the complete reference frame theory we similarly use the operations of gradient, divergence and curl, but, being differential operators, they are more profoundly generalized, explicitly taking into account the characteristics of inhomogeneities of general reference frames, such as acceleration, rotation and deformation (expansion and shear) which naturally cannot be present in the algebraic treatment of geometry. 
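In inertial Minkowski coordinates the algebra of the projector $b$ and of the product (\ref{bullet}) is easy to exercise numerically. The sketch below (our own illustration; the helper names are assumptions) checks that $b$ annihilates the monad and that, for the rest-frame monad, the product reduces to the ordinary Euclidean dot product:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])     # signature (+,-,-,-)

def monad(v):
    """Unit timelike vector of an observer with three-velocity v (|v| < 1)."""
    v = np.asarray(v, float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    return gamma * np.array([1.0, *v])

def projector(tau):
    """b_{mu nu} = g_{mu nu} - tau_mu tau_nu (indices lowered with eta)."""
    tau_low = eta @ tau
    return eta - np.outer(tau_low, tau_low)

def dot3(tau, A, B):
    """Three-dimensional scalar product: -b_{mu nu} A^mu B^nu."""
    return -A @ projector(tau) @ B

tau = monad([0.3, 0.0, 0.4])
assert np.allclose(projector(tau) @ tau, 0.0)   # b tau = 0 identically
# For the rest-frame monad the product is the Euclidean dot product;
# time components are projected away:
tau0 = monad([0.0, 0.0, 0.0])
A = np.array([5.0, 1.0, 2.0, 3.0])
assert np.isclose(dot3(tau0, A, A), 1.0 + 4.0 + 9.0)
```

The same projection makes boldfaced quantities such as $\mathbf{A}^\mu=b^\mu_\nu A^\nu$ automatically orthogonal to $\tau$.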
The uniformity of general and old traditional notations radically simplifies the physical interpretation of general- and (in non-inertial frames) special-relativistic expressions as well as of theoretically predicted effects. Electromagnetic fields are described with the use of the covector potential $A=A_\alpha dx^\alpha$ and the 2-form (the field tensor) \begin{equation} \label{F2form} F=dA=\frac{1}{2}F_{\alpha\beta}dx^\alpha\wedge dx^\beta. \end{equation} With respect to a given reference frame $\tau$ (see \cite{Mitsk06}), the field tensor splits into two four-dimensional (co)vectors, electric \begin{equation} \label{elE} \textnormal{{\bf E}}_\mu=F_{\mu\nu} \tau^\nu ~ ~ \Longleftrightarrow ~ ~ \textnormal{{\bf E}}= \ast(\tau \wedge\ast F) \end{equation} and magnetic \begin{equation} \label{magB} \textnormal{{\bf B}}_\mu=-F\!\!\stackrel{\textnormal{\small$\ast$} }{\textnormal{\scriptsize$\mu\nu$}}\!\tau^\nu ~ ~ \Longleftrightarrow ~ ~ \textnormal{{\bf B}}=\ast(\tau\wedge F), \end{equation} both $\perp\tau$, thus \begin{equation} \label{FBE} F=\mathbf{E}\wedge\tau+ \ast(\mathbf{B}\wedge\tau). \end{equation} It is obvious that {\bf E} is a polar four-vector and {\bf B}, an axial four-vector, both restricted to the local physical three-subspace of the $\tau$-reference frame. (In Cartesian coordinates and with the corresponding inertial monad, consequently, in the Minkowskian spacetime, we have the same relations as for usual contravariant three-vectors: $ \textnormal{{\bf E}}^i=F_{i0}=-F^{i0}, ~ ~ \textnormal{{\bf B}}^i =-\frac{1}{2} \epsilon_{ijk}F_{jk}=-\frac{1}{2} \epsilon_{ijk}F^{jk}$.) The splitting (\ref{elE}), (\ref{magB}), (\ref{FBE}) follows from the observation that the Lorentz force can be expressed as \begin{equation} \label{LorForce} (\textnormal{{\bf E}}+\textnormal{{\bf v}}\times \textnormal{{\bf B}})_\alpha=F_{\mu\nu}\left(\tau^\nu+ \textnormal{{\bf v}}^\nu\right) b^\mu_\alpha. 
\end{equation} Here the three-velocity of the charged particle on which the Lorentz force acts follows from the general definition \begin{equation} \label{v} u= \stackrel{(\tau)}{u}(\tau+\mathbf{v}) ~ \Rightarrow ~ \textnormal{\bf v}=b(\frac{dx}{dt},\cdot) \end{equation} where $\stackrel{(\tau)}{u}=u\cdot\tau=\frac{dt}{ds}=(1-v^2)^{-1/2}$, while $dt= \tau_\mu dx^\mu$ ($\tau\cdot dx$, that is, the non-total differential of the physical time along an infinitesimal displacement of the particle in spacetime), and $u$ is its four-velocity. We have to add here important comments related to the basic concepts of both relativities, and these comments are most transparent with the three-velocity as an intuitively clear example. The ``physical'' objects (such as $\mathbf{v}$, $\mathbf{E}$, {\it etc.}) belong to the section orthogonal to the $\tau$ with the use of which these objects are introduced. Thus, already in special relativity, the velocities considered in the composition law may exist in three distinct sections of spacetime, since three frames participate in the composition, and it is obvious that one cannot simply add vectors from two subspaces and obtain a third vector automatically lying in the third subspace, with all of them having the necessary properties with respect to their respective frames. Moreover, in the composition law the three-vectors being added together are frequently ``collinear'' (an absurdity if they belong to non-parallel sections of spacetime). This is, of course, understandable, since Einstein himself did not realize the fact of the unification of space and time into the four-dimensional manifold before the famous discovery of Minkowski in 1908 (and even during several years after this discovery).
The only two electromagnetic invariants being important in the Einstein--Maxwell theory can be easily introduced: \begin{equation} \label{I1} I_1=-2\ast(F\wedge\ast F)=F_{\mu\nu}F^{\mu\nu}=2 \left(\mathbf{B}^2-\mathbf{E}^2 \right), \end{equation} \begin{equation} \label{I2} I_2=2\ast(F\wedge F)=F\!\stackrel{ \textnormal{\small$\ast$}}{ \textnormal{\scriptsize$\mu\nu$}} F^{\mu\nu}=4 \textnormal{{\bf E}}\bullet\textnormal{{\bf B}}. \phantom{aaaaaa.} \end{equation} These invariants enter the following important identities: \begin{equation} \label{crafty} F_{\mu\nu}F^{\lambda\nu}-F\!\stackrel{ \textnormal{\small$\ast$}}{\textnormal{\scriptsize$\mu\nu$}} F\!\stackrel{ \textnormal{\scriptsize$\lambda\nu$}}{ \textnormal{\small$\ast$}}=\frac{1}{2}I_1\,\delta^\lambda_\mu, ~ ~ ~ F\!\stackrel{\textnormal{\small$\ast$}}{ \textnormal{\scriptsize$\mu\nu$}}F^{\lambda\nu}=\frac{1}{4}I_2 \,\delta^\lambda_\mu. \end{equation} In fact, $I_2$ is an axial (pseudo-) invariant whose square behaves as a usual scalar. The electromagnetic stress-energy tensor is \cite{Mitsk58,Mitsk06} \begin{equation} \label{Tmunu} T^\nu_\mu=\frac{1}{4\pi}\left(\frac{1}{4}F_{\kappa\lambda} F^{\kappa\lambda} \delta^\nu_\mu-F_{\mu\lambda}F^{\nu\lambda} \right)=-\frac{1}{8\pi}\left( F_{\mu\lambda}F^{\nu\lambda}+F\! \stackrel{\textnormal{\small$\ast $}}{\textnormal{\scriptsize$\mu \lambda$}} F\!\stackrel{ \textnormal{\scriptsize$\nu\lambda $}}{\textnormal{\small$\ast$}}\right) \end{equation} (in Gaussian units). 
Its (single) contraction with an arbitrary monad yields the electromagnetic energy density and the Poynting vector in that frame, \begin{equation} \label{Ttau} T^\nu_\mu \tau_\nu=\frac{1}{8\pi}\left[\left (\textnormal{{\bf E}}^2+ \textnormal{{\bf B}}^2\right)\tau_\mu+2 (\textnormal{{\bf E}} \times\textnormal{{\bf B}})_\mu\right], \end{equation} and the squared expression is (see (\ref{crafty}) and {\it cf.} \cite{Whee,Rain1}) \begin{multline} \label{Ttausq} T^\nu_\mu T^\mu_\xi\tau_\nu \tau^\xi=\frac{1}{(8\pi)^2}\left[\left(\textnormal{{\bf E}}^2+ \textnormal{{\bf B}}^2\right)^2-4(\textnormal{{\bf E}}\times \textnormal{{\bf B}})^2\right] \\ \equiv\frac{1}{(8\pi)^2}\left[\left(\textnormal{{\bf B}}^2- \textnormal{{\bf E}}^2\right)^2+4(\textnormal{{\bf E}}\bullet \textnormal{{\bf B}})^2\right]=\frac{1}{(16\pi)^2}\left({I_1}^2+ {I_2}^2\right). \end{multline} It is interesting that these constructions are not only scalars under transformations of coordinates, but are also independent of the choice of reference frame: the right-hand side does not involve the monad at all. \section{A classification of electromagnetic fields} \label{s4} \setcounter{equation}{0} The simple and exhaustive classification of electromagnetic fields is based on the existence of only two invariants, (\ref{I1}) and (\ref{I2}), built with the field tensor $F_{\mu\nu}$, while all other invariants are merely algebraic functions of these two (if they do not vanish identically). Since $I_2$ itself is a pseudo-invariant (axial scalar), which acquires the factor sign$(J):=J/|J|$ under a general transformation of coordinates, $J$ being its Jacobian, the concrete sign of $I_2$ does not matter in our classification. In terms of $I_1$ the invariant classification suggests three types of fields: the electric type with $I_1<0$ (the electric field dominates), the magnetic type with $I_1>0$, and the null type with $I_1=0$.
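This three-way classification is straightforward to express algorithmically; the following sketch (in Python with numpy, given only as an illustration and not as part of the formalism) classifies a field by the sign of $I_1$ computed from {\bf E} and {\bf B} in any fixed frame:

```python
import numpy as np

def field_type(E, B):
    """Classify by the sign of I1 = 2(B^2 - E^2): electric, magnetic, or null."""
    I1 = 2.0 * (B @ B - E @ E)
    if np.isclose(I1, 0.0):
        return 'null'
    return 'electric' if I1 < 0 else 'magnetic'

# the electric field dominates, I1 < 0:
assert field_type(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])) == 'electric'
# the magnetic field dominates, I1 > 0:
assert field_type(np.array([0.5, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])) == 'magnetic'
# |E| = |B|, the null type:
assert field_type(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])) == 'null'
```

The subtype (pure or impure) would additionally require the sign test $I_2=4\,\mathbf{E}\bullet\mathbf{B}$ introduced below.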
The pseudo-invariant $I_2$ permits us to refine the classification: we get additional subtypes, impure ($I_2\neq 0$) and pure ($I_2=0$). Below we shall see how this classification enables us to find the reference frames most adequately suited to the description of concrete electromagnetic fields, and even to construct new exact solutions of Einstein--Maxwell's equations. It also gives a natural basis for a straightforward physical interpretation of these fields. \section{Propagation of electromagnetic fields} \label{s5} \setcounter{equation}{0} In considering the propagation of the electromagnetic field, we do not include the high-frequency limits related to field discontinuities (bicharacteristics). The Poynting vector plays an important r\^ole in electrodynamics, carrying two distinct meanings: that of the energy density flow and, due to the symmetry of the electromagnetic energy-momentum tensor, that of the linear momentum density (in natural units velocity is dimensionless and that of light in a vacuum is $c=1$). The physical interpretation of the Poynting vector deserves further comment. It does not always describe a propagation of extractable energy of the field, or even a real motion (see also \cite{SyngeHerm}); the exception here is the special case of static and stationary fields (whose frequency is zero). Thus the Poynting vector, together with the electromagnetic energy density, determines (sometimes formally) the propagation three-velocity of the electromagnetic field with respect to the reference frame in which the expression (\ref{Ttau}) is given. We take this velocity according to Landau and Lifshitz \cite{LanLif} (see the problem on p. 69) as \begin{equation} \label{emprop} \frac{\mathbf{v} }{1+\mathbf{v}^2}=\frac{\mathbf{E}\times\mathbf{B} }{\mathbf{E}^2 +\mathbf{B}^2} \end{equation} (an alternative definition is given in \cite{Pauli}, p.
115, formula (312), \begin{equation} \label{Pauli} \mathbf{v}=2 \frac{\mathbf{E}\times\mathbf{B} }{\mathbf{E}^2 +\mathbf{B}^2}, \end{equation} but this definition is incorrect, as can be seen from subsection \ref{s11.2} below). The preceding pages of \cite{LanLif} contain an interesting discussion of electromagnetic invariants which is worth noting. From (\ref{Ttausq}) and (\ref{emprop}) we see that \begin{equation} \label{modv} 0\leq\frac{|\mathbf{v}|}{1+\mathbf{v}^2}= \frac{1}{2} \sqrt{1-\frac{{I_1}^2+{I_2}^2}{4(\mathbf{E}^2+ \mathbf{B}^2)^2}}=\frac{|\mathbf{E}||\mathbf{B}|}{\mathbf{E}^2+ \mathbf{B}^2}|\sin\psi|\leq\frac{1}{2}, \end{equation} $\psi$ being the angle between {\bf E} and {\bf B} in the strict local Euclidean sense; moreover, the function $|\mathbf{v}|/(1+\mathbf{v}^2)$ is everywhere monotonic. In particular, this means that the propagation of all pure null fields ($|\mathbf{E}|=|\mathbf{B}|$, $\psi=\pi/2$) occurs with unit absolute value of the three-velocity, the velocity of light, while all other electromagnetic fields propagate with sub-luminal velocities which can always be made equal to zero in the corresponding co-moving reference frames. This is a general-relativistic conclusion, only expressed in the three-dimensional notations characteristic of the general reference frame theory. \section{Dealing with pure electromagnetic fields} \label{s6} \setcounter{equation}{0} Pure electromagnetic fields represent the simplest cases, especially in the non-null types, for which there always exist reference frames in which either the magnetic or the electric field can easily be eliminated. The pure null type requires a more thorough examination involving the Doppler effect (here, its generalized counterpart), which we discuss below in more detail.
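The chain of equalities in (\ref{modv}), as well as the sub-luminal bound, can be checked numerically; in the sketch below (Python with numpy, an illustration only) the quadratic relation (\ref{emprop}) is also solved for the sub-luminal root $|\mathbf{v}|\leq 1$:

```python
import numpy as np

def propagation_ratio(E, B):
    """|v|/(1+v^2) for the field propagation velocity, eq. (emprop)."""
    S = np.cross(E, B)                       # Poynting direction (up to 1/4pi)
    return np.linalg.norm(S) / (E @ E + B @ B)

def ratio_from_invariants(E, B):
    """Right-hand side of eq. (modv), via I1 and I2."""
    I1 = 2.0 * (B @ B - E @ E)
    I2 = 4.0 * (E @ B)
    w2 = (E @ E + B @ B) ** 2
    return 0.5 * np.sqrt(1.0 - (I1 ** 2 + I2 ** 2) / (4.0 * w2))

def velocity(E, B):
    """Solve r = v/(1+v^2) for the sub-luminal root, 0 <= v <= 1."""
    r = propagation_ratio(E, B)
    if r == 0.0:
        return 0.0
    return (1.0 - np.sqrt(1.0 - 4.0 * r ** 2)) / (2.0 * r)

# a pure null configuration propagates with the velocity of light:
E = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
assert np.isclose(velocity(E, B), 1.0)
# generic fields: both sides of (modv) agree and obey the bound 1/2
E2, B2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 3.0])
assert np.isclose(propagation_ratio(E2, B2), ratio_from_invariants(E2, B2))
assert propagation_ratio(E2, B2) < 0.5
```

The sub-luminal branch of the quadratic is the physical one; the other root is its reciprocal.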
\subsection{Pure electric and magnetic type fields} \label{s6.1} Vanishing of the second invariant, $I_2$, means that the electromagnetic field tensor, or its dual conjugate, is a simple bivector in all reference frames (the second of the two necessary and sufficient conditions is the four-dimensionality of the manifold under consideration), thus \begin{equation} \label{simbiv} F=U \wedge V \textnormal{ or } \ast F=P\wedge Q, \end{equation} $U$, $V$, $P$, and $Q$ being four-(co)vectors. In the first case, \begin{equation} \label{FF} I_1=2\left((U \cdot U)(V\cdot V)-(U\cdot V)^2\right) \end{equation} is obviously negative if one of these vectors is timelike (say, $U$) and the other spacelike ($V$), so that $F$ pertains to the pure electric type (or, similarly for $\ast F$, to the pure magnetic type; see also the alternative case considered in subsection \ref{s11.1}, where the vector $U=R$ is null). Normalizing the timelike $U$ to unity (the extra coefficient may be included in $V$), we can take the normalized $U$ as a new monad in the choice of reference frame and immediately see that in this frame the magnetic vector automatically vanishes. It remains only to show that our supposition (timelike $U$ and spacelike $V$) is sufficiently general; this is easily proven using the substitution $V\Longrightarrow V+aU$, which does not change $F$. Similarly we treat the problem of eliminating the electric field in the pure magnetic case, using $\ast F$ in (\ref{simbiv}). \subsection{Pure null type fields and the Doppler effect} \label{s6.2} Pure null type fields have both invariants equal to zero, but the fields themselves remain non-trivial in any non-degenerate system of coordinates, as well as in any realistic reference frame (here, in the sense of {\bf E} and {\bf B} simultaneously), although these three-vectors do transform under changes of reference frames, and their components transform under transformations of coordinates.
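Equation (\ref{FF}) and the invariance of $F$ under the regauging $V\Longrightarrow V+aU$ admit a quick numerical check (Python with numpy; the sample vectors are arbitrary picks, $U$ timelike and $V$ spacelike):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)

def wedge(U, V):
    """Covariant components of the simple bivector F = U ^ V."""
    return np.outer(U, V) - np.outer(V, U)

def I1_of(F):
    """F_{mu nu} F^{mu nu}, indices raised with eta."""
    return np.sum(F * (eta @ F @ eta))

def dot(U, V):
    return U @ eta @ V

U = np.array([1.3, 0.1, -0.2, 0.4])         # timelike
V = np.array([0.2, 1.0, 0.7, -0.5])         # spacelike
F = wedge(U, V)
# eq. (FF): I1 = 2[(U.U)(V.V) - (U.V)^2]
assert np.isclose(I1_of(F), 2.0 * (dot(U, U) * dot(V, V) - dot(U, V) ** 2))
# the substitution V -> V + aU leaves F unchanged, since U ^ U = 0
assert np.allclose(F, wedge(U, V + 0.7 * U))
```

With this $U$ timelike and $V$ spacelike, $I_1<0$, as stated in the text.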
Under these transformations the monad $\tau$ should always remain timelike, and the Jacobian of the transformation of coordinates has to be non-zero and finite. As an example we consider in this subsection a special-relativistic plane electromagnetic wave in a vacuum ($k= \omega$) written in Cartesian coordinates, \begin{equation} \label{emw} \mathbf{E}=\{0,E \cos[\omega(x-t)],0\}, ~ ~ \mathbf{B}=\{0,0,E\cos[\omega(x-t)]\}, \end{equation} and apply to it the Lorentz transformation $t,\mathbf{r} \Rightarrow t',\mathbf{r}'$ with the three-velocity $\pm v$ in the positive/negative direction of the $x$ axis, using the well-known behaviour of {\bf E} and {\bf B} under this transformation. The resulting electromagnetic field then is \begin{equation} \label{emwnew} \mathbf{E}'=\{0,E'\cos[\omega'(x'-t')],0\}, ~ ~ \mathbf{B}'=\{0,0,E'\cos[\omega' (x'-t')]\} \end{equation} where \begin{equation} \label{Dopp} E'= \sqrt{\frac{1\mp v}{1\pm v}}E, ~ ~ \omega'=\sqrt{\frac{1\mp v}{1\pm v}}\omega. \end{equation} It is clear that the expression for $\omega'$ in (\ref{Dopp}) describes the longitudinal Doppler effect, while $E'$ gives the accompanying change of the wave intensity. Since the latter is an integral part of the longitudinal Doppler effect, we consider the complete expression (\ref{Dopp}) as its natural generalization; the description of the transversal Doppler effect has to be generalized in a similar way. It seems that this generalization of the Doppler effect is not encountered in physics textbooks. Nevertheless, it is generally used as an important hint in the interpretation of the well-known astrophysical phenomenon of ultrarelativistic particle jet pairs emitted by the cores of some galaxies (the jet moving away from the observer has not only a lower frequency but also a correspondingly lower intensity, so that this jet sometimes escapes observation).
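The factor in (\ref{Dopp}) follows directly from the standard transformation of {\bf E} and {\bf B} under a boost along the $x$ axis; a minimal numerical sketch (Python with numpy, assuming nothing beyond the textbook boost formulae):

```python
import numpy as np

def boost_fields(E, B, v):
    """Lorentz transformation of E, B for a boost with velocity v along x."""
    g = 1.0 / np.sqrt(1.0 - v ** 2)
    Ep = np.array([E[0], g * (E[1] - v * B[2]), g * (E[2] + v * B[1])])
    Bp = np.array([B[0], g * (B[1] + v * E[2]), g * (B[2] - v * E[1])])
    return Ep, Bp

# plane-wave amplitudes at a phase maximum: E along y, B along z, |E| = |B|
E0, v = 1.0, 0.6
Ep, Bp = boost_fields(np.array([0.0, E0, 0.0]), np.array([0.0, 0.0, E0]), v)
doppler = np.sqrt((1.0 - v) / (1.0 + v))    # the factor of eq. (Dopp)
assert np.isclose(Ep[1], doppler * E0)
assert np.isclose(Bp[2], doppler * E0)
```

The boosted field keeps the pure null form $|\mathbf{E}'|=|\mathbf{B}'|$, with both amplitude and frequency rescaled by the same factor.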
There is also a static ($\omega =0$) particular case of pure null electromagnetic fields involving mutually orthogonal constant vectors {\bf E} and {\bf B} (let us call it the ``Cartesian case''; its cylindrically symmetric analogue is used in some experiments involving electromagnetic fields with non-zero angular momentum without a genuine rotation). Such Cartesian pure null fields manifest only the intensity part of the Doppler effect, since in this case $\omega=0=\omega'$ in (\ref{Dopp}). Thus pure null electromagnetic fields can be adjusted to any non-zero and finite values of their intensity and frequency (only the ratio of frequency to intensity remains constant, provided the frequency was not equal to zero from the very beginning). A complete transformation away of initially non-trivial pure null fields is, however, impossible in any non-degenerate frame, representing only an asymptotic and not a real possibility. We considered above the case of a plane-polarized wave, but a similar approach works in the circular polarization case as well. \section{Duality rotation and electromagnetic fields with the same spacetime geometry} \label{s7} \setcounter{equation}{0} Let us introduce a new electromagnetic field tensor (2-form) \begin{equation} \label{calF} \mathcal{F}=(k+l\ast)F, ~ ~ \ast\mathcal{F}=(k\ast- l)F \end{equation} ($\ast$ is the Hodge star), where $F$ is the electromagnetic field tensor belonging to some given exact self-consistent Einstein--Maxwell solution, $k$ and $l$ being scalar functions to be determined below. We now impose the condition that the new field $\mathcal{F}$ has to produce the same energy-momentum tensor as follows from the old field $F$.
Since geometry is well determined by the energy-momentum tensor, from the Bianchi identities it then follows that the standard general relativistic Maxwell equations for both fields, old and new, will be equally satisfied if the old field has no electromagnetic sources, or the sources are localized at the singularity of the old and new fields where the standard classical theory is not applicable. The calligraphic letters will be used for all concomitants of the new electromagnetic field. Thus \begin{equation} \label{calF1} \mathcal{F}_{\mu \nu}=kF_{\mu\nu}+lF\!\!\stackrel{\textnormal{\small$\ast$} }{\textnormal{\scriptsize$\mu\nu$}}, ~ ~ \ast\mathcal{F}=k F\!\!\stackrel{\textnormal{\small$\ast$} }{\textnormal{\scriptsize $\mu\nu$}}- lF_{\mu\nu}, \end{equation} \begin{equation} \label{calEB} \mathcal{E}= k\mathbf{E}-l\mathbf{B}, ~ ~ ~ \mathcal{B}=k\mathbf{B}+l \mathbf{E}, \end{equation} $$ \mathcal{I}_1=\mathcal{F}_{\mu\nu }\mathcal{F}^{\mu\nu }=(k^2- l^2)I_1+2klI_2, ~ ~ \mathcal{I}_2=\mathcal{F}\!\!\stackrel{ \textnormal{\small$\ast$} }{\textnormal{\scriptsize$\mu\nu$}}\! \mathcal{F}^{\mu\nu}=(k^2-l^2 )I_2-2klI_1. $$ A simple calculation yields $\mathcal{T}^\mu_\nu= \frac{k^2+l^2}{4\pi} \left( \frac{1}{4}I_1\delta^\mu_\nu-F^{ \alpha\mu}F_{\alpha\nu}\right)=(k^2+l^2)T^\mu_\nu$ (the terms with $I_2$ cancel automatically for arbitrary $I_2$). Hence the coincidence of geometries created by the two fields is guaranteed iff \begin{equation} \label{lk} k=\cos\alpha, ~ ~ l=\sin\alpha, \end{equation} so that \begin{equation} \label{calI} \mathcal{I}_1=\cos2 \alpha\,I_1+\sin2\alpha\, I_2, ~ ~ \mathcal{I}_2=\cos2\alpha\,I_2-\sin2\alpha\,I_1. \end{equation} Duality rotation, how the ``transformation'' (\ref{calF}) with (\ref{lk}) is now interpreted (see \cite{MisWh}), does not change the 4-geometry compatible with the new electromagnetic field. This geometry remains the same as that created by the old field (see also \cite{Rain1,MisWh,Whee}). 
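The transformation laws just listed are easily verified in the three-vector language of (\ref{calEB}); the following sketch (Python with numpy, purely illustrative) also confirms that the energy density scales by $k^2+l^2$, so that the geometry is preserved iff $k^2+l^2=1$:

```python
import numpy as np

def rotate(E, B, k, l):
    """The new field of eq. (calEB): curly-E = kE - lB, curly-B = kB + lE."""
    return k * E - l * B, k * B + l * E

def invariants(E, B):
    """I1 = 2(B^2 - E^2), I2 = 4 E.B."""
    return 2.0 * (B @ B - E @ E), 4.0 * (E @ B)

rng = np.random.default_rng(1)
E, B = rng.normal(size=3), rng.normal(size=3)
k, l = 0.4, -1.7                             # arbitrary, not yet normalized
cE, cB = rotate(E, B, k, l)
I1, I2 = invariants(E, B)
J1, J2 = invariants(cE, cB)
# the transformation law of the invariants
assert np.isclose(J1, (k**2 - l**2) * I1 + 2*k*l * I2)
assert np.isclose(J2, (k**2 - l**2) * I2 - 2*k*l * I1)
# the energy density E^2 + B^2 scales by k^2 + l^2
assert np.isclose(cE @ cE + cB @ cB, (k**2 + l**2) * (E @ E + B @ B))
```

For $k=\cos\alpha$, $l=\sin\alpha$ the scaling factor is unity and the energy-momentum tensor is unchanged.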
The ``angle'' (complexion) $\alpha$ of the duality rotation is, of course, an axial scalar function of coordinates which we shall now concretely determine. \section{Construction of new Einstein--Maxwell solutions {\it via} duality rotation} \label{s8} \setcounter{equation}{0} First, let us see how restrictive the duality rotation is. The relations (\ref{calI}) lead to the general conclusion \begin{equation} \label{Pyth} {\mathcal{I}_1}^2+ {\mathcal{I}_2}^2={I_1}^2+{I_2}^2 \end{equation} from which it follows that the pure null property is invariant under the duality rotation, and it is impossible to obtain a pure null solution from any other type using this method. Together with the considerations of subsection \ref{s6.2}, this means that pure null fields differ sharply from all other electromagnetic fields. Now, let us see whether the pure subtypes ($\mathcal{I}_2=0 ~ \Rightarrow ~ {\mathcal{I}_1}^2={I_1}^2+{I_2}^2$) can be obtained from impure fields ($I_2\neq 0$). The second relation in (\ref{calI}) then yields \begin{equation} \label{Ialph1} \cot2\alpha=I_1/ I_2 \end{equation} which with the first relation gives \begin{equation} \label{pureI} \mathcal{I}_1=\frac{I_2 }{\sin2\alpha}= \frac{I_1}{\cos2\alpha} =\cos2\alpha\frac{{I_1}^2 +{I_2}^2}{I_1}= \sin2\alpha\frac{{I_1}^2+{I_2}^2}{I_2}. \end{equation} (In fact, we have to perform straightforward calculations for every concrete solution $F$ and see whether or not $I_2$ is sign-definite in the desired region. Though the latter possibility seems to be excluded by our initial supposition, that supposition could naturally be softened: the duality rotation should then reduce to the identity transformation at the {\it loci} where $I_2$ becomes equal to zero.)
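The purification condition (\ref{Ialph1}) and the relation (\ref{Pyth}) can be illustrated numerically (Python with numpy; the branch of $\alpha$ chosen below is one admissible option):

```python
import numpy as np

def invariants(E, B):
    return 2.0 * (B @ B - E @ E), 4.0 * (E @ B)

def purify(E, B):
    """Duality rotation with cot(2 alpha) = I1/I2, eq. (Ialph1)."""
    I1, I2 = invariants(E, B)
    alpha = 0.5 * np.arctan2(I2, I1)         # one admissible branch
    k, l = np.cos(alpha), np.sin(alpha)
    return k * E - l * B, k * B + l * E

E = np.array([1.0, 0.3, -0.2])
B = np.array([0.4, 1.1, 0.9])                # impure: E.B is non-zero
I1, I2 = invariants(E, B)
cE, cB = purify(E, B)
J1, J2 = invariants(cE, cB)
assert not np.isclose(I2, 0.0)
assert np.isclose(J2, 0.0)                    # the pure subtype is reached
assert np.isclose(J1 ** 2, I1 ** 2 + I2 ** 2) # eq. (Pyth) with curly-I2 = 0
```

Shifting $\alpha$ by $\pi/2$ flips the sign of $\mathcal{I}_1$, selecting the pure electric or pure magnetic outcome in accordance with the sign analysis below.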
From (\ref{Ialph1}) and (\ref{pureI}) we come to the following conclusions: if the new field has to be pure electric ($\mathcal{I}_1<0$), while $I_1<0$ and $I_2>0$, the ``angle'' $\alpha$ has to be such that $\sin2\alpha<0$ and $\cos2\alpha>0$; for $I_1<0$ and $I_2<0$, $\alpha$ has to give $\sin2\alpha>0$ and $\cos2\alpha>0$; for $I_1>0$ and $I_2>0$, there has to be $\sin2 \alpha<0$ and $\cos2\alpha<0$; for $I_1>0$ and $I_2<0$, $\sin2 \alpha>0$ and $\cos2\alpha<0$. Similarly, we determine the position of $\alpha$ for the pure magnetic new field. For the null (now impure) type of the new field ($\mathcal{I}_1=0 ~ \Rightarrow ~ {\mathcal{I}_2}^2={I_1}^2+{I_2}^2$) we have to use the first relation in (\ref{calI}) yielding \begin{equation} \label{Ialph2} \tan2 \alpha=-I_1/I_2. \end{equation} The second relation gives \begin{equation} \label{impurenull} \mathcal{I}_2=\frac{I_2}{\cos2\alpha}= \frac{I_1}{\sin2\alpha}, ~ ~ etc; \end{equation} the procedure of determination of the position of $\alpha$ is the same as above, here only $\mathcal{I}_2\neq 0$, and both signs of $\mathcal{I}_2$ are equally admissible. It is clear that we can perform the inverse duality rotation in all these cases (in particular, coming from the impure null to pure electric or magnetic type fields). Thus the pure and impure electric and magnetic types form together with the impure null type a mutually ``transformable'' ({\it via} duality rotation) group of electromagnetic fields disconnected from the pure null type. \section{Impure electromagnetic fields: parallelizing of {\bf E} and {\bf B}} \label{s9} \setcounter{equation}{0} In this section we again use the classification of electromagnetic fields in two senses: in the proper one, {\it i.e.} with respect to $F$, and, simultaneously, in the sense of the new field $\mathcal{F}$ introduced in (\ref{calF}), but we now look for information received from $\mathcal{F}$ about the old field $F$. 
It is already clear that when $F$ is impure (electric, magnetic, or null), the field $\mathcal{F}$ can be chosen as pure (electric or magnetic in every one of these cases). An interesting feature here is that the reference frame in which only one field (electric or magnetic in the sense of $\mathcal{F}$) survives is precisely that in which {\bf E} and {\bf B} following from $F$ are mutually parallel, and the parallelization procedure reduces completely to the determination of this canonical frame (say, $\tau'$). Thus, while the relations \begin{equation} \label{pure} \mathcal{E}\bullet \mathcal{B}\equiv \mathcal{E}'\bullet\mathcal{B}'=0 \end{equation} are frame-invariant (the field $\mathcal{F}$ is chosen as belonging to the pure subtype), the property $\mathbf{E}'\parallel\mathbf{B}'$ is realized only in the canonical frame where either $\mathcal{E}'$ or $\mathcal{B}'$ vanishes. The field $\mathcal{F}$ may be considered a merely auxiliary one (essentially, in special relativity, where its r\^ole in the generation of the gravitational field is neglected), so that the parallelization procedure may then be managed even without the strict duality rotation with the ``angle'' $\alpha$ and the relation (\ref{Pyth}), simply taking, {\it e.g.}, \begin{equation} \label{calFk} \mathcal{F}=(1+k\ast)F. \end{equation} The calculations following from this ansatz are simple, but somewhat cumbersome, and we omit them, especially since they will not be used in this paper. \section{Examples in general relativity} \label{s10} \setcounter{equation}{0} In this section we consider two particular electromagnetic fields self-consis\-tent\-ly sharing one and the same four-dimensional geometry: in the first subsection, the standard Kerr--Newman (KN) rotating charged black hole, and in subsection \ref{s10.2}, its generalizations to black holes created by specific mixtures of electric charge and magnetic monopole distributions rotating as the KN singular ring.
The first example corresponds to an impure electromagnetic field, while the next three belong to the pure electric, pure magnetic, and impure null types, thus representing new black-hole exact solutions of the Einstein--Maxwell equations. The first two new solutions admit reference frames in which there is no magnetic or no electric field in the whole spacetime. \subsection{The Kerr--Newman solution} \label{s10.1} The KN metric tensor is taken in the Boyer--Lindquist (BL) coordinates as \begin{equation} \label{KN-BL} \begin{array}{l} ds^2= \displaystyle\frac{\Delta}{\rho^2} \left(dt-a\sin^2\vartheta\,d\varphi\right)^2-\frac{\rho^2}{ \Delta}dr^2-\rho^2d\vartheta^2\\ \phantom{ds^2}-\displaystyle\frac{\sin^2\vartheta}{ \rho^2}\left[(r^2+a^2)d\varphi-adt\right]^2 \end{array} \end{equation} where $\rho^2=r^2+a^2\cos^2\vartheta$, $\Delta=r^2-2Mr+Q^2+ a^2$. Thus the orthonormal 1-form basis outside the singularity $\rho=0$ reads \begin{equation} \label{11.2} \left. \begin{array}{ll} \hspace{-7.pt}\theta^{(0)}= \displaystyle\frac{\sqrt{\Delta}}{\rho}\left(dt-a\sin^2\vartheta\, d\varphi\right), & \theta^{(1)}=\displaystyle\frac{\rho}{\sqrt{ \Delta}}dr,\\ \hspace{-7.pt}\theta^{(2)}=\rho d\vartheta, & \theta^{(3)}= \displaystyle \frac{\sin\vartheta}{\rho}\left[(r^2+a^2) d\varphi-adt\right], \end{array}\right\} \end{equation} so that $d\varphi=\frac{a}{\rho\sqrt{\Delta }}\theta^{(0)}+\frac{1}{\rho\sin\vartheta}\theta^{(3)}$.
Further, the electromagnetic 1-form four-potential ({\it cf.} the usual Coulomb potential) and the field tensor (2-form) are \begin{equation} \label{11.3} \left.\begin{array}{l} A\equiv A_{(\alpha)} \theta^{(\alpha)}= \displaystyle\frac{Qr}{\rho\sqrt{\Delta}} \theta^{(0)} ~ ~ \textnormal{ and}\\ ~ \\ F\equiv\frac{1}{2}F_{(\alpha)(\beta)} \theta^{(\alpha)} \wedge\theta^{(\beta)}=dA \\ \phantom{F}= \displaystyle\frac{ Q}{\rho^4}\left[(r^2-a^2 \cos^2\vartheta) \theta^{(0)}\wedge \theta^{(1)}-2ar\cos\vartheta\,\theta^{(2)} \wedge\theta^{(3)} \right], \end{array}\right\} \end{equation} respectively (see for some details \cite{DKSch,HEL,tHooft,Kerr,KSMH,Exact2,Teuk,Twns}). Since \begin{equation} \label{propbeta} 4a^2r^2\cos^2\vartheta+\left(r^2- a^2\cos^2 \vartheta\right)^2=\rho^4 \end{equation} (this confirms the presence in $F$ of the factor $r^2-a^2\cos^2\vartheta$ which first could seem to be somewhat unnatural), we can now introduce an ``angle'' $\beta$ as \begin{equation} \label{beta} \sin\beta=\frac{2ar\cos \vartheta}{\rho^2}, ~ ~ \cos\beta=\frac{r^2-a^2\cos^2\vartheta}{ \rho^2}. \end{equation} Then the electromagnetic field $F$ reads \begin{equation} \label{FasDR} F=\frac{Q}{\rho^2}(\cos\beta+\sin\beta\ast)\left(\theta^{(0)} \wedge\theta^{(1)}\right), \end{equation} obviously involving a duality rotation, and the electromagnetic invariants read \begin{equation} \label{11.6} I_1=-\frac{2Q^2 }{\rho^4}\cos2\beta, ~ ~ I_2=\frac{2Q^2}{\rho^4} \sin2\beta, \end{equation} so that the construction invariant under duality rotation (\ref{Pyth}) in the KN solution case is ({\it cf.} the Coulomb and Reissner--Nordstr\"om fields where, of course, $a=0$) \begin{equation} \label{I12sqr} {I_1}^2+{I_2}^2= \frac{4Q^4}{\rho^8}. 
\end{equation} From (\ref{11.6}) and (\ref{beta}) we immediately find that on the ``plane'' $\cos\vartheta=0$ and the ``sphere'' $r=0$ (it is well known that the ``negative region of space'' with $r<0$ makes a certain sense in this spacetime) the invariants take the values $$ I_1=-\frac{2Q^2}{r^4}, ~ ~ I_2=0 ~ \textnormal{ and } ~ I_1=- \frac{2Q^2 }{a^4\cos^4\vartheta}, ~ ~ I_2=0, $$ respectively; thus the field $F$ belongs there to the pure electric type (in the BL frame, the magnetic field is already eliminated in this region). The intersection of these 2-surfaces is the well-known singular rotating Kerr ring, where $I_2$ vanishes when we approach the ring from these mutually orthogonal directions. This would be in conformity with the usual interpretation of the ring as rotating with the velocity of light if $I_1$ also tended to zero there, though this is not quite the case. The vanishing of $I_1$ occurs only in the limits along the four directions on which $r=\epsilon(\sqrt{2}\pm 1)\,a\cos\vartheta$, {\it i.e.} $r^2=(3\pm2\sqrt{2})\,a^2\cos^2\vartheta$, where $\epsilon=\pm 1$. Further, when $r^2=a^2 \cos^2\vartheta$, we have the pure magnetic type field (with the electric field already eliminated in the BL frame) with $I_1=\frac{Q^2}{2a^4\cos^4\vartheta}= \frac{Q^2}{2r^4}$; thus, if we approach the ring from the corresponding directions, the field will be purely magnetic. The electromagnetic field around the Kerr ring in the KN solution is in fact very diverse, like a patchwork quilt. To find how the propagation velocity {\bf v} of this field behaves, we have to calculate its energy density and Poynting vector, but let us begin with {\bf E} and {\bf B} in the BL frame $\tau=\theta^{(0)}$, see (\ref{11.2}).
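The identity (\ref{propbeta}), the relation (\ref{I12sqr}), and the special loci just discussed can be verified symbolically; in the sketch below (Python with sympy, illustrative only) $u$ stands for $a\cos\vartheta$:

```python
import sympy as sp

r, u, Q = sp.symbols('r u Q', positive=True)   # u stands for a*cos(theta)
rho2 = r**2 + u**2

# eq. (propbeta): 4 r^2 u^2 + (r^2 - u^2)^2 = rho^4, so beta is well defined
assert sp.simplify(4*r**2*u**2 + (r**2 - u**2)**2 - rho2**2) == 0

sin2b = sp.simplify(2 * (2*r*u/rho2) * ((r**2 - u**2)/rho2))
cos2b = sp.simplify(((r**2 - u**2)/rho2)**2 - (2*r*u/rho2)**2)
I1 = -2*Q**2/rho2**2 * cos2b                   # the invariants of eq. (11.6)
I2 =  2*Q**2/rho2**2 * sin2b

# eq. (I12sqr): the duality-rotation invariant combination
assert sp.simplify(I1**2 + I2**2 - 4*Q**4/rho2**4) == 0

# I1 vanishes exactly where r^2 = (3 +- 2 sqrt 2) u^2 ...
for c in (3 + 2*sp.sqrt(2), 3 - 2*sp.sqrt(2)):
    assert sp.simplify(I1.subs(r, sp.sqrt(c)*u)) == 0

# ... while on r^2 = u^2 the field is pure magnetic: I2 = 0, I1 = Q^2/(2 r^4) > 0
assert sp.simplify(I2.subs(r, u)) == 0
assert sp.simplify(I1.subs(r, u) - Q**2/(2*u**4)) == 0
```

The same computation at $\cos\vartheta=0$ or $r=0$ reproduces the pure electric values quoted above.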
Already before any calculations, merely looking at the definitions (\ref{elE}) and (\ref{magB}), one understands that these two vectors and $\theta^{(1)}$ are everywhere collinear, so that the Poynting vector identically vanishes, as does {\bf v} (this is natural, since both the metric coefficients and the field tensor components do not depend on $t$ and $\varphi$, and we already {\it are} in the KN field's co-moving frame). The pure subtype can be realized (if we suppress duality rotations) only due to local vanishing of {\bf E} or {\bf B} in the BL frame. In this co-moving frame the electromagnetic energy density is everywhere equal to \begin{equation} \label{KNw} T_\mu^\nu\tau^\mu\tau_\nu=\frac{1}{8\pi}(\mathbf{E}^2+\mathbf{B}^2 )=\frac{1}{16\pi}\sqrt{{I_1}^2+{I_2}^2}=\frac{Q^2}{8\pi\rho^4}, \end{equation} see (\ref{modv}). We have found here that the KN solution is ``anisotropic'' and ``inhomogeneous'' in the sense of the distribution of the electric and magnetic types of the $F$ field: although the three-vector fields {\bf E} and {\bf B} are everywhere collinear in the spacetime of the KN black hole in the BL frame, there are surfaces on which either the magnetic or the electric field vanishes. In order to better understand whether this structure of the electromagnetic field belonging to the KN solution is inevitable (however, see also \cite{HEL}), we shall further apply to the KN solution the method developed in sections \ref{s7} and \ref{s8} (as a broadening and modernization of the Rainich--Misner--Wheeler duality rotation approach). We shall find that it is easy to modify the KN solution radically, obtaining rotating black holes with electromagnetic fields of everywhere pure electric, pure magnetic, or impure null type.
\subsection{``New'' rotating black hole solutions with pure elec\-tric, pure mag\-ne\-tic, and impure null fields $F$ in KN geometry} \label{s10.2} Here we apply duality rotation \begin{equation} \label{dualrot} \mathcal{F}=(\cos\alpha+\sin\alpha\ast)F, ~ \textnormal{or} ~ \ast\mathcal{F}=(\cos\alpha\ast-\sin\alpha)F \end{equation} [the combination of (\ref{calF}) and (\ref{lk})] to the KN electromagnetic field (\ref{11.3}). Since our aim is to construct a pure subtype field ($\mathcal{I}_2=0$), we already have from (\ref{Pyth}) ${\mathcal{ I}_1}^2={I_1}^2+{I_2}^2$ while the sign of $\mathcal{ I}_1$ is determined by the choice of position of $\alpha$ in the angle diagram and (\ref{Ialph1}). Taking the expression (\ref{FasDR}) and using the obvious algebra of duality rotation, we rewrite the expression (\ref{dualrot}) as \begin{equation} \label{2drot} \mathcal{F}= \frac{Q}{\rho^2}\left[\cos(\alpha+\beta)+\sin(\alpha+\beta)\ast \right]\left(\theta^{(0)}\wedge\theta^{(1)}\right). \end{equation} Now, putting $\alpha+\beta=0$, we immediately come to the pure electric field $\mathcal{F}_{\mathrm{el}}$ \begin{equation} \label{pureelKN} \mathcal{F}_{\mathrm{el}}=\frac{Q}{\rho^2}\theta^{(0)}\wedge \theta^{(1)} \end{equation} generating the same KN geometry which everybody associates with the KN ``patchwork'' electromagnetic field $F$ (\ref{11.3}). The pure magnetic field \begin{equation} \label{puremagKN} \mathcal{F}_{\mathrm{mag}}=\frac{Q}{\rho^2} \theta^{(2)}\wedge\theta^{(3)} \end{equation} is similarly obtained with the use of $\alpha+\beta=-\pi /2$. Finally, we have to add the third new case of KN-like black holes, those with null type electromagnetic field when in (\ref{2drot}) $\alpha+\beta=\pi/4+n\pi/2$ is taken (naturally, this field now belongs to the impure subtype). With $n=0$, it reads \begin{equation} \label{nullKN} \mathcal{F}_{\mathrm{null}}= \frac{Q}{\sqrt{2} \rho^2} \left(\theta^{(0)}\wedge\theta^{(1)}-\theta^{(2)}\wedge \theta^{(3)}\right). 
\end{equation} The (contracted) Bianchi identities guarantee satisfaction of Maxwell's equations outside the ring singularity for the new field $\mathcal{F}$ in this geometry, while the presence of a magnetic monopole distribution, existing here only on the singular Kerr ring, should not create any problem, since at the singularity the classical laws of physics obviously fail to work. One may say that the magnetic monopole distribution (as well as that of the electric charge) can, as shown above, exactly compensate the magnetic (electric) field created by a rotating charge (rotating monopole) distribution, but this is not precisely the case. In fact, we encounter in this situation a more complicated superposition of dynamical and kinematic effects, since the ($\tau =\theta^{(0)}$)-reference frame is not an inertial one: it involves both acceleration and rotation (the latter is present due to $\theta^{(0)}\wedge d\theta^{(0)}\neq 0$, see the definitions in \cite{Mitsk06}). In the same reference (chapter 4, pp. 86, 87, 90, and 91) it is shown that in the classical Maxwell equations and in the laws of motion of electric charges, both written in non-inertial reference frames, there appear kinematic terms of the monopole nature. In the equations of motion they bear the name of kinematic forces (forces of inertia); in the field equations let us thus speak of kinematic sources. While a dynamical force and source originate from the same interaction term in the action integral (only the variational procedure is performed with respect to the particle's world line and to the field's potential, correspondingly), their kinematic counterparts automatically appear in the respective dynamical equations written (and experimentally investigable) in non-inertial frames. It is interesting that the kinematic and dynamical counterparts of forces, as well as of sources, have rather similar structures, despite their different origins, thus making them recognizable.
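The three choices of $\alpha+\beta$ used in subsection \ref{s10.2} can be checked directly on the invariants (\ref{11.6}); a symbolic sketch (Python with sympy, with the symbol rho2 standing for $\rho^2$):

```python
import sympy as sp

Q, rho2, beta = sp.symbols('Q rho2 beta', positive=True)
I1 = -2*Q**2/rho2**2 * sp.cos(2*beta)       # eq. (11.6)
I2 =  2*Q**2/rho2**2 * sp.sin(2*beta)

def rotated(al):
    """Invariants after the duality rotation (calI) with angle al."""
    J1 = sp.cos(2*al)*I1 + sp.sin(2*al)*I2
    J2 = sp.cos(2*al)*I2 - sp.sin(2*al)*I1
    return sp.simplify(J1), sp.simplify(J2)

J1, J2 = rotated(-beta)                     # alpha + beta = 0: pure electric
assert J2 == 0 and sp.simplify(J1 + 2*Q**2/rho2**2) == 0
J1, J2 = rotated(-beta - sp.pi/2)           # alpha + beta = -pi/2: pure magnetic
assert J2 == 0 and sp.simplify(J1 - 2*Q**2/rho2**2) == 0
J1, J2 = rotated(-beta + sp.pi/4)           # alpha + beta = pi/4: impure null
assert J1 == 0
```

The position-dependent complexion $\alpha=-\beta$ (plus a constant) thus flattens the ``patchwork'' of the original KN field into a single invariant type everywhere.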
\section{Examples in special relativity} \label{s11} \setcounter{equation}{0} \subsection{Li\'enard--Wiechert's field: the pure electric type} \label{s11.1} The Li\'enard--Wiechert (LW) field is the special-relativistic electromagnetic field generated by an arbitrarily moving electric charge $Q$ (we restrict our consideration to an arbitrary timelike world line of the charge). For the details of the derivation of this field see \cite{Mitsk05}, where we used the future light cone (the case of the retarded field, precisely as in the present paper); an arbitrary mixture of retarded and advanced fields can be found in \cite{SyngeSR}. Thus the retarded point on the charge's world line, $x'^\mu$, is connected with the four-dimensional point $x^\mu$ (where the field is determined) by the null vector $R^\mu=x^\mu- x'^\mu$ (we choose for simplicity Cartesian coordinates in this special-relativistic treatment). The four-potential then reads \begin{equation} \label{LW} A^\mu= \frac{Qu'^\mu}{D} \end{equation} where $u'^\mu= dx'^\mu/ds'$ ($u'\cdot u'=1$) is the retarded four-velocity of the charge and $D= u'\cdot R\equiv\sqrt{-\mathbf{D}^\mu \mathbf{D}_\mu}$, while $\mathbf{D}^\mu=R^\nu b^\mu_\nu$, $b=g-u'\otimes u'$ being simultaneously the projector on the local retarded three-dimensional subspace orthogonal to $u'$ and the spatial three-metric on this subspace (with the signature $0,-,-,-$). The retarded four-acceleration of the charge is $a'^\mu=du'^\mu/ds'$ (naturally, $a'\cdot u'\equiv 0$). A simple calculation yields the LW field tensor \begin{equation} \label{FTLW} F=R\wedge V, ~ ~ V=\frac{Q}{D^2} \left(a'+u'\frac{1-a'\cdot R}{D} \right) \end{equation} (the second field invariant, $I_2$, automatically vanishes). The first invariant is \begin{equation} \label{1Inv} I_1=-\frac{2 Q^2}{D^4}<0 \end{equation} (remarkably, its structure is exactly Coulombian). Thus the LW field pertains to the pure electric type everywhere outside the point charge's world line.
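That $R\cdot V=Q/D^2$, and hence $I_1=-2Q^2/D^4$ for the simple bivector (\ref{FTLW}) with null $R$, can be confirmed numerically; in the sketch below (Python with numpy) the retarded kinematical data are arbitrary picks satisfying the stated constraints:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda x, y: x @ eta @ y

# retarded kinematics: charge momentarily at rest, a' orthogonal to u',
# and a null separation vector R with u'.R = D (all numerical values arbitrary)
up = np.array([1.0, 0.0, 0.0, 0.0])
ap = np.array([0.0, 0.3, -0.5, 0.2])
D, Q = 2.0, 1.5
n = np.array([1.0, 2.0, 2.0]) / 3.0            # unit three-vector
R = D * np.concatenate(([1.0], n))
assert np.isclose(dot(R, R), 0.0) and np.isclose(dot(up, R), D)

V = (Q / D**2) * (ap + up * (1.0 - dot(ap, R)) / D)
F = np.outer(R, V) - np.outer(V, R)            # F = R ^ V, eq. (FTLW)
I1 = np.sum(F * (eta @ F @ eta))               # F_{mu nu} F^{mu nu}
assert np.isclose(dot(R, V), Q / D**2)         # so that I1 = -2 (R.V)^2
assert np.isclose(I1, -2.0 * Q**2 / D**4)      # eq. (1Inv)
```

Since $R$ is null, $I_1=2[(R\cdot R)(V\cdot V)-(R\cdot V)^2]=-2(R\cdot V)^2$, and the acceleration terms drop out of $R\cdot V$, which is the origin of the Coulombian structure.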
Combining $R$ with $V$ in (\ref{FTLW}), one can change the null vector $R$ to a timelike one, $U$, and thus reduce the problem to that discussed in subsection \ref{s6.1}. However, it is much simpler to (algebraically) regauge the vector $V ~ \rightarrow ~ W=V+\frac{Q}{D^2}lR$ (the fractional coefficient is introduced only for convenience, and $l$ is a scalar function still to be determined); this neither changes the field tensor, $F=R\wedge W$, nor produces any $l^2$-term in further calculations. Applying now the 1-form definition of the magnetic vector in a $\tau$-frame (\ref{magB}) and taking the monad as $\tau=NW$, where the scalar normalization factor is $N=(W\cdot W)^{-1/2}$, we come to $\mathbf{B}=0$ in this frame. The problem is thus reduced to a proper choice of $l$ such that $W$ will be a suitable real timelike vector. We see that \begin{equation} \label{V1} W=\frac{Q}{D^2}\left( a'+\frac{1-a'\!\cdot\! R}{D}\,u' +lR\right). \end{equation} Then its square takes an unexpectedly simple form, \begin{equation} W\!\cdot\! W=\left(\frac{Q}{D^2} \right)^2\left[ a'\!\cdot\! a'+ \frac{(1-a'\!\cdot\! R)^2}{D^2}+2l\right]. \end{equation} In fact, $l$ still remains arbitrary (this means that there is a continuum of such different co-moving frames). Let it be \begin{equation} l= \frac{1}{2}\left[\frac{1}{D^2}-a'\!\cdot\! a'-\frac{(1-a'\!\cdot\! R)^2}{D^2}\right] \end{equation} (the first term in the square brackets, $1/D^2$, acquired its denominator on dimensional grounds). Finally, $W\cdot W=\left( \frac{Q}{D^3}\right)^2>0$ and \begin{equation} \label{taucomov} \tau=Da'+ \left(1-a'\!\cdot\! R\right)u'+\frac{1}{2D}\left[1-D^2a'\! \cdot\! a'-\left(1-a'\!\cdot\! R\right)^2\right]R \end{equation} (it is clear that $\tau\cdot\tau=+1$).
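The chosen gauge function $l$ indeed yields $W\cdot W=(Q/D^3)^2$; a short symbolic check (Python with sympy, treating the scalar products $a'\cdot a'$ and $a'\cdot R$ as independent symbols):

```python
import sympy as sp

Q, D = sp.symbols('Q D', positive=True)
aa, aR = sp.symbols('aa aR', real=True)     # shorthands for a'.a' and a'.R

# the chosen gauge function l and the bracket of W.W from the text
l = sp.Rational(1, 2) * (1/D**2 - aa - (1 - aR)**2/D**2)
WW = (Q/D**2)**2 * (aa + (1 - aR)**2/D**2 + 2*l)

# W.W = (Q/D^3)^2, hence tau = (D^3/Q) W is a unit timelike vector
assert sp.simplify(WW - Q**2/D**6) == 0
```

Since $\tau=(D^3/Q)\,W$, the normalization $\tau\cdot\tau=+1$ follows immediately from this result.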
By its definition, the monad $\tau$ describes the reference frame co-moving with the LW electromagnetic field: in this frame the Poynting vector of the field vanishes, the electromagnetic energy flux ceases to exist due to the absence of the magnetic part $\overset{\textnormal{\tiny $\tau$}}{\mathbf{B}}$ of the field in this frame (applicable at any finite distance $D$, not asymptotically), and $F$ can be rewritten as $F=\frac{Q}{D^3}R\wedge\tau$. The expression (\ref{elE}) now yields \begin{equation} \label{Ecomov} \overset{\textnormal{\tiny $\tau$}}{\mathbf{E}}=\ast(\tau\wedge\ast F)= \frac{Q}{D^3}\ast[\tau\wedge\ast (R\wedge\tau)]=\frac{Q}{D^2}\overset{\textnormal{\tiny $\tau$}}{\mathbf{n}} \end{equation} which is, up to an understandable reinterpretation of notations, exactly the form known as the Coulomb field vector. Here the unit vector $\overset{\textnormal{\tiny $\tau$}}{\mathbf{n}}= \overset{\textnormal{\tiny $\tau$}}{\mathbf{D}}/D$ is normal to the $\tau$-congruence, while $R\cdot u'= \overset{\textnormal{\;\tiny $u'$}}{D}=D= \overset{\textnormal{\;\tiny $\tau$}}{D}=R\cdot\tau$ and $\overset{\textnormal{\tiny $\tau$}}{\mathbf{D}}^\mu=b^\mu_\nu R^\nu$ with $b^\mu_\nu=\delta^\mu_\nu- \tau^\mu\tau_\nu$, hence in the frame co-moving with the LW field (the reader may choose other co-moving reference frames taking different $l$s, but our choice seems to be one of the simplest ones) \begin{equation} \label{Dhat} \overset{\textnormal{\tiny $\tau$}}{\mathbf{D}}=-D^2a'- D\left(1-a'\! \cdot\! R\right)u'+\frac{1}{2}\left[1+D^2 a'\!\cdot\! a'+ \left(1-a'\!\cdot\! R\right)^2\right]R, \end{equation} so that $\overset{\textnormal{\tiny $\tau$}}{\mathbf{D}} \neq \overset{\textnormal{\tiny $u'$}}{\mathbf{D}}$, the latter being the vector $\mathbf{D}$ introduced after the formula (\ref{LW}) and pertaining to another frame (co-moving with the retarded charge, not with its field; see \cite{Mitsk05} for details).
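These algebraic properties of the co-moving monad can be verified numerically. The sketch below (ours; it again assumes the signature $(+,-,-,-)$) checks that (\ref{taucomov}) is a unit timelike vector, that $R\cdot\tau=D$, and that the projection $b^\mu_\nu R^\nu=R^\mu-D\tau^\mu$ reproduces (\ref{Dhat}):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b
rng = np.random.default_rng(1)

# the same LW ingredients: unit timelike u', a'.u' = 0, null future-pointing R
w = 0.2 * rng.normal(size=3)
gam = 1.0 / np.sqrt(1.0 - w @ w)
u = np.array([gam, *(gam * w)])
a = rng.normal(size=4)
a -= dot(a, u) * u
n = rng.normal(size=3)
n /= np.linalg.norm(n)
R = np.array([1.0, *n])

D, aR, aa = dot(u, R), dot(a, R), dot(a, a)

# Eq. (taucomov): monad of the frame co-moving with the LW field
tau = D * a + (1 - aR) * u + (1 - D**2 * aa - (1 - aR)**2) / (2 * D) * R

# Eq. (Dhat), compared with the projection b R = R - D tau
Dhat = -D**2 * a - D * (1 - aR) * u + 0.5 * (1 + D**2 * aa + (1 - aR)**2) * R

print(dot(tau, tau))                        # +1: unit timelike monad
print(dot(tau, R) - D)                      # 0: R.tau = D
print(np.max(np.abs(Dhat - (R - D * tau))))  # 0: (Dhat) is indeed b R
```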
The situation discovered in this subsection can be formulated in a short and exact form as the existence, in all spacetime outside the world line of the charge generating the LW field, of a reference frame co-moving with this field, {\it i.e.}, a frame in which the Poynting vector vanishes in all this region (with the exception of the future null infinity, which can be described only asymptotically, using more topological\footnote{It was a gibe of fate at the authors who deliberately disregarded topology and nevertheless claimed that the LW field contains electromagnetic radiation, though in this field the Poynting vector can be easily transformed away by means of a proper choice of the reference frame in every finite region around the charge's world line. About the attitude of L.D. Landau and E.M. Lifshitz toward topology see \cite{Thorne}, pp. 470--471, and see also \cite{LanLif}, p. 173 ff. However, Misner and Wheeler {\it did} take topology into account in their general-relativistic considerations \cite{MisWh,Whee}.} than geometrical methods); thus in this frame there is no flow of electromagnetic energy anywhere. Of course, this frame is in general a rotating one (see in \cite{Mitsk05} the expression (4.27) and appendix A), thus the three-dimensional space is non-holonomic: it does not form a global --- at least, finite --- three-dimensional subspace of the four-dimensional world; at most, in the presence of rotation there exist only strictly local (infinitesimal) elements of such a subspace, which do not merge into a finite hypersurface, like the scales of a sick fish in an aquarium. Note that this occurs here even in special relativity, not only in the general theory. Moreover, the presence of the frame's rotation does not permit synchronization of clocks at rest in such a frame.
(In the same spacetime there also exists an infinite number of non-rotating frames in which one is welcome to perform a synchronization, but in any rotating frame this very procedure is strictly forbidden. It is curious that while we live all our lives in our terrestrial rotating frame, its rotation remains sufficiently slow not to condition us to this non-holonomic psychology.) Since everywhere outside the LW source (the pointlike charge world line on which the field singularity occurs) there is no magnetic field in this frame, and no redistribution of the electromagnetic field can take place there, the LW field does not propagate in this frame; it can only be compressed or rarefied, remaining at rest with respect to the frame in its contraction or expansion, similar to effects known in relativistic cosmology. This could seem to contradict the traditional decomposition of the LW field into the near (induction) and distant (radiation) zones. The reason for this ``contradiction'' should be seen in the fact that, although the Maxwell equations themselves are linear, the physical characteristics of electromagnetic fields, such as their energy density, Poynting vector (describing, up to a constant factor, either the energy flow density or the linear momentum density of the field), and stress, are quadratic (or bilinear) in the field tensor. Thus we must not overlook the ``interaction'' terms between the induction and radiation counterparts in these characteristics. Note that the elimination of the magnetic part of the LW field is directly related to its {\it pure} electric type and, consequently, to the quadratic (bilinear) invariants of this field. Therefore the ``contradiction'' arises only from a customary but erroneous application of the linearity concept to the strictly nonlinear characteristics, even of electrovacuum electromagnetic fields.
In a certain sense, there should be a way to reconcile this contradiction by considering the asymptotic behaviour of the field; in any case, this corresponds to merely technical details of the problem. Similarly, in his ironic paper \cite{SyngeHerm}, Synge with his great wit criticized the existing style of introduction of these same characteristics in the most widely used textbooks on field theory. Though his criticism there was somewhat superficial, we find Synge's paper quite provocative towards a more profound determination of the quotidian concepts of our theory through their physical sense. \subsection{Propagation of a plane electromagnetic wave on the background of homogeneous magnetic field in a vacuum} \label{s11.2} Finally, let us consider a simple problem, not yet discussed in the literature, of electromagnetic wave propagation in a time-independent sourceless Maxwell field in a vacuum. For simplicity, we take the same wave as in subsection \ref{s6.2}, and the additional Maxwell field is chosen merely as a magnetic one in the direction of propagation of the wave, with the constant three-vector~{\bf B}. Obviously, this superposition is an exact solution of Maxwell's equations, and there cannot be any real interaction between these two fields since the equations are linear. We have, however, already noted that the propagation velocity of an electromagnetic field is non-linear in terms of the field tensor $F$, so that an observable physical effect should naturally exist in the case of a superposition of such free Maxwell fields. There is, of course, an effect which was already considered and observed in the early history of optics, that of standing electromagnetic waves, but nobody has yet worried about the seemingly absurd problem formulated above.
Thus we take in Cartesian coordinates $t,x,y,z$ the superposition of the fields (\ref{emw}) and $\mathbf{B}_{\mathrm H}=Hdx$, {\it i.e.} \begin{equation} \label{sup} \mathbf{E}=E\cos[\omega(x-t)]dy, ~ ~ \mathbf{B}=Hdx+E\cos[\omega(x-t)]dz. \end{equation} Obviously, $I_2=0$ due to the orthogonality of {\bf E} and {\bf B}, and the first invariant is $I_1=2H^2>0$: this is the pure magnetic type field. The result of superposition (\ref{sup}) is in fact a specific not precisely monochromatic wave whose behaviour can be best understood in the reference frame co-moving with it, and one can find such a frame using the pure-magnetic property of this wave's field, see subsection \ref{s6.1}. First, we write the field $\ast F$ [through (\ref{sup}) in the initial frame $\tau_{\mathrm{in}}=dt$] as a simple bivector: \begin{align} \label{astFsup} \nonumber \ast F &=\ast(\mathbf{E} \wedge dt)-\mathbf{B}\wedge dt\\ \nonumber &=-\left(E\cos[\omega (x-t)]dx\wedge dz+ H\phantom{^Z}\!\!\! dx\wedge dt+E\cos[\omega (x-t)]dz\wedge dt\right)\\ &=-\left(Hdx+E\phantom{^Z}\!\!\! \cos[\omega(x-t)]dz \right)\wedge(dt-dx)=-P\wedge Q. \end{align} If to $P$ we add $lQ$ ($l$ being an arbitrary function) and use this sum $P'$ instead of the former $P$, $\ast F$ does not change. It is obvious that $P\cdot P<0$, but $P'\cdot P'=2lH-H^2-E^2\cos^2[\omega(t-x)] $. Thus if we choose $l=H+\frac{E^2}{2H}\cos^2[\omega(t-x)]$, the vector $P'$ will be timelike, $P'\cdot P'=H^2>0$, and we can take $P'/H$ as a properly normalized monad, \begin{equation} \label{Ptau} \tau= \left(1+\frac{E^2 }{H^2}\cos^2 [\omega(t-x)]\right)(dt-dx)+ dx+\frac{E}{H} \cos[\omega(t-x)]dz. \end{equation} Now the dually conjugated field tensor reads $\ast F=-H\tau\wedge(dt-dx)$, thus in the frame $\tau$ the electric field (\ref{elE}) vanishes, and this is the field's co-moving frame. 
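Two statements of this subsection lend themselves to a quick numerical check (again a sketch of ours, with assumed conventions: covector components ordered as $(t,x,y,z)$ and inner products taken with $\mathrm{diag}(1,-1,-1,-1)$): the normalization $P'\!\cdot\! P'=H^2$ for the chosen $l$ at every phase of the wave, and the time-averaged propagation speed obtained at the end of this subsection.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # acts on covector components (t, x, y, z)
dot = lambda p, q: p @ eta @ q

E, H = 0.9, 1.3
for c in np.cos(np.linspace(0.0, 2.0 * np.pi, 7)):   # c = cos[omega(t - x)]
    P = np.array([0.0, H, 0.0, E * c])               # P = H dx + E cos[...] dz
    Qf = np.array([1.0, -1.0, 0.0, 0.0])             # Q = dt - dx, null
    l = H + E**2 * c**2 / (2.0 * H)                  # the choice made in the text
    Pp = P + l * Qf
    assert abs(dot(Qf, Qf)) < 1e-12                  # Q is null
    assert abs(dot(Pp, Pp) - H**2) < 1e-12           # P'.P' = H^2 at every phase

# time average of |v| = (E/H)|cos|/sqrt(1 + (E/H)^2 cos^2), compared with
# the closed form (2/pi) arcsin(E/sqrt(E^2 + H^2)) quoted further below
k = E / H
ph = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
vbar = np.mean(k * np.abs(np.cos(ph)) / np.sqrt(1.0 + k**2 * np.cos(ph)**2))
print(vbar, (2.0 / np.pi) * np.arcsin(E / np.sqrt(E**2 + H**2)))
```

The two numbers in the final line agree to the accuracy of the phase average.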
In all these calculations one has to remember that when only one (here, magnetic) field survives after the reference frame is transformed, there are other possible transformations which do not change this situation (in fact, all those which involve an additional motion in the direction of this field, even when this motion happens to have a non-constant magnitude of the three-velocity, described by strictly local Lorentz transformations working in non-inertial frames). Thus there appears a continuum of such one-field frames ({\it cf.} \cite{LanLif}, but working in general as well as in special relativity), and the search for more elegant ones depends on the individual taste of the researcher. Let us now calculate the three-velocity of the frame $\tau$ from the viewpoint of $\tau_{\mathrm{in}}$ using our general definition (\ref{v}) and substitute the result into the left-hand side of (\ref{emprop}), then putting into the right-hand side the expressions of {\bf E} and {\bf B} from (\ref{sup}) in the frame $\tau_{\mathrm{in}}$, to check whether the Landau--Lifshitz definition (\ref{emprop}) really works. Obviously, this does not represent a vicious circle, since these parts of (\ref{emprop}) were initially deduced in \cite{LanLif} from a standpoint very different from ours (moreover, in this way the left-hand side of (\ref{Pauli}) will be checked automatically: both definitions of {\bf v} cannot work well simultaneously). First, we rewrite (\ref{v}) in these notations for frames and find $\mathbf{v}\,(\perp\tau_{\mathrm{in}})$: \begin{equation} \nonumber \tau=(\tau\cdot \tau_{\mathrm{in}})(\tau_{\mathrm{in}}+\mathbf{v}) ~ \Rightarrow ~ \mathbf{v}=\frac{\frac{E}{H}\cos[\omega(t-x)]}{1+ \frac{E^2}{H^2}\cos^2[\omega(t-x)]}\left(dz-\frac{E}{H}\cos [\omega(t-x)]dx\right).
\end{equation} This means that \begin{equation} \nonumber \frac{| \mathbf{v}|}{1+\mathbf{v}^2}=\frac{\frac{E}{H}\cos[\omega(t-x)] \sqrt{1+ \frac{E^2}{H^2}\cos^2[\omega(t-x)]}}{1+2\frac{E^2}{H^2} \cos^2[\omega(t-x)]}. \end{equation} Precisely the same is the result of calculating $\frac{|\mathbf{E}\times\mathbf{B}|}{\mathbf{E}^2+ \mathbf{B}^2}$ --- Landau and Lifshitz's definition wins. (Pauli's definition (\ref{Pauli}) cannot contain on its left-hand side the construction $1+2\frac{E^2}{ H^2}\cos^2[\omega(t-x)]$ which inevitably appears on the right-hand side $2\frac{|\mathbf{E}\times\mathbf{B}|}{ \mathbf{E}^2+ \mathbf{B}^2}$, as in the Landau--Lifshitz case.) The mean value of $|\mathbf{v}|$ is simply $\frac{2}{\pi}\arcsin\frac{E}{\sqrt{ E^2 + H^2}}$. When $H\rightarrow 0$, the mean propagation velocity approaches that of light, while if $E\ll H$, the mean velocity can become as low as one wishes: to this end, it is necessary to use as strong a magnetic field $H$ as possible and/or choose a low-intensity wave in the superposition. \section{Concluding remarks} \label{s12} \setcounter{equation}{0} The results obtained in this paper are based on three simple observations: that the physical classification of electromagnetic fields should be formulated using the properties of only two well-known invariants of these fields; that the complete description of a reference frame is related only to the state of motion of a continuous multitude of test observers; and that the duality rotation (in the vein of Rainich--Misner--Wheeler, but in a more modern and general form), applied to a seed solution of Maxwell's equations, yields a new solution in the same four-geometry generated by the seed solution {\it via} Einstein's equations. We have proven that these suppositions really work together, and the duality rotation permits one to {\it construct} qualitatively new solutions, belonging also to other desired types of electromagnetic fields in accordance with our classification.
There is only one restriction separating the pure null type fields from those of the other five types. The pure null type does not change under the duality rotation, becoming in fact the same solution of this pure type, though corresponding to another reference frame and displaying the Doppler effect in its generalized form, also considered in this paper. As illustrations of the application of our approach, we discuss concrete examples of the Kerr--Newman (KN) solution and the Li\'enard--Wiechert (LW) field (to show the efficiency of our method also in special relativity). Moreover, we deduce three qualitatively new types of electromagnetic field creating the same four-geometry as the seed KN solution, thus describing other kinds of KN-like black holes. Studying the LW field, we arrive at the new conclusion that the linearity of Maxwell's equations does not automatically mean that different constituent parts of this field can be properly interpreted separately. Other characteristics of the field (such as the energy density and Poynting vector) have a non-linear nature; thus a study of these characteristics constructed of only one or another part of the LW solution, omitting the combination (``interaction'') of these parts, means disregarding important physical properties of the field, in particular its true propagation velocity. We have explicitly shown that this velocity of the complete LW solution is less than that of light, and we have given the physically full-fledged frame co-moving with the LW field in which its Poynting vector exactly vanishes everywhere outside the world line of the source of this field (strangely, this fact was never noticed before). The last, but not least, example is related to a simple superposition of two exact solutions of special-relativistic Maxwell's equations, a plane electromagnetic wave and a homogeneous magnetic field in a vacuum.
We show that this superposition, being itself an exact solution, always propagates with a velocity less than that of light, and we show that the elementary expression for this velocity is properly defined in \cite{LanLif}, but not in \cite{Pauli}. (I must admit that at first I liked the definition given in Pauli's book much more than Landau--Lifshitz's one: see, {\it e.g.}, \cite{Mitsk06a}.) Finally, may I express my hope (to a certain extent, against hope) that the examples given here will lead our community of physicists to a more profound consideration of the non-trivial concept of the reference frame, and to its better understanding as a subject more physical than purely mathematical and an important ingredient in the description of physical reality. To console those who cannot accept the representation of reference frames through monads and Cartan's forms, I would add that they can take instead any system of coordinates whose $t$-coordinate lines coincide with those of the $\tau$-congruence (the choice of spatial coordinates does not matter). In such a system, precisely the same picture will be realized, though the mathematics will feel awkward, while reference frames will seem to be silenced.
\section{Introduction} \IEEEPARstart{T}{he} real world is dynamic – all objects move, and environments change. On a daily life scale, the world is deterministic and predictable by the laws of physics. Solving governing equations is a firm and definite approach to studying a physical system. However, the complexity of the real world often does not permit accurate and precise measurements or governing equations. As the methodology of machine learning has attracted significant attention from general science and engineering areas, physicists and engineers are becoming increasingly interested in implementing the approach~\cite{Carleo2019,Brunton2020} for readily solving the governing equations~\cite{Tompson2017,Bezenac2019,Kim2019}, developing data-driven paradigms~\cite{Lusch2018,Reichstein2019,Yeo2019,Montans2019}, and discovering unknown governing equations~\cite{Raissi2018,Champion2019,Qin2019}. The fundamental questions from the physics community~\cite{Breen2019,Iten2020} include \begin{itemize} \item How can machines understand human physics or perceive our physical world? \item Is deep learning about to reinvent physics? \item Could AI be an essential or general tool to explore new physics in the future? \end{itemize} Despite its considerable success in physics and the high expectations from the media and the general public, there are still many skeptics among physicists and expert groups~\cite{Montavon, Bathaee2017,Radovic2018,Buchanan2019,Tyagi2020}.
Main issues frequently pointed out include \begin{itemize} \item Deep learning requires highly complicated models with several tens of layers that are not physically well-interpreted, \item Deep learning models need exceedingly large amounts of training data, which mostly depend on human knowledge (governing equations) or external simulation results (obtained by solving the governing equations), \item Deep learning algorithms mostly work like {\it black boxes} that provide very little information about how they reached a certain conclusion. \end{itemize} In this study, we show research results that answer those fundamental questions and propose a framework for {\it physics-exploring machine learning} to settle these issues. We suggest that those limitations of the existing deep learning models originate from the problem of representation~\cite{Zhong2016,Bengio2013}, and that a physical system or physical data should be re-represented before being presented to them. In short, we introduce how to turn the language of classical physics into a language for machine learning. Through this representation switching, we can dramatically reduce the complexity of existing machine learning models and the amount of training data required. We argue that understanding such duality of representation will contribute significantly to the development of machine learning theory for the physical sciences in the future. The main part of this paper is organized into two sections. In Section \ref{sec:DataDriven}, we first show our results and highlight the methods we applied. Then, in Section \ref{sec:duality}, we derive the detailed methods. \\ \begin{figure}[htp] \centering \includegraphics[width=8.2cm]{soccerball} \caption{{\bf Generating a physical flow from sparse observations}. Three observation images of 128-by-128 pixels are treated as a mass distribution forming the shape of a soccer ball. The trace image is the output of our method from the three observations.
By a physical flow, we mean a continuous flow of mass obeying the continuity equation.} \label{fig:soccerball} \end{figure} \section{Data-Driven Continuum Dynamics} \label{sec:DataDriven} \subsection{Learning physics from observational data} Consider the soccer ball example shown in Fig. \ref{fig:soccerball}. The soccer ball is not a point mass but a distributed mass; therefore, its motion cannot be modeled by a conventional interpolation of the coordinates of a point mass, or by the manifold learning~\cite{Roweis2000} of the pixel values from only three distributed mass images. Nevertheless, our method generates the intermediate and future ball images, following a curved trajectory as if it were affected by the gravitational field, from only three still images with no other pre-training data and no governing equation. In contrast, to generate images of an object in motion, a conventional deep learning model must first be trained on many examples to learn the effects of the same gravitational field on other objects, or the governing equations. Our method is different from many physics-learned simulators and physics-informed deep learning approaches~\cite{Tompson2017,Kim2019,Raissi2018,Qin2019,Li2018,Umme2020,Sanchez20}. Our approach does not depend on human knowledge and enables machines to explore data of the physical world by themselves. \begin{figure}[htp] \centering \includegraphics[width=8.8cm]{FlowChart} \caption{{\bf Flowchart}. When observing from the Eulerian view, we can obtain pixel images of a moving object. To generate a continuous physical flow from discrete time observations, we first transform them to another representation via an involution. In the new representation, object motions follow point-wise dynamics, which allows us to interpolate or extrapolate the images pixel-by-pixel.
Finally, when we transform the interpolated or extrapolated images back to the original representation via the involution, we can obtain time domain super-resolution images or prediction images, which form a physical flow similar to that generated in Fig. \ref{fig:soccerball}.} \label{fig:flowchart} \end{figure} Our method is analogous to the numerical interpolation of data points, as illustrated in Fig. \ref{fig:flowchart}. Before interpolating them, however, we change each observation into another representation by inversion transform. That inversion transform is an involution that can change mass transport into a {\it mass teleport}, and then change back to the original transport by the same transform. The simplest case of a mass teleport is the linear combination of initial and final mass densities, as shown in Fig. \ref{fig:trans_tele} (The upper and lower figures of Fig. \ref{fig:trans_tele} are not direct transforms to each other). The soccer ball example of Fig. \ref{fig:soccerball} was generated by the second-order Lagrange interpolation of the three inverted images. \begin{figure}[htp] \centering \includegraphics[width=8.2cm]{example_trans_tele1} \caption{{\bf Example of mass transport and mass teleport}. The lower figure depicts a mass teleport. It resembles the hypothetical transfer of matter or energy from one point to another without traversing the physical space between them.} \label{fig:trans_tele} \end{figure} Transforming to a mass teleport is a velocity-free formulation of the continuum dynamics, in which we freeze the flow and let it transfer through the point-wise disappearance and reappearance of mass. Common complications in science, engineering, and machine learning originate from velocity. Solving the Navier-Stokes equations, determining the optical flow~\cite{Horn1981}, or tracking moving objects from video frames have been challenging problems regarding velocity. 
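The interpolation stage itself is elementary. Below is a minimal sketch (ours) of the second-order Lagrange step mentioned for the soccer-ball example; the toy arrays stand in for the inversion-transformed observations, and the inversion transform itself is the subject of Section \ref{sec:duality}.

```python
import numpy as np

def lagrange3(f0, f05, f1, t):
    """Second-order Lagrange interpolation through three frames observed
    at t = 0, 0.5 and 1, applied pixel-by-pixel to whole arrays."""
    w0 = (t - 0.5) * (t - 1.0) / ((0.0 - 0.5) * (0.0 - 1.0))
    w1 = (t - 0.0) * (t - 1.0) / ((0.5 - 0.0) * (0.5 - 1.0))
    w2 = (t - 0.0) * (t - 0.5) / ((1.0 - 0.0) * (1.0 - 0.5))
    return w0 * f0 + w1 * f05 + w2 * f1

# toy stand-ins for three inversion-transformed observation images
rng = np.random.default_rng(0)
g0, g05, g1 = rng.random((3, 8, 8)) + 0.5

mid = lagrange3(g0, g05, g1, 0.25)    # time-domain super-resolution frame
fut = lagrange3(g0, g05, g1, 1.25)    # extrapolated (prediction) frame

# the interpolant passes through all three observations exactly
print(np.allclose(lagrange3(g0, g05, g1, 0.5), g05))
```

Transforming `mid` and `fut` back through the involution then yields physical-flow frames in the original representation, as in Fig. \ref{fig:flowchart}.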
Our strategy is to utilize a much simpler machine learning model with less data after transforming the continuum dynamics into a velocity-free representation. Discovering a proper representation has been a fundamental issue in computer vision~\cite{Beymer1996,Agarwal2004,Wright2010}, computational neuroscience~\cite{Marr1982,Olshausen1996,DiCarlo2007}, and machine learning~\cite{Bengio2013,Roweis2000,Lee1999}. \subsection{Inversion Transform} Figure \ref{fig:highlight} elaborates on what the inversion transform is and summarizes the highlights of our derivations, from which we can develop learning algorithms for observational data. In the case of mass transport, the movement of a physical object obeys the continuity equation (first equation on the upper left of Fig. \ref{fig:highlight}). However, a velocity field is completely described only when both its rotational and irrotational components are considered. Thus, we need the second equation to determine the rotational components of motion. It states that the vorticity is twice the angular velocity at any point in a moving fluid~\cite{Lai2014}. The two equations impose fundamental constraints on conserved systems. \begin{figure}[htp] \centering \includegraphics[width=8.6cm]{highlight} \caption{{\bf Representation switching of the continuum dynamics via inversion transform}. The inversion transform mentioned in Fig. \ref{fig:flowchart} is given by the reciprocal of the mass density and the negative of the rotational angles. The combination of two simple transforms can change the appearance of continuum dynamics, i.e., from transport to teleport, and from teleport back to transport.} \label{fig:highlight} \end{figure} To discuss it in a more general way, we include the source or sink terms, $\sigma$ and $\boldsymbol{\tau}$, with double equalities. Then the first equation can be referred to as the general continuity equation. The local rate $\sigma$ is zero only when the mass density is conserved at a local point.
Similarly, we can define a vector of local rates, $\boldsymbol{\tau}$, which is related to virtual sources or sinks of the local angles but does not produce additional real rotations. \begin{figure*}[htp] \centering \includegraphics[width=17.6cm]{cheetah} \caption{From two still images taken at time $t=0$ and $t=1$, we generated the three intermediate frames at $t = 0.25, 0.5, 0.75$ and one future frame at $t = 1.25$ by using the pixel-wise linear interpolation ({\it upper}) and our method ({\it lower}, pixel-wise linear interpolation after inversion transform). Pixel-wise linear interpolation outputs a superimposed image of the two, whereas our method finds a physical flow satisfying the continuity equation (\href{https://drive.google.com/file/d/1pVs7o3VA-yU5x_niJCoeUAgyP1yEBdwV/view?usp=sharing}{click \underline{here} to see a video clip}).} \label{fig:cheetah} \end{figure*} We can transform the transport equations into another form by using a remarkably simple inversion rule. It is given by the reciprocal of the mass density and the negative of the local angles. More specifically, they can be given by \begin{equation} \tilde{\rho}(\tilde{\mathbf{x}})\equiv\det[\tilde{\mathbf{J}}]=\det[\mathbf{J}]^{-1}\equiv\rho^{-1}(\mathbf{x}) \label{eq:invrule1} \end{equation} and \begin{equation} \tilde{\boldsymbol{\theta}}(\tilde{\mathbf{x}})\equiv\int [d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}]_{\times}=-\int[\mathbf{J}^{-1}d\mathbf{J}]_{\times}\equiv-\boldsymbol{\theta}(\mathbf{x}) \label{eq:invrule2} \end{equation} where $\mathbf{J}$ and $\tilde{\mathbf{J}}\equiv\mathbf{J}^{-1}$ are the Jacobian matrices defining a transformation between two spaces denoted by $\mathbf{x}\in\boldsymbol{\Omega}\subset\mathbb{R}^3$ and $\tilde{\mathbf{x}}\in\tilde{\boldsymbol{\Omega}}\subset\mathbb{R}^3$, and $[\cdot]_{\times}$ is an operator defined in Eq. (\ref{eq:crossop1}) (See Section \ref{sec:angles} for detailed information).
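The two inversion rules can be illustrated on a concrete map. In the sketch below (our own construction, not from the derivation), the Jacobian is a two-dimensional rotation composed with a stretch; the determinant obeys Eq. (\ref{eq:invrule1}), and, as a stand-in for the $[\cdot]_{\times}$ construction of Eq. (\ref{eq:invrule2}), the polar-decomposition rotation angle of $\mathbf{J}^{-1}$ is the negative of that of $\mathbf{J}$.

```python
import numpy as np

theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
J = rot @ np.diag([1.6, 0.8])         # Jacobian: rotation times a stretch
Jt = np.linalg.inv(J)                 # Jacobian of the inverse map

# Eq. (invrule1): det(J^{-1}) = det(J)^{-1}, i.e. densities are reciprocal
print(np.linalg.det(Jt), 1.0 / np.linalg.det(J))

def polar_angle(M):
    """Rotation angle of the orthogonal factor in the polar decomposition."""
    U, _, Vt = np.linalg.svd(M)
    Rm = U @ Vt                       # the unique rotation factor (det M > 0)
    return np.arctan2(Rm[1, 0], Rm[0, 0])

# Eq. (invrule2), rotation part: the local angle flips sign under inversion
print(polar_angle(J), polar_angle(Jt))   # 0.7 and -0.7
```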
Then, the time-derivative terms of the transport equations can move to the right side, while the double equalities are fixed. They immediately lead to zero-divergence and zero-curl of the velocity field on the right of Fig. \ref{fig:highlight}, which means that the velocity field is trivially constant, or entirely zero by anchoring a reference point. The mass teleport equations comprise only velocity-independent partial derivatives and become point-wise operations of both mass density and the angles of rotation. The local rates $\tilde{\sigma}$ and $\tilde{\boldsymbol{\tau}}$ determine the change of inverted mass density $\tilde{\rho}$ and its local angles of rotation $\tilde{\boldsymbol{\theta}}$. The process of modeling the local rates from observational data is what an interpolating polynomial or, generally, a deep learning model can do. The integration of the local rates is zero, indicating that the total mass and angles remain unchanged. \begin{figure}[htp] \centering \includegraphics[width=8.6cm]{digit_LinearInterp.png} \includegraphics[width=8.6cm]{digit_OurMethod} \caption{From two still images of a digit parameterized with $t=0$ and $t=1$, we generated the three intermediate frames at $t = 0.25, 0.5, 0.75$ and one extrapolated frame at $t = 1.25$ by using the pixel-wise linear interpolation ({\it upper}) and our method ({\it lower}, pixel-wise linear interpolation after inversion transform). Pixel-wise linear interpolation outputs a superimposed image of the two, whereas our method finds an underlying manifold onto which the two digit images can be embedded. The underlying manifold approaches a ground truth as the number of data increases.} \label{fig:digit} \end{figure} \subsection{Inversion of Incomplete Observations} The existence of a zero-velocity representation of continuum dynamics provides us with a general framework for developing a model of real-world dynamics where there is no governing equation, as shown in Figs. 
\ref{fig:cheetah} and \ref{fig:digit}. The inverted dynamics is not coupled to its velocity field, and it is decomposed into point-wise dynamics of inverse mass. By learning the inverted point-wise dynamics from observational data, we can determine the dynamics of objects. In Figs. \ref{fig:cheetah} and \ref{fig:digit}, we start with only two 256-by-256 pixel images of a running cheetah and two 32-by-32 pixel images of a handwritten digit, obtained at $t = 0$ and $t = 1$. To obtain an inversion of them without the local angles of rotation and the velocity field (many real-world problems fall into this case), we solved the following optimization problem: \begin{equation} \min\sum_{t=0}^{T-1}\sum_{i,j}\left(\log\tilde{\rho}_{ij}^{(t)}-\log\tilde{\rho}_{ij}^{(t-1)}\right)^2\label{ErF} \end{equation} where $T = 1$ for the two-sample case and $(i, j)$ are the pixel indices. With Eq. (\ref{ErF}) we intend to minimize, in the Euclidean metric, the local rates $\tilde{\sigma}$ normalized by the inverted mass density. The operation is pixel-wise, and velocity information is not required. As shown in Fig. \ref{fig:flowchart}, once we obtain the inversion of the original images, we can fit an interpolating function to them to generate the new frames at $t$ $=$ $0.25$, $0.5$, $0.75$, and $1.25$. Finally, by transforming them back to the original representation, we can reconstruct the frames of the running cheetah and a continuous variation of the digit. The generated frames do not necessarily correspond to the ground truth of the cheetah motion or of the manifold~\cite{Roweis2000} onto which the digit images are embedded, but they can approach them as the frequency of observation increases. Nevertheless, we should note that the frames follow a physical flow that satisfies the continuity equation of mass. Eq.
(\ref{ErF}) is one of the most straightforward forms available and can be tailored into a more sophisticated form for better performance, or a simpler one for practical use cases. While many excellent algorithms have been proposed for video prediction from two-dimensional videos~\cite{Finn2016,Vondrick2016,Liu2017,Jin2017,Walker2016}, our method is promising for 3D volumetric data that provide complete information without occlusion. Two-dimensional natural images are not significant sources of conserved quantities. They usually comprise a two-layer structure of multiple foregrounds and a background, and a foreground may occlude the background or the other foregrounds. The edges around those occlusions are non-zero sources or sinks of pixel values, which violate the conservation law. This explains why the left forefoot in the cheetah example appears a little distorted. The algorithms for optical flow~\cite{Horn1981} can also be compared to our method. Both share the same goal of determining a flow from video frames. However, optical flow assumes only brightness constancy, whereas our method follows the strict form of the continuity equation as an inviolable rule. Optical flow is suitable for visual sciences data, while our method is suitable for physical sciences data. \section{Mass Transport-Teleport Duality} \label{sec:duality} In this section, we show how the inversion transform changes the transport equations into the teleport equations. It is organized into thirteen subsections with sixty-six equations. In Section \ref{sec:teleport_inv}, the teleport equations are derived. In Section \ref{sec:inv_obs}, Eq. (\ref{ErF}) is derived. Section \ref{sec:intp} elaborates on interpolation and extrapolation. \subsection{Transport of Mass} \label{sec:mass_transport} By a physical flow mentioned in this paper, we mean a flux of a quantity that satisfies the continuity equation as a local conservation law.
It is a partial differential equation that relates the amount of the quantity to the transport of that quantity. It states that the amount can only change by what is moved in or out. It is a universal equation that any transport of a non-interacting conserved quantity must obey~\cite{Lai2014}. In this paper, we focus on the discussion of mass and its transport. The continuity equation has two forms depending on the viewpoint. From the Eulerian perspective, we have it in the form of \begin{equation} \nabla\cdot(\rho\mathbf{v})+\frac{\partial\rho}{\partial t}=0 \label{conEq1} \end{equation} where \(\rho\) is the density of mass. From the Lagrangian perspective, we have it in the form of \begin{equation} \nabla\cdot\mathbf{v}+\frac{d\log\rho}{dt}=0\label{conEq2} \end{equation} from Eq. (\ref{conEq1}) by \[\nabla\cdot(\rho\mathbf{v})+\frac{\partial\rho}{\partial t}=\rho\nabla\cdot\mathbf{v}+(\mathbf{v}\cdot\nabla)\rho+\frac{\partial\rho}{\partial t}=\rho(\nabla\cdot\mathbf{v}+\frac{d\log\rho}{dt})\] Eq. (\ref{conEq2}) describes a flux of the current of a unit mass density. Eq. (\ref{conEq1}) or Eq. (\ref{conEq2}) is a fundamental constraint that almost all physical quantities must obey as a local form of conservation laws. In this paper, we introduce another quantity to describe the continuum dynamics: the local angle of rotation. It is a scalar in 2D. In 3D, it has three components, along the x-, y-, and z-axes. In classical continuum dynamics, we are not interested in the absolute values of a local angle of rotation, and we discuss vorticity more often than the local angular velocity. If we are given the velocity, the mass, and the pressure, the state of a fluid is completely determined. However, the purpose of our work is to search for a new representation on which machines can learn from physical data, and we want it to be a velocity-free formulation.
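Before moving on, the Eulerian continuity equation, Eq. (\ref{conEq1}), can be verified numerically with a minimal sketch (illustrative only; the profile and all names below are our own choices, not part of the method). For a travelling Gaussian advected at constant velocity, the flux divergence and the local time derivative cancel:

```python
import numpy as np

# Sanity check of the 1D continuity equation d(rho*v)/dx + d(rho)/dt = 0
# for a travelling Gaussian rho(x, t) = exp(-(x - c*t)**2) with constant
# velocity v = c.  All names here are illustrative, not from the paper.
c = 0.7
rho = lambda x, t: np.exp(-(x - c * t) ** 2)

x = np.linspace(-3.0, 3.0, 2001)
t, dt, dx = 0.4, 1e-5, x[1] - x[0]

flux_x = np.gradient(rho(x, t) * c, dx)              # spatial divergence of rho*v
rho_t = (rho(x, t + dt) - rho(x, t - dt)) / (2 * dt) # time derivative at fixed x

residual = np.max(np.abs(flux_x + rho_t))
print(residual)  # small: the continuity equation holds
```

The residual is limited only by the finite-difference accuracy of the grid.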
Thus, we need to move our interest from the velocity to the angle of rotation. We begin with the equation relating vorticity and angular velocity: \begin{equation} \nabla\times\mathbf{v}=2\frac{d\boldsymbol{\theta}}{dt}\label{conEq4} \end{equation} It is well known that the vorticity is twice the angular velocity at any point in a moving fluid. In this paper, we explicitly denote the local angles of rotation in the equation. We interpret Eq. (\ref{conEq4}) as a complementary equation to Eq. (\ref{conEq2}). We refer to Eq. (\ref{conEq4}) as the second kind of continuity equation. Eq. (\ref{conEq4}) has the total time derivative, which means that the local angles depend on the velocity. In Eq. (\ref{conEq3}), we will see a different version of Eq. (\ref{conEq4}). Eq. (\ref{conEq3}) is for the Eulerian description, and it will be complementary to Eq. (\ref{conEq1}). We will refer to Eqs. (\ref{conEq1}) and (\ref{conEq3}) as the equations of mass transport for the Eulerian view. We can also refer to Eqs. (\ref{conEq2}) and (\ref{conEq4}) as the equations of mass transport for the Lagrangian description. Now we are ready to explore a new representation of the continuum dynamics. \subsection{Mass-Volume Conversion} \label{sec:conversion} As a continuum, particles or material points of an object are represented by a mass density, which is conventionally mass per unit volume. In this section, we propose a mathematical process that exchanges mass elements with volume elements. Through this process, which we call mass-volume conversion and illustrate for the one-dimensional case in Fig. \ref{fig:MVC}, the mass density $\rho=dm/dx$ switches to its inverse mass density $\tilde{\rho}=dx/dm$. By introducing new notations for inverse mass and inverse volume, we can write it with the same definition of density, that is, $\tilde{\rho}=d\tilde{m}/d\tilde{x}$.
For this, we need to assume that the volume element $dx$ can be converted into the new mass element $d\tilde{m}$, and the mass element $dm$ can be converted into the new volume element $d\tilde{x}$. The new mass density can switch back to the initial mass density by the same rules $d\tilde{m}=dx$ and $d\tilde{x}=dm$. It looks like a trivial transform, but it is not. \begin{figure}[htp] \centering \includegraphics[width=8.6cm]{MassVolumeConversion} \caption{One-dimensional mass-volume conversion.} \label{fig:MVC} \end{figure} Let us continue the discussion in a multi-dimensional space. As derived above, the one-dimensional mass density also becomes a one-dimensional volume ratio $\rho=dm/dx=d\tilde{x}/dx$ by converting the mass element into the new volume element. A multivariate analogue of the volume ratio $d\tilde{x}/dx$ is the Jacobian matrix $\left[\partial\tilde{\mathbf{x}}/\partial\mathbf{x}\right]$, where $\mathbf{x}\in\boldsymbol{\Omega}\subset\mathbb{R}^n$, $\tilde{\mathbf{x}}\in\tilde{\boldsymbol{\Omega}}\subset\mathbb{R}^n$. To equate it to the scalar density $\rho$, we need to apply the determinant operator to the Jacobian matrix. This is consistent with interpreting the scalar ratio $d\tilde{\mathbf{x}}/d\mathbf{x}$ as a volume ratio, because the geometric meaning of the Jacobian determinant is precisely the volumetric ratio. Thus, the mass-volume conversion in a multi-dimensional space leads to the equivalence of mass density and Jacobian determinant, \begin{equation} \rho=\det\mathbf{J} \label{MVC1} \end{equation} where $\mathbf{J}=\mathbf{J}(\mathbf{x})=\left[\partial\tilde{\mathbf{x}}/\partial\mathbf{x}\right]$ is a positive definite Jacobian matrix and $\rho=\rho(\mathbf{x})$ is a positive density function.
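The one-dimensional conversion of Fig. \ref{fig:MVC} is easy to sketch numerically. The following illustrative snippet (grids and names are our own, not part of the method) resamples a sampled density at equal increments of cumulative mass, so that the inverse density $\tilde{\rho}=dx/dm$ lives on the mass coordinate:

```python
import numpy as np

# One-dimensional mass-volume conversion (illustrative sketch).
# Given density samples rho(x) on a regular grid, the cumulative mass
# m(x) serves as the new coordinate x~, and the inverse density
# rho~ = dx/dm = 1/rho is resampled at equal increments of mass.
x = np.linspace(-4.0, 4.0, 4001)
rho = np.exp(-x**2) + 0.05          # keep the density strictly positive

mass = np.concatenate(([0.0], np.cumsum(0.5*(rho[1:]+rho[:-1])*np.diff(x))))
x_tilde = np.linspace(0.0, mass[-1], 500)   # uniform grid in the mass coordinate
x_of_mass = np.interp(x_tilde, mass, x)     # invert m(x) by interpolation
rho_tilde = 1.0 / np.interp(x_of_mass, x, rho)

# Integrating rho~ over the mass coordinate must recover the original
# interval length (the total "volume"), here 8.
length = np.sum(0.5*(rho_tilde[1:]+rho_tilde[:-1])*np.diff(x_tilde))
print(length)  # ~ 8.0
```

The trapezoidal sum of $\tilde{\rho}$ over the mass coordinate recovers the length of the original interval, as the exchange of mass and volume elements requires.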
Similarly, we can have the same equivalence for the new mass density: \begin{equation} \tilde{\rho}=\det\tilde{\mathbf{J}} \label{MVC2} \end{equation} where $\tilde{\mathbf{J}}=\tilde{\mathbf{J}}(\tilde{\mathbf{x}})=\left[\partial\mathbf{x}/\partial\tilde{\mathbf{x}}\right]$ is the inverse matrix of $\mathbf{J}$ and $\tilde{\rho}=\tilde{\rho}(\tilde{\mathbf{x}})$ is the reciprocal of the original function $\rho=\rho(\mathbf{x})$, that is, \begin{equation} \tilde{\rho}=\frac{1}{\rho} \label{inversion1} \end{equation} We refer to Eq. (\ref{inversion1}) as the mass-mass inversion rule that is shown in Figs. \ref{fig:highlight} and \ref{fig:Representation}. We can see that the equivalence of mass density and Jacobian determinant is symmetric under the conversion process. It looks trivial again, but it is not a simple reciprocal operation because it involves a nonlinear coordinate transformation between $\mathbf{x}$ and $\tilde{\mathbf{x}}$. If either is the mass density, we can refer to the other as the density of inverse mass or inverted mass density. This inversion transform originates from the Legendre transform~\cite{Zia2009}. Eq. (\ref{MVC1}) can also be understood in Monge’s continuous formulation~\cite{Haker2004,Kolouri2017}. The optimal mass transportation problem is to find a mapping that transports a first mass distribution onto a second mass distribution at a minimal cost. The general formulation applies to our problem in two ways. First, the second mass distribution becomes uniform in Eq. (\ref{MVC1}). Second, we can also consider the opposite case, the inverse mapping from the uniform second distribution to the initial mass distribution, from which we can derive Eq. (\ref{MVC2}). Our purpose is to transform a mass distribution to its inverse distribution, while optimal mass transportation finds an optimal mapping between two mass distributions. \subsection{Differential Forms} The mass-volume conversion takes a definite form in Eqs.
(\ref{MVC1}) and (\ref{MVC2}), but it becomes more powerful when we put it in a differential form that is independent of coordinates. By taking the logarithm and performing some calculus, \begin{eqnarray*} &&\rho=\det\mathbf{J} \nonumber \\ &\Rightarrow&\log\rho=\log\left[\det\mathbf{J}\right]=\mathrm{Tr}\left[\log\mathbf{J}\right] \\ &\Rightarrow&d\left(\log\rho\right)=d\left(\mathrm{Tr}\left[\log\mathbf{J}\right]\right)=\mathrm{Tr}\left[d\left(\log\mathbf{J}\right)\right]=\mathrm{Tr}\left[\mathbf{J}^{-1}d\mathbf{J}\right], \end{eqnarray*} we can get its differential form \begin{equation} d\left(\log\rho\right)=\mathrm{Tr}\left[\mathbf{J}^{-1}d\mathbf{J}\right] \label{dMVC1} \end{equation} We can utilize Eq. (\ref{dMVC1}) in various ways. Examples include the following cases: \begin{eqnarray} \nabla\left(\log\rho\right)&=&\mathrm{Tr}[\mathbf{J}^{-1}\nabla\mathbf{J}] \label{dMVC1_1} \\ \frac{d\log\rho}{dt}&=&\mathrm{Tr}[\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}] \label{dMVC1_2} \\ \left(\mathbf{v}\cdot\nabla\right)\left(\log\rho\right)&=&\mathrm{Tr}[\mathbf{J}^{-1}\left(\mathbf{v}\cdot\nabla\right)\mathbf{J}]. \label{dMVC1_3} \end{eqnarray} Similarly, from Eq. (\ref{MVC2}) we can get \begin{equation} d\left(\log\tilde{\rho}\right)=\mathrm{Tr}[d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}] \label{dMVC2} \end{equation} In general, the differential Jacobian and the inverse Jacobian do not commute, but the ordering is immaterial inside the trace operator. We can utilize Eq. (\ref{dMVC2}) to obtain \begin{eqnarray} \tilde{\nabla}\left(\log\tilde{\rho}\right)&=&\mathrm{Tr}[\tilde{\nabla}\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}] \label{dMVC2_1} \\ \frac{d\log\tilde{\rho}}{dt}&=&\mathrm{Tr}[\frac{d\tilde{\mathbf{J}}}{dt}\tilde{\mathbf{J}}^{-1}] \label{dMVC2_2} \end{eqnarray} where $\tilde{\nabla}$ is the gradient with respect to the new coordinates $\tilde{\mathbf{x}}$.
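Eq. (\ref{dMVC1}) is Jacobi's formula for the derivative of a log-determinant, and it can be checked numerically, together with the reciprocal determinants behind Eq. (\ref{inversion1}), by a minimal sketch (the random matrix and tolerances are our own illustrative choices):

```python
import numpy as np

# Numerical sketch: check Jacobi's formula d(log det J) = Tr[J^{-1} dJ]
# of Eq. (dMVC1), and det(J~) = 1/det(J) behind Eq. (inversion1), for a
# random well-conditioned matrix J close to the identity.
rng = np.random.default_rng(0)
J = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
dJ = 1e-6 * rng.standard_normal((3, 3))

lhs = np.log(np.linalg.det(J + dJ)) - np.log(np.linalg.det(J))
rhs = np.trace(np.linalg.solve(J, dJ))    # Tr[J^{-1} dJ] without forming J^{-1}
print(abs(lhs - rhs))                     # agrees to first order in dJ

J_tilde = np.linalg.inv(J)                # the inverse Jacobian J~
print(np.linalg.det(J) * np.linalg.det(J_tilde))  # ~1: rho * rho~ = 1
```

The mismatch between the two sides is second order in $d\mathbf{J}$, as the differential form requires.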
Because $\tilde{\mathbf{J}}$ is the inverse matrix of $\mathbf{J}$, we have the identity \begin{equation} \mathbf{J}^{-1}d\mathbf{J}+d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}=\mathbf{O}, \label{idt1} \end{equation} where $\mathbf{O}$ is a matrix whose components are all zero. Then, we get differential forms of the mass-Jacobian conversion rules: \begin{eqnarray} d\left(\log\rho\right)+\mathrm{Tr}[d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}]=0, \label{dMVC3} \\ d\left(\log\tilde{\rho}\right)+\mathrm{Tr}[\mathbf{J}^{-1}d\mathbf{J}]=0. \label{dMVC4} \end{eqnarray} \subsection{Incorporating the Angles} \label{sec:angles} The conversion of mass density and Jacobian determinant tells us that the mass elements and volume elements are exchangeable, but the mass is a scalar quantity and the Jacobian is a tensor quantity. We need extra dimensions to determine the Jacobian matrix. For a compact notation, we define a linear operator denoted by $[\cdot]_\times$. When we apply it to a $3\times 3$ matrix, the operator outputs a column vector by \begin{equation} \left[\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}\right]_{\times} = \frac{1}{2}\begin{pmatrix} A_{23}-A_{32} \\ A_{31}-A_{13} \\ A_{12}-A_{21} \end{pmatrix} \label{eq:crossop1} \end{equation} When applied to a column vector, it outputs a matrix by \begin{equation} \left[\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}\right]_{\times} = \begin{pmatrix} 0 & a_3 & -a_2 \\ -a_3 & 0 & a_1 \\ a_2 & -a_1 & 0 \end{pmatrix} \label{eq:crossop2} \end{equation} The operator is an involution for a scalar, a vector, and an anti-symmetric matrix. The Levi-Civita symbols can replace the first use case of the operator, but we want to perform both cases with a single compact notation. Now, by applying the operator to the normalized differential Jacobian $\mathbf{J}^{-1}d\mathbf{J}$ that appears in Eq.
(\ref{dMVC1}), we can make a connection to the local angles of rotation: \begin{equation} d\boldsymbol{\theta} = \left[\mathbf{J}^{-1}d\mathbf{J}\right]_{\times} \label{dthJ1} \end{equation} The proof is straightforward, and we omit it. Similarly, we can derive \begin{equation} d\tilde{\boldsymbol{\theta}} = [d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}]_{\times}. \label{dthJ2} \end{equation} By Eq. (\ref{idt1}), we have \begin{equation} d\boldsymbol{\theta}+d\tilde{\boldsymbol{\theta}}=\mathbf{0} \label{idt2} \end{equation} where $\mathbf{0}$ is a column vector whose components are all zero. By Eq. (\ref{idt1}), we also have \begin{eqnarray} d\boldsymbol{\theta}+[d\tilde{\mathbf{J}}~\tilde{\mathbf{J}}^{-1}]_{\times}=\mathbf{0}, \label{dthJ3} \\ d\tilde{\boldsymbol{\theta}}+[\mathbf{J}^{-1}d\mathbf{J}]_{\times}=\mathbf{0}. \label{dthJ4} \end{eqnarray} Unfortunately, we are generally not allowed to obtain definite forms for the angles by integrating Eqs. (\ref{dthJ1}), (\ref{dthJ2}), (\ref{dthJ3}), or (\ref{dthJ4}) because the differential Jacobian and the inverse Jacobian do not commute. If the matrices in the bracket commute in a fixed eigenspace of the Jacobian, we may have the definite forms \begin{eqnarray} \boldsymbol{\theta}+[\log\tilde{\mathbf{J}}]_{\times}=\mathbf{0} \label{thJ1}, \\ \tilde{\boldsymbol{\theta}}+[\log\mathbf{J}]_{\times}=\mathbf{0}. \label{thJ2} \end{eqnarray} It is a particular case in which we can find a locally linear embedding of nonlinear dynamics. \subsection{Transport of Jacobian} \label{sec:Jacobian_transport} We usually consider a field of the Jacobian matrix for a curvilinear coordinate transform, usually in a static situation. However, the curvilinear coordinates we consider here move, and the Jacobian changes. In this subsection, we merge Eqs. (\ref{conEq2}) and (\ref{conEq4}) to derive a transport equation of Jacobian. The two equations, Eqs. (\ref{conEq2}) and (\ref{conEq4}), are complementary to each other.
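The bracket operator of Eqs. (\ref{eq:crossop1}) and (\ref{eq:crossop2}) can be sketched in a few lines (an illustrative implementation following the paper's sign convention; `cross_op` is our own name):

```python
import numpy as np

# The bracket operator [.]_x of Eqs. (crossop1)/(crossop2): a 3x3 matrix
# maps to a column vector, and a vector maps to an antisymmetric matrix.
def cross_op(A):
    A = np.asarray(A, dtype=float)
    if A.ndim == 2:                       # 3x3 matrix -> vector
        return 0.5 * np.array([A[1, 2] - A[2, 1],
                               A[2, 0] - A[0, 2],
                               A[0, 1] - A[1, 0]])
    a1, a2, a3 = A                        # vector -> antisymmetric matrix
    return np.array([[0.0,  a3, -a2],
                     [-a3, 0.0,  a1],
                     [ a2, -a1, 0.0]])

a = np.array([1.0, -2.0, 0.5])
print(cross_op(cross_op(a)))  # recovers a: the operator is an involution
```

Applying the operator twice to a vector recovers the vector, and a symmetric matrix is mapped to the zero vector, consistent with the involution property stated above.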
Before merging them, we need to consider the time-derivative form of Eq. (\ref{dthJ1}): \begin{equation} \frac{d\boldsymbol{\theta}}{dt} = \left[\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}\right]_{\times}. \label{dtthJ1} \end{equation} First, by using Eq. (\ref{dMVC1_2}), we can factor out the trace operator from Eq. (\ref{conEq2}): \begin{eqnarray} 0&=&\nabla\cdot\mathbf{v}+\frac{d\log\rho}{dt} \nonumber \\ &=&\mathrm{Tr}[\nabla\mathbf{v}]+\mathrm{Tr}[\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}] \nonumber \\ &=&\mathrm{Tr}[\nabla\mathbf{v}+\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}] \label{conEq2mer} \end{eqnarray} Second, by using Eq. (\ref{dtthJ1}), we can factor out the bracket operator from Eq. (\ref{conEq4}): \begin{eqnarray} 0&=&\nabla\times\mathbf{v}-\frac{d(2\boldsymbol{\theta})}{dt} \nonumber \\ &=&-2[\nabla\mathbf{v}]_{\times}-2[\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}]_{\times} \nonumber \\ &=&-2[\nabla\mathbf{v}+\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}]_{\times} \label{conEq4mer} \end{eqnarray} Finally, we can get a merged equation from Eqs. (\ref{conEq2mer}) and (\ref{conEq4mer}): \begin{equation} \nabla\mathbf{v}+\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}=\mathbf{O} \label{transJac0} \end{equation} We can further transform Eq. (\ref{transJac0}) into what we call the transport equation of Jacobian \begin{equation} \nabla\left(\mathbf{Jv}\right)+\frac{\partial\mathbf{J}}{\partial t}=\mathbf{O} \label{transJac} \end{equation} by performing the following calculus with Eq. (\ref{transJac0}): \begin{eqnarray*} \nabla\left(\mathbf{Jv}\right)+\frac{\partial\mathbf{J}}{\partial t}&=&\mathbf{J}(\nabla\mathbf{v})+(\nabla\mathbf{J})\mathbf{v}+\frac{\partial\mathbf{J}}{\partial t} \\ &=&\mathbf{J}\left(\nabla\mathbf{v}\right)+(\mathbf{v}\cdot\nabla)\mathbf{J}+\frac{\partial\mathbf{J}}{\partial t} \\ &=&\mathbf{J}(\nabla\mathbf{v})+\frac{d\mathbf{J}}{dt} \\ &=&\mathbf{J}(\nabla\mathbf{v}+\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt})=\mathbf{O} \end{eqnarray*} For deriving Eq. 
(\ref{transJac}), we also used the following identity regarding the directional derivatives of Jacobian: \begin{equation} \left(\mathbf{v}\cdot\nabla\right)\mathbf{J}=\left(\nabla\mathbf{J}\right)\mathbf{v}, \label{gradJv} \end{equation} which comes from \begin{eqnarray*} (\mathbf{v}\cdot\nabla)\mathbf{J}&=&(v_k\frac{\partial}{\partial x_k})J_{ij} \\ &=&v_k\frac{\partial}{\partial x_k}\left(\frac{\partial \tilde{x}_i}{\partial x_j}\right) \\ &=&\frac{\partial}{\partial x_j}\left(\frac{\partial \tilde{x}_i}{\partial x_k}\right) v_k\equiv\left(\nabla\mathbf{J}\right)\mathbf{v}. \end{eqnarray*} The transport equation of Jacobian in Eq. (\ref{transJac}) resembles the transport equation of mass in Eq. (\ref{conEq1}). Eqs. (\ref{conEq2}) and (\ref{conEq4}) can be replaced by Eq. (\ref{transJac}) for the Lagrangian description, as explained in Section \ref{sec:viewpoints}. \subsection{The Velocity is Zero} In Section \ref{sec:conversion}, we derived the conjugated density of mass, called the density of inverse mass, through Eqs. (\ref{MVC2}) and (\ref{inversion1}). We now come to one of the most important results of the paper. Let us find out what happens to the inverse mass when the original mass is transported according to Eq. (\ref{transJac}) or, equivalently, Eqs. (\ref{conEq2}) and (\ref{conEq4}). First, let us consider the gradient of the inverse mass’s velocity. We find that this velocity gradient always vanishes: \begin{equation} \tilde{\nabla}\tilde{\mathbf{v}}=\mathbf{O} \label{zerov} \end{equation} from using Eq.
(\ref{transJac}) by performing the following calculus \begin{eqnarray} \tilde{\nabla}\tilde{\mathbf{v}}&=&(\tilde{\mathbf{J}}^T\nabla)(\frac{d\tilde{\mathbf{x}}}{dt}) \nonumber \\ &=&(\tilde{\mathbf{J}}^T\nabla)[(\mathbf{v}\cdot\nabla)\tilde{\mathbf{x}}+\frac{\partial\tilde{\mathbf{x}}}{\partial t}] \nonumber \\ &=&[\nabla(\mathbf{Jv}+\frac{\partial\tilde{\mathbf{x}}}{\partial t})]\tilde{\mathbf{J}} \nonumber \\ &=&[\nabla\left(\mathbf{Jv}\right)+\frac{\partial\mathbf{J}}{\partial t}]\mathbf{J}^{-1} = \mathbf{O} \label{transJacDer} \end{eqnarray} This immediately implies zero velocity at all points, as long as we anchor the reference point, by \begin{equation} \tilde{\mathbf{v}}=\int d\tilde{\mathbf{v}}=\int_{\text{ref. point}}^{\tilde{\mathbf{x}}}(\tilde{\nabla}\tilde{\mathbf{v}})~d\tilde{\mathbf{x}}=\mathbf{0} \end{equation} The canonical way of object transfer in physics is transportation. However, Eq. (\ref{zerov}) suggests that the inverse mass can move in a totally different way. In science fiction, films, video games, and virtual worlds, we often imagine a theoretical means of transferring objects: teleportation. An object can disappear in one place and simultaneously reappear in another distant place without physically crossing the space in between. In the following subsections, we will show that the dynamics of inverse mass follows such a teleportation-like behavior, as illustrated in Fig. \ref{fig:trans_tele}. \subsection{Teleport of Inverse Jacobian} \label{sec:invJacobian_teleport} Let us define a tensor quantity denoted by $\boldsymbol{\Sigma}$ as \begin{equation} \boldsymbol{\Sigma}\equiv\nabla(\mathbf{Jv})+\frac{\partial\mathbf{J}}{\partial t}. \label{Sigma} \end{equation} Then, we can simply express the transport equation of Jacobian of Eq. (\ref{transJac}) as $\boldsymbol{\Sigma}=\mathbf{O}$.
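The transport equation of Jacobian, Eq. (\ref{transJac}), can be checked numerically for a simple closed-form flow (an illustrative sketch; the expansion flow and all names are our own choices). For the 1D expansion $v(x)=x$ with a uniform initial density, particles follow $x(t)=x_0 e^{t}$, the mass coordinate $\tilde{x}=x e^{-t}$ is frozen, and the Jacobian is $J(x,t)=e^{-t}$:

```python
import numpy as np

# Check the transport equation of Jacobian, d(Jv)/dx + dJ/dt = 0, for a
# 1D expansion flow v(x) = x with uniform initial density (illustrative).
# The mass coordinate x~ = x*exp(-t) is frozen along particles, so the
# Jacobian field is J(x, t) = dx~/dx = exp(-t).
J = lambda x, t: np.exp(-t) * np.ones_like(x)
v = lambda x: x

x = np.linspace(0.5, 2.0, 1001)
t, dt, dx = 0.3, 1e-6, x[1] - x[0]

grad_Jv = np.gradient(J(x, t) * v(x), dx)          # d(Jv)/dx
dJ_dt = (J(x, t + dt) - J(x, t - dt)) / (2 * dt)   # dJ/dt at fixed x
print(np.max(np.abs(grad_Jv + dJ_dt)))             # ~0
```

Here $\partial_x(Jv)=e^{-t}$ and $\partial_t J=-e^{-t}$ cancel pointwise, in agreement with $\boldsymbol{\Sigma}=\mathbf{O}$.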
For a symmetric formulation, we can also define \begin{equation} \tilde{\boldsymbol{\Sigma}}\equiv\tilde{\nabla}(\mathbf{\tilde{J}\tilde{v}})+\frac{\partial\tilde{\mathbf{J}}}{\partial t}. \label{tSigma} \end{equation} However, $\tilde{\boldsymbol{\Sigma}}$ is not a zero matrix. Let us determine what it is by performing a calculation similar to Eq. (\ref{transJacDer}): \begin{eqnarray} \nabla\mathbf{v}&=&(\mathbf{J}^T\tilde{\nabla})(\frac{d\mathbf{x}}{dt}) \nonumber \\ &=&(\mathbf{J}^T\tilde{\nabla})[(\tilde{\mathbf{v}}\cdot\tilde{\nabla})\mathbf{x}+\frac{\partial\mathbf{x}}{\partial t}] \nonumber \\ &=&[\tilde{\nabla}(\mathbf{\tilde{J}\tilde{v}}+\frac{\partial\mathbf{x}}{\partial t})]\mathbf{J} \nonumber \\ &=&[\tilde{\nabla}(\mathbf{\tilde{J}\tilde{v}})+\frac{\partial\tilde{\mathbf{J}}}{\partial t}]\tilde{\mathbf{J}}^{-1} \label{transJacDer2} \end{eqnarray} which leads, by Eq. (\ref{tSigma}), to \begin{equation} \nabla\mathbf{v}=\tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{J}}^{-1} \label{gradv1} \end{equation} On the other hand, we have the following time-dependent version of Eq. (\ref{idt1}) by $\tilde{\mathbf{v}}=\mathbf{0}$: \begin{equation} \mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}+\frac{\partial\tilde{\mathbf{J}}}{\partial t}~\tilde{\mathbf{J}}^{-1}=\mathbf{O}. \label{tidt} \end{equation} Note the partial time derivative, which appears because the velocity of the inverse Jacobian is zero by Eq. (\ref{zerov}). From Eqs. (\ref{transJac0}), (\ref{gradv1}), and (\ref{tidt}), we finally get \begin{equation} \frac{\partial\tilde{\mathbf{J}}}{\partial t}=\tilde{\boldsymbol{\Sigma}} \label{teleJ1} \end{equation} or, equivalently, \begin{equation} \frac{\partial\tilde{\mathbf{J}}}{\partial t}\tilde{\mathbf{J}}^{-1}=\tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{J}}^{-1}. \label{teleJ2} \end{equation} We can double-check this directly from Eq. (\ref{tSigma}) by replacing the velocity term with zero. Eq.
(\ref{teleJ1}) does not have any velocity-dependent term, and the inverse Jacobian is governed by the local rate $\tilde{\boldsymbol{\Sigma}}$. The local rate is not arbitrary. It should follow Eq. (\ref{gradv1}), which comes from the gradient of a known or observed velocity field. Even if we don’t know the velocity field, it leaves a constraint that the local rate should follow. Another form of Eq. (\ref{gradv1}) is $\tilde{\boldsymbol{\Sigma}}=\tilde{\nabla}\mathbf{v}$. From the identity $\tilde{\nabla}\times\tilde{\nabla}v_i=\mathbf{0}$, we learn that the curl of each row of $\tilde{\boldsymbol{\Sigma}}$ is always zero. We can simply denote it as the curl of a tensor \begin{equation} \tilde{\nabla}\times\tilde{\boldsymbol{\Sigma}}=\epsilon_{ijk}\tilde{\Sigma}_{mj,i}\mathbf{e}_k\otimes\mathbf{e}_m=\mathbf{O} \end{equation} By Eq. (\ref{teleJ1}), we can also verify that the inverse Jacobian never has a circulation at any point: \begin{equation} \tilde{\nabla}\times\tilde{\mathbf{J}}=\mathbf{O} \label{zerocurlJ} \end{equation} which means $\oint_{any}\tilde{\mathbf{J}}d\tilde{\mathbf{l}}=\mathbf{0}$, and we can discretize Eq. (\ref{zerocurlJ}) for the 2D case into \begin{equation} (\tilde{\mathbf{J}}_{ij}-\tilde{\mathbf{J}}_{i+1,j+1})\begin{pmatrix} 1 \\ -1\end{pmatrix} =(\tilde{\mathbf{J}}_{i,j+1}-\tilde{\mathbf{J}}_{i+1,j})\begin{pmatrix} 1 \\ 1\end{pmatrix}. \label{nocurlJ2D} \end{equation} Because a field of Jacobian defines a curvilinear coordinate system, it is not allowed to have a circulation. We can refer to Eqs. (\ref{teleJ1}) and (\ref{zerocurlJ}) as the teleport equations of inverse Jacobian. \subsection{Advective Rotation} The advection operator is $\mathbf{v}\cdot\nabla$, and the advection of local angles is $(\mathbf{v}\cdot\nabla)\boldsymbol{\theta}$. It can be changed into another form by taking the following steps with the Levi-Civita symbol from Eq.
(\ref{dtthJ1}): \begin{eqnarray*} \frac{d\boldsymbol{\theta}}{dt}&=&[\mathbf{J}^{-1}\frac{d\mathbf{J}}{dt}]_{\times} \\ &=&[\mathbf{J}^{-1}((\mathbf{v}\cdot\nabla)\mathbf{J}+\frac{\partial\mathbf{J}}{\partial t})]_{\times} \\ &=&[\mathbf{J}^{-1}(\nabla\mathbf{J})\mathbf{v}]_{\times}+[\mathbf{J}^{-1}\frac{\partial\mathbf{J}}{\partial t}]_{\times} \\ &=&\frac{1}{2}\epsilon_{ijk}J_{jl}^{-1}(\partial_kJ_{lm})v_m+\frac{\partial\boldsymbol{\theta}}{\partial t} \\ &=&\frac{1}{2}\epsilon_{ijk}J_{ml}^{-1}(\partial_kJ_{lm})v_j+\frac{\partial\boldsymbol{\theta}}{\partial t} \\ &=&\frac{1}{2}\epsilon_{ijk}v_j\partial_k(\log\rho)+\frac{\partial\boldsymbol{\theta}}{\partial t} \\ &=&\frac{1}{2}\mathbf{v}\times(\nabla\log\rho)+\frac{\partial\boldsymbol{\theta}}{\partial t} \end{eqnarray*} where Eqs. (\ref{dMVC1_1}) and (\ref{gradJv}) are used in the derivation. When we compare it to the chain rule $\frac{d\boldsymbol{\theta}}{dt}=(\mathbf{v}\cdot\nabla)\boldsymbol{\theta}+\frac{\partial\boldsymbol{\theta}}{\partial t}$, we get \begin{equation} \mathbf{v}\times\nabla\log\rho=(\mathbf{v}\cdot\nabla)(2\boldsymbol{\theta}) \label{advrot1} \end{equation} or, equivalently, \begin{equation} \nabla\log\rho\times\mathbf{v}+\nabla(2\boldsymbol{\theta})\mathbf{v}=\mathbf{0} \label{advrot2} \end{equation} We can also derive Eqs. (\ref{advrot1}) or (\ref{advrot2}) directly from Eq. (\ref{gradJv}) by multiplying the inverse Jacobian on the left side of Eq. (\ref{gradJv}) and then taking the cross operator $[\cdot]_{\times}$. \begin{figure}[htp] \centering \includegraphics[width=6cm]{advRot} \caption{An example of zero vorticity. The mass current has non-zero curl despite zero vorticity.} \label{fig:zerovorticity} \end{figure} \subsection{Complete Eulerian Form} The reason why we derive the equation of advective rotation is that it yields the Eulerian form of Eq. (\ref{conEq4}). It is important for deriving the teleport equations of inverse mass in the next subsection. From Eqs.
(\ref{conEq4}) and (\ref{advrot1}), we have \begin{eqnarray*} \mathbf{0}&=&\nabla\times\mathbf{v}-2\frac{d\boldsymbol{\theta}}{dt} \\ &=&\nabla\times\mathbf{v}-2(\mathbf{v}\cdot\nabla)\boldsymbol{\theta}-2\frac{\partial\boldsymbol{\theta}}{\partial t} \\ &=&\nabla\times\mathbf{v}-(\mathbf{v}\times\nabla)\log\rho-2\frac{\partial\boldsymbol{\theta}}{\partial t} \\ &=&\frac{1}{\rho}(\rho\nabla\times\mathbf{v}+\nabla\rho\times\mathbf{v}-2\rho\frac{\partial\boldsymbol{\theta}}{\partial t}) \\ &=&\frac{1}{\rho}(\nabla\times(\rho\mathbf{v})-2\rho\frac{\partial\boldsymbol{\theta}}{\partial t}) \end{eqnarray*} which gives \begin{equation} \nabla\times(\rho\mathbf{v})-2\rho\frac{\partial\boldsymbol{\theta}}{\partial t}=\mathbf{0} \label{conEq3} \end{equation} The continuity equation, Eq. (\ref{conEq1}), states how the mass density should increase or decrease at a fixed point when a mass current converges or diverges at the point. Likewise, Eq. (\ref{conEq3}) tells us how the local angle of rotation should decrease or increase at a fixed point when a mass current rotates clockwise or counterclockwise right around the point. Even when the mass current does not visibly rotate, the angle of rotation at a fixed point may decrease or increase, as shown in Fig. \ref{fig:zerovorticity}. \subsection{Teleport of Inverse Mass} \label{sec:teleport_inv} Let us define a scalar quantity $\sigma$ as \begin{equation} \sigma\equiv\nabla\cdot(\rho\mathbf{v})+\frac{\partial\rho}{\partial t}. \label{sigma} \end{equation} Then, $\sigma=0$ means the transport equation of mass of Eq. (\ref{conEq1}). Similarly, we can define $\tilde{\sigma}$ as \begin{equation} \tilde{\sigma}\equiv\tilde{\nabla}\cdot(\tilde{\rho}\tilde{\mathbf{v}})+\frac{\partial\tilde{\rho}}{\partial t}. \label{tsigma} \end{equation} Because the velocity is zero from Eq.
(\ref{zerov}), we immediately have \begin{equation} \frac{\partial\tilde{\rho}}{\partial t}=\tilde{\sigma} \label{telemass1} \end{equation} or, equivalently, \begin{equation} \frac{\partial\log\tilde{\rho}}{\partial t}=\frac{\tilde{\sigma}}{\tilde{\rho}}. \label{telemass1a} \end{equation} Similarly, we define $\boldsymbol{\tau}$ and $\tilde{\boldsymbol{\tau}}$ as \begin{equation} \boldsymbol{\tau}\equiv\nabla\times(\rho\mathbf{v})-2\rho\frac{\partial\boldsymbol{\theta}}{\partial t} \label{tau} \end{equation} and \begin{equation} \tilde{\boldsymbol{\tau}}\equiv\tilde{\nabla}\times(\tilde{\rho}\tilde{\mathbf{v}})-2\tilde{\rho}\frac{\partial\tilde{\boldsymbol{\theta}}}{\partial t}, \label{ttau} \end{equation} respectively. Then we can express Eq. (\ref{conEq3}) as $\boldsymbol{\tau}=\mathbf{0}$. Because the velocity of inverse mass is zero from Eq. (\ref{zerov}), we have from Eq. (\ref{ttau}) \begin{equation} 2\tilde{\rho}\frac{\partial\tilde{\boldsymbol{\theta}}}{\partial t}=-\tilde{\boldsymbol{\tau}} \label{telemass2} \end{equation} or, equivalently, \begin{equation} \frac{\partial(2\tilde{\boldsymbol{\theta}})}{\partial t}=-\frac{\tilde{\boldsymbol{\tau}}}{\tilde{\rho}}. \label{telemass2a} \end{equation} We refer to Eqs. (\ref{telemass1}) and (\ref{telemass2}) or, equivalently, Eqs. (\ref{telemass1a}) and (\ref{telemass2a}) as the teleport equations of inverse mass. $\tilde{\sigma}$ and $\tilde{\boldsymbol{\tau}}$ are the local rates governing the point-wise dynamics of inverse mass. If we know the velocity of mass density as observational data, we can determine them by \begin{equation} \frac{\tilde{\sigma}}{\tilde{\rho}}=\nabla\cdot\mathbf{v} \label{srcdiv} \end{equation} and \begin{equation} \frac{\tilde{\boldsymbol{\tau}}}{\tilde{\rho}}=\nabla\times\mathbf{v}. \label{srccul} \end{equation} We can derive Eqs.
(\ref{srcdiv}) and (\ref{srccul}) by \begin{equation*} \frac{\tilde{\sigma}}{\tilde{\rho}}=\frac{\partial\log\tilde{\rho}}{\partial t}=-\frac{d\log\rho}{dt}=\nabla\cdot\mathbf{v} \end{equation*} and \begin{equation*} \frac{\tilde{\boldsymbol{\tau}}}{\tilde{\rho}}=-2\frac{\partial\tilde{\boldsymbol{\theta}}}{\partial t}=2\frac{d\boldsymbol{\theta}}{dt}=\nabla\times\mathbf{v} \end{equation*} where we used \begin{eqnarray} \frac{\partial\log\tilde{\rho}}{\partial t}+\frac{d\log\rho}{dt}=0 \label{inversion2}\\ \frac{\partial\tilde{\boldsymbol{\theta}}}{\partial t}+\frac{d\boldsymbol{\theta}}{dt}=\mathbf{0} \label{inversion3} \end{eqnarray} Eq. (\ref{inversion2}) is derived from Eqs. (\ref{dMVC1_2}), (\ref{dMVC2_2}), (\ref{zerov}), and (\ref{tidt}). Eq. (\ref{inversion3}) is derived from Eqs. (\ref{dthJ3}), (\ref{dthJ4}), and (\ref{tidt}). The local rates are also connected to the local rate $\tilde{\boldsymbol{\Sigma}}$ for the inverse Jacobian in Eq. (\ref{teleJ1}). We can apply the trace and cross operators to Eq. (\ref{gradv1}) and use Eqs. (\ref{srcdiv}) and (\ref{srccul}) to obtain \begin{equation} \frac{\tilde{\sigma}}{\tilde{\rho}}=\mathrm{Tr}[\tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{J}}^{-1}] \end{equation} and \begin{equation} -\frac{\tilde{\boldsymbol{\tau}}}{\tilde{\rho}}=2[\tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{J}}^{-1}]_{\times}. \end{equation} Figure \ref{fig:Representation} shows four different representations of continuum dynamics that we have derived in Sections \ref{sec:mass_transport}, \ref{sec:Jacobian_transport}, and \ref{sec:invJacobian_teleport} together with this subsection. The space denoted by tilde symbols is the representation in which machines can efficiently learn physical motions of objects with low-complexity models and a smaller number of observational data. In this space, the dynamics of the inverse mass density is governed by the teleport equations of inverse mass. Equivalently, its Jacobian is also governed by the teleport equations of inverse Jacobian.
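As a numerical illustration of Eq. (\ref{srcdiv}) (a sketch under our own choice of flow, not from the paper), consider a spreading Gaussian density with standard deviation $s(t)=1+t$, whose velocity field is $v(x,t)=x/(1+t)$ and hence $\nabla\cdot\mathbf{v}=1/(1+t)$. At a fixed mass coordinate (a quantile of the distribution), the rate $\partial_t\log\tilde{\rho}$ should reproduce this divergence:

```python
import numpy as np

# Teleport equation check (illustrative): for rho(x,t) = N(0, s(t)^2)
# with s(t) = 1 + t, the velocity field is v = x/(1+t), so div v = 1/(1+t).
# At a fixed mass coordinate (quantile), d(log rho~)/dt should equal div v.
def rho(x, t):
    s = 1.0 + t
    return np.exp(-x**2 / (2*s**2)) / (np.sqrt(2*np.pi)*s)

def log_rho_tilde_at_quantile(q, t):
    # position of the mass quantile q, then rho~ = 1/rho evaluated there
    x = np.linspace(-12, 12, 20001)
    p = rho(x, t)
    cdf = np.concatenate(([0.0], np.cumsum(0.5*(p[1:]+p[:-1])*np.diff(x))))
    xq = np.interp(q, cdf / cdf[-1], x)
    return -np.log(rho(xq, t))

t, dt, q = 0.5, 1e-3, 0.8
rate = (log_rho_tilde_at_quantile(q, t + dt)
        - log_rho_tilde_at_quantile(q, t - dt)) / (2 * dt)
print(rate, 1.0 / (1.0 + t))  # both ~ 0.667
```

The inverse density at the fixed quantile changes pointwise, with no advection term, exactly at the rate set by the divergence of the observed velocity field.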
\begin{figure}[t] \centering \includegraphics[width=8.8cm]{RepresentationOfDynamics} \caption{Four different representations of continuum dynamics.} \label{fig:Representation} \end{figure} \subsection{Inversion and Viewpoints} \label{sec:viewpoints} The motion of an object or the flow of a continuum can be described from two traditional viewpoints: Eulerian and Lagrangian. As shown in Fig. \ref{fig:view}, we can revisit them by generalizing the terms of pixels (mass-filled quadrangle meshes) and pixel values (amount of mass per pixel). In the Eulerian view, pixels are fixed squares of unit volume, and pixel values are highly variable even under slight object motion. In the Lagrangian view, pixels move along with material points and have variable sizes and shapes because of the converging, diverging, or whirling motions of material points. Pixel sizes can be selected so that pixel values are all unity; multiplying the pixel size by its brightness provides the unit pixel value. Hence, while only pixel values are recorded in the Eulerian view, the Lagrangian view requires the shapes and sizes of pixels instead. These are handled by the Jacobian matrices shown in Fig. \ref{fig:Representation}, without the need to record the uniform pixel values. \begin{figure}[htp] \centering \includegraphics[width=7.5cm]{Viewpoints2} \caption{{\bf Two classical viewpoints in fluid mechanics and their inversion}. The inversion views provide a zero velocity representation of the continuum dynamics, which means that we only observe a change of pixel values without motion (\href{https://drive.google.com/file/d/1q1sH4iBms5cA_HTVBKh9VTIIUnfZsX5Y/view?usp=sharing}{click \underline{here} to see a video clip}). The zero velocity representation enables us to model the continuum dynamics of an object with lower complexity and fewer data.} \label{fig:view} \end{figure} The viewpoints of the dynamics can be inverted by the inversion transform.
The inversion of the Eulerian view is again an Eulerian view: it still uses regular pixel arrays but shows a distorted and inverted distribution of mass density. The mass density in the inverted view is still positive but reciprocal to the original density. Similarly, a distorted and inverted distribution is observed in the inversion of the Lagrangian view; however, pixels move with variable sizes and shapes, and all pixel values are unity, as in the Lagrangian view. The Eulerian view can be transformed into the inversion of the Lagrangian view by mass-volume conversion, a mathematical process that converts the pixel values of the Eulerian view into the pixel sizes of the inverted Lagrangian view. The Lagrangian and inverted Eulerian views can also be transformed into each other in the same way. Here we can describe unusual behavior that differs from observations in the Eulerian viewpoint: when a mass distribution flows continuously, its inverted mass distribution does not; instead, the inverted mass density gradually increases or decreases in a pointwise manner. The inversion views provide the zero-velocity representation of an object in motion or of the flow of a continuum. Notice that the black background of the classical views of the soccer ball is not of zero intensity but of a small positive value, which affects the thickness of the white margin in the inversion views. Figure \ref{fig:view} is a visual example that helps in understanding Fig. \ref{fig:Representation}. The Eulerian view is well described by the transport equations of mass, and the transport equation of the Jacobian explains the Lagrangian view. Similarly, the inverted Eulerian and Lagrangian views are described by the teleport equations of inverse mass and of the inverse Jacobian, respectively. \subsection{Inversion of Incomplete Observations} \label{sec:inv_obs} Determining the dynamics of a system from only mass density observations is an ill-posed problem.
Complete observations require a pair of mass density and velocity field (or a pair of mass density and local angles of rotation). Direct observations of the velocity field determine the local rates of Eqs. (\ref{srcdiv}) and (\ref{srccul}), and the dynamics of inverse mass by Eqs. (\ref{telemass1a}) and (\ref{telemass2a}). We can discretize Eqs. (\ref{telemass1a}) and (\ref{telemass2a}) into $(i,j)$th pixels: \begin{equation} \frac{\partial}{\partial t}\log\tilde{\rho}_{ij}\equiv\frac{\partial}{\partial t}\langle\log\tilde{\rho}\rangle_{ij}=\nabla\cdot\langle\mathbf{v}\rangle_{ij} \end{equation} and \begin{equation} -\frac{\partial}{\partial t}2\boldsymbol{\theta}_{ij}\equiv-\frac{\partial}{\partial t}\langle2\boldsymbol{\theta}\rangle_{ij}=\nabla\times\langle\mathbf{v}\rangle_{ij}, \end{equation} where the operator $\langle\cdot\rangle_{ij}$ denotes averaging over the $(i,j)$th pixel. Equivalently, direct observations of the velocity also determine the local rates of Eq. (\ref{gradv1}), and the dynamics of the inverse Jacobian by Eq. (\ref{teleJ2}). In this study, the approach we take to obtain the inverse mass or the inverse Jacobian is to minimize the divergence of the velocity field. This objective can be transformed and discretized as \begin{eqnarray} \int\rho(\nabla\cdot\mathbf{v})^2dV&=&\int\rho\left(\frac{d\log\rho}{dt}\right)^2dV \nonumber \\ &=&\int\left(-\frac{\partial\log\tilde{\rho}}{\partial t}\right)^2\rho~dV \nonumber \\ &=&\int\left(\frac{\partial\log\tilde{\rho}}{\partial t}\right)^2d\tilde{V} \nonumber \\ &\approx&\sum_{ij}(\log\tilde{\rho}_{ij}^{(t+1)}-\log\tilde{\rho}_{ij}^{(t)})^2, \end{eqnarray} which constitutes Eq. (\ref{ErF}). In summary, our problem is stated as follows. \noindent \textbf{Problem}.
{\it When given $T$ observations \{$\rho^{(t)}(x,y)$\} from the Eulerian point of view, find the $m\times n$ array of inverse Jacobian matrices \{$\tilde{\mathbf{J}}^{(t)}_{ij}$\} for $t=1,\ldots,T$ such that \begin{itemize} \item $\sum_{i,j,t=1}^{m,n,T}(\log\tilde{\rho}_{ij}^{(t+1)}-\log\tilde{\rho}_{ij}^{(t)})^2$ is minimized; \item $\log|\tilde{\mathbf{J}}_{ij}^{(t)}|+\langle\log\rho^{(t)}\rangle_{ij}=0$; \item $(\tilde{\mathbf{J}}_{ij}-\tilde{\mathbf{J}}_{i+1,j+1})\small{\begin{pmatrix} +1 \\ -1\end{pmatrix}} =(\tilde{\mathbf{J}}_{i,j+1}-\tilde{\mathbf{J}}_{i+1,j})\small{\begin{pmatrix} 1 \\ 1\end{pmatrix}}$. \end{itemize} } \subsection{Interpolation and Extrapolation} \label{sec:intp} In this subsection, we propose an interpolation or extrapolation scheme that can be used for modeling the dynamics of the inverse Jacobian. We assume that there is a set of inverse Jacobian data at different times $t = t_1,\ldots, t_N$ on all discrete pixels of the 2D space from the inverse Eulerian view, \[\tilde{\mathbf{J}}_{ij}^{(t_1)},\tilde{\mathbf{J}}_{ij}^{(t_2)},\ldots,\tilde{\mathbf{J}}_{ij}^{(t_N)},\] where $\tilde{\mathbf{J}}_{ij}^{(t_n)}$ is the 2-by-2 inverse Jacobian matrix for the $(i,j)$th pixel at time $t_n$ and already satisfies Eq. (\ref{nocurlJ2D}). We can generally assume a continuous function or model with parameters $\omega$, \begin{equation} \tilde{\mathbf{J}}_{ij}(t)=\mathbf{F}_{ij}\left(t;\omega\right), \end{equation} where $\mathbf{F}_{ij}\left(t_n;\omega\right)=\tilde{\mathbf{J}}_{ij}^{(t_n)}$ for all $n$. The method we adopt to make the interpolant satisfy Eq. (\ref{nocurlJ2D}) is the Lagrange interpolating polynomial~\cite{Abramowitz1965}. Written explicitly, \begin{eqnarray} \tilde{\mathbf{J}}_{ij}(t)=\frac{(t-t_2)(t-t_3)\cdots(t-t_N)}{(t_1-t_2)(t_1-t_3)\cdots(t_1-t_N)}\tilde{\mathbf{J}}_{ij}^{(t_1)}+\nonumber\\ \cdots+\frac{(t-t_1)(t-t_2)\cdots(t-t_{N-1})}{(t_N-t_1)(t_N-t_2)\cdots(t_N-t_{N-1})}\tilde{\mathbf{J}}_{ij}^{(t_N)}.
\label{LagIntp} \end{eqnarray} We used this formula to create the examples shown in Figs. \ref{fig:soccerball}, \ref{fig:cheetah}, and \ref{fig:digit}. An advantage of the Lagrange interpolating polynomial is that the weights do not depend on the data. An improved version of the Lagrange formula~\cite{Berrut2004} is also applicable to the problem. \section{Conclusion} We proposed a framework for self-exploratory machine learning of transport phenomena. It differs from many state-of-the-art physics-learned simulators and physics-informed deep learning models in that it relies neither on human knowledge (governing equations) nor on external simulation results, obtained by solving those equations, as training data. However, our framework does not dismiss the effectiveness of deep learning models; it can be combined with them and thereby contribute to the development of machine learning models for exploring the physical world. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} The properties of gauge invariance and gauge-parameter independence, which are inherent in all kinds of gauge theories, have recently gained renewed interest. The question of gauge dependence arises automatically whenever physical observables, i.e.\ S-matrix elements, are not strictly calculated order by order in perturbation theory. However, mixing different orders of the perturbative expansion is sometimes unavoidable. For example, the introduction of finite-width effects for unstable particles or of running couplings can only be achieved by a resummation of certain subsets of Feynman diagrams. Moreover, single off-shell vertex functions have been parametrized by so-called form factors in the literature. The physical significance of such objects is always questionable. Every definition of quantities from incomplete parts of S-matrix elements (in a fixed order of perturbation theory) is necessarily based on conventions but not on physical grounds. At this point a few remarks on the difference between gauge invariance and gauge-parameter independence are in order. Strictly speaking, only objects that are singlets with respect to gauge transformations can be called gauge-invariant. However, the gauge invariance of the underlying Lagrangian has to be broken in order to quantize the fields in perturbation theory. To this end, a gauge-fixing term depending on one or more free (gauge-)parameters is added to the Lagrangian; these parameters drop out in complete S-matrix elements. If a quantity depends on the gauge parameters, it depends on the gauge-fixing procedure. On the other hand, it can be shown that Green functions of gauge-invariant operators are independent of the method of gauge fixing and thus gauge-parameter-independent. However, the converse does not hold in general: gauge-parameter independence does {\it not} necessarily indicate gauge invariance.
The fact that the gauge-parameter dependence of individual vertex functions (self-energies, vertex corrections, etc.) is compensated within complete S-matrix elements motivated several authors to rearrange the gauge-dependent parts between different vertex functions, resulting in definitions of separately gauge-parameter-independent building blocks. Since such splittings of vertex functions are not uniquely determined, different proposals were made in the literature. For example, in the context of four-fermion processes different running couplings \cite{Ke89,Ku91} have been defined. A more general procedure for eliminating the gauge-parameter-dependent parts of vertex functions is given by the so-called pinch technique (PT) \cite{PT1,Si92,PT2,PT3}. All these approaches share the common aim of defining gauge-parameter-independent ``vertex functions'' with improved theoretical properties. In this context it should be noted that these procedures are not free of problems. The methods of \citeres{Ke89,Ku91} have no natural generalization beyond four-fermion processes. On the other hand, the application of the PT algorithm is not always clear, and the universality (process independence) of the PT ``vertex functions'' has not been rigorously proven but only verified in examples \cite{PT2,PT3}. Therefore, we pursue a completely different approach and study directly the consequences of the underlying gauge invariance for vertex functions. The background-field method (BFM) \cite{BFM,Ab81,Ab83} represents a well-suited framework for such investigations. In the BFM the effective action, which generates the vertex functions, is manifestly gauge-invariant, and this invariance implies simple Ward identities for the vertex functions. For the calculation of S-matrix elements, tree-like structures are formed with these vertex functions, where the gauge fixing of the genuine tree part can be chosen arbitrarily.
In \citeres{BFMvPT,bgflong} the BFM was applied to the electroweak Standard Model (SM), and the consequences of the Ward identities were discussed. The renormalization% \footnote{The renormalization of the electroweak SM without fermions was also discussed in \citere{Li}.} was carried out in a gauge-invariant way, which led to considerable simplifications. Moreover, it was shown that the Ward identities imply several improved properties for vertex functions (compared to the conventional formalism) concerning ultraviolet, infrared or high-energy behavior. Furthermore, actual loop calculations of S-matrix elements in general become simpler using the BFM. This is mainly due to the freedom of choosing the gauge for the tree parts independently from the one in the loops. The BFM brings together two important features: the gauge invariance of the effective action and the clear distinction between classical and quantum parts of fields. This fact renders the BFM well-suited for integrating out heavy fields at one-loop level directly in the path integral. Firstly, the tree-level and one-loop effects can be isolated in the path integral very easily. Secondly, choosing a definite gauge (e.g.\ the unitary gauge) for the background fields drastically simplifies intermediate steps in the $1/M$-expansion for the heavy field of mass $M$. Thus, a one-loop effective Lagrangian can be directly derived and by inverting the transformation to the definite background gauge after the $1/M$-expansion one recovers a manifestly gauge-invariant result. Such a procedure was worked out in \citere{HHint} and applied to an $\mathrm{SU}(2)_{{\mathrm{W}}}$ gauge theory and the SM. In \citere{BFMvPT} it was realized that the building blocks obtained within the PT in QCD% \footnote{In QCD this fact was also pointed out in \citere{Ha94}.} and the SM coincide with the corresponding BFM vertex functions in 't~Hooft--Feynman gauge. 
This observation and further investigations in \citere{BFMvPT} clarified the origin of certain desirable properties \cite{Si92} noticed for the PT ``vertex functions''. In particular, in the BFM the QED-like Ward identities are derived from gauge invariance and imply all the other properties, as shown in \citere{BFMvPT}. Since this is true in the BFM for arbitrary gauges of the quantum fields, the above-mentioned desirable properties are related to the gauge invariance of the effective action rather than to the absence of gauge-parameter dependence. In this article we first review the basic results of the application of the BFM to the SM. They have been worked out in \citere{bgflong} for the usual linear realization of the scalar Higgs doublet. The generalization of the BFM to the non-linear realization of the scalar sector was described in \citere{HHint}. We further explore the connection between gauge-parameter-independent formulations and the BFM. Finally, we focus on the SM contributions to the $S$, $T$, and $U$ parameters, which were originally introduced \cite{STU} in order to quantify new-physics effects beyond the SM entering via vacuum polarization only. By comparing the BFM results for the $S$, $T$, and $U$ parameters with the ones obtained within the PT \cite{DKS}, we discuss the relevance of the latter. {\samepage The article is organized as follows: in \refse{se:bfm} we review the application of the background field method to the electroweak SM. The renormalization of the SM in the BFM is discussed in \refse{se:ren}. In \refse{se:nonlin} we summarize the virtues of the BFM in the formulation of the SM with a non-linear realization of the Higgs sector. In \refse{se:GIVvGIP} we elaborate on the connection between gauge-parameter independence of vertex functions and symmetry relations. As a further illustration we treat the $S$, $T$, and $U$ parameters in the BFM in \refse{se:STU}.
} \pagebreak \section{The Background-Field Method for the Electroweak Standard Model} \label{se:bfm} \vspace{-1em} \subsection{The Construction of the Gauge-Invariant Effective Action} The background-field method~\cite{BFM,Ab81} (BFM) is a technique for quantizing gauge theories without losing explicit gauge invariance of the effective action. This is done by replacing the fields in the classical Lagrangian $\L_{\mathrm{C}}$ by the sum of background fields $\hat\varphi$ and quantum fields $\varphi$, \begin{equation} \label{eq:quantLc} {\cal L}_{\mathrm{C}}(\hat\varphi) \rightarrow {\cal L}_{\mathrm{C}}(\hat\varphi + \varphi). \end{equation} While the background fields are treated as external sources, only the quantum fields are variables of integration in the functional integral. A gauge-fixing term is added which breaks only the invariance with respect to quantum gauge transformations but retains the invariance of the functional integral with respect to background-field gauge transformations. {}From the functional integral an effective action $\Gamma[\hat\varphi]$ for the background fields is derived, which is invariant under gauge transformations of the background fields and thus gauge-invariant. The S-matrix is constructed by forming trees with vertex functions from $\Gamma[\hat\varphi]$ joined by background-field propagators. These propagators are defined by adding a gauge-fixing term to $\Gamma[\hat\varphi]$. This gauge-fixing term is only relevant for the construction of connected Green functions and S-matrix elements. It is not related to the term used to fix the gauge inside loop diagrams, i.e.\ in the functional integral, and the associated gauge parameters $\xi_B^i$ enter only tree-level quantities but not the higher-order contributions to the vertex functions. In particular, in linear background gauges only the tree-level propagators are affected by the background gauge fixing.
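The background/quantum split of \refeq{eq:quantLc} can be illustrated symbolically. The sketch below uses a toy scalar $\varphi^4$ interaction (our own example, not the SM Lagrangian): it expands $\mathcal{L}(\hat\varphi+\varphi)$ in powers of the quantum field and isolates the part quadratic in $\varphi$, which is the piece that enters one-loop diagrams, with the background field appearing in its coefficient.

```python
import sympy as sp

# Toy sketch (scalar phi^4 interaction, NOT the SM Lagrangian): split the
# field into background phib and quantum phiq, expand, and collect powers
# of phiq. The term quadratic in phiq drives the one-loop contribution.
phib, phiq, lam = sp.symbols('phib phiq lam')
L_int = -lam / sp.Integer(24) * (phib + phiq)**4     # L_C(phib + phiq)

poly = sp.Poly(sp.expand(L_int), phiq)
c2 = poly.coeff_monomial(phiq**2)                    # coefficient of phiq^2

# Quadratic-in-quantum-field vertex: -(lam/4) phib^2 phiq^2
assert sp.simplify(c2 + lam * phib**2 / 4) == 0
```

The expansion makes explicit that vertices with exactly two quantum fields carry arbitrary powers of the background field, in line with the structure of the BFM gauge-fixing and ghost terms discussed below.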
The equivalence of the S-matrix in the BFM\ to the conventional one has been proven in \citeres{Ab83,Re85,Be94}. For our discussion of the SM we use the conventions of \citeres{bgflong,Dehab}. The complex scalar SU$(2)_{{\mathrm{W}}}$ doublet field of the minimal Higgs sector is written as the sum of a background Higgs field $\hat\Phi$ having the usual non-vanishing vacuum expectation value $v$, and a quantum Higgs field $\Phi$ whose vacuum expectation value is zero: \begin{equation} \hat\Phi(x) = \left( \begin{array}{c} \hat \phi ^{+}(x) \\ \frac{1}{\sqrt{2}}\bigl(v + {\hat H}(x) +i \hat \chi (x) \bigr) \end{array} \right) , \qquad \Phi (x) = \left( \begin{array}{c} \phi ^{+}(x) \\ \frac{1}{\sqrt{2}}\bigl(H(x) +i \chi (x) \bigr) \end{array} \right) . \label{eq:Hbq} \end{equation} Here ${\hat H}$ and $H$ denote the physical background and quantum Higgs field, respectively, and $\hat \phi ^{+}, \hat \chi, \phi ^{+}, \chi$ are the unphysical Goldstone fields. The generalization of the 't~Hooft gauge fixing to the BFM~\cite{Sh81} reads \begin{eqnarray}\label{tHgf} \L_{\mathrm{GF}} &=& {}- \frac{1}{2\xi_Q^W} \biggl[(\delta^{ac}\partial_\mu + g_2 \varepsilon^{abc}\hat W^b_\mu)W^{c,\mu} -ig_2\xi_Q^W\frac{1}{2}(\hat\Phi^\dagger_i \sigma^a_{ij}\Phi_j - \Phi^\dagger_i \sigma^a_{ij}\hat\Phi_j)\biggr]^2 \nonumber\\ && {}- \frac{1}{2\xi_Q^B} \biggl[\partial_\mu B^{\mu} +ig_1\xi_Q^B\frac{1}{2}(\hat\Phi^\dagger_i \Phi_i - \Phi^\dagger_i \hat\Phi_i)\biggr]^2, \end{eqnarray} where $\hat W_{\mu }^{a}$, $a$=1,2,3, represents the triplet of gauge fields associated with the weak isospin group SU$(2)_{{\mathrm{W}}}$, and $\hat B_{\mu }$ the gauge field associated with the group U$(1)_{{\mathrm{Y}}}$ of weak hypercharge $Y_{{\mathrm{W}}}$. The Pauli matrices are denoted by $\sigma^a$, $a=1,2,3$, and $\xi_Q^W$, $\xi_Q^B$ are parameters associated with the gauge fixing of the quantum fields, one for SU$(2)_{{\mathrm{W}}}$ and one for U$(1)_{{\mathrm{Y}}}$. 
In order to avoid tree-level mixing between the quantum $\mathswitch A$ and $\mathswitch Z$ fields, we set $\xi_Q=\xi_Q^W=\xi_Q^B$ in the following. Background-field gauge invariance implies that the background gauge fields appear only within a covariant derivative in the gauge-fixing term and that the terms in brackets transform according to the adjoint representation of the gauge group. The gauge-fixing term (\refeq{tHgf}) translates to the conventional one upon replacing the background Higgs field by its vacuum expectation value and omitting the background SU$(2)_{{\mathrm{W}}}$ triplet field $\hat W^a_\mu$. The vertex functions can be calculated directly from Feynman rules that distinguish between quantum and background fields. Whereas the quantum fields appear only inside loops, the background fields are associated with external lines. Apart from the doubling of the gauge and Higgs fields, the BFM Feynman rules differ from the conventional ones only owing to the gauge-fixing and ghost terms. Because these terms are quadratic in the quantum fields, they affect only vertices that involve exactly two quantum fields and additional background fields. Since the gauge-fixing term is non-linear in the fields, the gauge parameter also enters the gauge-boson vertices. The fermion fields are treated as usual: they have the conventional Feynman rules, and no distinction needs to be made between external and internal fields. A complete set of BFM Feynman rules for the electroweak SM has been given in~\citere{bgflong}. Despite the distinction between background and quantum fields, calculations in the BFM become in general simpler than in the conventional formalism. This is in particular the case in the 't~Hooft--Feynman gauge ($\xi_Q=1$) for the quantum fields, where many vertices simplify considerably. Moreover, the gauge fixing of the background fields is totally unrelated to the gauge fixing of the quantum fields~\cite{Re85}.
This freedom can be used to choose a particularly suitable background gauge, e.g.~the unitary gauge. In this way the number of Feynman diagrams can considerably be reduced. \subsection{Ward Identities} \label{sec:WI} As can be directly read off from Eqs.~(21), (22) of \citere{bgflong}, the invariance of the effective action under the background gauge transformations yields \newcommand{\dgd}[1]{\frac{\delta\Gamma}{\delta#1}} \begin{eqnarray} 0 = \frac{\delta\Gamma}{\delta\hat\theta^\mathswitch A} &=& -\partial_\mu\dgd{\hat\mathswitch A_\mu} - ie \biggl( \hat\mathswitch W^+_\mu\dgd{\hat\mathswitch W^+_\mu} - \hat\mathswitch W^-_\mu\dgd{\hat\mathswitch W^-_\mu} \biggr) - ie \biggl( \hat\phi^+\dgd{\hat\phi^+} - \hat\phi^-\dgd{\hat\phi^-} \biggr) \nonumber\\ && {} + ie \sum_f Q_f\biggl(\bar f \dgd{\bar f} + \dgd{f} f \biggr),\\ 0 = \frac{\delta\Gamma}{\delta\hat\theta^\mathswitch Z} &=& -\partial_\mu\dgd{\hat\mathswitch Z_\mu} + ie\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}} \biggl(\hat\mathswitch W^+_\mu\dgd{\hat\mathswitch W^+_\mu} -\hat\mathswitch W^-_\mu\dgd{\hat\mathswitch W^-_\mu}\biggr) \nonumber\\ && {} + ie\frac{\mathswitch {c_{\rw}}^2-\mathswitch {s_{\rw}}^2}{2\mathswitch {c_{\rw}}\mathswitch {s_{\rw}}} \biggl(\hat\phi^+\dgd{\hat\phi^+} - \hat\phi^-\dgd{\hat\phi^-} \biggr) - e\frac{1}{2\mathswitch {c_{\rw}}\mathswitch {s_{\rw}}} \biggl((v+\hat\mathswitch H)\dgd{\hat\chi} - \hat\chi\dgd{\hat\mathswitch H} \biggr) \nonumber\\ && {} - ie \sum_f \biggl(\bar f (v_f+a_f\gamma_5) \dgd{\bar f} + \dgd{f} (v_f-a_f\gamma_5) f \biggr) ,\\ 0 = \frac{\delta\Gamma}{\delta\hat\theta^\pm} &=& -\partial_\mu\dgd{\hat\mathswitch W^\pm_\mu} \mp ie \hat\mathswitch W^\mp_\mu \biggl(\dgd{\hat\mathswitch A_\mu} -\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}}\dgd{\hat\mathswitch Z_\mu}\biggr) \pm ie (\hat\mathswitch A_\mu - \frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}}\hat\mathswitch Z_\mu)\dgd{\hat\mathswitch W^\pm_\mu} \nonumber\\ && {} \mp ie \frac{1}{2\mathswitch 
{s_{\rw}}} \hat\phi^\mp \biggl(\dgd{\hat\mathswitch H} \pm i\dgd{\hat\chi}\biggr) \pm ie \frac{1}{2\mathswitch {s_{\rw}}} (v + \hat\mathswitch H \pm i \hat\chi) \dgd{\hat\phi^\pm} \nonumber\\ &&{} - ie\frac{1}{\sqrt{2}\mathswitch {s_{\rw}}} \sum_{(f_+,f_-)} \biggl( \bar f_\pm \frac{1 +\gamma_5}{2} \dgd{\bar f_\mp} + \dgd{f_\pm} \frac{1-\gamma_5}{2} f_\mp \biggr), \label{eq:inv} \end{eqnarray} where $\mathswitch {v_\Pf} = (I^3_{{\mathrm{W}},f} - 2\mathswitch {s_{\rw}}^2 Q_f)/(2\mathswitch {s_{\rw}}\mathswitch {c_{\rw}})$ and $\mathswitch {a_\Pf} = I^3_{{\mathrm{W}},f}/(2\mathswitch {s_{\rw}}\mathswitch {c_{\rw}})$. In \refeq{eq:inv} $f_\pm$ denote the fermions with isospin $\pm1/2$, and the sum in the last line runs over all isospin doublets. The electric unit charge is denoted by $e$ as usual, and the Weinberg angle $\theta_\mathswitchr W$ is fixed by the mass ratio, \begin{equation} \mathswitch {c_{\rw}} =\cos\theta_{{\mathrm{W}}}= \frac{\mathswitch {M_\PW}}{\mathswitch {M_\PZ}}, \qquad \mathswitch {s_{\rw}} =\sin\theta_{{\mathrm{W}}} = \sqrt{1-\mathswitch {c_{\rw}}^2}. \end{equation} By differentiating \refeq{eq:inv} with respect to background fields and setting the fields equal to zero, one obtains Ward identities for the vertex function that are precisely the Ward identities related to the classical Lagrangian. This is in contrast to the conventional formalism where, owing to the gauge-fixing procedure, explicit gauge invariance is lost, and Ward identities are obtained from invariance under BRS transformations. These Slavnov--Taylor identities have a more complicated structure and in general involve ghost contributions. The BFM Ward identities are valid in all orders of perturbation theory and hold for arbitrary values of the quantum gauge parameter $\xi_Q$. They relate one-particle irreducible Green functions. In particular, the two-point functions do not contain tadpole contributions. These appear explicitly in the Ward identities. 
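As a minimal symbolic illustration of such a QED-like identity, the sketch below checks the transversality $k^\mu \Gamma^{\hat A\hat A}_{\mu\nu}(k) = 0$ for the transverse ansatz $\Gamma^{\hat A\hat A}_{\mu\nu}(k) = -(k^2 g_{\mu\nu} - k_\mu k_\nu)$; the ansatz, its overall sign, and the metric signature are our assumptions for the illustration, not the full SM vertex function.

```python
import sympy as sp

# Symbolic sketch (tree-level transverse ansatz, our conventions):
# k^mu Gamma_{mu nu}(k) = 0 for Gamma_{mu nu} = -(k^2 g_{mu nu} - k_mu k_nu).
k0, k1, k2, k3 = sp.symbols('k0 k1 k2 k3')
k_up = sp.Matrix([k0, k1, k2, k3])          # contravariant components k^mu
g = sp.diag(1, -1, -1, -1)                  # Minkowski metric, signature (+,-,-,-)
k_dn = g * k_up                             # covariant components k_mu
ksq = (k_up.T * k_dn)[0, 0]                 # k^2 = k_mu k^mu

Gamma = -(ksq * g - k_dn * k_dn.T)          # both Lorentz indices downstairs

contraction = (k_up.T * Gamma).expand()     # row vector k^mu Gamma_{mu nu}
assert contraction == sp.zeros(1, 4)        # vanishes identically
```

The contraction cancels exactly term by term, mirroring the way the divergence of the two-point function is fixed by the background gauge invariance rather than by any choice of $\xi_Q$.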
For illustration and later use, we list some of the Ward identities. Concerning the notation and conventions for the vertex functions we follow \citere{bgflong} throughout. The two-point functions fulfill the following Ward identities: \begin{eqnarray} \label{eq:sega} k^\mu \Gamma^{\hat A\Ahat}_{\mu\nu}(k) = 0, \parbox{3cm}{\hfill $k^\mu \Gamma^{\hat A\hat Z}_{\mu\nu}(k)$} &=& 0, \\ \label{eq:segachi} k^\mu \Gamma^{\hat A\hat H}_{\mu}(k) = 0, \parbox{3cm}{\hfill $k^\mu \Gamma^{\hat A\hat \chi}_{\mu}(k) $} &=& 0, \\ \label{eq:seZ1} k^\mu \Gamma^{\hat Z\Zhat}_{\mu\nu}(k) -i\mathswitch {M_\PZ} \Gamma^{\hat \chi\hat Z}_\nu(k) &=& 0, \\ \label{eq:seZ2} k^\mu \Gamma^{\hat Z\hat \chi}_{\mu}(k) -i\mathswitch {M_\PZ} \Gamma^{\hat \chi\chihat}(k) +\frac{ie}{2\mathswitch {s_{\rw}}\mathswitch {c_{\rw}}} \Gamma^{\hat H}(0) &=& 0 , \\ \label{eq:seW1} k^\mu \Gamma^{\hat W^\pm\hat W^\mp}_{\mu\nu}(k) \mp\mathswitch {M_\PW} \Gamma^{\hat \phi^\pm\hat W^\mp}_\nu(k) &=& 0, \\ \label{eq:seW2} k^\mu \Gamma^{\hat W^\pm\hat \phi^\mp}_{\mu}(k) \mp\mathswitch {M_\PW} \Gamma^{\hat \phi^\pm\hat \phi^\mp}(k) \pm\frac{e}{2\mathswitch {s_{\rw}}} \Gamma^{\hat H}(0) &=& 0. 
\end{eqnarray} The Ward identities for the gauge-boson--fermion vertices read \begin{eqnarray} \label{WIAff} \!\!&& k^\mu \Gamma^{\hat A\mathswitch{\bar f}\mathswitch f}_{\mu}(k,\bar p, p) = -e\mathswitch {Q_\Pf} [\Gamma^{\mathswitch{\bar f}\mathswitch f}(\bar p) - \Gamma^{\mathswitch{\bar f}\mathswitch f}(-p)], \\ \label{WIZff} \!\!&& k^\mu \Gamma^{\hat Z\mathswitch{\bar f}\mathswitch f}_{\mu}(k,\bar p,p) - i \mathswitch {M_\PZ} \Gamma^{\hat\chi\mathswitch{\bar f}\mathswitch f}(k,\bar p,p) = e [\Gamma^{\mathswitch{\bar f}\mathswitch f}(\bar p)(\mathswitch {v_\Pf}-\mathswitch {a_\Pf}\gamma_5) - (\mathswitch {v_\Pf}+\mathswitch {a_\Pf}\gamma_5) \Gamma^{\mathswitch{\bar f}\mathswitch f}(-p)], \\ \label{WIWff} \!\!&& k^\mu \Gamma^{\hat W^\pm\mathswitch{\bar f}_\pm\mathswitch f_\mp}_{\mu}(k,\bar p,p) \mp \mathswitch {M_\PW} \Gamma^{\hat \phi^\pm\mathswitch{\bar f}_\pm\mathswitch f_\mp}(k,\bar p,p) = \frac{e}{\sqrt{2}\mathswitch {s_{\rw}}} [\Gamma^{\mathswitch{\bar f}_\pm\mathswitch f_\pm}(\bar p)\omega_- - \omega_+ \Gamma^{\mathswitch{\bar f}_\mp\mathswitch f_\mp}(-p)] . 
\hspace{2em} \end{eqnarray} The triple-gauge-boson vertices obey \begin{eqnarray} \label{WIAWW} k^\mu \Gamma^{\hat A\hat W^+\hat W^-}_{\mu\rho\sigma}(k,k_+,k_-) &=& e [\Gamma^{\hat W^+\hat W^-}_{\rho\sigma}(k_+) - \Gamma^{\hat W^+\hat W^-}_{\rho\sigma }(-k_-)], \\ k_+^\rho \Gamma^{\hat\mathswitch A\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\rho\sigma}(k,k_+,k_-) &-& \mathswitch {M_\PW} \Gamma^{\hat\mathswitch A\hat\phi^+\hat\mathswitch W^-}_{\mu\sigma}(k,k_+,k_-) = \nonumber\\ && \quad +e\left[\Gamma^{\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\sigma}(-k_-) - \Gamma^{\hat\mathswitch A\hat\mathswitch A}_{\mu\sigma}(k) +\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}} \Gamma^{\hat\mathswitch A\hat\mathswitch Z}_{\mu\sigma}(k)\right] , \label{WIWAW}\\ k_-^\sigma \Gamma^{\hat\mathswitch A\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\rho\sigma}(k,k_+,k_-) &+& \mathswitch {M_\PW} \Gamma^{\hat\mathswitch A\hat\mathswitch W^+\hat\phi^-}_{\mu\rho}(k,k_+,k_-) = \nonumber\\ && \quad -e\left[\Gamma^{\hat\mathswitch W^-\hat\mathswitch W^+}_{\mu\rho}(-k_+) - \Gamma^{\hat\mathswitch A\hat\mathswitch A}_{\mu\rho}(k) +\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}} \Gamma^{\hat\mathswitch A\hat\mathswitch Z}_{\mu\rho}(k)\right] , \label{WIWWA} \\ \label{WIZWW} k^\mu \Gamma^{\hat Z\hat W^+\hat W^-}_{\mu\rho\sigma}(k,k_+,k_-) &-& i\mathswitch {M_\PZ} \Gamma^{\hat \chi\hat W^+\hat W^-}_{\rho\sigma}(k,k_+,k_-) = \nonumber\\ && \quad -e\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}} [\Gamma^{\hat W^+\hat W^-}_{\rho\sigma}(k_+) - \Gamma^{\hat W^+\hat W^-}_{\rho\sigma }(-k_-)], \\ k_+^\rho \Gamma^{\hat\mathswitch Z\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\rho\sigma}(k,k_+,k_-) &-& \mathswitch {M_\PW} \Gamma^{\hat\mathswitch Z\hat\phi^+\hat\mathswitch W^-}_{\mu\sigma}(k,k_+,k_-) = \nonumber\\ && \quad -e\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}}\left[\Gamma^{\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\sigma}(-k_-) - \Gamma^{\hat\mathswitch Z\hat\mathswitch 
Z}_{\mu\sigma}(k) +\frac{\mathswitch {s_{\rw}}}{\mathswitch {c_{\rw}}} \Gamma^{\hat\mathswitch Z\hat\mathswitch A}_{\mu\sigma}(k)\right] , \label{WIWZW}\\ k_-^\sigma \Gamma^{\hat\mathswitch Z\hat\mathswitch W^+\hat\mathswitch W^-}_{\mu\rho\sigma}(k,k_+,k_-) &+& \mathswitch {M_\PW} \Gamma^{\hat\mathswitch Z\hat\mathswitch W^+\hat\phi^-}_{\mu\rho}(k,k_+,k_-) = \nonumber\\ && \quad +e\frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}}\left[\Gamma^{\hat\mathswitch W^-\hat\mathswitch W^+}_{\mu\rho}(-k_+) - \Gamma^{\hat\mathswitch Z\hat\mathswitch Z}_{\mu\rho}(k) +\frac{\mathswitch {s_{\rw}}}{\mathswitch {c_{\rw}}} \Gamma^{\hat\mathswitch Z\hat\mathswitch A}_{\mu\rho}(k)\right] . \hspace{2em} \label{WIWWZ} \end{eqnarray} Note that the Ward identities involving only fermions and photons are exactly those of QED. \section{Renormalization of the Standard Model} \label{se:ren} \vspace{-1em} \subsection{Impact of Gauge Invariance on Renormalization} The BFM gauge invariance has important consequences for the structure of the renormalization constants necessary to render Green functions and S-matrix elements finite. The arguments which we give in the following are made explicit for the one-loop level.% \footnote{We implicitly assume the existence of an invariant regularization scheme.} It is easy, however, to extend them by induction to arbitrary orders in perturbation theory. Because the renormalization of the fermionic sector is similar to the one in the conventional formalism, we leave it out% \footnote{It is included in \citere{bgflong}.}. 
We introduce the following renormalization constants for the parameters: \begin{eqnarray} e_0 &=& Z_e e = (1 + \delta Z_e) e , \nonumber\\ {\mathswitch {M_\PW}^2}_{,0} &=& \mathswitch {M_\PW}^2 + \delta \mathswitch {M_\PW}^2 , \qquad {\mathswitch {M_\PZ}^2}_{,0} = \mathswitch {M_\PZ}^2 + \delta \mathswitch {M_\PZ}^2 , \qquad {\mathswitch {M_\PH}^2}_{,0} = \mathswitch {M_\PH}^2 + \delta \mathswitch {M_\PH}^2 , \nonumber\\ t_0 &=& t + \delta t . \label{eq:renconsts1} \end{eqnarray} The tadpole counterterm $\delta t$ renormalizes the term in the Lagrangian linear in the Higgs field $\hat H$, which we denote by $t \hat H(x)$ with $t = v (\mu^2 - \lambda v^2/4)$. It corrects for the shift in the minimum of the Higgs potential due to radiative corrections. Choosing $v$ as the correct vacuum expectation value of the Higgs field $\hat \Phi$ is equivalent to the vanishing of $t$. In principle, the renormalization constant $\delta t$ is not necessary, and one could work with arbitrary or even without tadpole renormalization. In these cases, however, one would have to take into account explicit tadpole contributions. 
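The statement that $t=0$ selects the correct vacuum expectation value can be checked symbolically. The sketch below assumes the sign conventions $V(v) = -\mu^2 v^2/2 + \lambda v^4/16$ for the potential along the neutral Higgs direction (our assumption, chosen to be consistent with $t = v(\mu^2 - \lambda v^2/4)$):

```python
import sympy as sp

# Sketch with assumed sign conventions: along the neutral Higgs direction
# take V(v) = -mu^2 v^2 / 2 + lam v^4 / 16, so that the tadpole is
# t = v (mu^2 - lam v^2 / 4).
v, mu2, lam = sp.symbols('v mu2 lam', positive=True)
V = -mu2 * v**2 / 2 + lam * v**4 / 16
t = v * (mu2 - lam * v**2 / 4)

# dV/dv = -t: the tadpole vanishes exactly at stationary points of V
assert sp.simplify(sp.diff(V, v) + t) == 0

# In particular t = 0 at the non-trivial minimum v^2 = 4 mu^2 / lam
v_min = 2 * sp.sqrt(mu2 / lam)
assert sp.simplify(t.subs(v, v_min)) == 0
```

This makes explicit that choosing $v$ as the true minimum of the (effective) potential is the same condition as the vanishing of the renormalized tadpole.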
Following the QCD treatment of~\citere{Ab81}, we introduce field renormalization only for the background fields \begin{eqnarray} \hat W_{0}^{\pm} & = & Z_{\hat W}^{1/2} \hat W^{\pm} = (1+\frac{1}{2}\delta Z_{\hat W}) \hat W^{\pm} , \nonumber\\ \left(\barr{l} \hat Z_{0} \\ \hat A_{0} \end{array} \right) & = & \left(\barr{ll} Z_{\hat Z\Zhat}^{1/2} & Z_{\hat Z\hat A}^{1/2} \\[1ex] Z_{\hat A\hat Z}^{1/2} & Z_{\hat A\Ahat}^{1/2} \end{array} \right) \left(\barr{l} \hat Z \\ \hat A \end{array} \right) = \left(\barr{cc} 1 + \frac{1}{2}\delta Z_{\hat Z\Zhat} & \frac{1}{2}\delta Z_{\hat Z\hat A} \\ [1ex] \frac{1}{2}\delta Z_{\hat A\hat Z} & 1 + \frac{1}{2}\delta Z_{\hat A\Ahat} \end{array} \right) \left(\barr{l} \hat Z \\[1ex] \hat A \end{array} \right) , \nonumber\\ \hat H_{0} & = & Z_{\hat H}^{1/2} \hat H = (1+\frac{1}{2}\delta Z_{\hat H}) \hat H, \nonumber\\ \hat \chi_{0} & = & Z_{\hat \chi}^{1/2} \hat \chi = (1+\frac{1}{2}\delta Z_{\hat \chi}) \hat \chi , \nonumber\\ \hat \phi_{0}^{\pm} & = & Z_{\hat \phi}^{1/2} \hat \phi^{\pm} = (1+\frac{1}{2}\delta Z_{\hat \phi}) \hat \phi^{\pm} . \label{eq:renconsts2} \end{eqnarray} In order to preserve the background-field gauge invariance, the renormalized effective action has to be invariant under background-field gauge transformations. This restricts the possible counterterms and relates the renormalization constants introduced above. These relations can be derived from the requirement that the renormalized vertex functions fulfill Ward identities of the same form as the unrenormalized ones. As a consequence, also the counterterms have to fulfill these Ward identities. 
An analysis of the Ward identities yields~\cite{bgflong}: \begin{eqnarray} \label{eq:delZB} \delta Z_{\hat A\Ahat} &=& - 2 \delta Z_e, \qquad \delta Z_{\hat Z\hat A} = 0, \qquad \delta Z_{\hat A\hat Z} = 2 \frac{\mathswitch {c_{\rw}}}{\mathswitch {s_{\rw}}} \frac{\delta \mathswitch {c_{\rw}} ^2}{\mathswitch {c_{\rw}} ^2} , \nonumber\\ \delta Z_{\hat Z\Zhat} &=& - 2 \delta Z_e - \frac{\mathswitch {c_{\rw}} ^2 - \mathswitch {s_{\rw}} ^2}{\mathswitch {s_{\rw}}^2} \frac{\delta \mathswitch {c_{\rw}} ^2}{\mathswitch {c_{\rw}} ^2} , \qquad \delta Z_{\hat W} = - 2 \delta Z_e - \frac{\mathswitch {c_{\rw}} ^2}{\mathswitch {s_{\rw}}^2} \frac{\delta \mathswitch {c_{\rw}} ^2}{\mathswitch {c_{\rw}} ^2} , \nonumber\\ \delta Z_{\hat H} &=& \delta Z_{\hat \chi} = \delta Z_{\hat \phi} = - 2 \delta Z_e - \frac{\mathswitch {c_{\rw}} ^2}{\mathswitch {s_{\rw}}^2} \frac{\delta \mathswitch {c_{\rw}} ^2}{\mathswitch {c_{\rw}} ^2} + \frac{\delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PW}^2} , \end{eqnarray} where \begin{equation} \frac{\delta \mathswitch {c_{\rw}} ^2}{\mathswitch {c_{\rw}} ^2} = \frac{\delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PW}^2} - \frac{\delta \mathswitch {M_\PZ}^2}{\mathswitch {M_\PZ}^2} . \end{equation} The relations \refeq{eq:delZB} express the field renormalization constants of all gauge bosons and scalars completely in terms of the renormalization constants of the electric charge and the particle masses. With this set of renormalization constants all background-field vertex functions become finite% \footnote{Beyond one-loop order one needs in addition a renormalization of the quantum gauge parameters~\cite{Ab81}. At one-loop level these counterterms do not enter the background-field vertex functions because $\xi_Q$ does not appear in pure background-field vertices. Clearly, the renormalization of gauge parameters is irrelevant for gauge-independent quantities such as S-matrix elements at any order.}. 
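For later reference we note that the relation for $\delta \mathswitch {c_{\rw}}^2$ is nothing but the one-loop expansion of the on-shell definition of the weak mixing angle, $\mathswitch {c_{\rw}}^2 = \mathswitch {M_\PW}^2/\mathswitch {M_\PZ}^2$, applied to the bare masses, \begin{equation} \mathswitch {c_{\rw}}^2 + \delta \mathswitch {c_{\rw}}^2 = \frac{\mathswitch {M_\PW}^2 + \delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PZ}^2 + \delta \mathswitch {M_\PZ}^2} = \mathswitch {c_{\rw}}^2 \left( 1 + \frac{\delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PW}^2} - \frac{\delta \mathswitch {M_\PZ}^2}{\mathswitch {M_\PZ}^2} \right) + \mbox{terms of higher order} . \end{equation} Hence no independent renormalization constant is needed for the weak mixing angle. 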
This finiteness is evident since the divergences of the vertex functions are subject to the same restriction as the counterterms. In \citere{bgflong} it has been verified explicitly at one-loop order that a renormalization based on the on-shell definition of all parameters can consistently be used in the BFM. It renders all vertex functions finite while respecting the full gauge symmetry of the BFM. As the field renormalization constants are fixed by \refeq{eq:delZB}, the propagators in general acquire residues that differ from unity but are finite. This is similar to the minimal on-shell scheme of the conventional formalism~\cite{BHS} and has to be corrected in the S-matrix elements by UV-finite wave-function renormalization constants. However, just as in QED, the on-shell definition of the electric charge together with gauge invariance automatically fixes the residue of the photon propagator to unity. As a consequence of the relations between the renormalization constants, the counterterm vertices of the background fields have a much simpler structure than the ones in the conventional formalism (see e.g.~\citere{Dehab}). In fact, all vertices originating from a separately gauge-invariant term in the Lagrangian acquire the same renormalization constants. The explicit form of the counterterm vertices at one-loop order has been given in~\citere{bgflong}. \subsection{Gauge-Parameter Independence of Counterterms and Running Couplings} If the renormalized parameters are identified with the physical electric charge and the physical particle masses, they are manifestly gauge-independent. Moreover, the original bare parameters in the Lagrangian are obviously gauge-independent, as they represent free parameters of the theory. The same is true for the bare charge and the bare weak mixing angle, as these are directly related to the free bare parameters. Consequently, the counterterms $\delta Z_e$ and $\delta\mathswitch {c_{\rw}}^2$ for the gauge couplings are gauge-independent. 
The relations \refeq{eq:delZB} therefore imply that the field renormalizations of all gauge-boson fields are gauge-independent. This is in contrast to the conventional formalism where the field renormalizations in the on-shell scheme are gauge-dependent. In contrast to $\delta Z_e$ and $\delta\mathswitch {c_{\rw}}^2$ the mass counterterms are not gauge-independent. The bare masses depend on the bare vacuum expectation value $v_0$ of the Higgs field, which is not a free parameter of the theory. Whereas the renormalized value $v=2\mathswitch {s_{\rw}}\mathswitch {M_\PW}/e$ is gauge-independent, the bare quantity $v_0$ and the corresponding counterterm $\delta v$ are not. As a consequence, the bare masses are gauge-dependent. Thus, the counterterms $\delta\mathswitch {M_\PW}^2$, $\delta\mathswitch {M_\PZ}^2$, $\delta\mathswitch {M_\PH}^2$, $\delta\mathswitch {m_\Pf}$ and $\delta t$ are also gauge-dependent. The physical masses, however, are determined by the pole positions of the propagators, i.e.~the zeros of $k^2 - M^2 - \delta M^2 + C \delta t/\mathswitch {M_\PH}^2 + \Sigma(k^2) + C T^{\hat H}/\mathswitch {M_\PH}^2$, where $C$ denotes the coupling of the fields to the Higgs field and $\Sigma(k^2)$ the relevant self-energy. The linear combination $\delta M^2 - C \delta t/\mathswitch {M_\PH}^2$ of the mass and tadpole counterterm, however, is independent of $\delta v$ and thus gauge-independent% \footnote{Note that the mass counterterms become gauge-independent if one chooses $\delta t=0$.}. 
Just as in QED, one can define running couplings in the BFM for the SM via na{\"\i}ve Dyson summation of self-energies\ as follows: \begin{eqnarray} \label{eq:runcoupl} e^2(q^2) &=& \frac{e_0^2}{1 + \Re \Pi^{\hat A\Ahat}_0(q^2)} = \frac{e^2}{1 + \Re \Pi^{\hat A\Ahat}(q^2)} , \nonumber\\ g_2^2(q^2) &=& \frac{g_{2,0}^2}{1 + \Re\Pi^{\hat W\What}_0(q^2)} = \frac{g_2^2}{1 + \Re\Pi^{\hat W\What}(q^2)} , \end{eqnarray} where \begin{equation} g_{2,0} = \frac{e_0}{s_{{\mathrm{W}},0}} \quad\mbox{and}\quad g_2 = \frac{e}{\mathswitch {s_{\rw}}}, \end{equation} and the subscript ``0'' denotes bare quantities. The quantities $\Pi^{\hat V\Vhat'}$ are related to the transverse parts of the gauge-boson self-energies\ as follows: \begin{equation} \Pi^{\hat V\Vhat'}(q^2) = \frac{\Sigma^{\hat V\Vhat'}_{\mathrm{T}}(q^2)-\Sigma^{\hat V\Vhat'}_{\mathrm{T}}(0)}{q^2}. \end{equation} The relations \refeq{eq:delZB} give rise to a number of nice properties of the running couplings. As indicated in \refeq{eq:runcoupl}, the renormalization constants cancel. Consequently, the running couplings are finite without renormalization and thus independent of the renormalization scheme (as long as it respects BFM gauge invariance). Their asymptotic behavior is gauge-independent and governed by the renormalization group. In particular, the coefficients of the leading logarithms in the self-energies\ are equal to the ones appearing in the $\beta$-functions associated with the running couplings. All these properties are completely analogous to those of the running coupling in QED; they follow in the same way from the relations \refeq{eq:delZB} as in QED from $Z_e = Z_{\hat A\Ahat}^{-1/2}$. As mentioned above, the asymptotic behavior of $e^2(q^2)$ and $g_2^2(q^2)$ is independent of the quantum gauge parameter. The running couplings coincide in this region with those defined in \citeres{Ke89,Ku91,Si92}. 
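At one-loop order the cancellation indicated in \refeq{eq:runcoupl} can be sketched explicitly: using $e_0^2 = (1 + 2\delta Z_e) e^2$ and $\Re \Pi^{\hat A\Ahat}_0(q^2) = \Re \Pi^{\hat A\Ahat}(q^2) + 2\delta Z_e$, which follows from $\delta Z_{\hat A\Ahat} = -2\delta Z_e$ in \refeq{eq:delZB}, one finds \begin{equation} \frac{e_0^2}{1 + \Re \Pi^{\hat A\Ahat}_0(q^2)} = \frac{(1 + 2\delta Z_e)\, e^2}{1 + \Re \Pi^{\hat A\Ahat}(q^2) + 2\delta Z_e} = \frac{e^2}{1 + \Re \Pi^{\hat A\Ahat}(q^2)} + \mbox{terms of higher order} . \end{equation} An analogous cancellation, with $2\delta Z_e$ replaced by $2\delta Z_e + \delta\mathswitch {c_{\rw}}^2/\mathswitch {s_{\rw}}^2 = -\delta Z_{\hat W}$, holds for $g_2^2(q^2)$. This renders the scheme independence and the QED-like asymptotic behavior of the running couplings manifest. 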
For finite values of $q^2$, however, there are differences% \footnote{Those differences also exist between the different formulations of the previous treatments~\cite{Ke89,Ku91,Si92}.}, and the couplings \refeq{eq:runcoupl} depend on $\xi_Q$. This indicates that the mentioned desirable theoretical properties do not single out any specific definition of the running couplings. Instead, any definition of running couplings via Dyson summation of self-energies\ that take into account mass effects is not unique but a matter of convention. This arbitrariness is made transparent in the BFM and has to be taken into account in treatments based on running couplings. \section{Non-linear Realization of the Scalar Sector} \label{se:nonlin} In the previous sections the scalar $\mathrm{SU}(2)_{{\mathrm{W}}}$ doublet $\hat\Phi$ was represented in the usual linear way, as defined in \refeq{eq:Hbq}. It is also interesting to inspect the non-linear realization of the scalar sector specified by \cite{nlhiggs,Je86} \begin{equation} \hat\Phi = \frac{1}{\sqrt2}(v+\hat H)\hat U, \label{phinl} \end{equation} where the Goldstone fields $\hat\phi^a$ form the unitary matrix $\hat U$. A convenient representation for $\hat U$ is for instance given by \begin{equation} \hat U = \exp\left(\frac{i}{v}\hat\phi^a\sigma^a\right). \end{equation} The $\hat\phi^a$ are related to the charge eigenstates $\hat\phi^\pm$, $\hat\chi$ as follows \begin{equation} \hat\phi^\pm = \frac{1}{\sqrt{2}}\left(\hat\phi^2\pm i\hat\phi^1\right), \qquad \hat\chi = -\hat\phi^3. \end{equation} The (physical) Higgs field $\hat H$ is an $\mathrm{SU}(2)_{{\mathrm{W}}}$ singlet, unlike in the linear parametrization of \refeq{eq:Hbq}. 
The (non-polynomial) Higgs part of the Lagrangian reads \begin{equation} {\cal L}_H = \frac{1}{2}\tr{(\hat D_\mu\hat\Phi)^\dagger(\hat D^\mu\hat\Phi)} +\frac{1}{2}\mu^2\tr{\hat\Phi^\dagger\hat\Phi} -\frac{1}{16}\lambda\left(\tr{\hat\Phi^\dagger\hat\Phi}\right)^2, \label{laghnl} \end{equation} where $\hat D_\mu$ denotes the covariant derivative of $\hat\Phi$ in matrix notation \begin{equation} \hat D^\mu\hat\Phi = \partial_\mu\hat\Phi-ig_2\hat W_\mu^a\frac{\sigma^a}{2}\hat\Phi -ig_1\hat\Phi \hat B_\mu\frac{\sigma^3}{2}. \end{equation} One of the most interesting features of the non-linear realization \refeq{phinl} is that the scalar self-interaction in \refeq{laghnl} is independent of the unphysical Goldstone fields $\hat\phi^a$ owing to the unitarity of $\hat U$. The linear and non-linear realizations of the scalar sector turn out to be physically equivalent \cite{nlhiggs}, as the Jacobian of the corresponding field transformation yields only a contribution to the Lagrangian proportional to $\delta^{(D)}(0)$, which cancels extra quartic UV divergences occurring in loop diagrams but vanishes anyway in dimensional regularization. In the BFM the fields $\hat H$ and $\hat \phi^a$ are split into background and quantum fields as follows \cite{HHint} \begin{equation} \hat H \to \hat H + H, \qquad \hat U \to \hat U U. \end{equation} Note that in order to preserve background gauge invariance the matrix $\hat U$ is split multiplicatively, i.e.\ the $\hat\phi^a$ are split in a non-linear way. 
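Note also that the Goldstone-field independence of the scalar potential is manifest from the unitarity of $\hat U$: since $\hat U^\dagger\hat U = {\bf 1}$, one has \begin{equation} \tr{\hat\Phi^\dagger\hat\Phi} = \frac{1}{2}(v+\hat H)^2 \tr{\hat U^\dagger\hat U} = (v+\hat H)^2 , \end{equation} so that the potential terms in \refeq{laghnl} involve only the Higgs field $\hat H$. This property is preserved by the multiplicative splitting $\hat U \to \hat U U$. 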
The corresponding $R_\xi$-gauge-fixing term for the quantum fields reads \cite{HHint} \begin{eqnarray} {\cal L}_{\rm GF} &=& -\frac{1}{4\xi_Q} \tr{\left( \partial^\mu W_\mu^a\sigma^a +g_2\varepsilon^{abc}\hat W_\mu^a W^{\mu,b}\sigma^c +\xi_Q\frac{g_2v}{2}\hat{U}\phi^a\sigma^a\hat{U}^\dagger\right)^2} \nonumber\\[.3em] && {}-\frac{1}{2\xi_Q}\left(\partial^\mu B_\mu+\xi_Q\frac{g_1v}{2}\phi^3\right)^2, \label{eq:gfterm} \end{eqnarray} and the Faddeev--Popov ghost Lagrangian ${\cal L}_{\rm FP}$ is constructed as usual. Since ${\cal L}_{\rm GF}$ does not involve $H$ and $\hat H$, the physical Higgs field does not couple to the Faddeev--Popov ghost fields. Owing to the gauge invariance of the background Higgs field $\hat H$, vertex functions involving only $\hat H$ fields are independent of the gauge parameter $\xi_Q$. We have explicitly checked this for the case of the tadpole $\Gamma^{\hat H}=iT^{\hat H}$ and the Higgs two-point function $\Gamma^{\hat H\Hhat}(q)=i(q^2-\mathswitch {M_\PH}^2)+i\Sigma^{\hat H\Hhat}(q^2)$. Hence, the tadpole counterterm $\delta t=-T^{\hat H}$ and the Higgs-boson mass counterterm $\delta\mathswitch {M_\PH}^2=\Re\left(\Sigma^{\hat H\Hhat}(\mathswitch {M_\PH}^2)\right)$ are gauge-independent in contrast to the corresponding quantities in the linear parametrization. Moreover, the gauge independence of $\delta t$ implies the same for the gauge-boson mass counterterms $\delta\mathswitch {M_\PW}^2$ and $\delta\mathswitch {M_\PZ}^2$ (and for the fermion-mass counterterms) because of the gauge independence of the propagator poles. Carrying out the field renormalization in a way respecting background-field gauge invariance, one finds the same relations, \refeq{eq:delZB}, for the field renormalization constants as in the linear scalar realization except for that of $\delta Z_{\hat H}$. There is no constraint on $\delta Z_{\hat H}$ following from gauge invariance. 
In this context we mention that the non-polynomial scalar self-interactions in \refeq{laghnl} lead to a Higgs self-energy $\Sigma^{\hat H\Hhat}(q^2)$ which remains UV-divergent off shell even after Higgs-field and Higgs-mass renormalization. This is due to the presence of UV-divergent terms proportional to $q^4$. Of course, in S-matrix elements these spurious divergences always cancel against their counterparts in other vertex functions since the complete theory is renormalizable. Disregarding the physical Higgs field in the non-linear realization \refeq{phinl}, the SM reduces to the so-called gauged non-linear $\sigma$-model (GNLSM) \cite{gnlsm}. The GNLSM is non-renormalizable but still an $\mathrm{SU}(2)_{{\mathrm{W}}}\times \mathrm{U}(1)_Y$ gauge theory. The BFM effective action of the GNLSM is gauge-invariant, and the corresponding vertex functions obey simple Ward identities. However, the structure of these Ward identities is different from that in the SM described in the previous sections, although they can be derived analogously. This is due to the non-linearity in the scalar sector, which also renders the gauge transformations of the background Goldstone fields non-linear, \begin{equation} \delta\hat \phi^a = \mathswitch {M_\PW}\delta\hat\theta^a +\mathswitch {M_\PW}\frac{\mathswitch {s_{\rw}}}{\mathswitch {c_{\rw}}}\delta\hat\theta^{\rm Y}\delta^{a3} -\frac{e}{2\mathswitch {s_{\rw}}}\varepsilon^{abc}\delta\hat\theta^b\hat \phi^c +\frac{e}{2\mathswitch {c_{\rw}}}\varepsilon^{a3c}\delta\hat\theta^{\rm Y}\hat \phi^c + {\cal O}(\hat \phi^2), \label{gaugatrnl} \end{equation} as can be easily inferred from the detailed presentation of \citere{HHint}. Consequently, a Ward identity for an $n$-point function in general involves vertex functions with fewer external lines, down to self-energies. 
Since $H$ and $\hat H$ represent $\mathrm{SU}(2)_{{\mathrm{W}}}\times \mathrm{U}(1)_Y$ singlets, the Ward identities of the GNLSM are valid in the SM with the non-linear scalar realization of \refeq{phinl}, too. The remaining Ward identities in the SM with non-linear scalar sector, which involve $\hat H$ vertex functions, are obtained from those of the GNLSM simply by taking further functional derivatives with respect to $\hat H$, or diagrammatically by adding further $\hat H$ legs to each occurring vertex function. In particular, tadpole contributions can never occur in the Ward identities. In \refeq{gaugatrnl} the constant terms and the ones linear in the $\hat\phi^a$ coincide with the corresponding result for the linear realization of the scalar sector [see Eq.\ (21) of \citere{bgflong}]. Therefore, Ward identities involving at most one Goldstone field but no Higgs field in each occurring vertex function coincide within the linear and non-linear scalar realizations. In particular, this is the case for all Ward identities given in \refse{sec:WI} except for \refeqs{eq:seZ2} and (\ref{eq:seW2}), which are modified in the non-linear scalar realization of the SM and the GNLSM to \begin{eqnarray} k^\mu \Gamma^{\hat Z\hat \chi}_{\mu}(k) -i\mathswitch {M_\PZ} \Gamma^{\hat \chi\chihat}(k) &=& 0 , \\ k^\mu \Gamma^{\hat W^\pm\hat \phi^\mp}_{\mu}(k) \mp\mathswitch {M_\PW} \Gamma^{\hat \phi^\pm\hat \phi^\mp}(k) &=& 0, \end{eqnarray} where no tadpole contributions occur. \section{Gauge Invariance and Gauge-Parameter-Independent Formulations of Green Functions} \label{se:GIVvGIP} In this section we discuss the relation between gauge invariance and gauge-pa\-ra\-me\-ter-independent formulations at the level of Green functions. One should be aware in this context that formally one can obtain a gauge-parameter-independent quantity in a totally trivial way, namely by setting the gauge parameters to a specific value, e.g.~$\xi_i = 1$. 
A ``trivial'' gauge-parameter independence of this kind is obviously not related to any symmetry properties of the theory. On the other hand, as mentioned in the introduction, the rearrangement of parts between different vertex functions in the conventional formalism of the SM according to the prescription of the pinch technique (PT) has led to new ``vertex functions'' that are gauge-parameter-independent and coincide with the corresponding vertex functions in the BFM for $\xi_Q = 1$. The PT ``vertex functions'' were found to fulfill the same Ward identities that, within the BFM, are a direct consequence of gauge invariance. The non-trivial symmetry relations in this case stem from the fact that the gauge parameters in the vertex functions are canceled while the lowest-order propagators connecting the PT ``vertex functions'' are still gauge-parameter-dependent. Obviously, this cannot be achieved by simply setting the gauge parameters in the conventionally defined vertex functions to a certain value. As the complete S-matrix element is independent of the gauge parameters, certain relations between the new ``vertex functions'' must exist that enforce the cancellation of the remaining gauge-parameter dependence. It is important to note that the validity of non-trivial symmetry relations is not based on the actual gauge-parameter independence of the new ``vertex functions'', but --- more generally --- on the independence of the gauge parameters in the tree-level propagators from the gauge fixing within loop diagrams. This, however, is exactly the same situation as in the BFM. The vertex functions in the BFM depend on the quantum gauge parameter $\xi_Q$. This gauge dependence is completely unrelated to the gauge fixing entering the lowest-order propagators and giving rise to background gauge parameters% \footnote{In this section we restrict ourselves to linear background gauge-fixing conditions. 
Note that the PT has only been formulated for linear gauge fixings.} $\xi_B^i$. Thus, there is an analogy between the BFM and prescriptions for constructing gauge-parameter-independent ``vertex functions'' in the conventional formalism, as far as the cancellation of gauge parameters associated with lowest-order quantities is concerned. In the BFM, however, the cancellation of background gauge parameters is enforced by the BFM Ward identities. Consequently, a possible (and particularly simple) choice for gauge-parameter-independent ``vertex functions'' constructed using the conventional formalism is one that respects the BFM Ward identities. In order to illustrate this in some more detail, we treat as a simple example a four-fermion process $\mathswitchr u_1\bar\mathswitchr d_1\to\mathswitchr u_2\bar\mathswitchr d_2$ at one-loop order, where $\mathswitchr u_i$ and $\mathswitchr d_i$ are up- and down-type fermions, respectively. For ease of notation, we consider a charged-current process, i.e.\ we do not include mixing effects between different gauge bosons. 
The complete one-loop contribution $\delta {\cal M}$ to the transition amplitude ${\cal M}$ can be written as \begin{eqnarray} \label{eq:4fermel} \delta {\cal M} &\rlap{=}& \phantom{{}+} \left(\bar d_1 \Gamma^{W^- \bar d_1 u_1}_{\mu,(0)} u_1\right) \Delta^{W, \mu \alpha} \left( \Gamma^{W^+ W^-}_{\alpha\beta,(1)}- i\Gamma^{W^+ W^-H}_{\alpha\beta,(0)}\Gamma^H_{(1)}/\mathswitch {M_\PH}^2\right) \Delta^{W, \beta \nu} \left(\bar u_2 \Gamma^{W^+ \bar u_2 d_2}_{\nu,(0)} d_2\right) \nonumber\\ && {}+ \left(\bar d_1 \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1\right) \Delta^{W, \mu \alpha} \left( \Gamma^{W^+ \phi^-}_{\alpha,(1)} -i\Gamma^{W^+ \phi^-H}_{\alpha,(0)}\Gamma^H_{(1)}/\mathswitch {M_\PH}^2\right) \Delta^{\phi} \left(\bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(0)} d_2\right) \nonumber\\ && {}+ \left(\bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(0)} u_1\right) \Delta^{\phi} \left( \Gamma^{\phi^+ W^-}_{\beta,(1)} -i\Gamma^{\phi^+ W^-H}_{\beta,(0)}\Gamma^H_{(1)}/\mathswitch {M_\PH}^2\right) \Delta^{W, \beta \nu} \left(\bar u_2 \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2\right) \nonumber\\ && {}+ \left(\bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(0)} u_1\right) \Delta^{\phi} \left( \Gamma^{\phi^+\phi^-}_{(1)} -i\Gamma^{\phi^+\phi^-H}_{(0)}\Gamma^H_{(1)}/\mathswitch {M_\PH}^2\right) \Delta^{\phi} \left(\bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(0)} d_2\right) \nonumber\\ && {}+ \left(\bar d_1 \Gamma^{W^- \bar d_1 u_1}_{\mu, (1)} u_1\right) \Delta^{W, \mu \nu} \left(\bar u_2 \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2\right) + \left(\bar d_1 \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1\right) \Delta^{W, \mu \nu} \left(\bar u_2 \Gamma^{W^+ \bar u_2 d_2}_{\nu, (1)} d_2\right) \nonumber\\ && {}+ \left(\bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(1)} u_1\right) \Delta^{\phi} \left(\bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(0)} d_2\right) + \left(\bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(0)} u_1\right) \Delta^{\phi} \left(\bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(1)} d_2\right) \nonumber\\ && {}+ \bar d_1 \bar u_2 
\Gamma^{\bar d_1 u_1 \bar u_2 d_2}_{(1)} u_1 d_2, \end{eqnarray} where $\bar d_1, u_1, \bar u_2,$ and $d_2$ denote the spinors of the external fermions. The subscripts ``(0)'' and ``(1)'' mark lowest-order and one-loop quantities, respectively. The terms in the first four lines are self-energy\ and tadpole contributions, the ones in the fifth and sixth lines are vertex corrections, and the last line contains the one-loop box contribution. Since we are concerned with an S-matrix element, \refeq{eq:4fermel} is understood to contain {\it renormalized quantities} only. In particular, the wave-function renormalizations of the external fermion lines are completely absorbed in the vertex corrections. We use a linear $R_{\xi}$ gauge for the lowest-order propagators $\Delta^{\phi}$ and $\Delta^{W}_{\mu \nu}$, i.e. \begin{equation} \label{eq:propzero} \Delta^{\phi}(k) = \frac{i}{k^2 - \xi \mathswitch {M_\PW}^2} , \quad \Delta^{W}_{\mu \nu}(k) = \left( - g_{\mu \nu} + \frac{k_{\mu} k_{\nu}}{\mathswitch {M_\PW}^2} \right) \frac{i}{k^2 - \mathswitch {M_\PW}^2} - \frac{k_{\mu} k_{\nu}}{\mathswitch {M_\PW}^2} \Delta^{\phi}(k) . \end{equation} According to our discussion above, we assume that the gauge-parameter dependence of the one-particle-irreducible contributions in \refeq{eq:4fermel} is not related to that of the tree propagators, \refeq{eq:propzero}. This includes both the case of the BFM and that of gauge-parameter-independent ``vertex functions'' constructed in the conventional formalism. Since the box contribution is independent of the (background-type) gauge parameter $\xi$, the cancellation of $\xi$ requires symmetry relations involving self-energy and vertex contributions. After inserting \refeq{eq:propzero} into \refeq{eq:4fermel}, the complete $\xi$ dependence is contained in the factors $\Delta^{\phi}$. 
Collecting these gauge-dependent parts yields two relations, namely one for the contributions proportional to $\left( \Delta^{\phi} \right)^2$ and one for the terms proportional to $\Delta^{\phi}$. Using the relations \begin{equation} \bar d_1 k^{\mu} \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1 = \mathswitch {M_\PW} \bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(0)} u_1, \qquad \bar u_2 k^{\nu} \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2 = \mathswitch {M_\PW} \bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(0)} d_2, \end{equation} and the tensor structure of the two-point functions, we find \begin{eqnarray} \label{eq:gaparrel1} k^{\alpha} k^{\beta} \Gamma^{W^+ W^-}_{\alpha \beta,(1)}(k) - 2 \mathswitch {M_\PW} k^{\alpha} \Gamma^{W^+ \phi^-}_{\alpha,(1)}(k) + \mathswitch {M_\PW}^2 \Gamma^{\phi^+\phi^-}_{(1)}(k) -\frac{e\mathswitch {M_\PW}}{2\mathswitch {s_{\rw}}} \Gamma^H_{(1)} &=& 0, \! \hspace{2em} \\ \label{eq:gaparrel2} \! \left( \bar d_1 k^{\mu} \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1 \right) 2i\left[ \frac{k^{\alpha} k^{\beta}}{k^2} \Gamma^{W^+ W^-}_{\alpha \beta,(1)}(k) - \frac{\mathswitch {M_\PW} k^{\alpha}}{k^2} \Gamma^{W^+ \phi^-}_{\alpha,(1)}(k) \right] \left( \bar u_2 k^{\nu} \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2 \right) && \nonumber\\ {} + \left( \bar d_1 k^{\mu} \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1 \right) \frac{ie\mathswitch {M_\PW}}{\mathswitch {s_{\rw}}\mathswitch {M_\PH}^2}\Gamma^H_{(1)} \left( \bar u_2 k^{\nu} \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2 \right) && \nonumber\\ {} + \left(\bar d_1 k^{\mu} \Gamma^{W^- \bar d_1 u_1}_{\mu, (0)} u_1 \right) \mathswitch {M_\PW}^2\left[ \bar u_2 k^{\nu} \Gamma^{W^+ \bar u_2 d_2}_{\nu, (1)} d_2 - \mathswitch {M_\PW} \bar u_2 \Gamma^{\phi^+ \bar u_2 d_2}_{(1)} d_2 \right] && \nonumber\\ {} \quad + \mathswitch {M_\PW}^2\left[ \bar d_1 k^{\mu} \Gamma^{W^- \bar d_1 u_1}_{\mu, (1)} u_1 - \mathswitch {M_\PW} \bar d_1 \Gamma^{\phi^- \bar d_1 u_1}_{(1)} u_1 \right] \left(\bar u_2 k^{\nu} \Gamma^{W^+ \bar u_2 d_2}_{\nu, (0)} d_2 
\right) &=& 0, \! \end{eqnarray} where $k^\mu$ represents the total incoming momentum of the initial state. \refeq{eq:gaparrel1} coincides with the (renormalized) Ward identity valid for the one-loop self-energies\ in the conventional formalism of the SM, while \refeq{eq:gaparrel2} involves process-specific vertex contributions as well as process-independent self-energies\ and a tadpole term. Note that the self-energy\ and vertex contributions in \refeq{eq:gaparrel2} do not necessarily decouple. In the particular case of the BFM with the renormalization procedure described in \refse{se:ren}, \refeqs{eq:gaparrel1} and \refeqf{eq:gaparrel2} are obviously fulfilled. \refeq{eq:gaparrel1} is just the sum of the BFM Ward identities \refeqs{eq:seW1} and \refeqf{eq:seW2}. In \refeq{eq:gaparrel2} the four lines actually vanish separately. The first line is zero owing to the Ward identity \refeq{eq:seW1}, the second is absent since the tadpole is renormalized to zero, and the last two lines vanish owing to the Ward identities \refeq{WIWff} and the on-shell conditions for the fermions. In this context, it is interesting to add some remarks on the tadpole contributions. Of course, it is not necessary to renormalize the tadpole to zero as is done in \refse{se:ren}. Instead, one can fix its renormalized value arbitrarily, or one need not renormalize it at all. This leads to additional tadpole contributions in all mass counterterms, e.g. 
\begin{eqnarray} \label{eq:dMW2} \delta\mathswitch {M_\PW}^2 &=& \Re\left(\Sigma^{WW}_{0,\rm T}(\mathswitch {M_\PW}^2)\right) -\frac{e\mathswitch {M_\PW}}{\mathswitch {s_{\rw}}\mathswitch {M_\PH}^2}T^H, \\ \delta\mathswitch {m_\Pf} &=& \frac{1}{2}\mathswitch {m_\Pf} \Re\Bigl[\Sigma^{\bar ff}_{0,\mathswitchr L}(\mathswitch {m_\Pf}^2) + \Sigma^{\bar ff}_{0,\mathswitchr R}(\mathswitch {m_\Pf}^2) + 2\Sigma^{\bar ff}_{0,{\mathrm{S}}}(\mathswitch {m_\Pf}^2) \Bigr] -\frac{e\,\mathswitch {m_\Pf}}{2\mathswitch {s_{\rw}}\mathswitch {M_\PW}\mathswitch {M_\PH}^2}T^H, \label{eq:dmtad} \end{eqnarray} where the unrenormalized self-energies $\Sigma_0$ are defined as in \citere{bgflong}. The renormalized tadpole $T^H=T^H_0+\delta t$ consists of the unrenormalized tadpole contribution $T^H_0$ and the tadpole counterterm $\delta t$. The tadpole terms in \refeqs{eq:dMW2} and \refeqf{eq:dmtad} are canceled in $\delta {\cal M}$ by the tadpole contributions in \refeq{eq:4fermel}. Consequently, in such a renormalization scheme the four lines in \refeq{eq:gaparrel2} do {\it not} decouple. Using the BFM with a finite renormalized tadpole, the situation is as follows: the first line of \refeq{eq:gaparrel2} is still zero owing to the Ward identity \refeq{eq:seW1}. However, the last two lines yield finite tadpole contributions upon inserting the identities \refeqs{WIWff} and using the on-shell conditions for the fermions, \begin{equation} \Re\left.\left\{ \Gamma^{\mathswitch{\bar f} f}_{(1)}(-p) - \frac{i}{\mathswitch {M_\PH}^2} \Gamma^{\mathswitch{\bar f} f \mathswitch H}_{(0)}(-p,p,0) \Gamma^{\mathswitch H}_{(1)} \right\} \right|_{p^2 = \mathswitch {m_\Pf}^2} u(p) = 0. \end{equation} The resulting terms cancel exactly against the tadpole contribution in \refeq{eq:gaparrel2}. 
The above investigation of the gauge-parameter dependence associated with the tree lines for the example of a (charged-current) four-fermion process shows, in particular, that the gauge independence of the corresponding S-matrix element does not require a decoupling of the conventional Ward identity \refeq{eq:gaparrel1} into the BFM counterparts \refeqs{eq:seW1} and \refeqf{eq:seW2}. This is in contrast to the statements made in \citere{PT3} in the PT framework. There the decoupled Ward identities were derived under the {\it additional assumption} that the tree-like gauge-parameter dependence of self-energy contributions is canceled independently of the remaining vertex and tadpole contributions. In particular, care has to be taken with respect to the tadpole contributions. They cannot be simply included in the self-energies obeying the decoupled identities \refeqs{eq:seW1} and \refeqf{eq:seW2}, since they do not fulfill these identities by themselves. Finally, we emphasize that derivations starting from the gauge independence of the S-matrix can only yield results for {\it renormalized} vertex functions since an ``unrenormalized S-matrix'' does not exist. Even if gauge-parameter-independent ``vertex functions'' are constructed in such a way that they fulfill the BFM Ward identities, their definition is still not unique. One can always shift between the ``vertex functions'' parts that by themselves fulfill the Ward identities. This freedom naturally appears within the framework of the BFM as the freedom of choosing different values of the quantum gauge parameter $\xi_Q$. As has been stressed above, the BFM Ward identities and the desirable properties of the BFM vertex functions are a consequence of gauge invariance and hold for arbitrary values of $\xi_Q$. 
When comparing the approach pursued e.g.~in the PT with the BFM, one should keep in mind that the field-theoretical interpretation of the PT quantities, which are defined by a rearrangement of contributions between different vertex functions, is rather obscure. Their process independence has not been proven in general, but only verified for specific examples (see in particular \citere{PT2}), and their construction beyond one-loop order is technically very complicated~\cite{twoloop}. In contrast, the BFM vertex functions have a well-defined field-theoretical meaning and can be derived from the effective action in all orders of perturbation theory. As we have seen above, in the conventional formalism the application of the PT is a special case of decoupling the gauge-parameter dependence of the vertex functions from that in the tree propagators. Recently, however, the PT has also been applied within the framework of the BFM in order to eliminate the dependence of the BFM vertex functions on the quantum gauge parameter $\xi_Q$~\cite{papavBFM}. Since the background gauge parameters $\xi_B$ appearing in the tree propagators of the BFM are not related to $\xi_Q$, an elimination of $\xi_Q$ via a prescription like the PT can {\em not} be distinguished from trivially setting $\xi_Q$ to any specific value. This can also be seen from the fact that application of the PT within the BFM does not lead to new relations between the BFM vertex functions. The apparent gauge-parameter independence has only been achieved at the cost of the specific prescription used in the PT to eliminate the gauge parameter. The comparison between PT and BFM has made transparent that, despite their gauge-parameter independence and several desirable properties, the PT ``vertex functions'' are not unique but to a large extent a matter of convention. This is evident, since an off-shell quantity cannot be directly related to an observable and thus cannot be fixed uniquely. 
It was already pointed out in \citere{Je86} that off-shell quantities are ambiguous even if gauge invariance is imposed. This holds in particular for all off-shell form factors such as a neutrino electromagnetic moment or anomalous triple-gauge-boson couplings. While these quantities are well-defined where the single one-particle exchange approximation holds, like e.g.\ on the $\mathswitchr Z$ resonance, they are not directly observable in general and to a large extent ambiguous. The PT, like any other prescription, can only provide a more or less convenient definition for off-shell quantities but cannot supply a physical meaning. \section{The $S$, $T$, and $U$ Parameters in the Background-Field Method} \label{se:STU} As an illustration of the discussion given in the last section, we treat the $S$, $T$, and $U$ parameters in the framework of the BFM. The $S$, $T$, and $U$ parameters are defined as certain combinations of self-energies~\cite{STU}. Originally, they were introduced in order to parametrize the effects of new physics that enters only via oblique (i.e.~self-energy) corrections. They can be extracted from experiment by comparing the experimentally measured values ${\cal A}_i^{\mathrm{exp}}$ of a number of observables with their values predicted by the SM, ${\cal A}_i^{\mathrm{SM}}$, i.e. \begin{equation} \label{eq:STUgen} {\cal A}_i^{\mathrm{exp}} = {\cal A}_i^{\mathrm{SM}} + f^{\mathrm{NP}}_i(S, T, U). \end{equation} Here ${\cal A}_i^{\mathrm{SM}}$ contains the complete radiative corrections in the SM up to a given order, while $f^{\mathrm{NP}}_i(S, T, U)$ is a function of the parameters $S$, $T$, $U$ and describes the contributions of new physics. The SM prediction ${\cal A}_i^{\mathrm{SM}}$ is evaluated for a reference value of $m_\mathswitchr t$ and $M_\mathswitchr H$. 
For most observables accessible by precision measurements the corrections caused by a variation of $m_\mathswitchr t$ and $M_\mathswitchr H$ can also be absorbed into the parameters $S$, $T$, and $U$. The parameters $S$, $T$, and $U$ obtained via \refeq{eq:STUgen} are gauge-invariant quantities. This follows from the fact that ${\cal A}_i^{\mathrm{SM}}$ contains a complete set of electroweak radiative corrections entering an S-matrix element and that the analysis has been restricted to those models of new physics where $f^{\mathrm{NP}}_i(S, T, U)$ accounts for the total contribution. In \citere{DKS}, however, an extension of the $S$, $T$, and $U$ parametrization to cases where these assumptions do not hold has been discussed. This includes effects of new physics that do not exclusively enter via oblique corrections but also via vertex and box contributions as for example anomalous triple-gauge-boson couplings. Furthermore, the authors of \citere{DKS} also considered the case where the $S$, $T$, and $U$ parameters are used {\em within} the SM, i.e.~to parametrize not only new physics effects but also the SM fermionic and bosonic radiative corrections. These extensions of the $S$, $T$, and $U$ parameters appear to be questionable, since the parameters defined in this way are no longer directly related to observables. In particular, this poses severe problems of gauge invariance. It was pointed out in \citere{DKS} that calculating the one-loop bosonic SM contributions to the $S$, $T$, and $U$ parameters yields gauge-parameter-dependent results. It was further noted that for gauges with $\xi_W \neq \mathswitch {c_{\rw}}^2 \xi_Z + \mathswitch {s_{\rw}}^2 \xi_A$ the parameters $T$ and $U$ are even UV-divergent. The authors of \citere{DKS} argued that these problems can be overcome by using the PT in order to eliminate the gauge-parameter dependence of the one-loop gauge-boson self-energies. 
By explicit calculation, the $S$, $T$, and $U$ parameters obtained within the PT were also shown to be UV-finite. In order to discuss the formulation of the $S$, $T$, and $U$ parameters given in \citere{DKS}, we calculate the bosonic SM contributions to the $S$, $T$, and $U$ parameters in the framework of the BFM. To allow for an easy comparison, we adopt the same definition of $S$, $T$, and $U$ as in \citere{DKS}, i.e. \begin{eqnarray} \alpha S_0 &=& 4\mathswitch {c_{\rw}}^2\mathswitch {s_{\rw}}^2 {\mathrm Re}\left\{ -\Pi^{ZZ}_0(\mathswitch {M_\PZ}^2) + \frac{\mathswitch {s_{\rw}}^2-\mathswitch {c_{\rw}}^2}{\mathswitch {c_{\rw}}\mathswitch {s_{\rw}}}\Pi^{\gamma Z}_0(\mathswitch {M_\PZ}^2) +\Pi^{\gamma\ga}_0(\mathswitch {M_\PZ}^2) \right\}, \\ \alpha T_0 &=& -\frac{\Sigma^{WW}_{\rm T,0}(0)}{\mathswitch {M_\PW}^2} + \frac{\Sigma^{ZZ}_{\rm T,0}(0)}{\mathswitch {M_\PZ}^2} -2\mathswitch {c_{\rw}}\mathswitch {s_{\rw}}\frac{\Sigma^{\gamma Z}_{\rm T,0}(0)}{\mathswitch {M_\PW}^2}, \\ \alpha U_0 &=& 4\mathswitch {s_{\rw}}^2 {\mathrm Re}\left\{ -\Pi^{WW}_0(\mathswitch {M_\PW}^2) + \mathswitch {c_{\rw}}^2\Pi^{ZZ}_0(\mathswitch {M_\PZ}^2) -2\mathswitch {c_{\rw}}\mathswitch {s_{\rw}}\Pi^{\gamma Z}_0(\mathswitch {M_\PZ}^2) + \mathswitch {s_{\rw}}^2\Pi^{\gamma\ga}_0(\mathswitch {M_\PZ}^2) \right\}, \hspace{2em} \end{eqnarray} where as usual $\alpha = e^2/(4 \pi)$. We use the subscript ``0'' to indicate that $S$, $T$, and $U$ are defined in terms of unrenormalized one-loop self-energies. Note that in our conventions $\mathswitch {s_{\rw}}$ differs by a sign from the one used in \citere{DKS}. Furthermore, we use the on-shell definitions for $e$ and $\mathswitch {s_{\rw}}$, while in \citere{DKS} the $\overline{\mbox{MS}}$ parameters are used. This difference is irrelevant for the discussion in this section. The bosonic contributions to $S$, $T$ and $U$ in the BFM formulation of the SM read \begin{eqnarray} \lefteqn{\! \! \! 
\alpha S^{\mathrm{SM, BFM}}_0 = \frac{\alpha}{24 \pi} \biggl\{ 2 \mathswitch {c_{\rw}}^2 - 5 + h + 2 \mathswitch {c_{\rw}}^2 \xi_Q - 2 \log(\mathswitch {c_{\rw}}^2) - 2 \left[3 - (1 + 2 \mathswitch {c_{\rw}}^2) \xi_Q \right] \frac{\log(\xi_Q)}{1 - \xi_Q} } \nonumber\\ && {} - 2 (12 - 4 h + h^2) F(\mathswitch {M_\PZ}^2; \mathswitch {M_\PH}, \mathswitch {M_\PZ}) + 6 (1 - 4 \mathswitch {c_{\rw}}^2 \xi_Q) F(\mathswitch {M_\PZ}^2; \sqrt{\xi_Q} \mathswitch {M_\PW}, \sqrt{\xi_Q} \mathswitch {M_\PW}) \nonumber\\ && {} - 4 \left[1 + 10 \mathswitch {c_{\rw}}^2 + \mathswitch {c_{\rw}}^4 - 2 \mathswitch {c_{\rw}}^2 (1 + \mathswitch {c_{\rw}}^2) \xi_Q + \mathswitch {c_{\rw}}^4 \xi_Q^2 \right] F(\mathswitch {M_\PZ}^2; \mathswitch {M_\PW}, \sqrt{\xi_Q} \mathswitch {M_\PW}) \biggr\}, \label{eq:SBFM} \\[0.2 cm] \lefteqn{\! \! \! \alpha T^{\mathrm{SM, BFM}}_0 = \frac{\alpha}{16 \pi} \biggl\{ \frac{1}{\mathswitch {s_{\rw}}^2} (12 - 5 \xi_Q) - \frac{3 h^2}{\mathswitch {c_{\rw}}^2 (1 - h) (\mathswitch {c_{\rw}}^2 - h)} \log (h) } \nonumber\\ && {} + \frac{\mathswitch {c_{\rw}}^2\log(\mathswitch {c_{\rw}}^2)}{\mathswitch {s_{\rw}}^4 (\mathswitch {c_{\rw}}^2 - h) (\mathswitch {c_{\rw}}^2 - \xi_Q) (1 - \mathswitch {c_{\rw}}^2 \xi_Q)} \Bigl[3 \mathswitch {c_{\rw}}^2 (3 - 2 \mathswitch {c_{\rw}}^2 + 3 \mathswitch {c_{\rw}}^4) - 3 h (2 - \mathswitch {c_{\rw}}^2 + 3 \mathswitch {c_{\rw}}^4) \nonumber\\ && {} \; \; - \Bigl(9 - 12 \mathswitch {c_{\rw}}^2 + 23 \mathswitch {c_{\rw}}^4 + 9 \mathswitch {c_{\rw}}^6 - h (6/\mathswitch {c_{\rw}}^2 - 9 + 20 \mathswitch {c_{\rw}}^2 + 12 \mathswitch {c_{\rw}}^4) \Bigr) \xi_Q \nonumber\\ && {} \; \; + \Bigl(\mathswitch {c_{\rw}}^2 (5 + 6 \mathswitch {c_{\rw}}^2 + 11 \mathswitch {c_{\rw}}^4) - h (2 + 9 \mathswitch {c_{\rw}}^2 + 11 \mathswitch {c_{\rw}}^4) \Bigr) \xi_Q^2 - \mathswitch {c_{\rw}}^2 (2 + 3 \mathswitch {c_{\rw}}^2) (\mathswitch {c_{\rw}}^2 - h) \xi_Q^3 \Bigr] \nonumber\\ && {} + \frac{3 \left[ 3 \mathswitch {c_{\rw}}^2 - (3 + 2 \mathswitch 
{c_{\rw}}^2) \xi_Q + (1 + 2 \mathswitch {c_{\rw}}^2) \xi_Q^3 - \mathswitch {c_{\rw}}^2 \xi_Q^4 \right]}{(\mathswitch {c_{\rw}}^2 - \xi_Q) (1 - \mathswitch {c_{\rw}}^2 \xi_Q)} \frac{\log(\xi_Q)}{1 - \xi_Q} \biggr\} , \label{eq:TBFM}\\[0.2 cm] \lefteqn{\! \! \! \alpha U^{\mathrm{SM, BFM}}_0 = \frac{\alpha}{12 \pi \mathswitch {c_{\rw}}^2} \biggl\{ - \frac{2}{\mathswitch {c_{\rw}}^2} - \frac{39}{2} + \frac{171}{2} \mathswitch {c_{\rw}}^2 - \mathswitch {c_{\rw}}^4 + \frac{1}{2} h \mathswitch {s_{\rw}}^2 + (4 + 12 \mathswitch {c_{\rw}}^2 - \mathswitch {c_{\rw}}^4) \xi_Q } \nonumber\\ && {} + \frac{\mathswitch {c_{\rw}}^4\log(\mathswitch {c_{\rw}}^2)}{\mathswitch {s_{\rw}}^2 (\mathswitch {c_{\rw}}^2 - \xi_Q) (1 - \mathswitch {c_{\rw}}^2 \xi_Q)} \Bigl[ 1 + 89 \mathswitch {c_{\rw}}^2 - 27 \mathswitch {c_{\rw}}^4 - (1/\mathswitch {c_{\rw}}^2 + 107 - 41 \mathswitch {c_{\rw}}^2 + 44 \mathswitch {c_{\rw}}^4) \xi_Q \nonumber\\ && {} \; \; + (13 + 53 \mathswitch {c_{\rw}}^2 - 33 \mathswitch {c_{\rw}}^4) \xi_Q^2 + 3 \mathswitch {c_{\rw}}^2 (2 + 3 \mathswitch {c_{\rw}}^2) \xi_Q^3 \Bigr] \nonumber\\ && {} - \frac{\mathswitch {s_{\rw}}^2}{(\mathswitch {c_{\rw}}^2 - \xi_Q) (1 - \mathswitch {c_{\rw}}^2 \xi_Q)} \Bigl[ 1 + 5 \mathswitch {c_{\rw}}^2 + 27 \mathswitch {c_{\rw}}^4 - (1/\mathswitch {c_{\rw}}^2 + 4 + 15 \mathswitch {c_{\rw}}^2 + 25 \mathswitch {c_{\rw}}^4) \xi_Q \nonumber\\ && {} \; \; - (1/\mathswitch {c_{\rw}}^2 + 12 - 6 \mathswitch {c_{\rw}}^2 + 13 \mathswitch {c_{\rw}}^4 - 2 \mathswitch {c_{\rw}}^6) \xi_Q^2 \nonumber\\ && {} \; \; + (1 + 22 \mathswitch {c_{\rw}}^2 + 16 \mathswitch {c_{\rw}}^4) \xi_Q^3 - 9 \mathswitch {c_{\rw}}^4 \xi_Q^4 \Bigr] \frac{\log(\xi_Q)}{1 - \xi_Q} \nonumber\\ && {} - (12 \mathswitch {c_{\rw}}^2 - 4 h + h^2/\mathswitch {c_{\rw}}^2) F(\mathswitch {M_\PW}^2; \mathswitch {M_\PH}, \mathswitch {M_\PW}) + \mathswitch {c_{\rw}}^2 (12 - 4 h + h^2) F(\mathswitch {M_\PZ}^2; \mathswitch {M_\PH}, \mathswitch {M_\PZ}) \nonumber\\ && {} + 48 \mathswitch 
{c_{\rw}}^2 \mathswitch {s_{\rw}}^2 F(\mathswitch {M_\PW}^2; 0, \mathswitch {M_\PW}) + \frac{\mathswitch {s_{\rw}}^2}{\mathswitch {c_{\rw}}^2} (1 + 5 \mathswitch {c_{\rw}}^2) (1 - 4 \mathswitch {c_{\rw}}^2 \xi_Q) F(\mathswitch {M_\PZ}^2; \sqrt{\xi_Q} \mathswitch {M_\PW}, \sqrt{\xi_Q} \mathswitch {M_\PW}) \nonumber\\ && {} - 2 \mathswitch {s_{\rw}}^2 (1 + \mathswitch {c_{\rw}}^2) \Bigl[1/\mathswitch {c_{\rw}}^2 + 10 + \mathswitch {c_{\rw}}^2 - 2 (1 + \mathswitch {c_{\rw}}^2) \xi_Q + \mathswitch {c_{\rw}}^2 \xi_Q^2 \Bigr] F(\mathswitch {M_\PZ}^2; \mathswitch {M_\PW}, \sqrt{\xi_Q} \mathswitch {M_\PW}) \nonumber\\ && {} - (1 - 4 \mathswitch {c_{\rw}}^2) (1/\mathswitch {c_{\rw}}^2 + 20 + 12 \mathswitch {c_{\rw}}^2) \left[F(\mathswitch {M_\PW}^2; \mathswitch {M_\PW}, \mathswitch {M_\PZ}) - F(\mathswitch {M_\PZ}^2; \mathswitch {M_\PW}, \mathswitch {M_\PW}) \right] \biggr\} , \label{eq:UBFM} \end{eqnarray} where we have used the shorthand $h = \mathswitch {M_\PH}^2/\mathswitch {M_\PZ}^2$, and the quantum gauge parameter $\xi_Q = \xi_Q^W = \xi_Q^B$ has been kept as a free parameter. The UV-finite function $F(p^2; m_1, m_2)$ is defined as \begin{equation} F(p^2; m_1, m_2) = {\mathrm Re} \left( B_0(p^2; m_1, m_2) - B_0(0; m_1, m_2) \right) , \end{equation} where $B_0(p^2;m_1,m_2)$ is the usual scalar one-loop two-point integral~\cite{Dehab}. 
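Since $F$ is UV-finite, it can be evaluated directly from the standard Feynman-parameter representation of $B_0$. The following Python sketch is an illustration of ours, not part of the original calculation (the mass values are arbitrary); it uses the fact that the UV-divergent part of $B_0$ is mass- and momentum-independent and therefore drops out of the difference:

```python
import numpy as np

# Feynman-parameter evaluation of F(p^2; m1, m2) = Re[B0(p^2) - B0(0)].
# The UV-divergent part of B0 is mass- and momentum-independent, so it
# cancels in the difference, leaving a finite one-dimensional integral:
#   F = -Int_0^1 dx Re log[(x m1^2 + (1-x) m2^2 - x(1-x) p^2 - i eps)
#                          /(x m1^2 + (1-x) m2^2)].
def F(p2, m1, m2, n=200_000):
    x = (np.arange(n) + 0.5) / n              # midpoint rule on [0, 1]
    M2 = x * m1**2 + (1.0 - x) * m2**2        # Feynman-parameter mass term
    ratio = (M2 - p2 * x * (1.0 - x)) / M2    # Re log z = log|z|
    return -np.mean(np.log(np.abs(ratio)))

MW, MZ, MH = 80.4, 91.2, 125.0   # illustrative mass values in GeV
print(F(MZ**2, MW, MW))          # the combination entering e.g. S
print(F(0.0, MH, MZ))            # F vanishes identically at p^2 = 0
```

By construction $F(0;m_1,m_2)=0$ and $F$ is symmetric under $m_1 \leftrightarrow m_2$, which provides simple checks of the implementation.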
For completeness, we also give the difference between $S^{\mathrm{SM, BFM}}_0$, $T^{\mathrm{SM, BFM}}_0$, $U^{\mathrm{SM, BFM}}_0$ evaluated at $\xi_Q = 1$ and the bosonic contributions to the $S$, $T$, and $U$ parameters calculated in the 't~Hooft--Feynman gauge (tHF) of the conventional formalism: \begin{eqnarray} \alpha S^{\mathrm{SM, BFM}}_0\biggr|_{\xi_Q = 1} &=& \alpha S^{\mathrm{SM, conv}}_0\biggr|_{\mathrm{tHF}} + \frac{2\alpha\mathswitch {c_{\rw}}^2}{\pi} {\mathrm Re}\left\{B_0(0,\mathswitch {M_\PW},\mathswitch {M_\PW})-B_0(\mathswitch {M_\PZ}^2,\mathswitch {M_\PW},\mathswitch {M_\PW})\right\}, \nonumber\\ \alpha T^{\mathrm{SM, BFM}}_0\biggr|_{\xi_Q = 1} &=& \alpha T^{\mathrm{SM, conv}}_0\biggr|_{\mathrm{tHF}} + \frac{\alpha}{\mathswitch {s_{\rw}}^2\pi}\left\{ B_0(0,\mathswitch {M_\PW},\mathswitch {M_\PW})-\mathswitch {s_{\rw}}^2 B_0(0,0,\mathswitch {M_\PW}) \right. \nonumber\\ && \hspace{10em}\left. {}-\mathswitch {c_{\rw}}^2 B_0(0,\mathswitch {M_\PW},\mathswitch {M_\PZ}) \right\}, \nonumber\\ \alpha U^{\mathrm{SM, BFM}}_0\biggr|_{\xi_Q = 1} &=& \alpha U^{\mathrm{SM, conv}}_0\biggr|_{\mathrm{tHF}} + \frac{4\alpha}{\pi}{\mathrm Re}\left\{ \mathswitch {s_{\rw}}^2 B_0(0,0,\mathswitch {M_\PW})-\mathswitch {c_{\rw}}^2 B_0(0,\mathswitch {M_\PW},\mathswitch {M_\PW}) \right. \nonumber\\ && \hspace{6em}\left. {}+\mathswitch {c_{\rw}}^2 B_0(0,\mathswitch {M_\PW},\mathswitch {M_\PZ}) -\mathswitch {s_{\rw}}^2 B_0(\mathswitch {M_\PZ}^2,\mathswitch {M_\PW},\mathswitch {M_\PW})\right\}. \hspace{2em} \end{eqnarray} This coincides with the result obtained within the PT given in \citere{DKS}. As can be seen in \refeqs{eq:SBFM}, (\ref{eq:TBFM}) and (\ref{eq:UBFM}), $S^{\mathrm{SM, BFM}}_0$, $T^{\mathrm{SM, BFM}}_0$, and $U^{\mathrm{SM, BFM}}_0$ are UV-finite for arbitrary values of $\xi_Q$. 
While within the PT the UV-finiteness of the parameters could only be inferred from explicit computation, in the BFM it is an obvious consequence of gauge invariance\footnote{Note that in the BFM gauge invariance restricts the number of quantum gauge parameters to two, $\xi_Q^W$ and $\xi_Q^B$. This automatically implies the identity $\xi_Q^W = \mathswitch {c_{\rw}}^2 \xi_Q^Z + \mathswitch {s_{\rw}}^2 \xi_Q^A$.}. In order to show this, we consider the renormalized $S$, $T$, and $U$ parameters. The renormalization of $T^{\mathrm{SM, BFM}}_0$, for instance, yields \begin{eqnarray} \alpha T^{\mathrm{SM, BFM}} &=& - \frac{\Sigma_{{\mathrm{T}}}^{\hat W\What}(0)}{\mathswitch {M_\PW}^2} + \frac{\Sigma_{{\mathrm{T}}}^{\hat Z\Zhat}(0)}{\mathswitch {M_\PZ}^2} \nonumber\\ &=& - \frac{\Sigma_{{\mathrm{T}},0}^{\hat W\What}(0)}{\mathswitch {M_\PW}^2} + \delta Z_{\hat W} + \frac{\delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PW}^2} + \frac{\Sigma_{{\mathrm{T}},0}^{\hat Z\Zhat}(0)}{\mathswitch {M_\PZ}^2} - \delta Z_{\hat Z\Zhat} - \frac{\delta \mathswitch {M_\PZ}^2}{\mathswitch {M_\PZ}^2} , \end{eqnarray} where we have used that in the BFM $\Sigma_{{\mathrm{T}},0}^{\hat A\hat Z}(0) = \Sigma_{{\mathrm{T}}}^{\hat A\hat Z}(0) = 0$ holds, which can be inferred from \refeq{eq:sega}. However, from \refeq{eq:delZB} we find \begin{equation} \delta Z_{\hat W} + \frac{\delta \mathswitch {M_\PW}^2}{\mathswitch {M_\PW}^2} - \delta Z_{\hat Z\Zhat} - \frac{\delta \mathswitch {M_\PZ}^2}{\mathswitch {M_\PZ}^2} = 0 , \end{equation} and therefore \begin{equation} \label{eq:Tren} \alpha T^{\mathrm{SM, BFM}} = \alpha T^{\mathrm{SM, BFM}}_0 = - \frac{\Sigma_{{\mathrm{T}}}^{\hat W\What}(0)}{\mathswitch {M_\PW}^2} + \frac{\Sigma_{{\mathrm{T}}}^{\hat Z\Zhat}(0)}{\mathswitch {M_\PZ}^2} . \end{equation} Since $T^{\mathrm{SM, BFM}}_0$ and $T^{\mathrm{SM, BFM}}$ are identical, the unrenormalized parameter $T^{\mathrm{SM, BFM}}_0$ is manifestly UV-finite.
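The UV-finiteness argument can be made concrete with a small symbolic check (a sketch of ours, using sympy, not taken from the paper): since the UV-divergent part of $B_0$ is mass- and momentum-independent, any linear combination of $B_0$ functions is UV-finite precisely when its coefficients sum to zero. For the combinations appearing in the $\xi_Q = 1$ shifts quoted above one finds:

```python
import sympy as sp

# The UV-divergent part of B0(p^2; m1, m2) is independent of p^2 and of the
# masses, so a combination of B0 functions is UV-finite exactly when its
# coefficients sum to zero.
sw2 = sp.symbols('sw2', positive=True)   # sin^2(theta_W)
cw2 = 1 - sw2                            # cos^2(theta_W)

# Coefficients of the B0 functions appearing in the xi_Q = 1 shifts of
# S, T, and U with respect to the conventional 't Hooft-Feynman gauge:
shift_S = [1, -1]                 # B0(0;MW,MW) - B0(MZ^2;MW,MW)
shift_T = [1, -sw2, -cw2]         # B0(0;MW,MW) - sw2 B0(0;0,MW) - cw2 B0(0;MW,MZ)
shift_U = [sw2, -cw2, cw2, -sw2]  # as in the U-parameter shift

for coeffs in (shift_S, shift_T, shift_U):
    assert sp.simplify(sum(coeffs)) == 0   # divergences cancel
print("divergences cancel in all three shifts")
```

The same counting applied to the full expressions confirms that $S_0$, $T_0$, and $U_0$ themselves are UV-finite for arbitrary $\xi_Q$.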
Similarly one derives \begin{equation} \alpha S^{\mathrm{SM, BFM}} = \alpha S^{\mathrm{SM, BFM}}_0 , \quad \alpha U^{\mathrm{SM, BFM}} = \alpha U^{\mathrm{SM, BFM}}_0 . \end{equation} For fermionic contributions, the combination of self-energies\ appearing in \refeq{eq:Tren} is just the one-loop correction to the $\rho$ parameter. While the bosonic contributions to this combination of self-energies\ are divergent in the conventional formalism of the SM, they are finite within the BFM. Recalling the discussion of the previous section, it should now be obvious that the definition of the $S$, $T$, and $U$ parameters based on the PT is distinguished neither through its UV-finiteness nor through its apparent gauge-parameter independence. This ambiguity reflects the fact that a parametrization of the SM bosonic contributions in terms of $S$, $T$, and $U$ cannot directly be compared to experimentally measured quantities. Moreover, there is a priori no reason why the $S$, $T$, and $U$ parameters defined within the PT should include the dominant part of the bosonic contributions to electroweak observables. In fact, comparing for the bosonic contributions the complete one-loop result for the $\rho$ parameter stated in \citere{Degrass} with the PT value of $\alpha T$~\cite{DKS}, one finds that the process-specific bosonic contributions that are not included in the PT definition of $\alpha T$ give by far the most important contribution. The bosonic PT result even has a sign different from the complete bosonic one-loop contribution to the $\rho$ parameter.\footnote{We have assumed an electron target and varied the Higgs mass between 50 and 1000 GeV.} Furthermore, from the analysis of LEP1 observables and muon decay carried out in \citere{Schild2} it can directly be seen that the (universal) bosonic corrections associated with the PT gauge-boson self-energies in general do {\it not} represent the dominant bosonic effects.
In summary, while well established for the treatment of new physics contributions entering solely via vacuum polarization effects, the framework of the $S$, $T$, and $U$ parameters appears not to be favorable for an incorporation of SM bosonic corrections or of new physics effects going beyond oblique corrections. As we have seen, their definition becomes ambiguous in these cases. In order to study the complete SM contributions, it seems to be more appropriate to directly inspect observables or (process-specific) effective parameters closely related to measurable quantities. For LEP1 physics such parametrizations were e.g.~proposed in \citere{Alt} and \citeres{Schild1,Schild2}. \section{Conclusion} \label{concl} Quantizing a gauge theory within the background-field method (BFM) yields a manifestly gauge-invariant effective action for the underlying model. The application of this method to the electroweak Standard Model has been reviewed and further investigated. We have derived consequences of the simple Ward identities that follow directly from gauge invariance of the effective action. In particular, we have discussed the impact of BFM gauge invariance on renormalization. Moreover, we have considered the generalization of the BFM to the non-linear realization of the scalar sector of the Standard Model. The interplay between gauge-parameter independence of the S-matrix and Ward identities relating vertex functions has been further explored. We have shown that any formalism that decouples the gauge-parameter dependence of the vertex functions from the one of the tree lines leads to symmetry constraints for the corresponding ``vertex functions''. These quantities are, however, not uniquely determined by this requirement, but it is possible to shift parts between ``vertex functions'' that by themselves obey the constraints. 
This fact signals the ambiguity which within the BFM is naturally made transparent by the dependence of the vertex functions on the quantum gauge parameter. Although approaches based on a redistribution of parts between different Green functions may yield ``vertex functions'' that coincide with the corresponding quantities in the BFM, from a conceptual point of view these methods differ considerably. In addition to being technically rather complicated, approaches like the pinch technique suffer from severe theoretical shortcomings. In particular, the field-theoretical meaning of objects constructed by redistributions is not clear. In contrast, the BFM vertex functions have a well-defined field-theoretical interpretation and are derived from an effective action in all orders of perturbation theory. The application of a gauge-parameter elimination procedure within the BFM degenerates to a trivial selection of a particular value for the quantum gauge parameter and thus to a mere convention. Since off-shell quantities such as Green functions are not directly related to observables, they cannot be fixed on physical grounds. Therefore, any prescription that fixes these quantities can only be a more or less convenient definition but cannot be unique. We have illustrated this fact by calculating and discussing the (gauge-dependent) standard contributions to the $S$, $T$, and $U$ parameters. \vspace{1em} \pagebreak \noindent {\bf Note added} We would like to comment on remarks made in \citere{papanew} concerning the connection between background-field method and pinch technique. There, \citere{BFMvPT} was cited in the context of the ``erroneous impression'' and the ``naive expectation'' that ``Green's functions calculated within the background-field method should be completely gauge-invariant, and identical to the corresponding pinch-technique Green's functions''. 
Furthermore, with respect to the gauge-pa\-ra\-me\-ter dependence of the background-field vertex functions, it was stated in \citere{papanew} that ``there'' (\citere{BFMvPT}) ``was an attempt to assign a physical significance to this dependence''. {\it None} of these statements has been made in \citere{BFMvPT}, where all statements and conclusions are based on facts but not on the (irrelevant) ``initial expectations'' mentioned in \citere{papanew}. Note that one of our conclusions was that the gauge-parameter dependence in the BFM signals the fact that it is {\em not} possible to assign a physical significance to off-shell Green functions. \vspace{-0.2em} \section{Acknowledgements} \vspace{-.5em} S.D.\ would like to thank Carsten Grosse-Knetter for many helpful discussions on the non-linear representations of scalar fields and for a fruitful collaboration. \vspace{-0.2em} \section{References} \vspace{-1.5em}
\section{Introduction} \indent Over the past two decades, the transport properties of unconventional superconducting junctions have been studied both theoretically and experimentally. In these junctions, the zero-energy resonance state (ZES) plays an important role. \cite{ZES2,ZES3} It is well known that a zero-bias conductance peak \cite{ZBCP4,ZBCP5,ZBCP6,ZBCP7,ZBCP8,ZBCP9,ZBCP10,ZBCP11} appears in the tunneling spectroscopy of high-$T_C$ superconductors. \par On the other hand, the Josephson current in a superconductor / insulator / superconductor junction is one of the characteristic phenomena of superconductivity. Anomalous behavior is observed in high-$T_C$ superconducting junctions: the critical Josephson current is enhanced at low temperature when the lattice orientation is $\alpha =\pi /4$ (110-junction), as shown in Fig.~1. This is caused by the ZES formed at the interface. From previous papers, a general formula for the dc Josephson current, the Furusaki-Tsukada formulation, is known, which includes both the macroscopic phase and the ZES. This theory is based on a microscopic calculation of the current represented in terms of the coefficients of Andreev reflection. \cite{Andreev1,Andreev2,Andreev3} \par In this paper, we calculate and discuss the dc Josephson current for the 110-junction in a $d$-wave superconductor / insulator / $d$-wave superconductor geometry, taking into account the existence of an $is$-wave state \cite{d+is} and an imaginary part of the $d$-wave state. For this purpose, we calculate the spatial dependence of the pair potential self-consistently. \par \begin{figure}[hob] \begin{center} \includegraphics[width=8.0cm,clip]{fig1.eps} \end{center} \vskip -4mm \caption{A schematic of the 110-junction, showing the wave functions of the $d$-wave superconductors. The crystal orientations of the left and right superconductors are both chosen as $\alpha =\pi /4$.
\label{fig:1} } \end{figure} \section{Formulation} \indent In order to calculate the Josephson current, we start from the well-known Green's function expression for the current: \begin{eqnarray} I=\frac{e\hbar}{2im}\left( \frac{\partial }{\partial x} -\frac{\partial }{\partial x'} \right ) \mbox{Tr} G_{\omega_m}(x,x'). \end{eqnarray} In this paper we employ the quasi-classical method. First, the Nambu-Gor'kov Green's function is written as \begin{eqnarray} \label{Green} G(x,x')=G_{++}(x,x')e^{ik_F (x-x')} +G_{--}(x,x')e^{-ik_F (x-x')} \nonumber \\ +G_{+-}(x,x')e^{ik_F (x+x')} +G_{-+}(x,x')e^{-ik_F (x+x')}. \end{eqnarray} Inserting this decomposition into the derivative, we obtain \begin{eqnarray} I=\frac{e\hbar k_F}{m}\mbox{Tr} \left( G_{++}(x,x')e^{ik_F (x-x')}\right. \nonumber \\ \left. -G_{--}(x,x')e^{-ik_F (x-x')} \right) +O(1). \end{eqnarray} The derivative of $G_{\alpha\beta}(x,x')$ is of order $1$ ($\ll k_F$) and is therefore neglected. The indices $\alpha$, $\beta$ take the values $\pm$. The terms containing $G_{\pm \mp}(x,x')$ do not contribute to the derivative. \par We define the quasi-classical Green's function \cite{Green} \begin{eqnarray} \hat{g}_{\alpha}=f_{1\alpha}\hat{\tau}_1 +f_{2\alpha}\hat{\tau}_2 +g_{\alpha}\hat{\tau}_3 \mbox{ , } (\hat{g}_{\alpha})^2 =\hat{1} \end{eqnarray} Here $\hat{\tau}_j$ ($j=1,2,3$) are the Pauli matrices and $\hat{1}$ is the unit matrix. The quantities $f_{1\alpha},f_{2\alpha},g_{\alpha}$ obey the following relations: \begin{eqnarray} f_{1\alpha}&=&\alpha \left[ i F_{\alpha}(x)+D_{\alpha}(x) \right] /\left[ 1-D_{\alpha}(x)F_{\alpha}(x) \right] ,\\ f_{2\alpha}&=& -\left[ F_{\alpha}(x)-D_{\alpha}(x) \right] /\left[ 1-D_{\alpha}(x)F_{\alpha}(x) \right] ,\\ g_{\alpha}&=& \alpha \left[1+ F_{\alpha}(x)D_{\alpha}(x) \right] /\left[ 1-D_{\alpha}(x)F_{\alpha}(x) \right] .
\end{eqnarray} The quantities $D_{\alpha}(x)$ and $F_{\alpha}(x)$ in these quasi-classical Green's functions obey the Riccati equations \cite{Ricatti} \begin{eqnarray} \label{D} \hbar |v_F |\partial_x D_{\alpha}(x) =\alpha \left[ 2\omega_m D_{\alpha}(x) +\Delta (x,\theta )D_{\alpha}^{2}(x) \right. \nonumber \\ \left. -\Delta^{*}(x,\theta ) \right], \\ \hbar |v_F |\partial_x F_{\alpha}(x) =\alpha \left[ -2\omega_m F_{\alpha}(x) +\Delta^{*} (x,\theta )F_{\alpha}^{2}(x) \right. \nonumber \\ \left. -\Delta(x,\theta ) \right]. \end{eqnarray} The quantity $\theta$ is the angle between the trajectory of a quasi-particle passing through the interface and the $x$ direction, where the $x$-axis is perpendicular to the interface. The boundary conditions at the interface are given by \begin{eqnarray} \label{boundary11} F_{+L}=\frac{D_{-R}-RD_{+R}-(1-R)D_{-R}} {D_{-L}(RD_{-R}-D_{+R})+(1-R)D_{+R}D_{-R}}, \\ \label{boundary12} F_{-L}=\frac{RD_{-R}-D_{+R}+(1-R)D_{+L}} {D_{+L}(D_{-R}-RD_{+R})-(1-R)D_{+R}D_{-R}}, \\ \label{boundary13} F_{+R}=\frac{RD_{+L}-D_{-L}+(1-R)D_{-R}} {D_{-R}(D_{+L}-RD_{-L})-(1-R)D_{+L}D_{-L}}, \\ \label{boundary14} F_{-R}=\frac{D_{+L}-RD_{-L}-(1-R)D_{+R}} {D_{+R}(RD_{+L}-D_{-L})+(1-R)D_{+L}D_{-L}}, \end{eqnarray} where the argument $(x=0)$ is omitted. The quantity $R$ is $R=Z^2 /(4+Z^2 )$ with $Z=2mH/\hbar ^2 k_F$, where $H$ is the height of the barrier potential; i.e., we treat the insulator as a $\delta$-function barrier potential. The boundary condition for $D_{\alpha}(x)$ at $x=\pm \infty$ is \begin{eqnarray} \label{boundary2} D_{\alpha}(\pm \infty) =\frac{\Delta^{*}(\pm \infty, \theta)} {\omega_{m} +\alpha \Omega_{\alpha}}, \end{eqnarray} with $\Omega_{\alpha} =\sqrt{\omega_m^2 +|\Delta (\pm \infty ,\theta )|^2}$. \par With these relations, the Josephson current can be written as \begin{eqnarray} \label{Josephson} I(\theta )=\frac{2e\hbar k_F}{m} i\left( \left[ g_{+}(x,\theta ) \right] -\left[ g_{-}(x,\theta ) \right] \right).
\end{eqnarray} The Josephson current in this formula is obtained in the limit $x\rightarrow 0$. \par The spatially dependent pair potential is calculated as \begin{eqnarray} \label{pair} \Delta(x,\theta )&=&\frac{2T} {\ln (T/T_C) +\sum_{0\leq m}\frac{1}{m+1/2}} \nonumber \\ &\times &\sum_{0\leq m} \int_{-\pi /2}^{\pi /2} d\theta ' V(\theta ,\theta ' ) f_{2+} \end{eqnarray} where, for the $d$-wave component, $V(\theta ,\theta ' )=2\sin 2\theta \sin 2\theta '$ for the 110-junction and $V(\theta ,\theta ' )=2\cos 2\theta \cos 2\theta '$ for the 100-junction, while $V(\theta ,\theta ' )=1$ for the $s$-wave component in both the 110- and 100-junction cases. With this equation, the spatial dependence of the pair potential is calculated self-consistently (SCF). \par The Josephson current $I$ is obtained by numerically solving Eqs.~(\ref{D}), (\ref{Josephson}), and (\ref{pair}) under the boundary conditions, Eqs.~(\ref{boundary11})--(\ref{boundary14}) and (\ref{boundary2}). \par The calculated Josephson current is normalized by the normal conductance $\sigma_N$: \begin{eqnarray} I(\eta )=\int_{-\pi /2}^{\pi /2}I(\theta )\cos\theta d\theta /\sigma_N, \end{eqnarray} \begin{eqnarray} \sigma_N =\int_{-\pi /2}^{\pi /2} \frac{4\cos^2 \theta}{4\cos^2 \theta+Z^2} \cos \theta d\theta . \end{eqnarray} Here, we define $\eta =\eta_L -\eta_R$, where $\eta_L$ and $\eta_R$ are the macroscopic phases of the left and right superconductors. In all calculations, we choose the temperature $T=0.05T_C$, where $T_C$ is the transition temperature of the superconductor, and the cutoff frequency $\omega_D$ in the Matsubara sum over $m$ is set to $\omega_D /2\pi T_C =1$. \section{Results} In this section, we show the calculated dependence of the pair potential and the Josephson current on the superconducting macroscopic phase difference $\eta$.
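Before presenting the results, two elementary cross-checks of the formulation of the previous section can be sketched numerically (a Python illustration of ours, with arbitrary parameter values, not part of the original calculation): deep in the bulk the Riccati equation becomes stationary and the boundary value of Eq.~(\ref{boundary2}) must solve it, and for a transparent interface ($Z=0$) the normal conductance $\sigma_N$ reduces to $\int_{-\pi /2}^{\pi /2}\cos \theta \,d\theta =2$.

```python
import numpy as np

# Bulk limit of the Riccati equation (alpha = +1): with a constant pair
# potential the right-hand side must vanish,
#     2*w_m*D + Delta*D**2 - conj(Delta) = 0,
# and the physical root is D = conj(Delta)/(w_m + Omega),
# with Omega = sqrt(w_m**2 + |Delta|**2).
def bulk_D(w_m, Delta):
    Omega = np.sqrt(w_m**2 + abs(Delta)**2)
    return np.conj(Delta) / (w_m + Omega)

w_m = 0.3                           # a Matsubara frequency (units of the gap)
Delta = np.exp(0.25j * np.pi)       # gap carrying a macroscopic phase (arbitrary)
D = bulk_D(w_m, Delta)
residual = 2 * w_m * D + Delta * D**2 - np.conj(Delta)
print(abs(residual))                # vanishes up to rounding

# Normal conductance used to normalize the Josephson current.
def sigma_N(Z, n=100_000):
    th = (np.arange(n) + 0.5) / n * np.pi - np.pi / 2  # midpoint grid
    c2 = np.cos(th)**2
    return (np.pi / n) * np.sum(4 * c2 / (4 * c2 + Z**2) * np.cos(th))

print(sigma_N(0.0))    # transparent interface: the analytic value is 2
print(sigma_N(10.0))   # high barrier: strongly suppressed
```

The first check confirms that the bulk boundary condition is consistent with the stationary Riccati equation for a complex pair potential; the second fixes the normalization used in the figures below.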
In all cases we choose $T_{C_s}=0.2T_{C_d}$, where $T_{C_s}$ and $T_{C_d}$ ($=T_C$) are the transition temperatures of the $s$-wave and $d$-wave components, respectively. \par \begin{figure}[hob] \begin{center} \includegraphics[width=5cm,clip]{fig2.eps} \end{center} \vskip -4mm \caption{The $x$-dependence of the pair potential on the right and left sides of the junction at $\eta =0$. Panels a, b, and c correspond to $Z=0$, $5$, and $10$, respectively. $\xi$ is the coherence length of the superconductor. \label{fig:2} } \end{figure} \begin{figure}[hob] \begin{center} \includegraphics[width=5cm,clip]{fig3.eps} \end{center} \vskip -4mm \caption{The $x$-dependence of the pair potential on the right and left sides of the junction at $\eta =\pi /2$. Panels a, b, and c correspond to $Z=0$, $5$, and $10$, respectively. $\xi$ is the coherence length of the superconductor. \label{fig:3} } \end{figure} \begin{figure}[hob] \begin{center} \includegraphics[width=4.8cm,clip]{fig4.eps} \end{center} \vskip -4mm \caption{The $x$-dependence of the pair potential on the right and left sides of the junction at $\eta =\pi$. Panels a, b, and c correspond to $Z=0$, $5$, and $10$, respectively. $\xi$ is the coherence length of the superconductor. \label{fig:4} } \end{figure} First, we show the $\eta$-dependence of the pair potential for the 110-junction. In Figs.~2, 3, and 4, the solid and dotted lines denote the real and imaginary parts, respectively. The $x$ axis is normalized by the coherence length $\xi $. For $\eta =0$, no $Z$-dependence appears in the pair potential, as shown in Fig.~2. Even for $Z=0$, the pair potential is suppressed near the interface and varies spatially. In this case, an $is$ state exists near the interface; this state does not appear in the 100-junction case. When a quasi-particle passes through the interface, it feels the opposite sign of the pair potential, so the suppression occurs even for $Z=0$. \par Next, we consider the $\eta =\pi /2$ case. For $Z=0$, the $s$ and $is$ states do not exist.
Since the pair potential on the right side carries the superconducting macroscopic phase, the right-side $d$-wave pair potential in Fig.~3 is purely imaginary. As in the $\eta =0$ case, a quasi-particle crossing the interface feels a different sign of the pair potential, so the real part of the $d$-wave on the right side (which, since it carries the macroscopic phase, appears as an imaginary number) connects to the imaginary $d$-wave on the left side. \par For $Z=5$, the pair potential behaves as in Fig.~3b. The barrier potential suppresses both the real and imaginary parts of the pair potential on both sides of the interface. Within a coherence length of the interface, the imaginary part of the $d$-wave is enhanced on both sides of the junction. \par For $Z=10$, $s$ and $is$ states appear on both sides of the junction near the interface. The appearance of the $is$-wave has the same origin as in the usual discussion of the $is$-wave state at the edge of a $d$-wave superconductor with $\alpha =\pi /4$ (110-orientation). \par Next, we consider the $\eta =\pi$ case. The phase factor is $\exp (i\eta)=-1$, so the 110-junction behaves like the 100-junction ($\alpha =0$). Therefore, the real part of the $d$-wave has no spatial dependence for $Z=0$, i.e., the pair potential is constant everywhere. The $s$- and $is$-waves do not appear, and neither does the $id$-wave. \par For $Z=5$, the barrier potential causes quasi-particle reflection at the interface, and the $d$-wave components are reduced. The $Z=10$ case is similar; the difference at $Z=10$ is the appearance of the $is$ state at the interface. \par Finally, we show the normalized dc Josephson current for the 110- and 100-junctions.
For the 110-junction, the Josephson current is suppressed in the region $\eta =\pi /2 \sim \pi$ for $Z=10$, and it is suppressed over the whole range of $\eta$ for $Z=5$. For $Z=15$, the Josephson current behaves as $\sin \eta$. For the 100-junction, by contrast, the Josephson current is not suppressed, which is consistent with the non-self-consistent (non-SCF) calculation. \par Comparing the two orientations, the magnitude of the Josephson current is larger for the 100-junction than for the 110-junction. This is not consistent with non-SCF calculations, and it is a new result of this work. \section{Summary} In this section, we summarize the results obtained above. We have seen that an imaginary part of the pair potential exists for $\eta \neq 0$, $\pi$ in the 110-junction; it arises from the macroscopic phases of the superconductors. This situation differs from that at the surface of a superconductor or at a junction between a normal metal and a superconductor. Since both $is$- and $id$-wave states exist near the interface, the Josephson current is reduced in the 110-junction. This reduction is caused not only by the $is$-wave state but also by the imaginary part of the $d$-wave pair potential. A similar reduction occurs in an $s$-wave superconductor / $p$-wave superconductor / $s$-wave superconductor junction, where it is caused by the presence of a pair potential of another symmetry at the junction. \cite{Yamashiro} In the present case, the $id$-wave component plays the role of the different symmetry relative to the $d$-wave component. On the other hand, the Josephson current is not reduced at the 100-interface. These results are unusual: they appear only in the SCF calculation and are absent in the non-SCF calculation. The results for the 110-junction are consistent with Ref. 10, which treats the high-barrier limit. \par Finally, we note that the Josephson current vanishes at $Z=0$ for both the 110-junction and the 100-junction.
From a physical point of view, a Josephson current is expected only when an insulating barrier or some other layer (a normal metal or a superconductor of a different type) is present at the interface; in this sense the results are physically reasonable. \par In this paper, we have discussed the dc Josephson current for the 110-junction. The pair potential acquires an imaginary part for $\eta \neq 0$, and the Josephson current is suppressed; this result appears only in the self-consistent (SCF) calculation of the pair potential. \begin{figure}[hob] \begin{center} \includegraphics[width=4.5cm,clip]{fig5.eps} \end{center} \vskip -4mm \caption{Josephson current at the 110-junction (a) and the 100-junction (b) for $Z=5$, $10$, and $15$. \label{fig:5} } \end{figure} \acknowledgements I gratefully acknowledge useful comments from Y. Tanaka and N. Hayashi. I would like to thank S. Kaya for providing a computational tool.
\section{Introduction}\label{sec:Intro} The Eisenbud-Green-Harris (EGH) conjecture is a famous open problem in commutative algebra and algebraic geometry. If true, it would imply the generalized Cayley-Bacharach conjecture~\cite{EisenbudGreenHarris1993:EGHConjecture}, generalize the classic Clements-Lindstr\"{o}m theorem~\cite{ClementsLindstrom1969} in combinatorics, as well as extend Macaulay's characterization~\cite{Macaulay1927} of the Hilbert functions of homogeneous ideals in a polynomial ring $S:= k[x_1, \dots, x_n]$ to the case of homogeneous ideals containing a given $S$-regular sequence. There are several equivalent formulations and slight variations of the EGH conjecture in the literature; see \cite{FranciscoRichert2007:LPPIdeals} for a good overview. Given a proper ideal $I$ of a Noetherian ring $R$, we say $I$ {\it minimally contains an $(a_1, \dots, a_r)$-regular sequence of forms} (i.e. homogeneous polynomials) if $I$ has depth $r$, the minimum degree of the forms in $I$ is $a_1$, and for each $2\leq i \leq r$, the integer $a_i$ is the smallest degree such that $I$ contains an $R$-regular sequence $f_1, \dots, f_i$ of forms of degrees $a_1, \dots, a_i$ respectively. In this paper, we work with the following version of the conjecture. \EGH Let $2\leq e_1 \leq \dots \leq e_n$ be integers. If $I\subsetneq S$ is a homogeneous ideal that minimally contains an $(e_1, \dots, e_n)$-regular sequence of forms, then there exists a homogeneous ideal $J\subsetneq S$ containing $x_1^{e_1}, \dots, x_n^{e_n}$, such that $I$ and $J$ have the same Hilbert function. The EGH conjecture is known to be true in some cases. 
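Before surveying what is known, the content of the conjecture can be checked by hand in a tiny instance. The following sketch (illustrative only; the specific ideal is our own toy choice) computes Hilbert functions by linear algebra over $\mathbb{Q}$ for $I=\langle x^2+y^2,\,xy\rangle \subset k[x,y]$, which minimally contains a $(2,2)$-regular sequence, and confirms that the monomial ideal $J=\langle x^2,y^2\rangle$ containing the pure powers $x^2, y^2$ has the same Hilbert function.

```python
from fractions import Fraction

def monomials(deg):
    """Exponent pairs (a, b) with a + b = deg, for S = k[x, y]."""
    return [(a, deg - a) for a in range(deg + 1)]

def rank(rows):
    """Rank over Q by Gaussian elimination (rows: lists of Fractions)."""
    rows = [r[:] for r in rows]
    rk, col, ncols = 0, 0, len(rows[0]) if rows else 0
    while rk < len(rows) and col < ncols:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            f = rows[i][col] / rows[rk][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk, col = rk + 1, col + 1
    return rk

def hilbert_function(gens, max_deg):
    """H(S/I, d) for d <= max_deg; gens are homogeneous polynomials in
    k[x, y], encoded as dicts {exponent pair: coefficient}."""
    hf = []
    for d in range(max_deg + 1):
        basis = monomials(d)
        idx = {m: i for i, m in enumerate(basis)}
        rows = []
        for g in gens:
            gdeg = sum(next(iter(g)))      # all terms share this degree
            if gdeg > d:
                continue
            for (ma, mb) in monomials(d - gdeg):   # multiply g by a monomial
                row = [Fraction(0)] * len(basis)
                for (ga, gb), c in g.items():
                    row[idx[(ga + ma, gb + mb)]] = Fraction(c)
                rows.append(row)
        hf.append(len(basis) - (rank(rows) if rows else 0))
    return hf

I = [{(2, 0): 1, (0, 2): 1},   # x^2 + y^2
     {(1, 1): 1}]              # xy
J = [{(2, 0): 1},              # x^2
     {(0, 2): 1}]              # y^2
print(hilbert_function(I, 4))  # [1, 2, 1, 0, 0]
print(hilbert_function(J, 4))  # [1, 2, 1, 0, 0]
```

Here $x^2+y^2, xy$ is itself a regular sequence (the quotient has dimension $4 = 2\cdot 2$), so this only probes the simplest case of the conjecture, but the same degreewise linear algebra applies to any homogeneous ideal.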
Richert~\cite{Richert2004:StudyLPPConjecture} settled the case $n=2$, Francisco~\cite{Francisco2004:AlmostCIandLPPConj} proved the case when $I$ is an almost complete intersection ideal, Caviglia and Maclagan~\cite{CavigliaMaclagan2008:EGHConj} showed that the conjecture is true when $e_{j+1} > \sum_{i=1}^j (e_i-1)$ for all $1\leq j<n$, and Abedelfatah~\cite{Abedelfatah2012:EGHConjPreprint} proved the conjecture when every $f_i$ is a product of linear forms. In the special case when $e_i = 2$ for all $1\leq i\leq n$, Richert~\cite{Richert2004:StudyLPPConjecture} claimed in unpublished work that the conjecture holds for $n\leq 5$, Chen~\cite{Chen2011:thesis} gave a proof for $n\leq 4$, Herzog and Popescu~\cite{HerzogPopescu1998:HilbertFunctionsGenericForms} proved that the conjecture holds if $k$ is a field of characteristic zero and $I$ is minimally generated by generic quadratic forms, while Gasharov~\cite{Gasharov1999:HilbertFunctionsHomogGenericForms} showed that Herzog-Popescu's result is still true when $k$ is replaced by a field of arbitrary characteristic. As for the case $n=3$, Cooper~\cite{Cooper:EGHPreprintCase3} proved the conjecture when $e_1 \leq 3$ by considering the remaining cases not covered by Caviglia-Maclagan's result. In contrast to these known results, we use liaison theory as our main tool. Licci ideals are homogeneous ideals in the liaison class of a complete intersection ideal, and projective schemes defined by licci ideals are fundamental objects studied in liaison theory. In this paper, we introduce a new subclass of licci ideals, which we call `sequentially bounded', and we prove that the EGH conjecture holds for every sequentially bounded licci ideal that `admits a minimal first link' (Theorem \ref{thm:SeqBoundedLicci}); see Section \ref{sec:SeqBoundedLicci} for a precise definition of such licci ideals. 
As an important consequence, we show that the EGH conjecture holds for Gorenstein ideals in the case of three variables: \thm\label{mainThm-Gorenstein} Let $2\leq e_1 \leq e_2 \leq e_3$ be integers. If $I\subsetneq k[x_1,x_2,x_3]$ is a homogeneous Gorenstein ideal that minimally contains an $(e_1, e_2, e_3)$-regular sequence of forms, then there exists a monomial ideal $J$ in $k[x_1,x_2,x_3]$ containing $x_1^{e_1}, x_2^{e_2}, x_3^{e_3}$, such that $I$ and $J$ have the same Hilbert function. Our proof of Theorem \ref{mainThm-Gorenstein} uses Migliore-Nagel's recent result~\cite{MiglioreNagel2010:MinimalLinksResultGaeta} in liaison theory, which in turn relies on the Buchsbaum-Eisenbud structure theorem~\cite{BuchsbaumEisenbud1977} on Gorenstein ideals of height three. We hope that further advances in liaison theory would give more insight into proving the EGH conjecture. \section{Liaison Theory}\label{sec:LiaisonTheory} Let $\mathbb{N}$ and $\mathbb{P}$ denote the non-negative and positive integers respectively. Define the set $[n] := \{1, \dots, n\}$ for each $n\in \mathbb{P}$, and let $[0] := \emptyset$. Given ${\bf a} = (a_1, \dots, a_n)$, ${\bf b} = (b_1, \dots, b_n)$ in $\mathbb{Z}^n$, let $|{\bf a}| := a_1 + \dots + a_n$, and write ${\bf a} \leq {\bf b}$ if $a_i \leq b_i$ for all $i\in [n]$. For brevity, let ${\bf 1}_n$ denote the $n$-tuple $(1, \dots, 1)$. Throughout this paper, $S:= k[x_1, \dots, x_n]$ is a standard $\mathbb{N}$-graded polynomial ring on $n$ variables over an infinite field $k$. All ideals, whether contained in $S$ or otherwise, are assumed to be homogeneous and proper, and for any minimal set of generators of a given ideal, we always assume the generators are homogeneous. Given $\mathcal{A} = \{p_1, \dots, p_r\}$ a collection of forms in $S$, write $\langle \mathcal{A} \rangle$ or $\langle p_1, \dots, p_r\rangle$ to mean the ideal of $S$ generated by $\mathcal{A}$. 
A {\it multicomplex} $M$ is a collection of monomials that is closed under divisibility (i.e. if $m\in M$ and $m^{\prime}$ divides $m$, then $m^{\prime} \in M$). For $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$, write ${\bf x}^{\boldsymbol{\alpha}}$ to mean the monomial ${\bf x}^{\boldsymbol{\alpha}} := x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. For any ring $R$ and any ideal $I\subset R$, let $\depth(I)$ denote the depth of $I$. We say $I$ is {\it perfect} if $\depth(I)$ equals the projective dimension of $R/I$. A {\it complete intersection ideal} (CI ideal) of $R$ is an ideal generated by an $R$-regular sequence of forms, and an {\it almost complete intersection ideal} of $R$ is a perfect ideal $I$ that is minimally generated by $\depth(I)+1$ elements. If $J = \langle f_1, \dots, f_r\rangle$ is a CI ideal such that $\deg(f_1) \leq \dots \leq \deg(f_r)$, then ${\bf e} := (\deg(f_1), \dots, \deg(f_r))\in \mathbb{P}^r$ is called the {\it type} of $J$ (as a CI ideal). Given a finitely generated graded $S$-module $R = \bigoplus_{d\in \mathbb{N}} R_d$, let $H(R,-)$ denote its {\it Hilbert function}, i.e. $H(R,d) := \dim_{k} R_d$ for each $d\in \mathbb{N}$, and let $\soc(R)$ denote the socle of $R$. We say $R$ is {\it Gorenstein} if it is a Cohen-Macaulay module of Krull dimension $r$, and $\dim_{k} \soc(R/\langle f_1, \dots, f_r\rangle R) = 1$ for some $R$-regular sequence $f_1, \dots, f_r$. Note that complete intersection ideals are Gorenstein. \defn Let $R$ be a Cohen-Macaulay ring, and let $I, I^{\prime} \subset R$ be ideals of height $r$. If there exists a CI ideal $J\subset R$ of height $r$ satisfying $J\subseteq I \cap I^{\prime}$, $I = J:I^{\prime}$ and $I^{\prime} = J:I$, then we say $I$ and $I^{\prime}$ are {\it (algebraically) directly linked} (by $J$), and we write $I \overset{J}{\sim} I^{\prime}$, or simply $I \sim I^{\prime}$ (if the ideal $J$ is not important).
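A small monomial example (our own, for illustration) makes the definition concrete: in $S=k[x,y]$ take the CI ideal $J=\langle x^2,y^2\rangle$ and $I=\langle x^2,y^2,xy\rangle$. Since colon ideals of monomial ideals can be checked degree by degree via divisibility, the sketch below verifies $J:I=\langle x,y\rangle$ and $J:(J:I)=I$, exhibiting the direct link $I \overset{J}{\sim} \langle x,y\rangle$ of height-two ideals.

```python
def divides(g, m):
    """Does the monomial g (an exponent tuple) divide the monomial m?"""
    return all(gi <= mi for gi, mi in zip(g, m))

def in_ideal(gens, m):
    """Monomial-ideal membership: some generator divides m."""
    return any(divides(g, m) for g in gens)

def monomials_up_to(deg):
    return [(a, b) for a in range(deg + 1) for b in range(deg + 1 - a)]

def colon(J, I, deg=6):
    """All monomials m of degree <= deg with m*g in J for every generator
    g of I; for monomial ideals this determines J : I degreewise."""
    return [m for m in monomials_up_to(deg)
            if all(in_ideal(J, (m[0] + g[0], m[1] + g[1])) for g in I)]

def same_ideal(A, B, deg=6):
    """Compare two monomial generating sets by membership up to degree deg
    (enough here, since every ideal involved contains all monomials of
    degree >= 3)."""
    return all(in_ideal(A, m) == in_ideal(B, m) for m in monomials_up_to(deg))

J = [(2, 0), (0, 2)]            # x^2, y^2  (a CI ideal of type (2, 2))
I = [(2, 0), (0, 2), (1, 1)]    # x^2, y^2, xy

link = colon(J, I)              # J : I, listed as monomials up to degree 6
print(same_ideal(link, [(1, 0), (0, 1)]))   # True:  J : I = <x, y>
print(same_ideal(colon(J, link), I))        # True:  J : (J : I) = I
```

Together with the evident inclusion $J \subseteq I \cap \langle x,y\rangle$, this is exactly the data required by the definition of direct linkage.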
This binary relation $\sim$ is called {\it direct linkage}, and the ideal $J$ is called a {\it link}. Direct linkage is symmetric, but not necessarily reflexive or transitive. Taking its transitive closure, we get an equivalence relation called {\it liaison} (or {\it linkage}). Equivalently, we have the following definition: \defn Let $R$ be a Cohen-Macaulay ring, and let $I, I^{\prime}\subset R$ be ideals of height $r$. If there exists a finite sequence of ideals $I_0, I_1, \dots, I_s$ ($s\in \mathbb{P}$) of height $r$, such that $I_0 = I$, $I_s = I^{\prime}$, and $I_0 \sim I_1 \sim \dots \sim I_s$, then we call ``$I_0 \sim I_1 \sim \dots \sim I_s$'' a {\it sequence of links} from $I$ to $I^{\prime}$, we say $s$ is the {\it length} of this sequence, and we say $I$ and $I^{\prime}$ are {\it (algebraically) linked}. This binary relation (that $I$ and $I^{\prime}$ are linked) is an equivalence relation called {\it liaison} (or {\it linkage}), and each equivalence class is called a {\it liaison class}. An ideal that is in the liaison class\footnote{Any two CI ideals of the same height are linked, so for a fixed height, the liaison class of a CI ideal is unique; see, e.g. \cite{Schwartau1982:thesis}, for a proof.} of a CI ideal is called {\it licci}. In our definition of liaison, we require that the links are CI ideals. This can be generalized by allowing links to be Gorenstein ideals, which yields the notion of {\it Gorenstein liaison}. The prefixes `CI-' and `G-' (for complete intersection and Gorenstein respectively) are usually attached to distinguish between the two definitions, e.g. CI-liaison, G-linked, etc.
Currently, Gorenstein liaison theory is an area of active research \cite{Gorla2008:GeneralizedGaetaThm,Hartshorne2007:GeneralizedDivisorsAndBiliaison,HartshorneSabadiniSchlesinger2008,KleppeMiglioreMiro-RoigNagelPeterson2001,MiglioreNagel2002:MonomialIdealsGLiaisonClassOfCI,MiglioreNagel2010:MinimalLinksResultGaeta}, and there are many open problems on whether results in CI-liaison theory can be extended analogously in G-liaison theory; see \cite{KleppeMiglioreMiro-RoigNagelPeterson2001}. In this paper, we only use CI-liaison theory and hence do not use the `CI-' prefix. For a good introduction to liaison theory, see \cite{book:MiglioreIntroLiaisonTheory}. \remark The notion of liaison is (more commonly) defined for projective schemes as follows: Let $V_1, V_2$ be equidimensional subschemes of $\mathbb{P}^n_{k}$ of codimension $r$, and let $X$ be a complete intersection scheme of codimension $r$ containing both $V_1$ and $V_2$. If the defining ideals $I_{V_1}, I_{V_2}, I_X$ (of $V_1, V_2, X$ respectively) satisfy $I_X \subseteq I_{V_1} \cap I_{V_2}$, $I_X: I_{V_1} = I_{V_2}$ and $I_X: I_{V_2} = I_{V_1}$, then we say $V_1$ and $V_2$ are {\it (algebraically) directly linked}, and we write $V_1 \overset{X}{\sim} V_2$, or simply $V_1 \sim V_2$. The equivalence relation generated by (the transitive closure of) $\sim$ is called {\it liaison}, and we can define {\it link}, {\it liaison class}, etc. analogously. \defn Let $R$ be a Cohen-Macaulay ring, and let $I, I^{\prime}, I^{\prime\prime} \subset R$ be ideals of height $r$. If $I$ minimally contains an $(a_1, \dots, a_r)$-regular sequence of forms, and $I$ and $I^{\prime}$ are directly linked by a CI ideal $J$ of type $(a_1, \dots, a_r)$, then we say $J$ is a {\it minimal} link. If $I$ and $I^{\prime\prime}$ are linked, and there exists a sequence of links from $I$ to $I^{\prime\prime}$ such that each link is minimal, then we say $I$ is {\it minimally linked to} $I^{\prime\prime}$. 
An ideal that is minimally linked to a CI ideal is called {\it minimally licci}. Given linked ideals $I$ and $I^{\prime}$ of $S$, there are many possible sequences of links from $I$ to $I^{\prime}$ of varying lengths, and much work has been done on understanding licci ideals and their corresponding sequences of links. Gaeta~\cite{Gaeta1948} showed that every Cohen-Macaulay ideal of height two is minimally licci, while Peskine and Szpiro~\cite{PeskineSzpiro:Liaison} proved that an ideal of height two is licci if and only if it is Cohen-Macaulay. However, these results do not extend to ideals of height $\geq 3$. Not every Cohen-Macaulay ideal of height three is licci~\cite{HunekeUlrich1987:StructureOfLinkage}, and for every $r\geq 3$, there are licci ideals of height $r$ that are not minimally licci~\cite{HunekeMiglioreNagelUlrich2007}. Nevertheless, Watanabe~\cite{Watanabe1973:NoteOnGorenSteinRings} showed that every Gorenstein ideal of height three is licci, while Migliore and Nagel~\cite{MiglioreNagel2010:MinimalLinksResultGaeta} recently proved that every Gorenstein ideal of height three is minimally licci. \section{Sequentially Bounded Licci Ideals}\label{sec:SeqBoundedLicci} In this section, we weaken the notion of `minimally licci ideals' and define a subclass of licci ideals that we call `sequentially bounded'. In particular, minimally licci ideals are sequentially bounded licci ideals. As the main result of this section, we prove that the EGH conjecture holds for every sequentially bounded licci ideal that `admits a minimal first link', which we now define precisely: \defn Let $I$ be a licci ideal of height $r$. Suppose there exists a sequence of links $I_0 \overset{J_1}{\sim} I_1 \overset{J_2}{\sim} \dots \overset{J_s}{\sim} I_s$ from $I_0 = I$ to a CI ideal $I_s$, such that $J_1, \dots, J_s$ (as CI ideals) have types ${\bf a}^{(1)}, \dots, {\bf a}^{(s)} \in \mathbb{P}^r$ respectively and satisfy ${\bf a}^{(1)} \geq \dots \geq {\bf a}^{(s)}$. 
Then $I$ is called a {\it sequentially bounded} licci ideal. Furthermore, if $J_1$ is a minimal link, then we say $I$ is a sequentially bounded licci ideal that {\it admits a minimal first link}. \thm\label{thm:SeqBoundedLicci} Let $2\leq e_1 \leq \dots \leq e_n$ be integers. If $I\subsetneq S$ is a sequentially bounded licci ideal that admits a minimal first link and minimally contains an $(e_1, \dots, e_n)$-regular sequence of forms, then there exists a monomial ideal $J\subsetneq S$ containing $x_1^{e_1}, \dots, x_n^{e_n}$ such that $I$ and $J$ have the same Hilbert function. Before we prove Theorem \ref{thm:SeqBoundedLicci}, we need the following useful theorem on Hilbert functions under liaison. \thm[{\cite{DavisGeramitaOrecchia1985}}]\label{thm:HilbertFunctionsUnderLiaison} Let $I,J$ be (homogeneous) ideals of $S$ such that $J\subseteq I$. If $S/J$ is an Artinian Gorenstein ring, and $s := \max\{t\in \mathbb{N}: H(S/J,t) \neq 0\}$, then $H(S/I,t) = H(S/J,t) - H(S/(J:I), s-t)$ for all $0\leq t\leq s$. \begin{proof}[Proof of Theorem \ref{thm:SeqBoundedLicci}] Let $I_0 \overset{J_1}{\sim} I_1 \overset{J_2}{\sim} \dots \overset{J_s}{\sim} I_s$ be a sequence of links from $I_0 = I$ to a CI ideal $I_s$, where $J_1$ is a minimal link. Let ${\bf a}^{(1)}, \dots, {\bf a}^{(s)}, {\bf a}^{(s+1)} \in \mathbb{P}^n$ denote the types of $J_1, \dots, J_s, I_s$ respectively (as CI ideals), and assume ${\bf a}^{(1)} \geq \dots \geq {\bf a}^{(s)}$. Note also that ${\bf a}^{(s)} \geq {\bf a}^{(s+1)}$, since $J_s$ and $I_s$ are CI ideals satisfying $J_s \subseteq I_s$. For each $i\in [s+1]$, write ${\bf a}^{(i)} = (a_{i,1}, \dots, a_{i,n})$, and let $\Gamma_i$ be the collection of all monomials in $S$ that divide ${\bf x}^{({\bf a}^{(i)} - {\bf 1}_n)}$. 
Also, define the collections of monomials $\widetilde{\Gamma}_{s+1}, \widetilde{\Gamma}_s, \dots, \widetilde{\Gamma}_1$ recursively as follows: Let $\widetilde{\Gamma}_{s+1} = \Gamma_{s+1}$, and for each $i\in [s]$, define \begin{equation*} \widetilde{\Gamma}_{s+1-i} := \Big\{q\in \Gamma_{s+1-i}: \frac{{\bf x}^{({\bf a}^{(s+1-i)} - {\bf 1}_n)}}{q} \not\in \widetilde{\Gamma}_{s+2-i}\Big\}. \end{equation*} \claimm $\widetilde{\Gamma}_i$ is a multicomplex, and $H(S/I_{i-1}, t) = \big|\big\{q\in \widetilde{\Gamma}_i: \deg(q) = t\big\}\big|$ for all $i\in [s+1]$, $t\in \mathbb{N}$. \begin{proof}[Proof of Claim] \renewcommand{\qedsymbol}{$\blacksquare$} We shall prove both assertions of the claim simultaneously by induction on $s+2-i$ (for $i\in [s+1]$). The base case is trivial: $\widetilde{\Gamma}_{s+1}$ is clearly a multicomplex, and \begin{equation*} H(S/I_s,t) = H(S/\langle x_1^{a_{s+1,1}}, \dots, x_n^{a_{s+1,n}}\rangle,t) = \big|\big\{q\in \widetilde{\Gamma}_{s+1}: \deg(q) = t\big\}\big| \end{equation*} for all $t\in \mathbb{N}$. In particular, $\Gamma_{s+1} = \widetilde{\Gamma}_{s+1}$ forms a $k$-basis for $S/\langle x_1^{a_{s+1,1}}, \dots, x_n^{a_{s+1,n}}\rangle$. For the induction step, let $m_{s+1-i} := {\bf x}^{({\bf a}^{(s+1-i)} - {\bf 1}_n)}$.
Since ${\bf a}^{(1)} \geq \dots \geq {\bf a}^{(s+1)}$ implies $\widetilde{\Gamma}_{i+1} \subseteq \Gamma_{i+1} \subseteq \Gamma_i$ for all $i\in [s]$, it then follows from Theorem \ref{thm:HilbertFunctionsUnderLiaison} that $H(S/I_{s-i},t)$ equals \begin{align*} &\ H(S/J_{s+1-i},t) - H(S/I_{s+1-i}, |{\bf a}^{(s+1-i)}| -n-t)\\ =&\ \big|\big\{q\in \Gamma_{s+1-i}: \deg(q) = t\big\}\big| - \big|\big\{q^{\prime} \in \widetilde{\Gamma}_{s+2-i}: \deg(q^{\prime}) = |{\bf a}^{(s+1-i)}| -n-t\big\}\big|\\ =&\ \big|\big\{q\in \Gamma_{s+1-i}: \deg(q) = t\big\}\big| - \Big|\Big\{\frac{m_{s+1-i}}{q^{\prime}} \in \Gamma_{s+1-i}: q^{\prime} \in \widetilde{\Gamma}_{s+2-i}, \deg\Big(\frac{m_{s+1-i}}{q^{\prime}}\Big) = t\Big\}\Big|\\ =&\ \big|\big\{q\in \Gamma_{s+1-i}: \deg(q) = t\big\}\big| - \Big|\Big\{q\in \Gamma_{s+1-i}: \frac{m_{s+1-i}}{q} \in \widetilde{\Gamma}_{s+2-i}, \deg(q) = t\Big\}\Big|\\ =&\ \big|\big\{q\in \widetilde{\Gamma}_{s+1-i}: \deg(q) = t\big\}\big| \end{align*} for every $t\in \mathbb{N}$. Next, we prove that $\widetilde{\Gamma}_{s+1-i}$ is a multicomplex. Choose an arbitrary $q\in \Gamma_{s+1-i}$ such that $q\not\in \widetilde{\Gamma}_{s+1-i}$, and suppose $qx_j \in \Gamma_{s+1-i}$ for some $j\in [n]$. Clearly $\Gamma_{s+1-i}$ is a multicomplex containing $\widetilde{\Gamma}_{s+1-i}$, hence to prove that $\widetilde{\Gamma}_{s+1-i}$ is a multicomplex, it suffices to show that $qx_j \not\in \widetilde{\Gamma}_{s+1-i}$. Now, $qx_j \in \Gamma_{s+1-i}$ implies $x_j^{(a_{(s+1-i),j}-1)}$ does not divide $q$, which means $x_j$ divides $\frac{m_{s+1-i}}{q}$, so since $\widetilde{\Gamma}_{s+2-i}$ is a multicomplex by induction hypothesis, we thus get $\frac{m_{s+1-i}}{qx_j} \in \widetilde{\Gamma}_{s+2-i}$ which yields $qx_j \not\in \widetilde{\Gamma}_{s+1-i}$. \end{proof} With the above claim, we now complete the proof of Theorem \ref{thm:SeqBoundedLicci}. Let $J\subset S$ be the ideal spanned by monomials in $S$ that are not contained in $\widetilde{\Gamma}_1$. 
Using the claim, $\widetilde{\Gamma}_1$ is a multicomplex, hence $\widetilde{\Gamma}_1$ forms a $k$-basis for $S/J$, and we get \begin{equation*} H(S/I,t) = H(S/I_0,t) = \big|\big\{q\in \widetilde{\Gamma}_1: \deg(q) = t\big\}\big| = H(S/J,t). \end{equation*} Finally, since $J_1$ is a minimal link, we have $(e_1, \dots, e_n) = {\bf a}^{(1)}$, so $\widetilde{\Gamma}_1 \subseteq \Gamma_1$ implies $J$ contains $x_1^{e_1}, \dots, x_n^{e_n}$. \end{proof} \cor\label{cor:minimallyLicci=>EGH} Let $2\leq e_1 \leq \dots \leq e_n$ be integers. If $I\subset S$ is a minimally licci ideal that minimally contains an $(e_1, \dots, e_n)$-regular sequence of forms, then there exists a monomial ideal in $S$ containing $x_1^{e_1}, \dots, x_n^{e_n}$ with the same Hilbert function as $I$. \begin{proof} Given any sequence $I_0 \overset{J_1}{\sim} I_1 \overset{J_2}{\sim} I_2$ of minimal links, where $J_1$ and $J_2$ have types ${\bf a}$ and ${\bf b}$ respectively, the definition of liaison yields $I_0 \overset{J_1}{\sim} I_1 \overset{J_1}{\sim} I_0$, hence the minimality of $J_2$ as a link implies ${\bf b} \leq {\bf a}$. Consequently, a minimally licci ideal is a sequentially bounded licci ideal that admits a minimal first link, and the assertion follows from Theorem \ref{thm:SeqBoundedLicci}. \end{proof} Theorem \ref{mainThm-Gorenstein} is an immediate consequence of Corollary \ref{cor:minimallyLicci=>EGH}, since every Gorenstein ideal of height three is minimally licci~\cite{MiglioreNagel2010:MinimalLinksResultGaeta}. In fact, Migliore and Nagel~\cite{MiglioreNagel2010:MinimalLinksResultGaeta} proved a stronger result: If $I$ is a Gorenstein ideal that is not a CI ideal, then linking $I$ minimally twice gives a Gorenstein ideal with two fewer generators than $I$. Their proof uses Buchsbaum-Eisenbud's structure theorem~\cite{BuchsbaumEisenbud1977}, which says that every Gorenstein ideal of height three is generated by the submaximal Pfaffians of an alternating matrix.
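The multicomplex recursion used in the proof of Theorem \ref{thm:SeqBoundedLicci} is easy to experiment with directly. The following sketch (our own illustration; the types are arbitrary toy data subject to ${\bf a}^{(1)} \geq {\bf a}^{(2)}$) builds $\widetilde{\Gamma}_1$ for a length-one sequence of links in two variables, and checks both that $\widetilde{\Gamma}_1$ is a multicomplex and that its degree counts agree with $H(S/J_1,t) - H(S/I_1, |{\bf a}^{(1)}|-n-t)$, as asserted by the Claim.

```python
from itertools import product
from collections import Counter

def box(a):
    """Gamma for a CI type a: exponent tuples of monomials dividing
    x^(a - 1), i.e. a k-basis of S modulo the pure powers x_i^(a_i)."""
    return set(product(*(range(ai) for ai in a)))

def gamma_tilde(types):
    """The recursion from the proof: given CI types a^(1) >= ... >= a^(s+1),
    return [Gamma~_1, ..., Gamma~_{s+1}]."""
    tg = [None] * len(types)
    tg[-1] = box(types[-1])
    for i in range(len(types) - 2, -1, -1):
        top = tuple(ai - 1 for ai in types[i])   # exponent of x^(a^(i) - 1)
        tg[i] = {q for q in box(types[i])
                 if tuple(t - qj for t, qj in zip(top, q)) not in tg[i + 1]}
    return tg

def is_multicomplex(M):
    """Closed under divisibility: lowering any exponent by one stays in M."""
    return all(tuple(e[k] - (k == j) for k in range(len(e))) in M
               for e in M for j in range(len(e)) if e[j] > 0)

def degree_counts(M, top):
    c = Counter(sum(e) for e in M)
    return [c[t] for t in range(top + 1)]

types = [(3, 2), (2, 1)]   # J_1 of type (3, 2), I_1 a CI of type (2, 1)
g1, g2 = gamma_tilde(types)
s = sum(types[0]) - 2      # top socle degree |a^(1)| - n, with n = 2
H_J1 = degree_counts(box(types[0]), s)
H_I1 = degree_counts(box(types[1]), s)

print(is_multicomplex(g1))                              # True
print(degree_counts(g1, s))                             # [1, 2, 1, 0]
print([H_J1[t] - H_I1[s - t] for t in range(s + 1)])    # [1, 2, 1, 0]
```

Here $\widetilde{\Gamma}_1 = \{1, x, x^2, y\}$, so the ideal $J$ of the proof is $\langle x^3, xy, y^2\rangle$, which indeed contains the pure powers $x^3, y^2$ of the minimal link's type.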
Note that Watanabe~\cite{Watanabe1973:NoteOnGorenSteinRings} previously showed the minimal number of generators of every Gorenstein ideal of height three is odd. \remark[The Two Variables Case] Every ideal $I\subset S$ containing a maximal $S$-regular sequence is Artinian and hence Cohen-Macaulay. Since Gaeta's theorem~\cite{Gaeta1948} says every Cohen-Macaulay ideal of height two is minimally licci, Corollary \ref{cor:minimallyLicci=>EGH} thus yields a different proof (cf. \cite{Richert2004:StudyLPPConjecture}, \cite[Remark 14]{CavigliaMaclagan2008:EGHConj}) that the EGH conjecture holds for $n=2$. \section{Variations of the EGH Conjecture} There are two common variations of the EGH conjecture. \conjecture[{$\text{EGH}_{{\bf e},n}$}]\label{conj:non-minimal-degrees} Let $2\leq e_1 \leq \dots \leq e_n$ be integers, and let $f_1, \dots, f_n$ be an $S$-regular sequence of forms of degrees $e_1, \dots, e_n$ respectively. If $I\subsetneq S$ is a homogeneous ideal containing $f_1, \dots, f_n$, then there exists a homogeneous ideal $J\subsetneq S$ containing $x_1^{e_1}, \dots, x_n^{e_n}$, such that $I$ and $J$ have the same Hilbert function. \conjecture[{$\text{EGH}_{n,{\bf e},r}$}]\label{conj:non-maximal-regularSeq} Let $r \in [n]$, ${\bf e} = (e_1, \dots, e_r) \in \mathbb{P}^r$ satisfy $2\leq e_1 \leq \dots \leq e_r$, and let $f_1, \dots, f_r$ be an $S$-regular sequence of forms of degrees $e_1, \dots, e_r$ respectively. If $I\subsetneq S$ is a homogeneous ideal containing $f_1, \dots, f_r$, then there exists a homogeneous ideal $J\subsetneq S$ containing $x_1^{e_1}, \dots, x_r^{e_r}$, such that $I$ and $J$ have the same Hilbert function. Conjecture \ref{conj:non-minimal-degrees} is clearly equivalent to the EGH conjecture, while Conjecture \ref{conj:non-maximal-regularSeq} allows for non-maximal $S$-regular sequences, with the case $\text{EGH}_{n,{\bf e},n}$ being identical to $\text{EGH}_{{\bf e},n}$. 
Remarkably, Caviglia and Maclagan~\cite{CavigliaMaclagan2008:EGHConj} showed that Conjecture \ref{conj:non-maximal-regularSeq} is equivalent to the EGH conjecture. In particular, they showed that if $r\in [n]$, ${\bf e} = (e_1, \dots, e_r) \in \mathbb{P}^r$ satisfies $2\leq e_1 \leq \dots \leq e_r$, and $\text{EGH}_{{\bf e}^{\prime},n}$ holds for all ${\bf e}^{\prime} = (e_1^{\prime}, \dots, e_n^{\prime}) \in \mathbb{P}^n$ such that $2\leq e_1^{\prime} \leq \dots \leq e_n^{\prime}$ and $e_i^{\prime} = e_i$ for each $i\in [r]$, then $\text{EGH}_{n,{\bf e},r}$ holds; and conversely, if $\text{EGH}_{{\bf e},r}$ holds for some $r\in \mathbb{P}$ and some ${\bf e} = (e_1, \dots, e_r) \in \mathbb{P}^r$ satisfying $2\leq e_1 \leq \dots \leq e_r$, then $\text{EGH}_{n^{\prime},{\bf e},r}$ holds for all integers $n^{\prime} \geq r$. By a modification of their proof, we show that $\text{EGH}_{n,{\bf e},r}$ holds for every sequentially bounded licci ideal that admits a minimal first link whenever $n \geq r$. \lemma\label{lemma:non-maximalCase} Let $I_1, I_2 \subsetneq S$ be ideals of height $r<n$, and let $f_1, \dots, f_r,g$ be an $S$-regular sequence of forms. Define $J := \langle f_1, \dots, f_r\rangle$ and $J^{\prime} := \langle f_1, \dots, f_r, g\rangle$. If $I_1\overset{J}{\sim} I_2$, then \begin{equation*} \big((I_1: g^j) + \langle g \rangle\big) \overset{J^{\prime}}{\sim} \big((I_2: g^j) + \langle g \rangle\big) \end{equation*} for every $j\in \mathbb{N}$. \begin{proof} For convenience, write $I_i^{\prime} := \big((I_i: g^j) + \langle g \rangle\big)$ for each $i\in \{1,2\}$. The case $j=0$ is trivial, so assume $j\geq 1$. 
Observe that the identity $I_2 = J:I_1$ yields \begin{align*} I_2^{\prime} &= \frac{1}{g^j}\Big((J:I_1) \cap \langle g^j\rangle\Big) + \langle g\rangle = \Big\{\frac{s^{\prime}}{g^j} \in S: s^{\prime} \in \langle g^j\rangle, s^{\prime}I_1 \subseteq J\Big\} + \langle g \rangle\\ &= \{s\in S: sg^jI_1\subseteq J\} + \langle g\rangle, \end{align*} while the identity $I_1 = J:(J:I_1)$ yields \begin{align*} J^{\prime}: I_1^{\prime} &= \Big\{s\in S: s\cdot \frac{1}{g^j}(I_1\cap \langle g^j\rangle) \subseteq J\Big\} + \langle g\rangle\\ &= \Big\{s\in S: s\cdot \frac{1}{g^j}((J:(J:I_1))\cap \langle g^j\rangle) \subseteq J\Big\} + \langle g\rangle\\ &= \big\{s\in S: ss^{\prime} \in J\text{ for every }s^{\prime}\in S^{\prime}\big\} + \langle g\rangle, \end{align*} where $S^{\prime} := \{s^{\prime} \in S: s^{\prime}tg^j \in J\text{ for all }t\in S \text{ such that }tI_1\subseteq J\}$. A routine check gives $J^{\prime}: I_1^{\prime} \subseteq I_2^{\prime}$. To show the reverse inclusion $J^{\prime}: I_1^{\prime} \supseteq I_2^{\prime}$, note that $sp\in J$ for every non-zero $s\in S$ and non-zero $p\in I_1$ satisfying $sg^jp\in J$, since otherwise $g^j$ would be a zero-divisor of $S/J$, which contradicts the assumption that $f_1, \dots, f_r,g$ is an $S$-regular sequence. By the same argument, every $s^{\prime} \in S^{\prime}$ satisfies $s^{\prime}t \in J$ for all $t\in S$ such that $tI_1\subseteq J$. Consequently, each $s \in I_2^{\prime}$ satisfies $sI_1\subseteq J$ and hence satisfies $ss^{\prime} \in J$ for all $s^{\prime} \in S^{\prime}$, which gives the reverse inclusion, so we conclude $J^{\prime}: I_1^{\prime} = I_2^{\prime}$. A symmetric argument yields $J^{\prime}: I_2^{\prime} = I_1^{\prime}$. Finally, since $f_1, \dots, f_r, g$ is an $S$-regular sequence, we get $J\cap \langle g^j\rangle = \langle g^jf_1, \dots, g^jf_r\rangle$.
It then follows from $J\subseteq I_1 \cap I_2$ that $J^{\prime} = \frac{1}{g^j}(J \cap \langle g^j\rangle) + \langle g\rangle \subseteq I_1^{\prime} \cap I_2^{\prime}$, therefore $I_1^{\prime} \overset{J^{\prime}}{\sim} I_2^{\prime}$. \end{proof} \prop Let $r\in [n]$, let $2\leq e_1 \leq \dots \leq e_r$ be integers, and let $I\subsetneq S$ be an ideal (of height $r$) that minimally contains an $(e_1, \dots, e_r)$-regular sequence of forms. If $I\subsetneq S$ is a sequentially bounded licci ideal that admits a minimal first link (for example, $I$ could be a minimally licci ideal), then there exists a monomial ideal $J\subsetneq S$ containing $x_1^{e_1}, \dots, x_r^{e_r}$ such that $I$ and $J$ have the same Hilbert function. \begin{proof} We follow Caviglia-Maclagan's proof and similarly prove our proposition by induction on $n-r\in \mathbb{N}$, where the base case $n-r=0$ is equivalent to Theorem \ref{thm:SeqBoundedLicci}. Assume $r <n$, fix the integers $2\leq e_1 \leq \dots \leq e_r$, and let $I = I_0 \overset{J_1}{\sim} I_1 \overset{J_2}{\sim} \dots \overset{J_s}{\sim} I_s$ be a sequence of links from $I_0$ to a CI ideal $I_s$, such that $J_1, \dots, J_s, I_s$ (as CI ideals) have types ${\bf a}^{(1)}, \dots, {\bf a}^{(s)}, {\bf a}^{(s+1)} \in \mathbb{P}^r$ respectively and satisfy ${\bf a}^{(1)} \geq \dots \geq {\bf a}^{(s+1)}$. In particular, $J_s$ and $I_s$ are CI ideals satisfying $J_s \subseteq I_s$, so the last inequality ${\bf a}^{(s)} \geq {\bf a}^{(s+1)}$ is guaranteed. Also, assume $J_1$ is a minimal link and ${\bf a}^{(1)} = (e_1, \dots, e_r)$. Since $k$ is infinite and $r<n$, we can choose some linear form $g$ that is a non-zero-divisor on each of $S/J_1, \dots, S/J_s, S/I_s$. Consequently, for each $i\in [s]$, we can define the CI ideal $J_i^{\prime} := J_i + \langle g\rangle$ of type $(1,{\bf a}^{(i)}) \in \mathbb{P}^{r+1}$. Next, let $N\in \mathbb{P}$ be sufficiently large so that $(I:g^{\infty}) = (I:g^N)$. 
For each $i\in \{0,1,\dots, s\}$ and $j\in \{0,1,\dots, N\}$, define the ideal $I_i^{(j)} := (I_i: g^j) + \langle g\rangle$. By Lemma \ref{lemma:non-maximalCase}, we get the sequence of links \begin{equation}\label{eqn:seqLinks} I_0^{(j)} \overset{J_1^{\prime}}{\sim} I_1^{(j)} \overset{J_2^{\prime}}{\sim} \dots \overset{J_s^{\prime}}{\sim} I_s^{(j)} \end{equation} for every $j\in \{0,1,\dots, N\}$, and we observe that each $I_s^{(j)}$ is a CI ideal. For each $i\in [s]$, write $J_i = \langle f_1^{(i)}, \dots, f_r^{(i)} \rangle$ so that ${\bf a}^{(i)} = (\deg(f_1^{(i)}), \dots, \deg(f_r^{(i)}))$. Also, write $I_s = \langle f_1^{(s+1)}, \dots, f_r^{(s+1)}\rangle$ so that ${\bf a}^{(s+1)} = (\deg(f_1^{(s+1)}), \dots, \deg(f_r^{(s+1)}))$. The quotient ring $R := S/\langle g\rangle$ is isomorphic to a polynomial ring on $n-1$ variables, and by the natural quotient map $\pi: S\to R$, the $S$-regular sequence $f_1^{(i)}, \dots, f_r^{(i)}$ descends to an $R$-regular sequence $\pi(f_1^{(i)}), \dots, \pi(f_r^{(i)})$ for each $i\in [s+1]$. In particular, $J_i^{\prime\prime} := \pi(J_i^{\prime})$ is a CI ideal in $R$ for all $i\in [s]$. By applying $\pi$ to the ideals in \eqref{eqn:seqLinks}, we get the sequence of links $\pi(I_0^{(j)}) \overset{J_1^{\prime\prime}}{\sim} \pi(I_1^{(j)}) \overset{J_2^{\prime\prime}}{\sim} \dots \overset{J_s^{\prime\prime}}{\sim} \pi(I_s^{(j)})$ for every $j\in \{0,1,\dots, N\}$. Now $J_1^{\prime\prime}, \dots, J_s^{\prime\prime}$ (as CI ideals) have types ${\bf a}^{(1)}, \dots, {\bf a}^{(s)}$ respectively, and for every $j\in \{0,1,\dots, N\}$, the ideal $\pi(I_s^{(j)})$ is a CI ideal of type ${\bf a}^{(s+1)}$, thus $\pi(I_0^{(j)})$ is a sequentially bounded licci ideal in $R$ that admits a minimal first link. 
Using $R \cong k[x_1,\dots, x_{n-1}]$, the induction hypothesis then says that for every $j\in \{0,1,\dots, N\}$, there exists a monomial ideal in $k[x_1,\dots, x_{n-1}]$ containing $x_1^{e_1}, \dots, x_r^{e_r}$ with the same Hilbert function as $\pi(I_0^{(j)})$. Let $M_j$ be the lex-plus-powers ideal in $k[x_1, \dots, x_{n-1}]$ containing $x_1^{e_1}, \dots, x_r^{e_r}$ with this same Hilbert function (which exists by the Clements-Lindstr\"{o}m theorem~\cite{ClementsLindstrom1969}), and let $K_j \subseteq k[x_1, \dots, x_n]$ be the set of monomials $K_j := \{mx_n^j: m\in M_j\}$. Finally, consider the ideal $K$ generated by the monomials in $\bigcup_{j=0}^N K_j$. As shown by Caviglia and Maclagan~\cite[Proposition 10]{CavigliaMaclagan2008:EGHConj}, the ideal $K$ contains $x_1^{e_1}, \dots, x_r^{e_r}$ and has the same Hilbert function as $I$, and their proof holds verbatim. \end{proof} \cor Let $n\geq 3$, and let $2\leq e_1 \leq e_2 \leq e_3$ be integers. If $I\subsetneq S$ is a homogeneous Gorenstein ideal (of height three) that minimally contains an $(e_1, e_2, e_3)$-regular sequence of forms, then there exists a monomial ideal $J \subsetneq S$ containing $x_1^{e_1}, x_2^{e_2}, x_3^{e_3}$, such that $I$ and $J$ have the same Hilbert function. \section*{Acknowledgements} The author thanks David Eisenbud, Irena Peeva, Juan Migliore, and Edward Swartz for helpful comments. \bibliographystyle{plain}
\renewcommand\chapter{\clearpage\thispagestyle{plain}\global\@topnum\z@ \@afterindenttrue \secdef\@chapter\@schapter} \makeatother \newtheorem{thm} {Theorem} [section] \newtheorem{prop}{Proposition} [section] \newtheorem{lem} {Lemma} [section] \newtheorem{cor} {Corollary}[section] \newtheorem{Def} {Definition} [section] \newtheorem{defs} [Def]{Definitions} \newtheorem{Not} [Def] {Notation} \newtheorem{thmgl} {Theorem} \newtheorem{propgl}{Proposition} \newtheorem{lemgl} {Lemma} \newtheorem{corgl} {Corollary} \newtheorem{defgl} {Definition} \newtheorem{defsgl} [defgl]{Definitions} \newtheorem{notgl} [defgl] {Notation} \newtheorem{thmnn}{Theorem} \renewcommand{\thethmnn}{\!\!} \newtheorem{propnn}{Proposition} \renewcommand{\thepropnn}{\!\!} \newtheorem{lemnn}{Lemma} \renewcommand{\thelemnn}{\!\!} \newtheorem{cornn}{Corollary} \renewcommand{\thecornn}{\!\!} \newtheorem{defnn}{Definition} \renewcommand{\thedefnn}{\!\!} \newtheorem{defsnn}{Definitions} \renewcommand{\thedefsnn}{\!\!} \newtheorem{notnn}{Notation} \renewcommand{\thenotnn}{\!\!} \theoremstyle{definition} \newtheorem{alg} {Algorithm} [section] \newtheorem{exe} {Exercise} [section] \newtheorem{exes} [exe] {Exercises} \newtheorem{rem} {Remark} [section] \newtheorem{rems} [rem]{Remarks} \newtheorem{exa} [rem] {Example} \newtheorem{exas} [rem] {Examples} \newtheorem{alggl} {Algorithm} \newtheorem{exegl} {Exercise} \newtheorem{exesgl} [exegl]{Exercises} \newtheorem{remgl} {Remark} \newtheorem{remsgl} [remgl]{Remarks} \newtheorem{exagl} [remgl] {Example} \newtheorem{exasgl} [remgl] {Examples} \newtheorem{algnn} {Algorithm} \renewcommand{\thealgnn}{\!\!} \newtheorem{exenn} {Exercise} \renewcommand{\theexenn}{\!\!} \newtheorem{exesnn} {Exercises} \renewcommand{\theexesnn}{\!\!} \newtheorem{remnn}{Remark} \renewcommand{\theremnn}{\!\!} \newtheorem{remsnn}{Remarks} \renewcommand{\theremsnn}{\!\!} \newtheorem{exann}{Example} \renewcommand{\theexann}{\!\!} \newtheorem{exasnn}{Examples} \renewcommand{\theexasnn}{\!\!} 
\newcommand{\fo}{\footnotesize} \newcommand{\mf}{\mathfrak} \newcommand{\mc}{\mathcal} \newcommand{\mb}{\mathbb} \newcommand{\nts}{\negthinspace} \newcommand{\Nts}{\nts\nts} \newcommand{\cd}{\cdot} \newcommand{\ncd}{\nts\cdot\nts} \newcommand{\ov}{\overline} \newcommand{\un}{\underline} \newcommand{\vink}{^{^{_{_\vee}}}} \newcommand{\sm}{\setminus} \newcommand{\ot}{\otimes} \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \newcommand{\Hom}{{\rm Hom}} \newcommand{\Mor}{{\rm Mor}} \newcommand{\End}{{\rm End}} \newcommand{\Aut}{{\rm Aut}} \newcommand{\Mat}{{\rm Mat}} \newcommand{\Der}{{\rm Der}} \newcommand{\Ext}{{\rm Ext}} \newcommand{\Tor}{{\rm Tor}} \newcommand{\soc}{{\rm soc}} \newcommand{\ind}{{\rm ind}} \newcommand{\coind}{{\rm coind}} \newcommand{\res}{{\rm res}} \newcommand{\Spec}{{\rm Spec}} \newcommand{\Maxspec}{{\rm Maxspec}} \newcommand{\Sym}{{\rm Sym}} \newcommand{\Mon}{{\rm Mon}} \newcommand{\Ker}{{\rm Ker}} \renewcommand{\Im}{{\rm Im}} \newcommand{\codim}{{\rm codim}} \newcommand{\rk}{{\rm rk}} \newcommand{\sgn}{{\rm sgn}} \newcommand{\tr}{{\rm tr}} \newcommand{\id}{{\rm id}} \newcommand{\rad}{{\rm rad}} \newcommand{\gr}{{\rm gr}} \newcommand{\g}{\mf{g}} \newcommand{\h}{\mf{h}} \let\ttie\t \newcommand{\tie}[1]{{\let\t\ttie \ttie#1}} \renewcommand{\t}{\mf{t}} \newcommand{\n}{\mf{n}} \let\br\u \renewcommand{\u}{\mf{u}} \let\bar\b \renewcommand{\b}{\mf{b}} \let\ced\c \renewcommand{\c}{\mf{c}} \newcommand{\z}{\mf{z}} \newcommand{\ad}{{\rm ad}} \newcommand{\gl}{\mf{gl}} \newcommand{\spl}{\mf{sl}} \newcommand{\psl}{\mf{psl}} \newcommand{\pgl}{\mf{pgl}} \newcommand{\ort}{\mf{o}} \newcommand{\symp}{\mf{sp}} \newcommand{\Ad}{{\rm Ad}} \newcommand{\Lie}{{\rm Lie}} \newcommand{\Dist}{{\rm Dist}} \newcommand{\GL}{{\rm GL}} \newcommand{\PGL}{{\rm PGL}} \newcommand{\SL}{{\rm SL}} \newcommand{\PSL}{{\rm PSL}} \newcommand{\SU}{{\rm SU}} \newcommand{\Ort}{{\rm O}} \newcommand{\SO}{{\rm SO}} \newcommand{\PSO}{{\rm PSO}} \newcommand{\Spin}{{\rm Spin}} 
\newcommand{\Sp}{{\rm Sp}} \newcommand{\PSp}{{\rm PSp}} \newcommand{\sleq}{\mbox{\fontsize{2}{3}\selectfont $\leq $}} \newcommand{\stimes}{\mbox{\fontsize{7}{6}\selectfont $\times $}} \newcommand{\e}{\epsilon} \newcommand{\ve}{\varepsilon} \newcommand{\rot}{\rotatebox} \newcommand{\SpM}{{\rm SpM}} \newcommand{\GSp}{{\rm GSp}} \newcommand{\Stab}{{\rm Stab}} \newcommand*\pmat[1]{\begin{psmallmatrix}#1\end{psmallmatrix}} \newcommand*\bmat[1]{\begin{bsmallmatrix}#1\end{bsmallmatrix}} \makeatletter \def\vcdots{\vbox{\baselineskip4\p@ \lineskiplimit\z@ \kern3\p@\hbox{.}\hbox{.}\hbox{.}\Nts\nts\kern3\p@}} \makeatother \xyoption{all} \begin{document} \title{Highest weight vectors and transmutation} \begin{abstract} Let $G=\GL_n$ be the general linear group over an algebraically closed field $k$, let $\g=\gl_n$ be its Lie algebra and let $U$ be the subgroup of $G$ which consists of the upper uni-triangular matrices. Let $k[\g]$ be the algebra of polynomial functions on $\g$ and let $k[\g]^G$ be the algebra of invariants under the conjugation action of $G$. In characteristic zero, we give for all dominant weights $\chi\in\mathbb Z^n$ finite homogeneous spanning sets for the $k[\g]^G$-modules $k[\g]_\chi^U$ of highest weight vectors. This result (with some mistakes) was already given without proof by J.~F.~Donin. Then we do the same for tuples of $n\times n$-matrices under the diagonal conjugation action. Furthermore we extend our earlier results in positive characteristic and give a general result which reduces the problem to giving spanning sets of the highest weight vectors for the action of $\GL_r\times\GL_s$ on tuples of $r\times s$ matrices. This requires the technique called ``transmutation" by R.~Brylinsky which is based on an instance of Howe duality. In the cases that $\chi_{{}_n}\ge -1$ or $\chi_{{}_1}\le 1$ this leads to new spanning sets for the modules $k[\g]_\chi^U$. 
\end{abstract} \author[R.\ H.\ Tange]{Rudolf Tange} \keywords{} \thanks{2010 {\it Mathematics Subject Classification}. 13A50, 16W22, 20G05.} \maketitle \markright{\MakeUppercase{Highest weight vectors and transmutation}} \begin{comment} 1.\ Concerning the open problem of understanding Donin's results about the exterior algebra, one should check that his "results" are in accordance with Hanlon's and Stanley's limiting formulas.\\ 2.\ It would be interesting to know whether $(\Mat_{rn}\times\Mat_{ns},Y_{r,s,n})$ is a good pair for any r and s. This is not as easy as in the case of rank varieties. For a start Y_{r,s,n} need no longer be irreducible and it's usually its irreducible components that are studied. More importantly, from the results on bideterminant bases from the paper by De Concini and Strickland on the variety of complexes I cannot easily deduce that the vanishing ideal also has a bideterminant basis. The results of Mehta and Trivedi are also not very helpful, since they do not give a Frobenius splitting for the big variety. \end{comment} \section*{Introduction}\label{s.intro} Let $k$ be an algebraically closed field and let $\GL_n$ be the group of invertible $n\times n$ matrices with entries in $k$ and let $T_n$ and $U_n$ be the subgroups of diagonal matrices and of upper uni-triangular matrices. The group $\GL_n$ acts on the $k$-vector space $\Mat_n$ of $n\times n$ matrices with entries in $k$ via $S\cdot A=SAS^{-1}$ and therefore on its coordinate ring $k[\Mat_n]$ via $(S\cdot f)(A)=f(S^{-1}AS)$. We identify the character group of $T_n$ with $\mb Z^n$: if $\chi\in\mb Z^n$, then $D\mapsto \prod_{i=1}^nD_{ii}^{\chi_i}$ is the corresponding character of $T_n$. We will call the characters of $T_n$ {\it weights} of $T_n$ or $\GL_n$ and the weights $\chi$ of $T_n$ for which the corresponding weight space $M_\chi$ of a given $T_n$-module $M$ is nonzero will be called {\it weights} of $M$. 
We say that $\chi\in\mb Z^n$ is {\it dominant} if it is weakly decreasing. We will be interested in finding finite homogeneous spanning sets for the $k[\Mat_n]^{\GL_n}$-modules $k[\Mat_n]_\chi^{U_n}$ of highest weight vectors. As is well-known, such a module is nonzero if and only if $\chi$ is dominant and has coordinate sum zero. A weight $\chi\in\mb Z^n$ with this property can uniquely be written as $\chi=[\lambda,\mu]:=(\lambda_1,\lambda_2,\ldots,0,\ldots,0,\ldots,-\mu_2,-\mu_1)$ where $\lambda$ and $\mu$ are partitions with $|\lambda|=|\mu|$ and $l(\lambda)+l(\mu)\le n$. Here $l(\lambda)$ denotes the length of a partition $\lambda$ and $|\lambda|$ denotes its coordinate sum. As usual partitions are extended with zeros if necessary. The nilpotent cone $\mc N_n=\{A\in\Mat_n\,|\,A^n=0\}$ is a $\GL_n$-stable closed subvariety of $\Mat_n$. Using the graded Nakayama Lemma it is easy to see that it suffices to find finite homogeneous spanning sets for the vector spaces of highest weight vectors $k[\mc N_n]_\chi^{U_n}$ in the coordinate ring of $\mc N_n$. For background on the adjoint action on $k[\Mat_n]$ and $k[\mc N_n]$, e.g. graded character formulas, we refer to the introduction of \cite{T2} and the references in there. In \cite{Br} a process called {\it transmutation} is applied to understand the conjugation action of $\GL_n$ on the nilpotent cone. We briefly explain the idea and for simplicity we assume that $k$ has characteristic $0$. Let $G,H$ be reductive groups and let $Y$ be an affine $G\times H$-variety such that $k[Y]=\bigoplus_{i\in I}L_i^*\ot M_i$ where the $L_i$ are mutually nonisomorphic $G$-modules and the $M_i$ are mutually nonisomorphic $H$-modules. Then $Y$ can be used as a ``catalyst" for transmutation as follows. If $V$ is an affine $G$-variety, then $W=Y\times^GV:=(Y\times V)/\nts/G$ is an $H$-variety, the $H$-irreducibles that show up in $k[W]$ are the $M_i$, and the multiplicity of $M_i$ in $k[W]$ is the same as that of $L_i$ in $k[V]$. 
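The following small example, added here purely for illustration, spells out the notation $\chi=[\lambda,\mu]$ introduced above. \begin{exagl} Take $n=5$ and $\chi=(3,1,0,-2,-2)$. Then $\chi$ is dominant with coordinate sum zero, and $\chi=[\lambda,\mu]$ for $\lambda=(3,1)$ and $\mu=(2,2)$: indeed $(\lambda_1,\lambda_2,0,-\mu_2,-\mu_1)=(3,1,0,-2,-2)$, $|\lambda|=|\mu|=4$ and $l(\lambda)+l(\mu)=4\le 5$. \end{exagl}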
The goal is to find for a given $V$ a suitable $H$ and $Y$ for which the resulting $W$ is much simpler than $V$, but still contains enough interesting information coming from $V$. In \cite{Br} R.~Brylinsky applied this technique to the closed $\GL_n$-stable subvariety $V=\mc N_{n,m}=\{A\in\mc N_n\,|\,A^{m+1}=0\}$ of $\Mat_n$ and $G=\GL_n$. She showed that in this case for $H=\GL_r\times\GL_s$ and a suitable catalyst $Y$ the transmuted variety $W$ is a certain closed subvariety of $\Mat_{rs}^m$ which is all of $\Mat_{rs}^m$ if $n$ is sufficiently big relative to $m,r$ and $s$. Here $\GL_r\times\GL_s$ acts on $\Mat_{rs}^m$ via $((R,S)\cdot\un A)_i=RA_iS^{-1}$, $\un A=(A_1,\ldots,A_m)\in\Mat_{rs}^m$, and on the coordinate ring $k[\Mat_{rs}^m]$ via $((R,S)\cdot f)(\un A)=f((R^{-1},S^{-1})\cdot\un A)$. The correspondence between the irreducibles for the two groups is in terms of the labels given by $\chi=[\lambda,\mu]\leftrightarrow(-\mu^{\rm rev},\lambda)$, where $\mu^{\rm rev}$ is the reversed $r$-tuple of $\mu$. In this paper we give finite homogeneous spanning sets for the vector spaces $k[\mc N_n]_\chi^{U_n}$ in characteristic $0$ using transmutation and some results of J.~Donin on skew representations for the symmetric group. Donin already stated our result Theorem~\ref{thm.highest_weight_vecs} in \cite{Donin1} and \cite{Donin2}, but gave no proof.\footnote{The ``proof" of \cite[Prop.~4.1]{Donin1} is not convincing.} By making Brylinsky's results explicit in terms of highest weight vectors we can prove the result of Donin on the highest weight vectors in $k[\Mat_n]$. It turns out that the method of ``transmutation" works in any characteristic and for certain special weights we can give bases for the highest weight vectors in the coordinate ring of the transmuted variety which then give spanning sets for the highest weight vectors in $\mc N_n$. The paper is organised as follows. In Section~\ref{s.prelim} we introduce some notation, e.g. 
for diagrams and tableaux, and we state some well-known results from the literature on the algebra $k[\Mat_n]^{\GL_n}$, reduction to the nilpotent cone and good filtrations that we will need. In Section~\ref{s.charp} we show in Theorem~\ref{thm.surjective_pullback} that the technique of transmutation works in our case in any characteristic. Our main tool here is Donkin's theory of good pairs of varieties \cite{Don1}. We can apply Theorem~\ref{thm.surjective_pullback} in arbitrary characteristic for weights $\chi$ with $\chi_n\ge-1$ or $\chi_1\le 1$. For the corresponding $\GL_r\times\GL_s$-weights we give in Theorem~\ref{thm.basis_special_weights} bases for the spaces of highest weight vectors in the coordinate ring of the ``transmuted space" $\Mat_{rs}^m$. In Section~\ref{s.char0} we always assume that our field $k$ has characteristic $0$. In Section~\ref{ss.Sym} we first develop the necessary results on skew representations of the symmetric group. What we need is explicit polytabloid bases for the ``coinvariants" for a Young subgroup in a Specht module, see Proposition~\ref{prop.coinvariants}. Although Donin didn't prove Theorem~\ref{thm.highest_weight_vecs} or its analogue for the nilpotent cone, he did give proofs in \cite{Donin1} for most of the required results on the symmetric group. However, his proofs are mostly incomplete and sometimes incorrect. So we give an account with complete proofs. In Section~\ref{ss.highest_weight_vecs} we give bases for the spaces of highest weight vectors in the coordinate ring of the ``transmuted space" $\Mat_{rs}^m$. Combined with Theorem~\ref{thm.surjective_pullback} this gives finite homogeneous spanning sets for the vector spaces $k[\mc N_n]_\chi^{U_n}$ in characteristic $0$. This can then further be combined with Lemma~\ref{lem.reduction_to_nilpotent_cone} below to obtain finite homogeneous spanning sets for the $k[\Mat_n]^{\GL_n}$-modules $k[\Mat_n]_\chi^{U_n}$. 
In Section~\ref{ss.several_matrices} we briefly describe a generalisation to several matrices and how to obtain spanning sets for the $k[\Mat_n^l]^{\GL_n}$-modules $k[\Mat_n^l]_\chi^{U_n}$. \section{Preliminaries}\label{s.prelim} Throughout this paper $k$ is an algebraically closed field of arbitrary characteristic. All our varieties are affine. The groups $\GL_n,T_n,U_n$ and the actions of $\GL_n$ on $\Mat_n$ and $\mc N_{n,m}$ and of $\GL_r\times\GL_s$ on $\Mat_{rs}^m$ are as in the introduction. \subsection{The graded Nakayama Lemma} As is well-known the algebra $k[\Mat_n]^{\GL_n}$ is generated by the algebraically independent functions $s_1,\ldots,s_n$ given by $s_i(A)=\tr(\wedge^iA)$, where $\wedge^iA$ denotes the $i$-th exterior power of $A$. Furthermore, the $s_i$ generate the vanishing ideal of $\mc N_n$. If $m$ is the dimension of the zero weight space of $\nabla_{\GL_n}(\chi)$, then $k[\mc N_n]^{U_n}_\chi$ has dimension $m$ and $k[\Mat_n]^{U_n}_\chi$ is a free $k[\Mat_n]^{\GL_n}$-module of rank $m$. The following lemma is an application of the graded Nakayama Lemma. \begin{lemgl}\label{lem.reduction_to_nilpotent_cone} Let $f_1,\ldots,f_l\in k[\Mat_n]^{U_n}_\chi$ be homogeneous. If the restrictions $f_1|_{\mc N_n},\ldots,f_l|_{\mc N_n}$ span $k[\mc N_n]^{U_n}_\chi$, then $f_1,\ldots,f_l$ span $k[\Mat_n]^{U_n}_\chi$ as a $k[\Mat_n]^{\GL_n}$-module. The same holds with ``span" replaced by ``form a basis of". \end{lemgl} \noindent We refer to \cite[Lem.~2, Prop.~1]{T1} for references and explanation. \subsection{Good filtrations} For $G$ a reductive group and $\chi$ a dominant weight relative to a Borel subgroup $B=TU$ we denote the standard or Weyl module corresponding to $\chi$ by $\Delta_G(\chi)$ and the costandard or induced module corresponding to $\chi$ by $\nabla_G(\chi)$. We have $\Delta_G(\chi)\cong\nabla_G(-w_0(\chi))^*$, where $w_0$ is the longest element in the Weyl group. 
The module $\nabla_G(\chi)$ has simple socle and the module $\Delta_G(\chi)$ has simple top, both isomorphic to the irreducible $L_G(\chi)$ of highest weight $\chi$. In characteristic $0$ we have $\Delta_G(\chi)\cong\nabla_G(\chi)\cong L_G(\chi)$. The main property of these modules that we will use is that all dominant $\chi_1$ and $\chi_2$, $\Ext_G^1(\Delta_G(\chi_1),\nabla_G(\chi_2))=0$ and $\Hom_G(\Delta_G(\chi_1),\nabla_G(\chi_2))=k$ if $\chi_1=\chi_2$ and $\{0\}$ otherwise. See \cite[II.4.13]{Jan}. A $G$-module $M$ is said to have a {\it good filtration} if it has a $G$-module filtration $0=M_0\subseteq M_1\subseteq M_2\subseteq\cdots$, $\bigcup_{i\ge0}M_i=M$, such that each quotient $M_i/M_{i-1}$ is isomorphic to some induced module $\nabla_G(\chi)$. If $M$ has a good filtration, the number of quotients isomorphic to $\nabla_G(\chi)$ is independent of the good filtration and equal to $\dim M^U_\chi$. If $k$ has characteristic $0$, then every $G$-module has a good filtration. For more details we refer to \cite[II.4.16,17]{Jan}. For example, a direct summand of a module with a good filtration has a good filtration. \subsection{Graded characters} If $M=\bigoplus_{i\ge 0} M_i$ is a graded vector space with $\dim M_i<\infty$ for all $i$, then the {\it graded dimension} of $M$ is the polynomial $\sum_i\dim M_iz^i$. Here one can use for $z$ any other grading variable. Similarly, if $G$ is a general linear group, $M=\bigoplus_{i\ge 0} M_i$ a graded $G$-module with a good filtration, and $\nabla_G(\chi)$ has finite good filtration multiplicity in $M$, then the {\it graded good filtration multiplicity} of $\nabla_G(\chi)$ in $M$ is the polynomial $\sum_i (M_i:\nabla_G(\chi))z^i$, where $(M_i:\nabla_G(\chi))$ is the good filtration multiplicity of $\nabla_G(\chi)$ in $M_i$. Note that by the above the graded good filtration multiplicity of $\nabla_G(\chi)$ in $M$ is the graded dimension of $M^U_\chi$. 
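The following minimal example, added for illustration, fixes the idea of a graded dimension. \begin{exagl} The truncated polynomial ring $M=k[x]/(x^3)$, graded by degree, has $\dim M_0=\dim M_1=\dim M_2=1$ and $M_i=0$ for $i\ge 3$, so its graded dimension is $1+z+z^2$. \end{exagl}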
We say that one graded dimension or multiplicity is $\le$ another if this is true coefficient-wise. \subsection{Good pairs} Recall from \cite{Don1} that an affine variety $V$ on which a reductive group $G$ acts is called {\it good} if $k[V]$ has a good filtration. Furthermore, if $A$ is a closed $G$-stable subvariety of $V$, then $(V,A)$ is called a {\it good pair of $G$-varieties} if the vanishing ideal of $A$ in $k[V]$ has a good filtration. In this case $A$ is itself a good $G$-variety. If $(V,A)$ is a good pair of $G$-varieties, then the restriction map $k[V]^U_\chi\to k[A]^U_\chi$ is surjective by \cite[II.4.13]{Jan}. \subsection{Skew Young diagrams and tableaux} For $\lambda$ a partition of $n$ we denote the nilpotent orbit which consists of the matrices whose Jordan normal form has block sizes $\lambda_1,\cdots,\lambda_{l(\lambda)}$, by $\mc O_\lambda$. For $\lambda,\mu$ partitions of $n$, we say that $\lambda\ge\mu$ if $\sum_{j=1}^i\lambda_j\ge\sum_{j=1}^i\mu_j$ for $i=1,\ldots,n-1$. This order is called the {\it dominance order}. In \cite[Prop~1.6]{Ger} it was proved that $\ov{\mc O}_\lambda\supseteq\mc O_\mu$ if and only if $\lambda\ge\mu$. Here $\ov{\mc O}_\lambda$ denotes the closure of the orbit $\mc O_\lambda$. Since $\mc N_{n,m-1}$ is the union of the $\mc O_\lambda$ with $\lambda_1\le m$, it follows easily that $\mc N_{n,m-1}=\ov{\mc O}_{m^qr}$, where $q$ and $r$ are the quotient and the remainder of the division of $n$ by $m$. We will denote the transpose of a partition $\lambda$ by $\lambda'$ and we will identify each partition $\lambda$ with the corresponding Young diagram $\{(i,j)\,|\,1\le i\le l(\lambda),1\le j\le\lambda_i\}$. The $(i,j)\in\lambda$ are called the {\it boxes} or {\it cells} of $\lambda$. More generally, if $\lambda,\mu$ are partitions with $\lambda\supseteq\mu$, then we denote the diagram $\lambda$ with the boxes of $\mu$ removed by $\lambda/\mu$ and call it the {\it skew Young diagram} associated to the pair $(\lambda,\mu)$. 
Of course the skew diagram $\lambda/\mu$ does not determine $\lambda$ and $\mu$. We denote the number of boxes in a skew diagram $E$ by $|E|$. We define $\Delta_t$ to be the diagram $$\begin{ytableau} \none&\none&\ \\ \none&\none[\iddots]&\none[\text {\hspace{3cm} ($t$ boxes)\,.}]\\ \ &\none&\none \end{ytableau}$$ \smallskip Let $E$ be a skew diagram with $t$ boxes. A {\it skew tableau} of shape $E$ is a mapping $T:E\to \mb N=\{1,2,\ldots\}$. A skew tableau of shape $E$ is called {\it row-ordered} if its entries are weakly increasing along rows, {\it strictly row-ordered} if its entries are strictly increasing along rows, and it is called {\it ordered} if its entries are weakly increasing along rows and along columns. The notions column-ordered and strictly column-ordered are defined in a completely analogous way. A skew tableau of shape $E$ is called {\it semi-standard} if its entries are weakly increasing along the rows and strictly increasing along the columns, and it is called {\it row semi-standard} if its entries are strictly increasing along the rows and weakly increasing along the columns. It is called a {\it $t$-tableau} if its entries are the numbers $1,\ldots,t$ (so the entries must be distinct) and it is called {\it standard} if it is a $t$-tableau and its entries are (strictly) increasing along rows and along columns. We will associate to $E$ two special skew tableaux $T_E$ and $S_E$ as follows. We define $T_E$ by filling in the numbers $1,\ldots,t$ row by row from left to right and top to bottom and we define $S_E$ by filling the boxes in the $i$-th row with $i$'s. So $T_E$ is standard and $S_E$ is semi-standard. Two tableaux $S$ and $T$ of shape $E$ are called {\it row equivalent} if, for each $i$, the $i$-th row of $S$ is a permutation of the $i$-th row of $T$. The notion of column equivalence is defined in a completely analogous way. 
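The following example, added for illustration, shows the tableaux $T_E$ and $S_E$ for a small skew shape. \begin{exagl} Let $E=(3,2)/(1)$, a skew diagram with $t=4$ boxes. Then $$T_E=\begin{ytableau} \none&1&2\\ 3&4 \end{ytableau}\qquad\text{and}\qquad S_E=\begin{ytableau} \none&1&1\\ 2&2 \end{ytableau}\,.$$ Indeed $T_E$ is standard and $S_E$ is semi-standard. \end{exagl}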
Finally, if $m$ is the biggest integer occurring in a tableau $T$, or $0$ if $T$ is empty, then the {\it weight} of $T$ is the $m$-tuple whose $i$-th component is the number of occurrences of $i$ in $T$. Sometimes we will also consider the weight of $T$ as an $m'$-tuple for some $m'\ge m$ by extending it with zeros. \section{Transmutation and semi-invariants in characteristic $p$}\label{s.charp} Let $r,s$ be integers $\ge0$ with $r+s\le n$. We denote the variety of pairs $(A,B)\in\Mat_{rn}\times\Mat_{ns}$ with $AB=0$ by $Y_{r,s,n}$ and for $m$ an integer $\ge 2$ we define the maps $\varphi_{r,s,n,m}$ and $\ov\varphi_{r,s,n,m}$ by \begin{align*} \varphi_{r,s,n,m}:(A,B,X)\mapsto&(AB,AXB,\ldots,AX^mB)\\ &:\Mat_{rn}\times\Mat_{ns}\times\Mat_n\to\Mat_{rs}\times\Mat_{rs}^m\\ \ov\varphi_{r,s,n,m}:(A,B,X)\mapsto&(s_1(X),\ldots,s_n(X),\varphi_{r,s,n,m}(A,B,X))\\ &:\Mat_{rn}\times\Mat_{ns}\times\Mat_n\to k^n\times\Mat_{rs}\times\Mat_{rs}^m\,. \end{align*} We will denote several of the restrictions of these maps by the same symbol. The group $\GL_{r,s,n}:=\GL_r\times\GL_s\times\GL_n$ acts on $\Mat_{rn}\times\Mat_{ns}$ via $(S,T,U)\cdot(A,B)=(SAU^{-1},UBT^{-1})$ and on $\Mat_{rn}\times\Mat_{ns}\times\Mat_n$ via $(S,T,U)\cdot(A,B,X)=(SAU^{-1},UBT^{-1},UXU^{-1})$. Note that $Y_{r,s,n}$ is a $\GL_{r,s,n}$-stable closed subvariety of $\Mat_{rn}\times\Mat_{ns}$. Note also that $\varphi_{r,s,n,m}$ and $\ov\varphi_{r,s,n,m}$ are equivariant for the action of $\GL_{r,s,n}$ if we let $\GL_n$ act trivially on $ k^n\times\Mat_{rs}\times\Mat_{rs}^m$ and $\GL_r\times\GL_s$ trivially on $k^n$ and via its obvious diagonal action on $\Mat_{rs}\times\Mat_{rs}^m$. We consider $\Mat_{rs}\times\Mat_{rs}^m$ as a closed subvariety of $k^n\times\Mat_{rs}\times\Mat_{rs}^m$ by taking the first $n$ scalar components zero and we consider $\Mat_{rs}^m$ as a closed subvariety of $\Mat_{rs}\times\Mat_{rs}^m$ by taking the first matrix component the zero matrix. 
So $\varphi_{r,s,n,m}=\ov\varphi_{r,s,n,m}$ on $\Mat_{rn}\times\Mat_{ns}\times\mc N_n$ and $\varphi_{r,s,n,m}(Y_{r,s,n}\times\mc N_n)\subseteq\Mat_{rs}^m$. If $l\ge m$, then we consider $\Mat_{rs}^m$ as a closed subvariety of $\Mat_{rs}^l$ by extending an $m$-tuple of $r\times s$ matrices with zero matrices to an $l$-tuple of $r\times s$ matrices. So $\varphi_{r,s,n,l}=\varphi_{r,s,n,m}$ on $\Mat_{rn}\times\Mat_{ns}\times\mc N_{n,m}$ if $l\ge\min(m,n-1)$. When $r$ and $s$ are fixed we denote the image $\varphi_{r,s,n,m}(Y_{r,s,n}\times\mc N_{n,m})\subseteq\Mat_{rs}^m$ by $W_{n,m}$. We will use the embedding of $\Mat_n$ in $Y_{r,s,n}\times\Mat_n$ which is given by $$X\mapsto(E_r,F_s,X)\,,$$ where $E_r=\begin{bmatrix}0\Nts&I_r\end{bmatrix}\in\Mat_{rn}$, $F_s=\begin{bmatrix}I_s\\0\end{bmatrix}\in\Mat_{ns}$. Then $\varphi_{r,s,n,m}$ can be restricted to $\Mat_n$ and $\varphi_{r,s,n,m}(X)$ consists of the lower left $r\times s$ corners of the first $m$ powers of $X$. Any point of $Y_{r,s,n}$ is contained in an irreducible curve which also contains a point $(A,B)\in Y_{r,s,n}$ with $A$ and $B$ of maximal rank $r$ and $s$ (see e.g. \cite[p38]{Br}) and if $(A,B)$ is such a point, then it is easy to see that $g\cdot(A,B)=(E_r,F_s)$ for some $g\in\GL_n$. It follows that $Y_{r,s,n}$ is irreducible and that $\varphi_{r,s,n,m}(\mc N_{n,m})$ is dense in $W_{n,m}$. We will use the $\GL_{r,s,n}$-variety $Y_{r,s,n}$ as the catalyst for the transmutation from $\GL_n$-varieties to $\GL_r\times\GL_s$-varieties. We will mainly be interested in applying this transmutation to the varieties $\mc N_{n,m}$. Assertion (ii) of the next proposition, which is an analogue in arbitrary characteristic of \cite[Cor.~4.3]{Br}, says in particular that $W_{n,m}$ is the transmuted variety of $\mc N_{n,m}$. 
\begin{propgl}\label{prop.quotient}\ \begin{enumerate}[{\rm (i)}] \item If $m\ge n-1$, then $\ov\varphi_{r,s,n,m}:\Mat_{rn}\times\Mat_{ns}\times\Mat_n\to k^n\times\Mat_{rs}\times\Mat_{rs}^m$ is a $\GL_n$-quotient morphism onto its image. \item If $r+s\le n$ and $\nu$ is a partition of $n$ with $\nu_1\le m+1$, then $Y_{r,s,n}\times\ov{\mc O}_\nu$ is a good $\GL_{r,s,n}$-variety and $\varphi_{r,s,n,m}:Y_{r,s,n}\times\ov{\mc O}_\nu\to\Mat_{rs}^m$ is a $\GL_n$-quotient morphism onto its image. \end{enumerate} \end{propgl} \begin{proof} (i).\ If we apply \cite[Prop]{Don2} to the quiver with two nodes $x_1$ and $x_2$ of dimensions $1$ and $n$ with $s$ arrows from $x_1$ to $x_2$, $1$ loop at $x_2$ and $r$ arrows from $x_2$ to $x_1$, then we obtain that the algebra of $\GL_n$-invariants of $s$ vectors, $r$ covectors and $1$ matrix is generated by $s_1(X),\ldots,s_n(X)$ and the scalar products $\la f,X^iv\ra$, where $f$ is one of the covectors, $v$ is one of the vectors, $X$ is the matrix and $i$ is $\ge0$. Of course we may assume that $i<n$ by the Cayley-Hamilton Theorem. So we obtain the assertion.\\ (ii).\ As is well-known $\Mat_{rn}$ is a good $\GL_r\times\GL_n$-variety and therefore it is also a good $\GL_{r,s,n}$-variety if we let $\GL_s$ act trivially. Similarly, $\Mat_{ns}$ is also a good $\GL_{r,s,n}$-variety and $\Mat_n$ is a good $\GL_{r,s,n}$-variety if we let $\GL_r\times\GL_s$ act trivially. So, by Mathieu's result on tensor products \cite[Cor.~4.2.14]{BK}, $\Mat_{rn}\times\Mat_{ns}\times\Mat_n$ is a good $\GL_{r,s,n}$-variety. Since $r+s\le n$, $Y_{r,s,n}$ is a good complete intersection in $\Mat_{rn}\times\Mat_{ns}$. So $(\Mat_{rn}\times\Mat_{ns},Y_{r,s,n})$ is a good pair of $\GL_{r,s,n}$-varieties by \cite[Prop.~1.3b(i)]{Don1}. Furthermore, $(\Mat_n,\ov{\mc O}_\nu)$ is a good pair of $\GL_n$-varieties by \cite[Thm~2.2a(ii)]{Don1} and therefore also a good pair of $\GL_{r,s,n}$-varieties if we let $\GL_r\times\GL_s$ act trivially. 
So $(\Mat_{rn}\times\Mat_{ns}\times\Mat_n,Y_{r,s,n}\times\ov{\mc O}_\nu)$ is a good pair of $\GL_{r,s,n}$-varieties by \cite[Prop~1.3e(i)]{Don1}. This implies the first assertion and if we combine it with (i) and \cite[Prop~1.4a]{Don1} we obtain the second assertion. \end{proof} \begin{propgl}\label{prop.good_pair}\ Assume $r+s\le n$ and let $\nu$ be a partition of $n$ with $\nu_1\le m+1$. Then $\big(\Mat_{rs}^m,\varphi_{r,s,n,m}(Y_{r,s,n}\times\ov{\mc O}_\nu)\big)$ is a good pair of $\GL_r\times\GL_s$-varieties. \end{propgl} \begin{proof} Choose $N\ge (m+1)\max(r,s)$. By the argument in the proof of \cite[Thm~5.1]{Br} we have $\varphi_{r,s,N,m}(\mc N_{N,m})=\Mat_{rs}^m$ and therefore we certainly have $\varphi_{r,s,N,m}(Y_{r,s,N}\times\mc N_{N,m})=\Mat_{rs}^m$. In the proof of Proposition~\ref{prop.quotient} we have seen that $(\Mat_{rN}\times\Mat_{Ns}\times\Mat_N,Y_{r,s,N}\times\mc N_{N,m})$ is a good pair of $\GL_{r,s,N}$-varieties. So by Proposition~\ref{prop.quotient}(i) and \cite[Prop.~1.4(a)]{Don1}\\ (a).\ $(\ov\varphi_{r,s,N,N-1}(\Mat_{rN}\times\Mat_{Ns}\times\Mat_N),\Mat_{rs}^m)$ is a good pair of $\GL_r\times\GL_s$-varieties.\\ Put $Z_{N,n}=\{(B,X)\in\Mat_{Ns}\times\Mat_N\,|\,\rk(B|X)\le n\}$. If we identify $\Mat_{Ns}\times\Mat_N$ with $\Mat_{N,s+N}$, then $(\Mat_{Ns}\times\Mat_N,Z_{N,n})$ is a good pair of $\GL_N\times\GL_{s+N}$-varieties by \cite[Prop.~1.4(c)]{Don1}. By \cite[Cor.~4.2.15]{BK} it is then a good pair of $\GL_N\times(\GL_s\times\GL_N)$-varieties and by \cite[Cor.~4.2.14]{BK} it is then also a good pair of $\GL_s\times\GL_N$-varieties if we let $\GL_N$ act diagonally. It will also be a good pair of $\GL_{r,s,N}$-varieties if we let $\GL_r$ act trivially. So by \cite[Prop~1.3e(i)]{Don1} $(\Mat_{rN}\times\Mat_{Ns}\times\Mat_N,\Mat_{rN}\times Z_{N,n})$ is a good pair of $\GL_{r,s,N}$-varieties. 
It now follows from \cite[Prop.~1.4a]{Don1} that\\ (b).\ $(\ov\varphi_{r,s,N,N-1}(\Mat_{rN}\times\Mat_{Ns}\times\Mat_N),\ov\varphi_{r,s,N,N-1}(\Mat_{rN}\times Z_{N,n}))$ is a good pair of $\GL_r\times\GL_s$-varieties.\\ Let $(e_1,\ldots,e_N)$ be the standard basis of $k^N$ and let $(A,B,X)\in\Mat_{rN}\times Z_{N,n}$. Then $\dim(\Im(B)+\Im(X))\le n$, so for some $g\in\GL_N$ we have $\Im(gB)+\Im(gX)=\Im(gB)+\Im(gXg^{-1})\subseteq\la e_1,\ldots,e_n\ra$. Write $$g\cdot A=\bmat{A_1&A_2},\ g\cdot X=\bmat{X_1&X_2\\0&0},\ g\cdot B=\bmat{B_1\\0}\,,$$ $A_1\in\Mat_{rn}, X_1\in\Mat_n, B_1\in\Mat_{ns}$. Then a simple computation shows that $\ov\varphi_{r,s,N,N-1}(A,B,X)=\ov\varphi_{r,s,n,N-1}(A_1,B_1,X_1)$, so\\ (c).\ $\ov\varphi_{r,s,N,N-1}(\Mat_{rN}\times Z_{N,n})=\ov\varphi_{r,s,n,N-1}(\Mat_{rn}\times\Mat_{ns}\times\Mat_n)$,\\ since the inclusion $\supseteq$ is obvious. In the proof of Proposition~\ref{prop.quotient} we saw that $(\Mat_{rn}\times\Mat_{ns}\times\Mat_n,Y_{r,s,n}\times\ov{\mc O}_\nu)$ is a good pair of $\GL_{r,s,n}$-varieties. So by \cite[Prop.~1.4a]{Don1} we have\\ (d).\ $(\ov\varphi_{r,s,n,N-1}(\Mat_{rn}\times\Mat_{ns}\times\Mat_n),\varphi_{r,s,n,m}(Y_{r,s,n}\times\ov{\mc O}_\nu))$ is a good pair of $\GL_r\times\GL_s$-varieties.\\ Combining (a)-(d) and \cite[Lem.~1.3a(ii)]{Don1} we obtain the assertion. \end{proof} \begin{remsgl} 1.\ Similarly to the proof of Proposition~\ref{prop.good_pair}, one can show that, for arbitrary $r$ and $s$, $(\Mat_{rs}\times\Mat_{rs}^m,\varphi_{r,s,n,m}(\Mat_{rn}\times\Mat_{ns}\times\ov{\mc O}_\nu))$ is a good pair of $\GL_r\times\GL_s$-varieties.\\ 2.\ The result \cite[Thm~2.2a(ii)]{Don1} can also be deduced from \cite[Thm.~4.3]{MvdK} in combination with \cite[Ex. 4.2.E.2]{BK}. The point is that the splitting from \cite{MvdK} is easily seen to be $B$-canonical. \end{remsgl} By Proposition~\ref{prop.quotient}(ii) we have $W_{n,m}\nts\cong\nts Y_{r,s,n}\times^{\GL_n}\mc N_{n,m}\nts:=\nts(Y_{r,s,n}\times\mc N_{n,m})/\nts/\GL_n$. 
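The following example, added for illustration, spells out the correspondence of labels $\chi=[\lambda,\mu]\leftrightarrow(-\mu^{\rm rev},\lambda)$ from the introduction in a small case. \begin{exagl} Let $n=4$, $r=s=2$ and $\lambda=\mu=(2,1)$. Then $\chi=[\lambda,\mu]=(2,1,-1,-2)$ and, since $\mu^{\rm rev}=(1,2)$ as an $r$-tuple, the corresponding $\GL_r\times\GL_s$-weight is $(-\mu^{\rm rev},\lambda)=((-1,-2),(2,1))$. \end{exagl}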
It is well-known that the formal character of $Y_{r,s,n}$ is independent of the characteristic (this can also be deduced from the formula in \cite[Prop.~1.3b(ii)]{Don1}). So by \cite[Thm~6.3]{KV} and \cite[Thm~9]{H} (see also \cite[Thm~3.3]{Br}) the sections in a good $\GL_{r,s,n}$-filtration of $k[Y_{r,s,n}]$ are precisely the induced $\GL_{r,s,n}$-modules $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)\ot\nabla_{\GL_n}([\mu,\lambda])$, each occurring once, where $\lambda$ and $\mu$ are partitions with $l(\mu)\le r$ and $l(\lambda)\le s$. Now if $V$ is a good $\GL_n$-variety, then $Y_{r,s,n}\times^{\GL_n}V$ is a good $\GL_r\times\GL_s$-variety by \cite[Prop~1.2e(iii)]{Don1} and, by the above and a simple character calculation, the good filtration multiplicity of $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)$ in $k[Y_{r,s,n}\times^{\GL_n}V]$ is equal to that of $\nabla_{\GL_n}([\lambda,\mu])$ in $k[V]$. Note here that $\nabla_{\GL_n}([\mu,\lambda])^*\cong\Delta_{\GL_n}([\lambda,\mu])$. Loosely speaking, each copy of $\nabla_{\GL_n}([\lambda,\mu])$ in $k[V]$ is replaced by $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)$ if $l(\mu)\le r$ and $l(\lambda)\le s$ and removed otherwise. We can apply this to $V=\mc N_{n,m}$. If we give the piece of $k[\Mat_{rs}^m]$ of multidegree $\nu$ total degree $\sum_{i=1}^m i\nu_i$, then the vanishing ideals of the varieties $W_{n,m}$ are graded, so their coordinate rings will inherit the above total grading. The aforementioned equalities of good filtration multiplicities for $k[\mc N_{n,m}]$ and $k[W_{n,m}]$ are then in fact equalities of graded good filtration multiplicities. Furthermore, the graded dimension of $k[\mc N_{n,m}]^{U_n}_{[\lambda,\mu]}$ is increasing in $m$, and by the above it is also increasing in $n$, since $W_{n,m}\subseteq W_{N,m}$ whenever $N\ge n$. It follows that the graded dimension of $k[\mc N_n]^{U_n}_{[\lambda,\mu]}$ is increasing in $n$. 
This was observed by Brylinski in \cite{Br}. The theorem below says that to find finite spanning sets for the highest weight vectors in the coordinate ring of the $\GL_n$-variety $\mc N_{n,m}$, it is enough to do this for the $\GL_r\times\GL_s$-variety $\Mat_{rs}^m$. We note that, since $k[\Mat_{rs}^m]$ has a good filtration and its formal character is independent of the characteristic, the good filtration multiplicity $\dim k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ of $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)$ in $k[\Mat_{rs}^m]$ is independent of the characteristic of $k$. A simple character calculation combined with \cite[I.7.10(b)]{Mac} shows that the multigraded good filtration multiplicity of $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)$ in $k[\Mat_{rs}^m]$ is $s_\lambda\ast s_\mu(z_1,\ldots,z_m)$, where $s_\lambda$ is the Schur function associated to $\lambda$, $\ast$ denotes the internal product of Schur functions and $z_i$ is a grading variable for the $i$-th matrix component. So this multiplicity is $0$ if $|\lambda|\ne|\mu|$ or if $s_\lambda\ast s_\mu$ only contains Schur functions associated to partitions of length $>m$. \begin{thmgl}\label{thm.surjective_pullback} Let $\chi=[\lambda,\mu]$ be a dominant weight in the root lattice, $l(\mu)\le r$, $l(\lambda)\le s$, $r+s\le n$, and let $\nu$ be a partition of $n$ with $\nu_1\le m+1$. Then the pull-back $$k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}\to k[\ov{\mc O}_\nu]^{U_n}_\chi$$ along $\varphi_{r,s,n,m}:\ov{\mc O}_\nu\to\Mat_{rs}^m$ is surjective, and in case $\ov{\mc O}_\nu=\mc N_{n,m}$ and $n\ge (m+1)\max(r,s)$ it is an isomorphism. \end{thmgl} \begin{proof} For a matrix $M$ denote by $M_{r\rfloor,\lfloor s}$ the lower left $r\times s$ corner of $M$ and define $M_{r\rfloor,r\rfloor}$ and $M_{\lfloor s,\lfloor s}$ similarly.
Then we have $$(SXS^{-1})_{r\rfloor,\lfloor s}=S_{r\rfloor,r\rfloor}X_{r\rfloor,\lfloor s}(S_{\lfloor s,\lfloor s})^{-1}$$ and therefore $$\varphi_{r,s,n,m}(SXS^{-1})=S_{r\rfloor,r\rfloor}\varphi_{r,s,n,m}(X)(S_{\lfloor s,\lfloor s})^{-1}$$ for any $X\in\Mat_n$ and any upper triangular $S\in\GL_n$. So indeed the pull-back along $\varphi_{r,s,n,m}$ maps highest weight vectors to highest weight vectors and it is an easy exercise to see that the weights correspond as stated in the theorem. Since $(\mc N_{n,m},\ov{\mc O}_\nu)$ is a good pair of $\GL_n$-varieties by \cite[Thm.~2.1c, Lem.~1.3a(ii)]{Don1} we may assume $\ov{\mc O}_\nu=\mc N_{n,m}$. By the discussion before the theorem, based on Proposition~\ref{prop.quotient}, we know that the good filtration multiplicity of $\nabla_{\GL_r}(-\mu^{\rm rev})\ot\nabla_{\GL_s}(\lambda)$ in $k[W_{nm}]$ is equal to that of $\nabla_{\GL_n}([\lambda,\mu])$ in $k[\mc N_{n,m}]$. Put differently, we know that $k[W_{nm}]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ and $k[\mc N_{n,m}]^{U_n}_\chi$ have the same dimension. As we have seen before, $\varphi_{r,s,n,m}(\mc N_{n,m})$ is dense in $W_{nm}$, so the pull-back $k[W_{nm}]\to k[\mc N_{n,m}]$ along $\varphi_{r,s,n,m}$ is injective and induces an isomorphism between $k[W_{nm}]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ and $k[\mc N_{n,m}]^{U_n}_\chi$. By the argument in the proof of \cite[Thm~5.1]{Br} we have $\varphi_{r,s,n,m}(\mc N_{n,m})=\Mat_{rs}^m$ if $n\ge (m+1)\max(r,s)$, which gives us the final assertion. So it suffices to show that the restriction $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}\to k[W_{nm}]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ is surjective and this follows from Proposition~\ref{prop.good_pair}. \end{proof} \begin{remsgl}\label{rems.surjective_pullback} 1.\ Assume $m=n-1$. If $f\in k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ is homogeneous for the total grading defined above, then the pull-back of $f$ along $\varphi_{r,s,n,m}:\mc N_n\to\Mat_{rs}^m$ has an obvious lift to $k[\Mat_n]^{U_n}_{[\lambda,\mu]}$, namely the pull-back along $\varphi_{r,s,n,m}:\Mat_n\to\Mat_{rs}^m$. This follows from the fact that the displayed formulas at the beginning of the proof of Theorem~\ref{thm.surjective_pullback} hold for any $X\in\Mat_n$. So if we have a spanning set of $k[\mc N_n]^{U_n}_{[\lambda,\mu]}$ which is pulled back from $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ along $\varphi_{r,s,n,m}$, then we are always in the situation to apply Lemma~\ref{lem.reduction_to_nilpotent_cone}.\\ 2.\ With the total grading of $k[\Mat_{rs}^m]$ defined above the pull-back along $\varphi_{r,s,n,m}:\mc N_n\to\Mat_{rs}^m$ is a homomorphism of graded vector spaces. By \cite[Thm.~2.14]{BCHLLS} and the independence of the characteristic of the graded formal character of $k[\mc N_n]$, the good filtration multiplicity of $\nabla_{\GL_n}([\lambda,\mu])$ in the degree $d$ piece of $k[\mc N_n]$ is the same for all $n\ge l(\lambda)+l(\mu)+d-t$, where $t=|\lambda|=|\mu|$.
From this, Theorem~\ref{thm.surjective_pullback} and the fact that the graded dimension of $k[\mc N_{n,m}]^{U_n}_{[\lambda,\mu]}$ is increasing in $m$ and $n$ it follows that the pull-back $k[\Mat_{rs}^{n-1}]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}\to k[\mc N_n]^{U_n}_{[\lambda,\mu]}$ will be an isomorphism in degree $d$ if $n\ge l(\lambda)+l(\mu)+d-t$.\\ \end{remsgl} The space $\Mat_{rs}^m=\Mat_{rs}\ot k^m$ has an extra action of the group $\GL_m$ which commutes with the action of $\GL_r\times\GL_s$. For convenience we choose the action induced by the action $g\cdot v=vg^{-1}$ on $k^m$, where $v$ is considered as a row vector.
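The internal product $s_\lambda\ast s_\mu$ that governs these multiplicities can be expanded in Schur functions from the character table of $\Sym_t$, via $\langle s_\lambda\ast s_\mu,s_\nu\rangle=\sum_\rho z_\rho^{-1}\chi^\lambda(\rho)\chi^\mu(\rho)\chi^\nu(\rho)$, the sum being over cycle types $\rho$ of $t$. The following small sketch is our own illustration (not part of the text; all function names are ours) and computes these coefficients with the Murnaghan-Nakayama rule:

```python
import math
from fractions import Fraction
from functools import lru_cache

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z(rho):
    """Order of the centraliser of a permutation of cycle type rho."""
    mult = {}
    for part in rho:
        mult[part] = mult.get(part, 0) + 1
    res = 1
    for i, m in mult.items():
        res *= i ** m * math.factorial(m)
    return res

@lru_cache(maxsize=None)
def chi(lam, rho):
    """chi^lam evaluated at cycle type rho, by the Murnaghan-Nakayama rule
    (border strip removal implemented on beta-sets)."""
    if not rho:
        return 1
    k, rest = rho[0], rho[1:]
    m = len(lam)
    beta = [lam[i] + (m - 1 - i) for i in range(m)]
    total = 0
    for b in beta:
        if b - k < 0 or (b - k) in beta:
            continue
        height = sum(1 for c in beta if b - k < c < b)
        newbeta = sorted([c for c in beta if c != b] + [b - k], reverse=True)
        newlam = tuple(newbeta[i] - (m - 1 - i) for i in range(m))
        total += (-1) ** height * chi(tuple(p for p in newlam if p > 0), rest)
    return total

def internal_coeff(lam, mu, nu):
    """Coefficient of s_nu in s_lam * s_mu, i.e. <chi^lam.chi^mu, chi^nu>."""
    t = sum(lam)
    s = sum(Fraction(chi(lam, r) * chi(mu, r) * chi(nu, r), z(r))
            for r in partitions(t))
    assert s.denominator == 1
    return int(s)
```

For example, one recovers $s_{(2,1)}\ast s_{(2,1)}=s_{(3)}+s_{(2,1)}+s_{(1^3)}$, as well as the classical identity $s_{(1^t)}\ast s_\lambda=s_{\lambda'}$ (tensoring with the sign character transposes the shape).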
Had we used the more obvious action $g\cdot v=gv$ on $k^m$, then this would amount to twisting the above action with the inverse transpose. Let $\lambda$ be a partition of $t\le r$ with $l(\lambda)\le s$ and let $T_\lambda$ be the tableau of shape $\lambda$ defined in Section~\ref{s.prelim}. For $T$ a tableau of shape $\lambda$ with entries $\le m$ we define the semi-invariant $u_T\in k[\Mat_{rs}^m]$ by \begin{align*} &(A_1,\ldots,A_m)\mapsto\\ \sum_S\det\big(A_{S_{11}}e_1|\cdots|&A_{S_{1\lambda_1}}e_1|\cdots|A_{S_{l(\lambda)1}}e_{l(\lambda)}|\cdots| A_{S_{l(\lambda)\lambda_{l(\lambda)}}}e_{l(\lambda)}\big)_{t\rfloor} \end{align*} and the semi-invariant $v_T\in k[\Mat_{sr}^m]$ by \begin{align*} &(A_1,\ldots,A_m)\mapsto\\ \sum_S\det\big(A_{S_{11}}'e_s|\cdots|A_{S_{1\lambda_1}}'&e_s|\cdots|A_{S_{l(\lambda)1}}'e_{s-l(\lambda)+1}|\cdots| A_{S_{l(\lambda)\lambda_{l(\lambda)}}}'e_{s-l(\lambda)+1}\big)_{\lfloor t}\,, \end{align*} where the sums are over all tableaux $S$ in the orbit of $T$ under the column stabiliser of $T_\lambda$, the subscripts ``$t\rfloor$" and ``$\lfloor t$" mean that we take the last resp. first $t$ rows, the $S_{ij}$ denote the entries of $S$, the $e_i$ are the standard basis vectors of $k^s$, and $A_i'$ denotes the transpose of $A_i$. \vfill\eject \begin{thmgl}\label{thm.basis_special_weights} Let $\lambda$ be a partition of $t\le r$ with $l(\lambda)\le s$ and $\lambda_1\le m$. Then \begin{enumerate}[{\rm (i)}] \item the $u_T$ with $T$ row semi-standard form a basis of $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-(1^t)^{\rm rev},\lambda)}$\,, \item the $v_T$ with $T$ row semi-standard form a basis of $k[\Mat_{s\,r}^m]^{U_s\times U_r}_{(-\lambda^{\rm rev},1^t)}$ \end{enumerate} and both vector spaces are, with the $\GL_m$-action defined above, isomorphic to the Weyl module of highest weight $\lambda'$.
\end{thmgl} \begin{proof} (i).\ Put $F=k^m$, let $(f_1,\ldots,f_m)$ be the standard basis of $F$ and put $\bigwedge^{\lambda}F=\bigwedge^{\lambda_1}F\ot\cdots\ot\bigwedge^{\lambda_{l(\lambda)}}F$. For $S$ a tableau of shape $\lambda$ with entries $\le m$ we put $$f_S=f_{S_{11}}\wedge\cdots\wedge f_{S_{1\lambda_1}}\ot\cdots\ot f_{S_{l(\lambda)1}}\wedge\cdots\wedge f_{S_{l(\lambda)\lambda_{l(\lambda)}}}\,.$$ Then the $f_S$ with the rows of $S$ strictly increasing form a basis of $\bigwedge^{\lambda}F$. From the anti-symmetry properties of the $f_S$ it is clear that there exists a unique linear mapping $\psi:\bigwedge^{\lambda}F\to k[\Mat_{rs}^m]$ with\quad $\psi(f_S)=$ $$(A_1,\ldots,A_m)\mapsto\det\big(A_{S_{11}}e_1|\cdots|A_{S_{1\lambda_1}}e_1|\cdots|A_{S_{l(\lambda)1}}e_{l(\lambda)}|\cdots| A_{S_{l(\lambda)\lambda_{l(\lambda)}}}e_{l(\lambda)}\big)_{t\rfloor}$$ for all tableaux $S$ of shape $\lambda$ with entries $\le m$. Furthermore, it is easy to check that $\psi$ is $\GL_m$-equivariant and that the $u_T$, $T$ row semi-standard, are the images of the Carter-Lusztig basis elements of the Weyl module of highest weight $\lambda'$ inside $\bigwedge^{\lambda}F$, see \cite[5.3b]{Gr} and \cite[Thm 3.5]{CL}. So to prove (i) and the final assertion in case (i) it suffices to show that $\psi$ is injective and $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-(1^t)^{\rm rev},\lambda)}$ has dimension equal to that of the Weyl module of highest weight $\lambda'$. Since the space of highest weight vectors has dimension $s_{1^t}\ast s_\lambda(1,\ldots,1)=s_{\lambda'}(1,\ldots,1)$ ($m$ ones) the latter is indeed true, so it remains to prove the injectivity of $\psi$. To prove this we will associate to each tableau $T$ of shape $\lambda$ with entries $\le m$ and strictly increasing rows an $m$-tuple of $r\times s$-matrices $A(T)$ such that the matrix $\big(\psi(f_S)(A(T))\big)_{S,T}$ is the identity matrix.
We define $A(T)$ as follows: $$A(T)_{T_{ij}}(e_i)=e_{(T_\lambda)_{ij}}\text{\quad and\quad} A(T)_h(e_i)=0\text{\ if\ } h\notin \text{$i$-th row of $T$ or $l(\lambda)<i\le s$}\,,$$ where we denote the standard basis vectors of $k^{\max(r,s)}$ by $e_1,\ldots,e_{\max(r,s)}$.\footnote{The reader may consider $k^r$ as a subspace of $k^s$ if $r\le s$ and conversely otherwise.} Then clearly $\psi(f_T)(A(T))=1$. Now assume $S\ne T$. Then $S_{ij}\ne T_{ij}$ for certain $i,j$, so $S_{ij}$ does not occur in the $i$-th row of $T$. So $A(T)_{S_{ij}}(e_i)=0$ and therefore $\psi(f_S)(A(T))=0$.\\ (ii).\ Let $\Phi:k[\Mat_{rs}^m]\to k[\Mat_{s\,r}^m]$ be the algebra isomorphism corresponding to the vector space isomorphism $\Mat_{s\,r}^m\to \Mat_{rs}^m$ induced by the vector space isomorphism $A\mapsto P_1A'P_2^{-1}:\Mat_{s\,r}\to\Mat_{rs}$, where $P_1\in\GL_r$ and $P_2\in\GL_s$ are the permutation matrices which are $1$ on the anti-diagonal and $0$ elsewhere. Then $\Phi(k[\Mat_{rs}^m]^{U_r\times U_s}_{(-(1^t)^{\rm rev},\lambda)})=k[\Mat_{s\,r}^m]^{U_s\times U_r}_{(-\lambda^{\rm rev},1^t)}$ and $\Phi(u_T)=\pm v_T$. So (ii) follows from (i). Furthermore, $\Phi$ is $\GL_m$-equivariant, so the final assertion also applies to (ii). \end{proof} \begin{remsgl}\label{rems.basis_special_weights} 1.\ If $\lambda$ or $\mu$ is a row, one can easily find bases of $k[\Mat_{s\,r}^m]^{U_s\times U_r}_{(-\mu^{\rm rev},\lambda)}$. In this case the $\GL_m$-module structure is that of the induced module of highest weight $\lambda$. Unlike the case that $\lambda$ or $\mu$ is a column, the pull-backs of these bases to the nilpotent cone are always bases of $k[\mc N_n]^{U_n}_{[\lambda,\mu]}$. This can be deduced from the proof of \cite[Thm.~2]{T2}.
For example, for the weight $(-\lambda^{\rm rev},1^t)$, $l(\lambda)\le m$, one obtains a basis by taking the ``left anti-canonical bideterminants" $(\tilde T_\lambda\,|\,T)$, $T$ semi-standard of shape $\lambda$ with entries $\le m$, on the $r\times m$ matrix obtained by taking the first column of each matrix component of $\un A\in\Mat_{rs}^m$. Here $\tilde T_\lambda$ is the anti-canonical tableau denoted by $T_\lambda$ in \cite{T2}. Our results on the $\GL_m$-module structure when $\lambda$ or $\mu$ is a row or a column are in accordance with \cite[Sect.~III]{ABW}.\\ 2.\ Combining Theorem~\ref{thm.basis_special_weights} and Theorem~\ref{thm.surjective_pullback} we obtain spanning sets for the spaces $k[\mc N_n]^{U_n}_\chi$, where $\chi$ is of the form $[\lambda,1^t]$ or $[1^t,\lambda]$, i.e. for weights $\chi$ with $\chi_{{}_n}\ge-1$ or with $\chi_{{}_1}\le1$. Assume ${\rm char}(k)=0$.
Then the weights $\chi$ with $\chi_{{}_n}\ge-1$ are related to the coinvariant ring $C_W$ of $W=\Sym_n$ via the generalised Chevalley Restriction Theorem as follows: $$k[\mc N_n]^{U_n}_\chi\cong\Mor_{\GL_n}(\mc N_n,L(\chi)^*)\cong\Mor_W(\mc N_n\cap\t,L(\chi)^*_0)\cong\Hom_W(L(\chi)_0,C_W)\,.$$ Here $\mc N_n\cap\t$ is the scheme-theoretic intersection of $\mc N_n$ and the vector space of diagonal $n\times n$-matrices $\t$. In fact one can replace $\mc N_n$ by an arbitrary nilpotent orbit closure $\ov{\mc O}_\nu$ and $C_W$ by the corresponding coinvariant ring, see \cite{Broer}. This means in particular that the graded dimension of $k[\ov{\mc O}_\nu]^{U_n}_\chi$ is given by $\tilde K_{\ov\lambda',\nu'}(t)$, where $\ov\lambda=\chi+{\bf 1}_n$, ${\bf 1}_n$ the all-one vector of length $n$ and $\tilde K_{\ov\lambda',\nu'}(t)=t^{n(\nu')}K_{\ov\lambda',\nu'}(t^{-1})$, $K_{\ov\lambda',\nu'}(t)$ the Kostka polynomial, as in \cite[p.~248]{Mac}, see e.g. \cite{GP}.\\ 3.\ The dimension of the lowest degree piece of $k[\mc N_n]^{U_n}_\chi$ need not be one: for $\chi=(3,3,0,-2,-2,-2)$, the lowest degree is 9 and the piece of degree 9 has dimension 2. For weights of the form $[\lambda,1^t]$, $[1^t,\lambda]$, $[t,\lambda]$ and $[\lambda,t]$ the dimension of the lowest degree piece is always one. In the first case this follows from the link with the coinvariant algebra mentioned above. By going to bigger $n$ the lowest degree of $k[\mc N_n]^{U_n}_{[\lambda,\mu]}$ may drop: for $\lambda=(4,4,4)$ and $\mu=(3,3,3,3)$ the lowest degree is 18 for $n=7$ and 17 for $n=8$. All this can be calculated with the computer using the Lascoux-Sch\"utzenberger charge on tableaux. \end{remsgl} \section{Coinvariants for Young subgroups and highest weight vectors in characteristic $0$}\label{s.char0} In this section we want to give bases for all the spaces of highest weight vectors in $k[\Mat_{rs}^m]$. We will always assume that $k$ has characteristic $0$.
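As a brief aside on the computer calculations with the Lascoux-Sch\"utzenberger charge mentioned in the remarks above: for a standard word (the case of content $(1^t)$) the charge admits a one-line description. The following sketch is our own illustration (conventions vary; here the reading word of a tableau is taken row by row, bottom row first):

```python
def charge(word):
    """Charge of a standard word (a permutation of 1..t): 1 gets index 0,
    and r+1 gets the index of r, increased by 1 exactly when r+1 stands
    to the right of r in the word; charge is the sum of the indices."""
    pos = {v: i for i, v in enumerate(word)}
    idx, total = 0, 0
    for r in range(2, len(word) + 1):
        if pos[r] > pos[r - 1]:
            idx += 1
        total += idx
    return total

# The two standard tableaux of shape (2,1), with the reading word taken row
# by row from the bottom row upwards: [[1,2],[3]] -> (3,1,2) and
# [[1,3],[2]] -> (2,1,3); their charges 2 and 1 recover
# K_{(2,1),(1,1,1)}(t) = t + t^2.
```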
\subsection{Representations of the symmetric group}\label{ss.Sym} We give a short account of Donin's results \cite{Donin1} on the representations of the symmetric group. He gave certain explicit bases for Hom spaces between skew Specht modules which are useful for the purpose of finding natural spanning sets for the highest weight vectors in $k[\mc N_n]$. We drop the assumption that $k$ is algebraically closed. Let $G$ be a finite group and let $A=kG$ be its group algebra. It has the obvious $\mb Q$-form $A_{\mb Q}=\mb QG$. Denote the symmetric bilinear form on $A$ for which the group elements form an orthonormal basis by $(-,-)$. Since its restriction to $A_{\mb Q}$ is positive definite, its restriction to any $\mb Q$-defined subspace of $A$ will be nondegenerate. Let $a\mapsto a^*$ be the anti-involution of $A$ which extends the inversion of $G$. Then we have $$(ab,c)=(a,cb^*)\text{\ and\ }(ab,c)=(b,a^*c)$$ for all $a,b,c\in A$. To deal with Hom spaces between ideals of $A$ generated by elements that need not be idempotents we need the following lemma. \begin{lemgl}\label{lem.homspaces} Let $a\in A$ and let $M$ be an $A$-module. \begin{enumerate}[{\rm(i)}] \item The map $\varphi:x\ot y\mapsto x^*y:Aa\ot M\to a^*M$ restricts to an isomorphism $(Aa\ot M)^G\stackrel{\sim}{\to} a^*M$. The inverse is given by $\psi:c\mapsto\frac{1}{|G|}\sum_{g\in G}g\ot gc$. \item If $a\in A_{\mb Q}$, then the composite of $\psi$ with the $G$-module isomorphism $x\ot y\mapsto(z\mapsto(x,z) y):Aa\ot M\to\Hom(Aa,M)$ maps $c\in\nts a^*M$ to the ``right multiplication" by $\frac{1}{|G|}c$. \item If $a\in A_{\mb Q}$, then $Aa=Aa^*a$. \end{enumerate} \end{lemgl} \begin{proof} (i).\ Clearly, $\varphi\circ\psi=\id$. Furthermore, we have for all $x,y\in A$ and $z\in M$ $$\sum_{g\in G}gxy\ot gz=\sum_{g\in G}gy\ot gx^*z\,.$$ So if $x\in a^*M$, then $\psi(x)\in(Aa\ot M)^G$. 
Now $(Aa\ot M)^G$ is spanned by elements of the form $c=\sum_{g\in G}gxa\ot gy$, $x\in A$, $y\in M$, and for such a $c$ we have $\psi(\varphi(c))=\psi(|G|(xa)^*y)=\sum_{g\in G}g\ot g(xa)^*y=\sum_{g\in G}gxa\ot gy=c$.\\ (ii).\ First note that the given map from $Aa\ot M$ to $\Hom(Aa,M)$ is obtained by combining the standard isomorphism $(Aa)^*\ot M\stackrel{\sim}{\to}\Hom(Aa,M)$ with the isomorphism $x\mapsto(x,-):Aa\stackrel{\sim}{\to}(Aa)^*$, so it is indeed an isomorphism. Now we compose $\psi$ with this isomorphism. Then $c\in a^*M$ goes to the map $z\mapsto\frac{1}{|G|}\sum_{g\in G}(g,z)gc=z\frac{1}{|G|}c$.\\ (iii).\ Let $\rho_a$ denote the right multiplication by $a$. Then $\rho_{a^*}=\rho_a'$, the transpose of $\rho_a$ with respect to the form $(-,-)$. So $Aa^*a=\Im(\rho_a\rho_a')=\Im(\rho_a)=Aa$. Here the second equality follows from the corresponding equality on $A_{\mb Q}$ on which our form is positive definite. \end{proof} From now on $G$ will be the symmetric group $\Sym_t$ of rank $t$. To describe certain Hom spaces and certain subspaces of $A$ it will turn out to be useful to use bijections between skew diagrams. We call such bijections {\it diagram mappings}. If we fix skew diagrams $E$ and $F$, then the elements of $G$ are in one-one correspondence with diagram mappings $F\to E$ as follows. If $\alpha:F\to E$ is a diagram mapping, then the corresponding element of $G$ sends for any box $x$ of $F$ the number of $T_F$ in $x$ to the number of $T_E$ in $\alpha(x)$. If we fix only one skew diagram $E$, then we can identify the elements of $G$ with $t$-tableaux of shape $E$ by replacing $(E,F)$ above by $(\Delta_t,E)$ and use the fact that $t$-tableaux can be identified with diagram mappings $E\to \Delta_t$. So the first correspondence is $g\mapsto\alpha_g=T_E^{-1}\circ g\circ T_F$ and the second one is $g\mapsto g\circ T_E$. For $T$ a $t$-tableau of shape $F$ we will also denote $T_E^{-1}\circ T$ by $\alpha_T$. 
As is well known, one can associate the so-called skew Specht modules to skew diagrams, just like one can associate Specht modules to ordinary Young diagrams. These skew Specht modules are in general not irreducible, in fact they include the Young permutation modules. We briefly recall the construction. If $E$ is a skew Young diagram with $t$ boxes, then we can form the row symmetriser $e_2=\sum_gg\in A_{\mb Q}$ where the sum is over the row stabiliser of $T_E$ in $G$, and the column anti-symmetriser $e_1=\sum_g{\rm sgn}(g)g\in A_{\mb Q}$ where the sum is over the column stabiliser of $T_E$ in $G$. The product $e=e_1e_2$ is then called the {\it Young symmetriser} associated to the skew diagram $E$. Unlike in the case of ordinary Young diagrams, the symmetrisers associated to skew diagrams are no longer idempotent up to a scalar multiple, although $e_1$ and $e_2$ of course are. \hbox{\hspace{2cm}}\vspace{-2mm} \noindent For example, if $\ytableausetup {mathmode, boxsize=1.3em} E={\begin{ytableau} \none&1&2\\ 3&4 \end{ytableau}}\ ,$ \vspace{3mm} then $\dim{\rm span}(e,e^2)=2$. The {\it skew Specht module} associated to $E$ is the module $Ae$. We have $Ae=Ae_1e_2\subseteq Ae_2$ and $Ae_2$ is the well-known permutation module associated to $E$. If $\lambda$ is the partition which contains the row lengths of $E$ in weakly descending order, then $Ae_2$ is isomorphic to the usual Young permutation module $M^\lambda$. For example, if $\lambda$ is a partition of length $l$ and $$\begin{xy} (0.3,-.5)*=<48pt,28pt>{}*\frm{^\}}, (0.3,7.5)*{\text{$\lambda_1$ boxes}}, (-20,-10)*=<48pt,28pt>{}*\frm{_\}}, (-20,-18)*{\text{$\lambda_l$ boxes}}, (-9.9,-5.2)*={\begin{ytableau}\none\\\none[E=\quad\quad]\\ \none\end{ytableau} \begin{ytableau} \none&\none&\none&\none&\ &\none[\cdots]&\ \\ \none&\none&\none&\none[\iddots]&\none&\none&\none\quad,\\ \ &\none[\cdots]&\ &\none&\none&\none&\none \end{ytableau}} \end{xy}$$ then $e=e_2$ and $Ae=Ae_2=M^\lambda$.
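The example above, $\dim{\rm span}(e,e^2)=2$, can be verified by a direct computation in $\mb Q\Sym_4$. The following sketch is our own illustration (not part of the text; all names are ours); it also contrasts the skew case with a straight shape, for which the Young symmetriser is, classically, idempotent up to a nonzero scalar:

```python
from itertools import permutations

N = 4  # the group algebra of Sym_4; permutations as tuples acting on 0..3

def mul_perm(p, q):
    """Group product: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(N))

def sign(p):
    s, seen = 1, set()
    for i in range(N):
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j, length = p[j], length + 1
        s *= (-1) ** (length - 1) if length else 1
    return s

def mul(x, y):
    """Product in the rational group algebra (dict: permutation -> coefficient)."""
    out = {}
    for p, a in x.items():
        for q, b in y.items():
            r = mul_perm(p, q)
            out[r] = out.get(r, 0) + a * b
    return {p: c for p, c in out.items() if c}

def symmetriser(blocks, signed=False):
    """Sum of g (or sgn(g) g) over all g preserving each block setwise."""
    return {p: (sign(p) if signed else 1)
            for p in permutations(range(N))
            if all(p[i] in b for b in blocks for i in b)}

def proportional(x, y):
    """Is y a scalar multiple of x (x nonzero)?"""
    p0 = next(iter(x))
    return all(x[p0] * y.get(p, 0) == y.get(p0, 0) * x.get(p, 0)
               for p in set(x) | set(y))

# Skew diagram E from the example: T_E has rows {1,2},{3,4}, and the only
# column with more than one box contains the entries {1,4}
# (0-based: rows {0,1},{2,3}; column {0,3}, with 1 and 2 as singletons).
e2 = symmetriser([{0, 1}, {2, 3}])                 # row symmetriser
e1 = symmetriser([{0, 3}, {1}, {2}], signed=True)  # column anti-symmetriser
e = mul(e1, e2)
e_sq = mul(e, e)

# Straight shape (2,2): rows {1,2},{3,4} (the same e2), columns {1,3},{2,4}.
f = mul(symmetriser([{0, 2}, {1, 3}], signed=True), e2)
f_sq = mul(f, f)
```

Here `proportional(e, e_sq)` is false, i.e. $e$ and $e^2$ are linearly independent as the text asserts, while `proportional(f, f_sq)` is true for the straight shape.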
If $g,h\in G$, then $ge_2=he_2$ if and only if the tableaux of shape $E$ corresponding to $g$ and $h$ are row equivalent. For $g\in G$ and $T=g\circ T_E$ we denote $ge_2$ by $\{T\}$ and call it a {\it tabloid} in accordance with \cite{J}. Furthermore, $ge=ge_1g^{-1}ge_2$ and $\kappa_T=ge_1g^{-1}$ is the column anti-symmetriser associated to the skew tableau $T$. So the element $ge=\kappa_T\{T\}$ is the {\it polytabloid} $e_T$ from \cite{J}. We will denote it by $[T]$. For a $t$-tableau $T$ of shape $E$ we have $[T]=\sum_{\pi\in C_E}{\rm sgn}(\pi)\{T\pi\}$, where $C_E\le\Sym(E)$ is the column stabiliser of $E$. For the remainder of this section $E$ and $F$ are two skew diagrams and $e=e_1e_2$ and $f=f_1f_2$ are the corresponding Young symmetrisers. The next lemma says that, just like Specht modules, skew Specht modules could also have been defined by multiplying row symmetrisers and column anti-symmetrisers the other way round. \begin{lemgl}\ \begin{enumerate}[{\rm(i)}] \item We have $Ae_1e_2=Ae_2e_1e_2$ and $Ae_2e_1=Ae_1e_2e_1$. \item The maps $x\mapsto xe_1:Ae_1e_2\to Ae_2e_1$ and $x\mapsto xe_2:Ae_2e_1\to Ae_1e_2$ are isomorphisms. \end{enumerate} \end{lemgl} \begin{proof} (i).\ Since $e_1^*=e_1$ and $e_2^*=e_2$, we have $e^*=e_2e_1$ and $e^*e$ is a nonzero scalar multiple of $e_2e_1e_2$. Similarly for $\tilde e=e_2e_1$ we have that $\tilde e^*\tilde e$ is a nonzero scalar multiple of $e_1e_2e_1$. The assertion now follows from Lemma~\ref{lem.homspaces}(iii).\\ (ii).\ By (i) these maps are surjective, so, for dimension reasons, they must be isomorphisms. \end{proof} Since the elements of $G$ can be considered as diagram mappings $:F\to E$ we get a spanning set of $\Hom_A(Ae,Af)=e^*Af$ which is labelled by diagram mappings $:F\to E$. In particular we think of $Ae$ as spanned by diagram mappings $:E\to\Delta_t$, i.e. $t$-tableaux of shape $E$. 
It is our goal to find a subset of the above spanning set which is a basis for the space $e^*Af$. First we point out some special cases, then we state it in general in Theorem~\ref{thm.homspace_basis}. Let $\mu$ be the tuple of row lengths of $E$, i.e. the weight of $S_E$. We have for $g,h\in G$ that $e_2g=e_2h$ if and only if $S_E\circ\alpha_g=S_E\circ\alpha_h$. We will say that $g$ or $T=g\circ T_F$ {\it represents} $S_E\circ\alpha_g=S_E\circ\alpha_T$. So the elements $e_2g$ with $g$ in a set of representatives for the tableaux of shape $F$ and weight $\mu$ form a basis of $e_2A$. Of course we could change the shape $F$ to any other shape with the same number of boxes. More generally, we have for $T_1,T_2$ $t$-tableaux of shape $F$ that $e_2\{T_1\}=e_2\{T_2\}$ if and only if $S_E\circ\alpha_{T_1}$ and $S_E\circ\alpha_{T_2}$ are row equivalent. So the elements $e_2\{T\}$ with $T$ in a set of representatives for the row-ordered tableaux of shape $F$ and weight $\mu$ form a basis of $e_2Af_2$. For a tableau $T$ we define the {\it standard scan} of $T$ to be the sequence of entries of $T$, read row by row from left to right and top to bottom. We order the row ordered tableaux of shape $F$ as follows. If $S\ne T$ are two such tableaux, then $S<T$ if and only if $\alpha_i<\beta_i$, where $i$ is the first position where the standard scans $\alpha$ and $\beta$ of $S$ and $T$ differ. The above basis of $e_2Af_2$ is now also linearly ordered, since we linearly ordered its index set.
We extend the above order to a preorder on all tableaux of shape $F$ by defining $S\le T$ if and only if $\tilde S\le\tilde T$, where $\tilde S$ and $\tilde T$ are the unique row ordered tableaux that are row-equivalent to $S$ resp. $T$. The proof of the next trivial lemma is left to the reader. \begin{lemgl}[{cf.~\cite[Lem~1.2]{Donin1}, \cite[Lem.~8.2]{J}}]\label{lem.independent} Let $(x_i)_{i\in I}$ be a family of elements of $e_2Af_2$ and for each $i$ let $y_i$ be the least element from the above basis of $e_2Af_2$ involved in $x_i$. If the $y_i$ are distinct, then $(x_i)_{i\in I}$ is linearly independent. \end{lemgl} \begin{lemgl}[{cf.~\cite{Donin1}, \cite[Lem.~8.3]{J}}]\label{lem.order} Let $F$ be a skew diagram. If $S,T$ are distinct column equivalent tableaux of shape $F$ with $S$ column ordered, then $S<T$. \end{lemgl} \begin{proof} Denote the $i$-th rows of $S$ and $T$ by $S_i$ and $T_i$. Choose $i$ minimal with $S_i\ne T_i$. Then we have $S_{ij}\le T_{ij}$ for all $j$ with at least one inequality strict. So for each $r$ the number of occurrences of integers $\le r$ in $S_i$ is $\ge$ that of $T_i$ with at least one inequality strict. So $S<T$. \end{proof} As in \cite[Thm.~8.4]{J} one can use the previous two lemmas (replace $(E,F)$ by $(\Delta_t,E)$) and an obvious generalisation of the Garnir relations \cite[Sect.~7]{J} to prove the well-known result that the polytabloids $[T]$, $T$ a standard tableau of shape $E$, form a basis of $Ae$.
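The preorder used in Lemmas~\ref{lem.independent} and \ref{lem.order} is easy to compute with. The following sketch is our own illustration (not part of the text; tableaux are lists of rows): two tableaux are compared via the standard scans of their row-ordered representatives.

```python
def row_ordered(T):
    """The row-ordered tableau row-equivalent to T (each row sorted)."""
    return [sorted(row) for row in T]

def standard_scan(T):
    """Entries of T read row by row, left to right, top to bottom."""
    return [x for row in T for x in row]

def less(S, T):
    """S < T in the preorder: compare the standard scans of the
    row-ordered representatives lexicographically."""
    return standard_scan(row_ordered(S)) < standard_scan(row_ordered(T))
```

For instance, `less([[1, 2], [3, 4]], [[3, 2], [1, 4]])` holds, an instance of Lemma~\ref{lem.order}: the first tableau is column ordered and the second is obtained from it by permuting the first column.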
\begin{lemgl}[{\cite[Lem~2.2]{Donin1} and \cite[Prop.]{Donin2}}]\label{lem.admissible} Let $\alpha:F\to E$ be a diagram mapping which satisfies \begin{enumerate}[{\rm(a)}] \item The tableau $S_E\circ\alpha$ of shape $F$ is semi-standard. \item If for $a,b\in F$, $\alpha(b)$ occurs strictly below $\alpha(a)$ in the same column, then $b$ occurs in a strictly lower row than $a$. \end{enumerate} Then there exists a diagram mapping $\tilde\alpha:F\to E$ with $S_E\circ\tilde\alpha=S_E\circ\alpha$ satisfying \begin{enumerate}[{\rm(b')}] \item If for $a,b\in F$, $\tilde\alpha(b)$ occurs strictly below $\tilde\alpha(a)$ in the same column, then $b$ occurs in a strictly lower row than $a$ and in a column to the left of $a$ or in the same column. \end{enumerate} \end{lemgl} \begin{comment} \begin{lemgl}[{\cite[Lem~2.2]{Donin1} and \cite[Prop.]{Donin2}}]\label{lem.admissible} Let $\alpha:F\to E$ be a diagram mapping which satisfies \begin{enumerate}[{\rm(1)}] \item $S_E\circ\alpha$ is semi-standard. \item If $x$ and $y$ are boxes in the same column of $F$, $y$ strictly below $x$, then $\alpha(y)$ occurs in a strictly lower row than $\alpha(x)$. \end{enumerate} Then there exists a diagram mapping $\tilde\alpha:F\to E$ with $S_E\circ\tilde\alpha=S_E\circ\alpha$ satisfying \begin{enumerate}[{\rm(2')}] \item If $x$ and $y$ are boxes in the same column of $F$, $y$ strictly below $x$, then $\alpha(y)$ occurs in a strictly lower row than $\alpha(x)$ and in a column to the left of $\alpha(x)$ or in the same column. \end{enumerate} \end{lemgl} \end{comment} \begin{proof} Let $a=(i,j)\in F$ be the first cell in the order of the standard scan such that with $\alpha(a)=(r,s)$ we have $(r+1,s)\in E$ and $b=\alpha^{-1}(r+1,s)$ occurs in a column strictly to the right of $a$ (*). 
Since $S_E(\alpha(a))=r$, $S_E(\alpha(b))=r+1$, $S_E\circ\alpha$ is semi-standard and $\alpha$ has property (b) we have $b=(i+1,j_1)$ for some $j_1>j$, $S_E(\alpha(i,j_2))=r$ and $S_E(\alpha(i+1,j_2))=r+1$ for all $j_2$ with $j\le j_2\le j_1$. Now put $b_1=(i+1,j)$ and $\beta=\alpha\circ(b,b_1)$, where $(b,b_1)$ is the transposition which swaps $b$ and $b_1$. Then $S_E\circ\beta=S_E\circ\alpha$. If $\beta$ does not have property (b'), then the first cell of $F$ in the order of the standard scan that has property (*) for $\beta$ will be after $a$. This is clear if with $\alpha(b_1)=(r+1,s_1)$ we have $(r,s_1)\notin E$. So assume this is not the case and assume $a_1=\alpha^{-1}(r,s_1)$ occurs before $a$ in the standard scan. Then, by the choice of $a$, its column index is $>j$. So its row index is $<i$. But then, by the semi-standardness of $S_E\circ\alpha$, its column index is $>j_1$. So $a_1$ doesn't have the above property for $\beta$ and this was the only possibility before $a$. So we can finish by induction. \end{proof} Recall that $\mu$ is the tuple of row lengths of $E$. We will call a semi-standard tableau $S$ of shape $F$ and weight $\mu$ {\it admissible} if $S=S_E\circ\alpha$ for some diagram mapping $\alpha:F\to E$ satisfying the conditions (a) and (b) from Lemma~\ref{lem.admissible}. Next we need the notion of a ``picture" (we will call it admissible) from \cite{Z1} which is a generalisation of that of \cite{JP}. For this we need two orderings $\le$ and $\preceq$ on $\mb N\times\mb N$ defined by $(p,q)\le(r,s)$ if and only if $p\le r$ and $q\le s$, and $(p,q)\preceq(r,s)$ if and only if $p<r$ or ($p=r$ and $q\ge s$). Note that $\preceq$ is a linear ordering. Recall that skew Young diagrams are by definition subsets of $\mb N\times\mb N$. A diagram mapping $\alpha:F\to E$ is called {\it admissible} if $\alpha:(F,\le)\to(E,\preceq)$ and $\alpha^{-1}:(E,\le)\to(F,\preceq)$ are order preserving. 
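For example, $(1,3)\preceq(1,1)\preceq(2,2)$, so within a row $\preceq$ runs from right to left and then proceeds to the next row; on the other hand $(1,1)\le(1,3)$, and $(1,3)$ and $(2,2)$ are incomparable for $\le$.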
So $\alpha$ is admissible if and only if $\alpha^{-1}$ is admissible. In \cite[App.~2]{Z2} it is shown that $\alpha:F\to E$ is admissible if and only if for all $a,b\in F$ \begin{enumerate}[{\rm (1)}] \item $a (E) b \implies \alpha(a)(W,SW)\alpha(b)$, \item $a (S) b \implies \alpha(a)(SW,S)\alpha(b)$, \item $a (NE) b \implies \alpha(a)(NE,N,NW,W,SW)\alpha(b)$, \item $a (SE) b \implies \alpha(a)(SW)\alpha(b)$. \end{enumerate} Here the letter combinations E, S, SW etc. in the brackets refer to the usual wind directions and they are mutually exclusive. For example, $a(W)b$ means that $a$ occurs strictly before $b$ in the same row and $a(SW)b$ means that $a$ occurs in a row strictly below $b$ and in a column strictly to the left of $b$. Furthermore, ``$a(A,B)b$" means ``$a(A)b$ or $a(B)b$" and similar for more than two wind directions. In \cite{Z2} it is also pointed out that property (4) actually follows from (1) and (2). Although we will not use this equivalent characterisation, it can be useful to get an idea of what it means for a diagram mapping to be admissible. \begin{thmgl}[{\cite[Thm~2.4]{Donin1}, \cite[Thm~1]{Donin2}}]\label{thm.homspace_basis}\ \begin{enumerate}[{\rm(i)}] \item The elements $e^*[T]$ with $T$ in a set of representatives of the admissible semi-standard tableaux of shape $F$ and weight $\mu$ form a basis of $e^*Af$. \item For every admissible semi-standard tableau $S$ of shape $F$ and weight $\mu$, there is precisely one admissible diagram mapping $\alpha:F\to E$ such that $S=S_E\circ\alpha$ and all admissible diagram mappings occur in this way. \end{enumerate} \end{thmgl} \begin{proof} Assume $\alpha:F\to E$ is admissible. Then it follows that \hbox{$S=S_E\circ\alpha$} is ordered, since the ordering $\le$ is linear on the rows and columns of $F$. Furthermore, $\alpha^{-1}:E\to F$ is also admissible. 
From this it follows that if $b$ is strictly below $a$ in the same column of $F$, then $\alpha(b)$ occurs in a row strictly below $\alpha(a)$, i.e. $S$ is semi-standard. Since $\alpha^{-1}$ has the analogous property, $\alpha$ has property (b), i.e. $S$ is admissible. The image of the $i$-th row of $E$ under $\alpha^{-1}$ is $S^{-1}(i)$, and, since the ordering $\le$ is linear on the rows of $E$, $\alpha^{-1}$ is completely determined by the images of the rows of $E$ under $\alpha^{-1}$. So for every admissible semi-standard tableau $S$ of shape $F$ and weight $\mu$, there is at most one admissible diagram mapping $\alpha:F\to E$ such that $S=S_E\circ\alpha$. By \cite[Thm~1]{Z1} the number of admissible diagram mappings is equal to $\dim\Hom_A(Ae,Af)$ which is equal to $\dim e^*Af$ by Lemma~\ref{lem.homspaces}. So to prove (i) and (ii) it suffices to show that the elements given in (i) are linearly independent. By Lemma~\ref{lem.admissible} we may assume that for all $T$ in the given set of representatives $\alpha_T$ satisfies property (b') from Lemma~\ref{lem.admissible}. Let $C_{T_E}\le G$ and $C_F\le\Sym(F)$ be the column stabilisers of $T_E$ and $F$ and let $T$ be as above. Then we have $$e^*[T]=\sum_{g\in C_{T_E},\, \sigma\in C_F}{\rm sgn}(g){\rm sgn}(\sigma)e_2g\{T\sigma\}=\sum_{\pi\in\tilde C_F,\, \sigma\in C_F}{\rm sgn}(\pi){\rm sgn}(\sigma)e_2\{T\pi\sigma\}\,,$$ where $\tilde C_F=T^{-1}C_{T_E}T=\alpha_T^{-1}C_E\alpha_T\le\Sym(F)$ is the stabiliser of the sets $\alpha_T^{-1}(E^i)$, $E^i$ the $i$-th column of $E$. If, for $\pi\in\tilde C_F$, $S_E\circ\alpha_{T\pi}$ has a repeated entry in some column, then $\sum_{\sigma\in C_F}{\rm sgn}(\sigma)e_2\{T\pi\sigma\}=0$. By Lemma~\ref{lem.independent} it suffices to show that $e_2\{T\}$ occurs with strictly positive coefficient in $e^*[T]$ and $e_2\{T\}\le e_2\{T\pi\sigma\}$ for all $\pi\in\tilde C_F$ such that $S_E\circ\alpha_{T\pi}$ has no repeated entry in any column, and all $\sigma\in C_F$. 
For $\pi\in\tilde C_F$ with this property let $\sigma_\pi\in C_F$ be the element such that \hbox{$S_E\circ\alpha_{T\pi\sigma_\pi}$} is (strictly) column ordered. Then $S_E\circ\alpha_{T\pi\sigma_\pi}<S_E\circ\alpha_{T\pi\sigma}$ for all $\sigma\in C_F\sm\{\sigma_\pi\}$ by Lemma~\ref{lem.order}. So it suffices to show that $e_2\{T\}$ occurs with strictly positive coefficient in $e^*[T]$ and that for $\pi$ as above $e_2\{T\}\le e_2\{T\pi\sigma_\pi\}$. Let $\pi\in\tilde C_F$ such that $S_E\circ\alpha_{T\pi}$ has no repeated entry in any column. If $\pi\in C_F$, then $\sigma_\pi=\pi^{-1}$ and, ${\rm sgn}(\pi){\rm sgn}(\sigma_\pi)e_2\{T\pi\sigma_\pi\}=e_2\{T\}$. Now assume $\pi\notin C_F$. We will finish by showing that $e_2\{T\}<e_2\{T\pi\sigma_\pi\}$. Let $a_1=(i_1,j_1)$ be the first cell of $F$ in the order of the standard scan which is moved to another column by $\pi^{-1}$. So $a_1$ is the first cell whose value $r=S_E(\alpha_T(a_1))$ has moved to another column in $S_E\circ\alpha_T\pi$. First we prove the following claim.\\ {\bf Claim}.\ {\it If $a=(i,j)$ and $\pi(a)$ are not in the same column, then we have $S_E(\alpha_T(\pi(a)))\ge S_E(\alpha_T(i_1,j))$.} \vspace{-2mm} \begin{proof} Assume $a$ has the stated property. From the definition of $a_1$ it follows that $\pi(a)$ has row index $\ge i_1$. If $\pi(a)$ has column index $>j$, then the semi-standardness of $S_E\circ\alpha_T$ gives us the result. So we assume now that $\pi(a)$ has column index $<j$. Put $D=\alpha_T^{-1}(D')$, where $D'$ is the column of $E$ to which $\alpha_T(a)$ belongs. Note that since $\alpha_T$ has properties (a) and (b'), the inverse images of the columns of $E$ under $\alpha_T$ are vertical strips (see \cite{Mac}). Furthermore, they are stable under $\pi$. Note also that $S_E(b)$ is the row index of $b$ in $E$, so a cell of $D$ in a lower row than another cell of $D$ must contain a strictly bigger number. 
Since the intersection of $D$ with the $j$-th column of $F$ is not stable under $\pi$, it is also not stable under $\pi^{-1}$. So for some $b\in D$ in the $j$-th column of $F$, $\pi^{-1}(b)$ is not in the $j$-th column. By the definition of $a_1$, $b$ has row index $\ge i_1$. So $S_E(\alpha_T(i_1,j))\le S_E(\alpha_T(b))$, by the semi-standardness of $S_E\circ\alpha_T$. Now $\pi(a)$ occurs in a row strictly below $b$, since its column index is $<j$ and $D$ is a vertical strip. So $S_E(\alpha_T(b))<S_E(\alpha_T(\pi(a)))$. \end{proof}\vspace{-2mm} \noindent From the claim and the choice of $a_1$ it immediately follows that $S_E\circ\alpha_T$ and $S_E\circ\alpha_T\pi\sigma_\pi$ have the same first $i_1-1$ rows, and $$S_E(\alpha_T(\pi\sigma_\pi(i_1,j)))\ge S_E(\alpha_T(i_1,j))\text{\ for all $j$, with equality if\ }j<j_1.\eqno(*)$$ Now let $j_0,\ldots,j_2$ be the positions in the $i_1$-th row where $S_E\circ\alpha_T$ has an $r$. By (*) these are the only positions in the $i_1$-th row where $S_E\circ\alpha_T\pi\sigma_\pi$ could have an $r$. Note that $j_0\le j_1\le j_2$. Now let $a$ be any cell of $S_E\circ\alpha_T$ which contains an $r$ such that $\pi^{-1}(a)$ has column index in $\{j_0,\ldots,j_2\}$. If the column index of $a$ is $>j_2$, then, by the semi-standardness of $S_E\circ\alpha_T$, its row index is $<i_1$. So, by the definition of $a_1$, $\pi^{-1}(a)$ is in the same column as $a$, which is impossible. Now assume $\pi^{-1}(a)$ occurs in a column strictly to the right of $a$. Put $D=\alpha_T^{-1}(D')$, where $D'$ is the column of $E$ to which $\alpha_T(a)$ belongs. Since $D$ is a vertical strip, $\pi^{-1}(a)$ has row index strictly less than that of $a$ and must contain a number $<r$. So, by the semi-standardness of $S_E\circ\alpha_T$, its row index is $<i_1$. By the definition of $a_1$, $\pi^{-1}(\pi^{-1}(a))$ is in the same column as $\pi^{-1}(a)$.
If its row index would be $\ge i_1$, then $D$ would have to contain another cell than $a$ with an $r$, since it is a vertical strip. This is impossible, so $\pi^{-1}(\pi^{-1}(a))$ has row index $<i_1$. But then we could keep applying $\pi^{-1}$ and stay in the same column. This contradicts the fact that $\pi^{-1}$ has finite order. So if $\pi^{-1}(a)$ has column index in $\{j_0,\ldots,j_2\}$, then the same is true for $a$. Furthermore, if this were true for $a_1$, then $\pi^{-1}(a_1)$ would have to occur in a column strictly to the left of $a_1$. Then it follows from the definition of $a_1$ that $S_E\circ\alpha_T\pi$ would have two $r$'s in the column containing $\pi^{-1}(a_1)$, contradicting our assumption on $\pi$. So the number of occurrences of $r$ in the $i_1$-th row of $S_E\circ\alpha_T\pi\sigma_\pi$ is at least one less than in the $i_1$-th row of $S_E\circ\alpha_T$ and by (*) the number of occurrences of any $r'<r$ in the $i_1$-th row is the same. So we may finally conclude that $S_E\circ\alpha_T\pi\sigma_\pi>S_E\circ\alpha_T$. \end{proof} \begin{comment} \noindent EXAMPLE\\ If $S_E\circ\alpha_T=\begin{tabular}{ccc}\ &1&1\\2&2&\ \end{tabular}$, $E=\begin{tabular}{cc}1&1\\2&2\end{tabular}$ (semi-standard numbering filled in) and $\alpha_T$ the diagram mapping which preserves the row index and the order within a row. Then we can let $\pi$ swap the elements in the two "cyles". Then $S_E\circ\alpha_T=\begin{tabular}{ccc}\ &2&2\\1&1&\ \end{tabular}$ and $S_E\circ\alpha_T\pi\sigma_\pi=\begin{tabular}{ccc}\ &1&2\\1&2&\ \end{tabular}$. Here $a_1=(1,2)$ is the very first cell of $F$ in the standard scan and $r=1$. The first position where $S_E\circ\alpha_T\pi\sigma_\pi$ and $S_E\circ\alpha_T$ differ is position $3=j_1+1$. \end{comment} \begin{remsgl} 1. It is easy to see that for any semi-standard tableau $S$ of shape $F$ and weight $\mu$ there is a standard tableau $T$ of shape $F$ with $S=S_E\circ T_E^{-1}\circ T$. 
So the representatives $T$ from Theorem~\ref{thm.homspace_basis} can be chosen standard.\\ 2.\ Write $E=\nu/\tilde\nu$. Using Lemma~\ref{lem.admissible}, it is easy to see that an admissible tableau of shape $F$ and weight $\mu$ must satisfy the condition from \cite[Cor~2]{Stem2} that $\tilde\nu+w(T_{\ge j})$ is dominant for all $j$. Since both sets have the same cardinality, namely the dimension of $e^*Af$, the two conditions are equivalent.\\ 3.\ Donin considers tableaux of shape $E$ as diagram mappings $T:\Delta_t\to E$, where $\Sym_t$ acts via $\pi\cdot T=T\circ\pi^{-1}$ and he works with the modules $e^*A$ considered as left $\Sym_t$-modules via the inversion. In his approach one has to use the isomorphism $\Hom_A(e^*A,f^*A)\cong f^*Ae$, and think of this space as having a spanning set labelled by diagram mappings $:E\to F$. Furthermore, one then has to replace $(a,b,\alpha(a),\alpha(b))$ by $(\alpha(a),\alpha(b),a,b)$ in properties (b) and (b') in Lemma~\ref{lem.admissible}. \end{remsgl} Of course the previous results are valid for any symmetric group $\Sym(X)$, $X$ a finite subset of $\mb N$ with $t$ elements. Just redefine $T_F$ by filling in the elements from $X$ in their natural order row by row from left to right and top to bottom and replace ``$t$-tableau of shape $E$" by ``$X$-tableau of shape $E$": this is a tableau whose entries are the elements of $X$ (so its entries are distinct). For $X\subseteq\{1,\ldots,t\}$ we consider $\Sym(X)$ as a subgroup of $\Sym_t$ by letting the permutations from $\Sym(X)$ fix everything outside $X$. When we apply our previous results to $\Sym(X)$ we use $X$ as an extra subscript when necessary. The group algebra $A_X=k\Sym(X)$ is a subalgebra of $A$. If $D$ is a skew diagram with $|X|$ boxes, then we denote the Young symmetriser associated to the standard tableau $T_{D,X}$ by $e_{D,X}$. Let $\nu=(\nu_1,\ldots,\nu_m)$ be an $m$-tuple of integers $\ge0$ with sum $t$.
For $i\in\{1,\ldots,m\}$, put $\Lambda_i=\{j+\sum_{h=1}^{i-1}\nu_h\,|\,1\le j\le\nu_i\}$. Then the {\it Young subgroup} $\Sym_\nu$ of $\Sym_t$ associated to $\nu$ is the simultaneous stabiliser of the sets $\Lambda_1,\ldots,\Lambda_m$. So $\Sym_\nu\cong\prod_{i=1}^m\Sym_{\nu_i}$. Let $\lambda\supseteq\mu$ be partitions with $E=\lambda/\mu$. Then there is a 1-1 correspondence between ordered tableaux of shape $E$ with entries $\le m$ and sequences of partitions $\lambda^0,\ldots,\lambda^m$ with $\mu=\lambda^0\subseteq\lambda^1\subseteq\cdots\subseteq\lambda^m=\lambda$. Indeed, if $P$ is such a tableau, then $(\mu\cup P^{-1}(\{1,\ldots,i\}))_{1\le i\le m}$ is such a sequence of partitions. Conversely we can construct $P$ from such a sequence: just fill the boxes of $\lambda^i/\lambda^{i-1}$ with $i$'s for all $i\in\{1,\ldots,m\}$. So we can express the well-known rule for restricting skew Specht modules to Young subgroups in terms of tableaux $P$ as above. We say that a $t$-tableau $T$ of shape $E$ {\it belongs to $P$} if $T^{-1}(\Lambda_i)=P^{-1}(i)$ for all $i\in\{1,\ldots,m\}$. Then $T$ will be standard if and only if the $T|_{P^{-1}(i)}$ are standard. Every standard tableau of shape $E$ belongs to some ordered tableau of shape $E$ and weight $\nu$. If $P$ is an ordered tableau of shape $E$ and weight $\nu$, then we define $T_P$ to be the tableau of shape $E$ with $T_P|_{P^{-1}(i)}=T_{P^{-1}(i),\Lambda_i}$. Note that $T_P$ is a standard tableau which belongs to $P$. Let $P$ and $Q$ be ordered tableaux of shapes $E$ and $F$, both of weight $\nu\in\mb Z^m$. Then a diagram mapping $\alpha:F\to E$ with $P\circ\alpha=Q$ determines an $m$-tuple of tableaux $(S_{P^{-1}(1)}\circ\alpha_1,\ldots,S_{P^{-1}(m)}\circ\alpha_m)$ (*), where $\alpha_i:Q^{-1}(i)\to P^{-1}(i)$ is the restriction of $\alpha$ to $Q^{-1}(i)$. We will say that $\alpha$ {\it represents} (*). Notice that all the $m$-tuples (*) have the same tuple of shapes and the same tuple of weights.
We express this by saying that the tuple of tableaux has {\it shapes determined by $Q$ and weights determined by $P$}. Similarly, if $T$ is a tableau of shape $F$ which belongs to $Q$, then we say that $T$ {\it represents} (*), where $\alpha_i=T_{P^{-1}(i),\Lambda_i}^{-1}\circ T|_{Q^{-1}(i)}$. So if we cut $T$ into pieces according to $Q$, then $\alpha_i:Q^{-1}(i)\to P^{-1}(i)$ above is just the diagram mapping corresponding to the $i$-th piece. Note that the ``union" of the above $\alpha_i$ is $T_P^{-1}\circ T$. Let $\nu$ be as above. If $H$ is a group and $U$ an $H$-module, then $U_H$, sometimes called the space of ``coinvariants", is defined as the largest quotient of $U$ which has trivial $H$-action, i.e. the quotient of $U$ by the subspace spanned by the elements $gx-x$, $x\in U, g\in H$. \begin{propgl}\label{prop.coinvariants} Assume that $E$ and $F$ are ordinary Young diagrams. Let $\nu$ and $\Sym_\nu$ be as above. Then the canonical images of the elements $[T_P]\ot[T]$, where for each pair $(P,Q)$ with $P$ and $Q$ ordered tableaux of shapes $E$ and $F$, both of weight $\nu$, $T$ goes through a set of representatives for the $m$-tuples of admissible semi-standard tableaux with shapes determined by $Q$ and weights determined by $P$, form a basis for $(Ae\ot Af)_{\Sym_\nu}$. \end{propgl} \begin{proof} Let $\Omega_E$ be the set of ordered tableaux of shape $E$ and weight $\nu$. For $P\in\Omega_E$ put $M_P=\ot_{i=1}^mA_{\Lambda_i}e_{P^{-1}(i),\Lambda_i}$ and let $\theta_P:M_P\to Ae$ be the linear map which sends $\ot_{i=1}^m[T_i]$, $T_i$ standard of shape $P^{-1}(i)$ with entries in $\Lambda_i$, to $[T]$, where $T$ is the (standard) tableau obtained by piecing the tableaux $T_i$ together according to $P$. Then it follows from the basis theorem for $Ae$ that $Ae=\bigoplus_{P\in\Omega_E}\theta_P(M_P)$.
By \cite[Thm.~3.1]{JP} and a straightforward induction argument there is a total ordering $P_1<P_2<\cdots<P_p$ of $\Omega_E$ such that with $N_j=\bigoplus_{h=1}^j\theta_{P_h}(M_{P_h})$ we have that for all $j\in\{1,\ldots,p\}$ $N_j$ is a $\Sym_\nu$-submodule and the natural map $\ov\theta_{P_j}:M_{P_j}\to N_j/N_{j-1}$ is an isomorphism of $\Sym_\nu$-modules. In particular, if $T$ is a $t$-tableau which belongs to $P_j$, then $[T]\in N_j$ and the canonical image of $[T]$ in $N_j/N_{j-1}$ is the image of $\ot_{i=1}^m[T|_{P^{-1}(i)}]$ under $\ov\theta_{P_j}$. Similar remarks apply to analogously defined $\Omega_F$ and, for $Q\in\Omega_F$, $M_Q$ and $\theta_Q$. So (redefining the $P_j$) there is a total ordering $(P_1,Q_1)<(P_2,Q_2)<\cdots<(P_{pq},Q_{pq})$ of $\Omega_E\times\Omega_F$ such that with (redefining) $N_j=\bigoplus_{h=1}^j\theta_{P_h}(M_{P_h})\ot\theta_{Q_h}(M_{Q_h})$ we have that for each $j\in\{1,\ldots,pq\}$ $N_j$ is a $\Sym_\nu$-submodule and the natural map $\ov\theta_{P_j}\ot\ov\theta_{Q_j}:M_{P_j}\ot M_{Q_j}\to N_j/N_{j-1}$ is an isomorphism of $\Sym_\nu$-modules. Denote for each $P\in\Omega_E$ and $Q\in\Omega_F$ the given set of representative $t$-tableaux by $\Gamma_{PQ}$. Let $\pi_j:N_j\to N_j/N_{j-1}$ be the natural map. By Theorem~\ref{thm.homspace_basis}, Lemma~\ref{lem.homspaces}(i) and the fact that $\ov\theta_{P_j}\ot\ov\theta_{Q_j}$ is a homomorphism of $\Sym_\nu$-modules, the canonical images of the elements $\pi_j([T_P]\ot[T])$, $T\in\Gamma_{P_jQ_j}$, in $(N_j/N_{j-1})_{\Sym_\nu}$ form a basis for $(N_j/N_{j-1})_{\Sym_\nu}$. When applying Lemma~\ref{lem.homspaces}(i) we omitted the sum over $\Sym(\Lambda_i)$ coming from the definition of $\psi$ after moving $e_{Q_j^{-1}(i),\Lambda_i}^*$ to the left as $e_{Q_j^{-1}(i),\Lambda_i}$, since we work with coinvariants rather than invariants. Now the assertion follows by a straightforward induction. 
\end{proof} \begin{comment} \begin{lemnn} Let $H$ be a group, let $M$ be an $H$-module, let $N$ be a submodule of $M$ and let $\pi:M\to M/N$ be the natural homomorphism. Let $S\subseteq M$ and $T\subseteq N$. If the canonical image of $\pi(S)$ in $(M/N)_H$ spans $(M/N)_H$ and the canonical image of $T$ in $N_H$ spans $N_H$, then the canonical image of $S\cup T$ in $M_H$ spans $M_H$. \end{lemnn} \begin{proof} Let $M'$ be the span of the elements $x-g\cdot x$, $g\in G$, $x\in M$ and define $N'$ and $(M/N)'$ analogously. We have to prove that $\la S\cup T\ra+M'=M$. We know $\la T\ra+N'=N$ and $\la\pi(S)\ra+(M/N)'=M/N$. Since $\pi(M')=(M/N)'$ we can rewrite the latter as $\la S\ra+M'+N=M$. So we get $\la S\cup T\ra+M'+N'=M$. But $N'\subseteq M'$, so $\la S\cup T\ra+M'=M$. \end{proof} If we assume $H$ finite and the characteristic of the field zero (as in all of Section~\ref{s.char0}), and $\pi|_S$ injective, then we may replace ``spans" by ``is a basis of". Here it has to be understood that the canonical maps $N\to N_H$, $M/N\to(M/N)_H$ and $M\to M_H$ are injective on $\pi(S)$, $T$ and $S\cup T$. The first two by assumption and the second one as part of the statement of the lemma. For the ``basis-version" we need that $M'\cap N=N'$ which follows immediately from complete reducibility. \end{comment} \begin{remgl} The result \cite[Thm.~3.1]{Donin1} which deals with restriction to Young subgroups is incorrect since it assumes that the $\theta_P(M_P)$ are $\Sym_\nu$-submodules. \end{remgl} \subsection{Bases for the highest weight vectors}\label{ss.highest_weight_vecs} We return to the notation of Section~\ref{s.charp}. In particular $m,r,s$ are fixed integers $\ge1$. For $l\in\{1,\ldots,m\}$ we denote the matrix entry functions of the $l$-th matrix component on $\Mat_{rs}^m$ by $x(l)_{ij}$. For $t$ an integer $\ge 0$ let $\Sigma_t$ be the set of $m$-tuples $\nu=(\nu_1,\ldots,\nu_m)$ of integers $\ge0$ with sum $t$. 
Furthermore, if $\lambda$ is a partition, then we define $C_\lambda\le\Sym(\lambda)$ to be the column stabiliser of $\lambda$. In the papers by Donin referred to below the elements $u_{\nu,P,Q,\alpha}$ themselves were not given, but, in the case $m=n-1$, their pull-backs to $\Mat_n$ under the map $\varphi_{r,s,n,n-1}:\Mat_n\to\Mat_{rs}^{n-1}$ from Section~\ref{s.charp} were. Here one should also note that Donin worked with $S(\Mat_n)$ rather than $k[\Mat_n]$, so, after pulling back, $x_{ij}\in\Mat_n^*$ should be replaced by Donin's $e_{ji}\in\Mat_n$. \begin{thmgl}[cf. {\cite[after Thm~3]{Donin2},\cite[Prop.~4.1]{Donin1}}]\label{thm.highest_weight_vecs} Let $\lambda,\mu$ be partitions of $t$ with $l(\mu)\le r$ and $l(\lambda)\le s$. For $\nu\in\Sigma_t$, $P,Q$ ordered tableaux of shapes $\lambda$ and $\mu$, both of weight $\nu$, and $\alpha:\mu\to\lambda$ a diagram mapping such that $P\circ\alpha=Q$ define $$u_{\nu,P,Q,\alpha}=\sum_{\pi\in C_\mu, \sigma\in C_\lambda}{\rm sgn}(\pi){\rm sgn}(\sigma)\prod_{a\in\mu}x(Q(a))_{r-\pi(a)_1+1,\,\sigma(\alpha(a))_1}\,,$$ where for $b\in\mu$, $b_1$ denotes the row index of $b$ in $\mu$ and similarly for $b\in\lambda$. Then the elements $u_{\nu,P,Q,\alpha}$, where for each $P,Q,\nu$ as above $\alpha$ goes through a set of representatives for the $m$-tuples of admissible semi-standard tableaux with shapes determined by $Q$ and weights determined by $P$, form a basis of $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$. \end{thmgl} \begin{proof} Let $V=k^r$ and $W=k^s$ be the natural modules of $\GL_r$ and $\GL_s$. Then $\Mat_{rs}=V\ot W^*$ and $\Mat_{rs}^*=V^*\ot W$. So $k[\Mat_{rs}^m]=\bigoplus_{t\ge0}S^t\big((V^*\ot W)^m\big)=\bigoplus_{t\ge0,\nu\in\Sigma_t}S^\nu(V^*\ot W)=\bigoplus_{t\ge0,\nu\in\Sigma_t}((V^*)^{\ot t}\ot W^{\ot t})_{\Sym_\nu}$, where, for any vector space $U$, $S^\nu(U)=\ot_{i=1}^mS^{\nu_i}(U)$.
Therefore $$k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}=\bigoplus_{\nu\in\Sigma_t}\Big(((V^*)^{\ot t})^{U_r}_{-\mu^{\rm rev}}\ot (W^{\ot t})^{U_s}_\lambda\Big)_{\Sym_\nu}\,.$$ As is well-known, $((V^*)^{\ot t})_{-\mu^{\rm rev}}$ and $(W^{\ot t})_\lambda$ are the permutation modules associated to $\mu$ and $\lambda$, and $((V^*)^{\ot t})^{U_r}_{-\mu^{\rm rev}}$ and $(W^{\ot t})^{U_s}_\lambda$ are the Specht modules $Ae_\mu$ and $Ae_\lambda$, where $A=k\Sym_t$. To each $t$-tableau $T$ of shape $\mu$ we associate the highest weight vector $e_{1,T}v^*_T\in(V^*)^{\ot t}$, where $v^*_T$ is the basis tensor which has $v_{r-i+1}^*$'s in the positions which occur as entries in the $i$-th row, and $e_{1,T}$ is the column anti-symmetriser associated to $T$. We also associate to each $t$-tableau $T$ of shape $\lambda$ the highest weight vector $e_{1,T}w_T\in W^{\ot t}$, where $w_T$ is the basis tensor which has $w_i$'s in the positions which occur as entries in the $i$-th row, and again $e_{1,T}$ is the column anti-symmetriser associated to $T$. Then $[T]\mapsto e_{1,T}v^*_T:Ae_\mu\to((V^*)^{\ot t})^{U_r}_{-\mu^{\rm rev}}$ and $[T]\mapsto e_{1,T}w_T:Ae_\lambda\to(W^{\ot t})^{U_s}_\lambda$ are isomorphisms. So by Proposition~\ref{prop.coinvariants} with $E=\lambda$ and $F=\mu$ the canonical images in $M=\Big(((V^*)^{\ot t})^{U_r}_{-\mu^{\rm rev}}\ot (W^{\ot t})^{U_s}_\lambda\Big)_{\Sym_\nu}$ of the elements $$e_{1,T}v^*_T\ot e_{1,T_P}w_{T_P}=\sum_{\pi\in C_\mu, \sigma\in C_\lambda}{\rm sgn}(\pi){\rm sgn}(\sigma)v^*_{T\pi^{-1}}\ot w_{T_P\sigma^{-1}}\,,$$ where for each $P,Q,\nu$ as above $T$ goes through a set of representatives for the $m$-tuples of admissible semi-standard tableaux with shapes determined by $Q$ and weights determined by $P$, form a basis of $M$. Here we put in the inverses for convenience below.
Now we change from representative tableaux $T$ to representative diagram mappings $\alpha$ via $\alpha=T_P^{-1}\circ T$ and take basis elements of $V$ and $W$ which occur in the same tensor position together: $v^*_{T\pi^{-1}}$ has $v^*_{r-\pi(a)_1+1}$ in position $T(a)$ and $w_{T_P\sigma^{-1}}$ has $w_{\sigma(b)_1}$ in position $T_P(b)$, and those positions are the same if and only if $b=\alpha(a)$. Finally, $T(a)\in \Lambda_{Q(a)}$, since $T$ belongs to $Q$. So $v^*_{r-\pi(a)_1+1}\ot w_{\sigma(\alpha(a))_1}$ becomes $x(Q(a))_{r-\pi(a)_1+1,\,\sigma(\alpha(a))_1}$. \end{proof} The next corollary gives a much simpler (but bigger) spanning set for the space of highest weight vectors $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$. Of course it can, like the above theorem, be combined with Theorem~\ref{thm.surjective_pullback} and Lemma~\ref{lem.reduction_to_nilpotent_cone} to give spanning sets for the vector space $k[\mc N_n]^{U_n}_{[\lambda,\mu]}$ and the $k[\gl_n]^{\GL_n}$-module $k[\gl_n]^{U_n}_{[\lambda,\mu]}$. \begin{cornn} Let $\alpha=1^{\mu_1}2^{\mu_2}\cdots$ and $\beta=1^{\lambda_1}2^{\lambda_2}\cdots$ be the standard scans of $S_\mu$ and $S_\lambda$. Then the elements $$\sum_{\pi\in C_{T_\mu},\sigma\in C_{T_\lambda}}{\rm sgn}(\pi){\rm sgn}(\sigma)\prod_{i=1}^tx(\gamma_i)_{r-\alpha_{\pi(i)}+1,\beta_{\sigma(\tau(i))}}\,,$$ where $\gamma\in\{1,\ldots,m\}^t$ and $\tau\in\Sym_t$, form a spanning set of $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$. \end{cornn} \begin{proof} In Theorem~\ref{thm.highest_weight_vecs} we take for each $\nu$ the bigger spanning set $$e_{1,S}v^*_S\ot e_{1,T}w_{T}=\sum_{\pi\in C_S, \sigma\in C_T}{\rm sgn}(\pi){\rm sgn}(\sigma)v^*_{\pi^{-1}S}\ot w_{\sigma^{-1}T}\,,$$ where $S$ and $T$ are any $t$-tableaux of shape $\mu$ and $\lambda$. Write $S=\rho^{-1}T_\mu$, $T=\tau^{-1}T_\lambda$ for $\rho,\tau\in\Sym_t$.
Then we get $$e_{1,S}v^*_S\ot e_{1,T}w_T=\sum_{\pi\in C_{T_\mu}, \sigma\in C_{T_\lambda}}{\rm sgn}(\pi){\rm sgn}(\sigma)\ot_{i=1}^tv^*_{r-\alpha_{\pi(\rho(i))}+1}\ot\ot_{i=1}^t w_{\beta_{\sigma(\tau(i))}}\,,$$ which corresponds to the element $$\sum_{\pi\in C_{T_\mu}, \sigma\in C_{T_\lambda}}{\rm sgn}(\pi){\rm sgn}(\sigma)\prod_{i=1}^tx(\gamma_i)_{r-\alpha_{\pi(\rho(i))}+1,\beta_{\sigma(\tau(i))}}\in k[\Mat_{rs}^m]\,,$$ where $\gamma\in\{1,\ldots,m\}^t$ is the tuple with $i\in \Lambda_{\gamma_i}$ for all $i$. Recall that the $\Lambda_i$ depend on $\nu$ and note that $\gamma$ determines $\nu$. Now we observe that if we allow arbitrary tuples $\gamma\in\{1,\ldots,m\}^t$ we can take $\rho=\id$. So we obtain the assertion. \begin{comment} I decided not to follow this approach, since the summands in the sum over $\alpha$ are not $\Sym_t$-stable. So this would be too confusing. \end{comment} \end{proof} \begin{remsgl} 1.\ It is instructive to consider some special cases. For example, in the case $t=m$ and $\nu$ the all-one vector, the highest weight vectors of multidegree $\nu$ are labelled by pairs $(P,Q)$ of standard tableaux of shape $\lambda$ and $\mu$. Another example is the case that $\lambda$ consists of one row or column. Then there is for each $\nu$ only one $P$ and for each $Q$ there is at most one tuple of admissible semi-standard tableaux with shapes determined by $Q$ and weights determined by $P$. The $Q$ which have such a tuple are the semi-standard tableaux of weight $\nu$ if $\lambda$ is a row and the row semi-standard tableaux of weight $\nu$ if $\lambda$ is a column. Similar remarks apply to the case that $\mu$ consists of one row or column. 
The last two cases extend to prime characteristic, see Theorem~\ref{thm.basis_special_weights} and Remark~\ref{rems.basis_special_weights}.1.\\ 2.\ If we combine Theorem~\ref{thm.highest_weight_vecs} with Theorem~\ref{thm.surjective_pullback} we obtain, for $k$ of characteristic $0$, finite spanning sets for all spaces of highest weight vectors in $k[\mc N_{n,m}]$ and, taking $m=n-1$, in $k[\mc N_n]$. Note that pulling the $u_{\nu,P,Q,\alpha}$ back just amounts to interpreting $x(Q(a))_{ij}$ as the $(i,j)$-th entry of the $Q(a)$-th matrix power and replacing $r-\pi(a)_1+1$ by $n-\pi(a)_1+1$. In particular, these pulled-back functions do not depend on the choice of $r$ and $s$. In the case of $\mc N_{n,m}$ the spanning sets are bases in all degrees for $n\ge (m+1)\max(r,s)$. In the case of $\mc N_n$ one can only say that in a fixed degree $d$ the spanning sets will be bases if $n\ge l(\lambda)+l(\mu)+d-t$, where $t=|\lambda|=|\mu|$. This follows from Remark~\ref{rems.surjective_pullback}.1. Donin claimed in \cite{Donin1} and \cite{Donin2} that the spanning sets obtained above are always bases, but this is easily seen to be incorrect. Finding explicit homogeneous bases for all the spaces $k[\mc N]^{U_n}_{\mc \chi}$ (or more generally $k[\ov{\mc O}_\eta]^{U_n}_{\mc \chi}$) is still an open problem. If one tries to find them as subsets of the above spanning sets this is combinatorially already a challenging problem. \begin{comment} In the case of the harmonics one also has correct labelling sets across all degrees (but no clear way to assign a degree, except for the highly mysterious charge): e.g. the rational semistandard tableaux of weight $0$ with entries in ${1,\ldots,n}$. For weights of the form $[\lambda,1^r]$ this amounts to finding suitable subsets depending on $n$ of row-strict ordered tableaux of shape $\lambda$ with entries in ${1,\ldots,n-1}$.
For weights of the form $[1^r,1^r]$, $2r\le n$ this amounts to finding suitable subsets depending on $n$ of partitions of length $r$ with parts $\le n-1$. Here a correct labelling set is the number of strictly increasing $r$-tuples $a_1<\cdots<a_r$ with $a_i\ge2i$ for all $i$ and $a_r\le n$. There are $\frac{n-2r+1}{r}\binom{n}{r-1}=\frac{n-2r+1}{n-r+1}\binom{n}{r}$ of them. For $n=2r$, this is the $r$-th Catalan number. For the above weights I thought I could let a suitable subset of the partitions of $r$ with entries $\le n-1$ label a basis of highest weight vectors, but then the resulting highest weight vectors turned out to be linearly dependent in the case $r=4$ and the minimal value $n=8$. I do not know how to use semi-standard tableaux of shape $\chi+s{\bf 1}_n$ ($s$ big enough such that this is a partition) and weight $s{\bf 1}_n$ as a labelling set for a basis for $k[\mc N]^{U_n}_{\mc \chi}$ in such a way that charge corresponds to degree. \end{comment} In the case of the $\GL_n$-modules $V^{\ot r}\ot(V^*)^{\ot s}$, $V=k^n$, there is a similar problem of finding bases for the vector spaces $(V^{\ot r}\ot(V^*)^{\ot s})^{U_n}_{[\lambda,\mu]}$. In \cite{BCHLLS} this was done for $n\ge l(\lambda)+l(\mu)+r-|\lambda|$. In this case there is at least a good candidate indexing set for arbitrary $n$: the up-down staircase tableaux of \cite{Stem1}.\\ 3. Note that in Theorem~\ref{thm.highest_weight_vecs} we can choose each $\alpha$ the unique representative such that for all $i$ $\alpha_i$ is admissible, i.e. a ``picture" in the sense of \cite{JP} and \cite{Z1}.\\ 4. The corollary to Theorem~\ref{thm.highest_weight_vecs} proves a weaker version of the ``conjecture" in \cite[Sect.~4]{T1}: in the notation there, with $\chi=[\lambda,\mu]$, the elements $$\vartheta\big(\psi_t((\tau,\id)\cdot E_\chi)\cdot s_{i_1}\ot\cdots\ot s_{i_t}\big)\,,$$ $2\le i_1,\ldots,i_t\le n$, $\tau\in\Sym_t$ generate the $k[\gl_n]^{\GL_n}$-module $k[\gl_n]^{U_n}_\chi$. 
This follows by pulling the spanning set of the corollary back to the nilpotent cone taking $m=n-1$, using the fact that $(X^l)_{ij}=\pm(\partial_{ji}s_{l+1})(X)$ for all $X\in\mc N_n$, see \cite[Cor to Thm~1]{T2}, and applying Lemma~\ref{lem.reduction_to_nilpotent_cone}. Of course one can also use the $\{\id\}\times\Sym_t$-conjugates of $E_\chi$. Just take $\tau=\id$ and $\rho$ in the proof of the corollary. The original conjecture is false, see \cite[Rem~2.5]{T2}. \end{remsgl} \subsection{Several matrices}\label{ss.several_matrices} In this final section we look at highest weight vectors in the coordinate ring of the space of several matrices $\Mat_n^l$ under the diagonal conjugation action of $\GL_n$. In order to be able to apply the graded Nakayama Lemma we need to work with the ``null-scheme" rather than the null-cone. We will denote an $l$-tuple of $n\times n$-matrices $(X_1,\ldots,X_l)$ by $\un X$. We recall some results from \cite[Sect.~4]{Br}. For $i$ an integer $\ge0$ let $\mc X_i$ be the set of sequences of length $\le i$ with entries in $\{1,\ldots,l\}$ and let $\mc X_i'$ be $\mc X_i$ with the empty sequence omitted. For $\eta\in\mc X_i$ of length $j\le i$ define $f_\eta:\Mat_n^l\to\Mat_n$ by $f_\eta(\un X)=X_{\eta_1}\cdots X_{\eta_j}$. By the Razmyslov-Procesi Theorem the algebra $k[\Mat_{rn}\times\Mat_{ns}\times\Mat_n^l]^{\GL_n}$ is generated by the functions $(A,B,\un X)\mapsto\tr(f_\eta(\un X))$ and $(A,B,\un X)\mapsto (Af_\xi(\un X)B)_{ij}$, $\eta\in\mc X_{n^2}'$, $\xi\in\mc X_{n^2-1}$, $i\in\{1,\ldots,r\}$ and $j\in\{1,\ldots,s\}$. Now let $\mc M_n$ be the closed subscheme of $\Mat_n^l$ corresponding to the ideal of $k[\Mat_n^l]$ generated by the functions $\un X\mapsto\tr(f_\eta(\un X))$.
Then it follows from the above that for $m=|\mc X_{n^2-1}'|$ the restriction of the morphism $$\psi_{r,s,n,l}:(A,B,\un X)\mapsto(Af_\xi(\un X)B)_{\xi\in \mc X_{n^2-1}'} :Y_{r,s,n}\times\Mat_n^l\to\Mat_{rs}^m$$ to $Y_{r,s,n}\times\mc M_n$ is a $\GL_n$-quotient morphism onto its scheme-theoretic image $\mc W_{n,l}$. Note that we omitted the empty sequence from $\mc X_{n^2-1}$, since we passed to $Y_{r,s,n}$, the variety of pairs of matrices $(A,B)\in\Mat_{rn}\times\Mat_{ns}$ with $AB=0$. Analogous to the case of one matrix we will identify $\Mat_n^l$ with the closed subvariety $\{(E_r,F_s)\}\times\Mat_n^l$ of $Y_{r,s,n}\times\Mat_n^l$ and denote the restriction of $\psi_{r,s,n,l}$ to $\Mat_n^l$ again by $\psi_{r,s,n,l}$. Then the union of the $\GL_n$-conjugates of $\Mat_n^l=\{(E_r,F_s)\}\times\Mat_n^l$ is $\mc O\times\Mat_n^l$, where $\mc O$ consists of the pairs $(A,B)\in Y_{r,s,n}$ with ${\rm rk}(A)=r$ and ${\rm rk}(B)=s$. The same holds with $\Mat_n^l$ replaced by $\mc M_n$. It follows that the comorphism of $\psi_{r,s,n,l}:\mc M_n\to\mc W_{n,l}$ is injective, since the natural map $k[Y_{r,s,n}\times\mc M_n]\to k[\mc O\times\mc M_n]$ is injective. Furthermore, the analogue of the identity for $\varphi_{r,s,n,m}$ at the beginning of the proof of Theorem~\ref{thm.surjective_pullback} holds for $\psi_{r,s,n,l}$. Finally we apply the graded Nakayama Lemma to the $k[\Mat_n^l]^{\GL_n}$-module $k[\Mat_n^l]^{U_n}_\chi$ and we obtain \begin{thmgl} Let $\chi=[\lambda,\mu]$ be a dominant weight with coordinate sum zero and put $m=|\mc X_{n^2-1}'|$. Then the pull-back along $\psi_{r,s,n,l}:\Mat_n^l\to\Mat_{rs}^m$ of the spanning set of $k[\Mat_{rs}^m]^{U_r\times U_s}_{(-\mu^{\rm rev},\lambda)}$ from Theorem~\ref{thm.highest_weight_vecs} or its corollary is a spanning set of the $k[\Mat_n^l]^{\GL_n}$-module $k[\Mat_n^l]^{U_n}_\chi$. 
\end{thmgl} \begin{remsgl} 1.\ Of course $m$ above is huge, but if we are only interested in homogeneous highest weight vectors of degree $d$ say, then we can take $m=|\mc X_d'|$ above and combine the resulting elements with homogeneous elements of $k[\Mat_n^l]^{\GL_n}$ to obtain a spanning set for the vector space of homogeneous highest weight vectors of weight $\chi$ and degree $d$.\\ 2.\ Much of Section~\ref{ss.several_matrices} generalises to prime characteristic, but it is not clear how to prove the analogue of Proposition~\ref{prop.good_pair} for several matrices. \end{remsgl}
\section{Introduction} \label{sec:intro} Strongly correlated quantum systems remain a great challenge in condensed matter theory. In many cases, we have to rely on numerical tools to make predictions from a microscopic model that can be compared to experiment. Unfortunately the exponential growth of the Hilbert space dimension severely limits the size of systems that can be treated exactly. However, to gain understanding of the physics in quantum systems, often the properties of the ground state and a few excited states go a long way. Therefore, a variety of tools have been developed to separate the low energy part of the Hilbert space from the rest. In one dimension, this is most notably the density matrix renormalization group (DMRG) \cite{white92}, which is a variational ansatz. There are various extensions of the original DMRG, many of which work within the framework of matrix product states (MPS) \cite{schol11} or extensions thereof \cite{vidal03,shi06}. Another approach that extends more readily to higher dimensions is the semi-numerical method of continuous unitary transformations (CUTs) \cite{knett00a,knett03a,kehre06,yang11a,krull12}. The idea behind these CUTs is to partially diagonalize the Hamiltonian and derive an effective model (explained below) in second quantization for the low-energy sector of the Hilbert space. This effective model can then be used to calculate physical properties of the system. CUTs have been applied successfully in a variety of cases \cite{moeck08,dusue10,fause13a}. In our view, it is a promising long-term goal to establish effective models for strongly correlated systems in terms of the elementary excitations with the ground state being the vacuum. Assuming translational invariance for infinitely large systems, i.e., systems in the thermodynamic limit, the momentum space notation is convenient due to momentum conservation and mutual orthogonality of momentum eigenstates with different momenta. 
The goal of our approach is to map a microscopic lattice model such as the transverse field Ising model (TFIM), see Eq.\ \eqref{eq.H_TFIM}, to an effective Hamiltonian for the low-energy physics in the quasi-particle picture \begin{subequations} \label{eq.H_eff} \begin{align} \mathcal{H}^\mathrm{eff} &= E_0 + \sum_{q_1} \omega_{q_1} a_{q_1}^\dagger a_{q_1} \label{eq.H_eff_1qp} \\ & + \frac{1}{\sqrt{L}} \sum_{q_1, q_2, q_3} \left[ D^{q_1}_{q_2,q_3} a^\dagger_{q_2} a^\dagger_{q_3} a_{q_1} + \mathrm{h.c.} \right] \, \delta_{q_2+q_3,q_1} \label{eq.H_decay} \\ & + \frac{1}{L} \sum_{\begin{matrix} q_1, q_2,\\ q_3, q_4 \end{matrix}} V^{q_1, q_2}_{q_3, q_4} \, [a_{q_1}^\dagger a_{q_3}^\dagger a_{q_2} a_{q_4}] \, \delta_{q_1+q_2,q_3+q_4} \label{eq.H_int} \\ & + [\mathrm{ higher\ terms }] \nonumber \ . \end{align} \end{subequations} In $\mathcal{H}^\mathrm{eff}$ the second quantization operator $a_{q_{i}}^\dagger$ creates a quasi-particle with momentum $q_{i}$ and $a_{q_{i}}$ annihilates one. The ground state energy is labeled $E_0$ and the dispersion relation $\omega_q$. The $D^{q_1}_{q_2,q_3}$ and $V^{q_1, q_2}_{q_3, q_4}$ are the matrix elements for quasi-particle decay and two-particle interaction, respectively. Momentum conservation is included through the Kronecker $\delta$ symbols. At this point we do not include further algebraic properties, since these depend on the model under consideration. By the effective model \eqref{eq.H_eff}, the fundamental physical properties of complicated microscopic models can be reduced to a level which makes further quantitative studies possible. The above mentioned continuous unitary transformations (CUTs) represent a systematic tool which successfully achieves this aim \cite{knett03a,fisch10a}. They work particularly well if the chosen starting point, the so-called reference state, is already close to the true ground state of the system. 
But if this is not the case, or if it is not even possible to find a technically tractable starting point with the relevant symmetries, the CUTs cannot be applied efficiently. In such situations a completely numerical approach is appealing because it can tackle a problem at hand in an unbiased fashion. This is where the MPS representation comes into play. It is extremely efficient in finding ground states and certain excited states. We will show how one can use MPS to numerically define and derive local creation and annihilation operators $a^{(\dag)}_i$ at site $i$. Thereby, we provide the first steps towards representations such as \eqref{eq.H_eff} derived by numerical variational means. The key to the efficient handling of translationally invariant systems is the transfer matrix. Its usefulness in handling MPS representations of infinite systems has been known for a while \cite{fanne92}. Impressive progress has been achieved in describing elementary excitations within the MPS framework \red{\cite{haege12,haege13a,Haegeman2013b,Haegeman2014}}. Haegeman et al.\ presented a momentum space ansatz \cite{haege12,Haegeman2013b} that yields very accurate results for the dispersion relation and is related to the calculation presented in this paper. They also proved by means of the Lieb-Robinson bounds \cite{lieb72} that an excited momentum eigenstate of a lattice Hamiltonian can be exponentially well approximated by acting on the ground state with the momentum superposition of a local operator with finite support \cite{haege13a} if the system displays an energy gap. In the present paper we show how such an operator can be constructed using eigenvectors of an eigenvalue problem arising in the dispersion calculation. We demonstrate that this operator is a representation of the local creation operator $a^\dagger$ by using it to compute the one-particle contribution to the spectral weight.
\red{To this end, we think that a brief presentation of the concept of matrix product states is required in order to explain technical details and the employed notation.} We test the method on the TFIM, which is analytically solvable \cite{pfeut70} and well understood. In spite of its simplicity, it shows a variety of interesting features such as a quantum phase transition, ground state degeneracy and two different types of elementary excitations. The paper is structured as follows: In Section \ref{sec:model} the model and its exact solution are recalled. In Section \ref{sec:MPS} a short introduction to matrix product states is given\red{, which may be skipped by readers familiar with this concept.} Section \ref{sec:GSE} shows the results for the ground state energy and the dispersion, followed by the derivation of the effective model in Section \ref{sec:eff_model}. In Section \ref{sec:spectral_weight} we compute the static one-particle spectral weight as an application of the effective model and finally conclude the paper in Section \ref{sec:outlook}. \section{Model and exact results} \label{sec:model} The TFIM \cite{genne63} is a common toy model for studying quantum magnets. In the one-dimensional ferromagnetic case considered here, it is given by the Hamiltonian \begin{equation} \mathcal{H} = - \Gamma \sum_i \Sz_i - J \sum_i \Sx_i \Sx_{i+1}, \quad \Gamma,\ J > 0 \label{eq.H_TFIM} \end{equation} where $\Sz$ and $\Sx$ are components of the standard spin-$\frac{1}{2}$ operators, $\Gamma$ is the external field and $J$ the coupling strength. We define the dimensionless parameter \begin{equation} \lambda = \frac{J}{2 \Gamma} \label{eq.def_lambda} \end{equation} that controls the system's behavior and in terms of which the Hamiltonian reads \begin{equation} \mathcal{H}_\lambda := - \sum_{i} \Sz_i - 2\lambda \sum_{i} \Sx_i \Sx_{i+1} \ .
\end{equation} At the point $\lambda = 1$ the coupling to the external field is of the same strength as the nearest-neighbor coupling, giving rise to a quantum critical point. The phase with $\lambda > 1$ is called the Ising regime or ordered phase since the Ising interaction is dominant. The ground state is twofold degenerate since its long-range order spontaneously breaks the Hamiltonian's $\mathbb{Z}_2$ symmetry $\Sx \to -\Sx$. In 1D, the elementary excitations are (dressed) domain walls between regions of the two different ground states. Conversely, for $\lambda < 1$ the external field dominates the behavior. This phase is called the strong-field regime or disordered phase. Due to the strong external field, its ground state is unique and its excitations are (dressed) spin flips. The model is analytically solvable by a sequence of Jordan-Wigner, Fourier and Bogoliubov transformations \cite{parki10} as shown by Pfeuty \cite{pfeut70} \red{and has been studied extensively, see for instance Refs.\ \cite{mccoy83a,mccoy83b,mulle85,perk09}. Here we will recall the important known facts.} The ground state energy per lattice site is given by the elliptic integral \begin{align} \frac{E_0}{L} &= - \frac{\Gamma (1+\lambda)}{\pi} \int_0^{\frac{\pi}{2}} \sqrt{ 1 - \frac{4\lambda \, \sin^2q}{(1+\lambda)^2}} \, \d q \label{eq.E0_TFIM} \end{align} which displays a non-analyticity at $\lambda = 1$ due to the singular derivative of the square root function. The one-particle dispersion relation $\omega_q$ reads \begin{align} \omega_q &= \Gamma \sqrt{1 + \lambda^2 - 2\lambda \cos q} \label{eq.Dispersion_TFIM} \end{align} with the lattice constant set to unity. From this the excitation gap $\Delta$ is read off to be \begin{align} \Delta = \min_q \omega_q = \Gamma | 1 - \lambda | \label{eq.Gap_TFIM} \end{align} which vanishes at $\lambda = 1$.
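These exact expressions provide a convenient benchmark. The following minimal numerical cross-check is our own illustration, not part of the original analysis; it assumes units with $\Gamma = 1$, a short periodic chain of $L = 10$ sites at $\lambda = 0.5$, and uses NumPy only. It evaluates the elliptic integral \eqref{eq.E0_TFIM} by the midpoint rule, reads off the gap \eqref{eq.Gap_TFIM} from the dispersion \eqref{eq.Dispersion_TFIM}, and compares with dense exact diagonalization of $\mathcal{H}_\lambda$:

```python
import numpy as np

lam, L = 0.5, 10  # disordered phase; units with Gamma = 1; short periodic chain

# gap from the dispersion, Eqs. (eq.Dispersion_TFIM) and (eq.Gap_TFIM)
q = np.linspace(-np.pi, np.pi, 4001)
omega = np.sqrt(1 + lam**2 - 2 * lam * np.cos(q))
assert np.isclose(omega.min(), abs(1 - lam))

# ground-state energy per site, Eq. (eq.E0_TFIM), via the midpoint rule
x = (np.arange(200000) + 0.5) * (np.pi / 2) / 200000
e0 = -(1 + lam) / np.pi * (np.pi / 2) * np.sqrt(
    1 - 4 * lam * np.sin(x) ** 2 / (1 + lam) ** 2).mean()

# dense exact diagonalization of H_lambda = -sum_i S^z_i - 2 lam sum_i S^x_i S^x_{i+1}
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def chain_op(ops):
    """Kronecker product of one single-site operator per site."""
    M = np.eye(1)
    for op in ops:
        M = np.kron(M, op)
    return M

H = -sum(chain_op([sz if k == i else np.eye(2) for k in range(L)])
         for i in range(L))
H += -2 * lam * sum(
    chain_op([sx if k in (i, (i + 1) % L) else np.eye(2) for k in range(L)])
    for i in range(L))
e0_ed = np.linalg.eigvalsh(H)[0] / L

# finite-size corrections are exponentially small in L in the gapped phase
assert abs(e0_ed - e0) < 1e-2
```

In the gapped phase the finite-size corrections decay exponentially with $L$, so even a ten-site chain reproduces the thermodynamic-limit energy per site to about three digits.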
A quantity of great interest in connection with elementary magnetic excitations is the ground state spin-spin correlation function defined by \begin{align} G^{\alpha\beta}_j := \langle S^\alpha_0 S^\beta_j \rangle = \bra{\psi_0} S^\alpha_0 S^\beta_j \ket{\psi_0} \label{eq.def_Gj} \ . \end{align} where $\alpha, \beta \ \in \ \{x, y, z, +, -\}$ and $\ket{\psi_0}$ denotes the ground state. For translationally invariant gapped systems with local Hamiltonians in one dimension the ground state correlation function Eq.\ \eqref{eq.def_Gj} is known to show exponential decay \cite{hasti06} \begin{align} G_j \propto \exp\left( - \frac{|r_j|}{\xi} \right) \label{eq.Gj_asymptotics} \end{align} where $\xi$ is the correlation length. For the 1D TFIM it can be calculated analytically \cite{okuni01} to be \begin{align} \xi = \frac{1}{|\ln\lambda|} \label{eq.xi_TFIM} \ . \end{align} The standard approximation $\xi \approx \frac{v}{\Delta}$, where $v$ is obtained by fitting $\omega_q \approx \sqrt{\Delta^2 + (2v \sin\left(\frac{q}{2}\right))^2}$ to the minimum and maximum of the dispersion, is in very good agreement with Eq.\ \eqref{eq.xi_TFIM} for $\lambda \gtrsim 0.2$. Another important quantity in relating theoretical results to experiment is the dynamical structure factor (DSF) \cite{marsh71} \begin{align} S^{\alpha\beta}(\omega,q) = \frac{1}{2\pi L} \sum_{i,j} \int_{-\infty}^{\infty} \d t \, e^{i[\omega t + q(r_i - r_j)]} \left\langle S^{\alpha}_j(t) S^\beta_i(0) \right\rangle \label{eq.def_DSF} \end{align} which describes the intensity in neutron scattering experiments.
We consider only zero temperature where the angular brackets denote the ground state expectation value.\\ Integrating Eq.\ \eqref{eq.def_DSF} over frequency yields the static structure factor \begin{align} S^{\alpha\beta}(q) = \frac{1}{L} \sum_{i,j} e^{iq(r_i - r_j)} \langle S^\alpha_j S^\beta_i \rangle \label{eq.def_SSF} \end{align} which is the Fourier transform of the ground state spin-spin correlation function Eq.\ \eqref{eq.def_Gj}. If for given momentum $q$ the energy levels are well separated, i.e., the spectral function is a sequence of Dirac-$\delta$-spikes, Eq.\ \eqref{eq.def_DSF} can be written in the spectral form \begin{align} S^{\alpha\beta}(\omega,q) = \sum_\Lambda \delta(\omega - E_\Lambda) S^{\alpha\beta}_{\Lambda}(q) \ . \label{eq.def_SLambda} \end{align} The spectral weights $S^{\alpha\beta}_{\Lambda}(q)$ are given by projecting Eq.\ \eqref{eq.def_SSF} onto the states with energy $E_\Lambda$. Note that the energy $E_\Lambda$ is understood to be defined \emph{relative} to the ground state energy. Except at criticality, the TFIM has a well defined one-particle energy $\omega_q$. Thus, evaluating Eq.\ \eqref{eq.def_SLambda} at $E_\Lambda = \omega_q$ is valid and yields the one-particle spectral weights \begin{subequations} \begin{align} S^{\alpha\beta}_{1\mathrm{p}}(q) &= \frac{1}{L} \sum_{j,j'} \bra{\psi_0} S_{j'}^\alpha \ket{\phi_q}\bra{\phi_q} S_j^\beta \ket{\psi_0} e^{iq(j'-j)} \label{eq.def_Sab} \\ &= \bra{\psi_0} S_q^{\alpha\dagger} a_q^\dagger \ket{\psi_0} \bra{\psi_0} a_q S_q^\beta \ket{\psi_0} \label{eq.Sxxq} \end{align} \end{subequations} where $\ket{\phi_q}$ is a one-particle state with energy $\omega_q$. \red{Hamer et al.\ have given an analytic formula for the spectral weight in the $xx$-channel $S^{xx}_\mathrm{1p}(q)$ in the disordered phase. The expression has been conjectured by them from a high-order series expansion \cite{hamer06b}.
In fact, its Fourier transform had been derived exactly by Vaidya and Tracy \cite{vaidya78}.} It reads \begin{align} S^{xx}_\mathrm{1p}(q) = \frac{(1-\lambda^2)^\frac{1}{4}}{\omega(q,\lambda)} \ , \quad \lambda < 1 \label{eq.Sxx_Hamer} \ . \end{align} Note that there is no single-particle contribution in the $xx$-channel for $\lambda > 1$ \cite{vaidya78}. \section{Matrix product states} \label{sec:MPS} \subsection{Definition} The formalism of matrix product states (MPS) has been introduced in various contexts \cite{baxte68,fanne89,fanne92}. It is a way of denoting quantum mechanical states that is particularly convenient for variational calculations. It is also closely related to the DMRG method \cite{ostlu95,romme97,schol11}. This section gives a brief introduction to the concept. Since we are interested only in translationally invariant chain models, we will restrict ourselves to those here. For a more detailed overview, we refer the reader to Ref.\ \cite{schol11}. Consider a state $\ket\psi$ of a system with $L$ sites, where $\sigma_i$ defines the local state at each site $i$ \begin{equation} \ket{\psi} = \sum_{\sigma_1,\ldots,\sigma_L} c_{\sigma_1,\ldots,\sigma_L} \ket{\sigma_1,\ldots,\sigma_L} = \sum_{\{\sigma_i\}} c_{\{\sigma_i\}} \ket{\{\sigma_i\}} \end{equation} where the $\ket{\sigma_1,\ldots,\sigma_L}$ represent an orthonormal basis set. For simplicity, we assume that all $\sigma_i$ have the same local Hilbert space dimension $d$. The expansion coefficients $c_{\sigma_1,\ldots,\sigma_L}$ can be interpreted as elements of a matrix $\Psi^{[1]}_{(\sigma_1),(\sigma_2,\ldots,\sigma_L)}$ of dimension $d \times d^{L-1}$. This can be written as \begin{align} \Psi^{[1]} = U^{[1]} S^{[1]} V^{\dagger\,[1]} \label{eq.SVD} \end{align} by means of the singular value decomposition (SVD).
In Eq.\ \eqref{eq.SVD} $U^{[1]}$ is a $d\times d$ unitary matrix, $S^{[1]}$ is a $d \times d$ real diagonal matrix that holds the singular values of $\Psi^{[1]}$ and $V^{[1]}$ is a $d^{L-1} \times d$ column orthogonal matrix, i.e., $V^{\dagger\,[1]}V^{[1]} = \I_d$.\\ Now one can define the elements of the $d\times d^{L-1}$ matrix $S^{[1]} V^{\dagger\,[1]}$ as elements of a new $d^2 \times d^{L-2}$ matrix $\Psi^{[2]}$ \begin{align} (S^{[1]}V^{\dagger\,[1]})_{\alpha_1,(\sigma_2,\ldots,\sigma_L)} := \Psi^{[2]}_{(\alpha_1,\sigma_2),(\sigma_3,\ldots,\sigma_L)} \label{eq.def_Psi_i} \end{align} (with $\alpha_1 = 1, \ldots, d$), and apply the SVD again. This process can be iterated for all quantum numbers $\sigma_i$. In the end, one has \begin{align} c_{\{\sigma_i\}} &= \left( U^{[1]}_{\sigma_1} \cdots U^{[L-1]}_{\sigma_{L-1}} \cdot \Psi^{[L]}_{\sigma_L} \right) \ . \end{align} As seen in Eq.\ \eqref{eq.def_Psi_i}, in each iteration, the quantum number $\sigma_i$ is shifted from the column index to the row index of $\Psi^{[i]}$. Therefore, the matrices $U^{[i]}$ are of dimensions $d^i \times d^i$. In order to carry out the matrix product, one has to select the right block (labeled by $\sigma_i$) from each $U^{[i]}$. In other words, each $U^{[i]}$ and also the leftover $\Psi^{[L]}$ can be interpreted as a column vector of $d$ sub-matrices of dimension $d^{i-1} \times d^{i}$, which are indexed by $\sigma_i$ \begin{align} U^{[i]} = \left(\begin{matrix} A^{1} \\ \vdots \\ A^{d} \end{matrix}\right) \quad \text{with } A^{\sigma_i} \in \mathbb{C}^{d^{i-1} \times d^i} \label{eq.U_decomp} \ . \end{align} This quantity can also be seen as a tensor of order three $\tens{A}^{\sigma_i}_{\alpha_{i-1},\alpha_{i}}$.
Eventually, a single coefficient $c_{\{\sigma_i\}}$ is represented in the form \begin{align} c_{\{\sigma_i\}} &= \sum_{\alpha_1,\ldots,\alpha_{L-1}} \tens{A}^{\sigma_1}_{1,\alpha_1} \tens{A}^{\sigma_2}_{\alpha_1,\alpha_2} \cdots \tens{A}^{\sigma_{L-1}}_{\alpha_{L-2},\alpha_{L-1}} \tens{A}^{\sigma_L}_{\alpha_{L-1},1} \nonumber \\ &= A^{\sigma_1} \cdot A^{\sigma_2} \cdots A^{\sigma_L} \ . \label{eq.Csigma_matrix} \end{align} The index $\sigma_i$ denotes the physical state of the corresponding quantum number and is therefore referred to as the physical index. The index $1$ in $\tens{A}^{\sigma_1}$ and $\tens{A}^{\sigma_L}$ implies that these objects are vectors. Finally, the entire state reads \begin{equation} \ket{\psi} = \sum_{\{\sigma_i\}} ( A^{\sigma_1} \cdots A^{\sigma_L} ) \, \ket{\{\sigma_i\}} \ . \label{eq.LC_MPS} \end{equation} Since each coefficient $c_{\{\sigma_i\}}$ in Eq.\ \eqref{eq.Csigma_matrix} has the form of a product of $L$ matrices, the representation Eq.\ \eqref{eq.LC_MPS} is called a matrix product state. By construction from the SVD, seen from the left end, the matrices have increasing dimension: $1 \times d, d \times d^2, \ldots$ up to $i = \frac{L}{2}$ and are therefore distinct at different sites $i$. Since we consider 1D chain models, this corresponds to open boundary conditions (OBC) where the position in the chain matters. To implement periodic boundary conditions, all matrices have to be of the same dimension\footnote{In general, this dimension is $d^{L/2}\times d^{L/2}$. In special cases an exact MPS representation can be found even with $2\times 2$ matrices, e.g., for the 1D AKLT valence bond crystal \cite{affle87a,fanne92,schol11}.}, because the labeling of the sites can be shifted arbitrarily.
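To make the iterated SVD construction concrete, the following NumPy sketch (our own illustration, not part of the paper; a random state with $d = 2$, $L = 6$ and no truncation) performs the successive decompositions and verifies that the matrix product Eq.\ \eqref{eq.Csigma_matrix} reproduces every coefficient $c_{\{\sigma_i\}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 2, 6
# random coefficient tensor c_{sigma_1,...,sigma_L}
c = rng.normal(size=(d,) * L) + 1j * rng.normal(size=(d,) * L)
c /= np.linalg.norm(c)

# left-canonical decomposition by iterated SVDs; tensors[i] has shape (D_{i-1}, d, D_i)
tensors = []
psi, Dprev = c.reshape(1, -1), 1
for i in range(L - 1):
    U, S, Vh = np.linalg.svd(psi.reshape(Dprev * d, -1), full_matrices=False)
    D = S.size
    tensors.append(U.reshape(Dprev, d, D))
    psi, Dprev = np.diag(S) @ Vh, D        # carry S V^dagger to the next site
tensors.append(psi.reshape(Dprev, d, 1))   # last site absorbs the final S V^dagger

def coeff(sigmas):
    """Evaluate c_{sigma_1,...,sigma_L} as the product A^{sigma_1} ... A^{sigma_L}."""
    M = np.ones((1, 1))
    for site, s in zip(tensors, sigmas):
        M = M @ site[:, s, :]
    return M[0, 0]

# exact reconstruction, since no singular values were discarded
assert all(np.isclose(coeff(idx), c[idx]) for idx in np.ndindex(*c.shape))
```

The bond dimensions produced by the loop grow as $1, d, d^2, \ldots$ towards the middle of the chain and shrink again, exactly as described above; a variational calculation would cap them at a fixed $D$.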
This is reflected in the cyclic property of the trace operation, which yields the proper scalar coefficient $c_{\{\sigma_i\}}$ in this case \cite{schol11} \begin{align} \label{eq.def_csigmas_tr} c_{\{\sigma_i\}} &= \Tr( A^{\sigma_1} \cdot A^{\sigma_2} \cdots A^{\sigma_L} ) \nonumber \\ &= \Tr( A^{\sigma_2} \cdots A^{\sigma_L} \cdot A^{\sigma_1} ) \ . \end{align} Since Eq.\ \eqref{eq.Csigma_matrix} is a scalar expression, applying the trace does not change it and Eq.\ \eqref{eq.def_csigmas_tr} also holds for OBC. In summary, generally the (maximum) dimension of the $A^{\sigma_i}$ grows as $d^{L/2}$ with $L$ and may vary with the lattice site depending on the boundary conditions. However, for variational calculations, fixing all matrices to a given dimension $D \times D$ provides a way of truncating the Hilbert space which is systematic in the sense that it influences all bulk matrices in the same way. This $D$ is sometimes referred to as `bond dimension'. Then it is more convenient to have the same dimension also at the ends of the system, regardless of the boundary conditions. To this end, the handling of the boundary conditions is shifted to two auxiliary systems of dimension $D$ located at both ends of the chain with states $\ket{\alpha}$ and $\ket{\beta}$. The corresponding matrices are vectors $\vec{\tilde a}^{\alpha\dagger}$ and $\vec{\tilde b}^\beta$ of dimension $D$.
Putting everything together results in a very general ansatz for an MPS \begin{subequations} \label{eq.gen_MPS} \begin{align} \ket{\psi} &= \sum_{\alpha,\beta} \sum_{\{\sigma_i\}} c_{\{\sigma_i\}}^{\alpha\beta} \ket{\{\sigma_i\}} \ket{\alpha} \ket{\beta} \\ &= \sum_{\alpha,\beta} \sum_{\{\sigma_i\}} \Tr( \vec{\tilde a}^{\alpha\dagger} A^{\sigma_1} \cdots A^{\sigma_L} \vec{\tilde b}^\beta ) \, \ket{\{\sigma_i\}} \ket{\alpha} \ket{\beta} \ . \label{eq.gen_MPS_tr} \end{align} \end{subequations} Note that in this representation the trace operation is redundant. However, it is still helpful in understanding the way matrix elements and overlaps are computed in the thermodynamic limit; therefore, it is kept in the notation. Another commonly used notation hides the boundary conditions in a boundary operator $Q$ in terms of which the MPS ansatz reads \begin{align} \ket{\psi} &= \sum_{\{\sigma_i\}} \Tr( Q A^{\sigma_1} \cdots A^{\sigma_L} ) \, \ket{\{\sigma_i\}} \end{align} where the trace operation is then required to make the coefficients scalars. Note that the MPS representation Eq.\ \eqref{eq.LC_MPS} is never unique. \red{The construction described in this section, starting the SVDs from the left side, yields the so-called \emph{left canonical} form of an MPS. One could equally well start the decomposition from the right side or from both sides simultaneously, meeting somewhere in the middle of the chain. This would yield different matrices $A^{\sigma_i}$.
These canonical forms are very special representations because in general there are many gauge degrees of freedom.} Between any two matrix sets $A^{\sigma_i},\ A^{\sigma_{i+1}}$ one can always introduce an invertible matrix $X_i$ such that \begin{align} c_{\{\sigma_i\}} &= \Tr( A^{\sigma_1} \cdots A^{\sigma_i} \I A^{\sigma_{i+1}} \cdots A^{\sigma_L} ) \nonumber \\ &= \Tr( A^{\sigma_1} \cdots (A^{\sigma_i} X_i) (X_i^{-1} A^{\sigma_{i+1}}) \cdots A^{\sigma_L} ) \nonumber \\ &= \Tr( A^{\sigma_1} \cdots \tilde A^{\sigma_i} \tilde A^{\sigma_{i+1}} \cdots A^{\sigma_L} ) \label{eq.gauge} \end{align} which changes the adjacent matrices but leaves the coefficient $c_{\{\sigma_i\}}$ unchanged. Thus, equality of two states $\ket{\psi_1} = \ket{\psi_2}$ does not imply equality of their respective MPS matrix sets. Therefore, we understand the equality of two matrix sets $A^{\sigma_i}$ and $\tilde A^{\sigma_i}$ up to such a gauge transformation, as a shorthand meaning that both sets represent the same state.
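The gauge freedom of Eq.\ \eqref{eq.gauge} is easily verified numerically. The following sketch (our own illustration, not part of the paper; random site-dependent real matrices with $d = 2$, $D = 3$, $L = 5$ and periodic boundary conditions) inserts an invertible $X_i$ between two sites and checks that no coefficient changes:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, L = 2, 3, 5
A = [rng.normal(size=(d, D, D)) for _ in range(L)]  # A[i][s] is the matrix A^{sigma_i = s}

def coeff(mats, sigmas):
    """Tr(A^{sigma_1} ... A^{sigma_L}), as in Eq. (def_csigmas_tr)."""
    M = np.eye(D)
    for Ai, s in zip(mats, sigmas):
        M = M @ Ai[s]
    return np.trace(M)

# gauge transformation between sites i and i+1:
#   A^{sigma_i} -> A^{sigma_i} X_i,   A^{sigma_{i+1}} -> X_i^{-1} A^{sigma_{i+1}}
i = 2
X = rng.normal(size=(D, D))          # generic, hence invertible
B = [a.copy() for a in A]
B[i] = A[i] @ X                      # batched right-multiplication over sigma_i
B[i + 1] = np.linalg.inv(X) @ A[i + 1]

# every coefficient c_{sigma_1,...,sigma_L} is unchanged
for idx in np.ndindex(*(d,) * L):
    assert np.isclose(coeff(A, idx), coeff(B, idx))
```

The same check works for OBC with boundary vectors; fixing a canonical form simply amounts to using this freedom to impose extra conditions on the $A^{\sigma_i}$.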
\subsection{Local operators} \label{subsec:MPOs} \begin{figure*}[t] \normalsize \begin{subequations} \begin{align} \bracket{\psi_0}{\psi_0} &= \sum_{\alpha',\alpha,\beta',\beta} \sum_{\{s_i\}} \sum_{\{s_i'\}} \vec{\tilde a}^{\alpha' T} A^{s_1'\ast} \cdots A^{s_L'\ast} \vec{\tilde b}^{\beta' \ast} \, \vec{\tilde a}^{\alpha \dagger} A^{s_1} \cdots A^{s_L} \vec{\tilde b}^{\beta} \left\langle s_1',\ldots,s_L'|s_1,\ldots,s_L\right\rangle \bracket{\alpha'}{\alpha} \bracket{\beta'}{\beta} \tag{29a} \\ &= \sum_{\alpha,\beta} \sum_{\{s_i\}} \Tr( \vec{\tilde a}^{\alpha T} A^{s_1\,\ast} \cdots A^{s_L\,\ast} \vec{\tilde b}^{\beta\ast} ) \Tr( \vec{\tilde a}^{\alpha \dagger} A^{s_1} \cdots A^{s_L} \vec{\tilde b}^{\beta} ) \tag{29b} \\ &= \sum_{\alpha,\beta} \sum_{\{s_i\}} \Tr\left[ (\vec{\tilde a}^{\alpha T} \otimes \vec{\tilde a}^{\alpha \dagger}) ( A^{s_1\ast} \otimes A^{s_1} ) \cdots ( A^{s_L\ast} \otimes A^{s_L} ) (\vec{\tilde b}^{\beta \ast} \otimes \vec{\tilde b}^{\beta}) \right] \tag{29c} \\ &= \Tr\Biggl[ \vec{a}^\dagger \underbrace{\left( \sum_{s_1 = 1}^{d} A^{s_1\ast} \otimes A^{s_1} \right)}_{=: T_1} \ \cdots\ \underbrace{\left( \sum_{s_j = 1}^{d} A^{s_j\ast} \otimes A^{s_j} \right)}_{=: T_j} \ \cdots\ \underbrace{\left( \sum_{s_L = 1}^{d} A^{s_L\ast} \otimes A^{s_L} \right)}_{=: T_L} \vec{b} \Biggr] \tag{29d} \label{eq.def_T} \end{align} \end{subequations} \addtocounter{equation}{-1} \hrule \end{figure*} Having defined the matrix product representation of quantum mechanical states, a compatible definition of operators is introduced as well.
Analogous to a state being defined by its expansion coefficients with respect to the basis $\ket{\{\sigma_i\}}$, an operator $\hat O$ can be defined by its matrix elements \begin{align} \bra{\{\sigma_i'\}} \hat O \ket{\{\sigma_i\}} &= \Tr( W^{\sigma_1'\sigma_1} \cdots W^{\sigma_L'\sigma_L} ) \nonumber \\ &= \sum_{\alpha_1,\ldots,\alpha_L} \tens{W}^{\sigma_1'\sigma_1}_{\alpha_L,\alpha_1} \cdots \tens{W}^{\sigma_L'\sigma_L}_{\alpha_{L-1},\alpha_L} \label{eq.def_Wsps} \end{align} where the quantities $\tens{W}^{\sigma_i'\sigma_i}_{\alpha_{i-1},\alpha_{i}}$ are tensors of order $4$ and for given $\sigma_i', \sigma_i$ the $W^{\sigma_i'\sigma_i}$ represents a matrix. The general derivation and treatment of these objects is called matrix product operator (MPO) formalism \cite{schol11}. Generic Hamiltonians \eqref{eq.H_TFIM} consist of terms acting only on a small number of lattice sites. In the TFIM one or two sites are involved. This simplifies the general definition in Eq.\ \eqref{eq.def_Wsps}. Let $\hat O$ be an operator that is the identity everywhere except at site $j$, i.e., $\hat O$ is a single-site operator. Then its matrix elements with respect to two MPS are given by \begin{align} \bra{\phi} \hat O \ket{\psi} &= \sum_{\alpha\beta} \Tr\left[ \left( \vec{\tilde a}_\alpha^{\dagger \ast} \otimes \vec{\tilde a}_\alpha^{\dagger} \right) \left( \sum_{\sigma_1} F^{\sigma_1 \ast} \otimes A^{\sigma_1} \right) \right. \nonumber \\ & \phantom{=} \left. \cdots \ \left( \sum_{\sigma_j \sigma_j'} W^{\sigma_j'\sigma_j} F^{\sigma_j'\ast} \otimes A^{\sigma_j} \right) \cdots \left( \vec{\tilde b}_\beta^\ast \otimes \vec{\tilde b}_\beta \right) \right] \label{eq.apply_local_MPO} \end{align} \addtocounter{equation}{1} where the matrices $F^{\sigma_i}$ describe the state $\ket{\phi}$, the matrices $A^{\sigma_i}$ the state $\ket{\psi}$, and $\otimes$ denotes the Kronecker product. 
In the local Hilbert space of the single site $j$ the $W^{\sigma_j'\sigma_j}$ are just the elements of the matrix representation of $\hat O$, i.e., scalars. This scheme readily extends to operators that are products of a finite number of single-site operators (see Eq. \eqref{eq.gse_per_site}). \subsection{Thermodynamic limit (iMPS)} \label{subsec:TDL} Let us consider the case where the local Hilbert spaces at all sites refer to locally identical spin degrees of freedom in a spin chain model such as the one defined in Eq.\ \eqref{eq.H_TFIM}. The labels $\sigma_i$ run everywhere over the same set of values. Furthermore, we assume that the Hamiltonian acting on these degrees of freedom is the same at each site. Then, the chain is translationally invariant in the thermodynamic limit $L \to \infty$. \red{Given translational invariance, it is plausible to assume that a uniform MPS representation exists for separable ground states, i.e., states that can be separated in two blocks by a Schmidt decomposition \cite{fanne92}. This means that all matrices $A^{\sigma_i}$ can be chosen the same.} Such a state also results as a fixed point in the infinite system DMRG algorithm \cite{ostlu95,romme97} and is called an iMPS. The uniform ground state matrices will be labeled $A^s$ henceforth. Next, we consider the norm of the ground state in Eq.\ (29). \begin{center} \emph{see equation (29) above.}\\ \end{center} In Eq.\ \eqref{eq.def_T} we defined the boundary vectors $\vec{a}^\dagger := \sum_{\alpha} \vec{\tilde a}^{\alpha T} \otimes \vec{\tilde a}^{\alpha\dagger}$ and $\vec{b} := \sum_{\beta} \vec{\tilde b}^{\beta\ast} \otimes \vec{\tilde b}^{\beta}$. The object $T$, which is also defined in Eq.\ \eqref{eq.def_T}, is called transfer operator or transfer matrix \cite{onsag44}. Because the $A^{s_i}$ are the same at each site, the transfer matrix is also uniform: $T_i = T \ \forall i$.
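To make the construction concrete, the following sketch builds the transfer matrix explicitly as a sum of Kronecker products; the random tensors and dimensions are purely illustrative, and the ordering of the Kronecker factors is a convention tied to the chosen vectorization (with numpy's row-major \texttt{reshape}, the ordering below realizes adding one site at the right end, $b \mapsto \sum_s A^s b\, A^{s\dagger}$).

```python
import numpy as np

def transfer_matrix(A):
    """Explicit D^2 x D^2 transfer matrix for a uniform MPS tensor A of
    shape (d, D, D).  With numpy's row-major reshape convention,
    T @ b.ravel() equals (sum_s A[s] @ b @ A[s].conj().T).ravel()."""
    d, D, _ = A.shape
    T = np.zeros((D * D, D * D), dtype=complex)
    for s in range(d):
        T += np.kron(A[s], A[s].conj())
    return T

# illustrative uniform tensor: d = 2 physical states, bond dimension D = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 3)) + 1j * rng.standard_normal((2, 3, 3))
T = transfer_matrix(A)
```

The spectrum of $T$ does not depend on the vectorization convention, so either Kronecker ordering yields the same eigenvalues $\mu_i$.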
The trace operation is redundant and used only to motivate the Kronecker product structure because of the identity $\Tr(A)\Tr(B) = \Tr(A\otimes B)$. Finally, the norm can be cast in the form \begin{align} \label{eq.norm_no_Q} \bracket{\psi}{\psi} &= \vec{a}^\dagger( T^{\dagger})^{\frac{L}{2}} T^{\frac{L}{2}} \vec{b} \ , \end{align} which explains the name transfer matrix: If $\vec{b}$ represents the right end of the chain at some site, the application of $T$ transfers this chain end by one site to the left, i.e., it adds the next site. From the definitions in Eq.\ \eqref{eq.def_T} it is obvious that the transfer matrix $T$ is of dimension $D^2 \times D^2$ and the vectors $\vec{a}$ and $\vec{b}$ are of dimension $D^2$. They can also be interpreted as $D \times D$ matrices $a$ and $b$ by filling such a matrix from top to bottom and left to right with the vector components \begin{align} \label{eq.vec2mat} \vec a = \left( \begin{matrix} \vec a_1 \\ \vdots \\ \vec a_D \end{matrix} \right) \quad \mapsto \quad a = \left( \begin{matrix} \vec a_1 & \ldots & \vec a_D \end{matrix} \right) \end{align} where the $\vec a_i$ are $D$ dimensional column vectors. In this notation, the standard scalar product in $\mathbb{C}^{D^2}$ reads \begin{equation} (a,b) = \Tr( a^\dagger b ) \ . \label{eq.matrix_sp} \end{equation} The application of $T$ to $b$ or of $T^\dagger$ to $a$ is also very concise if $a$ and $b$ are denoted as matrices \begin{subequations} \label{eq.apply_T} \begin{align} T[b] &= \sum_{s=1}^{d} A^{s} b \, A^{s \dagger} \\ T^\dagger[a] &= \sum_{s=1}^{d} A^{s \dagger} a \, A^{s} \end{align} \end{subequations} yielding again $D \times D$ matrices. Note that, if $2d < D$, evaluating these expressions in this form is computationally more efficient than multiplying a $D^2 \times D^2$ matrix with a $D^2$-dimensional vector.
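A minimal sketch of the maps in Eq.\ \eqref{eq.apply_T}; the tensor layout `(d, D, D)` is an implementation choice of this illustration. Applying $T$ this way costs $\mathcal{O}(d D^3)$ operations instead of the $\mathcal{O}(D^4)$ of a full matrix-vector product.

```python
import numpy as np

def apply_T(A, b):
    """T[b] = sum_s A^s b A^{s+} for a uniform tensor A of shape (d, D, D)."""
    return sum(A[s] @ b @ A[s].conj().T for s in range(A.shape[0]))

def apply_T_dagger(A, a):
    """Adjoint application, T^+[a] = sum_s A^{s+} a A^s."""
    return sum(A[s].conj().T @ a @ A[s] for s in range(A.shape[0]))
```

The two maps are adjoint with respect to the scalar product \eqref{eq.matrix_sp}, $(T^\dagger[a], b) = (a, T[b])$, which the efficient form preserves exactly.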
Let $\mu_i$ be the eigenvalues, $v_i$ the corresponding right eigenvectors (or eigenmatrices in the notation \eqref{eq.vec2mat}) of $T$, and $u_i$ the left eigenvectors, which are also the co-vectors of the $v_i$. If $T$ is Hermitian, $v_i = u_i$ holds. But this is generally not the case. We consider the decomposition of $b$ into the $v_i$ \begin{align} b &= \sum_i (u_i, b) \, v_i =: \sum_i \beta_i v_i \nonumber \\ \Rightarrow \quad T[b] &= \sum_i \beta_i \mu_i v_i \label{eq.T_ev_decomp} \ . \end{align} By the same argument that supports the power method for finding eigenvalues one realizes that for very large $L$ Eq.\ \eqref{eq.norm_no_Q} implies \begin{align} \frac{\bracket{\psi}{\psi}}{\mu_0^L} = \alpha_0^\ast u_0^\dagger \, v_0 \beta_0 \label{eq.norm_GS} \end{align} where $\mu_0$ is the eigenvalue of $T$ that is largest in absolute value and $u_0$ and $v_0$ are the corresponding left and right eigenvectors. This holds under the two conditions that the overlaps $\alpha_0 = \bracket{a}{v_0}$ and $\beta_0 = \bracket{b}{u_0}$ are non-zero and that $|\mu_0|$ is unique, i.e., there is no other eigenvalue of the same absolute value. \red{\emph{If} these two conditions are met, the explicit form of $\vec{a}$ and $\vec{b}$ and thus the boundary conditions they describe are irrelevant. From the physical point of view, this is understood for correlations of finite correlation length. Then, the behavior in the bulk of the infinite system does not depend on the boundary conditions.} \red{Note that the conclusion on the irrelevance of the boundary conditions does not hold for degenerate ground states where the boundary conditions may indeed influence the state in the bulk (see Appendix \ref{app:gs_degeneracy}). Then the exact transfer matrix $T$ generically displays two or more eigenvalues of the same absolute value. For a physical example we refer to the Majumdar-Ghosh model \cite{majum69a,majum69b,majum69c}.
In its ground state, two neighboring spins $\frac{1}{2}$ couple into a singlet state either on the odd or on the even bonds, which leads to a two-fold ground state degeneracy in the infinite system. This degeneracy is broken if there is a boundary: the realized ground state then favors a singlet on the last bond in order to avoid a dangling spin. In the corresponding analytical transfer matrix we observe a two-fold degeneracy of $|\mu_0|$, i.e., the above-mentioned conditions are not met.} \red{In the Ising phase, the TFIM also has a two-fold degenerate ground state. As opposed to the Majumdar-Ghosh model, however, it does not have an exact iMPS representation at finite $D$. We observe that the ground state search produces either one ground state or the other. The superposition of both cannot be captured well by the MPS ansatz. Around each of the two ground states, the above stated conditions hold and $|\mu_0|$ is unique so that we may omit the boundary vectors $\vec{a}$ and $\vec{b}$ from the notation unless stated otherwise. In this sense, the description of the system reduces to computing $\mu_0$, $v_0$ and $u_0$ and we will call $v_0$ and $u_0$ the boundary matrices.} Once $\mu_0$ is known, $A^s$ can always be rescaled such that $|\mu_0| = 1$ and \eqref{eq.norm_GS} stays finite for $L \to \infty$. If $\mu_0$ is positive, it can be rescaled to $\mu_0 \equiv 1$, otherwise, a phase factor remains\footnote{In this case, for every explicitly applied single-site operator (including $T$) the resulting matrix needs to be divided by $\mu_0$ to account for the phase factor. See Ref. \cite{ueda11} for a more detailed discussion of degenerate $\mu_0$.}. This scaling for $\mu_0 \in \mathbb{R}$ is implied in the sequel. Moreover, any scalar multiple of an eigenvector is also an eigenvector. Therefore, $u_0$ and $v_0$ can be rescaled such that $(\alpha_0 u_0, \beta_0 v_0) = 1$. These rescaled eigenvectors are labeled $u$ and $v$ and will be used from here on.
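Since only $\mu_0$, $v_0$ and $u_0$ are needed, a full diagonalization of the $D^2 \times D^2$ matrix can be avoided; a sketch of a power iteration using the efficient form $T[b] = \sum_s A^s b A^{s\dagger}$ follows (the stopping criterion, iteration count and starting matrix are illustrative choices, not prescriptions from the text):

```python
import numpy as np

def dominant_pair(A, n_iter=5000, tol=1e-13):
    """Power iteration for the dominant eigenvalue mu_0 of the map
    b -> sum_s A^s b A^{s+} and its right eigenmatrix v_0.  Starting
    from the identity keeps the iterates in the cone of positive
    semidefinite matrices, where the Perron-type eigenmatrix of this
    completely positive map lives."""
    d, D, _ = A.shape
    v = np.eye(D, dtype=complex) / np.sqrt(D)
    mu = 0.0
    for _ in range(n_iter):
        w = sum(A[s] @ v @ A[s].conj().T for s in range(d))
        mu_new = np.linalg.norm(w)
        w /= mu_new
        if abs(mu_new - mu) < tol:
            return mu_new, w
        mu, v = mu_new, w
    return mu, v
```

The left eigenmatrix $u_0$ is obtained in the same way by iterating the adjoint map $a \mapsto \sum_s A^{s\dagger} a A^s$.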
Due to the gauge freedom, see Eq.\ \eqref{eq.gauge}, one can find a gauge for $A^s$ such that either $v$ or $u$ equals the identity. Then the other eigenmatrix is a diagonal matrix with non-negative eigenvalues and unit trace. It corresponds to the reduced density matrix appearing in DMRG. This gauge has some advantages, see Appendix \ref{app:gs_search} for details, and is therefore the representation of choice. For further details of this \emph{canonical} form of the infinite-size MPS (iMPS) see Ref. \cite{orus14} \footnote{Ref. \cite{orus14} uses a composite representation $\{\Gamma, \Lambda\}$ of the MPS where $\Gamma$ is a rank $3$ tensor that lives on the sites and $\Lambda$ is a diagonal matrix that lives on the bonds. It holds the Schmidt coefficients of the Schmidt decomposition across the bond. This representation can be obtained from the $A^s$ tensors by computing the SVD $U = W \Lambda V^\dagger$ and setting $\Gamma^s = V^\dagger W^s$. Here $U$ is the matrix $U^{[i]}$ from Eq.\ \eqref{eq.U_decomp} and the $W^s$ are the blocks of $W$ defined analogously to the matrices $A^{\sigma_i}$. The canonical $A^s$ is obtained by $A^s = \tilde\Gamma^s \tilde\Lambda$ where $\{\tilde\Gamma,\tilde\Lambda\}$ is the canonical composite representation.}. If not stated otherwise, we henceforth consider a non-degenerate transfer matrix $T$ with unique $\mu_0=1$ after appropriate rescaling. We consider the single-site operator $\hat O$ from Eq.\ \eqref{eq.apply_local_MPO} to be the identity with the local matrix representation $\I_d$. Using Eq.\ \eqref{eq.norm_no_Q} we calculate the ground state expectation value in the thermodynamic limit \begin{subequations} \begin{align} \bra{\psi_0} \I \ket{\psi_0} &= \vec{a}^\dagger (T^\dagger)^{\frac{L}{2}} \left( \sum_{s,s'} (\I_d)_{ss'} A^{s'\ast} \otimes A^{s} \right) T^{\frac{L}{2}} \vec{b} \\ &= \vec{u}^\dagger \left( \sum_{s} A^{s\ast} \otimes A^{s} \right) \vec{v} \\ &= \vec{u}^\dagger T \vec{v} = (u,T[v]) = 1 \ .
\end{align} \end{subequations} In this sense, $T$ can also be perceived as an identity operation at one site \begin{align} T[v] &= \sum_{s} A^{s} v \, A^{s\dagger} = \sum_{s, s'} \delta_{ss'} A^{s'} v \, A^{s \dagger} \nonumber \\ &= \sum_{s,s'} (\I_d)_{ss'} A^{s'} v \, A^{s \dagger} =: \I^{(A,A)}[v] = v \ . \end{align} The scheme in Eq.\ \eqref{eq.apply_T} extends to nontrivial operators straightforwardly as follows \begin{subequations} \label{eq.apply_op} \begin{align} \hat O^{(B,A)}[v] &= \sum_{s,s'} O_{ss'} \, A^{s'} v \, B^{s \dagger} \\ \hat O^{\dagger (B,A)}[u] &= \sum_{ss'} O^\dagger_{ss'} \, A^{s\dagger} u \, B^{s'} \ . \end{align} \end{subequations} As an example of the application of a local MPO, let us consider a single term \begin{align} h_i = - \Gamma \Sz_i - J \Sx_i \Sx_{i+1} \label{eq.def_hi} \end{align} of the Hamiltonian \eqref{eq.H_TFIM}. Applying the scheme in Eq.\ \eqref{eq.apply_local_MPO} using \eqref{eq.apply_op} yields \begin{subequations} \label{eq.gse_per_site} \begin{align} \bra{\phi} h_i \ket{\psi} = & -\Gamma \bra{\phi} \Sz_i \ket{\psi} - J \bra{\phi} \Sx_i \Sx_{i+1} \ket{\psi} \\ = \, & - \Gamma (\bar u, S^{z\,(F,A)}[\bar v]) \label{eq.gse_TF} \\ \, & - J (\bar u, S^{x\,(F,A)}[ S^{x\,(F,A)}[\bar v] ]) \label{eq.gse_Ising} \end{align} \end{subequations} where $\bar v$ and $\bar u$ are the eigenvectors of $\bar T = \sum_s F^{s \ast} \otimes A^s$. This concludes the brief formal and technical review of the matrix product formalism. Below we turn to its application to the TFIM and to the construction of local creation and annihilation operators. \section{Ground state energy and dispersion} \label{sec:GSE} In this section, we describe one of several ways to obtain a uniform iMPS representation of the ground state and how to calculate the dispersion of a single quasi-particle in the system.
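Before turning to the ground state search, the expectation-value scheme of Eqs.\ \eqref{eq.apply_op} and \eqref{eq.gse_per_site} can be illustrated in a few lines of code; the $D = 1$ product-state matrices and the spin operator below are chosen purely for illustration (a fully polarized state, for which the result is known by hand):

```python
import numpy as np

def apply_op(O, A, v):
    """O^{(A,A)}[v] = sum_{s,s'} O_{s s'} A^{s'} v A^{s+}, cf. Eq. (apply_op)."""
    d = A.shape[0]
    return sum(O[s, sp] * A[sp] @ v @ A[s].conj().T
               for s in range(d) for sp in range(d))

def expectation(O, A, u, v):
    """<psi|O_i|psi> = (u, O[v]) = Tr(u^+ O[v]), assuming (u, T[v]) = 1."""
    return np.trace(u.conj().T @ apply_op(O, A, v))

# uniform D = 1 tensors representing the fully polarized state |up up ...>
A = np.array([[[1.0]], [[0.0]]], dtype=complex)   # shape (d=2, D=1, D=1)
u = v = np.array([[1.0]], dtype=complex)
Sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
```

For this state one expects $\langle S^z_i \rangle = \tfrac{1}{2}$ and, with the identity in place of $O$, the norm $1$, which serves as a consistency check of the implementation.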
\subsection{Ground state search} Starting from the ansatz \eqref{eq.gen_MPS}, finding the ground state energy is a variational problem in the coefficients of the ground state matrices $A^s$ \begin{align} \frac{E_0}{L} \leq \min_{\{A^s\}} \frac{\bra{\psi_0(A^s)} h_i \ket{\psi_0(A^s)}}{\bracket{\psi_0(A^s)}{\psi_0(A^s)}} \label{eq.Rayleigh_Ritz} \end{align} where $h_i$ is the local term of the Hamiltonian defined in Eq.\ \eqref{eq.def_hi}. There are various methods of finding an optimal $A^s$ for given $D$\red{.} The right-hand side of Eq.\ \eqref{eq.Rayleigh_Ritz} is a highly nonlinear function in the elements of $A^s$. Thus, for its minimization, one might resort to any multi-dimensional minimizer that does not rely on derivatives, but their convergence is usually slow. \red{Another non-variational possibility for the ground state search is the imaginary time evolution in Vidal's iTEBD \cite{vidal07} which, however, is also found to be much slower than MPS-based iDMRG \cite{mccull08}}. Alternatively, one may use an iterative approach. In each step, only the elements of the matrices $B^s$ at a single site are varied, all other sites are kept at a fixed $A^s$. Then, the matrix $B^s$ with the lowest ``local energy'' $\epsilon_0$ is adopted everywhere as the improved guess for $A^s$ and this process is iterated until convergence $B^s_0 = A^s$ is reached within some numerical tolerance. Let $\ket{\psi(A^s,B^s)}$ be the state that has $A^s$ matrices everywhere except at site $i=0$ where the $B^s$ matrices are inserted instead. This insertion breaks the uniformity of the state. Therefore, its energy is no longer given by a multiple of the expectation value of $h_i$. Instead, the full Hamiltonian $\mathcal{H} = \sum_i h_i$ has to be taken into account.
The minimization problem in terms of the elements of $B^s$ reads \begin{align} \epsilon = \frac{\bra{\psi(A^s,B^s)} \left( \sum_i h_i - E(A^s) \right) \ket{\psi(A^s,B^s)}}{\bracket{\psi(A^s,B^s)}{\psi(A^s,B^s)}} \label{eq.min_Bs} \end{align} where $E(A^s)$ is the energy per site of the uniform state that has only $A^s$ matrices. This is also the best estimate for the ground state energy per site $E_0/L$ at each step of the iteration. We subtract it in order to avoid extensive contributions. Both the numerator and the denominator of Eq.\ \eqref{eq.min_Bs} are bilinear forms in the $d\cdot D^2$-dimensional vector $\vec B$ that holds all the elements of the matrix set $B^s$. Looking for minima in Eq.\ \eqref{eq.min_Bs} means looking for roots in its derivative with respect to $\vec B^\dagger$. It is well-established that for bilinear forms the roots of this derivative amount to the generalized eigenvalue problem (EVP) \begin{subequations} \label{eq:evp} \begin{align} \eqref{eq.min_Bs} \quad &\Leftrightarrow \quad \vec B^\dagger M(\mathcal{H},A^s) \vec B = \epsilon \vec B^\dagger N(A^s) \vec B \\ &\rightarrow \quad M(\mathcal{H},A^s) \vec B = \epsilon N(A^s) \vec B \label{eq.GSE_EVP} \ . \end{align} \end{subequations} Note that the matrices $M$ and $N$ are both Hermitian by construction. For details on the ground state search algorithm, we refer the reader to Appendix \ref{app:gs_search}. There is no rigorous proof that adopting at all sites the local minimum $B^s_0$ found from the minimization at one site will lower the total energy. But empirically it is found to be the case if the initial guess for the $A^s$ is not too far away from an optimal $A^s$. Moreover, in practice we adopt a line-search algorithm between $B^s_0$ and the former $A^s$ to stabilize the minimization, see Appendix \ref{ss:fine-tuning}. Results for the ground state energy of the TFIM are depicted in Fig.\ \ref{plot.GSE}.
The agreement is extremely good in view of the low matrix dimension. There is a clear maximum in the deviation from the exact result, close to the location of the phase transition. The parameter value of the largest deviation is found to be below the true critical value $\lambda_c=1$, but quickly approaches it as $D$ grows. As explained for instance in Ref.\ \cite{orus14}, the MPS formalism inherently implies exponentially decaying correlations because of the finite, bounded amount of entanglement which can be represented. The length scale of the decay of these correlations, the correlation length, is determined by the second largest magnitude eigenvalue $\mu_1$ of $T$. To see this, consider the application of $T$ to a matrix $b$ expanded in eigenmatrices $v_i$ of $T$ in Eq.\ \eqref{eq.T_ev_decomp}. Assuming a non-degenerate spectrum of $T$ and $\mu_0 = 1$, the subleading term is $\beta_1 \mu_1 v_1$ with $|\mu_1| < 1$. This term determines the rate at which $T^j[b]$ converges to $\beta_0 v_0$ \begin{align} T^j[b] \approx \beta_0 v_0 + \mu_1^j \beta_1 v_1 \ . \end{align} Therefore, the correlation length $\xi_T$ captured by $T$ is given as \begin{align} \xi_T = - \frac{1}{\ln|\mu_1|} \label{eq.xi_T} \ . \end{align} Figure \ref{plot.ZetaXi} displays $\xi_T$ for various matrix dimensions $D$ in comparison to the exact expression Eq.\ \eqref{eq.xi_TFIM}. Especially close to criticality a larger matrix dimension is required to improve the agreement. \red{Since this is a proof-of-concept study, the computations were carried out on laptop computers and workstations. Therefore, we restricted the bond dimension $D$ to low values to keep the runtime short. Bond dimensions of several hundred are possible, but then the calculations take considerably more time.} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig1} \caption{(Color online) Upper panel: Ground state energy per site $E_0/L$ as a function of $\lambda$.
Comparison of the exact result \eqref{eq.E0_TFIM} to results from iMPS calculations with various bond dimensions $D$. The lower panel shows the deviation $|\Delta E| = |E_0/L - E_\text{0,exact}|$. The critical point is located at $\lambda = 1$ and the shaded area to its right marks the Ising regime with a two-fold degenerate ground state.} \label{plot.GSE} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig2} \caption{(Color online) The correlation length $\xi_T$ as computed from the second largest EV of the transfer matrix, see Eq.\ \eqref{eq.xi_T}, compared to the exact expression $\xi = |\ln\lambda|^{-1}$ from \eqref{eq.xi_TFIM}.} \label{plot.ZetaXi} \end{figure} \subsection{Dispersion} For simplicity, we focus here on the regime where the ground state is unique. In Appendix \ref{app:gs_degeneracy} the changes for a degenerate ground state are summarized. \red{The approach is also described in Ref.\ \cite{Haegeman2013b}.} If the ground state is unique, the eigenvectors $\vec B_{\alpha > 0}$ of \eqref{eq.GSE_EVP} with higher local energy $\epsilon_{\alpha>0}$ describe excitations of the system. Let \begin{align} \ket{\psi^\alpha_j} = \sum_{\{s_i\}} \Tr(A^{s_1} \cdots A^{s_{j-1}} B^{s_j}_\alpha A^{s_{j+1}} \cdots A^{s_L}) \ket{\{s_i\}} \label{eq.def_Psi_ARS} \end{align} be the state that has ground state matrices everywhere except at site $j$ where a $B^s_\alpha$ matrix is inserted instead. By construction, these states are orthogonal to the ground state \begin{align} \bracket{\psi_0}{\psi^\alpha_j} = (u,\I^{(A,B_\alpha)}[v]) = \vec A^{\dagger} N \vec B_\alpha = 0 \ . \end{align} The same holds for states with different $\alpha$ if the $B^s_\alpha$ are at the same site since \begin{align} \bracket{\psi^\alpha_j}{\psi^\beta_j} = \vec B^{\dagger}_\alpha N \vec B_\beta \propto \delta_{\alpha\beta} \end{align} because they result from the same generalized EVP \eqref{eq:evp} with Hermitian matrices $M$ and $N$.
But if $B^s_\alpha$ and $B^s_\beta$ are inserted at different sites $j' \neq j$, the corresponding states will not be orthogonal. If we want to view the insertion $A^s \to B^s_\alpha$ as the effect of a creation operator, we need the excitations at different sites to be mutually orthogonal. How can we solve this issue? We achieve orthogonality by resorting to the construction of Wannier states known from solid state textbooks. One takes a detour via the Fourier transform since the resulting momentum eigenstates \begin{align} \ket{\psi^\alpha_q} := \frac{1}{\sqrt{L}} \sum_j e^{-iqj} \ket{\psi^\alpha_j} \label{eq.def_Psi_AQS} \end{align} are known to be orthogonal in momentum space $\bracket{\psi_q^\alpha}{\psi_{q'}^\beta} \propto \delta_{qq'}$ because they refer to different eigenvalues under discrete translations. The restriction of all momenta to the first Brillouin zone is implied. Exploiting translational invariance, the overlap of two momentum eigenstates can be computed as \begin{subequations} \label{eq.def_Nq} \begin{align} \bracket{\psi_q^\alpha}{\psi_q^\beta} &= \frac{1}{L} \sum_{j,j'} e^{iqj'} e^{-iqj} \bracket{\psi^\alpha_{j'}}{\psi^\beta_j} \\ &= \frac{1}{L} \sum_{j,j'} e^{iq(j'-j)} \bracket{\psi^\alpha_{j'-j}}{\psi^\beta_0} \\ &= \sum_j e^{iqj} \bracket{\psi^\alpha_j}{\psi^\beta_0} \\ &=: N_q^{\alpha\beta} \label{eq.def_Nq_ab} \end{align} \end{subequations} where Eq.\ \eqref{eq.def_Nq} defines the matrix $N_q$ which is the metric tensor of the states $\ket{\psi_q^\alpha}$. As seen in Eqs.\ \eqref{eq.def_Nq}, the normalization factor $\frac{1}{L}$ always cancels out in the computation of an expectation value, norm or overlap. Thus the limit $L \to \infty$ does not pose any numerical problems. As seen in the sequence of equalities \eqref{eq.def_Nq}, translational invariance allows us to assume that the ket-side matrices $B^s$ are always placed at site $0$.
The infinite sums over real space indices in the above equations may be seen as an insurmountable problem. But this is not the case because the infinite sums converge exponentially. To elucidate this point we look at the following limits. Any $D \times D$ matrix $m$ (assuming $\mu_0 = 1$) can be decomposed according to Eq.\ \eqref{eq.T_ev_decomp}. Applying the transfer matrix $T$ $j$ times yields \begin{subequations} \label{eq.Tv_convergence} \begin{align} T^j[m] &= \sum_i (u_i,m) T^j [v_i] \nonumber \\ &= \sum_i (u_i,m) (\mu_i)^j v_i \\ & \quad\quad \text{with } |\mu_i| < 1 \text{ for } i > 0 \nonumber \\ &\Rightarrow \quad \lim_{j\to\infty} T^j [m] = (u,m) \cdot v \\ &\phantom{\Rightarrow} \quad \lim_{j\to\infty} T^{\dagger\, j} [m] = (m,v) \cdot u \end{align} \end{subequations} where the convergence to these limits is exponential in $j$ governed by the second largest absolute value $|\mu_1|$ of the eigenvalues of $T$. The overlap $\bracket{\psi^\alpha_j}{\psi^\beta_0}$ goes to zero for large $j$ \begin{subequations} \label{eq.Nj_decay} \begin{align} \bracket{\psi^\alpha_j}{\psi^\beta_0} &= (\I^{(B_\alpha,A)}[u], T^j[ \I^{(A,B_\beta)}[v]] ) \\ &\approx ( \I^{(B_\alpha,A)}[u], (u, \I^{(A,B_\beta)}[v]) \cdot v ) \label{eq:approx} \\ & = \bracket{\psi_0}{\psi^\beta_0} \bracket{\psi^\alpha_j}{\psi_0} \\ & = 0 \end{align} \end{subequations} where the approximation in \eqref{eq:approx} refers to the exponential convergence established in Eqs.\ \eqref{eq.Tv_convergence}. In the last line we used the local orthogonality $\bracket{\psi_0}{\psi^\alpha_0} = 0$. The exponential convergence to zero justifies truncating the Fourier series after a finite number $j_\text{max}$ of terms. If the ground state search is converged, $\vec B_{\alpha=0}$ is the set of ground state matrices, i.e., $\ket{\psi_i^{\alpha=0}} \equiv \ket{\psi_0} \ \forall\ i$.
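The exponential convergence established in Eqs.\ \eqref{eq.Tv_convergence} can be verified directly in a few lines. All tensors and dimensions below are illustrative random data, and the explicit Kronecker construction of $T$ (numpy row-major vectorization) is an implementation choice of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
d, D = 2, 3
A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))

# explicit transfer matrix in the row-major vectorization convention:
# T @ b.ravel() == (sum_s A[s] @ b @ A[s].conj().T).ravel()
T = sum(np.kron(A[s], A[s].conj()) for s in range(d))
mu0 = np.max(np.abs(np.linalg.eigvals(T)))
A = A / np.sqrt(mu0)                    # rescale so that mu_0 = 1
T = T / mu0

# dominant right (v) and left (u) eigenmatrices, normalized to (u, v) = 1
w, R = np.linalg.eig(T)
v = R[:, np.argmax(np.abs(w))].reshape(D, D)
wl, Lm = np.linalg.eig(T.conj().T)
u = Lm[:, np.argmax(np.abs(wl))].reshape(D, D)
u = u / np.conj(np.trace(u.conj().T @ v))   # enforce Tr(u^+ v) = 1

def distance_to_fixed_point(m, j):
    """|| T^j[m] - (u, m) v ||, which decays like |mu_1|^j."""
    x = m
    for _ in range(j):
        x = sum(A[s] @ x @ A[s].conj().T for s in range(d))
    return np.linalg.norm(x - np.trace(u.conj().T @ m) * v)

m = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
```

For a generic random tensor the gap $|\mu_1| < \mu_0 = 1$ is sizable, so the distance to the fixed point shrinks by orders of magnitude within a few tens of applications.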
If the ground state matrices are established with sufficient numerical accuracy, all $\ket{\psi^{\alpha > 0}_q}$ are orthogonal to $\ket{\psi_0}$. Thus, the dispersion relation can be found by a second variational calculation in the orthogonal complement of the ground state, i.e., in the subspace spanned by the $\ket{\psi^{\alpha>0}_q}$ \begin{subequations} \label{eq.dispersion_min} \begin{align} \omega_q &\leq \min_{\vec v_q} \frac{\bra{\phi_q} (\mathcal{H} - E_0) \ket{\phi_q}}{\bracket{\phi_q}{\phi_q}} \\ \ket{\phi_q} &:= \sum_{\alpha = 1}^{d\cdot D^2-1} v_q^\alpha \ket{\psi^\alpha_q} \label{eq.RR_dispersion} \ . \end{align} \end{subequations} This leads to another generalized EVP \begin{align} \label{eq.dispersion_EVP} H_q \vec v_q &= \omega_q N_q \vec v_q \end{align} where $N_q$ is the matrix defined in Eq.\ \eqref{eq.def_Nq_ab} and $H_q$ is defined analogously \begin{subequations} \begin{align} H_q^{\alpha\beta} &:= \bra{\psi^\alpha_q} (\mathcal{H}-E_0) \ket{\psi^\beta_q} \label{eq.Hq_elements} \\ &= \sum_{j,i} e^{iqj} h^{\alpha\beta}_{j,i} \\ &:= \sum_{j} e^{iqj} \left[ \sum_i \bra{\psi^\alpha_j} \left(h_i - \frac{E_0}{L} \right) \ket{\psi^\beta_0} \right] \ . \end{align} \end{subequations} The lowest eigenvalue $\omega^0_q$ is the best estimate for the one-particle dispersion at given $q$. Haegeman et al.\ \cite{haege12} observed that there is always a number of choices $B^s_\alpha$ such that $\ket{\psi^\alpha_q} \equiv 0 \ \forall\ q$ due to the gauge degrees of freedom stated in Eq.\ \eqref{eq.gauge} combined with translational invariance. Because of the associativity of the matrix product, for any $X \in \mathbb{C}^{D \times D}$ we have \begin{align} \ket{\psi_{j-1}^R} &= \sum_{\{s_i\}} \Tr( \cdots (A^{s_{j-1}} X) \tilde A^{s_{j}} \cdots ) \ket{\{s_i\}} = \nonumber \\ \ket{\psi_{j}^L} &= \sum_{\{s_i\}} \Tr( \cdots A^{s_{j-1}} (X \tilde A^{s_{j}}) \cdots ) \ket{\{s_i\}} \ .
\end{align} Note that we allow here for the more general case where the ground state matrices are different to the left and to the right of the inserted gauge matrix $X$. This includes the possibility of excitations of domain wall character where one switches between degenerate ground states. Let us define $B^s := e^{iq} A^s X - X \tilde A^s$ implying \begin{subequations} \label{eq.Hj_decay} \begin{align} \ket{\psi_q} &= \frac{1}{\sqrt{L}} \sum_j e^{-iqj} \ket{\psi_j} \\ &= \frac{1}{\sqrt{L}} \sum_j e^{-iqj} \left( e^{iq} \ket{\psi_j^R} - \ket{\psi_j^L} \right) \\ &= \frac{1}{\sqrt{L}} \sum_j e^{-iqj} \left( \ket{\psi_{j-1}^R} - \ket{\psi_{j}^L} \right) \\ &= 0 \end{align} \end{subequations} because a phase factor of $e^{iq}$ translates to a shift of the states in real space by one site to the left under the Fourier transformation. The matrix $X$ has $D^2$ parameters. Thus, for $q \neq 0$ or for $q = 0$ and $\tilde A^s \neq A^s$, the dimension of the space spanned by $\ket{\psi_q^\alpha}$ is reduced by $D^2$. In other words, there are $D^2$ ``zero modes''. For $q = 0$ and $\tilde A^s = A^s$, the choice of $X = \I$ results in $B^s = 0$ which makes $\ket{\psi_j}$ the null vector. Thus the number of linearly independent zero modes is reduced to $D^2-1$ in this case. Therefore, the metric tensor $N_q$ has a $D^2$ or $(D^2-1)$ dimensional null space. In order to take this into account $H_q$ needs to be projected onto the non-zero eigenspace for computing $\omega_q$. Within this non-zero eigenspace, the dispersion is found by solving the standard EVP \begin{align} \sqrt{D_N}^{\,-1} V'^\dagger H_q V' \sqrt{D_N}^{\,-1} \, \vec v\,'_q = \omega_q \vec v\,'_q \label{eq.gen_to_std_EVP} \end{align} where the diagonal matrix $D_N$ holds the non-zero eigenvalues of $N_q$ and $V'$ the corresponding eigenvectors. The original $\vec v_q$ from Eq.\ \eqref{eq.dispersion_EVP} can be obtained as $\vec v_q = V' \vec v\,'_q$. 
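The projection of Eq.\ \eqref{eq.gen_to_std_EVP} can be implemented directly once the spectral decomposition of $N_q$ is available; a sketch for Hermitian $H$ and positive-semidefinite $N$ follows (the null-space cutoff value is an illustrative choice, not a prescription from the text):

```python
import numpy as np

def solve_projected_gevp(H, N, cutoff=1e-10):
    """Solve H v = w N v by projecting onto the non-null eigenspace of N,
    as in Eq. (gen_to_std_EVP): with N = V' D_N V'^+ restricted to
    eigenvalues above the cutoff, diagonalize
    D_N^{-1/2} V'^+ H V' D_N^{-1/2} and map the eigenvectors back."""
    dN, V = np.linalg.eigh(N)
    keep = dN > cutoff
    Vp = V[:, keep]
    S = 1.0 / np.sqrt(dN[keep])
    Heff = (S[:, None] * (Vp.conj().T @ H @ Vp)) * S[None, :]
    w, y = np.linalg.eigh(Heff)
    vecs = Vp @ (S[:, None] * y)        # v = V' D_N^{-1/2} y
    return w, vecs
```

For a non-singular $N$ the routine reproduces the full generalized EVP, while for a rank-deficient $N$ the zero modes are discarded automatically.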
The computation of the matrix elements in Eq.\ \eqref{eq.Hq_elements} is the most time consuming part of the calculation because the complete Hamiltonian acts on all lattice sites and the Fourier coefficients have to be computed for many values of $j$. But by the same argument as in Eq.\ \eqref{eq.Nj_decay}, the contributions converge exponentially to zero if $|j| \gg 1$. Let for instance $j \ll 0$, $i \gtrsim 0$. Then \begin{subequations} \begin{align} h^{\alpha\beta}_{j,i} &= \bra{\psi^\alpha_j} h_i \ket{\psi^\beta_0} - \frac{E_0}{L}\bracket{\psi^\alpha_j}{\psi^\beta_0} \\ \nonumber &= (\I^{(B_\alpha,A)}[ T^{\dagger\,|j-1|}[ \I^{(A,B_\beta)}[u] ] ], T^{i-1}[ h_i[v] ] ) \\ &\qquad\qquad - \frac{E_0}{L}\bracket{\psi^\alpha_j}{\psi^\beta_0} \\ &\approx \left( \I^{(B_\alpha,A)}[(\I^{(A,B_\beta)}[u],v) \cdot u], T^{i-1}[ h_i[v] ] \right) \\ &= \bracket{\psi_0}{\psi^\beta_0} (\I^{(B_\alpha,A)}[u],T^{i-1}[ h_i[v] ]) = 0 \ . \end{align} \end{subequations} where the vanishing of the last expression holds in the limit $j\to \infty$. For $|i| \gg 1$ we obtain similarly \begin{align} h^{\alpha\beta}_{j,i} &= (\I^{(B_\alpha,A)}[T^{\dagger |j-1|}[\I^{(A,B_\beta)}[u]]], T^{i-1}[h_i[v]]) \nonumber \\ &\qquad\qquad - \frac{E_0}{L} \bracket{\psi^\alpha_j}{\psi^\beta_0} \nonumber \\ &\approx \left(\I^{(B_\alpha,A)}[T^{\dagger |j-1|}[\I^{(A,B_\beta)}[u]]], v \cdot \frac{E_0}{L}\right) \nonumber \\ &\qquad\qquad - \frac{E_0}{L} \bracket{\psi^\alpha_j}{\psi^\beta_0} \nonumber \\ &= \bracket{\psi^\alpha_j}{\psi^\beta_0} \frac{E_0}{L} - \frac{E_0}{L} \bracket{\psi^\alpha_j}{\psi^\beta_0} = 0 \quad \text{for} \quad j\to\infty \end{align} where $j < 0$ is assumed for simplicity. Figures \ref{plot.Dispersion_sf} through \ref{plot.Dispersion_crit} show the dispersion $\omega_q$ for various parameter values. 
At $\lambda = 0.8$ and $\lambda = 1.2$, see Figs.\ \ref{plot.Dispersion_sf} and \ref{plot.Dispersion_is}, the agreement is very good, both in the strong field and in the Ising regimes, because the system is not too close to criticality. The nice agreement illustrates that ground state degeneracy is handled very well by the advocated method. Directly at the quantum critical point, see Fig.\ \ref{plot.Dispersion_crit}, the closing of the gap is difficult to capture numerically. The reason is the diverging correlation length $\xi$, see \eqref{eq.xi_TFIM}. The amount of entanglement that can be described by an MPS is bounded by the matrix dimension $D$. Thus, no finite-dimensional MPS can completely describe a state with diverging correlation length. The inset in Fig.\ \ref{plot.Dispersion_crit} depicts the gap $\Delta$ as a function of $\lambda$ in the vicinity of $\lambda = 1$. Note that the gap values are as low as $10^{-6}\Gamma$ to $10^{-5}\Gamma$ in spite of the limited bond dimension. The occurrence of a rather sharp minimum indicates a possible phase transition. Note that this criterion is independent of a comparison to the exact result and allows one to estimate the corresponding critical parameter value as well. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig3} \caption{(Color online) Dispersion for $\lambda = 0.8$ in the strong-field regime. Comparison of the exact result \eqref{eq.Dispersion_TFIM} to results from iMPS calculations with various matrix dimensions $D$.} \label{plot.Dispersion_sf} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig4} \caption{(Color online) Dispersion for $\lambda = 1.2$ in the Ising regime.
Comparison of the exact result \eqref{eq.Dispersion_TFIM} to results from iMPS calculations with various matrix dimensions $D$.} \label{plot.Dispersion_is} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig5} \caption{(Color online) Dispersion at $\lambda = 1$, i.e., at the quantum critical point. Comparison of the exact result \eqref{eq.Dispersion_TFIM} to results from iMPS calculations with various matrix dimensions $D$. The inset focuses on the gap as function of $\lambda$ around the critical point.} \label{plot.Dispersion_crit} \end{figure} \section{Effective model} \label{sec:eff_model} As mentioned in Sect. \ref{sec:model}, in the strong-field limit $\lambda \to 0$ the elementary excitations are flips of single spins from the polarized ground state. If the Ising interaction is switched on by a finite $\lambda$, such a flipped spin acquires a virtual dressing which corresponds to a polarization cloud around it. This means that the elementary excitations are no longer strictly local, but ``smeared out'' over a certain region on the chain. This concept has been the basis of the CUTs in real space representation \cite{knett00a,knett03a,yang11a,krull12}. The spatial extension of the polarization cloud, i.e., of the smeared out region, is governed by the correlation length. This can also be seen from Eq.\ (4) in Ref.\ \cite{haege13a}. In order to make progress in deriving effective models \eqref{eq.H_eff} in terms of the elementary excitations we want to establish the key ingredients of second quantization, namely the creation and annihilation operator of an elementary excitation. Thus, it is our objective in this section to explicitly derive a local creation operator acting on the ground state. 
If we know the ground state (or a very good numerical representation thereof) and we are able to characterize the local excited states we can follow the route advocated previously \cite{knett03a} to determine the effective model on the bilinear level. More work will be required for the determination of decay terms \eqref{eq.H_decay} and two-particle interactions \eqref{eq.H_int}. To determine them, states with two quasi-particles at sites $i$ and $j$ must be properly defined. This requires that they are normalized and two such states $(i,j)$ and $(i',j')$ are orthogonal if $i\neq i'$ or $j\neq j'$. Moreover, such two-particle states must fit to the one-particle states in the sense that they decompose into the one-particle states for $|i-j|\to\infty$. These issues set the roadmap of research, but they are beyond the scope of the present article. In order to construct a local creation operator, we consider the eigenvector $\vec v_q$ of Eq.\ \eqref{eq.dispersion_EVP} that belongs to the lowest eigenvalue which defines the dispersion $\omega_q$. Its components $v^\alpha_q$ describe how the states $\ket{\psi^\alpha_q}$ are linearly combined to form an elementary excited state that satisfies \begin{subequations} \begin{align} &\bra{\phi_q} (\mathcal{H} - E_0) \ket{\phi_q} = \omega_q \\ &\qquad \ket{\phi_q} = \sum_\alpha v^\alpha_q \ket{\psi^\alpha_q} = a^\dagger_q \ket{\psi_0} \label{eq.def_AdQ} \ . \end{align} \end{subequations} This means that $\ket{\phi_q}$ can be interpreted as a state, in which one quasi-particle of momentum $q$ has been created. Taking the inverse Fourier transform of Eq.\ \eqref{eq.def_AdQ} one obtains an expression for the action of a local creation operator $a^\dagger_i$ on the ground state \begin{subequations} \label{eq.def_Ad_rs} \begin{align} &a_i^\dagger \ket{\psi_0} = \sum_{j,\alpha} v_j^\alpha \ket{\psi^\alpha_{i+j}} \label{eq.def_Ad_i} \\ &\quad \text{with} \quad v_j^\alpha := \frac{1}{L} \sum_q v_q^\alpha e^{iqj} \label{eq.def_vj} \ . 
\end{align} \end{subequations} This equation is the key element in advancing towards effective models via MPS representations. In the thermodynamic limit $q$ is a continuous variable and the sum in Eq.\ \eqref{eq.def_vj} becomes the integral over the Brillouin zone \begin{align} v_j^\alpha := \frac{1}{2\pi} \int_{-\pi}^{\pi} v_q^\alpha \, e^{iqj} \, \d q \label{eq.def_Ad_i_int} \ . \end{align} Although numerical integration always comes down to summation at some point, the continuous representation is advantageous for adaptive algorithms because Eq.\ \eqref{eq.dispersion_EVP} can be evaluated at arbitrary values of $q$. See Appendix \ref{app:creation_op} for comments and technical details on handling $\vec{v}_q$. Taking the sum over $\alpha$ first in Eq.\ \eqref{eq.def_Ad_i} simplifies the numerical computation. Hence, we define a single matrix set \begin{align} C^s_j := \sum_\alpha v_j^\alpha B^s_\alpha \label{eq.def_Csj} \end{align} to be inserted at distance $j$ from the center site $i$ of the particle created by $a_i^\dagger$. In this description the particle is represented by a number of matrices $\{C^s_j\}$ as follows \begin{align} a^\dagger_i \ket{\psi_0} = \sum_{j=-j_\text{max}}^{j_\text{max}} \ket{\psi(C^s_j)_{i+j}} \label{eq.def_Ad_i_compact} \end{align} where $\ket{\psi(C^s_j)_{i+j}}$ is a state analogous to $\ket{\psi^\alpha_i}$ that has $A^s$ matrices everywhere and $C^s_j$ inserted at site $(i+j)$. To quantify the degree of localization of the excitations, we study the squared norm of the vectors $\vec v_j$ \begin{align} V_j := \|\vec v_j\|^2 = \sum_\alpha |v_j^\alpha|^2 \ . \label{eq.def_Vj} \end{align} Figure \ref{plot.Vj} shows $V_j$ for various values of the bond dimension $D$ and compares their dependence on $j$ to the decay of the correlation function $G_j$. Clearly, the distributed (smeared out) contributions to the quasi-particle decay exponentially with the distance $j$ from the center site. 
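The Brillouin-zone integral \eqref{eq.def_Ad_i_int} and the localization measure \eqref{eq.def_Vj} are straightforward to discretize. The following sketch replaces the actual eigenvector components $v^\alpha_q$ of Eq.\ \eqref{eq.dispersion_EVP} by a smooth toy profile (an illustrative assumption) and checks the exponential decay of $V_j$:

```python
import numpy as np

def v_q_toy(q, xi=2.0):
    # toy stand-in for an eigenvector component v^alpha_q of the iMPS
    # eigenvalue problem: a lattice Lorentzian whose Fourier coefficients
    # decay exponentially on a scale set by xi
    return 1.0 / (1.0 + 2.0 * xi**2 * (1.0 - np.cos(q)))

def v_j(j, n_q=4096):
    # v_j = (2 pi)^{-1} \int_{-pi}^{pi} v_q e^{i q j} dq, discretized on a
    # uniform grid over the Brillouin zone
    q = np.linspace(-np.pi, np.pi, n_q, endpoint=False)
    return np.mean(v_q_toy(q) * np.exp(1j * q * j))

# V_j of Eq. (eq.def_Vj) for a single alpha component
V = np.array([abs(v_j(j))**2 for j in range(8)])
assert np.all(np.diff(V) < 0)   # exponential localization: V_j decays with j
```

For this toy profile $V_j$ indeed decays exponentially with $j$, mirroring the localization of the actual excitations discussed above.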
This agrees with the findings in Ref.\ \cite{haege13a}. With increasing matrix dimension the decay becomes slower and approaches the decay of the correlation function. This is consistent with the finding in Fig.\ \ref{plot.ZetaXi} illustrating that larger $D$ allows one to capture longer correlations. For the numerics, it is very advantageous that the decay of $V_j$ is always faster than the decay of the correlation function set by the correlation length $\xi$, because this fact implies that the representation can be truncated after a fairly small number of sites $|j|\le j_\text{max}$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6} \caption{(Color online) The quantity $V_j$ defined in Eq.\ \eqref{eq.def_Vj} for $\lambda = 0.9$ and various matrix dimensions $D$. The solid red line shows the function $0.2 \cdot \exp(-j/\xi)$ where $\xi$ is the exact correlation length from \eqref{eq.xi_TFIM}. $V_j$ displays exponential decay on a length scale that is always smaller than the correlation length $\xi$.} \label{plot.Vj} \end{figure} \section{Spectral weight} \label{sec:spectral_weight} To illustrate the validity of the creation operator defined in \eqref{eq.def_AdQ} we compute the spectral weight $S^{xx}_\mathrm{1p}$. For $\alpha = \beta = x$ Eq.\ \eqref{eq.Sxxq} becomes \begin{align} S^{xx}_{\mathrm{1p}}(q) &= \bra{\psi_0} S_q^{x\dagger} a_q^\dagger \ket{\psi_0} \bra{\psi_0} a_q S_q^{x} \ket{\psi_0} \nonumber \\ &=: |m_q|^2 \label{eq.Sxx_mq} \end{align} where $m_q$ is defined by $m_q = \bra{\psi_0} a_q S_q^{x} \ket{\psi_0}$. 
Inserting the definition of $a_i^\dagger$ \eqref{eq.def_Ad_rs} and the Fourier transform of $\Sx_q$ we obtain \begin{subequations} \begin{align} m_q &= \frac{1}{L} \sum_{i,j,\alpha} v^{\alpha\ast}_q e^{iqr_j} e^{-iq r_i} \bra{\psi^\alpha_j} S^x_i \ket{\psi_0} \\ &= \sum_{i,\alpha} v^{\alpha\ast}_q e^{iq r_i} \bra{\psi^\alpha_i} \Sx_0 \ket{\psi_0} \end{align} \end{subequations} where the matrix elements $\bra{\psi^\alpha_i} \Sx_0 \ket{\psi_0}$ can be computed in analogy to the single-site operator in Eq.\ \eqref{eq.gse_per_site}. Figures \ref{plot.Sxx1} and \ref{plot.Sxx2} depict the spectral weight in comparison to the analytical result Eq.\ \eqref{eq.Sxx_Hamer} for various values of $\lambda$ and $D$. For smaller values of $\lambda$, see Fig.\ \ref{plot.Sxx1}, well away from the critical point $\lambda=1$, the agreement is very good for all values of $q$. Still, larger values of $D$ imply an even better agreement. For a value of $\lambda$ closer to the critical point, the agreement is still good, see Fig.\ \ref{plot.Sxx2}, in view of the small values of $D$. But particularly close to $q=0$, where the correlations almost diverge, larger values of $D$ are indispensable to capture the correct correlations. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig7} \caption{(Color online) Upper panel: The spectral weight $S^{xx}_\mathrm{1p}(q)$ for $\lambda = 0.5$ and various matrix dimensions $D$. Lower panel: The deviation of the iMPS results from Hamer's formula Eq.\ \eqref{eq.Sxx_Hamer}. The plot interval $[0,0.3]$ is chosen to emphasize the deviation for small values of $q$ where $S^{xx}_\mathrm{1p}(q)$ has its maximum.} \label{plot.Sxx1} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig8} \caption{(Color online) Upper panel: The spectral weight $S^{xx}_\mathrm{1p}(q)$ for $\lambda = 0.99$ and various matrix dimensions $D$. Lower panel: The deviation of the iMPS results from Hamer's formula Eq.\ \eqref{eq.Sxx_Hamer}. 
The plot interval $[0,0.3]$ is chosen to emphasize the deviation for small values of $q$ where $S^{xx}_\mathrm{1p}(q)$ has its maximum.} \label{plot.Sxx2} \end{figure} \section{Conclusions} \label{sec:outlook} The objective of the present paper has been to sketch the roadmap towards a derivation of effective one-dimensional models by a numerical variational approach. In particular, we have explicitly shown how the first step works, i.e., the systematic construction of a local creation operator acting on the ground state. Thereby, bilinear terms in the Hamilton operator such as the dispersion can be determined, cf.\ the generic Hamiltonian Eq.\ \eqref{eq.H_eff}. We have shown how the matrix product state (MPS) formalism can be used to derive effective models in terms of quasi-particles from microscopic local spin Hamiltonians. Based on transfer matrices, MPS work efficiently in the thermodynamic limit (iMPS). The starting point is the accurate determination of an MPS representation of the ground state. This defines the vacuum of excitations, similar to the reference state in continuous unitary transformations \cite{knett03a}. A side product of our ground-state search algorithm is a set of eigenmatrices with higher local energies. We have shown how this side product can be exploited to construct the elementary excited states. Such constructions work very well both for unique and for degenerate ground states. In the latter case the elementary excitations generically are domain walls between the degenerate ground states. We derived an expression for the action of a local quasi-particle creation operator on the ground state. These quasi-particles are no longer completely local, but are found to be ``smeared out'', yet localized around one lattice site, similar to Wannier states for band electrons. 
The approach is illustrated and tested for the excitations of the transverse-field Ising model in one dimension, in the disordered strong-field phase as well as in the ordered Ising phase. In the strong-field phase the elementary excitations are spin flips, while they are domain walls in the Ising phase. It turns out that the quasi-particles are exponentially localized on a length scale always smaller than the correlation length $\xi$. In this way, the numerical representation of the elementary excitations is well controlled. Using this definition, the one-particle contribution to the spectral weight in the $xx$-channel has been computed. The very good agreement with Hamer's formula \cite{hamer06b} confirms his conjecture and strongly corroborates the validity of our approach. What are the next steps on the roadmap to effective models from variational approaches? In order to determine the parts of the Hamiltonian \eqref{eq.H_eff} which describe the decay of quasi-particles \eqref{eq.H_decay} or the interaction of a pair of them \eqref{eq.H_int}, we need to extend the definition of single-particle states to states with two particles. The key issue is a proper orthogonalization of states with excitations at different sites. Furthermore, it must be ensured that the two-particle state of two very distant quasi-particles equals the state obtained from the successive application of the creation operator defined from single-particle states. These issues are beyond the scope of the present article, but represent future research. The ultimate aim is to be able to write down effective models in second quantization in terms of the elementary excitations. An interesting step towards this aim has been accomplished very recently by the variational construction of scattering states of two elementary excitations \cite{Haegeman2014}. 
But so far the explicit construction of the effective model has not been realized. A longer-term vision consists in the generalization of the presented approach to higher dimensions by passing from matrix product states to projected entangled pair states. The conceptual issues and their solutions, for instance the construction of Wannier-type local excitations, are the same in higher dimensions. But the numerical handling is less efficient than in one dimension, where the thermodynamic limit is easily built in via transfer matrices. In summary, we are convinced that the construction of effective models via numerical variational approaches constitutes an interesting and promising route to capture the physics of strongly correlated systems. \begin{acknowledgement} We gratefully acknowledge the financial support of the Helmholtz virtual institute ``New States of Matter and Their Excitations''. We also thank B.\ Fauseweh and N.A.\ Drescher for many helpful discussions. \end{acknowledgement}
\section{Introduction} One of the highest mortality rates in developed countries is due to cancer. For this reason, it is one of the most studied diseases; however, it is still far from being well understood. Aside from the intensive medical and biological research, an increasing number of theoretical models is being introduced in order to describe some of the fundamental properties of tumors. One of the most common mathematical approaches is the use of partial differential equations, which has led to some interesting results in the field~\cite{castro}. A different methodology was used by Br\'{u} {\it et al.}~\cite{bru1,bru2}, who employed tools of fractal geometry, such as scaling analysis, to characterize the rough interface of growing solid tumors. They found strong empirical evidence that a broad class of tumors belongs to the same universality class: the molecular beam epitaxy (MBE) universality class. MBE is a well-known process in physics, in which a crystal surface grows due to the external input of atoms coming from a beam directed at the growing surface~\cite{barabasi}. This finding allowed them to postulate that tumor dynamics has some features in common with MBE, namely: (a) a linear growth rate, (b) the constraint of growth activity to the outer border of the tumor/crystal, and (c) surface diffusion at the growing edge. All of these features were again tested against experiment with a positive result. These experimental observations lead to a new and very interesting picture of tumor growth. The usually assumed exponential growth is replaced by a linear growth, and the dynamics is constrained to the peripheral region, because it is assumed that what happens in the core of the tumor has little effect on growth. Surface diffusion has been identified as a mechanism favoring tumor growth. 
The host tissue exerts pressure on solid tumors which opposes their growth, but surface diffusion drives the tumor cells to the concavities of the interface, keeping the number of neighboring cells belonging to the host tissue to a minimum. Since these cells are responsible for the pressure exerted on the tumor, surface diffusion minimizes the pressure on the interface and favors the propagation of the tumor~\cite{bru1,bru2}. This new theoretical description of a solid tumor was used to develop a strategy to stop the growth~\cite{bru3}. Suggested by the above findings, Br\'{u} {\it et al.} proposed that an enhancement of the pressure on the tumor surface would be able to decelerate, and eventually to stop, tumor growth. They performed a new experiment in which they observed the response of the tumor to an enhancement of the immune response. An increase of the number of neutrophils shifted the dynamics of the interface from the MBE universality class to the much slower quenched Edwards-Wilkinson (QEW) universality class, and to the pinning of the tumor interface~\cite{bru3}. This technique was later applied to a patient with a terminal cancer, who subsequently improved until finally recovering good health, possibly due to the applied treatment~\cite{bru4}. This success is of fundamental importance for the development of efficient therapies, and therefore it would be highly desirable to achieve a good theoretical understanding of the models used. MBE dynamics is described by the Mullins-Herring equation~\cite{mullins,herring} \begin{equation} \label{mh} \partial_t h=-K\nabla^4 h + F + \eta({\bf x},t), \end{equation} where $h$ is the interface height, $K$ is the surface diffusion coefficient, and $\eta({\bf x},t)$ is a Gaussian noise with zero mean and correlations given by \begin{equation} <\eta({\bf x},t)\eta({\bf x'},t')>=D\delta({\bf x}-{\bf x'})\delta(t-t'). 
\end{equation} This equation was developed to describe crystal growth, so it assumes that the substrate is planar and does not change its size in time. This no longer applies to tumors, since they are approximately spherical and grow linearly in time; both features are possibly the main discrepancies between standard MBE dynamics and the dynamics of tumors~\cite{bru2}. The same criticism applies to the equation describing QEW dynamics~\cite{barabasi} \begin{equation} \label{qew} \partial_t h=K\nabla^2 h + F + \eta({\bf x},h), \end{equation} where $\eta({\bf x},h)$ is a quenched disorder with zero mean and correlations given by \begin{equation} \label{qew2} <\eta({\bf x},h)\eta({\bf x'},h')>=D\delta({\bf x}-{\bf x'})\Delta(h-h'), \end{equation} and the function $\Delta$ characterizes the nature of the quenched disorder. In a previous article, we developed a stochastic partial differential equation describing the same dynamics as Eq.(\ref{mh}) but with the correct geometrical symmetries~\cite{escudero}. The analysis of this equation revealed that it was able to reproduce some of the fundamental mechanisms of tumor growth found in experiments~\cite{bru1,bru2}. In the present work we extend the geometrical approach to tumor growth by developing and analyzing spherically symmetric equations describing both MBE and QEW dynamics. These equations model the behaviour found experimentally in the $(1+1)-$dimensional case~\cite{bru1,bru2,bru3}, and allow us to predict what would happen in the more realistic and unexplored case of $(2+1)-$dimensional geometry. We also derive the equations using a more systematic technique, which allows us to conjecture which geometrical principles drive tumor growth. 
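Before turning to the geometrical construction, it is useful to note that the planar dynamics of Eq.(\ref{mh}) can be simulated directly. The following minimal sketch (all parameter values and the unit lattice spacing are illustrative assumptions) integrates the equation pseudospectrally with an Euler-Maruyama step and checks the linear growth of the mean height:

```python
import numpy as np

# Minimal Euler-Maruyama, pseudospectral integration of the Mullins-Herring
# equation on a 1D periodic lattice (dx = 1); parameter values are illustrative.
rng = np.random.default_rng(0)
L, K, F, D, dt, steps = 256, 1.0, 1.0, 0.1, 1e-4, 5000
k = 2.0 * np.pi * np.fft.fftfreq(L, d=1.0)
h = np.zeros(L)
for _ in range(steps):
    relax = np.real(np.fft.ifft(-K * k**4 * np.fft.fft(h)))  # -K nabla^4 h
    noise = rng.normal(0.0, np.sqrt(D / dt), L)              # <eta eta'> = D delta delta
    h += dt * (relax + F + noise)

assert abs(h.mean() - F * steps * dt) < 0.1   # mean height advances linearly
```

The $k=0$ mode is untouched by the $-K\nabla^4$ relaxation, which is why the mean height simply advances at rate $F$ while the remaining modes roughen the interface.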
\section{Expansion from a potential} \label{potential} In general, a planar stochastic growth equation may be written in the form \begin{equation} \frac{\partial h(x,t)}{\partial t}=G[h(x,t)]+\eta(x,t), \end{equation} where $G$ is the deterministic growth term and $\eta$ is the noise. If we want the mean field equation to describe a conservative dynamics (as for instance a diffusion), then the deterministic part must have the form of a continuity equation \begin{equation} \frac{\partial h(x,t)}{\partial t}=-\nabla \cdot j(x,t), \end{equation} where the macroscopic current $j(x,t)$ describes the flux of cells on the surface. The current $j(x,t)$ arises in general from differences in the local pressure $p(x,t)$ as argued before, following the law \begin{equation} j(x,t)=\nabla \Pi(x,t), \end{equation} where $\Pi$ is defined as a pressure potential. We can perform the expansion of $\Pi$ in terms of the pressure \begin{equation} \Pi(x,t)=-A_1 p(x,t)+A_2 \nabla^2 p(x,t)+\cdots. \end{equation} Since the difference in pressure comes mainly from the differences in height of the different parts of the interface, we may assume that $p \propto h$. Finally, we can write the deterministic part of the evolution equation as \begin{equation} \frac{\partial h(x,t)}{\partial t}= A_1 \nabla^2 h - A_2 \nabla^4 h + \dots, \end{equation} where we can identify the terms present in the drifts of both Eq.(\ref{mh}) and Eq.(\ref{qew}). The equation of growth of a general Riemannian surface reads \begin{equation} \partial_t \vec{r}({\bf s},t)=\hat{n}({\bf s},t)\Gamma[\vec{r}({\bf s},t)]+\vec{\Phi}({\bf s},t), \end{equation} where the $d+1$ dimensional surface vector $\vec{r}({\bf s},t)=\{r_\alpha({\bf s},t)\}_{\alpha=1}^{d+1}$ runs over the surface as ${\bf s}=\{s^i\}_{i=1}^d$ varies in a parameter space (in the following, latin indices vary from 1 to $d$ and greek indices from 1 to $d+1$). 
In this equation $\hat{n}$ stands for the unit vector normal to the surface at $\vec{r}$, $\Gamma$ contains a deterministic growth mechanism that causes growth along the normal $\hat{n}$ to the surface, and $\vec{\Phi}$ is a random force acting on the surface. In our case the deterministic part should include a term modelling cell diffusion at the tumor border. When surface diffusion occurs to minimize the surface area, the corresponding term in the equation is~\cite{marsili}: \begin{equation} \label{surdiff} \Gamma_s=-K\Delta_{BL}H, \end{equation} where $\Delta_{BL}$ is the Beltrami-Laplace operator \begin{equation} \Delta_{BL}=\frac{1}{\sqrt{g}}\partial_i(\sqrt{g}g^{ij}\partial_j), \end{equation} $g_{ij}$ is the metric tensor and $g$ is its determinant, $\partial_i=\partial/\partial s^i$ is a covariant derivative, and $H=\hat{n} \cdot \Delta_{BL} \vec{r}$ is the mean curvature. Summation over repeated indices is always assumed throughout this work. Finally, the unit normal vector is given by $\hat{n}=g^{-1/2}\partial_1\vec{r}\times\cdots\times\partial_d\vec{r}$. The equation of growth can be derived straightforwardly from here~\cite{escudero}, but since our aim is to understand the geometrical principles underlying tumor growth, we write the more general growth equation for those cases in which the drift can be derived from a potential: \begin{equation} \partial_t \vec{r}(s,t)=-\frac{1}{\sqrt{g(s)}}\frac{\delta \mathcal{V}[\vec{r}(s,t)]}{\delta \vec{r}(s,t)}. \end{equation} In our case the potential $\mathcal{V}$ depends on the mean curvature $H$ of the interface. This dependence can be expressed in a power series expansion: \begin{equation} \label{expansion} \mathcal{V}=\int d^ds \sqrt{g}\sum_{i=0}^N K_i H^i=\sum_{i=0}^N \mathcal{V}_i. 
\end{equation} From here we can derive straightforwardly the stationary probability distribution functional $P[r(s,t)]$, which yields the probability of the interface configuration $r(s,t)$ in the limit $t \to \infty$ \begin{equation} P[r(s,t)]=\mathcal{N}\exp \left[ -\frac{\mathcal{V}[r(s,t)]}{D/2} \right], \end{equation} where $D$ is the noise strength and $\mathcal{N}$ is the normalization constant. In this work, we are more interested in the dynamical rather than in the stationary properties of the model, so we will focus on the stochastic partial differential equation which describes tumor growth. In the general equation, the contribution to the drift reads \begin{equation} \Gamma_i=-\frac{1}{\sqrt{g}}\hat{n} \cdot \frac{\delta \mathcal{V}_i}{\delta \vec{r}}= K_i \left(H^{i+1}-i\Delta_{BL}H^{i-1}-iH^{i-1}\sum_{j=1}^d \lambda_j^2 \right), \end{equation} where $\lambda_j$ are the eigenvalues of the matrix of the coefficients of the second fundamental form and express the principal curvatures of the surface~\cite{marsili}. \section{Stochastic equations for tumor growth} In this section we will build the models for describing the growth of a non-treated tumor. First of all, we will derive the zeroth, first, and second order terms in the expansion Eq.(\ref{expansion}) in one and two dimensions. 
The corresponding contributions to the drift in polar coordinates read~\cite{polar} \begin{subequations} \begin{eqnarray} \Gamma_0(d=1) &=& \frac{1}{r^2}\frac{\partial^2 r}{\partial \theta^2}, \\ \Gamma_1(d=1) &=& 0, \\ \Gamma_2(d=1) &=& -\frac{1}{r^4} \frac{\partial^4 r}{\partial \theta^4}, \\ \Gamma_0(d=2) &=& \frac{1}{r^2}\left(\frac{\partial^2 r}{\partial \theta^2} +\frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right), \\ \Gamma_1(d=2) &=& \frac{2}{r^3}\left(\frac{\partial^2 r}{\partial \theta^2}+ \frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right), \\ \Gamma_2(d=2) &=& -\frac{1}{r^4}\left(\frac{\partial^4 r}{\partial \theta^4}+ \frac{2}{\sin^2(\theta)}\frac{\partial^4 r}{\partial \theta^2 \partial \phi^2}+ \frac{1}{\sin^4(\theta)}\frac{\partial^4 r}{\partial \phi^4}\right), \end{eqnarray} \end{subequations} where we have linearized the different derivatives of $r$ about zero and kept only the most relevant terms in the renormalization group sense. Retaining only the linear terms is a valid approximation whenever sharp changes in the tumor interface are absent~\cite{escudero}. It is very important to realize that $\Gamma_1(d=1)$ vanishes identically, since $H=\lambda_1$ in $d=1$. This is a consequence of the Gauss-Bonnet theorem, which states that the integral of the Gaussian curvature $K$ over a closed surface is a constant. Since $H=K$ in $d=1$, the variation of $\mathcal{V}_1$ is zero. Another important fact is that $\Gamma_1(d > 1)$ is strictly nonlinear in the derivatives of $r$ in the case of a planar geometry, so it will never survive a linearization~\cite{marsili}. We will show the consequences of this in the next sections. In the case of non-treated tumors, only the $\Gamma_2$ term seems to appear in the dynamics. 
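These linearized drifts can be cross-checked symbolically. In $d=1$ the mean curvature of a polar curve $r(\theta)$ is $H=(r^2+2r_\theta^2-r r_{\theta\theta})/(r^2+r_\theta^2)^{3/2}$, and expanding around a circle $r=R+\epsilon\rho(\theta)$ should reproduce, at first order and up to the sign convention chosen for the normal, the $\partial_\theta^2$ structure of $\Gamma_0(d=1)$. A sketch with sympy:

```python
import sympy as sp

theta, R, eps = sp.symbols('theta R epsilon', positive=True)
rho = sp.Function('rho')(theta)
r = R + eps * rho
rp, rpp = sp.diff(r, theta), sp.diff(r, theta, 2)

# mean curvature of the polar curve r(theta)
kappa = (r**2 + 2*rp**2 - r*rpp) / (r**2 + rp**2)**sp.Rational(3, 2)

# expansion around the circle: kappa = 1/R - eps*(rho + rho'')/R^2 + O(eps^2)
zeroth = kappa.subs(eps, 0)
first = sp.simplify(sp.diff(kappa, eps).subs(eps, 0))
assert sp.simplify(zeroth - 1/R) == 0
assert sp.simplify(first + (rho + sp.diff(rho, theta, 2)) / R**2) == 0
```

The $\rho''/R^2$ piece is the term retained in $\Gamma_0(d=1)$; the constant $1/R$ only renormalizes the growth rate $F$, and the $\rho/R^2$ piece is less relevant in the renormalization group sense.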
In this case, we can derive the following equations describing the tumor interface dynamics in $(1+1)-$dimensions \begin{equation} \label{tumor2d} \frac{\partial r}{\partial t}= -\frac{K}{r^4} \frac{\partial^4 r}{\partial \theta^4} + F + \frac{1}{\sqrt{r}}\eta(\theta,t), \end{equation} where the noise $\eta(\theta,t)$ is Gaussian, with zero mean, and correlation given by \begin{equation} <\eta(\theta,t)\eta(\theta',t')>=D\delta(\theta-\theta')\delta(t-t'), \end{equation} and $(2+1)-$dimensions \begin{equation} \label{tumor3d} \frac{\partial r}{\partial t}= -\frac{K}{r^4}\left(\frac{\partial^4 r}{\partial \theta^4}+ \frac{2}{\sin^2(\theta)}\frac{\partial^4 r}{\partial \theta^2 \partial \phi^2}+ \frac{1}{\sin^4(\theta)}\frac{\partial^4 r}{\partial \phi^4}\right)+ F + \frac{1}{r\sqrt{|\sin(\theta)|}}\eta(\theta,\phi,t), \end{equation} where the noise $\eta(\theta,\phi,t)$ is Gaussian, with zero mean, and correlation given by \begin{equation} <\eta(\theta,\phi,t)\eta(\theta',\phi',t')>=D\delta(\theta-\theta')\delta(\phi-\phi')\delta(t-t'), \end{equation} and the noise \emph{must} be interpreted in the It\^{o} sense. A more detailed derivation of these equations can be found in Ref. \cite{escudero}. To analyze these equations we will perform a small noise expansion~\cite{sancho}, where the solution is decomposed as follows: \begin{equation} r(\theta,t)=R(t)+\sqrt{D}\rho(\theta,t), \end{equation} in the $(1+1)-$dimensional case, where $R$ is the deterministic solution, given by $R(t)=Ft+R_0$. The stochastic perturbation obeys the equation \begin{equation} \frac{\partial \rho}{\partial t}=-\frac{K}{(R_0+Ft)^4}\frac{\partial^4 \rho}{\partial \theta^4}+\frac{1}{\sqrt{R_0+Ft}}\eta(\theta,t), \end{equation} where the noise $\eta(\theta,t)$ is Gaussian, with zero mean, and correlation given by \begin{equation} <\eta(\theta,t)\eta(\theta',t')>=\delta(\theta-\theta')\delta(t-t'). 
\end{equation} The discrete Fourier transformed version of this equation reads \begin{equation} \frac{d\rho_n}{dt}=-\frac{Kn^4}{(R_0+Ft)^4}\rho_n+\frac{1}{\sqrt{R_0+Ft}}\eta_n(t), \end{equation} where the noise $\eta_n(t)$ is Gaussian, with zero mean, and correlation given by \begin{equation} <\eta_n(t)\eta_m(t')>=(2\pi)^{-1}\delta_{n,-m}\delta(t-t'), \end{equation} where $\delta_{n,-m}$ denotes the Kronecker symbol. The mean value of the stochastic process obeys the equation \begin{equation} \frac{d<\rho_n>}{dt}=-\frac{Kn^4}{(R_0+Ft)^4}<\rho_n>, \end{equation} that can be solved to yield \begin{equation} <\rho_n(t)>=\exp\left(\frac{Kn^4}{3F}\left[\frac{1}{(R_0+Ft)^3}-\frac{1}{R_0^3}\right]\right)<\rho_n(0)>. \end{equation} We can also calculate the two-point correlation function using standard techniques~\cite{doering} \begin{eqnarray} \nonumber d<\rho_n(t)\rho_m(t)>=<\rho_n(t+dt)\rho_m(t+dt)>-<\rho_n(t)\rho_m(t)>= \\ \nonumber <d\rho_n(t)\rho_m(t)>+<\rho_n(t)d\rho_m(t)>+ <d\rho_n(t)d\rho_m(t)>= \\ -\frac{K}{(R_0+Ft)^4}(n^4+m^4)<\rho_n(t)\rho_m(t)>dt+\frac{1}{R_0+Ft}\delta_{n,-m}(2\pi)^{-1}dt, \end{eqnarray} where we have used the facts that $\eta_n(t)=(2\pi)^{-1/2}dW_n(t)/dt$ and $dW_n(t)^2=dt$, and where $dW_n(t)$ denotes the increment of a Wiener process. And thus, the two-point correlation function $C_{n,m}(t)=<\rho_n(t)\rho_m(t)>$ obeys the equation \begin{equation} \frac{dC_{n,m}(t)}{dt}=-\frac{K}{(R_0+Ft)^4}(n^4+m^4)C_{n,m}(t)+\frac{(2\pi)^{-1}}{R_0+Ft}\delta_{n,-m}. \end{equation} The exact solution of this equation is \begin{eqnarray} \nonumber C_{n,m}(t)=\exp \left[-\frac{K(n^4+m^4)}{3F} \left(\frac{1}{R_0^3}-\frac{1}{(R_0+Ft)^3}\right) \right]\times \\ \nonumber \left[C_{n,m}(0)+\frac{\delta_{n,-m}}{6\pi F}\exp \left[\frac{K(n^4+m^4)}{3FR_0^3}\right]\times \right. \\ \left. 
\left(\mathrm{Ei}\left[-\frac{K(n^4+m^4)}{3FR_0^3}\right]-\mathrm{Ei}\left[-\frac{K(n^4+m^4)}{3F(R_0+Ft)^3}\right]\right) \right], \end{eqnarray} where $\mathrm{Ei}$ denotes the exponential integral. In the $(2+1)-$dimensional case we can perform the expansion \begin{equation} r(\theta,\phi,t)=R(t)+\sqrt{D}\rho(\theta,\phi,t), \end{equation} where $R(t)=R_0+Ft$. The stochastic perturbation obeys the equation \begin{equation} \frac{\partial \rho}{\partial t}= -\frac{K}{(R_0+Ft)^4}\left(\frac{\partial^4 \rho}{\partial \theta^4}+ \frac{2}{\sin^2(\theta)}\frac{\partial^4 \rho}{\partial \theta^2 \partial \phi^2}+ \frac{1}{\sin^4(\theta)}\frac{\partial^4 \rho}{\partial \phi^4}\right) + \frac{1}{(R_0+Ft)\sqrt{|\sin(\theta)|}}\eta(\theta,\phi,t), \end{equation} which implies \begin{equation} \frac{d\rho_{n,m}}{dt}=-\frac{K}{(R_0+Ft)^4}\left( n^4+\frac{8}{3}n^2m^2+\frac{8}{3}m^4 \right)\rho_{n,m}+\frac{A}{R_0+Ft}\eta_{n,m}(t), \end{equation} where $A=160F(\pi/4|2)/(63\pi)$, and $F(\cdot|\cdot)$ denotes the incomplete elliptic integral of the first kind~\cite{elliptic} (not to be confused with the growth rate $F$). The noise is again Gaussian, with zero mean, and correlation given by \begin{equation} <\eta_{n,m}(t)\eta_{p,q}(t')>=(2\pi)^{-2}\delta_{n,-p}\delta_{m,-q}\delta(t-t'). \end{equation} We can derive the equation for the first moment \begin{equation} \frac{d<\rho_{n,m}>}{dt}=-\frac{K}{(R_0+Ft)^4}\left( n^4+\frac{8}{3}n^2m^2+\frac{8}{3}m^4 \right)<\rho_{n,m}>, \end{equation} and solve it to obtain \begin{equation} <\rho_{n,m}(t)>=\exp \left(-\frac{K(8m^4+8m^2n^2+3n^4)}{9F}\left[\frac{1}{R_0^3}-\frac{1}{(R_0+Ft)^3}\right] \right)<\rho_{n,m}(0)>. 
\end{equation} The equation for the two-point correlation function can be straightforwardly derived and reads \begin{eqnarray} \nonumber \frac{dC_{n,m,p,q}(t)}{dt}=-\frac{K}{(R_0+Ft)^4}\left(n^4+\frac{8}{3}n^2m^2+\frac{8}{3}m^4+p^4+\frac{8}{3}p^2q^2+\frac{8}{3}q^4\right)C_{n,m,p,q}(t) \\ +\frac{B}{(R_0+Ft)^2}\delta_{n,-p}\delta_{m,-q}, \end{eqnarray} where $B=6400F(\pi/4|2)^2/(3969\pi^4)\approx 0.03$, and $C_{n,m,p,q}(t)=<\rho_{n,m}(t)\rho_{p,q}(t)>$. The solution of this equation is \begin{eqnarray} \nonumber C_{n,m,p,q}(t)=\exp \left[-\frac{K}{9F}(8m^4+8m^2n^2+3n^4+3p^4+8p^2q^2+8q^4)\left(\frac{1}{R_0^3}-\frac{1}{(R_0+Ft)^3} \right)\right] \\ \nonumber \times \left[ C_{n,m,p,q}(0)+B\delta_{n,-p} \delta_{m,-q}\exp\left(\frac{K}{9FR_0^3}(8m^4+8m^2n^2+3n^4+3p^4+8p^2q^2+8q^4)\right)\right.\times \\ \left. \int_0^t \exp\left[-\frac{K(8m^4+8m^2n^2+3n^4+3p^4+8p^2q^2+8q^4)}{9F(F\tau+R_0)^3}\right](F\tau+R_0)^{-2}d\tau \right]. \end{eqnarray} One can see that our exact solutions reveal some interesting characteristics of the growth. The mean value of the small perturbation decreases in time, proving the stability of the mean-field radially symmetric solution, both in one and two dimensions. The correlation functions give much more interesting information. The correlations generated by the noise are much smaller in the two-dimensional case: not only is the numerical prefactor smaller, but in the one-dimensional case these correlations decrease in time as an exponential integral, while in the two-dimensional case the decay is much faster. This means that the effect of the noise is much stronger in one dimension, while the deterministic dynamics are more robust in two dimensions, something that might have serious consequences for tumor therapy, as we will show below. \section{Growth in a disordered medium} In the models discussed in the introduction, it seems that the disorder is induced by the enlarged amount of neutrophils introduced to stop tumor growth. 
But actually, the medium in which tumors grow is highly disordered. Tumors are extensively infiltrated by immune cells, which may constitute as much as one third of their volume. Both the tumor phenotype and the tumor environment are very heterogeneous. The former is the result of accumulating random mutations, variable environmental selection forces, and perhaps restriction of proliferative capacity in non-stem-cell components of the tumor. In addition, the tumor environment is extremely heterogeneous, primarily due to disordered angiogenesis and blood flow. These facts suggest that one possible explanation for the noise term appearing in the equations of the last section comes from the underlying disorder. In fact, if we consider Eq.(\ref{qew}) and the corresponding correlations of the quenched disorder Eq.(\ref{qew2}), we see that for large $F$ the function $h$ will behave as $h \sim Ft$, and assuming that the function $\Delta$ models short-range correlations (as found experimentally~\cite{bru3}) implies $\Delta(h-h') \propto \delta(t-t')$. This means that, far from the pinning threshold, the role of the disorder is to induce thermal fluctuations in the dynamics like those found in Eq.(\ref{mh}). Physically, this means that a rapidly moving interface samples so many values of the disorder in a small time interval that the overall effect is that of a time-dependent noise. This implies in turn that the intensity of the noise is proportional to the disorder, and thus an enhancement of the immune response corresponds to a stronger noise in Eqs.(\ref{tumor2d}, \ref{tumor3d}), as long as the tumor is still far enough from the pinning threshold. Not only the noise, but also the diffusion terms vary from Eq.(\ref{mh}) to Eq.(\ref{qew}). We can understand this effect if we suppose the terms in the expansion Eq.(\ref{expansion}) to be dependent on the disorder. 
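The crossover from quenched to effectively thermal noise can be illustrated with a toy computation: freeze a disorder $\eta(h)$ with short-range correlations on a scale $\ell$, let the interface move as $h=Ft$, and compare the temporal autocorrelation of the sampled signal for slow and fast fronts (the grid, $\ell$, and the Gaussian smoothing are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# frozen disorder eta(h) on a fine grid in h, correlated over a scale ell ~ 1
dh, n_h, ell = 0.02, 200_000, 1.0
m = int(5 * ell / dh)
kern = np.exp(-0.5 * (np.arange(-m, m + 1) * dh / ell)**2)
eta = np.convolve(rng.normal(size=n_h), kern / np.linalg.norm(kern), mode='same')
h_grid = dh * np.arange(n_h)

def autocorr(F, tau, dt=0.01, n_t=60_000):
    # a front moving at speed F samples the signal zeta(t) = eta(F t)
    z = np.interp(F * dt * np.arange(n_t), h_grid, eta)
    z -= z.mean()
    k = int(round(tau / dt))
    return np.dot(z[:-k], z[k:]) / np.dot(z, z)

c_slow = autocorr(F=0.5, tau=1.0)   # front crosses ~0.5 correlation lengths per tau
c_fast = autocorr(F=5.0, tau=1.0)   # front crosses ~5 correlation lengths per tau
assert c_fast < 0.2 < c_slow        # fast fronts see effectively delta-correlated noise
```

A fast front decorrelates within a time $\sim\ell/F$, which supports treating the quenched disorder, far from pinning, as an effective thermal noise whose strength is set by the disorder intensity.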
Assuming that $K_0 \sim D$ and that $K_2$ is independent of $D$, where $D$ is the intensity of the disorder, we find that the $K_0$ term is the most relevant at high disorder, while $K_2$ is the most important term otherwise. We can use these facts to build a more general model of tumor growth in the $(1+1)-$dimensional setting \begin{equation} \label{gtumor2d} \frac{\partial r}{\partial t}= \frac{K_0}{r^2}\frac{\partial^2 r}{\partial \theta^2}-\frac{K_2}{r^4} \frac{\partial^4 r}{\partial \theta^4} + F + \frac{1}{\sqrt{r}}\eta(\theta,r), \end{equation} where the correlations of the quenched disorder are given by \begin{equation} <\eta(\theta,r)\eta(\theta',r')>=\delta(\theta-\theta')\Delta(r-r'). \end{equation} In weak disorder, this equation reduces to Eq.(\ref{tumor2d}), because the disorder behaves as a thermal noise and the term proportional to $K_0$ loses its importance, as explained above. In the $(2+1)-$dimensional case the situation is different. In contrast to the former case, the term proportional to $K_1$ does not vanish identically, as shown in section~\ref{potential}.
Taking into account this fact, we can again build a more general equation for tumor growth, which reads \begin{eqnarray} \nonumber \frac{\partial r}{\partial t}=\frac{K_0}{r^2}\left(\frac{\partial^2 r}{\partial \theta^2} +\frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right)+\frac{2K_1}{r^3}\left(\frac{\partial^2 r}{\partial \theta^2}+\frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right) \\ -\frac{K_2}{r^4}\left(\frac{\partial^4 r}{\partial \theta^4}+\frac{2}{\sin^2(\theta)}\frac{\partial^4 r}{\partial \theta^2 \partial \phi^2}+ \frac{1}{\sin^4(\theta)}\frac{\partial^4 r}{\partial \phi^4}\right) + F + \frac{1}{r\sqrt{|\sin(\theta)|}}\eta(\theta,\phi,r), \label{gtumor3d} \end{eqnarray} where the correlations of the quenched disorder are given by \begin{equation} <\eta(\theta,\phi,r)\eta(\theta',\phi',r')>=\delta(\theta-\theta')\delta(\phi-\phi')\Delta(r-r'). \end{equation} Now, it is our goal to understand what differences appear in the evolution of the equations for tumor growth in different dimensions, mainly due to the presence of the $K_1$ term in the $(2+1)-$dimensional model. For this, we will study the deterministic counterparts of Eqs.(\ref{gtumor2d}, \ref{gtumor3d}). In the $(1+1)-$dimensional case we have \begin{equation} \frac{\partial r}{\partial t}= \frac{K_0}{r^2}\frac{\partial^2 r}{\partial \theta^2}-\frac{K_2}{r^4} \frac{\partial^4 r}{\partial \theta^4} + F, \end{equation} which admits the radially symmetric solution $r(t)=R_0+Ft$. If we perform the linear stability analysis of this solution by adding a small perturbation $\rho$, we obtain the equation \begin{equation} \frac{d\rho_n}{dt}=-\left[\frac{K_0n^2}{(R_0+Ft)^2}+\frac{K_2n^4}{(R_0+Ft)^4}\right]\rho_n, \end{equation} which can be solved to yield \begin{equation} \label{sol2d} \rho_n(t)=\exp\left(\frac{K_2n^4}{3F}\left[\frac{1}{(R_0+Ft)^3}-\frac{1}{R_0^3}\right]+\frac{K_0n^2}{F}\left[\frac{1}{R_0+Ft}-\frac{1}{R_0}\right]\right)\rho_n(0).
\end{equation} This reveals that all the Fourier modes $n \neq 0$ of the solution are linearly stable for $t>0$. The $n=0$ mode is marginal, but this is unimportant because it corresponds to perturbations homogeneous in $\theta$. On the other hand, the equation corresponding to the $(2+1)-$dimensional case reads \begin{eqnarray} \nonumber \frac{\partial r}{\partial t}=\frac{K_0}{r^2}\left(\frac{\partial^2 r}{\partial \theta^2} +\frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right)+\frac{2K_1}{r^3}\left(\frac{\partial^2 r}{\partial \theta^2}+\frac{1}{\sin^2(\theta)}\frac{\partial^2 r}{\partial \phi^2}\right) \\ -\frac{K_2}{r^4}\left(\frac{\partial^4 r}{\partial \theta^4}+\frac{2}{\sin^2(\theta)}\frac{\partial^4 r}{\partial \theta^2 \partial \phi^2}+ \frac{1}{\sin^4(\theta)}\frac{\partial^4 r}{\partial \phi^4}\right)+F, \end{eqnarray} which again admits the solution $r(t)=R_0+Ft$. If we perform the linear stability analysis we arrive at the equation \begin{eqnarray} \nonumber \frac{d\rho_{n,m}}{dt}=-\frac{K_0}{(Ft+R_0)^2}\left(n^2+ \frac{4}{3} m^2 \right)\rho_{n,m}- 2\frac{K_1}{(Ft+R_0)^3}\left(n^2+\frac{4}{3}m^2 \right) \rho_{n,m} \\ -\frac{K_2}{(Ft+R_0)^4}\left( n^4+\frac{8}{3}m^2n^2+\frac{8}{3}m^4 \right)\rho_{n,m}, \end{eqnarray} which can be solved to yield \begin{eqnarray} \nonumber \rho_{n,m}(t)=\exp \left(-\frac{K_2(8m^4+8m^2n^2+3n^4)}{9F}\left[\frac{1}{R_0^3}-\frac{1}{(R_0+Ft)^3}\right] \right. \\ -\frac{4m^2+3n^2}{3F} \left. \left[\frac{K_1}{R_0^2}-\frac{K_1}{(R_0+Ft)^2}+\frac{K_0}{R_0}-\frac{K_0}{R_0+Ft}\right]\right)\rho_{n,m}(0), \label{sol3d} \end{eqnarray} and we see again that all the Fourier modes, except $m=n=0$, are stable for $t>0$. The case $m=n=0$ is marginal, but it is unimportant since it corresponds to perturbations homogeneous in $\theta$ and $\phi$. The effect of the $K_1$ term can be immediately understood by inspecting Eqs.(\ref{sol2d}, \ref{sol3d}). It is a new mechanism for ``dissipating'' curvature.
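As a cross-check of the stability analysis, the closed-form mode solution Eq.(\ref{sol2d}) can be compared against a direct numerical integration of the mode equation. The following Python sketch does this with a fourth-order Runge-Kutta scheme; the parameter values are purely illustrative and are not taken from the experiments:

```python
import math

# Illustrative parameters (hypothetical values, not fitted to any experiment)
K0, K2, F, R0, n = 0.5, 0.2, 1.0, 1.0, 3
rho0 = 1.0

def rhs(t, rho):
    # d rho_n/dt = -[K0 n^2/(R0+Ft)^2 + K2 n^4/(R0+Ft)^4] rho_n
    r = R0 + F * t
    return -(K0 * n**2 / r**2 + K2 * n**4 / r**4) * rho

def rho_exact(t):
    # Closed-form solution of the mode equation
    r = R0 + F * t
    expo = (K2 * n**4 / (3 * F)) * (1 / r**3 - 1 / R0**3) \
         + (K0 * n**2 / F) * (1 / r - 1 / R0)
    return math.exp(expo) * rho0

# Fourth-order Runge-Kutta integration up to t = T
N, T = 2000, 2.0
h, rho = T / N, rho0
for i in range(N):
    t = i * h
    k1 = rhs(t, rho)
    k2 = rhs(t + h / 2, rho + h * k1 / 2)
    k3 = rhs(t + h / 2, rho + h * k2 / 2)
    k4 = rhs(t + h, rho + h * k3)
    rho += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

assert abs(rho / rho_exact(T) - 1) < 1e-6   # numerics agrees with the closed form
assert rho_exact(T) < rho0                  # the n != 0 mode decays: linear stability
```

The same check applies, mode by mode, to the $(2+1)-$dimensional solution Eq.(\ref{sol3d}).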
When a stochastic perturbation drives the solution away from the radially symmetric form, this perturbation behaves as described by these two equations. As can be clearly seen, the restoration of the symmetric form is faster in the $(2+1)-$dimensional case due to the presence of the $K_1$ term. \section{Conclusions} Motivated by the successful research on tumor growth by Br\'{u} {\it et al.}~\cite{bru1,bru2,bru3,bru4}, we have introduced theoretical models able to reproduce some of the features found in these experiments. We have also analyzed these models in order to better understand what is happening in the physical phenomenon. The models were derived as an expansion from a potential, which resulted in a power series expansion in the mean curvature of the surface. The effect of the deterministic part of the equations is thus to reduce the mean curvature and its different powers, and in this way to minimize the pressure at the tumor border and to favour tumor growth. Since all the experiments were performed with two-dimensional tumors, it is important to develop theoretical models that represent what happens in these experiments, and to extend them to the three-dimensional case. In this way we gain insight into the evolution of the more realistic three-dimensional system, and we can derive conclusions that can be used as a guide for future experiments. We have analyzed the effect of stochasticity in these equations, and we have shown that stochastic effects are much less relevant in the case of $(2+1)-$dimensional growth. Also, we have discussed the possible origin of the noise in the underlying disorder of the system, and how its intensity can be enhanced by increasing the number of immune cells in the tumor environment. This shows that the strategy of enhancing the immune response in order to stop tumor growth should be less effective in the case of three-dimensional tumors.
In addition to this, we have shown that a new term (the $K_1$ term) appears with higher dimensionality. The origin of this term is very interesting, because it vanishes identically in one dimension for any geometry, and in any dimension in the case of a planar geometry. Thus it appears in the dynamics of the growing tumor as a combined effect of dimensionality and geometry. This term contributes to minimizing the pressure and to favouring tumor growth. Since it is present only in the three-dimensional case, it is another mechanism that helps tumor propagation in this dimensionality, which leads us to conclude that it is much more difficult to stop a three-dimensional tumor than its two-dimensional counterpart. Throughout this work we have assumed that the interface has no overhangs in the radial direction, in such a way that a single-valued solution of the corresponding stochastic growth equation makes sense for representing the interface evolution. There are, however, situations for which we cannot assume this. An interesting problem for future work is to derive a continuum model allowing overhangs and an arbitrary topology of the growing interface. Similar models were developed in a different context~\cite{keblinski1,keblinski2}, and allow a more detailed description of a complex growth phenomenon. \section*{Acknowledgments} This work has been partially supported by the Ministerio de Educaci\'{o}n y Ciencia (Spain) through Projects No. EX2005-0976 and FIS2005-01729.
\section{Introduction} Weak-scale supersymmetry (SUSY) \cite{prnilles} has been a promising candidate for physics beyond the Standard Model, due to its natural solution to the hierarchy problem, gauge coupling unification, etc. It is well known that supergravity mediation of SUSY breaking in the hidden sector generates all the required soft SUSY breaking terms of order the weak scale \cite{prnilles}. However, it does not explain how soft masses approximately conserve flavor, as required by bounds on flavor-changing neutral currents. Recently, there has been a lot of attention to models with extra dimensions, which provide new ground for understanding SUSY breaking in a geometric way. Identifying points of the extra dimensions under discrete actions leads to orbifolds \cite{dixon}, which give rise to chiral fermions and the reduction of the higher-dimensional supersymmetry. Moreover, all or some of the SM particles can be regarded as living on the resulting orbifold fixed points or branes. In particular, one can impose twisted boundary conditions on bulk fields in the extra dimensions, {\it \`{a} la} Scherk-Schwarz (SS) \cite{SS}. Then, one can further break the SUSY remaining after orbifolding. In 5d ${\cal N}=1$ SUSY gauge theory compactified on $S^1/Z_2$, it was shown that, in the presence of the SS breaking of SUSY, there arises a finite one-loop mass correction for the zero mode of a bulk scalar or a brane scalar due to the sum over Kaluza-Klein (KK) modes of bulk fields \cite{one-loop}. It turns out that the SS breaking is equivalent to the case with a nonzero auxiliary field ($F$ term) of the radion multiplet in the off-shell 5d supergravity \cite{max,offshellsugra,pomarol}. A nonzero twist parameter or $F$ term can be determined dynamically after the radion stabilization \cite{chacko,dyndet}. On the other hand, one can consider the localized breaking of SUSY at the orbifold fixed points \cite{peskin,nilles2,gaugino,ah,pomarol,weiner,evenmass,nilles,glocal,ckl,nomura,choi-lee}.
For instance, when the remaining SUSY is broken at the hidden brane, only bulk fields such as the gaugino or gravitino get nonzero masses at tree level, and the broken SUSY is transmitted to the visible brane by bulk fields. Then, one finds that the mass spectrum of the bulk fields and their couplings at the visible brane are equivalent to those in the SS breaking without brane mass terms \cite{pomarol,weiner}. Therefore, the one-loop mass correction for a scalar on the visible brane is likewise finite \cite{ah,nomura,choi-lee}, owing to the geometric separation of the SUSY breaking from the visible brane. If the broken SUSY is mediated dominantly by the gaugino, the scalar mass becomes flavor-blind, which sheds light on the supersymmetric flavor problem \cite{gaugino}. In this paper, we consider SUSY breaking in 6d ${\cal N}=1$ supersymmetric gauge theory compactified on the orbifold $T^2/Z_2$ \cite{lnz}. Although we consider only a $U(1)$ gauge group in the bulk, the extension to non-abelian gauge groups is straightforward. The orbifold fixed points on $T^2/Z_2$ correspond to codimension-two branes. First we consider a generalization of the SS breaking of SUSY to the 6d case. Then, we show that the SS breaking is equivalent to the localized breaking with mass terms along lines rather than points. This localized SUSY breaking can be realized by positioning the hidden sector at the fixed boundaries under additional $Z_2$ actions. For the localized breaking with mass terms at a codimension-two brane, however, the classical solution of a bulk field is singular for an infinitely thin brane \cite{wise}. So, one must regularize the zero thickness of the brane. Then, the regulator dependence in the classical solution is absorbed into the renormalized brane mass, which has a {\it classical} logarithmic RG running \cite{wise}.
In that sense, the localized breaking at a codimension-two brane is sensitive to the ultraviolet physics of the regularization, even if only mildly so through the log divergence. Indeed, it has been shown \cite{kai} that, in the presence of mass terms localized at the fixed points, the one-loop mass for a brane scalar due to bulk gauge fields has a log divergence coming from the infinitely thin brane. On the other hand, localized mass terms at codimension-one branes are insensitive to the regularization of the brane thickness, as seen from the equivalence to the SS breaking. By using the off-shell action for 6d SUSY gauge theory with the bulk-brane coupling \cite{lnz}, we compute the one-loop mass correction to a brane scalar due to the SS breaking or the localized breaking along distant lines. We find that the resulting one-loop correction is finite. In the limit where the extra dimension without a SS twist is much smaller than the one with a SS twist, we reproduce the 5d result with a SS twist. On the other hand, a small extra dimension with a nontrivial SS twist does not decouple; rather, its effect dominates the one-loop mass correction. The paper is organized as follows. First we describe the SS twisted boundary conditions on the bulk gaugino and find the mass spectrum and mode expansion of the gaugino. Next, for the localized SUSY breaking with general $Z_2$-even mass terms along the lines, we obtain a result similar to that of the SS twist. Then, we compute the one-loop mass correction to a brane scalar due to the KK modes of the bulk gauge fields. Finally, we draw our conclusions. \section{Scherk-Schwarz breaking of SUSY on $T^2/Z_2$} Let us consider a 6d ${\cal N}=1$ supersymmetric $U(1)$ gauge theory compactified on the $T^2/Z_2$ orbifold \footnote{It is straightforward to include hypermultiplets coupled to the $U(1)$ \cite{lnz} and to extend to bulk non-abelian gauge groups.
In these cases, one needs to remember that the bulk matter content is severely restricted due to genuine 6d anomalies \cite{lnz}.}. The two extra dimensions on the torus are identified as $x_5\equiv x_5+2\pi R_5$ and $x_6\equiv x_6+2\pi R_6$, where $R_5$ and $R_6$ are the radii of the extra dimensions. Orbifolding the torus by $Z_2$, we identify $(x_5,x_6)$ with $(-x_5,-x_6)$. Then, there appear four orbifold fixed points, \begin{equation} (0,0), \ \ (\pi R_5,0), \ \ (0,\pi R_6), \ \ (\pi R_5,\pi R_6). \end{equation} The fundamental region is half of the torus. The kinetic term for the $U(1)$ gaugino \footnote{For notations and conventions, refer to \cite{lnz}.} is given by \begin{eqnarray} {\cal L}=i{\bar\Omega}_i\Gamma^M\partial_M\Omega^i.\label{gkin} \end{eqnarray} The gaugino $\Omega^i$ is a right-handed symplectic Majorana-Weyl fermion satisfying the chirality condition \begin{equation} \Gamma^7\Omega^i=\Omega^i. \end{equation} Writing the gaugino in a four-dimensional Weyl representation, $\Omega^i_R\equiv \lambda^i$, eq.~(\ref{gkin}) becomes \begin{eqnarray} {\cal L}=i{\bar\lambda}_i\gamma^M\partial_M\lambda^i. \label{gkina} \end{eqnarray} From the symmetry of the action on the orbifold $T^2/Z_2$, let us consider the orbifold boundary conditions and the Scherk-Schwarz (SS) twists on $T^2/Z_2$ as follows, \begin{eqnarray} Z_2&:&~~\lambda(x,-x_5,-x_6)=\tau_3(i\gamma^5)\lambda(x,x_5,x_6) \equiv P\lambda(x,x_5,x_6), \label{orb}\\ T_1&:&~~\lambda(x,x_5+2\pi R_5,x_6)= U_1\lambda(x,x_5,x_6), \label{tw1}\\ T_2&:&~~\lambda(x,x_5,x_6+2\pi R_6)= U_2\lambda(x,x_5,x_6) \label{tw2} \end{eqnarray} where $U_i\,(i=1,2)$ are $2\times 2$ twist matrices corresponding to $SU(2)_R$ rotations. The SS twists on the orbifold are subject to the consistency conditions $U_iPU_i=P\,(i=1,2)$ and $U_1U_2=U_2U_1$. We note that there is another possible choice of the parity matrix, $P=\pm {\bf 1}_2(i\gamma^5)$, instead of $P=\tau_3(i\gamma^5)$.
In this case, the consistency conditions lead to $U_i=\pm {\bf 1}_2$ or $\pm \tau_3$. However, in this paper, let us focus on the case with $P=\tau_3(i\gamma^5)$, for which a continuous twist is possible. The first condition $U_iPU_i=P\,(i=1,2)$ gives rise to the following form for either $U_1$ or $U_2$: a continuous twist connected to the identity, \begin{eqnarray} U_i=e^{-i[2\pi\omega_i(\tau_1\sin\phi_i+\tau_2\cos\phi_i)]} \end{eqnarray} with $\omega_i,\phi_i$ being real parameters, or a discrete twist not connected to the identity, \begin{eqnarray} U_i=-{\bf 1}_2. \end{eqnarray} By using the residual global invariance, a continuous twist ($U_i$ with $i=1$ or $2$) can always be set to the one with $\phi_i=0$. Therefore, also considering the second condition $U_1U_2=U_2U_1$, we find that there are four possible twisted boundary conditions: \begin{eqnarray} U_1&=&e^{-2\pi i\omega_5\tau_2}, \ \ U_2=e^{2\pi i\omega_6\tau_2}, \label{twa}\\ U_1&=&-{\bf 1}_2, \ \ U_2=e^{2\pi i\omega_6\tau_2}, \label{twb}\\ U_1&=&e^{-2\pi i\omega_5\tau_2}, \ \ U_2=-{\bf 1}_2, \label{twc}\\ U_1&=&U_2=-{\bf 1}_2 \label{twd} \end{eqnarray} where $\omega_5,\omega_6$ are real constant parameters. We note that the discrete choice of twist matrices corresponds to using the $R$-parity of ${\cal N}=1$ 4d supersymmetry as the global symmetry. First, for the case with continuous twists in both extra dimensions, given by eq.~(\ref{twa}), let us make a redefinition of the gaugino as \begin{equation} \lambda(x,x_5,x_6)=e^{-i(\omega_5x_5/R_5-\omega_6x_6/R_6)\tau_2} {\tilde\lambda}(x,x_5,x_6). \end{equation} Then, regarding $\tilde\lambda$ as untwisted fields, we take the redefined gaugino to be a solution to the twisted boundary conditions (\ref{tw1}) and (\ref{tw2}) with eq.~(\ref{twa}). Moreover, one can show that $\tilde\lambda$ satisfies the same orbifold boundary condition as $\lambda$ in eq.~(\ref{orb}).
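The consistency condition $U_iPU_i=P$ can also be verified numerically for the continuous twist. The Python sketch below checks only the $SU(2)_R$ factor $\tau_3$ of $P$ (the $i\gamma^5$ factor acts on spinor indices and plays no role here), using hand-rolled $2\times 2$ complex matrices:

```python
import math

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def su2_twist(omega, phi=0.0):
    # U = exp(-i 2*pi*omega (tau1 sin(phi) + tau2 cos(phi)))
    #   = cos(theta) 1 - i sin(theta) n, since n^2 = 1 for the unit generator n
    theta = 2 * math.pi * omega
    n = [[0.0, math.sin(phi) - 1j * math.cos(phi)],
         [math.sin(phi) + 1j * math.cos(phi), 0.0]]
    c, s = math.cos(theta), math.sin(theta)
    return [[c + 0j - 1j * s * n[0][0], -1j * s * n[0][1]],
            [-1j * s * n[1][0], c + 0j - 1j * s * n[1][1]]]

tau3 = [[1.0 + 0j, 0.0 + 0j], [0.0 + 0j, -1.0 + 0j]]

# U tau3 U = tau3 for arbitrary omega, phi: the continuous twist is consistent
for omega, phi in [(0.27, 0.0), (0.13, 0.6), (0.5, 1.1)]:
    U = su2_twist(omega, phi)
    UPU = matmul(matmul(U, tau3), U)
    assert all(abs(UPU[i][j] - tau3[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```

The identity $U\tau_3U=\tau_3$ follows because the generator $\tau_1\sin\phi+\tau_2\cos\phi$ anticommutes with $\tau_3$.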
Let us write the untwisted fields $\tilde\lambda$ in terms of 4d Majorana spinors ${\tilde\psi}^i(i=1,2)$ as \begin{eqnarray} {\tilde\lambda}^1&=&+\frac{1}{2}(1+i\gamma^5){\tilde\psi}^1 +\frac{1}{2}(1-i\gamma^5){\tilde\psi}^2,\\ {\tilde\lambda}^2&=&-\frac{1}{2}(1-i\gamma^5){\tilde\psi}^1 +\frac{1}{2}(1+i\gamma^5){\tilde\psi}^2, \end{eqnarray} and similarly for the twisted fields $\lambda$ in terms of 4d Majorana spinors $\psi^i(i=1,2)$. We note that the untwisted Majorana spinors are related to the twisted ones by \begin{eqnarray} \left(\begin{array}{l} \psi^1 \\ \psi^2 \end{array}\right) =\left(\begin{array}{ll} \cos\bigg(\frac{\omega_5x_5}{R_5} +\frac{\omega_6x_6}{R_6}\bigg) & -\sin\bigg(\frac{\omega_5x_5}{R_5} +\frac{\omega_6x_6}{R_6}\bigg) \\ \sin\bigg(\frac{\omega_5x_5}{R_5} +\frac{\omega_6x_6}{R_6}\bigg) & \cos\bigg(\frac{\omega_5x_5}{R_5} +\frac{\omega_6x_6}{R_6}\bigg) \end{array}\right) \left(\begin{array}{l} {\tilde\psi}^1 \\ {\tilde\psi}^2 \end{array}\right). \label{majrel} \end{eqnarray} Then, from eq.~(\ref{orb}), one can show that ${\tilde\psi}^i$ satisfy the following $Z_2$ boundary conditions, \begin{eqnarray} {\tilde\psi}^1(x,-x_5,-x_6)&=&+{\tilde\psi}^1(x,x_5,x_6), \\ {\tilde\psi}^2(x,-x_5,-x_6)&=&-{\tilde\psi}^2(x,x_5,x_6), \end{eqnarray} and similarly for $\psi^i$. With this redefinition of fields, let us write the gaugino kinetic term (\ref{gkina}) in terms of untwisted fields $\tilde\psi$ as \begin{eqnarray} {\cal L}&=&i\overline{{\tilde\psi}^1}\gamma^\mu\partial_\mu{\tilde\psi}^1 +i\overline{{\tilde\psi}^2}\gamma^\mu\partial_\mu{\tilde\psi}^2 -\overline{{\tilde\psi}^1}(\partial_5+\gamma^5\partial_6){\tilde\psi}^2 +\overline{{\tilde\psi}^2}(\partial_5+\gamma^5\partial_6){\tilde\psi}^1 \nonumber \\ &-&\frac{\omega_5}{R_5}(\overline{{\tilde\psi}^1}{\tilde\psi}^1 +\overline{{\tilde\psi}^2}{\tilde\psi}^2) +\frac{\omega_6}{R_6}(\overline{{\tilde\psi}^1}\gamma^5{\tilde\psi}^1 +\overline{{\tilde\psi}^2}\gamma^5{\tilde\psi}^2). 
\end{eqnarray} Equivalently, by writing ${\tilde\psi}^i=(\chi^i,{\bar\chi}^i)^T(i=1,2)$ with 4d Weyl spinors $\chi^i$, the action becomes \begin{eqnarray} {\cal L}=&&\sum_{i=1,2}(i\chi^i\sigma^\mu\partial_\mu{\bar\chi}^i +i{\bar\chi}^i{\bar\sigma}^\mu\partial_\mu\chi^i) \nonumber \\ &&+[-\chi^1(\partial_5-i\partial_6)\chi^2 +\chi^2(\partial_5-i\partial_6)\chi^1+c.c.] +{\cal L}_m \label{fnl} \end{eqnarray} where ${\cal L}_m$ corresponds to the bulk mass terms given by \begin{eqnarray} {\cal L}_m=-\bigg[\bigg(\frac{\omega_5}{R_5}+i\frac{\omega_6}{R_6}\bigg) (\chi^1\chi^1+\chi^2\chi^2)+c.c.\bigg].\label{twistedmass} \end{eqnarray} Therefore, we find that the SS twist on the torus induces nonzero $Z_2$-even mass terms in the basis of the untwisted gaugino that we have introduced for redefining the gaugino. We note that the $Z_2$-even and odd untwisted fields take equal bulk masses, as in the 5d case. From the action (\ref{fnl}), we can derive the equations of motion for the gaugino as follows, \begin{eqnarray} i\sigma^\mu\partial_\mu{\bar\chi}^2+(\partial_5-i\partial_6)\chi^1 -\bigg(\frac{\omega_5}{R_5}+i\frac{\omega_6}{R_6}\bigg)\chi^2&=&0, \\ i{\bar\sigma}^\mu\partial_\mu\chi^1-(\partial_5+i\partial_6){\bar\chi}^2 -\bigg(\frac{\omega_5}{R_5}-i\frac{\omega_6}{R_6}\bigg){\bar\chi}^1&=&0. \end{eqnarray} Therefore, solving the above equations, we find the solution for the untwisted gaugino as \begin{equation} \left(\begin{array}{l}\chi^1 \\ \chi^2\end{array}\right)(x,x_5,x_6) =\frac{1}{2\pi\sqrt{R_5R_6}} \sum_{n_5,n_6\in {\bf Z}} \left(\begin{array}{l} \cos\bigg(\frac{n_5}{R_5}x_5-\frac{n_6}{R_6}x_6\bigg) \\ \sin\bigg(\frac{n_5}{R_5}x_5-\frac{n_6}{R_6}x_6\bigg) \end{array}\right) \eta^{(n_5,n_6)}(x) \end{equation} where $n_5,n_6$ are integers, $i\sigma^\mu\partial_\mu{\bar\eta}^{(n_5,n_6)}(x)=M_{n_5,n_6}\eta^{(n_5,n_6)}(x)$ and the mass spectrum is given by \begin{equation} M_{n_5,n_6}=\frac{n_5+\omega_5}{R_5}+i\bigg(\frac{n_6+\omega_6}{R_6}\bigg).
\label{ssmass} \end{equation} Consequently, due to the relation (\ref{majrel}), the solution for the twisted gaugino $\psi^i=(\zeta^i,{\bar\zeta}^i)^T(i=1,2)$ becomes \begin{equation} \left(\begin{array}{l}\zeta^1 \\ \zeta^2\end{array}\right)(x,x_5,x_6) =\frac{1}{2\pi\sqrt{R_5R_6}} \sum_{n_5,n_6\in {\bf Z}} \left(\begin{array}{l} \cos\bigg(\frac{n_5+\omega_5}{R_5}x_5-\frac{n_6+\omega_6}{R_6}x_6\bigg) \\ \sin\bigg(\frac{n_5+\omega_5}{R_5}x_5-\frac{n_6+\omega_6}{R_6}x_6\bigg) \end{array}\right) \eta^{(n_5,n_6)}(x).\label{twistedg} \end{equation} Similarly, for the case with a continuous twist in one direction and a discrete twist in the other direction given by eq.~(\ref{twb}), we can make a redefinition of the gaugino with ${\tilde\lambda}$ as \begin{equation} \lambda(x,x_5,x_6)=e^{i(\omega_6x_6/R_6)\tau_2}{\tilde\lambda}(x,x_5,x_6). \end{equation} Then, ${\tilde\lambda}$ satisfies the following orbifold and twisted boundary conditions: \begin{eqnarray} Z_2&:&~~{\tilde\lambda}(x,-x_5,-x_6)=\tau_3(i\gamma^5){\tilde\lambda}(x,x_5,x_6) \label{orba}\\ T_1&:&~~{\tilde\lambda}(x,x_5+2\pi R_5,x_6)=- {\tilde\lambda}(x,x_5,x_6), \label{tw1a}\\ T_2&:&~~{\tilde\lambda}(x,x_5,x_6+2\pi R_6)= {\tilde\lambda}(x,x_5,x_6). 
\label{tw2a} \end{eqnarray} Consequently, plugging the redefined gaugino into the action, deriving the equation for ${\tilde\lambda}$ and imposing the above boundary conditions on ${\tilde\lambda}$, we find the corresponding solution for ${\tilde\lambda}$ in the 4d Weyl representation as \begin{equation} \left(\begin{array}{l}\chi^1 \\ \chi^2\end{array}\right)(x,x_5,x_6) =\frac{1}{2\pi\sqrt{R_5R_6}} \sum_{n_5,n_6\in {\bf Z}} \left(\begin{array}{l} \cos\bigg(\frac{n_5+\frac{1}{2}}{R_5}x_5-\frac{n_6}{R_6}x_6\bigg) \\ \sin\bigg(\frac{n_5+\frac{1}{2}}{R_5}x_5-\frac{n_6}{R_6}x_6\bigg) \end{array}\right) \eta^{(n_5,n_6)}(x)\label{untwisted} \end{equation} where $n_5,n_6$ are integers, $i\sigma^\mu\partial_\mu{\bar\eta}^{(n_5,n_6)}(x)=M_{n_5,n_6}\eta^{(n_5,n_6)}(x)$ and the mass spectrum is given by \begin{equation} M_{n_5,n_6}=\frac{n_5+\frac{1}{2}}{R_5}+i\bigg(\frac{n_6+\omega_6}{R_6}\bigg). \label{ssmassa} \end{equation} Therefore, the solution for the twisted gaugino $\lambda$ is given by eq.~(\ref{twistedg}) with $\omega_5=\frac{1}{2}$. Also for the case with the twist matrices (\ref{twc}), we only have to interchange $(n_5,R_5)\leftrightarrow (n_6,R_6)$ with $\omega_6\rightarrow\omega_5$ in eq.~(\ref{untwisted}), and then obtain the solution for the twisted gaugino $\lambda$ given by eq.~(\ref{twistedg}) with $\omega_6=\frac{1}{2}$. Lastly, for the case with discrete twists in both extra dimensions, the solution for the twisted gaugino $\lambda$ is given by eq.~(\ref{twistedg}) with $\omega_5=\omega_6=\frac{1}{2}$. \section{SUSY breaking due to localized gaugino masses} In this section, instead of the SS boundary twists of the gaugino, let us consider a local breaking of supersymmetry parametrized by gaugino mass terms, and show the equivalence between the SS breaking and the localized breaking.
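Before turning to the details, it is instructive to tabulate the twisted gaugino spectrum of eq.~(\ref{ssmass}) numerically. The Python sketch below (with illustrative radii and mass parameters that are not taken from the text) shows how the twist lifts the zero mode, and also checks the identification $\omega_5={\rm arctan}\,\alpha(\rho,m)/\pi$ that will emerge from the matching conditions of this section:

```python
import math

def gaugino_kk_masses(R5, R6, w5, w6, nmax=5):
    # |M_{n5,n6}| = sqrt(((n5+w5)/R5)^2 + ((n6+w6)/R6)^2), scanned over KK levels
    return sorted(
        math.hypot((n5 + w5) / R5, (n6 + w6) / R6)
        for n5 in range(-nmax, nmax + 1)
        for n6 in range(-nmax, nmax + 1)
    )

def twist_from_brane_mass(rho, m):
    # Effective twist induced by a Z2-even localized mass term:
    # omega = arctan(alpha)/pi, with alpha = tan(sqrt(rho) m)/sqrt(rho)
    alpha = math.tan(math.sqrt(rho) * m) / math.sqrt(rho)
    return math.atan(alpha) / math.pi

R5, R6 = 1.0, 2.0                                       # illustrative radii
assert gaugino_kk_masses(R5, R6, 0.0, 0.0)[0] == 0.0    # no twist: massless zero mode
lightest = gaugino_kk_masses(R5, R6, 0.3, 0.1)[0]
assert abs(lightest - math.hypot(0.3 / R5, 0.1 / R6)) < 1e-12  # twist lifts the tower

# rho = 1 collapses alpha to tan(m), so omega = m/pi exactly (for |m| < pi/2),
# while for small m one has omega ~ m/pi for any rho
assert abs(twist_from_brane_mass(1.0, 0.4) - 0.4 / math.pi) < 1e-12
assert abs(twist_from_brane_mass(2.0, 1e-4) - 1e-4 / math.pi) < 1e-10
```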
Let us take the most general $Z_2$-even mass terms \footnote{If one introduces gaugino mass terms proportional to $\delta(x_5)$ and $\delta(x_6)$, there appears a non-supersymmetric gauge coupling at the origin due to the suppression of the gaugino wave function \cite{choi-lee}. Since we assume the visible sector fields to be localized at the origin, let us consider the gaugino mass terms only along distant lines.} for the gaugino, localized along the two lines intersecting at the fixed point $(\pi R_5,\pi R_6)$ on the orbifold, \begin{eqnarray} {\cal L}_m&=&-[2m(\chi^1\chi^1+\rho\chi^2\chi^2)+c.c.]\delta(x_5-\pi R_5) \nonumber \\ &&-[2im'(\chi^1\chi^1+\rho'\chi^2\chi^2)+c.c.]\delta(x_6-\pi R_6) \end{eqnarray} where $(m,\rho)$ and $(m',\rho')$ are gaugino mass parameters, which are assumed to be real. The lines with localized mass terms should then be regarded as the fixed boundaries under two additional independent $Z_2$ actions \cite{kkl}: $Z'_2$: $(x_5,x_6)\rightarrow (-x_5,x_6)$ and $Z^{\prime\prime}_2$: $(x_5,x_6)\rightarrow (x_5,-x_6)$. In this case, it is conceivable that the localized mass terms are due to SUSY breaking in a hidden sector located on lines, rather than points. The gaugino equations of motion are then \begin{eqnarray} i\sigma^\mu\partial_\mu{\bar\chi}^2+(\partial_5-i\partial_6)\chi^1 -2(m\rho\delta(x_5-\pi R_5)+im'\rho'\delta(x_6-\pi R_6))\chi^2&=&0, \label{geq1}\\ i{\bar\sigma}^\mu\partial_\mu\chi^1-(\partial_5+i\partial_6){\bar\chi}^2 -2(m\delta(x_5-\pi R_5)-im'\delta(x_6-\pi R_6)){\bar \chi}^1&=&0.\label{geq2} \end{eqnarray} Now let us take the solution of the gaugino to the above equations as \begin{equation} \left(\begin{array}{l}\chi^1 \\ \chi^2\end{array}\right)(x,x_5,x_6) = \sum_{M}N_M \left(\begin{array}{l} u^1(x_5,x_6) \\ u^2(x_5,x_6) \end{array}\right) \eta_M(x) \end{equation} where $N_M$ is the normalization constant and $i\sigma^\mu\partial_\mu{\bar\eta}_M(x)=M\eta_M(x)$.
Then, the gaugino equations are \begin{eqnarray} M{\bar u}^2+(\partial_5-i\partial_6)u^1-2(m\rho\delta(x_5-\pi R_5) +im'\rho'\delta(x_6-\pi R_6))u^2&=&0,\label{geq1a}\\ {\bar M}u^1-(\partial_5+i\partial_6){\bar u}^2-2(m\delta(x_5-\pi R_5) -im'\delta(x_6-\pi R_6)){\bar u}^1&=&0.\label{geq2a} \end{eqnarray} Let us take $u^1,u^2$ to be real functions. Then, taking $M=M_5+iM_6$ with real $M_5$ and $M_6$ and using eqs.~(\ref{geq1a}) and (\ref{geq2a}), we obtain the equation for $t\equiv u^2/u^1$ as \begin{eqnarray} \partial_5 t&=&M_5(1+t^2)-2m(1+\rho t^2)\delta(x_5-\pi R_5), \\ \partial_6 t&=&-M_6(1+t^2)+2m'(1+\rho' t^2)\delta(x_6-\pi R_6). \end{eqnarray} It is convenient to consider the $Z_2$-odd solution of $t$ separately around different fixed points and match them in the overlap regions \cite{choi-lee}. That is, let us consider the solution of $t$ which satisfies the equations of motion inside a torus centered at each fixed point. Thus, we find the solution for $t$: \begin{itemize} \item $-\pi R_5<x_5<\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{equation} t=\tan(M_5x_5-M_6x_6). \end{equation} \item $0<x_5<2\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{equation} t=\tan[M_5(x_5-\pi R_5)-M_6x_6-{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5))] \end{equation} where $\epsilon(x_5-\pi R_5)$ is a step function with $2\pi R_5$ periodicity given by \begin{equation} \epsilon(x_5)=\left\{\begin{array}{l} +1, \ \ 0<x_5<\pi R_5, \\ 0, \ \ x_5=0, \\ -1, \ \ -\pi R_5<x_5<0,\end{array}\right. \end{equation} and \begin{equation} \alpha(\rho,m\epsilon(x_5-\pi R_5))\equiv \frac{1}{\sqrt{\rho}}\tan(\sqrt{\rho}m\epsilon(x_5-\pi R_5)). \end{equation} \item $-\pi R_5<x_5<\pi R_5$ and $0<x_6<2\pi R_6$, \begin{equation} t=\tan[M_5x_5-M_6(x_6-\pi R_6) +{\rm arctan}\alpha(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))], \end{equation} where ${\tilde\epsilon}(x_6-\pi R_6)$ is a step function with $2\pi R_6$ periodicity. 
\item $0<x_5<2\pi R_5$ and $0<x_6<2\pi R_6$, \begin{eqnarray} t&=&\tan[M_5(x_5-\pi R_5)-M_6(x_6-\pi R_6) -{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5)) \nonumber \\ &+&{\rm arctan}\alpha(\rho',m'{\tilde \epsilon}(x_6-\pi R_6))]. \end{eqnarray} \end{itemize} Identifying the first two solutions in the overlap region of $0<x_5<\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, we find \begin{equation} M_5=\frac{1}{R_5}\bigg(n_5+\frac{1}{\pi}{\rm arctan}\alpha(\rho,m)\bigg), \ \ \ n_5={\rm integer}. \end{equation} Likewise, identifying the first and third solutions in the overlap region of $-\pi R_5<x_5<\pi R_5$ and $0<x_6<\pi R_6$, we also find \begin{equation} M_6=\frac{1}{R_6}\bigg(n_6+\frac{1}{\pi}{\rm arctan}\alpha(\rho',m') \bigg), \ \ \ n_6={\rm integer}. \end{equation} Comparing the remaining solutions in the overlap regions does not lead to any new condition. Therefore, the mass spectrum is equivalent to the one with SS breaking when $\omega_5$ and $\omega_6$ in eq.~(\ref{ssmass}) are identified with ${\rm arctan}\alpha(\rho,m)/\pi$ and ${\rm arctan}\alpha(\rho',m')/\pi$, respectively. Moreover, the solutions for $u^1$ and $u^2$ are also given in the separate regions: \begin{itemize} \item $-\pi R_5<x_5<\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{eqnarray} u^1&=&\cos(M_5x_5-M_6x_6), \label{sol11}\\ u^2&=&\sin(M_5x_5-M_6x_6). \end{eqnarray} \item $0<x_5<2\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{eqnarray} u^1&=&(-1)^{n_5}A(\rho,m\epsilon(x_5-\pi R_5))\times \nonumber \\ &\times&\cos[M_5(x_5-\pi R_5) -M_6x_6-{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5))], \\ u^2&=&(-1)^{n_5}A(\rho,m\epsilon(x_5-\pi R_5))\times \nonumber \\ &\times&\sin[M_5(x_5-\pi R_5) -M_6x_6-{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5))] \end{eqnarray} where \begin{equation} A(\rho,m\epsilon(x_5-\pi R_5))\equiv \bigg(\frac{1+\alpha^2(\rho,m\epsilon(x_5-\pi R_5))} {1+\rho\alpha^2(\rho,m\epsilon(x_5-\pi R_5))}\bigg)^{1/2}.
\end{equation} \item $-\pi R_5<x_5<\pi R_5$ and $0<x_6<2\pi R_6$, \begin{eqnarray} u^1&=&(-1)^{n_6}A(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))\times \nonumber \\ &\times&\cos[M_5x_5-M_6(x_6-\pi R_6) +{\rm arctan}\alpha(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))],\\ u^2&=&(-1)^{n_6}A(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))\times \nonumber \\ &\times&\sin[M_5x_5-M_6(x_6-\pi R_6) +{\rm arctan}\alpha(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))]. \end{eqnarray} \item $0<x_5<2\pi R_5$ and $0<x_6<2\pi R_6$, \begin{eqnarray} u^1&=&(-1)^{n_5+n_6} A(\rho,m\epsilon(x_5-\pi R_5))A(\rho',m'{\tilde\epsilon}(x_6-\pi R_6)) \times \nonumber \\ &\times&\cos[M_5(x_5-\pi R_5)-M_6(x_6-\pi R_6)\nonumber \\ &-&{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5)) +{\rm arctan}\alpha(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))],\\ u^2&=&(-1)^{n_5+n_6} A(\rho,m\epsilon(x_5-\pi R_5))A(\rho',m'{\tilde\epsilon}(x_6-\pi R_6)) \times \nonumber \\ &\times&\sin[M_5(x_5-\pi R_5)-M_6(x_6-\pi R_6)\nonumber \\ &-&{\rm arctan}\alpha(\rho,m\epsilon(x_5-\pi R_5)) +{\rm arctan}\alpha(\rho',m'{\tilde\epsilon}(x_6-\pi R_6))]. \end{eqnarray} \end{itemize} In order to make a normalization of KK modes, let us insert the solutions in the action and integrate it over extra dimensions. Then, we obtain the normalization constant in the separate regions: \begin{itemize} \item $-\pi R_5<x_5<\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{equation} N_M=\bigg(\int_{-\pi R_5}^{\pi R_5}dx_5\int_{-\pi R_6}^{\pi R_6}dx_6 [(u^1)^2+(u^2)^2]\bigg)^{-1/2}=\frac{1}{2\pi\sqrt{R_5R_6}}.\label{norm1} \end{equation} \item $0<x_5<2\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, \begin{equation} N_M=\bigg(\int_0^{2\pi R_5}dx_5\int_{-\pi R_6}^{\pi R_6}dx_6 [(u^1)^2+(u^2)^2]\bigg)^{-1/2}=\frac{1}{2\pi\sqrt{R_5R_6}}A^{-1}(\rho,m). \end{equation} \item $-\pi R_5<x_5<\pi R_5$ and $0<x_6<2\pi R_6$, \begin{equation} N_M=\bigg(\int_{-\pi R_5}^{\pi R_5}dx_5\int_0^{2\pi R_6}dx_6 [(u^1)^2+(u^2)^2]\bigg)^{-1/2}=\frac{1}{2\pi\sqrt{R_5R_6}}A^{-1}(\rho',m'). 
\end{equation} \item $0<x_5<2\pi R_5$ and $0<x_6<2\pi R_6$, \begin{eqnarray} N_M&=&\bigg(\int_0^{2\pi R_5}dx_5\int_0^{2\pi R_6}dx_6 [(u^1)^2+(u^2)^2]\bigg)^{-1/2}\nonumber \\ &=&\frac{1}{2\pi\sqrt{R_5R_6}} A^{-1}(\rho,m)A^{-1}(\rho',m'). \end{eqnarray} \end{itemize} \section{One-loop mass correction to a brane scalar} In the general case with nonzero gaugino masses, let us put a chiral multiplet at the $(0,0)$ fixed point. Then, the scalar partner of the chiral multiplet does not feel the supersymmetry breaking directly, but there exists a loop contribution to its mass due to the distant supersymmetry breaking. Only the $Z_2$-even gaugino couples to the brane scalar. From the solution (\ref{sol11}) with normalization (\ref{norm1}) in the region $-\pi R_5<x_5<\pi R_5$ and $-\pi R_6<x_6<\pi R_6$, we find that all KK modes of the $Z_2$-even gaugino have the same brane coupling as that of the bulk gauge boson, \begin{equation} g_4=\frac{g_6}{2\pi\sqrt{R_5R_6}} \end{equation} where $g_6$ is the six-dimensional gauge coupling. One has the following KK mass spectra for the gauge bosons and gauginos running in loops, respectively, \begin{eqnarray} M^2_{(0)n_5,n_6}&=&\bigg(\frac{n_5}{R_5}\bigg)^2 +\bigg(\frac{n_6}{R_6}\bigg)^2, \\ M^2_{n_5,n_6}&=&\bigg(\frac{n_5+\omega_5}{R_5}\bigg)^2 +\bigg(\frac{n_6+\omega_6}{R_6}\bigg)^2. \end{eqnarray} On the other hand, from the even mode $\zeta^1$ of eq.~(\ref{twistedg}) at the $(0,0)$ fixed point and the mass spectrum in eq.~(\ref{ssmass}), one can find that the SS breaking leads to the same brane coupling and mass spectrum of the gaugino as in the localized breaking of supersymmetry. So, the brane scalar fields do not feel the difference between the SS twist and the localized gaugino masses along the distant lines. Now let us consider the KK mode contribution to the one-loop mass correction for a brane scalar $\phi$ with charge $Q$ under the $U(1)$. 
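Before proceeding, the two KK towers above can be made concrete with a short numerical sketch; the radii and twist parameters below are illustrative choices, not values taken from the text. It enumerates the lowest mass-squared levels of the untwisted gauge-boson spectrum $M^2_{(0)n_5,n_6}$ and the twisted gaugino spectrum $M^2_{n_5,n_6}$, showing that the twist lifts the gaugino zero mode.

```python
# Sketch: lowest Kaluza-Klein mass-squared levels for the bulk gauge boson
# (untwisted) and gaugino (twisted); R5, R6 and the twists are illustrative.

def kk_mass_sq(n5, n6, R5, R6, w5=0.0, w6=0.0):
    """M^2 = ((n5 + w5)/R5)^2 + ((n6 + w6)/R6)^2."""
    return ((n5 + w5) / R5) ** 2 + ((n6 + w6) / R6) ** 2

def lowest_levels(R5, R6, w5=0.0, w6=0.0, n_max=3, count=5):
    # Scan a window of KK numbers and sort the resulting mass levels.
    levels = sorted(
        kk_mass_sq(n5, n6, R5, R6, w5, w6)
        for n5 in range(-n_max, n_max + 1)
        for n6 in range(-n_max, n_max + 1)
    )
    return levels[:count]

R5, R6 = 1.0, 2.0                            # illustrative radii
boson = lowest_levels(R5, R6)                # gauge boson: no twist
gaugino = lowest_levels(R5, R6, 0.3, 0.1)    # gaugino: twists (w5, w6)

print(boson[0], gaugino[0])  # the twist lifts the zero mode: 0.0 vs 0.0925
```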
For this, we note that the coupling of the bulk auxiliary field to the brane scalar is given by the following action \cite{lnz}, \begin{eqnarray} \int d^6 x\bigg[\frac{1}{2}(D^3)^2+\delta(x^5)\delta(x^6)g_6Q\phi^\dagger (-D^3+F_{56})\phi\bigg] \end{eqnarray} where $D^3$ is the third component of the auxiliary field in the bulk vector multiplet and $F_{56}$ is the extra-dimensional component of the field strength. After eliminating the auxiliary field by its equation of motion, we find the resulting coupling to be \begin{eqnarray} \int d^4 x\bigg[-g_6Q\phi^\dagger F_{56}(x,x_5=0,x_6=0)\phi -\frac{1}{2}g^2_6Q^2(\phi^\dagger\phi)^2 \delta(0)\delta(0)\bigg] \end{eqnarray} with \begin{eqnarray} \delta(0)\delta(0)&=&\frac{1}{4\pi^2R_5R_6}\sum_{n_5,n_6\in {\bf Z}}1 \nonumber \\ &=&\frac{1}{4\pi^2R_5R_6}\sum_{n_5,n_6\in {\bf Z}} \frac{p^2-M^2_{(0)n_5,n_6}}{p^2-M^2_{(0)n_5,n_6}}. \end{eqnarray} Therefore, considering Feynman diagrams similar to those in 5d \cite{peskin,ckl}, in dimensional regularization with $d=4-\epsilon$, the bosonic and fermionic loop contributions to the scalar self-energy at nonzero external momentum $q^2$ are, respectively, \begin{eqnarray} -im^2_B(q^2)=4g^2_4 Q^2\mu^{4-d}\sum_{n_5,n_6\in {\bf Z}}\int \frac{d^d p}{(2\pi)^d}\frac{p(q+p)}{(p^2-M^2_{(0)n_5,n_6})(q+p)^2} \end{eqnarray} and \begin{eqnarray} -im^2_F(q^2)=-4g^2_4 Q^2\mu^{4-d}\sum_{n_5,n_6\in {\bf Z}}\int \frac{d^d p}{(2\pi)^d}\frac{p(q+p)}{(p^2-M^2_{n_5,n_6})(q+p)^2}. \end{eqnarray} By using the Schwinger representation \begin{eqnarray} \frac{1}{A^n}=\frac{1}{\Gamma(n)}\int^\infty_0 dt\, t^{n-1} e^{-At}, \end{eqnarray} and performing the momentum integrations via the identity \begin{eqnarray} \int^\infty_0 dy \,y^{2n+d-1}e^{-y^2t}=\frac{\Gamma(d/2+n)}{2t^{d/2+n}}, \end{eqnarray} we find the one-loop corrections as \begin{eqnarray} m^2_B(q^2)=\frac{g^2_4Q^2(\mu\pi R_5)^\epsilon}{4\pi^3R^2_5}\int^1_0 dx \bigg[(2-\frac{\epsilon}{2}){\cal J}_2[0,0,c] +\pi x(1-x)q^2 R^2_5{\cal J}_1[0,0,c]\bigg] \end{eqnarray} and 
\begin{eqnarray} m^2_F(q^2)&=&-\frac{g^2_4Q^2(\mu \pi R_5)^\epsilon}{4\pi^3R^2_5}\int^1_0 dx \bigg[(2-\frac{\epsilon}{2}){\cal J}_2[\omega_5,\omega_6,c] \nonumber \\ &&+\pi x(1-x)q^2 R^2_5{\cal J}_1[\omega_5,\omega_6,c]\bigg] \end{eqnarray} with \begin{eqnarray} &&{\cal J}_j[\omega_5,\omega_6,c]\equiv \sum_{n_5,n_6\in {\bf Z}} \int^\infty_0 \frac{dt}{t^{j-\epsilon/2}} e^{-\pi t[c+a_5(n_5+\omega_5)^2+a_6(n_6+\omega_6)^2]}, \ \ j=1,2; \ \ a_{5,6},c>0; \nonumber \\ &&a_5\equiv x, \ \ a_6\equiv x\bigg(\frac{R_5}{R_6}\bigg)^2, \ \ c\equiv -x(1-x)q^2R^2_5. \end{eqnarray} For small positive\footnote{$c$ is positive after a Wick rotation $q^2=-q^2_E$.} $c$, we obtain the following approximate formulas \cite{ghilencea} for ${\cal J}_j[\omega_5,\omega_6,c]$: \begin{eqnarray} {\cal J}_1[\omega_5,\omega_6,c\ll 1]&\simeq& \frac{\pi c}{\sqrt{a_5a_6}} \bigg[\frac{-2}{\epsilon}\bigg] -\ln\bigg|\frac{\vartheta_1(\omega_6-iu\omega_5|iu)}{(\omega_6-iu\omega_5)\eta(iu)} e^{-\pi u\omega^2_5}\bigg|^2 \nonumber \\ &&-\ln[(c+a_5\omega^2_5+a_6\omega^2_6)/a_6], \ \ \ \ u\equiv \sqrt{\frac{a_5}{a_6}},\\ {\cal J}_2[\omega_5,\omega_6,c\ll 1]&\simeq& -\frac{\pi^2c^2}{2\sqrt{a_5a_6}} \bigg[\frac{-2}{\epsilon}\bigg] +\frac{\pi^2a_5}{3}\bigg(\frac{a_5}{a_6}\bigg)^{1/2} \bigg[\frac{1}{15}-2\Delta^2_{\omega_5}(1-\Delta_{\omega_5})^2\bigg] \nonumber \\ &&+\bigg[\sqrt{a_5a_6}\sum_{n\in {\bf Z}}|n+\omega_5|{\rm Li}_2(e^{-2\pi iz}) +c.c.\bigg] \nonumber \\ &&+\bigg[\frac{a_6}{2\pi}\sum_{n\in {\bf Z}}{\rm Li}_3(e^{-2\pi iz})+c.c.\bigg] \end{eqnarray} where $\Delta_{\omega_5}\equiv\omega_5-[\omega_5]$ with $0\leq \Delta_{\omega_5}<1$ and $[\omega_5]\in {\bf Z}$, and $z\equiv \omega_6-i\sqrt{\frac{a_5}{a_6}}|n+\omega_5|$. Here, $\vartheta_1$ is the Jacobi theta function, $\eta$ is the Dedekind eta function, and ${\rm Li}_2,{\rm Li}_3$ are the polylogarithm functions, defined by \begin{equation} {\rm Li}_n(x)=\sum_{k=1}^\infty\frac{x^k}{k^n}, \ \ n=2,3. 
\end{equation} Therefore, the resulting one-loop correction for the brane scalar is given by \begin{eqnarray} m^2_\phi(q^2)&=&m^2_B(q^2)+m^2_F(q^2) \nonumber \\ &=&\frac{g^2_4Q^2}{2\pi^3R^2_5}\int^1_0 dx\bigg[{\cal J}_2[0,0,c] -{\cal J}_2[\omega_5,\omega_6,c]\bigg] \nonumber \\ &&+\frac{g^2_4Q^2}{4\pi^2 R^2_5}(q^2R^2_5)\int^1_0 dx\, x(1-x) \bigg[{\cal J}_1[0,0,c]-{\cal J}_1[\omega_5,\omega_6,c]\bigg]. \end{eqnarray} Consequently, we observe that the divergences of both ${\cal J}_1$ and ${\cal J}_2$ are cancelled, leaving only finite corrections. Thus, we can safely take $c=0$ at zero external momentum without encountering the UV/IR mixing found in \cite{ghilencea}. The mass correction at $q^2=0$ is then given by \begin{eqnarray} m^2_\phi(0)&=&\frac{g^2_4Q^2}{4\pi^3R^2_5}\bigg[\frac{2\pi^2}{3}r \Delta^2_{\omega_5}(1-\Delta_{\omega_5})^2 +\frac{1}{r}(I_1(0,0)-I_1(\omega_5,\omega_6)+c.c.) \nonumber \\ &&+\frac{1}{2\pi r^2}(I_2(0,0)-I_2(\omega_5,\omega_6) +c.c.)\bigg], \ \ \ r\equiv \frac{R_6}{R_5} \label{1loopmass} \end{eqnarray} with \begin{eqnarray} I_1(\omega_5,\omega_6)\equiv \sum_{n\in {\bf Z}}|n+\omega_5|{\rm Li}_2(e^{-2\pi |n+\omega_5|r -2\pi i\omega_6}), \end{eqnarray} and \begin{eqnarray} I_2(\omega_5,\omega_6)\equiv \sum_{n\in {\bf Z}}{\rm Li}_3(e^{-2\pi |n+\omega_5|r-2\pi i\omega_6}). \end{eqnarray} In order to see the mass correction explicitly, let us simplify the sums as follows: \begin{eqnarray} I_1(\omega_5,\omega_6) &=&\frac{1}{2}\sum_{k=1}^{\infty} \frac{e^{-2\pi i k\Delta_{\omega_6}}}{k^2{\rm sinh}^2(\pi k r)} \bigg[\Delta_{\omega_5}{\rm cosh}(2\pi k(1-\Delta_{\omega_5})r) \nonumber \\ &&+(1-\Delta_{\omega_5}){\rm cosh}(2\pi k \Delta_{\omega_5}r)\bigg], \end{eqnarray} and \begin{eqnarray} I_2(\omega_5,\omega_6) =\sum_{k=1}^{\infty}\frac{e^{-2\pi i k\Delta_{\omega_6}}}{k^3} \frac{{\rm cosh}(\pi k(1-2\Delta_{\omega_5})r)}{{\rm sinh}(\pi k r)}. 
\end{eqnarray} Here $\Delta_{\omega_6}\equiv\omega_6-[\omega_6]$ with $0\leq \Delta_{\omega_6}<1$ and $[\omega_6]\in {\bf Z}$. Therefore, inserting the above expressions into eq.~(\ref{1loopmass}), we find that the resulting mass correction is finite: \begin{eqnarray} m^2_\phi(0)&=&\frac{g^2_4Q^2}{4\pi^3R^2_5}\bigg[\frac{2\pi^2}{3} r\Delta^2_{\omega_5}(1-\Delta_{\omega_5})^2 \nonumber \\ &&+\frac{1}{r}\sum_{k=1}^{\infty} \frac{1}{k^2{\rm sinh}^2(\pi k r)}\bigg(1-\cos(2\pi k\Delta_{\omega_6}) \{\Delta_{\omega_5}{\rm cosh}(2\pi k(1-\Delta_{\omega_5})r) \nonumber \\ &&+(1-\Delta_{\omega_5}){\rm cosh}(2\pi k \Delta_{\omega_5}r)\}\bigg) \nonumber \\ &&+\frac{1}{\pi r^2} \sum_{k=1}^{\infty} \frac{1}{k^3{\rm tanh}(\pi k r)}\bigg(1-\cos(2\pi k\Delta_{\omega_6}) \frac{{\rm cosh}(\pi k(1-2\Delta_{\omega_5})r)}{{\rm cosh}(\pi kr)} \bigg)\bigg].\label{massfinal} \end{eqnarray} First, let us consider the case with $\Delta_{\omega_5}=0$. Then, eq.~(\ref{massfinal}) becomes \begin{eqnarray} m^2_\phi(0)&=&\frac{g^2_4Q^2}{4\pi^3R^2_5}\bigg[\frac{1}{r}\sum_{k=1}^{\infty} \frac{1}{k^2{\rm sinh}^2(\pi k r)}(1-\cos(2\pi k\Delta_{\omega_6})) \nonumber \\ &&+\frac{1}{\pi r^2} \sum_{k=1}^{\infty} \frac{1}{k^3{\rm tanh}(\pi k r)}(1-\cos(2\pi k\Delta_{\omega_6}))\bigg]. \end{eqnarray} In this case, let us take the limit $\pi r\gg 1$, i.e., the extra dimension of radius $R_5$ much smaller than the other. The resulting mass correction then exactly reproduces the 5d case with an SS twist \cite{choi-lee}, \begin{eqnarray} m^2_\phi(0)\simeq \frac{g^2_4Q^2}{4\pi^4R^2_6}\sum_{k=1}^{\infty} \frac{1}{k^3}(1-\cos(2\pi k\Delta_{\omega_6})). 
\end{eqnarray} On the other hand, when one takes $\Delta_{\omega_5}=\frac{1}{2}$, which is the case with a discrete twist in the fifth direction, eq.~(\ref{massfinal}) becomes \begin{eqnarray} m^2_\phi(0)&=&\frac{g^2_4Q^2}{4\pi^3R^2_5}\bigg[\frac{\pi^2}{24}r+\frac{1}{r}\sum_{k=1}^{\infty} \frac{1}{k^2{\rm sinh}^2(\pi k r)} \nonumber \\ &&+\frac{1}{\pi r^2}\sum_{k=1}^{\infty}\frac{1}{k^3{\rm tanh}(\pi k r)} \bigg(1-\cos(2\pi k\Delta_{\omega_6})\frac{\pi k r}{{\rm sinh}(\pi k r)}\bigg)\bigg]. \end{eqnarray} Again, in the limit $\pi r\gg 1$, the resulting mass correction is \begin{eqnarray} m^2_\phi(0)\simeq \frac{g^2_4Q^2}{96\pi R^2_5}\bigg(\frac{R_6}{R_5}\bigg) \bigg[1+{\cal O}\bigg(\frac{R^2_5}{R^2_6}\bigg)\bigg]. \end{eqnarray} Therefore, in this case, the small extra dimension of radius $R_5$ does not decouple; rather, the effect of the nontrivial SS twist in that direction gives the dominant contribution to the mass correction. For other nonzero values of $\Delta_{\omega_5}$, such non-decoupling of the small extra dimension persists, because the first term in eq.~(\ref{massfinal}) is dominant for $\pi r\gg 1$. \section{Conclusion} We have considered supersymmetry breaking on the orbifold $T^2/Z_2$ via SS twisted boundary conditions or localized mass terms. It turns out that the SS breaking is equivalent to the localized breaking at the lines, which should be regarded as fixed boundaries under the additional $Z_2$ actions. In this case, we have shown that, in the presence of the SS twist or localized mass terms for the bulk gauge sector, there arises a finite one-loop mass correction to the visible brane scalar. In particular, for the case with one extra dimension much smaller than the other, we observe that the effect of the small extra dimension on the one-loop mass correction does not decouple, owing to the nontrivial SS twist in that direction. 
In order to know whether the contribution due to the bulk gaugino dominates over other contributions such as anomaly mediation \cite{chacko}, one needs to determine the SS twist parameter dynamically. At the level of 4d effective supergravity, one could regard the SS breaking as equivalent to a nonzero $F$ term of the corresponding radion multiplet for the two extra dimensions, as in the 5d case \cite{chacko}, and introduce a radius stabilization mechanism to determine the $F$ term dynamically. Moreover, in order to estimate supergravity loop corrections as in the 5d case \cite{sugra}, it seems indispensable to understand 6d off-shell supergravity, which is not yet available. We leave these issues for a future publication. \section*{Acknowledgements} The author would like to thank W. Buchm\"uller, A. Falkowski and D. Ghilencea for useful comments and discussions.
\section{Conclusion} In this paper, we have proposed a novel graph active learning framework, which integrates CL and AL seamlessly to leverage the power of abundant unlabeled data. Besides, considering that previous work neglects neighborhood information in graph AL selection strategies, we propose that the active selection problem can be cast as selecting central nodes in homophilous subgraphs. Then, we design a novel minimax selection scheme that takes neighborhood information into account. Moreover, comprehensive, confounding-free experiments have been conducted to verify the effectiveness of our proposed framework and selection strategy, which show that our method outperforms all baselines by significant margins. \section{Experiments} The experiments presented in this section aim to answer the following questions. \begin{itemize} \item \textbf{Q1}. How does CL benefit AL performance? \item \textbf{Q2}. How does the proposed minimax selection criterion compare with existing graph AL approaches? \item \textbf{Q3}. How do key hyperparameters affect the model performance? \end{itemize} \subsection{Experimental Setup} \paragraph{Datasets.} Following previous work \cite{Cai:2017wm,Gao:2018wh}, we use three citation network datasets: Cora, Citeseer and Pubmed. 
Each dataset contains a graph, where nodes represent articles and edges represent citation relationships, and the node features are sparse bag-of-words feature vectors. In addition, considering that the three datasets are of small scale, we include two larger datasets, Computer and Photo, where nodes represent items and edges indicate co-purchase relationships in the Amazon dataset. Detailed statistics of the datasets are summarized in Table \ref{tab:statistics}. The datasets we use in experiments are all publicly available. \begin{table}[t] \centering \caption{\small Statistics of the five datasets used in experiments.} \vskip0.5em \label{tab:statistics} \begin{tabular}{ccccc} \toprule Dataset & \#Nodes & \#Edges & \#Classes & \#Features \\ \midrule Cora & 2,708 & 5,429 & 7 & 1,433 \\ Citeseer & 3,327 & 4,732 & 6 & 3,703 \\ Pubmed & 19,717 & 44,338 & 3 & 500 \\ Computer & 13,752 & 245,861 & 10 & 767 \\ Photo & 7,650 & 119,081 & 8 & 745 \\ \bottomrule \end{tabular} \end{table} \paragraph{Baselines.} To evaluate the performance of our proposed approach, we compare it with six representative baselines, including three traditional AL methods (Random, Degree, and Entropy) and three graph-based AL methods (AGE, ANRMAB, and FeatProp). \begin{itemize} \item \textbf{Random}. All training data are randomly selected. \item \textbf{Degree}. We select the node with the largest degree in each iteration. \item \textbf{Entropy}. We select the node that has the maximum entropy of the predicted label distribution in each iteration. \item \textbf{AGE} \cite{Cai:2017wm}. AGE designs three selection criteria: calculating uncertainty via the entropy of the predicted label distribution, measuring node centrality via the PageRank algorithm, and obtaining node density by calculating distances between nodes and cluster centers. \item \textbf{ANRMAB} \cite{Gao:2018wh}. 
ANRMAB uses the same criteria as AGE and further applies a multi-armed bandit mechanism to adaptively adjust the importance of these criteria. \item \textbf{FeatProp} \cite{Wu:2019wz}. FeatProp applies the K-medoids algorithm to cluster nodes based on node features and selects the central nodes for labeling. \end{itemize} \paragraph{Implementation details.} For fair comparison, we closely follow the experimental settings in previous work \cite{Cai:2017wm,Gao:2018wh}. For each dataset, we randomly sample 500 nodes for validation and 1,000 nodes for test to ensure that the performance variance is due to AL strategies. We train a two-layer GCN model with 128 hidden units for a maximum of 200 epochs using the Adam optimizer \cite{Kingma:2015us} with a learning rate of 0.001. We set \(\lambda = 1.0\) after a grid search on the validation set. Similar to existing CL work, after training the model, we train a logistic regression model on the learned node embeddings. We repeat training and testing 10 times on 10 different validation sets and report the average performance. The proposed method is implemented using PyTorch 1.5.1 \cite{Paszke:2019vf} and PyTorch Geometric 1.6 \cite{Fey:2019wv}. All experiments are conducted on a Linux server equipped with four NVIDIA Tesla V100S GPUs (each with 32GB memory) and 12 Intel Xeon Silver 4214 CPUs. \subsection{Overall Performance (Q1--Q2)} \paragraph{Confounding-free experimental configurations.} For simplicity, we denote the number of classes as \(C\) hereafter. First of all, to demonstrate the effectiveness of the unified graph AL paradigm, we conduct comparative experiments with a budget of \(20C\) nodes, with and without CL components. Specifically, we utilize the widely-used CL framework GRACE \cite{Zhu:2020vf} \emph{on top of all baselines} such that the comparison can be made on the same basis. 
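As a rough illustration of what the CL component optimizes, the following numpy sketch implements an InfoNCE-style view-agreement objective of the kind GRACE builds on. The function name, toy embeddings, and simplifications (e.g., dropping intra-view negatives) are ours, not GRACE's exact loss.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Schematic view-agreement loss: embeddings z1[i], z2[i] of the same
    node under two augmentations are positives; all other cross-view pairs
    serve as negatives. (Simplified relative to the full GRACE objective.)"""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # cross-view cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives lie on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
random_views = info_nce(z, rng.normal(size=z.shape))
# agreement between matched views yields a lower loss than random pairings
```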
As a comparison, following previous work AGE \cite{Cai:2017wm} and ANRMAB \cite{Gao:2018wh}, we also report model performance for all baselines without CL components, but \emph{with an initial dataset} that consists of \(4C\) labeled nodes. In addition, to verify the effectiveness of our proposed minimax selection criterion, we further conduct experiments under smaller budget settings (30 and 60 nodes), closely following FeatProp \cite{Wu:2019wz}. In this case, all baselines are employed with CL components and without initial pools. \subsubsection{Contrastive learning vs. initial labeled pools (Q1).} The performance of the comparative study is summarized in Fig. \ref{fig:performance}. It is obvious that \emph{all AL approaches} trained with CL components show superior performance to their counterparts that are initialized with a labeled pool. This indicates that, on the one hand, our proposed framework is able to effectively leverage the power of abundant unlabeled data via self-supervised learning, benefiting AL selection strategies by providing a well-aligned embedding space. On the other hand, as nodes selected via AL strategies offer rich semantic information, CL could in turn benefit from such valuable information to find more positive samples, resulting in a more robust embedding space. In summary, these results demonstrate the effectiveness of our proposed unified paradigm, which boosts the performance of AL methods with no initial pool given. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/ablation-study.pdf} \caption{\small Performance comparison of our proposed unified AL paradigm for baselines with and without CL components in terms of Micro-F1 (Mi-F1) and Macro-F1 (Ma-F1) scores (\%).} \label{fig:performance} \end{figure} \subsubsection{The minimax selection scheme vs. 
baselines (Q2).} We further experiment with smaller budget configurations to comprehensively study the proposed minimax selection criterion, with the results presented in Table \ref{tab:main-exp}. Again, for fair comparison, we employ the same CL components on all node selection baselines, except for Random and Degree, since they do not select nodes based on embeddings. From the table, we can see that our proposed minimax selection scheme outperforms all traditional and graph-based baselines on five datasets by significant margins, which demonstrates the superiority of our method. Furthermore, there are several interesting findings regarding the results. \begin{itemize} \item The performance of traditional AL baselines is relatively worse than that of graph-based methods, indicating that both structural and attributive information is necessary for AL on graph-structured data. What stands out from the table is that selecting nodes merely according to node degrees results in severe performance degradation on the Computer dataset. We suspect that this is primarily due to the large scale and relatively low homophily of the dataset (cf. Table \ref{tab:homophily}), where high-degree nodes may be connected by many inter-class edges; selecting such nodes brings no benefit for accurate prediction. \item Graph-based baselines achieve inferior performance to ours. Once again, this suggests that selecting nodes according to uncertainty and representativeness measures without taking the GNN propagation scheme into consideration leads to suboptimal performance. \item We summarize the average homophily score (i.e., the ratio of intra-class edges to all edges) of all one-ego networks of the selected nodes. 
As illustrated in Table \ref{tab:homophily}, we observe that the homophily scores of subgraphs induced by the selected nodes are much higher than the average score of the original graph on all datasets, which verifies that our proposed method is able to select central nodes from homophilous subgraphs. \end{itemize} \begin{table} \centering \caption{\small Performance of node classification in terms of Micro-F1 (Mi-F1) and Macro-F1 (Ma-F1) scores (\%) with different budgets: 30, 60, and 20\(C\). For fair comparison, all models are provided with no initial pools and are trained with the same CL component. Note that 20\(C\) is equal to 60 on the Pubmed dataset.} \vskip0.5em \label{tab:main-exp} \begin{subtable}[h]{\linewidth} \centering \caption{\small Experiments with default budget configuration \(b = 20C\)} \begin{tabular}{ccccccccc} \toprule Dataset & Metric & Random & Degree & Entropy & AGE & ANRMAB & FeatProp & Ours \\ \midrule \multirow{1.5}[2]{*}{Cora} & Mi-F1 & 78.31 & 80.68 & 84.50 & 83.97 & 84.02 & 83.94 & \textbf{85.35} \\ & Ma-F1 & 79.11 & 79.42 & 82.92 & 82.52 & 82.14 & 82.63 & \textbf{84.02} \\ \midrule \multirow{1.5}[2]{*}{Citeseer} & Mi-F1 & 67.80 & 67.21 & 70.18 & 70.08 & 70.63 & 69.95 & \textbf{71.98} \\ & Ma-F1 & 62.91 & 59.93 & 64.91 & 64.40 & 64.74 & 63.97 & \textbf{66.44} \\ \midrule \multirow{1.5}[2]{*}{Pubmed} & Mi-F1 & 77.14 & 77.61 & 83.98 & 84.26 & 84.21 & 84.45 & \textbf{84.76} \\ & Ma-F1 & 76.98 & 76.92 & 83.13 & 83.53 & 83.62 & 83.88 & \textbf{84.11} \\ \midrule \multirow{1.5}[2]{*}{Computer} & Mi-F1 & 84.21 & 71.75 & 87.71 & 87.67 & 87.91 & 87.94 & \textbf{88.69} \\ & Ma-F1 & 77.83 & 44.09 & 85.99 & 85.92 & 85.69 & 85.73 & \textbf{87.33} \\ \midrule \multirow{1.5}[2]{*}{Photo} & Mi-F1 & 91.97 & 86.19 & 92.33 & 92.21 & 92.47 & 92.63 & \textbf{93.14} \\ & Ma-F1 & 91.05 & 84.89 & 91.12 & 91.11 & 91.23 & 91.62 & \textbf{92.73} \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}[h]{\linewidth} \caption{\small Experiments with a 
smaller budget configuration (\(b = 30\) and \(b = 60\))} \resizebox{\linewidth}{!}{ \begin{tabular}{cccccccccc} \toprule Dataset & Budget & Metric & Random & Degree & Entropy & AGE & ANRMAB & FeatProp & Ours \\ \midrule \multirow{4}[2]{*}{Cora} & \multirow{2}[1]{*}{30} & Mi-F1 & 68.23 & 69.81 & 83.44 & 83.74 & 83.60 & 83.64 & \textbf{84.13} \\ & & Ma-F1 & 67.12 & 69.24 & 81.77 & 81.88 & 81.82 & 82.24 & \textbf{83.56} \\ \cmidrule{2-10} & \multirow{2}[0]{*}{60} & Mi-F1 & 77.22 & 75.58 & 83.75 & 83.86 & 83.68 & 83.86 & \textbf{84.69} \\ & & Ma-F1 & 76.81 & 74.48 & 82.20 & 82.32 & 81.92 & 82.26 & \textbf{83.87} \\ \midrule \multirow{4}[2]{*}{Citeseer} & \multirow{2}[1]{*}{30} & Mi-F1 & 55.86 & 56.21 & 70.18 & 69.96 & 69.51 & 69.79 & \textbf{70.95} \\ & & Ma-F1 & 51.33 & 51.66 & 63.82 & 64.02 & 63.21 & 63.84 & \textbf{65.89} \\ \cmidrule{2-10} & \multirow{2}[0]{*}{60} & Mi-F1 & 62.27 & 64.06 & 70.68 & 70.12 & 69.74 & 70.22 & \textbf{71.91} \\ & & Ma-F1 & 60.18 & 62.23 & 64.31 & 65.18 & 63.92 & 64.73 & \textbf{66.03} \\ \midrule \multirow{4}[2]{*}{Pubmed} & \multirow{2}[1]{*}{30} & Mi-F1 & 72.91 & 73.32 & 84.10 & 84.16 & 84.12 & 84.19 & \textbf{84.45} \\ & & Ma-F1 & 70.21 & 72.72 & 83.59 & 83.41 & 83.67 & 83.63 & \textbf{83.98} \\ \cmidrule{2-10} & \multirow{2}[0]{*}{60} & Mi-F1 & 77.14 & 77.61 & 83.98 & 84.26 & 84.21 & 84.45 & \textbf{84.76} \\ & & Ma-F1 & 76.98 & 76.92 & 83.13 & 83.53 & 83.62 & 83.88 & \textbf{84.11} \\ \midrule \multirow{4}[2]{*}{Computer} & \multirow{2}[1]{*}{30} & Mi-F1 & 72.55 & 53.83 & 87.23 & 87.16 & 87.31 & 87.24 & \textbf{87.90} \\ & & Ma-F1 & 68.49 & 40.16 & 85.29 & 85.45 & 85.55 & 84.97 & \textbf{86.48} \\ \cmidrule{2-10} & \multirow{2}[0]{*}{60} & Mi-F1 & 79.92 & 56.63 & 87.54 & 87.63 & 87.86 & 87.57 & \textbf{88.33} \\ & & Ma-F1 & 73.94 & 42.28 & 85.61 & 85.75 & 85.94 & 85.18 & \textbf{86.72} \\ \midrule \multirow{4}[2]{*}{Photo} & \multirow{2}[1]{*}{30} & Mi-F1 & 88.72 & 42.19 & 92.26 & 92.28 & 92.12 & 92.14 & \textbf{92.45} \\ & & Ma-F1 
& 88.51 & 51.92 & 90.97 & 91.07 & 91.08 & 90.82 & \textbf{91.52} \\ \cmidrule{2-10} & \multirow{2}[0]{*}{60} & Mi-F1 & 92.56 & 46.93 & 92.19 & 92.11 & 92.33 & 92.37 & \textbf{92.59} \\ & & Ma-F1 & 91.53 & 57.88 & 91.03 & 91.17 & 91.08 & 91.16 & \textbf{91.58} \\ \bottomrule \end{tabular} } \end{subtable} \end{table} \begin{table} \centering \caption{\small The average homophily scores of the original graph (Original) and the subgraphs selected by our method (Selected), with a budget \(b = 20C\). The relative improvement is shown in the last row.} \vskip0.5em \label{tab:homophily} \begin{tabular}{cccccc} \toprule Graph & Cora & Citeseer & Pubmed & Computer & Photo \\ \midrule Original & 0.810 & 0.736 & 0.802 & 0.778 & 0.828 \\ Selected & 0.921 & 0.816 & 0.872 & 0.858 & 0.929 \\ \midrule Improv. & 13.7\% & 10.9\% & 8.73\% & 10.3\% & 12.2\% \\ \bottomrule \end{tabular} \end{table} \subsection{Sensitivity Analysis (Q3)} In this section, we conduct a sensitivity analysis of two key hyperparameters in our proposed approach. While we alter one parameter, the other experimental configurations remain unchanged. Firstly, we conduct experiments with different values (0.2, 0.5, 0.8, 1.0) of \(\lambda\), which adjusts the contribution of the supervised component in the unified loss. From Fig. \ref{fig:lambda-value}, we can observe that the performance improves steadily as \(\lambda\) increases, demonstrating that the supervised information provided by our AL algorithm plays a positive role in model training. We suspect that the main reason behind the performance gain is that the rich semantic information carried by labeled nodes encourages CL to pull more positive samples together, so that representations are more distinguishable. 
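For concreteness, the homophily score reported in Table \ref{tab:homophily}, i.e., the ratio of intra-class edges to all edges, can be computed as in this small sketch; the toy edge list and labels are illustrative.

```python
# Sketch: homophily score of a (sub)graph, defined in the text as the
# ratio of intra-class edges to all edges.

def homophily(edges, labels):
    intra = sum(1 for u, v in edges if labels[u] == labels[v])
    return intra / len(edges)

labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (2, 3)]  # one inter-class edge
score = homophily(edges, labels)
print(score)  # 4 intra-class edges out of 5 -> 0.8
```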
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/sensitivity.pdf} \caption{\small Node classification performance with varied \(\lambda\) in terms of Mi-F1 and Ma-F1 (\%), with a budget of \(b = 20C\) nodes and without an initial pool.} \label{fig:lambda-value} \end{figure} Secondly, we analyze the impact of the neighborhoods considered in our minimax selection scheme. We vary \(k\) from one- to two-hop neighbors, since deeper GCNs lead to oversmoothing \cite{Li:2018wc}. From Table \ref{tab:k-value}, we note that models using one- and two-hop neighborhood information exhibit a negligible difference in performance. We also observe that Ours--1 slightly outperforms Ours--2 on all datasets, which may indicate that some noisy neighboring nodes are included when considering two-hop neighborhoods. Furthermore, since computing distances between two-hop neighborhood pairs introduces an extra computational burden, we include only one-hop neighbors in our experiments to speed up computation. \begin{table}[t] \centering \caption{\small Performance comparison of models considering one-hop neighborhoods (Ours--1) and two-hop neighborhoods (Ours--2) in terms of Mi-F1 (\%), with a budget of \(b = 20C\) nodes and without an initial pool.} \label{tab:k-value} \vskip0.5em \begin{tabular}{cccccc} \toprule Model & Cora & Citeseer & Pubmed & Computer & Photo \\ \midrule Ours--1 & 85.35 & 71.98 & 84.76 & 88.69 & 93.14 \\ Ours--2 & 85.26 & 71.82 & 84.74 & 88.60 & 92.94 \\ \bottomrule \end{tabular} \end{table} \section{Introduction} Graph neural networks (GNNs), as a promising means of learning graph representations, have attracted substantial research interest \cite{Kipf:2016tc,Velickovic:2018we,Hu:2019vq}. Most existing GNN models are established in a semi-supervised manner, where limited labeled nodes are given. However, with the same amount of labeled data, the quality of the labels strongly affects model performance \cite{Kipf:2016tc}. 
Intuitively, a question arises: \emph{how can we select high-quality labels to maximize the performance of GNN models?} Active learning (AL), which iteratively selects the most informative samples with the greatest potential to improve model performance, is widely used to solve this problem. One of the most critical components in designing an AL algorithm is measuring the informativeness of an instance for labeling. Existing graph AL methods \cite{Cai:2017wm,Gao:2018wh,Wu:2019wz} design various criteria, which can be roughly categorized into two lines: uncertainty- and representativeness-based strategies. The former queries the sample about which the model is least confident, as measured by the entropy of the predicted label distribution; for instance, a sample with a 50\% probability of being positive in binary classification. The latter focuses on instances that are representative of the data distribution, e.g., the node closest to a clustering center or with the highest PageRank score. Though promising progress has been made by previous attempts, these works follow a traditional AL paradigm that learns node representations from an actively selected labeled dataset. The recent surge in self-supervised learning (SSL) suggests that abundant unlabeled data is useful for learning representations \cite{Chen:2020wj,He:2020tu,Caron:2018ba}, which is largely ignored by current AL work. In this work, we propose to unify graph AL with contrastive learning (CL), where we argue the two components can benefit each other. To be specific, on the one hand, CL maximizes the agreement among stochastically augmented versions of the original graph, which empowers AL to leverage the whole dataset without human annotations; on the other hand, AL provides additional semantic information about informative nodes via active selection, which adds positive samples from the same class and thus better guides representation learning. 
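The stochastic augmentation that CL relies on can be sketched as follows; the edge-drop and feature-mask probabilities, and the toy graph, are illustrative choices rather than any framework's exact recipe.

```python
import random

def augment(edges, features, p_edge=0.2, p_feat=0.2, seed=0):
    """Sketch of stochastic graph augmentation: randomly drop edges and
    mask feature dimensions to create one 'view' of the graph."""
    rng = random.Random(seed)
    kept_edges = [e for e in edges if rng.random() > p_edge]
    masked = [rng.random() < p_feat for _ in range(len(features[0]))]
    new_feats = [[0.0 if masked[j] else x for j, x in enumerate(row)]
                 for row in features]
    return kept_edges, new_feats

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
feats = [[1.0] * 8 for _ in range(4)]
view1 = augment(edges, feats, seed=1)
view2 = augment(edges, feats, seed=2)  # two stochastic views of the same graph
```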
In addition, previous work fails to explicitly consider the information propagation scheme of GNNs, and may thus end up selecting nodes detrimental to model training. For example, a central node with high entropy and PageRank scores but dense, noisy inter-class connections may be selected. Labeling such nodes yields indistinguishable representations, since the embeddings of neighboring nodes belonging to different communities are smoothed after propagation \cite{Chen:2020cn}. On the contrary, if the nodes in an ego network all share the same label, it is easy for the model to make correct predictions once we have obtained the label of the central node. Therefore, we model active selection as selecting the central nodes of \emph{homophilous ego networks}, where most nodes belong to the same class. \begin{figure}[t] \centering \includegraphics[width=0.55\linewidth]{figures/example.pdf} \caption{\small Illustration of the proposed minimax selection scheme, including three one-ego networks shown in dashed boxes. Arrows denote neighborhood aggregation; nodes in each subgraph are projected into an embedding space, where the red line represents the largest distance between the central node and its neighbors. According to our minimax scheme, node 1 will be selected, as its neighboring nodes are densely clustered, indicating high homophily of its subgraph.} \label{fig:example} \vskip-2em \end{figure} However, as most labels remain unavailable during training in AL, we turn to approximating the homophily score of an ego network via the similarities of node embedding pairs. This is motivated by the smoothness assumption of semi-supervised learning, which states that instances close to each other in a dense area belong to the same cluster with high probability \cite{Chapelle:2006vz}. Accordingly, we introduce a novel minimax active selection scheme. As illustrated in Fig. 
\ref{fig:example}, we first assign each node a score defined as the furthest distance in the embedding space between itself and its neighbors in its \(k\)-ego network, where \(k\) is the number of GNN layers. Then, we select the node with the minimum score. By explicitly considering the neighborhood information, the proposed scheme is able to discover homophilous subgraphs in which nodes are the most densely clustered in the embedding space (as shown in the leftmost subgraph in Figure \ref{fig:example}), which benefits the model training. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/model.pdf} \caption{\small The overall framework of our proposed method. In each iteration, we first augment the original graph twice and maximize the congruency between augmented embeddings. Then, we compute the maximum distance between each node and its neighbors and select the node that minimizes this distance. The model is trained to minimize the unified learning objective \(\mathcal{J}\). The algorithm ends when the number of labeled nodes reaches the budget.} \label{fig:model} \vskip-1em \end{figure} The overall framework is presented in Fig. \ref{fig:model}. In a nutshell, the main contributions of this paper can be summarized as follows. \begin{itemize} \item \textbf{Novel paradigm.} We propose a unified paradigm to empower graph AL with self-supervision. Please kindly note that the CL component is not simply used as a preprocessing step, but is \emph{jointly trained with AL}. The unified \mbox{paradigm is} able to leverage the power of the unlabeled dataset; nodes from AL in \mbox{turn provide} additional semantic information to boost classification performance. To the best of our knowledge, this is the first work that bridges AL with CL. \item \textbf{Principled methodology.} Our proposed minimax selection scheme is a more principled approach to GNN-based AL. 
We explicitly consider the neighborhood aggregation scheme when designing selection criteria and model active selection as selecting the central node from homophilous ego networks. We then propose a novel propagation-aware minimax selection criterion to discover central nodes in homophilous subgraphs. \item \textbf{Empirical studies.} We have conducted extensive empirical studies without confounding experimental designs on five real-world graph datasets of different scales. The results show the superiority of our method over both traditional and graph-based baselines. \end{itemize} \section{The Proposed Method} In this section, we first give the problem definition and describe the notation. Then, we introduce the proposed paradigm that unifies graph AL with CL, followed by details of the minimax active selection scheme. Lastly, we summarize the overall framework of our proposed method. \subsection{Preliminaries} \paragraph{Problem definition.} Active learning aims to train an accurate model using training data of a limited budget. Specifically, given a large unlabeled data pool $\mathcal{U}^0$, an empty initial labeled set \(\mathcal{L}^0\), and a budget $b$, the algorithm aims to select the top-$b$ most informative nodes via the designed selection criteria and train the model with the labeled nodes to maximize its performance. \paragraph{Graph representation learning.} Let $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ be a graph with $n$ nodes, where $\mathcal{V} = \{v_i\}_{i=1}^n$ is the vertex set and \(\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}\) is the edge set. We denote the adjacency matrix as \(\bm{A} \in \{0,1\}^{n \times n}\) and the node feature matrix as \(\bm{X} \in \mathbb{R}^{n \times m}\), where \(m\) is the dimension of features. In this paper, following previous work \cite{Cai:2017wm,Gao:2018wh}, we choose the widely-used GCN \cite{Kipf:2016tc} model to learn node representations. 
Mathematically, the layer-wise propagation rule in GCN can be formulated as \begin{equation} \bm{H}^{(l+1)} = \sigma\left(\widehat{\bm{D}}^{-\frac{1}{2}} \widehat{\bm{A}} \widehat{\bm{D}}^{-\frac{1}{2}} \bm{H}^{(l)} \bm{W}^{(l)}\right), \label{eq:gcn} \end{equation} where $\widehat{\bm{A}} = \bm{A} + \bm{I}_{n}$ is the adjacency matrix with self-loops and $\widehat{\bm{D}}_{i i}=\sum_{j} \widehat{\bm{A}}_{i j}$. $\bm{H}^{(l)} \in \mathbb{R}^{n \times m_l}$ represents the node embedding matrix in the $l$-th layer, where \(m_l\) is the dimension of the node embedding and \(\bm{H}^{(0)} = \bm{X}\). ${\bm{W}^{(l)}} \in \mathbb{R}^{m_l \times m_{l+1}}$ is a learnable weight matrix of layer \(l\) and $\sigma(\cdot)$ is the nonlinear activation function. We utilize a two-layer GCN model, denoted as \(f(\bm{A}, \bm{X})\). For brevity, we denote \(\bm{H}\) = \(\bm{H}^{(2)}\) as the output representations. \subsection{Unifying Contrastive Learning and Graph Active Learning} Previous work on graph AL fails to utilize the large amount of unlabeled data during training, which leads to poor-quality embeddings at the beginning of training. Motivated by recent progress in self-supervised representation learning, we propose to unify graph AL with CL techniques. \subsubsection{Graph contrastive learning.} Graph CL aims at maximizing the agreement between augmented views and promotes the node representations to be invariant to perturbation \cite{Zhu:2020vf,You:2020ut}. To be specific, we first draw two stochastic augmentation functions \(t, t' \sim \mathcal{T}\) from a set of all possible functions \(\mathcal{T}\). Accordingly, we generate two graph views \(\widetilde{\mathcal{G}}_1 = t(\mathcal{G})\) and \(\widetilde{\mathcal{G}}_2 = t'(\mathcal{G})\) and obtain their node embeddings via a shared GNN, denoted by \(\bm{H}' = f(\widetilde{\bm{A}}_1, \widetilde{\bm{X}}_1)\) and \(\bm{H}'' = f(\widetilde{\bm{A}}_2, \widetilde{\bm{X}}_2)\) respectively. 
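For illustration, the propagation rule in Eq. (\ref{eq:gcn}) and the two-layer encoder \(f(\bm{A}, \bm{X})\) can be sketched in dense NumPy as follows (a minimal sketch with hypothetical variable names; a practical implementation would use sparse matrices and a deep learning framework such as PyTorch):

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN step: H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return activation(A_norm @ H @ W)

# toy 3-node path graph, 2-d features, 2 hidden units per layer
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 2)
W0, W1 = np.random.randn(2, 2), np.random.randn(2, 2)
H = gcn_layer(A, gcn_layer(A, X, W0), W1)     # two-layer GCN f(A, X)
```

The second call stacks two propagation steps, so each output embedding in \(\bm{H}\) aggregates information from 2-hop neighborhoods.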
Finally, we employ a contrastive objective function \(\mathcal{J}\) to train the GCN model in a self-supervised manner. \paragraph{Data augmentation.} Following previous work \cite{You:2020ut,Zhu:2020ui}, we modify both graph structures and node attributes to comprehensively generate graph views. For topology-level augmentation, we randomly remove edges by sampling a masking entry \(\widetilde{\bm{R}}_{ij} \sim \operatorname{Bern}(1 - p_e)\) for each edge from a Bernoulli distribution, where \(p_e\) denotes the probability of each edge being removed. When there is no edge between two nodes in the original graph, the corresponding entry in \(\widetilde{\bm{R}}\) is set to 0. Then, the resulting corrupted graph structure can be represented as \[ \widetilde{\bm{A}} = \bm{A} \circ \widetilde{\bm{R}}, \] where \(\circ\) denotes element-wise multiplication. Note that other augmentations such as subgraph sampling \cite{Qiu:2020gq} and node dropping \cite{You:2020ut} could be regarded as special cases of edge removing. For node-level augmentation, we randomly mask partial dimensions of node attributes with zeros. Specifically, we generate a vector \(\widetilde{\bm{m}}\), each entry of which is sampled from another Bernoulli distribution \(\widetilde{m}_{j} \sim \operatorname{Bern}(1-p_n)\), where \(p_n\) is the probability of one dimension being masked. Then, we compute the corrupted attributes as \[ \widetilde{\boldsymbol{X}}=\left[\boldsymbol{x}_{1} \circ \widetilde{\boldsymbol{m}} ; \enspace \boldsymbol{x}_{2} \circ \widetilde{\boldsymbol{m}} ; \enspace \cdots ; \enspace \boldsymbol{x}_{n} \circ \widetilde{\boldsymbol{m}}\right], \] where \([\cdot;\cdot]\) denotes the concatenation operator and \(\bm{x}_i\) is the \(i\)-th row of \(\bm{X}\). 
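The two augmentations above (edge removing and feature masking) can be sketched as follows on dense matrices (a minimal sketch with hypothetical names; a symmetric edge mask is assumed for an undirected graph):

```python
import numpy as np

def augment(A, X, p_e=0.2, p_n=0.2, rng=None):
    """Edge removing and feature masking, as described above."""
    if rng is None:
        rng = np.random.default_rng()
    n, m = X.shape
    # topology: keep each edge with probability 1 - p_e (symmetric mask)
    R = rng.binomial(1, 1.0 - p_e, size=(n, n))
    R = np.triu(R, 1)
    R = R + R.T
    A_tilde = A * R            # entries stay 0 wherever A has no edge
    # attributes: mask the same feature dimensions for every node
    m_tilde = rng.binomial(1, 1.0 - p_n, size=m)
    X_tilde = X * m_tilde
    return A_tilde, X_tilde
```

Calling `augment` twice with independently drawn masks yields the two views \(\widetilde{\mathcal{G}}_1\) and \(\widetilde{\mathcal{G}}_2\).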
\paragraph{Contrastive objective.} Thereafter, we employ a contrastive objective function that learns the node embeddings in a self-supervised manner by pulling embeddings of the same node together and pushing those of different nodes apart. Specifically, for an anchor \(\bm{h}_i'\), we first define its pairwise contrastive objective, which takes the form of the NT-Xent loss \cite{Chen:2020wj}: \begin{equation} \ell(\bm{h}_i', \bm{h}_i'') = - \log \frac{\theta(\bm{h}_i', \bm{h}_i'')}{\theta(\bm{h}_i', \bm{h}_i'') + \sum_{j \neq i} \left[ \theta(\bm{h}_i', \bm{h}_j') + \theta(\bm{h}_i', \bm{h}_j'') \right] }. \label{eq:pairwise-objective} \end{equation} The critic function \(\theta(\cdot, \cdot)\) computes the cosine similarity between two embeddings scaled by a temperature scalar \(\tau \in \mathbb{R}^+\): \[ \theta(\bm{h}', \bm{h}'') = \exp\left( \frac{g(\bm{h}')^\top g(\bm{h}'')}{\tau \cdot \|g(\bm{h}')\| \|g(\bm{h}'')\|} \right), \] where \(g(\cdot)\) is a non-linear projection function, e.g., a multilayer perceptron. Since the two views are symmetric, the final contrastive objective is defined as the average over all nodes: \begin{equation} \mathcal{J}_\text{CL} = \frac{1}{2n} \sum_{v_i \in \mathcal{V}} \left[\ell(\bm{h}_i', \bm{h}_i'') + \ell(\bm{h}_i'', \bm{h}_i') \right]. \label{eq:contrastive-objective} \end{equation} \subsubsection{Incorporating semantic information from active learning.} In a pure unsupervised learning setting, we can see that for an anchor \(\bm{h}_i'\), the computation in Eq. (\ref{eq:pairwise-objective}) involves only one positive sample and \(2(n - 1)\) negative samples. However, in our graph AL scenario, we need to consider the semantic information of nodes selected by AL algorithms. Due to the presence of labels, we further encourage the GNN encoder to include positive samples belonging to the same class, resulting in a more aligned and compact embedding space. To generalize CL to AL, we modify the contrastive objective in Eq. 
(\ref{eq:pairwise-objective}) as \begin{equation} \small \ell'(\bm{h}_i', \bm{h}_i'') = - \log \frac{ \theta(\bm{h}_i', \bm{h}_i'') + \lambda \cdot \sum_{\bm{h}_p \in \mathcal{P}(i)} \theta(\bm{h}_i', \bm{h}_p)}{\theta(\bm{h}_i', \bm{h}_i'') + \lambda \cdot \sum_{\bm{h}_p \in \mathcal{P}(i)} \theta(\bm{h}_i', \bm{h}_p) + \sum_{j \neq i} \left[ \theta(\bm{h}_i', \bm{h}_j') + \theta(\bm{h}_i', \bm{h}_j'') \right] }. \label{eq:active-pairwise-objective} \end{equation} Here, besides \(\bm{h}_i''\), we further introduce a positive embedding set \(\mathcal{P}(i)\) consisting of embeddings whose labels are the same as that of node \(v_i\). In particular, to closely examine the contribution of these supervised positives, we further introduce a hyperparameter \(\lambda \in \mathbb{R}^+\) to balance the two terms. Similar to Eq. (\ref{eq:contrastive-objective}), we define our final objective as the summation over all nodes: \begin{equation} \mathcal{J} = \frac{1}{2n} \sum_{v_i \in \mathcal{V}} \frac{1}{|\mathcal{P}(i)| + 1} \left[\ell'(\bm{h}_i', \bm{h}_i'') + \ell'(\bm{h}_i'', \bm{h}_i') \right]. \label{eq:active-contrastive-objective} \end{equation} \subsection{The Minimax Active Selection Scheme} Having introduced the active-contrastive learning framework, we further describe our proposed minimax selection scheme in detail. Consider that the neighborhood aggregation scheme of GNNs could be regarded as Laplacian smoothing, which makes the representations of neighboring nodes similar \cite{Li:2018wc,Chen:2020vu}. Ideally, if nodes in an ego network share the same label, once we label the central node in that network, the remaining nodes will get the correct prediction due to the smoothed embeddings \cite{Chen:2020cn}. We thus argue that the active node selection could be formulated as selecting the central node from homophilous \(k\)-ego networks, where the homophily \mbox{score is} defined by the number of neighbors belonging to the same class as the \mbox{center node}. 
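A minimal sketch of the per-anchor loss in Eq. (\ref{eq:active-pairwise-objective}) is given below (hypothetical names; for brevity the projection \(g\) is folded into the embeddings, and \(\mathcal{P}(i)\) is passed as a list of positive embedding vectors):

```python
import numpy as np

def theta(u, v, tau=0.5):
    """Critic: exponentiated, temperature-scaled cosine similarity."""
    return np.exp(u @ v / (tau * np.linalg.norm(u) * np.linalg.norm(v)))

def pairwise_loss(i, H1, H2, positives, lam=1.0, tau=0.5):
    """Loss for anchor H1[i]; `positives` holds embeddings sharing v_i's label."""
    pos = theta(H1[i], H2[i], tau) + lam * sum(theta(H1[i], p, tau) for p in positives)
    neg = sum(theta(H1[i], H1[j], tau) + theta(H1[i], H2[j], tau)
              for j in range(H1.shape[0]) if j != i)
    return -np.log(pos / (pos + neg))
```

With an empty positive set, the numerator keeps a single term and the expression reduces to the unsupervised NT-Xent loss of Eq. (\ref{eq:pairwise-objective}); each labeled positive enlarges the numerator and thereby pulls same-class embeddings together.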
However, the labels of unselected nodes remain unavailable during training. We therefore turn to selecting the nodes close to their neighbors in the embedding space, since they are more likely to fall into the same community according to the smoothness assumption in semi-supervised learning \cite{Chapelle:2006vz}. Thereafter, we propose a minimax-based selection scheme to measure the neighborhood similarity and select the central nodes from the most homophilous subgraphs. Specifically, we first discover the furthest neighbor of each node in the embedding space. Among these nodes, we select the node with the shortest such distance, as it has the highest probability of constituting a homophilous subgraph. Formally, the node \(v^\star\) we select according to the minimax selection scheme is \begin{equation} v^\star = \argmin_{v_i \in \mathcal{V} \setminus \mathcal{L}} \max_{v_j \in \mathcal{N}(i)}{d(\bm{h}_i, \bm{h}_j)}, \label{eq:minimax} \end{equation} where \(d(\bm{h}_i, \bm{h}_j) = \| \bm{h}_i - \bm{h}_j \|_2^2\) denotes the squared Euclidean distance between two node embeddings \(\bm{h}_i\) and \(\bm{h}_j\), \(\mathcal{N}(i)\) denotes the \(k\)-hop neighbor set of node \(i\), and \(\mathcal{L}\) denotes the labeled set. Once we have obtained the label of \(v^\star\), we add it to the labeled set and update the positive embedding set \(\mathcal{P}(\star)\) accordingly. \subsection{The Overall Framework} Finally, we summarize the proposed method, as shown in Fig. \ref{fig:model} and Algorithm \ref{algo:algorithm}. At each iteration \(c\), we first generate two different augmented graph views. Then, we feed the two graph views \(\widetilde{G}_1\) and \(\widetilde{G}_2\) into the GNN to compute their node embeddings. After that, along with an enlarged positive set \(\mathcal{P}\) resulting from the labeled dataset \(\mathcal{L}^c\), we train the model using the unified objective function \(\mathcal{J}\). 
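The minimax rule in Eq. (\ref{eq:minimax}) can be sketched as follows (a minimal sketch with hypothetical names; `neighbors[i]` is assumed to hold the \(k\)-hop neighbor indices of node \(i\)):

```python
import numpy as np

def minimax_select(H, neighbors, labeled):
    """Pick the unlabeled node whose *furthest* k-hop neighbor
    (in squared Euclidean distance) is *closest*, per Eq. (minimax)."""
    best, best_score = None, np.inf
    for i in range(H.shape[0]):
        if i in labeled or not neighbors[i]:
            continue  # skip already-labeled or isolated nodes
        score = max(np.sum((H[i] - H[j]) ** 2) for j in neighbors[i])
        if score < best_score:
            best, best_score = i, score
    return best
```

The inner `max` is the worst-case neighborhood distance of a candidate; minimizing it over unlabeled nodes favors central nodes of tightly clustered, and hence likely homophilous, ego networks.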
After training, we calculate the distance between each node and its \(k\)-hop neighbors based on their embeddings and select one node \(v^\star\) via the proposed minimax scheme to query its label. Finally, we obtain an updated unlabeled set \(\mathcal{U}^{c+1} = \mathcal{U}^{c} \backslash {\{v^\star\}}\) and a labeled set \(\mathcal{L}^{c+1}=\mathcal{L}^{c} \cup {\{(v^\star, y^\star)\}}\). We repeat the above steps until the size of the labeled set reaches the labeling budget \(b\). \begin{algorithm}[h] \DontPrintSemicolon\SetNoFillComment \caption{The proposed method} \label{algo:algorithm} \SetKw{Continue}{continue} \KwData{Graph \(\mathcal{G}\), budget \(b\), GNN model \(f\)} Initialize the unlabeled set \(\mathcal{U}^0 = \mathcal{V}\) and the labeled set \(\mathcal{L}^0 = \emptyset\)\; \For{\(c = 0\) to \(b - 1\)}{ Construct two graph views \(\widetilde{G}_1\) and \(\widetilde{G}_2\), and obtain embeddings \(\bm{H}'\) and \(\bm{H}''\)\; Train the model \(f\) to minimize the learning objective \(\mathcal{J}\) as in Eq. (\ref{eq:active-contrastive-objective}) \; Calculate the distance between each node \(v_i\) and its neighbors \(v_j \in \mathcal{N}{(i)}\)\; Select node \(v^\star\) according to Eq. (\ref{eq:minimax}) and query its label \(y^\star\)\; Update \(\mathcal{P}(\star)\) with node embeddings having the same label as \(y^\star\) \; \(\mathcal{L}^{c+1} = \mathcal{L}^c\) $\cup$ \(\{(v^\star, y^\star)\}\)\; \(\mathcal{U}^{c+1} = \mathcal{U}^{c} \backslash {\{v^\star\}}\)\; } \Return the labeled set \(\mathcal{L}\)\; \end{algorithm} \section{Related Work} In this section, we briefly review related work on graph contrastive learning. Then, we review prior art on active learning for both Euclidean data and graphs. \subsection{Graph Contrastive Learning} Contrastive learning (CL) is a specific branch of self-supervised learning, which aims to obtain robust and discriminative representations by contrasting positive and negative instances. 
In the domain of computer vision, many representative CL methods, including DIM \cite{Bachman:2019wp,Hjelm:2019uk}, SimCLR \cite{Chen:2020wj}, and SimSiam \cite{He:2020tu}, achieve superior performance in visual representation learning. Generally, most CL works differ from each other in terms of the generation of augmented views (e.g., color jitter, random flip, cropping, resizing, rotation) and contrastive objectives. Recently, CL on graph-structured data has attracted considerable research interest. \citet{Velickovic:2019tu} first design a CL method for GNNs, named Deep Graph Infomax (DGI), where they introduce an objective function measuring the Mutual Information (MI) between global graph embeddings and local node embeddings. Similarly, InfoGraph \cite{Sun:2020vi} generalizes DGI to graph-level tasks. Thereafter, MVGRL \cite{Hassani:2020un} performs node diffusion and contrasts node embeddings on corrupted graphs. Follow-up work GraphCL \cite{You:2020ut} and GRACE \cite{Zhu:2020vf} propose a node-level contrastive objective to simplify previous work. Recent work GCA \cite{Zhu:2021wh} proposes stronger augmentation schemes on graphs; GCC \cite{Qiu:2020gq} first introduces a pre-training framework for contrastive graph representation learning. \subsection{Active Learning on Euclidean Data} Different active learning algorithms propose various strategies to select the most informative instances from a large pool of unlabeled data. Previous approaches can be roughly grouped into three categories \cite{Settles:2009vo}: uncertainty-based, performance-based, and representativeness-based methods. For the methods falling into the first category, \citet{Settles:2008tf} propose uncertainty sampling, which calculates an uncertainty score based on the entropy of the predicted label distribution. \citet{Bilgic:2010uo} introduce a vote mechanism to select the instance that receives the most disagreement votes from the models. 
Regarding performance-based algorithms, \citet{Guo:2007vp,Schein:2007wd} propose criteria directly related to the model performance, including prediction error and variance reduction. The last group of methods focuses on discovering the instance that is representative of the data distribution. \citet{Sener:2018we} regard the sampling process as a coreset problem, in which the representations of the last layer in deep neural networks are used for constructing the coreset. However, these methods cannot be directly applied to graph-structured data, since they are all designed for independent and identically distributed (i.i.d.) data and do not consider the rich structural information. \subsection{Active Graph Representation Learning} Recently, there has been growing interest in AL on graphs. AGE \cite{Cai:2017wm} calculates the informativeness score by combining three designed criteria. For uncertainty, it computes the entropy of the predicted label distribution. For representativeness, it measures the distance between a node and its cluster center and obtains the centrality via the PageRank algorithm \cite{Page:1997qy}. Finally, it linearly combines these criteria with time-sensitive parameters. ANRMAB \cite{Gao:2018wh} uses the same selection criteria as AGE and further introduces a multi-armed bandit algorithm to adaptively decide the weights of these three criteria. However, these methods ignore the essence of the propagation scheme, which may lead the algorithm to select nodes with too many inter-class neighbors. \citet{Hu:2020tl} propose to learn a transferable AL policy that decreases the cost of retraining. Besides, FeatProp \cite{Wu:2019wz}, a clustering-based method, calculates distances between every pair of nodes based on the fixed raw node features. Then, it applies the K-Medoids algorithm to cluster nodes using these distances. 
Our method also computes node distances, but we model the AL selection from the perspective of neighborhood propagation and explicitly utilize the local neighborhood information, which is fundamentally different from FeatProp.
\section*{Acknowledgments} This research was supported by Hong Kong RGC GRF grant HKBU 12200418. We thank the anonymous reviewers for their constructive comments and suggestions. We would also like to thank NVIDIA AI Technology Centre (NVAITC) for providing the GPU clusters for some experiments. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Conclusion} \label{sec:conclusion} In this paper, we proposed an efficient yet accurate neural network, FADNet++, for end-to-end disparity estimation that balances time efficiency and estimation accuracy on the stereo matching problem. The proposed FADNet++ exploits point-wise correlation layers, residual blocks, and a multi-scale residual learning strategy to make the model accurate in many scenarios while preserving a fast inference time. Moreover, to adapt to target computing devices of different capabilities, we design a simple but effective configurable channel scaling ratio that can generate various FADNet++ variants of different inference performance. Our training solution can be applied to all the variants and boosts their accuracy. We conducted extensive experiments to compare our FADNet++ with existing state-of-the-art 2D and 3D based methods in terms of accuracy and speed. Experimental results showed that FADNet++ achieves comparable accuracy while running much faster than the 3D based models. Compared to the existing mobile solution, FADNet++ achieves a competitive inference speed of 15 FPS while being nearly three times more accurate. We have two future directions following our discovery in this paper. First, we would like to improve the disparity estimation accuracy on low-end devices. To approach the accuracy of FADNet++ produced by the server GPUs, it is necessary to explore model compression techniques, including pruning, quantization, and so on. 
Second, we would also like to apply AutoML \cite{he2019automl} to search for a well-performing network structure for disparity estimation. \section{Experimental Studies} \label{sec:exp} In this section, we present experimental studies to show the effectiveness of our FADNet++. We first demonstrate the accuracy of our proposed networks on different datasets compared to existing state-of-the-art methods. Then we present the inference performance on some popular inference GPUs (including server GPUs and mobile GPUs) to show that our networks are able to support real-time disparity estimation (i.e., not less than 30 FPS). \subsection{Experimental Setup} \textbf{Testbed}. For model training, we use four Nvidia Tesla V100-PCIe GPUs to train all compared models. For model inference, to cover various types of inference GPUs, we choose a desktop-level Nvidia RTX2070 GPU and two server-level Nvidia GPUs (i.e., Tesla P40 and Tesla T4) to measure the inference speed. We also choose two mobile GPUs, including Jetson TX2 and Jetson AGX, to evaluate the inference speed. The details of the training and inference servers are shown in Table~\ref{table:testbed-servers}, and the inference mobile devices are shown in Table~\ref{table:testbed-devices}. In terms of software related to the time performance, the server is installed with GPU Driver-440.36, CUDA-10.2, and PyTorch-1.4.0 with cuDNN-7.6. 
\begin{table}[!ht] \centering \caption{The inference server configuration.} \label{table:testbed-servers} \begin{tabular}{|c|c|c|c|c|} \hline & \multirow{2}{*}{Training Server} & \multicolumn{3}{c|}{Inference Servers} \\\cline{3-5} & & IS1 & IS2 & IS3 \\\hline\hline GPU & Tesla V100$\times$ 4 & RTX2070 & Tesla P40 & Tesla T4 \\\hline Memory & 512GB & 32GB & 256GB & 64GB \\\hline OS & CentOS7.2 & \multicolumn{3}{c|}{Ubuntu16.04}\\\hline \end{tabular} \end{table} \begin{table}[!ht] \centering \caption{The mobile platform configuration.} \label{table:testbed-devices} \begin{tabular}{|c|c|c|} \hline & Jetson TX2 & Jetson AGX \\ \hline\hline CPU & \makecell{2-Core NVIDIA Denver \\ +4-Core ARM Cortex-A57} & 8-Core ARM v8.2 \\ \hline GPU & 256-Core Pascal & 512-Core Volta \\ \hline Memory & 8GB & 32GB \\\hline OS & \multicolumn{2}{c|}{Ubuntu 18.04.5, JetPack 4.4} \\\hline \end{tabular} \end{table} \textbf{Datasets}. To cover a range of scenarios in disparity estimation, we use many popular public datasets, including Middlebury 2014 (M2014) \cite{scharstein2014high}, KITTI 2015 (K2015) \cite{kitti2015}, ETH3D 2017 (ETH3D) \cite{eth3d2017}, and SceneFlow (SF) \cite{mayer2016large}, to evaluate the performance of different algorithms. The details of the datasets are shown in Table~\ref{table:datasets}. 
\begin{table}[!ht] \centering \caption{The evaluated datasets.} \label{table:datasets} \addtolength{\tabcolsep}{-2.5pt} \begin{tabular}{|c|c|c|c|} \hline Dataset & \# of Training Samples & \# of Test Samples & Resolution \\\hline\hline M2014 \cite{scharstein2014high} & 15 & 15 & 2960$\times$1942 \\\hline K2015 \cite{kitti2015} & 200 & 200 & 1242$\times$375 \\\hline ETH3D \cite{eth3d2017} & 27 & 20 & 960$\times$480 \\\hline SF \cite{mayer2016large} & 35454 & 4370 & 960$\times$540 \\\hline \end{tabular} \end{table} The disparity distributions of different datasets are quite different, which is an important factor guiding the network design, especially the disparity search range in the point-wise correlation layer discussed in Section \ref{subsec:pwc}. We compute the disparity distributions from the ground truth of the above datasets, as shown in Fig.~\ref{fig:distribution}. \begin{figure*}[htbp] \centering \subfloat[SceneFlow] { \includegraphics[width=0.24\linewidth]{figures/FlyingThings3D_release_TRAIN_100.list.pdf}} \subfloat[KITTI2015] { \includegraphics[width=0.24\linewidth]{figures/kitti2015_train.list.pdf}} \subfloat[Middlebury] { \includegraphics[width=0.24\linewidth]{figures/MB2014_TRAIN.list.pdf}} \subfloat[ETH3D] { \includegraphics[width=0.24\linewidth]{figures/eth3d_train.list.pdf}} \caption{Disparity distribution in different datasets. Note that zero disparities are excluded.} \label{fig:distribution} \end{figure*} \textbf{Baselines}. We choose existing state-of-the-art DNNs in estimating disparity from stereo images. In terms of ED-Conv2D, we choose DispNetC \cite{mayer2016large}, CRL \cite{crl2017}, DN-CSS \cite{flownet3}, AnyNet \cite{anynet2019}, and FADNet~\cite{wang2020fadnet}. Regarding CVM-Conv3D, we use PSMNet~\cite{psmnet2018}, GANet \cite{ganet2019}, GWCNet \cite{gwcnet2019}, AANet \cite{xu2020aanet}, and LEAStereo \cite{cheng2020hierarchical}. 
From the model accuracy's perspective, GANet and LEAStereo are the main top-ranked methods, while from the inference performance's perspective, AnyNet and FADNet are very efficient. By comparing with these baselines, we show how our newly proposed framework balances model accuracy and inference speed. \textbf{Implementation Details.} We first pre-train FADNet++ on the SceneFlow training samples for 90 epochs. Following the finetuning strategy proposed in \cite{cfnet2021}, we then jointly finetune our pre-trained FADNet++ on the combination of the training samples in M2014, K2015 and ETH3D for another 2400 epochs. \subsection{Model Accuracy} In this subsection, we train the chosen models on the selected datasets and evaluate their model accuracy in terms of the endpoint error (EPE). We follow the same training scheme \cite{cfnet2021} that first trains a base model on the SceneFlow dataset and then fine-tunes the model on the other datasets. \begin{table}[!ht] \centering \caption{Model accuracy on the SceneFlow dataset. Bold indicates the best. Underline indicates the second best. 
The runtime is the inference time measured on an Nvidia Tesla V100.} \label{table:sf_results_epe} \addtolength{\tabcolsep}{-3.5pt} \begin{tabular}{|c|c|c|c|c|} \hline {Type} & {Method} & \makecell{GPU Memory \\ Footprint [GB]} & {EPE [px]} & {Runtime [s]} \\ \hline\hline \multirow{4}{*}{ED-Conv2D} & DispNetC \cite{mayer2016large} & 1.9 & 1.68 & 0.015 \\ & CRL \cite{crl2017} & 2.2 & 1.32 & 0.026 \\ & AnyNet \cite{anynet2019} & \textbf{1.31} & 3.39 & \textbf{0.013} \\ & FADNet \cite{wang2020fadnet} & 2.6 & 0.83 & 0.048 \\ \hline \multirow{5}{*}{CVM-Conv3D} & PSMNet\cite{psmnet2018} & 5.6 & 1.09 & 0.619 \\ & GANet\cite{ganet2019} & 7.5 & 0.78 & 2.292 \\ & GWCNet\cite{gwcnet2019} & 5.7 & \underline{0.77} & 0.260 \\ & AANet\cite{xu2020aanet} & 1.91 & 0.87 & 0.086 \\ & LEAStereo\cite{cheng2020hierarchical} & 25.3 & 0.78 & 0.478 \\ \hline \multirowcell{4}{Configurable\\ ED-Conv2D} & FADNet++ & 2.3 & \textbf{0.76} & 0.033 \\ & FADNet-M & \underline{1.7} & 0.91 & 0.019 \\ & FADNet-S & 1.8 & 1.19 & 0.015 \\ & FADNet-T & 1.9 & 1.83 & \underline{0.014} \\ \hline \end{tabular} \end{table} \textbf{SceneFlow.} The accuracy comparison of different models is shown in Table~\ref{table:sf_results_epe}. In terms of EPE on the SceneFlow dataset, we can see that our FADNet++ outperforms all the other models, including both ED-Conv2D and CVM-Conv3D, which shows the capability of our model to capture the disparity information of stereo images. Compared to ED-Conv2D methods, our FADNet++ significantly improves the model accuracy with comparable inference time. For example, in ED-Conv2D, the most accurate model is FADNet with an EPE of 0.83, whose inference time is 0.048 seconds. Our FADNet++ outperforms FADNet in both EPE (with around 9\% improvement) and runtime (around 45\% faster). In terms of the runtime of ED-Conv2D, AnyNet is very efficient with only 0.013 seconds, but its EPE is very high, which is far from meeting the requirements of real-world production. 
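As an aside, the EPE metric reported in these tables can be sketched as follows (a minimal sketch; the validity-mask convention is an assumption, since datasets differ in how they mark invalid disparities):

```python
import numpy as np

def end_point_error(pred, gt, valid=None):
    """Mean absolute disparity error (EPE, in pixels) over valid pixels."""
    err = np.abs(pred - gt)
    if valid is None:
        # assumed convention: ignore non-finite or non-positive ground truth
        valid = np.isfinite(gt) & (gt > 0)
    return float(err[valid].mean())
```

In practice the metric is averaged over all test images of a dataset.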
The configurable feature of FADNet++ enables us to configure models of different sizes to balance EPE and runtime. For example, FADNet-T is as efficient as AnyNet, but it achieves around 46\% lower EPE than AnyNet (1.83 vs. 3.39). With our larger FADNet-M model, the runtime is only 0.006 seconds longer than AnyNet's, but our method achieves 3.7 times lower EPE. Compared to CVM-Conv3D methods, our FADNet++ achieves better EPE and inference time. Existing GANet, GWCNet, and LEAStereo obtain about 0.77-0.78 EPE on SceneFlow with more than 0.26 seconds of inference time, while our FADNet++ achieves 0.76 EPE with a magnitude smaller inference time. Even AANet, the most efficient CVM-Conv3D model, runs at 0.086 seconds, which is more than 2.5 times slower than FADNet++, and its EPE is still larger than ours. We also analyze the GPU memory footprint needed to support the runtime execution of each network. The memory space is typically used to hold the model parameters, the optimizer state and the intermediate output tensors \cite{rasley2020deepspeed}. The memory footprint is managed by the deep learning toolkit, such as PyTorch in our implementation, and related to not only the network characteristics listed above but also the chosen network forwarding/back-propagation algorithms and the memory caching scheme. Notice that the CVM-Conv3D methods usually suffer from large memory requirements and fail to be deployed on low-end computing devices. However, our FADNet++ and its variants consume only about 2 GB of memory space, which makes them feasible on many platforms. We also observe that FADNet-S and FADNet-T consume a bit more memory space than FADNet-M. The reason is that the cuDNN library may choose different convolution algorithms, which consume different sizes of memory, for different layer channel settings to achieve the best model inference efficiency. 
The visualization of some samples is shown in Fig.~\ref{fig:vis_results_on_sf}, which compares our FADNet++ with two ED-Conv2D networks, DispNetC and CRL, and three CVM-Conv3D networks, AANet, GANet and LEAStereo. It is observed that DispNetC and CRL fail to produce accurate disparities for the object boundaries. Besides, the hole of the knife cannot be correctly recognized by those two ED-Conv2D methods. On the contrary, our FADNet++ can work well on the boundaries and the details of the knife. The predicted disparity map of FADNet++ is close to those of AANet, GANet and LEAStereo, while FADNet++ runs much faster than those CVM-Conv3D methods. \begin{figure*}[!ht] \captionsetup[subfigure]{labelformat=empty} \centering \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/A_0012_0010_left.png}\label{fig:vt_bf_left_rgb} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/dispnetc_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/crl_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/aanet_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \vspace{-2.0 em} \qquad \subfloat[(a) Left/Right image] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/A_0012_0010_right.png}\label{fig:vt_bf_left_rgb} } \subfloat[(b) DispNetC (0.02 s) \cite{mayer2016large}] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/dispnetc_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[(c) CRL (0.03 s) \cite{crl2017}] { 
\adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/crl_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[(d) AANet (0.09 s) \cite{xu2020aanet}] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/aanet_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \qquad \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/gt_A_0012_disp_0010.png}\label{fig:vt_bf_left_rgb} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/ganet_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/leastereo_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/fadnet_A_0012_disp_0010.png}\label{fig:vt_bf_disp_gt} } \vspace{-2.0 em} \qquad \subfloat[(e) Groundtruth] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/blank.png}\label{fig:vt_bf_left_rgb} } \subfloat[(f) GANet (2.29 s) \cite{ganet2019}] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/ganet_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[(g) LEAStereo (0.48 s) \cite{cheng2020hierarchical}] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/leastereo_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \subfloat[(h) Our FADNet++ (0.03 s)] { \adjincludegraphics[width=0.23\linewidth,trim={{.25\width} {.18\width} {.35\width} {.12\width}},clip]{figures/sf/fadnet_A_0012_err_0010.png}\label{fig:vt_bf_disp_gt} } \caption{An 
illustration of the disparities produced by different methods. The sample is from SceneFlow and has a resolution of 960$\times$540. The input left and right images and the groundtruth disparity are shown in (a) and (e), respectively. (b)-(d) and (f)-(h) show the predicted disparity maps of different methods as well as their error maps. Colder colors in the error maps indicate lower errors. The parentheses in the sub-captions give the runtime of each method on an Nvidia Tesla V100.} \label{fig:vis_results_on_sf} \end{figure*} \textbf{Robotics Vision Challenge.} To demonstrate the model robustness in different scenarios, we adopt a strategy similar to that of \cite{cfnet2021}, validating our model on three realistic stereo datasets using the Robotics Vision Challenge (RVC) 2020\footnote{\url{http://www.robustvision.net/index.php}}. In RVC, each model is trained on the combination of the M2014, K2015 and ETH3D datasets, and is then evaluated on M2014, K2015 and ETH3D separately. We choose the top-ranked representative models (i.e., from top-1 to top-6: CFNet~\cite{cfnet2021}, NLCANet\_V2~\cite{nlcanet2020}, HSMNet~\cite{hsmnet2019}, CVANet\footnote{CVANet has no published paper or code, so we cannot evaluate its runtime.}, AANet~\cite{xu2020aanet}, and GANet~\cite{ganet2019}) on the RVC leaderboard\footnote{\url{http://www.robustvision.net/leaderboard.php?benchmark=stereo}} to compare model accuracy and inference speed. \begin{table*}[!ht] \centering \caption{Joint generalization comparison on RVC with the ETH3D, Middlebury, and KITTI2015 datasets. Rank indicates the ranking of model accuracy on the RVC leaderboard. Bold indicates the best. Underline indicates the second best.
The runtime is measured with the input resolution (1242$\times$375) of the KITTI2015 dataset; other resolutions exhibit similar patterns.} \label{tab:rvc_results} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{KITTI2015} & \multicolumn{3}{c|}{Middlebury2014} & \multicolumn{3}{c|}{ETH3D2017} & Runtime & \multirow{2}{*}{Rank} \\ \cline{2-10} & D1\_bg & D1\_fg & D1\_all & bad 4.0 & rms & avg error & bad 1.0 & bad 2.0 & avg error & [s] & \\ \hline\hline CFNet \cite{cfnet2021} & \underline{1.65} & \underline{3.53} & \underline{1.96} & 11.3 & \underline{18.2} & \underline{5.07} & \textbf{3.7} & \textbf{0.97} & \textbf{0.26} & 0.234 & 1\\ NLCANet\_V2 \cite{nlcanet2020} & \textbf{1.51} & 3.97 & \textbf{1.92} & \underline{10.3} & 21.9 & 5.60 & \underline{4.11} & \underline{1.2} & 0.29 & 0.44 & 2\\ HSMNet \cite{hsmnet2019} & 2.74 & 8.73 & 3.74 & \textbf{9.68} & \textbf{13.4} & \textbf{3.44} & 4.40 & 1.51 & \underline{0.28} & 0.15 & 3\\ CVANet & 1.74 & 4.98 & 2.28 & 23.1 & 25.9 & 8.64 & 4.68 & 1.37 & 0.34 & - & 4 \\ AANet \cite{xu2020aanet} & 2.23 & 4.89 & 2.67 & 25.8 & 32.8 & 12.8 & 5.41 & 1.95 & 0.33 & \underline{0.062} & 5\\ GANet \cite{ganet2019} & 1.88 & 4.58 & 2.33 & 16.3 & 42.0 & 15.8 & 6.97 & 1.25 & 0.45 & 1.71 & 6 \\ \hline FADNet++ & 1.99 & \textbf{3.18} & 2.19 & 31.4 & 27.7 & 11.9 & 4.36 & 1.30 & 0.34 & \textbf{0.029} & - \\ \hline \end{tabular} \end{table*} The results are shown in Table~\ref{tab:rvc_results}. The runtime of each model is measured on the same platform using its open-sourced code to guarantee a fair comparison; the runtime entry for CVANet is empty as it has no publicly available code or paper. Among the top-6 models, our model ranks between third and fifth on the three datasets. Specifically, on KITTI2015, our model is slightly worse than CFNet and NLCANet\_V2, and it outperforms the other four models in terms of the D1\_all metric.
In the average error on the M2014 dataset, our FADNet++ still outperforms AANet and GANet. Regarding the ETH3D dataset, our model outperforms GANet and is comparable with CVANet and AANet. In summary, the top-3 models have good model accuracy, but their inference time is very long, while our FADNet++ achieves an order of magnitude faster speed. Compared with the top-4 to top-6 models, FADNet++ achieves comparable model accuracy while running around $3\times$ faster than AANet and around $59\times$ faster than GANet. Note that among the compared methods, only our FADNet++ provides real-time inference speed (i.e., $\geq 30$ FPS) on a Tesla V100 GPU. \begin{figure*}[!ht] \centering \subfloat[Left image] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/test_08_left.png}\label{fig:vt_bf_left_rgb} } \subfloat[GANet \cite{ganet2019}] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/ganet_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \subfloat[AANet \cite{xu2020aanet}] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/aanet_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \subfloat[HSMNet \cite{hsmnet2019}] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/hsmnet_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \qquad \subfloat[Right image] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/test_08_right.png}\label{fig:vt_bf_left_rgb} } \subfloat[NLCANet \cite{nlcanet2020}] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/nlcanet_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \subfloat[CFNet \cite{cfnet2021}] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/cfnet_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \subfloat[FADNet++ (Ours)] { \includegraphics[width=0.23\linewidth]{figures/kitti2015/fadnet++_test_08_disp.png}\label{fig:vt_bf_disp_gt} } \caption{Results achieved on the KITTI 2015 dataset.
} \label{fig:results_on_k2015} \end{figure*} \begin{figure*}[ht] \centering \subfloat[Left image] { \includegraphics[width=0.23\linewidth]{figures/md/crusadep_left.png}\label{fig:vt_bf_left_rgb} } \subfloat[GANet \cite{ganet2019}] { \includegraphics[width=0.23\linewidth]{figures/md/ganet_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \subfloat[AANet \cite{xu2020aanet}] { \includegraphics[width=0.23\linewidth]{figures/md/aanet_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \subfloat[HSMNet \cite{hsmnet2019}] { \includegraphics[width=0.23\linewidth]{figures/md/hsmnet_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \qquad \subfloat[Right image] { \includegraphics[width=0.23\linewidth]{figures/md/crusadep_right.png}\label{fig:vt_bf_left_rgb} } \subfloat[NLCANet \cite{nlcanet2020}] { \includegraphics[width=0.23\linewidth]{figures/md/nlcanet_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \subfloat[CFNet \cite{cfnet2021}] { \includegraphics[width=0.23\linewidth]{figures/md/cfnet_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \subfloat[FADNet++ (Ours)] { \includegraphics[width=0.23\linewidth]{figures/md/fadnet++_crusadep_disp.jpeg}\label{fig:vt_bf_disp_gt} } \caption{Results achieved on the Middlebury 2014 test set. The image pair shown above is taken from the CrusadeP data.
Our method generates smooth results close to HSMNet and CFNet and performs better than GANet and AANet, especially for the white flat desk.} \label{fig:results_on_md} \end{figure*} \begin{figure}[ht] \centering \subfloat[Left image] { \includegraphics[width=0.47\linewidth]{figures/eth3d/storage_room_2_2l_left.png}\label{fig:vt_bf_left_rgb} } \subfloat[Right image] { \includegraphics[width=0.47\linewidth]{figures/eth3d/storage_room_2_2l_right.png}\label{fig:vt_bf_left_rgb} } \qquad \subfloat[CBMV \cite{cbmv2018}] { \includegraphics[width=0.47\linewidth]{figures/eth3d/cbmv_storage_room_2_2l_disp.png}\label{fig:vt_bf_left_rgb} } \subfloat[SGM-Forest \cite{sgmforest2018}] { \includegraphics[width=0.47\linewidth]{figures/eth3d/sgmforest_storage_room_2_2l_disp.png}\label{fig:vt_bf_left_rgb} } \qquad \subfloat[CFNet \cite{cfnet2021}] { \includegraphics[width=0.47\linewidth]{figures/eth3d/cfnet_storage_room_2_2l_disp.png}\label{fig:vt_bf_left_rgb} } \subfloat[FADNet++ (ours)] { \includegraphics[width=0.47\linewidth]{figures/eth3d/fadnet++_storage_room_2_2l_disp.png}\label{fig:vt_bf_left_rgb} } \caption{Results achieved on the ETH3D 2017 test set. The storage\_room\_2\_2l image pair is used for testing. The disparity map generated by FADNet++ is close to that of the top-1 CFNet in RVC2020, and much smoother than those of two traditional SOTA stereo matching methods, especially on the ping-pong table.} \label{fig:results_on_eth3d} \end{figure} Qualitative results on the K2015, M2014, and ETH3D datasets are shown in Fig.~\ref{fig:results_on_k2015}, Fig.~\ref{fig:results_on_md}, and Fig.~\ref{fig:results_on_eth3d}, respectively. For K2015, compared to GANet and AANet, our FADNet++ generates disparity maps with richer details (see the left white boxes in Fig. \ref{fig:results_on_k2015}) and smoother results (see the right white boxes in Fig. \ref{fig:results_on_k2015}). For M2014, from the white desk in Fig.
\ref{fig:results_on_md}, it can also be clearly observed that our method produces much better and smoother results than GANet and AANet. For ETH3D, it is clear that our FADNet++ performs well on textureless regions (such as the ping-pong table). Its disparity is close to the top-1 CFNet and much smoother than those achieved by other traditional SOTA methods. \subsection{Inference Efficiency} In the above subsection, we have shown that our model achieves comparable model accuracy while providing very efficient inference speed on the Tesla V100 GPU. In this subsection, we provide more experimental results on inference GPUs and mobile GPUs to show how our configurable model achieves real-time inference performance on different platforms with good model accuracy. \begin{table}[!ht] \centering \caption{Quantitative results on the SceneFlow dataset among different inference servers. Bold indicates the best. Underline indicates the second best. } \label{tab:sf_results_server_gpus} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{EPE [px]} & \multicolumn{3}{c|}{Runtime [s]} \\ \cline{3-5} & & RTX2070 & P40 & T4 \\ \hline\hline DispNetC\cite{mayer2016large} & 1.68 & 0.022 & 0.025 & 0.04 \\ CRL\cite{crl2017} & 1.32 & 0.042 & 0.047 & 0.074 \\ AnyNet\cite{anynet2019} & 3.39 & \textbf{0.012} & \textbf{0.017} & \textbf{0.014} \\ FADNet\cite{wang2020fadnet} & 0.83 & 0.085 & 0.096 & 0.146 \\\hline PSMNet\cite{psmnet2018} & 1.09 & 0.571 & 0.492 & 0.792 \\ GANet\cite{ganet2019} & 0.78 & 5.2 & 5.5 & 7.344 \\ GWCNet\cite{gwcnet2019} & \underline{0.77} & 0.45 & 0.421 & 0.646 \\ AANet\cite{xu2020aanet} & 0.87 & 0.124 & 0.183 & 0.23 \\ LEAStereo\cite{cheng2020hierarchical} & 0.78 & 0.851 & 0.71 & 0.978 \\\hline FADNet++ & \textbf{0.76} & 0.053 & 0.06 & 0.091 \\ FADNet-M & 0.91 & 0.025 & 0.031 & 0.037 \\ FADNet-S & 1.19 & 0.017 & 0.023 & 0.023 \\ FADNet-T & 1.83 & \underline{0.013} & \underline{0.02} & \underline{0.015} \\ \hline \end{tabular} \end{table} 
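The latency and FPS figures used throughout this subsection (e.g., the runtime columns of Table~\ref{tab:sf_results_server_gpus}) can be reproduced with a simple timing harness like the following stdlib-only sketch. It assumes a synchronous callable; on a GPU, a device synchronisation (e.g., \texttt{torch.cuda.synchronize()} in PyTorch) would additionally be needed before each timestamp:

```python
import time

def benchmark(fn, *args, warmup=3, runs=20):
    """Return (mean latency in seconds, frames per second) of a synchronous callable."""
    for _ in range(warmup):      # warm-up, e.g. to let cuDNN pick algorithms
        fn(*args)
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    latency = (time.perf_counter() - start) / runs
    return latency, 1.0 / latency

def is_real_time(latency_s, target_fps=30.0):
    """True if the measured per-frame latency meets the FPS target."""
    return latency_s <= 1.0 / target_fps

latency, fps = benchmark(lambda x: x * x, 1234)
print(f"{latency * 1e6:.2f} us/frame -> {fps:.0f} FPS")
```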
\textbf{On Inference Server GPUs.} The inference performance on the inference servers is shown in Table~\ref{tab:sf_results_server_gpus}. In terms of runtime, AnyNet achieves the fastest speed among the evaluated methods, but its EPE on the SceneFlow dataset is extremely high (3.39). Our FADNet-T achieves an inference speed very close to AnyNet's while achieving around $83\%$ improvement in EPE. Aiming at real-time inference (i.e., $\geq$30 FPS, corresponding to an inference time of around 0.033 s), our FADNet-M provides real-time inference speed on all three inference server GPUs with an EPE of 0.91. The other existing model that achieves real-time inference speed on all inference servers, DispNetC, has an EPE of 1.68, which is around $87\%$ higher than ours. Although the CVM-Conv3D based models achieve very good model accuracy, they run very slowly on these inference GPUs, so they are far from providing real-time disparity estimation in production. In summary, our framework can be configured as a relatively small model (i.e., FADNet-M) compared to FADNet++ and provides real-time inference speed with good model accuracy. \begin{table}[!ht] \centering \caption{Quantitative results on the SceneFlow dataset among different mobile platforms. Bold indicates the best. Underline indicates the second best.
} \label{tab:sf_results_mobile_gpus} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\makecell{GPU Memory \\ Footprint [GB]}} & \multirow{2}{*}{EPE [px]} & \multicolumn{2}{c|}{Runtime [s]} \\ \cline{4-5} & & & TX2 & AGX \\ \hline\hline DispNetC\cite{mayer2016large} & 3.9 & 1.68 & 0.309 & 0.108 \\ StereoNet\cite{stereonet2018} & 9.5 & 1.10 & 1.148 & 0.282 \\ AnyNet\cite{anynet2019} & \textbf{3.1} & 3.39 & \underline{0.125} & \textbf{0.041} \\ AANet\cite{xu2020aanet} & 12.6 & 0.87 & 1.83 & 0.585 \\ FADNet\cite{wang2020fadnet} & 4.9 & \underline{0.83} & 1.176 & 0.413 \\ \hline FADNet++ & 4.3 & \textbf{0.76} & 0.735 & 0.258 \\ FADNet-M & \underline{3.7} & 0.91 & 0.335 & 0.113\\ FADNet-S & 3.8 & 1.19 & 0.192 & 0.068 \\ FADNet-T & 3.9 & 1.83 & \textbf{0.111} & \underline{0.043} \\ \hline \end{tabular} \end{table} \textbf{On Mobile GPUs.} To demonstrate the feasibility of applying our model on mobile devices, we choose two mobile GPUs (Nvidia TX2 and AGX) to compare the performance. Due to the memory limitation, none of the CVM-Conv3D methods can run on such mobile devices. Therefore, we compare the inference speed with ED-Conv2D methods and also include the occupied GPU memory footprints. The results are shown in Table~\ref{tab:sf_results_mobile_gpus}. Again, AnyNet still has a very fast inference speed even on mobile GPUs, but its EPE is rather high. Our configured model FADNet-T achieves inference speeds very close to AnyNet's while having much better model accuracy. Comparing our configured FADNet-S with StereoNet, both of which have similar model accuracy (EPE around 1.1-1.2), we can see that FADNet-S runs $5.9\times$ and $4\times$ faster than StereoNet on the TX2 and AGX GPUs, respectively. In summary, our configurable framework enables us to set different sizes of models to adapt to devices of different computing power with reasonable model accuracy. We also profile the device memory usage of different models.
Notice that no CVM-Conv3D models are included since they fail to run on our tested mobile platforms due to the memory limitation. Compared to the existing real-time networks like DispNetC and AnyNet, our FADNet++ and FADNet-M achieve much lower EPEs with similar memory usage. Besides, since the cuDNN library in PyTorch may use different convolution algorithms for different layer channel numbers to achieve the best inference speed, the smaller FADNet-S and FADNet-T can even consume a bit more memory than FADNet-M. In addition, the GPU memory usage of the same network can also differ between two computing platforms, such as 2.3 GB on the V100 but 4.3 GB on the AGX for FADNet++. On the one hand, the memory space on the Jetson TX2 and AGX is shared by the CPU and GPU, so the memory management strategy differs from that of the dedicated GPU memory on the V100. On the other hand, the cuDNN library may also have different implementations for X86-based and ARM-based systems. We summarize the configured FADNet++ models on all evaluated devices in Table~\ref{tab:all_results_fadnet}, which shows how our configurable design balances model accuracy and inference speed on different hardware. \begin{table}[!ht] \centering \caption{Configurable speed vs. model accuracy (EPE on the SceneFlow dataset) on different GPUs.
} \addtolength{\tabcolsep}{-1.5pt} \label{tab:all_results_fadnet} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & EPE & \multicolumn{6}{c|}{Runtime [s]} \\\cline{3-8} & [px] & RTX2070 & P40 & T4 & V100 & TX2 & AGX \\ \hline\hline FADNet++ & 0.76 & 0.053 & 0.06 & 0.091 & 0.032 & 0.735 & 0.258 \\ FADNet-M & 0.91 & 0.025 & 0.031 & 0.037 & 0.016 & 0.335 & 0.113\\ FADNet-S & 1.19 & 0.017 & 0.023 & 0.023 & 0.015 & 0.192 & 0.068 \\ FADNet-T & 1.83 & 0.013 & 0.02 & 0.015 & 0.013 & 0.111 & 0.043 \\ \hline \end{tabular} \end{table} \section{Introduction}\label{sec:introduction} Disparity estimation (also referred to as stereo matching) is a classical and important problem in robotics and autonomous driving for 3D scene reconstruction \cite{linestereo2015,orbslam22017,smart_iotj2020}.
While traditional methods based on hand-crafted feature extraction and matching cost aggregation, such as Semi-Global Matching (SGM) \cite{hirschmuller2007stereo}, tend to fail on textureless and repetitive regions in the images, recent deep neural network (DNN) techniques surpass them with decent generalization and robustness to those challenging patches, and achieve state-of-the-art performance on many public datasets \cite{zagoruyko2015learning}\cite{zbontar2016stereo}\cite{flownet}\cite{mayer2016large}\cite{psmnet2018}\cite{ganet2019}. However, how to design an efficient DNN structure for disparity estimation with limited computational cost in Internet-of-Things (IoT) scenarios remains a concern. The DNN-based methods for disparity estimation are end-to-end frameworks which take stereo images (left and right) as input and predict the disparity directly. The DNN architecture is essential to accurate estimation, and existing designs can be categorized into two classes: the encoder-decoder network with 2D convolution (ED-Conv2D) and the cost volume matching with 3D convolution (CVM-Conv3D). Besides, recent studies \cite{Saikia_2019_ICCV, he2019automl} begin to reveal the potential of automated machine learning (AutoML) for neural architecture search (NAS) on stereo matching. In practice, to measure whether a DNN model is applicable in real-world applications, we not only need to evaluate its accuracy on unseen stereo images (whether it can estimate the disparity correctly), but also its time efficiency (whether it can generate the results in real time).
However, existing methods either focus on model accuracy (e.g., \cite{psmnet2018}\cite{ganet2019}) or on time efficiency (e.g., \cite{fast2018}\cite{energy_depth_iotj2021}\cite{cyclic2021}), which can make the trained models inapplicable to real-world applications that require real-time inference with good model accuracy on GPU servers or mobile devices. \begin{figure}[t] \captionsetup[subfigure]{farskip=1pt} \centering \subfloat[Left image] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/A_0013_0010_left.png}\label{fig:vt_bf_left_rgb} } \subfloat[Right image] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/A_0013_0010_right.png}\label{fig:vt_bf_disp_gt} } \newline \subfloat[CRL (0.03 s)\cite{crl2017}] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/crl_A_0013_disp_0010.png}\label{fig:vt_bf_disp_irs} } \subfloat[GANet (2.29 s) \cite{ganet2019}] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/ganet_A_0013_disp_0010.png}\label{fig:vt_bf_disp_sf} } \newline \subfloat[Our FADNet++ (0.03 s)] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/fadnet_A_0013_disp_0010.png}\label{fig:vt_bf_disp_irs} } \subfloat[Ground truth] { \adjincludegraphics[width=0.48\linewidth,trim={{.45\width} {.2\width} 0 0},clip]{figures/sf/gt_A_0013_disp_0010.png}\label{fig:vt_bf_disp_sf} } \caption{Performance illustrations of a challenging sample. (a) the left input image. (b) the right input image. (c) result of CRL \cite{crl2017}, which runs in only 0.03 s but produces wrong disparity values on the shell. (d) result of GANet \cite{ganet2019}, which produces accurate disparity values close to the ground truth but consumes 7.5 GB of GPU memory and takes 2.29 s for one stereo image pair.
(e) result of our FADNet++, which only consumes 2.3 GB of GPU memory and runs in 0.03 s to produce similarly accurate values. All the data are collected on the Nvidia Tesla V100 GPU.} \label{fig:results_preview} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.98\linewidth]{figures/net_device_tmc.pdf} \caption{Runtime vs. EPE of different methods. On a server GPU (Tesla V100), only our FADNet achieves 30 FPS while its EPE is as low as that of the CVM-Conv3D models. On a mobile GPU (Jetson AGX), our FADNet not only achieves 15 FPS but also produces a much lower EPE than AnyNet.} \label{fig:net_device} \end{figure} In ED-Conv2D methods, which are relatively compute-efficient compared to CVM-Conv3D, stereo matching neural networks \cite{zagoruyko2015learning}\cite{zbontar2016stereo}\cite{mayer2016large} were first proposed for end-to-end disparity estimation by exploiting an encoder-decoder structure. The encoder part extracts the features from the input images, and the decoder part predicts the disparity with the generated features. The disparity prediction is optimized as a regression or classification problem using large-scale datasets (e.g., SceneFlow \cite{mayer2016large}) with disparity ground truth. The correlation layer \cite{flownet}\cite{mayer2016large} was then proposed to increase the learning capability of DNNs in disparity estimation, and it has proven successful in learning strong features at multiple scales \cite{flownet}\cite{mayer2016large}\cite{flownet2}\cite{flownet3}\cite{liang2018learning}. To further improve the capability of the models, residual networks \cite{resnet2016}\cite{orhan2017skip}\cite{zhan2019dsnet} are introduced into the architecture of disparity estimation networks, since the residual structure enables much deeper networks to be trained more easily \cite{du2019amnet}. The ED-Conv2D methods have proven computationally efficient, but they cannot achieve very high estimation accuracy~\cite{wang2020fadnet}.
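To make the correlation layer concrete: for every pixel, it scores how well the left feature vector matches the right feature vector shifted by each candidate disparity, yielding one 2D score map per disparity. A minimal single-image NumPy sketch (real implementations are batched, run on the GPU, and fuse these loops):

```python
import numpy as np

def correlation_layer(left_feat, right_feat, max_disp):
    """Horizontal correlation of two (C, H, W) feature maps.

    Returns a (max_disp, H, W) volume whose d-th channel is the per-pixel
    inner product (averaged over channels) of the left features with the
    right features shifted by d pixels.
    """
    C, H, W = left_feat.shape
    vol = np.zeros((max_disp, H, W), dtype=left_feat.dtype)
    for d in range(max_disp):
        # a scene point at left pixel x appears at right pixel x - d
        vol[d, :, d:] = (left_feat[:, :, d:] * right_feat[:, :, :W - d]).sum(axis=0) / C
    return vol

# A single distinctive feature at left x=5 and right x=2, i.e. disparity 3:
left = np.zeros((1, 1, 8)); left[0, 0, 5] = 1.0
right = np.zeros((1, 1, 8)); right[0, 0, 2] = 1.0
vol = correlation_layer(left, right, max_disp=6)
print(vol[:, 0, 5])  # [0. 0. 0. 1. 0. 0.] -- the peak picks out disparity 3
```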
To address the accuracy problem of disparity estimation, researchers have proposed CVM-Conv3D networks to better capture the features of stereo images and thus improve the estimation accuracy \cite{zbontar2016stereo}\cite{kendall2017end}\cite{psmnet2018}\cite{ganet2019}\cite{nie2019multi}. The key idea of the CVM-Conv3D methods is to generate a cost volume by concatenating left feature maps with their corresponding right counterparts across each disparity level \cite{kendall2017end}\cite{psmnet2018}. The features of the cost volume are then automatically extracted by 3D convolution layers. 3D operations in DNNs, however, are compute-intensive and hence very slow even with current powerful AI accelerators (e.g., GPUs). Although the 3D convolution based DNNs can achieve state-of-the-art disparity estimation accuracy, they are difficult to deploy due to the very high latency of generating results. On the one hand, a large amount of memory is required to load the model, so only a limited set of accelerators (such as the Nvidia Tesla V100 with 32 GB of memory) can run these models. On the other hand, CVM-Conv3D models take several seconds to generate a single result even on a very powerful Nvidia Tesla V100 GPU. The memory consumption and the high computation workloads make the CVM-Conv3D methods difficult to deploy in practice. Therefore, it is crucial to address the accuracy and efficiency problems for real-world applications. To ease the human effort of designing an efficient network structure for stereo matching, some recent studies \cite{Saikia_2019_ICCV,cheng2020hierarchical} also take advantage of automated machine learning (AutoML) \cite{he2019automl}, especially the neural architecture search (NAS) technique, to search for the optimal set of network operators as well as their connections.
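The concatenation-based cost volume described above can be sketched in the same style. Note that the output is 4-D, so every extra feature channel or disparity level multiplies the memory and the cost of the subsequent 3D convolutions, which is the root of the deployment problem discussed here (the sketch is illustrative, not any specific network's implementation):

```python
import numpy as np

def concat_cost_volume(left_feat, right_feat, max_disp):
    """Build a (2C, max_disp, H, W) volume by pairing left features with
    right features shifted by each candidate disparity (zeros where the
    shifted right view has no data)."""
    C, H, W = left_feat.shape
    vol = np.zeros((2 * C, max_disp, H, W), dtype=left_feat.dtype)
    for d in range(max_disp):
        vol[:C, d, :, d:] = left_feat[:, :, d:]
        vol[C:, d, :, d:] = right_feat[:, :, :W - d]
    return vol

feat = np.ones((32, 9, 16), dtype=np.float32)
vol = concat_cost_volume(feat, feat, max_disp=8)
print(vol.shape)   # (64, 8, 9, 16): 4-D input for the 3D convolutions
print(vol.nbytes)  # 73728 float32 elements -> 294912 bytes for this tiny example
```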
However, those state-of-the-art methods are still far from real-time inference even on a server GPU, since they are based on either complicated network stacking or inefficient 3D convolution operations. Besides, another series of studies focuses on light-weight network structures for fast inference, such as StereoNet \cite{stereonet2018} and AnyNet \cite{anynet2019}. However, the light-weight models significantly sacrifice model accuracy, especially on complex realistic datasets such as KITTI \cite{kitti2015} and Middlebury \cite{scharstein2014high}. To achieve a practical model for stereo matching, we propose FADNet++, which produces real-time and accurate disparity estimation with configurable networks. This article is an extension of our previous conference paper \cite{wang2020fadnet}. Similar to the previous FADNet, in FADNet++ we first exploit multiple stacked 2D convolution layers with fast computation, then combine state-of-the-art residual architectures to improve the learning capability, and finally introduce multi-scale outputs so that FADNet++ can exploit multi-scale weight scheduling to improve the training speed. As illustrated in Fig. \ref{fig:results_preview}, our FADNet++ easily obtains performance comparable to the state-of-the-art GANet \cite{ganet2019}, while it runs approximately 70$\times$ faster than GANet and consumes 3$\times$ less GPU memory. Besides, the new FADNet++ advances the previous FADNet of \cite{wang2020fadnet} in three aspects. First, we allow configurable variants of FADNet++ to meet different demands on model accuracy and speed. Second, we conduct an extensive comparative study on the model accuracy and speed of different FADNet++ variants during both the training and inference stages.
Third, compared to only two stereo datasets and two high-end GPUs in \cite{wang2020fadnet}, we validate our proposed FADNet++ on four stereo datasets and six different GPU platforms, from server-level to edge-level. As shown in Fig. \ref{fig:net_device}, the FADNet++ variants (denoted by ``FADNet*'') can adapt to platforms of different computing capabilities. On a server GPU, even the slowest FADNet++ variant achieves 30 FPS with a lower EPE than the CVM-Conv3D methods. On a mobile GPU, our FADNet++ achieves up to 15 frames per second (FPS) with a much lower EPE than the fastest AnyNet \cite{anynet2019}. We make the project of FADNet++ publicly available\footnote{\url{https://github.com/HKBU-HPML/FADNet}}. Our contributions are summarized as follows: \begin{itemize} \item We propose an accurate yet efficient DNN architecture for disparity estimation named FADNet++ (with a configurable architecture to support efficient inference on multiple hardware platforms), which achieves prediction accuracy comparable to CVM-Conv3D models while running an order of magnitude faster than the 3D-based models. \item We develop a multi-round training scheme with multi-scale weight scheduling for FADNet++ and its variants, which improves the training speed yet maintains the model accuracy. \item We achieve state-of-the-art accuracy on the Scene Flow dataset with more than 14$\times$ and up to 69$\times$ faster disparity prediction than both the NAS-based (LEAStereo \cite{cheng2020hierarchical}) and the human-designed (PSMNet \cite{psmnet2018} and GANet \cite{ganet2019}) models. Besides, by tuning the channel ratios of our FADNet++ to meet limited computational resources, the variant FADNet-S advances the existing mobile solution, AnyNet \cite{anynet2019}, with much higher prediction accuracy and a competitive inference speed of 15 FPS on the mobile Jetson AGX. \end{itemize} The rest of the paper is organized as follows.
We introduce related work on DNN-based solutions to disparity estimation in Section \ref{sec:related_work}. Section \ref{sec:model} introduces the methodology and implementation of our proposed network with configurable model sizes. We demonstrate our experimental settings and results in Section \ref{sec:exp}. We finally conclude the paper in Section \ref{sec:conclusion}. \section{Approach} \label{sec:model} \begin{figure*}[htbp] \centering \includegraphics[width=0.96\linewidth]{figures/FADNet.pdf} \caption{The model structure of our proposed FADNet++. ``Configurable'' indicates that the channel numbers of the convolution/deconvolution layers can be modified by a tunable ratio (discussed in Section \ref{subsec:configurable}) to control the overall model size. ``L'' indicates the left input image, and ``R'' indicates the right input image. ``Warped L'' indicates the aligned left image produced by warping the right image with the initial predicted disparity map of RB-NetC. The sizes of different predicted disparity maps reflect their scales in the network.} \label{fig:fadnet} \end{figure*} \subsection{Model Design and Implementation} Our proposed FADNet++ exploits the structure of DispNetC \cite{mayer2016large} as a backbone, but it is extensively reformed to take care of both accuracy and inference speed, a balance that is lacking in existing studies. We introduce four novel components in FADNet++ to enable good generalization ability and fast inference speed with a configurable size for different hardware. 1) We first change the structure in terms of branch depth and layer type by introducing two new modules, the residual block and point-wise correlation; 2) then we exploit the multi-scale residual learning strategy for training the refinement network; 3) we design the model to be configurable (with a scaling ratio) to balance accuracy and inference speed; 4) finally, a loss weight training schedule is used to train the network in a coarse-to-fine manner.
\subsection{Residual Block and Point-wise Correlation}\label{subsec:pwc} DispNetC and DispNetS, which both originate from the study in \cite{mayer2016large}, use an encoder-decoder structure equipped with five feature extraction and down-sampling layers and five feature deconvolution layers. While conducting feature extraction and down-sampling, DispNetC and DispNetS first adopt a convolution layer with a stride of 1 and then a convolution layer with a stride of 2, so that they consistently shrink the feature map size by half. We refer to these two-layer convolutions with size reduction as Dual-Conv, as shown in Fig. \ref{fig:resblock}(a). DispNetC equipped with Dual-Conv modules and a correlation layer finally achieves an end-point error (EPE) of 1.68 on the SceneFlow dataset~\cite{mayer2016large}, as reported in \cite{mayer2016large}. \begin{figure}[htbp] \centering \includegraphics[width=0.96\linewidth]{figures/Dual-ResBlock.pdf} \caption{The left part shows the original two-layer convolutions (Dual-Conv) in DispNetC \cite{mayer2016large}, while the right part shows the Dual-ResBlock module applied in our FADNet++.} \label{fig:resblock} \end{figure} The residual block, originally proposed in \cite{resnet2016} for image classification tasks, is widely used to learn robust features and to train very deep networks, as it alleviates the gradient vanishing problem. Thus, we replace the convolution layers in the Dual-Conv module with residual blocks to construct a new module called Dual-ResBlock, as shown in Fig. \ref{fig:resblock}(b). With Dual-ResBlock, we can make the network deeper without training difficulty, since the residual block allows us to train very deep models. Therefore, we further increase the number of feature extraction and down-sampling layers from five to seven. Finally, DispNetC and DispNetS evolve into two new networks with better learning ability, which are called RB-NetC and RB-NetS respectively, as shown in Fig.
\ref{fig:fadnet}. One of the most important contributions of DispNetC is the correlation layer, which aims at finding correspondences between the left and right images. Given two multi-channel feature maps $\textbf{f}_1,\textbf{f}_2$ with $w, h$ and $c$ as their width, height and number of channels, the correlation layer calculates their cost volume using Eq. \eqref{eq:corr}. \begin{align} c(\textbf{x}_1,\textbf{x}_2)=\sum_{\textbf{o} \in [-k, k]\times [-k,k]}\langle\textbf{f}_1(\textbf{x}_1 + \textbf{o}), \textbf{f}_2(\textbf{x}_2 + \textbf{o}) \rangle, \label{eq:corr} \end{align} where $k$ is the kernel size of cost matching, and $\textbf{x}_1$ and $\textbf{x}_2$ are the centers of two patches from $\textbf{f}_1$ and $\textbf{f}_2$ respectively. Computing all patch combinations involves $c\times (2k+1)^2 \times w^2 \times h^2$ multiplications and produces a cost matching map of size $w \times h$. Given a maximum searching range $D$, we fix $\textbf{x}_1$ and shift $\textbf{x}_2$ along the x-axis from $-D$ to $D$ with a stride of two. Thus, the final output cost volume size becomes $w\times h \times D$. However, the correlation operation assumes that each pixel in the patch contributes equally to the correlation result, which may lose the ability to learn more complicated matching patterns. Here we propose point-wise correlation, composed of two modules. The first module is a classical convolution layer with a kernel size of $3\times3$ and a stride of $1$. The second one is an element-wise multiplication defined by Eq. \eqref{eq:pw_corr}. \begin{align} c(\textbf{x}_1,\textbf{x}_2)=\sum{\langle\textbf{f}_1(\textbf{x}_1), \textbf{f}_2(\textbf{x}_2) \rangle}, \label{eq:pw_corr} \end{align} where the patch-wise aggregation of Eq. \eqref{eq:corr} is removed. Note that the maximum search range for the original image resolution should not be larger than the maximum valid disparity.
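As a concrete (and deliberately simplified) sketch, the point-wise correlation of Eq. \eqref{eq:pw_corr} amounts to a channel-wise inner product evaluated over a range of horizontal shifts. The following minimal NumPy version is illustrative only: it uses a one-sided disparity sweep with unit stride, and the tensor shapes and function names are our assumptions rather than the actual FADNet++ implementation.

```python
import numpy as np

def pointwise_correlation(f1, f2, max_disp):
    """Cost volume from point-wise (per-pixel, channel-summed) correlation.

    A one-sided variant for clarity: level d matches each left-image
    pixel (y, x) against the right-image pixel (y, x - d).
    f1, f2: feature maps of shape (C, H, W). Returns (max_disp, H, W).
    """
    C, H, W = f1.shape
    cost = np.zeros((max_disp, H, W), dtype=f1.dtype)
    for d in range(max_disp):
        # channel-wise inner product <f1(x1), f2(x2)>, cf. Eq. (pw_corr)
        cost[d, :, d:] = (f1[:, :, d:] * f2[:, :, : W - d]).sum(axis=0)
    return cost
```

Positions shifted outside the image are left at zero cost; a learned $3\times3$ convolution would precede this step in the full point-wise correlation module.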
For example, in the SceneFlow dataset, the maximum valid disparity is 192, and the correlation layer of our FADNet++ is placed after the third Dual-ResBlock, whose output feature resolution is 1/8; a proper searching range should therefore be comparable to 192/8=24, and we set it to 20. We also test some other values, such as 10 and 40, neither of which surpasses the version using 20. The reason is that too small or too large a search range may lead to under-fitting or over-fitting. Table \ref{tab:res_corr} lists the accuracy improvement brought by applying the proposed Dual-ResBlock and point-wise correlation. To simplify the validation experiment, we train the models on the same SceneFlow \cite{mayer2016large} dataset for only 20 epochs, which differs from the complete training scheme in Section \ref{sec:exp}. It is observed that RB-NetC outperforms DispNetC with a much lower EPE, which indicates the effectiveness of the residual structure. We also notice that setting a proper searching range value for the correlation layer helps further improve the model accuracy. \begin{table}[!ht] \centering \caption{Model accuracy improvement of Dual-ResBlock and point-wise correlation with different $D$.} \label{tab:res_corr} \begin{tabular}{|c|c|c|c|} \hline \textbf{Model} & $D$ & Training EPE & Test EPE \\ \hline\hline DispNetC & 20 & 2.89 & 2.80 \\ \hline RB-NetC &10 & 2.28 & 2.06 \\ \hline RB-NetC &20 & 2.09 & 1.76 \\ \hline RB-NetC &40 & 2.12 & 1.83 \\ \hline \end{tabular} \end{table} \subsection{Multi-Scale Residual Learning} Instead of directly stacking DispNetC and DispNetS sub-networks to conduct the disparity refinement procedure \cite{flownet3}, we apply the multi-scale residual learning first proposed in \cite{crl2017}. The basic idea is that the second refinement network learns the disparity residuals and accumulates them onto the initial results generated by the first network, instead of directly predicting the whole disparity map.
In this way, the second network only needs to focus on learning the highly nonlinear residual, which effectively avoids gradient vanishing. Our final FADNet++ is formed by stacking RB-NetC and RB-NetS with multi-scale residual learning, as shown in Fig. \ref{fig:fadnet}. As illustrated in Fig. \ref{fig:fadnet}, the upper RB-NetC takes the left and right images as input and produces disparity maps at a total of 7 scales, denoted by $c_{s}$, where $s$ runs from 0 to 6. The bottom RB-NetS exploits the inputs of the left image, the right image, and the warped left image to predict the residuals. The generated residuals (denoted by $r_{s}$) from RB-NetS are then accumulated onto the predictions of RB-NetC to generate the final disparity maps at multiple scales ($s=0,1,...,6$). Thus, the final disparity maps predicted by FADNet++, denoted by $\hat{d_{s}}$, can be calculated by \begin{align} \hat{d_{s}} = c_{s} + r_{s}, 0 \leq s \leq 6. \label{eq:d_r_sum} \end{align} \subsection{Configurable Network Size} \label{subsec:configurable} Although the recent state-of-the-art models, such as PSMNet \cite{psmnet2018}, GANet \cite{ganet2019}, LEAStereo \cite{cheng2020hierarchical} and our previous FADNet \cite{wang2020fadnet}, produce decent disparity estimation accuracy, their practicability on computing devices of different computational capability, especially low-end mobile ones, has not yet been extensively studied. Recently, AnyNet \cite{anynet2019} reduced the inference overhead of stereo matching by alternatively refining the disparity map in a coarse-to-fine manner according to the target device, and made deployment on a mobile Jetson TX2 platform possible at over 20 FPS. However, the low-level features, which are important to recover object details and boundaries, may be discarded to keep a high inference speed on a low-end device.
In contrast to AnyNet, we keep all the features from low to high scales but make the channel numbers of the convolution/deconvolution layers configurable, so that we can balance the model accuracy and inference speed. Our design has three advantages. First, the network size can be easily controlled by two ratio parameters, which proves simple yet effective in our experiments. Second, the variants of different configurations still share the overall network structure of FADNet++, instead of dropping some layers/modules (as adopted in \cite{stereonet2018}) or some scales (as adopted in \cite{anynet2019}), such that the benefits of the FADNet++ backbone are maintained. Third, the configurable ratio is convenient for balancing accuracy and performance under different application requirements. In our proposed FADNet++, RB-NetC and RB-NetS have the same number of layers in their decoder and encoder parts, respectively. Assume that the encoder part has $E$ layers and the decoder part has $D$ layers. The $i^{\text{th}}$ layer in the encoder is denoted by $l_{i}^{E}$, and the $i^{\text{th}}$ layer in the decoder is denoted by $l_{i}^{D}$. For each convolution layer, we have a basic channel number denoted by $\hat{C}$, which also indicates the minimum number of channels. Then we introduce two ratios, E-Ratio for encoders and D-Ratio for decoders, to conveniently configure the model size. By assigning different values to E-Ratio and D-Ratio, we are able to construct a set of FADNet++ variants; we list some of them in Table \ref{tab:fadnet_variants}. The channel number of each convolution layer can be calculated by \begin{subequations} \begin{align} C_{l_{i}^{E}} &= \hat{C}_{l_{i}^{E}} \times \text{E-Ratio} \\ C_{l_{i}^{D}} &= \hat{C}_{l_{i}^{D}} \times \text{D-Ratio} \end{align} \end{subequations} The configurable network size promotes the flexibility of FADNet++ in terms of network parameters as well as model inference speed.
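A minimal sketch of this channel-scaling rule follows. Note that the base channel numbers $\hat{C}$ below are hypothetical placeholders: only the scaling rule $C = \hat{C} \times \text{ratio}$ and the variant ratios are taken from the text.

```python
# Hypothetical base channel numbers \hat{C} per layer; only the scaling
# rule C = \hat{C} * ratio comes from the text, the values are made up.
BASE_ENCODER = [4, 8, 16, 16, 32, 32, 64]
BASE_DECODER = [32, 32, 16, 16, 8, 8, 4]

def layer_channels(e_ratio, d_ratio):
    """Per-layer channel numbers for a FADNet++ variant."""
    encoder = [c * e_ratio for c in BASE_ENCODER]
    decoder = [c * d_ratio for c in BASE_DECODER]
    return encoder, decoder

# FADNet-T (E-Ratio=2, D-Ratio=1) versus the full FADNet++ (16, 16)
tiny_enc, tiny_dec = layer_channels(2, 1)
full_enc, full_dec = layer_channels(16, 16)
```

Because every layer scales linearly with its ratio, the parameter count grows roughly quadratically in the ratios, consistent with the large spread between FADNet-T and FADNet++ in the variants table.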
We will further evaluate its effectiveness and efficiency in Section \ref{sec:exp} by deploying different variants on a wide range of computing devices. On the one hand, on a server GPU, the full FADNet++ outperforms the expensive CVM-Conv3D methods with slightly better accuracy and a considerable speed advantage. On the other hand, on a mobile device, the shrunken FADNet-T beats the real-time AnyNet with equivalent speed but much lower prediction errors. \begin{table}[ht] \centering \caption{FADNet++ variants of different configurations. } \label{tab:fadnet_variants} \small{ \begin{tabular}{|c|c|c|c|} \hline Network & E-Ratio & D-Ratio & Params [M] \\ \hline\hline FADNet++ & 16 & 16 & 124.38 \\ FADNet-M & 8 & 8 & 31.15 \\ FADNet-S & 4 & 4 & 7.82 \\ FADNet-T & 2 & 1 & 1.65\\ \hline \end{tabular} } \end{table} \subsection{Loss Function Design}\label{subsec:loss} Given a pair of stereo RGB images, our FADNet++ takes them as input and produces seven disparity maps at different scales. Assume that the input image size is $H \times W$. The dimensions of the seven scales of output disparity maps are $H \times W$, $\frac{1}{2}H \times \frac{1}{2}W$, $\frac{1}{4}H \times \frac{1}{4}W$, $\frac{1}{8}H \times \frac{1}{8}W$, $\frac{1}{16}H \times \frac{1}{16}W$, $\frac{1}{32}H \times \frac{1}{32}W$, and $\frac{1}{64}H \times \frac{1}{64}W$, respectively. To train FADNet++ in an end-to-end manner, we adopt the pixel-wise smooth L1 loss between the predicted disparity map and the ground truth using \begin{align} L_s(d_s, \hat{d_s})=\frac{1}{N}\sum_{i=1}^{N}{smooth}_{L_1}(d_{s}^i - \hat{d_{s}^i}), \label{eq:smooth_l1} \end{align} where $N$ is the number of pixels of the disparity map, $d_s^i$ is the $i^{\text{th}}$ element of $d_s\in \mathcal{R}^N$, and \begin{align} {smooth}_{L_1}(x)= \begin{cases} 0.5x^2,& \text{if } |x| < 1\\ |x|-0.5, & \text{otherwise}.
\end{cases} \end{align} Note that $d_s$ is the ground truth disparity at scale $\frac{1}{2^s}$ and $\hat{d_s}$ is the predicted disparity at scale $\frac{1}{2^s}$. The loss function is applied separately to the seven scales of outputs, which generates seven loss values. The loss values are then accumulated with loss weights. \begin{table}[!ht] \centering \caption{Multi-scale loss weight scheduling.} \label{tab:loss_weights} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \textbf{Round} & $w_0$ & $w_1$ & $w_2$ & $w_3$ & $w_4$ & $w_5$ & $w_6$ \\ \hline\hline 1 & 0.32 & 0.16 & 0.08 & 0.04 & 0.02 & 0.01 & 0.005 \\ \hline 2 & 0.6 & 0.32 & 0.08 & 0.04 & 0.02 & 0.01 & 0.005 \\ \hline 3 & 0.8 & 0.16 & 0.04 & 0.02 & 0.01 & 0.005 & 0.0025 \\ \hline 4 & 1.0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline \end{tabular} \end{table} The loss weight scheduling technique, initially proposed in \cite{mayer2016large}, is useful for learning the disparity in a coarse-to-fine manner. Instead of just switching the losses of different scales on and off, we apply different non-zero weight groups to tackle the different scales of disparity. Let $w_s$ denote the weight for the loss at scale $s$. The final loss function is \begin{equation} L=\sum_{s=0}^{6}w_sL_s(d_s,\hat{d_s}). \end{equation} The specific setting is listed in Table \ref{tab:loss_weights}. In total there are seven scales of predicted disparity maps. At the beginning, we assign low-value weights to the large-scale disparity maps to learn the coarse features. Then we increase the loss weights of the large scales to let the network gradually learn the finer features. Finally, we deactivate all the losses except that of the final prediction at the original input size. With the successive rounds of weight scheduling, the evaluation EPE gradually decreases towards the final accuracy, as shown in Table \ref{tab:weightsresults} on the SceneFlow dataset.
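For illustration, the residual accumulation of Eq. \eqref{eq:d_r_sum}, the smooth L1 loss of Eq. \eqref{eq:smooth_l1}, and the weighted multi-scale combination can be sketched together as follows. This is a minimal NumPy version; the array shapes and function names are our assumptions, not the training code itself.

```python
import numpy as np

def smooth_l1(x):
    """Element-wise smooth L1: 0.5 x^2 if |x| < 1, else |x| - 0.5."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def multiscale_loss(c, r, d_gt, w):
    """Weighted multi-scale loss L = sum_s w_s L_s(d_s, c_s + r_s).

    c, r, d_gt: per-scale lists of RB-NetC predictions, RB-NetS
    residuals, and ground-truth disparity maps; w: the loss weights
    of the current training round (one row of the schedule table).
    """
    total = 0.0
    for c_s, r_s, d_s, w_s in zip(c, r, d_gt, w):
        d_hat = c_s + r_s                 # residual accumulation
        total += w_s * smooth_l1(d_s - d_hat).mean()
    return total
```

Switching training rounds then amounts to passing a different weight vector `w`, with the ground-truth maps `d_gt` downsampled to match each scale.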
\begin{table}[!ht] \centering \caption{Model accuracy with different rounds of weight scheduling.} \label{tab:weightsresults} \addtolength{\tabcolsep}{-2.0pt} \begin{tabular}{|c|c|c|c|c|c|} \hline Network & Round & \# Epochs & \makecell{Training \\ EPE} & \makecell{Test \\ EPE} & Improvement (\%) \\ \hline\hline \multirow{4}{*}{FADNet++} & 1 & 20 & 1.45 & 1.28 & - \\ & 2 & 20 & 1.07 & 0.96 & 25.0 \\ & 3 & 20 & 0.91 & 0.89 & 7.3 \\ & 4 & 30 & 0.74 & 0.76 & 14.6 \\ \hline \multirow{4}{*}{FADNet-M} & 1 & 20 & 1.61 & 1.38 & - \\ & 2 & 20 & 1.31 & 1.19 & 13.8 \\ & 3 & 20 & 1.16 & 1.02 & 14.3 \\ & 4 & 30 & 0.97 & 0.91 & 10.8 \\ \hline \multirow{4}{*}{FADNet-S} & 1 & 20 & 2.10 & 1.91 & - \\ & 2 & 20 & 1.72 & 1.54 & 19.4 \\ & 3 & 20 & 1.58 & 1.35 & 12.3 \\ & 4 & 30 & 1.47 & 1.19 & 11.9 \\ \hline \multirow{4}{*}{FADNet-T} & 1 & 20 & 3.10 & 2.52 & - \\ & 2 & 20 & 2.65 & 2.16 & 14.3 \\ & 3 & 20 & 2.49 & 2.11 & 2.3 \\ & 4 & 30 & 2.25 & 1.83 & 13.3 \\ \hline \end{tabular} \begin{tablenotes} \item Note: ``Improvement'' indicates the test EPE decrease of the current round of weight scheduling over the previous round. \end{tablenotes} \end{table} Table \ref{tab:weightsresults} lists the model accuracy improvements (13.3\% on average and up to 25.0\% among all the rounds) brought by the multi-round training with the four loss weight groups. For each tested network, it is observed that the training and test EPEs decrease smoothly and remain close to each other, which indicates good generalization and the advantage of our training strategy. \section{Related Work} \label{sec:related_work} There exist many studies that apply deep learning methods to estimate image depth from monocular, stereo and multi-view images. Although monocular vision is low cost and commonly available in practice, it does not explicitly introduce any geometrical constraint, which is important for disparity estimation \cite{Luo_2018_CVPR}.
On the contrary, stereo vision leverages the advantages of cross-reference between the left and the right views, and usually shows greater performance and robustness in geometrical tasks. Thanks to the rapid and promising development of DNNs, stereo matching also benefits considerably from DNNs, which efficiently extract rich feature representations and fit the cost matching function between the left and right views. The early studies mainly focus on optimizing existing network architectures through enormous hands-on trial-and-error tweaking. Besides, recent studies also leverage multi-task learning \cite{segstereo2018,song2020edgestereo,irs2021} to combine other prior vision information, and NAS-based methods \cite{cheng2020hierarchical,Saikia_2019_ICCV} to tweak the network structure as well as the operator hyper-parameters (e.g., the kernel size and channel number of a convolution layer). According to the basic operator (related to the computational efficiency) and the network pipeline, we mainly discuss two branches of network structures for disparity estimation, the ED-Conv2D series and the CVM-Conv3D series. \subsection{Disparity Estimation with ED-Conv2D CNNs} In the ED-Conv2D series, end-to-end architectures with mainly convolution layers \cite{mayer2016large}\cite{crl2017} are proposed for disparity estimation, which take two stereo images as input and directly regress the disparity. This is achieved by adopting large U-shape encoder-decoder networks with 2D convolutions to predict the disparity map. However, pure 2D CNN architectures struggle to capture the matching features, so their estimation results are not good.
To address this problem, the correlation layer, which can express the relationship between the left and right images, is introduced into the end-to-end architecture (e.g., DispNetCorr1D \cite{mayer2016large}, FlowNet \cite{flownet}, FlowNet2 \cite{flownet2}, DenseMapNet \cite{flownet3}). The correlation layer significantly increases the estimation performance compared to pure CNNs, but the existing architectures are still not accurate enough for production. Furthermore, CRL \cite{crl2017} and FADNet \cite{wang2020fadnet} introduce the idea of residual learning \cite{resnet2016} to conduct efficient disparity refinement in a coarse-to-fine manner. Liang et al. \cite{iresnet2021} apply a similar idea but construct multi-scale cost volumes from the feature pyramid. Although the existing ED-Conv2D methods enjoy high model inference efficiency, they usually fail to produce satisfactory results in some challenging scenarios. Besides, some studies leverage multi-task learning to incorporate other visual information, such as edge cues \cite{song2020edgestereo} and semantic segmentation \cite{segstereo2018}, to improve the accuracy on textureless regions, detailed structures and small objects. \subsection{Disparity Estimation with CVM-Conv3D CNNs} The CVM-Conv3D CNNs are further proposed to increase the estimation performance \cite{zbontar2016stereo}\cite{kendall2017end}\cite{psmnet2018}\cite{ganet2019}\cite{nie2019multi}; they leverage the concept of semi-global matching \cite{hirschmuller2007stereo} to learn disparities from a 4D cost volume. The cost volume is mainly constructed by concatenating left feature maps with their corresponding right counterparts across each disparity level \cite{kendall2017end}\cite{psmnet2018}, and the features of the generated cost volumes can be learned by 3D convolution layers. The CVM-Conv3D CNNs can automatically learn to regularize the cost volume, and have achieved state-of-the-art accuracy on various datasets.
However, the key limitation of the 3D-based CNNs is their extremely high computational resource requirements. For example, training GANet \cite{ganet2019} on the Scene Flow \cite{mayer2016large} dataset takes weeks even using very powerful Nvidia Tesla V100 GPUs. Even though they achieve good accuracy, they are difficult to deploy due to their very low time efficiency. Thus, recent research proposes some optimization solutions, such as cost volume compression by grouping \cite{gwcnet2019}, efficient search space pruning \cite{deeppruner2019} and cooperative learning of multi-scale features \cite{xu2020aanet}. However, AANet \cite{xu2020aanet}, the fastest among all CVM-Conv3D CNNs, runs at only 12 FPS even on a powerful Tesla V100 GPU and is still far from real-time inference on other low-end devices. Besides, to lessen the effort dedicated to designing network architectures, automated machine learning (AutoML) \cite{he2019automl}, especially neural architecture search (NAS) \cite{nas2019,liu2018darts,cai2020once}, has also been applied to stereo matching in \cite{cheng2020hierarchical,Saikia_2019_ICCV,searchdense2018} and successfully achieved leading accuracy and generalization on several benchmarks. However, the low time efficiency and high memory footprint of these 3D-conv-based architectures still remain. To this end, we propose a fast and accurate DNN model for disparity estimation.
\section{Introduction} \label{Sec_Int} Studies of hard exclusive reactions rely on the factorization properties of the leading twist amplitudes \cite{fact}. The leading twist distribution amplitude (DA) of a transversally polarized vector meson is chiral-odd, and hence decouples from hard amplitudes even when another chiral-odd quantity is involved \cite{DGP}, except in reactions with more than two final hadrons \cite{IPST}. Thus transversally polarized $\rho-$meson production is generically governed by twist 3 contributions, for which a pure collinear factorization fails due to the appearance of end-point singularities \cite{MP,AT}. The meson quark-gluon structure within collinear factorization may be described by Distribution Amplitudes (DAs), classified in \cite{BB}. Measurements \cite{exp} of the $\rho_T-$meson production amplitude in photo- and electro-production show that it is by no means negligible. We consider here the case of very high energy collisions at colliders, for which future progress may come from real or virtual photon-photon collisions \cite{IP,PSW}. In the literature there are two approaches to the factorization of the scattering amplitudes in exclusive processes at leading and higher twists. The first approach \cite{APT,AT}, the Light-Cone Collinear Factorization (LCCF), extends the inclusive approach \cite{EFP} to exclusive processes, dealing with the factorization in momentum space around the dominant light-cone direction. On the other hand, there exists a Covariant Collinear Factorization (CCF) approach in coordinate space, successfully applied in \cite{BB} for a systematic description of DAs of hadrons carrying different twists. We show \cite{us} that these two descriptions are equivalent at twist 3, and illustrate this by calculating within both methods the impact factor $\gamma^* \to \rho_T$, up to twist 3 accuracy.
\section{LCCF factorization of exclusive processes} \label{Sec_LCCF} The amplitude for the exclusive process $A \to \rho \, B$, in the momentum representation and in axial gauge, reads ($H$ and $H_\mu$ are 2- and 3-parton coefficient functions, respectively) \begin{eqnarray} \label{GenAmp} {\cal A}= \int d^4\ell \, {\rm tr} \biggl[ H(\ell) \, \Phi (\ell) \biggr]+ \int d^4\ell_1\, d^4\ell_2\, {\rm tr}\biggl[ H_\mu(\ell_1, \ell_2) \, \Phi^{\mu} (\ell_1, \ell_2) \biggr] + \ldots \,. \end{eqnarray} In (\ref{GenAmp}), the soft parts $\Phi$ are the Fourier-transformed 2- or 3-parton correlators, which are matrix elements of non-local operators. To factorize the amplitude, we choose the dominant direction around which we decompose the relevant momenta, and we Taylor expand the hard part. Let $p\sim p_\rho$ and $n$ be two light-cone vectors ($p \cdot n =1$). Any vector $\ell_i$ is then expanded as \begin{eqnarray} \label{k} \ell_{i\, \mu} = y_i\,p_\mu + (\ell_i\cdot p)\, n_\mu + \ell^\perp_{i\,\mu} , \quad y_i=\ell_i\cdot n , \end{eqnarray} and the integration measure in (\ref{GenAmp}) is replaced as $d^4 \ell_i \longrightarrow d^4 \ell_i \, dy_i \, \delta(y_i-\ell_i\cdot n) .$ The hard part $H(\ell)$ is then expanded around the dominant $p$ direction: \begin{eqnarray} \label{expand} H(\ell) = H(y p) + \frac{\partial H(\ell)}{\partial \ell_\alpha} \biggl|_{\ell=y p}\biggr. \, (\ell-y\,p)_\alpha + \ldots \end{eqnarray} where $(\ell-y\,p)_\alpha \approx \ell^\perp_\alpha$ up to twist 3. To obtain a factorized amplitude, one performs an integration by parts to replace $\ell^\perp_\alpha$ by $\partial^\perp_\alpha$ acting on the soft correlator. This leads to new operators containing transverse derivatives, such as $\bar \psi \, \partial^\perp \psi $, thus requiring additional DAs $\Phi^\perp (l)$.
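Schematically (a sketch of our own, with all normalizations suppressed), the twist-3 term generated by the expansion \eqref{expand} involves the transverse moment of the soft correlator, which defines the additional DA accompanying the derivative of the hard part:

```latex
% Schematic illustration only; normalizations are suppressed.
\begin{equation}
{\cal A}\Big|_{\rm tw.\,3} \supset
\int\limits_0^1 dy\;
\frac{\partial H(\ell)}{\partial \ell_\alpha}\bigg|_{\ell = y\,p}
\,\Phi^{\perp}_{\alpha}(y)\,,
\qquad
\Phi^{\perp}_{\alpha}(y) \equiv
\int d^4\ell\;\delta(y-\ell\cdot n)\,\ell^{\perp}_{\alpha}\,\Phi(\ell)\,.
\end{equation}
```

In coordinate space this transverse moment corresponds precisely to the correlators of $\bar \psi \, \partial^\perp \psi$ mentioned above.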
Factorization is then achieved by Fierz decomposition on a set of relevant Dirac $\Gamma$ matrices, and we end up with \begin{eqnarray} \label{GenAmpFac23} \hspace{-.4cm}{\cal A}= {\rm tr} \left[ H_{q \bar{q}}(y) \, \Gamma \right] \otimes \Phi_{q \bar{q}}^{\Gamma} (y) + {\rm tr} \left[ H^{\perp\mu}_{q \bar{q}}(y) \Gamma \right] \otimes \Phi^{\perp\Gamma}_{{q \bar{q}}\,\mu} (y) + {\rm tr} \left[ H_{q \bar{q}g}^\mu(y_1,y_2) \, \Gamma \right] \otimes \Phi^{\Gamma}_{{q \bar{q}g}\,\mu} (y_1,y_2) \,, \end{eqnarray} where $\otimes$ denotes the $y$-integration. Although the field coordinates $z_i$ are on the light-cone in both the LCCF and CCF parametrizations of the soft non-local correlators, $z_i$ is along $n$ in LCCF while arbitrary in CCF. The transverse physical polarization of the $\rho-$meson is defined by the conditions \begin{equation} \label{pol_RhoTdef} e_T \cdot n=e_T \cdot p=0\,. \end{equation} Keeping all the terms up to the twist-$3$ order in the axial (light-like) gauge, $n \cdot A=0$, the matrix elements of quark-antiquark nonlocal operators for vector and axial-vector correlators without and with transverse derivatives, with $\stackrel{\longleftrightarrow} {\partial_{\rho}}=\frac{1}{2}(\stackrel{\longrightarrow} {\partial_{\rho}}-\stackrel{\longleftarrow}{\partial_{\rho}})\,,$ can be written as (here, $z=\lambda n$) \begin{eqnarray} \label{par1a} &&\langle \rho(p_\rho)| \bar\psi(z)\gamma_5\gamma_{\mu} \psi(0) |0\rangle = m_\rho\,f_\rho \, i\int_{0}^{1}\, dy \,\text{exp}\left[iy\,p\cdot z\right]\varphi_A(y)\, \varepsilon_{\mu\alpha\beta\delta}\, e^{*\alpha}_{T}p^{\beta}n^{\delta} \, \nonumber \\ \label{par1.1a} &&\langle \rho(p_\rho)| \bar\psi(z)\gamma_5\gamma_{\mu} i\stackrel{\longleftrightarrow} {\partial^T_{\alpha}} \psi(0) |0\rangle = m_\rho\,f_\rho \, i\int_{0}^{1}\, dy \,\text{exp}\left[iy\,p\cdot z\right]\varphi_A^T (y) \, p_{\mu}\, \varepsilon_{\alpha\lambda\beta\delta}\, e_T^{*\lambda} p^{\beta}\,n^{\delta}\,, \end{eqnarray} for the axial case, where
$y$ ($\bar y$) is the quark (antiquark) momentum fraction. Two analogous correlators are needed to describe gluonic degrees of freedom, introducing the $B$ and $D$ DAs. One thus needs 7 DAs: $\varphi_1$ (twist-$2$), $B$ and $D$ (genuine (dynamical) twist-$3$), and $\varphi_3$, $\varphi_A$, $\varphi_1^T$, $\varphi_A^T$ (which contain both a kinematical (\`a la Wandzura-Wilczek) twist-$3$ part and a genuine (dynamical) twist-$3$ part). These DAs are related by two equations of motion (EOMs) and two equations arising from the invariance of ${\cal A}$ under rotation on the light-cone. Indeed, this invariance with respect to $n$ does not involve the hard part of ${\cal A}$, and therefore implies constraints on the soft part, i.e. on the DAs. We thus have only three independent DAs, $\varphi_1$, $B$ and $D$, which fully encode the non-perturbative content of the $\rho$ at twist 3. The original CCF parametrizations of the $\rho$ DAs~\cite{BB} also involve 3 independent DAs, defined through 4 correlators related by EOMs. For example, the 2-parton axial-vector correlator reads \begin{equation} \label{BBA} \langle \rho(p_\rho)|\bar \psi(z) \, [z,\, 0] \, \gamma_\mu \gamma_5 \psi(0)|0\rangle = \frac{1}{4}f_\rho\,m_\rho\, \varepsilon_\mu^{\,\,\,\alpha \beta \gamma} e^*_{T \alpha} \,p_\beta \, z_\gamma\, \int\limits_0^1\,dy\,e^{iy(p \cdot z)}\,g_\perp^{(a)}(y)\;, \end{equation} $[z_1, \, z_2] = P \exp \left[ i g \int\limits^1_0 dt \, (z_1-z_2)_\mu A^\mu(t \,z_1 +(1-t)\,z_2) \right]$ being the Wilson line. Denoting the meson polarization vector by $e$, $e_T$ is here defined to be orthogonal to the light-cone vectors $p$ and $z$: \begin{equation} \label{pol_Rho} e_{T \mu}=e_\mu -p_\mu \frac{e \cdot z}{p \cdot z}-z_\mu \frac{e \cdot p}{p \cdot z} \,. \end{equation} Thus $e_T$ (\ref{pol_Rho}) in CCF and $e_T$ (\ref{pol_RhoTdef}) in LCCF differ since $z$ does not generally point in the $n$ direction.
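As a short consistency check (our own computation, using $p^2=n^2=0$ and $p\cdot n=1$), setting $z=\lambda n$ in (\ref{pol_Rho}) reduces the CCF polarization to a vector obeying the LCCF conditions (\ref{pol_RhoTdef}):

```latex
% For z = \lambda n, with p^2 = n^2 = 0 and p\cdot n = 1:
\begin{equation}
e_{T\mu}\big|_{z=\lambda n}
= e_\mu - p_\mu\,(e\cdot n) - n_\mu\,(e\cdot p)
\quad\Longrightarrow\quad
e_T\cdot p = e_T\cdot n = 0\,,
\end{equation}
```

so the two definitions coincide only in this special kinematics, as stated above.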
\section{$\gamma^* \to \rho_T$ Impact factor up to twist three accuracy in LCCF and CCF} We have calculated, in both LCCF and CCF, the forward impact factor $\Phi^{\gamma^*\to\rho}$ of the subprocess $g+\gamma^*\to g+\rho_T\,,$ defined as the integral of the discontinuity in the $s$ channel of the off-shell S-matrix element ${\cal S}^{\gamma^*_T\, g\to\rho_T\, g}_\mu$. In LCCF, one computes the diagrams perturbatively in a fairly direct way, which makes the use of the CCF parametrization \cite{BB} less practical. We need to express the impact factor in terms of hard coefficient functions and soft parts parametrized by the light-cone matrix elements. The standard technique here is an operator product expansion on the light cone, which gives the leading term in the power counting. Since there is no operator definition for an impact factor, we have to rely on perturbation theory. The primary complication here is that the $z^2\to 0$ limit of any single diagram is given in terms of light-cone matrix elements without any Wilson line insertion between the quark and gluon operators (``perturbative correlators''), like $ \langle \rho(p_\rho)|\bar \psi(z)\gamma_\mu \psi(0)|0 \rangle\,.$ Despite working in the axial gauge, one cannot neglect effects coming from the Wilson lines, since the two light-cone vectors $z$ and $n$ are not identical and thus, generically, the Wilson lines are not equal to unity. Nevertheless, in the axial gauge the contribution of each additional parton costs one extra power of $1/Q$, allowing the calculation to be organized in a simple iterative manner by expanding the Wilson line. At twist 3, we need to keep the contribution $[z,0]=1+i \,g \int\limits^1_0 dt \, z^\alpha A_\alpha (z t)$ and to account for the difference between the physical $\rho_T$-polarization (\ref{pol_RhoTdef}) and the formal one (\ref{pol_Rho}).
At the twist-3 level, the net effect of the Wilson line when computing our impact factor is just a renormalization of the DA $g^a_\perp$ of (\ref{BBA}), and similarly for the vector case. Using the solutions of the EOMs and the $n$-independence set of equations, we find that our LCCF and CCF results are identical; they are gauge invariant due to a consistent inclusion of fermionic and gluonic degrees of freedom and are free of end-point singularities, due to the $k_T$ regulator. This work is partly supported by the ECO-NET program, contract 18853PJ, the French-Polish scientific agreement Polonium, the grant ANR-06-JCJC-0084, the RFBR (grants 09-02-01149, 08-02-00334, 08-02-00896), the grant NSh-1027.2008.2 and the Polish Grant N202 249235.
\section{Introduction} \label{intr} One of the outstanding issues in neutrino physics today is to clarify the Dirac or Majorana character of neutrino masses. If neutrinos are Dirac particles, they must have right-handed electroweak singlet components in addition to the known left-handed modes; in that case lepton number remains a conserved quantity. Alternatively, if they are Majorana particles, they are indistinguishable from their antiparticles, and the lepton number in the reactions involving them may be violated. The nature of the neutrinos can be discerned via detection of neutrinoless double beta decays ($0 \nu \beta \beta$) in nuclei \cite{0nubb}, by considering specific scattering processes \cite{scatt}, or by studying rare meson decays \cite{rmeson,JHEP}. The experimental results to date are unable to distinguish between these two alternatives. Among the principal tasks in neutrino physics are the ascertainment of the nature of the neutrino mass (Dirac or Majorana) and the detection of CP violation in the neutrino sector. The measurement of neutrino oscillations \cite{Pontecorvo,oscatm,oscsol,oscnuc} suggests that the first three neutrinos are not massless but very light particles, with masses less than 1 eV. If these light masses are produced via a seesaw \cite{seesaw} or related mechanism, then the existence of significantly heavier neutrinos is expected. Furthermore, there is a possibility of CP violation in the neutrino sector, whether the neutrinos are Dirac or Majorana particles. In the Majorana case, though, the number of possible CP-violating phases in the PMNS matrix is larger. If $n$ is the number of neutrino generations, the number of CP-violating phases is $(n-1)(n-2)/2$ in the Dirac case, and $n(n-1)/2$ in the Majorana case, cf.~Ref.~\cite{Bilenky}. In this work, we investigate the possibility of measuring the CP asymmetry in rare pion decays. The CP violation in the neutrino sector can, in principle, be measured via neutrino oscillations \cite{oscCP}.
However, here we consider a scenario in which the CP violation in the neutrino sector can be measured by investigating rare meson decays. We consider a scenario of two additional, sterile, almost degenerate neutrinos $N_j$ ($j=1,2$) with masses $M_N \sim 10^2$ MeV. Such neutrinos are not typically predicted by seesaw scenarios; nonetheless, there are models which predict such neutrinos \cite{nuMSM,Shapo}, and they are not ruled out by experiments \cite{PDG2012,Atre}. We note that the model, $\nu$MSM~\cite{nuMSM,Shapo}, proposes two almost degenerate Majorana neutrinos with mass between $100$ MeV and a few GeV, in addition to a light Majorana neutrino of mass $\sim 10^1$ keV. The existence of such neutrinos is strongly motivated, because it can explain simultaneously the baryon asymmetry of the Universe, the pattern of light neutrino masses and oscillations, and can provide a dark matter candidate -- cf.~\cite{nuMSMrev} for a review, and \cite{CDSh} for the allowed range of the sterile neutrinos in $\nu$MSM.\footnote{ The tentative evidence of a dark matter line, recently discussed in \cite{DM}, is well within the regime predicted in $\nu$MSM in \cite{CDSh}. We thank Marco Drewes for bringing this point to our attention.} The requirement that the lightest sterile neutrino be the dark matter candidate reduces the parameters of the model in such a way as to make the two heavier neutrinos nearly degenerate in mass. Recently, CERN-SPS has proposed a search for such heavy neutrinos, Ref.~\cite{CERN-SPS}, in the decays of $D$, $D_s$ mesons. We are interested in the question of whether in such models the CP violation in rare pion decays can be appreciable enough to cover the parameter space favored by theoretical models. We investigate the rare decays of charged pions into three charged leptons and a light neutrino, with the two intermediate neutrinos $N_j$ in the decay being on-shell, and we look for a possibility of detection of CP asymmetries in such decays.
The relevant processes are the lepton number conserving (LC) processes $\pi^{\pm} \to e^{\pm} N_j \to e^{\pm} e^{\pm} \mu^{\mp} \nu$ where $\nu=\nu_e$ for $\pi^+$ and $\nu={\bar \nu}_e$ for $\pi^-$; and the lepton number violating (LV) processes, where $\nu={\bar \nu}_{\mu}$ for $\pi^+$ and $\nu={\nu}_{\mu}$ for $\pi^-$. If the $N_j$ neutrinos are Dirac, only LC decays contribute. If they are Majorana, both LC and LV decays contribute. In our previous work \cite{JHEP}, we demonstrated that the branching ratios for these decays are very small, but they could become appreciable and measurable at future $\pi$ factories, where huge numbers of pions will be produced, if the heavy-light neutrino mixing parameters are sufficiently large but still below the present upper bounds. Moreover, we showed that the consideration of the muon spectrum of these decays may allow us to distinguish whether the intermediate neutrinos are Dirac or Majorana. We will investigate the branching ratios ${\rm Br}_{\pm} \equiv [\Gamma(\pi^- \to e^- e^- \mu^+ \nu) \pm \Gamma(\pi^+ \to e^+e^+\mu^-\nu)]/\Gamma(\pi^- \to {\rm all})$ and the CP asymmetry ratio ${\cal A}_{\rm CP} \equiv {\rm Br}_{-}/{\rm Br}_{+}$ of the mentioned rare processes in the scenario of two intermediate on-shell neutrinos. We demonstrate that there exist scenarios where this CP asymmetry can be detected. In Sec.~\ref{sec:form} we outline the formalism for the calculation of the various decay widths and branching ratios. The details of the calculation are given in Appendix \ref{app1}. In Sec.~\ref{sec:ACPsum} we derive the expressions for the branching ratios ${\rm Br}_{\pm}$ and for the CP asymmetry ratio ${\cal A}_{\rm CP}$, and present the numerical results. Additional details are given in Appendix \ref{app2}. In Sec.~\ref{sec:concl} we present the conclusions.
\section{The processes and formalism for the rare pion decays} \label{sec:form} We consider the lepton number violating (LV) process, Fig.~\ref{FigLV}, and the lepton number conserving (LC) process, Fig.~\ref{FigLC}. \begin{figure}[htb] \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=65mm]{Figcp1a.pdf} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=65mm]{Figcp1b.pdf} \end{minipage} \vspace{-0.4cm} \caption{The lepton number violating (LV) process: (a) the direct (D) channel; (b) the crossed (C) channel.} \label{FigLV} \end{figure} \begin{figure}[htb] \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=65mm]{Figcp2a.pdf} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=65mm]{Figcp2b.pdf} \end{minipage} \vspace{-0.4cm} \caption{The lepton number conserving (LC) process: (a) the direct (D) channel; (b) the crossed (C) channel.} \label{FigLC} \end{figure} We note that if the intermediate neutrinos $N_j$ ($j=1,2$) are Majorana, both processes (LV and LC) take place; and if $N_j$ are Dirac, only the LC process takes place. We will denote the mixing coefficient between the standard flavor neutrino $\nu_{\ell}$ ($\ell = e, \mu, \tau$) and the heavy mass eigenstate $N_j$ as $B_{\ell N_j}$ ($j=1,2$), i.e., this mixing element appears in the relation \begin{equation} \nu_{\ell} = \sum_{k=1}^3 B_{\ell \nu_k} \nu_k + \left( B_{\ell N_1} N_1 + B_{\ell N_2} N_2 \right) \ , \label{mix} \end{equation} where $\nu_k$ ($k=1,2,3$) are the light mass eigenstates. We adopt the phase conventions of Ref.~\cite{Bilenky}, i.e., all the CP-violating phases are incorporated in the PMNS matrix of mixing elements.
The decay widths and asymmetries of these processes may become appreciable only if the two intermediate neutrinos $N_j$ are on-shell, i.e., if \begin{equation} (M_{\mu} + M_e) < M_{N_j} < (M_{\pi}-M_e) \ , \label{MNjint} \end{equation} that is, when the masses $M_{N_j}$ are within the interval (106.2 MeV, 139 MeV). From now on, unless otherwise stated, we will use the simplified notations for the decay widths of these rare processes: \begin{equation} \Gamma^{(X)}(\pi^{\pm}) \equiv \Gamma^{(X)}(\pi^{\pm} \to e^{\pm} e^{\pm} \mu^{\mp} \nu) \ , \quad (X={\rm LV, \ LC}) \ . \label{not0} \end{equation} The decay widths $\Gamma^{(X)}(\pi^{\pm})$ can be written in the form \begin{equation} \Gamma^{(X)}(\pi^{\pm}) = \frac{1}{2!} \frac{1}{2 M_{\pi}} \frac{1}{(2 \pi)^8} \int d_4 \; | {\cal T}^{(X)}(\pi^{\pm}) | ^2 \ , \label{GX1} \end{equation} where $1/2!$ is the symmetry factor due to the two identical final-state $e^{\pm}$, and $d_4$ denotes the integration over the 4-particle final phase space \begin{equation} d_4 =\left( \prod_{j=1}^2 \frac{d^3 {\vec p}_j}{2 E_e({\vec p}_j)} \right) \frac{d^3 {\vec p}_{\mu}}{2 E_{\mu}({\vec p}_{\mu})} \frac{d^3 {\vec p}_{\nu}}{2 |{\vec p}_{\nu}|} \delta^{(4)} \left( p_{\pi} - p_1 - p_2 - p_{\mu} - p_{\nu} \right) \ , \label{d4} \end{equation} and we denoted by $p_1$ and $p_2$ the momenta of the $e^{\pm}$ leptons attached to the left and the right vertex of the direct channels, respectively (and just the opposite for the crossed channels). The squared matrix element $| {\cal T}^{(X)}(\pi^{\pm}) | ^2$ in Eq.~(\ref{GX1}) is a combination of contributions from $N_1$ and $N_2$ and from the two channels $D$ (direct) and $C$ (crossed), and is given explicitly in Eq.~(\ref{calTX}) in Appendix \ref{app1}.
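For orientation, the on-shell window quoted after Eq.~(\ref{MNjint}) follows directly from the lepton and pion masses; the mass inputs in the sketch below are assumed PDG-type values, not numbers taken from the text:

```python
# Assumed PDG-type mass inputs in GeV
M_pi, M_mu, M_e = 0.13957, 0.10566, 0.000511

# On-shell condition of Eq. (MNjint): (M_mu + M_e) < M_Nj < (M_pi - M_e)
lower, upper = M_mu + M_e, M_pi - M_e

# Reproduces the quoted interval (106.2 MeV, 139 MeV)
assert abs(lower - 0.1062) < 5e-4
assert abs(upper - 0.1390) < 5e-4
```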
Combining Eqs.~(\ref{GX1}) and (\ref{calTX}), we obtain \begin{eqnarray} \Gamma^{(X)}(\pi^{\pm}) &=& \sum_{i=1}^2 \sum_{j=1}^2 k_{i,\pm}^{(X) *} k_{j,\pm}^{(X)} {\big [} {\overline{\Gamma}}^{(X)}(DD^{*})_{ij} + {\overline{\Gamma}}^{(X)}(CC^{*})_{ij} + {\overline{\Gamma}}_{\pm}^{(X)}(DC^{*})_{ij} + {\overline{\Gamma}}_{\pm}^{(X)}(CD^{*})_{ij} {\big ]} \ , \label{GX2} \end{eqnarray} where $X=$LV, LC; the indices $(i, j)$ indicate contributions from $N_i$ and $N_j$ neutrino exchange amplitudes; and $k_{j,\pm}^{(X)}$ are the corresponding heavy-light mixing factors \begin{equation} \label{kj} k_{j,+}^{{\rm (LV)}} = B_{e N_j}^2 \ , \qquad k_{j,+}^{{\rm (LC)}} = B_{e N_j} B^{*}_{\mu N_j} \ , \qquad k_{j,-}^{(X)} = \left( k_{j,+}^{(X)} \right)^{*} \ . \end{equation} In Eq.~(\ref{GX2}) we denoted by ${\overline{\Gamma}}^{(X)}(YZ^{*})_{ij}$ ($i,j=1,2$) the elements of the normalized (i.e., without mixings) decay width matrices ${\overline{\Gamma}}^{(X)}(YZ^{*})$ ($X=$LV, LC; $Y, Z = D, C$) \begin{equation} \label{GXij} {\overline{\Gamma}}_{\pm}^{(X)}(YZ^{*})_{ij} = K^2 \; \frac{1}{2!} \frac{1}{2 M_{\pi}} \frac{1}{(2 \pi)^8} \int d_4 \; P_i^{(X)}(Y) P_j^{(X)}(Z)^{*} \; T_{\pm}^{(X)}(YZ^{*}) \ , \end{equation} where the expressions for $T_{\pm}^{(X)}(YZ^{*})$ (with $X=$LV, LC) for the direct ($YZ^{*}=DD^{*}$), crossed ($YZ^{*}=CC^{*}$) and direct-crossed interference ($YZ^{*}=DC^{*}$, $CD^{*}$) appearing in Eq.~(\ref{GXij}) are given in Appendix \ref{app1}, Eqs.~(\ref{TLV})-(\ref{TLC}). We note that $T_{+}^{(X)}(DD^{*})=T_{-}^{(X)}(DD^{*})$ and $T_{+}^{(X)}(CC^{*})=T_{-}^{(X)}(CC^{*})$, so that the terms ${\overline{\Gamma}}^{(X)}(DD^{*})_{ij}$ and ${\overline{\Gamma}}^{(X)}(CC^{*})_{ij}$ in Eq.~(\ref{GX2}) have no subscripts $\pm$.
In Eq.~(\ref{GXij}), $P_j^{(X)}(Y)$ ($X=$LV, LC) represent the $N_j$ propagator functions of the direct and crossed channels ($Y=D, C$) \begin{subequations} \label{Pj} \begin{eqnarray} P_j^{\rm (LC)}(D) &=& \frac{1}{\left[ (p_{\pi}-p_1)^2 - M_{N_j}^2 + i \Gamma_{N_j} M_{N_j} \right]} , \qquad P_j^{\rm (LV)}(D) = M_{N_j} P_j^{\rm (LC)}(D) , \label{PjD} \\ P_j^{\rm (LC)}(C) &=& \frac{1}{\left[ (p_{\pi}-p_2)^2 - M_{N_j}^2 + i \Gamma_{N_j} M_{N_j} \right]} , \qquad P_j^{\rm (LV)}(C) = M_{N_j} P_j^{\rm (LC)}(C) , \label{PjC} \end{eqnarray} \end{subequations} and the constant $K^2$ is \begin{equation} K^2 = G_F^4 f_{\pi}^2 |V_{ud}|^2 \approx 2.983 \times 10^{-22} \ {\rm GeV}^{-6} \ . \label{Ksqr} \end{equation} Several symmetry relations are valid between the normalized matrices (\ref{GXij}), cf.~Eqs.~(\ref{symm}) in Appendix \ref{app1}; the most important is that ${\overline{\Gamma}}^{(X)}(DD^{*})={\overline{\Gamma}}^{(X)}(CC^{*})$ and that this ($2 \times 2$) matrix is self-adjoint. Later we will see that the direct-crossed interference contributions ${\overline{\Gamma}}^{(X)}(DC^{*}), {\overline{\Gamma}}^{(X)}(CD^{*})$ are suppressed by several orders of magnitude in comparison to ${\overline{\Gamma}}^{(X)}(DD^{*})$. The branching ratios are obtained by dividing the calculated decay widths $\Gamma^{(X)}(\pi^{\pm})$, Eqs.~(\ref{not0})-(\ref{GX1}) and (\ref{GX2}), by the total decay width of the charged pion $\Gamma(\pi^+ \to {\rm all})$ \begin{equation} \Gamma(\pi^+ \to {\rm all}) = 2.529 \times 10^{-17} \ {\rm GeV} \approx \frac{1}{8\pi} G^{2}_{F}f^{2}_{\pi} M^{2}_{\mu} M_{\pi} |V_{ud}|^2 \left ( 1-\frac{M^{2}_{\mu}}{M^{2}_{\pi}} \right )^2 \ .
\label{Gpiall} \end{equation} Another important quantity in the evaluations of ${\overline{\Gamma}}^{(X)}(YZ^{*})$ [and ${\rm Br}^{(X)}(YZ^{*})$] is the total decay width $\Gamma_{N_j}$ of the intermediate on-shell neutrinos, which for the mass range of interest [Eq.~(\ref{MNjint})] can be approximated in the following way: \begin{equation} \Gamma_{N_j} \approx {\cal C}\ {\widetilde {\cal K}}_j {\overline{\Gamma}}(M_{N_j}) \ , \label{DNwidth} \end{equation} where \begin{equation} {\overline{\Gamma}}(M_{N_j}) \equiv \frac{G_F^2 M_{N_j}^5}{192\pi^3} \ , \label{barG} \end{equation} and ${\cal C} =2$ if $N_j$ is a Majorana neutrino, and ${\cal C} =1$ if $N_j$ is a Dirac neutrino. The factor ${\widetilde {\cal K}}_j$ encodes the dependence on the heavy-light mixing factors, coming from the charged-current channels and the neutral-current channels mediated by $Z$. Using the results of Appendix C of Ref.~\cite{Atre}, the factor ${\widetilde {\cal K}}_j$ is obtained as \begin{equation} {\widetilde {\cal K}}_j \approx 1.6 \; |B_{e N_j}|^2 + 1.1 \; ( |B_{\mu N_j}|^2 + |B_{\tau N_j}|^2 ) \ , \quad (j=1,2) \ . \label{calK} \end{equation} The charged and neutral channel contributions produce only (light) neutrinos and $e^+ e^-$; decays with a muon in the final state are suppressed by a kinematical factor $f(M^2_{\mu}/M^2_{N_j}) <10^{-2}$ and are neglected in the formula (\ref{calK}).\footnote{ Eq.~(\ref{calK}) is obtained by using Eqs.~(C.6)-(C.9) of Ref.~\cite{Atre}, for the channels $N_j \to e^+ e^- \nu_{\ell}$, $\nu_{\ell} {\overline \nu}_{\ell} \nu_{\ell'}$.
The coefficients in the corresponding formula (2.3) of Ref.~\cite{JHEP} are not correct.} \section{The branching ratios and the CP asymmetry for the rare decays} \label{sec:ACPsum} In this Section we use the results of the previous Section to obtain the results for the branching ratios ${\rm Br}_{\pm}^{(X)}$ and the CP asymmetry ratios ${\cal A}^{(X)}_{\rm CP}$ ($X={\rm LV, LC}$) of the discussed rare processes \begin{eqnarray} {\rm Br}_{\pm}^{(X)} &=& \frac{S^{(X)}_{\pm}(\pi)}{\Gamma(\pi^+ \to {\rm all})} \equiv \frac{ \Gamma^{(X)}(\pi^-) \pm \Gamma^{(X)}(\pi^+)}{\Gamma(\pi^+ \to {\rm all})} \ , \label{Brdef} \\ {\cal A}^{(X)}_{\rm CP} &=& \frac{{\rm Br}_{-}^{(X)}}{{\rm Br}_{+}^{(X)}} = \frac{ \Gamma^{(X)}(\pi^-) - \Gamma^{(X)}(\pi^+)}{ \Gamma^{(X)}(\pi^-) + \Gamma^{(X)}(\pi^+)} \ , \label{Adef} \end{eqnarray} where we recall the use of notations (\ref{not0}). The total branching ratios are ${\rm Br}_{\pm} ={\rm Br}_{\pm}^{\rm (LV)}+{\rm Br}_{\pm}^{\rm (LC)}$ when $N_j$ are Majorana neutrinos, and ${\rm Br}_{\pm} ={\rm Br}_{\pm}^{\rm (LC)}$ when $N_j$ are Dirac neutrinos. It is useful to introduce the following notations related to the heavy-light neutrino mixing elements $B_{e N_j}$ and $B_{\mu N_j}$, where we adopt the convention $M_{N_2} > M_{N_1}$: \begin{subequations} \label{not} \begin{eqnarray} \kappa_e & = & \frac{|B_{e N_2}|}{|B_{e N_1}|} \ , \quad \kappa_{\mu} = \frac{|B_{\mu N_2}|}{|B_{\mu N_1}|} \ , \label{kap} \\ B_{e N_j} &=& |B_{e N_j}| e^{i \theta_{e j}} \ , \quad B_{\mu N_j} = |B_{\mu N_j}| e^{i \theta_{\mu j}} \ , \label{thellj} \\ \theta^{\rm (LV)} & = & 2 (\theta_{e 2} - \theta_{e 1}) \ , \quad \theta^{\rm (LC)} = (\theta_{e 2} - \theta_{e 1}) - (\theta_{\mu 2} - \theta_{\mu 1}) \ .
\label{delth} \end{eqnarray} \end{subequations} It turns out (see below) that in our cases of interest the $D$-$C$ interference contributions are negligible, and the resulting sums $S_{+}^{(X)}(\pi)$ of the decay widths are \begin{subequations} \label{SplX} \begin{eqnarray} S_{+}^{\rm (LV)}(\pi) & \equiv & \left( \Gamma^{\rm (LV)}(\pi^-) + \Gamma^{\rm (LV)}(\pi^+) \right) \nonumber\\ &= & 4 |B_{e N_1}|^4 {\overline{\Gamma}}^{\rm (LV)}(DD^{*})_{11} \left[ 1 + \kappa_e^4 \frac{{\overline{\Gamma}}^{\rm (LV)}(DD^{*})_{22}}{{\overline{\Gamma}}^{\rm (LV)}(DD^{*})_{11}} + 2 \kappa_e^2 \left( \cos \theta^{\rm (LV)} \right) \delta_1^{\rm (LV)} \right] \ , \label{SplLV} \\ S_{+}^{\rm (LC)}(\pi) & \equiv & \left( \Gamma^{\rm (LC)}(\pi^-) + \Gamma^{\rm (LC)}(\pi^+) \right) \nonumber\\ &= & 4 |B_{e N_1}|^2 |B_{\mu N_1}|^2 {\overline{\Gamma}}^{\rm (LC)}(DD^{*})_{11} \left[ 1 + \kappa_e^2 \kappa_{\mu}^2 \frac{{\overline{\Gamma}}^{\rm (LC)}(DD^{*})_{22}}{{\overline{\Gamma}}^{\rm (LC)}(DD^{*})_{11}} + 2 \kappa_e \kappa_{\mu} \left( \cos \theta^{\rm (LC)} \right) \delta_1^{\rm (LC)} \right] \ , \label{SplLC} \end{eqnarray} \end{subequations} where $\delta_j^{(X)}$ in the above quantities represent the (relative) contribution of the $N_1$-$N_2$ interference channel \begin{equation} \delta_j^{(X)} \equiv \frac{{\rm Re} {\overline{\Gamma}}^{(X)}(DD^{*})_{12}}{ {\overline{\Gamma}}^{(X)}(DD^{*})_{jj}} \ , \quad (X=LV, LC; \; j=1,2) \ .
\label{delX} \end{equation} On the other hand, the difference $S_{-}^{(X)}(\pi)$ of the $\pi^{-}$ and $\pi^{+}$ rare decays is (where the $D$-$C$ interference terms are neglected) \begin{subequations} \label{SmiX} \begin{eqnarray} S_{-}^{\rm (LV)}(\pi) & \equiv & \left( \Gamma^{\rm (LV)}(\pi^-) - \Gamma^{\rm (LV)}(\pi^+) \right) = 8 |B_{e N_1}|^4 \kappa_e^2 \left( \sin \theta^{\rm (LV)} \right) {\rm Im} {\overline{\Gamma}}^{\rm (LV)}(DD^{*})_{12} \ , \label{SmiLV} \\ S_{-}^{\rm (LC)}(\pi) & \equiv & \left( \Gamma^{\rm (LC)}(\pi^-) - \Gamma^{\rm (LC)}(\pi^+) \right) = 8 |B_{e N_1}|^2 |B_{\mu N_1}|^2 \kappa_e \kappa_{\mu} \left( \sin \theta^{\rm (LC)} \right) {\rm Im} {\overline{\Gamma}}^{\rm (LC)}(DD^{*})_{12} \ . \label{SmiLC} \end{eqnarray} \end{subequations} In these expressions we can recognize (a posteriori) the differences of the CP-odd phases, $\theta^{(X)}$ ($X=LV, LC$), coming from the PMNS mixing matrix elements, cf.~Eqs.~(\ref{thellj})-(\ref{delth}); while the sine of the difference of the CP-even phases is contained in the imaginary part of the product of propagators, ${\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12} \propto {\rm Im} P_1^{(X)}(D) P_2^{(X)}(D)^{*}$, cf.~Eqs.~(\ref{ImP1P2gen}) later. In the limit of $\Gamma_{N_j} \to +0$, i.e., $\Gamma_{N_j} \ll M_{N_j}$, the expression for the ``diagonal'' decay width ${\overline{\Gamma}}^{(X)}(DD^{*})_{11}$ [and thus also for ${\overline{\Gamma}}^{(X)}(DD^{*})_{22}$] can be calculated analytically.
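The phase structure of Eqs.~(\ref{SplX}) and (\ref{SmiX}) follows directly from the decomposition (\ref{GX2}). As a numerical cross-check, the sketch below (LV case; an arbitrary Hermitian matrix standing in for ${\overline{\Gamma}}(DD^{*})={\overline{\Gamma}}(CC^{*})$, arbitrary illustrative mixing magnitudes and phases, and the $DC^{*}$, $CD^{*}$ terms dropped) reproduces Eq.~(\ref{SmiLV}):

```python
import numpy as np

# Arbitrary illustrative inputs: Hermitian Gbar = Gbar(DD*) = Gbar(CC*)
Gbar = np.array([[3.0, 0.4 + 0.9j],
                 [0.4 - 0.9j, 2.0]])
# Mixing elements B_{e N_j} = |B_{e N_j}| exp(i theta_ej), values arbitrary
B_e = np.array([1e-4 * np.exp(0.3j), 2e-4 * np.exp(1.1j)])

k_plus = B_e**2                  # k_{j,+}^{(LV)} = B_{e N_j}^2, Eq. (kj)
k_minus = np.conj(k_plus)        # k_{j,-}^{(LV)}

def width(k):
    # Eq. (GX2) with the equal DD* and CC* terms kept and DC*, CD* dropped
    return 2.0 * np.real(np.conj(k) @ Gbar @ k)

S_minus = width(k_minus) - width(k_plus)       # Gamma(pi-) - Gamma(pi+)

theta_LV = 2.0 * (1.1 - 0.3)                   # theta^(LV), Eq. (delth)
expected = (8.0 * abs(B_e[0])**2 * abs(B_e[1])**2
            * np.sin(theta_LV) * Gbar[0, 1].imag)
assert np.isclose(S_minus, expected)           # matches Eq. (SmiLV)
```

The same construction with $k_{j,+}^{\rm (LC)} = B_{e N_j} B^{*}_{\mu N_j}$ reproduces Eq.~(\ref{SmiLC}).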
The differential decay width $d \Gamma^{(X)}/d E_{\mu}$ with respect to the muon energy $E_{\mu}$, in the $N_j$ rest frame, was obtained in Ref.~\cite{JHEP}, and its explicit integration over $E_{\mu}$, for the general case of non-neglected electron mass ($M_e \not= 0$), gives \begin{eqnarray} {\overline{\Gamma}}^{(X)}(DD^{*})_{jj} & \equiv & {\overline{\Gamma}}(DD^{*})_{jj} = \frac{K^2}{192 (2 \pi)^4} \frac{M_{N_j}^{11}} { M_{\pi}^3 \Gamma_{N_j} } \lambda^{1/2}(x_{\pi j}, 1, x_{e j}) \left[ x_{\pi j} -1 + x_{e j}(x_{\pi j} + 2 - x_{e j}) \right] {\cal F}(x_j, x_{e j}) \ , \label{GXDDp} \end{eqnarray} where we use the notations \begin{subequations} \label{notGXDDp} \begin{eqnarray} \lambda(y_1,y_2,y_3) & = & y_1^2 + y_2^2 + y_3^2 - 2 y_1 y_2 - 2 y_2 y_3 - 2 y_3 y_1 \ , \label{lambda} \\ x_{\pi j} &=& \frac{M_{\pi}^2}{M_{N_j}^2} \ , \quad x_{ej} = \frac{M_e^2}{M_{N_j}^2} \ , \quad x_j =\frac{M_{\mu}^2}{M_{N_j}^2} \ , \quad (j=1,2) \ , \label{xjs} \end{eqnarray} \end{subequations} and the function ${\cal F}(x_j, x_{e j})$ is given in Appendix \ref{app2} [Eq.~(\ref{calF})], where the derivation of the expression (\ref{GXDDp}) is given. When $M_e=0$, the result acquires a simpler form \begin{equation} \lim_{M_e \to 0} {\overline{\Gamma}}^{(X)}(DD^{*})_{jj} = \frac{K^2}{192 (2 \pi)^4} \frac{M_{N_j}^{11}} {\Gamma_{N_j} M_{\pi}^3} ( x_{\pi j} - 1 )^2 f (x_j) \ , \label{GXDDp0} \end{equation} where the function $f(x_j) = {\cal F}(x_j,0)$ is \begin{equation} f(x_j) = 1 - 8 x_j + 8 x_j^3 - x_j^4 - 12 x_j^2 \ln x_j \ . \label{fx} \end{equation} We note that the expression (\ref{GXDDp}) is the same for $X=LV$ and $X=LC$. In the range of masses $0.117 \ {\rm GeV} < M_{N_j} < 0.136 \ {\rm GeV}$ the expression (\ref{GXDDp0}) differs from the exact expression (\ref{GXDDp}) [with Eq.~(\ref{calF})] by less than one per cent.
However, for $0.106 \ {\rm GeV} < M_{N_j} < 0.117 \ {\rm GeV}$ and for $0.136 \ {\rm GeV} < M_{N_j} < 0.139 \ {\rm GeV}$ the deviation is more than one per cent. For values of $M_{N_j}$ close to the lower on-shell bound $M_{\mu}+M_e$ ($ \approx 0.1062$ GeV) the deviation is very large, and the expression (\ref{GXDDp}) [with Eq.~(\ref{calF})] must be used instead of Eq.~(\ref{GXDDp0}) for ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}$. We will use the general expression (\ref{GXDDp}) unless otherwise stated. Furthermore, in analogy with ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}$, we can also calculate the analytic expression for the asymmetric difference $S_{-}^{(X)}$ in the limit $\Gamma_{N_j} \to +0$ ($\Gamma_{N_j} \ll M_{N_2}-M_{N_1}$). In order to explain this analogy, we note that in the limit $\Gamma_{N_j} \to +0$ it was crucial to use, in the analytic calculation of ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}$, the identity \begin{eqnarray} |P_j^{\rm (LC)}(D)|^2 &=& \left | \frac{1}{(p_{\pi}-p_1)^2-M^{2}_{N_j}+i \Gamma_{N_j} M_{N_j}} \right | ^2 \nonumber\\ &\approx & \frac{\pi}{M_{N_j} \Gamma_{N_j}} \delta((p_{\pi}-p_1)^2-M^{2}_{N_j})\ ; \quad ( j=1,2; \; \Gamma_{N_j} \ll M_{N_j} ) \ .
\label{P1P1} \end{eqnarray} On the other hand, in the difference $S_{-}^{(X)} \propto {\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12}$ we have in the integrand of ${\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12}$ as a factor the following combination of propagators: \begin{subequations} \label{ImP1P2gen} \begin{eqnarray} {\rm Im} P_1^{\rm (LC)}(D) P_2^{\rm (LC)}(D)^{*} &= & \frac{ \left( p_N^2 - M_{N_1}^2 \right) \Gamma_{N_2} M_{N_2} - \Gamma_{N_1} M_{N_1} \left( p_N^2 - M_{N_2}^2 \right) } { \left[ \left( p_N^2 - M_{N_1}^2 \right)^2 + \Gamma_{N_1}^2 M_{N_1}^2 \right] \left[ \left( p_N^2 - M_{N_2}^2 \right)^2 + \Gamma_{N_2}^2 M_{N_2}^2 \right] } \label{ImP1P2ex} \\ & \approx & \mathcal{P} \left ( \frac{1}{p^{2}_{N}-M^{2}_{N1}} \right ) \pi\ \delta (p^{2}_{N}-M^{2}_{N2}) - \pi\ \delta (p^{2}_{N}-M^{2}_{N1}) \mathcal{P} \left ( \frac{1}{p^{2}_{N}-M^{2}_{N2}} \right ) \label{ImP1P2a} \\ & = & \frac{\pi}{M^{2}_{N2}-M^{2}_{N1}} \left [ \delta ( p^{2}_{N}-M^{2}_{N2})+ \delta ( p^{2}_{N}-M^{2}_{N1}) \right ] \ , \label{ImP1P2} \end{eqnarray} \end{subequations} where $p_N=(p_{\pi}-p_1)$ in the direct channel, and we assumed that $\Gamma_{N_j} \ll | \Delta M_N | \equiv M_{N_2}-M_{N_1}$ in Eqs.~(\ref{ImP1P2a})-(\ref{ImP1P2}). When $X=LV$, the corresponding combination of propagators is the same as in Eq.~(\ref{ImP1P2}) but with the additional factor $M_{N_1} M_{N_2}$. The expression (\ref{ImP1P2}) has formally the same structure as the expression (\ref{P1P1}), except for the factors in front of the delta functions. Therefore, the integration over the final phase space can be performed formally in the same way.
This then results in the expressions \begin{subequations} \label{ImG12} \begin{eqnarray} \lefteqn{ {\rm Im} {\overline{\Gamma}}^{\rm (LV)}(DD^{*})_{12} = \eta^{\rm (LV)} \; \frac{K^2}{192 (2 \pi)^4} \frac{1}{M_{\pi}^3} \frac{M_{N_1} M_{N_2}}{(M_{N_2}+M_{N_1}) (M_{N_2}-M_{N_1})} } \nonumber\\ && \times \sum_{j=1}^2 M_{N_j}^{10} \lambda^{1/2}(x_{\pi j}, 1, x_{e j}) \left[ x_{\pi j} -1 + x_{e j}(x_{\pi j} + 2 - x_{e j}) \right] {\cal F}(x_j, x_{e j}) \ , \label{ImG12LV} \\ \lefteqn{ {\rm Im} {\overline{\Gamma}}^{\rm (LC)}(DD^{*})_{12} = \eta^{\rm (LC)} \; \frac{K^2}{192 (2 \pi)^4} \frac{1}{M_{\pi}^3} \frac{1}{(M_{N_2}+M_{N_1})(M_{N_2}-M_{N_1})} } \nonumber\\ && \times \sum_{j=1}^2 M_{N_j}^{12} \lambda^{1/2}(x_{\pi j}, 1, x_{e j}) \left[ x_{\pi j} -1 + x_{e j}(x_{\pi j} + 2 - x_{e j}) \right] {\cal F}(x_j, x_{e j}) \ , \label{ImG12LC} \end{eqnarray} \end{subequations} where the overall factor $\eta^{(X)}$ is equal to unity ($\eta^{(X)} = 1$) when $\Gamma_{N_j} \ll | \Delta M_N |$, i.e., when the identity (\ref{ImP1P2}) can be applied. Nonetheless, when $\Gamma_{N_j} \not\ll | \Delta M_N |$, we have in general corrections to these formulas, in the form of $\eta < 1$,\footnote{ We note that there is no such overall correction factor in the expression (\ref{GXDDp}) for ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}$, because in our considered cases $\Gamma_{N_j} \ll M_{N_j}$ always and Eq.~(\ref{GXDDp}) is the correct expression then.} and the exact expression (\ref{ImP1P2ex}) has to be used instead of the approximation (\ref{ImP1P2}). All these quantities can be evaluated also via numerical integrations over the final phase space, with finite widths $\Gamma_{N_j}$ in the propagators. 
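As one such numerical check in a single variable, the narrow-width approximation (\ref{ImP1P2}) can be tested by integrating the exact expression (\ref{ImP1P2ex}) against a smooth test function; the masses, widths and test function below are arbitrary illustration choices satisfying $\Gamma_{N_j} \ll \Delta M_N$:

```python
import numpy as np

# Illustrative narrow-resonance parameters (Gamma_Nj << Delta M_N), in GeV
M1, M2 = 0.120, 0.121
G1 = G2 = 1e-5

def im_P1P2(s):
    """Exact Im[P_1 P_2^*] of Eq. (ImP1P2ex) as a function of s = p_N^2."""
    a1, a2 = s - M1**2, s - M2**2
    return ((a1 * G2 * M2 - G1 * M1 * a2)
            / ((a1**2 + (G1 * M1)**2) * (a2**2 + (G2 * M2)**2)))

phi = lambda s: np.exp(-20.0 * (s - 0.0145)**2)   # smooth test function

# Dense grid resolving the two narrow peaks at s = M1^2 and s = M2^2
s = np.linspace(0.010, 0.020, 2_000_001)
vals = im_P1P2(s) * phi(s)
exact = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s))   # trapezoid rule

# Delta-function approximation of Eq. (ImP1P2)
approx = np.pi / (M2**2 - M1**2) * (phi(M2**2) + phi(M1**2))
assert abs(exact / approx - 1.0) < 2e-2
```

Increasing $\Gamma_{N_j}$ toward $\Delta M_N$ in this sketch degrades the agreement, which is exactly the regime where the correction factor $\eta < 1$ discussed above becomes relevant.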
The scalings ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj} \propto 1/\Gamma_{N_j}$ and ${\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12} \propto 1/\Delta M_N$, as suggested by Eqs.~(\ref{GXDDp}) and (\ref{ImG12}), are confirmed numerically (when $\Gamma_{N_j} \ll M_{N_j}$, and $\Gamma_{N_j} \ll \Delta M_N$, respectively). Furthermore, the numerical evaluations indicate clearly that the direct-crossed ($DC^{*}$ and $CD^{*}$) interference contributions to $S_{\pm}^{(X)}(\pi)$ are negligible in all considered cases, in comparison with the corresponding direct ($DD^{*}$) and crossed channel ($CC^{*}$) contributions. Namely, in the sum $S_{+}^{(X)}(\pi)$, the interference contributions ${\rm Re} {\overline{\Gamma}}^{(X)}(DC^{*})_{ij} \sim 10^{-37} \ {\rm GeV}$ are approximately independent of $\Gamma_{N_j}$. On the other hand, ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}={\overline{\Gamma}}^{(X)}(CC^{*})_{jj}$ is, at $\Gamma_N=10^{-4}$ GeV, about two orders of magnitude larger than ${\rm Re} {\overline{\Gamma}}^{(X)}(DC^{*})_{ij}$. ${\overline{\Gamma}}^{(X)}(DD^{*})_{jj}$ grows with decreasing $\Gamma_N$ as $1/\Gamma_N$ [Eq.~(\ref{GXDDp})], while ${\rm Re} {\overline{\Gamma}}^{(X)}(DC^{*})_{ij}$ does not increase and thus becomes relatively negligible for $\Gamma_N < 10^{-4}$ GeV. In the difference (asymmetry) $S_{-}^{(X)}(\pi)$, the $DC^{*}$ interference contribution ${\rm Im} {\overline{\Gamma}}^{(X)}(DC^{*})_{12} \sim 10^{-38} \ {\rm GeV}$ is approximately independent of $\Delta M_N$. On the other hand, ${\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12} = {\rm Im} {\overline{\Gamma}}^{(X)}(CC^{*})_{12}$ is, at $\Delta M_N = 10^{-3}$ GeV, about two orders of magnitude larger than ${\rm Im} {\overline{\Gamma}}^{(X)}(DC^{*})_{12}$. ${\rm Im} {\overline{\Gamma}}^{(X)}(DD^{*})_{12}$ grows with decreasing $\Delta M_N$ as $1/\Delta M_N$ [Eq.~(\ref{ImG12})], while ${\rm Im} {\overline{\Gamma}}^{(X)}(DC^{*})_{12}$ does not increase and thus becomes relatively negligible for $\Delta M_N < 10^{-3}$ GeV.
On the other hand, the numerical evaluations with $\Gamma_{N_j} \not \ll \Delta M_N$ give us the values of the $\delta_j^{(X)}$ [cf.~Eqs.~(\ref{delX}) and (\ref{SplX})] and $\eta^{(X)}$ correction terms, due to the non-negligible overlap of the $N_1$ and $N_2$ resonances. It turns out that these functions are independent of $X$ ($=LV,LC$), and that $\eta$ and $\delta \equiv (1/2)(\delta_1 + \delta_2)$ are effectively functions of only one parameter, $y \equiv \Delta M_N/\Gamma_N$, where $\Delta M_N \equiv M_{N_2}-M_{N_1}$ ($> 0$), and $\Gamma_N = (1/2)(\Gamma_{N_1} +\Gamma_{N_2})$ \begin{subequations} \label{etadel} \begin{eqnarray} \eta &=& \eta(y) \ , \quad y \equiv \frac{\Delta M_N}{\Gamma_N} \ , \quad \Gamma_N \equiv \frac{1}{2} (\Gamma_{N_1} + \Gamma_{N_2}) \ , \label{etadel1} \\ \delta &=&\delta(y) \ , \quad \delta \equiv \frac{1}{2} (\delta_1 + \delta_2) \ , \quad \frac{\delta_1}{\delta_2} = \frac{{\overline{\Gamma}}(DD^*)_{22}}{{\overline{\Gamma}}(DD^*)_{11}} = \frac{\Gamma_{N_1}}{\Gamma_{N_2}} = \frac{{\widetilde {\cal K}}_1}{{\widetilde {\cal K}}_2} \ . \label{etadel2} \end{eqnarray} \end{subequations} The values of $\delta$ ($=\delta^{(X)}$) and $\eta$ ($=\eta^{(X)}$) as functions of $\Delta M_N/\Gamma_N$ can be obtained by numerical integrations over the four-particle final phase space, and are tabulated in Table \ref{tabdelet} (with their estimated uncertainties due to numerical integrations).
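For values of $y$ between the tabulated points, a simple interpolation of the central values of Table~\ref{tabdelet} in $\log_{10} y$ can be used; a minimal sketch (central values only, ignoring the quoted numerical-integration uncertainties):

```python
import numpy as np

# Tabulated correction factors from Table (tabdelet): y = Delta M_N / Gamma_N
y_tab     = np.array([1.00, 1.25, 1.67, 2.50, 5.00, 10.0])
delta_tab = np.array([0.505, 0.392, 0.265, 0.137, 0.038, 0.0100])
eta_tab   = np.array([0.498, 0.610, 0.730, 0.854, 0.957, 0.984])

def delta_of_y(y):
    """Linear interpolation of delta(y) in log10(y) between tabulated points."""
    return np.interp(np.log10(y), np.log10(y_tab), delta_tab)

def eta_of_y(y):
    """Linear interpolation of eta(y) in log10(y) between tabulated points."""
    return np.interp(np.log10(y), np.log10(y_tab), eta_tab)
```

At the tabulated nodes the interpolants reproduce the Table entries exactly, and for intermediate $y$ (e.g.\ $y=2$) they return values between the neighbouring entries; for $y$ outside the tabulated range a dedicated numerical integration would be needed instead.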
\begin{table} \caption{Values of $\delta(y)$ correction terms and $\eta(y)/y$ correction factors for various values of $y \equiv \Delta M_N/\Gamma_N$.} \label{tabdelet} \begin{tabular}{ll|lll} $y \equiv \frac{\Delta M_N}{\Gamma_N}$ & $\log_{10} y$ & $\delta(y)$ & $\eta(y)$ & $\frac{\eta(y)}{y}$ \\ \hline 10.0 & 1.000 & $0.0100 \pm 0.0005$ & $0.984 \pm 0.003$ & $0.0984 \pm 3 \times 10^{-4}$ \\ 5.00 & 0.699 & $0.038 \pm 0.002$ & $0.957 \pm 0.003$ & $0.191 \pm 0.001$ \\ 2.50 & 0.398 & $0.137 \pm 0.006$ & $0.854 \pm 0.003$ & $0.342 \pm 0.001$ \\ 1.67 & 0.222 & $0.265 \pm 0.005$ & $0.730 \pm 0.005$ & $0.438 \pm 0.003$ \\ 1.25 & 0.097 & $0.392 \pm 0.006$ & $0.610 \pm 0.007$ & $0.488 \pm 0.006$ \\ 1.00 & 0.000 & $0.505 \pm 0.010$ & $0.498 \pm 0.005$ & $0.498 \pm 0.005$ \end{tabular} \end{table} We note that the rare process decay widths $S_{+}^{(X)}(\pi)$, Eq.~(\ref{SplX}), are formally quartic in the heavy-light mixing elements $|B_{\ell N}|$, i.e., very small. Nonetheless, they are proportional to the expressions ${\overline{\Gamma}}(DD^{*})_{jj}$, Eq.~(\ref{GXDDp}), which are in turn proportional to $1/\Gamma_{N_j}$ due to the on-shellness of the intermediate $N_j$'s. This $1/\Gamma_{N_j}$ is proportional to $1/{\widetilde {\cal K}}_j \sim 1/|B_{\ell N_j}|^2$ according to Eqs.~(\ref{DNwidth})-(\ref{calK}). Therefore, the on-shellness of $N_j$'s makes the rare process decay widths significantly less suppressed by the mixings: \begin{subequations} \label{onsh} \begin{eqnarray} {\overline{\Gamma}}(DD^{*})_{jj} &\propto& 1/\Gamma_{N_j} \propto 1/{\widetilde {\cal K}}_j \propto 1/|B_{\ell N_j}|^2 \ , \\ S_{+}^{(X)}(\pi) & \propto & |B_{\ell N_j}|^2 \ .
\end{eqnarray} \end{subequations} On the other hand, comparing the expressions (\ref{ImG12}) relevant for the CP asymmetries $S_{-}^{(X)}(\pi)$ (\ref{SmiX}), with the expression (\ref{GXDDp}) relevant for the decay widths $S_{+}^{(X)}(\pi)$ (\ref{SplX}), we see that the asymmetries $S_{-}^{(X)}(\pi)$ are suppressed by mixings as $\sim |B_{\ell N}|^4$, making them in general much smaller than the decay widths $S_{+}^{(X)}(\pi) \propto |B_{\ell N_j}|^2$. However, the asymmetries are proportional to $1/\Delta M_N$ (where $\Delta M_N = M_{N_2} - M_{N_1} > 0$), cf.~Eqs.~(\ref{ImG12}). In general, $\Delta M_N \gg \Gamma_{N_j}$. Nonetheless, in a scenario where $\Delta M_N$ becomes very small and (almost) comparable with $\Gamma_{N_j}$, the asymmetries $S_{-}^{(X)}(\pi)$ can become comparable with the decay widths $S_{+}^{(X)}(\pi)$. A model with two almost degenerate neutrinos $N_j$ in the mass range of $\sim 10^2$ MeV has been constructed and investigated in Ref.~\cite{Shapo}. In particular, in this limit of two almost degenerate neutrinos $N_j$, where now $M_{N_1} \approx M_{N_2} \equiv M_N$, the formulas (\ref{GXDDp}), (\ref{delX}) and (\ref{ImG12}) simplify. In this case, it is convenient to introduce a ``normalized'' branching ratio ${\overline {\rm Br}}$ \begin{eqnarray} {\overline {\rm Br}}(M_N) & \equiv & \frac{1}{4 \pi} \frac{K^2 M_{\pi}^3}{G_F^2 \Gamma(\pi^+ \to {\rm all})} \frac{1}{x_{\pi}^3} \lambda^{1/2}(x_{\pi}, 1, x_{e}) \left[ x_{\pi} -1 + x_{e}(x_{\pi} + 2 - x_{e}) \right] {\cal F}(x, x_{e}) \ , \label{bBr} \end{eqnarray} where we use the notations \begin{eqnarray} x_{\pi} &=& \frac{M_{\pi}^2}{M_{N}^2} \ , \quad x_{e} = \frac{M_e^2}{M_{N}^2} \ , \quad x =\frac{M_{\mu}^2}{M_{N}^2} \ .
\label{xs} \end{eqnarray} In terms of this branching ratio ${\overline {\rm Br}}$, the formulas (\ref{GXDDp}), (\ref{delX}) and (\ref{ImG12}) can be rewritten, in the mentioned almost degenerate scenario, as \begin{subequations} \label{Gnor} \begin{eqnarray} \frac{{\overline{\Gamma}}(DD^{*})_{jj}}{\Gamma(\pi^+ \to {\rm all})} &=& \frac{1}{4 {\cal C} {\widetilde {\cal K}}_j} {\overline {\rm Br}} \ , \label{Gjjnor} \\ \frac{{\rm Re} {\overline{\Gamma}}(DD^{*})_{12}}{\Gamma(\pi^+ \to {\rm all})} &=& \frac{\delta(y)}{2 {\cal C} ({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} {\overline {\rm Br}} \ , \label{ReG12nor} \\ \frac{{\rm Im} {\overline{\Gamma}}(DD^{*})_{12}}{\Gamma(\pi^+ \to {\rm all})} &=& \frac{\eta(y)/y}{2 {\cal C} ({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} {\overline {\rm Br}} \ , \label{ImG12nor} \end{eqnarray} \end{subequations} where $y \equiv \Delta M_N/\Gamma_N$. Similarly, after some algebra, we can rewrite in this scenario ($M_{N_1} \approx M_{N_2} \equiv M_N$) the obtained branching ratios ${\rm Br}_{\pm}$ and CP asymmetry ratios ${\cal A}_{\rm CP}$ for the considered rare decays, in terms of ${\overline {\rm Br}}$ and of the heavy-light mixing parameters. Below we present the results for the case when the neutrinos $N_j$ are Dirac (Di), and when they are Majorana (Ma) neutrinos. The branching ratio ${\rm Br}_{+}$ for the considered rare processes is \begin{subequations} \label{Brpl} \begin{eqnarray} {\rm Br}_{+}^{\rm (Di)} & \equiv & \frac{S_{+}^{\rm (LC)}(\pi)}{\Gamma(\pi^+ \to {\rm all})} \nonumber\\ &=& {\bigg [} \sum_{j=1}^2 \frac{|B_{e N_j}|^2 |B_{\mu N_j}|^2}{{\widetilde {\cal K}}_j} + 4 \delta(y) \frac{|B_{e N_1}||B_{e N_2}||B_{\mu N_1}||B_{\mu N_2}|}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \cos \theta^{\rm (LC)} {\bigg ]} {\overline {\rm Br}}(M_N) \label{BrplDi1} \\ &=& \frac{|B_{e N_1}|^2 |B_{\mu N_1}|^2}{{\widetilde {\cal K}}_1} \left[ 1\! + \! 
\frac{{\widetilde {\cal K}}_1}{{\widetilde {\cal K}}_2} \kappa_e^2 \kappa_{\mu}^2 \! + \! 4 \delta(y) \frac{{\widetilde {\cal K}}_1}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \kappa_e^2 \kappa_{\mu}^2 \cos \theta^{\rm (LC)} \right] {\overline {\rm Br}}(M_N) \ , \label{BrplDi2} \\ {\rm Br}_{+}^{\rm (Ma)} & \equiv & \frac{S_{+}^{\rm (LV)}(\pi)+S_{+}^{\rm (LC)}(\pi)}{\Gamma(\pi^+ \to {\rm all})} \nonumber\\ &=& {\bigg [} \sum_{j=1}^2 \frac{|B_{e N_j}|^2 ( |B_{e N_j}|^2 + |B_{\mu N_j}|^2)}{2 {\widetilde {\cal K}}_j} \nonumber\\ &&+ 2 \delta(y) \frac{|B_{e N_1}||B_{e N_2}|}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \left( |B_{e N_1}||B_{e N_2}| \cos \theta^{\rm (LV)} + |B_{\mu N_1}||B_{\mu N_2}| \cos \theta^{\rm (LC)} \right) {\bigg ]} {\overline {\rm Br}}(M_N) \label{BrplMa1} \\ &=& \frac{|B_{e N_1}|^2 (|B_{e N_1}|^2+|B_{\mu N_1}|^2)}{2 {\widetilde {\cal K}}_1} {\bigg [} 1 + \frac{{\widetilde {\cal K}}_1}{{\widetilde {\cal K}}_2} \kappa_e^2 \left( \frac{\kappa_e^2 |B_{e N_1}|^2 + \kappa_{\mu}^2 |B_{\mu N_1}|^2}{|B_{e N_1}|^2+|B_{\mu N_1}|^2} \right) \nonumber\\ &&+ 4 \delta(y) \frac{{\widetilde {\cal K}}_1}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \kappa_e {\bigg (} \frac{ \kappa_e |B_{e N_1}|^2}{(|B_{e N_1}|^2+|B_{\mu N_1}|^2)} \cos \theta^{\rm (LV)} + \frac{\kappa_{\mu} |B_{\mu N_1}|^2}{(|B_{e N_1}|^2+|B_{\mu N_1}|^2)} \cos \theta^{\rm (LC)} {\bigg )} {\bigg ]} {\overline {\rm Br}}(M_N) . \label{BrplMa2} \end{eqnarray} \end{subequations} Here we took into account that in the Dirac case only the $LC$ process contributes, while in the Majorana case both the lepton number violating ($LV$) and conserving ($LC$) processes contribute. The mixing parameters ${\widetilde {\cal K}}_j$ ($\sim |B_{\ell N_j}|^2$) are given in Eq.~(\ref{calK}), and we took into account that in Eq.~(\ref{DNwidth}) for $\Gamma_{N_j}$ the factor ${\cal C}$ is one in the Dirac case and is two in the Majorana case. 
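For orientation, the degenerate-limit branching ratios of Eqs.~(\ref{BrplDi1}) and (\ref{BrplMa1}) can be evaluated numerically. The sketch below is illustrative only: the mixing values, $\delta(y)$, and ${\overline{\rm Br}}$ fed to these functions are hypothetical placeholders, not predictions.

```python
def br_plus_dirac(Be, Bmu, K, delta_y, cos_lc, br_bar):
    """Eq. (BrplDi1): incoherent sum over j = 1, 2 plus the
    N_1-N_2 interference term, weighted by the overlap factor delta(y)."""
    incoherent = sum(Be[j]**2 * Bmu[j]**2 / K[j] for j in range(2))
    interference = (4.0 * delta_y * Be[0] * Be[1] * Bmu[0] * Bmu[1]
                    / (K[0] + K[1])) * cos_lc
    return (incoherent + interference) * br_bar

def br_plus_majorana(Be, Bmu, K, delta_y, cos_lv, cos_lc, br_bar):
    """Eq. (BrplMa1): both the LV and LC channels contribute."""
    incoherent = sum(Be[j]**2 * (Be[j]**2 + Bmu[j]**2) / (2.0 * K[j])
                     for j in range(2))
    interference = (2.0 * delta_y * Be[0] * Be[1] / (K[0] + K[1])
                    * (Be[0] * Be[1] * cos_lv + Bmu[0] * Bmu[1] * cos_lc))
    return (incoherent + interference) * br_bar
```

In the fully symmetric case ($|B_{e N_j}| = |B_{\mu N_j}|$ for both $j$, equal ${\widetilde {\cal K}}_j$, and $\theta^{\rm (LV)} = \theta^{\rm (LC)}$) both expressions reduce to $2\,(1+\delta\cos\theta)\,|B|^4\,{\overline{\rm Br}}/{\widetilde {\cal K}}$, so the Dirac and Majorana rates coincide, as expected from the formulas above.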
The contributions of the $N_1$-$N_2$ overlap effects give the relative corrections of ${\cal O}(\delta)$ and are negligible when $\Delta M_N > 10 \Gamma_N$, cf.~Table \ref{tabdelet}. The (CP asymmetry) branching ratio ${\rm Br}_{-}$ for the considered rare processes is \begin{subequations} \label{Brmi} \begin{eqnarray} {\rm Br}_{-}^{\rm (Di)} &\equiv& \frac{S_{-}^{\rm (LC)}(\pi)}{\Gamma(\pi^+ \to {\rm all})} = \frac{\Gamma^{\rm (LC)}(\pi^-)-\Gamma^{\rm (LC)}(\pi^+)}{\Gamma(\pi^+ \to {\rm all})} \nonumber\\ &=& \frac{4 |B_{e N_1}| |B_{e N_2}| |B_{\mu N_1}| |B_{\mu N_2}|}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \sin \theta^{\rm (LC)} \frac{\eta(y)}{y} {\overline {\rm Br}}(M_N) \label{BrmiDi1} \\ &=& \frac{4 |B_{e N_1}|^2 |B_{e N_2}| ^2 \kappa_e \kappa_{\mu}}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \sin \theta^{\rm (LC)} \frac{\eta(y)}{y} {\overline {\rm Br}}(M_N) \label{BrmiDi2} \\ {\rm Br}_{-}^{\rm (Ma)} &\equiv& \frac{(S_{-}^{\rm (LV)}(\pi)+S_{-}^{\rm (LC)}(\pi))}{\Gamma(\pi^+ \to {\rm all})} = \frac{\Gamma^{\rm (LV)}(\pi^-)+\Gamma^{\rm (LC)}(\pi^-) -\Gamma^{\rm (LV)}(\pi^+)-\Gamma^{\rm (LC)}(\pi^+)}{\Gamma(\pi^+ \to {\rm all})} \nonumber\\ &=& \frac{2 |B_{e N_1}| |B_{e N_2}|}{({\widetilde {\cal K}}_1+{\widetilde {\cal K}}_2)} \left( |B_{e N_1}| |B_{e N_2}| \sin \theta^{\rm (LV)} + |B_{\mu N_1}| |B_{\mu N_2}| \sin \theta^{\rm (LC)} \right) \frac{\eta(y)}{y} {\overline {\rm Br}}(M_N) \label{BrmiMa1} \\ &=& \frac{2 \kappa_e |B_{e N_1}|^2}{({\widetilde {\cal K}}_1 \! + \! {\widetilde {\cal K}}_2)} \! \left( \kappa_e |B_{e N_1}|^2 \sin \theta^{\rm (LV)} \!\! + \! \kappa_{\mu} |B_{\mu N_1}|^2 \sin \theta^{\rm (LC)} \right) \frac{\eta(y)}{y} {\overline {\rm Br}}(M_N) . 
\label{BrmiMa2} \end{eqnarray} \end{subequations} Consequently, the usual CP asymmetry ratios ${\cal A}_{\rm CP}^{(X)}$ are obtained from Eqs.~(\ref{Brpl})-(\ref{Brmi}) \begin{subequations} \label{ACP} \begin{eqnarray} \lefteqn{ {\cal A}_{\rm CP}^{\rm (Di)} \equiv \frac{{\rm Br}_{-}^{\rm (Di)}}{{\rm Br}_{+}^{\rm (Di)}} = \frac{\Gamma^{\rm (LC)}(\pi^-)-\Gamma^{\rm (LC)}(\pi^+)}{\Gamma^{\rm (LC)}(\pi^-)+\Gamma^{\rm (LC)}(\pi^+)} } \nonumber\\ &=& \frac{ \sin \theta^{\rm (LC)}}{\left[ \frac{1}{4} \frac{|B_{e N_1}|}{|B_{e N_2}|}\frac{|B_{\mu N_1}|}{|B_{\mu N_2}|} \left(1 + \frac{{\widetilde {\cal K}}_2}{{\widetilde {\cal K}}_1} \right) + \frac{1}{4} \frac{|B_{e N_2}|}{|B_{e N_1}|}\frac{|B_{\mu N_2}|}{|B_{\mu N_1}|} \left(1 + \frac{{\widetilde {\cal K}}_1}{{\widetilde {\cal K}}_2} \right) + \delta(y) \cos \theta^{\rm (LC)} \right] } \; \frac{\eta(y)}{y} \ , \label{ACPDi} \\ \lefteqn{ {\cal A}_{\rm CP}^{\rm (Ma)} \equiv \frac{{\rm Br}_{-}^{\rm (Ma)}}{{\rm Br}_{+}^{\rm (Ma)}} = \frac{\Gamma^{\rm (LV)}(\pi^-)+\Gamma^{\rm (LC)}(\pi^-)-\Gamma^{\rm (LV)}(\pi^+)-\Gamma^{\rm (LC)}(\pi^+)}{\Gamma^{\rm (LV)}(\pi^-)+\Gamma^{\rm (LC)}(\pi^-)+\Gamma^{\rm (LV)}(\pi^+)+\Gamma^{\rm (LC)}(\pi^+)} } \nonumber\\ &=& \frac{ \left( \sin \theta^{\rm (LV)} + \frac{|B_{\mu N_1}||B_{\mu N_2}|}{|B_{e N_1}||B_{e N_2}|}\sin \theta^{\rm (LC)} \right) } {\left[ \frac{1}{4} \frac{(|B_{e N_1}|^2 +|B_{\mu N_1}|^2) }{|B_{e N_2}|^2} \left(1 + \frac{{\widetilde {\cal K}}_2}{{\widetilde {\cal K}}_1} \right) + \frac{1}{4} \frac{(|B_{e N_2}|^2 +|B_{\mu N_2}|^2) }{|B_{e N_1}|^2} \left(1 + \frac{{\widetilde {\cal K}}_1}{{\widetilde {\cal K}}_2} \right) + \delta(y) \left( \cos \theta^{\rm (LV)} + \frac{|B_{\mu N_1}||B_{\mu N_2}|}{|B_{e N_1}||B_{e N_2}|} \cos \theta^{\rm (LC)} \right) \right] } \nonumber\\ && \times \frac{\eta(y)}{y} \ . 
\label{ACPMa} \end{eqnarray} \end{subequations} When $y$ ($\equiv \Delta M_N/\Gamma_N$) becomes large ($y > 10$), i.e., when $\Delta M_N > 10 \Gamma_N$, Table \ref{tabdelet} implies that the CP asymmetries (\ref{Brmi})-(\ref{ACP}) become suppressed by the $\eta(y)/y$ factor. On the other hand, when $y < 10$ (i.e., $\Delta M_N < 10 \Gamma_N$) and $|\theta^{(X)}| \sim 1$, the factor $\eta(y)/y$ is $\sim 1$ and the CP asymmetry ratios ${\cal A}_{\rm CP}^{(X)}$ become $\sim 1$,\footnote{ If we also assume that $|B_{\ell N_2}| \approx |B_{\ell N_1}|$ (for $\ell = e, \mu, \tau$), then also ${\widetilde {\cal K}}_1 \approx {\widetilde {\cal K}}_2 \equiv {\widetilde {\cal K}}$, and the expressions for ${\cal A}_{\rm CP}$ become particularly simple \begin{displaymath} {\cal A}_{\rm CP}^{\rm (Di)} = \frac{\sin \theta^{\rm (LC)}} {\left(1 + \delta(y) \cos \theta^{\rm (LC)} \right)} \; \frac{\eta(y)}{y} = \sin \theta^{\rm (LC)} \; \frac{\eta(y)}{y} \left(1 + {\cal O}(\delta) \right) \ , \label{ABrkap1Di} \end{displaymath} \begin{displaymath} {\cal A}_{\rm CP}^{\rm (Ma)} = \left( \frac{|B_{e N_1}|^2 \sin \theta^{\rm (LV)} + |B_{\mu N_1}|^2 \sin \theta^{\rm (LC)}}{|B_{e N_1}|^2+|B_{\mu N_1}|^2} \right) \frac{\eta(y)}{y} \left(1 + {\cal O}(\delta) \right) \ . \label{ABrkap1Ma} \end{displaymath} } while all ${\rm Br}_{\pm}$ become $\sim |B_{\ell N_j}|^2 {\overline {\rm Br}}(M_N)$ ($\ell=e, \mu$). \begin{figure}[htb] \centering\includegraphics[width=100mm]{Figcp3.pdf} \vspace{-0.4cm} \caption{The normalized branching ratio ${\overline {\rm Br}}$, Eq.~(\ref{bBr}), as a function of the mass $M_{N_1} \approx M_{N_2} \equiv M_N$. The full formula was used (with $M_e=0.511 \times 10^{-3}$ GeV). 
The formula for the $M_e=0$ case gives a line that is indistinguishable in this figure from the depicted line.} \label{bBrfig} \end{figure} \begin{figure}[htb] \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=73mm]{Figcp4a.pdf} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering\includegraphics[width=73mm]{Figcp4b.pdf} \end{minipage} \vspace{-0.4cm} \caption{\footnotesize The normalized branching ratio ${\overline {\rm Br}}$ near the lower end point $M_{\mu} +M_e$ ($=0.1062$ GeV): (a) in the interval below $0.107$ GeV; (b) in the interval below $0.110$ GeV. The dashed line is for $M_e=0$; the full line includes the effects of $M_e = 0.511 \times 10^{-3}$ GeV.} \label{bBrendfig} \end{figure} We present in Fig.~\ref{bBrfig} the normalized quantity ${\overline {\rm Br}}$ as a function of $M_N$ in the on-shell kinematic interval (\ref{MNjint}), and in Fig.~\ref{bBrendfig} the same curve near the lower end point $M_N \approx M_{\mu}+M_e$ ($=0.1062$ GeV), where the effects of $M_e \not= 0$ are relatively appreciable. Further, in Fig.~\ref{etdelfig} we present the curves of the overlap suppression factors $\eta(y)/y$ and $\delta(y)$ as a function of the $N_1$-$N_2$ overlap parameter $y \equiv \Delta M_N/\Gamma_N$. On the other hand, the (CP asymmetry) branching ratio ${\rm Br}_{-}$ in the case of unit mixings and maximal CP phases (i.e., when $B_{\ell N_j}=1$ for all $\ell$ and $\sin \theta^{(X)}=1$, so that ${\rm Br}_{-}^{\rm (Di)} = {\rm Br}_{-}^{\rm (Ma)} \equiv {\rm Br}_{-}$), as a function of $\Delta M_N$, is presented in Fig.~\ref{Brfig}. In that figure, no overlap suppression appears at the presented values of $\Delta M_N$, i.e., $\eta=1$. 
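In the same spirit, the Dirac CP asymmetry ratio of Eq.~(\ref{ACPDi}) can be evaluated with the central values of $\delta(y)$ and $\eta(y)/y$ from Table~\ref{tabdelet}; the mixing inputs below are again hypothetical.

```python
import math

# central values from Table (tabdelet): y -> (delta(y), eta(y)/y)
OVERLAP_TABLE = {10.0: (0.0100, 0.0984), 5.0: (0.038, 0.191),
                 2.5: (0.137, 0.342), 1.67: (0.265, 0.438),
                 1.25: (0.392, 0.488), 1.0: (0.505, 0.498)}

def acp_dirac(Be, Bmu, K, theta_lc, y):
    """Eq. (ACPDi): CP asymmetry ratio A_CP for Dirac N_j."""
    delta_y, eta_over_y = OVERLAP_TABLE[y]
    denom = (0.25 * (Be[0] / Be[1]) * (Bmu[0] / Bmu[1]) * (1.0 + K[1] / K[0])
             + 0.25 * (Be[1] / Be[0]) * (Bmu[1] / Bmu[0]) * (1.0 + K[0] / K[1])
             + delta_y * math.cos(theta_lc))
    return math.sin(theta_lc) / denom * eta_over_y
```

For equal mixings and $\theta^{\rm (LC)} = \pi/2$ the denominator collapses to $1$, reproducing the footnote formula ${\cal A}_{\rm CP}^{\rm (Di)} = \sin \theta^{\rm (LC)}\, \eta(y)/y\,(1+{\cal O}(\delta))$: the asymmetry is $\approx 0.50$ at $y=1$ but only $\approx 0.098$ at $y=10$.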
\begin{figure}[htb] \centering\includegraphics[width=110mm]{Figcp5.pdf} \vspace{-0.4cm} \caption{The suppression factors $\eta(y)/y$ and $\delta(y)$, due to the overlap of the $N_1$ and $N_2$ resonances, as a function of $y \equiv \Delta M_N/\Gamma_N$, for $1 < y < 10$.} \label{etdelfig} \end{figure} \begin{figure}[htb] \centering\includegraphics[width=110mm]{Figcp6.pdf} \vspace{-0.4cm} \caption{The (CP asymmetry) branching ratio ${\rm Br}_{-}$ as a function of $\Delta M_N = M_{N_2} - M_{N_1}$, for unit mixings ($B_{\ell N_j}=1$) and large CP-violating phases ($\sin \theta^{(X)}=1$), for four different values of $M_{N_2}$. No suppression effects from the overlap of the $N_1$ and $N_2$ resonances appear here ($\eta=1$).} \label{Brfig} \end{figure} Therefore, when $y \equiv \Delta M_N/\Gamma_N < 5$, i.e., in the almost degenerate case of two on-shell neutrinos $N_j$, we can in general expect the CP asymmetry ratio ${\cal A}_{\rm CP}$ of the considered rare process to be $\sim 1$. The branching ratio for this process, in the case of one $N$ neutrino, was considered in Ref.~\cite{JHEP},\footnote{ It was considered in the $M_e=0$ limit, but the general conclusions remain unchanged with respect to the $M_e = 0.511$ MeV case.} and all the conclusions about the measurability of that branching ratio ${\rm Br} \approx (1/2) {\rm Br}_{+}$ can be carried over to the measurability of the (CP asymmetry) branching ratio ${\rm Br}_{-}$ in the described almost degenerate scenario, provided that $|\theta^{\rm (LC)}|, |\theta^{\rm (LV)}| \sim 1$. This means that the CP asymmetries could be measured in the future pion factories in the described scenarios, provided that the heavy-light mixing parameters $|B_{\ell N_j}|^2$ ($\ell = e, \mu$) are not many orders of magnitude below the present experimental upper bounds. 
The present experimental bounds on the mixing parameters $|B_{\ell N_j}|^2$ ($\ell = e, \mu, \tau$) in the considered mass range (\ref{MNjint}) are: $|B_{e N_j}|^2 \stackrel{<}{\sim} 10^{-8}$ \cite{PIENU:2011aa}; $|B_{\mu N_j}|^2 \stackrel{<}{\sim} 10^{-6}$ \cite{BmuN}; $|B_{\tau N_j}|^2 \stackrel{<}{\sim} 10^{-4}$ \cite{BtauN}; cf. also Refs.~\cite{Atre,Ruchayskiy:2011aa}. The future pion factories, among them the Project X at Fermilab, will produce charged pions with lab energies $E_{\pi}$ of a few GeV (i.e., the time dilation factor $\gamma_{\pi} \sim 10^1$) and luminosities $\sim 10^{22} \ {\rm cm}^{-2} {\rm s}^{-1}$ \cite{ProjX,Geer}; hence, $\sim 10^{29}$ charged pions could be expected per year. The probability for an (on-shell) neutrino $N$ to decay inside a detector of length $L \sim 10^1$ m in such pion factories is \begin{equation} P_N \sim \frac{L}{\gamma_{\pi} \tau_N} = \frac{L \Gamma_N}{\gamma_{\pi}} \sim \frac{10^{-2}}{\gamma_{\pi}} {\widetilde {\cal K}} \sim 10^{-3} {\widetilde {\cal K}} \ , \label{PN} \end{equation} where ${\widetilde {\cal K}} \sim {\widetilde {\cal K}}_j \propto |B_{\ell N_j}|^2$. We should multiply the obtained branching ratios ${\rm Br}_{\pm}$ by such acceptance factors $P_N$ to obtain the effective branching ratios ${\rm Br}_{\pm}^{\rm (eff)}$. If the largest among the mixing elements $|B_{\ell N_j}|^2$ ($\ell=e,\mu$) are $|B_{\mu N_j}|^2$ ($\sim |B_{\mu N}|^2$) ($j=1,2$), i.e., if we have $|B_{\mu N}|^2 \gg |B_{e N_j}|^2$ ($\sim |B_{e N}|^2$), the formulas (\ref{PN}) with (\ref{Brpl}) and (\ref{Brmi}) give \begin{subequations} \label{mudom} \begin{eqnarray} P_N {\rm Br}_{+}^{\rm (Di,Ma)} & \sim & 10^{-3} |B_{e N}|^2 |B_{\mu N}|^2 {\overline {\rm Br}}(M_N) \sim |B_{e N}|^2 |B_{\mu N}|^2 10^{-7} \ , \label{mudomBr} \\ P_N {\rm Br}_{-}^{\rm (Di,Ma)} & \sim & 10^{-3} |B_{e N}|^2 |B_{\mu N}|^2 \sin \theta^{\rm (LC)} {\overline {\rm Br}}(M_N) \sim |B_{e N}|^2 |B_{\mu N}|^2 \sin \theta^{\rm (LC)} 10^{-7} \ . 
\label{mudomA} \end{eqnarray} \end{subequations} In these relations, we took into account that the LC process dominates over the LV process in the considered case, and that ${\overline {\rm Br}} \sim 10^{-4}$ in most of the on-shell interval for the masses $M_{N_1} \approx M_{N_2} \equiv M_N$, cf.~Fig.~\ref{bBrfig}. If in this case, in addition, $|B_{\ell N_j}|^2$ ($\ell=e,\mu$) are close to their present upper bounds, $|B_{e N_j}|^2 \sim 10^{-8}$ and $|B_{\mu N_j}|^2 \sim 10^{-6}$, this implies that $P_N {\rm Br}_{+} \sim 10^{-21}$ and $P_N {\rm Br}_{-} \sim 10^{-21}$ (the latter provided $\sin \theta^{(X)} \sim 1$), implying that $\sim 10^8$ events can be detected per year, with the difference between $\pi^-$ and $\pi^+$ decays also of the order $\sim 10^8$. This number decreases in proportion to the factor $|B_{e N}|^2 |B_{\mu N}|^2$ as this factor decreases. In this scenario there is almost no difference between the case when $N_j$ are Dirac and the case when $N_j$ are Majorana. On the other hand, if the largest among the mixing elements $|B_{\ell N_j}|^2$ ($\ell=e,\mu$) are $|B_{e N_j}|^2$ ($\sim |B_{e N}|^2$) ($j=1,2$), i.e., if we have $|B_{e N}|^2 \gg |B_{\mu N_j}|^2$ ($\sim |B_{\mu N}|^2$), the formulas (\ref{PN}) with (\ref{Brpl}) and (\ref{Brmi}) give \begin{subequations} \label{edom} \begin{eqnarray} P_N {\rm Br}_{+}^{\rm (Di)} & \sim & 10^{-3} |B_{e N}|^2 |B_{\mu N}|^2 {\overline {\rm Br}}(M_N) \sim |B_{e N}|^2 |B_{\mu N}|^2 10^{-7} \ , \label{mudomBrDi} \\ P_N {\rm Br}_{+}^{\rm (Ma)} & \sim & 10^{-3} |B_{e N}|^4 {\overline {\rm Br}}(M_N) \sim |B_{e N}|^4 10^{-7} \ , \label{mudomBrMa} \\ P_N {\rm Br}_{-}^{\rm (Di)} & \sim & 10^{-3} |B_{e N}|^2 |B_{\mu N}|^2 \sin \theta^{\rm (LC)} {\overline {\rm Br}}(M_N) \sim |B_{e N}|^2 |B_{\mu N}|^2 \sin \theta^{\rm (LC)} 10^{-7} \ , 
\label{mudomADi} \\ P_N {\rm Br}_{-}^{\rm (Ma)} & \sim & 10^{-3} |B_{e N}|^4 \sin \theta^{\rm (LV)} {\overline {\rm Br}}(M_N) \sim |B_{e N}|^4 \sin \theta^{\rm (LV)} 10^{-7} \ . \label{mudomAMa} \end{eqnarray} \end{subequations} In this case, the LV process dominates over the LC process. If, in addition, $|B_{e N_j}|^2$ are close to their present upper bounds, $|B_{e N_j}|^2 \sim 10^{-8}$ (and $|B_{\mu N}|^2 \ll |B_{e N_j}|^2$), this implies that $P_N {\rm Br}_{+}^{\rm (Ma)} \sim 10^{-23}$ ($\gg P_N {\rm Br}_{+}^{\rm (Di)}$) and $P_N {\rm Br}_{-}^{\rm (Ma)} \sim 10^{-23}$ ($\gg P_N {\rm Br}_{-}^{\rm (Di)}$), assuming that $\sin \theta^{\rm (LV)} \sim 1$. This implies that $\sim 10^6$ events can be detected per year, with the difference between $\pi^-$ and $\pi^+$ decays also of the order $\sim 10^6$, if $N_j$ are Majorana neutrinos (and fewer events if $N_j$ are Dirac neutrinos). This number decreases in proportion to the factor $|B_{e N}|^4$ as this factor decreases. In this scenario there is a clear difference between the case when $N_j$ are Dirac and the case when $N_j$ are Majorana. The present experimental upper bounds on the mixings ($|B_{e N_j}|^2 \stackrel{<}{\sim} 10^{-8}$; $|B_{\mu N_j}|^2 \stackrel{<}{\sim} 10^{-6}$) suggest that the first of the two scenarios is more probable, i.e., that the LC processes dominate over the LV processes. The measurement of the CP asymmetries alone cannot distinguish between the Dirac and the Majorana character of the intermediate neutrinos $N_j$. However, as argued in Ref.~\cite{JHEP}, the neutrino character could be determined from the measured differential decay rates of these processes with respect to the muon energy $E_{\mu}$ in the $N_j$ rest frame, $d \Gamma/d E_{\mu}$, if the heavy-light mixing elements satisfy the relation $|B_{e N_j}| \stackrel{>}{\sim} |B_{\mu N_j}|$ (if $|B_{e N_j}| \ll |B_{\mu N_j}|$, the $LC$ process dominates). 
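The order-of-magnitude estimates in Eqs.~(\ref{mudom}) and (\ref{edom}) can be reproduced in a few lines. The sketch below is illustrative only: the pion yield, the acceptance prefactor of Eq.~(\ref{PN}), and ${\overline {\rm Br}} \sim 10^{-4}$ are the rough benchmark values quoted in the text, not measured inputs.

```python
N_PIONS_PER_YEAR = 1e29   # future pion factories (Project X-type luminosities)
P_N_PREFACTOR = 1e-3      # Eq. (PN): P_N ~ 1e-3 * K-tilde, with K-tilde ~ |B|^2
BR_BAR = 1e-4             # typical normalized branching ratio, cf. Fig. (bBrfig)

def events_mu_dominated(Be2, Bmu2):
    """Eq. (mudomBr): P_N * Br_+ ~ 1e-3 |B_eN|^2 |B_muN|^2 BR_BAR, times the
    number of pions produced per year."""
    return N_PIONS_PER_YEAR * P_N_PREFACTOR * Be2 * Bmu2 * BR_BAR

def events_e_dominated_majorana(Be2):
    """Eq. (mudomBrMa): P_N * Br_+ ~ 1e-3 |B_eN|^4 BR_BAR for Majorana N_j."""
    return N_PIONS_PER_YEAR * P_N_PREFACTOR * Be2**2 * BR_BAR
```

At the current upper bounds, $|B_{e N}|^2 \sim 10^{-8}$ and $|B_{\mu N}|^2 \sim 10^{-6}$, this gives $\sim 10^8$ events per year in the $\mu$-dominated scenario and $\sim 10^6$ in the $e$-dominated Majorana scenario, matching the estimates above.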
\section{Summary} \label{sec:concl} We investigated the rare decays of charged pions, $\pi^{\pm} \to e^\pm N_j \to e^{\pm} e^{\pm} \mu^{\mp} \nu$, in scenarios with two heavy sterile neutrinos $N_j$ ($j=1,2$). Such scenarios allow the mentioned decays to proceed through exchange of on-shell intermediate neutrinos at the tree level, although the decays are suppressed by the heavy-light neutrino mixing elements of the PMNS matrix. These decays can be of the lepton-number-conserving (LC) type ($\nu = \nu_{e}, {\bar \nu}_e$) or of the lepton-number-violating (LV) type ($\nu={\bar \nu}_{\mu}, \nu_{\mu}$). If the $N_j$ neutrinos are of Dirac nature, only LC decays take place; if they are of Majorana nature, both LC and LV decays take place. In Ref.~\cite{JHEP} such processes were studied with a view to ascertaining the nature of the intermediate neutrino $N_j$, and it was shown there that it may be possible to do this in the future pion factories, where the number of produced charged pions will be exceedingly high. In the present work, on the other hand, we investigated the possibility of establishing CP violation in such processes. Such CP violation originates from the interference between the $N_1$ and $N_2$ exchange amplitudes and the existence of possible CP-violating phases in the PMNS mixing matrix. We showed that such signals of CP violation could be detected in the future pion factories if there are (at least) two sterile neutrinos in the mentioned mass interval with almost degenerate masses, i.e., if the mass difference $\Delta M_N$ between them is not many orders of magnitude larger than their decay width $\Gamma_N$. Therefore, our calculation suggests that the observation of CP violation in pion decays would be consistent with the $\nu$MSM model \cite{nuMSM,Shapo,nuMSMrev}, with the two almost degenerate heavy neutrinos in the lower mass range of the model. 
The Majorana nature of the neutrinos offers more possibilities for CP violation, because the PMNS matrix then contains more CP-violating phases than in the case of Dirac neutrinos. On the other hand, the present experimental bounds on the heavy-light mixings allow higher rates and more appreciable CP-violating effects in these processes in the LC channels than in the LV channels, i.e., in the scenarios where the Majorana nature of the neutrinos is difficult to discern. \acknowledgments{ This work was supported in part by (Chile) FONDECYT Grant No.~1130599 (G.C. and C.S.K.), and by the CONICYT Fellowship ``Beca de Doctorado Nacional'' and Proyecto PIIC 2013 (J.Z.S.). The work of C.S.K. was supported by the NRF grant funded by the Korean Government (MEST) (No. 2011-0017430) and (No. 2011-0020333).}
\subsection{Overview of Approach} \label{sec:intro:approach} Here we provide an overview of our approach for solving linear systems in directed Laplacians. We split it into three parts. In the first part, Section~\ref{sec:approach_reductions}, we describe how to reduce the problem to the special case of solving Eulerian Laplacians with polynomial condition number. In the second part, Section~\ref{sec:approach_sparsification}, we cover the efficient construction of sparsifiers. Finally, in the third part, Section~\ref{sec:solverOverview}, we discuss how to use the sparsifier construction to build an almost-linear-time solver for polynomially well-conditioned Eulerian Laplacian systems. \input{overview_reductions} \input{overview_sparsification} \input{overview_solver} \section{The Complete Solver} \label{sec:complete} In this section we provide some details of the full algorithm for computing stationary distributions and solving directed Laplacians. Various applications of these routines are presented in~\cite{cohen2016faster} and briefly enumerated in Section~\ref{sec:intro:results}. We provide details on these two routines, computing stationary distributions and solving linear systems, because they are the most important for completing the picture. We start by stating the stationary computation algorithm given as Algorithm 1 in Section 3.2 of~\cite{cohen2016faster}. One difference in presentation is that the routine as shown in~\cite{cohen2016faster} relies on solving matrices whose diagonal entries are strictly larger than the total magnitude of the off-diagonal entries in the corresponding row/column. On the other hand, we have only presented a solver for Eulerian Laplacians, for which the diagonal entries are equal to the total magnitude of the off-diagonal entries. As a result, we also need to incorporate the reduction from such matrices to Eulerian Laplacians from Section 5 of~\cite{cohen2016faster}. Pseudocode of this routine is in Figure~\ref{fig:findStationary}. 
\begin{figure}[ht] \begin{algbox} $\vec{s}=\textsc{ComputeStationary}(\mathcal{L}, \alpha)$ \textbf{Input:} $n \times n$ directed Laplacian $\mathcal{L}$, with diagonal $\DD$.\\ Restart parameter $\alpha \in [0, \frac{1}{2}]$. \textbf{Output:} approximate stationary distribution $\vec{s}$. \begin{enumerate} \item Set $\vec{x}^{(0)} \leftarrow \DD^{-1} \vec{1}$, $\epsilon \leftarrow \mathrm{poly}\left(\frac{\alpha}{n}\right)$. \item For $t = 0, \ldots, k = 3 \ln \left(\alpha^{-1}\right)$ \begin{enumerate} \item Set $\vec{e}^{(t)} \leftarrow \max\left\{\vec{0},~\mathcal{L} \vec{x}^{(t)},~ \mathrm{diag}(\vec{x}^{(t)}) \mathcal{L} \vec{1} \right\},$ and let $\mvar E^{(t)} = \mvar{diag}(\vec{e}^{(t)}),$ $\XX^{(t)} = \mvar{diag}(\vec{x}^{(t)})$ be the corresponding diagonal matrices. \item Create $\mathcal{L}^{(t + 1)} \in \mathbb{{R}}^{(n + 1) \times (n + 1)}$ from $(\mvar E^{(t)} + \mathcal{L}) \XX^{(t)}$ by adding a row/column to make all row/column sums $0$. \item Create a length $n + 1$ vector $\vec{b}^{(t)}$ with sum $0$ whose first $n$ entries are given by the vector $\frac{1}{\norm{\DD^{-1} \vec{e}^{(t)}}_1} \DD^{-1} \vec{e}^{(t)}$. \item Let $\vec{z}^{(t + 1)}$ be the first $n$ entries of the vector returned by $\textsc{SolveEulerian}(\mathcal{L}^{(t+1)}, \vec{b}^{(t)}, \epsilon )$, and set $\vec{x}^{(t + 1)} \leftarrow \XX^{(t)} \vec{z}^{(t + 1)}$. \end{enumerate} \item Return $\vec{s} = \frac{\DD \vec{x}^{(k + 1)}}{\norm{\DD \vec{x}^{(k + 1)}}_1}$. \end{enumerate} \end{algbox} \caption{Stationary computation algorithm. This routine combines the reduction from solving strictly row/column diagonally dominant matrices to Eulerian Laplacians from Section 5 of~\cite{cohen2016faster} with the stationary-finding algorithm from Section 3.} \label{fig:findStationary} \end{figure} This can then be turned into a solver for a strongly connected Laplacian by rescaling it by the stationary distribution and solving the (approximately) Eulerian Laplacian that results. 
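For intuition about the object that $\textsc{ComputeStationary}$ approximates, the following dense-algebra stand-in (not the nearly-linear-time routine of Figure~\ref{fig:findStationary}) extracts the stationary distribution of a small, strongly connected directed Laplacian directly from its kernel. We assume the convention $\mathcal{L} = \DD - \mvar A^{\top}$ of~\cite{cohen2016faster}; the final rescaling $\vec{s} = \DD \vec{x} / \|\DD \vec{x}\|_1$ mirrors the last step of the pseudocode.

```python
import numpy as np

def stationary_dense(A):
    """Dense stand-in for ComputeStationary: given the adjacency matrix A of a
    strongly connected weighted digraph (A[i, j] = weight of edge i -> j),
    form the directed Laplacian L = D - A^T, take x spanning ker(L), and
    return s = D x / ||D x||_1."""
    d = A.sum(axis=1)
    L = np.diag(d) - A.T
    # kernel of L: the right-singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(L)
    x = Vt[-1]
    x = x * np.sign(x[np.argmax(np.abs(x))])  # fix the overall sign
    s = d * x
    return s / s.sum()
```

The returned $\vec{s}$ satisfies $\mvar A^{\top} \DD^{-1} \vec{s} = \vec{s}$, i.e., it is stationary for the natural random walk on the graph.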
The pseudocode in Figure~\ref{fig:fullSolve} is based on Sections 7.1 and 7.3 of~\cite{cohen2016faster}. \begin{figure} \begin{algbox} $\vec{x}=\textsc{SolveFull}(\mathcal{L}, \vec{b}, \epsilon)$ \textbf{Input:} $n \times n$ directed Laplacian $\mathcal{L}$ with diagonal $\DD$, vector $\vec{b}$, and desired error $\epsilon$. \textbf{Output:} approximate solution to $\mathcal{L} \vec{x} = \vec{b}$. \begin{enumerate} \item Estimate the mixing time $T_{mix}$ of $\mathcal{L}$ by binary searching over $\alpha$ on the mixing times of $\mathcal{L} + \alpha \mvar I$. \item Add $\frac{\epsilon}{T_{mix} \mathrm{poly}(n)} \mvar I$ to $\mathcal{L}$ to get a strictly diagonally dominant matrix $\widehat{\mathcal{L}}$. \item Compute an approximate stationary distribution $\vec{s}$ of $\widehat{\mathcal{L}}$, and let $\mvar S = \mvar{diag}(\vec{s})$. \item Let $\vec{x} \leftarrow \textsc{SolveEulerian}(\widehat{\mathcal{L}} \DD^{-1} \mvar S, \vec{b}, \epsilon / \mathrm{poly}(n))$. \item Return $\DD^{-1} \mvar S \vec{x}$. \end{enumerate} \end{algbox} \caption{Full solver algorithm for strongly connected Laplacians. It first perturbs $\mathcal{L}$ to form a matrix whose stationary distribution is easy to approximate, and then uses the normalization this distribution provides to reduce the problem to solving an Eulerian Laplacian.} \label{fig:fullSolve} \end{figure} \section{Entrywise Sparsification\label{sec:entry_sparsification}} In this section we prove Theorem~\ref{thm:concentration_entry}, our main result about entrywise sampling for sparsification. Our main technical tool is a rectangular matrix concentration result of Tropp, which we restate below: \begin{thm}[Matrix Bernstein (Theorem 1.6 of \cite{Tropp12}, restated)] \label{thm:concentration_tropp} Let $\mvar Z_{1},...,\mvar Z_{k}\in\mathbb{{R}}^{d_{1}\times d_{2}}$ be independent random matrices such that $\mathbb{E}\mvar Z_{i}=\mvar 0$ and $\norm{\mvar Z_{i}}_{2}\leq R$ almost surely for all $i$. 
Then \[ \Pr\left[\normFull{\sum_{i\in[k]}\mvar Z_{i}}_{2}\geq t\right]\leq(d_{1}+d_{2})\cdot\exp\left(\frac{-t^{2}/2}{\sigma^{2}+Rt/3}\right) \] where \[ \sigma^{2}\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\max\left\{ \normFull{\sum_{i\in[k]}\mathbb{E}\mvar Z_{i}\mvar Z_{i}^{\top}}_{2}\,,\,\normFull{\sum_{i\in[k]}\mathbb{E}\mvar Z_{i}^{\top}\mvar Z_{i}}_{2}\right\} \,. \] \end{thm} First we simplify this theorem, tailoring it to the case where we are sampling a sequence of matrices with the same expectation. \begin{thm} \label{thm:concentration_simple} Let $\mathcal{D}$ be a distribution over $\mathbb{{R}}^{d_{1}\times d_{2}}$. Let $\mvar \Sigma\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A$ and let $R_{\mathcal{D}}$ and $\sigma_{\mathcal{D}}^{2}$ satisfy \[ \max\left\{ \norm{\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A\ma^{\top}}_{2}\,,\,\norm{\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A^{\top}\mvar A}_{2}\right\} \leq\sigma_{\mathcal{D}}^{2}\enspace\text{ and }\enspace\max_{\mvar A\in\mathrm{supp}(\mathcal{D})}\norm{\mvar A}_{2}\leq R_{\mathcal{D}}\,. \] Then for $\mvar A_{1},...,\mvar A_{k}$ sampled independently from $\mathcal{D}$ we have \[ \Pr\left[\left\Vert\frac{1}{k}\sum_{i\in[k]}\mvar A_{i}-\mvar \Sigma\right\Vert_{2}\geq\epsilon\right]\leq(d_{1}+d_{2})\cdot\exp\left(\frac{-k\epsilon^{2}/2}{\sigma_{\mathcal{D}}^{2}+R_{\mathcal{D}}\epsilon}\right)\,. \] and for $k\geq 64 \cdot \left(\frac{\sigma_{\mathcal{D}}^{2}}{\epsilon^{2}}+\frac{R_{\mathcal{D}}}{\epsilon}\right)\log\frac{d}{p}$ it is the case that $\Pr\left[\norm{\frac{1}{k}\sum_{i\in[k]}\mvar A_{i}-\mvar \Sigma}_{2}\geq\epsilon\right]\leq p$.\end{thm} \begin{proof} Let $\mvar Z_{i}\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\mvar A_{i}-\mvar \Sigma$. 
Clearly $\mathbb{E}\mvar Z_{i}=\mvar 0$, and by Jensen's inequality, \[ \norm{\mvar Z_{i}}_{2}\leq\norm{\mvar A_{i}}_{2}+\norm{\mvar \Sigma}_{2}\leq\norm{\mvar A_{i}}_{2}+\mathbb{E}_{\mvar A\sim\mathcal{D}}\norm{\mvar A}_{2}\leq2\cdot R_{\mathcal{D}}\,. \] Furthermore, \[ \mvar 0\preceq\mathbb{E}\mvar Z_{i}^{\top}\mvar Z_{i}=\mathbb{E}_{\mvar A\sim\mathcal{D}}(\mvar A-\mvar \Sigma)^{\top}(\mvar A-\mvar \Sigma)=\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A^{\top}\mvar A-\mvar \Sigma^{\top}\mvar \Sigma\preceq\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A^{\top}\mvar A \] and \[ \mvar 0\preceq\mathbb{E}\mvar Z_{i}\mvar Z_{i}^{\top}=\mathbb{E}_{\mvar A\sim\mathcal{D}}(\mvar A-\mvar \Sigma)(\mvar A-\mvar \Sigma)^{\top}=\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A\ma^{\top}-\mvar \Sigma\mSigma^{\top}\preceq\mathbb{E}_{\mvar A\sim\mathcal{D}}\mvar A\ma^{\top}\,. \] Consequently, \[ \max\left\{ \normFull{\sum_{i\in[k]}\mathbb{E}\mvar Z_{i}\mvar Z_{i}^{\top}}_{2}\,,\,\normFull{\sum_{i\in[k]}\mathbb{E}\mvar Z_{i}^{\top}\mvar Z_{i}}_{2}\right\} \leq k\cdot\sigma_{\mathcal{D}}^{2}\,. \] Therefore, by Theorem~\ref{thm:concentration_tropp} we have that for all $t$, \[ \Pr\left[\normFull{\sum_{i\in[k]}\mvar Z_{i}}_{2}\geq t\right]\leq(d_{1}+d_{2})\cdot\exp\left(\frac{-t^{2}/2}{k\cdot\sigma_{\mathcal{D}}^{2}+2R_{\mathcal{D}}t/3}\right)\,. \] Since $\sum_{i\in[k]}\mvar Z_{i}=k\cdot(\frac{1}{k}\sum_{i\in[k]}\mvar A_{i}-\mvar \Sigma)$, picking $t=k\cdot\epsilon$ yields the result. \end{proof} Using Theorem~\ref{thm:concentration_simple} we can now prove Theorem~\ref{thm:concentration_entry}, our main result of this section. 
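The first step of the proof below, that the $p_{ij}$ form a probability distribution, is easy to check numerically. In this sketch we read $\vvar{r}_i$ and $\vvar{c}_j$ as the row and column sums of the nonnegative matrix $\mvar A$ and $s$ as the number of nonzero rows plus the number of nonzero columns; this is our reading of the statement of Theorem~\ref{thm:concentration_entry}, which is not reproduced in this chunk.

```python
import numpy as np

def entry_probs(A):
    """Entrywise sampling probabilities: on the nonzero entries of A,
    p_ij = (A_ij / s) * (1/r_i + 1/c_j), with r_i, c_j the row/column sums
    and s = (# nonzero rows) + (# nonzero columns)."""
    r, c = A.sum(axis=1), A.sum(axis=0)
    s = int((r > 0).sum() + (c > 0).sum())
    P = np.zeros_like(A, dtype=float)
    i, j = np.nonzero(A)
    P[i, j] = A[i, j] / s * (1.0 / r[i] + 1.0 / c[j])
    return P, s
```

One can also check the bound $R_{\mathcal{D}} \leq s/2$ used in the proof: entrywise, $s/\sqrt{\vvar{r}_i \vvar{c}_j} \, (1/\vvar{r}_i + 1/\vvar{c}_j)^{-1} \leq s/2$ by the AM--GM inequality.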
\begin{proof}[Proof of Theorem~\ref{thm:concentration_entry}] First note that by the definition of $s$ we have that \[ \sum_{i,j} p_{ij} = \frac{1}{s} \sum_{i,j} \left[ \frac{\mvar A_{ij}}{\vvar{r}_i} + \frac{\mvar A_{ij}}{\vvar{c}_j}\right] = \frac{1}{s} \left[\text{\# non-zero rows} + \text{\# non-zero columns}\right] = 1 \] and therefore $\mathcal{D}$ is a valid probability distribution. All that remains is to prove each of the claims of Theorem~\ref{thm:concentration_entry} by carefully applying Theorem~\ref{thm:concentration_simple}. First apply Theorem~\ref{thm:concentration_simple} to the distribution $\mathcal{D}$ which assigns probability $p_{ij}$ to the matrix $\frac{1}{p_{ij}}\cdot\frac{\mvar A_{ij}\ensuremath{\vec{{1}}}_{i}\ensuremath{\vec{{1}}}_{j}^{\top}}{\sqrt{\vvar{r}_{i}\cdot \vvar{c}_{j}}}$. For this application of Theorem~\ref{thm:concentration_simple}, using that $x \cdot y \leq \frac{1}{2} x^2 + \frac{1}{2} y^2$ we have \[ R_{\mathcal{D}} = \max_{i,j}\normFull{\frac{\mvar A_{ij} \ensuremath{\vec{{1}}}_{i}\ensuremath{\vec{{1}}}_{j}^{\top} }{\sqrt{\vvar{r}_{i}\cdot \vvar{c}_{j}}}\cdot\frac{1}{p_{ij}}} = \max_{i,j} s \cdot \frac{1}{\sqrt{\vvar{r}_i \vvar{c}_j}} \left( \frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j} \right)^{-1} \leq \frac{s}{2}\,. \] Furthermore we have \[ \normFull{\mathbb{E}_{\mvar M\sim\mathcal{D}}\mvar M\mm^{\top}}_{2} = s \normFull{ \sum_{i,j} \frac{1}{\vvar{r}_{i}} \cdot \frac{1}{\vvar{c}_{j}} \cdot \mvar A_{ij} \cdot \ensuremath{\vec{{1}}}_{i} \ensuremath{\vec{{1}}}_{i}^{\top} \cdot \left( \frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j} \right)^{-1} }_{2} \leq s \] and \[ \normFull{\mathbb{E}_{\mvar M\sim\mathcal{D}}\mvar M^{\top}\mvar M}_{2} = s \normFull{\sum_{i,j}\frac{1}{\vvar{r}_{i}}\cdot\frac{1}{\vvar{c}_{j}}\cdot\mvar A_{ij}\cdot\ensuremath{\vec{{1}}}_{j}\ensuremath{\vec{{1}}}_{j}^{\top} \left( \frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j} \right)^{-1}}_{2} \leq s\,. 
\] Consequently, $\sigma_{\mathcal{D}}^{2}\leq s$ and since $\epsilon\in(0,1)$ and $k$ is chosen appropriately, the first inequality follows by Theorem~\ref{thm:concentration_simple}. Next, we apply Theorem~\ref{thm:concentration_simple} for the distribution $\mathcal{D}$ which assigns probability $p_{ij}$ to matrix $\ensuremath{\vec{{1}}}_{i}\frac{\mvar A_{ij}}{\vvar{r}_{i}} $. For this application of Theorem~\ref{thm:concentration_simple} we have \[ R_{\mathcal{D}}=\max_{i,j} \normFull{\frac{\mvar A_{ij}}{\vvar{r}_{i}}\cdot\frac{1}{p_{ij}}} \leq s \cdot\frac{1}{\vvar{r}_{i}}\cdot \left( \frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j} \right)^{-1} \leq s\,. \] Furthermore we have \[ \sigma_{\mathcal{D}}^{2}=\normFull{\sum_{j}\frac{1}{\vvar{r}_{i}^{2}}\cdot\mvar A_{ij}\cdot \left( \frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j} \right)^{-1} \cdot s}_{2}\leq s\,{.} \] Since $\epsilon\in(0,1)$, with $k$ appropriately chosen, the second inequality follows by Theorem~\ref{thm:concentration_simple}. By symmetry the last inequality follows as well. Finally, by choosing $p$ to be $s$ times larger, we can make all conditions hold simultaneously by a union bound.\end{proof} \section{Decomposition \label{sec:decomposition}} Here we discuss the proof of Theorem~\ref{thm:decomposition_thm}, our main result on decomposing directed graphs. The theorem follows directly from standard results involving decomposing undirected graphs into expanders~\cite{SpielmanTengSolver:journal, SpielmanT11, KLOS14}.
Before stating the decomposition result, we first define the \textit{conductance} of a graph: \begin{definition} Given an undirected graph $G(V,E,w)$, we define the conductance of a set $S \subseteq V$ by $$\Phi(S) \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \frac{ \sum_{(u,v)\in E : u \in S, v \notin S} w_{uv} }{\min\{\textnormal{vol}(S), \textnormal{vol}(V\setminus S)\}}\,{,} $$ where $\textnormal{vol}(S) \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \sum_{u \in S} \sum_{v : (u,v) \in E} w_{uv} $. The conductance of $G$ is then defined as $$\Phi(G) = \min_{S \subset V, S\neq\emptyset} \Phi(S)\,{.}$$ \end{definition} Relating the conductance of an undirected graph to its smallest nontrivial eigenvalue is done via Cheeger's inequality: \begin{thm}[Cheeger's inequality, rephrased]\label{thm:cheeger} Given an undirected graph $G(V,E,w)$, with a symmetric Laplacian $\mvar U = \mvar D - \mvar A$, one has that $\mvar U$'s spectral gap satisfies: $$\lambda_2( \mvar D^{-1/2} \mvar U \mvar D^{-1/2} ) \geq \frac{\Phi(G)^2}{4}\,{.}$$ \end{thm} We refer to the following lemma, which is implicit in~\cite{SpielmanTengSolver:journal, SpielmanT11}. \begin{lemma}[Lemma 31 from~\cite{KLOS14}]\label{lem:decomp} For an unweighted graph $G = (V,E)$, in $\tilde{O}(m)$ time, we can produce a partition $V_1, \dots,V_k$ of $V$, and a collection of sets $S_1,\dots,S_k \subseteq V$ with the following properties: \begin{enumerate} \item For all $i$, $S_i \subseteq V_i$. \item For all $i$, there exists a set $T_i$ with $S_i \subseteq T_i \subseteq V_i$, such that $\Phi(G(T_i)) \geq \Omega(1/\log^2 n)$. \item At least half of the edges are found within the sets $\{S_i\}$, i.e. $$\sum_{i=1}^k \abs{E(S_i)} = \sum_{i=1}^k \abs{\{ (a,b)\in E : a, b \in S_i \}} \geq \frac{1}{2} \abs{E}\,{.}$$ \end{enumerate} \end{lemma} We use this decomposition lemma in order to first prove the result for unweighted graphs. The more general weighted version will then follow from a bucketing argument.
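As a quick numerical illustration of these definitions (purely illustrative, assuming \texttt{numpy} is available, and not used in any proof), the following Python fragment computes $\Phi(G)$ for the unit-weight $4$-cycle by brute force and checks Cheeger's inequality against the spectral gap of the normalized Laplacian.

```python
import itertools
import numpy as np

def conductance(W, S):
    """Phi(S) for an undirected graph given by its symmetric weight matrix W."""
    S = sorted(S)
    T = sorted(set(range(len(W))) - set(S))
    cut = W[np.ix_(S, T)].sum()                    # weight crossing the cut
    vol = lambda idx: W[idx, :].sum()              # sum of weighted degrees
    return cut / min(vol(S), vol(T))

# Unit-weight 4-cycle.
W = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[u, v] = W[v, u] = 1.0

# Brute-force Phi(G); sets of size 1 and 2 suffice by complement symmetry.
phi = min(conductance(W, S) for r in (1, 2)
          for S in itertools.combinations(range(4), r))

# Cheeger: the spectral gap of D^{-1/2} (D - W) D^{-1/2} is >= Phi(G)^2 / 4.
D_inv_sqrt = np.diag(W.sum(axis=1) ** -0.5)
N = D_inv_sqrt @ (np.diag(W.sum(axis=1)) - W) @ D_inv_sqrt
lam2 = np.sort(np.linalg.eigvalsh(N))[1]
assert lam2 >= phi ** 2 / 4 - 1e-12
```

For the $4$-cycle the best cut takes two adjacent vertices, giving $\Phi(G) = 1/2$, while the normalized spectral gap is $1$, comfortably above $\Phi(G)^2/4$.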
The decomposition can be produced by iteratively applying Lemma~\ref{lem:decomp} on the symmetrized input graph $\mvar U$, choosing $\mathcal{L}^{(i)}$ to be the directed Laplacian induced on vertices in $S_i$, and $\mvar U^{(i)}$ to be the undirected Laplacian induced on vertices in $T_i$. Since $E[S_i] \subseteq E[T_i]$, clearly $\mvar{diag}(\mvar S_{\mathcal{L}^{(i)}}) \preceq \mvar{diag}(\mvar U^{(i)})$.~\footnote{Note that the decomposition lemma gives us a much stronger property than what we are using, since our sparsification routine only requires degree dominance.} The lower bound on the spectral gap for each $\mvar U^{(i)}$ is given by the second property of Lemma~\ref{lem:decomp}, combined with Cheeger's inequality (Theorem~\ref{thm:cheeger}): the spectral gap of $\mvar U^{(i)}$ is at least $1/\tilde{O}(1)$. Also, note that since the graphs $T_i$ are vertex-disjoint subgraphs of $G$, the sum of supports of these $\mvar U^{(i)}$ is $O(n)$ and we have $\sum_{i=1}^k \mvar U^{(i)} \preceq \mvar U$. Removing the graphs induced by $S_i$ (or, equivalently, $\mathcal{L}^{(i)}$), for all $i$, reduces the number of edges in $G$ by at least half. Therefore, after $\lceil \log n \rceil$ iterations of applying Lemma~\ref{lem:decomp}, the edges of $G$ will have been exhausted, and we are done. Since each iteration produces undirected Laplacians whose sum is bounded by $\mvar U$, the sum of all the undirected Laplacians produced during the $\lceil \log n \rceil$ iterations is at most $\lceil \log n \rceil \cdot \mvar U$. Also, the sum of support sizes is $O(n\log n)$. Hence we have a $(\tilde{O}(n), 1/\tilde{O}(1), \tilde{O}(1))$-decomposition of $G$.
In order to obtain a weighted version of the theorem, we initially decompose the graph $G$ into $b = \lceil \log (w_{\max} / w_{\min}) \rceil$ graphs $G_1, \dots, G_b$, where $w_{\max}$ and $w_{\min}$ represent the maximum and minimum arc weights in $G$, respectively, and $G_j = (V, E_j)$ with $E_j = \{e \in E : w_{\min} \cdot 2^{j-1} \leq w_e < w_{\min} \cdot 2^j \}$. For each of these graphs, the cover corresponding to the unweighted $G_j$, scaled by $w_{\min} \cdot 2^j$, becomes a $(\tilde{O}(n), 1/\tilde{O}(1), \tilde{O}(1))$-decomposition of the weighted $G_j$. Therefore, taking the union of all the decompositions from the $b$ graphs, we obtain a $(\tilde{O}(bn), 1/\tilde{O}(1), \tilde{O}(b))$-decomposition of $G$. Since all weights are polynomially bounded, $b = \tilde{O}(1)$, and we obtain the desired result. In addition, it can be shown that the result from Lemma~\ref{lem:decomp} can be made parallel using~\cite{OrecchiaV11}. Indeed, using their SDP-based balanced partitioning routine, one can in $\tilde{O}(1)$ parallel time and $\tilde{O}(m)$ work find a balanced cut with polylogarithmic (i.e. $1/\tilde{O}(1)$) conductance, or certify that none exists. Such a partitioning routine is then called recursively on the pieces of the input graph that are not yet certified to be expanders, yielding a nearly-linear work, polylogarithmic time algorithm which produces the partition from Lemma~\ref{lem:decomp}. We also refer the reader to Section 6 of~\cite{PengS14}, for a discussion concerning the parallelization of the decomposition routine. \section{Approximating the Harmonic Symmetrization} \label{sec:harmonic_approx} \newcommand{\widetilde{\mvar A}}{\widetilde{\mvar A}} To solve an Eulerian Laplacian system $\mathcal{L} \vec{x} = \vec{b},$ \cite{cohen2016faster} instead solved the system $\mathcal{L}^\top \mvar U_\mathcal{L}^\dagger \mathcal{L} \vec{x} = \mathcal{L}^\top \mvar U_\mathcal{L}^\dagger \vec{b}$.
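To make this reduction concrete, the following Python sketch (assuming \texttt{numpy}; the small Eulerian digraph is a toy example of our choosing) verifies numerically that solving the symmetrized system recovers a solution of the original Eulerian Laplacian system.

```python
import numpy as np

# A small Eulerian directed Laplacian L = D - A^T: a weighted digraph whose
# in-degrees equal its out-degrees (A[u, v] = weight of edge u -> v).
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [2.0, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))
L = D - A.T
U = 0.5 * (L + L.T)                       # the undirected symmetrization U_L

b = np.array([1.0, -1.0, 0.0])            # b in im(L): orthogonal to all-ones
x = np.linalg.pinv(L) @ b                 # reference solution of L x = b

# Harmonic-symmetrization route: solve L^T U^+ L y = L^T U^+ b instead.
Up = np.linalg.pinv(U)
H = L.T @ Up @ L
y = np.linalg.pinv(H) @ (L.T @ Up @ b)

assert np.allclose(L @ y, b)              # y also solves the original system
assert np.allclose(x, y)                  # and agrees with it modulo the kernel
```

Since the graph is strongly connected, both kernels are the all-ones vector, and the two solutions orthogonal to it coincide.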
Since $\mvar U_\mathcal{L}$ is a symmetric Laplacian, its pseudoinverse can be applied in nearly linear time via a variety of methods~\cite{SpielmanTengSolver:journal,KoutisMP10,KoutisMP11,KelnerOSZ13,lee2013efficient,PengS14,KyngLPSS16}, and therefore given a linear system solver for $\mathcal{L}^\top \mvar U_\mathcal{L}^\dagger \mathcal{L}$, a solver for $\mathcal{L}$ is obtained with only a polylogarithmic running time overhead at worst. The matrix $\mathcal{L}^\top \mvar U_\mathcal{L}^\dagger \mathcal{L}$ was referred to in \cite{cohen2016faster} as the \emph{harmonic symmetrization} of $\mathcal{L}$ and it was shown that linear systems in this harmonic symmetrization can be solved in $O((nm^{3/4} + n^{2/3} m) \log^{O(1)} (n \kappa))$ time. Here we show that if $\widetilde{\mvar A}$ is a strong approximation for $\mvar A,$ then $\widetilde{\mvar A}^{\top}\mvar U^{\dagger}_{\widetilde{\mvar A}} \widetilde{\mvar A}$ is a spectral approximation for $\mvar A^{\top}\mvar U^{\dagger}_\mvar A\ma$. Consequently, simply producing a sparsifier for $\mathcal{L}$ in nearly linear time using the results of Section~\ref{sec:sparsification}, solving the harmonic symmetrization using \cite{cohen2016faster}, and then using this solver as a preconditioner in the standard symmetric sense~\cite{Saad03:book} yields an $O(m + n^{7/4} \log^{O(1)} (n \kappa))$ time algorithm for solving directed Laplacian systems. \begin{lem} If $\widetilde{\mvar A}$ is an $\epsilon$-strong-approximation of $\mvar A$ for $\epsilon \in (0, 1/2)$ and $\ker(\mvar U_\mvar A) \subseteq \ker(\mvar A)$, then \[ \left(1-2\epsilon\right)^{2} \mvar A^{\top} \mvar U_{\mvar A}^\dagger \mvar A \preceq \widetilde{\mvar A}^{\top} \mvar U_{\widetilde{\mvar A}}^\dagger \widetilde{\mvar A} \preceq\left(1+\epsilon\right)^{3} \mvar A^{\top} \mvar U_{\mvar A}^\dagger \mvar A \,.
\] \end{lem} \begin{proof} Let $\mvar M=\mvar U^{\dagger/2}_{\mvar A}\widetilde{\mvar A}\mvar U^{\dagger/2}_\mvar A$ and $\mvar N=\mvar U^{\dagger/2}_{\mvar A}\mvar A\mvar U^{\dagger/2}_{\mvar A}$. The definition of strong approximation implies that $\norm{\mvar M-\mvar N}_{2} \leq \epsilon$. Applying Lemma~\ref{lem:relative-diff} with $c = \epsilon$, a general lemma regarding the difference of matrices, yields that \[ (1 - \epsilon) \mvar N^{\top}\mvar N - \epsilon\mvar I \preceq \mvar M^{\top}\mvar M\preceq(1 + \epsilon) \mvar N^{\top} \mvar N+(\epsilon + \epsilon^{2})\mvar I\,. \] Now let $\vec{x} \in \mathbb{{R}}^n$ be an arbitrary vector perpendicular to the kernel of $\mvar U_\mvar A$. For such a vector $\vec{x}$, we have \[ \vec{x}^{\top}\mvar N^{\top}\mvar N \vec{x} = \vec{x}^{\top}\mvar U^{\dagger/2}_\mvar A \mvar A^{\top}\mvar U^{\dagger}_\mvar A \mvar A \mvar U^{\dagger/2}_\mvar A\vec{x} \geq \vec{x}^{\top}\mvar U^{\dagger/2}_{\mvar A}\mvar U_{\mvar A}\mvar U^{\dagger/2}_{\mvar A}\vec{x} \geq \vec{x}^{\top}\mvar I \vec{x}\,{,} \] where we used that by Lemma~\ref{lem:sym-hsm} the harmonic symmetrization spectrally dominates the symmetric Laplacian, i.e., $\mvar A^\top \mvar U_\mvar A^\dagger \mvar A \succeq \mvar U_{\mvar A}$. Consequently, for $\mvar C=\mvar A^{\top}\mvar U^{\dagger}_\mvar A\ma$ and $\widetilde{\mvar C}=\widetilde{\mvar A}^{\top}\mvar U^{\dagger}_\mvar A \widetilde{\mvar A}$ we have \[ (1-2\epsilon) \vec{x}^{\top}\mvar U^{\dagger/2}_\mvar A \mvar C \mvar U^{\dagger/2}_\mvar A\vec{x} \leq \vec{x}^{\top}\mvar U^{\dagger/2}_\mvar A \widetilde{\mvar C} \mvar U^{\dagger/2}_\mvar A\vec{x} \leq(1 + 2\epsilon + \epsilon^{2}) \vec{x}^{\top}\mvar U^{\dagger/2}_\mvar A\mvar C\mvar U^{\dagger/2}_{\mvar A}\vec{x}\,{.} \] One can easily see that when $\vec{x}\in\ker(\mvar U_\mvar A)$ all of the terms are $0$. Hence, we have that the above holds for all $\vec{x}$.
Since $\ker(\mvar U_\mvar A) \subseteq \ker(\mvar A)$ this in turn implies $ (1 - 2\epsilon) \mvar C \preceq \widetilde{\mvar C} \preceq (1 + \epsilon)^2 \mvar C $. Furthermore, since $ (1-\epsilon)\mvar U_{\mvar A} \preceq\mvar U_{\widetilde{\mvar A}}\preceq(1+\epsilon)\mvar U_{\mvar A} $ by Lemma~\ref{lem:asym_strong_implies_undir}, we also have $(1-\epsilon)\mvar U_{\widetilde{\mvar A}}^\dagger \preceq \mvar U_{\mvar A}^\dagger \preceq (1 + \epsilon) \mvar U_{\widetilde{\mvar A}}^\dagger$. The result follows by combining these bounds and using $(1 - 2 \epsilon) \leq (1 - \epsilon)$. \end{proof} \section{Introduction} \label{sec:intro} \input{intro_general.tex} \input{previous_work.tex} \input{results.tex} \subsection{Paper Overview} The rest of this paper is organized as follows. \begin{itemize} \item Section~\ref{sec:prelim} -- we cover preliminaries such as notation, facts about directed Laplacians that we use throughout the paper, and an overview of our approach. \item Section~\ref{sec:sparsification} -- we introduce a notion of asymmetric approximation, and prove that we can, in nearly linear time, produce sparsifiers which are good approximations under this notion. \item Section~\ref{sec:solver} -- we show how to employ the sparsification routines from the previous section in order to obtain our fast Eulerian Laplacian system solver. \item Appendix~\ref{sec:entry_sparsification} -- we prove a matrix concentration result concerning entrywise sampling, which is the basic building block for the results from Section~\ref{sec:sparsification}. \item Appendix~\ref{sec:linear_algebra} -- we provide various general linear algebra facts used throughout the paper. \item Appendix~\ref{sec:decomposition} -- we show how to obtain the graph decompositions required for sparsifying arbitrary Eulerian Laplacians using a decomposition of undirected graphs into expanders.
\item Appendix~\ref{sec:complete} -- we provide some details on the full algorithm for computing stationary distributions and solving directed Laplacians using an Eulerian Laplacian system solver. \item Appendix~\ref{sec:harmonic_approx} -- we relate our sparsification results to certain systems considered in \cite{cohen2016faster}. \item Appendix~\ref{sec:reduction} -- we provide a general reduction that improves the dependence of our Eulerian solver on the condition number from $\exp(\sqrt{\log \kappa})$ to $\log \kappa$. \end{itemize} \input{preliminaries.tex} \input{approach_overview.tex} \section{Linear Algebra Facts \label{sec:linear_algebra}} In this section we provide various general linear algebra facts we use throughout the paper. \begin{lem} \label{lem:relative-diff} Suppose that $\norm{\mvar A-\mvar B}_{2}\leq\epsilon$; then for all $c>0$ we have \[ (1-c)\mvar B^{\top}\mvar B-c^{-1}\epsilon^{2}\mvar I\preceq\mvar A^{\top}\mvar A\preceq(1+c)\mvar B^{\top}\mvar B+(1+c^{-1})\epsilon^{2}\mvar I\,. \] \end{lem} \begin{proof} Using the trivial expansion of $\mvar A=\mvar B+(\mvar A-\mvar B)$ we have that \[ x^{\top}\mvar A^{\top}\mvar A x-x^{\top}\mvar B^{\top}\mvar B x=2x^{\top}\mvar B^{\top}(\mvar A-\mvar B)x+x^{\top}(\mvar A-\mvar B)^{\top}(\mvar A-\mvar B)x\,. \] Now since $ab\leq\frac{c}{2}a^{2}+\frac{1}{2c}b^{2}$ for all $a,b\in\mathbb{{R}}$ and $c>0$ we have \[ \left|2x^{\top}\mvar B^{\top}(\mvar A-\mvar B)x\right|\leq2\norm{\mvar B x}_{2}\norm{(\mvar A-\mvar B)x}_{2}\leq c\norm{\mvar B x}_{2}^{2}+c^{-1}\norm{(\mvar A-\mvar B)x}_{2}^{2}\,. \] Combining this with the fact that \[ 0\leq x^{\top}(\mvar A-\mvar B)^{\top}(\mvar A-\mvar B)x=\norm{(\mvar A-\mvar B)x}_{2}^{2}\leq\epsilon^{2}\norm x_{2}^{2}=\epsilon^{2}x^{\top}\mvar I x \] we obtain the result.
\end{proof} \begin{lem} \label{lem:spectral_equivalence} For all $\mvar A\in\mathbb{{R}}^{n\times n}$ and symmetric PSD $\mvar M,\mvar N\in\mathbb{{R}}^{n\times n}$ such that $\ker(\mvar M)\subseteq\ker(\mvar A^{\top})$ and $\ker(\mvar N)\subseteq\ker(\mvar A)$ we have \[ \norm{\mvar M^{-1/2}\mvar A\mvar N^{-1/2}}_{2}=\max_{x,y\neq0}\frac{x^{\top}\mvar A y}{\sqrt{\left(x^{\top}\mvar M x\right)\left(y^{\top}\mvar N y\right)}}=2\cdot\max_{x,y\neq0}\frac{x^{\top}\mvar A y}{x^{\top}\mvar M x+y^{\top}\mvar N y} \] where in each of the maximization problems we define $0/0$ to be $0$.\end{lem} \begin{proof} Let $L\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\norm{\mvar M^{-1/2}\mvar A\mvar N^{-1/2}}_{2}$. Since $\norm x_{2}=\max_{\norm y_{2}=1}y^{\top}x$ we have that \[ L=\max_{\norm x_{2}=\norm y_{2}=1}x^{\top}\mvar M^{-1/2}\mvar A\mvar N^{-1/2}y\,. \] Now, performing the change of basis $x:=\mvar M^{-1/2}x$ and $y:=\mvar N^{-1/2}y$ we have \[ L=\max_{\norm x_{\mvar M}=\norm y_{\mvar N}=1}x^{\top}\mvar A y=\max_{x,y\neq0}\frac{x^{\top}\mvar A y}{\norm x_{\mvar M}\norm y_{\mvar N}} \] where in each of these maximization problems we restrict that $x\in\im{\mvar M}$ and $y\in\im{\mvar N}$. However, for all $x\perp\im{\mvar M}$ or $y\perp\im{\mvar N}$, i.e. $x\in\ker(\mvar M)$ or $y\in\ker(\mvar N)$, we have that either $\norm x_{\mvar M}=0$ or $\norm y_{\mvar N}=0$ and $x^{\top}\mvar A y=0$. Consequently, the above equalities hold without the $x\in\im{\mvar M}$ and $y\in\im{\mvar N}$ restriction by our definition of $0/0=0$.
The final equality we wish to prove follows from the fact that $\norm x_{\mvar M}\norm y_{\mvar N}\leq\frac{1}{2}(\norm x_{\mvar M}^{2}+\norm y_{\mvar N}^{2})$ and that this inequality is tight when $\norm x_{\mvar M}=\norm y_{\mvar N}=1$.\end{proof} \begin{lem} \label{lem:simple_spec_inequalities} For any $\MM \in \mathbb{{R}}^{n \times n}$ and symmetric positive semidefinite matrices $\mvar A, \mvar B \in \mathbb{{R}}^{n \times n}$ such that $\mvar A \preceq \mvar B$ we have that $\norm{\mvar A^{1/2} \mvar M}_2 \leq \norm{\mvar B^{1/2} \mvar M}_2$ and $\norm{\mvar M \mvar A^{1/2}}_2 \leq \norm{\mvar M \mvar B^{1/2}}_2$. \end{lem} \begin{proof} The first claim follows from the fact that, adopting the convention $0 / 0 = 0$, \[ \norm{\mvar A^{1/2} \mvar M}_2 = \max_{x \in \mathbb{{R}}^{n}} \frac{\norm{\mvar A^{1/2} \mvar M x}_2}{\norm{x}_2} = \max_{x \in \mathbb{{R}}^{n}} \frac{\sqrt{x^\top \mvar M^\top \mvar A \mvar M x}}{\norm{x}_2} \leq \max_{x \in \mathbb{{R}}^{n}} \frac{\sqrt{x^\top \mvar M^\top \mvar B \mvar M x}}{\norm{x}_2} = \norm{\mvar B^{1/2} \mvar M}_2\,. \] The second follows from this and the fact that $\norm{\mvar A^{1/2} \mvar M}_2 = \norm{\mvar M^\top \mvar A^{1/2}}_{2}$. \end{proof} \begin{lem} \label{lem:matrix_two_norm} Let $\mvar M\in\mathbb{{R}}^{n\times m},$ $a\in\mathbb{{R}}_{\geq0}^{n}$ and $b\in\mathbb{{R}}_{\geq0}^{m}$ be arbitrary. Let $\AA = \mathrm{diag}(a)$ and $\mvar B = \mathrm{diag}(b).$ We have that for all $\alpha,\beta\in[0,1]$ \[ \norm{\mvar A\mvar M\mvar B}_{2}\leq\sqrt{\norm{\mvar A^{2\alpha}\mvar M\mvar B^{2\beta}}_{\infty}\cdot\norm{\mvar A^{2(1-\alpha)}\mvar M\mvar B^{2(1-\beta)}}_{1}}. \] Consequently, for PSD diagonal matrices $\mvar D_{1}\in\mathbb{{R}}^{n\times n}$ and $\mvar D_{2}\in\mathbb{{R}}^{m\times m}$ we have \[ \norm{\mvar D_{1}^{-1/2}\mvar M\mvar D_{2}^{-1/2}}_{2}\leq\max\left\{ \norm{\mvar D_{1}^{-1}\mvar M}_{\infty}\,,\,\norm{\mvar D_{2}^{-1}\mvar M^{\top}}_{\infty}\right\} \,.
\] \end{lem} \begin{proof} Let $x\in\mathbb{{R}}^{n}$ and $y\in\mathbb{{R}}^{m}$ be arbitrary with $\norm x_{2}=\norm y_{2}=1$. We have \[ x^{\top}\mvar A\mvar M\mvar B y=\sum_{i,j}\mvar M_{i,j}a_{i}b_{j}x_{i}y_{j}\leq\sum_{i,j}|\mvar M_{i,j}|\cdot a_{i}\cdot b_{j}\cdot|x_{i}|\cdot|y_{j}|\,. \] Consequently, by the Cauchy--Schwarz inequality we have that \begin{align*} \left(x^{\top}\mvar A\mvar M\mvar B y\right)^{2} &\leq\left(\sum_{i,j}|\mvar M_{i,j}|\cdot a_{i}^{2\alpha}\cdot b_{j}^{2\beta}\cdot x_{i}^{2}\right)\cdot\left(\sum_{i,j}|\mvar M_{i,j}|\cdot a_{i}^{2(1-\alpha)}\cdot b_{j}^{2(1-\beta)}\cdot y_{j}^{2}\right)\\ &\leq\norm{\mvar A^{2\alpha}\mvar M\mvar B^{2\beta}}_{\infty}\cdot\norm{\mvar A^{2(1-\alpha)}\mvar M\mvar B^{2(1-\beta)}}_{1}. \end{align*} The final conclusion follows from the fact that for any matrix $\mvar C \in \mathbb{{R}}^{n\times m}$ $$ \norm{\mvar C}_{1} = \norm{\mvar C^{\top}}_{\infty}\,.$$ \end{proof} \begin{lem} \label{lem:square_sym_upper_bound} If $\mvar M\in\mathbb{{R}}^{n\times n}$ is a symmetric matrix with $\norm{\mvar M}_{2}\leq1$, then $ \mvar 0\preceq\mvar I-\mvar M^{2}\preceq2\cdot\left(\mvar I-\mvar M\right)\,. $ \end{lem} \begin{proof} Since $\mvar M$ is symmetric we have that $\mvar M^{2}$ and $\mvar M$ are mutually diagonalizable and the above inequality reduces to showing that $0\leq1-x^{2}\leq2\cdot(1-x)$ for $x\in\mathbb{{R}}$ with $|x|\leq1$. The left hand side of the inequality is true because $x^2 \leq 1$ and the right hand side follows by noticing that it is equivalent to $0\leq x^2 -2x + 1 = (x-1)^2$ after rearranging terms. \end{proof} The following gives a similar statement involving the symmetrization of an arbitrary matrix. While it is based on the above proof, it loses a factor of $2$ over the previous lemma.
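As a quick numerical sanity check of Lemma~\ref{lem:square_sym_upper_bound} (purely illustrative, assuming \texttt{numpy} is available), the two semidefinite inequalities can be verified on a random symmetric matrix rescaled to unit spectral norm:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = B + B.T                                   # symmetric
M /= np.linalg.norm(M, 2)                     # spectral norm exactly 1
I = np.eye(5)

# 0 <= I - M^2 <= 2 (I - M), checked via smallest eigenvalues.
assert np.linalg.eigvalsh(I - M @ M).min() >= -1e-12
assert np.linalg.eigvalsh(2 * (I - M) - (I - M @ M)).min() >= -1e-12
```

Eigenvalue-wise this is exactly the scalar inequality $0 \leq 1 - x^2 \leq 2(1-x)$ for $|x| \leq 1$ used in the proof above.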
\begin{lem} \label{lem:square_asym_upper_bound} If $\mvar M\in\mathbb{{R}}^{n\times n}$ is a possibly asymmetric matrix satisfying $\norm{\mvar M}_{2}\leq 1$ then \[ \mvar 0\preceq\mvar I - \mvar U_{\mvar M^2} \preceq 2 (\mvar I - \mvar U^2_{\mvar M}) \preceq 4 (\mvar I - \mvar U_{\mvar M}) \,. \] \end{lem} \begin{proof} Since the norm of $\mvar M$ is at most $1$, we immediately obtain $\norm{(\mvar M^{2})^{\top}}_2 = \norm{\mvar M^{2}}_{2}\leq1$, $\norm{\mvar M^{\top}\mvar M}_{2}\leq1$, $\norm{\mvar M\mm^{\top}}_{2}\leq1$. Then, by triangle inequality, \[ \norm{\mvar U_{\mvar M^2}}_2 = \normFull{\frac{1}{2}(\mvar M^2 + (\mvar M^2)^{\top})}_2 \leq \frac{1}{2}(\norm{\mvar M^2}_2 + \norm{(\mvar M^2)^{\top}}_2) \leq 1\,. \] Therefore $\mvar U_{\mvar M^2} \preceq \mvar I$, yielding the left hand side of the desired inequality. Next, we note that these inequalities imply $\mvar M^{\top}\mvar M\preceq\mvar I$ and $\mvar M\mm^{\top}\preceq\mvar I$, yielding \[ (\mvar M+\mvar M^{\top})^2=\mvar M^{2}+\mvar M^{\top}\mvar M+\mvar M\mm^{\top}+(\mvar M^{\top})^{2}\preceq\mvar M^{2}+(\mvar M^{\top})^{2}+2\mvar I\,. \] Consequently, \[ \mvar I - \mvar U_{\mvar M^2} = \mvar I-\frac{1}{2}(\mvar M^{2}+(\mvar M^{2})^{\top})\preceq2\mvar I-\frac{1}{2}(\mvar M+\mvar M^{\top})^2 =2(\mvar I-\mvar U^2_{\mvar M})\,. \] Finally, since $\mvar U_{\mvar M}$ is symmetric with $\norm{\mvar U_{\mvar M}}_{2}\leq1$, by Lemma~\ref{lem:square_sym_upper_bound} we have $\mvar I-\mvar U_{\mvar M}^2\preceq2\cdot (\mvar I-\mvar U_{\mvar M})$. \end{proof} \begin{lem} \label{lem:square_condition_number}Let $\mvar M\in\mathbb{{R}}^{n\times n}$ be a matrix such that $\norm{\mvar M}_{2}\leq1$. Furthermore, for $\alpha\in[0,1)$ let $\mvar N=\alpha\mvar I+(1-\alpha)\mvar M$ and let $\mvar L_{i}=\mvar I-\mvar U_{\mvar N^i}$. 
Then, \[ 2\alpha\mvar L_{1}\preceq\mvar L_{2}\preceq(4-2\alpha)\cdot\mvar L_{1}\,{.} \] \end{lem} \begin{proof} Note that $\mvar I-\mvar N=(1-\alpha)(\mvar I-\mvar M)$, and therefore $\mvar L_{1}=(1-\alpha)(\mvar I-\mvar U_{\mvar M})$. The definition of $\mvar N$ gives us that \[ \mvar N^{2}=\alpha^{2}\mvar I+2\alpha(1-\alpha)\mvar M+(1-\alpha)^{2}\mvar M^{2}\,. \] Consequently, \begin{align*} \mvar L_{2} & =\mvar I-\alpha^{2}\mvar I-2\alpha(1-\alpha)\mvar U_{\mvar M}-(1-\alpha)^{2}\mvar U_{\mvar M^2}\\ &=\left(1-\alpha^2 - (1-\alpha)^2 \right)\mvar I - 2\alpha (1-\alpha) \mvar U_{\mvar M} + (1-\alpha)^2\left(\mvar I - \mvar U_{\mvar M^2}\right) \\ &= 2\alpha(1-\alpha)(\mvar I - \mvar U_{\mvar M}) + (1-\alpha)^2 (\mvar I - \mvar U_{\mvar M^2})\\ & =2\alpha\mvar L_{1}+(1-\alpha)^2(\mvar I - \mvar U_{\mvar M^2})\,{.} \end{align*} This yields the first part of the inequality, since the second term in the last line is positive semidefinite. Now by Lemma~\ref{lem:square_asym_upper_bound} we know that \[ \mvar 0 \preceq\mvar I-\mvar U_{\mvar M^2} \preceq 4 \left( \mvar I-\mvar U_{\mvar M}\right) =\frac{4}{1-\alpha}\mvar L_{1}\,{.} \] Plugging this into our previous identity we obtain \begin{align*} \mvar L_2 \preceq 2\alpha \mvar L_1 + (1-\alpha)^2 \cdot \frac{4}{1-\alpha} \mvar L_1 = (4-2\alpha) \mvar L_1\,{,} \end{align*} thus yielding the result. \end{proof} \begin{lemma}[Condition Number Improvement]\label{lem:kappa-improvement} Let a nonzero matrix $\mvar M \in \mathbb{{R}}^{n \times n}$ be such that $\ker(\mvar M)=\ker(\mvar M^{\top})$, and $\norm{\mvar M}_2 \leq 1$. For $\alpha \in (0, 1/4]$ let $\mvar N \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \alpha \mvar I + (1 - \alpha) \mvar M$.
Then, for $\lambda_{*} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \lambda_{*}(\mvar I - \mvar U_{\mvar M})$ we have \begin{equation} \label{eq:kappa-improve2} \lambda_{*} (\mvar I - \mvar U_{\mvar N^2} ) \geq \min \left\{\alpha, (1+\alpha) \lambda_{*}\right\}\,{.} \end{equation} \end{lemma} \begin{proof} Note that we can write \[ \mvar U_{\mvar N^2} = \frac{1}{2} \left(\mvar N^2 +(\mvar N^\top)^2\right) = \left(\frac{1}{2} (\mvar N + \mvar N^\top)\right)^2 - \left(\frac{1}{2} (\mvar N - \mvar N^\top) \cdot \frac{1}{2} (\mvar N - \mvar N^\top)^{\top} \right) \preceq \mvar U_{\mvar N}^2 \,{.} \] Therefore $\mvar I - \mvar U_{\mvar N^2} \succeq \mvar I - \mvar U_{\mvar N}^2$, so it is sufficient to lower bound the smallest nonzero eigenvalue of the latter. By expanding, we obtain: \begin{align*} \mvar I - \mvar U_{\mvar N}^2 &=\mvar I - ( \alpha \mvar I + (1-\alpha) \mvar U_{\mvar M} )^2 = \mvar I - ( \mvar I - (1-\alpha)(\mvar I - \mvar U_{\mvar M}) )^2 {.} \end{align*} Since $\norm{\mvar M}_2 \leq 1$, we also have $\norm{\mvar U_{\mvar M}}_2\leq 1$ by triangle inequality, and thus $\lambda_{*} \mvar I_{\im{\mvar M}} \preceq \mvar I - \mvar U_{\mvar M} \preceq 2\mvar I_{\im{\mvar M}}$. Therefore, \[ \mvar I-(1-\alpha) 2 \mvar I_{\im{\mvar M}}\preceq \mvar I - (1-\alpha)(\mvar I - \mvar U_{\mvar M}) \preceq \mvar I - (1-\alpha) \lambda_{*} \mvar I_{\im{\mvar M}}\,{,} \] and equivalently \[ \mvar I_{\perp \im{\mvar M}}+ (2\alpha-1) \mvar I_{\im{\mvar M}}\preceq \mvar I - (1-\alpha)(\mvar I - \mvar U_{\mvar M}) \preceq \mvar I_{\perp \im{\mvar M}}+(1 - (1-\alpha) \lambda_{*}) \mvar I_{\im{\mvar M}}\,{.} \] Hence, after squaring, each eigenvalue of the middle term will become upper bounded by the maximum of the squares of those in the lower and the upper bound. This can be seen as a matrix version of the inequality $b^2 \leq \max\{a^2, c^2\}$, if $a\leq b \leq c$. 
Hence, \[ (\mvar I - (1-\alpha)(\mvar I - \mvar U_{\mvar M}) )^2 \preceq \mvar I_{\perp \im{\mvar M}}+ \max\{(2\alpha-1)^2, (1 - (1-\alpha) \lambda_{*})^2 \}\mvar I_{\im{\mvar M}}\,{,} \] so after subtracting both sides from $\mvar I$ we obtain: \[ \mvar I-\mvar U_{\mvar N}^2 \succeq \left(1-\max\{(2\alpha-1)^2, (1 - (1-\alpha) \lambda_{*})^2 \}\right)\mvar I_{\im{\mvar M}}\,{.} \] Therefore, \begin{align*} \lambda_{*}(\mvar I-\mvar U_{\mvar N^2}) &\geq \min\{1-(2\alpha-1)^2, 1-(1-(1-\alpha)\lambda_{*})^2\} \\ &= \min\{1-(2\alpha-1)^2, 2(1-\alpha)\lambda_{*} - (1-\alpha)^2\lambda_{*}^2\} \,{.} \end{align*} Observe that if $\lambda_{*} \leq (1-3\alpha)(1-\alpha)^{-2}$, then the second part of the lower bound is at least $$2(1-\alpha)\lambda_{*} - (1-\alpha)^2 (1-3\alpha)(1-\alpha)^{-2}\lambda_{*} = (1+\alpha)\lambda_{*}\,{.}$$ Otherwise, it can be lower bounded simply by $$2(1-\alpha)\lambda_{*} - (1-\alpha)^2 \lambda_{*} = (1-\alpha^2)\lambda_{*} \geq 1-3\alpha\,{.}$$ Finally, since for $\alpha \leq 1/4$, both $1-3\alpha \geq \alpha$ and $1-(1-2\alpha)^2 \geq \alpha$ are true, the result follows. \end{proof} \begin{lem}\label{lem:sym-hsm} If $\LL$ is a matrix with $\ker(\LL)=\ker(\LL^\intercal)=\ker(\mvar U_{\mvar L})$, and $\mvar U_{\mvar L}$ is positive semidefinite, then \[ \mvar U_{\mvar L} \preceq \mvar L^\intercal \mvar U_{\mvar L}^\dagger \mvar L\,{.} \] Furthermore, for any matrix $\AA$ with the same left and right kernels as $\LL$, one has that \[ \normFull{ \AA }_{\mvar U_{\LL}\rightarrow \mvar U_{\LL}} \leq \normFull{\mvar U_\LL^{\dag/2} \LL \AA \mvar U_\LL^{\dag/2}}_2\,{.} \] \end{lem} \begin{proof} We decompose $\mvar L$ and $\mvar L^{\top}$ as the sum/difference of a symmetric matrix $\mvar U$ and a skew symmetric matrix $\mvar V$. Specifically, write $\mvar L=\mvar U+\mvar V$ for \begin{align*} \mvar U &:= \mvar U_{\mvar L} =(\mvar L+\mvar L^{\intercal})/2 \text{\ and\ } \mvar V :=(\mvar L-\mvar L^{\intercal})/2. \end{align*} This gives \begin{align*} \mvar L^\intercal \mvar U^\dagger \mvar L &= \mvar U^\intercal \mvar U^\dagger \mvar U +\mvar U^\intercal \mvar U^\dagger \mvar V +\mvar V^\intercal \mvar U^\dagger \mvar U +\mvar V^\intercal \mvar U^\dagger \mvar V. \end{align*} As $\mvar U \mvar U^\dagger \mvar V =\mvar V$ and $ \mvar V^{\top} \mvar U^\dagger \mvar U= \mvar V^{\top}$ by our kernel assumptions, and $\mvar V=-\mvar V^\intercal$, this simplifies to \begin{align*} \mvar L^\intercal \mvar U^\dagger \mvar L &= \mvar U+\mvar V^\intercal \mvar U^\dagger \mvar V \succeq \mvar U, \end{align*} where we used the assumption that $\mvar U\succeq \mvar 0$ to guarantee that $\mvar V^\intercal \mvar U^\dagger \mvar V\succeq \mvar 0$ for the final inequality.
The second part of the lemma follows by writing \[ \normFull{\AA}_{\mvar U_\LL\rightarrow \mvar U_\LL} = \normFull{\mvar U_\LL^{1/2}\AA\mvar U_\LL^{\dag/2}}_2 = \normFull{\mvar U_\LL^{1/2} \LL^\dag \left(\LL \AA\mvar U_\LL^{\dag/2}\right)}_2\,{,} \] then applying the equivalent form of the previous inequality $\LL^{\top \dag} \mvar U_\LL \LL^{\dag} \preceq \mvar U_\LL^\dag$ in order to obtain \[ \normFull{\mvar U_\LL^{1/2} \LL^\dag \left(\LL \AA\mvar U_\LL^{\dag/2}\right)}_2 \leq \normFull{\mvar U_\LL^{\dag/2} \LL \AA \mvar U_\LL^{\dag/2}}_2\,{.} \] \end{proof} \subsubsection{Reductions} \label{sec:approach_reductions} We begin by applying two reductions. The first is a result from~\cite{cohen2016faster}, which states that one can solve row- and column-diagonally dominant linear systems, which include general directed Laplacian systems, by solving a small number of Laplacian systems in which the graphs are Eulerian: \begin{thm}[Theorem 42 from~\cite{cohen2016faster}]\label{thm:dd-solver} Let $\mvar M$ be an arbitrary $n\times n$ column-diagonally-dominant or row-diagonally-dominant matrix with diagonal $\mvar D$. Let $b\in\im{\mvar M}$. Then for any $0<\epsilon\leq1$, one can compute, with high probability and in time \[ O\left(\mvar{\mathcal{T}_{\textnormal{solve}}}\log^{2}\left(\frac{n\cdot\kappa(\mvar D)\cdot\kappa(\mvar M)}{\epsilon}\right)\right) \] a vector $x'$ satisfying $\|\mvar M x'-b\|_{2}\leq\epsilon\norm b_{2}$. Furthermore, all the intermediate Eulerian Laplacian solves required to produce the approximate solution involve only matrices $\mvar R$ for which $\kappa(\mvar R+\mvar R^{\top}),\:\kappa(\mvar{diag}(\mvar R))\leq(n\kappa(\mvar D)\kappa(\mvar M)/\epsilon)^{O(1)}$. 
\end{thm} If we were to combine this directly with the algorithm from Section~\ref{sec:solver}, it would give a running time of $\Otil \left( \left( m + n \exp{ O\left(\sqrt{\log \kappa \cdot \log \log \kappa} \right)} \right) \log \left(1/\epsilon \right)\right) $ to solve linear systems in a directed Laplacian $\mathcal{L}= \mvar D-\mvar A^{\top}$, where $\kappa$ is the condition number of the normalized Laplacian $\mvar D^{-1/2}\mathcal{L} \mvar D^{-1/2}$. While $\kappa$ is typically polynomial in $n$, it is possible for it to be exponential, so we would like our running time to depend on it logarithmically, instead of just sub-polynomially. We show how to do this in Appendix~\ref{sec:reduction}, where we give an algorithm to solve an arbitrarily ill-conditioned Eulerian Laplacian system by solving $O(\log (n\kappa))$ Eulerian Laplacians whose condition numbers are polynomial in $n$. This allows us to restrict our attention for the rest of the paper to the case where $\kappa$ is polynomial in $n$ and, when applied to the algorithm from Section~\ref{sec:solver}, gives our final running time of $O(m \log^{O(1)} (n \kappa \epsilon^{-1}))$. \subsubsection{Linear System Solving} \label{sec:solverOverview} In Section~\ref{sec:solver}, we describe our algorithm for solving Eulerian Laplacian systems of equations. It begins with a similar template to the Peng-Spielman solver~\cite{PengS14} described in Section~\ref{sec:prev_solvers}, but with modifications to accommodate our non-symmetric setting. Given a linear system in an Eulerian Laplacian $\mathcal{L} = \mvar D - \mvar A^{\top}$, we write $\mathcal{L}= \mvar D^{1/2}\left(\II-\WWhat\right)\mvar D^{1/2}$, where $\WWhat=\mvar D^{-1/2}\mvar A^\top \mvar D^{-1/2}$. This reduces the problem to solving linear systems in $\LL=\II-\WWhat$ where we can show that $\|\WWhat\|_2\leq1$.
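This normalization is easy to check numerically; the following Python sketch (assuming \texttt{numpy}; the weighted Eulerian digraph is a toy example of our choosing) verifies the factorization $\mathcal{L} = \mvar D^{1/2}(\II - \WWhat)\mvar D^{1/2}$ and that $\|\WWhat\|_2 \leq 1$ on that example.

```python
import numpy as np

# Eulerian Laplacian of a small weighted digraph (in-degree = out-degree;
# A[u, v] = weight of edge u -> v).
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [2.0, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))
L = D - A.T

Dm = np.diag(np.diag(D) ** -0.5)          # D^{-1/2}
Dp = np.diag(np.diag(D) ** 0.5)           # D^{1/2}
W = Dm @ A.T @ Dm                         # \hat{W} = D^{-1/2} A^T D^{-1/2}

assert np.allclose(L, Dp @ (np.eye(3) - W) @ Dp)   # L = D^{1/2}(I - \hat{W})D^{1/2}
assert np.linalg.norm(W, 2) <= 1 + 1e-12           # \|\hat{W}\|_2 <= 1
```

Note that on this example the bound is tight: the spectral norm of $\WWhat$ equals $1$, which is why the product expansion below is phrased in terms of pseudoinverses.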
We then apply the expansion in Equation~\eqref{eq:series_expansion}, but with some slight modifications: \begin{itemize} \item We find it convenient to build up the product expansion in Equation~\eqref{eq:series_expansion} recursively. We do so using the identity \begin{equation}\label{eq:recursive} (\II - \WWhat)^+ =(\II -\WWhat^2)^+ (\II+\WWhat), \end{equation} which can be thought of as a matrix analogue of the rational function identity \[ \frac{1}{1-z}=\frac{1+z}{1-z^2}. \] Applying this identity repeatedly gives \[ (\II - \WWhat)^+ =(\II -\WWhat^2)^+ (\II+\WWhat) = (\II -\WWhat^4)^+ (\II+\WWhat^2) (\II+\WWhat) =(\II -\WWhat^8)^+ (\II+\WWhat^4) (\II+\WWhat^2) (\II+\WWhat) =\dots. \] After $k$ applications of the identity, this yields the first $k$ terms of the product expansion in~\eqref{eq:series_expansion} times $(\II-\WWhat^{2^k})^+$, which converges to the identity as $k$ gets large if $\|\WWhat\|_2<1$. Some advantages of this compared to the infinite product expansion are that it gives an exact expression rather than an asymptotic result, which will be more convenient to work with when analyzing the growth of errors, and that the pseudoinverses in the expression give a correct answer when $\|\WWhat\|=1$, which decreases the extent to which we need to explicitly handle the kernel of $\mathcal{L}$ as a special case. \item If $z\neq 1$ is a complex number with $|z|=1$, $1/(1-z)$ exists but the series $1/(1-z)=1+z+z^2+\dots$ does not converge, and our matrix expansion will exhibit similar behavior. Graph theoretically, this case corresponds to periodic behavior in the random walk, and we deal with it, as usual, by adding self-loops and working with a lazy random walk. Algebraically, we work with a convex combination with the identity, \[ \WWhat^{(\alpha)} = \alpha \II + (1-\alpha)\WWhat, \] and we note that $\II -\WWhat^{(\alpha)}=(1-\alpha)(\II -\WWhat)$.
We then replace the identity in Equation~\eqref{eq:recursive} with the modified identity \begin{align}\label{eq:recursive_alpha} (\II-\WWhat)^+&=(1-\alpha)\left(\II-\WWhat^{(\alpha)}\right)^+ =(1-\alpha)\left(\II-\WWhat^{(\alpha)2}\right)^+ \left(\II+\WWhat^{(\alpha)}\right), \end{align} which leads to better convergence behavior. This step ensures that each application of the identity causes a change that is more gradual than squaring. Moreover, our analysis takes advantage of the fact that taking a linear combination with the identity makes it easier to relate $\II - \WWhat_{j + 1}^{(\alpha)}$ to $\II - \WWhat_{j}^{(\alpha)}$. While it may not be necessary to do this at every step, it is used to simplify the current analysis. Note that this algebraic simplification through `lazy' random walks is also present in other works involving squaring~\cite{ChengCLPT15,JindalK15:arxiv}. \end{itemize} Similarly to the approach in~\cite{PengS14}, our strategy is to repeatedly apply~\eqref{eq:recursive_alpha}, but to replace $(\WWhat^{(\alpha)})^2$ with a sparsifier in each step to allow us to decrease the computational costs. More precisely, we show how to efficiently construct a sequence of matrices $\WWhat_0,\WWhat_1,\ldots,\WWhat_d$ and associated matrices $\WWhat_i^{(\alpha)}=\alpha \II + (1-\alpha)\WWhat_i$ such that each matrix in the sequence has $\widetilde{O}(n/\epsilon^2)$ nonzero entries, $\II-\WWhat_0$ is an $\epsilon$-approximation of $\II-\WWhat$, and $\II-\WWhat_i$ is an $\epsilon$-approximation of $\II-(\WWhat_{i-1}^{(\alpha)})^2$ for each $i\geq 1$ (note that we set $\WWhat_0$ by sparsifying the original Laplacian). We call this a \emph{square-sparsification chain}. In Section~\ref{sec:construction}, we show how to compute all of the matrices in such a chain in time $\widetilde{O}(\mathrm{nnz}(\mathcal{L})+ n\epsilon^{-2}d)$, which we note is within logarithmic factors of the total number of nonzero entries.
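As a toy sanity check (not part of the paper's algorithm), the identities in~\eqref{eq:recursive} and~\eqref{eq:recursive_alpha}, and the convergence of the truncated product expansion, can be verified numerically; the $2\times 2$ matrix and $\alpha=1/2$ below are arbitrary illustrative choices with $\|\WWhat\|_2<1$, so pseudoinverses coincide with inverses.

```python
# Numerical check of (I - W)^{-1} = (I - W^2)^{-1} (I + W), its lazy variant
# (I - W)^{-1} = (1 - a) * (I - (W_a)^2)^{-1} (I + W_a) with W_a = a*I + (1-a)*W,
# and the truncated product expansion (I + W^16)(I + W^8)...(I + W).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def inv(A):  # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def close(A, B, tol):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

I = [[1.0, 0.0], [0.0, 1.0]]
W = [[0.2, 0.3], [0.4, 0.1]]          # spectral radius 0.5 < 1

lhs = inv(sub(I, W))

# identity (I - W)^{-1} = (I - W^2)^{-1} (I + W)
assert close(lhs, matmul(inv(sub(I, matmul(W, W))), add(I, W)), 1e-12)

# lazy variant with W_a = a*I + (1 - a)*W
a = 0.5
Wa = add([[a, 0.0], [0.0, a]],
         [[(1 - a) * W[i][j] for j in range(2)] for i in range(2)])
rhs = matmul(inv(sub(I, matmul(Wa, Wa))), add(I, Wa))
assert close(lhs, [[(1 - a) * rhs[i][j] for j in range(2)] for i in range(2)], 1e-12)

# truncated product expansion: the error decays like ||W^{2^k}||
prod, Wpow = I, W
for _ in range(5):
    prod = matmul(add(I, Wpow), prod)
    Wpow = matmul(Wpow, Wpow)
assert close(lhs, prod, 1e-6)
```

The last assertion reflects that after $k=5$ doublings the neglected factor $(\II-\WWhat^{2^k})^+$ differs from the identity only by terms of size roughly $\|\WWhat\|_2^{32}$.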
The length of the chain is then dictated by $\kappa=\kappa(\UU_{\mvar I - \WWhat})$, the condition number of the symmetric Laplacian associated with the input Eulerian Laplacian. Note that $\kappa = O(\mathrm{poly}(n U))$ where $U \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \max_{i,j} |\mvar A_{ij}|/\min_{i,j : \mvar A_{ij} \neq 0} |\mvar A_{ij}|$, though it may be smaller. If we set $d = \Omega(\log{\kappa})$, we show that $\mvar I-\WWhat_d^{(\alpha)}$ is well-conditioned. We can thus stop our recursion at this point and (approximately) apply $(\mvar I-\WWhat_d^{(\alpha)})^+$ using a small number of iterations of a standard iterative method, Richardson iteration. Expanding the recurrence in~\eqref{eq:recursive_alpha} gives \begin{equation} \left(\mvar I - \WWhat_{i}\right)^\dagger \approx \left( 1 - \alpha \right)^{j - i} \left(\mvar I-\WWhat_{j}^{\left(\alpha\right)}\right)^\dagger \left(\mvar I + \WWhat_{j-1}^{\left(\alpha\right)}\right) \left(\mvar I + \WWhat_{j-2}^{\left(\alpha\right)}\right) \dotsm \left(\mvar I + \WWhat_{i}^{\left(\alpha\right)}\right). \label{eqn:key} \end{equation} If we have already computed the matrices in the chain, we can apply the right-hand side to a vector $\vec{b}$ by performing $(j-i)$ matrix-vector multiplications and solving a linear system in $\II-\WWhat_j^{(\alpha)}$. It is useful to think of this as an approximate reduction from applying $(\II-\WWhat_i)^+$ to applying $(\II-\WWhat_j^{(\alpha)})^+ $. The matrices in~\eqref{eqn:key} have at most $\widetilde{O}(n/\epsilon^2)$ nonzero entries, so the total time for the matrix-vector multiplications is then at most $\widetilde{O}\left((j-i)n\epsilon^{-2}\right)$. Because of the errors introduced by the sparsification steps, the right-hand side of~\eqref{eqn:key} is only an approximation of $(\II-\WWhat_i)^+$, so applying it directly to $\vec{b}$ only yields a (typically somewhat crude) approximation to the solution of $(\II-\WWhat_i)\vec{x}=\vec{b}$.
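Before turning to error control, the algebra of this product expansion can be checked on a toy instance. The Python sketch below builds a chain with \emph{exact} squaring in place of sparsification (so the expansion is an identity up to floating point) and uses it to solve $(\II-\WWhat_0)\vec{x}=\vec{b}$; the matrix, $\alpha=1/2$, and $d=6$ are arbitrary illustrative choices.

```python
# Solve (I - W0) x = b via an exact square chain W_{i+1} = (W_i^{(a)})^2:
#   x = (1-a)^d * (I - W_d)^{-1} (I + W_{d-1}^{(a)}) ... (I + W_0^{(a)}) b.
# With exact squaring (no sparsification) this is exact, not approximate.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def inv(A):  # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def lazy(W, a):  # W^{(a)} = a*I + (1-a)*W
    return [[a * (1.0 if i == j else 0.0) + (1 - a) * W[i][j]
             for j in range(2)] for i in range(2)]

def eye_minus(W):
    return [[(1.0 if i == j else 0.0) - W[i][j] for j in range(2)]
            for i in range(2)]

def eye_plus(W):
    return [[(1.0 if i == j else 0.0) + W[i][j] for j in range(2)]
            for i in range(2)]

W0 = [[0.2, 0.3], [0.4, 0.1]]
a, d = 0.5, 6

chain = [W0]
for _ in range(d):
    La = lazy(chain[-1], a)
    chain.append(matmul(La, La))

b = [1.0, 2.0]
v = b
for i in range(d):                       # apply the (I + W_i^{(a)}) factors
    v = matvec(eye_plus(lazy(chain[i], a)), v)
v = matvec(inv(eye_minus(chain[d])), v)  # top-level solve
x = [(1 - a) ** d * t for t in v]

x_direct = matvec(inv(eye_minus(W0)), b)
assert max(abs(x[i] - x_direct[i]) for i in range(2)) < 1e-9
```

In the actual algorithm each squaring is replaced by a sparsifier, which is exactly what turns this identity into an approximation whose error must be tracked.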
To obtain a better solution, we instead use it as a preconditioner inside an iterative method for the linear system. This allows us to obtain an arbitrarily good solution to the system, and the quality of the approximation in~\eqref{eqn:key} then determines the number of iterations required. This suggests that we quantify the error in our approximations using a notion that directly bounds the convergence rate of such a preconditioned iterative method. We do so with the notion of an $\epsilon$-approximate pseudoinverse (defined with respect to some PSD matrix $\mvar U$), which we introduce in Section~\ref{sec:richardson}. Roughly speaking, solving a linear system with an appropriate iterative method using such a matrix as a preconditioner guarantees that the $\mvar U$-norm of the error decreases by a factor of $\epsilon$ in each iteration. We note that this is only useful for $\epsilon<1$. For technical reasons, we measure the quality of approximate pseudoinverses with respect to different $\mvar U$ matrices at different stages of the algorithm and translate between them. For simplicity, we just refer to an ``$\epsilon$-approximate pseudoinverse'' in this overview, but in our algorithm we set the value of $\epsilon$ in our sparsification routines and apply iterative methods, again Richardson iterations, to appropriately pay for the costs of translating between norms. To analyze the errors introduced by sparsification, we therefore need to: \begin{enumerate} \item Relate our notion of graph approximation to approximate pseudoinverses, and \item Bound the rate at which the quality of the approximate pseudoinverse we produce decreases as we increase the number of terms in~\eqref{eqn:key}. We use~\eqref{eqn:key} recursively, so it is also useful to bound how this is affected if we use an approximate pseudoinverse instead of the exact operator $\big(\II-\WWhat_j^{(\alpha)}\big)^+$.
\end{enumerate} For the former, we show in Theorem~\ref{thm:spectral-to-approxInv} that our notion of an $\epsilon$-sparsifier leads to an $O(\epsilon)$-approximate pseudoinverse. For the latter, we show in Lemma~\ref{lem:chainproperty} that using a square-sparsifier chain of length $d$ with some given $\epsilon$, and using an $\epsilon'$-approximate pseudoinverse of $\big(\II-\WWhat_j^{(\alpha)}\big)$ in place of $\big(\II-\WWhat_j^{(\alpha)}\big)^+$, produces an $(\epsilon+\epsilon')\cdot 2^{O(d)}$-approximate pseudoinverse for $\II-\WWhat_0$. The exponential dependence of the error on the length of the chain is a key difference between our analysis and the undirected case, and it is what prevents us from having a simpler and more efficient algorithm. If the dependence on the chain length were polynomial, applying~\eqref{eqn:key} with $i=0$ and $j=d$ would provide an $\epsilon\cdot \mathrm{polylog}(\kappa)$-approximate pseudoinverse. We could thus set $\epsilon=1/\mathrm{polylog}(\kappa)$ in our sparsifier chain and get an $O(1)$-approximate pseudoinverse in $\widetilde{O}(n)$ time. An iterative method could then call this $\log(1/\delta)$ times to obtain a solution with error $\delta$. However, because of the exponential dependence on the chain length, we would only get an $\epsilon\cdot \mathrm{poly}(\kappa)$-approximate pseudoinverse. We would thus need to set $\epsilon =1/\mathrm{poly}(\kappa)$ to get a value less than $1$, which would lead to ``sparsifiers'' with $\widetilde{\Omega}(n\cdot\mathrm{poly}(\kappa))$ edges. In the typical case where $\kappa=\mathrm{poly}(n)$, simply writing these down would exceed the desired almost-linear time bound.
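The way such an approximate pseudoinverse is consumed can be illustrated on a toy symmetric system (a sketch of preconditioned Richardson iteration, not the paper's solver): with a crude approximate inverse $\mvar P$, the error contracts by roughly $\|\II-\mvar P\mvar M\|$ per iteration. The matrix and preconditioner below are arbitrary illustrative choices.

```python
# Preconditioned Richardson iteration x <- x + P (b - M x) on a small
# symmetric system.  P (the inverse of diag(M)) plays the role of a crude
# approximate (pseudo)inverse; here ||I - P M||_2 = 0.5, so the error is
# halved in every iteration.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

M = [[2.0, -1.0], [-1.0, 2.0]]
P = [[0.5, 0.0], [0.0, 0.5]]      # Jacobi preconditioner: inverse of diag(M)
b = [1.0, 1.0]                    # exact solution is x* = (1, 1)

x = [0.0, 0.0]
errors = []
for _ in range(40):
    Mx = matvec(M, x)
    r = [b[i] - Mx[i] for i in range(2)]   # residual
    s = matvec(P, r)                       # preconditioned correction
    x = [x[i] + s[i] for i in range(2)]
    errors.append(max(abs(x[i] - 1.0) for i in range(2)))

assert errors[-1] < 1e-10                             # converged
assert all(e2 <= 0.5 * e1 + 1e-15                     # geometric decay
           for e1, e2 in zip(errors, errors[1:]))
```

An $\epsilon$-approximate pseudoinverse in the sense of the text plays the role of $\mvar P$ here, with the contraction measured in a $\mvar U$-norm rather than the max-norm used in this sketch.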
\newcommand{\rectime}[2]{\mathcal{T}_{#1,#2}} \newcommand{\esparse}{\epsilon_{\mathrm{spar}}} \newcommand{\elow}{\epsilon_{\mathrm{lo}}} \newcommand{\ehigh}{\epsilon_{\mathrm{hi}}} To prevent this, we do not wait until the end to apply an iterative method to reduce the error. Instead, we break our sparsification and squaring steps into $\ceil{d/\Delta}$ blocks of size $\Delta \ll d$, each of which we will wrap in several steps of Richardson iteration (which we review in Section~\ref{sec:richardson}), in order to keep the error under control. Our algorithm first computes (once, not recursively) a square-sparsifier chain of length $d=O(\log \kappa)$ in which the sparsifiers are $\epsilon_{\mathrm{spar}}$-approximations. It then recursively combines two types of steps that are suggested by the discussion above: \begin{itemize} \item \textbf{High error $\big(\mvar I-\WWhat_i^{(\alpha)}\big)^+$ from low error $\big(\mvar I-\WWhat_{i+\Delta}^{(\alpha)}\big)^+$}: Given a routine to apply an $\epsilon_{\mathrm{lo}}$-approximate pseudoinverse of $\mvar I-\WWhat_{i+\Delta}^{(\alpha)}$ in time $\rectime{i+\Delta}{\epsilon_{\mathrm{lo}}}$, we can use the expansion in~\eqref{eqn:key} to apply an $\epsilon_{\mathrm{hi}}$-approximate pseudoinverse of $\mvar I-\WWhat_i^{(\alpha)}$ in time $\rectime{i}{\epsilon_{\mathrm{hi}}}=\rectime{i+\Delta}{\epsilon_{\mathrm{lo}}}+\widetilde{O}(\Delta n\epsilon_{\mathrm{spar}}^{-2})$, where $\epsilon_{\mathrm{hi}}=(\esparse+\elow) 2^{O(\Delta)}$.
\item \textbf{Low error $\big(\II-\WWhat_i^{(\alpha)}\big)^+$ from high error $\big(\II- \WWhat_{i}^{(\alpha)}\big)^+$}: By running Richardson iteration for $O(\log \elow/\log \ehigh) $ steps, we can turn an $\epsilon_{\mathrm{hi}}$-approximate pseudoinverse of $\mvar I- \WWhat_i^{(\alpha)}$ into an $\epsilon_{\mathrm{lo}}$-approximate pseudoinverse. This applies the former once in each iteration, so it takes time \begin{equation}\label{eq:recurrence} \rectime{i}{\epsilon_{\mathrm{lo}}}=O\left(\frac{\log \elow}{\log \ehigh} \right) \rectime{i}{\epsilon_{\mathrm{hi}}}=O\left(\frac{\log \elow}{\log \ehigh} \right)\left(\rectime{i+\Delta}{\epsilon_{\mathrm{lo}}}+\widetilde{O}\left(\Delta n\epsilon_{\mathrm{spar}}^{-2}\right)\right). \end{equation} \end{itemize} If we set $\epsilon_{\mathrm{hi}}$ to be a constant (say, $1/10$), we need $\epsilon_{\mathrm{spar}} + \epsilon_{\mathrm{lo}}=2^{-\Omega(\Delta)}$, so we set $\epsilon_{\mathrm{spar}} =\epsilon_{\mathrm{lo}}=2^{-\Theta(\Delta)}$, and~\eqref{eq:recurrence} simplifies to \[\rectime{i}{\epsilon_{\mathrm{lo}}}= O(\Delta) \left(\rectime{i+\Delta}{\epsilon_{\mathrm{lo}}}+\widetilde{O}\big(\Delta n2^{\Theta(\Delta)}\big)\right) =O\left(\Delta\right) \rectime{i+\Delta}{\epsilon_{\mathrm{lo}}}+\widetilde{O}\big(n 2^{\Theta(\Delta)}\big). \] For the base case of our recurrence, $\II-\WWhat_d^{(\alpha)}$ is well-conditioned, so we can approximately apply its pseudoinverse using a standard iterative method in time $\rectime{d}{\epsilon_{\mathrm{lo}}}=\widetilde{O}(\mathrm{nnz}(\WWhat_d^{(\alpha)})\log \epsilon_{\mathrm{lo}}^{-1})=\widetilde{O}(n\epsilon_{\mathrm{spar}}^{-2}\log \epsilon_{\mathrm{lo}}^{-1}) =\widetilde{O}\big(n 2^{\Theta(\Delta)}\big)$. This can be folded into the additive $\widetilde{O}\big(n 2^{\Theta(\Delta)}\big)$ term in the recurrence, so it does not significantly affect the time bound.
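As a rough numerical illustration (with $n$ and all hidden constants set to one, an arbitrary simplification), unrolling this recurrence gives a total of about $n\,\Delta^{d/\Delta}2^{\Theta(\Delta)}$, and one can tabulate its base-2 logarithm $f(\Delta) = (d/\Delta)\log_2\Delta + \Delta$ to see how the block size trades off the two terms:

```python
import math

# Unrolled cost of T(i) = O(Delta) * T(i + Delta) + n * 2^{Theta(Delta)} is
# roughly n * Delta^(d/Delta) * 2^Delta; f(Delta) is its base-2 logarithm
# with n and all hidden constants dropped (an arbitrary simplification).

d = 4096                                   # stand-in for log(kappa)

def f(delta):
    return (d / delta) * math.log2(delta) + delta

best = min(f(delta) for delta in range(2, d))
balanced = f(round(math.sqrt(d * math.log2(d))))   # Delta ~ sqrt(d log d)

assert balanced <= 2 * best                # near the optimum
assert f(2) > 4 * best                     # tiny blocks: too many levels
assert f(d - 1) > 4 * best                 # one huge block: 2^Delta blows up
```

This is only a caricature of the analysis, but it shows why a block size around $\sqrt{d\log d}$ balances the recursion-depth term against the per-block $2^{\Theta(\Delta)}$ term.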
To estimate the solution to the recurrence, we note that the depth of the recursion is $\ceil{ d/\Delta}$, and at each stage we multiply by $O(\Delta)$. We can think of this as producing a recursion tree with $O(\Delta)^{\ceil{ d/\Delta}}$ nodes, and we add $\widetilde{O}\big(n 2^{\Theta(\Delta)}\big)$ at each node, so we get that \[\rectime{0}{\epsilon_{\mathrm{lo}}}= O(\Delta)^{\ceil{ d/\Delta}} \widetilde{O}\big(n 2^{O(\Delta)}\big) = nO(\Delta)^{O(d/\Delta)} 2^{O(\Delta)} = n 2^{O\left(\Delta+ \frac{d\log\Delta}{\Delta}\right)}. \] Setting $\Delta=\sqrt{d \log d}=\sqrt{\log\kappa \log \log \kappa}$ approximately balances the two terms in the exponent. Plugging this in and adding the $\widetilde{O}(m)$ for the overhead from the non-recursive parts of the algorithm gives our running time bound of $\widetilde{O}(m)+n 2^{O(\sqrt{\log\kappa \log \log \kappa})}$. \subsubsection{Sparsification} \label{sec:approach_sparsification} Our primary new graph-theoretic tool is a directed notion of spectral sparsifiers, along with efficient techniques for constructing them for an Eulerian graph and its square. As discussed in the introduction, there are seemingly intrinsic problems with many of the notions of directed sparsification that one would propose based on analogies to the undirected case. In particular, both the cut-based and spectral notions have seemingly fatal issues that preclude their use in directed graphs. For the cut-based notion, as shown in Section~\ref{sub:directed_sparse_hard}, good sparsifiers provably do not exist for some graphs. If one instead seeks to generalize the undirected definition of spectral sparsifiers, which requires a sparsifier $H$ of a graph $G$ to obey $(1-\epsilon)\vec{x}^T \mathcal{L}_H \vec{x} \leq \vec{x}^T \mathcal{L}_G \vec{x}\leq(1+\epsilon)\vec{x}^T \mathcal{L}_H \vec{x}$, the problems are perhaps even more severe.
For instance, when $G$ is directed $\mathcal{L}_G$ is no longer symmetric, so it's not clear that it makes sense to use it as a quadratic form $\vec{x}^\top \mathcal{L}_G \vec{x}$, and doing so essentially symmetrizes it and discards the directed structure, since $\vec{x}^\top \mathcal{L}_G \vec{x}=\vec{x}^\top \mathcal{L}_G^\top \vec{x}=\vec{x}^\top \left(\frac{\mathcal{L}_G+\mathcal{L}_G^\top}{2} \right)\vec{x}$. In addition, the resulting quadratic form is not typically PSD, i.e. there often exist $\vec{x}$ for which $\vec{x}^\top\mathcal{L}_G\vec{x}<0$, in which case $G$ would not approximate itself under the definition given for $\epsilon > 0$. One also has to deal with the fact that, unlike in the undirected case, the kernels of directed graph Laplacians are rather subtle objects: for a strongly-connected graph $G$, the kernel of $\mathcal{L}_G=\mvar D-\mvar A^\top$ is given by $\mvar D^{-1}\phi$, where $\phi$ is the stationary distribution of the random walk on $G$. This carries various problematic consequences, including the fact that $\mathcal{L}$ and $\mathcal{L}^\top$ typically have different kernels, and even small changes in the graph can change whether $\mathcal{L} \vec{x}=0$ for a given vector $\vec{x}$. Our approach to this is based on the fact that many of these problems do not occur for Eulerian graphs. In particular, if $\mathcal{L}$ is the Laplacian of an Eulerian directed graph $G$, $\mvar U_\mathcal{L}=(\mathcal{L}+\mathcal{L}^\top)/2$ is the Laplacian of an undirected graph and thus positive semidefinite, and cuts in the corresponding undirected graph are the same as those in $G$. In addition, the kernel of $\mathcal{L}$ is spanned by the all-ones vector and is the same as the kernel of $\mathcal{L}^\top$. 
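These kernel facts are easy to check numerically. The toy sketch below (graph and weights chosen arbitrarily) computes the stationary distribution $\phi$ of a small strongly connected graph by power iteration and verifies that $\mvar D^{-1}\phi$ lies in the kernel of $\mathcal{L}=\mvar D-\mvar A^\top$, while $\vec{1}$ lies in the kernel of $\mathcal{L}^\top$:

```python
# Toy check: for a strongly connected directed graph, ker(L) contains
# D^{-1} phi, where phi is the stationary distribution of the random walk,
# and ker(L^T) contains the all-ones vector.

n = 3
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 2.0, (2, 0): 1.0}   # i -> j : weight

deg = [sum(wt for (i, j), wt in w.items() if i == u) for u in range(n)]
L = [[(deg[u] if u == v else 0.0) - w.get((v, u), 0.0)      # L = D - A^T
      for v in range(n)] for u in range(n)]

# stationary distribution of the walk P_{ij} = w_{ij} / deg_i, by power iteration
phi = [1.0 / n] * n
for _ in range(500):
    phi = [sum(phi[i] * w.get((i, j), 0.0) / deg[i] for i in range(n))
           for j in range(n)]

x = [phi[u] / deg[u] for u in range(n)]                     # candidate kernel vector

Lx = [sum(L[u][v] * x[v] for v in range(n)) for u in range(n)]
assert max(abs(t) for t in Lx) < 1e-12                      # L (D^{-1} phi) = 0

col_sums = [sum(L[u][v] for u in range(n)) for v in range(n)]
assert max(abs(s) for s in col_sums) < 1e-12                # 1^T L = 0

# as a byproduct, rescaling the columns by this vector makes every row and
# column sum vanish, i.e. L * diag(x) is an Eulerian Laplacian
E = [[L[u][v] * x[v] for v in range(n)] for u in range(n)]
assert max(abs(sum(E[u])) for u in range(n)) < 1e-12
assert max(abs(sum(E[u][v] for u in range(n))) for v in range(n)) < 1e-12
```

The final two assertions follow directly from $\mathcal{L}\vec{x}=\vec{0}$ and $\allones^\top\mathcal{L}=\allzeros$, which is exactly why the rescaled matrix becomes Eulerian.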
In addition, the following lemma from~\cite{cohen2016faster} shows that the Laplacian of any strongly connected graph can be turned into an Eulerian Laplacian by applying a diagonal scaling: \begin{lem}[Lemma 1 from~\cite{cohen2016faster}, abridged] \label{lem:stationary-equivalence} Given a directed Laplacian $\mathcal{L}=\mvar D-\mvar A^{\top}\in\mathbb{{R}}^{n\times n}$ whose associated graph is strongly connected, there exists a positive vector $\vec{x} \in\mathbb{{R}}_{>0}^{n}$ (unique up to scaling) such that $\mathcal{L} \cdot \mvar{diag}(\vec{x})$ is an Eulerian Laplacian. Furthermore, $\ker(\mathcal{L}) = \mathrm{span}(\vec{x})$, and $\ker(\mathcal{L}^\top) = \mathrm{span}(\allones)$. \end{lem} Moreover, it was shown in~\cite{cohen2016faster} that one could find a high-precision approximation to this scaling efficiently given access to an Eulerian solver. Motivated by this, we define our notion of sparsification and approximation for Eulerian graphs, and we show that this notion induces a well-behaved definition for other strongly-connected graphs through the Eulerian scaling. As we do not want to neglect the directed structure, we will think of Laplacians as linear operators, not quadratic forms, and we study their sizes through various operator norms. For Laplacians of Eulerian graphs, we use the fact that their symmetrizations are PSD, and our definition of approximation will demand that the difference between the two operators be small relative to the corresponding quadratic form. More precisely, we say that an Eulerian Laplacian $\mathcal{L}_H$ $\epsilon$-approximates another Eulerian Laplacian $\mathcal{L}_G$ if $\big\|\mvar U_{\mathcal{L}_G}^{+/2}(\mathcal{L}_H-\mathcal{L}_G)\mvar U_{\mathcal{L}_G}^{+/2}\big\|_2\leq \epsilon$. We note that this use of $\mvar U_{\mathcal{L}_G}$ is closely related to the $\mathcal{L}_G^\top \mvar U_{\mathcal{L}_G}^+ \mathcal{L}_G$ matrix that appeared in~\cite{cohen2016faster}.
The difference, however, is that we are not trying to directly use this matrix as a symmetric stand-in for our Laplacian; we are working directly with the original (asymmetric) Laplacians and are just using it to help define a matrix norm. To construct sparsifiers of Eulerian graphs with respect to this notion, we follow a similar approach to the one originally used by Spielman and Teng for spectral sparsification, but carefully tailored to the directed setting. The idea is to first partition our graph into well-connected components. Because the cuts in an Eulerian graph match those in its symmetrization, it makes sense to do this partitioning by simply partitioning the corresponding undirected graph into clusters with good expansion. We use existing decomposition techniques to argue that one can find such a partition with a significant fraction of the edges contained in the clusters. We then show a concentration result for asymmetric matrices that says that appropriately sub-sampling within these clusters preserves the relevant structure reasonably well while only keeping a small number of edges relative to the cluster size. In the undirected case, one would just repeat this procedure until the graph is sparse. Where our procedure differs, however, is that we keep track of the directed structure along the way, and ``patch'' the subsampled object to keep it from diverging from what it should be. In particular, the sampling procedure, when applied to an Eulerian graph, will produce a non-Eulerian graph. However, we add additional edges to fix this after every sampling step and use our concentration bounds to show that the patches we add are sufficiently small to not decrease the quality of our approximation. Carefully analyzing this procedure allows us to produce a sparsifier in nearly linear time. However, in order to use our sparsification routine to produce a solver, we also need to sparsify the Laplacian of the square of a graph.
To do this, we could just explicitly form the square and then sparsify it. However, we would like to perform this procedure in time that is nearly-linear in the number of edges of the original graph, whereas explicitly forming the square would cause the running time to grow with the number of edges of the square, which could be substantially larger. To prevent this, we instead show how to work with an implicit representation of the square that we can manipulate more efficiently, similar to \cite{PengS14}. \section{Preliminaries \label{sec:prelim}} First we give notation in Section~\ref{sec:prelim:notation}, and then we give basic information about directed Laplacians in Section~\ref{sec:prelim:laplacian}. Much of this is inherited from~\cite{cohen2016faster}. With this notation in place we give an overview of our approach in Section~\ref{sec:intro:approach}. \subsection{Notation} \label{sec:prelim:notation} \textbf{Matrices:} We use bold to denote matrices and let $\mvar I,\mvar 0 \in \mathbb{{R}}^{n\times n}$ denote the identity matrix and zero matrix respectively. For a matrix $\mvar A$ we use $\mathrm{nnz}(\mvar A)$ to denote the number of non-zero entries in $\mvar A$. When $\mvar A \in \mathbb{{R}}^{n \times n}$ we use $\mathrm{supp}(\mvar A)$ to denote the subset of $[n]$ corresponding to the indices for which at least one of the corresponding row or column in $\mvar A$ is non-zero.\\ \\ \textbf{Vectors:} We use the arrow notation $\vec{x}$ when we wish to highlight that we are representing a vector. We let $\vec{0},\vec{1}\in\mathbb{{R}}^{n}$ denote the all zeros and ones vectors, respectively. We use $\ensuremath{\vec{{1}}}_{i}\in\mathbb{{R}}^{n}$ to denote the $i$-th basis vector, i.e. $(\ensuremath{\vec{{1}}}_{i})_{j}=0$ for $j\neq i$ and $(\ensuremath{\vec{{1}}}_{i})_{i}=1$. Occasionally, when it is obvious from the context, we apply scalar operations to vectors with the interpretation that they are applied coordinate-wise.
As with matrices, we use $\mathrm{supp}(\vec{x})$ to denote the indices of $\vec{x}$ with non-zero entries.\\ \\ \textbf{Positive Semidefinite Ordering:} For symmetric matrices $\mvar A,\mvar B\in\mathbb{{R}}^{n\times n}$ we use $\mvar A\preceq\mvar B$ to denote the condition that $x^{\top}\mvar A x\leq x^{\top}\mvar B x$, for all $x$. We define $\succeq$, $\prec$, and $\succ$ analogously. We call a symmetric matrix $\mvar A\in\mathbb{{R}}^{n\times n}$ positive semidefinite (PSD) if $\mvar A\succeq\mvar 0$. For vectors $x$, we let $\norm x_{\mvar A}\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\sqrt{x^{\top}\mvar A x}$. For asymmetric $\mvar A \in \mathbb{{R}}^{n \times n}$ we let $\mvar U_{\mvar A} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \frac{1}{2} (\mvar A + \mvar A^\top)$ and note that $x^\top \mvar A x = x^\top \mvar A^\intercal x = x^\top \mvar U_\mvar A x$ for all $x \in \mathbb{{R}}^{n}$. \\ \\ \textbf{Operator Norms:} For any norm $\norm{\cdot}$ defined on vectors in $\mathbb{{R}}^{n}$ we define the \emph{seminorm} it induces on $\mathbb{{R}}^{n\times n}$ by $\norm{\mvar A}=\max_{x\neq0}\frac{\norm{\mvar A x}}{\norm x}$ for all $\mvar A\in\mathbb{{R}}^{n\times n}$. When we wish to make clear that we are considering such a ratio we use the $\rightarrow$ symbol; e.g., $\norm{\mvar A}_{\mvar H \to \mvar H}=\max_{x\neq0}\frac{\norm{\mvar A x}_\mvar H}{\norm{x}_\mvar H}$, but we may also simply write $\norm{\mvar A}_{\mvar H} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \norm{\mvar A}_{\mvar H \rightarrow \mvar H}$ in this case. For symmetric positive definite $\mvar H$ we have that $\norm{\mvar A}_{\mvar H \to \mvar H}$ can be equivalently expressed in terms of $\norm{\cdot}_2$ as $\norm{\mvar A}_{\mvar H \rightarrow \mvar H}=\norm{\mvar H^{1/2} \mvar A \mvar H^{-1/2}}_2$. 
Also note that $\norm{\mvar A}_1$ is the maximum $\ell_{1}$ norm of a column of $\mvar A$, and $\norm{\mvar A}_{\infty}$ is the maximum $\ell_{1}$ norm of a row of $\mvar A$. \\ \\ \textbf{Diagonals:} For $x\in\mathbb{{R}}^{n}$ we let $\mvar{diag}(x)\in\mathbb{{R}}^{n\times n}$ denote the diagonal matrix with $\mvar{diag}(x)_{ii}=x_{i}$ and typically use $\mvar X\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\mvar{diag}(x)$. For $\mvar A\in\mathbb{{R}}^{n\times n}$ we let $\mathrm{diag}(\mvar A) \in \mathbb{{R}}^{n}$ denote the vector corresponding to the diagonal of $\mvar A$, i.e. $\mathrm{diag}(\mvar A)_i = \mvar A_{ii}$ and we let $\mvar{diag}(\mvar A)$ denote the diagonal matrix having the same diagonal as $\mvar A$. \\ \\ \textbf{Linear Algebra:} For a matrix $\mvar A$, we let $\mvar A^\dagger$ denote the (Moore-Penrose) pseudoinverse of $\mvar A$. For a symmetric positive semidefinite matrix $\mvar B$, we let $\mvar B^{1/2}$ denote the square root of $\mvar B$, that is the unique symmetric positive semidefinite matrix such that $\mvar B^{1/2} \mvar B^{1/2} = \mvar B$. Furthermore, we let $\mvar B^{\dagger/2}$ denote the pseudoinverse of the square root of $\mvar B$. We use $\ker(\mvar A)$ to denote the nullspace (kernel) of $\mvar A$. We use $\mathrm{span}(x_{1},x_{2},...,x_{k})$ to denote the subspace spanned by $x_{1},...,x_{k}$. For a symmetric PSD matrix $\mvar A$ we let $\lambda_{*}(\mvar A)$ denote the smallest non-zero eigenvalue of $\mvar A$.\\ \\ \noindent\textbf{Misc:} We let $[n]\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\{1,...,n\}$. For $\mvar A\in\mathbb{{R}}^{n\times n}$, let $\kappa(\mvar A)\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\norm{\mvar A}_{2}\cdot\norm{\mvar A^{\dagger}}_{2}$ denote the condition number of $\mvar A$.
For symmetric PSD matrices $\mvar A$ and $\mvar B$ with the same kernel, let $\kappa(\mvar A,\mvar B) \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \kappa(\mvar A^{\dagger/2} \mvar B \mvar A^{\dagger/2})$ denote their relative condition number (e.g. if $\alpha \mvar B \preceq \mvar A \preceq \beta \mvar B$ then $\kappa(\mvar A,\mvar B) \leq \beta / \alpha$). Note that our use of pseudoinverse rather than inverse in these definitions is non-standard but convenient. \subsection{Directed Laplacians} \label{sec:prelim:laplacian} Here we provide notation regarding directed Laplacians and review basic facts regarding these matrices that were proved in~\cite{cohen2016faster}. We begin with some basic definitions and notation regarding Laplacians:\\ \\ \textbf{Directed Laplacian:} A matrix $\mathcal{L}\in\mathbb{{R}}^{n\times n}$ is called a \emph{directed Laplacian} if (1) its off diagonal entries are non-positive, i.e. $\mathcal{L}_{i,j} \leq 0$ for all $i \neq j$, and (2) it satisfies $\allones^\top \mathcal{L} = \allzeros$, i.e. $\mathcal{L}_{ii}=-\sum_{j\neq i}\mathcal{L}_{ji}$ for all $i$.\\ \\ \textbf{Associated Graph:} To every directed Laplacian $\mathcal{L} \in \mathbb{{R}}^{n \times n}$ we associate a graph $G_{\mathcal{L}}=(V,E,w)$ with vertices $V = [n]$, and edges $(i,j)$ of weight $w_{ij} =-\mathcal{L}_{ji}$, for all $i\neq j\in[n]$ with $\mathcal{L}_{ji}\neq0$. Occasionally we write $\mathcal{L}=\mvar D-\mvar A^{\top}$ to denote that we decompose $\mathcal{L}$ into the diagonal matrix $\mvar D$ (where $\mvar D_{ii}=\mathcal{L}_{ii}$ is the out-degree of vertex $i$ in $G_{\mathcal{L}}$) and non-negative matrix $\mvar A$ (which is the weighted adjacency matrix of $G_{\mathcal{L}}$, with $\mvar A_{ij}=w_{ij}$ if $(i,j)\in E$, and $\mvar A_{ij}=0$ otherwise).\\ \\ \textbf{Eulerian Laplacian:} A matrix $\mathcal{L}$ is called an \emph{Eulerian Laplacian} if it is a directed Laplacian with $\mathcal{L}\vec{1} = \vec{0}$.
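These definitions can be exercised on a tiny example. The sketch below (edge weights arbitrary) builds $\mathcal{L}=\mvar D-\mvar A^\top$ from an edge list, confirms that $\allones^\top \mathcal{L} = \allzeros$ always holds, and that $\mathcal{L}\vec{1}=\vec{0}$ holds precisely for the example whose weighted in-degrees equal its out-degrees:

```python
# Build L = D - A^T from a weighted edge list and check the defining
# properties: columns always sum to zero; rows sum to zero iff the graph
# is Eulerian (weighted in-degree = weighted out-degree at every vertex).

def laplacian(n, edges):                   # edges: {(i, j): w_ij}
    deg = [sum(wt for (i, j), wt in edges.items() if i == u) for u in range(n)]
    return [[(deg[u] if u == v else 0.0) - edges.get((v, u), 0.0)
             for v in range(n)] for u in range(n)]

cycle      = {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0}   # Eulerian
unbalanced = {(0, 1): 1.0, (1, 2): 2.0, (2, 0): 1.0}   # not Eulerian

for edges, eulerian in [(cycle, True), (unbalanced, False)]:
    L = laplacian(3, edges)
    col_ok = all(abs(sum(L[u][v] for u in range(3))) < 1e-12 for v in range(3))
    row_ok = all(abs(sum(L[u])) < 1e-12 for u in range(3))
    assert col_ok                          # 1^T L = 0 for every directed Laplacian
    assert row_ok == eulerian              # L 1 = 0 exactly in the Eulerian case
```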
Note that $\mathcal{L}$ is an \emph{Eulerian Laplacian} if and only if its associated graph is Eulerian.\\ \\ \textbf{(Symmetric) Laplacian}: A matrix $\mvar U\in\mathbb{{R}}^{n\times n}$ is called a \emph{symmetric} or \emph{undirected Laplacian} or just a \emph{Laplacian} if it is symmetric and a directed Laplacian. Note that the graph associated with an undirected Laplacian is undirected, i.e. for every forward edge there is a backward edge of the same weight. Given a symmetric Laplacian $\mvar U = \mvar D-\mvar A$, we let its \textit{spectral gap} be defined as the smallest nonzero eigenvalue of $\mvar D^{-1/2} \mvar U \mvar D^{-1/2}$, i.e. $\lambda_2(\mvar D^{-1/2} \mvar U \mvar D^{-1/2}) = \min_{x \perp \ker(\mvar D^{-1/2}\mvar U\mvar D^{-1/2}), \norm{x}=1} x^{\top} \mvar D^{-1/2} \mvar U \mvar D^{-1/2} x$.\\ \\ \textbf{Running Times: } Our central object is almost always a directed Laplacian $\mathcal{L} = \mvar D - \mvar A^{\top} \in \mathbb{{R}}^{n \times n}$, where $m = \mathrm{nnz}(\mvar A)$ and $U \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \max_{i,j} |\mvar A_{ij}|/\min_{i,j : \mvar A_{ij} \neq 0} |\mvar A_{ij}|$. We use $\widetilde{O}(\cdot)$ notation to suppress factors polylogarithmic in $n$, $m$, $U$, and $\kappa$, the natural condition number of the particular problem. \subsection{Previous Work} \label{sec:previous_work} In this section, we briefly review some of the previous work related to our results and techniques. Given the extensive prior research on Markov chains, spectral graph theory, sparsification, solving general and Laplacian linear systems, and computing PageRank, we do not attempt to give a comprehensive overview of the literature; instead we simply describe the work that most directly relates to or motivates this paper. \subsubsection{Directed Laplacian Systems, Stationary Distributions, and PageRanks} The most direct precursor to this work is a recent paper by a subset of the authors~\cite{cohen2016faster}.
As mentioned above, it showed that, by exploiting linear algebraic properties of directed Laplacians, one could obtain faster algorithms for a wide range of problems involving directed random walks. Prior to this paper, it seemed quite possible that the similarities between directed and undirected Laplacians were largely syntactic, and that there was no way to use the structure of directed Laplacians or random walks to obtain asymptotically faster algorithms. In particular, despite extensive theoretical and applied work in computer science, mathematics, statistics, and numerical scientific computing, all algorithms that we are aware of prior to~\cite{cohen2016faster} for obtaining high-quality\footnote{By high-quality, we mean that the algorithm should be able to find a solution with error $\epsilon$ in time that is sub-polynomial in $1/\epsilon$, i.e. $(1/\epsilon)^{o(1)}$. For PageRank there were some known techniques for achieving better dependence on $n$ and $m$ at the expense of a polynomial dependence on~$1/\epsilon$~\cite{AndersenCL07,ChungZ10,ChungS13,ChungS14}.} solutions for directed Laplacian systems, stationary distributions, or personalized PageRank vectors either have a polynomial dependence on a condition number or related parameter (such as a random walk's mixing time or PageRank's restart probability), or they apply a general-purpose linear algebra routine and thus run in at least the $\Omega(\min(mn,n^\omega))$ time currently required to solve arbitrary linear systems. By showing that this was not the case,~\cite{cohen2016faster} provided the first indication that one could actually use the structure of directed Laplacian systems to accelerate their solution, which provided a strong motivation to see how much of an improvement was possible.
It also created hope that the recently successful research program in building and applying fast algorithms for solving (symmetric) Laplacian systems~\cite{SpielmanTengSolver:journal,KoutisMP10,KoutisMP11,KelnerOSZ13,lee2013efficient,PengS14,KyngLPSS16} could be applied to give more direct improvements to running times for solving combinatorial optimization problems on directed graphs. In addition to motivating the search for faster Laplacian solvers,~\cite{cohen2016faster} provided a set of reductions that we will directly apply in this paper. In order to prove its results, \cite{cohen2016faster} showed how to reduce a range of algorithmic questions about directed walks, such as computing the stationary distribution, hitting and commute times, escape probabilities, and personalized PageRank vectors, to solving a small number of linear systems in directed Laplacians. It turns out that it is easier to work with such systems in the special case where the graph is Eulerian. One of the main technical tools in~\cite{cohen2016faster} is a reduction to this special case. They did this by giving an iterative method that solved a general Laplacian system by solving a small number of systems in which the graph is Eulerian. Together, this showed that to solve the aforementioned problems, it suffices to give a solver for Eulerian graphs, and that this only incurs a factor of $\widetilde{O}(1)$ overhead. It then obtained all of its results by constructing an Eulerian solver that runs in time $\widetilde{O}(m^{3/4}n+mn^{2/3})$. In this paper we construct an Eulerian solver that runs in time $m^{1+o(1)}$ and then just directly apply these reductions to obtain our other results. However, while \cite{cohen2016faster} opened the door for further algorithmic improvements in analyzing Markov chains, the arguments in it provided little evidence that the running time could be improved to anything approaching what was known in the undirected case.
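As an aside, to make one of these reduced problems concrete: personalized PageRank with restart probability $\beta$ and restart distribution $\vec{s}$ solves the linear system $(\mvar I - (1-\beta)\mvar W)\vec{\pi} = \beta \vec{s}$, where $\mvar W$ is the column-stochastic walk matrix. The sketch below is our own toy illustration of the naive fixed-point iteration, whose error decays like $(1-\beta)^t$, i.e. polynomially in $1/\beta$; this is precisely the kind of restart-probability dependence the solvers discussed here avoid.

```python
# Sketch (ours): personalized PageRank as a linear system.  With a
# column-stochastic walk matrix W (W[v][u] = Pr(u -> v)) and restart
# probability beta, the PageRank vector solves
#     (I - (1 - beta) W) pi = beta * s.
# The naive iteration below converges at rate (1 - beta) per step, a
# polynomial dependence on 1/beta that high-quality solvers avoid.

def pagerank(W, s, beta, iters=200):
    n = len(s)
    pi = s[:]                       # start from the restart distribution
    for _ in range(iters):
        pi = [beta * s[v] + (1 - beta) * sum(W[v][u] * pi[u] for u in range(n))
              for v in range(n)]
    return pi

# Two-state chain: 0 -> 1 and 1 -> 0 deterministically.
W = [[0.0, 1.0],
     [1.0, 0.0]]
pi = pagerank(W, s=[1.0, 0.0], beta=0.15)
print(pi)          # heavier mass on vertex 0, entries sum to 1
```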
Indeed, while the techniques in it suggested that it might be possible to obtain further improvements, even the most optimistic interpretations of the structural results in \cite{cohen2016faster} only gave hope for achieving running times of roughly $\widetilde{O} (m \sqrt{n})$. This would make it no faster than some of the existing algorithms that use undirected Laplacian solvers to solve problems on directed graphs, such as the $\widetilde{O}(m^{10/7})$ algorithms for unit cost maximum flow~\cite{DBLP:conf/focs/Madry13,DBLP:journals/corr/Madry16} and shortest path with negative edge lengths~\cite{DBLP:journals/corr/CohenMSV16}, or the $\widetilde{O} (m \sqrt{n})$ type bounds for minimum cost flow~\cite{LeeS14}. As such, while this would provide better results for the applications to Markov chains, it would rule out the hope of obtaining improved results for these directed problems by replacing the undirected solver with a directed one. Intuitively, the solver in~\cite{cohen2016faster} worked by showing how one could use the existence of a fast undirected solver to solve directed Laplacians. For a directed Eulerian Laplacian $\mathcal{L}$, it showed that the symmetrized matrix $\mvar U=(\mathcal{L}+\mathcal{L}^\top)/2$ is the Laplacian of an undirected graph, and that the symmetric matrix $\mathcal{L}^\top \mvar U^+ \mathcal{L}$ is, in a certain sense, reasonably well approximated by $\mvar U$. Given a linear system $\mathcal{L}\vec{x}=\vec{b}$, one could then form the equivalent system $\mathcal{L}^\top \mvar U^+ \mathcal{L}\vec{x}=\mathcal{L}^\top \mvar U^+ \vec{b}$ and use a fast undirected Laplacian solver to apply $\mvar U^+$. One could then hope that the fact that the matrix on the left is somewhat well-approximated by $\mvar U$ would imply that $\mvar U^+$ is a sufficiently good preconditioner for it to yield an improved running time.
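The first of these structural facts is easy to verify numerically. The sketch below (our own illustration, not code from the paper) builds a small Eulerian directed Laplacian and checks that $\mvar U = (\mathcal{L}+\mathcal{L}^\top)/2$ is an undirected Laplacian: symmetric, with zero row sums and nonpositive off-diagonal entries.

```python
# Sketch (ours): for an Eulerian directed Laplacian L, the symmetrization
# U = (L + L^T)/2 is the Laplacian of an undirected graph: it is symmetric,
# its rows sum to zero, and its off-diagonal entries are nonpositive.

def directed_laplacian(n, edges):          # L = D - A^T
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w
        L[v][u] -= w
    return L

def symmetrize(L):
    n = len(L)
    return [[(L[i][j] + L[j][i]) / 2 for j in range(n)] for i in range(n)]

# Eulerian example: two opposite directed triangles on {0, 1, 2} with
# different weights, so in-weight = out-weight = 3 at every vertex.
L = directed_laplacian(3, [(0, 1, 2.0), (1, 2, 2.0), (2, 0, 2.0),
                           (0, 2, 1.0), (2, 1, 1.0), (1, 0, 1.0)])
U = symmetrize(L)
print(U)
```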
It turned out that, while this would actually be the case in exact arithmetic, numerical issues provided a legitimate obstruction. This necessitated a more involved scheme, which gave a slightly slower running time of $\widetilde{O}(m^{3/4}n+mn^{2/3})$, rather than the roughly $\widetilde{O}(n^{2/3} m)$ running time that would have been achieved with exact arithmetic. The way this algorithm works provides a good intuitive explanation for why one would not expect it to give a solver yielding substantial improvements for combinatorial ``Laplacian Paradigm'' algorithms that rely on undirected solvers. At its root, the solver from~\cite{cohen2016faster} works by trying to find the right way to ignore the directed structure and solve the system with an undirected solver; thus it is on essentially the same footing as the algorithms it would hope to improve. The obstructions it faces are rooted in the fact that directed Laplacians are fundamentally not very well-approximated by undirected ones. In essence, the difference between the solver in this paper and the one presented in~\cite{cohen2016faster} is that, instead of figuring out how to properly neglect the directed structure, the solver we present here intrinsically works with asymmetric (directed) objects, and redevelops the theory from the ground up to properly capture them. \subsubsection{Directed Graph Sparsification and Approximation} \label{sub:directed_sparse_hard} While sparsification of undirected graphs has been extensively studied~\cite{BenczurK96,SpielmanT11,FungHHP11,SpielmanS08:journal, BatsonSS12,BatsonSST13,ZhuLO15,LeeS15}, there has been very little success extending the notion to directed graphs. In fact, it was not even clear that there should exist a useful definition under which directed graphs should have sparsifiers with a subquadratic number of edges, and for many of the natural definitions one would propose, sparsification is provably impossible.
For instance, a natural first attempt would be to try to generalize the classical notion of cut sparsification for undirected graphs~\cite{karger1994random,BenczurK96}. Given any weighted undirected graph $G$, Benczur and Karger showed that one could construct a new graph $H$ with at most $O(n\log n/\epsilon^2)$ edges such that the value of every cut in $G$ is within a multiplicative factor of $1\pm \epsilon$ of its value in $H$. While this definition makes sense for directed graphs as well, there is no analogous existence result. Indeed it is not hard to construct directed graphs for which any such approximation must have $\Omega(n^2)$ edges. \input{diag.tex} For example, consider the directed complete bipartite graph $K$ on the vertex set $A \cup B$ with all edges directed from $A$ to $B$. For each pair of $a \in A$ and $b \in B$, the directed cut \begin{equation}\label{eq:cut} E\left( \{a\} \cup B \setminus \{b\}, \{b\} \cup A \setminus \{a\}\right) \end{equation} contains only the edge $a \rightarrow b$. (See Figure~\ref{fig:bipartite_cut}.) Removing this edge from the graph would change the value of this cut from $1$ to $0$, resulting in an infinite multiplicative error. Any graph that multiplicatively approximates the cuts in $K$ must have $|E(B,A)|=0$, so it must be supported on a subset of the edges of $K$, and the above then shows that such a graph must contain the edge $a\rightarrow b$ for every $a\in A$ and $b\in B$. It thus follows that any graph that approximately preserves every cut in $K$ must contain all $|A||B|$ potential edges, so $K$ has no nontrivial sparsifier under this definition. It would therefore seem that any attempt at reducing the number of edges in a directed graph while preserving the combinatorial structure is doomed to fail. 
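The counterexample is small enough to verify exhaustively. The following sketch (our own illustration) checks that each directed cut of the form \eqref{eq:cut} contains exactly the single edge $a \rightarrow b$, so any subgraph that multiplicatively approximates all directed cuts must keep every one of the $|A||B|$ edges.

```python
# Sketch (ours): verify the counterexample from the text.  In the complete
# bipartite graph with all edges directed from A to B, the directed cut
# E({a} u (B \ {b}), {b} u (A \ {a})) contains exactly one edge, a -> b,
# so a cut-approximating subgraph must keep all |A||B| edges.

def directed_cut(edges, S, T):
    """Edges of the cut from S to T: tail in S, head in T."""
    return [(u, v) for (u, v) in edges if u in S and v in T]

A = {"a0", "a1", "a2"}
B = {"b0", "b1", "b2"}
edges = [(a, b) for a in A for b in B]     # all edges directed A -> B

for a in A:
    for b in B:
        S = {a} | (B - {b})
        T = {b} | (A - {a})
        cut = directed_cut(edges, S, T)
        assert cut == [(a, b)]             # the cut isolates one edge
print("each of the", len(edges), "cuts contains exactly one edge")
```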
However, Eulerian graphs present a natural setting that circumvents this: because cuts in Eulerian graphs have the same amount of edge weight going in each direction, the bipartite graph counterexamples above are precluded. This balancedness allows one to incorporate sparsification-based tools for flows and routings in this setting to solve combinatorial flow and cut problems quickly on Eulerian graphs~\cite{EneMPS16}. Most closely related to our notion of sparsification of directed graphs is the work by Chung on Cheeger's inequality for directed graphs~\cite{Chung05}. This result transforms the graph into an Eulerian graph $G$ in a way identical to how we obtain Eulerian graphs~\cite{cohen2016faster}: by rescaling each edge weight by the probability of its source vertex in a stationary distribution. It then relates the convergence rate of random walks on $G$ to the eigenvalues of the undirected graph obtained by removing directions on all edges. Specifically, if the Eulerian directed Laplacian is $\mathcal{L}$, this symmetrization is $\left(\mathcal{L} + \mathcal{L}^{\top}\right)/2.$ Since the eigenvalues of the symmetrization of an Eulerian graph give information about random walks on the original graph, it might be tempting to define approximation for Eulerian graphs in terms of whether their symmetrizations approximate each other in the conventional positive semidefinite sense. For our purposes, we require (and obtain) a substantially stronger notion of approximation that preserves much of the directed structure that would be erased by symmetrizing. The reason why we need a stronger notion of approximation is that we want graphs that approximate each other under this notion to be good preconditioners of one another.
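The rescaling used by Chung (and in the reduction of~\cite{cohen2016faster}) can be sketched concretely. In the following Python illustration (ours), reweighting edge $u \to v$ to $\pi_u P_{uv}$, where $\pi$ is the stationary distribution of the transition matrix $P$, makes every vertex's in-weight equal its out-weight, i.e. makes the graph Eulerian.

```python
# Sketch (ours): the Eulerian rescaling.  Given transition probabilities P
# (rows sum to 1) with stationary distribution pi (pi P = pi), reweighting
# edge u -> v to pi[u] * P[u][v] gives out-weight(u) = pi[u] and
# in-weight(v) = sum_u pi[u] P[u][v] = pi[v], so the graph is Eulerian.

def stationary(P, iters=500):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):                 # power iteration on pi P = pi
        pi = [sum(pi[u] * P[u][v] for u in range(n)) for v in range(n)]
    return pi

P = [[0.0, 1.0, 0.0],                      # a small aperiodic chain
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]
pi = stationary(P)
w = [[pi[u] * P[u][v] for v in range(3)] for u in range(3)]

out_w = [sum(w[u]) for u in range(3)]
in_w  = [sum(w[u][v] for u in range(3)) for v in range(3)]
print(out_w, in_w)                         # equal coordinate-wise
```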
In contrast, if one defines approximation according to whether the symmetrizations approximate one another, one would have to say that the length $n$ undirected cycle and the length $n$ directed cycle approximate each other, since they are both Eulerian and have the same undirected symmetrization. However, they are not good preconditioners of one another, and using one as a substitute for the other would incur very large losses in our applications. Under the notion of approximation we introduce in this paper, these graphs differ by a factor of $\Omega(n^2)$. \subsubsection{Laplacian System Solvers} \label{sec:prev_solvers} Our algorithms build heavily on the literature for solving undirected Laplacian systems. Since undirected Laplacians are special cases of directed Laplacians, any directed solver will yield an undirected solver when given a symmetric input. It is thus helpful to consider what undirected solver we would like our method to resemble in this case. There are now a fairly large number of reasonably distinct algorithms for solving such systems, and we believe that several of them provide a template that could be turned into a working directed solver. Of these, the one that our solver most closely resembles is the parallel solver by Peng and Spielman~\cite{PengS14}, which we will briefly summarize here. To simplify the notation and avoid having to keep track of degree normalizations, we only consider regular graphs when giving the intuition behind the algorithm. Suppose that we are given a $d$-regular undirected graph $G$ with Laplacian $\mathcal{L} =d\mvar I-\mvar A=d(\mvar I-\WWhat)$, where $\WWhat=\mvar A/d$ has $\|\WWhat\|<1$ on $\ker(\mathcal{L})^\perp$. For simplicity, in the equations that follow, we restrict our attention to the space perpendicular to the kernel of $\mathcal{L}$.
With this convention, the algorithm of \cite{PengS14} is then motivated by the series expansion \begin{equation}\label{eq:series_expansion} (\mvar I-\WWhat)^{-1}=\sum_{i\geq 0} \WWhat^i = \prod_{k\geq 0}\left( \mvar I + \WWhat^{2^k}\right), \end{equation} which is a matrix version of the standard scalar identity $1/(1-x)=1+x+x^2+x^3+\dots=(1+x)(1+x^2)(1+x^4)\cdots$. If $\lambda$ is the smallest nonzero eigenvalue of $\mvar I-\WWhat$, then truncating this product at $k=\Theta(\log 1/\lambda)$ yields a constant relative error, which can be made arbitrarily small by further increasing $k$. Hence if $\lambda > 1/\mathrm{poly}(n)$, we obtain a small error by multiplying the first $O(\log n)$ terms of the product. This seems to suggest a good algorithm for solving a system $\mathcal{L} \vec{x}=\vec{b}$: simply compute $\mvar I+\WWhat^{2^k}$ for $k=0,\dots,t=O(\log n)$ and then return $\frac{1}{d}(\mvar I+\WWhat^{2^0})\dotsm (\mvar I+\WWhat^{2^t})\vec{b}$. Unfortunately, this algorithm (implemented naively) would be too slow. As $k$ grows, $\WWhat^{2^k}$ quickly becomes dense, so computing it requires repeatedly squaring dense matrices, which takes time $O(n^\omega)$. To deal with this, their algorithm instead replaces these matrices with sparse approximations of them. Peng and Spielman showed that given a graph with $n$ vertices and $m$ edges, one can compute a sparse approximation of the requisite squared matrix in nearly linear time. Making this idea work requires care, since in general it is not true that the product of two matrices will be well approximated by the product of their approximations. For positive semidefinite matrices, however, there is a variant of this statement that holds if one takes the products symmetrically: if $\mvar A$ and $\mvar B$ are PSD and $\mvar A$ is a good approximation of $\mvar B$, then for any matrix $\mvar V$, $\mvar V^\top \mvar A \mvar V$ is a good approximation of $\mvar V^\top \mvar B \mvar V$.
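As a parenthetical numerical check (ours), the scalar identity underlying Eq.~\eqref{eq:series_expansion} shows why truncating the product so early suffices: a product truncated at $k=t$ reproduces the power series through degree $2^{t+1}-1$, so for a contraction factor $x$ with $|x|<1$ the relative error $x^{2^{t+1}}$ decays doubly exponentially in $t$.

```python
# Sketch (ours): the scalar identity behind the series expansion,
#     1/(1-x) = (1+x)(1+x^2)(1+x^4)...
# Truncating the product at k = t reproduces the power series through
# degree 2^(t+1) - 1, so for |x| < 1 the relative error is x^(2^(t+1)),
# decaying doubly exponentially in t.

def truncated_product(x, t):
    prod = 1.0
    for k in range(t + 1):
        prod *= 1.0 + x ** (2 ** k)
    return prod

x = 0.9                                    # contraction factor, like ||W|| < 1
exact = 1.0 / (1.0 - x)
for t in [2, 4, 6]:
    approx = truncated_product(x, t)
    print(t, abs(exact - approx) / exact)  # relative error plummets with t
```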
This led the authors of~\cite{PengS14} to work with a more stable symmetric version of the series described above, which allowed them to obtain their result. This turns out to be a reasonably convenient template for our directed solver. In particular, it has fewer moving parts than many of the other methods, and it does not require constructing combinatorial objects, like low-stretch spanning trees. Instead it directly relies on sparsification, which is our main new technical tool for directed graphs. Unfortunately, we cannot directly apply the methods described above, since the symmetric product constructions that are used to control the error are no longer available for the (asymmetric) Laplacians of directed graphs. Moreover, the strong notions of graph approximation and positive semidefinite inequalities that facilitate the analysis for the undirected solver are unavailable in the directed setting. As such, we end up having to work with weaker error guarantees, and correct the extra error they introduce using a more involved iterative method. \section{Reducing the Condition Number} \label{sec:reduction} In this section, we present a reduction from the problem of (approximately) solving Eulerian Laplacians to solving Eulerian Laplacians that are at most polynomially ill-conditioned, with a logarithmic dependence on condition number. This reduces the overall dependency on $\kappa$ to logarithmic instead of sub-polynomial. The main result of this section is: \begin{theorem} \label{thm:reduction} There exists a procedure $\textsc{CrudeSolveIllConditioned}$ which, when given an $n \times n$ Eulerian Laplacian $\mathcal{L}$ with $m$ non-zeros such that the condition number of $\mathcal{L} + \mathcal{L}^{\top}$ is bounded by $\kappa$, returns a crude approximate solution $\vec{x}$ to $\mathcal{L} \vec{x} = \vec{b}$ in the sense that \[ \norm{\vec{x}-\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}} \leq \frac{1}{2} \norm{\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}}.
\] This procedure performs only $O(\log(n \kappa))$ calls to an approximate Eulerian Laplacian solver, each on $O(n)$ vertices with $O(m)$ nonzero entries, with error parameter $\frac{1}{n^{O(1)}}$ and each with condition number $n^{O(1)}$, plus $O(m \log(n \kappa))$ additional work. Furthermore, if the Eulerian solver is an implicit polynomial, the overall procedure is an implicit polynomial as well. \end{theorem} Combining this routine with iterative refinement / preconditioned Richardson iteration as stated in Lemma~\ref{lem:precond_richardson} implies that one can obtain an $\epsilon$-approximate solution with $O(\log(n \kappa) \log(1/\epsilon))$ such solver calls and $O(m \log(n \kappa) \log(1/\epsilon))$ additional work. We note that this construction can also be used to reduce the $\log{\kappa}$ dependencies in~\cite{PengS14} to $\log{n}$. We include this construction here because it removes the $\exp(\sqrt{\log{\kappa}})$ dependencies in our algorithms and replaces them with similar terms in $n$. Since problems on directed graphs can be highly ill-conditioned even when the edge weights are all small integers, this can potentially be a significant improvement. \begin{remark*} We believe that developing combinatorial techniques to mitigate the effects of ill-conditioning is an important endeavor in both theory and practice, and that the scheme we include reflects only partial progress. While the technique we describe is sufficient to improve our running time bounds, we conjecture that far better reduction schemes are possible. Some ways in which an ideal such result could improve on the one presented here include: \begin{enumerate} \item The overhead in work and depth could be as low as $O(1)$: the different scales would only have minimal overlap, and processing could occur in parallel. \item The scheme could only need to manipulate numbers with double or quadruple the precision of $n/\epsilon$, instead of involving words whose sizes are $O(1)$ bigger. 
Treating these increases with similar emphasis as constants in approximation algorithms may be a more realistic model of the costs of floating point arithmetic. \item As highly ill-conditioned systems often arise under floating point arithmetic (due to the exponent), such reductions should ideally be robust to floating- instead of fixed-point arithmetic. \end{enumerate} Systematically developing strong versions of such ``condition number reducing reductions'' is potentially a challenging but important direction for future work. \end{remark*} The main idea of our reduction is to collar the Laplacian to a fixed ``scale'' of edge weights, contracting edges above the scale while adding a smaller multiple of a clique to bound the lower eigenvalue. The algorithm then uses the Laplacian at a given scale to route demand between vertices connected at about that scale. Our algorithm is defined around the following contraction and projection operators: \begin{defn} \label{def:contract} Given a partition of $[n]$ into $k$ sets $S_1$, $S_2$, \ldots, $S_k$, \begin{enumerate} \item the contraction operator $\mvar C$ is the linear map $\mathbb{{R}}^n \to \mathbb{{R}}^k$ mapping $\ensuremath{\vec{{1}}}_i$ to $\ensuremath{\vec{{1}}}_j$ if $i \in S_j$; \item $\operatorname{Proj}$ is the orthogonal projection onto the kernel of $\mvar C$. \end{enumerate} \end{defn} Pseudocode based on these operators is in Figure~\ref{fig:reduction}. \begin{figure}[ht] \begin{algbox} $\textsc{CrudeSolveIllConditioned}(\mathcal{L}, \vec{b})$ \\ \textbf{Input:} Eulerian Laplacian $\mathcal{L} = \DD - \AA^{\top}$, $\vec{b} \perp \allones$ \\ \textbf{Output:} A crude approximate solution $\vec{x}$ to $\mathcal{L} \vec{x} = \vec{b}$ in the sense that $\norm{\vec{x}-\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}} \leq \frac{1}{2} \norm{\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}}$. \begin{enumerate} \item Let $r = 1000 n^{10}$.
\item Let $w^{(0)}$ be the smallest edge weight in a maximal spanning tree of $\mvar U_{\mathcal{L}}$. \item Let $\vec{x}^{(0)} = 0$, $\vec{b}^{(0)} = b$, $i = 0$. \item Loop: \begin{enumerate} \item Let $\mvar C^{(i)}$ and $\operatorname{Proj}^{(i)}$ be the contraction and projection operators as given in Definition~\ref{def:contract} for the connected components in the graph of edges in $\mvar U_{\mathcal{L}}$ with weight $\geq w^{(i)}$. \item Let $\vec{z}^{(i)} = {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I \right )^{-1} \mvar C^{(i)} \vec{b}^{(i)}$. (A $\frac{1}{r}$-approximate solver may be substituted for the inverse). \label{step:approxSolveProjected} \item Let $\vec{x}^{(i+1)} = \vec{x}^{(i)} + \vec{z}^{(i)}$. \item \label{step:bNext} Let $\vec{b}^{(i+1)} = \operatorname{Proj}^{(i)} (\vec{b}^{(i)} - \mathcal{L} \vec{z}^{(i)})$. \item Let $w'^{(i)}$ be the smallest edge weight in $\mvar U_{\mathcal{L}}$ that is $\geq w^{(i)}$. If none, stop looping. \item Let $w^{(i+1)} = 2 w'^{(i)}$. \item $i \leftarrow i+1$. \end{enumerate} \item Return $\vec{x}^{(i+1)}$. \end{enumerate} \end{algbox} \caption{Reduction to solving well-conditioned Eulerian systems} \label{fig:reduction} \end{figure} Our proofs crucially rely on the following fact that relates the ordinary (arithmetic) and harmonic symmetrizations of Eulerian Laplacians. It is immediate from Lemma 13 of~\cite{cohen2016faster}, and one side of it is also present as Lemma~\ref{lem:sym-hsm}. \begin{lemma} \label{lem:symBound} Let $\mathcal{L}$ be an Eulerian Laplacian and $\mvar U_\mathcal{L}$ its symmetrization $\frac{\mathcal{L}+\mathcal{L}^\top}{2}$. Then \begin{equation*} \mvar U_\mathcal{L} \preceq \mathcal{L}^\top {\mvar U_\mathcal{L}}^\dagger \mathcal{L} \preceq 2(n-1)^2 \mvar U_\mathcal{L}. 
\end{equation*} \end{lemma} We also need a simple technical lemma about the extreme values in solutions to Eulerian Laplacian systems in terms of the support of the demand vector. \begin{lem} \label{lem:voltages} Let $\mathcal{L}$ be a connected Eulerian Laplacian and $\vec{x}$ and $\vec{b}$ be nonzero vectors such that $\mathcal{L} \vec{x} = \vec{b}$. Then the maximum value of the entries of $\vec{x}$, as well as the minimum value, must be attained on $\operatorname{supp}(\vec{b})$. \end{lem} \begin{proof} Consider the set $S$ of all vertices at which $\vec{x}$ attains its maximum value $v$. If this set contains every vertex, it automatically overlaps $\operatorname{supp}(\vec{b})$. Otherwise, we have: \[ \sum_{i \in S} \vec{b}_i = \left ( \sum_{i \in S, j \not \in S} w_{ji} \vec{x}_j \right ) - \left ( \sum_{i \in S, j \not \in S} w_{ij} \vec{x}_i \right ) < v \left ( \sum_{i \in S, j \not \in S} w_{ji} \right ) - v \left ( \sum_{i \in S, j \not \in S} w_{ij} \right ) = 0. \] Here the last equality is due to $\mathcal{L}$ being Eulerian: the total weight entering and leaving $S$ is equal. The sum of the entries from $S$ in $\vec{b}$ is strictly negative and thus nonzero, so $S$ must overlap with the support of $\vec{b}$. The case of the minimum value is analogous. \end{proof} Next, we show that if the demands for an Eulerian Laplacian system are supported on a well-connected subset of the graph, perturbing the system by a small multiple of the identity matrix cannot induce too much error. \begin{lem} \label{lem:onepiece} Let $\mathcal{L}$ be a connected Eulerian Laplacian with corresponding undirected Laplacian $\mvar U_\mathcal{L}$, and let $S$ be a subset of the vertices such that any two vertices in $S$ are connected by a path in $\mvar U_\mathcal{L}$ containing only edges of weight at least $\beta$. Let $\vec{b}$ be a vector in the image of $\mathcal{L}$ supported on $S$, and let $\alpha > 0$.
Then \begin{equation*} \normFull{(\mathcal{L} + \alpha \mvar I)^{-1} \vec{b} - \mathcal{L}^\dagger \vec{b}}_{\mvar U_\mathcal{L}} \leq n \sqrt{\frac{\alpha}{\beta}} \normFull{\mathcal{L}^\dagger \vec{b}}_{\mvar U_\mathcal{L}}. \end{equation*} \end{lem} \begin{proof} We define $\vec{x} = \mathcal{L}^\dagger \vec{b}$ and $\vec{y} = (\mathcal{L} + \alpha \mvar I)^{-1} \vec{b}$. Writing $\vec{y}$ as $\vec{x} + (\mathcal{L} + \alpha \mvar I)^{-1} (\vec{b} - (\mathcal{L} + \alpha \mvar I) \vec{x})$ gives: \[ \vec{y} - \vec{x} = \left(\mathcal{L} + \alpha \mvar I \right)^{-1} \left(\mathcal{L} \vec{x} - \left(\mathcal{L} + \alpha \mvar I\right) \vec{x}\right) = -\alpha \left(\mathcal{L} + \alpha \mvar I \right)^{-1} \vec{x}, \] which when substituted into the $\mvar U_\mathcal{L}$ norm gives: \[ \normFull{\vec{y} - \vec{x}}_{\mvar U_\mathcal{L}}^2 = \alpha^2 \vec{x}^\top \left(\mathcal{L}^\top + \alpha \mvar I\right)^{-1} \mvar U_\mathcal{L} \left(\mathcal{L} + \alpha \mvar I\right)^{-1} \vec{x}. \] Since $\mvar U_\mathcal{L} \preceq \mvar U_{\mathcal{L}} + \alpha \mvar I$, we can invoke Lemma~\ref{lem:symBound} with the matrix $(\mathcal{L} + \alpha \mvar I)$ to get: \[ \normFull{\vec{y} - \vec{x}}_{\mvar U_\mathcal{L}}^2 \leq \frac{\alpha^2}{\alpha} \vec{x}^\top \vec{x} = \alpha \normFull{\vec{x}}_2^2 \leq n \alpha \norm{\vec{x}}_\infty^2. \] Now, by Lemma~\ref{lem:voltages}, the full range of the entries of $\vec{x}$ occurs within $S$. The existence of a path with at most $n$ vertices connecting the minimum and maximum of these entries implies that \[ \normFull{\vec{x}}_{\mvar U_\mathcal{L}}^2 \geq \frac{\beta}{n} \normFull{\vec{x}}_\infty^2. \] Putting this together we get \[ \normFull{\vec{y} - \vec{x}}_{\mvar U_\mathcal{L}}^2 \leq n^2 \frac{\alpha}{\beta} \normFull{\vec{x}}_{\mvar U_\mathcal{L}}^2. \] \end{proof} Next, we handle the case of simultaneous demands within multiple well-connected components.
We also switch the error bound to be in $\norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}$ to facilitate our later steps. \begin{lem} \label{lem:addclique} Let $\mathcal{L}$ be a connected Eulerian Laplacian with corresponding undirected Laplacian $\mvar U_\mathcal{L}$, and let $S_1, S_2, \ldots, S_k$ be the connected components of the graph consisting of those edges in $\mvar U_\mathcal{L}$ with weight at least $\beta$. Let $\vec{b}$ be a vector such that $\sum_{i \in S_j} \vec{b}_i = 0$ for all $j$, and let $\alpha > 0$. Then \begin{equation*} \normFull{(\mathcal{L} + \alpha \mvar I)^{-1} \vec{b} - \mathcal{L}^\dagger \vec{b}}_{\mvar U_\mathcal{L}} \leq 2 n^{7/2} \sqrt{\frac{\alpha}{\beta}} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{equation*} \end{lem} \begin{proof} We decompose $\vec{b}$ as \begin{equation*} \vec{b} = \sum_j \vec{b_j} \end{equation*} where $\vec{b_j}$ is supported on $S_j$, and $0$ everywhere else. Now we aim to bound $\norm{\vec{b_j}}_{{\mvar U_\mathcal{L}}^\dagger}$. Note that there is an electrical flow $\vec{y}$ on $\mvar U_\mathcal{L}$ with energy $\norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}^2$ routing overall demands $\vec{b}$. We define $\vec{y}'_j$ as the restriction of the flow to the internal edges of $S_j$, and let $\vec{b'_j}$ be its residual demands. Since this flow is on a subset of the edges, its total energy is at most $\norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}^2$, so this certifies that $\norm{\vec{b'_j}}_{{\mvar U_\mathcal{L}}^\dagger} \leq \norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}$. Then \[ \normFull{\vec{b'_j}-\vec{b_j}}_1 \leq \frac{n}{\sqrt{\beta}} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger} \] since it is at most the $\ell_1$ norm of the flows on the edges incident to but not contained in $S_j$ in $\vec{y}$, and each of these $\leq n^2$ edges has weight at most $\beta$ by the assumption that the $S_j$ are the connected components on edges with weights $\geq \beta$.
$\vec{b'_j}-\vec{b_j}$ is also supported on $S_j$; since $S_j$ is connected by edges of weight $\geq \beta$, we have \[ \normFull{\vec{b'_j}-\vec{b_j}}_{{\mvar U_\mathcal{L}}^\dagger} \leq \sqrt{n \beta} \normFull{\vec{b'_j}-\vec{b_j}}_1 \leq n^{3/2} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}. \] Then by the triangle inequality \[ \normFull{\vec{b_j}}_{{\mvar U_\mathcal{L}}^\dagger} \leq \normFull{\vec{b'_j}}_{{\mvar U_\mathcal{L}}^\dagger} + \normFull{\vec{b'_j}-\vec{b_j}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 2 n^{3/2} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}. \] Lemma~\ref{lem:sym-hsm} then implies that $\norm{\mathcal{L}^\dagger \vec{b_j}}_{\mvar U_\mathcal{L}} \leq \norm{\vec{b_j}}_{{\mvar U_\mathcal{L}}^\dagger}$. Finally, we apply Lemma~\ref{lem:onepiece} to each $\vec{b_j}$, yielding that \begin{equation*} \normFull{(\mathcal{L} + \alpha \mvar I)^{-1} \vec{b_j} - \mathcal{L}^\dagger \vec{b_j}}_{\mvar U_\mathcal{L}} \leq 2 n^{5/2} \sqrt{\frac{\alpha}{\beta}} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{equation*} Summing over the up to $n$ different $\vec{b_j}$ and applying the triangle inequality over this sum gives \begin{equation*} \normFull{(\mathcal{L} + \alpha \mvar I)^{-1} \vec{b} - \mathcal{L}^\dagger \vec{b}}_{\mvar U_\mathcal{L}} \leq 2 n^{7/2} \sqrt{\frac{\alpha}{\beta}} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger} \end{equation*} as desired. \end{proof} We now have the tools to analyze $\textsc{CrudeSolveIllConditioned}$. Our analyses rely on the following key intermediate variables: \begin{enumerate} \item \[ \vec{q}^{(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \vec{b}^{(i)}. \] \item \[ \vec{e}^{(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \vec{z}^{(i)}-\vec{q}^{(i)}, \] where $\vec{z}^{(i)}$ is the `shifted' solution obtained on Step~\ref{step:approxSolveProjected}.
\item \[ \vec{f}^{(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \operatorname{Proj}^{(i)}\left(\mathcal{L} \vec{e}^{(i)}\right). \] \item \[ \vec{b}^{*(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \vec{b} - \sum_{j<i} \vec{f}^{(j)}. \] \end{enumerate} We first show that the right hand side in the iterations can be expressed in closed form in terms of the error vectors. \begin{lemma} \label{lem:asproj} For all $i$, \begin{equation*} \vec{b}^{(i)} = \left ( \mvar I - \mathcal{L} {\mvar C^{(i-1)}}^\top \left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \right ) \vec{b}^{*(i)}. \end{equation*} \end{lemma} \begin{proof} We prove this by induction, establishing two useful identities for the operator $\left ( \mvar I - \mathcal{L} {\mvar C^{(i-1)}}^\top \left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \right )$ along the way. The base case of $i = 0$ follows from the two sides being identical. For the inductive case, substituting in the construction of $\vec{b}^{(i + 1)}$ on Step~\ref{step:bNext} gives: \begin{align*} \vec{b}^{(i+1)} &= \operatorname{Proj}^{(i)} (\vec{b}^{(i)} - \mathcal{L} \vec{z}^{(i)}) \\ &= \operatorname{Proj}^{(i)} \left ( \vec{b}^{(i)} - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \vec{b}^{(i)} - \mathcal{L} \vec{e}^{(i)} \right ) \\ &= \operatorname{Proj}^{(i)} \left ( \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{b}^{(i)} \right ) - \operatorname{Proj}^{(i)} ( \mathcal{L} \vec{e}^{(i)} ) \\ &= \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{b}^{(i)} - \vec{f}^{(i)}.
\end{align*} Here, the last line follows from the fact that $\left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{b}^{(i)}$ is already in the kernel of $\mvar C^{(i)}$, as \begin{align*} \mvar C^{(i)} \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{b}^{(i)} &= \mvar C^{(i)} \vec{b}^{(i)} - \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right ) \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \vec{b}^{(i)} \\ &= \mvar C^{(i)} \vec{b}^{(i)} - \mvar C^{(i)} \vec{b}^{(i)} \\ &= 0 \end{align*} We similarly have \begin{align*} \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \mathcal{L} {\mvar C^{(i)}}^\top &= \mathcal{L} {\mvar C^{(i)}}^\top - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right ) \\ &= \mathcal{L} {\mvar C^{(i)}}^\top - \mathcal{L} {\mvar C^{(i)}}^\top \\ &= 0 \end{align*} Since the image of ${\mvar C^{(i-1)}}^\top$ is contained in the image of ${\mvar C^{(i)}}^\top$, this also implies that \begin{equation*} \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \mathcal{L} {\mvar C^{(i-1)}}^\top = 0 \end{equation*} and hence that \begin{align*} \hspace{8em}&\hspace{-8em} \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \left ( \mvar I - \mathcal{L} {\mvar C^{(i-1)}}^\top \left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \right ) \\ &= \left ( \mvar I - \mathcal{L} {\mvar 
C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ). \end{align*} We also have $\left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{f}^{(i)} = \vec{f}^{(i)}$ as $\vec{f}^{(i)}$, output by $\operatorname{Proj}^{(i)}$, is in the kernel of $\mvar C^{(i)}$. Putting these together and substituting in the induction hypothesis on $i$ gives \begin{align*} \vec{b}^{(i+1)} &= \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \left ( \vec{b}^{*(i)} - \vec{f}^{(i)} \right ) \\ &= \left ( \mvar I - \mathcal{L} {\mvar C^{(i)}}^\top \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \right ) \vec{b}^{*(i+1)} \end{align*} which shows that the identity holds for $i + 1$ as well. \end{proof} \begin{lemma} \label{lem:answerbound} For all $i$, \begin{equation*} \vec{x}^{(i)} + \mathcal{L}^\dagger \vec{b}^{(i)} = \sum_{j<i} \vec{e}^{(j)} + \mathcal{L}^\dagger \vec{b}^{*(i)}. \end{equation*} \end{lemma} \begin{proof} Again, we proceed by induction. 
\begin{align*} \vec{x}^{(i+1)} + \mathcal{L}^\dagger \vec{b}^{(i+1)} &= \vec{x}^{(i)} + \vec{z}^{(i)} + \mathcal{L}^\dagger \operatorname{Proj}^{(i)} \left ( \vec{b}^{(i)} - \mathcal{L} \vec{z}^{(i)} \right ) \\ &= \vec{x}^{(i)} + \vec{q}^{(i)} + \vec{e}^{(i)} + \mathcal{L}^\dagger \operatorname{Proj}^{(i)} \left ( \vec{b}^{(i)} - \mathcal{L} \vec{q}^{(i)} - \mathcal{L} \vec{e}^{(i)} \right ) \\ &= \vec{x}^{(i)} + \vec{q}^{(i)} + \vec{e}^{(i)} + \mathcal{L}^\dagger \operatorname{Proj}^{(i)} \left ( \vec{b}^{(i)} - \mathcal{L} \vec{q}^{(i)} \right ) - \mathcal{L}^\dagger \operatorname{Proj}^{(i)}\left(\mathcal{L} \vec{e}^{(i)}\right) \\ &= \vec{x}^{(i)} + \vec{q}^{(i)} + \vec{e}^{(i)} + \mathcal{L}^\dagger \vec{b}^{(i)} - \vec{q}^{(i)} - \mathcal{L}^\dagger \vec{f}^{(i)}. \end{align*} The last line here follows from the fact that $\vec{b}^{(i)} - \mathcal{L} \vec{q}^{(i)}$ is already inside the kernel of $\mvar C^{(i)}$. Cancelling the $\vec{q}^{(i)}$ terms and applying the induction hypothesis then gives \begin{align*} \vec{x}^{(i+1)} + \mathcal{L}^\dagger \vec{b}^{(i+1)} &= \vec{x}^{(i)} + \mathcal{L}^\dagger \vec{b}^{(i)} + \vec{e}^{(i)} - \mathcal{L}^\dagger \vec{f}^{(i)} \\ &= \mathcal{L}^\dagger \vec{b}^{*(i)} + \sum_{j<i} \vec{e}^{(j)} + \vec{e}^{(i)} - \mathcal{L}^\dagger \vec{f}^{(i)} \\ &= \mathcal{L}^\dagger \vec{b}^{*(i+1)} + \sum_{j<i+1} \vec{e}^{(j)}. \end{align*} \end{proof} These relations then allow us to bound the global error via the guarantees of the separate approximate solves. As these solves produce solutions with relative error, we first need to bound the norm of $\vec{b}^{(i)}$: \begin{lemma} \label{lem:bstable} For all $i$, \[ \normFull{\vec{b}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 3 n \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}.
\] \end{lemma} \begin{proof} First, note that \begin{equation*} \normFull{\mvar C^{(i-1)} \vec{b}^{*(i)}} _{\left ( \mvar C^{(i-1)} \mvar U_\mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger} \leq \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{equation*} Lemma~\ref{lem:symBound} then gives \begin{equation*} \normFull{\left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \vec{b}^{*(i)}}_{\left ( \mvar C^{(i-1)} \mvar U_\mathcal{L} {\mvar C^{(i-1)}}^\top \right )} \leq \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \end{equation*} or equivalently \begin{equation*} \normFull{{\mvar C^{(i-1)}}^\top \left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \vec{b}^{*(i)}}_{\mvar U_\mathcal{L}} \leq \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{equation*} Now applying Lemma~\ref{lem:symBound}, we get \begin{equation*} \normFull{\mathcal{L} {\mvar C^{(i-1)}}^\top \left ( \mvar C^{(i-1)} \mathcal{L} {\mvar C^{(i-1)}}^\top \right )^\dagger \mvar C^{(i-1)} \vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 2 n \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{equation*} The result then follows from the triangle inequality with $\vec{b}^{*(i)}$. \end{proof} For the next step we will use the following fact about matrices: \begin{fact} \label{fact:schur} For any symmetric positive semidefinite matrix $\mvar M$ and arbitrary matrix $\mvar C$, and for any vector $\vec{v}$ in the image of $\mvar M$, \[ \normFull{\mvar C \vec{v}}_{\left ( \mvar C \mvar M \mvar C^\top \right )^\dagger} \leq \normFull{\vec{v}}_{\mvar M^\dagger}. \] \end{fact} Notably, this fact is equivalent to the standard result that the Schur complement of a positive semidefinite matrix is spectrally dominated by the matrix. 
\begin{proof} We can prove this using duality of norms: \begin{align*} \normFull{\mvar C \vec{v}}_{\left ( \mvar C \mvar M \mvar C^\top \right )^\dagger} &= \max_{\normFull{\vec{u}}_{\left ( \mvar C \mvar M \mvar C^\top \right )} \leq 1} \left \langle u, \mvar C v \right \rangle \\ &= \max_{\normFull{\mvar C^\top \vec{u}}_{\mvar M} \leq 1} \left \langle \mvar C^\top u, v \right \rangle \\ &\leq \max_{\norm{\vec{u}'}_{\mvar M} \leq 1} \left \langle u', v \right \rangle \\ &= \normFull{\vec{v}}_{\mvar M^\dagger}. \end{align*} \end{proof} We begin to bound the error terms: \begin{lemma} \label{lem:errore} \[ \normFull{\vec{e}^{(i)}}_{\mvar U_\mathcal{L}} \leq \frac{12 n^{9/2}}{r} \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \] \end{lemma} \begin{proof} First, we note that by Fact~\ref{fact:schur} \begin{equation*} \normFull{\mvar C^{(i)} \vec{b}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger} \leq \normFull{\vec{b}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 3 n \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}, \end{equation*} where the last inequality is by Lemma~\ref{lem:bstable}. Now, define intermediate variables: \begin{enumerate} \item \[ \vec{q'}^{(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger \mvar C^{(i)} \vec{b}^{(i)}, \] \item \[ \vec{z''}^{(i)} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \left ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I \right )^{-1} \mvar C^{(i)} \vec{b}^{(i)}, \] and $\vec{z'}^{(i)}$ analogously, but as the output of a $\frac{1}{r}$-approximate solver for the system $(\mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I ) \vec{x} = \mvar C^{(i)} \vec{b}^{(i)}$. 
\end{enumerate} By rearranging and applying the triangle inequality, we have \begin{equation*} \normFull{\vec{e}^{(i)}}_{\mvar U_\mathcal{L}} \leq \normFull{\vec{z'}^{(i)}-\vec{z''}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )}+\normFull{\vec{z''}^{(i)}-\vec{q'}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )} \end{equation*} The first term captures the error induced by using the approximate rather than exact solver, while the second captures the error induced from adding the multiple of the identity (the more serious issue). For the first term, we have \begin{align*} \normFull{\vec{z'}^{(i)}-\vec{z''}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )} &\leq \normFull{\vec{z'}^{(i)}-\vec{z''}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I \right )} \\ &\leq \frac{1}{r} \normFull{\vec{z''}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I \right )} \textrm{ (by definition of approximate solver)} \\ &\leq \frac{1}{r} \normFull{\mvar C^{(i)} \vec{b}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I \right )^{-1}} \textrm{ (by Lemma~\ref{lem:symBound})} \\ &\leq \frac{1}{r} \normFull{\mvar C^{(i)} \vec{b}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )^{\dagger}} \\ &\leq \frac{3 n}{r} \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \end{align*} For the second term, we will apply Lemma~\ref{lem:addclique}. We will use the fact that $\vec{b}^{(i)}$ is in the kernel of $\mvar C^{(i-1)}$--or equivalently, that its entries on any connected component of the edges in $\mvar U_\mathcal{L}$ with weight $\geq w^{(i-1)}$ sum to 0. By the definition of $w^{(i)}$, these are the same as the edges with weight $\geq \frac{w^{(i)}}{2}$.
Furthermore, contracting can only increase the connectivity of a component, so $\mvar C^{(i)} \vec{b}^{(i)}$ satisfies the same property relative to $\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )$. Then we can apply Lemma~\ref{lem:addclique} with $\alpha = \frac{w^{(i)}}{r^2}$ and $\beta = \frac{w^{(i)}}{2}$: \[ \normFull{\vec{z''}^{(i)}-\vec{q'}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )} \leq \frac{3 n^{7/2}}{r} \normFull{\mvar C^{(i)} \vec{b}^{(i)}}_{\left ( \mvar C^{(i)} \mvar U_\mathcal{L} {\mvar C^{(i)}}^\top \right )^\dagger} \leq \frac{9 n^{9/2}}{r} \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \] Summing these two bounds gives the desired result. \end{proof} It remains to bound the norms of the other error vectors $\vec{f}^{(i)}$. \begin{lemma} \label{lem:errorf} For all $i$, \[ \normFull{\vec{f}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 6 n^{5/2} \normFull{\vec{e}^{(i)}}_{\mvar U_\mathcal{L}}. \] \end{lemma} \begin{proof} First, we apply Lemma~\ref{lem:symBound}, showing that $\norm{\mathcal{L} \vec{e}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 2 n \norm{\vec{e}^{(i)}}_{\mvar U_\mathcal{L}}$. Now we will show that the $\operatorname{Proj}^{(i)}$ operator cannot increase the ${\mvar U_\mathcal{L}}^\dagger$ norm by more than a factor of $2 n^{3/2}$. The proof is similar to that of Lemma~\ref{lem:addclique}: we consider the electrical flow on $\mvar U_\mathcal{L}$ that meets the demands $\mathcal{L} \vec{e}^{(i)}$. We denote this flow by $\vec{y}^{(i)}$, and define $\vec{y}^{(i)'}$ as the restriction of $\vec{y}^{(i)}$ to the edges of weight $\geq w^{(i)}$. We then let $\vec{b}^{(i)'}$ be the residual of $\vec{y}^{(i)'}$, and write: \begin{equation*} \operatorname{Proj}^{\left(i\right)}\left(\mathcal{L} \vec{e}^{(i)}\right) = \operatorname{Proj}^{\left(i\right)}\left(\vec{b}^{(i)'}\right) + \operatorname{Proj}^{\left(i\right)} \left(\mathcal{L} \vec{e}^{(i)}-\vec{b}^{(i)'}\right).
\end{equation*} Since $\vec{b}^{(i)'}$ is induced by a flow $\vec{y}^{(i)'}$ wholly within the components with weights $\geq w^{(i)}$, \[ \operatorname{Proj}^{(i)}(\vec{b}^{(i)'}) = \vec{b}^{(i)'}. \] Furthermore, since $\vec{y}^{(i)'}$ routes the demand $\vec{b}^{(i)'}$, and is a restriction of $\vec{y}^{(i)}$, we have \[ \normFull{\vec{b}^{(i)'}}_{{\mvar U_\mathcal{L}}^\dagger}^2 \leq \mathcal{E}_{\mvar U_{\mathcal{L}}}\left(\vec{y}^{(i)'}\right) \leq \mathcal{E}_{\mvar U_{\mathcal{L}}}\left(\vec{y}^{(i)}\right) \leq \normFull{\mathcal{L} \vec{e}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger}^2, \] where $\mathcal{E}_{\mvar U_{\mathcal{L}}}(\vec{y})$ denotes the electrical energy of the flow $\vec{y}$ on $\mvar U_{\mathcal{L}}$. On the other hand, $\mathcal{L} \vec{e}^{(i)}-\vec{b}^{(i)'}$ is the residual of the flow $\vec{y}^{(i)}-\vec{y}^{(i)'}$, which is supported on edges with weight $< w^{(i)}$ and also has energy at most $\norm{\mathcal{L} \vec{e}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger}^2$. Since each edge can contribute to the residual of at most two vertices, we have \[ \normFull{\mathcal{L} \vec{e}^{(i)}-\vec{b}^{(i)'}}_1 \leq 2 \normFull{\vec{y}^{(i)}-\vec{y}^{(i)'}}_1 \leq 2 n \normFull{\vec{y}^{(i)}-\vec{y}^{(i)'}}_2 \leq \frac{2n}{\sqrt{w^{(i)}}} \normFull{\mathcal{L} \vec{e}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \] Finally, using the fact that $\operatorname{Proj}^{(i)}$ can at most double the $\ell_1$ norm of its input and that the ${\mvar U_\mathcal{L}}^\dagger$ norm of demands connected by edges of weight at least $w^{(i)}$ is at most $\sqrt{n w^{(i)}}$ times the $\ell_1$ norm of those demands, we have \begin{equation*} \normFull{\operatorname{Proj}^{\left(i\right)} \left(\mathcal{L} \vec{e}^{\left(i\right)}-\vec{b}^{(i)'}\right)}_{{\mvar U_\mathcal{L}}^\dagger} \leq 4n^{3/2} \normFull{\mathcal{L} \vec{e}^{\left(i\right)}}_{{\mvar U_\mathcal{L}}^\dagger}.
\end{equation*} Applying the triangle inequality to $\vec{b}^{(i)'}$ and $\mathcal{L} \vec{e}^{(i)}-\vec{b}^{(i)'}$ then gives the desired result. \end{proof} Note that combining the previous two lemmas shows that \[ \normFull{\vec{f}^{(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq \frac{72 n^7}{r} \normFull{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger}. \] Putting these together with the breakdown of errors then gives the overall guarantees. \begin{proof}[Proof of Theorem~\ref{thm:reduction}] First, note that the number of rounds is bounded by $\min\{n^2, O(\log(n \kappa))\}$. The former is from the number of edges, while the latter follows from the fact that the largest and smallest eigenvalues of $\mvar U_\mathcal{L}$ are within $\mathrm{poly}(n)$ factors of the smallest and largest weighted vertex degrees. This then implies by induction that $\norm{\vec{b}^{*(i)}}_{{\mvar U_\mathcal{L}}^\dagger} \leq 2 \norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}$ for all $i$, since, assuming it held for all previous $i$, each of the at most $n^2$ error terms $\vec{f}^{(j)}$ had norm at most $\frac{1}{5 n^3} \norm{\vec{b}^{*(j)}}_{{\mvar U_\mathcal{L}}^\dagger}$ (by Lemmas~\ref{lem:errore} and \ref{lem:errorf}). By Lemma~\ref{lem:errore} each $\vec{e}^{(i)}$ had norm at most $\frac{1}{40 n^{11/2}} \norm{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger}$.
\] Then applying Lemma~\ref{lem:answerbound} on the final configuration (where $\vec{b}^{(i+1)} = 0$) with these bounds (and again using the fact that there are at most $n^2$ iterations) implies that \begin{align*} \normFull{\vec{x}-\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}} &\leq \sum_i \left(\normFull{\vec{e}^{\left(i\right)}}_{\mvar U_\mathcal{L}} + \normFull{\mathcal{L}^\dagger \vec{f}^{\left(i\right)}}_{\mvar U_\mathcal{L}}\right) \\ &\leq \sum_i \left(\normFull{\vec{e}^{\left(i\right)}}_{\mvar U_\mathcal{L}} + \normFull{\vec{f}^{\left(i\right)}}_{{\mvar U_\mathcal{L}}^\dagger}\right) \\ &\leq \frac{1}{4n} \normFull{\vec{b}}_{{\mvar U_\mathcal{L}}^\dagger} \\ &\leq \frac{1}{2} \normFull{\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}}. \end{align*} Here the second and last inequalities follow from Lemma~\ref{lem:symBound}. This is the desired bound on the final error of the solver. Finally, we need to show that the procedure can be implemented in the desired runtime. We note that the contracted matrices $( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top )$ are still Eulerian Laplacians, and the contractions cannot increase the number of vertices or edges. The actual systems solved are in $ ( \mvar C^{(i)} \mathcal{L} {\mvar C^{(i)}}^\top + \frac{w^{(i)}}{r^2} \mvar I )$, i.e., an Eulerian Laplacian plus a positive diagonal. Solving such a system can be reduced, via the reduction in Section 5 of~\cite{cohen2016faster}, to solving an Eulerian Laplacian system with asymptotically the same sparsity and condition number. The symmetrized matrix for each system has minimum eigenvalue at least $\frac{w^{(i)}}{r^2}$ and maximum eigenvalue at most $O(n w^{(i)})$, so all condition numbers of their symmetrizations are polynomially bounded, as desired.
\end{proof} \subsection{Our Results} \label{sec:intro:results} In this paper, we show that, in spite of these seemingly fundamental differences between the directed and undirected settings, we can develop directed analogues of several of the core spectral primitives that have been deployed to great effect on undirected graphs, and we use them to obtain the first almost-linear-time algorithms for many of the central problems in the analysis of non-reversible Markov chains. The main new theoretical tools and algorithmic primitives we introduce are: \begin{itemize} \item \textbf{Directed graph approximation:} We develop a well-behaved notion of spectral approximation for directed graphs, despite the fact that the corresponding Laplacians lack the symmetry and positive semidefiniteness properties that the undirected version crucially relies on. Our definition specializes to the standard version based on PSD matrix inequalities when applied to undirected graphs, and it retains many of the useful features of the undirected definition. For example, our graph approximations roughly preserve the behavior of random walks, behave well under composition and change of basis, retain certain key aspects of the combinatorial structure, and provide good preconditioners for iterative methods. \item \textbf{Directed sparsification:} We show that, under our notion of approximation, any strongly connected directed graph can be approximated by a \emph{sparsifier} with only $\widetilde{O}(n/\epsilon^2)$ edges, and we give an algorithm to compute such a sparsifier in almost-linear time. To our knowledge, this is the first time that directed sparsifiers with $o(n^2)$ edges have been proven to exist, even non-algorithmically, for any computationally useful definition that retains the directed structure of a graph.
% \item \textbf{Almost-linear-time solvers for directed Laplacian systems:} Given the Laplacian $\mathcal{L}=\mvar D - \mvar A^\top$ of a directed graph with $n$ vertices and $m$ edges, we provide an algorithm that leverages our sparsifier construction to solve the linear system $\mathcal{L} \vec{x}=\vec{b}$ in time \begin{equation}\label{eq:timebound} \mathcal{T}=\ensuremath{O\left(m+n2^{O(\sqrt{\log n \log \log n})}\right)\log^{O(1)}\left(n \kappa \epsilon^{-1}\right)}=\ensuremath{O\left(m+n^{1+o(1)}\right)\log^{O(1)}\left(n \kappa \epsilon^{-1}\right)}, \end{equation} where $\kappa=\max(\kappa(\mathcal{L}),\kappa(\mvar D))$ is the maximum of the condition numbers of $\mathcal{L}$ and $\mvar D$, improving on the best previous running time of \ensuremath{O\left(nm^{3/4}+n^{2/3}m\right)\log^{O(1)} \left(n \kappa \epsilon^{-1}\right)}. (See Theorem~\ref{thm:laplacian_general}.) To do so, we introduce a novel iterative scheme and analysis that allows us to mitigate the accumulation of errors from multiplying sparse approximations without having access to the more stable constructions and bounds available for symmetric matrices. % \end{itemize} In~\cite{cohen2016faster}, we provided a suite of reductions that used a solver for directed Laplacians to solve a variety of other problems. Plugging our new solver's running time into these reductions immediately gives the following almost-linear-time algorithms:% \footnote{We use $\mathcal{T}$ to denote anything of the form given in equation~\ref{eq:timebound}, not the time required for one call to the solver. Some of the reductions call the solver a logarithmic number of times, so the precise value of the $\log^{O(1)}(n\kappa \epsilon^{-1})$ term varies among the applications. Also, note that in this paper we give solving running times in terms of the condition number of the symmetric Laplacian whereas in \cite{cohen2016faster} they are often given in terms of the condition number of the corresponding diagonal matrix.
However, it is well known that these differ only by a $\mathrm{poly}(n)$ factor, and since they appear only inside logarithmic terms, this does not affect the running times.} \begin{itemize} \item \textbf{Computing stationary distributions:} We can compute a vector within $\ell_2$ distance $\epsilon$ of the stationary distribution of a random walk on a strongly connected directed graph in time $\mathcal{T}$. \item \textbf{Computing Personalized PageRank vectors:} We can compute a vector within $\ell_2$ distance $\epsilon$ of the Personalized PageRank vector with restart probability $\beta$ for a directed graph in time~$\mathcal{T} \log^2 (1/\beta)$. \item \textbf{Simulating random walks:} We can compute the escape probabilities, commute times, and hitting times for a random walk on a directed graph and estimate the mixing time of a lazy random walk up to a polynomial factor in time $\mathcal{T}$. \item \textbf{Estimating all-pairs commute times:} We can build a data structure of size $\widetilde{O}(n\epsilon^{-2}\log n)$ in time $\mathcal{T}/\epsilon^2$ that, when queried with any two vertices $a$ and $b$, outputs a $1\pm\epsilon$ multiplicative approximation to the expected commute time between $a$ and $b$. \item \textbf{Solving row- and column-diagonally dominant linear systems:} We can solve linear systems that are row- or column-diagonally dominant in time $\mathcal{T}\log K$, where $K$ denotes the ratio of the largest and smallest diagonal entries. \end{itemize} This gives the first almost-linear-time algorithm for each of these problems. For all of them, the best previous running time for obtaining high-quality solutions, proven in~\cite{cohen2016faster}, is obtained by replacing $\mathcal{T}$ with \ensuremath{O\left(nm^{3/4}+n^{2/3}m\right)\log^{O(1)} \left(n \kappa \epsilon^{-1}\right)}.
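To make the first of these applications concrete, the sketch below computes a stationary distribution by naive power iteration on a small hypothetical chain. This is a baseline for intuition only, not the paper's method: the almost-linear-time algorithm instead reduces the problem to solving a directed Laplacian system.

```python
import numpy as np

def stationary_naive(P, iters=10000):
    """Power iteration for the stationary distribution of a row-stochastic
    matrix P. Costs O(iters * nnz(P)); the solver-based approach from the
    paper avoids this dependence on the mixing behavior."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

# A small strongly connected chain (hypothetical example).
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
# Iterate on the lazy walk (I + P)/2, which shares P's stationary
# distribution and is guaranteed to be aperiodic.
pi = stationary_naive(0.5 * (np.eye(3) + P))
assert np.allclose(pi, pi @ P, atol=1e-8)  # pi is stationary for P itself
```

For this chain the iteration converges to $\pi = (0.2, 0.4, 0.4)$, which one can verify directly from the balance equations $\pi P = \pi$.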
\subsection{Pseudoinverse Properties} \label{sec:approxInverse} Here we show that $\epsilon$-approximations in the square sparsifier chain imply useful properties for building approximate pseudoinverses. We provide key lemmas to show that an approximate pseudoinverse for $\mvar I-\WWhat_j$ can be transformed into one for $\mvar I-\WWhat_i$, where $i < j$. More precisely, this is based on Equation~\ref{eqn:key}, and can be seen by substituting the condition from Definition~\ref{defn:squarechain} that $\II - \WWhat_{j}$ is an $\epsilon$-approximation of $\II - ( \WWhat^{(\alpha)}_{j-1})^2$ into the identity \[ \left( \II - \WWhat_{j- 1} \right)^{\dag} = \left(1 - \alpha\right) \left( \II - \WWhat^{(\alpha)}_{j-1} \right)^{\dag} = \left(1 - \alpha\right) \left( \II - \left( \WWhat^{(\alpha)}_{j-1} \right)^2 \right)^{\dag} \left( \II + \WWhat^{(\alpha)}_{j-1} \right)\,{.} \] This method of producing solvers accumulates error very quickly. However, as discussed in Section~\ref{sec:richardson}, we can reduce the error accumulation using preconditioned Richardson iterations. Consequently, the main goal of the remainder of this section is to formally bound this accumulated error so that we can ensure Richardson will quickly produce a high-quality approximate pseudoinverse. We prove this result in several steps. First we establish several properties of approximate pseudoinverses, showing that they are well behaved under composition, satisfy an approximate triangle inequality, and are preserved under right multiplication. Then using these lemmas we prove the main result of this section, Lemma~\ref{lem:chainproperty}. Given that ultimately we build our pseudoinverse recursively, in our presentation $\ZZ$ will often take the role of a solver being used as a preconditioner. This is consistent with the way we use it, since all solvers we produce are linear operators.
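The identity above can be checked numerically on a toy instance. The sketch below uses the walk matrix of a directed 3-cycle (a hypothetical stand-in for the $\WWhat_j$ of a square-sparsifier chain), with $\alpha = 1/4$ and the lazy walk $\WWhat^{(\alpha)} = \alpha \mvar I + (1-\alpha)\WWhat$, which is the form forced by the first equality.

```python
import numpy as np

# Walk matrix of a directed 3-cycle (toy stand-in for a chain matrix W_j).
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
alpha = 0.25
Wa = alpha * np.eye(3) + (1.0 - alpha) * W  # the lazy walk W^(alpha)

lhs = np.linalg.pinv(np.eye(3) - W)
mid = (1.0 - alpha) * np.linalg.pinv(np.eye(3) - Wa)
rhs = (1.0 - alpha) * np.linalg.pinv(np.eye(3) - Wa @ Wa) @ (np.eye(3) + Wa)

# All three expressions agree, matching the displayed identity.
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)
```

The first equality is immediate since $\II - \WWhat^{(\alpha)} = (1-\alpha)(\II - \WWhat)$; the second can be seen here in the Fourier basis, in which all three circulant matrices diagonalize simultaneously.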
The following lemma bounds the quality of a preconditioner obtained via composition. \begin{lem}[Triangle Inequality of Approximate Pseudoinverses] \label{lem:approxInvBasic} If matrix $\ZZ$ is an $\epsilon$-approximate pseudoinverse of $\MM$ with respect to $\UU$, and $\MMtil^{\dag}$ is an $\epsilon'$-approximate pseudoinverse of $\ZZ^{\dag}$ with respect to $\UU$, and has the same left and right kernels as $\MM$ and $\ZZ$, then $\MMtil^{\dag}$ is an $(\epsilon + \epsilon' + \epsilon\epsilon')$-approximate pseudoinverse of $\MM$ with respect to $\UU$. \end{lem} \begin{proof} Applying the triangle inequality in the $\UU$ norm, we see that, for all $\vec{x} \in \mathbb{{R}}^n$, \[ \normFull{ \left(\II_{\imFull{\MM}} - \MMtil^{\dag} \MM \right) \vec{x}}_{\UU }\\ \leq \normFull{ \left(\II_{\imFull{\MM}} - \MMtil^{\dag} \ZZ^{\dag} \right) \vec{x}}_{\UU } +\normFull{\left(\MMtil^{\dag} \ZZ^{\dag} - \MMtil^{\dag} \MM\right) \vec{x}}_{\UU }\,{.} \] The first term is upper bounded by $\epsilon' \norm{\vec{x}}_{\UU}$ by the condition given in the statement. The second term can be rewritten as: \[ \normFull{\left(\MMtil^{\dag} \ZZ^{\dag} - \MMtil^{\dag} \MM\right) \vec{x}}_{\UU } = \normFull{\MMtil^{\dag} \ZZ^{\dag} \left(\II_{\imFull{\MM}} - \ZZ \MM \right) \vec{x}}_{\UU} \leq \normFull{\MMtil^{\dag} \ZZ^{\dag}}_{\UU \rightarrow \UU} \cdot \normFull{ \left(\II_{\imFull{\MM}} - \ZZ \MM \right) \vec{x}}_{\UU}\,{.} \] Next, we bound the two components of the product. For the first one, using the triangle inequality along with the condition given in the statement, we obtain $$\normFull{\MMtil^{\dag}\ZZ^{\dag}}_{\UU\rightarrow \UU} \leq \normFull{\II_{\imFull{\mvar M}}}_{\UU\rightarrow\UU} + \normFull{\II_{\im{\MM}} - \MMtil^{\dag}\ZZ^{\dag}}_{\UU\rightarrow\UU} \leq 1+\epsilon'\,{.}$$ The second term is by definition bounded by $\epsilon\norm{\vec{x}}_{\UU}$.
Combining these bounds, we obtain \[ \normFull{ \left( \mvar I_{\imFull{\MM}} -\MMtil^{\dag}\MM \right)\vec{x} }_{\UU}\leq \epsilon' \normFull{\vec{x}}_{\UU} + (1+\epsilon')\epsilon \normFull{\vec{x}}_{\UU} = (\epsilon + \epsilon' + \epsilon \epsilon')\normFull{\vec{x}}_{\UU}\,{.} \] \end{proof} The following lemma is used to show that $\epsilon$-approximations obtained via the sparsification routines from Section~\ref{sec:sparsification} also yield matrices whose pseudoinverses are good preconditioners for the original. \begin{lem}\label{thm:spectral-to-approxInv} Let $\MM$ be any matrix with $\mvar U_{\MM}$ positive semidefinite, such that $\ker(\MM)=\ker(\MM^\top)=\ker(\mvar U_{\MM})$. Suppose that matrix $\MMtil$ $\epsilon$-approximates $\MM$, for some $\epsilon \leq 1/2$. Then $\MMtil^\dagger$ is a $2\epsilon$-approximate pseudoinverse for $\MM$ with respect to $\mvar U_{\MM}$. Furthermore, $\ker(\mvar U_\MM) = \ker(\mvar U_{\MMtil})$.% \end{lem} \begin{proof} First, we prove that $\ker(\MMtil)=\ker(\MMtil^\top)=\ker(\mvar U_{\MMtil})$ and that these kernels are the same as those of $\MM$, $\MM^\top$ and $\mvar U_{\MM}$. We already know that $\ker(\MM)= \ker(\MM^\top) = \ker(\mvar U_{\MM})$ and it is not hard to prove that $\ker(\MMtil), \ker(\MMtil^{\top}) \subseteq \ker(\mvar U_{\MMtil})$. Thus, it suffices to prove that $\ker(\mvar U_\MM) \subseteq \ker(\MMtil),\ker(\MMtil^\top)$ and $\ker(\mvar U_{\MMtil}) \subseteq \ker(\mvar U_\MM)$. To prove $\ker(\mvar U_\MM) \subseteq \ker(\MMtil),\ker(\MMtil^\top)$, consider any vector $x$ in the kernel of $\MM$. Then $x$ is also in the kernel of $\mvar U_{\MM}$. However, by the definition of strong approximation, this means that $x$ is in the kernel of $\MM - \MMtil$ and $\MM^\top - \MMtil^\top$. Since $x$ is in the left and right kernels of $\MM$, we have $0=(\MM - \MMtil)x= -\MMtil x$ and similarly obtain $0=(\MM^\top - \MMtil^\top)x= -\MMtil^\top x$.
Thus, the left and right kernels of $\MMtil$ are supersets of $\ker(\MM)$. For the reverse direction, note that Lemma~\ref{lem:asym_strong_implies_undir} implies that $\mvar U_\MM$ and $\mvar U_{\MMtil}$ approximate each other in the standard positive semidefinite sense for undirected matrices, and thus have the same kernel. Thus, the kernels of $\MM$ and $\MMtil$ meet the requirements for being approximate pseudoinverses. Now we can show the requisite inequality for $\MMtil^{\dag}$ being an approximate pseudoinverse of $\MM$ with respect to $\mvar U_\MM$. First we show that $\MM^{\dag}$ is an $\epsilon$-approximate pseudoinverse of $\MMtil$ with respect to $\mvar U_\MM$. We can apply a lemma upper bounding matrix norms whose proof can be found in the appendix (Lemma~\ref{lem:sym-hsm}) in order to obtain: \[ \normFull{\mvar I_{\im{\MM}}-\MM^{\dag}\MMtil }_{\mvar U_\MM\rightarrow \mvar U_\MM} \leq \normFull{\mvar U_\MM^{\dag/2}\MM \left(\mvar I_{\im{\MM}}-\MM^{\dag}\MMtil\right) \mvar U_\MM^{\dag/2}}_2 = \normFull{\mvar U_\MM^{\dag/2}\left( \MM-\MMtil\right) \mvar U_\MM^{\dag/2}}_2 \leq \epsilon\,{.} \] Next we prove that this implies the desired conclusion by writing \begin{align*} &\normFull{\mvar I_{\im{\MM}}-\MM^{\dag}\MMtil}_{\mvar U_\MM\rightarrow \mvar U_\MM} = \normFull{(\MMtil^{\dag}\MM-\mvar I_{\im{\MM}})\MM^{\dag}\MMtil}_{\mvar U_\MM\rightarrow \mvar U_\MM} \\ &\geq \normFull{\MMtil^{\dag}\MM-\mvar I_{\im{\MM}}}_{\mvar U_\MM\rightarrow \mvar U_\MM} -\normFull{(\MMtil^{\dag}\MM-\mvar I_{\im{\MM}})(\MM^{\dag}\MMtil-\mvar I_{\im{\MM}})}_{\mvar U_\MM\rightarrow \mvar U_\MM} \\ &\geq \normFull{\mvar I_{\im{\MM}}-\MMtil^{\dag}\MM}_{\mvar U_\MM\rightarrow \mvar U_\MM} \left(1 - \normFull{\mvar I_{\im{\MM}}-\MM^{\dag}\MMtil}_{\mvar U_\MM\rightarrow \mvar U_\MM} \right)\,{.} \end{align*} For the first inequality we used the triangle inequality, for the second one we used the fact that the norm of a product is upper bounded by the product of norms, which follows immediately from
applying the definition of our matrix norm. By rearranging terms, and using the fact that $\MM^{\dag}$ is an $\epsilon$-approximate pseudoinverse of $\MMtil$ with respect to $\mvar U_\MM$ we obtain: \[ \normFull{\mvar I_{\im{\MM}}-\MMtil^{\dag}\MM}_{\mvar U_\MM\rightarrow\mvar U_\MM} \leq \frac{\epsilon}{1-\epsilon} \leq 2\epsilon\,{.} \] \end{proof} Next, recall that the goal of the square-sparsifier chain is to allow an approximate inverse for $\mvar I-\WWhat_j$ to be used as a preconditioner for $\mvar I-\WWhat_i$, with $i < j$. We show that our notion of approximate pseudoinverse is (approximately) preserved under right multiplication, and when changing the reference matrix $\mvar U$. \begin{lem}[Composition of Approximate Pseudoinverses] \label{lem:approxInv-composition} Let $\ZZ, \MM, \UU\in \mathbb{{R}}^{n \times n}$ be matrices such that $\UU$ is symmetric positive semidefinite, and $\ker(\ZZ) = \ker(\ZZ^{\top}) = \ker(\MM) = \ker(\MM^{\top}) \supseteq \ker(\UU)$. Then the following hold. \begin{enumerate} \item \label{part:approxInv-multiply} (Preserved under right multiplication) Let $\mvar C \in \mathbb{{R}}^{n \times n}$ such that both $\mvar C$ and $\mvar C^\top$ are invariant on $\ker(\MM)$, in the sense that $x\in \ker(\MM)$ if and only if $\mvar C x \in \ker(\MM)$, and similarly for $\mvar C^\top$. Then $\ZZ$ is an $\epsilon$-approximate pseudoinverse for $\mvar C \MM$ with respect to $\UU$ if and only if $\ZZ \mvar C$ is an $\epsilon$-approximate pseudoinverse for $\mvar M$ with respect to $\mvar U$. \item \label{part:weak-norm-change} (Approximately preserved under norm change) If $\ZZ$ is an $\epsilon$-approximate pseudoinverse for $\MM$ with respect to $\UU$, then for any symmetric positive semidefinite matrix $\UUtil$, such that $\ker(\UUtil) = \ker(\UU)$, $\ZZ$ is an $(\epsilon \cdot \sqrt{\kappa(\UUtil,\mvar U)})$-approximate pseudoinverse of $\MM$ with respect to $\widetilde{\UU}$.
\end{enumerate} \end{lem} \begin{proof} For preservation under right multiplication (claim~\ref{part:approxInv-multiply}), we immediately see that by associativity: \[ \normFull{\II_{\imFull{\MM}} - \left( \ZZ \mvar C \right) \MM }_{\mvar U \rightarrow \mvar U}\\ = \normFull{\II_{\imFull{\MM}} - \ZZ (\mvar C \MM) }_{\mvar U \rightarrow \mvar U} \,. \] It remains to verify that the kernel conditions are satisfied. The assumptions on $\mvar C$, together with the fact that all the left and right kernels of $\ZZ$ and $\MM$ coincide, ensure that $\ker(\ZZ\mvar C) = \ker(\MM)$, $\ker(\mvar C^\top \ZZ^\top) = \ker(\ZZ^\top)$, $\ker(\mvar C\MM) = \ker(\MM)$, $\ker(\MM^\top \mvar C^\top) = \ker(\MM^\top)$. Therefore the matrices satisfy the kernel requirements for being approximate pseudoinverses. For approximate preservation under change of norms (claim~\ref{part:weak-norm-change}), the definition of $\kappa(\UUtil, \UU)$ means that there exist $\alpha$ and $\beta$ such that $ \alpha \mvar U \preceq \UUtil \preceq \beta \mvar U $ and $\beta/\alpha \leq \kappa(\UUtil, \UU)$. Using this, we obtain: \begin{align*} \normFull{ \II_{\imFull{\MM}} - \ZZ \MM }_{\UUtil \rightarrow \UUtil} &= \max_{\vec{x}: \UUtil\vec{x}\neq 0} \frac{\normFull{ \left(\II_{\imFull{\MM}}- \ZZ \MM \right)\vec{x} }_{\UUtil}}{\normFull{\vec{x}}_{\UUtil}} \leq \max_{\vec{x}: \UU\vec{x} \neq 0} \frac{\beta \normFull{ \left(\II_{\im{\MM}} - \ZZ \MM\right)\vec{x} }_{\UU}} {\alpha \norm{\vec{x}}_{\UU}} \\ &\leq \sqrt{\kappa\left(\UUtil, \UU\right)} \cdot \normFull{\II_{\im{\MM}} - \ZZ \MM}_{\UU \rightarrow \UU} \leq \epsilon \cdot \sqrt{\kappa\left(\UUtil, \UU\right)}.
\end{align*} \end{proof} The preservation under right-multiplications combined with Equation~\ref{eqn:key} suggests that if we have a linear operator $\ZZ$ that is an approximate pseudoinverse for $\mvar I-\WWhat_{j}$, we can right-multiply it by $(1 - \alpha) (\mvar I + \WWhat_{j-1}^{(\alpha)})$ to form an operator that is an approximate pseudoinverse for $\mvar I-\WWhat_{j-1}$. This process can then be repeated down the chain, but will lead to an accumulation of error. The following lemma bounds the amount of error accumulated after repeating this process $\Delta$ times. \begin{lem} \label{lem:chainproperty} Let the sequence $\WWhat_0, \WWhat_1, \ldots, \WWhat_{d}$ be a $(d,\epsilon,\alpha)$-chain as specified in Definition~\ref{defn:squarechain}, with $\epsilon \leq 1/2$ and $\alpha = 1/4$. Using the notation from Definition~\ref{defn:squarechain}, consider the matrix \[ \ZZbar_{i, i+\Delta} = \left(1 - \alpha\right)^\Delta (\mvar I-\WWhat_{i+\Delta})^{\dagger} \left( \II + \WWhat_{i+\Delta - 1}^{(\alpha)} \right ) \cdots \left( \II + \WWhat_{i}^{(\alpha)} \right)\,{,} \] for any $i,\Delta \geq 0$. Then $\ZZbar_{i,i+\Delta}$ is an $(\exp(5\Delta) \cdot \epsilon)$-approximate pseudoinverse of $\mvar I-\WWhat_{i}$ with respect to $\mvar I-\UU_{\WWhat_i}$. \end{lem} \begin{proof} We will prove this by induction on $\Delta$. The base case $\Delta = 0$ follows immediately, since $\ZZbar_{i,i} = (\mvar I-\WWhat_i)^\dag$, which is a $0$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. For the induction step, let us assume that the claim is true for $\Delta-1$.
Therefore, the matrix \[ \ZZbar_{i + 1, i+\Delta} = \left(1 - \alpha\right)^{\Delta - 1} (\mvar I-\WWhat_{i+\Delta})^{\dagger} \left ( \II + \WWhat_{i+\Delta - 1}^{(\alpha)} \right ) \cdots \left ( \II + \WWhat_{i + 1}^{(\alpha)} \right) \] is an $(\exp(5(\Delta - 1))\cdot \epsilon)$-approximate pseudoinverse of $ \mvar I-\WWhat_{i + 1}$ with respect to $\mvar I-\UU_{\WWhat_{i + 1}}$. From Lemma~\ref{lem:buildChain} we see that for our choice of $\alpha=1/4$, we have $\kappa(\mvar I-\UU_{\WWhat_{i + 1}}, \mvar I-\UU_{\WWhat_i}) \leq 21$. Since the matrices $\mvar I-\mvar U_{\WWhat_i}$ and $\mvar I-\mvar U_{\WWhat_{i+1}}$ have the same kernel, we can use the bound on their relative condition number with Lemma~\ref{lem:approxInv-composition}, part~\ref{part:weak-norm-change} to obtain that $\ZZbar_{i + 1, i+\Delta}$ is also a $(\sqrt{21} \exp(5(\Delta - 1))\cdot \epsilon)$-approximate pseudoinverse of $\mvar I- \WWhat_{i + 1}$ with respect to $\mvar I-\UU_{\WWhat_i}$. By definition, we have that $\mvar I-\WWhat_{i+1}$ is an $\epsilon$-approximation of $\mvar I-(\WWhat_i^{(\alpha)})^2$. Therefore, by Lemma~\ref{thm:spectral-to-approxInv}, we know that $(\mvar I-\WWhat_{i+1})^\dag$ is a $2\epsilon$-approximate pseudoinverse of $\mvar I-(\WWhat_i^{(\alpha)})^2$ with respect to $\mvar I-\mvar U_{(\WWhat_i^{(\alpha)})^2}$.
In order to change norms, we use Lemma~\ref{lem:square_condition_number}, which gives us that \[ \kappa\left(\mvar I-\mvar U_{\left(\WWhat_i^{(\alpha)}\right)^2}, (1-\alpha)\left(\mvar I-\mvar U_{\WWhat_i}\right)\right) = \kappa\left(\mvar I-\mvar U_{\left(\WWhat_i^{(\alpha)}\right)^2}, \mvar I-\mvar U_{\WWhat_i^{(\alpha)}}\right) \leq \frac{4-2\alpha}{2\alpha}\,{,} \] and therefore \[ \kappa\left(\mvar I-\mvar U_{\left(\WWhat_i^{(\alpha)}\right)^2}, \mvar I-\mvar U_{\WWhat_i}\right) \leq \frac{4-2\alpha}{2\alpha(1-\alpha)} \leq \frac{28}{3}\,{.} \] Therefore, using Lemma~\ref{lem:approxInv-composition} part~\ref{part:weak-norm-change}, and since $\sqrt{28/3}\cdot 2 \leq 7$, we obtain that $(\mvar I-\WWhat_{i+1})^\dag$ is a $7\epsilon$-approximate pseudoinverse of $\mvar I-(\WWhat_i^{(\alpha)})^2$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. Combining these two results via the triangle inequality for approximate pseudoinverses (Lemma~\ref{lem:approxInvBasic}), we obtain that $\ZZbar_{i+1,i+\Delta}$ is an $\epsilon'$-approximate pseudoinverse of $\mvar I-(\WWhat_i^{(\alpha)})^2$ with respect to $\mvar I-\mvar U_{\WWhat_i}$, where \[ \epsilon' = \sqrt{21}\exp(5(\Delta-1)) \epsilon + 7\epsilon + \sqrt{21}\exp(5(\Delta-1)) \epsilon \cdot 7\epsilon \leq 50 \exp(5(\Delta-1))\epsilon\,{.} \] Equivalently, by writing $\mvar I-(\WWhat_i^{(\alpha)})^2 = (\mvar I+\WWhat_i^{(\alpha)})(\mvar I-\WWhat_i^{(\alpha)})=(1-\alpha)(\mvar I+\WWhat_i^{(\alpha)})\cdot(\mvar I-\WWhat_i)$, and applying the composition under multiplication property from Lemma~\ref{lem:approxInv-composition} Part~\ref{part:approxInv-multiply}, we obtain that $\ZZbar_{i+1,i+\Delta} \cdot (1-\alpha)(\mvar I+\WWhat_i^{(\alpha)})$ is an $\epsilon'$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$.
Note that in order to correctly apply the lemma, we require the kernel condition for $(1-\alpha)(\mvar I+\WWhat_i^{(\alpha)})$ to be satisfied, but this follows easily since all left and right kernels of the other matrices involved are identical to $\ker(\mvar I-\WWhat_i^{(\alpha)})$. Finally, since $\ZZbar_{i,i+\Delta} = \ZZbar_{i+1,i+\Delta} \cdot (1-\alpha)(\mvar I+\WWhat_i^{(\alpha)})$, this is equivalent to saying that $\ZZbar_{i,i+\Delta}$ is an approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$, with error bounded by \[ 50\exp(5(\Delta - 1))\epsilon \leq \exp(5\Delta)\epsilon\,{.} \] \end{proof} Note that the amount of error accumulated through this process is significantly greater than the sum of the $\epsilon$'s across the different levels of the chain. This is because we are measuring the quality of the approximate inverse with respect to a matrix that may change by a constant factor at each level, rather than with respect to a fixed one. If we only invoked Lemma~\ref{lem:chainproperty} for the first and last matrices of the chain ($i = 0$, $\Delta = d$), it would give an error of $\exp(O(d))\epsilon = \mathrm{poly}(\kappa(\mathcal{L})) \epsilon$, necessitating a sparsifier accuracy so high that the matrices in the chain would effectively remain dense. Instead, in our algorithms we will only invoke the above result for $\Delta \approx \sqrt{d}$. Between such steps, we will remove the accumulated error using the preconditioned Richardson iteration described in Section~\ref{sec:richardson}. \subsection{Construction of Square-Sparsification Chains} \label{sec:construction} Here we define the square-sparsification chain we use in our algorithm and show how to compute such a chain efficiently. In other words, we show how to create the sequence of matrices that, through careful application, yields an almost-linear time algorithm for solving an Eulerian Laplacian system.
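Before the formal definition, the effect of one level of the chain is easiest to see on a single eigenvalue: for a symmetric walk matrix, squaring the $\alpha$-lazy matrix maps each eigenvalue $\lambda$ to $(\alpha + (1-\alpha)\lambda)^2$, so the spectral gap $1-\lambda$ grows by a factor of roughly $2(1-\alpha) = 3/2$ (for $\alpha = 1/4$) per level while it is small. The following standalone numerical sketch of this scalar caricature is purely illustrative (toy values; sparsification plays no role here):

```python
# Scalar caricature of the squaring chain (symmetric case): each
# eigenvalue lam of W_i becomes (alpha + (1 - alpha) * lam) ** 2 at the
# next level, so the gap g = 1 - lam satisfies
#   g' = (1 - alpha) * g * (2 - (1 - alpha) * g),
# i.e. g grows by roughly a factor 2 * (1 - alpha) = 1.5 per level
# (for alpha = 1/4) while it is small.  Toy numbers; no sparsification.
alpha = 0.25
lam = 1.0 - 1e-6                  # eigenvalue with tiny gap (kappa ~ 1e6)
gaps = [1.0 - lam]
for _ in range(40):               # ~ log_{1.5}(1e6) ~ 35 levels already suffice
    lam = (alpha + (1.0 - alpha) * lam) ** 2
    gaps.append(1.0 - lam)
# gaps is strictly increasing and the final gap is a constant (> 1/2)
```

This is why a chain of length $d = O(\log \kappa)$ suffices in this picture: a gap of $1/\kappa$ reaches a constant after about $\log_{3/2} \kappa$ levels; the sparsification step exists only to keep each level sparse.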
\begin{defn}[Square Sparsifier Chain] \label{defn:squarechain} We call a sequence of matrices $\WWhat_0, \WWhat_1, \ldots \WWhat_{d} \in \mathbb{{R}}^{n \times n}$ a \emph{square-sparsifier chain of length $d$ with parameter $0 < \alpha< \frac{1}{2}$ and error $\epsilon \leq 1/2$} (or a \emph{$(d,\epsilon,\alpha)$-chain} for short) if under the definitions ${\mvar L}_i = \mvar I - \WWhat_i$ and $\WWhat^{(\alpha)}_i = \alpha \II + (1-\alpha)\WWhat_i$ for all $i$ the following hold: \begin{enumerate} \item $\norm{\WWhat_i}_2 \leq 1$ for all $i$, \item $\mvar I - \WWhat_{i}$ is an $\epsilon$-approximation of $\mvar I - ( \WWhat^{(\alpha)}_{i-1} )^2$ for all $i\geq 1$, \item\label{item:ker} $\ker({\mvar L}_i)=\ker({\mvar L}_i^\intercal) = \ker({\mvar L}_j) = \ker({\mvar L}_j^\intercal)=\ker(\mvar U_{\mvar L_i}) = \ker(\mvar U_{{\mvar L}_j})$ for all $i,j$. \end{enumerate} \end{defn} \begin{figure}[ht!] \begin{algbox} $(\WWhat_0,...,\WWhat_{d})=\textsc{BuildChain}(\mathcal{L} = \mvar D - \mvar A^{\top}, d, \alpha, \epsilon, p)$ \textbf{Input:} Eulerian Laplacian $\mathcal{L}$, an integer $d \geq 1$, and parameters $\alpha, \epsilon, p \in (0,1)$. \begin{enumerate} \item \label{itm:sparsify} $\mathcal{L}_0 \leftarrow \textsc{SparsifyEulerian}(\mathcal{L}, p/(d + 1), 1/20)$, and set $\WWhat_0 = \mvar I-\mvar D^{-1/2} \mathcal{L}_0 \mvar D^{-1/2}$. \item For $i=0,1, \ldots, d-1$ do \begin{enumerate} \item $\WWhat^{(\alpha)}_{i} \leftarrow \alpha \II + (1-\alpha)\WWhat_i,$ \item $\mvar A_{i + 1} \leftarrow \textsc{SparsifySquare} \left(\mvar D^{1/2} \WWhat^{(\alpha)}_{i} \mvar D^{1/2}, p/(d + 1), \epsilon \right)$, \\ and set $\WWhat_{i+ 1} = \mvar D^{-1/2} \mvar A_{i+1} \mvar D^{-1/2}$.
\end{enumerate} \item Return $\WWhat_0,...,\WWhat_{d}.$ \end{enumerate} \end{algbox} \caption{Algorithm for Constructing the Square-Sparsification Chain.} \label{fig:buildChain} \end{figure} Pseudocode for the construction of the square-sparsifier chain is given in Figure~\ref{fig:buildChain}. In the remainder of this subsection we prove correctness of this construction. We first provide a helper lemma, Lemma~\ref{lem:normalized_lap_properties}, and then analyze the algorithm in Lemma~\ref{lem:chain_construction}. In Section~\ref{sec:progresstools} we prove additional properties regarding solver chains such as how the $\mvar U_{\LL_i}$ multiplicatively approximate each other and how the smallest eigenvalue at the end of the chain must eventually rise to at least a constant. \begin{lem}\label{lem:normalized_lap_properties} If for $\mvar A \in \mathbb{{R}}^{n \times n}_{\geq 0}$ and $\mvar D = \mvar{diag}(\mvar A \allones)$ the matrix $\mathcal{L}=\mvar D-\mvar A$ is an Eulerian Laplacian associated with a strongly connected graph, then $\norm{\mvar D^{-1/2}\mvar A^{\top}\mvar D^{-1/2}}_{2} \leq 1$ and $\ker(\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}) = \ker(\mvar D^{-1/2}\mathcal{L}^{\top}\mvar D^{-1/2}) = \ker(\mvar U_{\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}}) = \mathrm{span}(\mvar D^{1/2} \allones)$.\end{lem} \begin{proof} Since $\mathcal{L}$ is Eulerian $\mvar A \allones = \mvar A^\top \allones = \mvar D \allones$ and therefore we have $\norm{\mvar D^{-1}\mvar A^{\top}}_{\infty}=\norm{\mvar A^{\top}\mvar D^{-1}}_{1}=1$. Consequently, by Lemma~\ref{lem:matrix_two_norm} we have that $\norm{\mvar D^{-1/2}\mvar A^{\top}\mvar D^{-1/2}}_{2}\leq\sqrt{\norm{\mvar D^{-1}\mvar A^{\top}}_{\infty}\cdot\norm{\mvar A^{\top}\mvar D^{-1}}_{1}} \leq 1$. The characterization of the kernels follows from Lemma~\ref{lem:stationary-equivalence} and that $\mathcal{L} \allones = \mathcal{L}^\top \allones = \mvar U_{\mathcal{L}} \allones = \allzeros$, so that each of the normalized matrices maps $\mvar D^{1/2}\allones$ to $\allzeros$.
\end{proof} \begin{lem}[Chain Construction]\label{lem:chain_construction} Let $\mathcal{L} = \mvar D - \mvar A^\top$ be an Eulerian Laplacian that is associated with a strongly connected graph, let $\alpha, \epsilon, p \in (0,1)$, and let $d \geq 1$. Then in $\widetilde{O}(\mathrm{nnz}(\mathcal{L}) +n\epsilon^{-2} d)$ time the routine $\textsc{BuildChain}(\mathcal{L}, d, \alpha, \epsilon, p)$ produces $\WWhat_0, ..., \WWhat_d \in \mathbb{{R}}^{n \times n}$ that with probability $1 - p$ \begin{enumerate} \item $\WWhat_0, ..., \WWhat_d$ is a $(d, \epsilon, \alpha)$-chain, \item $\mathrm{nnz}(\WWhat_i) = \widetilde{O}(n \epsilon^{-2})$ for all $i$, and \item $\mvar I - \WWhat_0$ is a $(1/20)$-approximation of $\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}$. \end{enumerate} \end{lem} \begin{proof} By Theorem~\ref{thm:order_n_sparsifier}, the call to \textsc{SparsifyEulerian} computes in $\widetilde{O}(m+n)$ time $\mathcal{L}_0$ that with probability $1 - p/(d + 1)$ is a $(1/20)$-sparsifier of $\mathcal{L}$ with the same diagonal as $\mathcal{L}$. Consequently, for $\mvar A_0 = \mvar D^{1/2} \WWhat_0 \mvar D^{1/2}$ we have $\mathcal{L}_0 = \mvar D - \mvar A_0$. By Lemma~\ref{cor:strong_basis_change} this implies that $\mvar I - \WWhat_0$ is a $(1/20)$-approximation of $\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}$. Furthermore, Lemma~\ref{lem:normalized_lap_properties} then implies that $\norm{\WWhat_0}_2 = \norm{\mvar D^{-1/2} \mvar A_0 \mvar D^{-1/2}}_2 \leq 1$, and $\ker(\mvar I - \WWhat_0) = \ker((\mvar I - \WWhat_0)^\top) = \ker(\mvar U_{\mvar I - \WWhat_0}) = \mathrm{span}(\mvar D^{1/2} \allones)$. Thus, $\WWhat_0$ has all the desired properties.
Now suppose the desired properties hold for $\WWhat_0, ..., \WWhat_k$ with probability $1 - p (k + 1) / (d + 1)$, for some $k \in [0, d - 1]$, and that for all $i \in \{0, \ldots, k\}$ we have $\mvar A_i \in \mathbb{{R}}^{n \times n}_{\geq 0}$ such that $\mvar D - \mvar A_i$ is an Eulerian Laplacian associated with a strongly connected graph. Under this assumption, clearly $\mvar D^{1/2} \WWhat^{(\alpha)}_k \mvar D^{1/2} = \alpha \mvar D + (1- \alpha)\mvar A_k$ has both row and column sums equal to $\mvar D\allones$. By Theorem~\ref{thm:sparsify_square}, the call to \textsc{SparsifySquare} computes in $\widetilde{O}(m+n\epsilon^{-2})$ time a matrix $\mvar A_{k + 1} \in \mathbb{{R}}^{n \times n}_{\geq 0}$ such that, with probability $1 - p(k + 2)/(d + 1)$, $\mvar D - \mvar A_{k + 1}$ is an $\epsilon$-sparsifier for $\mvar D - \mvar A_k^{(\alpha)} \mvar D^{-1} \mvar A_k^{(\alpha)}$, where $\mvar A_k^{(\alpha)} = \alpha \mvar D + (1 - \alpha) \mvar A_k$. Again, using Lemma~\ref{cor:strong_basis_change}, we see that $\mvar I - \WWhat_{k + 1}$ is an $\epsilon$-approximation of $\mvar I - (\WWhat_{k}^{(\alpha)})^2$. Furthermore, since $\mvar D - \mvar A_k^{(\alpha)} \mvar D^{-1} \mvar A_k^{(\alpha)}$ contains $\mvar D - \mvar A_k$ as a subgraph, this Eulerian Laplacian is strongly connected and therefore so is $\mvar D - \mvar A_{k + 1}$. Consequently, by Lemma~\ref{lem:normalized_lap_properties} we have that $\norm{\WWhat_{k + 1}}_2 = \norm{\mvar D^{-1/2} \mvar A_{k + 1} \mvar D^{-1/2}}_2 \leq 1$, and $\ker(\mvar I - \WWhat_{k + 1}) = \ker((\mvar I - \WWhat_{k + 1})^\top) = \ker(\mvar U_{\mvar I - \WWhat_{k + 1}}) = \mathrm{span}(\mvar D^{1/2} \allones)$. Therefore, by induction all the desired properties hold. \end{proof} \section{Solving Directed Laplacian Systems} \label{sec:solver} In this section, we show how to solve directed Laplacian systems in almost-linear time.
Our main result is as follows: \begin{theorem}\label{thm:laplacian_general} Let $\mvar M$ be an arbitrary $n\times n$ column-diagonally-dominant or row-diagonally-dominant matrix with diagonal $\mvar D$ and $m$ non-zero entries. Let $\kappa(\mvar D)$ be the ratio between the maximum and minimum diagonal entries of $\mvar D$. Then for any $\vec{b}\in\im{\mvar M}$ and $0<\epsilon\leq1$, one can compute, with high probability and in time \[ \Otil\left(\left( m + n 2^{O\left(\sqrt{\log{n}\log\log{n}}\right)} \right) \log^{3}\left(\frac{\kappa(\mvar D)\cdot \kappa(\mvar M)}{\epsilon}\right)\right) \] a vector $\vec{x}'$ satisfying $\|\mvar M\vec{x}'-\vec{b}\|_{2}\leq\epsilon\norm{\vec{b}}_{2}$. \end{theorem} Note that column-diagonally-dominant matrices include Laplacians of directed graphs. This bound follows from combining the reduction to solving linear systems in Eulerian Laplacians stated in Theorem 42 of~\cite{cohen2016faster} with our main solver result. The condition number of the undirected Laplacians that arise can be bounded by $O(\kappa(\mvar D) \cdot \kappa(\mvar M))$ by the preceding Theorem 41 in~\cite{cohen2016faster}. This condition number becomes a logarithmic overhead by the condition number reductions in Appendix~\ref{sec:reduction}, which allows us to focus on solving $\mathrm{poly}(n)$ conditioned Eulerian systems. The result that we will focus on in this section is an algorithm that given an Eulerian Laplacian $\mathcal{L} = \mvar D - \mvar A^\top \in \mathbb{{R}}^{n \times n}$ with $m$ non-zero entries computes an $\epsilon$-approximate solution to $\mathcal{L} \vvar{x} = \vvar{b}$ in time $\Otil((m + n \exp(O(\sqrt{\log \kappa \log\log \kappa }))) \log(1/\epsilon))$ where $\kappa = \kappa(\mvar U_{\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}})$. Note that $\exp(O(\sqrt{\log(\kappa)\log\log\kappa}))$ is a term that is $\kappa^{o(1)}$, i.e.
it grows more slowly than $\kappa^{\epsilon}$ for any constant $\epsilon > 0$, whereas iterative methods for solving such systems typically have a dependence of $\kappa^{1/2}$ or higher in their running time. An overview of the main components of this algorithm is in Section~\ref{sec:solverOverview}. \input{solver_pseudoinv.tex} \input{solver_construction.tex} \input{solver_squaring.tex} \input{solver_chainInvProps} \input{solver_recursive.tex} \subsection{Preconditioned Richardson Iteration and Approximate Pseudoinverse} \label{sec:richardson} The key iterative method that we use to build our solver is the preconditioned Richardson iteration. It can be thought of as a general-purpose tool that boosts the quality of a linear system solver by iteratively applying the solver to the residual.\footnote{One should note that being able to boost a solver using preconditioned Richardson relies on the fact that the solver is a linear operator, which is precisely the case in our algorithm.} In this section we describe the preconditioned Richardson iteration, and based on it, we derive a measure of quality of a linear system solver in terms of how well it functions as an approximate pseudoinverse for the matrix involved in the system we want to solve. In Section~\ref{sec:approxInverse} we analyze our solver chain in terms of this notion. The Richardson iteration refers to perhaps one of the simplest methods for solving a linear system $\mvar M \vvar{x} = \vvar{b}$: start with $\vvar{x}_0 = \allzeros$, then repeatedly move in the direction of the residual, i.e. $\vvar{x}_{k + 1} := \vvar{x}_k + \eta (\vvar{b} - \mvar M \vvar{x}_k)$, for some step size $\eta$. The preconditioned Richardson iteration refers to applying the same method, with the aid of a matrix $\mvar Z$ whose purpose is to improve the quality of the iterations by producing better approximations to the matrix inverse applied to the residual: $\vvar{x}_{k + 1} = \vvar{x}_k + \eta \mvar Z (\vvar{b} - \mvar M \vvar{x}_k)$.
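To make the update rule concrete, here is a minimal standalone sketch of the unpreconditioned case $\mvar Z = \mvar I$ (Python, with an illustrative $2\times 2$ symmetric positive definite system and step size; none of these values come from the paper):

```python
# Minimal sketch of the Richardson iteration x_{k+1} = x_k + eta * (b - M x_k),
# i.e. the unpreconditioned case Z = I.  The 2x2 symmetric positive definite
# system and the step size are illustrative only.

def matvec(A, x):
    # dense matrix-vector product
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def richardson(M, b, eta, N):
    x = [0.0] * len(b)
    for _ in range(N):
        residual = [bi - mi for bi, mi in zip(b, matvec(M, x))]
        x = [xi + eta * ri for xi, ri in zip(x, residual)]
    return x

M = [[2.0, 1.0], [1.0, 3.0]]       # eigenvalues ~ 1.38 and ~ 3.62
b = [1.0, 2.0]
x = richardson(M, b, eta=0.25, N=200)
# converges to the true solution (1/5, 3/5)
```

For a symmetric positive definite $\mvar M$ and $\eta < 2/\norm{\mvar M}_2$, the error contracts by a fixed factor per step; the general asymmetric, semidefinite setting is what Lemma~\ref{lem:precond_richardson} handles.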
Note that whenever $\mvar Z = \mvar M^{\dag}$, preconditioned Richardson adds in every step an $\eta$ fraction of the true solution to our current iterate; therefore, in that case, we obtain the exact solution in one iteration by setting $\eta = 1$. Intuitively, the quality of the preconditioner $\mvar Z$ dictates the size of the steps we are allowed to take, and therefore how long it takes to get close to optimum. This motivates the notion of approximation we introduce in this section. The preconditioned Richardson iteration is very well studied and fundamental to numerical methods (see e.g. Section 13.2.1 of~\cite{Saad03:book}). However, we are not aware of an operator-based analysis involving asymmetric matrices in the different matrix norms required in our algorithm. Therefore, we provide the algorithm (see Figure~\ref{fig:richardson}) and its short analysis in Lemma~\ref{lem:precond_richardson} below. \begin{figure}[ht] \begin{algbox} $\vec{x}=\textsc{PreconRichardson}(\MM, \ZZ, \vec{b}, \eta, N)$ \textbf{Input}: $n \times n$ matrix $\MM$,\par \makebox[1.22cm]{} preconditioning linear operator $\ZZ$ (in the unpreconditioned case, $\ZZ = \mvar I$), \par \makebox[1.22cm]{} right hand side vector $\vec{b} \in \im{\MM}$, step size $\eta$, iteration count $N$. \begin{enumerate} \item Initialize $\vec{x}_0 \leftarrow 0$. \item For $k = 0, \ldots, N - 1$ \begin{enumerate} \item $\vec{x}_{k + 1} \leftarrow \vec{x}_{k} + \eta \ZZ \left(\vec{b}- \MM \vec{x}_{k} \right)$. \label{ln:richardsonStep} \end{enumerate} \item Return $\vec{x}_N$. 
\end{enumerate} \end{algbox} \caption{Pseudocode for the (preconditioned) Richardson Iteration} \label{fig:richardson} \end{figure} \begin{lemma}[Preconditioned Richardson] \label{lem:precond_richardson} Let $\vec{b} \in \mathbb{{R}}^{n}$ and $\MM, \ZZ, \mvar U \in \mathbb{{R}}^{n \times n}$ such that $\mvar U$ is symmetric positive semidefinite, $\ker(\mvar U) \subseteq \ker(\MM)=\ker(\MM^\intercal) = \ker(\ZZ)=\ker(\ZZ^\intercal)$, and $\vec{b} \in \im{\MM}$. Then $N \geq 0$ iterations of preconditioned Richardson with step size $\eta > 0$ results in a vector $\vvar{x}_N =\textsc{PreconRichardson}(\MM, \ZZ, \vec{b}, \eta, N)$ such that $$\normFull{\vvar{x}_N - \mvar M^{\dag} \vvar{b}}_{\mvar U} \leq \normFull{\mvar I_{\imFull{\MM}} -\eta \ZZ \MM}_{\mvar U \rightarrow \mvar U}^{N} \normFull{\mvar M^\dag \vvar{b}}_{\mvar U}\,{.}$$ Furthermore, preconditioned Richardson implements a linear operator, in the sense that $\vec{x}_N = \ZZ_{N} \vec{b}$, for some matrix $\ZZ_N$ only depending on $\ZZ$, $\MM$, $\eta$ and $N$. \end{lemma} \begin{proof} Let $\vec{x}^{*} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \MM^{\dag} \vec{b}$.
The iteration on Line~\ref{ln:richardsonStep}, together with the fact that $\vec{b}$ lives inside the image of $\MM$, implies \begin{align*} \vec{x}_{k+1} - \vec{x}^* &= ((\mvar I_{\imFull{\MM}} - \eta\mvar Z \MM)\vec{x}_k + \eta \mvar Z \vec{b}) - \vec{x}^* = \left((\mvar I_{\imFull{\MM}} - \eta\mvar Z \MM)\vec{x}_k + \eta \mvar Z \MM \vec{x}^*\right) - \vec{x}^* \\ &= (\mvar I_{\imFull{\MM}}-\eta \mvar Z\MM)(\vec{x}_k -\vec{x}^*)\,{,} \end{align*} and therefore, $$\normFull{\vec{x}_{k+1}-\vec{x}^*}_{\mvar U} = \normFull{(\mvar I_{\imFull{\MM}}-\eta\mvar Z\MM)(\vec{x}_{k}-\vec{x}^*)}_{\mvar U} \leq \normFull{\mvar I_{\imFull{\MM}}-\eta\mvar Z\MM}_{\mvar U\rightarrow\mvar U} \normFull{\vec{x}_{k}-\vec{x}^*}_{\mvar U}\,{.}$$ By induction, this shows that $$\normFull{\vec{x}_N - \vec{x}^*}_{\mvar U} \leq \normFull{\mvar I_{\imFull{\MM}} - \eta \mvar Z \MM}_{\mvar U\rightarrow\mvar U}^N \normFull{\vec{x}_0-\vec{x}^*}_{\mvar U}=\normFull{\mvar I_{\imFull{\MM}} - \eta \mvar Z \MM}_{\mvar U\rightarrow\mvar U}^N \normFull{\vec{x}^*}_{\mvar U}\,{.}$$ Now, by writing the iteration as $\vec{x}_{k+1} = (\mvar I_{\imFull{\mvar M}}-\eta \mvar Z\mvar M)\vec{x}_k + \eta\mvar Z\vec{b}$, and expanding, we see by induction that $\vec{x}_N = \sum_{k=1}^{N} (\mvar I_{\imFull{\mvar M}}-\eta \mvar Z\mvar M)^{k-1} \eta\mvar Z\vec{b}$, and therefore $$\mvar Z_N = \eta \sum_{k=1}^{N} (\mvar I_{\imFull{\mvar M}}-\eta \mvar Z\mvar M)^{k-1} \mvar Z\,{.}$$ \end{proof} Lemma~\ref{lem:precond_richardson} shows that if $\eta \mvar Z \mvar M$ is sufficiently close to the identity, then preconditioned Richardson converges quickly when solving a linear system. This highlights a precise way to quantify how good a matrix is as a preconditioner for the Richardson iteration.
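As a numerical sanity check on this contraction, one can run the iteration with a deliberately imperfect preconditioner. In the toy sketch below (all values illustrative), $\mvar Z = 0.9\,\mvar M^{-1}$, so $\mvar I - \mvar Z\mvar M = 0.1\,\mvar I$ and the error shrinks by a factor of $10$ per step, matching $\normFull{\mvar I_{\imFull{\MM}} - \eta\ZZ\MM}_{\mvar U \rightarrow \mvar U}^{N}$ with $\eta = 1$ and $\mvar U = \mvar I$:

```python
# Numerical check of the contraction bound: with Z = 0.9 * M^{-1} we have
# I - Z M = 0.1 * I, so preconditioned Richardson (step size eta = 1)
# multiplies the error by 0.1 in every iteration.  Toy 2x2 data only.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def precond_richardson(M, Z, b, N):
    x = [0.0] * len(b)
    for _ in range(N):
        r = [bi - mi for bi, mi in zip(b, matvec(M, x))]
        x = [xi + zi for xi, zi in zip(x, matvec(Z, r))]
    return x

M = [[2.0, 1.0], [1.0, 3.0]]
M_inv = [[0.6, -0.2], [-0.2, 0.4]]             # exact inverse of M
Z = [[0.9 * e for e in row] for row in M_inv]  # imperfect preconditioner
b, x_star = [1.0, 2.0], [0.2, 0.6]             # M x_star = b
errors = [max(abs(xi - si) for xi, si in
              zip(precond_richardson(M, Z, b, N), x_star))
          for N in (1, 2, 3)]
# errors decay geometrically: roughly 0.06, 0.006, 0.0006
```

The same geometric decay is what the definition below isolates, with the operator norm $\normFull{\II_{\im{\MM}} - \ZZ\MM}_{\UU \rightarrow \UU}$ playing the role of the scalar $0.1$ here.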
\begin{defn}[Approximate Pseudoinverse] \label{defn:approxInv} Matrix $\ZZ$ is an \emph{$\epsilon$-approximate pseudoinverse of matrix $\MM$ with respect to a symmetric positive semidefinite matrix $\UU$}, if $\ker(\UU) \subseteq \ker(\MM)=\ker(\MM^\intercal)=\ker(\ZZ)=\ker(\ZZ^\intercal)$, and \[ \normFull{ \II_{\im{\MM}} - \ZZ \MM }_{\UU \rightarrow \UU} \leq \epsilon.~\footnote{Note that the ordering of $\ZZ$ and $\MM$ is crucial: this definition is not equivalent to $\norm{\II_{\im{\MM}} - \MM\ZZ}_{\UU \rightarrow \UU}$ being small.} \] \end{defn} With this definition and Lemma~\ref{lem:precond_richardson}, we see that our problem of solving a linear system can be reduced to producing an approximate pseudoinverse that we can apply efficiently. This is the approach we take in the rest of the paper. In the remainder of this section we give two tools towards producing such pseudoinverses. We show how to use preconditioned Richardson to improve the quality of an approximate pseudoinverse (Lemma~\ref{lem:precon}), and how to produce one for a matrix whose symmetrization is well conditioned (Lemma~\ref{lem:solveWellConditioned}). \begin{lem}[Pseudoinverse Improvement] \label{lem:precon} If $\ZZ$ is an $\epsilon$-approximate pseudoinverse of $\MM$ with respect to $\mvar U$, for $\epsilon \in (0, 1)$, $\vec{b} \in \im{\MM}$, and $N \geq 0$, then $\textsc{PreconRichardson}(\MM, \ZZ, \vec{b}, 1, N)$ computes $\vvar{x}_N =\ZZ_N \vvar{b}$, for some matrix $\ZZ_N$ only depending on $\ZZ$, $\MM$ and $N$, such that $\ZZ_N$ is an $\epsilon^{N}$-approximate pseudoinverse of $\mvar M$ with respect to $\mvar U$.
\end{lem} \begin{proof} By Lemma~\ref{lem:precond_richardson} we know that $\vvar{x}_{N} = \mvar Z_N \vvar{b}$, for some $\mvar Z_N$ that only depends on $\mvar Z$, $\mvar M$, and $N$; furthermore, we know that: \[ \normFull{(\mvar I_{\imFull{\mvar M}} - \mvar Z_N\mvar M) \mvar M^{\dag} \vec{b} }_{\mvar U} = \normFull{\vec{x}_N - \mvar M^{\dag}\vec{b}}_{\mvar U} \leq \normFull{\mvar I_{\imFull{\mvar M}}-\mvar Z\MM}_{\mvar U\rightarrow\mvar U}^N \normFull{\mvar M^{\dag}\vec{b}}_{\mvar U} \leq \epsilon^N \normFull{\mvar M^{\dag}\vec{b}}_{\mvar U}\,{,} \] and that this holds for any vector $\vec{b} \in \imFull{\mvar M}$ (since $\mvar Z_N$ does not depend on $\vec{b}$). Equivalently, this means that \[ \normFull{\mvar I_{\imFull{\mvar M}}-\mvar Z_N\mvar M}_{\mvar U\rightarrow\mvar U} \leq \epsilon^N\,{.} \] In order to complete the proof we need to show that $\ker(\mvar Z_N) = \ker(\mvar Z_N^{\top}) = \ker(\mvar Z)$. As we saw in the proof of Lemma~\ref{lem:precond_richardson}, over $\mvar I_{\im{\mvar M}}$, $\mvar Z_N$ is a polynomial in $\mvar Z$ and $\mvar M$ with no constant term. Also, since all the kernels and cokernels of $\mvar Z$ and $\mvar M$ are identical by definition, $\ker(\mvar Z_N) \supseteq \ker(\mvar Z)$, and similarly $\ker(\mvar Z_N^{\top})\supseteq\ker(\mvar Z^{\top})$. Now suppose the first inclusion is strict, i.e. there exists $\vec{x} \perp \ker(\mvar Z)$ such that $\mvar Z_N\vec{x} = 0$. Then, $\norm{\mvar M^{\dag}\vec{x}}_{\mvar U} > 0$ since $\mvar M^{\dag} \vec{x} \perp \ker(\mvar U)$, as by definition, the kernel of $\mvar U$ is a subset of that of $\mvar Z$. This implies that $$\normFull{ (\mvar I_{\imFull{\mvar M}} - \mvar Z_N \mvar M)\mvar M^{\dag} \vec{x} }_{\mvar U}=\normFull{\mvar M^{\dag} \vec{x}}_{\mvar U}\,{.}$$ This shows that $\norm{\mvar I_{\im{\mvar M}} - \mvar Z_N \mvar M}_{\mvar U\rightarrow\mvar U} \geq 1$, which contradicts the fact that it is at most $\epsilon^N$.
Similarly, if there exists $\vec{x} \perp \ker(\mvar Z^{\top})$ such that $\mvar Z_N^{\top}\vec{x} = 0$, then we obtain $$\normFull{ (\mvar I_{\imFull{\mvar M}} - \mvar Z_N^{\top} \mvar M^{\top})(\mvar M^{\top})^{ \dag} \vec{x} }_{\mvar U}=\normFull{(\mvar M^{\top})^{\dag} \vec{x}}_{\mvar U}\,{,}$$ and thus $\normFull{\mvar I_{\imFull{\mvar M}}-\mvar Z_N^{\top} \mvar M^{\top}}_{\mvar U\rightarrow\mvar U} \geq 1$. Equivalently, this shows that $\norm{\mvar I_{\im{\mvar M}} - \mvar Z_N \mvar M}_{\mvar U\rightarrow\mvar U} \geq 1$, which yields a contradiction. The fact that the norm is the same when taking transposes follows from writing it in terms of the $\ell_2$ norm, and using the fact that $\ker(\mvar U)$ is a subset of both the kernel of the matrix and that of its transpose. \end{proof} In addition, we show that preconditioned Richardson converges quickly whenever the matrix $\mvar M$ is well conditioned (as a matter of fact, for our purposes we only care about the case when the ratio between $\norm{\mvar M}$ and $\lambda_{*}(\mvar U_\mvar M)$ is at most a constant). The lemma below gives precise bounds on the number of iterations required to obtain a good approximate pseudoinverse with respect to $\mvar I$. Such an approximate pseudoinverse is the standard object that can be used as a preconditioner in order to obtain a small number of preconditioned Richardson iterations when measuring error with respect to the $\ell_2$ norm. \begin{lemma}[Building a Pseudoinverse] \label{lem:solveWellConditioned} Let $\mvar M \in \mathbb{{R}}^{n \times n}$ such that $\mvar U_\mvar M$ is positive semidefinite, and $\ker(\mvar M) = \ker(\mvar M^\top)$. Let $\vvar{b} \in \im{\mvar M}$ be a vector, let the step size satisfy $\eta \leq \lambda_{*}(\mvar U_{\mvar M}) / \norm{\mvar M}_2^2$, and let $N$ be the number of iterations.
Then $\textsc{PreconRichardson}(\MM, \eta\mvar I_{\im{\mvar M}}, \vec{b}, 1, N)$ computes $\vvar{x}_{N} = \ZZ_{N} \vec{b}$, for some matrix $\ZZ_N$ only depending on $\eta$, $\MM$ and $N$, such that $\ZZ_N$ is an $\exp(- N \eta \lambda_{*}(\mvar U_\mvar M)/2)$-approximate pseudoinverse of $\mvar M$ with respect to $\mvar I$. \end{lemma} \begin{proof} We begin by showing that $\eta\mvar I_{\im{\MM}}$ is a $(1-\eta\lambda_{*}(\mvar U_\mvar M))^{1/2}$-approximate pseudoinverse of $\mvar M$ with respect to $\mvar I$. First we notice that the kernel conditions for $\mvar I_{\im{\MM}}$ being an approximate pseudoinverse for $\MM$ are trivially satisfied. Second, we bound the matrix induced norm of $\mvar I_{\im{\mvar M}}-\eta\mvar I_{\im{\mvar M}} \cdot \mvar M = \mvar I_{\im{\mvar M}} - \eta\mvar M$: \begin{align*} \norm{\mvar I_{\im{\MM}} -\eta \MM}_{2}^2 &= \max_{\vec{x} \in \im{\MM},\norm{\vec{x}}_{2} = 1} \vec{x}^{\top} \left(\mvar I_{\im{\MM}}-\eta \MM\right)^{\top} \left(\mvar I_{\im{\MM}}-\eta \MM \right) \vec{x} \\ &= \max_{\vec{x} \in \im{\MM},\normFull{\vec{x}}_{2} = 1} \left( \vec{x}^{\top} \mvar I_{\im{\MM}} \vec{x} - \eta \vec{x}^{\top} \left(\MM + \MM^{\top} \right) \vec{x} + \eta^2 \vec{x}^{\top} \MM^{\top}\MM \vec{x} \right) \\ &\leq 1 - 2 \eta \min_{\vec{x} \in \im{\MM},\normFull{\vec{x}}_{2} = 1} \vec{x}^{\top} \mvar U_{\mvar M} \vec{x} + \eta^2 \max_{\vec{x} \in \im{\MM}, \normFull{\vec{x}}_{2} = 1} \vec{x}^{\top} \MM^{\top} \MM \vec{x} \\ &\leq 1 - 2 \eta \lambda_{*}(\mvar U_\mvar M) + \eta^2 \norm{\mvar M}_2^2 \leq 1 - \eta \lambda_{*}(\mvar U_\mvar M) ~. \end{align*} Consequently, by Lemma~\ref{lem:precon} we have that $\mvar Z_N$ is a $(1-\eta \lambda_{*}(\mvar U_\mvar M))^{N/2}$-approximate pseudoinverse of $\mvar M$ with respect to $\mvar I$. The conclusion follows by using $(1-\eta\lambda_{*}(\mvar U_\mvar M))^{N/2} \leq \exp(-N\eta\lambda_{*}(\mvar U_\mvar M)/2)$.
\end{proof} \subsection{The Recursive Solver} \label{sec:recursivesolver} Here we combine the pseudoinverse properties of the solver chain proved in Lemma~\ref{lem:chainproperty} with the preconditioned Richardson iteration from Lemma~\ref{lem:precon} to obtain an almost-linear time solver for Eulerian Laplacians. The resulting algorithm makes recursive calls on the square-sparsifier chain. These recursive calls can be viewed as phases. For some moderate value of $\Delta$, which we set to roughly $\sqrt{\log\kappa}$, where $\kappa$ is the condition number of $\mvar U_{\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}}$, we utilize Lemma~\ref{lem:chainproperty} to turn an approximate pseudoinverse of $\LL_{i + \Delta}$ into an approximate pseudoinverse for $\LL_{i}$ with larger error. The error accumulated in this process is then removed via preconditioned Richardson iteration. This iteration leads to recursive calls to $\LL_{i + \Delta}$. The resulting algorithm is a linear operator: its output can be viewed as multiplying the input by a fixed matrix. To analyze it, it is helpful to define the notion of implicit matrices, which is a more succinct way of writing ``linear operator''-style solver statements like those in~\cite{SpielmanTengSolver:journal} as well as subsequent works. An \emph{implicit matrix} is a routine that applies a linear operator to a vector. Its \emph{complexity} is defined as the time it takes to run it when given a vector. Note that if we have a matrix explicitly given, we can view it as an implicit matrix with complexity equal to one plus its number of nonzero entries. If $\mvar A$ is an implicit matrix, then we will use the notation $\mvar A\vec{x}$ to denote $\mvar A(\vec{x})$.
In particular, this notation choice means we can write $\mvar A(\mvar B(\cdot))$ as $\mvar A \mvar B$ and $\mvar A(\cdot)+ \mvar B(\cdot)$ as $\mvar A + \mvar B$. If we form a new implicit matrix from two (or more) implicit matrices in either of these manners, the complexity of the new implicit matrix is equal to the sum of the complexities of the matrices it was formed from. Because an implicit matrix implements a linear operator, we are---for the purposes of analysis---free to treat it as if it is an actual matrix and talk about things like its eigenvalues or whether it approximates something---provided we do so with the understanding that whenever we say such things, we are really talking about the linear operator that the implicit matrix implements. When possible, we will use $\ZZ$ to denote implicit matrices that represent inverses of matrices, and $\MM$ to represent matrices related to linear systems that we are trying to solve. In particular, preconditioned Richardson iteration from Lemma~\ref{lem:precon} can be viewed as an implicit matrix $\textsc{PreconRichardson}(\mvar I- \WWhat_i, \MM, \frac{1}{2}, O(1/\Delta))$ built from $\mvar I-\WWhat_i$ and $\MM$, which might themselves be implicit matrices. With this in mind we can state the function that does most of the work in our algorithm in Figure~\ref{fig:solve}. \begin{figure}[ht] \begin{algbox} $\textsc{Solve}((\AAcal_{i} \ldots \AAcal_{d}), \widehat{\lambda}, \epsilon)$ \textbf{Input:} Matrices $\AAcal_{i} \ldots \AAcal_{d}$ forming a subsequence of a $(d,\widehat{\epsilon},1/4)$-chain corresponding to an \\ \makebox[1.22cm]{} Eulerian Laplacian $\mathcal{L}=\mvar D-\mvar A^\top$, a lower bound $\widehat{\lambda}$ on $\lambda_{*}(\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2})$, accuracy $\epsilon$. \\ \textbf{Output:} Implicit matrix that is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect \\ \makebox[1.22cm]{} to $\mvar I-\mvar U_{\WWhat_i}$.
\begin{enumerate} \item If $i = d$, \begin{enumerate} \item $\ell \leftarrow \min\{1/4, 1.125^d \cdot 0.9 \cdot \widehat{\lambda} \}$. \item Return $\textsc{PreconRichardson}(\II - \AAcal_{d}, \frac{\ell}{4}\mvar I_{\im{\mvar I-\WWhat_d}},1, \frac{8}{\ell^2} \log(1 / \epsilon) )$. \end{enumerate} \item $\Delta \leftarrow \min\{ \sqrt{d\log d}, d-i\}$. \item $\ZZtil_i \leftarrow {\left(1-1/4\right)^{\Delta}} \cdot \textsc{Solve}(\AAcal_{i+\Delta} \ldots \AAcal_{d}, \widehat{\lambda}, \exp(-5\Delta)/30) \cdot (\II + \AAcal_{i+\Delta-1}^{(1/4)}) \cdots (\II + \AAcal_{i}^{(1/4)})$. \item Return $\textsc{PreconRichardson}( \mvar I-\WWhat_i, \ZZtil_i, 1, \log(1 / \epsilon) )$. \end{enumerate} \end{algbox} \caption{Algorithm that produces an implicit matrix which is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$, using the global solver chain constructed via \textsc{BuildChain}.} \label{fig:solve} \end{figure} We now show that given a square-sparsifier chain and access to an implicit matrix that is an approximate pseudoinverse of $\mvar I-\WWhat_{d}$ with respect to $\mvar I-\mvar U_{\WWhat_d}$, we can efficiently compute an implicit matrix $\ZZ_{0}$ which is an approximate pseudoinverse of $\mvar I-\WWhat_0$ with respect to $\mvar I-\mvar U_{\WWhat_0}$. This is done by invoking the transformations of approximate pseudoinverses in Lemma~\ref{lem:chainproperty}, but also swapping out the exact $(\mvar I-\WWhat_{i})^{\dag}$ with an operator which is an approximate pseudoinverse for it. We first provide a helper lemma, which shows that error does not increase between recursive calls to the $\textsc{Solve}$ routine. \begin{lem}\label{lem:chain_sequence} Let $\WWhat_0, \WWhat_1, \ldots, \WWhat_d$ be a $(d,\widehat{\epsilon},1/4)$-chain. Let $0 \leq i < d$, and let $\Delta = \min\{\sqrt{d\log d}, d-i\}$.
Suppose that $\widehat{\epsilon} \leq \exp(-5\Delta)/30$, and that for any $\epsilon_\Delta \leq \exp(-5\Delta)/30$, calling the routine $\textsc{Solve}((\WWhat_{i+\Delta}, \ldots, \WWhat_d), \widehat{\lambda}, \epsilon_{\Delta})$ returns an implicit matrix $\ZZ_{i+\Delta}$ which is an $\epsilon_{\Delta}$-approximate pseudoinverse of $\mvar I-\WWhat_{i+\Delta}$ with respect to $\mvar I-\mvar U_{\WWhat_{i+\Delta}}$. Then, calling the routine $\textsc{Solve}((\WWhat_i, \ldots, \WWhat_d), \widehat{\lambda}, \epsilon)$ returns an implicit matrix $\ZZ_i$ which is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. \end{lem} \begin{proof} First we notice that by Lemma~\ref{lem:buildChain} part~\ref{item:prop1}, $\kappa(\mvar I-\mvar U_{\WWhat_i}, \mvar I-\mvar U_{\WWhat_{i+\Delta}}) \leq 21^\Delta$. Therefore, by the norm change property from Lemma~\ref{lem:approxInv-composition} part~\ref{part:weak-norm-change}, and our hypothesis, we obtain that $\ZZ_{i+\Delta}$ is a $(\sqrt{21^\Delta}\cdot \epsilon_\Delta)$- and therefore also a $(1/30)$-approximate pseudoinverse of $\mvar I-\WWhat_{i+\Delta}$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. On the other hand, the error propagation down the chain, bounded in Lemma~\ref{lem:chainproperty}, gives that the matrix \[ \ZZbar_i = {(1-1/4)^\Delta} (\mvar I- \WWhat_{i + \Delta})^{\dagger} \left( \II + \WWhat_{i + \Delta - 1}^{(1/4)} \right) \ldots \left( \II + \WWhat_{i}^{(1/4)} \right) \] is an $(\exp(5 \Delta) \widehat{\epsilon})$- and by the bound on $\widehat{\epsilon}$ also a $(1/30)$-approximate pseudoinverse of $\mvar I-\WWhat_{i}$ with respect to $\mvar I-\UU_{\WWhat_i}$.
Using these two facts, we can now show that the implicit matrix \[ \ZZtil_i = {(1-1/4)^\Delta} \ZZ_{i + \Delta} \left( \II + \WWhat_{i + \Delta - 1}^{(1/4)} \right) \ldots \left( \II + \WWhat_{i}^{(1/4)} \right) \] is a $(1/10)$-approximate pseudoinverse of $\mvar I-\WWhat_{i}$ with respect to $\mvar I-\UU_{\WWhat_i}$. This can be easily seen by applying the properties of approximate pseudoinverses we proved in Section~\ref{sec:approxInverse}. Letting $\MM = (1-1/4)^\Delta ( \II + \WWhat_{i + \Delta - 1}^{(1/4)} ) \ldots ( \II + \WWhat_{i}^{(1/4)} ) $, and applying Lemma~\ref{lem:approxInv-composition} part~\ref{part:approxInv-multiply}, we obtain that $(\mvar I-\WWhat_{i+\Delta})^\dag$ is a $(1/30)$-approximate pseudoinverse of $\MM (\mvar I-\WWhat_i)$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. Applying the triangle inequality from Lemma~\ref{lem:approxInvBasic}, we then obtain that $\ZZ_{i+\Delta}$ is a $(1/10)$-approximate pseudoinverse of $\MM(\mvar I-\WWhat_i)$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. Applying Lemma~\ref{lem:approxInv-composition} part~\ref{part:approxInv-multiply} again, we obtain that $\ZZtil_i = \ZZ_{i+\Delta} \MM$ is a $(1/10)$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. Finally, the guarantee on the output matrix $\ZZ_i = \textsc{PreconRichardson}( \mvar I-\WWhat_i, \ZZtil_{i}, 1, \log(1 / \epsilon))$ then follows from Lemma~\ref{lem:precon}. \end{proof} Given this, we can now analyze the quality of the implicit matrix produced by calling $\textsc{Solve}$ on the entire square sparsification chain.
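Since every level of $\textsc{Solve}$ removes the accumulated error via preconditioned Richardson, a minimal numerical sketch of that iteration may be useful; the matrix, preconditioner, and step size below are illustrative and unrelated to the chain:

```python
def richardson(matvec, precond, b, eta=1.0, iters=50):
    """Preconditioned Richardson: x <- x + eta * Z(b - M x)."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - mxi for bi, mxi in zip(b, matvec(x))]  # residual b - M x
        x = [xi + eta * zi for xi, zi in zip(x, precond(r))]
    return x


# M = [[2, 1], [1, 2]] is SPD; the crude preconditioner Z = (1/2) I
# makes the iteration matrix I - Z M have spectral radius 1/2,
# so the error halves with every step.
M = [[2.0, 1.0], [1.0, 2.0]]
matvec = lambda x: [sum(m * xi for m, xi in zip(row, x)) for row in M]
precond = lambda r: [0.5 * ri for ri in r]

x = richardson(matvec, precond, [3.0, 3.0])   # exact solution is [1, 1]
print(x)
```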
\begin{lem} \label{lem:chainerror} Given a $(d,\widehat{\epsilon},1/4)$-square-sparsifier chain $\AAcal_0, \AAcal_1, \ldots, \AAcal_d$ constructed for a Laplacian $\mathcal{L} = \mvar D-\mvar A^{\top}$, with $\widehat{\epsilon} \leq \exp(-5\Delta)/30$ where $\Delta = \sqrt{d\log d}$, calling the routine $\textsc{Solve}((\AAcal_0 \ldots \AAcal_d), \widehat{\lambda}, \epsilon)$, where $\epsilon \leq \exp(-5\Delta)/30$, returns an implicit matrix $\ZZ$ which is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_0$ with respect to $\mvar I-\mvar U_{\WWhat_0}$. \end{lem} \begin{proof} The proof relies on Lemma~\ref{lem:chain_sequence}, and follows from induction on the depth of the call to the \textsc{Solve} routine. The base case is $i = d$, for which we prove that the operator \[ \ZZ_{d} = \textsc{PreconRichardson}\left(\mvar I-\WWhat_{d}, \frac{\ell}{4}\II_{\im{\mvar I-\WWhat_d}}, 1, \frac{8}{\ell^2}\log\left(1 / \epsilon\right)\right) \] is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_d$ with respect to $\mvar I-\mvar U_{\WWhat_d}$. In order to do so, we show that the input parameters fulfill the requirements for applying Lemma~\ref{lem:solveWellConditioned}. First we notice that by Lemma~\ref{lem:buildChain}, we have $\lambda_{*}(\mvar I- \mvar U_{\WWhat_d} ) \geq \min\{1/4, \lambda_{*}(\mvar I - \mvar U_{\WWhat_0})\cdot 1.125^d \}$. Also, from Lemma~\ref{lem:chain_construction}, we know that $\mvar I-\WWhat_0$ is a $(1/10)$-approximation of $\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}$. Therefore by Lemma~\ref{lem:asym_strong_implies_undir} we know that $\lambda_{*}(\mvar I-\mvar U_{\WWhat_0}) \geq 9/10 \cdot \lambda_{*}(\mvar U_{\mvar D^{-1/2}\mathcal{L} \mvar D^{-1/2}})\geq 9/10 \cdot \widehat{\lambda}$. Hence $\lambda_{*}(\mvar I-\mvar U_{\WWhat_d}) \geq \min\{1/4, 1.125^d \cdot 0.9 \cdot \widehat{\lambda} \} = \ell$.
By Lemma~\ref{lem:normalized_lap_properties} we have $\norm{\mvar I-\WWhat_d}_2 \leq 2$, and therefore, applying Lemma~\ref{lem:solveWellConditioned}, we obtain that $\textsc{PreconRichardson}(\mvar I-\WWhat_d, \frac{\ell}{4}\cdot\mvar I_{\im{\mvar I-\WWhat_d}}, 1, \frac{8}{\ell^2} \log(1/\epsilon))$ returns an implicit matrix $\ZZ$ that is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_d$ with respect to $\mvar I-\mvar U_{\WWhat_d}$. Now for the induction step, suppose that the induction hypothesis holds for $i+\Delta$. Then, by Lemma~\ref{lem:chain_sequence}, the matrix $\ZZ_i$ produced when calling the chain starting at $i$ is an $\epsilon$-approximate pseudoinverse of $\mvar I-\WWhat_i$ with respect to $\mvar I-\mvar U_{\WWhat_i}$. Therefore, this property also holds for the matrix at the bottom of the call stack, which is what we wanted to prove. \end{proof} Having seen that the $\textsc{Solve}$ routine controls the accumulation of error, as it produces an approximate pseudoinverse for the first matrix in the square sparsification chain, we can now use its output as a preconditioner for the Richardson iteration, which yields our final algorithm, described in Figure~\ref{fig:solveEulerian}. \begin{figure}[ht] \begin{algbox} $\textsc{SolveEulerian}(\mathcal{L}, \epsilon,\vec{b})$ \textbf{Input:} Eulerian Laplacian $\mathcal{L} = \DD - \AA^{\top}$, accuracy $\epsilon$, vector $\vec{b} \perp \allones$. \textbf{Output:} An approximate solution $\vec{x}$ to $\mathcal{L} \vec{x} = \vec{b}$ in the sense that $\norm{\vec{x}-\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}} \leq \epsilon \norm{\mathcal{L}^\dagger \vec{b}}_{\mvar U_{\mathcal{L}}}$.
\begin{enumerate} \item Compute an estimate $\widehat{\lambda}$ of the smallest non-zero eigenvalue of $\UU_{\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}}$, such that $\frac{1}{2}\lambda_{*}(\UU_{\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}}) \leq \widehat{\lambda} \leq \lambda_{*}(\UU_{\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}})$.~\footnotemark \item Set the depth of the chain to $d = 6 \log (1/\widehat{\lambda})$, and the chain solving accuracy to $\widehat{\epsilon} = \exp(-5\sqrt{d \log d})/30$. \item $(\AAcal_0, \ldots \AAcal_d) \leftarrow \textsc{BuildChain}(\mathcal{L},d, \frac{1}{4}, \widehat{\epsilon}, 1/n^2)$. \item $\widehat{\ZZ} \leftarrow \textsc{PreconRichardson}( \mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2},\textsc{Solve}((\AAcal_0, \ldots, \AAcal_{d}),\widehat{\lambda},\widehat{\epsilon}),1, 10\log(1 / \epsilon) )$. \item Return $\DD^{-1/2} \widehat{\ZZ} \DD^{-1/2} \vec{b}$. \end{enumerate} \end{algbox} \caption{Full algorithm for solving Eulerian Laplacian systems.} \label{fig:solveEulerian} \end{figure} \footnotetext{Since we know the nullspace of $\UU_{\LL}$, its minimum non-zero eigenvalue can be estimated in linear time to high accuracy with high probability via inverse powering. See e.g. Section 7 of~\cite{SpielmanTengSolver:journal} or Chapter 8 of~\cite{Vishnoi13}.} What we have left is to bound the running time of our solver. We do so by analyzing the recursion tree, as well as the outermost call to the preconditioned Richardson iteration. The final bound is provided in the theorem below.
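The recursion-tree accounting that drives the bound can be checked directly: with branching factor roughly $\log(1/\widehat{\epsilon}) = O(\sqrt{d\log d})$ and $O(\sqrt{d/\log d})$ levels, the call tree has $e^{O(\sqrt{d\log d})}$ nodes. A small sketch of this count (with illustrative parameter choices):

```python
import math

def call_tree_size(d, delta, branch):
    """Nodes in the Solve recursion tree: each non-base call spawns
    `branch` recursive calls on the chain, delta steps deeper."""
    def count(i):
        if i >= d:
            return 1                       # base case: direct solve
        return 1 + branch * count(i + delta)
    return count(0)

d = 36
delta = max(1, int(math.sqrt(d * math.log(d))))  # ~ sqrt(d log d)
branch = 5 * delta                                # ~ log(1/eps_hat) iterations
levels = math.ceil(d / delta)
total = call_tree_size(d, delta, branch)

# The count is a geometric sum, so it lies within a factor of 2 of
# branch**levels = e^{O(sqrt(d log d))}.
print(total, branch ** levels)
```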
\begin{thm}[Eulerian Solver Guarantee]\label{thm:eulerianGuarantee} Given an Eulerian Laplacian $\mathcal{L} = \mvar D-\mvar A^\top \in \mathbb{{R}}^{n\times n}$ with $m$ nonzero entries, and given an error parameter $0 < \epsilon \leq 1/2$, the algorithm $\textsc{SolveEulerian}(\mathcal{L}, \epsilon, \vec{b})$ returns, with probability at least $1-1/n$, an approximate solution $\vec{x}$ to $\mathcal{L} \vec{x} = \vec{b}$ in the sense that $$\normFull{\vec{x} - \mathcal{L}^\dag \vec{b}}_{\mvar U_\mathcal{L}} \leq \epsilon \normFull{\mathcal{L}^\dag \vec{b}}_{\mvar U_\mathcal{L}}\,{.}$$ Furthermore, the total running time is $$\Otil \left( \left( m + n e^{ O\left(\sqrt{\log \kappa \cdot \log \log \kappa} \right)} \right) \log \left(1/\epsilon \right)\right), $$ where $\kappa$ is the condition number of the normalized Laplacian $\mvar D^{-1/2}\mathcal{L} \mvar D^{-1/2}$. \end{thm} \begin{proof} By Lemma~\ref{lem:chain_construction}, it takes \[ \Otil(m + n \widehat{\epsilon}^{-2} d) = \Otil\left(m + n d \cdot e^{O\left(\sqrt{d \log d}\right)}\right) \] time to build the square sparsification chain. Next we analyze the cost of the recursive calls to $\textsc{Solve}$. As we can see in the description of $\textsc{Solve}$, when invoked on $\mvar I-\WWhat_d$, $\textsc{PreconRichardson}$ requires $({8}/{\ell^2}) \log(1/\widehat{\epsilon})$ iterations, where $\ell= \min\{1/4, 1.125^d \cdot 0.9 \widehat{\lambda} \}$. Therefore, whenever $d = O(\log \widehat{\lambda}^{-1})$, the base case of \textsc{Solve} requires \[ O\left(\frac{1}{\widehat{\lambda} e^{O(d)}} \log(1/\widehat{\epsilon})\right ) = O\left( \frac{ \kappa \sqrt{d \log d}}{ e^{\Omega(d)} }\right) \] iterations. 
For the cost of invoking $\textsc{Solve}(\mvar I-\WWhat_0, \widehat{\lambda}, \widehat{\epsilon})$, note that at each recursive call, the branching factor in the recursion is \[ O\left(\log (1/\widehat{\epsilon}) \right) = O\left(\sqrt{d \log d}\right)\,{,} \] due to the iterations of preconditioned Richardson, each of them invoking the solver for a matrix from further down the chain. Furthermore, the depth of the call stack for \textsc{Solve} (or the number of layers of the recursion tree) is bounded by $d / \sqrt{d \log d} = \sqrt{d / \log d}$. Therefore the total number of recursive calls made before the final solve on $\mvar I-\WWhat_d$ is \[ O(\sqrt{d \log d})^{O(\sqrt{d / \log d})} = e^{O(\sqrt{d \log{d}})}. \] Now, note that each of these recursive calls requires one multiplication by each of the matrices in the chain. Therefore each such recursive call does \[ \Otil\left(n d \widehat{\epsilon}^{-2} \right) = \Otil\left(n d \cdot e^{O\left(\sqrt{d \log d}\right)}\right) \] work. Thus we see that the total amount of work it takes to construct and invoke the implicit matrix given by $\textsc{Solve}(\mvar I-\WWhat_0, \widehat{\lambda}, \widehat{\epsilon})$ is \[ \Otil\left( nd \cdot e^{O(\sqrt{d \log d})} \right) \cdot e^{O(\sqrt{d \log d})} \cdot O\left(\frac{ \kappa \sqrt{d \log d}}{ e^{\Omega(d)} }\right) = \Otil\left( n \kappa \cdot \frac{d}{ e^{\Omega\left(\sqrt{d / \log d}\right)}} \right) \,{.} \] Hence for our setting of $d = \Theta(\log \kappa)$, this quantity becomes \[ \Otil\left(n e^{O(\sqrt{\log \kappa \log \log \kappa})}\right)\,{.} \] Finally, since by definition and Lemma~\ref{thm:spectral-to-approxInv} we know that $(\mvar I-\WWhat_0)^\dagger$ is a $1/10$-approximate pseudoinverse of $\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}$ with respect to $\UU_{\mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}}$, producing the implicit matrix $\widehat{\ZZ}$ requires $O(\log(1/\epsilon))$ iterations (by Lemma~\ref{lem:precon}), each of them requiring one
multiplication by $\mvar D^{-1/2}\mathcal{L}\mvar D^{-1/2}$ and one call to the recursive \textsc{Solve}. Thus the total running time to construct and apply $\widehat{\ZZ}$ in \textsc{SolveEulerian} is \[ \Otil \left( \left ( m + n e^{ O (\sqrt{\log \kappa \log \log \kappa}) } \right ) \log \frac{1}{ \epsilon} \right) \,{.} \] To analyze the solution quality, let \[ \vec{y} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \widehat{\ZZ} \DD^{-1/2} \vec{b}, \] and let $\vec{x} = \DD^{-1/2} \vec{y}$ be the solution returned by the algorithm. Also, to simplify notation, let us define $\LL = \mvar D^{-1/2} \mathcal{L} \mvar D^{-1/2}$. For any vector $\vec{v}$, let $\mvar I_{\perp v}$ denote the projection orthogonal to $\vec{v}$. By Lemma~\ref{lem:precon}, we have \begin{align*} \normFull{\vec{y}-\LL^\dagger \mvar D^{-1/2} \vec{b}}_{\mvar U_{\LL}} &\leq \epsilon \normFull{\LL^\dagger \mvar D^{-1/2} \vec{b}}_{\mvar U_{\LL}}\,{.} \end{align*} Since $\mvar U_{\LL} = \DD^{-1/2} \mvar U_{\mathcal{L}} \DD^{-1/2}$, we have that for every vector $\vec{v}$, $\norm{\vec{v}}_{\mvar U_{\LL}} = \norm{\DD^{-1/2} \vec{v}}_{\mvar U_{\mathcal{L}}}$. Also, since $\ker(\mvar U_\mathcal{L})=\mathrm{span}(\allones)$ we have that for every vector $\vec{v}$, $\norm{\vec{v}}_{\mvar U_{\mathcal{L}}} = \norm{ \mvar I_{\perp \allones} \vec{v}}_{\mvar U_{\mathcal{L}}}.$ Using these, the above inequality is equivalent to \[ \epsilon \normFull{\mvar I_{\perp \allones} \mvar D^{-1/2} \LL^\dagger \mvar D^{-1/2} \vec{b}}_{\mvar U_{\mathcal{L}}} \geq \normFull{\vec{x}-\mvar I_{\perp \allones}\mvar D^{-1/2}\LL^\dagger \mvar D^{-1/2} \vec{b}}_{\mvar U_{\mathcal{L}}} = \normFull{\vec{x} - \mathcal{L}^{\dagger}\vec{b}}_{\mvar U_{\mathcal{L}}}, \] where we have used $\mvar I_{\perp \allones} \vec{x} = \vec{x}.$ To finish the proof, it suffices to show that $\mvar I_{\perp \allones} \mvar D^{-1/2}\LL^\dagger \mvar D^{-1/2} \vec{b} = \mathcal{L}^\dagger \vec{b}$.
We first make the substitution $\mvar I_{\perp \allones} = \mathcal{L}^\dagger \mathcal{L}$ and then use $\mathcal{L} = \mvar D^{1/2} \LL \mvar D^{1/2}$: \[ \mvar I_{\perp \allones} \mvar D^{-1/2}\LL^\dagger \mvar D^{-1/2} \vec{b} = \mathcal{L}^\dagger \mathcal{L} (\mvar D^{-1/2}\LL^\dagger \mvar D^{-1/2}) \vec{b} = \mathcal{L}^\dagger \mvar D^{1/2} \LL \LL^\dagger \mvar D^{-1/2} \vec{b}. \] Since $\DD^{1/2} \allones$ spans the kernel of $\LL$, we have $\LL \LL^{\dagger} = \mvar I_{\perp \DD^{1/2} \allones}$. By the fact that $\vec{b} \perp \allones$, we have $\DD^{-1/2} \vec{b} \perp \DD^{1/2} \allones$, and therefore $ \mvar I_{\perp \mvar D^{1/2} \allones} \DD^{-1/2} \vec{b} = \DD^{-1/2} \vec{b}$. Making these substitutions, we obtain \begin{align*} \mvar I_{\perp \allones} \mvar D^{-1/2}\LL^\dagger \mvar D^{-1/2} \vec{b} &= \mathcal{L}^\dagger \vec{b}. \end{align*} \end{proof} \subsection{Properties of the Square-Sparsification Chain} \label{sec:progresstools} Here we prove several properties of the square-sparsification chain which we use in the analysis of our solver algorithm.
The main result of this subsection is to prove the following: \begin{lem}[Chain Properties] \label{lem:buildChain} For length $d\geq 1$, parameter $\alpha = 1/4$, error $\epsilon\in(0,1/2)$, and a $(d,\epsilon,\alpha)$-chain $\WWhat_0, \WWhat_1, \ldots, \WWhat_{d}$, the following properties hold: \begin{enumerate} \item $\kappa(\mvar I-\mvar U_{\WWhat_{i}}, \mvar I-\mvar U_{\WWhat_{i-1}}) \leq 21$, for all $i \in [d]$\,{, and}\label{item:prop1} \item $\lambda_{*}(\mvar I- \mvar U_{\WWhat_d} ) \geq \min\{1/4, \lambda_{*}(\mvar I - \mvar U_{\WWhat_0})\cdot ((1-\epsilon)1.25)^d \}$\,{.}\label{item:prop2} \end{enumerate} \end{lem} The first property shows that the matrix $\mvar I - \mvar U_{\WWhat_i}$ changes only within a constant factor for a constant change in $i$, while the second implies that the smallest non-zero eigenvalue improves geometrically with squaring, up to some fixed value that depends on $\alpha$. The proof of Lemma~\ref{lem:buildChain} relies on several components. First, we use a lemma which shows how squaring changes the associated symmetric matrix (see Lemma~\ref{lem:square_condition_number} in Appendix~\ref{sec:linear_algebra}). Combining it with the sparsification guarantees from Lemma~\ref{lem:asym_strong_implies_undir}, we derive the first property. Second, we use a bound on how the smallest non-zero eigenvalue improves after squaring (see Lemma~\ref{lem:kappa-improvement} from Appendix~\ref{sec:linear_algebra}), in order to show that this is also the case for the matrices in our chain. \begin{proof}[Proof of Lemma~\ref{lem:buildChain}] By definition, $\mvar I - \WWhat_i$ is an $\epsilon$-approximation of $\mvar I-( \WWhat^{(\alpha)}_{i-1} )^2$, for all $i\geq 1$.
Therefore by Lemma~\ref{lem:asym_strong_implies_undir} we know that $$(1-\epsilon)\left(\mvar I- \mvar U_{(\WWhat^{(\alpha)}_{i-1})^2}\right)\preceq \mvar I-\mvar U_{\WWhat_i} \preceq (1+\epsilon)\left(\mvar I - \mvar U_{(\WWhat^{(\alpha)}_{i-1})^2}\right)\,{.}$$ Applying Lemma~\ref{lem:square_condition_number}, we obtain $$ 2\alpha \left(\mvar I - \mvar U_{\WWhat^{(\alpha)}_{i-1}}\right) \preceq \mvar I - \mvar U_{\left(\WWhat^{(\alpha)}_{i-1}\right)^2} \preceq (4-2\alpha) \left(\mvar I - \mvar U_{\WWhat^{(\alpha)}_{i-1}}\right)\,{.}$$ Combining with the sparsification guarantee from above, we obtain: $$(1-\epsilon) 2\alpha \left(\mvar I - \mvar U_{\WWhat^{(\alpha)}_{i-1}}\right)\preceq \mvar I-\mvar U_{\WWhat_i} \preceq (1+\epsilon) (4-2\alpha) \left(\mvar I - \mvar U_{\WWhat^{(\alpha)}_{i-1}}\right)\,{.}$$ Finally, writing $\mvar I-\mvar U_{\WWhat_{i-1}^{(\alpha)}} = \mvar I - (\alpha\mvar I + (1-\alpha)\mvar U_{\WWhat_{i-1}}) = (1-\alpha)\left(\mvar I - \mvar U_{\WWhat_{i-1}}\right)$, this gives: $$(1-\epsilon) 2\alpha(1-\alpha) \left(\mvar I - \mvar U_{\WWhat_{i-1}}\right)\preceq \mvar I-\mvar U_{\WWhat_i} \preceq (1+\epsilon) (4-2\alpha)(1-\alpha) \left(\mvar I - \mvar U_{\WWhat_{i-1}}\right)\,{,}$$ which shows that $$\kappa(\mvar I-\mvar U_{\WWhat_i}, \mvar I-\mvar U_{\WWhat_{i-1}}) \leq \frac{ (1+\epsilon)(4-2\alpha) }{ (1-\epsilon)2\alpha } \leq 21\,{.}$$ For the second part, we see that Lemma~\ref{lem:kappa-improvement} yields: $$\lambda_{*}(\mvar I - \mvar U_{ (\WWhat^{(\alpha)}_{i-1})^2 }) \geq \min\{\alpha, (1+\alpha)\lambda_{*}(\mvar I - \mvar U_{\WWhat_{i-1}})\}\,{.}$$ Combining with the first inequality from the sparsification guarantee, this gives $$\lambda_{*}(\mvar I-\mvar U_{\WWhat_i}) \geq (1-\epsilon)\cdot \min\{\alpha, (1+\alpha)\lambda_{*}(\mvar I - \mvar U_{\WWhat_{i-1}})\}\,{.}$$ Applying this inequality $d$ times yields the second part of the result.
\end{proof} \section{Sparsification of Directed Laplacians \label{sec:sparsification}} In this section, we define what it means for one strongly connected directed graph to approximate another, and we use this to define directed sparsifiers. We show that such sparsifiers exist for any directed graph and give efficient algorithms to construct them. For our almost linear time directed Laplacian system solver (Section~\ref{sec:solver}), we only need to be able to sparsify Eulerian graphs. However, we have included the more general case of non-Eulerian graphs because we believe it is of independent interest and may be useful in other settings. As discussed in Section~\ref{sec:approach_sparsification}, we define our notion of graph approximation by first giving a notion of approximation for matrices and then applying it to (possibly rescaled versions of) directed graph Laplacians. This notion will be qualitatively better-behaved for Laplacians of Eulerian graphs than for general directed Laplacians. As such, our definition will make use of the existence of a scaling that makes any strongly connected graph Eulerian. Our notion of approximation for asymmetric matrices is defined as follows: \begin{definition}[Asymmetric Matrix Approximation]\label{def:epsaprox} A (possibly asymmetric) matrix $\widetilde{\mvar A}$ is said to be an \emph{$\epsilon$-approximation of $\mvar A$} if: \begin{enumerate} \item $\mvar U_{\mvar A}$ is a symmetric PSD matrix, with $\ker(\mvar U_{\mvar A})\subseteq\ker(\widetilde{\mvar A}-\mvar A)\cap\ker((\widetilde{\mvar A}-\mvar A)^{\top})$, and \item $ \norm{\mvar U^{\dagger/2}_{\mvar A}(\widetilde{\mvar A}-\mvar A)\mvar U^{\dagger/2}_{\mvar A}}_{2}\leq\epsilon. $ \end{enumerate} When these properties hold for some constant $\epsilon \in (0, 1)$ we simply say $\widetilde{\mvar A}$ approximates $\mvar A$. 
\end{definition} In Section~\ref{sub:sparse_facts} we provide an equivalent definition and several facts which justify this choice of matrix approximation. In particular, we prove the following facts regarding Definition~\ref{def:epsaprox}: \begin{itemize} \item it generalizes spectral approximation (i.e., small relative condition number) of symmetric matrices, and behaves predictably under perturbations; \item it implies that the symmetrizations $\mvar U_\mvar A$ and $\mvar U_{\widetilde{\mvar A}}$ of $\mvar A$ and $\widetilde{\mvar A}$ spectrally approximate each other; \item its behavior under composition is natural. \end{itemize} Furthermore, in Appendix~\ref{sec:harmonic_approx} we show that our notion of approximation also yields approximations of the symmetric systems solved in \cite{cohen2016faster}, known as harmonic symmetrizations. We extend this notion of approximation from asymmetric matrices to directed graphs as follows: \begin{definition}[Directed Graph Approximation] Let $\mathcal{L},\widetilde{\mathcal{L}}\in \mathbb{R}^{n\times n}$ be the Laplacians of strongly-connected directed graphs $G$ and $\widetilde{G}$ respectively, and let $\mvar X=\mathrm{diag}({\vec{x}})$ and $\widetilde{\mvar X}=\mathrm{diag}(\vec{\tilde{x}})$ be the diagonal matrices for which $\mathcal{L} \mvar X$ and $\widetilde{\mathcal{L}} \widetilde{\mvar X}$ are Eulerian Laplacians, which are guaranteed to exist by Lemma~\ref{lem:stationary-equivalence}, normalized to have $\mathrm{Tr} (\mvar X)=\mathrm{Tr}(\widetilde{\mvar X})=n$. We say that \emph{$\widetilde{G}$ is an $\epsilon$-approximation of $G$} if: \begin{enumerate} \item $(1-\epsilon)\mvar X \leq \widetilde{\mvar X} \leq (1+\epsilon) \mvar X$, and \item $\widetilde{\mathcal{L}}\widetilde{\mvar X}$ is an $\epsilon$-approximation of $\mathcal{L}\mvar X$. \end{enumerate} If $\widetilde{\mvar X}=\mvar X$, we say that \emph{$\widetilde{G}$ is a strict $\epsilon$-approximation of $G$}.
\end{definition} In words, we say that a graph approximates another graph if their Eulerian scalings are within small multiplicative factors of one another and the resulting Eulerian graphs obey our definition of asymmetric matrix approximation. We call the approximation ``strict'' if their Eulerian scalings are not just within small multiplicative factors of one another but are actually identical. Our main use of this notion is to define \emph{sparsifiers}, which are approximations that have a small number of nonzero entries. \begin{definition}[Graph Sparsifier] Let $\mathcal{L},\widetilde{\mathcal{L}}\in \mathbb{R}^{n\times n}$ be the Laplacians of strongly-connected directed graphs $G$ and $\widetilde{G},$ respectively. We say that \emph{$\widetilde{G}$ is a (strict) $\epsilon$-sparsifier of $G$} if: \begin{enumerate} \item $\widetilde{G}$ is a (strict) $\epsilon$-approximation of $G$, and \item $\mathrm{nnz}(\widetilde{\mathcal{L}}) \leq \tilde{O}(n\epsilon^{-2})$, where $n$ is the number of vertices in $G$. \end{enumerate} \end{definition} We note that, if we show that strict sparsifiers exist for Eulerian graphs, this will imply that they exist for general strongly connected graphs as well. One can simply apply the graph's Eulerian scaling, find a sparsifier for the resulting Eulerian graph, and then ``unscale'' the graph by applying the inverse of the Eulerian scaling. The results of this paper allow us to compute an Eulerian scaling for any graph in almost-linear time. Thus, the almost linear time sparsification procedure we will give for Eulerian graphs will imply an almost linear time sparsification procedure for all graphs.%
\footnote{One has to exercise some care with the numerics here, since we will only be able to compute a finite precision estimate of the Eulerian scaling of a graph. So, if we apply this scaling and then want to sparsify, the graph we want to sparsify will not be perfectly Eulerian.
However, it is straightforward to show that---as long as the approximate Eulerian scaling being used is fairly precise---one can ``patch'' the rescaled graph to become Eulerian while only incurring a very small loss in the approximation quality.} As such, we will focus on the Eulerian case. In this case, graph approximation will be the same as matrix sparsification, and it will often be convenient to refer directly to the Laplacian, instead of to the graph. Moreover, we will seek to exactly preserve the fact that the graph is Eulerian, so we will exclusively consider strict approximations. We thus define:\footnote{We could ask for sparsifiers under other notions of approximation, such as the weaker conditions required to obtain a preconditioner (see Section~\ref{sec:richardson}); as our algorithms always give this notion we use this terminology.} \begin{definition}[Eulerian Sparsifier] \label{def:strongApprox} $\widetilde{\mathcal{L}} \in \mathbb{{R}}^{n \times n}$ is an \emph{$\epsilon$-sparsifier of an Eulerian Laplacian $\mathcal{L}$} if \begin{enumerate} \item $\widetilde{\mathcal{L}}$ is a strict $\epsilon$-approximation of $\mathcal{L}$, and \item $\mathrm{nnz}(\widetilde{\mathcal{L}}) \leq \tilde{O}(n\epsilon^{-2})$. \end{enumerate} \end{definition} In the remainder of the section, we show how to produce such sparsifiers of Eulerian Laplacians. That is, given an Eulerian Laplacian, we will obtain an Eulerian Laplacian that approximates the original and has a small number of nonzero entries. Moreover, we show how to construct these sparsifiers in nearly linear time. Specifically, we give an algorithm that produces an $\epsilon$-sparsifier of an Eulerian Laplacian $\mathcal{L}$ with high probability in $\widetilde{O}(\mathrm{nnz}(\mathcal{L})/\epsilon^2)$ time. This result alone already implies an improvement in the runtime for solving arbitrary directed Laplacian systems.
Specifically, one can write down the harmonic symmetrization of the original matrix and the harmonic symmetrization of the sparsifier, and solve the original harmonic symmetrization preconditioned by the harmonic symmetrization of the sparsifier. We prove in Appendix~\ref{sec:harmonic_approx} that these harmonic symmetrizations have small relative condition number, so the runtime of this solver will be dominated by the time it takes to solve systems in the sparsified matrix plus the time to apply the unsparsified matrix to a vector. Using the solver in \cite{cohen2016faster}, this comes out to an $\widetilde{O}(m+n^{7/4})$ time algorithm for solving directed Laplacian systems. In order to construct a better, almost linear time solver, we will also need to be able to sparsify a normalized version of the Laplacian of the square of the graph. Specifically, we also show how to sparsify any matrix of the form $\mvar D-\mvar A^{\top}\mvar D^{-1}\mvar A^{\top}$, where we are given the adjacency matrix $\mvar A$ of some Eulerian graph $G$ and the degrees $\mvar D$ of $G$. Note that if $G$ is regular and has all (weighted) degrees equal to one, this formula simplifies to $\mvar I-(\mvar A^{\top})^{2}$. Thus, it corresponds to the Laplacian of the square of the graph in this special case, and to a normalized version of it in general. Our algorithm for sparsifying matrices of this form outputs a strict $\epsilon$-sparsifier in $\widetilde{O}(\mathrm{nnz}(\mathcal{L})\epsilon^{-2})$ time. Combining these results with our facts regarding asymmetric approximation, we can efficiently obtain an $\epsilon$-sparsifier of $\mvar D-\mvar A^{\top}\mvar D^{-1}\mvar A^{\top}$. Applying a closely related routine recursively, we obtain a faster, almost linear time algorithm for solving Eulerian Laplacian systems in Section~\ref{sec:solver}. The remainder of this section is structured as follows.
First, in Section~\ref{sub:sparse_facts}, we provide various facts regarding our notion of asymmetric approximation. Then, in Section~\ref{sub:sample_directed}, we provide one of our main technical tools: an algorithm for crudely sparsifying an arbitrary (not necessarily Eulerian) directed Laplacian by randomly sampling its adjacency matrix. On its own, this algorithm achieves relatively weak guarantees. In Section~\ref{sub:sparse_eulerian} we then combine this tool with known decomposition results for undirected graphs to sparsify any Eulerian Laplacian. In Section~\ref{sub:sparse_square}, we build on this to sparsify the normalized square of any Eulerian Laplacian. \subsection{Approximation Facts\label{sub:sparse_facts}} Here we provide various basic facts regarding the notion of approximation for asymmetric matrices given in Definition~\ref{def:epsaprox}. These facts both motivate and justify our choice of definition and are used extensively throughout this section. First we provide the following lemma, giving alternative definitions of approximation in terms of a quantity reminiscent of Rayleigh quotients. \begin{lem}[Equivalent Approximation Definitions] \label{lem:asym_strong_equiv} Let $\mvar A \in \mathbb{{R}}^{n \times n}$ be such that $\mvar U_{\mvar A}$ is PSD. A matrix $\widetilde{\mvar A} \in \mathbb{{R}}^{n \times n}$ is an $\epsilon$-approximation of $\mvar A$ if and only if \[ \max_{\vec{x},\vec{y}\neq0} \frac{\vec{x}^{\top}(\widetilde{\mvar A}-\mvar A)\vec{y}} {\sqrt{\vec{x}^{\top}\mvar U_{\mvar A} \vec{x}\cdot \vec{y}^{\top}\mvar U_{\mvar A} \vec{y}}} \leq \epsilon \enspace\text{ or equivalently }\enspace \max_{\vec{x}, \vec{y} \neq 0} \frac{\vec{x}^{\top}(\widetilde{\mvar A}-\mvar A)\vec{y}} {\vec{x}^{\top}\mvar U_{\mvar A} \vec{x}+\vec{y}^{\top}\mvar U_{\mvar A} \vec{y}}\leq\frac{\epsilon} {2}\,{,} \] under the convention that $0/0=0$.
\end{lem} \begin{proof} This lemma follows from a more general result, Lemma~\ref{lem:spectral_equivalence}, which we prove in Appendix~\ref{sec:decomposition}, and by noting that if $\ker(\mvar U_{\mvar A})$ is not contained in the kernels of both $\widetilde{\mvar A}-\mvar A$ and its transpose then the maximization problems are infinite in value. \end{proof} This lemma allows us to show that our notion of $\epsilon$-approximation coincides with the standard notion in the case of symmetric matrices, and is therefore a generalization of it. More generally, we prove that $\epsilon$-approximation of asymmetric matrices implies that their symmetrizations are $\epsilon$-approximations in the traditional spectral (small relative condition number) sense. \begin{lem} \label{lem:asym_strong_implies_undir} Suppose $\widetilde{\mvar A}$ is an $\epsilon$-approximation of $\mvar A$. Then \[ (1-\epsilon)\mvar U_{\mvar A}\preceq \mvar U_{\widetilde{\mvar A}}\preceq(1+\epsilon)\mvar U_{\mvar A} ~. \] \end{lem} \begin{proof} Suppose $\widetilde{\mvar A}$ is an $\epsilon$-approximation of $\mvar A$, and let $\vec{x}\in\mathbb{{R}}^{n}$ with $\vec{x} \neq \vec{0}$ be arbitrary. Applying Lemma~\ref{lem:asym_strong_equiv} twice with $\vec{y} =\pm \vec{x}$ we have \[ \frac{\abs{\vec{x}^{\top}(\widetilde{\mvar A}-\mvar A)\vec{x}}} {\vec{x}^{\top}\mvar U_{\mvar A}\vec{x}} \leq\norm{\mvar U_{\mvar A}^{\dagger/2} (\widetilde{\mvar A}-\mvar A)\mvar U_{\mvar A}^{\dagger/2}}_{2}\leq\epsilon~. \] The desired result follows from the fact that $\vec{z}^{\top}\mvar A \vec{z} = \vec{z}^{\top}\mvar U_{\mvar A}\vec{z}$ and $\vec{z}^{\top}\widetilde{\mvar A}\vec{z} = \vec{z}^{\top} \mvar U_{\widetilde{\mvar A}} \vec{z}$ for all $\vec{z}$. \end{proof} Next we use Lemma~\ref{lem:asym_strong_equiv} to show that, just as in the symmetric case, asymmetric approximation is preserved when taking symmetric products.
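Before stating this, we note that Definition~\ref{def:epsaprox} and Lemma~\ref{lem:asym_strong_implies_undir} are easy to check numerically. The following Python sketch is our own illustration (not part of the algorithms in this paper): the example matrix, the perturbation, and all variable names are arbitrary choices. It computes the approximation quality $\norm{\mvar U_{\mvar A}^{\dagger/2}(\widetilde{\mvar A}-\mvar A)\mvar U_{\mvar A}^{\dagger/2}}_{2}$ for the Eulerian Laplacian of a directed $3$-cycle and verifies the resulting two-sided bound on the symmetrizations.

```python
import numpy as np

def symmetrization(M):
    # U_M = (M + M^T) / 2, the symmetrization used throughout this section
    return (M + M.T) / 2.0

def pinv_sqrt(U, tol=1e-9):
    # U^{dagger/2}: square root of the Moore--Penrose pseudoinverse of a
    # symmetric PSD matrix, computed via an eigendecomposition
    w, V = np.linalg.eigh(U)
    d = np.where(w > tol, 1.0 / np.sqrt(np.where(w > tol, w, 1.0)), 0.0)
    return (V * d) @ V.T

def approx_error(A, A_tilde):
    # The quantity || U_A^{dagger/2} (A_tilde - A) U_A^{dagger/2} ||_2
    P = pinv_sqrt(symmetrization(A))
    return np.linalg.norm(P @ (A_tilde - A) @ P, 2)

# Eulerian Laplacian of a directed 3-cycle with unit weights: L = D - A^T
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A.T

# A perturbation whose row and column spaces avoid ker(U_L) = span(1),
# as required by the kernel condition of the definition
L_tilde = L + 0.01 * np.outer([1.0, -1.0, 0.0], [0.0, 1.0, -1.0])

eps = approx_error(L, L_tilde)

# The lemma asserts (1 - eps) U_L <= U_{L_tilde} <= (1 + eps) U_L,
# i.e. both of the following minimum eigenvalues are non-negative
U, U_t = symmetrization(L), symmetrization(L_tilde)
upper_gap = np.linalg.eigvalsh((1 + eps) * U - U_t).min()
lower_gap = np.linalg.eigvalsh(U_t - (1 - eps) * U).min()
```

The same check applies verbatim to any pair of matrices satisfying the kernel condition of Definition~\ref{def:epsaprox}.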
\begin{lem} \label{cor:strong_basis_change} If $\widetilde{\mvar A} \in \mathbb{{R}}^{n \times n}$ is an $\epsilon$-approximation of $\mvar A \in \mathbb{{R}}^{n \times n}$ and $\mvar M \in \mathbb{{R}}^{n \times n}$ satisfies $\ker(\mvar M^{\top}) \subseteq\ker(\mvar U_{\mvar A})$ then $\mvar M^{\top}\widetilde{\mvar A}\mvar M$ is an $\epsilon$-approximation of $\mvar M^{\top}\mvar A\mvar M$. \end{lem} \begin{proof} Define $\mvar B \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar M^{\top}\mvar A\mvar M$ and $\widetilde{\mvar B} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar M^{\top}\widetilde{\mvar A}\mvar M$. We first wish to show $\ker(\mvar U_\mvar B) \subseteq\ker(\widetilde{\mvar B} - \mvar B) \cap \ker((\widetilde{\mvar B} - \mvar B)^{\top})$. Suppose we have any $\vec{x} \in \ker(\mvar U_\mvar B)$. Then, \[ \mvar M \vec{x} \in \ker(\mvar M^{\top} \mvar U_\mvar A) = \ker(\mvar U_\mvar A) \subseteq \ker(\widetilde{\mvar A} - \mvar A) \cap \ker((\widetilde{\mvar A} - \mvar A)^{\top}) \subseteq \ker(\mvar M^{\top}(\widetilde{\mvar A} - \mvar A)) \cap \ker(\mvar M^{\top}(\widetilde{\mvar A} - \mvar A)^{\top}) \] which establishes the portion of the definition of approximation concerning kernels.
Then, using the convention that $0/0=0$ and applying Lemma~\ref{lem:spectral_equivalence} and Lemma~\ref{lem:asym_strong_equiv}, we have \begin{align*} \norm{\mvar U_\mvar B^{\dagger/2}(\widetilde{\mvar B} - \mvar B) \mvar U_\mvar B^{\dagger/2}}_{2} & = \max_{\vec{x},\vec{y} \neq 0} \frac{\vec{x}^{\top}(\widetilde{\mvar B} - \mvar B)\vec{y}} {\sqrt{\vec{x}^{\top}\mvar U_\mvar B\vec{x} \cdot \vec{y}^{\top} \mvar U_\mvar B \vec{y}}} = \max_{\vec{x}, \vec{y} \neq 0} \frac{\vec{x}^{\top} \mvar M^{\top}(\widetilde{\mvar A}-\mvar A)\mvar M \vec{y}} {\sqrt{\vec{x}^{\top}\mvar M^{\top}\mvar U_\mvar A \mvar M \vec{x} \cdot \vec{y}^{\top}\mvar M^{\top}\mvar U_\mvar A \mvar M \vec{y}}}\\ &\leq \max_{\vec{x}, \vec{y} \neq 0} \frac{\vec{x}^{\top} (\widetilde{\mvar A}-\mvar A)\vec{y}} {\sqrt{\vec{x}^{\top} \mvar U_\mvar A \vec{x} \cdot \vec{y}^{\top} \mvar U_\mvar A \vec{y}}} = \norm{\mvar U_{\mvar A}^{\dagger/2} (\widetilde{\mvar A} - \mvar A) \mvar U_{\mvar A}^{\dagger/2}}_2 \leq \epsilon. \end{align*} \end{proof} Finally, we provide a transitivity result for approximation. \begin{lem}[Approximation Transitivity] \label{lem:strong_transitivity} If $\mvar C$ is an $\epsilon$-approximation of $\mvar B$ and $\mvar B$ is an $\epsilon$-approximation of $\mvar A$ then $\mvar C$ is an $\epsilon(2+\epsilon)$-approximation of $\mvar A$. \end{lem} \begin{proof} Note that by the triangle inequality \[ \norm{\mvar U_{\mvar A}^{\dagger/2}(\mvar C-\mvar A)\mvar U_{\mvar A}^{\dagger/2}}_{2}\leq\norm{\mvar U_{\mvar A}^{\dagger/2}(\mvar C-\mvar B)\mvar U_{\mvar A}^{\dagger/2}}_{2}+\norm{\mvar U_{\mvar A}^{\dagger/2}(\mvar B-\mvar A)\mvar U_{\mvar A}^{\dagger/2}}_{2}\,. \] Now, $\mvar U_{\mvar B} \preceq (1+\epsilon)\mvar U_{\mvar A}$ by Lemma~\ref{lem:asym_strong_implies_undir} and therefore $\mvar U_{\mvar A}^{\dagger} \preceq (1 + \epsilon) \mvar U_{\mvar B}^{\dagger}$.
Applying Lemma~\ref{lem:simple_spec_inequalities} yields \[ \norm{\mvar U_{\mvar A}^{\dagger/2}(\mvar C-\mvar B)\mvar U_{\mvar A}^{\dagger/2}}_{2} \leq(1+\epsilon)\cdot\norm{\mvar U_{\mvar B}^{\dagger/2}(\mvar C-\mvar B)\mvar U_{\mvar B}^{\dagger/2}}_{2}\,. \] The result follows as $\norm{\mvar U_{\mvar A}^{\dagger/2}(\mvar B-\mvar A)\mvar U_{\mvar A}^{\dagger/2}}_{2}\leq\epsilon$ and $\norm{\mvar U_{\mvar B}^{\dagger/2}(\mvar C-\mvar B)\mvar U_{\mvar B}^{\dagger/2}}_{2}\leq\epsilon$ by assumption. \end{proof} \input{sparsification_algorithms.tex} \subsection{Sampling a Directed Laplacian\label{sub:sample_directed}} Here we show how to compute a crude, sparse approximation to an arbitrary directed Laplacian by randomly sampling its entries. We provide both a general bound on the effect of such sampling for a directed Laplacian and a more specific result in the case where the directed Laplacian can be related to the symmetric Laplacian of an expander. The latter result (Lemma~\ref{lem:subgraph_sparse}), and the terminology relevant to it, are all we use from this subsection in order to obtain and analyze our sparsification algorithms. The main tool for our analysis is Theorem~\ref{thm:concentration_entry}, a general bound on concentration when sampling the entries of an asymmetric matrix. It follows directly from standard matrix concentration inequalities, so we defer its proof to Appendix~\ref{sec:entry_sparsification}. \begin{thm} \label{thm:concentration_entry} Let $\mvar A\in\mathbb{{R}}^{d_{1}\times d_{2}}_{\geq 0}$ be a matrix where no row or column is all zeros.
Let $\epsilon,p\in(0,1)$, $s=d_1+d_2$, $\vec{r} = \mvar A \vec{1}$, $\vec{c} = \mvar A^\top \vec{1}$, and $\mathcal{D}$ be a distribution over $\mathbb{{R}}^{d_1\times d_2}$ such that $\mvar X \sim \mathcal{D}$ takes value \[ \mvar X = \left(\frac{\mvar A_{ij}}{p_{ij}} \right)\ensuremath{\vec{{1}}}_{i}\ensuremath{\vec{{1}}}_{j}^{\top} \text{ with probability } p_{ij} = \frac{\mvar A_{ij}}{s} \left[\frac{1}{\vvar{r}_i} +\frac{1}{\vvar{c}_j} \right] \text{ for all } \mvar A_{ij} \neq 0~. \] If $\mvar A_{1},\ldots,\mvar A_{k}$ are sampled independently from $\mathcal{D}$ for $k \geq 128 \cdot \frac{s}{\epsilon^2} \log \frac{s}{p}$, $\mvar R=\mvar{diag}(\vvar{r})$, and $\mvar C=\mvar{diag}(\vvar{c})$ then the average $\widetilde{\ma}\stackrel{\mathrm{{\scriptscriptstyle def}}}{=}\frac{1}{k}\sum_{i\in[k]}\mvar A_{i}$ satisfies \begin{align*} \Pr&\left[\norm{\mvar R^{-1/2}\left(\widetilde{\ma}-\mvar A\right)\mvar C^{-1/2}}_{2}\geq\epsilon\right]\leq p\,{,} \\ \Pr&\left[\norm{\mvar R^{-1}(\widetilde{\ma}-\mvar A)\vec{1}}_{\infty}\geq\epsilon\right]\leq p\,{,\text{ and}} \\ \Pr&\left[\norm{\mvar C^{-1}(\widetilde{\ma}-\mvar A)^{\top}\vec{1}}_{\infty}\geq\epsilon\right]\leq p\,{.} \end{align*} \end{thm} Theorem~\ref{thm:concentration_entry} shows that by sampling the entries of a rectangular matrix we can compute a new matrix such that the spectral norm of the difference is bounded and the row and column sums are approximately preserved. In the next lemma we show how to use this procedure to obtain a matrix with the same bound on the difference in spectral norm, but with the row and column sums preserved \emph{exactly}. In short, we show how to add a matrix to the result of Theorem~\ref{thm:concentration_entry} that preserves its properties while fixing the row and column sums; we call this procedure \emph{patching}.
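To make the sampling distribution of Theorem~\ref{thm:concentration_entry} concrete, the following Python sketch (an illustration of ours; the matrix size, sample count, and variable names are arbitrary choices, not part of the algorithm's implementation) draws $k$ independent rescaled single-entry matrices from $\mathcal{D}$, averages them, and measures the three error quantities the theorem controls.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small dense non-negative matrix with no all-zero row or column
d1 = d2 = 8
A = rng.uniform(0.5, 1.5, size=(d1, d2))

s = d1 + d2
r = A.sum(axis=1)   # row sums
c = A.sum(axis=0)   # column sums

# Sampling probabilities p_ij = (A_ij / s) * (1/r_i + 1/c_j); these sum to
# exactly 1, so the renormalization below only guards against rounding
ii, jj = np.nonzero(A)
p = A[ii, jj] / s * (1.0 / r[ii] + 1.0 / c[jj])
p = p / p.sum()

# Average k independent copies of X = (A_ij / p_ij) * e_i e_j^T
k = 50_000
draws = rng.choice(len(p), size=k, p=p)
A_tilde = np.zeros_like(A)
np.add.at(A_tilde, (ii[draws], jj[draws]),
          A[ii[draws], jj[draws]] / (p[draws] * k))

# The three error quantities bounded by the theorem
R_isqrt = np.diag(1.0 / np.sqrt(r))
C_isqrt = np.diag(1.0 / np.sqrt(c))
spec_err = np.linalg.norm(R_isqrt @ (A_tilde - A) @ C_isqrt, 2)
row_err = np.abs((A_tilde - A).sum(axis=1) / r).max()
col_err = np.abs((A_tilde - A).sum(axis=0) / c).max()
```

In this toy run all three errors come out far below the worst-case guarantee, as one would expect for a small, well-conditioned dense matrix.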
\begin{lem}[Sparsifying Non-negative Matrices] \label{lem:sparsifying_adjacency_matrix} Let $\mvar A \in \mathbb{{R}}^{n \times n}_{\geq 0}$ be a matrix with non-negative entries and no all-zero row or column. Let $\epsilon,p \in (0,1)$. In $O(\mathrm{nnz}(\mvar A)+n\epsilon^{-2}\log(n/p))$ time we can compute a matrix $\widetilde{\mvar A} \in \mathbb{{R}}^{n \times n}_{\geq 0}$ with non-negative entries such that for $\mvar R \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar{diag}(\mvar A \allones)$ and $\mvar C \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar{diag}(\mvar A^\top \allones)$ we have \begin{enumerate} \item $\mathrm{nnz}(\widetilde{\mvar A}) = O(n\epsilon^{-2}\log(n/p))$,\footnote{Note that it is possible to remove the dependence on $p$ from the sparsifier simply by increasing the running time by a constant factor and making it an expected running time. This can be achieved by using the power method to approximately compute the value of $\norm{\mvar R^{-1/2}(\mvar A-\widetilde{\mvar A})\mvar C^{-1/2}}_{2}$ and resampling when this is large.} \item the row and column sums of $\mvar A$ and $\widetilde{\mvar A}$ are the same, i.e., $\mvar A \allones = \widetilde{\mvar A} \allones$ and $\mvar A^\top \allones = \widetilde{\mvar A}^\top \allones$, \item for $i \in [n]$, if $\mvar A_{ii} = 0$ then $\widetilde{\mvar A}_{ii} = 0$, and \item with probability at least $1-p$, $\norm{\mvar R^{-1/2}(\mvar A-\widetilde{\mvar A})\mvar C^{-1/2}}_{2}\leq\epsilon$. \end{enumerate} \end{lem} \begin{proof} We prove this using Theorem~\ref{thm:concentration_entry}. Let $\epsilon'=\epsilon/4$.
By sampling as in Theorem~\ref{thm:concentration_entry} we can compute $\widehat{\mvar A}\in\mathbb{{R}}^{n\times n}$ such that $\mathrm{nnz}(\widehat{\mvar A})=O(n\epsilon^{-2}\log(n/p))$, $\norm{\mvar R^{-1/2}(\mvar A-\widehat{\mvar A})\mvar C^{-1/2}}_{2}\leq\epsilon'$, \[ \norm{\mvar R^{-1}(\widehat{\mvar A}-\mvar A)\vec{1}}_{\infty}\leq\epsilon' \text{ and } \norm{\mvar C^{-1}(\widehat{\mvar A}-\mvar A)^{\top}\vec{1}}_{\infty}\leq\epsilon'\,. \] Note that the latter two conditions imply that the row sums and column sums of $\widehat{\mvar A}$ are entrywise approximately the same as those of $\mvar A$. Formally, they imply that the following inequalities hold entrywise \begin{equation} \label{eq:nonnegat_approx_eq1} (1-\epsilon') \mvar A \allones \leq \widehat{\mvar A} \allones \leq(1+\epsilon') \mvar A \allones \text{ and } (1-\epsilon') \mvar A^\top \allones \leq \widehat{\mvar A}^\top \allones \leq(1+\epsilon') \mvar A^\top \allones\,. \end{equation} Therefore $(1+\epsilon')^{-1} \cdot \widehat{\mvar A}$ has row and column sums that are less than or equal to those of $\mvar A$. Next, we compute a matrix to make the row and column sums the same as in $\mvar A$. Formally, we let $\mvar E\in\mathbb{{R}}_{\geq0}^{n\times n}$ be a matrix with $\mathrm{nnz}(\mvar E)=O(n)$ such that \[ ((1+\epsilon')^{-1}\widehat{\mvar A}+\mvar E) \allones = \mvar A \allones \text{, and } ((1+\epsilon')^{-1}\widehat{\mvar A}+\mvar E)^{\top} \allones = \mvar A^\top \allones \] and the $\ell_1$ norm of the entries of $\mvar E$ is at most $n\epsilon$. We can compute such a matrix $\mvar E$ in $O(\mathrm{nnz}(\widehat{\mvar A}))$ time by greedily adding values to $\widehat{\mvar A}$ to make one of the row or column sums as large as that of $\mvar A$, while maintaining the invariant that no row or column sum is larger than it is in $\mvar A$. Note that if $\mvar A_{ii}=0$ for all $i\in[n]$, then $\widehat{\mvar A}_{ii}=0$ for all $i\in[n]$, and it can be ensured that $\mvar E_{ii}=0$ for all $i\in[n]$.
Finally, we output $\widetilde{\mvar A}=(1+\epsilon')^{-1}\widehat{\mvar A}+\mvar E$. By construction $\mathrm{nnz}(\widetilde{\mvar A})=O(n\epsilon^{-2}\log(n/p))$ and $\mvar A$ and $\widetilde{\mvar A}$ have the same row and column sums, i.e., $\widetilde{\mvar A} \allones= \mvar A \allones$, $\widetilde{\mvar A}^\top \allones = \mvar A^\top \allones$. All that remains is to show that the last property holds, i.e., that $\widetilde{\mvar A}$ is still a good approximation of $\mvar A$. The preceding conditions, together with \eqref{eq:nonnegat_approx_eq1}, imply that \[ \norm{\mvar R^{-1}\mvar E}_{\infty}=\norm{\mvar R^{-1}\mvar E\vec{1}}_{\infty}=\norm{\mvar R^{-1}(\mvar A-(1+\epsilon')^{-1}\widehat{\mvar A})\vec{1}}_{\infty} \leq \left(1-\frac{1-\epsilon'}{1+\epsilon'}\right)\leq2\epsilon'\,, \] and similarly, \[ \norm{\mvar C^{-1}\mvar E}_{1}=\norm{\mvar E^{\top}\mvar C^{-1}\vec{1}}_{\infty} = \norm{( \mvar A - (1+\epsilon')^{-1} \widehat{\mvar A} )^{\top}\mvar C^{-1}\vec{1}}_{\infty} \leq \left(1-\frac{1-\epsilon'}{1+\epsilon'}\right) \leq2\epsilon'\,{.} \] Applying Lemma~\ref{lem:matrix_two_norm} then yields: \begin{align*} &\norm{\mvar R^{-1/2}(\mvar A-\widetilde{\mvar A})\mvar C^{-1/2}}_{2} \\ &\leq\norm{\mvar R^{-1/2}(\mvar A-\widehat{\mvar A})\mvar C^{-1/2}}_{2}+\left(1-\frac{1}{1+\epsilon'}\right)\norm{\mvar R^{-1/2}\widehat{\mvar A}\mvar C^{-1/2}}_{2}+\norm{\mvar R^{-1/2}\mvar E\mvar C^{-1/2}}_{2} \\ &\leq\epsilon'+\frac{\epsilon'}{1+\epsilon'}+2\epsilon'\leq4\epsilon'=\epsilon\,. \end{align*} \end{proof} This lemma immediately implies the following fact on providing sparse approximations to directed (not necessarily Eulerian) Laplacians.
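As an aside, the greedy construction of the patching matrix $\mvar E$ in the proof above can be sketched in Python. This is our own simplified illustration: it uses a northwest-corner-style greedy pass over row and column deficits, and it omits the bookkeeping in the proof that keeps the diagonal of $\mvar E$ zero; the function and variable names are ours.

```python
import numpy as np

def patch(A, A_hat, tol=1e-12):
    # Greedily build a non-negative E with O(n) entries so that A_hat + E
    # has exactly the row and column sums of A. Assumes the row and column
    # sums of A_hat are entrywise at most those of A (as holds after the
    # (1 + eps')^{-1} rescaling in the proof), so the total row deficit
    # equals the total column deficit.
    row_def = A.sum(axis=1) - A_hat.sum(axis=1)   # row-sum deficits
    col_def = A.sum(axis=0) - A_hat.sum(axis=0)   # column-sum deficits
    n = A.shape[0]
    E = np.zeros((n, n))
    i = j = 0
    # Each added entry zeroes out a row or a column deficit, so at most
    # 2n - 1 entries of E are ever set
    while i < n and j < n:
        if row_def[i] <= tol:
            i += 1
        elif col_def[j] <= tol:
            j += 1
        else:
            w = min(row_def[i], col_def[j])
            E[i, j] += w
            row_def[i] -= w
            col_def[j] -= w
    return E

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, size=(6, 6))
A_hat = 0.9 * A                 # deficits match in total mass
E = patch(A, A_hat)

row_ok = np.allclose((A_hat + E).sum(axis=1), A.sum(axis=1))
col_ok = np.allclose((A_hat + E).sum(axis=0), A.sum(axis=0))
```

Because the total row and column deficits agree, the two-pointer pass terminates with both deficits exhausted, matching the $O(n)$ sparsity claim for $\mvar E$.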
\begin{cor}[Crude Sparsification of Directed Laplacian] \label{cor:sparsifying_directed_laplacian} Let $\mathcal{L}=\mvar D-\mvar A^{\top}\in\mathbb{{R}}^{n\times n}$ be a directed Laplacian associated with a (not necessarily Eulerian) graph $G$ that has edges incident to at most $v$ vertices and let $\epsilon, p \in (0,1)$. The routine $\textsc{SparsifySubgraph}(\mathcal{L},p,\epsilon)$ computes a directed Laplacian $\widetilde{\mathcal{L}}=\mvar D-\widetilde{\mvar A}^{\top}$ in time $O(\mathrm{nnz}(\mathcal{L})+v\epsilon^{-2}\log(v/p))$ such that \begin{enumerate} \item $\widetilde{\mathcal{L}}$ is sparse, i.e., $\mathrm{nnz}(\widetilde{\mathcal{L}})=O(v\epsilon^{-2}\log(v/p))$, \item the in and out degrees of the graphs associated with $\mathcal{L}$ and $\widetilde{\mathcal{L}}$ are the same, i.e., $\mvar A \allones = \widetilde{\mvar A} \allones$ and $\mvar A^\top \allones = \widetilde{\mvar A}^\top \allones$, \item $\norm{\mvar D_{in}^{-1/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar D_{out}^{-1/2}}_{2}\leq\epsilon$ with probability at least $1-p$, where $\mvar D_{in} = \mvar{diag}(\mvar A^\top \allones)$ and $\mvar D_{out} = \mvar{diag}(\mvar A \allones)$ are the diagonal matrices associated with the in and out degrees of $G$. \end{enumerate} \end{cor} \begin{proof} This follows by applying the sampling result from Lemma~\ref{lem:sparsifying_adjacency_matrix} to $\mvar A$, after removing the rows and columns corresponding to isolated vertices. The guarantees still hold on the larger matrix after inserting the zero rows and columns back. We can then substitute the corresponding directed Laplacian matrices into the formulas, since the diagonal terms will cancel (note that this follows again from Lemma~\ref{lem:sparsifying_adjacency_matrix}, since the sparsified Laplacian has the same out degrees as the original one).
\end{proof} Using this, we prove the main result of this subsection: how to sparsify a subgraph that is contained in an expander, or more formally, in a particular undirected graph with large spectral gap. For notational convenience, we first formally define the type of graph symmetrization we consider for these subgraph arguments. Using this notation we then provide the main result of this subsection, Lemma~\ref{lem:subgraph_sparse}. Pseudocode of this routine is in Figure~\ref{fig:sparsifySubgraph}. \begin{definition}[Graph Symmetrization] For a directed Laplacian, $\mathcal{L} = \mvar D - \mvar A^\top$, its \emph{graph symmetrization} $\mvar S_\mathcal{L}$ is the symmetric Laplacian given by $\mvar S_\mathcal{L} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar{diag}(\mvar U_\mvar A \allones) - \mvar U_{\mvar A}$. Equivalently, if we consider the graph associated with $\mathcal{L}$ and replace every directed edge with an undirected edge of half the weight, then $\mvar S_\mathcal{L}$ is the symmetric Laplacian associated with this undirected graph. \end{definition} \begin{figure}[ht] \begin{algbox} $\widetilde{\mathcal{L}}=\textsc{SparsifySubgraph}(\mathcal{L}, p, \epsilon)$\\ \textbf{Input}: A directed Laplacian $\mathcal{L} = \mvar D - \mvar A^\top \in \mathbb{{R}}^{n \times n}$ and parameters $p, \epsilon \in (0,1)$. \begin{enumerate} \item Implicitly restrict to the coordinates in $\mathrm{supp}(\mvar A)$. \item Compute $\vec{r} = \mvar A^\top \allones$, $\vec{c} = \mvar A \allones$, and let $s = \mathrm{nnz}(\vvar{r}) + \mathrm{nnz}(\vvar{c})$ denote the total number of rows and columns that are non-zero.
\item Let $\mathcal{D}$ be a distribution over $\mathbb{{R}}^{n \times n}$ such that $\mvar X \sim \mathcal{D}$ takes value \[ \mvar X = \left(\frac{\mvar A_{ij}^\top}{p_{ij}} \right)\ensuremath{\vec{{1}}}_{i}\ensuremath{\vec{{1}}}_{j}^{\top} \text{ with probability } p_{ij} = \frac{\mvar A_{ij}^\top}{s} \left[\frac{1}{\vvar{r}_i} +\frac{1}{\vvar{c}_j} \right] \text{ for all } \mvar A_{ij}^\top \neq 0~. \] \item Independently sample $\mvar A^{(1)}, \dots, \mvar A^{(k)}$ from $\mathcal{D}$ where $k = 128\cdot n\epsilon^{-2} \log(n/p)$ and compute $\widehat{\mvar A} = \frac{1}{k} \sum_{\ell=1}^{k} \mvar A^{(\ell)}$. \item Compute patching matrix $\mvar E \in \mathbb{{R}}_{\geq 0}^{n \times n}$ by greedily setting $O(n)$ entries in order to make the row and column sums of $(1+\epsilon/4)^{-1}\widehat{\mvar A} + \mvar E$ equal to those in $\mvar A^\top$. \item Let $\widetilde{\mvar A} \leftarrow (1+\epsilon/4)^{-1} \widehat{\mvar A}+\mvar E$. \item Implicitly extend $\widetilde{\mvar A}$ back to the full support. \item Return $\widetilde{\mathcal{L}} = \mvar{diag}(\widetilde{\mvar A}^\top \allones) - \widetilde{\mvar A}$. \end{enumerate} \end{algbox} \caption{Pseudocode of the subgraph sparsification routine} \label{fig:sparsifySubgraph} \end{figure} \begin{lem}[Subgraph Sparsification] \label{lem:subgraph_sparse} Let $\mathcal{L}$ be a directed Laplacian and $\mvar U$ be an undirected Laplacian with spectral gap at least $\alpha$, support size $v$, and $\mvar{diag}(\mvar S_{\mathcal{L}})\preceq\mvar{diag}(\mvar U)$.
For $\delta \leq \alpha \epsilon / 2$ the routine $\textsc{SparsifySubgraph}(\mathcal{L}, p, \delta)$ in time $O(\mathrm{nnz}(\mathcal{L})+ v \delta^{-2} \log(v/p))$ computes a Laplacian $\widetilde{\mathcal{L}}$ with the same in and out degrees as $\mathcal{L}$, $\mathrm{nnz}(\widetilde{\mathcal{L}}) = O(v \delta^{-2} \log(v/p)))$, and $ \norm{\mvar U^{\dagger/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar U^{\dagger/2}}_{2}\leq\epsilon $ with probability at least $1-p$. \end{lem} \begin{proof} Without loss of generality let $\mathcal{L}=\mvar D_{out}-\mvar A^{\top}$. Furthermore, let $\mvar D=\mvar{diag}(\mvar U)$, $\mvar D_{in} = \mvar{diag}(\mvar A^{\top} \allones)$, and $\mvar D_{out} = \mvar{diag}(\mvar A \allones)$. By Corollary~\ref{cor:sparsifying_directed_laplacian}, we can compute a directed Laplacian $\widetilde{\mathcal{L}}=\mvar D_{out}-\widetilde{\mvar A}^{\top}$ with in and out degrees being the same as those of $\mathcal{L}$, $\mathrm{nnz}(\widetilde{\mathcal{L}})=O(v \delta^{-2}\log(v/p))$, and with probability at least $1 - p$ \[ \norm{\mvar D_{in}^{-1/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar D_{out}^{-1/2}}_{2}=\norm{\mvar D_{in}^{-1/2}(\mvar A-\widetilde{\mvar A})^{\top}\mvar D_{out}^{-1/2}}_{2}\leq \delta\,. \] If this happens, then since $\mathcal{L}$ and $\widetilde{\mathcal{L}}$ have the same in and out degrees and $\mvar{diag}(\mvar S_{\mathcal{L}})\preceq\mvar{diag}(\mvar U)$, the support of $\mvar U$ contains those of $\mathcal{L}$ and $\widetilde{\mathcal{L}}$, and therefore $\ker(\mvar U)\subseteq\ker(\mathcal{L}-\widetilde{\mathcal{L}})\cap\ker((\mathcal{L}-\widetilde{\mathcal{L}})^{\top})$. Consequently, by Lemma~\ref{lem:asym_strong_equiv} we have \[ \norm{\mvar U^{\dagger/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar U^{\dagger/2}}_{2} =\max_{\vec{x},\vec{y} \neq 0} \frac{\vec{x}^{\top}(\mathcal{L}-\widetilde{\mathcal{L}})\vec{y}} {\sqrt{\vec{x}^{\top}\mvar U \vec{x}\cdot \vec{y}^{\top}\mvar U \vec{y}}}\,.
\] Now, clearly, there is a maximizing pair $\vec{x}, \vec{y}\perp\vec{1}$. Consequently $\vec{x}' = \vec{x} - \frac{\vec{x}^{\top}\vec{d}}{\norm{\vec{d}}_{1}}\vec{1}$ and $\vec{y}' = \vec{y} - \frac{\vec{y}^{\top}\vec{d}}{\norm{\vec{d}}_{1}}\vec{1}$ are nonzero and satisfy $(\vec{x}')^{\top} \vec{d} = 0$ and $(\vec{y}')^{\top} \vec{d} = 0$. The spectral gap of $\mvar U$ being at least $\alpha$ implies \[ \mvar U \succeq \alpha \left(\mvar D - \frac{1}{\norm {\vec{d}}_{1}}\vec{d}\vec{d}^{\top}\right)~, \] which then gives \begin{align*} \norm{\mvar U^{\dagger/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar U^{\dagger/2}}_{2} & =\frac{(\vec{x}')^{\top}(\mathcal{L}-\widetilde{\mathcal{L}})\vec{y}'} {\sqrt{(\vec{x}')^{\top}\mvar U \vec{x}'\cdot(\vec{y}')^{\top}\mvar U \vec{y}'}} \leq \frac{1}{\alpha}\cdot\frac{(\vec{x}')^{\top}(\mathcal{L}-\widetilde{\mathcal{L}})\vec{y}'}{\sqrt{(\vec{x}')^{\top}\mvar D \vec{x}' \cdot(\vec{y}')^{\top}\mvar D \vec{y}'}}\,{.} \end{align*} Since $\mvar D_{in}\preceq2\cdot\mvar D$ and $\mvar D_{out}\preceq2\cdot\mvar D$, applying Lemma~\ref{lem:asym_strong_equiv} again yields \[ \norm{\mvar U^{\dagger/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar U^{\dagger/2}}_{2} \leq\frac{2}{\alpha}\cdot\frac{(\vec{x}')^{\top}(\mathcal{L}-\widetilde{\mathcal{L}}) \vec{y}'}{\sqrt{(\vec{x}')^{\top}\mvar D_{in} \vec{x}'\cdot(\vec{y}')^{\top}\mvar D_{out} \vec{y}'}} \leq\frac{2}{\alpha}\cdot\norm{\mvar D_{in}^{-1/2}(\mvar A-\widetilde{\mvar A})^{\top}\mvar D_{out}^{-1/2}}_{2}\leq \frac{2}{\alpha} \delta \] and the result follows by our restriction on $\delta$. \end{proof} \subsection{Sparsifying an Eulerian Laplacian\label{sub:sparse_eulerian}} Here we show how to produce an $\epsilon$-sparsifier of an Eulerian Laplacian in nearly linear time.
We achieve this by applying our result on sparsifying subgraphs (Lemma~\ref{lem:subgraph_sparse}), proved in Section~\ref{sub:sample_directed}, to a decomposition of the Eulerian graph into pieces that are well-connected in an associated undirected graph. The decomposition we use is essentially identical to the expander decomposition used in Spielman and Teng's work on graph sparsification~\cite{SpielmanT11}. Interestingly, the quality of our decomposition is measured only in terms of properties of the symmetrized graph, rather than of the original directed graph. Ultimately, only the sampling probabilities that we use on the decomposition take into account edge direction.\footnote{Even this can possibly be overcome by choosing different sampling probabilities. It is the patching of the graph, i.e., adding edges to preserve degree imbalance, where we truly use the directed structure of the graph.} Below we formally define the type of decomposition we need, and provide a theorem about computing such decompositions. Such decompositions have been computed in prior works~\cite{SpielmanTengSolver:journal, KLOS14, PengS14, OrecchiaV11}, and we defer their discussion to Appendix~\ref{sec:decomposition}. \begin{definition} An \emph{$(s,\alpha,\beta)$-decomposition} of a directed Laplacian $\mathcal{L}$ is a decomposition of $\mathcal{L}$ into directed Laplacians $\mathcal{L}^{(1)},\ldots,\mathcal{L}^{(k)}\in\mathbb{{R}}^{n\times n}$, i.e., $\mathcal{L} = \sum_{i \in [k]} \mathcal{L}^{(i)}$, such that $\sum_{i\in[k]}\left|\mathrm{supp}(\mathcal{L}^{(i)})\right|\leq s$ and there exist undirected Laplacians $\mvar U^{(1)},\ldots,\mvar U^{(k)}$ such that: \begin{enumerate} \item $\mvar{diag}(\mvar S_{\mathcal{L}^{(i)}}) \preceq \mvar{diag}(\mvar U^{(i)})$, for all $i\in[k]$, \item $\mvar U^{(i)}$ has spectral gap at least $\alpha$, for all $i\in[k]$, and \item $\sum_{i\in[k]}\mvar U^{(i)}\preceq\beta\mvar U_{\mathcal{L}}$. \end{enumerate} We call the $\mvar U^{(1)}, \ldots
,\mvar U^{(k)}$ with these properties an \emph{$(\alpha,\beta)$ undirected cover} of $\mathcal{L}^{(1)}, ... ,\mathcal{L}^{(k)}$. \end{definition} \begin{thm} \label{thm:decomposition_thm} Given a directed Laplacian, $\mathcal{L} \in \mathbb{{R}}^{n \times n}$, the routine $\textsc{FindDecomposition}(\mathcal{L})$ returns an $(\widetilde{O}(n),1/\alpha, \beta)$-decomposition, with $\alpha, \beta = \widetilde{O}(1)$, in $\widetilde{O}(\mathrm{nnz}(\mathcal{L}))$ time. \end{thm} We produce our sparsifiers by computing the decomposition using Theorem~\ref{thm:decomposition_thm} and then applying Lemma~\ref{lem:subgraph_sparse} repeatedly to obtain the sparsifier. Pseudocode of this algorithm is given in Figure~\ref{fig:sparsifyEulerian}. \begin{figure}[ht] \begin{algbox} $\widetilde{\mathcal{L}}=\textsc{SparsifyEulerian}(\mathcal{L}, p, \epsilon)$\\ \textbf{Input}: $\mathcal{L}$ an $n \times n$ directed Laplacian, parameters $p, \epsilon \in (0,1)$. \begin{enumerate} \item $((\mathcal{L}^{(1)}, \dots, \mathcal{L}^{(k)}), \alpha, \beta) \leftarrow \textsc{FindDecomposition}(\mathcal{L})$. \item For $i = 1, \dots, k$ \begin{enumerate} \item $\widetilde{\mathcal{L}}^{(i)} \leftarrow \textsc{SparsifySubgraph}(\mathcal{L}^{(i)}, p/n^2, \epsilon/(2\alpha\beta))$. \end{enumerate} \item Return $\widetilde{\mathcal{L}} = \sum_{i=1}^k \widetilde{\mathcal{L}}^{(i)}$. \end{enumerate} \end{algbox} \caption{Pseudocode of the Eulerian graph sparsification routine} \label{fig:sparsifyEulerian} \end{figure} The analysis of this algorithm is given in the following theorem.
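As a small illustration (ours, not part of the paper), the three conditions of the definition above can be checked numerically on dense instances. The sketch below builds a random Eulerian Laplacian as a weighted union of directed cycles and checks the trivial single-piece decomposition with cover $\mvar U^{(1)} = \mvar U_{\mathcal{L}}$; all function names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_eulerian_laplacian(n, k=5):
    """Weighted union of directed cycles: in-degrees equal out-degrees by construction."""
    W = np.zeros((n, n))
    W[np.arange(n), (np.arange(n) + 1) % n] = 1.0   # one Hamiltonian cycle for connectivity
    for _ in range(k):
        W[np.arange(n), rng.permutation(n)] += rng.random()
    np.fill_diagonal(W, 0.0)   # dropping self-loops keeps row sums equal to column sums
    return np.diag(W.sum(axis=1)) - W.T

def spectral_gap(U, tol=1e-10):
    """Smallest nonzero eigenvalue of the normalized Laplacian D^{-1/2} U D^{-1/2}."""
    d = np.diag(U)
    eig = np.linalg.eigvalsh(U / np.sqrt(np.outer(d, d)))
    return eig[eig > tol].min()

def check_decomposition(pieces, covers, U_L, alpha, beta, tol=1e-9):
    for L_i, U_i in zip(pieces, covers):
        S_i = (L_i + L_i.T) / 2    # for Eulerian pieces the symmetrization is the symmetric part
        assert np.all(np.diag(S_i) <= np.diag(U_i) + tol)              # condition 1
        assert spectral_gap(U_i) >= alpha - tol                        # condition 2
    assert np.linalg.eigvalsh(beta * U_L - sum(covers)).min() >= -tol  # condition 3
    return True

L = random_eulerian_laplacian(8)
U_L = (L + L.T) / 2                # symmetrization of an Eulerian Laplacian
ok = check_decomposition([L], [U_L], U_L, alpha=spectral_gap(U_L), beta=1.0)
```

This trivial decomposition is of course useless algorithmically (its piece is the whole graph); the point is only to make the three conditions concrete.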
\begin{thm} \label{thm:order_n_sparsifier} For Eulerian Laplacian $\mathcal{L}\in\mathbb{{R}}^{n\times n}$ and $\epsilon,p\in(0,1)$ with probability at least $1 - p$ the routine $\textsc{SparsifyEulerian}(\mathcal{L}, p, \epsilon)$ computes in $\widetilde{O}(\mathrm{nnz}(\mathcal{L})+n\epsilon^{-2}\log(1/p))$ time an Eulerian Laplacian $\widetilde{\mathcal{L}}\in\mathbb{{R}}^{n\times n}$ such that \begin{enumerate} \item $\widetilde{\mathcal{L}}$ is an $\epsilon$-sparsifier of $\mathcal{L}$, \item the in and out degrees of the graphs associated with $\mathcal{L}$ and $\widetilde{\mathcal{L}}$ are identical. \end{enumerate} \end{thm} \begin{proof} Using the $\textsc{FindDecomposition}$ routine (Theorem~\ref{thm:decomposition_thm}) we compute Laplacians $\mathcal{L}^{(1)},...,\mathcal{L}^{(k)}\in\mathbb{{R}}^{n\times n}$ that form an $(s,\alpha,\beta)$-decomposition of $\mathcal{L}$ with $(\alpha,\beta)$ undirected cover $\mvar U^{(1)},...,\mvar U^{(k)}$ for $s=\widetilde{O}(n)$, $\alpha=1/\widetilde{O}(1)$, and $\beta=\widetilde{O}(1)$. We then apply the $\textsc{SparsifySubgraph}$ routine (Lemma~\ref{lem:subgraph_sparse}) to each $\mathcal{L}^{(i)}$ to compute $\widetilde{\mathcal{L}}^{(i)}$ in $O(\mathrm{nnz}(\mathcal{L})+s\epsilon^{-2}\alpha^{2}\beta^{2}\log (n/p))$ time such that each $\widetilde{\mathcal{L}}^{(i)}$ has the same in and out degree as $\mathcal{L}^{(i)}$ and $\norm{\left(\mvar U^{(i)}\right)^{\dagger/2}(\mathcal{L}^{(i)}-\widetilde{\mathcal{L}}^{(i)})\left(\mvar U^{(i)}\right)^{\dagger/2}}_{2}\leq \epsilon / \beta$. The running time bound follows since $\sum_{i\in[k]}\left|\mathrm{supp}(\mathcal{L}^{(i)})\right|\leq s$, and the success probability of at least $1 - p$ follows by union bounding over the failure probabilities of the calls to $\textsc{SparsifySubgraph}$.
Finally, considering $\widetilde{\mathcal{L}}=\sum_{i\in[k]}\widetilde{\mathcal{L}}^{(i)}$, Lemma~\ref{lem:asym_strong_equiv} yields that for all $\vec{x}, \vec{y}\neq0$ it is the case that \[ \vec{x}^{\top}(\mathcal{L}-\widetilde{\mathcal{L}})\vec{y} =\sum_{i\in[k]} \vec{x}^{\top}(\mathcal{L}^{(i)}-\widetilde{\mathcal{L}}^{(i)}) \vec{y} \leq \sum_{i\in[k]}\frac{\epsilon}{2 \beta} \left[\vec{x}^{\top}\mvar U^{(i)}\vec{x} + \vec{y}^{\top}\mvar U^{(i)}\vec{y}\right] \leq\frac{\epsilon}{2} \left[\vec{x}^{\top}\mvar U_{\mathcal{L}} \vec{x} + \vec{y}^{\top}\mvar U_{\mathcal{L}}\vec{y}\right]\,{.} \] The result follows from Lemma~\ref{lem:asym_strong_equiv} applied above and the bounds on $s$, $\alpha$, and $\beta$. Note that the fact that in and out degrees are preserved is guaranteed by the fact that for each component in the decomposition the degrees are preserved, according to Lemma~\ref{lem:subgraph_sparse}. \end{proof} \subsection{Sparsifying a Squared Eulerian Laplacian\label{sub:sparse_square}} Here we build upon Section~\ref{sub:sparse_eulerian} and show how to sparsify certain implicitly represented Eulerian Laplacians. In particular, given an Eulerian Laplacian $\mathcal{L} = \mvar D-\mvar A^{\top}$ associated with a strongly connected graph we show how to compute a sparsifier for the Eulerian Laplacian $\mathcal{M} = \mvar D-\mvar A^{\top}\mvar D^{-1}\mvar A^{\top}$ in nearly-linear time with respect to $\mathcal{L}$, i.e. without explicitly constructing $\mathcal{M}$. Note that the running time we achieve may be sublinear in the size of the matrix we are sparsifying, i.e.\ $\mathcal{M}$. Our approach is a natural directed extension of the approach taken by Peng and Spielman \cite{PengS14} for solving the same problem in the case when $\mathcal{L}$ is symmetric. Broadly speaking, we decompose $\mathcal{M}$ into a directed Laplacian for each vertex.
Each of these directed Laplacians may be dense but we show that they have a compact representation that allows us to efficiently implement a sampling scheme analogous to $\textsc{SparsifySubgraph}$ for each of these Laplacians such that adding these approximations yields an Eulerian approximation to $\mathcal{M}$ that has $\widetilde{O}(\mathrm{nnz}(\mathcal{L}))$ non-zero entries. Applying $\textsc{SparsifyEulerian}$ from Section~\ref{sub:sparse_eulerian} to the result then yields our desired sparsifier. Formally, we consider the slightly more general setting where we have a square matrix with non-negative entries, $\WWhat \in \mathbb{{R}}^{n \times n}_{\geq 0}$, that has the same row and column sums, i.e. $\WWhat \allones = \WWhat^\top \allones$. We show how to compute a sparsifier of $\mathcal{M} = \mvar D - \WWhat \mvar D^{-1} \WWhat$ for $\mvar D = \mvar{diag}(\WWhat \allones)$ in time nearly linear in $\mathrm{nnz}(\WWhat)$. This setting is more general as we allow entries on the diagonal of the squared matrix. We consider this case as it simplifies our analysis in Section~\ref{sec:solver}. As discussed, we first decompose $\mathcal{M}$ into a directed Laplacian for each vertex. For $i \in [n]$ we let \[ \mathcal{L}^{(i)} = \mvar{diag}(\WWhat_{i,:}) - \frac{1}{\mvar D_{i,i}} \WWhat_{:,i} \WWhat_{i,:}^\top \] where $\WWhat_{i,:}, \WWhat_{:,i} \in \mathbb{{R}}^{n}$ are the vectors corresponding to row and column $i$ of $\WWhat$ respectively. In Theorem~\ref{thm:sparsify_square} we show that $\mathcal{M}$ and each $\mathcal{L}^{(i)}$ are directed Laplacians such that $\mathcal{M} = \sum_{i \in [n]} \mathcal{L}^{(i)}$. Note that while $\mathcal{M}$ may be dense and forming each of the $\mathcal{L}^{(i)}$ explicitly may be expensive, we have a compact representation of each $\mathcal{L}^{(i)}$ in terms of a single row and column of $\WWhat$.
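As a quick numerical sanity check (ours, not the paper's), the identity $\mathcal{M} = \sum_{i \in [n]} \mathcal{L}^{(i)}$ holds exactly; the sketch below uses a symmetric matrix purely as a convenient instance with equal row and column sums, though any balanced non-negative matrix works.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.random((n, n)) + 0.1
W = (W + W.T) / 2                      # symmetric => row sums equal column sums
D = np.diag(W.sum(axis=1))
M = D - W @ np.linalg.inv(D) @ W       # the squared Eulerian Laplacian

# per-vertex pieces: diag(W_{i,:}) - W_{:,i} W_{i,:}^T / D_{ii}
pieces = [np.diag(W[i, :]) - np.outer(W[:, i], W[i, :]) / D[i, i] for i in range(n)]

assert np.allclose(M, sum(pieces))     # M decomposes exactly into the pieces
assert np.allclose(M.sum(axis=0), 0)   # M is Eulerian: zero column sums
assert np.allclose(M.sum(axis=1), 0)   # ... and zero row sums
```

Summing the diagonal terms recovers $\mvar D$ (since row and column sums agree), and summing the rank-one terms recovers $\WWhat \mvar D^{-1} \WWhat$ entrywise.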
Moreover, if we look at the total support of the $\mathcal{L}^{(i)}$ then since each row and column only appears once we see that the total support is just $O(\mathrm{nnz}(\WWhat))$. Furthermore, we show that, due to its low rank structure, the graph symmetrization $\mvar S_{\mathcal{L}^{(i)}}$ of each $\mathcal{L}^{(i)}$ has spectral gap at least a constant (see Lemma~\ref{lem:bipartite_conductance}). Consequently, if we apply $\textsc{SparsifySubgraph}$ to each $\mathcal{L}^{(i)}$ and sum the results, the analysis in Section~\ref{sub:sparse_eulerian} would imply that this matrix would be an approximation to $\mathcal{M}$ with $O(\mathrm{nnz}(\WWhat))$ non-zero entries. The only difficulty in following this approach is to show that we can apply $\textsc{SparsifySubgraph}$ to each $\mathcal{L}^{(i)}$ efficiently. We show that we can perform this operation in time proportional to the number of non-zeros in each row and column of $\WWhat$, rather than the naive $O(\mathrm{nnz}(\mathcal{L}^{(i)}))$ running time which could be much larger. To do this, as in Peng and Spielman \cite{PengS14}, we exploit the simple product structure of $\mathcal{L}^{(i)}$. Since $\WWhat$ has the same row and column sums, $\norm{\WWhat_{:,i}}_1 = \norm{\WWhat_{i,:}}_1 = \mvar D_{i,i}$. Consequently, each $\mathcal{L}^{(i)}$ is of the form $\mvar N = \mvar{diag}(\vvar{y}) - \frac{1}{\norm{\vvar{y}}_1} \vvar{x} \vvar{y}^\top$ for some $\vvar{x},\vvar{y} \in \mathbb{{R}}^{n}_{\geq 0}$ with $\norm{\vvar{x}}_1 = \norm{\vvar{y}}_1$ that depend on $i$. Since the off-diagonal entries and their row and column sums have simple closed form expressions, we can show that with $O(\mathrm{nnz}(\vvar{x}) + \mathrm{nnz}(\vvar{y}))$ preprocessing time, we can sample from the distribution required to apply $\textsc{SparsifySubgraph}$ on this matrix in $O(1)$ time. Consequently, we can implement the approach in our desired running time.
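To make the closed-form expressions concrete, the following sketch (ours; strictly positive vectors are assumed so every off-diagonal pair is sampleable) tabulates the probabilities $p_{ij}$ used below in $\textsc{SparsifyProduct}$ and checks that they form a distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
x = rng.random(n) + 0.1
y = rng.random(n) + 0.1
y *= x.sum() / y.sum()                 # enforce ||x||_1 = ||y||_1 = r
r = x.sum()
s = np.count_nonzero(x) + np.count_nonzero(y)   # here s = 2n

# p_ij = (1/s) [ x_i/(r - x_j) + y_j/(r - y_i) ]  for i != j
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = (x[i] / (r - x[j]) + y[j] / (r - y[i])) / s

# summing x_i over i != j gives r - x_j, and y_j over j != i gives r - y_i,
# so each index contributes 1/s to each of the two terms and P sums to 1
assert np.isclose(P.sum(), 1.0)
```

Each sampled entry is rescaled by $\vvar{x}_i\vvar{y}_j/(r\,p_{ij})$, so the sample mean is an unbiased estimate of the off-diagonal part of $\frac{1}{r}\vvar{x}\vvar{y}^\top$.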
In the remainder of this section we provide pseudocode for these routines and prove their correctness. First, in Figure~\ref{fig:sparsifyProduct} we provide $\textsc{SparsifyProduct}$, the pseudocode for sparsifying matrices of the form $\mvar N$ given above. In Lemma~\ref{lem:bipartite_conductance} we prove that the graph symmetrizations, $\mvar S_{\mvar N}$, of these matrices have spectral gap at least $1$ and using this fact in Lemma~\ref{lem:sparsify_product} we prove that $\textsc{SparsifyProduct}$ does provide sparse approximations to these matrices. In Figure~\ref{fig:sparsifySquare} we provide $\textsc{SparsifySquare}$, the pseudocode for producing sparsifiers of $\mathcal{M}$ by invoking $\textsc{SparsifyProduct}$ on each $\mathcal{L}^{(i)}$ and then invoking $\textsc{SparsifyEulerian}$ on the result. Finally, we conclude the section with Theorem~\ref{thm:sparsify_square} which proves the correctness and analyzes the running time of $\textsc{SparsifySquare}$. \begin{figure}[ht] \begin{algbox} $\widetilde{\mathcal{L}}=\textsc{SparsifyProduct}(\vec{x}, \vec{y}, p, \epsilon)$\\ \textbf{Input}: nonnegative vectors $\vec{x}$ and $\vec{y}$ with $r = \norm{\vec{x}}_1 = \norm{\vec{y}}_1$ that implicitly represent \\ \makebox[1.22cm]{} the Laplacian $\mvar{diag}(\vec{y}) - \frac{1}{r} \vec{x} \vec{y}^{\top}$ and parameters $p, \epsilon \in (0,1)$. \begin{enumerate} \item Let $s = \mathrm{nnz}(\vvar{x}) + \mathrm{nnz}(\vvar{y})$ be the total number of non-zero entries in $\vvar{x}$ and $\vvar{y}$. \item If $\mathrm{nnz}(\vvar{x}) \leq 1$ or $\mathrm{nnz}(\vvar{y}) \leq 1$ return $\mvar{diag}(\vec{y}) - \frac{1}{r} \vec{x} \vec{y}^{\top}$.
\item Let $\mathcal{D}$ be a distribution over $\mathbb{{R}}^{n \times n}$ such that $\mvar X \sim \mathcal{D}$ takes value \[ \mvar X = \left(\frac{\vvar{x}_{i} \vvar{y}_{j}}{r \cdot p_{ij}} \right)\ensuremath{\vec{{1}}}_{i}\ensuremath{\vec{{1}}}_{j}^{\top} \text{ with probability } p_{ij} = \frac{1}{s} \left[\frac{\vvar{x}_i}{r - \vvar{x}_j} + \frac{\vvar{y}_j}{r - \vvar{y}_i} \right] \text{ for } i \neq j \text{ and } \vvar{x}_{i} \vvar{y}_{j} \neq 0\,. \] \item Independently sample $\mvar A^{(1)}, \dots, \mvar A^{(k)}$ from $\mathcal{D}$ where $k = 128 \cdot s \epsilon^{-2} \log(s/p)$. Implement sampling $i,j\in[n]$ with probability $p_{ij}$ as follows. First choose $i$ or $j$ with probability proportional to $r - \vvar{y}_i$ for $i$ with $\vvar{y}_i \neq 0$ and $r - \vvar{x}_j$ for $j$ with $\vvar{x}_{j} \neq 0$. If $i$ is chosen pick $j \neq i$ with probability proportional to $\vvar{y}_j$. If $j$ is chosen pick $i \neq j$ with probability proportional to $\vvar{x}_i$. \item Compute $\widehat{\mvar A} = \frac{1}{k} \sum_{\ell=1}^{k} \mvar A^{(\ell)}$. \item Compute the patching matrix $\mvar E \in \mathbb{{R}}_{\geq 0}^{n \times n}$ by greedily setting $O(s)$ entries in order to make the row and column sums of $(1+\epsilon/4)^{-1}\widehat{\mvar A} + \mvar E$ equal to those of $\frac{1}{r} \vvar{x} \vvar{y}^\top$. \item Let $\widetilde{\mvar A} \leftarrow (1+\epsilon/4)^{-1} \widehat{\mvar A}+\mvar E$. \item Return $\widetilde{\mathcal{L}} = \mvar{diag}(\widetilde{\mvar A}^\top \allones) - \widetilde{\mvar A}$. \end{enumerate} \end{algbox} \caption{Pseudocode for sparsifying a single product graph} \label{fig:sparsifyProduct} \end{figure} \begin{figure}[h!]
\begin{algbox} $\widetilde{\WWhat}=\textsc{SparsifySquare}(\WWhat, p, \epsilon)$\\ \textbf{Input}: $\WWhat \in \mathbb{{R}}^{n \times n}_{\geq 0}$ with $\WWhat \allones = \WWhat^\top \allones$ that implicitly represents the Laplacian \\ \makebox[1.22cm]{} $\mathcal{M} = \mvar D - \WWhat \mvar D^{-1} \WWhat$ for $\mvar D = \mvar{diag}(\WWhat \allones)$ and parameters $p, \epsilon \in (0, 1)$. \begin{enumerate} \item For all $i = 1, \dots, n$, \begin{enumerate} \item Let $\WWhat_{i,:}, \WWhat_{:,i} \in \mathbb{{R}}^{n}$ denote row and column $i$ of $\WWhat$ respectively. \item $\widetilde{\mathcal{L}}^{(i)} \leftarrow \textsc{SparsifyProduct}(\WWhat_{:,i}, \WWhat_{i,:}, p/{2n}, \epsilon/6)$ \end{enumerate} \item Let $\widehat{\mathcal{M}} = \sum_{i=1}^n \widetilde{\mathcal{L}}^{(i)}$. \item $\widetilde{\mathcal{M}} \leftarrow \textsc{SparsifyEulerian}(\widehat{\mathcal{M}}, p/2, \epsilon/3)$. \item Return $\mvar D - \widetilde{\mathcal{M}}$. \end{enumerate} \end{algbox} \caption{Pseudocode for producing an $\epsilon$-sparsifier of $\mathcal{M}=\mvar D-\WWhat\mvar D^{-1}\WWhat$} \label{fig:sparsifySquare} \end{figure} \begin{lemma} \label{lem:bipartite_conductance} For $\vec{x}, \vec{y}\in\mathbb{{R}}_{\geq0}^{n}$ with $r=\norm {\vec{x}}_{1}=\norm {\vec{y}}_{1}>0$ and $\mvar Y=\mvar{diag}(\vec{y})$, the matrix $\mathcal{L}=\mvar Y-\frac{1}{r}\vec{x}\vec{y}^{\top}$ is a directed Laplacian and the spectral gap of its symmetrization $\mvar S_{\mathcal{L}}$ is at least $1$. \end{lemma} \begin{proof} Note that $\mathcal{L}_{ij}=-\frac{1}{r}x_i y_j\leq 0$ for $i\neq j$ and $\vec{1}^{\top}\mathcal{L}=\vec{1}^{\top}\mvar Y-\frac{1}{r}\vec{1}^\top \vec{x}\vec{y}^{\top}=\vec{y}^{\top}-\vec{y}^{\top}=\vec{0}^{\top}$. Consequently, $\mathcal{L}$ is a directed Laplacian. All that remains is to lower bound the spectral gap of $\mvar S_{\mathcal{L}}$.
Letting $\mvar X = \mvar{diag}(\vec{x})$, we see that the graph symmetrization $\mvar S_{\mathcal{L}}$ of $\mathcal{L}$ is the undirected Laplacian \[ \mvar S_{\mathcal{L}} = \frac{1}{2} \left(\mvar X+\mvar Y - \frac{1}{r}\left(\vec{x}\vec{y}^{\top} +\vec{y}\vec{x}^{\top}\right)\right)\,{.} \] Furthermore, the diagonal entries of $\mvar S_{\mathcal{L}}$, denoted $\vvar{d} = \mathrm{diag}(\mvar S_{\mathcal{L}})$, are given by \[ \vec{d}_i={[\mvar S_{\mathcal{L}}]}_{ii} =\frac{1}{2}(\vec{x}_i+\vec{y}_i)-\frac{1}{r}\vec{x}_i \vec{y}_i \leq \frac{1}{2}(\vec{x}_i+\vec{y}_i)\,{.} \] Now recall that the spectral gap of $\mvar S_{\mathcal{L}}$ is defined to be the smallest nonzero eigenvalue of the normalized Laplacian $\mvar D^{-1/2}\mvar S_{\mathcal{L}}\mvar D^{-1/2}$, where $\mvar D=\mvar{diag}(\vec{d})$. \newcommand{\mm}{\mvar M} Since $\vec{d}_i\leq \frac{1}{2}\left(\vec{x}_i+\vec{y}_i\right)$, we have \[ \mvar D^{-1/2}\mvar S_{\mathcal{L}}\mvar D^{-1/2} \succeq \left(\frac{1}{2}\left(\mvar X+\mvar Y\right)\right)^{-1/2} \mvar S_{\mathcal{L}} \left(\frac{1}{2}\left(\mvar X+\mvar Y\right)\right)^{-1/2} =2\left(\mvar X+\mvar Y\right)^{-1/2} \mvar S_{\mathcal{L}} \left(\mvar X+\mvar Y\right)^{-1/2}=: \mm. \] This implies that the eigenvalues of $\mvar D^{-1/2}\mvar S_{\mathcal{L}}\mvar D^{-1/2}$ dominate those of $\mm$. The multiplicity of zero as an eigenvalue is the same for the two matrices, so it thus suffices to show that the smallest nonzero eigenvalue of $\mm$ is at least 1.
Plugging the definition of $\mvar S_{\mathcal{L}}$ into our expression for $\mm$ gives \begin{align*} \mm &=2\left(\mvar X+\mvar Y\right)^{-1/2} \mvar S_{\mathcal{L}} \left(\mvar X+\mvar Y\right)^{-1/2}\\ &=\left(\mvar X+\mvar Y\right)^{-1/2}\left(\mvar X+\mvar Y - \frac{1}{r}\left(\vec{x}\vec{y}^{\top} +\vec{y}\vec{x}^{\top}\right)\right) \left(\mvar X+\mvar Y\right)^{-1/2} =\mvar I-\mvar N, \end{align*} where $\mvar N=\frac{1}{r}\left(\mvar X+\mvar Y\right)^{-1/2}\left(\vec{x}\vec{y}^{\top} +\vec{y}\vec{x}^{\top}\right)\left(\mvar X+\mvar Y\right)^{-1/2}$. The matrix $\mvar N$ has rank 2, so $\mvar I-\mvar N$ has at most 2 eigenvalues that are not equal to 1. Furthermore, we know that $\mvar M$ has a nontrivial kernel, so at least one of these eigenvalues is 0. Let $\lambda$ be the one remaining eigenvalue. Since $\mathrm{tr}(\mvar M)$ equals the sum of these eigenvalues, we have $\mathrm{tr}(\mvar M)=(n-2)\cdot 1 + 0 +\lambda = n-2+\lambda$, so \[\lambda=\mathrm{tr}(\mvar M)-n+2=\mathrm{tr}(\mvar I)-\mathrm{tr}(\mvar N)-n+2 = 2-\mathrm{tr}(\mvar N).\] The inequality $\frac{2ab}{a+b}\leq \frac{a+b}{2}$ between the harmonic and arithmetic means then gives \[ \mathrm{tr}(\mvar N) = \sum_{i \in [n]} \mvar N_{ii}=\frac{1}{r}\sum_{i \in [n]} \frac{2\vec{x}_i \vec{y}_i}{\vec{x}_i+\vec{y}_i} \leq \frac{1}{r}\sum_{i \in [n]} \frac{\vec{x}_i+\vec{y}_i}{2}=1, \] so $\lambda=2-\mathrm{tr}(\mvar N) \geq 1$. The nonzero eigenvalues of $\mvar M$ are thus all at least 1, as desired. \end{proof} \begin{lem}[Product Sparsification] \label{lem:sparsify_product} Let $\vvar{x}, \vvar{y} \in \mathbb{{R}}^{n}_{\geq 0}$ be non-negative vectors with $\norm{\vvar{x}}_1 = \norm{\vvar{y}}_1 = r$ and let $\epsilon, p \in (0,1)$. Furthermore, let $s$ denote the total number of non-zero entries in $\vvar{x}$ and $\vvar{y}$, i.e.
$s = \mathrm{nnz}(\vvar{x}) + \mathrm{nnz}(\vvar{y})$ and let $\mathcal{L} \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \mvar{diag}(\vvar{y}) - \frac{1}{r} \vvar{x} \vvar{y}^\top$. The routine $\textsc{SparsifyProduct}(\vvar{x}, \vvar{y}, p, \epsilon/2)$ in time $O(s \epsilon^{-2} \log(s/p))$ computes with probability at least $1-p$ a Laplacian $\widetilde{\mathcal{L}}$ with the same in and out degrees as $\mathcal{L}$, $\mathrm{nnz}(\widetilde{\mathcal{L}}) = O(s \epsilon^{-2} \log(s/p))$, and $ \norm{\mvar S_{\mathcal{L}}^{\dagger/2}(\mathcal{L}-\widetilde{\mathcal{L}})\mvar S_{\mathcal{L}}^{\dagger/2}}_{2}\leq\epsilon $. \end{lem} \begin{proof} Note that if $\mathrm{nnz}(\vvar{x}) \leq 1$ or $\mathrm{nnz}(\vvar{y}) \leq 1$, then clearly $\frac{1}{r} \vvar{x} \vvar{y}^\top$ has at most $s$ non-zero entries and $\mathcal{L}$ has $O(s)$ non-zero entries. Furthermore, we can clearly compute $\mathcal{L}$ in $O(s)$ time and therefore the result follows. In the remainder of the proof we therefore assume that $\mathrm{nnz}(\vvar{x}) \geq 2$ and $\mathrm{nnz}(\vvar{y}) \geq 2$. First, we show that $\widetilde{\mathcal{L}}$ is precisely the output of an execution of $\textsc{SparsifySubgraph}(\mathcal{L}, p, \epsilon/2)$. Let $\mvar X = \mvar{diag}(\vvar{x})$, $\mvar Y = \mvar{diag}(\vvar{y})$, $\mvar A^\top \stackrel{\mathrm{{\scriptscriptstyle def}}}{=} \frac{1}{r} [\vvar{x} \vvar{y}^\top - \mvar X \mvar Y]$, and $\mvar D = \mvar Y - \frac{1}{r} \mvar X \mvar Y$. Clearly, $\mathcal{L} = \mvar D - \mvar A^\top$. Furthermore, we see that $\mvar A^\top$ is non-negative with a zero diagonal, $\mathrm{diag}(\mvar A^\top) = \allzeros$, and therefore $\mvar D$ is diagonal and this is the standard decomposition of $\mathcal{L}$. Now as in $\textsc{SparsifySubgraph}(\mathcal{L}, p, \epsilon/2)$ let $\vvar{r} = \mvar A^\top \allones = \vvar{x} - \frac{1}{r} \mvar X \mvar Y \allones$ and $\vvar{c} = \mvar A \allones = \vvar{y} - \frac{1}{r} \mvar X \mvar Y \allones$.
Furthermore, since $\mathrm{nnz}(\vvar{x}) \geq 2$ and $\mathrm{nnz}(\vvar{y}) \geq 2$ we see that a row or column of $\frac{1}{r} \vvar{x} \vvar{y}^\top$ is non-zero if and only if the corresponding row or column in $\mvar A^\top$ is non-zero and thus $s$ is the same in $\textsc{SparsifySubgraph}(\mathcal{L}, p, \epsilon/2)$ and $\textsc{SparsifyProduct}(\vvar{x}, \vvar{y}, p, \epsilon/2)$. Now, for all $i \neq j$ with $\mvar A_{ij}^\top = \frac{1}{r}\vvar{x}_i \vvar{y}_j \neq 0$ we have \[ \frac{\mvar A_{ij}^\top}{s} \left[\frac{1}{\vvar{r}_i} + \frac{1}{\vvar{c}_j}\right] = \frac{\vvar{x}_i \vvar{y}_j}{r \cdot s} \left[\frac{1}{\vvar{x}_i - \frac{1}{r} \vvar{x}_i \vvar{y}_i} + \frac{1}{\vvar{y}_j - \frac{1}{r} \vvar{x}_j \vvar{y}_j}\right] = \frac{1}{s} \left[\frac{\vvar{x}_i}{r - \vvar{x}_j} + \frac{\vvar{y}_j}{r - \vvar{y}_i} \right]\,. \] Consequently, we see that $\widetilde{\mathcal{L}}$ is precisely the output of an execution of $\textsc{SparsifySubgraph}(\mathcal{L}, p, \epsilon/2)$. Furthermore, since $\mvar S_\mathcal{L}$ has spectral gap at least $1$ by Lemma~\ref{lem:bipartite_conductance}, we have by Lemma~\ref{lem:subgraph_sparse}, which analyzes $\textsc{SparsifySubgraph}$, that $\widetilde{\mathcal{L}}$ has the desired properties. All that remains is to bound the running time of $\textsc{SparsifyProduct}(\vvar{x}, \vvar{y}, p, \epsilon/2)$. Note that computing $s$ takes $O(s)$ time and computing all the $r - \vvar{y}_i$ and $r - \vvar{x}_j$ can be done in $O(s)$ time. Consequently, with $O(s)$ preprocessing we can build a table so that each sample from $\mathcal{D}$ takes $O(1)$ time and computing $\widehat{\mvar A}$ takes $O(s + k) = O(s \epsilon^{-2} \log(s/p))$ time. Furthermore, since computing the patching $\mvar E$ also takes $O(s + k)$ time we have that the total running time is as desired. \end{proof} \begin{thm}[Sparsifying Squares] \label{thm:sparsify_square} Let $\WWhat \in \mathbb{{R}}^{n \times n}_{\geq 0}$ have the same row and column sums, i.e.
$\WWhat \allones = \WWhat^\top \allones$, and let $\epsilon, p \in (0,1)$. Let $\mvar D = \mvar{diag}(\WWhat \allones)$, $\mathcal{L} = \mvar D - \WWhat$, and $\mathcal{M} = \mvar D - \WWhat \mvar D^{-1} \WWhat$. Both $\mathcal{L}$ and $\mathcal{M}$ are Eulerian Laplacians and in $\widetilde{O}(\mathrm{nnz}(\mathcal{L})\epsilon^{-2}\log(n/p))$ time the routine $\textsc{SparsifySquare}(\WWhat,p,\epsilon)$ computes $\widetilde{\WWhat}$ such that with probability at least $1 - p$ the matrix $\widetilde{\mathcal{M}} = \mvar D - \widetilde{\WWhat}$ is an $\epsilon$-sparsifier of $\mathcal{M}$. \end{thm} \begin{proof} Clearly, both $\WWhat$ and $\WWhat \mvar D^{-1} \WWhat$ are entrywise non-negative and therefore the off-diagonal entries of $\mathcal{L}$ and $\mathcal{M}$ are non-positive. Furthermore, since clearly $\WWhat \allones = \WWhat^\top \allones = \WWhat \mvar D^{-1} \WWhat \allones = [\WWhat \mvar D^{-1} \WWhat]^\top \allones = \mvar D \allones$ we have that $\mathcal{L} \allones = \mathcal{L}^\top \allones = \mathcal{M} \allones = \mathcal{M}^\top \allones = \allzeros$ and both $\mathcal{L}$ and $\mathcal{M}$ are Eulerian Laplacians. Next, for all $i \in [n]$ let $s_i = \mathrm{nnz}(\WWhat_{:,i}) + \mathrm{nnz}(\WWhat_{i,:})$ and \[ \mathcal{L}^{(i)} = \mvar{diag}(\WWhat_{i,:}) - \frac{1}{\mvar D_{i,i}} \WWhat_{:,i} \WWhat_{i,:}^\top\,. \] Note that since $\WWhat \allones = \WWhat^\top \allones$ and $\WWhat$ is entrywise non-negative we have that $\mvar D_{i,i} = \norm{\WWhat_{i,:}}_1 = \norm{\WWhat_{:,i}}_1$.
Consequently, by Lemma~\ref{lem:sparsify_product} and a union bound, with probability at least $1 - \frac{p}{2}$ it is the case that each $\widetilde{\mathcal{L}}^{(i)}$ is a directed Laplacian with the same in and out degrees as $\mathcal{L}^{(i)}$, $\mathrm{nnz}(\widetilde{\mathcal{L}}^{(i)}) = O(s_i \epsilon^{-2} \log(n s_i /p))$, and \begin{equation} \label{eq:square_subgraph_apx} \left\Vert{\mvar S_{\mathcal{L}^{(i)}}^{\dagger/2}(\mathcal{L}^{(i)} - \widetilde{\mathcal{L}}^{(i)}) \mvar S_{\mathcal{L}^{(i)}}^{\dagger/2}}\right\Vert_{2}\leq\epsilon/3\,{.} \end{equation} Furthermore, since clearly $\sum_{i} s_i = 2 \mathrm{nnz}(\WWhat)$ we have that \[ \mathrm{nnz}(\widehat{\mathcal{M}}) \leq \sum_{i \in [n]} \mathrm{nnz}(\widetilde{\mathcal{L}}^{(i)}) \leq \sum_{i \in [n]} s_i \epsilon^{-2} \log(n s_i / p) \leq O(\mathrm{nnz}(\WWhat) \epsilon^{-2} \log(n/p)) \] and therefore the total running time for computing $\widehat{\mathcal{M}}$ is $O(\mathrm{nnz}(\mathcal{L}) \epsilon^{-2} \log(n/p))$. Using Theorem~\ref{thm:order_n_sparsifier} to reason about the effect of $\textsc{SparsifyEulerian}$ then completes our running time analysis and union bounding yields that $\widetilde{\mathcal{M}}$ has the desired degrees and sparsity. Consequently, $\widetilde{\WWhat}$ also has the desired sparsity and since the degrees of the graph associated with $\mathcal{M}$ are at most the degrees of the graph associated with $\mathcal{L}$ we see that $\widetilde{\WWhat} \in \mathbb{{R}}^{n \times n}_{\geq 0}$. All that remains is to verify that $\widetilde{\mathcal{M}}$ is an $\epsilon$-approximation of $\mathcal{M}$.
By Lemma~\ref{lem:asym_strong_equiv}, \eqref{eq:square_subgraph_apx} implies that for all $\vec{x}, \vec{y}\neq0$ it is the case that \[ \vec{x}^{\top}(\widehat{\mathcal{M}} - \mathcal{M})\vec{y} =\sum_{i=1}^n\vec{x}^{\top}(\widetilde{\mathcal{L}}^{(i)}-\mathcal{L}^{(i)})\vec{y} \leq\sum_{i=1}^n\frac{\epsilon}{6} \left[\vec{x}^{\top}\mvar S_{\mathcal{L}^{(i)}}\vec{x} + \vec{y}^{\top}\mvar S_{\mathcal{L}^{(i)}}\vec{y}\right] =\frac{\epsilon}{6} \left[\vec{x}^{\top}\mvar U_{\mathcal{M}}\vec{x}+\vec{y}^{\top}\mvar U_{\mathcal{M}}\vec{y}\right] \] where in the last identity we used that $\sum_{i=1}^n \mvar S_{\mathcal{L}^{(i)}} = \mvar U_{\mathcal{M}}$, which follows from the fact that $\sum_{i \in [n]} \mathcal{L}^{(i)} = \mathcal{M}$. Applying Lemma~\ref{lem:asym_strong_equiv} again on the above bound, we obtain that $\widehat{\mathcal{M}}$ is an $\epsilon/3$ approximation of $\mathcal{M}$. Since Theorem~\ref{thm:order_n_sparsifier} implies that $\widetilde{\mathcal{M}}$ is an $\epsilon/3$-approximation of $\widehat{\mathcal{M}}$, invoking the transitivity bound, Lemma~\ref{lem:strong_transitivity}, yields that $\widetilde{\mathcal{M}}$ is an $(\epsilon/3)(2 + \epsilon/3) \leq \epsilon$-approximation of $\mathcal{M}$ as desired. \end{proof}
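As an end-to-end numerical illustration of Lemma~\ref{lem:bipartite_conductance} (our own check, not part of the proof), one can confirm on random strictly positive $\vec{x},\vec{y}$ with equal $\ell_1$ norms that the normalized symmetrization has smallest nonzero eigenvalue at least $1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
x = rng.random(n) + 0.05
y = rng.random(n) + 0.05
y *= x.sum() / y.sum()                 # ||x||_1 = ||y||_1 = r
r = x.sum()

# symmetrization S_L = (X + Y - (x y^T + y x^T)/r) / 2 from the lemma's proof
S = 0.5 * (np.diag(x + y) - (np.outer(x, y) + np.outer(y, x)) / r)
d = np.diag(S)                         # d_i = (x_i + y_i)/2 - x_i y_i / r > 0 here
N = S / np.sqrt(np.outer(d, d))        # normalized Laplacian D^{-1/2} S D^{-1/2}
eig = np.linalg.eigvalsh(N)
gap = eig[eig > 1e-8].min()            # smallest nonzero eigenvalue
assert gap >= 1 - 1e-8                 # matches the lemma's bound
```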
\section{Introduction} Although it is well known that many of the most spectacular examples of starburst galaxies are members of interacting systems, relatively little quantitative information is available on the overall effects of interactions on the star formation properties of galaxies. The degree of enhancement in the star formation rate varies over a large range, from galaxies which appear unaltered after the interaction, at least when observed at present, to galaxies with star formation bursts 10--100 times stronger than typically observed in isolated systems (Kennicutt, 1987; Schweitzer, 1990). This diversified scenario is a consequence of the large spread both in the phenomenological conditions of the interaction and in the dynamical and physical stage of the colliding galaxies. In this paper, the first of a series on interacting systems, we focus our attention on a special class of interacting objects, the ring galaxies. They are commonly known as spiral galaxies where a burst of star formation is triggered by a close encounter with an intruder. An almost normal penetration of the companion at radii up to $40\%$ of the disc radius (Lynds and Toomre, 1976; Appleton and James, 1990) generates their strange morphology, where a more or less sharp ring surrounds an off-centered nucleus or an empty region. Following Appleton and Struck-Marcell (1987), rings provide both an illustrative example of extended coherent starbursts and a laboratory for studying the effects of large--scale, nonlinear density waves on the interstellar gas. Ring galaxies are strong far--IR emitters. Their IR luminosity, $L_{FIR}$, as derived from IRAS colors, is even larger than for barred galaxies (De Jong et al. 1984), the most luminous class of the Shapley--Ames galaxies. Furthermore, some rings show $L_{FIR}$ even greater than that of the prototype starburst galaxy $M82$. As a general rule such galaxies emit as much infrared radiation as blue optical light.
Therefore the mean value of the blue absolute luminosity, $L_B$, of ring galaxies agrees well with that derived by Keel et al. (1985) for Arp galaxies in general, $L_B=1.4\times 10^{10}(100/H_0)^2\,L_\odot$, about 0.5 mag brighter than for non-interacting ones. In this paper we adopt a self--consistent approach to analyze the morphological and photometric properties of five of them through up-to-date $N$--body simulations and evolutionary population synthesis (EPS) models extending from UV to far-IR wavelengths. We use the results of our $N$--body simulations as inputs to our EPS models to derive some suggestions for modeling the burst and the luminosities involved in the optical and far-IR ranges. Models obtained with a more refined SPH code including an internally consistent luminosity evolution will be treated in a later paper. Galaxies were selected from the list of Appleton and Struck-Marcell (1987) and observed by one of us (C.B.) in BVRI. Far-IR (FIR) data come from the IRAS catalogue (Version 2). The plan of this paper is the following: observations and data reduction are described in Section 2; the corresponding simulations of collisions between stellar disks and intruders are in Section 3, while in Section 4 we discuss the treatment of the burst; some suggestions are also given about the postburst evolution of such systems. In Section 5 we present the results for each galaxy and in Section 6 our conclusions. \section{Observations} BVRI CCD observations were carried out at the Padova-Asiago Observatory using a GEC CCD with a pixel size of 22$\,\mu$m, corresponding to 0.28 arcsec. Exposure times were 50m in B, 30m in V, 20m both in R and in I. Typical seeing during the observing run was $1''.8$. The reduction of the images followed the standard procedure. The pedestal was removed by subtracting a mean value estimated in the overscan region of each frame.
Several flat field exposures, obtained at twilight in each color, were used to remove variations in pixel-to-pixel sensitivity. The sky background was determined in each frame by making a histogram of the gray levels of all pixels and locating the peak value by a gaussian fit of the low level part. Frames of selected stars in the M67 cluster, obtained on the same nights, were used to calibrate the instrumental photometry. The uncertainty in the zero points, estimated from the statistics of the calibration constants derived from different standard stars in the same frame, turns out to be a few hundredths of a magnitude in the four bands. However, we observed several standard fields on different nights and found that the measurements show rms $\simeq 0.08$. This increased uncertainty is likely due to the extinction corrections and changes in photometric conditions from night to night. All in all we have assumed that the calibration error is $\simeq 0.1$ mag. Figure 1 shows the R--band isophotes of our sample and Table 1 presents their BVRI magnitudes in the Johnson (1966) system. \section{N--body simulations} We performed numerical simulations of collisions between stellar disks embedded in static halos and suitable intruders. The code used is the Hernquist (1987) TREECODE, which employs a tree structure to calculate the gravitational forces; this is a useful tool to study systems endowed with strong deviations from spherical symmetry (Curir, Diaferio, De Felice, 1993). The disks have been relaxed by numerically solving the Laplace equation in cylindrical coordinates (Binney and Tremaine, 1987) and then immersed in a massive halo structure (King model, 1981). The system is evolved for several rotation periods to test its stability. The halo has an important heating effect on the disk during this relaxation phase. The companions used as intruders are massive points or King spheres of different radii.
A series of central collisions have been performed varying the angle of incidence and the velocity of the companion. For a more complete and detailed set of simulations in the space of the numerical parameters we refer to Curir and Filippi (1994). Here we present only the most suitable simulations in order to match the morphologies of our selected ring galaxies. From the pioneering numerical work of Lynds and Toomre (1976) many papers are now available in the literature on this subject. Recently, models have been produced using non-dissipative $N$--body codes by Luban-Lotan and Struck-Marcell (1989) and by Huang and Stewart (1988); furthermore, gas dynamics and dissipative phenomena have been taken into account by Gerber, Lamb and Balsara (1992), Weil and Hernquist (1993), and Struck-Marcell and Higdon (1993), who presented very refined simulations devoted to providing good models for specific astronomical objects. In this paper we will use pure $N$--body simulations to gain further insight into the role played by dynamical friction during the intrusion. In the following we briefly summarize the main points of our model and then present our results. \subsection{The numerical methods} In the system of units employed, $G=1$ ($G$ is the gravitational constant), the length scale is 0.1, and the masses of the disk and of the intruder are equal to 1. Translating these values into physical units, the mass unit is $5 \times 10^{10}\,M_{\odot}$, the distance unit is $20$ kpc and the time unit is $158$ Myr. A fixed timestep $\delta t$ of $0.01$ (time units) is used to update the gravitational forces, which are calculated including quadrupole moments of the gravitational potential and using a tolerance parameter $\theta = 0.8$. The target galaxy model consists of two components: a spherical halo and an exponential disk. The mass ratio of these components is $4:1$.
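The unit conversion quoted above can be checked by dimensional analysis. The back-of-envelope sketch below (our own, with standard constants) computes the natural time unit $\sqrt{L^3/(GM)}$ implied by $G=1$, a mass unit of $5\times10^{10}\,M_{\odot}$ and a distance unit of 20 kpc, recovering a value of order $10^2$ Myr, consistent in order of magnitude with the 158 Myr quoted (the precise figure depends on the adopted conventions and constants).

```python
# physical constants (SI)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30      # solar mass [kg]
KPC = 3.086e19       # kiloparsec [m]
MYR = 3.156e13       # megayear [s]

M = 5e10 * MSUN      # mass unit from the text
L = 20 * KPC         # distance unit from the text
t_unit = (L**3 / (G * M)) ** 0.5 / MYR   # natural time unit with G = 1, in Myr

assert 100 < t_unit < 300   # order 10^2 Myr, comparable to the quoted 158 Myr
```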
The model mass points, tracers of the true mass distribution, interact through the pseudo-keplerian softened potential $$ \Phi={{-K}\over{\left(r^2+\varepsilon^2\right)^{1\over2}}} $$ where $\varepsilon$ is the softening parameter adopted to avoid extremely close encounters. The softening length for the gravitational force depends on the type of particle: the stellar particles employ a softening $\varepsilon_d=0.015$ while halo particles employ $\varepsilon_h=0.038$. The disk and the halo are modeled as $N$--body systems with $3192$ and $5000$ particles respectively; these numbers are large enough to adequately describe their kinematical properties (see Curir and Filippi, 1994 for a detailed discussion of this subject). The halo particles are assumed to be dark matter particles; the higher value of the softening length accounts for the weakly interacting nature supposed for these particles. The spatial distribution of stars in the disk follows the planar surface density law $$ \rho(r)={\rho_0}{e^{-{{r}\over{r_0}}}} $$ where $r_0$ is the disk scale length, equal to 2 kpc. To obtain each particle position vector according to the assumed mass density we use the ``rejection method'' for generating random deviates whose distribution function is known (Press, Flannery, Teukolsky, Vetterling 1990). Azimuthal angles are chosen randomly in the interval $[0,2\pi]$. The vertical coordinate is drawn from a gaussian distribution with a dispersion equal to $1\%$ of the disk scale length. We assign circular velocities analytically to disk stars, through a particular expression of the potential obtained with the aid of the Jeans equations. Radial, azimuthal and vertical velocity dispersions are locally imposed on disk particles; the dispersions are monitored through a Toomre-like parameter $Q$ (Toomre, 1978), a true ``stability thermometer'' for the exponential disk (in our simulations $Q=1.6$). The $Q$ parameter is an input parameter for our model.
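The rejection sampling of the disk radii can be sketched as follows. Note that for a planar density $\rho(r)\propto e^{-r/r_0}$ the probability of drawing a radius picks up the area element, $p(r)\propto r\,e^{-r/r_0}$. The outer truncation radius and the uniform envelope are our illustrative choices.

```python
import numpy as np

def sample_disk_radii(n, r0=2.0, rmax=10.0, seed=1):
    """Draw n radii from p(r) ∝ r exp(-r/r0) (exponential disk with the
    2-D area element included) by the rejection method, plus uniform
    azimuthal angles in [0, 2π)."""
    rng = np.random.default_rng(seed)
    pmax = r0 * np.exp(-1.0)          # maximum of r exp(-r/r0), reached at r = r0
    radii = np.empty(n)
    filled = 0
    while filled < n:
        r = rng.uniform(0.0, rmax, size=n - filled)   # candidate radii
        u = rng.uniform(0.0, pmax, size=n - filled)   # envelope deviates
        keep = r * np.exp(-r / r0) > u                # accept with prob p(r)/pmax
        m = int(keep.sum())
        radii[filled:filled + m] = r[keep]
        filled += m
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return radii, phi
```

The rejection method is attractive here precisely because the target density is known analytically while its inverse cumulative distribution is not elementary.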
The halo is a King model with the characteristic distribution function: $$ f_K(E)=\cases {{\rho {(2\pi {\sigma}^2)}^{-{{3}\over{2}}} (e^{{{E}\over{\sigma^2}}}-1)} &when $E>0$; \cr 0 &when $E\le 0$, \cr} $$ where $\rho$ is the density, $\sigma$ the velocity dispersion and $E$ the total energy of the stars. King models are dynamically stable systems. To obtain the final galaxy model we relax the two components together, superimposing the gravitational potential generated by the exponential disk on the potential of the spherical model, which has a radius twice that of the disk. The two components have approximately the same mass within three disk scale lengths. The resulting rotation curve of the disk is substantially flat and reproduces quite faithfully the observed curves of a large fraction of spiral galaxies (Rubin et al. 1980; see, e.g., Fig. 5a, curve labeled 1). The energy is conserved better than $0.5 \%$ and the angular momentum better than $0.01 \%$ over the whole run for each model. \subsection{Results} We performed several simulations using different intruders and different impact velocities, the impact velocity being defined as the initial relative velocity between the two systems placed 60 kpc apart. The mass of the intruder is equal to the mass of the disk target (Curir and Filippi, 1994). In our space of parameters we singled out two different regimes for the ring formation. In the first, the impact velocity is below the escape velocity of the intruder (i.e. $\le 200\, km\, s^{-1}$) and the dynamical friction dominates the interaction. As the intruder oscillates crossing the disk, it loses a considerable part of its mass and is finally trapped in the system. The ring structure lives for about $190$ Myr, depending weakly on the radius and on the velocity of the companion. The ring appears while the intruder is very near to the disk, between $0.$ and $1.5$ in our numerical units (i.e. up to 30 kpc).
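As a hedged illustration, the halo distribution function above can be evaluated directly; the unit values of $\rho$ and $\sigma$ below are placeholders, not the parameters of the actual halo model.

```python
import numpy as np

def king_df(E, sigma=1.0, rho=1.0):
    """King distribution function f_K(E) as written above (a lowered
    isothermal form), nonzero only for bound stars, E > 0 in this
    sign convention."""
    E = np.asarray(E, dtype=float)
    norm = rho * (2.0 * np.pi * sigma**2) ** -1.5
    # np.expm1(x) = exp(x) - 1, accurate also for small arguments
    return np.where(E > 0.0, norm * np.expm1(E / sigma**2), 0.0)
```

The $-1$ term truncates the isothermal exponential so that $f_K$ goes to zero continuously at the escape energy, which is what makes the model spatially finite and hence a convenient stable halo.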
Figs. 2a and 2b show an edge--on and a face--on view, respectively, of the predicted evolution for a typical case in this regime, which is characterized by a range of velocities from $\approx 100\, km\, s^{-1}$ to $200\,km\, s^{-1}$. In the second regime the dynamical friction is much less active because the impact velocity is higher than the escape velocity, so the ring is formed when the intruder is already far from the disk, at a distance between $1.5$ (30 kpc) and $3$ (60 kpc). The intruder emerges from the collision almost unperturbed in shape but enlarged in volume (on average the radius is almost twice the original one). The ring structure lives for about $112$ Myr when the radius of the intruder is equal to 0.28 times that of the target disk. As far as these high-impact-velocity cases are concerned, Fig. 3 shows the time evolution of the morphology related to the formation of a ring with a nucleus (the projected intruder), whereas Fig. 4 describes an empty ring. In Fig. 5 the behaviour of the rotation velocity (panel a) and that of its components in the polar (panel b) and in the radial (panel c) directions, for the same simulation as in Fig. 3, is presented; in particular, curve number 4 of panel b) shows that the disk relaxes to a relatively high value of the z component of the velocity dispersion after the merger. By performing different simulations with smaller and smaller companions we obtain practically the same configurations, provided we scale the impact velocity linearly with the radius of the intruder. However, the time of evolution, $\Delta t$, lengthens. In particular, a simple material point behaves in this regime like a King sphere (of unit radius), giving rise to a ring living $350$ Myr. So in this regime we can define the stage of the evolution with a parameter $\tau$, given by the ratio $t/\Delta t$, where $t$ is measured from the beginning of the interaction.
We point out that the two regimes previously outlined are not the only two possibilities for the formation of a ring galaxy. There are several other possibilities which we did not analyze, like the disruption of a small companion, which may generate an empty ring, or the presence of a nucleus inside the ring simply because the target retains its bulge, which we did not include in our simulations; Curir and Filippi (1994) also explored the possibility of a ``spontaneous'' ring formation. In particular, these regimes, driven by the dynamical friction, are simply related to the mechanism of the intrusion and provide good numerical fits for our sample of ring galaxies. The related density waves have quite different features (Fig. 6). In the first regime the wave appears as a mild perturbation of a decreasing exponential density distribution, and its amplitude is much smaller than in the second regime, where the wave expands starting from a strong central depression. In both cases, well after the collision, the system settles into an exponential disk thicker and larger than the original one. \section{Photometric properties from UV to far--IR: their evolution} Our EPS models, accounting for the dust effects, provide the chemical and photometric evolution of a galaxy in a self-consistent way from the ultraviolet up to 1 mm. The synthetic SED incorporates stellar emission, internal extinction and re-emission by dust. The stellar contribution, including all evolutionary phases of stars born with different metallicities, extends as far as 25$\,\mu$m. Dust includes a cold component, heated by the general radiation field, and a warm component associated with HII regions. Observations in the short wavelength range (25--60$\,\mu$m) and in the long one (over 60$\,\mu$m) allow us to disentangle their luminosities and temperatures.
Emission from polycyclic aromatic hydrocarbon molecules (PAH) and from circumstellar dust shells is also taken into account; the bulk of their emission falls in the 5--25$\,\mu$m spectral range, where the pure stellar contribution rapidly fades. This model has been successfully applied to spiral galaxies, with particular attention to our own galaxy (Mazzei, Xu and De Zotti, 1992, hereafter referred to as MXD92), and to early-type galaxies (Mazzei and De Zotti, 1994a; Mazzei, De Zotti and Xu, 1994). In the Appendix we briefly summarize the main points of this model. We recall that the star formation rate (SFR) and the initial mass function (IMF), together with its lower and upper mass limits, are important model parameters. As discussed in MXD92, the metallicity of the system depends on their values and, as a consequence, so does the far--IR output. Since we are modeling in a self-consistent way both the chemical and the luminosity evolution, given the IMF with its mass limits, the metal enrichment and the residual gas fraction, as well as the total far--IR luminosity, depend only on the assumed SFR. The SFR is slowly decreasing before the interaction since, following the discussion of Sect. 3, a disk-like configuration represents the unperturbed state well. However, a central collision with a spherical intruder may trigger a strong burst of star formation (Kennicutt et al. 1987; Wright et al. 1988) lasting some $10^8$ years (see Sect. 3). Therefore we modify the normal disk evolution (see also MXD92) by superimposing a strong burst of star formation at an age of $12\,$Gyr. In this paper we present the first effort to analyze the effect of a burst on the overall SED of galaxies. We follow a general approach without any change of the IMF or of its lower and upper mass limits during the burst; the burst is defined by its length and intensity, the latter being the ratio between the value of the SFR at its beginning and that before its onset. Fig.
7 shows the evolution of the gas fraction, metallicity and optical depth (see Appendix) for models computed with different $m_l$ values and different burst lengths; in Fig. 8 their color evolution is presented. We emphasize that the burst affects only the UV--near-IR SED, whereas the far--IR output, depending on the optical depth, is a strong function of the $m_l$ value alone. The shape of the far--IR SED depends on three parameters, $I_0$, $I_w$ and $R_{w/c}$, which represent the maximum value of the interstellar radiation field, the average energy density inside HII regions and the warm-to-cold luminosity ratio, respectively (see Appendix for more details). These are completely defined by fitting the observed IRAS colors. Given the strong internal consistency of our model, there is not a large range of possible values for these far--IR parameters. As we will discuss in more detail, we suggest two different combinations of such parameters able to match the {\it overall} SED of a galaxy, consistent with our $N$--body simulations. In the following, in fact, we discuss the treatment of the burst and post-burst phases as derived from an approach self-consistent with the $N$--body simulations. These simulations provide the length of the burst and the spatial distribution of the matter inside our galaxies. The first allows us to reduce the number of free parameters needed to describe the burst, whereas the second gives us some information on the physical conditions which affect the shape of the far--IR SED. \subsection{The burst phase.} As far as optical and near-IR data only are concerned, the parameters defining the burst, its length and intensity, are not independent but inversely correlated. UV data, coupled with near--IR observations, could in principle disentangle their effects. In fact, the stronger the burst, the larger the number of massive stars, i.e. the larger the UV flux, provided by the burst.
Thus blue UV-V and V-K colors could suggest stronger and younger bursts, whereas red colors suggest weaker and older stages, since red supergiant and AGB stars will appear $\approx 10^8\,yr$ after the burst's beginning. Nevertheless, the extinction smooths away these differences, making the galaxies appear older and the SFR lower. However, the energy absorbed by dust in the short wavelength region of the galaxy spectrum must be re-emitted in the long wavelength range. So in principle the observed overall SED contains all the information necessary to understand the behaviour of the SFR and to define the model parameters. Ring galaxies are very faint objects, so UV observations are extremely rare; only VV787 has been observed (Schultz et al. 1991) and its data suffer from aperture correction problems. It is well known, however, that the far--IR output of starburst galaxies is higher than that of normal galaxies (Soifer et al. 1987; Joseph, 1990). Ring galaxies also match these expectations (Appleton and Struck-Marcell, 1987): some of them have in fact been detected by the IRAS satellite. Our model, by means of its large spectral coverage, provides a useful tool for understanding the available data. Taking into account the previous considerations, we use the time spent in the interaction suggested by the $N$--body simulations as an input for our EPS model: the lifetime of the ring defines the length of the burst. Results do not depend strongly on the behaviour of the SFR during the burst if we adopt an SFR either constant or quickly decreasing with the same dependence on the mass of gas as before the onset of the burst. Given the burst parameters, the far--IR output of the system strongly depends on the lower mass limit of the Salpeter IMF, $m_l$.
The higher the value of $m_l$, the larger the number of massive stars formed and hence the larger the amount of heavy elements provided; this enhances the extinction effects simply because the optical depth of the system is proportional to the metallicity of the gas (see Fig. 7). We find that $0.01 \le m_l\, (m\odot) \le 0.20$ accounts for the overall properties of our sample. Most ring galaxies are characterized by $L_{FIR}/L_B$ ratios larger than those of normal galaxies (Appleton and Struck-Marcell, 1987); thus, in our approach, {\it the largest $L_{FIR}/L_B$ ratios suggest the highest $m_l$ values}. It is not surprising that the far--IR SED of a ring may be different from that of a ``normal'' spiral, since the interaction greatly perturbs the initial distributions of gas and dust. We attempt an approach consistent with the results of the $N$--body simulations (cf. Sect. 3.2) also in modeling the far--IR emission, in particular as far as the cold dust distribution is concerned. The luminosity of such a component, in fact, depends on the interstellar radiation field distribution through the parameter $I_0$, i.e. the central value of its energy density (see Appendix for more details). As discussed in the previous section, strong collisions provide high compression and concentration of matter in the density wave. At the same time a large depletion of gas and stars is produced in the central regions. According to Leisawitz and Hauser (1988), a considerable fraction of the luminosity of OB stars can escape from Galactic HII regions and may contribute to the diffuse energy density, depending both on the efficiency of the star formation and on the optical depth in the ring (i.e. on the local gas properties in the star forming regions). So, we expect to match the IRAS colors using two very different values of the maximum radiation field, $I_{r_0}$ or $I_{0}$.
We define $I_{r_0}$ as the maximum amplitude of a diffuse radiation field centered on the density wave instead of on the disk ($I_0$). Following the previous discussion, we expect values of $I_{r_0}$ larger than $I_0$. These possibilities, i.e. the ``$I_{r_0}$'' and the ``$I_0$'' cases, correspond of course to different physical conditions inside the galaxies; in particular, different amounts of cold dust are implied, lower for $I_{r_0}$ than for $I_{0}$. Given the strong internal consistency of our model, different combinations of the far--IR parameters with the same $m_l$ value are ruled out. Indeed, the lower $I_0$ or $I_{r_0}$, the further the far--IR distribution extends to long wavelengths. Therefore, for an observed $L_{100}/L_B$ ratio, decreasing $I_0$ or $I_{r_0}$, for example, requires a higher far--IR output to match the same IRAS colors, and thus higher $m_l$ values. In the following section we will present in more detail the results of models describing two extreme physical situations compatible both with the observations and with our simulations, i.e. the warmest $I_{r_0}$ value, corresponding to the lowest $m_l$, and the coldest $I_0$ case, corresponding to the largest $m_l$ (see Tables 2 and 3). Models computed with $m_l$ values inside this range could also match the data; in particular, raising $m_l$ we have to lower $I_{r_0}$, increase the $R_{w/c}$ ratio and perhaps decrease the $I_w$ value. As suggested by Fig. 9, {\it submillimeter observations could provide a useful test to discriminate} between these possibilities, which leave the global far--IR luminosity and the dust content of such galaxies in suspense. In Fig. 9 the overall synthetic SEDs for our sample galaxies are presented; heavy lines show the fits obtained in the ``$I_{r_0}$'' case (see Table 2), light lines those in the opposite, ``$I_0$'' case (Table 3); for comparison, MXD92 found for the maximum radiation field a value of $7I_{local}$.
\subsection{The postburst phase.} To derive the overall synthetic SED, some assumption concerning the shape of the far--IR SED (i.e. the values of the far--IR parameters) is needed, since at this stage no observational constraints on the far--IR appearance of postburst galaxies, which are of course faint FIR emitters, are available. Therefore we extract some useful information from our $N$--body simulations. A more realistic picture will be available in the near future with the help both of more sensitive far--IR measurements, provided for example by the ISO satellite, and of high resolution images (HST) of galaxies. These would be compared with the results of dynamical ($N$--body) and hydrodynamical (SPH) simulations to identify the post--burst cases. As discussed in Sect. 3.2, the final effect of the interaction is to decrease the diffuse energy density with respect to its initial unperturbed value, since the system rearranges into a disk thicker and larger than the unperturbed one. We come to the same conclusion with an independent approach, i.e. taking into account that the number of OB stars, which may strongly contribute to the diffuse radiation field, in particular in the ``$I_{r_0}$'' case, rapidly fades after the burst. Thus we can estimate the far--IR emission of the cold dust component, long after the burst, simply by assuming $I_{r_0}$ proportional to some power, $\alpha$, of the number of OB stars. Models suggest $\alpha \approx 0.3-0.4$. The same recipe cannot apply, of course, in the ``$I_0$'' case, since OB stars do not provide a substantial feeding to the diffuse radiation field, which is in fact much lower than in ``normal'' disks also during the burst. The luminosity of the warm emission can be derived assuming the same $R_{w/c}$ ratio as for nearby spirals ($R_{w/c}=0.43$, Xu and De~Zotti, 1989), $I_w$ being the same as during the burst.
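The post-burst estimate just described, $I_{r_0}$ scaled with a power $\alpha$ of the number of OB stars, amounts to a one-line helper (a hedged sketch; the function name and the OB-star counts are placeholders, not quantities tabulated in the text):

```python
def postburst_radiation_field(i_r0_burst, n_ob_post, n_ob_burst, alpha=0.35):
    """Scale the maximum diffuse radiation field after the burst as
    I_r0 ∝ N_OB^alpha, with alpha ≈ 0.3-0.4 as suggested by the models."""
    return i_r0_burst * (n_ob_post / n_ob_burst) ** alpha
```

Because $\alpha < 1$, even a hundredfold drop in the OB population dims the cold-dust heating field by less than a factor of five.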
Figs. 10(a-e) compare the SEDs of our galaxy sample during the burst (heavy line) and $2\,$Gyr after the burst (light line), i.e. at an age of $14\,$Gyr. $L_{FIR}/L_B$ ratios as large as 10 times the galactic one (of about $2.5$, MXD92) can arise also at $14\,$Gyr, depending on the $m_l$ value needed to fit the observed data. The bolometric luminosity of the systems is approximately 10 times lower than during the burst, and the B luminosity drops by a factor of approximately 10--30. Thus these systems will appear as very faint red galaxies. The past interaction, which exhausted a significant amount of the residual gas ($35\%-60\%$ for our sample), is reflected in the optical region of the spectrum through very red $B-V$ colors, redder than the average ones of normal disk galaxies. At $14\,$Gyr, models suggest $0.85\le B-V \le 1$, where the reddest colors correspond to the largest $m_l$ values (see also Fig. 8). At the same age $V-K$ is about $3.5\,$mag (as for an unperturbed disk), $4\,$mag and $4.4\,$mag for $m_l=0.01$, $0.05$ and $0.1\,m\odot$ respectively. Only $NGC\,2793$ deviates from this behaviour. Its colors are not strongly affected, since the burst consumed a low fraction of the residual gas. Therefore this appears as a very blue galaxy, its colors being typical of an Irr also long after the burst. \section{Comparison with observations} In this section we attempt a careful description of the morphological and photometric properties of each galaxy in our sample. The observed isophotes (Fig. 1) will be compared with suitable isodensity contour levels obtained from $N$-body simulations (Fig. 11). The particle positions of the numerical dump chosen to represent the galaxy are projected on a suitable plane and then the isodensity contours are extracted. The time spent in the interaction, as described in Section 3, is the length of the burst for our EPS model.
This length is practically constant in the first regime (0.2 Gyr), whereas it depends on the radius and on the velocity of the intruder in the second one, ranging between 0.11 and 0.35 Gyr. In this case we refer our fits to the parameter $\tau$ which sets the stage of the burst evolution (see Sect. 3.2); it is not surprising that such a stage may be different for our morphological and photometric fits, although in most cases they agree well. We derive some indications about the strength of the collision or the intensity of the burst (as defined in Sect. 4), the total far--IR emission, $L_{FIR_{tot}}$, the absolute bolometric luminosity, $L_{bol}$, and the residual amount of gas. We point out that, given the internal consistency of the UV--optical and far--IR SEDs, widely discussed both in Sect. 4 and in the Appendix, only two combinations of $m_l$ and far--IR parameters for each galaxy are discussed. These correspond to the extreme ``$I_{r_0}$'' and ``$I_0$'' cases respectively (see Tables 2 and 3). Submillimeter observations are strongly recommended to reduce the uncertainty further. For comparison we summarize here some useful values derived for our own galaxy (MXD92): $L_{FIR_{tot}}/L_B=2.6$; the corresponding ratio using $L_{FIR}$, i.e. the far--IR luminosity computed from $42.5$ to $122.5\, \mu m$, is $L_{FIR}/L_B\simeq 0.9$; and $L_{FIR_{tot}}/L_{bol}\simeq 0.3$. We used $H_0=50\,km\,s^{-1}\,Mpc^{-1}$. \subsection{VV 789 or I Zw 45} The isodensity contour levels are represented in Fig. 11a (for comparison see Fig. 1). The simulation describes a collision of a disk target lying in the xy plane with a King sphere having a radius 0.28 times that of the disk. The velocity of the intruder is $1140\,km\,s^{-1}$, inclined by $30\deg$ with respect to the normal, z, to the disk plane. The stage of the ring evolution corresponds to $\tau=0.20$. The overall SED of this galaxy (Fig. 9(a)) requires a strong burst of star formation with an intensity of 60.
Models have been computed with a lower mass limit of $0.01$ and $0.05\ m\odot$ respectively for the $I_{r_0}$ (heavy line) and $I_0$ (light line) cases. Our photometric fits correspond to different stages of the burst evolution, $\tau=$0.75 and 0.28 respectively. The residual fractions of gas are $0.25$ and $0.54$, corresponding to (1.56--4.07)$\, \times 10^{10}\ m\odot$ respectively. For comparison, Theys and Spiegel (1976) derived a value of $4.56\times 10^9\ m\odot$ for the mass of neutral hydrogen alone. The system appears as a very blue luminous galaxy with $L_B\,(10^{10}\ L\odot)\simeq 4.5$ (Appleton and Struck--Marcell, 1989). From the previous models we derive $L_{FIR_{tot}}/L_{bol}=$0.50 and 0.68, which correspond to $L_{FIR}/L_{bol}=$0.44 and 0.21. Moreover, we predict $L_{FIR}/L_B=2.01$ and 1.59 respectively; the latter ratios increase to 2.3 and 5.24 taking into account the total far--IR luminosity; thus the long wavelength range could include up to $70\%$ of the far--IR luminosity of this galaxy. \medskip \subsection{VV 330 or UGC 5600} The parameters describing the intrusion are the same as for VV 789, but the collision velocity is orthogonal to the plane of the disk. In Fig. 11b we present the morphology of the simulated system (for comparison see Fig. 1) taken at a stage $\tau=0.40$ of the ring evolution. Two extreme models, computed with different lower mass limits of the IMF, $0.05$ and $0.1\ m\odot$ respectively, but the same intensity of the burst, 60, match quite well the optical SED and the IRAS colors of this galaxy (Fig. 9b). Our fits correspond to similar stages of the interaction, $\tau=0.43$ and $0.29$ respectively, where the residual fractions of gas are $0.47$ and $0.63$ respectively. We derive a blue luminosity, $L_B=0.56\times 10^{10}\ L\odot$, like normal galaxies (De Jong et al. 1984); however, a large fraction of the bolometric luminosity comes out at the longest wavelengths: $L_{FIR_{tot}}/L_{bol}=$0.70--0.68 respectively.
We find $L_{FIR}/L_B=$3.12 and 3.04, which rise up to 6.0 and 7.6 including the total far--IR luminosity. \medskip \subsection{VV 787 or Arp 147} In a recent paper, Gerber et al. (1992) showed that this galaxy is the result of an off--center high velocity collision between an elliptical and a spiral of the same mass, the intrusion being perpendicular to the disk plane. This corresponds to a collision with an impact velocity $v\simeq 450\,km\,s^{-1}$, which agrees well with our second velocity regime. Their isophote fit suggests $\tau \simeq 0.66$ from the beginning. For the same stage, models matching quite well the optical SED and the IRAS colors of this galaxy (Fig. 9c) correspond to $m_l=0.05$ and $0.1\ m\odot$ with intensities of 60 and 100 respectively. The blue luminosity of this galaxy is about $1.4 \times 10^{10}\, L\odot$ (Appleton and Struck--Marcell, 1989). We derive large $L_{FIR_{tot}}/L_{bol}$ ratios, 0.71--0.78 respectively, $L_{FIR}/L_B$ ratios of 2.70 and 4.50, increasing by the same factor, 2.5, when including the global far--IR emission, and residual fractions of gas of 0.54 and 0.43 respectively. \medskip \subsection{VV 32 or Arp 148} The impact velocity used for this simulation is $160\,km\,s^{-1}$. Thus this ring is the only one in our sample described by the first velocity regime (see Sect. 3.2). This result confirms previous optical and mid-infrared analyses (Joy and Harvey, 1987) suggesting that this galaxy is coalescing. In Fig. 2 we have shown the time evolution of its morphology. Fig. 11c presents the corresponding isodensity curves (for comparison see Fig. 1). The BVRI region of the SED of this galaxy has been analyzed by Bonoli (1987) and we refer to that paper for a detailed discussion. Models computed with the same burst intensity, $i=60$, and different mass limits, $0.1\,m\odot$ and $0.2\,m\odot$, match well the overall SED of this galaxy (Fig. 9d) for a warm and a cold diffuse radiation field respectively (see Tables 2 and 3).
The $L_{FIR}/L_B$ ratio suggested by these models, $6$ and $7$ respectively, one of the largest in the sample of ring galaxies of Appleton and Struck-Marcell (1987), becomes $10$ and $16$ including the total far--IR luminosity. All models predict that dust absorbs about $80\%$ of the bolometric luminosity of the galaxy. Even if the burst consumes $75\%$ and $40\%$ of the residual mass of gas, the system retains a substantial gas fraction, $\approx 0.4$, in both cases, which correspond to different stages of the burst evolution, 0.15 and 0.05 Gyr from its beginning respectively. \subsection{$NGC\,2793$} Its morphology is obtained with a head--on collision of a material point. The impact velocity is $600\,km\,s^{-1}$. The simulated isodensity contours of the system seen face--on are represented in Fig. 11d at $\tau=0.17$. Our results suggest a very early stage in the burst development, in agreement with the short radius observed for this ring (Appleton and Struck-Marcell, 1989). Fig. 9e, heavy line, shows that the overall IRAS SED of this galaxy is matched well by a very high radiation field, $I_{r_0}=45I_{loc}$ (see Table 2), as before the onset of the active phase of star formation in the ring: stars compressed in the ring increase the diffuse energy density, then starting the new process of star formation triggered by shocks. Models suggest: $L_{FIR}/L_{FIR_{tot}}=0.5$ and $L_{FIR}/L_{bol}=0.22$. At the onset of the burst, whose intensity is 10, a low central radiation field and a large amount of warm dust, about 3 times larger than in our own galaxy, are required to match the data (see Table 3). The far-IR output is dominated by the warm component, which entails $40\%$ of the total IR luminosity of the galaxy instead of $17\%$ as in our own galaxy; moreover, although the ratio $L_{FIR}/L_{FIR_{tot}}$ is practically the same as in our own galaxy, $0.44$ and $0.39$ respectively, the warm emission encompasses about $65\%$ of $L_{FIR}$ instead of $14\%$.
The $L_{FIR}/L_B$ ratio is approximately double the galactic one, and $L_{FIR_{tot}}/L_B$ is larger by a factor of 1.5. However, $L_{FIR_{tot}}/L_{bol}=0.4$, like the Galaxy. The lower mass limit for the best fit model is $0.01\ m\odot$. We derive a large fraction of residual gas, $0.65$, and an absolute blue luminosity $L_B=3.26\times 10^9\ L\odot$, the lowest in our sample. Our fit implies a very early stage in the burst evolution, corresponding to $\tau=0.05$ and to a residual gas consumption of only $15\%$. The burst does not affect the colors of this system, which keeps the typical colors of an Irr during its whole evolution. \medskip \section{Conclusions} We investigate the evolution of ring galaxies in a self-consistent way using up-to-date $N$--body simulations and EPS models providing the overall SED, from 0.06$\,\mu$m up to 1 mm. Results are compared with optical and far--IR data of a sample of 5 ring galaxies selected from the Appleton and Struck-Marcell (1987) list. Data from B to I bands are derived from our CCD photometry; far--IR data come from the IRAS catalogue (Version 2). Although in principle a ring galaxy can arise from a large number of situations (Curir and Filippi, 1994 and references therein), the morphologies of our sample are well reproduced by central collisions between a disk galaxy and a spherical intruder. We singled out two different behaviours for the ring formation, according to whether the collision velocity is below or above the escape velocity. In the former case the ring seems to be a transient structure, ending in a complete merger. VV32 is an example of this regime, led by the effect of dynamical friction and lasting about $0.2\, Gyr$. Stronger collisions, instead, producing an empty ring (i.e. VV787), require higher impact velocities. In this second regime the length of the interaction depends on the radius of the colliding galaxy: the smaller the radius, the longer the interaction providing a ring configuration.
When the burst turns off, $0.1-0.35\,$Gyr after its beginning, a new disk-like configuration arises owing to the rearrangement of the galaxy. This will appear as a disk with a slightly larger radius and a higher thickness. These simulations give insight into the modeling of the photometric properties of these objects, providing the length of the burst and the distribution of the matter inside our galaxies. The former enables us to reduce the number of free parameters needed to describe the burst, whereas the latter suggests two different extreme values of the radiation field inside these galaxies, both matching their IRAS colors well. It is well known, indeed, that rings are powerful far--IR emitters (Appleton and Struck--Marcell, 1987); furthermore, their dust temperature, as suggested by their $f_{60}/f_{100}$ ratio, is also warmer than that of isolated disk galaxies. We find $L_{FIR}/L_{bol}$ ratios exceeding those of normal galaxies by at least a factor of two, a value of 0.7 being typical. As suggested by our EPS models, the lower mass limit of the IMF is the most important parameter driving the far--IR output of these systems. Values up to 20 times greater than that derived to match the overall SED of our own galaxy ($m_l=0.01$, MXD92) are needed. From our synthetic SEDs we derive $L_{FIR_{tot}} \ge 2 L_{FIR}$, with $L_{FIR}$ computed between 42.5 and 122.5$\,\mu$m, and a warm luminosity which entails up to one half of the whole far--IR emission. Unfortunately, their global far--IR luminosity is uncertain owing to the lack of submillimeter data. Observations in this spectral domain will provide useful information on the distribution and the amount of dust in such galaxies. Data in the mid--far IR would also provide important hints on the nature of the dust in ring galaxies. Observations with the ISO satellite, whose launch is scheduled for September 19, 1995, will be performed.
ISO instrumentation, with its sensitivity higher than that of IRAS and its larger spectral coverage, will allow us to better define the observed SEDs from 7 up to 200$\,\mu$m. Observations in the short wavelength range will provide useful information on PAH and warm dust contributions; those in the long wavelength range, in particular at 200$\,\mu$m, will enable us to disentangle models with different maximum radiation fields, solving the puzzle of the amount and distribution of cold dust as well as of the true $L_{FIR}$ luminosity. After the burst the star formation rate lowers to a quasi-constant value, driving a very slow evolution of the system. In time the bolometric luminosity of the galaxy will be dominated by a number of red giant stars larger than in normal galaxies. Systems which have experienced a ring phase characterized by a strong burst of star formation would keep a larger $L_{FIR}/L_B$ ratio than normal disk galaxies as a consequence of the past interaction. The predicted SED will reflect this situation, showing very red optical near--IR colors and a slightly cooler far--IR emission following the re-arrangement of the disk. Systems which have experienced a burst of low intensity, like $NGC\,2793$, will appear as very blue low brightness galaxies. \section{Appendix} In the following we summarize the fundamental assumptions of the model which allows us to derive the SED of galaxies over the whole frequency range, from UV ($\lambda=0.06$ $\mu\,m$) to far-IR ($\lambda=1000$ $\mu\,m$) (see MXD92 for more details). \medskip \subsection {The chemical evolution model} We have adopted a Schmidt (1959) parametrization, wherein the star--formation rate (SFR), $\psi (t)$, is proportional to some power of the fractional mass of gas in the galaxy, $f_g = m_{gas}/m_{gal}$, assumed to be, initially, unity ($m_{gal}=10^{11}~m\odot$): $$\psi (t) = \psi_0 f_g^n\, m\odot\,yr^{-1}.
\eqno(1)$$ The initial mass function (IMF), $\phi (m)$, has a Salpeter (1955) form: $$\phi (m) dm = A \left({m\over m\odot}\right)^{-2.35} d\left({m\over m\odot}\right)\qquad m_l \leq m \leq m_u, \eqno(2)$$ with $m_u=100\,m\odot$ and $m_l\le 0.2\,m\odot$ (see text). The influence of a different choice of the power-law index $n$ for the dependence of the SFR on the gas density has been discussed by Mazzei (1988). The effects of different choices for the IMF and its lower mass limit, $m_l$, are analysed in MXD92 for late--type systems and in Mazzei et al. (1994) for early--type galaxies. The general conclusion is that the overall evolution of late--type systems depends only weakly on $n$. We put $n=1$ and $\psi_0=4\, m\odot/yr$. The galaxy is assumed to be a closed system with gas and stars well mixed and uniformly distributed. However, we do not assume that recycling is instantaneous, i.e. stellar lifetimes are taken into account. The variations with galactic age of the fractional gas mass $f_g(t)$ [and, through eq. (1), of the SFR, $\psi(t)$] and of the gas metallicity $Z_g(t)$ are obtained by numerically solving the standard equations for the chemical evolution. As far as the burst is concerned, we point out that the observed blue and red morphologies of our selected galaxies are well fitted by a density wave sweeping and compressing a large fraction of the matter in the galaxy (compare Fig. 1 with Fig. 11), which may give rise to a strong burst of star formation. So the closed-box approximation models the chemical evolution well during the burst. \subsection {Synthetic starlight spectrum} The synthetic spectrum of stellar populations as a function of the galactic age was derived from UV to 25 $\mu\,m$. The global luminosity at the galactic age $t$ is then obtained as the sum of the contributions of all earlier generations, weighted by the appropriate SFR, as described in MXD92.
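As an illustrative aside (a hedged sketch, not code from the original EPS model), the normalisation constant $A$ of the Salpeter IMF in eq. (2) can be fixed by requiring the IMF to account for one unit of mass between $m_l$ and $m_u$, i.e. $\int_{m_l}^{m_u} m\,\phi(m)\,dm = 1$; the function name and the unit-mass convention are assumptions for this example:

```python
def salpeter_norm(m_l, m_u, alpha=2.35):
    """Normalisation A of phi(m) = A * m**(-alpha), such that the total
    mass integral of m * phi(m) over [m_l, m_u] equals one (masses in
    solar units)."""
    p = 2.0 - alpha                   # exponent of the integrand m**(1 - alpha)
    integral = (m_u**p - m_l**p) / p  # analytic mass integral
    return 1.0 / integral

# Example: the limits quoted in the text, m_l = 0.01 and m_u = 100 solar masses.
A = salpeter_norm(0.01, 100.0)
```

Raising $m_l$ removes the dominant small-mass contribution to the mass integral, so the normalisation $A$ grows accordingly.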
The number of stars born at each galactic age $t$ and their metallicity are obtained by solving the equations governing the chemical evolution, with the SFR and IMF specified above. To describe their distribution in the H--R diagram we have adopted the theoretical isochrones derived by Bertelli {\it et al.} (1990) for metallicities Z=0.001 and Z=0.02, extended by Mazzei (1988) up to $100\,m\odot$ and to an age of $10^6\,$yr. Isochrones include all evolutionary phases from the main sequence to the stage of planetary ejection or of carbon ignition, as appropriate given the initial mass. \subsection {Correction for internal extinction} The internal extinction has been taken into account assuming that stars and dust are well mixed. The optical depth depends on the dust-to-gas ratio, which we assumed to be proportional to a power of the metallicity, as in Guiderdoni and Rocca--Volmerange (1987). Further details are given in MXD92. \subsection {Emission from circumstellar dust} The mid--IR emission from circumstellar dust shells was assumed to be dominated by OH/IR stars (see MXD92 for a discussion). The spectrum of OH~27.2+0.2 (Baud {\it et al.}, 1985) was assumed to be representative of stars of this class; the total luminosity of OH/IR stars in the passband $\Delta\lambda$ has then been computed as in MXD92 (eq.[13]). \subsection {Diffuse dust emission} The diffuse dust emission spectrum takes into account the contributions of two components: warm dust, located in regions of high radiation field intensity (e.g., in the neighborhood of OB clusters), and cold dust, heated by the general interstellar radiation field. The temperature distribution of the warm dust has been parameterized as a function of $I_w$, the average energy density inside HII regions. Different values of this parameter entail some change in the mean physical conditions inside HII regions, i.e. in the neutral hydrogen density, in the $m_l$ value, in the efficiency of star formation, and so on.
The range of tested values to match the observed far--IR SED is $20\le I_w/I_{loc} \le 200$, where $I_{loc}$ is the local value (see also Xu and De Zotti, 1989). The temperature distribution of cold dust depends on the central intensity of the interstellar radiation field, $I_0$, and on its distribution, $I(r)$, exponentially decreasing with the galactic radius, $r$. The range of tested values to match the far--IR SED is $2\le I_0/I_{loc} \le 45$. The model allows for a realistic grain--size distribution and includes PAH molecules (see Xu and De~Zotti (1989) and MXD92 for more details). The amount of starlight absorbed and re--emitted by dust is determined at each time using the model for internal extinction mentioned above. The relative contributions of the warm and cold dust components also evolve with galactic age, the warm/cold dust ratio, $R_{w/c}$, being proportional to the star formation rate. \section{Acknowledgments} Thanks are due to L. Hernquist, who kindly provided us with his TREECODE, and to R. Filippi, who produced some of the software used in the numerical simulations. We also thank the referee, Curtis Struck--Marcell, for his useful comments which improved our manuscript. \section{References} \parindent=0pt
Appleton, P.N., Struck-Marcell, C.: 1987, Ap. J. 312, 566
Appleton, P.N., James, R.A.: 1990, in "Dynamical and Interactions of Galaxies", Wielen R. (ed.), Springer Verlag, Berlin, Heidelberg, p. 200
Baud, B., Sargent, A.J., Werner, M.W., Bentley, A.F.: 1985, A. J. 292, 628
Bertelli, G., Betto, R., Bressan, A., Chiosi, C., Nasi, E., Vallenari, A.: 1990, A. A. S. 85, 845
Binney, J., Tremaine, S.: 1987, "Galactic Dynamics", Princeton U. P.
Bonoli, C.: 1987, A. A. 174, 57
Curir, A., Filippi, R.: 1994, A. A. submitted
Curir, A., Diaferio, A., De Felice, F.: 1993, Ap. J. 413, 70
de Jong, T., Clegg, P.E., Soifer, B.T., Rowan-Robinson, M., Habing, H.J., Houck, J.R., Aumann, H.H., Raimond, E.: 1984, Ap. J. 278, L67
Gerber, R.A., Lamb, S.A., Balsara, D.S.:
1992, Ap. J. 399, L51
Guiderdoni, B., Rocca--Volmerange, B.: 1987, A. \& A. 186, 1
Hernquist, L.: 1987, Ap. J. Suppl. 64, 715
Huang, S., Stewart, P.: 1988, A. \& A. 197, 14
Johnson, H.L.: 1966, Ann. Rev. Astr. Astrophys. 4, 193
Joseph, R.D.: 1990, in "Dynamics and Interactions of Galaxies", Wielen R. (ed.), Springer Verlag, Berlin, Heidelberg, p. 132
Joy, M., Harvey, P.: 1987, Ap. J. 315, 480
Keel, W.C., Kennicutt, R.C., Hummel, E., van der Hulst, J.M.: 1985, A. J. 90, 708
Kennicutt, R.C. jr.: 1987, A. J. 93, 1012
King, I.R.: 1981, Quarterly Journal of the R.A.S. 22, 227
Klein, U., Wielebinski, R., Morsi, H.W.: 1988, A. \& A. 190, 41
Leisawitz, D., Hauser, M.G.: 1988, Ap. J. 332, 954
Luban--Lotan, P., Struck--Marcell, C.: 1989, Ap. S. 156, 229
Lynds, R., Toomre, A.: 1976, Ap. J. 209, 382
Mazzei, P.: 1988, Ph.D. thesis, Intern. School for Advanced Studies, Trieste
Mazzei, P., Xu, C., De Zotti, G.: 1992, A. \& A. 256, 45
Mazzei, P., De Zotti, G., Xu, C.: 1994, Ap. J. 422, 81
Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: 1990, "Numerical Recipes", Cambridge University Press, p. 203
Rubin, V.C., Ford, W.K., Thonnard, N.: 1980, Ap. J. 238, 471
Salpeter, E.E.: 1955, A. A. 94, 175
Schmidt, M.: 1959, Ap. J. 129, 243
Schultz, A.B., Spight, L.D., Rodrigue, M., Colegrove, P.T., DiSanti, M.A.: 1991, BAAS 23, 953
Schweitzer, F.: 1990, in "Dynamic and Interactions of Galaxies", Proc. of the Int. Conf. Heidelberg (1989), p. 60, Wielen R. (ed.), Springer-Verlag
Soifer, B.T., Houck, J.R., Neugebauer, G.: 1987, Ann. Rev. Astron. Astrophys. 25, 187
Struck--Marcell, C., Higdon, J.L.: 1993, Ap. J. 411, 108
Theys, J.C., Spiegel, E.A.: 1977, Ap. J. 212, 616
Toomre, A.: 1978, in "The Large Scale Structure of the Universe", IAU Symp., Longair and Einasto (eds.), p. 109--116
Weil, M.L., Hernquist, L.: 1993, Ap. J. 405, 142
Wright, G.S., Joseph, R.D., Robertson, N.A., James, P.A., Meikle, W.P.S.: 1988, Mon. Not. R. astr. Soc. 233, 1
Xu, C., De~Zotti, G.: 1989, A. \& A. 225, 12
\vfill \section{Figure captions}
{\bf Fig. 1:} R isophotes of our sample of ring galaxies. \medskip
{\bf Fig. 2a:} shows the time evolution of an edge--on system generated by an intrusion with an impact velocity of $160\, km\,s^{-1}$. The radius of the intruder, a King sphere, is $0.28$ that of the target disk. The time step between different panels is $0.074\,$Gyr. \medskip
{\bf Fig. 2b:} is a face--on view of the time evolution of the same system as in panel a. \medskip
{\bf Fig. 3:} shows the time evolution of a system generating a ring galaxy endowed with a nucleus. The time step between different panels is $0.03\,$Gyr. \medskip
{\bf Fig. 4:} shows the time evolution of a system generating an empty ring galaxy. The time step between different panels is $0.03\,$Gyr. \medskip
{\bf Fig. 5:} panels a), b) and c) show the time evolution of the rotation velocity and of its polar and radial components, respectively, for the same model as in Fig. 2 (upper panels). The time step between different panels is $0.074\,$Gyr. \medskip
{\bf Fig. 6:} presents the behaviour of the surface density (averaged over concentric annuli) of the simulated system at different times; panel a): intrusion within the first regime of velocities; panel b): intrusion within the second regime. \medskip
{\bf Fig. 7:} shows the time evolution of the gas fraction, $f_g$, gas metallicity, $Z/Z_{\odot}$, and optical depth, $\tau/\tau_{disk}$, where $Z_{\odot}$ is the solar metallicity and $\tau_{disk}$ the effective optical depth for a pure disk (MXD92), for models computed with different $m_l$ values and with a burst of intensity $i=60$ and $i=300$ lasting $0.35$ and $0.20\,$Gyr respectively. \medskip
{\bf Fig. 8:} shows the time evolution of the observed colors, B-V and V-K, for the same models as in Fig. 7. During the burst the time step for the plot is 0.01 Gyr. \medskip {\bf Fig.
9:} pictures (a-e) compare the predicted SEDs with observational data (filled squares) for VV789, VV330, VV787, VV32 and NGC 2793, respectively. Heavy lines correspond to the "$I_{r_0}$" case, light lines to the "$I_0$" one (see text); in picture (f) the former SEDs, long-short dashed line for VV32, short-dashed line for VV787, dotted short--dashed line for VV330, long--dashed line for VV789, dotted long-dashed line for NGC 2793, are compared with M82 data, dots (UBV fluxes come from the RC3 catalogue, the other ones from Klein et al. 1988). Curves are normalized to the B band. \medskip
{\bf Fig. 10:} pictures (a-e) compare the SEDs matching the burst (heavy lines) with those expected at an age of $14\,$Gyr (light lines); heavy curves refer to the "$I_{r_0}$" case. The postburst evolution (see text) has been computed using $R_{w/c}=0.43$, $I_0=4I_{loc}$ for VV789, VV330 and VV787, $I_0=7I_{loc}$ for VV32 and $I_0=2I_{loc}$ for NGC 2793; the length of the burst is 0.35 Gyr, with the exception of VV32, where a burst lasting 0.2 Gyr has been assumed, and VV787, with 0.15 Gyr. In picture (f) the postburst SEDs (symbols are as in Fig. 9) are compared with that of our galaxy (dotted line) at the same age (MXD92). Curves are normalized to the B band. \medskip
{\bf Fig. 11:} panel a) shows our morphological fit for VV789; the isodensity contours are taken at a stage $\tau=0.20$ from the beginning of the ring structure; panel b) refers to VV330 with $\tau=0.40$; panel c): VV32, t=0.044 Gyr from the beginning of the ring structure; panel d): NGC 2793, $\tau=0.17$. \vfill\eject \end{document}
\section{Introduction} \label{sec:intro} For decades, face recognition (FR) from color images has achieved substantial progress and forms part of an ever-growing number of real world applications, such as video surveillance, people tagging and virtual/augmented reality systems \cite{zhao2003face, tan2006face, azeem2014survey}. With the increasing demand for recognition accuracy under unconstrained conditions, the weak points of 2D based FR methods become apparent: as an imaging-based representation, a color image is quite sensitive to numerous external factors, such as lighting variations and makeup patterns. Therefore, 3D based FR techniques \cite{ding2016comprehensive, corneanu2016survey, bowyer2006survey} have recently emerged as a remedy, because they take into consideration the intrinsic shape information of faces, which is more robust when dealing with these nuisance factors. Moreover, the complementary strengths of color and depth data allow them to work jointly and gain further improvement. However, depth data is not always accessible in real-life conditions due to its special requirements for optical instruments and acquisition environment. Other challenges remain as well, including the real-time registration and preprocessing of depth images. An important question then naturally arises: can we design a recognition pipeline where depth images are registered only in the gallery while still providing significant information for the identification of unseen color images? To cope with this problem, heterogeneous face recognition (HFR) \cite{toderici2010bidirectional, zhao2013benchmarking, huang2012oriented} has been proposed as a reasonable workaround. As a worthwhile trade-off between purely 2D and 3D based methods, HFR adopts both color and depth data for the training and gallery sets, while the online probe set simply contains color images.
Under this mechanism, an HFR framework can take full advantage of both color and depth information at the training stage to reveal the correlation between them. Once learned, this cross-modal correlation makes it possible to conduct heterogeneous matching between preloaded depth images in the gallery and color images digitally captured in real time. Beyond the above-mentioned mechanism, in this paper we take a further look at our constraint on the use of depth images. Note that all the difficulties which hinder us from availing ourselves of depth information in the probe set come from the acquisition and registration of 3D data. Intuitively, these problems can be immediately solved if we can reconstruct the depth image from the color image accurately and efficiently. Despite many existing works on shape recovery from a single image, most of them rely on 3D model fitting, which is time-consuming and prone to lack accuracy when landmarks are not precisely located. Thanks to the extremely rapid development of generative models, especially the Generative Adversarial Network (GAN) \cite{goodfellow2014generative} and its conditional variant (cGAN) \cite{mirza2014conditional}, which were introduced quite recently, we implement an end-to-end depth face recovery with cGAN to enforce realistic image generation. Furthermore, the recovered depth information enables a straightforward comparison in 2.5D space. \begin{figure} \includegraphics[width=\linewidth]{overview.png} \caption{Overview of the proposed CNN models for heterogeneous face recognition. Note that (1) depth recovery is conducted only for testing; (2) the final joint recognition may or may not include color based matching, depending on the specific experiment protocol. } \label{fig:overview} \end{figure} A flowchart of the proposed method is illustrated in Fig.
\ref{fig:overview}, and we list our contributions as follows: \begin{itemize} \item A novel depth face recovery method based on cGAN and an Auto-encoder with skip connections, which greatly improves the quality of the reconstructed depth images. \item We first train two discriminative CNNs individually for a two-fold purpose: to extract features of color and depth images, and to provide pre-trained models for the cross-modal 2D/2.5D CNN model. \item A novel heterogeneous face recognition pipeline which fuses multi-modal matching scores to achieve state-of-the-art performance. \end{itemize} \section{Related Work} \subsection{3D Face Reconstruction} 3D face reconstruction from single/multiple images or stereo video has been a challenging task due to its nonlinearity and ill-posedness. A number of prevailing approaches addressed this problem based on shape-subspace projections, where a set of 3D prototypes are fitted by adjusting corresponding parameters to a given 2D image; most of them were derived from 3DMM \cite{blanz2003face} and Active Appearance Models \cite{matthews20072d}. Alternative models following a similar processing pipeline were proposed afterwards, fitting 3D models to 2D images through various face collections or prior knowledge. For example, Gu and Kanade \cite{gu20063d} fit surface 3D points and related textures together with the pose and deformation estimation. Kemelmacher-Shlizerman et al. \cite{kemelmacher20113d} considered the input image as a guide, with a single reference model, to achieve 3D reconstruction. In the recent work of Liu et al. \cite{liu2016joint}, two sets of cascaded regressors are implemented and correlated via a 3D-2D mapping iteratively, to solve face alignment and 3D face reconstruction simultaneously. Likewise, using a generic model remains a decent solution for 3D face reconstruction from stereo videos, as presented in \cite{chowdhury20023d, fidaleo2007model, park20073d}.
Despite the strikingly accurate reconstruction results reported in the above studies, the drawback of relying on a single model or on a large number of well-aligned 3D training samples is observed and even enlarged here, because, as far as we know, 3D prototypes are necessary for almost all reconstruction approaches. \subsection{2D-3D Heterogeneous Face Recognition} As a pioneer and cornerstone for numerous subsequent 3D Morphable Model (3DMM) based methods, Blanz and Vetter \cite{blanz2003face} built this statistical model by merging a set of 3D face models and then densely fit it to a given facial image for further matching. Toderici et al. \cite{toderici2010bidirectional} located some predefined key landmarks on facial images in different poses, and then roughly aligned them to a frontal 3D model to achieve the recognition target; Riccio and Dugelay \cite{riccio2007geometric} also established a dense correspondence between the 2D probe and the 3D gallery using geometric invariants across the face region. Following this framework, a pose-invariant asymmetric 2D-3D FR approach \cite{zhang20123d} was proposed which conducts 2D-2D matching by synthesizing a 2D image from the corresponding 3D model with the same pose as a given probe sample. This approach was further extended and compared with the work of Zhao et al. \cite{zhao2013benchmarking} as a benchmarking asymmetric 2D-3D FR system; a complete version of their work was recently released in \cite{kakadiaris20163d}. Though the above models achieved satisfactory performance, unfortunately they all suffer from high computational cost and a long convergence process, owing to the considerable complexity of pose synthesis, and their common assumption that accurate landmark localization in facial images is fulfilled turns out to be another tough topic. More recently, learning based approaches for 2D/3D FR have increased significantly. Huang et al.
\cite{huang2012oriented} projected the proposed illuminant-robust feature OGM onto the CCA space to maximize the correlation between 2D/3D features; instead, Wang et al. \cite{wang20142d} combined Restricted Boltzmann Machines (RBMs) and CCA/kCCA to achieve this goal. The work of Jin et al. \cite{jin2014cross}, called MSDA and based on the Extreme Learning Machine (ELM), aims at finding a common discriminative feature space revealing the underlying relationship between different views. These approaches take good advantage of learning models, but show weakness when dealing with non-linear manifold representations. \section{Depth Face Reconstruction} In this section we address the task of taking an arbitrary color face image and recovering its counterpart in depth space. We first formulate our problem by adapting it to the setting of cGAN, then the detailed architecture design is described and discussed. \subsection{Problem Formulation} First proposed in \cite{goodfellow2014generative}, GAN has achieved impressive results in a wide variety of generative tasks. The core idea of GAN is to train two neural networks, which respectively represent the generator \textit{G} and the discriminator \textit{D}, to engage in a game-theoretic tussle with one another. Given samples $x$ from the real data distribution $p_{data}(x)$ and random noise $z$ sampled from a noise distribution $p_{z}(z)$, the discriminator aims to distinguish between real samples $x$ and fake samples which are mapped from $z$ by the generator, while the generator is tasked with maximally confusing the discriminator. The objective can thus be written as: \begin{equation}\label{GAN_loss} \mathcal{L}_{GAN}(G,D) = \mathbb{E}_{x{\sim}p_{data}(x)}[\log D(x)] + \mathbb{E}_{z{\sim}p_{z}(z)}[\log (1-D(G(z)))] \end{equation} where $\mathbb{E}$ denotes the empirical estimate of the expected value of the probability.
To optimize this loss function, we minimize its value for \textit{G} and maximize it for \textit{D} in an adversarial way, i.e. $\min_G \max_D \mathcal{L}_{GAN}(G,D)$. The advantage of GAN is that realistic images can be generated from noise vectors with a random distribution, which is crucially important for unsupervised learning. However, note that in our face recovery scenario, the training data contain image pairs $\{x,y\}$, where $x$ and $y$ refer to the depth and color faces respectively, with a one-to-one correspondence between them. The fact that $y$ can be involved in the model as a prior for the generative task leads us to the conditional variant of GAN, namely cGAN \cite{mirza2014conditional}. Specifically, we condition both the discriminator and the generator on the observations $y$; the objective of cGAN extends \eqref{GAN_loss} to: \begin{equation}\label{cGAN_loss} \mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y{\sim}p_{data}(x,y)}[\log D(x,y)] + \mathbb{E}_{z{\sim}p_{z}(z), y{\sim}p_{data}(y)}[\log (1-D(G(z|y),y))] \end{equation} Moreover, to ensure the pixel-wise similarity between the generator output $G(z|y)$ and the ground truth $x$, we subsequently impose a reconstruction constraint on the generator in the form of the L1 distance between them: \begin{equation}\label{L1_loss} \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y{\sim}p_{data}(x,y),z{\sim}p_{z}(z)}[\Vert x - G(z|y) \Vert_1] \end{equation} The comprehensive objective is formulated as a minimax value function over the above two losses, where the scalar $\eta$ balances them: \begin{equation}\label{final_loss} \min_G \max_D [\mathcal{L}_{cGAN}(G,D) + \eta\mathcal{L}_{L1}(G)] \end{equation} Since the cGAN loss alone can hardly generate specified images and using only $\mathcal{L}_{L1}(G)$ causes blurring, this joint loss successfully leverages their complementary strengths.
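To make the joint objective concrete, the following is a hedged toy sketch (not the paper's implementation) of the generator-side loss: an adversarial term plus the $\eta$-weighted L1 term of \eqref{L1_loss}, with images flattened to plain Python lists. The non-saturating $-\log D(\cdot)$ form used here is a common practical stand-in for minimizing $\log(1-D(\cdot))$ and is an assumption of this example, as are the function names:

```python
import math

def l1_loss(x, g):
    """Mean absolute error between the ground-truth depth x and the
    generated depth g (both flattened pixel lists), cf. the L1 term."""
    return sum(abs(a - b) for a, b in zip(x, g)) / len(x)

def generator_loss(d_fake, x, g, eta=500.0):
    """Generator objective: fool the discriminator (d_fake is the
    discriminator's output D(G(z|y), y) in (0, 1)) plus the
    eta-weighted reconstruction term."""
    adv = -math.log(d_fake)  # minimised when the discriminator is fooled
    return adv + eta * l1_loss(x, g)
```

With a perfect reconstruction (`g == x`) the loss reduces to the purely adversarial term, which is the behaviour the joint objective is designed to balance against.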
\begin{figure*} \begin{minipage}[c][7.2cm][t]{.3\textwidth} \vspace*{0.6cm} \hspace*{0.1cm} \centering \includegraphics[width=\textwidth]{cGAN.png} \subcaption{Workflow of cGAN} \label{fig:test1} \end{minipage}% \begin{minipage}[c][7.2cm][t]{.7\textwidth} \centering \includegraphics[width=.85\textwidth]{generator.png} \subcaption{Generator} \label{fig:test2}\par \includegraphics[width=.85\textwidth]{discriminator.png} \subcaption{Discriminator} \label{fig:test3} \end{minipage} \caption{The mechanism and architecture of cGAN. In Fig. \ref{fig:test2}, the noise variable $z$ presents itself in the form of dropout layers, while the black arrows portray the skip connections. All convolution and deconvolution layers use filter size 4$\times$4 and 1-padding; $n$ and $s$ represent the number of output channels and the stride value, respectively. (Best viewed in color)} \label{fig:cGAN} \end{figure*} \subsection{CGAN Architecture} We adapt our cGAN architecture from that in \cite{isola2016image}, which achieved particularly impressive results in image-to-image translation tasks. A detailed description of this model is illustrated in Fig. \ref{fig:cGAN} and some key features are discussed below. \textbf{Generator:} As a standard generative model, the architectures of the auto-encoder (AE) \cite{hinton2006reducing} and its variants \cite{vincent2010stacked, rifai2011contractive, kingma2013auto} have been widely adopted as $G$ in past cGANs. However, the drawback of conventional AEs is obvious: due to their dimensionality reduction capacity, a large portion of low-level information, such as precise localization, is compressed when an image passes through the layers of the encoder. To cope with this lossy compression problem, we follow the idea of U-Net \cite{ronneberger2015u} by adding skip connections which directly forward the features from encoder layers to the decoder layers on the same 'level', as shown in Fig. \ref{fig:test2}.
\textbf{Discriminator:} Consistent with Isola et al. \cite{isola2016image}, we adopt a \textit{Patch}GAN for the discriminator. Within this pattern, no fully connected layers are implemented, and $D$ outputs a 2D image where each pixel represents the prediction result with respect to the corresponding patch of the original image. All pixels are then averaged to decide whether the input image is 'real' or 'fake'. Compared with pixel-level prediction, \textit{Patch}GAN efficiently concentrates on local patterns, while global low-frequency correctness is enforced by the L1 loss in \eqref{L1_loss}. \textbf{Optimization:} The optimization of the cGAN is performed by following the standard method \cite{goodfellow2014generative}: mini-batch SGD and the Adam solver are applied to optimize $G$ and $D$ alternately (as depicted by arrows with different colors in Fig. \ref{fig:test1}). \section{Heterogeneous Face Recognition} The reconstruction of depth faces from color images enables us to maximally leverage shape information in both gallery and probe, which means we can individually learn a CNN model to extract discriminative features for depth images and transform the initial cross-modal problem into a multi-modal one. However, the heterogeneous matching remains another challenge in our work; below we demonstrate how this problem is formulated and tackled. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{hfr.png} \caption{Training procedure of the cross-modal CNN model. Models in the dashed box are pre-trained using 2D and 2.5D face images individually. } \label{fig:hfr} \end{figure} \textbf{Unimodal learning.} The last few years have witnessed a surge of interest and success in FR with deep learning \cite{taigman2014deepface, sun2015deepid3, parkhi2015deep}.
Following the basic idea of stacking convolution-convolution-pooling (C-C-P) layers in \cite{lee2016accurate}, we train from scratch two CNNs for color and grayscale images on CASIA-WebFace \cite{yi2014learning} and further fine-tune the grayscale based model with our own depth images. These two models serve two purposes: to extract 2D and 2.5D features individually, and to offer pre-trained models for the ensuing cross-modal learning. \textbf{Cross-modal learning.} Once a pair of unimodal models for both views is trained, the modal-specific representations, $\{X,Y\}$, can be obtained after the last fully connected layers. Since each input for the two-stream cross-modal CNN is a 2D+2.5D image pair with identity correspondence, it is reasonable to expect that $X$ and $Y$ share common patterns which help to classify them as the same class. This connection essentially reflects the nature of cross-modal recognition, and was investigated in \cite{wang2016correlated, huang2012oriented, wang20142d}. In order to explore this shared and discriminative feature, a joint supervision is required to enforce both correlation and distinctiveness simultaneously. For this purpose, we apply two linear mappings following $X$ and $Y$, denoted by $M_X$ and $M_Y$. First, to ensure the correlation between the new features, they are enforced to be as close as possible, which is achieved by minimizing their distance in feature space: \begin{equation}\label{frob} \mathcal{L}_{corr} = \sum_{i=1}^{n}\Vert M_X X_i - M_Y Y_i\Vert_F^2 \end{equation} where $n$ denotes the size of the mini-batch and $\Vert\cdot\Vert_F$ represents the Frobenius norm. If we used only the above supervision signal, the model would simply learn zero mappings for $M_X$ and $M_Y$, because the correlation loss would then stably be 0. To avoid this trivial solution, we average the two outputs to obtain a new feature on which the classification loss is computed.
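The two ingredients just described, the per-pair correlation penalty and the averaged feature fed to the classifier, can be sketched in pure Python, together with the cosine-similarity matching score used at test time. This is a hedged illustration with flat feature lists; the function names are ours, not from the original implementation:

```python
import math

def corr_loss(fx, fy):
    """Squared Euclidean distance between the mapped features
    M_X X_i and M_Y Y_i of one 2D+2.5D pair (one term of L_corr)."""
    return sum((a - b) ** 2 for a, b in zip(fx, fy))

def fused_feature(fx, fy):
    """Averaged feature (M_X X_i + M_Y Y_i)/2 on which the softmax
    classification loss is computed."""
    return [(a + b) / 2.0 for a, b in zip(fx, fy)]

def cosine_score(u, v):
    """Cosine similarity between gallery and probe features, used as
    the matching score."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))
```

Note that `corr_loss` vanishes for identical mapped features, which is why classification supervision on the averaged feature is needed to rule out the all-zero mappings.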
The ultimate objective function is formulated as follows: \begin{align*}\label{eqhfr} \mathcal{L}_{hfr} &= \mathcal{L}_{softmax} + \lambda\mathcal{L}_{corr}\\ &= -\sum_{i=1}^{n}\log\frac{e^{W_{c_i}^T(M_X X_i + M_Y Y_i)/2+b_{c_i}}}{\sum_{j=1}^m e^{W_{j}^T(M_X X_i + M_Y Y_i)/2+b_{j}}} + \lambda\sum_{i=1}^{n}\Vert M_X X_i - M_Y Y_i\Vert_F^2 \end{align*} where $c_i$ represents the ground truth class label of the $i$th image pair and the scalar $\lambda$ denotes the weight of the correlation loss. \textbf{Fusion.} To highlight the effectiveness of the proposed method, we adopt the cosine similarity of the 4096-d hidden layer features as matching scores. At the score fusion stage, all scores are normalized to [0,1] and fused by a simple sum rule. \section{Experimental Results} To intuitively demonstrate the effectiveness of the proposed method, we conduct extensive experiments for 2D/2.5D HFR on a benchmark 2D/2.5D face database. Besides using the reconstructed 2.5D depth images, our method also outperforms the state of the art using only 2.5D images instead of holistic 3D face models. \subsection{Dataset Collection} Collecting 2D/2.5D image pairs presents itself as a primary challenge when considering a deep CNN as the learning pipeline. Unlike the tremendous boost in the scale of 2D face image datasets, massive 3D face data acquisition still remains a bottleneck for the development and practical application of 3D based FR techniques, from which our work is partly motivated. \textbf{Databases:} As listed in Table \ref{database}, three large scale and publicly available 3D face databases are gathered as the training set and the performance is evaluated on another dataset, which implies that there is no overlap between the training and test sets and that the generalization capacity of the proposed method is evaluated as well. Note that the attribute values only concern the data used in our experiments; for example, scans with large pose variations in CASIA-3D are not included here.
\begin{table} \begin{center} \begin{tabular}{ccccc} \hline \multirow{2}{*}{Databases} & \multicolumn{3}{c}{Training Set} & Test set \\ \cline{2-4} & BU3D \cite{yin20063d} & Bosphorus \cite{savran2008bosphorus} & CASIA-3D \cite{CASIA-3D} & FRGC Ver2.0 \cite{phillips2005overview} \\ \hline \# Persons & 100 & 105 & 123 & 466 \\ \# Images & 2500 & 2896 & 1845 & 4003 \\ Conditions & E & E & EI & EI \\ \hline \end{tabular} \end{center} \caption{Database overview. E and I are short for expressions and illuminations, respectively.} \label{database} \end{table} \textbf{Preprocessing:} To generate a 2.5D range image from an original 3D shape, we either perform a direct projection when the point cloud is pre-arranged in grids (Bosphorus/FRGC) or apply a simple Z-buffer algorithm (BU3D/CASIA3D). Furthermore, to ensure that all faces are of a similar scale, we resize and crop the original image pairs to $128\times128$ while fixing their inter-ocular distance to a certain value. In particular, to deal with the missing holes and unwanted body parts (shoulders, for example) in the raw data of FRGC, we first locate the face based on 68 automatically detected landmarks \cite{asthana2014incremental}, and then apply a linear interpolation that approximates the missing value of each hole pixel by averaging its non-zero neighboring points. \subsection{Implementation details} All images are normalized before being fed to the network by subtracting from each channel its mean value over all training data.
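The hole-filling interpolation described in the preprocessing step can be sketched as follows (a simplified single-pass version with hypothetical names; the actual preprocessing may differ in neighbourhood size and iteration):

```python
import numpy as np

def fill_holes(depth):
    """Replace zero-valued (hole) pixels by the average of their
    non-zero 8-neighbours; pixels with no valid neighbour stay 0."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            if depth[i, j] != 0:
                continue
            neigh = [depth[a, b]
                     for a in range(max(0, i - 1), min(h, i + 2))
                     for b in range(max(0, j - 1), min(w, j + 2))
                     if (a, b) != (i, j) and depth[a, b] != 0]
            if neigh:
                out[i, j] = sum(neigh) / len(neigh)
    return out
```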
With regard to the choice of hyperparameters, we adopt the following setting: in the cGAN, the learning rate $\mu_{cGAN}$ is set to 0.0001 and the weight for the L1 norm $\eta$ is 500; in the cross-modal CNN model, the learning rate for training from scratch $\mu$ begins at 1 and is divided by 5 every 10 epochs, while the learning rate during fine-tuning $\mu_{ft}$ is 0.001; for both models, the momentum $m$ is initially set to 0.5 and increased to 0.9 at the 10th epoch; the weight for the correlation loss $\lambda$ is set to 0.6. \subsection{Reconstruction Results} The reconstruction results obtained for color images in FRGC are illustrated in Fig. \ref{recovery}. Samples from different subjects across expression and illumination variations are shown from left to right, thereby giving an indication of the generalization ability of the proposed method. For each sample we first portray the original color image with its ground-truth depth image, followed by the reconstruction results, which demonstrate the effectiveness and necessity of each constraint in the joint objective. In addition, some samples with low reconstruction quality are depicted in Fig. \ref{recovery_bad} as well. \begin{figure} \centering \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=0.9\textwidth]{recovery.png} \caption{} \label{recovery_good} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=0.75\textwidth]{recovery_.png} \caption{} \label{recovery_bad} \end{subfigure} \caption{Qualitative reconstruction results of FRGC samples with varying illuminations and expressions. Fig.~\ref{recovery_good}: correctly recovered samples. Fig.~\ref{recovery_bad}: incorrectly recovered samples.} \label{recovery} \end{figure} Being consistently similar to the ground truth, the reconstruction results with the joint loss in Fig. \ref{recovery_good} intuitively demonstrate the strength of the cGAN.
The recovered depth faces remain accurate and realistic irrespective of the lighting and expression variations in the original RGB images. Furthermore, comparing the two reconstruction results in the 3rd and 4th rows suggests that: 1) using only the L1 loss leads to blurry results because the model tends to average all plausible values, especially for regions containing high-level information such as edges; 2) using only the cGAN loss achieves slightly sharper results, but suffers from noise. These results provide evidence that the joint loss is beneficial and important for obtaining a ``true'' and accurate output. Meanwhile, our model encounters some problems when dealing with extreme cases, such as thick beards, wide open mouths and extremely dark shadows, as displayed in Fig. \ref{recovery_bad}. The errors are principally due to the scarcity of training samples exhibiting these cases. \subsection{2D-3D Asymmetric FR} We conduct the quantitative experiments on FRGC, which has held the field as one of the most commonly used benchmark datasets over the last decade. In contrast with unimodal FR experiments, very few attempts have been made on 2D/3D asymmetric FR. For convenience of comparison, three recent and representative protocols, reported respectively in \cite{jin2014cross}, \cite{huang2012oriented} and \cite{wang20142d}, are followed. These protocols mainly differ in their gallery and probe settings, including splitting and modality setup. For example, the gallery set in \cite{wang20142d} contains only depth images; when comparing with their work, our experiment therefore excludes 2D-based matching to respect this protocol. The comparison results in Table \ref{recognition} show that the proposed cross-modal CNN outperforms the state of the art, and that fusing 2.5D matching into HFR with the reconstructed depth image further improves the performance.
Moreover, the proposed method is advantageous in its 3D-free reconstruction capacity and efficiency. To the best of our knowledge, this is the first 2.5D face recovery approach that is free of any 3D prototype models. Although the whole training and fine-tuning procedure takes nearly 20 hours, an online forward pass takes only 1.6~ms per image on a single NVIDIA GeForce GTX TITAN X GPU, and the method is therefore capable of satisfying real-time processing requirements. \begin{table} \begin{center} \begin{tabular}{cccccc} \hline \multirow{2}{*}{Protocol} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Rank-1 Recognition Accuracy} \\ \cline{3-6} & & 2D & 2.5D & 2D/2.5D & Fusion \\ \hline \multirow{2}{*}{Jin et al. \cite{jin2014cross}} & MSDA+ELM \cite{jin2014cross} & - & - & 0.9680 & - \\ & Ours & - & 0.9573 & 0.9603 & \textbf{0.9698} \\ \multirow{2}{*}{Wang et al. \cite{wang20142d}} & GRBM+rKCCA \cite{wang20142d} & - & - & 0.9600 & - \\ & Ours & - & 0.9529 & 0.9714 & \textbf{0.9745} \\ \multirow{2}{*}{Huang et al. \cite{huang2012oriented}} & OGM \cite{huang2012oriented} & 0.9390 & - & 0.9404 & 0.9537 \\ & Ours & 0.9755 & 0.9609 & 0.9688 & \textbf{0.9792} \\ \hline \end{tabular} \end{center} \caption{Comparison of recognition accuracy on FRGC under different protocols.} \label{recognition} \end{table} \begin{table} \begin{center} \begin{tabular}{cccccccc} \hline $\lambda$ & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 & 1.2 \\ \hline Accuracy & 0.9245 & 0.9481 & 0.9600 & 0.9688 & 0.9577 & 0.8851 & 0.7333\\ \hline \end{tabular} \end{center} \caption{2D/2.5D HFR accuracy with varying $\lambda$ under protocol of \cite{huang2012oriented}.} \label{hyper} \end{table} \textbf{Effect of hyperparameter $\lambda$.} An extended analysis is made to explore the roles of the softmax loss and the correlation loss. We take the protocol in \cite{huang2012oriented} as a standard and vary the weight for the correlation loss $\lambda$ each time.
As shown in Table \ref{hyper}, the performance remains largely stable for $\lambda$ between 0.4 and 0.8. When we set $\lambda=0$ instead of 0.6, which means the correlation loss is not involved during training, the network can still learn valuable features, with a recognition rate decrease of 4.43$\%$. However, as $\lambda$ increases further, the performance drops drastically, which implies that too strong a constraint on the correlation loss can backfire by negatively impacting the softmax loss. \section{Conclusion} In this paper, we have presented a novel framework for 2D/2.5D heterogeneous face recognition together with depth face reconstruction. This approach combines the generative capacity of a conditional GAN and the discriminative feature extraction of a deep CNN for cross-modality learning. The extensive experiments convincingly demonstrate that the proposed method successfully reconstructs realistic 2.5D depth images from single 2D images while being adaptive and sufficient for HFR. This architecture could hopefully be generalized to other heterogeneous FR tasks, such as visible light vs. near-infrared and 2.5D vs. forensic sketch, which provides an interesting and promising prospect. \section{Acknowledgement} This work was supported in part by the French Research Agency, l'Agence Nationale de Recherche (ANR), through the Jemime project (N$^\circ$ contract ANR-13-CORD-0004-02), the Biofence project (N$^\circ$ ANR-13-INSE-0004-02) and the PUF 4D Vision project funded by the Partner University Foundation.
\section{Introduction} \noindent \textit{Motivation.} The \emph{model checking problem}, to decide whether a given logical sentence is true in a given structure, is a fundamental computational problem which appears in a variety of areas in computer science, including database theory, artificial intelligence, constraint satisfaction, and computational complexity. The problem is computationally intractable in its general version, and hence it is natural to seek restrictions of the class of structures or the class of sentences yielding sufficient or necessary conditions for computational tractability. Here, as usual in the complexity investigation of the model checking problem, computational tractability refers to \emph{polynomial-time tractability} or, in cases where polynomial-time tractability is unlikely, a relaxation known as \emph{fixed-parameter tractability with the sentence as a parameter}. The latter guarantees a decision algorithm running in $f(k) \cdot n^c$ time on inputs of size $n$ and sentences of size $k$, where $f$ is a computable function and $c$ is a constant. For further discussion of the complexity setup adopted here, including its algorithmic motivations, we refer the reader to \cite{Grohe07a, FlumGrohe06}. The study of model checking first-order logic on restricted classes of finite \emph{combinatorial structures} is an established line of research originating from the seminal work of Seese \cite{Seese96}. Results in this area have provided very general conditions for computational tractability, and even exact characterizations in many relevant cases \cite{GroheKreutzerSiebertz14}. As Grohe observes \cite{Grohe07a}, though, it would be also interesting to investigate structural properties facilitating the model checking problem in the realm of finite \emph{algebraic structures}, for instance groups or lattices. In this paper, we investigate the class of finite \emph{partially ordered sets}. 
A partially ordered set (in short, a \emph{poset}) is the structure obtained by equipping a nonempty set with a reflexive, antisymmetric, and transitive binary relation. In other words, the class of posets coincides with the class of directed graphs satisfying a certain universal first-order sentence (axiom); namely, the sentence that enforces reflexivity, antisymmetry, and transitivity of the edge relation. In this sense, from a logical perspective, posets form an intermediate case between combinatorial and algebraic structures; they can be viewed as being stronger than purely combinatorial structures, as the nonlogical vocabulary is presented by a first-order axiomatization; but weaker than genuinely algebraic structures, as the axiomatization is expressible in universal first-order logic (too weak of a fragment to define algebraic operations). Posets are fundamental combinatorial objects \cite[Chapter~8]{GrahamGrotschelLovasz95}, with applications in many fields of computer science, ranging from software verification \cite{NielsonNielsonHankin05} to computational biology \cite{RauschReinert10}. However, very little is known about the complexity of the model checking problem on classes of finite posets; to the best of our knowledge, even the complexity of natural syntactic fragments of first-order logic on basic classes of finite posets is open. A prominent logic in first-order model-checking is \emph{primitive positive} logic, that is, first-order sentences built using existential quantification ($\exists$) and conjunction ($\wedge$); the problem of model checking primitive positive logic is equivalent to the \emph{constraint satisfaction problem} and the \emph{homomorphism problem} \cite{FederVardi98}. 
However, restricted to posets, the problem of model checking primitive positive logic and even \emph{existential positive} logic, obtained from primitive positive logic by including disjunction ($\vee$) in the logical vocabulary, is trivial; because of reflexivity, every existential positive sentence is true on every poset! As we observe (Proposition~\ref{pr:exprcomplex}), the complexity scenario changes abruptly in \emph{existential conjunctive} logic, that is, first-order sentences in prefix negation normal form built using $\exists$, $\wedge$, and negation ($\neg$). Here, the model checking problem is $\textup{NP}$-hard even on a certain fixed finite poset; in the complexity jargon, the \emph{expression} complexity of existential conjunctive logic is $\textup{NP}$-hard on finite posets. In other words, as long as computational tractability is identified with polynomial-time tractability, any structural property of posets is algorithmically immaterial (in a sense that can be made precise). There is then a natural quest for relaxations of polynomial-time tractability yielding \textit{(i)} a nontrivial complexity analysis of the problem, and \textit{(ii)} a refined perspective on the structural properties of posets underlying tamer algorithmic behaviors; in this paper we achieve \textit{(i)} and \textit{(ii)} through the glasses of fixed-parameter tractability. More precisely, as we discuss below, our contribution is a complete description of the parameterized complexity of model checking (all syntactic fragments of) existential first-order logic (first-order sentences in prefix normal form built using $\exists$, $\wedge$, $\vee$, and $\neg$), with respect to classes of finite posets in a hierarchy generated by fundamental poset invariants.\footnote{Note that existential \emph{disjunctive} logic (first-order sentences in prefix negation normal form built using $\exists$, $\vee$, and $\neg$) is trivial on posets. 
In fact, every sentence in the fragment is either true on every poset, or false on every poset, and it is easy to check which of the two cases holds for any given sentence.} Model checking existential logic encompasses as a special case the fundamental \emph{embedding problem}, to decide whether a given structure contains an isomorphic copy of another given structure as an \emph{induced} substructure; in fact, the embedding problem reduces in polynomial-time to the problem of model checking certain existential (even conjunctive) sentences. The aforementioned fact that existential conjunctive logic is already $\textup{NP}$-hard on a fixed finite poset leaves open the existence of a nontrivial classical complexity classification of the embedding problem. We provide such a classification by giving a complete description of the classical complexity of the embedding problem in the introduced hierarchy of poset invariants. We hope that the investigation of the existential fragment prepares the ground (and possibly provides basic tools) for understanding the model checking problem for more expressive logics on posets. \medskip \noindent \textit{Contribution.} We now give an account of our contribution. We refer the reader to Figure~\ref{fig:overwparcompl} for an overview; the poset invariants and their relations are introduced in Section~\ref{sect:setup}. 
\begin{figure}[ht] \centering \begin{picture}(0,0)% \includegraphics{overwiewnotop_pspdftex}% \end{picture}% \setlength{\unitlength}{2279sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4074,2315)(2014,-10243) \put(2521,-9106){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{width}$}% }}}} \put(3421,-10006){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{size}$}% }}}} \put(4186,-9106){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{degree}$}% }}}} \put(2971,-8206){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{cover\textup{-}degree}$}% }}}} \put(5041,-8206){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{depth}$}% }}}} \end{picture}% \caption{ The (light or dark) gray region covers invariants such that, if a class of finite posets is bounded under the invariant, then model checking existential logic (or equivalently, by Proposition~\ref{proposition:metodological}, model checking existential conjunctive logic, or deciding embedding) over the class is fixed-parameter tractable; the white region covers invariants such that there exists a class of finite posets bounded under the invariant where the problem is $\textup{W}[1]$-hard. Similarly, the dark gray region covers invariants where the embedding problem is polynomial-time tractable, and the complement of the dark gray region (light gray or white) covers invariants where the problem is $\textup{NP}$-hard. 
In classical complexity, as opposed to parameterized complexity, the tractability frontier of existential (conjunctive) logic and embedding are different (the former, since existential logic is already $\textup{NP}$-hard on a fixed finite poset, is $\textup{NP}$-hard everywhere).} \label{fig:overwparcompl} \end{figure} In contrast to the classical case, model checking existential logic on fixed structures is trivially fixed-parameter tractable; in fact, even the full first-order logic is trivially fixed-parameter tractable on any class of finite structures of bounded size. On the other hand, there exist classes of finite posets where existential logic is unlikely to be fixed-parameter tractable (in fact, there exist classes where even the embedding problem is $\textup{W}[1]$-hard); but the reduction class given by the natural hardness proof is rather wild, in particular it has bounded depth but \emph{unbounded width} (Proposition~\ref{pr:parhardallposets}). The \emph{width} of a poset is the maximum size of a subset of pairwise incomparable elements (antichain); along with its \emph{depth}, the maximum size of a subset of pairwise comparable elements (chain), these two invariants form the basic and fundamental structural properties of a poset, arguably its most prominent and natural features. Our main result establishes that width helps algorithmically (in contrast to depth); specifically, we prove that \emph{model checking existential logic on classes of finite posets of bounded width is fixed-parameter tractable} (Theorem~\ref{thm:EFOFPT}). This, together with Seese's algorithm (plus a routine reduction described in Proposition~\ref{pr:harddegree}), allows us to complete the parameterized complexity classification of the investigated poset invariants, as depicted in Figure~\ref{fig:overwparcompl}. 
We believe that our tractability result highlights the fundamental feature of posets of bounded width that can be exploited algorithmically; namely, \emph{bounded width posets admit a polynomial-time compilation to certain semilattice structures}, which are algorithmically tamer than the original posets, but equally expressive with respect to the problem at hand. The proof proceeds in two stages. We first prove that, on any class of finite relational structures, model checking existential logic is fixed-parameter tractable if and only if the embedding problem is fixed-parameter tractable (Proposition~\ref{proposition:metodological}). Next, using the color coding technique of Alon, Yuster, and Zwick \cite{AlonYusterZwick95}, we reduce an instance of the embedding problem on posets of bounded width to a suitable family of instances of the homomorphism problem of certain semilattice structures, which is polynomial-time tractable by classical results of Jeavons, Cohen, and Gyssens \cite{JeavonsCohenGyssens97}. Our approach is reminiscent of the well-established fact in order theory that finite posets correspond exactly (in a sense that can be made precise in category-theoretic terms) to finite distributive lattices. However, the algorithmic implications of this correspondence appear to have been overlooked. Indeed, using the correspondence and the known fact that the isomorphism problem is polynomial-time tractable on finite distributive lattices, we prove that \emph{the isomorphism problem for posets of bounded width is polynomial-time tractable} (Theorem~\ref{th:isoptime}), which settles an open question in order theory \cite[p.~284]{CaspardLeclercMonjardet12}.
Motivated by the equivalence (in parameterized complexity) between embedding and model checking existential conjunctive logic (Proposition~\ref{proposition:metodological}) on one hand, and the fact that existential conjunctive logic is already $\textup{NP}$-hard on a fixed finite poset (Proposition~\ref{pr:exprcomplex}) on the other hand, we also revisit the classical complexity of the embedding problem for finite posets and classify it with respect to the poset invariants studied in the parameterized complexity setting. The outcome is pictured in Figure~\ref{fig:overwparcompl}; here, polynomial-time tractability of the embedding problem on posets of bounded size is optimal with respect to the studied poset invariants. We remark that the hardness results are technically involved (Theorem~\ref{th:widthnphard} and Theorem~\ref{th:degreenphard}); in particular, bounded width is a known obstruction for hardness proofs (for instance, the complexity of the dimension problem is unknown on bounded width posets). We conclude by mentioning that our work on posets relates to, but is independent of, general results by Seese \cite{Seese96} and Courcelle, Makowsky, and Rotics \cite{CourcelleMakowskyRotics00}, respectively, on model checking first-order logic on classes of finite graphs of bounded degree and bounded clique-width. Namely, the order relation of a poset has bounded degree if and only if the poset has bounded depth and bounded \emph{cover-degree} (that is, its cover relation has bounded degree); moreover, if a poset has bounded width, then it has bounded cover-degree (Proposition~\ref{pr:diagram}). However, \emph{there exist classes of bounded width posets with unbounded degree} (for instance, chains), and \emph{there exist classes of bounded width posets with unbounded clique-width} (Proposition~\ref{pr:nocw}), which excludes the direct application of the aforementioned results.
\medskip \noindent \shortversion{\emph{Throughout the paper, we mark with $\star$ all statements whose proofs are omitted; we refer to the \href{http://arxiv.org/abs/???}{arXiv} for a full version.}} \section{Preliminaries}\label{sect:prelim} For all integers $k \geq 1$, we let $[k]$ denote the set $\{ 1, \ldots, k \}$. \medskip \noindent \textit{Logic.} In this paper, we focus on relational first-order logic. A \emph{vocabulary} $\sigma$ is a \emph{finite} set of \emph{relation symbols}, each of which is associated to a natural number called its \emph{arity}; we let $\textup{ar}(R)$ denote the arity of $R \in \sigma$. An \emph{atom} $\alpha$ (over vocabulary $\sigma$) is an equality of variables ($x=y$) or is a predicate application $R x_1 \dots x_{\textup{ar}(R)}$, where $R \in \sigma$ and $x_1,\dots,x_{\textup{ar}(R)}$ are variables. A \emph{formula} (over vocabulary $\sigma$) is built from atoms (over $\sigma$), conjunction ($\wedge$), disjunction ($\vee$), negation ($\neg$), universal quantification ($\forall$), and existential quantification ($\exists$). A \emph{sentence} is a formula having no free variables. We let $\mathcal{FO}$ denote the class of first-order sentences in \emph{prefix negation normal form}, that is, for each $\phi \in \mathcal{FO}$, the quantifiers occur in front of the sentence and the negations occur in front of the atoms. Let $\rho$ be a subset of $\{\forall,\exists,\wedge,\vee,\neg\}$ containing at least one quantifier and at least one binary connective. We let $\mathcal{FO}(\rho) \subseteq \mathcal{FO}$ denote the \emph{syntactic fragment} of $\mathcal{FO}$-sentences built using only logical symbols in $\rho$. We call $\mathcal{FO}(\exists,\wedge,\vee,\neg)$ the \emph{existential} fragment, $\mathcal{FO}(\exists,\wedge,\neg)$ the \emph{existential conjunctive} fragment, and $\mathcal{FO}(\exists,\wedge)$, the \emph{existential conjunctive positive} (or \emph{primitive positive}) fragment. 
\medskip \noindent \textit{Structures.} Let $\sigma$ be a relational vocabulary. A \emph{structure} $\mathbf{A}$ (over $\sigma$) is specified by a nonempty set $A$, called the \emph{universe} of the structure, and a relation $R^{\mathbf{A}} \subseteq A^{\textup{ar}(R)}$ for each relation symbol $R \in \sigma$. A structure is \emph{finite} if its universe is finite. \emph{All structures considered in this paper are finite.} Given a structure $\mathbf{A}$ and $B \subseteq A$, we denote by $\mathbf{A}|_B$ the substructure of $\mathbf{A}$ induced by $B$, namely the universe of $\mathbf{A}|_B$ is $B$ and $R^{\mathbf{A}|_B}=R^{\mathbf{A}} \cap B^{\textup{ar}(R)}$ for all $R \in \sigma$. Let $\mathbf{A}$ and $\mathbf{B}$ be $\sigma$-structures. A \emph{homomorphism} from $\mathbf{A}$ to $\mathbf{B}$ is a function $h \colon A \to B$ such that $(a_1,\ldots,a_{\textup{ar}(R)}) \in R^{\mathbf{A}}$ implies $(h(a_1),\ldots,h(a_{\textup{ar}(R)})) \in R^{\mathbf{B}}$, for all $R \in \sigma$ and all $(a_1,\ldots,a_{\textup{ar}(R)}) \in A^{\textup{ar}(R)}$; a homomorphism from $\mathbf{A}$ to $\mathbf{B}$ is \emph{strong} if $(a_1,\ldots,a_{\textup{ar}(R)}) \not\in R^{\mathbf{A}}$ implies $(h(a_1),\ldots,h(a_{\textup{ar}(R)})) \not\in R^{\mathbf{B}}$. An \emph{embedding} from $\mathbf{A}$ to $\mathbf{B}$ is an injective strong homomorphism from $\mathbf{A}$ to $\mathbf{B}$. An \emph{isomorphism} from $\mathbf{A}$ to $\mathbf{B}$ is a bijective embedding from $\mathbf{A}$ to $\mathbf{B}$. \emph{In graph theory, an injective strong homomorphism is also called a \lq\lq strong embedding\rq\rq, and the term \lq\lq embedding\rq\rq\ is used in the weaker sense of injective homomorphism; here, we adopt the order-theoretic (and model-theoretic) terminology.} For a structure $\mathbf{A}$ and a sentence $\phi$ over the same vocabulary, we write $\mathbf{A} \models \phi$ if the sentence $\phi$ is \emph{true} in the structure $\mathbf{A}$. 
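A brute-force test for the embedding relation just defined can be sketched in Python (the encoding of a structure as a pair of a universe and a dictionary of relations, with arities passed separately, is our own illustrative choice; the search is exponential in the size of $\mathbf{A}$, in line with the parameterization by $\mathbf{A}$ used later):

```python
from itertools import permutations, product

def embeds(A, B, arity):
    """Brute-force test of whether structure A embeds into B.
    A and B are pairs (universe, rels), where rels maps each relation
    symbol to a set of tuples and arity gives the symbol's arity.
    An embedding is an injective map h that is strong:
    t in R^A  iff  h(t) in R^B, for every tuple t over A."""
    (UA, RA), (UB, RB) = A, B
    for image in permutations(UB, len(UA)):  # injective maps only
        h = dict(zip(UA, image))
        if all((tuple(h[x] for x in t) in RB[R]) == (t in RA[R])
               for R in arity
               for t in product(UA, repeat=arity[R])):
            return True
    return False
```

For example, a two-element chain embeds into a three-element chain, while a two-element antichain does not.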
When $\mathbf{A}$ is a structure, $f$ is a mapping from variables to the universe of $\mathbf{A}$, and $\psi(x_1,\ldots,x_n)$ is a formula over the vocabulary of $\mathbf{A}$, we liberally write $\mathbf{A} \models \psi(f(x_1),\ldots,f(x_n))$ to indicate that $\psi$ is satisfied by $\mathbf{A}$ and $f$. A structure $\mathbf{G}=(V,E^\mathbf{G})$ with $\textup{ar}(E)=2$ is called a \emph{digraph}, and a \emph{graph} if $E^\mathbf{G}$ is irreflexive and symmetric. We let $\mathcal{G}$ denote the class of all graphs. Let $\mathbf{G}$ be a digraph. The \emph{degree} of $g \in G$, in symbols $\textup{degree}(g)$, is equal to $|\{ (g',g) \in E^\mathbf{G} \mid g' \in G \} \cup \{ (g,g') \in E^\mathbf{G} \mid g' \in G \}|$, and the \emph{degree} of $\mathbf{G}$, in symbols $\textup{degree}(\mathbf{G})$, is the maximum degree attained by the elements of $\mathbf{G}$. A digraph $\mathbf{P}=(P,\leq^\mathbf{P})$ is a \emph{poset} if $\leq^\mathbf{P}$ is a \emph{reflexive}, \emph{antisymmetric}, and \emph{transitive} relation over $P$, that is, respectively, $\mathbf{P} \models \forall x(x \leq x)$, $\mathbf{P} \models \forall x \forall y((x \leq y \wedge y \leq x) \to x=y)$, and $\mathbf{P} \models \forall x \forall y \forall z((x \leq y \wedge y \leq z) \to x \leq z)$. A \emph{chain} in $\mathbf{P}$ is a subset $C \subseteq P$ such that $p \leq^{\mathbf{P}} q$ or $q \leq^{\mathbf{P}} p$ for all $p,q \in C$ (in particular, if $P$ is a chain in $\mathbf{P}$, we call $\mathbf{P}$ itself a chain). We say that $p$ and $q$ are \emph{incomparable} in $\mathbf{P}$ (denoted $p \parallel^{\mathbf{P}} q$) if $\mathbf{P} \not\models p \leq q \vee q\leq p$. An \emph{antichain} in $\mathbf{P}$ is a subset $A \subseteq P$ such that $p \parallel^{\mathbf{P}} q$ for all $p,q \in A$ (in particular, if $P$ is an antichain in $\mathbf{P}$, we call $\mathbf{P}$ itself an antichain). Let $\mathbf{P}$ be a poset and let $p,q \in P$. 
We say that $q$ \emph{covers} $p$ in $\mathbf{P}$ (denoted $p \prec^{\mathbf{P}} q$) if $p<^{\mathbf{P}}q$ and, for all $r \in P$, $p \leq^{\mathbf{P}} r <^{\mathbf{P}}q$ implies $p=r$. The \emph{cover graph} of $\mathbf{P}$ is the digraph $\textup{cover}(\mathbf{P})$ with vertex set $P$ and edge set $\{ (p,q) \mid p \prec^{\mathbf{P}} q \}$. If $\mathcal{P}$ is a class of posets, we let $\textup{cover}(\mathcal{P})=\{ \textup{cover}(\mathbf{P}) \mid \mathbf{P} \in \mathcal{P} \}$. \longshort{It is well known that computing the cover relation corresponding to a given order relation, and vice versa the order relation corresponding to a given cover relation, is feasible in polynomial time \cite{Schroder03}.}{It is well known that computing the cover relation corresponding to a given order relation, and vice versa the order relation corresponding to a given cover relation, is feasible in polynomial time \cite{Schroder03}.} In the figures, posets are represented by their \emph{Hasse diagrams}, that is, a diagram of their cover relation in which all edges are understood to be oriented upwards. Let $\mathcal{P}$ be the class of all posets. A \emph{poset invariant} is a mapping $\textup{inv} \colon \mathcal{P} \to \mathbb{N}$ such that $\textup{inv}(\mathbf{P})=\textup{inv}(\mathbf{Q})$ for all $\mathbf{P},\mathbf{Q} \in \mathcal{P}$ such that $\mathbf{P}$ and $\mathbf{Q}$ are isomorphic. Let $\textup{inv}$ be any poset invariant, and let $\mathcal{C}$ be any class of posets. We say that $\mathcal{C}$ is \emph{bounded} with respect to $\textup{inv}$ if there exists $b\in \mathbb{N}$ such that $\textup{inv}(\mathbf{P})\leq b$ for all $\mathbf{P} \in \mathcal{C}$. Two poset invariants are \emph{incomparable} if there exists a class of posets bounded under the first but unbounded under the second, and there exists a class of posets bounded under the second but unbounded under the first.
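As an illustration of these definitions, the poset axioms and the order-to-cover computation can be sketched as follows (relations encoded as sets of pairs; function names are our own):

```python
def is_poset(P, le):
    """Check that le (a set of pairs over P) is reflexive,
    antisymmetric, and transitive."""
    reflexive = all((p, p) in le for p in P)
    antisymmetric = all(p == q for (p, q) in le if (q, p) in le)
    transitive = all((p, r) in le
                     for (p, q) in le for (s, r) in le if s == q)
    return reflexive and antisymmetric and transitive

def cover_relation(P, le):
    """Covering pairs of a finite poset: p is covered by q iff
    p < q and no element lies strictly between them; this is the
    polynomial-time order-to-cover direction mentioned above."""
    lt = {(p, q) for (p, q) in le if p != q}
    return {(p, q) for (p, q) in lt
            if not any((p, r) in lt and (r, q) in lt for r in P)}
```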
\medskip \noindent \textit{Problems.} We refer the reader to \cite{FlumGrohe06} for the standard algorithmic setup of the model checking problem, including the underlying computational model, encoding conventions for input structures and sentences, and the notion of \emph{size} of the (encoding of an) input structure or sentence. We also refer the reader to \cite{FlumGrohe06} for further background in parameterized complexity theory (including the notion of \emph{fpt many-one reduction} and \emph{fpt Turing reduction}). Here, we mention that a \emph{parameterized problem} $(Q,\kappa)$ is a \emph{problem} $Q \subseteq \Sigma^*$ together with a \emph{parameterization} $\kappa \colon \Sigma^* \to \mathbb{N}$, where $\Sigma$ is a finite alphabet. A parameterized problem $(Q,\kappa)$ is \emph{fixed-parameter tractable (with respect to $\kappa$)}, in short \emph{fpt}, if there exists a decision algorithm for $Q$, a computable function $f \colon \mathbb{N} \to \mathbb{N}$, and a polynomial function $p \colon \mathbb{N} \to \mathbb{N}$, such that for all $x \in \Sigma^*$, the running time of the algorithm on $x$ is at most $f(\kappa(x)) \cdot p(|x|)$. We provide evidence that a parameterized problem is not fixed-parameter tractable by proving that the problem is $\textup{W}[1]$-hard under fpt many-one reductions; this holds unless the exponential time hypothesis fails \cite{FlumGrohe06}. The (parameterized) computational problems under consideration are the following. Let $\sigma$ be a relational vocabulary, $\mathcal{C}$ be a class of $\sigma$-structures, and $\mathcal{L} \subseteq \mathcal{FO}$ be a class of $\sigma$-sentences. The \emph{model checking problem} for $\mathcal{C}$ and $\mathcal{L}$, in symbols $\textsc{MC}(\mathcal{C},\mathcal{L})$, is the problem of deciding, given $(\mathbf{A},\phi) \in \mathcal{C} \times \mathcal{L}$, whether $\mathbf{A} \models \phi$.
The parameterization, given an instance $(\mathbf{A},\phi)$, returns the size of the encoding of $\phi$. The \emph{embedding problem} for $\mathcal{C}$, in symbols $\textsc{Emb}(\mathcal{C})$, is the problem of deciding, given a pair $(\mathbf{A},\mathbf{B})$, where $\mathbf{A}$ is a $\sigma$-structure and $\mathbf{B}$ is a $\sigma$-structure in $\mathcal{C}$, whether $\mathbf{A}$ embeds into $\mathbf{B}$. The parameterization, given an instance $(\mathbf{A},\mathbf{B})$, returns the size of the encoding of $\mathbf{A}$. The problems $\textsc{Hom}(\mathcal{C})$ and $\textsc{Iso}(\mathcal{C})$ are defined similarly in terms of homomorphisms and isomorphisms, respectively. \section{Basic Results}\label{sect:setup} In this section, we set the stage for our parameterized and classical complexity results in Section~\ref{sect:mainresults} and Section~\ref{sect:classical}, respectively. We start by observing some basic reducibilities between the problems under consideration. \longshort{\begin{proposition}}{\begin{proposition}[$\star$]} \label{proposition:metodological} Let $\mathcal{C}$ be a class of structures. The following are equivalent. \begin{enumerate}[label=\textit{(\roman*)}] \item $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\vee,\neg))$ is fixed-parameter tractable. \item $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\neg))$ is fixed-parameter tractable. \item $\textsc{Emb}(\mathcal{C})$ is fixed-parameter tractable. \end{enumerate} In particular, $\textsc{Emb}(\mathcal{C})$ polynomial-time (thus fpt) many-one reduces to $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\vee,\neg))$. \end{proposition} \newcommand{\pfmetodological}[0]{ \begin{proof} Let $\mathcal{C}$ be a class of $\sigma$-structures. We give a polynomial-time many-one reduction of $\textsc{Emb}(\mathcal{C})$ to $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\neg))$. 
Note that embedding a $\sigma$-structure $\mathbf{A}$ into a $\sigma$-structure $\mathbf{B} \in \mathcal{C}$ reduces to checking whether $\mathbf{B}$ verifies the existential closure of the $\mathcal{FO}(\wedge,\neg)$-formula $$\bigwedge_{a,a' \in A, a \neq a'}a \neq a' \wedge \bigwedge_{R \in \sigma} \left( \bigwedge_{\mathbf{a} \in R^{\mathbf{A}}} R\mathbf{a} \wedge \bigwedge_{\mathbf{a} \not\in R^{\mathbf{A}}} \neg R\mathbf{a} \right)\text{.}$$ Clearly, $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\neg))$ polynomial-time many-one reduces to $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\vee,\neg))$. We conclude the proof by giving an fpt Turing (in fact, even truth-table) reduction from $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\vee,\neg))$ to $\textsc{Emb}(\mathcal{C})$. Let $\phi \in \mathcal{FO}(\exists,\wedge,\vee,\neg)$. Say that $\phi$ is \emph{disjunctive} if $\phi=\psi_1 \vee \cdots \vee \psi_l$ and $\psi_i \in \mathcal{FO}(\exists,\wedge,\neg)$ for all $i \in [l]$. Clearly, for every $\phi \in \mathcal{FO}(\exists,\wedge,\vee,\neg)$, a disjunctive $\phi' \in \mathcal{FO}(\exists,\wedge,\vee,\neg)$ such that $\phi \equiv \phi'$ is computable by (equivalence-preserving) syntactic replacements. Let $\psi$ be a $\sigma$-sentence in $\mathcal{FO}(\exists,\wedge,\neg)$. 
Say that the disjunctive $\sigma$-sentence $\psi'=\chi_1 \vee \cdots \vee \chi_l$ is a \emph{completion} of $\psi$ if $\psi' \equiv \psi$ and, for all $i \in [l]$, if the quantifier prefix of $\chi_i$ is $\exists x_1 \ldots \exists x_m$, then: \begin{itemize} \item for all $(y,y') \in \{x_1,\ldots,x_m\}^2$, it holds that $y=y'$ or $y \neq y'$ occur in the quantifier free part of $\chi_i$; \item for all $R \in \sigma$ and all $(y_1,\ldots,y_{\textup{ar}(R)}) \in \{x_1,\ldots,x_m\}^{\textup{ar}(R)}$, it holds that $R y_1 \ldots y_{\textup{ar}(R)}$ or $\neg R y_1 \ldots y_{\textup{ar}(R)}$ occur in the quantifier free part of $\chi_i$; \end{itemize} moreover, $\psi'$ is said to be \emph{reduced} if, for all $i \in [l]$, $\chi_i$ is satisfiable, $\chi_i$ does not contain dummy quantifiers, and $\chi_i$ does not contain atoms of the form $y=y'$. Let $\psi'=\chi_1 \vee \cdots \vee \chi_l$ be a reduced completion of the $\sigma$-sentence $\psi \in \mathcal{FO}(\exists,\wedge,\neg)$. Clearly, $\psi'$ is computable from $\psi$ as follows. Let $\exists x_1 \ldots \exists x_{m}$ be the quantifier prefix of $\psi$. \begin{itemize} \item For all $(y,y') \in \{x_1,\ldots,x_m\}^2$ such that neither $y=y'$ nor $y \neq y'$ occur in the quantifier free part of $\psi$, conjoin $(y=y' \vee y \neq y')$ to the quantifier free part of $\psi$. \item For all $R \in \sigma$ and $(y_1,\ldots,y_{\textup{ar}(R)}) \in \{x_1,\ldots,x_m\}^{\textup{ar}(R)}$ such that neither $R y_1 \ldots y_{\textup{ar}(R)}$ nor\longversion{\\} $\neg R y_1 \ldots y_{\textup{ar}(R)}$ occur in the quantifier free part of $\psi$, conjoin $(R y_1 \ldots y_{\textup{ar}(R)} \vee \neg R y_1 \ldots y_{\textup{ar}(R)})$ to the quantifier free part of $\psi$. \item Compute a disjunctive form of the resulting sentence, eliminate equality atoms and dummy quantifiers from each disjunct, and finally eliminate unsatisfiable disjuncts (empty disjunctions are false on all structures). 
\end{itemize} Note that for each $i \in [l]$, the disjunct $\chi_i$ naturally corresponds to a $\sigma$-structure $\mathbf{A}_i$, defined as follows. Let $\exists x_1 \ldots \exists x_{m}$ be the quantifier prefix of $\chi_i$. The universe $A_i$ is $\{x_1,\ldots,x_{m}\}$, and $(y_1,\ldots,y_{\textup{ar}(R)}) \in R^{\mathbf{A}_i}$ if and only if $R y_1 \ldots y_{\textup{ar}(R)}$ occurs in the quantifier free part of $\chi_i$. We are now ready to describe the reduction. Let $(\mathbf{B},\phi)$ be an instance of $\textsc{MC}(\mathcal{C},\mathcal{FO}(\exists,\wedge,\vee,\neg))$. The algorithm first computes a disjunctive form logically equivalent to $\phi$, say $\phi \equiv \psi_1 \vee \cdots \vee \psi_l$, and then, for each $i \in [l]$, computes a reduced completion $\psi'_i$ logically equivalent to $\psi_i$, say $\psi'_i \equiv \chi'_{i,1} \vee \cdots \vee \chi'_{i,l_i}$. For each $i \in [l]$ and $j \in [l_i]$, let $\mathbf{A}_{i,j}$ be the structure corresponding to $\chi'_{i,j}$. We claim that $\mathbf{B} \models \phi$ if and only if there exist $i \in [l]$ and $j \in [l_i]$ such that $\mathbf{A}_{i,j}$ embeds into $\mathbf{B}$. The backwards direction is clear. For the forwards direction, assume $\mathbf{B} \models \phi$. Then, there exist $i \in [l]$ and $j \in [l_i]$ such that $\mathbf{B} \models \chi'_{i,j}$. Then, $\mathbf{A}_{i,j}$ embeds into $\mathbf{B}$. Thus, the algorithm works as follows. For each $i \in [l]$ and $j \in [l_i]$, it poses the query $(\mathbf{A}_{i,j},\mathbf{B})$ to the problem $\textsc{Emb}(\mathcal{C})$, and it accepts if and only if at least one query answers positively. \end{proof} } \longversion{\pfmetodological} The next observation is that model checking existential conjunctive logic (and thus the full existential logic) on posets is unlikely to be polynomial-time tractable, even if the poset is fixed. 
Let $\mathbf{B}$ be the bowtie poset defined by the universe $B=[4]$ and the covers $i \prec^\mathbf{B} j$ for all $i \in \{1,2\}$ and $j \in \{3,4\}$. \longshort{\begin{proposition}}{\begin{proposition}[$\star$]} \label{pr:exprcomplex} $\textsc{MC}(\{\mathbf{B}\},\mathcal{FO}(\exists,\wedge,\neg))$ is $\textup{NP}$-hard. \end{proposition} \newcommand{\pfexprcomplex}[0]{ \begin{proof} Let $\sigma=\{\leq,1,2,3,4\}$ be a relational vocabulary where $\textup{ar}(\leq)=2$ and $\textup{ar}(i)=1$ for all $i \in [4]$. Let $\mathbf{B}^*$ be the $\sigma$-structure such that $(B^*,\leq^{\mathbf{B}^*})$ is isomorphic to $\mathbf{B}$, say without loss of generality via the isomorphism $f(b)=b \in B^*$ for all $b \in B$, and where $b^{\mathbf{B}^*}=\{f(b)\}=\{b\}$ for all $b \in B$. By the case $n=2$ of the main theorem in Pratt and Tiuryn \cite[Theorem~2]{PrattTiuryn96}, the problem $\textsc{Hom}(\{\mathbf{B}^*\})$ is $\textup{NP}$-hard. We give a polynomial-time many-one reduction of $\textsc{Hom}(\{\mathbf{B}^*\})$ to $\textsc{MC}(\{\mathbf{B}\},\mathcal{FO}(\exists,\wedge,\neg))$. Let $\mathbf{A}$ be an instance of $\textsc{Hom}(\{\mathbf{B}^*\})$, and let $\phi$ be the existential closure of the conjunction of the following $\{\leq\}$-literals (thus, $\phi$ is a $\mathcal{FO}(\exists,\wedge,\neg)$-sentence on the vocabulary of $\mathbf{B}$): \begin{itemize} \item $z_i \neq z_j$, for all $1 \leq i<j\leq 4$; \item $z_i<z_j$, for all $i \in \{1,2\}$ and $j \in \{3,4\}$; \item $a=z_i$, for all $i \in [4]$ and $a \in i^\mathbf{A}$; \item $a \leq a'$, for all $a \leq^{\mathbf{A}} a'$. \end{itemize} It is easy to check that $\mathbf{A}$ maps homomorphically to $\mathbf{B}^*$ if and only if $\mathbf{B} \models \phi$. \end{proof} } \longversion{\pfexprcomplex} In contrast, model checking existential logic on any fixed poset $\mathbf{P}$ is trivially fixed-parameter tractable (the instance is a structure of constant size, and a sentence taken as a parameter). 
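As an illustration of the embedding problem, the following Python sketch (ours; a brute-force procedure, not an algorithm from the paper) decides whether a poset embeds into another by enumerating all injective maps and checking that comparabilities are both preserved and reflected. Its running time is $|B|^{|A|}$ up to polynomial factors, which is constant for a fixed target poset, in line with the remark above, but exponential in the parameter in general.

```python
from itertools import permutations

def embeds(A_elems, A_le, B_elems, B_le):
    """Brute force: does poset (A_elems, A_le) embed into (B_elems, B_le)?

    An embedding is an injective map e with a <= a' iff e(a) <= e(a').
    """
    for image in permutations(B_elems, len(A_elems)):
        e = dict(zip(A_elems, image))
        if all(((a, a2) in A_le) == ((e[a], e[a2]) in B_le)
               for a in A_elems for a2 in A_elems):
            return True
    return False

# The bowtie poset B: 1 and 2 below 3 and 4 (reflexive pairs included).
B = [1, 2, 3, 4]
B_le = {(p, p) for p in B} | {(i, j) for i in (1, 2) for j in (3, 4)}
# A 2-element chain embeds into the bowtie; a 3-element chain does not.
chain2_le = {(0, 0), (1, 1), (0, 1)}
chain3_le = {(i, i) for i in range(3)} | {(0, 1), (1, 2), (0, 2)}
assert embeds([0, 1], chain2_le, B, B_le)
assert not embeds([0, 1, 2], chain3_le, B, B_le)
```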
However, there are classes of posets where the embedding problem, and hence, by Proposition~\ref{proposition:metodological}, the problem of model checking existential logic, is unlikely to be fixed-parameter tractable, as we now show. First, we introduce a family of poset invariants and relate them as in Figure~\ref{fig:diagram}. Let $\mathbf{P}$ be a poset. \begin{itemize} \item The \emph{size} of $\mathbf{P}$ is the cardinality of its universe, $|P|$. \item The \emph{width} of $\mathbf{P}$, in symbols $\textup{width}(\mathbf{P})$, is the maximum size attained by an antichain in $\mathbf{P}$. \item The \emph{depth} of $\mathbf{P}$, in symbols $\textup{depth}(\mathbf{P})$, is the maximum size attained by a chain in $\mathbf{P}$. \item The \emph{degree} of $\mathbf{P}$, in symbols $\textup{degree}(\mathbf{P})$, is the degree of the order relation of $\mathbf{P}$, that is, $\textup{degree}(\leq^\mathbf{P})$. \item The \emph{cover-degree} of $\mathbf{P}$, in symbols $\textup{cover\textup{-}degree}(\mathbf{P})$, is the degree of the cover relation of $\mathbf{P}$, that is, $\textup{degree}(\textup{cover}(\mathbf{P}))$. 
\end{itemize} \begin{figure}[t] \centering \begin{picture}(0,0)% \includegraphics{invariants_pspdftex}% \end{picture}% \setlength{\unitlength}{2279sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3166,2266)(2468,-10194) \put(2521,-9106){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{width}$}% }}}} \put(3421,-10006){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{size}$}% }}}} \put(4186,-9106){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{degree}$}% }}}} \put(2971,-8206){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{cover\textup{-}degree}$}% }}}} \put(5041,-8206){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\textup{depth}$}% }}}} \end{picture}% \caption{The order of poset invariants induced by Proposition~\ref{pr:diagram}.}\label{fig:diagram} \end{figure} \longshort{\begin{proposition}}{\begin{proposition}[$\star$]}\label{pr:diagram} Let $\mathcal{P}$ be a class of posets. \begin{enumerate}[label=\textit{(\roman*)}] \item $\mathcal{P}$ has bounded degree if and only if $\mathcal{P}$ has bounded depth and bounded cover-degree. \item If $\mathcal{P}$ has bounded width, then $\mathcal{P}$ has bounded cover-degree. \item $\mathcal{P}$ has bounded size if and only if $\mathcal{P}$ has bounded width and bounded degree. \end{enumerate} \end{proposition} \newcommand{\pfdiagram}[0]{ \begin{proof} We prove \textit{(i)}. Assume that $\mathcal{P}$ has bounded degree. Let $\mathbf{P} \in \mathcal{P}$. 
Then $\textup{cover\textup{-}degree}(\mathbf{P})\leq \textup{degree}(\mathbf{P})$ follows from the fact that $\textup{cover}(\mathbf{P})$ is contained in $\leq^\mathbf{P}$, while $\textup{depth}(\mathbf{P})\leq \textup{degree}(\mathbf{P})+1$ follows from the fact that each chain forms a complete directed acyclic subgraph in $\mathbf{P}$. Conversely, let $d \in \mathbb{N}$ and $c \in \mathbb{N}$ be the largest depth and cover-degree attained by a poset in $\mathcal{P}$, respectively. Then, for every $\mathbf{P} \in \mathcal{P}$ and $p \in P$, it holds that $\textup{degree}(p) \leq 2(c+c^2+\cdots+c^{d-1})$, because every element comparable to $p$ is reachable from $p$ by an ascending or descending cover path of length at most $d-1$; hence $\mathcal{P}$ has bounded degree. We prove \textit{(ii)}. Let $w$ be the largest width attained by a poset in $\mathcal{P}$. Then, for every $\mathbf{P} \in \mathcal{P}$ and $p \in P$, it holds that $\textup{cover\textup{-}degree}(p) \leq 2w$, because the lower covers of $p$ and the upper covers of $p$ form antichains in $\mathbf{P}$, hence $p$ has at most $2w$ lower or upper covers. Hence, $\mathcal{P}$ has bounded cover-degree. We prove \textit{(iii)}. Assume that $\mathcal{P}$ has bounded size. Let $s$ be the largest size attained by a poset in $\mathcal{P}$. Then, for every $\mathbf{P} \in \mathcal{P}$, it holds that $\textup{width}(\mathbf{P}),\textup{degree}(\mathbf{P}) \leq s$, that is, $\mathcal{P}$ has bounded width and bounded degree. Conversely, by \textit{(i)}, $\mathcal{P}$ has bounded depth. Let $d$ and $w$ be the largest depth and width attained by a poset in $\mathcal{P}$, respectively. Let $\mathbf{P} \in \mathcal{P}$. By Dilworth's theorem, there exist $w$ chains in $\mathbf{P}$ whose union is $P$, hence $\textup{size}(\mathbf{P})\leq w \cdot d$. We conclude that $\mathcal{P}$ has bounded size. 
\end{proof} } \longversion{\pfdiagram} The previous proposition, together with the observation that bounded width and bounded degree (bounded width and bounded depth, bounded cover-degree and bounded depth, respectively) are incomparable, justifies the order in Figure~\ref{fig:diagram}, whose interpretation is the following: invariant $\textup{inv}$ is below invariant $\textup{inv}'$ if and only if, for every class $\mathcal{P}$ of posets, if $\mathcal{P}$ is bounded under $\textup{inv}$, then $\mathcal{P}$ is bounded under $\textup{inv}'$. The emerging hierarchy of poset invariants will provide a measure of tightness for our positive algorithmic results, once we manage to complement them with complexity results on the neighboring classes. To this aim, we immediately observe that there exists a class of posets of bounded depth where the embedding problem, and hence model checking existential first-order logic, is $\textup{W}[1]$-hard. Given any graph $\mathbf{G} \in \mathcal{G}$, construct a poset $r(\mathbf{G})=\mathbf{P}$ by taking $|G|$ pairwise disjoint $3$-element chains, and covering the bottom of the $i$th chain by the top of the $j$th chain if and only if $i$ and $j$ are adjacent in $\mathbf{G}$. Note that $\textup{depth}(\mathbf{P}) \leq 3$. Hence, the class $\mathcal{P}_{\textup{depth}}=\{ r(\mathbf{G}) \mid \mathbf{G} \in \mathcal{G} \}$ has bounded depth. \begin{proposition}\label{pr:parhardallposets} $\textsc{Emb}(\mathcal{P}_\textup{depth})$ is $\textup{W}[1]$-hard. \end{proposition} \newcommand{\pfparhardallposets}[0]{ \begin{proof} $\textsc{Clique}$ fpt many-one reduces to $\textsc{Emb}(\mathcal{P}_{\textup{depth}})$ by mapping $(\mathbf{G},k)$ to $(r(\mathbf{K}_k),r(\mathbf{G}))$. 
\end{proof} } \longshort{\pfparhardallposets}{\pfparhardallposets} The goal of the technical part of the paper is to establish the facts leading from Figure~\ref{fig:diagram} to Figure~\ref{fig:overwparcompl}: \begin{itemize} \item For the parameterized complexity of model checking existential logic, we have tractability on bounded degree classes by Seese's algorithm \cite{Seese96}, and hardness on (certain) bounded depth classes by Proposition~\ref{pr:parhardallposets}. In Section~\ref{sect:mainresults}, we establish tractability on bounded width classes by Theorem~\ref{thm:EFOFPT}, and hardness on (certain) bounded cover-degree classes by Proposition~\ref{pr:harddegree}. \item For the classical complexity of the embedding problem (Section~\ref{sect:classical}), Proposition~\ref{th:wdtract} establishes tractability on bounded size classes, Theorem~\ref{th:widthnphard} establishes hardness on (certain) bounded width classes, and Theorem~\ref{th:degreenphard} establishes hardness on (certain) bounded degree classes. \end{itemize} We conclude the section by relating our work on posets of bounded width with previous work on digraphs of bounded clique-width, and showing that our results are indeed independent. Clique-width is a prominent invariant of undirected as well as directed graphs which generalizes treewidth \cite{CourcelleOlariu00}; in particular, it is known that monadic second-order logic (precisely, $\mathcal{MSO}_1$) is fixed-parameter tractable on digraphs of bounded clique-width \cite{CourcelleMakowskyRotics00}, thus: \begin{observation} \label{obs:MSO} $\textsc{MC}(\mathcal{P},\mathcal{FO})$ is fixed-parameter tractable for any class $\mathcal{P}$ of posets such that the clique-width of $\mathcal{P}$ is bounded. 
\end{observation} Since it is possible to compute the cover relation from the order relation (and vice versa) in polynomial time, one might wonder whether using the clique-width of the cover graph would allow us to efficiently model check wider classes of posets. This turns out not to be the case: \begin{observation}[follows from Examples 1.32, 1.33 and Corollary 1.53 of \cite{CourcelleEngelfriet12}] \label{obs:cw} For any class $\mathcal{P}$ of posets, the clique-width of $\mathcal{P}$ is bounded if and only if the clique-width of $\textup{cover}(\mathcal{P})$ is bounded. \end{observation} A natural class of posets which is easily observed to have clique-width bounded by $2$ (despite having unbounded treewidth) is the class of \emph{series parallel posets}. However, we show that there exist classes of posets of bounded width which do not have bounded clique-width (otherwise, Theorem~\ref{thm:EFOFPT} would follow from Observation~\ref{obs:MSO}). \longshort{\begin{proposition}}{\begin{proposition}[$\star$]} \label{pr:nocw} There exists a class $\mathcal{P}$ of posets which has bounded width but does not have bounded clique-width. \end{proposition} \newcommand{\pfnocw}[0]{ \begin{proof} For each $i \in \mathbb{N}$, we define a poset $\mathbf{P}_i$ as follows. The universe is $P_i=\{p_{a,b}, q_{a,b} \mid a,b\in [i] \}$ and the cover relation is defined by the following pairs: \begin{itemize} \item $p_{a,b}\prec^{\mathbf{P}_i} p_{a,b+1}$ and $q_{a,b}\prec^{\mathbf{P}_i} q_{a,b+1}$, \item $p_{a,i}\prec^{\mathbf{P}_i} p_{a+1,1}$ and $q_{a,i}\prec^{\mathbf{P}_i} q_{a+1,1}$, \item $p_{a,b}\prec^{\mathbf{P}_i} q_{a+1,b}$ and $q_{a,b}\prec^{\mathbf{P}_i} p_{a+1,b}$. 
\end{itemize} Notice that $\textup{cover}(\mathbf{P}_i)$ contains an $i\times i$ grid as a subgraph; indeed, one may define the $j$th row of the grid to consist of the chain $p_{1,j}\prec^{\mathbf{P}_i} q_{2,j}\prec^{\mathbf{P}_i} p_{3,j}\prec^{\mathbf{P}_i} q_{4,j}\dots$ and similarly the $j$th column to consist of $p_{j,1}\prec^{\mathbf{P}_i} p_{j,2}\prec^{\mathbf{P}_i} p_{j,3}\dots$ for odd $j$ and $q_{j,1}\prec^{\mathbf{P}_i} q_{j,2}\prec^{\mathbf{P}_i} q_{j,3}\dots$ for even $j$. Furthermore, $\mathbf{P}_i$ has width $2$ and $\textup{cover}(\mathbf{P}_i)$ has degree $4$. We will prove that $\mathcal{P}=\{\mathbf{P}_i \mid i\in \mathbb{N}\}$ has unbounded clique-width. Let $\mathcal{H}$ be the class of undirected graphs corresponding to the covers of $\mathcal{P}$ (that is, $\mathcal{H}$ contains the symmetric closure of $\textup{cover}(\mathbf{P}_i)$ for all $\mathbf{P}_i \in \mathcal{P}$). Since $\mathcal{H}$ contains graphs with arbitrarily large grids, $\mathcal{H}$ has unbounded tree-width. Hence $\mathcal{H}$ also has unbounded clique-width by \cite[Corollary 1.53]{CourcelleEngelfriet12}, and the fact that it has bounded degree. It is a folklore fact that for any graph $\mathbf{G}$ and any orientation $\mathbf{G}'$ of $\mathbf{G}$, the clique-width of $\mathbf{G}$ is bounded by the clique-width of $\mathbf{G}'$ (indeed, one can use the same decomposition in this direction). Since $\textup{cover}(\mathcal{P})$ contains one orientation for each graph in $\mathcal{H}$ and since $\mathcal{H}$ has unbounded clique-width, we conclude that $\textup{cover}(\mathcal{P})$ has unbounded clique-width. \end{proof} } \longversion{\pfnocw} \section{Parameterized Complexity}\label{sect:mainresults} In this section, we study the parameterized complexity of the problems under consideration. The section is organized as follows. 
\begin{itemize} \item In Subsection \ref{sect:embfpt}, we develop a fixed-parameter tractable algorithm for the embedding problem on posets of bounded width (Theorem \ref{th:embfpt}), which yields that model checking existential logic on such posets is fixed-parameter tractable (Theorem \ref{thm:EFOFPT}). \item In Subsection \ref{sect:bdcover}, we provide a reduction proving $\textup{W[1]}$-hardness of model checking existential logic on posets of bounded cover-degree (Proposition \ref{pr:harddegree}). \end{itemize} \subsection{Embedding is FPT on Bounded Width Posets}\label{sect:embfpt} We first outline our proof strategy. The core of the proof lies in defining a suitable compilation of bounded width posets. We then proceed in two steps: \begin{enumerate}[label=\textit{(\roman*)}] \item proving that the homomorphism problem is polynomial-time tractable on such compilations, and \item reducing the embedding problem between two bounded width posets to fpt many instances of the homomorphism problem between compilations of these posets. \end{enumerate} For \textit{(i)}, we prove that the compilation admits a semilattice polymorphism (Lemma~\ref{lemma:compres}), and use the classical result by Jeavons et al.\ that the homomorphism problem is polynomial-time tractable on semilattice structures (Theorem \ref{th:semilpoly}). For \textit{(ii)}, we use color coding and hash functions (Theorem~\ref{th:hash}) to link a homomorphism between two compilations to the existence of an embedding between the compiled posets (Lemma \ref{lemma:correct}). \subsubsection{Known Facts}\label{sect:prim} The proof uses known facts about semilattice structures and hash functions, collected below. \medskip \noindent \textit{Semilattice Polymorphisms.} Let $\sigma$ be a finite relational vocabulary, and let $\mathbf{A}$ be a $\sigma$-structure. Let $f \colon A^m \to A$ be an $m$-ary function on $A$. 
We say that $f$ is a \emph{polymorphism} of $\mathbf{A}$ (or, $\mathbf{A}$ \emph{admits} $f$) if $f$ \emph{preserves} all relations of $\mathbf{A}$, that is, for all $R \in \sigma$, where $\textup{ar}(R)=r$, if $$(a_{1,1},a_{1,2},\ldots,a_{1,r}),\ldots,(a_{m,1},a_{m,2},\ldots,a_{m,r}) \in R^\mathbf{A}\text{,}$$ then $$( f(a_{1,1},a_{2,1},\ldots,a_{m,1}), \ldots, f(a_{1,r},a_{2,r},\ldots,a_{m,r}) ) \in R^\mathbf{A}\text{.}$$ We say that a function $f \colon A^2 \to A$ is a \emph{semilattice} function over $A$ if $f$ is idempotent, associative, and commutative on $A$, that is, $f(a,a)=a$, $f(a,f(a',a''))=f(f(a,a'),a'')$, and $f(a,a')=f(a',a)$ for all $a,a',a'' \in A$. \begin{theorem}[\cite{JeavonsCohenGyssens97}]\label{th:semilpoly} Let $\mathbf{A}$ be a $\sigma$-structure, and let $f$ be a semilattice function over $A$. If $f$ is a polymorphism of $\mathbf{A}$, then $\textsc{Hom}(\{\mathbf{A}\})$ is polynomial-time tractable. \end{theorem} \medskip \noindent \textit{Hash Functions.} Let $M$ and $N$ be sets, and let $k \in \mathbb{N}$. A \emph{$k$-perfect family of hash functions} from $M$ to $N$ is a family $\Lambda$ of functions from $M$ to $N$ such that for every subset $K \subseteq M$ of cardinality $k$ there exists $\lambda \in \Lambda$ such that $\lambda|_K$ is injective. \begin{theorem}[Theorem 13.14, \cite{FlumGrohe06}]\label{th:hash} Let $C$ be a finite set. There exists an algorithm that, given $C$ and $k \in \mathbb{N}$, computes a $k$-perfect family $\Lambda_{C,k}$ of hash functions from $C$ to $[k]$ of cardinality $2^{O(k)} \cdot \log^2 |C|$ in time $2^{O(k)} \cdot |C| \cdot \log^2 |C|$. 
\end{theorem} \newcommand{\exchainpartition}[0]{ \begin{example}\label{ex:chainpartition} Let $\mathbf{Q}$ be the poset with universe $Q=D_1 \cup D_2$, where $D_1=\{ d_{11},d_{12},d_{13},d_{14}\}$ and $D_2=\{ d_{21},d_{22},d_{23},d_{24}\}$, and cover relation $d_{11} \prec^{\mathbf{Q}} d_{12} \prec^{\mathbf{Q}} d_{13} \prec^{\mathbf{Q}} d_{14}$, $d_{21} \prec^{\mathbf{Q}} d_{22} \prec^{\mathbf{Q}} d_{23} \prec^{\mathbf{Q}} d_{24}$, $d_{11} \prec^{\mathbf{Q}} d_{23}$, $d_{12} \prec^{\mathbf{Q}} d_{24}$, and $d_{22} \prec^{\mathbf{Q}} d_{13}$. Then, $(\mathbf{D}_1,\mathbf{D}_2)$ is a chain partition of $\mathbf{Q}$. See Figure~\ref{fig:chainpartition} (left). Let $\mathbf{P}$ be the poset with universe $P=C_1 \cup C_2$, where $C_1=\{ c_{11},\ldots,c_{16}\}$ and $C_2=\{ c_{21},\ldots,c_{26}\}$, and cover relation $c_{11} \prec^{\mathbf{P}} \cdots \prec^{\mathbf{P}} c_{16}$, $c_{21} \prec^{\mathbf{P}} \cdots \prec^{\mathbf{P}} c_{26}$, $c_{11} \prec^{\mathbf{P}} c_{24}$, $c_{12} \prec^{\mathbf{P}} c_{25}$, $c_{13} \prec^{\mathbf{P}} c_{26}$, $c_{21} \prec^{\mathbf{P}} c_{14}$, $c_{22} \prec^{\mathbf{P}} c_{15}$, and $c_{23} \prec^{\mathbf{P}} c_{16}$. Then, $(\mathbf{C}_1,\mathbf{C}_2)$ is a chain partition of $\mathbf{P}$. See Figure~\ref{fig:chainpartition} (right). The mapping $e \colon Q \to P$ defined by $e(d_{11})=c_{11}$, $e(d_{12})=c_{12}$, $e(d_{13})=c_{15}$, $e(d_{14})=c_{16}$, $e(d_{21})=c_{21}$, $e(d_{22})=c_{22}$, $e(d_{23})=c_{24}$, $e(d_{24})=c_{25}$ embeds $\mathbf{Q}$ into $\mathbf{P}$. \end{example} \begin{figure}[h] \centering \includegraphics[scale=.2]{chainpartition} \caption{The posets $\mathbf{Q}$ (left) and $\mathbf{P}$ (right) in Example~\ref{ex:chainpartition}. The white points in $P$ form the image of the embedding $e \colon Q \to P$ in Example~\ref{ex:chainpartition}.} \label{fig:chainpartition} \end{figure} } \subsubsection{Semilattice Compilation}\label{sect:compil} Let $\mathbf{P}$ be a poset. 
Let $(i_1,\ldots,i_a) \in \mathbb{N}^a$ be a tuple of numbers. A \emph{chain partition} of $\mathbf{P}$ is a tuple $(\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_a})$ such that $\emptyset \neq C_{i_j} \subseteq P$ for all $j \in [a]$, $P=\bigcup_{j \in [a]}C_{i_j}$, $C_{i_j} \cap C_{i_{j'}}=\emptyset$ for all $1 \leq j<j' \leq a$, $\mathbf{C}_{i_j}$ is the substructure of $\mathbf{P}$ induced by $C_{i_j}$, and $\mathbf{C}_{i_j}$ is a chain. \longshort{\exchainpartition}{\exchainpartition} \begin{theorem}[Theorem 1, \cite{FelsnerRaghavanSpinrad03}]\label{th:felsner} Let $\mathbf{P}$ be a poset. Then, in time $O(\textup{width}(\mathbf{P}) \cdot |P|^2)$, it is possible to compute both $\textup{width}(\mathbf{P})$ and a chain partition of $\mathbf{P}$ of the form $(\mathbf{C}_1,\ldots,\mathbf{C}_{\textup{width}(\mathbf{P})})$. \end{theorem} \newcommand{\excompilp}[0]{ \begin{example}\label{ex:compilp} Let $\mathbf{Q}$ and $(\mathbf{D}_1,\mathbf{D}_2)$ be as in Example~\ref{ex:chainpartition}. Let the subtuple of $(1,2)$ be $(1,2)$ itself. Let $k_1=k_2=4=|D_1|=|D_2|$. Let $\mu_1 \colon D_1 \to [k_1]$ be defined by $\mu_1(d_{11})=1$, $\mu_1(d_{12})=2$, $\mu_1(d_{13})=3$, and $\mu_1(d_{14})=4$. Let $\mu_2 \colon D_2 \to [k_2]$ be defined by $\mu_2(d_{21})=1$, $\mu_2(d_{22})=2$, $\mu_2(d_{23})=3$, and $\mu_2(d_{24})=4$. Then, $\textup{compil}(\mathbf{Q},\mathbf{D}_{1},\mathbf{D}_{2},\mu_1,\mu_{2})$ is depicted in Figure~\ref{fig:compilq}. Let $\mathbf{P}$ and $(\mathbf{C}_1,\mathbf{C}_2)$ be as in Example~\ref{ex:chainpartition}. Let the subtuple of $(1,2)$ be $(1,2)$ itself. Let $k_1=k_2=4 \leq 6=|C_1|=|C_2|$. Let $\lambda_1 \colon C_1 \to [k_1]$ be defined by $\lambda_1(c_{11})=1$, $\lambda_1(c_{12})=2$, $\lambda_1(c_{13})=4$, $\lambda_1(c_{14})=1$, $\lambda_1(c_{15})=3$, and $\lambda_1(c_{16})=4$. 
Let $\lambda_2 \colon C_2 \to [k_2]$ be defined by $\lambda_2(c_{21})=1$, $\lambda_2(c_{22})=2$, $\lambda_2(c_{23})=3$, $\lambda_2(c_{24})=3$, $\lambda_2(c_{25})=4$, and $\lambda_2(c_{26})=1$. Then, $\textup{compil}(\mathbf{P},\mathbf{C}_{1},\mathbf{C}_{2},\lambda_1,\lambda_{2})$ is depicted in Figure~\ref{fig:compilp}. \begin{figure}[h] \centering \includegraphics[scale=.19]{compilq} \caption{Describing the structure $\textup{compil}(\mathbf{Q},\mathbf{D}_{1},\mathbf{D}_{2},\mu_1,\mu_{2})$ in Example~\ref{ex:compilp}. From left to right. The first picture displays the interpretation of $L$ (thin solid edges) and $I_{\{1,2\}}$ (gray points) induced by \textit{(i)} and \textit{(ii)}. The second picture displays the interpretation of $L$ (thin solid edges), $O_{(2,1)}$ (light gray points), and $O_{(1,2)}$ (dark gray points) induced by \textit{(i)} and \textit{(ii)}. The third picture displays the interpretation of $R_{(1,1)}$ (dotted edges), $R_{(1,2)}$ (medium solid edges), $R_{(1,3)}$ (thick solid edges), and $R_{(1,4)}$ (dashed edges), as induced by \textit{(iii)} and $\mu_1$. Similarly, the fourth picture displays the interpretation of $R_{(2,1)}$, $R_{(2,2)}$, $R_{(2,3)}$, and $R_{(2,4)}$ induced by \textit{(iii)} and $\mu_2$.} \label{fig:compilq} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=.19]{compilp} \caption{Describing the structure $\textup{compil}(\mathbf{P},\mathbf{C}_{1},\mathbf{C}_{2},\lambda_1,\lambda_{2})$ in Example~\ref{ex:compilp}, along the lines of Figure~\ref{fig:compilq}.} \label{fig:compilp} \end{figure} \end{example}} We are now ready to define the aforementioned compilations. Note that our compilations will depend not only on the poset itself, but also on a chain decomposition of the poset and a family of colorings (the significance of the latter will become clear in the proof of Lemma \ref{lemma:correct}). 
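A chain decomposition of the required form can be computed within the bound of Theorem~\ref{th:felsner}; purely as an illustration, the following Python sketch (ours, not the cited algorithm) computes a minimum chain partition via the standard Dilworth-style reduction to maximum bipartite matching, where the strict order $<$ is given explicitly as a set of pairs.

```python
def chain_partition(P, lt):
    """Partition P into width(P) chains (Dilworth via bipartite matching).

    P is a list of elements; lt is the strict order < as a set of pairs.
    """
    match = {}  # right copy: q -> p such that p < q and (p, q) is matched

    def augment(p, seen):
        # Kuhn's augmenting-path step for maximum bipartite matching.
        for q in P:
            if (p, q) in lt and q not in seen:
                seen.add(q)
                if q not in match or augment(match[q], seen):
                    match[q] = p
                    return True
        return False

    for p in P:
        augment(p, set())

    # Matched edges link each element to its successor in some chain.
    succ = {p: q for q, p in match.items()}
    chains = []
    for p in (p for p in P if p not in match):  # chain minima
        chain = [p]
        while chain[-1] in succ:
            chain.append(succ[chain[-1]])
        chains.append(chain)
    return chains

# The bowtie poset (1 and 2 below 3 and 4) has width 2.
lt = {(1, 3), (1, 4), (2, 3), (2, 4)}
assert chain_partition([1, 2, 3, 4], lt) == [[1, 4], [2, 3]]
```

Since $<$ is transitive, each path of matched edges is indeed a chain, and the number of chains equals $|P|$ minus the size of the matching, which is $\textup{width}(\mathbf{P})$ by Dilworth's theorem.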
Let $\mathbf{P}$ be a poset such that $\textup{width}(\mathbf{P})\leq w$, and let $(\mathbf{C}_1,\ldots,\mathbf{C}_{w})$ be a chain partition of $\mathbf{P}$. Let $w' \leq w$ and let $(i_1,\ldots,i_{w'})$ be a subtuple of $(1,\ldots,w)$, that is, $(i_1,\ldots,i_{w'})$ is obtained from $(1,\ldots,w)$ by deleting $w-w'$ indices. For all $j \in [w']$, let $k_{i_j} \in \mathbb{N}$ be such that $k_{i_j} \leq |C_{i_j}|$, $\Lambda_j$ be a family of functions from $C_{i_j}$ to $[k_{i_j}]$, and $(\lambda_1,\ldots,\lambda_{w'}) \in \Lambda_1 \times \cdots \times \Lambda_{w'}$. For a suitable relational vocabulary $\sigma$ depending on $w'$ and $k_{i_j}$ for all $j \in [w']$, we define the $\sigma$-structure $$\textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})\text{,}$$ which we call the \emph{compilation} of $\mathbf{P}$ with respect to the \emph{coordinatization} $(\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}})$ and the \emph{coloring} $(\lambda_1,\ldots,\lambda_{w'})$, as follows (we use $\textup{compil}(\mathbf{P})$ as a shorthand if the coordinatization and the coloring are contextually clear). The relational vocabulary $\sigma$ of $\textup{compil}(\mathbf{P})$ consists of one binary relation symbol $L$, one unary relation symbol $I_{\{j,j'\}}$ for each $2$-element subset $\{j,j'\}$ of $[w']$, one unary relation symbol $O_{(j,j')}$ for each ordered pair $(j,j')$ of distinct elements of $[w']$, and one binary relation symbol $R_{(j,k)}$ for each $j \in [w']$ and $k \in [k_{i_j}]$. The universe of $\textup{compil}(\mathbf{P})$ is $$\textup{compil}(P)=C_{i_1} \times C_{i_2} \times \cdots \times C_{i_{w'}}\text{.}$$ Let $\mathbf{c}=(c_1,\ldots,c_{w'})$ and $\mathbf{c}'=(c'_1,\ldots,c'_{w'})$ be elements of $\textup{compil}(\mathbf{P})$, and let $K_{(j,k)}=\{ c \in C_{i_j} \mid \lambda_j(c)=k \}$. 
The interpretation of the vocabulary $\sigma$ in $\textup{compil}(\mathbf{P})$ is the following: \begin{enumerate}[label=\textit{(\roman*)}] \item The interpretation of $L$ is the set of all pairs $(\mathbf{c},\mathbf{c}')$ such that $c_1 \leq^\mathbf{P} c'_1,\ldots,c_{w'} \leq^\mathbf{P} c'_{w'}$. \item For each $2$-element subset $\{j,j'\}$ of $[w']$, $I_{\{j,j'\}}$ and $O_{(j,j')}$ are interpreted, respectively, over $I_{\{j,j'\}}=\{ \mathbf{c} \mid c_{j} \parallel^\mathbf{P} c_{j'} \}$ and $O_{(j,j')}=\{ \mathbf{c} \mid c_{j} <^\mathbf{P} c_{j'} \}$. \item For each $j \in [w']$ and $k \in [k_{i_j}]$, $R_{(j,k)}$ is interpreted over the subset of the interpretation of $L$ defined by $$\{ (\mathbf{c},\mathbf{c}') \in L^{\textup{compil}(\mathbf{P})} \;\ifnum\currentgrouptype=16 \middle\fi|\; \text{$c_{j}\in K_{(j,k)},c_{j}=c'_{j}$} \} \text{.}$$ \end{enumerate} \longshort{\excompilp}{\excompilp} The intuition underlying the compilation procedure is the following. The universe of $\textup{compil}(\mathbf{P})$ is the Cartesian product of a family of chains $\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}}$ partitioning the universe of $\mathbf{P}$. The interpretation of $L$ in $\textup{compil}(\mathbf{P})$ is the natural lattice order inherited by $\textup{compil}(\mathbf{P})$ from $\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}}$. For $\{j,j'\} \subseteq [w']$, the interpretations of $I_{\{j,j'\}}$ and $O_{(j,j')}$ in $\textup{compil}(\mathbf{P})$ record, respectively, incomparabilities and comparabilities between the $j$th and $j'$th coordinate (corresponding to elements in the chains $\mathbf{C}_{i_j}$ and $\mathbf{C}_{i_{j'}}$, respectively) of the tuples in $\textup{compil}(P)$. 
Finally, for each $j \in [w']$ and $k \in [k_{i_j}]$, the interpretation of $R_{(j,k)}$ in $\textup{compil}(\mathbf{P})$ is the restriction of the lattice order of $\textup{compil}(\mathbf{P})$ to those pairs of tuples in $\textup{compil}(P)$ that agree on their $j$th coordinate, which is colored $k$ by $\lambda_j$; the relations $R_{(j,k)}$ implement the color coding technique in our setting, as will become clear in the proof of Claim~\ref{cl:cl1}. We define a binary function $$s \colon \textup{compil}(P)^2 \to \textup{compil}(P)$$ as follows. Let $\mathbf{c}=(c_1,\ldots,c_{w'})$ and $\mathbf{c}'=(c'_1,\ldots,c'_{w'})$ be elements in $\textup{compil}(P)$. Let $j \in [w']$. Recalling that $\mathbf{C}_{i_j}$ is a chain, let $d_j=\textup{min}^{\mathbf{C}_{i_j}}(c_j,c'_j)$. Define \begin{equation}\label{eq:semilattice} s(\mathbf{c},\mathbf{c}')=(d_1,\ldots,d_{w'})\text{.} \end{equation} Clearly, $s$ is idempotent, associative and commutative, and hence $s$ is a semilattice function over $\textup{compil}(P)$. \longshort{\begin{lemma}}{\begin{lemma}[$\star$]} \label{lemma:compres} Let $\mathbf{P}$ be a poset, $(\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}})$ be a coordinatization of $\mathbf{P}$, and $(\lambda_1,\ldots,\lambda_{w'})$ be a coloring of $\mathbf{P}$. Then, the function $s$ in (\ref{eq:semilattice}) is a polymorphism of $\textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})$. \end{lemma} \newcommand{\pfcompres}[0]{ \begin{proof} We denote $\textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})$ by $\textup{compil}(\mathbf{P})$ in short. We check that $s$ preserves each relation in the vocabulary. In the rest of the proof, $\mathbf{c}=(c_1,\ldots,c_{w'})$, $\mathbf{c}'=(c'_1,\ldots,c'_{w'})$, $\mathbf{d}=(d_1,\ldots,d_{w'})$, and $\mathbf{d}'=(d'_1,\ldots,d'_{w'})$ are elements of $\textup{compil}(\mathbf{P})$. $L$ in $\sigma$: We claim that $s$ preserves $L$.
Let $(\mathbf{c},\mathbf{c}'),(\mathbf{d},\mathbf{d}') \in L$. It suffices to show that, for all $j \in [w']$, $\textup{min}^{\mathbf{P}}(c_j,d_j) \leq^{\mathbf{P}} \textup{min}^{\mathbf{P}}(c'_j,d'_j)$. By hypothesis\longshort{ we have}{,} $c_1 \leq^\mathbf{P} c'_1,\ldots,c_{w'} \leq^\mathbf{P} c'_{w'}$ and $d_1 \leq^\mathbf{P} d'_1,\ldots,d_{w'} \leq^\mathbf{P} d'_{w'}$ so that $c_1 \leq^{\mathbf{C}_{i_1}} c'_1,\ldots,c_{w'} \leq^{\mathbf{C}_{i_{w'}}} c'_{w'}$ and $d_1 \leq^{\mathbf{C}_{i_1}} d'_1,\ldots,d_{w'} \leq^{\mathbf{C}_{i_{w'}}} d'_{w'}$. For all $j \in [w']$, $c_j \leq^{\mathbf{C}_{i_j}} c'_j$ and $d_j \leq^{\mathbf{C}_{i_j}} d'_j$ implies $\textup{min}^{\mathbf{C}_{i_j}}(c_j,d_j) \leq^{\mathbf{C}_{i_j}} \textup{min}^{\mathbf{C}_{i_j}}(c'_j,d'_j)$, which implies $\textup{min}^{\mathbf{P}}(c_j,d_j) \leq^{\mathbf{P}} \textup{min}^{\mathbf{P}}(c'_j,d'_j)$, and we are done. $I_{\{j,j'\}}$ in $\sigma$ for $1 \leq j < j' \leq w'$: We claim that $s$ preserves $I_{\{j,j'\}}$. Let $\mathbf{c},\mathbf{d} \in I_{\{j,j'\}}$. It suffices to show that $\textup{min}^{\mathbf{C}_{i_j}}(c_j,d_j) \parallel^\mathbf{P} \textup{min}^{\mathbf{C}_{i_{j'}}}(c_{j'},d_{j'})$. Assume the contrary for a contradiction, say $\textup{min}^{\mathbf{C}_{i_j}}(c_j,d_j) \leq^\mathbf{P} \textup{min}^{\mathbf{C}_{i_{j'}}}(c_{j'},d_{j'})$ (the other case is similar). If $c_j \leq^{\mathbf{C}_{i_j}} d_j$ and $c_{j'} \leq^{\mathbf{C}_{i_{j'}}} d_{j'}$, then $c_j \leq^\mathbf{P} c_{j'}$, contradicting the hypothesis that $c_j \parallel^\mathbf{P} c_{j'}$. Similarly, it is impossible that $d_j \leq^{\mathbf{C}_{i_j}} c_j$ and $d_{j'} \leq^{\mathbf{C}_{i_{j'}}} c_{j'}$. So, assume that $c_j \leq^{\mathbf{C}_{i_j}} d_j$ and $d_{j'} \leq^{\mathbf{C}_{i_{j'}}} c_{j'}$. Then, $c_j \leq^\mathbf{P} d_{j'} \leq^\mathbf{P} c_{j'}$ by the hypothesis for contradiction and the case distinction, a contradiction. The case $d_j \leq^{\mathbf{C}_{i_j}} c_j$ and $c_{j'} \leq^{\mathbf{C}_{i_{j'}}} d_{j'}$ is similar.
$O_{(j,j')}$ and $O_{(j',j)}$ in $\sigma$ for $1 \leq j < j' \leq w'$: We claim that $s$ preserves $O_{(j,j')}$ and $O_{(j',j)}$. We argue for $O_{(j,j')}$, and $O_{(j',j)}$ is similar. Let $\mathbf{c},\mathbf{d} \in O_{(j,j')}$. It suffices to show that $\textup{min}^{\mathbf{C}_{i_j}}(c_j,d_j) \leq^{\mathbf{P}} \textup{min}^{\mathbf{C}_{i_{j'}}}(c_{j'},d_{j'})$, since $C_{i_j} \cap C_{i_{j'}}=\emptyset$. If $c_{j} \leq^{\mathbf{C}_{i_j}} d_{j}$ and $c_{j'} \leq^{\mathbf{C}_{i_{j'}}} d_{j'}$, then $c_{j} \leq^\mathbf{P} c_{j'}$ by hypothesis; similarly if $d_{j} \leq^{\mathbf{C}_{i_j}} c_{j}$ and $d_{j'} \leq^{\mathbf{C}_{i_{j'}}} c_{j'}$. So, assume that $c_{j} \leq^{\mathbf{C}_{i_j}} d_{j}$ and $d_{j'} \leq^{\mathbf{C}_{i_{j'}}} c_{j'}$. Combining the main hypothesis and the case distinction, we have $c_{j} \leq^{\mathbf{C}_{i_j}} d_{j} \leq^{\mathbf{P}} d_{j'}$, that is, $c_{j} \leq^{\mathbf{P}} d_{j'}$. Similarly, $d_{j} \leq^{\mathbf{C}_{i_j}} c_{j}$ and $c_{j'} \leq^{\mathbf{C}_{i_{j'}}} d_{j'}$ implies $d_{j} \leq^{\mathbf{P}} c_{j'}$. $R_{(j,k)}$ for $j \in [w']$ and $k \in [k_{i_j}]$: We claim that $s$ preserves $R_{(j,k)}$. To prove the claim, let $(\mathbf{c},\mathbf{d}),(\mathbf{c}',\mathbf{d}') \in R_{(j,k)}$. Let $b,b' \in K_{(j,k)}$ be such that $c_{j}=d_{j}=b$ and $c'_{j}=d'_{j}=b'$. Assume $b \leq^{\mathbf{C}_{i_j}} b'$ (the other case is similar). Clearly, $\textup{min}^{\mathbf{C}_{i_j}}(c_{j},c'_{j})=\textup{min}^{\mathbf{C}_{i_{j}}}(d_{j},d'_{j})=b$. By hypothesis, $(\mathbf{c},\mathbf{d}),(\mathbf{c}',\mathbf{d}') \in L$, so that, by the above, $$(s(\mathbf{c},\mathbf{c}'), s(\mathbf{d},\mathbf{d}')) \in L\text{,}$$ and thus, by definition, $$(s(\mathbf{c},\mathbf{c}'), s(\mathbf{d},\mathbf{d}')) \in R_{(j,k)}\text{,}$$ which completes the proof.
\end{proof} } \longversion{\pfcompres} It follows from Lemma \ref{lemma:compres} and Theorem \ref{th:semilpoly} that, for every poset $\mathbf{P}$ and every compilation $\mathbf{P}^*$ of $\mathbf{P}$, the problem $\textsc{Hom}(\mathbf{P}^*)$ is polynomial-time tractable; this settles the main result of this section. \subsubsection{Reduction}\label{sect:reduction} The following lemma reduces an instance of the poset embedding problem to a family of instances of the homomorphism problem for suitable compilations of the given posets. The lemma is illustrated in Example \ref{ex:compilhom}. \begin{lemma} \label{lemma:correct} Let $\mathbf{Q}$ and $\mathbf{P}$ be posets such that $\textup{width}(\mathbf{Q}) \leq \textup{width}(\mathbf{P})=w$. Let $(\mathbf{C}_1,\ldots,\mathbf{C}_{w})$ be a chain partition of $\mathbf{P}$. The following are equivalent. \begin{enumerate}[label=\textit{(\roman*)}] \item $\mathbf{Q}$ embeds into $\mathbf{P}$. \item There exist $w' \leq w$, a subtuple $(i_1,\ldots,i_{w'})$ of $(1,\ldots,w)$, a chain partition $(\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}})$ of $\mathbf{Q}$ such that $|D_{i_j}| \leq |C_{i_j}|$ for all $j \in [w']$, and a tuple $(\mu_1,\ldots,\mu_{w'})$, where $\mu_j$ is a bijection from $D_{i_j}$ to $[|D_{i_j}|]$ for all $j \in [w']$, such that, for all tuples $(\Lambda_{1},\ldots,\Lambda_{w'})$, where $\Lambda_j$ is a $|D_{i_j}|$-perfect family of hash functions from $C_{i_j}$ to $[|D_{i_j}|]$ for all $j \in [w']$, there exists a tuple $(\lambda_1,\ldots,\lambda_{w'}) \in \Lambda_{1} \times \cdots \times \Lambda_{w'}$ such that $$\mathbf{Q}^* \in \textsc{Hom}(\mathbf{P}^*)\text{,}$$ where \begin{align*} \mathbf{Q}^*&=\textup{compil}(\mathbf{Q},\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}},\mu_1,\ldots,\mu_{w'})\text{,}\\ \mathbf{P}^*&=\textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})\text{.} \end{align*} \end{enumerate} \end{lemma} \newcommand{\pfforwclaima}[0]{
\begin{proof}[Proof of Claim~\ref{claim:forwclaim1}]\renewcommand{\qedsymbol}{$\dashv$} To prove the claim, let $e(Q)=\{ e(q) \mid q \in Q \}$. Let $(i_1,i_2,\ldots,i_{w'})$ be the subtuple of $(1,2,\ldots,w)$ uniquely determined by deleting the index $i \in [w]$ if and only if $e(Q) \cap C_{i}=\emptyset$. For all $j \in [w']$, let $D_{i_j}=e^{-1}(C_{i_j})$, and let $\mathbf{D}_{i_j}$ be the substructure of $\mathbf{Q}$ induced by $D_{i_j}$. Then, $(\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}})$ is a chain partition of $\mathbf{Q}$, and clearly $e(D_{i_j}) \subseteq C_{i_j}$ for all $j \in [w']$, which settles the claim. \end{proof}} \newcommand{\pfforwclaim}[0]{ \begin{proof}[Proof of Claim~\ref{claim:forwclaim}]\renewcommand{\qedsymbol}{$\dashv$} Note that $\mathbf{Q}^*$ and $\mathbf{P}^*$ have the same vocabulary. To prove the claim, we check that $h$ preserves all relations in the vocabulary. Below, $\mathbf{d}=(d_1,\ldots,d_{w'})$ and $\mathbf{d}'=(d'_1,\ldots,d'_{w'})$ are elements of $\mathbf{Q}^*$. $L$: If $(\mathbf{d},\mathbf{d}') \in L^{\mathbf{Q}^*}$, then $d_j \leq^{\mathbf{D}_{i_j}} d'_j$ for all $j \in [w']$, then $d_j \leq^{\mathbf{Q}} d'_j$ for all $j \in [w']$, then $e(d_j) \leq^{\mathbf{P}} e(d'_j)$ for all $j \in [w']$, then $e(d_j) \leq^{\mathbf{C}_{i_j}} e(d'_j)$ for all $j \in [w']$, then $((e(d_1),\ldots,e(d_{w'})),(e(d'_1),\ldots,e(d'_{w'}))) \in L^{\mathbf{P}^*}$. Altogether this yields $(h(\mathbf{d}),h(\mathbf{d}')) \in L^{\mathbf{P}^*}$. $I_{\{j,j'\}}$: If $\mathbf{d} \in I_{\{j,j'\}}^{\mathbf{Q}^*}$, then $d_j \parallel^{\mathbf{Q}} d_{j'}$, then $e(d_j) \parallel^{\mathbf{P}} e(d_{j'})$, then $(e(d_1),\ldots,e(d_{w'})) \in I_{\{j,j'\}}^{\mathbf{P}^*}$, that is, $h(\mathbf{d}) \in I_{\{j,j'\}}^{\mathbf{P}^*}$.
$1 \leq j<j' \leq w'$, $O_{(j,j')}$: If $\mathbf{d} \in O_{(j,j')}^{\mathbf{Q}^*}$, then $d_j <^{\mathbf{Q}} d_{j'}$, then $e(d_j) <^{\mathbf{P}} e(d_{j'})$, then $(e(d_1),\ldots,e(d_{w'})) \in O_{(j,j')}^{\mathbf{P}^*}$, that is, $h(\mathbf{d}) \in O_{(j,j')}^{\mathbf{P}^*}$. The case $O_{(j',j)}$ is similar. $j \in [w']$, $k \in [|D_{i_j}|]$, $R_{(j,k)}$: If $(\mathbf{d},\mathbf{d}') \in R_{(j,k)}^{\mathbf{Q}^*}$, then first observe that $(\mathbf{d},\mathbf{d}') \in L^{\mathbf{Q}^*}$, so that $(h(\mathbf{d}),h(\mathbf{d}')) \in L^{\mathbf{P}^*}$ by the argument above. We have that $d_{j}=d'_{j}=d$ for some $d \in D_{i_j}$ such that $\mu_{j}(d)=k$. By construction, $\mu_{j}(d)=k$ if and only if there exists $c \in C_{i_j}$ such that $d=e^{-1}(c)$ and $\lambda_{j}(c)=k$. Therefore, $e(d_{j})=e(d'_{j})=e(d)=c$, so that $((e(d_1),\ldots,e(d_{w'})),(e(d'_1),\ldots,e(d'_{w'}))) \in R_{(j,k)}^{\mathbf{P}^*}$, that is, $(h(\mathbf{d}),h(\mathbf{d}')) \in R_{(j,k)}^{\mathbf{P}^*}$. \end{proof}} \newcommand{\pfcla}[0]{ \begin{proof}[Proof of Claim~\ref{cl:cl1}]\renewcommand{\qedsymbol}{$\dashv$} Let $\mu_j(q)=k$. Since $\{ \mathbf{d} \in Q^* \mid d_j=q \}$ is nonempty, there exists at least one element $p \in C_{i_j}$ such that, for some $\mathbf{d} \in Q^*$ with $d_j=q$, $h(\mathbf{d})=\mathbf{c}$ and $c_j=p$. Let $p,p' \in C_{i_j}$ be such that, for some $\mathbf{d},\mathbf{d}' \in Q^*$ with $d_j=d'_j=q$, $h(\mathbf{d})=\mathbf{c}$ and $c_j=p$, and $h(\mathbf{d}')=\mathbf{c}'$ and $c'_j=p'$. We prove that $p=p'$ and $\lambda_j(p)=k$. We distinguish two cases. Case $1$: $(\mathbf{d},\mathbf{d}') \in L^{\mathbf{Q}^*}$ or $(\mathbf{d}',\mathbf{d}) \in L^{\mathbf{Q}^*}$. Assume $(\mathbf{d},\mathbf{d}') \in L^{\mathbf{Q}^*}$. Then, $(\mathbf{d},\mathbf{d}') \in R_{(j,k)}^{\mathbf{Q}^*}$. Then, $(\mathbf{c},\mathbf{c}') \in R_{(j,k)}^{\mathbf{P}^*}$, so that $c_j=c'_j$ by definition of $R_{(j,k)}^{\mathbf{P}^*}$, that is $p=p'$, and $\lambda_j(p)=k$.
The argument is similar if $(\mathbf{d}',\mathbf{d}) \in L^{\mathbf{Q}^*}$. Case $2$: $(\mathbf{d},\mathbf{d}') \not\in L^{\mathbf{Q}^*}$ and $(\mathbf{d}',\mathbf{d}) \not\in L^{\mathbf{Q}^*}$. Clearly it then holds that $\textup{min}^{\mathbf{D}_{i_j}}(d_{j},d'_{j})=q$ and $\textup{min}^{\mathbf{D}_{i_{j'}}}(d_{j'},d'_{j'}) \leq^{\mathbf{D}_{i_{j'}}} d_{j'},d'_{j'}$ for all $j' \in [w']$. Therefore, $$( (\textup{min}^{\mathbf{D}_{i_1}}(d_1,d'_1),\ldots,\textup{min}^{\mathbf{D}_{i_{w'}}}(d_{w'},d'_{w'})), \mathbf{d} ) \in R_{(j,k)}^{\mathbf{Q}^*}\text{,}$$ and $$( (\textup{min}^{\mathbf{D}_{i_1}}(d_1,d'_1),\ldots,\textup{min}^{\mathbf{D}_{i_{w'}}}(d_{w'},d'_{w'})), \mathbf{d}' ) \in R_{(j,k)}^{\mathbf{Q}^*}\text{.}$$ Let $$h((\textup{min}^{\mathbf{D}_{i_1}}(d_1,d'_1),\ldots,\textup{min}^{\mathbf{D}_{i_{w'}}}(d_{w'},d'_{w'})))=\mathbf{c}''\text{.}$$ Then, $(\mathbf{c}'',\mathbf{c}) \in R_{(j,k)}^{\mathbf{P}^*}$ and $(\mathbf{c}'',\mathbf{c}') \in R_{(j,k)}^{\mathbf{P}^*}$, so that $c''_j=c_j=c'_j$, that is $p=p'$, and $\lambda_j(p)=k$. \end{proof}} \newcommand{\pfclb}[0]{ \begin{proof}[Proof of Claim~\ref{cl:cl2}]\renewcommand{\qedsymbol}{$\dashv$} Let $q,q' \in Q$. It is sufficient to check that $q<^\mathbf{Q} q'$ implies $e(q)<^\mathbf{P} e(q')$, and $q\parallel^\mathbf{Q} q'$ implies $e(q)\parallel^\mathbf{P} e(q')$. Let $j,j' \in [w']$ be such that $q \in D_{i_j}$ and $q' \in D_{i_{j'}}$. $q<^\mathbf{Q} q'$ implies $e(q)<^\mathbf{P} e(q')$: Assume $q<^\mathbf{Q} q'$. Assume that $j \leq j'$ (the case $j' \leq j$ is similar). We distinguish two cases. Case $1$: If $j=j'$, then let $\mu_j(q)=k$ and $\mu_j(q')=k'$. Since $q,q' \in D_{i_j}$ and $\mu_j$ is a bijection from $D_{i_j}$ to $[|D_{i_j}|]$, we have that $k \neq k'$. Hence, if $e(q)=p \in C_{i_j}$ and $e(q')=p' \in C_{i_j}$, then by the definition of $e$ we have that $\lambda_j(p)=k\neq k'=\lambda_j(p')$, so that $p \neq p'$.
We have that \begin{align*} ( &(\textup{bot}(\mathbf{D}_{i_1}),\ldots,q,\ldots,\textup{bot}(\mathbf{D}_{i_{w'}})), \\ &(\textup{bot}(\mathbf{D}_{i_1}),\ldots,q',\ldots,\textup{bot}(\mathbf{D}_{i_{w'}}))) \in L^{\mathbf{Q}^*}\text{,} \end{align*} where $q$ and $q'$ occur at the $j$th coordinate, and $\textup{bot}(\mathbf{D}_{i_{j''}})$ is the bottom of chain $\mathbf{D}_{i_{j''}}$ for all $j'' \in [w']\setminus\{j\}$. Let\shortversion{ us set} $h((\textup{bot}(\mathbf{D}_{i_1}),\ldots,q,\ldots,\textup{bot}(\mathbf{D}_{i_{w'}})))=\mathbf{c} \in P^*$ and similarly $h((\textup{bot}(\mathbf{D}_{i_1}),\ldots,q',\ldots,\textup{bot}(\mathbf{D}_{i_{w'}})))=\mathbf{c}' \in P^*$. Then $$(\mathbf{c},\mathbf{c}') \in L^{\mathbf{P}^*}\text{,}$$ so that, in particular, $c_j \leq^{\mathbf{C}_{i_j}} c'_j$. We claim that $c_j=p$. Indeed, since $h$ is a homomorphism, it is the case that $\mu_j(q)=\lambda_j(c_j)=k$, because there is an $R_{(j,k)}$ loop over the elements of $(\textup{bot}(\mathbf{D}_{i_1}),\ldots,q,\ldots,\textup{bot}(\mathbf{D}_{i_{w'}}))$ in $\mathbf{Q}^*$. By Claim~\ref{cl:cl1}, there exists a unique element in $C_{i_j}$ having the same color as $q$ and occurring at the $j$th coordinate of any $h((\ldots,q,\ldots)) \in P^*$, and this element is $e(q)=p$ by definition. Similarly, $c'_j=p'$. Thus, since we observed that $p \neq p'$, we have that $p <^{\mathbf{C}_{i_j}} p'$, and therefore, $e(q)=p <^{\mathbf{P}} p'=e(q')$. Case $2$: If $j<j'$, then $e(q)=p \in C_{i_j}$ and $e(q')=p' \in C_{i_{j'}}$, so that $p \neq p'$ because $C_{i_j} \cap C_{i_{j'}}=\emptyset$. We have that $$(\ldots,q,\ldots,q',\ldots) \in O_{(j,j')}^{\mathbf{Q}^*}\text{,}$$ where $q$ occurs at the $j$th coordinate and $q'$ occurs at the $j'$th coordinate, so that, if $h((\ldots,q,\ldots,q',\ldots))=\mathbf{c} \in P^*$, then $$\mathbf{c} \in O_{(j,j')}^{\mathbf{P}^*}\text{,}$$ that is, $c_j <^{\mathbf{P}} c_{j'}$.
We claim that $c_j=p$ and $c_{j'}=p'$, which implies $e(q)=p <^{\mathbf{P}} p'=e(q')$. Indeed, since $h$ is a homomorphism, it is the case that $\mu_j(q)=\lambda_j(c_j)=k$ and $\mu_{j'}(q')=\lambda_{j'}(c_{j'})=k'$, because there is both an $R_{(j,k)}$ loop and an $R_{(j',k')}$ loop over $(\ldots,q,\ldots,q',\ldots)$ in $\mathbf{Q}^*$. Then, by Claim~\ref{cl:cl1} and the definition of $e$, it is the case that $c_j=e(q)=p$ and $c_{j'}=e(q')=p'$. $q \parallel^\mathbf{Q} q'$ implies $e(q) \parallel^\mathbf{P} e(q')$: Let $\mu_j(q)=k$ and $\mu_{j'}(q')=k'$. We have that $$(\ldots,q,\ldots,q',\ldots) \in I_{\{j,j'\}}^{\mathbf{Q}^*}\text{,}$$ where $q$ occurs at the $j$th coordinate and $q'$ occurs at the $j'$th coordinate, so that, if $h((\ldots,q,\ldots,q',\ldots))=\mathbf{c} \in P^*$, then $$\mathbf{c} \in I_{\{j,j'\}}^{\mathbf{P}^*}\text{,}$$ that is, $c_j\parallel^{\mathbf{P}} c_{j'}$. We claim that $c_j=e(q)$ and $c_{j'}=e(q')$, which implies $e(q) \parallel^{\mathbf{P}} e(q')$. Indeed, since $h$ is a homomorphism, it is the case that $\lambda_j(c_j)=k$ and $\lambda_{j'}(c_{j'})=k'$, because there is both an $R_{(j,k)}$ loop and an $R_{(j',k')}$ loop over $(\ldots,q,\ldots,q',\ldots)$ in $\mathbf{Q}^*$. Then, by Claim~\ref{cl:cl1} and the definition of $e$, it is the case that $c_j=e(q)$ and $c_{j'}=e(q')$. \end{proof}} \newcommand{\excompilhom}[0]{ \begin{example}\label{ex:compilhom} Let $\mathbf{Q}$ and $\mathbf{P}$ be the posets in Example~\ref{ex:chainpartition}, so that $\mathbf{Q}$ embeds into $\mathbf{P}$ via the map $e \colon Q \to P$ defined in the example (see Figure~\ref{fig:chainpartition}). Let $\mathbf{Q}^*=\textup{compil}(\mathbf{Q},\mathbf{D}_{1},\mathbf{D}_{2},\mu_1,\mu_{2})$ and $\mathbf{P}^*=\textup{compil}(\mathbf{P},\mathbf{C}_{1},\mathbf{C}_{2},\lambda_1,\lambda_{2})$ be the structures in Example~\ref{ex:compilp}, respectively compiling $\mathbf{Q}$ and $\mathbf{P}$.
The homomorphism $h \colon Q^* \to P^*$, corresponding to the embedding $e \colon Q \to P$ as by (the forward direction of) Lemma~\ref{lemma:correct}, is depicted in Figure~\ref{fig:compilhom}. \begin{figure}[h] \centering \includegraphics[scale=.2]{compilhom} \caption{The structures $\mathbf{Q}^*$ (left) and $\mathbf{P}^*$ (right) in Example~\ref{ex:compilhom}. The white points in $P^*$ form the image of the homomorphism $h \colon Q^* \to P^*$ in Example~\ref{ex:compilhom}. It is possible to check that $h$ is a homomorphism by direct inspection of Figure~\ref{fig:compilq} and Figure~\ref{fig:compilp}.} \label{fig:compilhom} \end{figure} \end{example}} \begin{proof}[Proof of Lemma~\ref{lemma:correct}] $\textit{(i)} \Rightarrow \textit{(ii)}$: Let $e \colon Q\to P$ be an embedding of $\mathbf{Q}$ into $\mathbf{P}$. \longshort{\begin{claim}}{\begin{claim}} \label{claim:forwclaim1} There exist $w' \in \mathbb{N}$ such that $\textup{width}(\mathbf{Q}) \leq w' \leq w$, a subtuple $(i_1,i_2,\ldots,i_{w'})$ of $(1,2,\ldots,w)$, and a chain partition $(\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}})$ of $\mathbf{Q}$ such that, for all $j \in [w']$, $e(D_{i_j})=\{ e(d) \mid d \in D_{i_j} \} \subseteq C_{i_j}$. \end{claim} \longshort{\pfforwclaima}{\pfforwclaima} We let $\mathbf{Q}^*=\textup{compil}(\mathbf{Q},\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}},\mu_1,\ldots,\mu_{w'})$ and $\mathbf{P}^*=\textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})$ be the compilations of $\mathbf{Q}$ and $\mathbf{P}$ respectively, given by the colorings $(\mu_1,\ldots,\mu_{w'})$ and $(\lambda_1,\ldots,\lambda_{w'})$ defined as follows. For each $j \in [w']$, let $\Lambda_j$ be a $|D_{i_j}|$-perfect family of hash functions from $C_{i_j}$ to $[|D_{i_j}|]$. Let $j \in [w']$. 
Let $\lambda_j \in \Lambda_j$ be such that $\lambda_j|_{e(D_{i_j})}$ is injective; indeed such a $\lambda_j$ exists, because $e(D_{i_j})$ is a subset of $C_{i_j}$ of cardinality $|D_{i_j}|$ (as $e$ is injective), and $\Lambda_j$ is a $|D_{i_j}|$-perfect family of hash functions from $C_{i_j}$ to $[|D_{i_j}|]$. Let $e(D_{i_j})=\{c_1,\ldots,c_{|D_{i_j}|}\}$. Let $\lambda_j(c_1)=k_1,\ldots,\lambda_j(c_{|D_{i_j}|})=k_{|D_{i_j}|}$. We let $\mu_j$ be such that $\mu_j(e^{-1}(c_i))=k_i$ for all $i \in [|D_{i_j}|]$. Clearly, $\mu_j$ is a bijection from $D_{i_j}$ to $[|D_{i_j}|]$. The following claim settles the forward direction. \longshort{\begin{claim}}{\begin{claim}} \label{claim:forwclaim} The function $h \colon Q^* \to P^*$ defined by $$h((d_1,\ldots,d_{w'}))=(e(d_1),\ldots,e(d_{w'}))$$ for all $(d_1,\ldots,d_{w'}) \in Q^*$ maps $\mathbf{Q}^*$ homomorphically to $\mathbf{P}^*$. \end{claim} \longshort{\pfforwclaim}{\pfforwclaim} $\textit{(ii)} \Rightarrow \textit{(i)}$: Let $\mathbf{Q}^*$ and $\mathbf{P}^*$ be specified as in the statement of the lemma, and let $h \colon Q^* \to P^*$ be a homomorphism from $\mathbf{Q}^*$ to $\mathbf{P}^*$. We define a function $e \colon Q\to P$ as follows. Below, $\mathbf{d}=(d_1,\ldots,d_{w'})$ and $\mathbf{d}'=(d'_1,\ldots,d'_{w'})$ are elements of $\mathbf{Q}^*$, while $\mathbf{c}=(c_1,\ldots,c_{w'})$, $\mathbf{c}'=(c'_1,\ldots,c'_{w'})$, and $\mathbf{c}''=(c''_1,\ldots,c''_{w'})$ are elements of $\mathbf{P}^*$. Let $q \in Q$. Let $j \in [w']$ be such that $q \in D_{i_j}$. \longshort{\begin{claim}}{\begin{claim}} \label{cl:cl1} There exists a unique $p \in C_{i_j} \subseteq P$ such that: \begin{itemize} \item if $h(\mathbf{d})=\mathbf{c}$ and $d_j=q$, then $c_j=p$; \item $\mu_j(q)=\lambda_j(p)$. \end{itemize} \end{claim} \longshort{\pfcla}{\pfcla} We define $e(q)=p\text{,}$ where $p \in P$ is the unique element identified by Claim~\ref{cl:cl1} relative to $q$. The following claim then settles the backwards direction.
\longshort{\begin{claim}}{\begin{claim}} \label{cl:cl2} $e$ embeds $\mathbf{Q}$ into $\mathbf{P}$. \end{claim} \longshort{\pfclb}{\pfclb} The statement is proved.\end{proof} \longshort{\excompilhom}{\excompilhom} \subsubsection{Algorithm}\label{sect:proof} We are now ready to list the pseudocode of our main algorithm. The input is a pair $(\mathbf{Q},\mathbf{P})$ of posets. \begin{tabbing} \textsc{Algorithm}$(\mathbf{Q},\mathbf{P})$\\ 1 \quad \= \textbf{if} ($|P|<|Q|$ \textbf{or} $\textup{width}(\mathbf{P})<\textup{width}(\mathbf{Q})$) \textbf{then reject}\\ 2 \> \= $w \leftarrow \textup{width}(\mathbf{P})$ \\ 3 \> \= compute a chain partition $(\mathbf{C}_1,\ldots,\mathbf{C}_w)$ of $\mathbf{P}$ \\ 4 \> \textbf{foreach} $1 \leq w' \leq w$,\\ \> \> \quad \= subtuple $(i_1,\ldots,i_{w'})$ of $(1,\ldots,w)$,\\ \> \> \> chain partition $(\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}})$ of $\mathbf{Q}$,\\ \> \> \> coloring $(\mu_1,\ldots,\mu_{w'}) \in M_1 \times \cdots \times M_{w'}$ \textbf{do}\\ 5 \> \quad \= \textbf{if} exists $j \in [w']$ such that $|C_{i_j}|<|D_{i_j}|$ \textbf{then reject}\\ 6 \> \> $\mathbf{Q}^* \leftarrow \textup{compil}(\mathbf{Q},\mathbf{D}_{i_1},\ldots,\mathbf{D}_{i_{w'}},\mu_1,\ldots,\mu_{w'})$\\ 7 \> \> \textbf{foreach} $j \in [w']$ \textbf{do}\\ 8 \> \> \quad \= $\Lambda_j \leftarrow$ $|D_{i_j}|$-perfect family of hash functions\\ \> \> \> $\ \ \ \ \ \ \ $ from $C_{i_j}$ to $[|D_{i_j}|]$\\ 9 \> \> \textbf{foreach} $(\lambda_1,\ldots,\lambda_{w'}) \in \Lambda_1 \times \cdots \times \Lambda_{w'}$ \textbf{do}\\ 10 \> \> \quad \= $\mathbf{P}^* \leftarrow \textup{compil}(\mathbf{P},\mathbf{C}_{i_1},\ldots,\mathbf{C}_{i_{w'}},\lambda_1,\ldots,\lambda_{w'})$\\ 11 \> \> \> \textbf{if} $\mathbf{Q}^* \in \textsc{Hom}(\mathbf{P}^*)$ \textbf{then accept}\\ 12 \> \textbf{reject} \end{tabbing} In Line~4, $M_j$ is the set of all bijections from $D_{i_j}$ to $[|D_{i_j}|]$, for all $j \in [w']$.
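The reason the test in Line~11 is polynomial-time tractable is that the compilation carries the semilattice operation $s$ of (\ref{eq:semilattice}), the coordinate-wise minimum with respect to the chain orders. The following Python sketch is purely illustrative and is not part of the formal development (the two toy chains and all names are our own); it computes $s$ over a product of two chains and verifies, by brute force, the semilattice laws and the preservation of the order $L$, i.e., the polymorphism condition of Lemma~\ref{lemma:compres} for $L$.

```python
from itertools import product

# Toy instance: two chains C1 and C2, each listed bottom-to-top; the
# universe of the compilation is C1 x C2, and the list position of an
# element encodes its position within its chain.
C1 = ["p0", "p1", "p2"]
C2 = ["q0", "q1"]
pos = {c: i for chain in (C1, C2) for i, c in enumerate(chain)}

def s(cs, ds):
    """Coordinate-wise minimum with respect to the chain orders."""
    return tuple(c if pos[c] <= pos[d] else d for c, d in zip(cs, ds))

def leq(cs, ds):
    """The lattice order L: comparison in every coordinate."""
    return all(pos[c] <= pos[d] for c, d in zip(cs, ds))

universe = list(product(C1, C2))

# s is idempotent, commutative and associative, hence a semilattice
# operation on the universe of the compilation ...
assert all(s(a, a) == a for a in universe)
assert all(s(a, b) == s(b, a) for a, b in product(universe, repeat=2))
assert all(s(s(a, b), c) == s(a, s(b, c))
           for a, b, c in product(universe, repeat=3))

# ... and it preserves L: applying s to two L-related pairs yields an
# L-related pair, which is the polymorphism condition for L.
related = [(a, b) for a, b in product(universe, repeat=2) if leq(a, b)]
assert all(leq(s(a, c), s(b, d)) for a, b in related for c, d in related)
```

Checking the remaining relations $I_{\{j,j'\}}$, $O_{(j,j')}$, and $R_{(j,k)}$ would additionally require the order of the ambient poset, exactly as in the proof of Lemma~\ref{lemma:compres}.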
We conclude by proving that the algorithm above has the desired properties, from which the main result of the section follows. \longshort{\begin{theorem}}{\begin{theorem}[$\star$]} \label{th:embfpt} Let $\mathcal{P}$ be a class of posets of bounded width. There exists an algorithm deciding any instance $(\mathbf{Q},\mathbf{P})$ of $\textsc{Emb}(\mathcal{P})$ in $2^{O(k\log k)}\cdot n^{O(1)}$ time, where $n=|P|$ and $k=|Q|$. \end{theorem} \newcommand{\pfembfpt}[0]{ \begin{proof} By Lemma \ref{lemma:correct}, \textsc{Algorithm} accepts if and only if there exists an embedding of $\mathbf{Q}$ into $\mathbf{P}$. Let us analyze its running time. Let $n=|P|$ and $k=|Q|$. In the rest of the analysis, we assume $k \leq n$; otherwise, the algorithm rejects in time $O(k+n)$ by the first test in Line 1. The second test in Line 1 and Lines 2-3 are feasible in time $n^{O(1)}$ by Theorem~\ref{th:felsner}. The loop between Lines 4 and 10 executes at most $2^{O(k\log k)}$ times, and Lines 5-6 are feasible in time $n^{O(1)}$. The two loops in Lines 7-8 and 9-11 are feasible in time $2^{O(k)}\cdot n^{O(1)}$ by Theorem \ref{th:hash}; in particular, Line 11 executes in time $n^{O(1)}$ by Lemma \ref{lemma:compres} and Theorem \ref{th:semilpoly}. Hence the total running time is bounded above by $2^{O(k\log k)}\cdot n^{O(1)}$. \end{proof}} \longversion{\pfembfpt} \begin{theorem} \label{thm:EFOFPT} Let $\mathcal{P}$ be a class of posets of bounded width. Then, $\textsc{MC}(\mathcal{P},\mathcal{FO}(\exists,\wedge,\vee,\neg))$ is fixed-parameter tractable (with single exponential parameter dependence). \begin{proof} Directly from Proposition~\ref{proposition:metodological} and Theorem~\ref{th:embfpt}.
\end{proof} \end{theorem} \subsection{Embedding is $\textup{W}[1]$-hard on Bounded Cover-Degree Posets}\label{sect:bdcover} We construct a class $\mathcal{P}_{\textup{cover\textup{-}degree}}$ of bounded cover-degree posets such that $\textsc{Emb}(\mathcal{P}_{\textup{cover\textup{-}degree}})$ is $\textup{W}[1]$-hard. By Proposition~\ref{proposition:metodological}, it follows that $\textsc{MC}(\mathcal{P}_{\textup{cover\textup{-}degree}},\mathcal{FO}(\exists,\wedge,\vee,\neg))$ is $\textup{W}[1]$-hard. Let $\mathbf{G}=(V,E^\mathbf{G})$ be a graph and let $V=[n]$. Then $r(\mathbf{G})=\mathbf{P}$ is the poset defined as follows. The universe of $\mathbf{P}$ is $P=\bigcup_{i \in [n]}P_i$ where, for all $i \in [n]$, \longshort{\begin{align*} P_i = &\{ \bot_i,a_i,b_i,c_i,d_i,\top_i \} \cup \{ l_{(i,j)}, u_{(i,j)} \mid j \in [n], (i,j) \in E^{\mathbf{G}} \}\text{.} \end{align*}}{\begin{align*} P_i = &\{ \bot_i,a_i,b_i,c_i,d_i,\top_i \} \\ & \cup \{ l_{(i,j)}, u_{(i,j)} \mid j \in [n], (i,j) \in E^{\mathbf{G}} \}\text{.} \end{align*}} The order is defined by the following relations, for each $i,j \in [n]$: \begin{itemize} \item $a_i \prec^\mathbf{P} b_i$, $a_i \prec^\mathbf{P} c_i$, $b_i \prec^\mathbf{P} d_i$, $c_i \prec^\mathbf{P} d_i$, and $b_i \parallel^\mathbf{P} c_i$; \item $\bot_i \prec^\mathbf{P} l_{(i,1)} \prec^\mathbf{P} \cdots \prec^\mathbf{P} l_{(i,n)} \prec^\mathbf{P} a_i$; \item $d_i \prec^\mathbf{P} u_{(i,1)} \prec^\mathbf{P} \cdots \prec^\mathbf{P} u_{(i,n)} \prec^\mathbf{P} \top_i$; \item $l_{(i,j)} \prec^\mathbf{P} u_{(j,i)}$ if and only if $(i,j) \in E^{\mathbf{G}}$. \end{itemize} The construction satisfies the following properties.
Let $\mathbf{G} \in \mathcal{G}$: \begin{enumerate}[label=\textit{(\roman*)}] \item since $\textup{cover\textup{-}degree}(r(\mathbf{G})) \leq 3$, the class $\mathcal{P}_{\textup{cover\textup{-}degree}}=\{ r(\mathbf{G}) \mid \mathbf{G} \in \mathcal{G} \}$ has bounded cover-degree; \item $r(\mathbf{G})$ can be constructed in polynomial time; \item for any $j,j'\in [n]$, $j \neq j'$, we have $\bot_j <^{\mathbf{P}} \top_{j'}$ if and only if $(j,j')\in E^{\mathbf{G}}$. \end{enumerate} \longshort{\begin{proposition}}{\begin{proposition}[$\star$]} \label{pr:harddegree} $\textsc{Emb}(\mathcal{P}_{\textup{cover\textup{-}degree}})$ is $\textup{W}[1]$-hard. \end{proposition} \newcommand{\pfharddegree}[0]{ \begin{proof} We give an fpt many-one reduction from the $\textsc{Clique}$ problem to $\textsc{Emb}(\mathcal{P}_{\textup{cover\textup{-}degree}})$, which suffices since $\textsc{Clique}$ is $\textup{W}[1]$-hard. The reader is advised to inspect Example~\ref{ex:degree}. Let $(\mathbf{G},k)$ be an instance of $\textsc{Clique}$; the question is whether $\mathbf{G}$ contains a clique on $k \in \mathbb{N}$ vertices. Let $\mathbf{P}=r(\mathbf{G})$. We reduce to the instance $(\mathbf{Q}_k,\mathbf{P})$ of $\textsc{Emb}(\mathcal{P}_{\textup{cover\textup{-}degree}})$, where $\mathbf{Q}_k$ is the poset with universe $Q_k=\{ \bot_i,a_i,b_i,c_i,d_i,\top_i \mid i \in [k] \}$, uniquely determined by the following relations: \begin{itemize} \item $a_i \prec^{\mathbf{Q}_k} b_i$, $a_i \prec^{\mathbf{Q}_k} c_i$, $b_i \prec^{\mathbf{Q}_k} d_i$, $c_i \prec^{\mathbf{Q}_k} d_i$, and $b_i \parallel^{\mathbf{Q}_k} c_i$; \item $\bot_i \prec^{\mathbf{Q}_k} a_i$ and $d_i \prec^{\mathbf{Q}_k} \top_i$ for all $i \in [k]$; \item $\bot_i \prec^{\mathbf{Q}_k} \top_{i'}$ for all $i,i' \in [k]$, $i \neq i'$. \end{itemize} We argue correctness (the complexity of the reduction is clear).
If $\{j_1,\ldots,j_k\} \subseteq V$ induces a clique of size $k$ in $\mathbf{G}$, then $\mathbf{Q}_k$ embeds into $\mathbf{P}$ by $q_i \mapsto q_{j_i}$ for all $q \in \{ \bot,a,b,c,d,\top \}$ and $i \in [k]$. Conversely, assume that $\mathbf{Q}_k$ embeds into $\mathbf{P}$ via a mapping $e$. Let $i \in [k]$. We claim that there exists $j \in [n]$ such that $\{e(b_i),e(c_i)\}=\{b_j,c_j\}$. Indeed, by construction, $b_i \parallel^{\mathbf{Q}_k} c_i$ and $a_i \leq^{\mathbf{Q}_k} b_i,c_i \leq^{\mathbf{Q}_k} d_i$. Note that any two incomparable elements $p' \in P_{j'}$ and $p'' \in P_{j''}$ with $j',j'' \in [n]$, $j' \neq j''$, lack a common upper bound or a common lower bound. Hence, since $e$ is an embedding, $\{e(b_i),e(c_i)\} \subseteq P_j$ for some $j \in [n]$, which forces $\{e(b_i),e(c_i)\}=\{b_j,c_j\}$ because $b_j$ and $c_j$ are the only two incomparable elements in $P_j$. We claim that $C=\{ j \mid \{b_j,c_j\} \cap e(Q_k) \neq \emptyset \}\subseteq V$ induces a clique of size $k$ in $\mathbf{G}$. By the above, $|C|=k$. Hence it suffices to show that $(j,j') \in E^{\mathbf{G}}$ for any $j,j' \in C$, $j \neq j'$. Let $i,i' \in [k]$, $i \neq i'$, be such that $\{e(b_i),e(c_i)\}=\{b_j,c_j\}$ and $\{e(b_{i'}),e(c_{i'})\}=\{b_{j'},c_{j'}\}$. Since $e(\bot_i) <^{\mathbf{P}} e(b_{i})$ and $e(\top_{i'}) >^{\mathbf{P}} e(b_{i'})$, we obtain that $e(\bot_i)\in P_j$ and $e(\top_{i'})\in P_{j'}$ by construction. The embedding ensures $e(\bot_i) <^{\mathbf{P}} e(\top_{i'})$ and so $(j,j') \in E^{\mathbf{G}}$ by the properties listed before the statement, concluding the proof. \end{proof} } \longversion{\pfharddegree} \newcommand{\exdegree}[0]{ \begin{example}\label{ex:degree} Let $(\mathbf{G},k)$ be an instance of $\textsc{Clique}$, where $\mathbf{G}$ is the graph whose universe is $G=[4]$ and whose edge relation $E^\mathbf{G}$ is the symmetric closure of $\{(1,2),(1,3),(2,3),(2,4),(3,4)\}$, and $k=3$.
Then posets $\mathbf{Q}_k$ and $\mathbf{P}$ in the proof of Proposition~\ref{pr:harddegree} are depicted in Figure~\ref{fig:qdegree} and Figure~\ref{fig:pdegree} respectively. \end{example}} \longversion{\exdegree \newcommand{\figqdegree}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{qdegree} \caption{The poset $\mathbf{Q}_k$ in the proof of Proposition~\ref{pr:harddegree}, where $k=3$ as in Example~\ref{ex:degree}.} \label{fig:qdegree} \end{figure}} \longversion{\figqdegree} \newcommand{\figpdegree}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{pdegreecopy} \caption{The poset $\mathbf{P}$ in the proof of Proposition~\ref{pr:harddegree}, where $\mathbf{G}$ is as in Example~\ref{ex:degree}.} \label{fig:pdegree} \end{figure}} \longversion{\figpdegree} \section{Classical Complexity}\label{sect:classical} In this section, we study the classical complexity of the embedding problem on the targeted classes of posets, and we prove a tractability result of independent interest on bounded width posets. We first observe the following fact. \longshort{\begin{proposition}}{\begin{proposition}[$\star$]} \label{th:wdtract} Let $\mathcal{P}$ be a class of posets of bounded size. Then, $\textsc{Emb}(\mathcal{P})$ is polynomial-time tractable. \end{proposition} \newcommand{\pfwdtract}[0]{ \begin{proof} Let $s \in \mathbb{N}$ be such that $|\mathbf{P}| \leq s$ for all $\mathbf{P} \in \mathcal{P}$. Let $(\mathbf{Q},\mathbf{P})$ be an instance of $\textsc{Emb}(\mathcal{P})$. If $|Q|>|P|$, reject. Otherwise, check whether one of the at most $s^s$ many mappings from $\mathbf{Q}$ to $\mathbf{P}$ is an embedding. \end{proof}} \longshort{\pfwdtract}{ Note that the above together with Proposition \ref{pr:exprcomplex} rules out a polynomial-time tractability analogue of Proposition \ref{proposition:metodological}. The section is organized into three subsections, as follows. 
\begin{itemize} \item In Subsections \ref{sect:widthnphard} and \ref{sect:degreenphard}, we prove that the embedding problem is NP-hard on bounded width and bounded degree posets, respectively. This implies that Proposition~\ref{th:wdtract} is tight with respect to the studied invariants. \item In Subsection \ref{sect:wdtract}, we show how the ideas developed in Section \ref{sect:mainresults} may be used to obtain a polynomial-time algorithm for the isomorphism problem on bounded width posets, an open problem in order theory \cite[p.~284]{CaspardLeclercMonjardet12}. \end{itemize} \subsection{Embedding is $\textup{NP}$-hard on Bounded Width Posets}\label{sect:widthnphard} In this subsection, we construct a class $\mathcal{P}$ of posets of bounded width such that $\textsc{Emb}(\mathcal{P})$ is $\textup{NP}$-hard, which immediately implies $\textup{NP}$-hardness of $\textsc{MC}(\mathcal{P},\mathcal{FO}(\exists,\wedge,\neg))$. The reduction, from the Boolean satisfiability problem (SAT), is technically involved. Intuitively, given a SAT instance $\phi$, we construct two bounded width posets $\mathbf{Q}_\phi$ and $\mathbf{P}_\phi$. The two posets are such that, if $\phi$ is satisfiable, then $\mathbf{Q}_\phi$ embeds into $\mathbf{P}_\phi$ \lq\lq nicely\rq\rq, in the sense that certain chains of $\mathbf{Q}_\phi$ embed into certain families of chains in $\mathbf{P}_\phi$; conversely, every embedding of $\mathbf{Q}_\phi$ into $\mathbf{P}_\phi$ must be nice in the above sense, and any nice embedding of $\mathbf{Q}_\phi$ into $\mathbf{P}_\phi$ yields a satisfying assignment to $\phi$. \newcommand{\exexa}[0]{ \begin{example}\label{ex:ex1} Let $\phi(x_1,x_2,x_3,x_4)=\delta_1 \wedge \delta_2 \wedge \delta_3 \wedge \delta_4 \wedge \delta_5$, where $\delta_1=x_4 \vee \neg x_2$, $\delta_2=x_4 \vee \neg x_1$, $\delta_3=x_1 \vee \neg x_2$, $\delta_4=x_3 \vee \neg x_1$, and $\delta_5=\neg x_3 \vee x_2$.
Note that, for instance, $\phi$ is satisfied by $\{(x_1,0),(x_2,0),(x_3,0),(x_4,1)\}$. The poset $\mathbf{Q}_{\phi}$ is depicted in Figure~\ref{fig:qphi}, where the chain on the left is $Q^v_{\phi}$, the chain in the middle contains $Q^a_{\phi}$, and the chain on the right is $Q^c_{\phi}$. Thick edges represent chains of $|Q^a_{\phi}|$ elements. \end{example}} \longversion{\exexa \newcommand{\figqphi}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{qnphardproof} \caption{The poset $\mathbf{Q}_{\phi}$ corresponding to $\phi \in \mathcal{S}$ in Example~\ref{ex:ex1}.} \label{fig:qphi} \end{figure}} \longversion{\figqphi} \newcommand{\exexb}[0]{ \begin{example}\label{ex:ex2} Let $\phi \in \mathcal{S}$ be as in Example~\ref{ex:ex1}. Then, the poset $\mathbf{P}_{\phi}$ is depicted in Figures~\ref{fig:pphi}-\ref{fig:pphi2}. The block on the left is $P^v_{\phi}$, the block in the middle contains $P^a_{\phi}$, and the block on the right is $P^c_{\phi}$. Thick edges represent chains of $|Q^a_{\phi}|$ elements. The white points in $\mathbf{P}_{\phi}$ form the image of the embedding $e \colon Q_{\phi} \to P_{\phi}$ of $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$ corresponding to the $\phi$-satisfying assignment in Example~\ref{ex:ex1} as by (the easy direction of) Theorem~\ref{th:widthnphard}. It is possible to check that $e$ is an embedding by direct inspection of Figure~\ref{fig:qphi}, Figure~\ref{fig:pphi}, and Figure~\ref{fig:pphi2}. \end{example}} \newcommand{\figpphi}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{pnphardproof1} \caption{Items (P1)-(P4) in the construction of poset $\mathbf{P}_{\phi}$, where $\phi \in \mathcal{S}$ is as in Example~\ref{ex:ex1}.} \label{fig:pphi} \end{figure}} \longversion{\figpphi} \newcommand{\figpphib}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{pnphardproof2} \caption{Items (P5)-(P6) in the construction of poset $\mathbf{P}_{\phi}$, where $\phi \in \mathcal{S}$ is as in Example~\ref{ex:ex1}. 
The three points on the bottom, from left to right, represent $(\{(x_2,0),(x_4,0)\},(\delta_1,1))$, $(\{(x_2,1),(x_4,0)\},(\delta_1,1))$, and $(\{(x_2,1),(x_4,1)\},(\delta_1,1))$, that is, we display the satisfying assignments of $\delta_1$ in the order $\{(x_2,0),(x_4,0)\}$, $\{(x_2,1),(x_4,0)\}$, and $\{(x_2,1),(x_4,1)\}$. Similarly, we display the satisfying assignments: of $\delta_2$ in the order $\{(x_1,0),(x_4,0)\}$, $\{(x_1,1),(x_4,0)\}$, and $\{(x_1,1),(x_4,1)\}$; of $\delta_3$, in the order $\{(x_1,0),(x_2,0)\}$, $\{(x_1,1),(x_2,0)\}$, and $\{(x_1,1),(x_2,1)\}$; of $\delta_4$, in the order $\{(x_1,0),(x_2,0)\}$, $\{(x_1,1),(x_2,0)\}$, and $\{(x_1,1),(x_2,1)\}$; of $\delta_5$, in the order $\{(x_2,0),(x_3,0)\}$, $\{(x_2,0),(x_3,1)\}$, and $\{(x_2,1),(x_3,1)\}$.} \label{fig:pphi2} \end{figure}} \longversion{\figpphib} Let $\mathcal{S}$ be the class of propositional formulas in conjunctive form, containing at least $3$ clauses, where each clause contains at most $3$ literals; also, no clause contains a pair of complementary literals, and each variable occurs in at least two clauses. Let $\phi(x_1,\ldots,x_n)=\delta_1 \wedge \cdots \wedge \delta_m$ be in $\mathcal{S}$. For $i \in [n]$ and $j \in [m]$, we write $x_i \in \delta_j$ if a literal on variable $x_i$ occurs in clause $\delta_j$, and we let $\textup{var}(\delta_j)=\{ x_i \mid i \in [n], x_i \in \delta_j \}$. \longshort{We proceed in two stages (recall Example~\ref{ex:ex1}). }{We proceed in two stages.} First, we define a poset $\mathbf{Q}_{\phi}$ as follows. 
The universe $Q_{\phi}$ contains $Q_{\phi}^a = \{ (\delta_i,j) \mid i \in [m], j \in [n]\}$, $Q_{\phi}^c = \{ (\delta'_i,j) \mid i \in [m], j \in [n-1]\}$, \begin{align*} Q_{\phi}^v &=\left\{ (x_i,(j,j')) \;\ifnum\currentgrouptype=16 \middle\fi|\; \begin{array}{c} i \in [n], x_i \in \delta_j, x_i \in \delta_{j'},j<j'\textup{,}\\ \textup{and } x_i \not\in \delta_{j''} \textup{ for all } j<j''<j' \end{array} \right\}\text{,} \end{align*} and a set $Q_{\phi}^l$ of auxiliary elements introduced below. For $q,q' \in Q_{\phi}$, we let $\ll^{\mathbf{Q}_{\phi}}$ denote the fact that, in the order of $\mathbf{Q}_{\phi}$, there is a chain of $|Q^a_{\phi}|$ fresh auxiliary elements, contained in $Q^l_{\phi}$, between $q$ and $q'$. The order relation of $\mathbf{Q}_{\phi}$ is defined by the following cover relations: \begin{itemize} \item[(Q1)] for all $(\delta_i,j),(\delta_{i+1},{j}) \in Q_{\phi}^a$: if $i+1 \leq i'$ where $i'$ is the minimum in $[m]$ such that $x_j \in \delta_{i'}$, then $(\delta_i,j) \ll^{\mathbf{Q}_{\phi}} (\delta_{i+1},{j})$; if $i' \leq i$ where $i'$ is the maximum in $[m]$ such that $x_j \in \delta_{i'}$, then $(\delta_i,j) \ll^{\mathbf{Q}_{\phi}} (\delta_{i+1},{j})$; otherwise, $(\delta_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta_{i+1},{j})$; \item[(Q2)] $(\delta_m,j) \ll^{\mathbf{Q}_{\phi}} (\delta_{1},{j+1})$, for $(\delta_m,j),(\delta_{1},{j+1}) \in Q_{\phi}^a$; \item[(Q3)] $(\delta'_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta'_{i+1},{j})$, for $(\delta'_i,j),(\delta'_{i+1},{j}) \in Q_{\phi}^c$; \item[(Q4)] $(\delta'_m,j) \prec^{\mathbf{Q}_{\phi}} (\delta'_{1},{j+1})$, for $(\delta'_m,j),(\delta'_{1},{j+1}) \in Q_{\phi}^c$; \item[(Q5)] $(x_i,(j,j')) <^{\mathbf{Q}_{\phi}} (x_i,(j',j''))$, for $(x_i,(j,j')),(x_i,(j',j'')) \in Q_{\phi}^v$; \item[(Q6)] $(x_i,(j,j')) <^{\mathbf{Q}_{\phi}} (x_{i+1},(k,k'))$, for $(x_i,(j,j')),(x_{i+1},(k,k')) \in Q_{\phi}^v$ where $j'$ is maximum in $[m]$ such that $x_i \in \delta_{j'}$ and $k$ is minimum in $[m]$ such that $x_{i+1}
\in \delta_{k}$. \item[(Q7)] $(\delta_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta'_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta_i,j+1)$, for all $(\delta_i,j),(\delta_i,j+1) \in Q_{\phi}^a$ and $(\delta'_i,j) \in Q_{\phi}^c$; \item[(Q8)] $(\delta_i,j) \ll^{\mathbf{Q}_{\phi}} (x_j,(i,i')) \ll^{\mathbf{Q}_{\phi}} (\delta_{i'},j)$, for all $(\delta_i,j),(\delta_{i'},j) \in Q_{\phi}^a$ and $(x_j,(i,i')) \in Q_{\phi}^v$. \end{itemize} Second, we define the poset $\mathbf{P}_{\phi}=r(\phi)$, using $\mathbf{Q}_{\phi}$ as a basis, as follows. The universe $P_{\phi}$ is the union of \begin{align*} P_{\phi}^a &= \bigcup_{(\delta_i,j) \in Q_{\phi}^a} \{ (f,(\delta_i,j)) \mid f \in \{0,1\}^{\textup{var}(\delta_i)} \textup{ satisfies } \delta_i \}\text{,}\\ P_{\phi}^c &= \bigcup_{(\delta'_i,j) \in Q_{\phi}^c} \{ (f,(\delta'_i,j)) \mid f \in \{0,1\}^{\textup{var}(\delta_i)} \textup{ satisfies } \delta_i \}\text{,}\\ P_{\phi}^v &=\bigcup_{(x_i,(j,j')) \in Q_{\phi}^v} \{ (x_i,(j,j')), (\neg x_i,(j,j'))\}\text{,} \end{align*} and a set $P_{\phi}^l$ of auxiliary elements introduced below. Again, for $p,p' \in P_{\phi}$, we let $\ll^{\mathbf{P}_{\phi}}$ denote the fact that, in the order of $\mathbf{P}_{\phi}$, there is a chain of $|Q^a_{\phi}|$ fresh auxiliary elements, contained in $P_{\phi}^l$, between $p$ and $p'$. 
The order relation of $\mathbf{P}_{\phi}$ is defined by the following cover relations: \begin{itemize} \item[(P1)] for all $(f,(\delta_i,j)),(f',(\delta_{i'},{j'})) \in P_{\phi}^a$, $(f,(\delta_i,j)) \prec^{\mathbf{P}_{\phi}} (f',(\delta_{i'},{j'}))$ if and only if $(\delta_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta_{i'},{j'})$, and $(f,(\delta_i,j)) $\\ $\ll^{\mathbf{P}_{\phi}} (f',(\delta_{i'},{j'}))$ if and only if $(\delta_i,j) \ll^{\mathbf{Q}_{\phi}} (\delta_{i'},{j'})$; \item[(P2)] for all $(f,(\delta'_i,j)),(f',(\delta'_{i'},{j'})) \in P_{\phi}^c$, $(f,(\delta'_i,j)) \prec^{\mathbf{P}_{\phi}} (f',(\delta'_{i'},{j'}))$ if and only if $(\delta'_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta'_{i'},{j'})$; \item[(P3)] for all $(x_i,(j,j'))$, $(\neg x_i,(j,j'))$, $(x_i,(j',j''))$, $(\neg x_i,(j',j''))$ in $P_{\phi}^v$, $(x_i,(j,j')) \prec^{\mathbf{P}_{\phi}} (x_i,(j',j''))$ and $(\neg x_i,(j,j')) \prec^{\mathbf{P}_{\phi}} (\neg x_i,(j',j''))$ if and only if $(x_i,(j,j')) \prec^{\mathbf{Q}_{\phi}} (x_i,(j',j''))$; \item[(P4)] for all $(x_i,(j,j'))$, $(\neg x_i,(j,j'))$, $(x_{i+1},(k,k'))$, and\\ $(\neg x_{i+1},(k,k'))$ in $P_{\phi}^v$, $(x_i,(j,j'))\longversion{\!} \prec^{\mathbf{P}_{\phi}}\longversion{\!}(x_{i+1},(k,k'))$, $(x_i,(j,j'))\longversion{\!} \prec^{\mathbf{P}_{\phi}}\longversion{\!}(\neg x_{i+1},(k,k'))$, $(\neg x_{i},(j,j')) \prec^{\mathbf{P}_{\phi}}$\\ $(x_{i+1},(k,k'))$, and $(\neg x_{i},(j,j')) \prec^{\mathbf{P}_{\phi}}(\neg x_{i+1},(k,k'))$, if and only if $(x_i,(j,j')) \prec^{\mathbf{Q}_{\phi}} (x_{i+1},(k,k'))$.
\item[(P5)] for all $(f,(\delta_i,j)), (f,(\delta_i,j+1)) \in P_{\phi}^a$ and $(f,(\delta'_i,j)) \in P_{\phi}^c$, $(f,(\delta_i,j)) \prec^{\mathbf{P}_{\phi}} (f,(\delta'_i,j)) \prec^{\mathbf{P}_{\phi}} (f,(\delta_i,j+1))$ if and only if $(\delta_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta'_i,j) \prec^{\mathbf{Q}_{\phi}} (\delta_i,j+1)$; \item[(P6)] for all $(f,(\delta_i,j)),(f',(\delta_{i'},j)),(g,(\delta_i,j)),(g',(\delta_{i'},j)) \in P_{\phi}^a$ and $(x_j,(i,i')),(\neg x_j,(i,i')) \in P_{\phi}^v$, it holds that $(f,(\delta_i,j)) \ll^{\mathbf{P}_{\phi}} (x_j,(i,i')) \ll^{\mathbf{P}_{\phi}} (f',(\delta_{i'},j))$ and $(g,(\delta_i,j)) \ll^{\mathbf{P}_{\phi}} (\neg x_j,(i,i'))$\\$ \ll^{\mathbf{P}_{\phi}} (g',(\delta_{i'},j))$ if and only if $(\delta_i,j) \ll^{\mathbf{Q}_{\phi}} (x_j,(i,i')) \ll^{\mathbf{Q}_{\phi}} (\delta_{i'},j)$, $f(x_j)=f'(x_j)=1$, and $g(x_j)=g'(x_j)=0$. \end{itemize} Note that $\textup{width}(\mathbf{Q}_{\phi}) \leq 4$ and $\textup{width}(\mathbf{P}_{\phi}) \leq 2^2+7^2+7^2=102$ for all $\phi \in \mathcal{S}$ (we remark that this width bound may be improved at the cost of a more complicated construction). Hence $\mathcal{P}_{\textup{width}}=\{ r(\phi) \mid \phi \in \mathcal{S} \} $ has bounded width. \longshort{\begin{theorem}}{\begin{theorem}[$\star$]} \label{th:widthnphard} $\textsc{Emb}(\mathcal{P}_{\textup{width}})$ is $\textup{NP}$-hard. \end{theorem} \newcommand{\pfwidthnphard}[0]{ \begin{proof} We give a polynomial-time many-one reduction from the satisfiability problem over $\mathcal{S}$ to the problem $\textsc{Emb}(\mathcal{P}_{\textup{width}})$, which suffices since the source problem is $\textup{NP}$-hard. The reduction maps an instance $\phi \in \mathcal{S}$ of the satisfiability problem, say $\phi(x_1,\ldots,x_n)=\delta_1 \wedge \cdots \wedge \delta_m$, to the instance $(\mathbf{Q}_{\phi},\mathbf{P}_{\phi})$ of $\textsc{Emb}(\mathcal{P}_{\textup{width}})$, where $\mathbf{Q}_{\phi}$ and $\mathbf{P}_{\phi}$ are constructed as above. 
The reduction is clearly polynomial-time computable. We prove that the reduction is correct. If $\phi$ is satisfiable, then let $g \colon \{x_1,\ldots,x_n\} \to \{0,1\}$ be a satisfying assignment. We define a function $e \colon Q_{\phi} \to P_{\phi}$ as follows. Let $q \in Q_{\phi}$. Then: \begin{itemize} \item If $q=(\delta_i,j) \in Q_{\phi}^a$, then $e(q)=(f,(\delta_i,j)) \in P_{\phi}^a$ if and only if $g|_{\textup{var}(\delta_i)}=f$. \item If $q=(\delta'_i,j) \in Q_{\phi}^c$, then $e(q)=(f,(\delta'_i,j)) \in P_{\phi}^c$ if and only if $g|_{\textup{var}(\delta_i)}=f$. \item If $q=(x_i,(j,j')) \in Q_{\phi}^v$, then $e(q)=(x_i,(j,j')) \in P_{\phi}^v$ if $g(x_i)=1$, and $e(q)=(\neg x_i,(j,j')) \in P_{\phi}^v$ if $g(x_i)=0$. \item If $q \in Q_{\phi}^l$, then let $q',q'' \in Q_{\phi}$ and $q_1,\ldots,q_{|Q^a_{\phi}|} \in Q_{\phi}^l$ be such that $q' \prec^{\mathbf{Q}_{\phi}} q_1 \prec^{\mathbf{Q}_{\phi}} \cdots \prec^{\mathbf{Q}_{\phi}} q_{|Q^a_{\phi}|} \prec^{\mathbf{Q}_{\phi}} q''$ and $q=q_i$ for some $i \in [|Q^a_{\phi}|]$. By construction, there exist $p_1,\ldots,p_{|Q^a_{\phi}|} \in P_{\phi}^l$ such that $e(q') \prec^{\mathbf{P}_{\phi}} p_1 \prec^{\mathbf{P}_{\phi}} \cdots \prec^{\mathbf{P}_{\phi}} p_{|Q^a_{\phi}|} \prec^{\mathbf{P}_{\phi}} e(q'')$. Then, $e(q)=e(q_i)=p_i$. \end{itemize} It is easy to check that $e$ embeds $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$. Conversely, let $e \colon Q_{\phi} \to P_{\phi}$ be an embedding of $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$. \begin{claim}\label{claim:empart} $e(Q_{\phi}^a) \subseteq P_{\phi}^a$, $e(Q_{\phi}^v) \subseteq P_{\phi}^v$, $e(Q_{\phi}^c) \subseteq P_{\phi}^c$. \begin{proof}[Proof of Claim~\ref{claim:empart}] Let $Q^*=\{ q \in Q_{\phi}^a \mid \text{$q$ is comparable to all elements in $Q_{\phi}^v$} \}$. Note that, by construction, $\textup{depth}(\mathbf{Q}_{\phi})=|Q_{\phi}^v \cup Q_{\phi}^l \cup Q^*|=d$, and the chain $Q_{\phi}^v \cup Q_{\phi}^l \cup Q^*$ is the unique chain whose size equals $d$.
In the poset $\mathbf{Q}_{\phi}$ depicted in Figure~\ref{fig:qphi}, $Q^*$ contains exactly the elements of the middle chain hit by a thick edge, and the chain $Q_{\phi}^v \cup Q_{\phi}^l \cup Q^*$ is represented by the thick edges. Moreover, by construction again, $\textup{depth}(\mathbf{P}_{\phi})=d$, and the only chains in $\mathbf{P}_{\phi}$ whose size equals $d$ force the embedding to satisfy $e(Q_{\phi}^v) \subseteq P_{\phi}^v$ and $e(Q^*) \subseteq P_{\phi}^a$. We now prove that $e(Q_{\phi}^a \setminus Q^*) \subseteq P_{\phi}^a$, which, together with the above, yields $e(Q_{\phi}^a) \subseteq P_{\phi}^a$. Indeed, let $q \in Q_{\phi}^a \setminus Q^*$. Let $q',q'' \in Q^*$ be such that $q' <^{\mathbf{Q}_{\phi}} q <^{\mathbf{Q}_{\phi}} q''$ and there do not exist $r',r'' \in Q^*$ such that $q' <^{\mathbf{Q}_{\phi}} r' <^{\mathbf{Q}_{\phi}} q$ or $q <^{\mathbf{Q}_{\phi}} r'' <^{\mathbf{Q}_{\phi}} q''$. In Figure~\ref{fig:qphi}, if, for instance, $q$ is the $8$th lowest element in the middle chain, then $q'$ and $q''$ are respectively the $6$th and $9$th lowest elements in the middle chain. Let $S=\{ p \in P_{\phi} \mid e(q') <^{\mathbf{P}_{\phi}} p <^{\mathbf{P}_{\phi}} e(q'') \}$, so that $e(q) \in S$, because $e$ is an embedding. By the above, $S \cap (P_{\phi}^v \cup P_{\phi}^l) \subseteq e(Q_{\phi}^v \cup Q_{\phi}^l \cup Q^*)$, therefore $e(q) \in S \setminus (P_{\phi}^v \cup P_{\phi}^l)$. Moreover, the distance between $e(q')$ and $e(q'')$ in $\mathbf{P}_{\phi}$ is strictly less than $m$, therefore $S \cap P_{\phi}^c=\emptyset$. It follows that $e(q) \in S \setminus (P_{\phi}^v \cup P_{\phi}^l \cup P_{\phi}^c)$, that is, $e(q) \in P_{\phi}^a$. Finally, we prove that $e(Q_{\phi}^c) \subseteq P_{\phi}^c$. Indeed, let $q \in Q_{\phi}^c$. 
By construction, there exist $m+1$ elements $q_0,\ldots,q_m \in Q_{\phi}^a$ such that $q_0 <^{\mathbf{Q}_{\phi}} \cdots <^{\mathbf{Q}_{\phi}} q_{m}$, $q_0 <^{\mathbf{Q}_{\phi}} q <^{\mathbf{Q}_{\phi}} q_m$, and $q$ is incomparable to $q_1,\ldots,q_{m-1}$ in $\mathbf{Q}_{\phi}$. By the above, $e(q_0),\ldots,e(q_{m}) \in P_{\phi}^a$. As $e$ is an embedding, $e(q_0) <^{\mathbf{P}_{\phi}} \cdots <^{\mathbf{P}_{\phi}} e(q_{m})$, $e(q_0) <^{\mathbf{P}_{\phi}} e(q) <^{\mathbf{P}_{\phi}} e(q_m)$, and $e(q)$ is incomparable to $e(q_1),\ldots,e(q_{m-1})$ in $\mathbf{P}_{\phi}$. By inspection of the construction, we now prove that $e(q) \not\in P_{\phi}^a \cup P_{\phi}^v \cup P_{\phi}^l$, which implies $e(q) \in P_{\phi}^c$ as desired. If $e(q) \in P_{\phi}^a$, then $e(q)$ is incomparable to at most $1$ element among $e(q_1),\ldots,e(q_{m-1})$, which implies $e(q) \not\in P_{\phi}^a$ since $m>2$. If $e(q) \in P_{\phi}^v \cup P_{\phi}^l$, then $e(q)$ is incomparable to at most $m-2$ elements among $e(q_1),\ldots,e(q_{m-1})$, which implies $e(q) \not\in P_{\phi}^v \cup P_{\phi}^l$. \end{proof} \end{claim} The previous three properties uniquely determine the behavior of $e$ over $Q_{\phi}^l$. Next, we state two facts which follow from the embedding and specific properties of the construction of $\mathbf{Q}$ and $\mathbf{P}$. \begin{itemize} \item Items (Q1)-(Q4) and (Q7) on one hand and (P1)-(P2) and (P5) on the other hand enforce the following: for all $i \in [m]$ and $j \in [n]$, there exists a unique $f \in \{0,1\}^{\textup{var}(\delta_i)}$ such that for all $(\delta_i,j),(\delta'_i,j) \in Q_{\phi}^a \cup Q_{\phi}^c$ it holds that $e((\delta_i,j))=(f,(\delta_i,j))$ and $e((\delta'_i,j))=(f,(\delta'_i,j))$. 
\item Items (Q5)-(Q6) and (Q8) on one hand and (P3)-(P5) and (P6) on the other hand enforce the following: for all $i,i' \in [m]$, $i \neq i'$, and $j \in [n]$ such that $x_j \in \textup{var}(\delta_i) \cap \textup{var}(\delta_{i'})$, it holds that if $e((\delta_i,j))=(f,(\delta_i,j))$ and $e((\delta_{i'},j))=(f',(\delta_{i'},j))$, then $f(x_j)=f'(x_j)$. \end{itemize} Therefore the union of all the assignments $f$ such that $e((\delta_i,\cdot))=(f,(\delta_i,\cdot))$, taken over all $i \in [m]$, defines an assignment $g \colon \{x_1,\ldots,x_n\} \to \{0,1\}$, and moreover $g$ satisfies $\phi$. This concludes the proof. \end{proof}} \longversion{\pfwidthnphard} \subsection{Embedding is $\textup{NP}$-hard on Bounded Degree Posets}\label{sect:degreenphard} \longshort{We reduce from the satisfiability problem. Let $\mathcal{S}$ be the class of propositional formulas in conjunctive form, where each clause contains exactly $3$ pairwise non-complementary literals (we require exactly $3$ literals for notational convenience; the construction also works with at most $3$ literals, which we use for illustration purposes in the examples).}{We reduce from the satisfiability problem. Let $\mathcal{S}$ be the class of propositional formulas in conjunctive form, where each clause contains exactly $3$ pairwise non-complementary literals.} \newcommand{\exdegreenphard}[0]{ \begin{example}\label{ex:degreenphard} Let $\phi(x_1,x_2,x_3)=\delta_1 \wedge \delta_2 \wedge \delta_3$, where $\delta_1=x_1 \vee \neg x_2$, $\delta_2=x_3 \vee \neg x_1$, and $\delta_3=\neg x_3 \vee x_2$. Note that, for instance, $\phi$ is satisfied by $\{(x_1,0),(x_2,0),(x_3,0)\}$. The poset $\mathbf{Q}_{\phi}$ is depicted in Figure~\ref{fig:qdegreenphard}, where $Q_0$, $Q_1$, and $Q_2$ form respectively the bottom, middle, and top layers of the diagram; poset $\mathbf{P}_{\phi}$ is similarly displayed in Figure~\ref{fig:pdegreenphard}.
The white points in $\mathbf{P}_{\phi}$ form the image of the embedding $e \colon Q_{\phi} \to P_{\phi}$ of $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$ corresponding to the satisfying assignment above as by (the easy direction of) Theorem~\ref{th:degreenphard}. \end{example}} The idea of the reduction is the following. We encode a formula in $\mathcal{S}$ by a poset $\mathbf{P}$, whose universe partitions into three blocks, $P_0$, $P_1$ and $P_2$. The set $P_1$ contains several groups of $7$ elements, where each element corresponds to one possible satisfying assignment of a clause, and the embedding encodes an assignment for the whole formula by forcing us to choose one element out of each group. The set $P_2$ ensures that each assignment chosen by the embedding is consistent for each pair of clauses. To preserve bounded degree while ensuring the consistency of each pair of clauses, it is necessary to use many groups in $P_1$ for each clause. Finally, $P_0$ ensures that each choice made by the embedding for a given clause is consistent across all groups corresponding to that clause. \longversion{\exdegreenphard \newcommand{\figqdegreenphard}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{qdegreenphard} \caption{The poset $\mathbf{Q}_{\phi}$ corresponding to $\phi \in \mathcal{S}$ in Example~\ref{ex:degreenphard}.} \label{fig:qdegreenphard} \end{figure}} \longversion{\figqdegreenphard} \newcommand{\figpdegreenphard}[0]{ \begin{figure}[h] \centering \includegraphics[scale=.2]{pdegreenphard} \caption{The poset $\mathbf{P}_{\phi}$ corresponding to $\phi \in \mathcal{S}$ in Example~\ref{ex:degreenphard}.} \label{fig:pdegreenphard} \end{figure}} \longversion{\figpdegreenphard} We now formalize the ideas outlined above. Let $\phi(x_1,\ldots,x_n)=\delta_1 \wedge \cdots \wedge \delta_m$ be in $\mathcal{S}$. 
For $j \in [n]$ and $i \in [m]$, we write $x_j \in \delta_i$ if a literal on variable $x_j$ occurs in clause $\delta_i$, and we let $\textup{var}(\delta_i)=\{ x_j \mid j \in [n], x_j \in \delta_i \}$. For all $i \in [m]$, let $(g_{i,1},\ldots,g_{i,7})$ be a fixed ordering of the assignments in $\{0,1\}^{\textup{var}(\delta_i)}$ satisfying $\delta_i$, and let $(i_1,i_2,\ldots,i_{m-1})=(1,\ldots,i-1,i+1,\ldots,m)$. We define our two posets $\mathbf{Q}_\phi$ and $\mathbf{P}_\phi$ below. The poset $\mathbf{Q}_\phi$ has universe $Q_\phi=Q_{0} \cup Q_{1} \cup Q_{2}$, where \begin{align*} Q_{0} = & \{ c_{(i,j)}, c_{(i,m)}, c_{(m,j)} \mid i, j \in [m-1], i\neq j \}\text{,} \\ Q_{1} = & \{ f_{(i,j)} \mid i,j \in [m], i\neq j \} \text{,}\\ Q_{2} = & \{ d_{(i,j)} \mid 1 \leq i<j \leq m \} \text{,} \end{align*} and its cover relation is defined by the following: \begin{itemize} \item[(E1)] $f_{(i,j)},f_{(j,i)} \prec^{\mathbf{Q}_\phi} d_{(i,j)}$ for all $1 \leq i<j \leq m$. \item[(E2)] For all $i \in [m]$, \longshort{\begin{align*} f_{(i,i_1)} & \succ^{\mathbf{Q}_\phi} c_{(i,i_1)} \prec^{\mathbf{Q}_\phi} f_{(i,i_2)} \succ^{\mathbf{Q}_\phi} \cdots \succ^{\mathbf{Q}_\phi} c_{(i,i_{m-1})} \prec^{\mathbf{Q}_\phi} f_{(i,i_{m-1})}\text{.} \end{align*}}{\begin{align*} f_{(i,i_1)} & \succ^{\mathbf{Q}_\phi} c_{(i,i_1)} \prec^{\mathbf{Q}_\phi} f_{(i,i_2)} \succ^{\mathbf{Q}_\phi} \cdots \\ \cdots & \succ^{\mathbf{Q}_\phi} c_{(i,i_{m-1})} \prec^{\mathbf{Q}_\phi} f_{(i,i_{m-1})}\text{.} \end{align*}} \end{itemize} The poset $\mathbf{P}_\phi$ has universe $P_\phi=P_{0} \cup P_{1} \cup P_{2}$ where, \begin{align*} P_0 = & \{ c_{(i,j),a},c_{(i,m),a},c_{(m,j),a} \mid i,j \in [m-1], i\neq j, a \in [7] \}\text{,}\\ P_1 = & \{ f_{(i,j),a} \mid i,j\in [m], i\neq j, a \in [7] \}\text{,}\\ P_2 = & \{ d_{(i,j),(a,a')} \mid 1 \leq i<j \leq m, (a,a') \in [7]^2 \}\text{,} \end{align*} and its cover relation is defined by the following: \begin{itemize} \item[(D1)] For all $1 \leq i<j \leq m$, it holds 
that $f_{(i,j),a},f_{(j,i),a'} \prec^{\mathbf{P}_\phi} d_{(i,j),(a,a')}$ if and only if $g_{i,a}(x)=g_{j,a'}(x)$ for all $x \in \textup{var}(\delta_i) \cap \textup{var}(\delta_j)$. \item[(D2)] For all $i \in [m]$ and $a \in [7]$, \longshort{\begin{align*} f_{(i,i_{1}),a} & \succ^{\mathbf{P}_\phi} c_{(i,i_1),a} \prec^{\mathbf{P}_\phi} f_{(i,i_2),a} \succ^{\mathbf{P}_\phi} \cdots \succ^{\mathbf{P}_\phi} c_{(i,i_{m-1}),a} \prec^{\mathbf{P}_\phi} f_{(i,i_{m-1}),a}\text{.} \end{align*}}{\begin{align*} f_{(i,i_{1}),a} & \succ^{\mathbf{P}_\phi} c_{(i,i_1),a} \prec^{\mathbf{P}_\phi} f_{(i,i_2),a} \succ^{\mathbf{P}_\phi} \cdots\\ \cdots & \succ^{\mathbf{P}_\phi} c_{(i,i_{m-1}),a} \prec^{\mathbf{P}_\phi} f_{(i,i_{m-1}),a}\text{.} \end{align*}} \end{itemize} Since $\textup{cover\textup{-}degree}(\mathbf{P}_{\phi}) \leq 1+7=8$ and $\textup{depth}(\mathbf{P}_{\phi}) \leq 3$, $\mathcal{P}_{\textup{degree}}=\{ \mathbf{P}_{\phi} \mid \phi \in \mathcal{S} \}$ has bounded degree by Proposition~\ref{pr:diagram}. \longshort{\begin{theorem}}{\begin{theorem}[$\star$]} \label{th:degreenphard} $\textsc{Emb}(\mathcal{P}_{\textup{degree}})$ is $\textup{NP}$-hard. \end{theorem} \newcommand{\pfdegreenphard}[0]{ \begin{proof} We give a polynomial-time many-one reduction from the satisfiability problem over $\mathcal{S}$ to the problem $\textsc{Emb}(\mathcal{P}_{\textup{degree}})$, which suffices since the source problem is $\textup{NP}$-hard. The reduction maps an instance $\phi \in \mathcal{S}$ of the satisfiability problem, say $\phi(x_1,\ldots,x_n)=\delta_1 \wedge \cdots \wedge \delta_m$, to the instance $(\mathbf{Q}_{\phi},\mathbf{P}_{\phi})$ of $\textsc{Emb}(\mathcal{P}_{\textup{degree}})$. The reduction is clearly polynomial-time computable. For correctness, let $g \colon \{x_1,\ldots,x_n\} \to \{0,1\}$ be an assignment satisfying $\phi$.
Recall that $(g_{i,1},\ldots,g_{i,7})$ is a fixed ordering of the assignments in $\{0,1\}^{\textup{var}(\delta_i)}$ satisfying $\delta_i$, for all $i \in [m]$. Let $(a_1,\ldots,a_m) \in [7]^m$ be such that $g|_{\textup{var}(\delta_i)}=g_{i,a_i}$ for all $i \in [m]$. It is easy to check that the function $e \colon Q_{\phi} \to P_{\phi}$ defined by setting: \begin{itemize} \item $e(c_{(i,j)})=c_{(i,j),a_i}$ for all $c_{(i,j)} \in Q_0$; \item $e(f_{(i,j)})=f_{(i,j),a_i}$ for all $f_{(i,j)} \in Q_1$; \item $e(d_{(i,j)})=d_{(i,j),(a_i,a_j)}$ for all $d_{(i,j)} \in Q_2$; \end{itemize} embeds $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$. Conversely, let $e \colon Q_{\phi} \to P_{\phi}$ embed $\mathbf{Q}_{\phi}$ into $\mathbf{P}_{\phi}$. We show that $\phi$ is satisfiable. Note that $e(Q_i) \subseteq P_i$ for all $i \in \{0,1,2\}$, because $e$ maps all $3$-element chains in $\mathbf{Q}_{\phi}$ into $3$-element chains in $\mathbf{P}_{\phi}$, all $3$-element chains in $\mathbf{Q}_{\phi}$ link three elements in $Q_0$, $Q_1$, and $Q_2$, in this order, and all $3$-element chains in $\mathbf{P}_{\phi}$ link three elements in $P_0$, $P_1$, and $P_2$, in this order. We first claim that for all $i \in [m]$, there exists exactly one $a \in [7]$ such that, for all $j \in [m]\setminus \{i\}$, it holds that $e(f_{(i,j)})=f_{(i,j),a}$. Assume for a contradiction that $e(f_{(i,j)})=f_{(i,j),a}$ and $e(f_{(i,j')})=f_{(i,j'),a'}$ for some $i \in [m]$, $a \neq a' \in [7]$, and $j \neq j' \in [m]\setminus\{i\}$; without loss of generality, let $j<j'$. By (E2), $f_{(i,j)}$ reaches $f_{(i,j')}$ through a fence of length $2(j'-j)$, starting in $Q_1$ and alternating steps in $Q_0$ and $Q_1$; but by (D2), $f_{(i,j),a}$ does not reach $f_{(i,j'),a'}$ through a fence of length $2(j'-j)$, starting in $P_1$ and alternating steps in $P_0$ and $P_1$, contradicting the assumption that $e$ is an embedding. Let $(a_1,\ldots,a_m) \in [7]^m$ be uniquely determined by the previous claim. 
We now claim that, for all $i,j \in [m]$ such that $i \neq j$, and all $x \in \textup{var}(\delta_i) \cap \textup{var}(\delta_{j})$, it holds that $g_{i,a_i}(x)=g_{j,a_j}(x)$. Assume without loss of generality that $i<j$. By (E1), $f_{(i,j)},f_{(j,i)} \prec^{\mathbf{Q}_\phi} d_{(i,j)}$. By hypothesis, $e(f_{(i,j)})=f_{(i,j),a_i}$ and $e(f_{(j,i)})=f_{(j,i),a_j}$. Therefore, since $e$ is an embedding, $f_{(i,j),a_i},f_{(j,i),a_j} \prec^{\mathbf{P}_\phi} e(d_{(i,j)})$; thus, by (D1), $e(d_{(i,j)})=d_{(i,j),(a_i,a_j)}$, that is, $g_{i,a_i}(x)=g_{j,a_j}(x)$ for all $x \in \textup{var}(\delta_i) \cap \textup{var}(\delta_{j})$. By the above, $g=g_{1,a_1} \cup \cdots \cup g_{m,a_m}$ is a function from $\{x_1,\ldots,x_n\}$ to $\{0,1\}$. Since $g_{i,a_i}$ satisfies $\delta_i$ for all $i\in [m]$, it follows that $g$ satisfies $\phi$, concluding the proof. \end{proof}} \longversion{\pfdegreenphard} \longshort{\subsection{Isomorphism in Polynomial Time on Bounded Width Posets}\label{sect:wdtract}}{\subsection{Isomorphism in Polytime on Bounded Width Posets}\label{sect:wdtract}} The insight on bounded width used to prove tractability of the embedding problem essentially scales to the isomorphism problem. \begin{theorem}\label{th:isoptime} Let $\mathcal{P}$ be a class of posets of bounded width. Then, $\textsc{Iso}(\mathcal{P})$ is polynomial-time tractable. \end{theorem} \begin{proof} The proof utilizes three known facts from the literature. Let $\mathbf{R}$ be any poset. For all $S \subseteq R$, let $(S]$ be the \emph{downset} generated by $S$ in $\mathbf{R}$, i.e., $(S]=\{ r \in R \mid \exists s \in S \text{ such that $r \leq^{\mathbf{R}} s$}\}$. Let $l(\mathbf{R})$ be the order defined by equipping the set of all antichains in $\mathbf{R}$ with the relation $A \leq^{l(\mathbf{R})} A'$ if and only if $(A]\subseteq (A']$. Note that, if $\textup{width}(\mathbf{R})$ is considered as a constant, the construction of $l(\mathbf{R})$ is polynomial-time computable from $\mathbf{R}$.
The three needed facts are the following. First, for any (finite) poset $\mathbf{R}$, the structure $l(\mathbf{R})$ is a (finite) distributive lattice \cite[Proposition~5.5.5]{Schroder03}. Second, the substructure of $l(\mathbf{R})$ generated by the join irreducible elements is isomorphic to $\mathbf{R}$ \cite[Theorem~5.5.6]{Schroder03}; recall that, if $\mathbf{L}=(L,\leq)$ is a lattice, then $j \in L$ is \emph{join irreducible} if, for all $l,l' \in L$, if $j$ is the least upper bound of $l$ and $l'$, then $j=l$ or $j=l'$. Third, the isomorphism problem restricted to finite distributive lattices is polynomial-time tractable \cite{GorazdIdziak95}. Using the previous facts, we design the following algorithm. Let $w \in \mathbb{N}$ be the upper bound on the width of posets in $\mathcal{P}$. Let $(\mathbf{Q},\mathbf{P})$ be an instance of $\textsc{Iso}(\mathcal{P})$. Let $|P|=n$. If $|Q| \neq n$, or $\textup{width}(\mathbf{Q})>w$, or $\textup{width}(\mathbf{Q}) \neq \textup{width}(\mathbf{P})$, then reject; the condition is checkable in time $O(w \cdot n^2)$ by Theorem~\ref{th:felsner}. Otherwise, in polynomial time, compute $l(\mathbf{Q})$ and $l(\mathbf{P})$ and accept if and only if $l(\mathbf{Q})$ and $l(\mathbf{P})$ are isomorphic. The algorithm clearly runs in polynomial time. For correctness, notice that $\mathbf{Q}$ and $\mathbf{P}$ are isomorphic if and only if $l(\mathbf{Q})$ and $l(\mathbf{P})$ are isomorphic. For the nontrivial direction (backwards), if $f$ is an isomorphism from $l(\mathbf{Q})$ to $l(\mathbf{P})$, then let $f'$ be the restriction of $f$ to the join irreducible elements of $l(\mathbf{Q})$. It is easy to check that $f'$ is a bijection onto the join irreducible elements of $l(\mathbf{P})$, hence, using the second fact mentioned above, $f'$ is an isomorphism between $\mathbf{Q}$ and $\mathbf{P}$.
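To make the lattice construction concrete, the following minimal Python sketch (our own illustration, using a toy four-element \lq\lq diamond\rq\rq\ poset; the variable names and the example poset are our assumptions, not code from this paper) builds $l(\mathbf{R})$ by enumerating antichains, orders them by inclusion of the downsets they generate, and recovers the elements of $\mathbf{R}$ as the join irreducible downsets:

```python
from itertools import combinations

# Illustrative sketch only: a toy "diamond" poset R on {0, 1, 2, 3}
# with 0 <= 1, 0 <= 2, 1 <= 3, 2 <= 3 (reflexive-transitive order).
universe = [0, 1, 2, 3]
leq_pairs = {(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)} | {(x, x) for x in universe}

def leq(a, b):
    return (a, b) in leq_pairs

def antichains():
    # All subsets of the universe whose elements are pairwise incomparable.
    for k in range(len(universe) + 1):
        for S in combinations(universe, k):
            if all(not leq(a, b) and not leq(b, a)
                   for a, b in combinations(S, 2)):
                yield S

def downset(S):
    # (S] = { r in R | r <= s for some s in S }.
    return frozenset(r for r in universe if any(leq(r, s) for s in S))

# l(R): antichains ordered by inclusion of the downsets they generate;
# distinct antichains generate distinct downsets, so we keep the downsets.
L = {downset(A) for A in antichains()}

def join_irreducible(D):
    # In a lattice of downsets the join is union, so D is join irreducible
    # iff D is nonempty and not the union of the strictly smaller downsets.
    if not D:
        return False
    smaller = [E for E in L if E < D]
    covered = frozenset().union(*smaller) if smaller else frozenset()
    return covered != D

irreducibles = [D for D in L if join_irreducible(D)]
print(len(L), len(irreducibles))  # 6 lattice elements, 4 join irreducibles
```

For this toy poset, $l(\mathbf{R})$ has six elements and exactly four join irreducibles (the principal downsets), in bijection with the four elements of $\mathbf{R}$, matching the second fact above.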
\end{proof} \section{Conclusion}\label{sect:concl} We embarked on the study of the model checking problem on posets; compared to graphs, the problem is largely unexplored, and we made a first contribution by studying basic syntactic fragments (existential logic) and fundamental poset invariants (including width, depth, and degree). Our complexity classification for existential logic also carries over to the \emph{jump number} (between size and width in Figure~\ref{fig:overwparcompl}); a future direction is to extend our study to \emph{dimension} (above width \cite{CaspardLeclercMonjardet12} and degree \cite{FurediKahn86} in Figure~\ref{fig:overwparcompl}). Our main algorithmic result, fixed-parameter tractability of existential logic on bounded width posets, raises the natural question of whether model checking the full first-order logic is fixed-parameter tractable on classes of posets of bounded width. We propose this as a topic for future research. \shortversion{\acks This research was supported by ERC Starting Grant (Complex Reason, 239962) and FWF Austrian Science Fund (Parameterized Compilation, P26200).} \bibliographystyle{abbrvnat} \longversion{\newpage}
\section{Introduction} \label{sec:intro} In the $\Lambda$CDM model, structure builds up hierarchically, with smaller objects merging to build larger ones. One consequence of this is the presence of stellar streams in the Galactic halo, produced as accreted objects are tidally disrupted (see e.g. \citealt{Lynden-Bell1995} and references therein). A well-known example is the stream extending from the Sagittarius dwarf spheroidal, which has been traced more than one full wrap around the Milky Way (e.g., \citealt{Ibata1994, Majewski2003}). In a part of the sky dubbed the ``Field of Streams'' \citep{Belokurov2006b}, at least two wraps of the Sagittarius stream as well as several other structures are visible. A new velocity structure in this field has recently been discovered overlapping with the ultra-faint object Segue\,1 \citep{Belokurov2007}: radial velocity data obtained by \citet{Geha2009} to determine Segue\,1's velocity dispersion revealed a group of stars moving near 300\,km\,s$^{-1}$ (with a dispersion of $\sim10$\,km\,s$^{-1}$). For comparison, the Segue\,1 dwarf has a mean velocity of 208\,km\,s$^{-1}$ with a dispersion of 3\,km\,s$^{-1}$ (Simon et al. 2011; hereafter S11)\nocite{Simon2011}. The same overdensity of stars at this velocity was seen by Norris et al. (2010; hereafter N10)\nocite{Norris2010a}, and in the larger spectroscopic sample of Segue\,1 stars of S11. These stars have so far been interpreted as part of a stellar stream independent of Segue\,1, and we shall refer to this structure as the ``300\,km\,s$^{-1}$ stream'' or ``300S'' throughout this paper. Since the full extent of this structure has not been mapped out, however, it is also possible that these stars belong to a bound object. From SDSS photometry, one can obtain an estimate of the distance and metallicity of the 300\,km\,s$^{-1}$ stream by comparing its color-magnitude diagram to globular cluster sequences and isochrones. To further characterize the stream chemically, however, high-resolution spectroscopy is necessary. 
Here, we present the first high-resolution spectrum and detailed abundance analysis of a star in 300S. This paper is organized as follows. In Section~\ref{sec:medres}, we briefly discuss the sample from which the main target for high-resolution spectroscopy was selected. The observations and basic spectral analysis of our target star is presented in Section~\ref{sec:obs}, and the abundance analysis in Section~\ref{sec:abund}. We interpret the results in Section~5, and discuss the nature and potential origin of the stream in Section~\ref{sec:conc}. \section{Stream sample and target selection} \label{sec:medres} There are two existing medium-resolution spectroscopic studies of the region around the ultra-faint dwarf galaxy Segue\,1, N10 and S11. N10's data were obtained with the Anglo-Australian Telescope's AAOmega spectrograph, which can take simultaneous spectra of 400 targets over a field 2$^{\circ}$ in diameter. They were targeting stars in the RGB locus. S11, on the other hand, were using the {\small DEIMOS} spectrograph on the Keck\,II telescope, focusing on an area within $\sim 15$\arcmin\, of the center of Segue\,1, and going deeper than N10. Table~\ref{tab:members} lists all stars (52 total) in the two samples with heliocentric radial velocities higher than 240\,km\,s$^{-1}$. This value corresponds to about the cutoff of Segue\,1 dwarf galaxy stars in S11 (their Figure 3). It conservatively includes all stars with velocities that are not consistent with Segue\,1 membership, and hence potential 300S candidates. The estimated velocity uncertainty for the N10 stars is 10\,km\,s$^{-1}$; the individual velocity uncertainties for S11 stars are shown in the table. The list includes four targets from the AAOmega sample not published in N10 due to slightly lower confidence in the radial velocity measurement. We also list photometry from the Sloan Digital Sky Survey, DR7 \citep{Abazajian2009}. Figure~\ref{fig:col_mag} summarizes the properties of the two samples. 
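The velocity selection and the summary statistics used below follow the standard estimators: the sample mean, the dispersion with $N-1$ weighting, and standard errors of $s/\sqrt{N}$ for the mean and approximately $s/\sqrt{2N}$ for the dispersion. A minimal sketch, using illustrative made-up velocities rather than the actual Table~\ref{tab:members} values:

```python
import math

def sample_stats(velocities):
    """Mean, sample dispersion (ddof = 1), and their approximate standard errors."""
    n = len(velocities)
    mean = sum(velocities) / n
    disp = math.sqrt(sum((v - mean) ** 2 for v in velocities) / (n - 1))
    err_mean = disp / math.sqrt(n)      # standard error of the mean
    err_disp = disp / math.sqrt(2 * n)  # Gaussian approximation for the dispersion error
    return mean, disp, err_mean, err_disp

# Illustrative heliocentric velocities (km/s) -- NOT the actual sample
v_helio = [288.0, 295.0, 298.0, 300.0, 302.0, 305.0, 307.0, 312.0, 321.0]
candidates = [v for v in v_helio if v > 240.0]  # the >240 km/s candidate cut
mean, disp, e_mean, e_disp = sample_stats(candidates)
print(f"mean = {mean:.1f} +/- {e_mean:.1f} km/s, "
      f"dispersion = {disp:.1f} +/- {e_disp:.1f} km/s")
```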
The top left panel shows a histogram of the heliocentric radial velocities measured. The central peak ($\sim$ 270-330\,km\,s$^{-1}$) has a mean velocity of $300.4 \pm 1.1\, \mbox{km\,s}^{-1}$ and a dispersion of $10.3 \pm 1.2\,\mbox{km\,s}^{-1}$. Considering the independent N10 and S11 samples separately we still arrive at a dispersion of 10\,km\,s$^{-1}$ for the central peak, and similar means ($303.3 \pm 3$\,km\,s$^{-1}$ for N10 versus $299.1 \pm 1$\,km\,s$^{-1}$ for S11). It is not clear whether the intermediate ``bridge'' candidates with 240\,km\,s$^{-1}$ $< v_{helio} < 270$\,km\,s$^{-1}$ or the three extreme-velocity stars with $v_{helio} > 340$\,km\,s$^{-1}$ should be considered part of the same structure, but follow-up observations tracing the stream over a larger field of view could resolve this. The top right panel of Figure~\ref{fig:col_mag} shows the color-magnitude diagram of all the stream candidate stars from Table~\ref{tab:members}. Here, stream candidates are shown as black circles. (Three stars redder than $(g-i) = 1.2 $ are not shown.) For comparison, the green triangles show radial velocity members of the Segue\,1 dwarf galaxy, which was the target of the N10 and S11 studies. Open symbols show stars from the N10/AAO sample, while filled symbols are from the S11 sample. Note that the N10 sample covers a much larger field of view but does not go as deep -- as a result, most of the bright stream stars, including our target (star symbol), show up in this sample. In order to illustrate the photometric criteria used in the samples, the blue and red dots show the stars observed by N10 and S11, respectively, that met their photometric selection cut for follow-up spectroscopy, but are radial velocity non-members of Segue\,1 and the stream. The solid line shows the M5 cluster sequence \citep{An2008}, which gives a good fit to the stream main sequence and subgiant branch when shifted to a distance of 18\,kpc. 
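The 18\,kpc shift corresponds to a distance modulus $m - M = 5\log_{10}(d/10\,\mbox{pc})$; a quick check of the arithmetic:

```python
import math

def distance_modulus(d_pc):
    """Distance modulus m - M for a distance given in parsecs."""
    return 5.0 * math.log10(d_pc / 10.0)

# Best-fit stream distance of 18 kpc from shifting the M5 sequence
mu = distance_modulus(18000.0)
print(f"m - M = {mu:.2f} mag")  # -> m - M = 16.28 mag
```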
The metallicity of M5 is $\mbox{[Fe/H]} = -1.3$ (e.g., \citealt{Carretta2009}), and the SDSS photometry suggests that 300S is slightly more metal-rich than Segue\,1 \citep{Simon2011}. We note, however, that the red giant branch of the stream is bluer than this sequence and, in principle, would be better fit by the RGB of the more metal-poor globular cluster M92. In that case, however, the better-populated turnoff region is not well fitted, so we adopt the M5 sequence. There are six stream candidate stars brighter than $r = 19$, and thus possible candidates for high-resolution spectroscopy. Our chosen target (SDSSJ100914.95+155948.4; the first star in Table~\ref{tab:members}) is marked with an open star symbol. It is identified as ``Segue\,1-11'' in N10, but we shall refer to it as ``300S-1'' throughout this paper. Its location in the color-magnitude diagram indicates that it is most likely a red giant, though it could also be consistent with a horizontal branch star at this distance. Both its colors and the radial velocity of 301\,km\,s$^{-1}$ measured by N10 are consistent with stream membership. A medium-resolution spectrum, taken as part of the N10 campaign, indicates that 300S-1 is more metal-rich than a typical Segue\,1 metallicity -- consistent with S11's prediction based on isochrone fitting. Accordingly, we deemed it the best candidate for high-resolution spectroscopy follow-up. The nature of the other five bright stars is less clear. As noted in \citet{Geha2009}, random halo stars at these extreme velocities are very rare. If these were indeed turnoff stars, the stream would have a coherent velocity over four magnitudes in distance modulus, which seems unlikely. The three stars around $r\sim 17.5$ could be red horizontal branch stars, but if so, this raises the question of why we do not see more red giants, assuming the numbers of horizontal branch stars and red giants to be roughly equal. 
The nature of the very bright star at $r = 15.8$ is also not understood but additional data of these brighter stars could provide more insight. Finally, the spatial coverage of the two samples is shown in the two bottom panels. The brighter stream stars in the N10 sample (left) extend at least over 1$^{\circ}$ on the sky; the deeper sample of S11 (right) only covers a 15\arcmin\, radius around the center of Segue\,1. The box size here represents the full field of view observed in the N10 sample, and so for the lower left panels, again, all stars observed by N10 are shown to better illustrate the distribution. New photometric observations to determine the full extent of this stream on the sky, as well as deeper observations in a larger region than that covered by S11, would be very important for better understanding and characterizing this structure. \begin{deluxetable*}{lcccccccc} \tabletypesize{\scriptsize} \tablecaption{Stream Member Candidate Stars\label{tab:members}} \tablehead{ \colhead{Identifier} & \colhead{R. 
A.} & \colhead{Dec.} & \colhead{$g$} & \colhead{$r$} & \colhead{$i$} & \colhead{$g \-- r$} & \colhead{$V_{helio}$} & \colhead{Ref} \\ & (J2000) & (J2000) & & & & & (km s$^{-1}$) & } \startdata 300S-1\tablenotemark{a} & 10 09 15.0 & $+$15 59 48.4 & 17.99 & 17.49 & 17.26 & 0.73 & 307 & N10 \\ 300S-2 & 10 07 40.1 & $+$16 03 09.7 & 19.96 & 19.55 & 19.44 & 0.52 & 298/303.1$\pm$3.1 & N10/S11 \\ 300S-3 & 10 06 59.0 & $+$15 44 18.8 & 19.72 & 19.33 & 19.14 & 0.58 & 307 & N10 \\ 300S-4 & 10 06 51.8 & $+$15 49 41.5 & 20.36 & 20.04 & 19.91 & 0.45 & 300 & AAO \\ 300S-5 & 10 06 20.0 & $+$15 46 12.7 & 20.09 & 19.78 & 19.67 & 0.42 & 315 & AAO \\ 300S-6 & 10 06 12.0 & $+$15 45 48.4 & 20.08 & 19.62 & 19.61 & 0.47 & 327 & N10 \\ 300S-7 & 10 05 30.6 & $+$15 54 18.1 & 19.83 & 19.49 & 19.31 & 0.52 & 305 & N10 \\ 300S-8 & 10 06 27.7 & $+$15 54 08.6 & 20.15 & 19.88 & 19.71 & 0.44 & 300 & AAO \\ 300S-9 & 10 06 47.7 & $+$16 13 30.1 & 20.01 & 19.70 & 19.51 & 0.50 & 300 & AAO \\ 300S-10 & 10 06 58.5 & $+$16 20 45.6 & 19.84 & 19.37 & 19.18 & 0.66 & 295 & N10 \\ 300S-11 & 10 08 39.7 & $+$16 28 26.7 & 19.16 & 18.73 & 18.60 & 0.56 & 288 & N10 \\ 300S-12 & 10 06 00.2 & $+$16 05 18.6 & 20.22 & 19.68 & 19.51 & 0.71 & 242 & N10 \\ 300S-13 & 10 06 18.7 & $+$16 03 39.0 & 21.67 & 21.14 & 20.90 & 0.77 & 268.6 $\pm$ 3.1 & S11 \\ 300S-14 & 10 06 20.0 & $+$16 00 42.1 & 21.70 & 21.41 & 21.18 & 0.52 & 305.5 $\pm$ 4.3 & S11 \\ 300S-15 & 10 06 30.9 & $+$16 15 12.1 & 21.47 & 21.13 & 20.95 & 0.52 & 321.0 $\pm$ 8.8 & S11 \\ 300S-16 & 10 06 42.8 & $+$15 57 09.3 & 21.56 & 21.23 & 21.02 & 0.54 & 291.9 $\pm$ 6.4 & S11 \\ 300S-17 & 10 06 46.8 & $+$16 06 08.3 & 20.53 & 20.22 & 20.07 & 0.46 & 294.5 $\pm$ 2.9 & S11 \\ 300S-18 & 10 06 48.5 & $+$16 09 58.1 & 20.27 & 19.98 & 19.79 & 0.48 & 306.0 $\pm$ 2.6 & S11 \\ 300S-19 & 10 06 50.8 & $+$16 03 51.2 & 22.13 & 21.95 & 21.83 & 0.30 & 312.6 $\pm$11.9 & G09, S11 \\ 300S-20 & 10 06 54.2 & $+$15 55 20.7 & 22.12 & 21.53 & 21.29 & 0.83 & 268.5 $\pm$ 6.2 & S11 \\ 300S-21 & 10 06 
56.1 & $+$16 06 60.0 & 21.34 & 21.09 & 20.99 & 0.35 & 302.0 $\pm$ 2.6 & S11 \\ 300S-22 & 10 06 58.5 & $+$15 57 48.9 & 21.56 & 21.09 & 20.84 & 0.72 & 290.5 $\pm$ 3.6 & S11 \\ 300S-23 & 10 07 04.6 & $+$16 01 30.8 & 21.22 & 20.81 & 20.76 & 0.46 & 295.8 $\pm$ 3.9 & S11 \\ 300S-24 & 10 07 04.6 & $+$16 08 12.6 & 21.08 & 20.85 & 20.74 & 0.34 & 296.9 $\pm$ 3.4 & S11 \\ 300S-25 & 10 07 08.4 & $+$15 56 46.3 & 22.03 & 21.56 & 21.45 & 0.58 & 286.3 $\pm$ 5.4 & S11 \\ 300S-26 & 10 07 09.1 & $+$16 04 36.6 & 22.29 & 21.88 & 21.57 & 0.72 & 312.7 $\pm$ 6.4 & S11 \\ 300S-27 & 10 07 09.7 & $+$15 53 12.3 & 16.11 & 15.83 & 15.72 & 0.39 & 303.4 $\pm$ 2.2 & S11 \\ 300S-28 & 10 07 13.0 & $+$15 57 34.8 & 18.28 & 18.00 & 17.87 & 0.41 & 307.9 $\pm$ 2.4 & S11 \\ 300S-29 & 10 07 13.7 & $+$16 04 44.8 & 22.13 & 21.76 & 21.47 & 0.66 & 293.4 $\pm$ 4.8 & G09, S11 \\ 300S-30 & 10 07 15.5 & $+$16 05 52.1 & 20.36 & 20.07 & 19.97 & 0.39 & 282.0 $\pm$ 2.8 & S11 \\ 300S-31 & 10 07 15.5 & $+$16 15 19.1 & 20.74 & 20.43 & 20.42 & 0.32 & 266.8 $\pm$ 3.1 & S11 \\ 300S-32 & 10 07 17.2 & $+$16 05 11.9 & 22.10 & 21.78 & 21.38 & 0.72 & 266.3 $\pm$ 4.4 & S11 \\ 300S-33 & 10 07 17.4 & $+$16 03 55.6 & 20.11 & 19.77 & 19.64 & 0.47 & 295.9 $\pm$ 2.4 & G09, S11 \\ 300S-34 & 10 07 20.0 & $+$16 01 37.5 & 17.62 & 17.27 & 17.12 & 0.50 & 312.7 $\pm$ 2.2 & S11 \\ 300S-35 & 10 07 21.2 & $+$16 11 18.2 & 20.99 & 19.73 & 19.20 & 1.79 & 281.6 $\pm$ 2.4 & S11 \\ 300S-36 & 10 07 21.8 & $+$15 54 24.5 & 20.61 & 20.22 & 20.22 & 0.39 & 307.2 $\pm$ 5.5 & S11 \\ 300S-37 & 10 07 29.6 & $+$16 11 07.1 & 20.35 & 19.34 & 19.02 & 1.33 & 309.6 $\pm$ 2.2 & S11 \\ 300S-38 & 10 07 32.5 & $+$16 05 00.5 & 22.58 & 22.04 & 21.91 & 0.67 & 281.1 $\pm$ 6.9 & G09, S11 \\ 300S-39 & 10 07 35.0 & $+$15 54 31.5 & 20.78 & 20.54 & 20.39 & 0.39 & 303.2 $\pm$ 2.8 & S11 \\ 300S-40 & 10 07 37.3 & $+$16 07 46.2 & 21.25 & 20.99 & 20.81 & 0.44 & 296.0 $\pm$ 3.9 & S11 \\ 300S-41 & 10 07 40.2 & $+$15 58 55.6 & 21.32 & 20.96 & 20.80 & 0.52 & 295.8 $\pm$ 3.8 & S11 \\ 
300S-42 & 10 07 42.5 & $+$16 00 06.8 & 22.47 & 22.02 & 21.46 & 1.01 & 296.6 $\pm$10.3 & S11 \\ 300S-43 & 10 07 43.8 & $+$15 49 32.9 & 22.47 & 20.98 & 20.36 & 2.11 & 299.2 $\pm$ 2.5 & S11 \\ 300S-44 & 10 07 47.2 & $+$16 05 45.5 & 20.13 & 19.77 & 19.67 & 0.46 & 294.3 $\pm$ 2.4 & S11 \\ 300S-45 & 10 07 35.9 & $+$16 11 25.7 & 23.79 & 22.08 & 22.00 & 1.79 & 242.2 $\pm$ 7.8 & S11 \\ 300S-46 & 10 06 25.7 & $+$15 54 22.1 & 21.58 & 21.13 & 20.95 & 0.63 & 244.3 $\pm$ 5.6 & S11 \\ 300S-47 & 10 07 11.8 & $+$16 06 30.4 & 22.80 & 22.16 & 21.95 & 0.85 & 247.1 $\pm$15.9 & S11 \\ 300S-48 & 10 07 07.8 & $+$16 07 21.5 & 20.64 & 20.34 & 20.23 & 0.41 & 247.7 $\pm$ 2.8 & S11 \\ 300S-49 & 10 07 35.2 & $+$15 57 15.3 & 22.53 & 21.13 & 20.41 & 2.12 & 255.1 $\pm$ 3.0 & S11 \\ 300S-50 & 10 06 28.4 & $+$15 56 28.8 & 17.85 & 17.56 & 17.43 & 0.42 & 347.1 $\pm$ 2.9 & S11 \\ 300S-51 & 10 07 36.9 & $+$15 59 58.9 & 21.99 & 21.87 & 21.60 & 0.39 & 373.0 $\pm$ 6.0 & S11 \\ 300S-52 & 10 06 51.7 & $+$16 17 59.2 & 21.40 & 20.92 & 20.95 & 0.45 & 394.9 $\pm$ 8.9 & S11 \enddata \tablenotetext{a}{Target star, referenced as Segue1-11 in N10} \end{deluxetable*} \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=8.5cm,clip=true,bbllx=70, bblly=362,bburx=545, bbury=714]{velhist.ps} & \includegraphics[width=8.5cm,clip=true,bbllx=12, bblly=178,bburx=600, bbury=613]{cm_2_withhb_2.ps} \\ \includegraphics[width=8.5cm,clip=true,bbllx=50, bblly=360,bburx=502, bbury=714]{coord1.ps} & \includegraphics[width=8.5cm,clip=true,bbllx=50, bblly=360,bburx=502, bbury=714]{coord2.ps} \\ \end{tabular} \caption{\scriptsize Summarizing the properties of the stream star candidates found in N10 and S11. Top left: Heliocentric radial velocity histogram of all stars in the combined N10 and S11 samples with velocities greater than 240\,km\,s$^{-1}$. The black dashed line shows stars in the N10 sample, the blue dotted lines show stars in the S11 sample, and the solid red line shows the total. 
Top right: Color-magnitude diagram of the 300\,km\,s$^{-1}$ stream candidates found in S11 (filled circles) and N10 (open circles). The open star symbol denotes our target. The green triangles show radial velocity members of the ultra-faint dwarf galaxy Segue\,1, among which the stream stars were discovered. Blue (N10) and red (S11) dots show the remaining stars that still meet the photometric cuts in either sample, but are not radial velocity members of Segue\,1 or the stream. Photometry is from the SDSS-DR7 \citep{Abazajian2009}. The black line shows the globular cluster sequence of M5 \citep{An2008}, shifted according to reddening ($E(B-V)=0.03$) and a best-fit distance of 18\,kpc. A rough HB is shown to guide the eye. Bottom: Coordinate plots of stream candidate stars, showing the N10 sample (left) and S11 sample (right). The box size represents the 2$^{\circ}$ field of view covered by N10 centered on Segue\,1; S11 on the other hand only observed stars within 15\arcmin\, of Segue\,1's center. Symbols as in the color-magnitude diagram.} \label{fig:col_mag} \end{center} \end{figure*} \section{Observations and Data Analysis} \label{sec:obs} \subsection{Observations} We obtained a spectrum of 300S-1 with the MIKE spectrograph \citep{mike} on the Magellan Clay telescope in March 2010. The total exposure time for this $V = 17.6$\,mag star was 4.5 hours, distributed over six exposures to allow for removal of cosmic rays. MIKE spectra have nearly full optical wavelength coverage from $\sim3500$-9000\,{\AA}. Using a 1.0\arcsec\, slit and $2\times2$ on-chip binning, a resolution of $\sim22,000$ is achieved in the red, and $\sim28,000$ in the blue wavelength regime. The data were reduced using an echelle data reduction pipeline made for MIKE\footnote{Available at http://obs.carnegiescience.edu/Code/python}. The reduced individual orders were normalized and merged to produce final one-dimensional blue and red spectra ready for the analysis. 
The $S/N$ of the data of this faint object is modest: 17 at $\sim4500$\,{\AA} and 20 at $\sim5200$\,{\AA}. In addition to our main target, we also took MIKE spectra of three bright comparison stars chosen from the Fulbright (2000; hereafter F00)\nocite{Fulbright2000} sample in March 2011. These comparison stars were chosen based on having stellar parameters and metallicities that bracketed our first estimate for 300S-1. Table~\ref{tab:targets} summarizes the observations of our targets. Spectra were taken with a 1.0\arcsec\, slit and short exposures, in order to get a similar data quality and $S/N$ to that of the 300S-1 spectrum. Figure~\ref{fig:mglines} shows the spectrum of 300S-1 and the short-exposure spectra of the comparison stars in the region around the Mg b lines at 5170\,{\AA}. By comparing stellar parameters and abundances derived from these short-exposure spectra to the published values, we are able to assess the accuracy of our low $S/N$ spectrum of 300S-1. \begin{deluxetable*}{lcccccccc} \tabletypesize{\scriptsize} \tablecaption{Observed Targets \label{tab:targets}} \tablehead{ \colhead{Name} & \colhead{R. 
A.} & \colhead{Dec.} & \colhead{$V$} & \colhead{$B \-- V$} & \colhead{UT Date} & \colhead{UT Start} & \colhead{$t_{exp}$} \\ & (J2000) & (J2000) & & & & & ($s$) } \startdata 300S-1 & 10 09 15.0 & $+$15 59 48.4 & 17.70\tablenotemark{a} & 0.68\tablenotemark{a} & 03/08/2010 & 02:35 & 3000 \\ & & & & & 03/08/2010 & 05:30 & 3600 \\ & & & & & 03/19/2010 & 05:06 & 3000 \\ & & & & & 03/22/2010 & 00:37 & 3100 \\ & & & & & 03/22/2010 & 05:22 & 2800 \\ & & & & & 03/23/2010 & 05:08 & 700 \\ HIP37335 & 07 39 50.1 & $-$01 31 20.4 & 9.25 & 0.82 & 03/13/2011 & 23:41 & 7 \\ HIP47139 & 09 36 20.0 & $-$20 53 14.8 & 8.34 & 1.01 & 03/12/2011 & 10:08 & 1 \\ HIP68807 & 14 05 13.0 & $-$14 51 25.5 & 7.25 & 0.93 & 03/13/2011 & 23:53 & 5 \enddata \tablenotetext{a}{Transformed from SDSS photometry following \citet{Jordi2006}} \end{deluxetable*} \begin{figure} \begin{center} \includegraphics[clip=true,width=8.5cm,bbllx=55, bblly=12, bburx=510, bbury=370]{Mgspecs_fixed.eps} \caption{High-resolution spectrum of our target star, 300S-1, and the three comparison stars from the F00 sample, in the region around the Mg b lines. Shown here are the short exposures of the comparison stars, taken to obtain a similar resolution and $S/N$ as the 300S-1 spectrum.} \label{fig:mglines} \end{center} \end{figure} \subsection{Line Strength Measurements} \label{sec:linemes} We obtained a first estimate for the radial velocity from the two strong Mg b lines in the green region of the spectrum, and two additional Mg lines in the blue. Equivalent widths were then measured by fitting Gaussian profiles to the metal absorption lines, and our estimate was corrected based on the mean radial velocity from all the lines measured. Using these 366 lines, we find a heliocentric radial velocity of $307.6$\,km\,s$^{-1}$, with a standard error of the mean of 0.1\,km\,s$^{-1}$. 
This is slightly higher than the estimate of $301$\,km\,s$^{-1}$ from N10 based on the medium-resolution spectrum, but consistent within their estimated velocity uncertainty of 10\,km\,s$^{-1}$. Moreover, our measurement is well within the estimated range for the 300\,km\,s$^{-1}$ stream. The linelist used for the abundance analysis is based on lines presented in \citet{Roederer2010}, \citet{Aoki2007b}, and \citet{Cayrel2004}. In the instances where the same line was included in more than one (original) linelist, the most up-to-date oscillator strength was used, following \citet{Roederer2010}. This linelist was initially compiled for work on stars more metal-poor than the target, but apart from the strongest lines, we found it to work well for this metallicity range also. For atomic lines, equivalent widths were measured by fitting Gaussian profiles. Lines with reduced equivalent widths $\log$(EW/$\lambda$) $> -4.5$ were not used for abundance determination, since they fall near the flat part of the curve-of-growth. Given the noise in the spectra, most lines with EW $\lesssim 20$\,m{\AA} were cautiously excluded from the analysis, except when the $S/N$ in the respective wavelength range allowed for a $>3\sigma$ detection (e.g., in the red spectral region). For molecular features and elements with hyperfine splitting, we used a spectrum synthesis approach. The abundance was then determined by matching synthesized spectra of different abundances to the observed spectra. See Section~\ref{sec:abund} for details. \subsection{Stellar Parameters} \label{sec:stpar} The stellar parameters were determined iteratively using the iron lines in each spectrum. First, the microturbulence is fixed by demanding that the line abundances show no trend with reduced equivalent width ($\log \mbox{EW}/\lambda$). Similarly, the effective temperature is set by requiring no trend of abundance with the excitation potential of the lines. 
Finally, the gravity is fixed by requiring that the abundance derived from Fe\,II lines agree with that obtained from Fe\,I to within 0.05\,dex. By varying the temperature and microturbulence, and comparing the slope to the scatter in the data, we adopt an uncertainty of $\pm$ 150\,K in temperature, and $\pm$ 0.3\,km\,s$^{-1}$ in microturbulence. Similarly, we obtain an uncertainty in the gravity of $\pm$ 0.4\,dex by seeing how much the gravity can be changed with Fe\,I and Fe\,II still being consistent within their uncertainties. Table~\ref{tab:comp_para} shows the resulting stellar parameters for 300S-1 and the three comparison stars obtained with this method, with the values obtained by F00 in parenthesis for comparison. For HIP68807 and HIP47139, our solutions agree well with the parameters published by F00. For HIP37335, we arrive at a slightly higher temperature and gravity, and thus metallicity, than F00. Figure~\ref{fig:isochr} shows the adopted stellar parameters overplotted with theoretical 10\,Gyr isochrones \citep{Kim2002}. Our values agree reasonably well with the tracks within their uncertainties. As its position on the color-magnitude diagram suggested, spectroscopically derived stellar parameters confirm that 300S-1 is located on the red giant branch. We note that the choice for the age of the isochrone does not influence any conclusion since the giant branches are nearly identical for 10 and e.g. 12\,Gyr. For comparison, we also use the SDSS colors of 300S-1 to determine the temperature photometrically by interpolating the SDSS {\it ugriz} colors to the isochrones from \citet{Kim2002}, using the color tables of Castelli (http://wwwuser.oat.ts.astro.it/castelli/). This results in a slightly warmer temperature (depending on which colors we use), with T$_{\mbox{\small{eff}}} \sim 5400$\,K and $\log g \sim 3.5$. If we instead use this set of stellar parameters, we would arrive at [Fe/H] of $-1.3$. 
This is, however, well within our estimate of uncertainty for the metallicity (see Section~\ref{sec:unc}). For the rest of the analysis, we use the parameters derived from spectroscopy to facilitate the relative analysis with the \citet{Fulbright2000} stars. \begin{deluxetable}{lcccccccc} \tabletypesize{\scriptsize} \tablecaption{Derived Stellar Parameters \label{tab:comp_para}} \tablehead{ \colhead{Name} & \colhead{$T_{eff}$} & \colhead{$\log{g}$} & \colhead{$[\mbox{Fe/H}]$} & \colhead{$v_t$} \\ & (K) & (dex) & (dex) & (km\,s$^{-1}$) } \startdata 300S-1 & 5200 & 2.6 & $-1.4$ & 1.5 \\ HIP37335\tablenotemark{a} & 5100 (4850) & 2.9 (2.7)& $-1.0$ ($-1.2$) & 1.5 (1.5) \\ HIP47139 & 4550 (4600) & 0.9 (1.3)& $-1.6$ ($-1.4$) & 2.3 (1.8) \\ HIP68807 & 4600 (4575) & 1.0 (1.1)& $-1.8$ ($-1.8$) & 2.0 (1.9) \enddata \tablenotetext{a}{While our solution for HIP37335 is warmer than what was found in F00, we note that it agrees with other literature sources for this star (e.g., \citealt{Soubiran2008, Cenarro2007, Peterson1981}). \tablenotetext{}{The values in parenthesis are those determined by F00, for comparison.}} \end{deluxetable} \begin{figure} \begin{center} \includegraphics[width=8.5cm,clip=true,bbllx=12, bblly=182,bburx=600, bbury=610]{isochrones_10Gyr_withhb.ps} \caption{Adopted stellar parameters for 300S-1 (filled star), as well as for the three comparison stars (filled triangles). The open triangles show the values for the comparison stars from Fulbright (2000). Error bars are $\pm$ 150\,K and 0.4\,dex in $\log g$. Also shown are theoretical 10-Gyr isochrones at $\mbox{[Fe/H]} = -2.5$, $-1.5$ and $-0.5$ respectively \citep{Kim2002}. A metal-poor horizontal branch has been added to guide the eye.} \label{fig:isochr} \end{center} \end{figure} \subsection{Model Atmospheres} Our abundance analysis utilizes one-dimensional plane-parallel Kurucz model atmospheres with overshooting and $\alpha$-enhancement \citep{kurucz}. 
They are computed under the assumption of local thermodynamic equilibrium (LTE). We use the 2010 version of the MOOG synthesis code (first described in Sneden 1973)\nocite{moog} for this analysis. In this version, scattering is treated as true absorption, which may have consequences for abundances derived from lines in the blue region of the spectrum. \citet{hollek} tested how the stellar parameters are influenced by this effect and found that temperatures and gravities, and hence [Fe/H], are somewhat lower (0.1 to 0.2\,dex) when scattering is properly treated. This average difference arises because metal lines at shorter wavelengths (below $\sim4200$\,{\AA}) yield lower abundances when scattering is treated as Rayleigh scattering; they used metal lines down to 3750\,{\AA}. However, [X/Fe] values were found not to change by more than $\sim0.05$\,dex. Similar results were found by \citet{Frebel2010a} and \citet{venn12}. Hence, our star might be slightly more metal-poor (perhaps by 0.1\,dex), but because we have no metal lines bluer than 4000\,{\AA}, the abundance ratios would not be significantly affected by these different treatments. As discussed below, these effects are well accounted for in our error budget, and moreover, do not affect our conclusions regarding the nature of 300S-1. 
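The bracket abundances reported below follow the standard definitions $\mbox{[X/H]} = \log\epsilon(\mbox{X}) - \log\epsilon(\mbox{X}_{\odot})$ and $\mbox{[X/Fe]} = \mbox{[X/H]} - \mbox{[Fe/H]}$; for example, for Mg (taking [Fe/H] from Fe\,I):

```python
def x_h(logeps_star, logeps_sun):
    """[X/H]: logarithmic abundance relative to the solar value."""
    return logeps_star - logeps_sun

def x_fe(logeps_star, logeps_sun, fe_h):
    """[X/Fe] = [X/H] - [Fe/H]."""
    return x_h(logeps_star, logeps_sun) - fe_h

fe_h = x_h(6.04, 7.50)           # Fe I entries of Table 4 -> [Fe/H] = -1.46
mg_fe = x_fe(6.30, 7.60, fe_h)   # Mg I entries -> [Mg/Fe] = +0.16
print(round(fe_h, 2), round(mg_fe, 2))  # -> -1.46 0.16
```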
\begin{deluxetable}{lcccccccc} \tabletypesize{\scriptsize} \tablecaption{300S-1 Abundances \label{tab:seg11_abund}} \tablehead{ \colhead{Element} & \colhead{$\log\epsilon (\mbox{X}_{\odot})$} & \colhead{$\log\epsilon (\mbox{X})$} & \colhead{$\sigma$} & \colhead{$N$} & [X/H] & [X/Fe] \\ & (dex) & (dex) & (dex) & & (dex) & (dex) } \startdata C (CH) & 8.43 & 7.24 & 0.21 & 2 & $-1.19$ & $+0.27 $ \\ Na\,I & 6.24 & 4.93 & 0.08 & 3 & $-1.31$ & $+0.15 $ \\ Mg\,I & 7.60 & 6.30 & 0.07 & 6 & $-1.30$ & $+0.16 $ \\ Al\,I & 6.45 &$<5.00$&\nodata&2 & $<-1.45$& $<0.01 $ \\ Ca\,I & 6.34 & 5.30 & 0.06 & 16 & $-1.04$ & $+0.42 $ \\ Sc\,II & 3.15 & 1.46 & 0.05 & 3 & $-1.69$ & $-0.23 $ \\ Ti\,I & 4.95 & 3.69 & 0.07 & 18 & $-1.26$ & $+0.20 $ \\ Ti\,II & 4.95 & 3.77 & 0.06 & 17 & $-1.18$ & $+0.28 $ \\ Cr\,I & 5.64 & 3.96 & 0.05 & 8 & $-1.68$ & $-0.22 $ \\ Mn\,I & 5.43 & 3.41 & 0.07 & 3 & $-2.02$ & $-0.56 $ \\ Fe\,I & 7.50 & 6.04 & 0.05 & 103 & $-1.46$ & 0.00 \\ Fe\,II & 7.50 & 6.08 & 0.05 & 10 & $-1.42$ & +0.04 \\ Co\,I & 4.99 & 3.29 & 0.12 & 3 & $-1.70$ & $-0.24$ \\ Ni\,I & 6.22 & 4.73 & 0.06 & 14 & $-1.49$ & $-0.03$ \\ Zn\,I & 4.56 & 3.18 & 0.25 & 2 & $-1.38$ & $+0.08$ \\ Sr\,II & 2.87 & 0.50:& 0.40 & 1 & $-2.37$:& $-0.91$: \\ Ba\,II & 2.18 & 0.83 & 0.21 & 2 & $-1.35$ & $+0.11$ \\ La\,II & 1.10 & $-0.38$ & 0.30 &1 & $-1.48$ & $-0.02$ \\ Eu\,II & 0.52 & $-0.39$:& 0.40 &1 & $-0.91$:& $+0.55$: \enddata \end{deluxetable} \subsection{Carbon} The carbon abundance was determined by synthesis of the carbon G-band head at 4313\,{\AA}, and the CH feature at 4323\,{\AA}. An example of this, comparing the observed spectrum to four synthesized spectra, is shown in Figure~\ref{fig:gband}. Here, the thick red line shows the carbon abundance adopted for this region,while the blue and green show the synthesis with $\Delta \mbox{[C/Fe]} \pm 0.3$\,dex. Synthesis of the feature at 4323\,{\AA} was done independently; the carbon abundance quoted in Table~\ref{tab:seg11_abund} is the mean of the two. 
Given the noise in the data, we adopt an uncertainty of $\pm$ 0.3\,dex for each measurement. \begin{figure} \begin{center} \includegraphics[width=8.5cm,clip=true,bbllx=80, bblly=362,bburx=550, bbury=705]{CH_synth.ps} \caption{Example of determining carbon abundance by synthesis: The black dotted line shows the actual spectrum of 300S-1 at the carbon G-band, while the colored lines show synthesized spectra at different carbon abundances. } \label{fig:gband} \end{center} \end{figure} \subsection{Light elements} Abundances of elements without hyperfine structure were determined from the equivalent width measurements, as described in Section~\ref{sec:linemes}. In that case, the uncertainties listed in Table~\ref{tab:seg11_abund} are the standard error of the mean of the abundances determined from the individual lines for each element. Abundances of elements with hyperfine structure (Mn, Co) were determined by synthesis of individual lines. In general, the abundance patterns derived from the high-resolution spectrum are similar to those of outer halo stars at this metallicity (also see Section~\ref{sec:ab_rat} and Figure~\ref{fig:light_el}). The possible exception is Mg, which at $\mbox{[Mg/Fe]}=0.14$ is low compared to the other $\alpha$-elements. We note, however, that the derived Mg abundance is very sensitive to the assumed surface gravity in the model, and that our Mg measurements for the three comparison stars are lower than that measured in F00 (also see Section~\ref{sec:unc} and Table~\ref{tab:hipabund}), so this could be a systematic effect. Taking the $\alpha$-element abundance as (Ca + Mg + Ti)/3, we find $\mbox{[$\alpha$/Fe]}= +0.26 $. 300S-1 is at the low end of $\alpha$-enhancement compared to most halo stars at this metallicity, but still higher than $\alpha$-abundances seen in classical dwarf spheroidal galaxies (e.g., \citealt{Tolstoy2009}). 
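The quoted $\alpha$-element average is simply the mean of the three [X/Fe] entries of Table~\ref{tab:seg11_abund} (using Ti\,I for titanium):

```python
# [X/Fe] values for the alpha elements from Table 4 (Ca I, Mg I, Ti I)
alpha_ratios = {"Ca": 0.42, "Mg": 0.16, "Ti": 0.20}
alpha_fe = sum(alpha_ratios.values()) / len(alpha_ratios)
print(f"[alpha/Fe] = {alpha_fe:+.2f}")  # -> [alpha/Fe] = +0.26
```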
\subsection{Neutron-capture elements} The strontium abundance was determined by synthesis of the line at 4215\,{\AA}, illustrated in Figure~\ref{fig:nc_syn}. The line at 4077\,{\AA} is also visible in the spectrum but too noisy to use for abundance determination; the data are, however, not inconsistent with what is determined from the line at 4215\,{\AA}, within the uncertainties. Given the noise level even at 4215\,{\AA}, compared to the difference between the synthesized spectra, this value should be regarded as uncertain. In particular, even though the Sr abundance appears abnormally low compared to other stellar populations in Figure~\ref{fig:nc_el}, this is likely not significant given the uncertainty. The barium abundance was determined by synthesis of the lines at 4554 and 6496\,{\AA}, with the abundance quoted in Table~\ref{tab:seg11_abund} being the average of the two. The synthesis of the 6496\,{\AA} line is shown in Figure~\ref{fig:nc_syn}. We adopt an uncertainty of $\pm$ 0.3\,dex. The europium abundance was determined by synthesis of the line at 4129\,{\AA}. Like strontium, there is considerable uncertainty due to the noisy spectrum. The lanthanum abundance was determined by synthesis of the line at 4333\,{\AA}. Other lines are visible but too noisy for a more precise abundance determination; the upper limits derived are, however, consistent with the result from the line at 4333\,{\AA}. Synthesis of the 4333\,{\AA} line is shown in Figure~\ref{fig:nc_syn}. 
\begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[width=8.5cm,clip=true,bbllx=80, bblly=362,bburx=558, bbury=705]{sr4215_synth.ps} & \includegraphics[width=8.5cm,clip=true,bbllx=80, bblly=362,bburx=550, bbury=705]{ba6496_synth.ps} \\ \includegraphics[width=8.5cm,clip=true,bbllx=80, bblly=362,bburx=558, bbury=705]{eu4129_synth.ps} & \includegraphics[width=8.5cm,clip=true,bbllx=80, bblly=362,bburx=550, bbury=705]{la4333_synth.ps} \\ \end{tabular} \caption{Determining the abundances of Sr, Ba, Eu and La by comparing the observed lines to synthesized spectra at different abundances. } \label{fig:nc_syn} \end{center} \end{figure*} \subsection{Uncertainties} \label{sec:unc} Random errors come from uncertainties in placing the continuum level; we estimate the random uncertainty in the abundance of an element as the standard error of the mean abundance determined from individual lines. For elements that were determined from fitting just one line with a synthetic spectrum (Sr, Eu and La), the error quoted in the second column is the estimated fitting uncertainty. Systematic errors arise from uncertainties in the stellar parameters, as described in Section~\ref{sec:stpar}. To quantify this effect, we repeated the analysis with the stellar parameters of 300S-1 changed by $+150$\,K, $+0.4$\,dex, and $+0.3$\,km\,s$^{-1}$ in temperature, log g and microturbulence respectively, and recorded the corresponding change in abundance. Table~\ref{tab:seg11_sigma} shows the result. The total uncertainty is obtained by summing the individual components in quadrature. Another assessment of the uncertainties, given the modest data quality, comes from comparing our abundances from the low $S/N$ spectra of the comparison stars with those of F00. Table~\ref{tab:hipabund} shows the derived abundances of the comparison stars, and also lists the published values from F00.
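The quadrature sum used for the total uncertainty can be sketched as follows, using the C (CH) row of the uncertainty table as a worked example (the signs of the individual components drop out in quadrature):

```python
from math import sqrt

def total_uncertainty(components):
    """Combine independent error components by summing in quadrature."""
    return sqrt(sum(c * c for c in components))

# C (CH) row of the uncertainty table: random error, Delta T_eff,
# Delta log g, Delta v_micr (all in dex).
sigma_c = total_uncertainty([0.21, 0.30, -0.05, -0.02])
print(f"{sigma_c:.2f}")  # -> 0.37, matching the tabulated total
```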
Our abundances are in good agreement within the uncertainties, especially when taking into account the different stellar parameter solution for HIP37335. \begin{deluxetable}{lrrrrr} \tabletypesize{\scriptsize} \tablecaption{Abundance Uncertainties for 300S-1 \label{tab:seg11_sigma}} \tablehead{ \colhead{Elem.} & \colhead{Random} & \colhead{$\Delta T_{eff}$} & \colhead{$\Delta \log g$} & \colhead{$\Delta v_{micr}$} & \colhead{Total} \\ \colhead{}& \colhead{Uncer.}& \colhead{+150\,K}& \colhead{+0.4\,dex}& \colhead{+0.3\,km\,s$^{-1}$}& \colhead{Uncer.}} \startdata C (CH) & 0.21 & 0.30 & $-0.05$ & $-0.02$ & 0.37 \\ Na\,I & 0.08 & 0.16 & $-0.10$ & $-0.04$ & 0.21 \\ Mg\,I & 0.07 & 0.16 & $-0.11$ & $-0.05$ & 0.21 \\ Ca\,I & 0.06 & 0.13 & $-0.05$ & $-0.10$ & 0.18 \\ Sc\,II & 0.05 & 0.02 & $+0.16$ & $-0.02$ & 0.17 \\ Ti\,I & 0.07 & 0.20 & $-0.02$ & $-0.09$ & 0.23 \\ Ti\,II & 0.06 & 0.03 & $+0.15$ & $-0.09$ & 0.19 \\ Cr\,I & 0.05 & 0.19 & $-0.01$ & $-0.08$ & 0.21 \\ Mn\,I & 0.07 & 0.15 & $-0.03$ & $-0.10$ & 0.20 \\ Fe\,I & 0.05 & 0.18 & $-0.04$ & $-0.13$ & 0.23 \\ Fe\,II & 0.05 &$-0.02$ & $+0.16$ & $-0.09$ & 0.19 \\ Co\,I & 0.12 & 0.25 & $+0.00$ & $-0.20$ & 0.34 \\ Ni\,I & 0.06 & 0.14 & $+0.00$ & $-0.06$ & 0.16 \\ Zn\,I & 0.25 & 0.04 & $+0.10$ & $-0.05$ & 0.28 \\ Sr\,II & 0.40 & 0.10 & $+0.02$ & $-0.10$ & 0.42 \\ Ba\,II & 0.21 & 0.09 & $+0.08$ & $-0.18$ & 0.30 \\ La\,II & 0.30 & 0.05 & $+0.15$ & $-0.05$ & 0.34 \\ Eu\,II & 0.40 & 0.05 & $+0.15$ & $-0.05$ & 0.43 \\ \hline \enddata \end{deluxetable} \begin{deluxetable}{lrccccccc} \tabletypesize{\scriptsize} \tablecaption{Standard Star Abundances \label{tab:hipabund}} \tablehead{ \colhead{Element} & \colhead{$\log\epsilon (\mbox{X})$} & \colhead{$\sigma$} & \colhead{N} & \colhead{[X/H]} & \colhead{[X/Fe]}& \colhead{$[\mbox{X/Fe}]_{\mbox{\tiny{F00}}}$} \\ & (dex) & (dex) & & (dex) & (dex) & (dex) } \startdata \\ & & &HIP37335& & & \\ \hline C (CH) & 7.71 & 0.21 & 2 & $ -0.72 $ & $ +0.28 $ & \nodata \\ Na\,I & 5.46 & 0.07 & 4 & $ 
-0.78 $ & $ +0.22 $ & $+0.32 $ \\ Mg\,I & 7.05 & 0.12 & 4 & $ -0.55 $ & $ +0.45 $ & $+0.63 $ \\ Ca\,I & 5.86 & 0.05 & 16 & $ -0.48 $ & $ +0.52 $ & $+0.44 $ \\ Sc\,II & 2.40 & 0.05 & 9 & $ -0.75 $ & $ +0.25 $ & \nodata \\ Ti\,I & 4.20 & 0.05 & 28 & $ -0.75 $ & $ +0.25 $ & $+0.26 $ \\ Ti\,II & 4.20 & 0.05 & 16 & $ -0.75 $ & $ +0.25 $ & \nodata \\ Cr\,I & 4.65 & 0.05 & 14 & $ -0.99 $ & $ +0.01 $ & $-0.05 $ \\ Mn\,I & 4.61 & 0.05 & 3 & $ -0.82 $ & $ +0.18 $ & \nodata \\ Fe\,I & 6.50 & 0.05 & 124 & $ -1.00 $ & $ 0.00 $ & $(-1.26)$ \\ Fe\,II & 6.48 & 0.05 & 15 & $ -1.02 $ & $ -0.02 $ & \nodata \\ Co\,I & 3.55 & 0.05 & 3 & $ -1.44 $ & $ -0.44 $ & \nodata \\ Ni\,I & 5.30 & 0.05 & 24 & $ -0.92 $ & $ +0.08 $ & $+0.10 $ \\ Zn\,I & 3.78 & 0.08 & 2 & $ -0.78 $ & $ +0.22 $ & \nodata \\ Ba\,II & 1.43 & 0.21 & 2 & $ -0.75 $ & $ +0.25 $ & $-0.02 $ \\ La\,II & 0.42 & 0.30 & 1 & $ -0.68 $ & $ +0.32 $ & \nodata \\ Eu\,II &$-0.19$ & 0.30 & 1 & $ -0.71 $ & $ +0.29 $ & $+0.38 $ \\ \\ & & &HIP68807& & & \\ \hline C (CH) & 6.76 & 0.21 & 2 & $ -1.67 $ & $ +0.19 $ & \nodata \\ Na\,I & 4.36 & 0.05 & 3 & $ -1.88 $ & $ -0.02 $ & $ -0.13 $ \\ Mg\,I & 6.17 & 0.05 & 6 & $ -1.43 $ & $ +0.43 $ & $ +0.49 $ \\ Ca\,I & 4.89 & 0.05 & 17 & $ -1.45 $ & $ +0.41 $ & $ +0.37 $ \\ Sc\,II & 1.45 & 0.06 & 6 & $ -1.70 $ & $ +0.16 $ & \nodata \\ Ti\,I & 3.25 & 0.05 & 23 & $ -1.70 $ & $ +0.16 $ & $ +0.20 $ \\ Ti\,II & 3.45 & 0.05 & 18 & $ -1.50 $ & $ +0.36 $ & \nodata \\ Cr\,I & 3.61 & 0.05 & 14 & $ -2.03 $ & $ -0.17 $ & $ -0.12 $ \\ Mn\,I & 3.40 & 0.07 & 3 & $ -2.03 $ & $ -0.17 $ & \nodata \\ Fe\,I & 5.64 & 0.05 & 127 & $ -1.86 $ & $ 0.00 $ & $ (-1.83) $ \\ Fe\,II & 5.68 & 0.05 & 14 & $ -1.82 $ & $ 0.04 $ & \nodata \\ Ni\,I & 4.48 & 0.05 & 12 & $ -1.74 $ & $ +0.12 $ & $ -0.03 $ \\ Zn\,I & 2.84 & 0.20 & 1 & $ -1.72 $ & $ +0.14 $ & \nodata \\ Ba\,II & 0.69 & 0.14 & 3 & $ -1.49 $ & $ +0.37 $ & $ +0.27 $ \\ La\,II &$-0.68$& 0.30 & 1 & $ -1.78 $ & $ +0.08 $ & \nodata \\ Eu\,II &$-0.99$& 0.40 & 1 & $ -1.51 $ & $ +0.35 
$ & $ +0.40 $ \\ \\ & & &HIP47139& & & \\ \hline C (CH) & 6.61 & 0.21 & 2 & $ -1.82 $ & $ -0.23 $ & \nodata \\ O\,I & 8.14 & 0.15 & 1 & $ -0.55 $ & $+ 1.04 $ & \nodata \\ Na\,I & 4.65 & 0.05 & 4 & $ -1.59 $ & $ 0.00 $ & $ -0.18 $ \\ Mg\,I & 6.42 & 0.08 & 4 & $ -1.18 $ & $ +0.41 $ & $ +0.54 $ \\ Ca\,I & 5.05 & 0.05 & 16 & $ -1.29 $ & $ +0.30 $ & $ +0.27 $ \\ Sc\,II & 1.64 & 0.05 & 12 & $ -1.51 $ & $ +0.08 $ & \nodata \\ Ti\,I & 3.52 & 0.05 & 26 & $ -1.43 $ & $ +0.16 $ & $ +0.29 $ \\ Ti\,II & 3.67 & 0.05 & 18 & $ -1.28 $ & $ +0.31 $ & \nodata \\ Cr\,I & 3.94 & 0.07 & 16 & $ -1.70 $ & $ -0.11 $ & $ -0.17 $ \\ Mn\,I & 3.58 & 0.07 & 3 & $ -1.85 $ & $ -0.26 $ & \nodata \\ Fe\,I & 5.91 & 0.05 & 109 & $ -1.59 $ & $ 0.00 $ & $ (-1.46) $ \\ Fe\,II & 5.94 & 0.05 & 11 & $ -1.56 $ & $ 0.03 $ & \nodata \\ Ni\,I & 4.65 & 0.05 & 18 & $ -1.57 $ & $ +0.02 $ & $ +0.00 $ \\ Zn\,I & 2.97 & 0.11 & 2 & $ -1.59 $ & $ -0.00 $ & \nodata \\ Ba\,II & 0.92 & 0.10 & 3 & $ -1.26 $ & $ +0.33 $ & $ +0.16 $ \\ La\,II & $-0.48$ & 0.30 & 1 & $ -1.58 $ & $ +0.01 $ & \nodata \\ Eu\,II & $-0.49$ & 0.40 & 1 & $ -1.01 $ & $ +0.58 $ & \nodata \enddata \end{deluxetable} \section{Characterizing the 300\,km\,s$^{-1}$ Stream} \label{sec:char} \subsection{Stream Membership} As demonstrated by other authors, there is unequivocally a coherent stream present here with a kinematic peak at $v_{helio} = 300$ km s$^{-1}$ \citep{Geha2009, Norris2010a, Simon2011}. With more extreme velocities, halo contaminants become less likely. Given that a stream is present here with high velocities, it is worth quantifying the probability whether the star analysed here is a background halo star or not. We have employed a two-sample Kolmogorov-Smirnov test using the predicted line-of-sight velocities from the Besan\c{c}on model \citep{Robin2003} to quantify this likelihood. 
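In practice a two-sample Kolmogorov-Smirnov test would be computed with a library routine such as `scipy.stats.ks_2samp`; the sketch below implements only the $D$ statistic (the maximum distance between the two empirical CDFs) from the standard library, with synthetic stand-in velocities since the actual Besan\c{c}on predictions are not reproduced here.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D: max distance between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Advance each sample past all values <= x, then compare the CDFs.
        while i < len(a) and a[i] <= x:
            i += 1
        while j < len(b) and b[j] <= x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(42)
# Stand-in for model line-of-sight velocities (km/s): a halo-like Gaussian.
model = [random.gauss(0, 100) for _ in range(500)]
# A kinematically distinct sample centred near the stream velocity.
stream = [random.gauss(300, 10) for _ in range(10)]
print(f"D = {ks_statistic(stream, model):.2f}")
```

A large $D$ (near 1 here, since the two distributions barely overlap) indicates the samples are unlikely to be drawn from the same parent distribution.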
We find a $p$ value of $0.097$, or a $\sim10$\% chance that 300S-1 and predicted stars in the Besan\c{c}on model \citep{Robin2003} are drawn from the same distribution. It is clear that this star sits apart from the main line-of-sight predicted population (Figure \ref{fig:besancon}). We can also deduce some likelihood that 300S-1 is a halo member when we examine the observed velocity distribution in Figure \ref{fig:col_mag}. The peak of the stream velocity distribution occurs at $300$\,km s$^{-1}$ and comprises 39 member stars, amongst background halo outliers with velocities between $250$ and $400$\,km s$^{-1}$. Within the $270-330$\,km s$^{-1}$ range, it is reasonable to suspect that we would observe fewer than 10 halo contaminants in $2^{\circ}$ with such high velocities. Indeed, the observed background range in Figure \ref{fig:col_mag} is approximately 5-10 halo stars, suggesting a probability of $13-26\%$ that 300S-1 is a halo contaminant. We note that this halo-contamination probability ($\gtrsim10$\%) matches the probability cutoff employed by \citet{Simon2011} in examining stream members. \begin{figure*} \includegraphics[width=\columnwidth]{besancon.eps} \caption{Kinematics and metallicities for line-of-sight stars predicted by the Besan\c{c}on model \citep{Robin2003}. The star analysed here, 300S-1, is marked as a filled star.} \label{fig:besancon} \end{figure*} It is well-known that RGB stars in globular clusters exhibit characteristic anti-correlations in Na-O and Al-Mg \citep{Carretta2009}. Given the broad spatial distribution and relatively high kinematic dispersion of this stream compared to typical kinematically cold stellar streams, it is possible that the origin of the 300\,km\,s$^{-1}$ stream is a disrupted globular cluster. Although our spectrum has extremely modest S/N below 4000\,{\AA}, we have attempted to synthesize the Al lines at 3944\,{\AA} and 3961\,{\AA}. The Al lines at $\sim6697$\,{\AA} were not detected.
Hence, we cannot determine an accurate value for [Al/Fe] from the blue lines. We can only exclude a super-solar abundance, i.e., $\mbox{[Al/Fe]}<0$, for this star. However, this is a rather low [Al/Fe] abundance if 300S-1 were a member of a globular cluster. While there are globular cluster stars with $\mbox{[Al/Fe]}<0$ and similar metallicities, they generally have Mg abundances of $0.3 < \mbox{[Mg/Fe]} <0.6$ (compare to \citealt{Carretta2009}, their Figure 5). 300S-1 has $\mbox{[Mg/Fe]} =0.14$ and even if that Mg abundance were systematically low by 0.1 to 0.2\,dex, it would still largely fall outside the region covered by those stars. This suggests that 300S-1 may not be of a globular cluster origin. Unfortunately, given the modest S/N of our spectra, no reliable upper limit on oxygen could be ascertained to provide further clues on this question. \begin{figure*} \begin{center} \includegraphics[width=17cm,clip=true,bbllx=40, bblly=362,bburx=555, bbury=705]{light_el.ps} \caption{Abundance ratios for light elements in 300S-1 (filled star symbol), as measured from our high-resolution spectrum. For comparison, red and cyan points show thin and thick disk stars respectively, blue points show halo stars, and the crosses show stars in the classical dwarf galaxies Draco, Sextans, Ursa Minor, Carina, Fornax, Sculptor and Leo I. A typical error bar for our measurements is shown in the lower-left panel; also see Table~\ref{tab:seg11_sigma}. } \label{fig:light_el} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=17cm,clip=true,bbllx=80, bblly=362,bburx=555, bbury=705]{nc_el.ps} \caption{Abundance ratios for the neutron-capture elements Sr, Ba, La and Eu in 300S-1. Symbols as in Figure~\ref{fig:light_el}.} \label{fig:nc_el} \end{center} \end{figure*} \subsection{Metallicity of Stream} We determine a metallicity of $\mbox{[Fe/H]} = -1.46 \pm 0.05 \pm 0.23 $ (random and systematic uncertainties) for 300S-1 based on the high-resolution spectrum.
This is in agreement with the prediction of $\mbox{[Fe/H]} = -1.3$ from \citet{Simon2011}. This is also consistent with what we roughly estimate from fitting the M5 (with $\mbox{[Fe/H]}=-1.2$) isochrone to the stream photometry. This adds support to the result that the stream stars have higher metallicity than the Segue\,1 system, as already noted in S11 based on Ca triplet equivalent widths. The AAOmega sample of N10 contains some metallicity estimates based on Ca\,II K line strengths; in addition to 300S-1, the spectrum of the stream star Segue1-101 (SDSSJ100659.01+154418.8) has enough counts to yield $\mbox{[Fe/H]} \simeq -1.7$. Beyond this, there are no additional data to determine the metallicity spread of the stream. \subsection{Evolutionary Status} From the spectroscopic analysis we find 300S-1 to be a red giant branch star. Although the star sits slightly above the isochrone, it agrees with the isochrone within the uncertainties in the stellar parameters. From photometry, the star is also found to sit slightly above a shifted M5 globular cluster sequence. It is noteworthy, though, that the scatter of stream members is significant (see Figure~1). Nevertheless, these discrepancies could indicate that the star is on the horizontal branch. The carbon abundance of 300S-1 may shed light on this question. As a star ascends the giant branch, CN cycling converts carbon into nitrogen, thus lowering the observed surface carbon abundances. The measured value of $\mbox{[C/Fe]}=0.25$ suggests that CN cycling has not yet significantly operated, assuming that the star did not form from an unusually carbon-rich gas cloud. Some globular clusters show this effect (e.g., \citealt{carretta05}, their Figure~7), and compared to that, the carbon abundance of 300S-1 also suggests that the star is on the lower or middle part of the red giant branch and not in a more evolved state on the horizontal branch.
\subsection{Abundance Ratios} \label{sec:ab_rat} Figures~\ref{fig:light_el} and \ref{fig:nc_el} show the abundance patterns of 300S-1, based on our high-resolution spectrum. The blue, cyan and red points show stars in the halo, thick disk and thin disk respectively, from the sample of \citet{Fulbright2000, Fulbright2002}. Since the Fulbright sample does not include Cr, Sr and La measurements, comparison points (halo stars) for these elements are taken from \citet{Lai2007} and \citet{Barklem2005}. In addition, the crosses show abundance patterns of stars in the classical dwarf galaxies Draco, Sextans, Ursa Minor, Carina, Fornax, Sculptor and Leo I \citep{Shetrone2001, Shetrone2003, Geisler2005, Aoki2009, Cohen2009}. The abundance ratios of 300S-1 overlap well with the general halo population at that metallicity, though as noted in Section~\ref{sec:abund}, the $\alpha$- and particularly Mg abundances are on the low end. (The Sr abundance also appears low, but as discussed in Section~\ref{sec:abund}, this is likely not significant due to the low $S/N$ of the spectrum in this region.) The general abundance pattern is closer to a typical halo star than to a classical dwarf spheroidal galaxy star. The overall abundance pattern is important for interpreting the nature of the stream and its potential progenitor. Judging from this one star, 300S may be the first stream with halo-like abundances. This situation illustrates that a careful mapping of the region around Segue\,1 and the 300\,km\,s$^{-1}$ stream is of great importance. Initial work on SDSS data to address this question hints that this region is even more complex than assumed thus far. An attempt to decompose the many populations in the Segue\,1 region will be presented in a forthcoming paper (A. Jayaraman et al., in preparation). \subsection{Distance to stream} We use two different methods of estimating the distance to the stream.
First, based on the stellar parameters we assume 300S-1 to be a red giant (Figure~\ref{fig:isochr}). Given the star's metallicity, we chose the $\mbox{[Fe/H]} = -1.5$ isochrone as the ``best fit''. While it is not a perfect match to our stellar parameters, it is reasonably close to our determined values. The corresponding inferred absolute magnitude of 300S-1 is $M_V \simeq +1.37$. Transforming the Sloan photometry following \citet{Jordi2006}, and correcting for extinction according to \citet{Schlegel1998}, we find 300S-1 has $V = 17.60$. This yields a distance modulus of $16.23$, resulting in a best distance estimate of $\simeq 18$\,kpc. Given the uncertainties, however, a distance within $\pm 7$\,kpc of this estimate would still be consistent with the stellar parameters we derived from the spectroscopy. Our distance estimate of 18\,kpc is in good agreement with \citet{Simon2011} who find a distance of 22\,kpc to the stream. A second distance estimate comes from using the photometric data and comparing the color-magnitude diagram to various globular cluster sequences from \citet{An2008}, shifted according to the distance moduli and reddening ($E(B-V)=0.03$ for M5, $E(B-V)=0.02$ for M92) compiled in \citet{Harris1996}. As seen in Figure~\ref{fig:col_mag}, the M5 sequence is a good fit to the stream data when shifted to a distance of 18\,kpc. This is in good agreement with the estimate based on spectroscopically determined stellar parameters. Both the isochrone fit based on the single spectroscopic measurement and the photometric data indicate that the stream stars are at a distance of $\simeq 18$\,kpc, slightly closer than the assumed distance of $23 \pm 2$\,kpc \citep{Belokurov2007} for Segue\,1 itself. However, the uncertainties on both values are substantial enough that we cannot rule out that they are at the same distance. 
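The spectroscopic distance follows directly from the distance modulus $\mu = m - M$, via $d = 10^{(\mu+5)/5}$\,pc. A minimal sketch with the values quoted in the text for 300S-1:

```python
# Distance from the distance modulus, mu = m - M  ->  d = 10**((mu + 5)/5) pc.
# V = 17.60 and M_V = +1.37 are the values quoted in the text for 300S-1.
V, M_V = 17.60, 1.37
mu = V - M_V
d_kpc = 10 ** ((mu + 5.0) / 5.0) / 1000.0
print(f"mu = {mu:.2f}, d = {d_kpc:.1f} kpc")  # -> mu = 16.23, d = 17.6 kpc
```

This reproduces the best distance estimate of $\simeq 18$\,kpc quoted above.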
We also caution that since the stream stars were picked out in color-magnitude filters targeting stars at the distance of Segue\,1, the stream stars in our sample by design cannot be at a very different distance. We note that with galactic coordinates $l, b \simeq 220^{\circ}, 50^{\circ}$ and a heliocentric distance of 18\,kpc, the stream stars are located in the outer Galaxy. A heliocentric radial velocity of 300\,km\,s$^{-1}$ translates to a Galactic Standard of Rest velocity of about 230\,km\,s$^{-1}$ in this direction. This suggests that the stream stars are on a low angular momentum orbit that would eventually bring them into the inner Galaxy. \section{Conclusions} \label{sec:conc} We have presented a high-resolution spectrum and abundance analysis of 300S-1, a bright star in the 300\,km\,s$^{-1}$ stream near the ultra-faint dwarf galaxy Segue\,1. We determine a metallicity $\mbox{[Fe/H]} = -1.46 \pm 0.05 \pm 0.23$ (random and systematic uncertainties) for this star, with abundance ratios similar to typical halo stars at this metallicity. Fitting the stellar parameter solution onto theoretical isochrones, we estimate a distance of $18 \pm 7$\,kpc. Both the metallicity and distance are in good agreement with estimates obtained from comparing the SDSS photometry to globular cluster sequences. With this new information, we present several possible scenarios regarding the nature and origin of the stream: \begin{itemize} \item Since these high-velocity stars were discovered in a survey targeting Segue 1 members, a natural question to ask is whether the stream is related to the Segue\,1 dwarf galaxy. We find this an unlikely scenario for several reasons. First, the study of S11 finds no evidence that the Segue\,1 system is being tidally disrupted.
Our distance estimates, based on both the high-resolution spectroscopy and the photometry, indicate that the stream stars are at a slightly closer distance than Segue\,1, though the data are not good enough to rule out that they are at the same distance. In addition, the color-magnitude diagrams suggest that the stream members are in general at a higher metallicity than Segue\,1, which our high-resolution measurement of one stream star also confirms. \item Another possibility, suggested by \citet{Geha2009}, is that these stars could be associated with the Sagittarius stream. Indeed, at least two wraps of Sagittarius overlap with Segue 1 in this direction, and \citet{Niederste-Ostholt2009} argue that Segue 1 itself is a star cluster from the Sagittarius galaxy. But as for the stream stars, our metallicity measurement indicates that the 300\,km\,s$^{-1}$ stream does not have metallicities representative of Sgr debris \citep{Chou2007, Casey2012}, and no Sgr debris model that we are aware of predicts a wrap at this velocity. Moreover, the part of the Sagittarius stream proposed to be contaminating Segue 1 samples has $v_{GSR} \sim 130$\,km\,s$^{-1}$ \citep{Niederste-Ostholt2009}, while the stream stars have $v_{GSR} \sim 230$\,km\,s$^{-1}$. \item The Orphan Stream \citep{Belokurov2007b} crosses the Sagittarius stream on the sky near Segue\,1, and at a similar distance modulus. Again, however, the velocities do not agree -- the reported velocities of the Orphan Stream are around $v_{GSR} \sim 110$\,km\,s$^{-1}$. In addition, the Orphan Stream is metal-poor, with reported metallicities of $\mbox{[Fe/H]} = -1.63$ to $-2.10$ \citep{Newberg2010,casey13}. Although both the Orphan Stream and Sagittarius Stream overlap with the 300\,km\,s$^{-1}$ stars, then, the combined metallicity and velocity information suggests that they are unrelated.
\item We have assumed throughout this paper that the 300\,km\,s$^{-1}$ feature is a stellar stream, but as pointed out in S11, until we can determine the full spatial extent of these kinematically linked stars, we cannot rule out that the stars belong to a bound object. If so, given the 1$^{\circ}$ extent seen in N10, its physical diameter would be at least 300 ($d / 18$\,kpc) pc. This highlights the need for new photometry to map out the full extent of the 300\,km\,s$^{-1}$ stars. \end{itemize} The fact that this stream may be largely chemically similar to the halo is particularly interesting. This is relevant to chemical tagging \citep{Freeman_Bland-Hawthorn2003}, which posits that stars with a common origin can be unambiguously identified solely by their chemistry, without the need for kinematics. This stream is noticeable only from its kinematics, and not by any particularly distinct chemical signature identified in this work. Although the luminosity, the number of abundances analyzed, and the abundance uncertainties presented here do not meet the strict requirements for the complete chemical tagging planned in future galactic archaeology surveys \citep{Ting2012}, we identify this 300\,km\,s$^{-1}$ stream as a candidate for testing and validating the chemical tagging concept. It would be particularly interesting to determine whether chemical tagging alone could identify members belonging to this 300\,km\,s$^{-1}$ stream without the need for kinematics, as the chemical elements analyzed here are only marginally distinguishable from the halo. Although the 300\,km\,s$^{-1}$ stars are found in a region of sky with many known structures, the combination of velocity, chemistry and distance information makes it unlikely that these stars are associated with any of the Sagittarius stream, the Orphan Stream, or the Segue\,1 dwarf galaxy. We therefore conclude that these stars belong to a new structure in the crowded ``Field of Streams''.
Its features include an extreme mean velocity of 300\,km\,s$^{-1}$ with a velocity dispersion of 7\,km\,s$^{-1}$ (as found by S11), a broad spatial distribution, and halo-like chemical abundances. The abundance patterns in particular make this stream very interesting to study in the context of halo formation. \acknowledgements{ A.F. acknowledges support of an earlier Clay Fellowship administered by the Smithsonian Astrophysical Observatory. A.R.C. acknowledges the financial support through the Australian Research Council Laureate Fellowship 0992131, and from the Australian Prime Minister's Endeavour Award Research Fellowship, which has facilitated his research at MIT. J.E.N. acknowledges support from the Australian Research Council (grants DP063563 and DP0984924) for studies of the Galaxy's most metal-poor stars and ultra-faint satellite systems. R.F.G.W. acknowledges support from NSF grants AST-0908326 and CDI-1124403.} \textit{Facilities:} \facility{Magellan-Clay (MIKE)}
\section{Introduction} Blockchains rely on transaction messages being broadcast (to what is called a `mempool') where agents (`validators' or `miners') assemble them into a block that is appended to a public ledger. Those agents confirm that messages are valid (e.g., that tokens transferred from a wallet are actually owned by that wallet) and the block is accepted as confirmed by other agents (i.e., there is consensus regarding the validity of the proposed block). There are two challenges: (i) that consensus is reached and (ii) that consensus is over a set of truthful messages. For distributed ledgers, `truth' has a specific meaning: that the messages assembled into the block are those in the mempool without any being privately added by validators or intentionally excluded (or censored) by them. Achieving distributed consensus is a challenge because of the dual goals of not having anything bad happen (\textit{safety}) and having something good happen (\textit{liveness}). Typically, improving the probability of one goal being achieved happens at the expense of reducing the probability that the other is achieved. Blockchains divide changes into blocks so that things can happen at a regular pace, while they chain those blocks together through cryptographic hashing, which assists in achieving safety as manipulating one block requires manipulating other blocks. While this assists in achieving safety computationally, practical reliance on block commits often builds in other buffers to further limit the ability to manipulate blocks in a way that affects outcomes outside of the ledger environment (e.g., double spending tokens). Thus, if methods can be found that improve reliance on the ledger, this improves the \textit{effective safety} achieved. There are two broad methods of achieving consensus in blockchains.
In each case, a \textit{serial dictatorship} model is used, whereby a block proposer is randomly selected from the pool of validators whose task it is to propose a block of transactions.\footnote{See \cite{gans2021consensus} for a review. } The first method, invented by \cite{nakamoto2008bitcoin}, is the \textit{longest chain rule} (LCR). This rule asks (but, importantly, does not require) validators to append blocks to the longest chain. So long as there is always a longest chain, this acts as a coordinating device.\footnote{For an analysis of potential coordination issues with LCR blockchains see \cite{biais2019blockchain} and \cite{barrera2018blockchain} for proof of work and \cite{saleh2021blockchain} for proof of stake blockchains.} The second method relies on \textit{Byzantine Fault Tolerance} (BFT) whereby a proposed block is considered confirmed if at least two thirds of validators have sent a message `agreeing' to the proposed block (\cite{pease1980reaching}, \cite{buchman2016tendermint}, \cite{buterin2017casper}). Future proposed blocks must then be chained to the last confirmed block.\footnote{\cite{halaburda2021economic} show that when nodes are not presumed to be honest (or non-faulty) and consider their own payoffs, coordination problems can still arise.} It is generally the case that LCR blockchains achieve coordination in expectation faster (i.e., are more likely to satisfy liveness conditions) than BFT blockchains. While these methods achieve consensus, it is safe to say that consensus on the truth is left to be determined by crowd behaviour. For instance, in LCR blockchains, there can be forks where two equally long chains exist with the potential that at least one of them has untruthful blocks -- e.g., past blocks that exclude messages in order to facilitate a double spending attack. In BFT blockchains, attacks that subvert the operation of the blockchain by allowing consensus to be delayed are possible. 
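The longest chain rule can be sketched as a simple fork-choice function. The toy model below (the names and structure are our own, not drawn from any client implementation) captures only the coordination rule itself: each block records its parent, and a node follows the tip whose chain back to genesis is longest.

```python
# Toy longest-chain fork choice. Real clients also validate blocks,
# handle ties and reorganisations, etc.; this sketches the rule only.

class Block:
    def __init__(self, block_id, parent=None):
        self.block_id = block_id
        self.parent = parent  # None marks the genesis block

    def height(self):
        """Number of links back to genesis."""
        h, b = 0, self
        while b.parent is not None:
            h, b = h + 1, b.parent
        return h

def longest_chain_tip(tips):
    """Return the tip of the longest known chain (ties broken arbitrarily)."""
    return max(tips, key=lambda b: b.height())

genesis = Block("genesis")
a1 = Block("a1", genesis); a2 = Block("a2", a1)  # chain of height 2
b1 = Block("b1", genesis)                        # competing fork, height 1
print(longest_chain_tip([a2, b1]).block_id)      # -> a2
```

So long as there is a unique longest chain, all nodes applying this rule append to the same tip, which is the coordinating role the rule plays in the text.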
In each case, so long as the majority or super-majority of participants (weighted by computational power in proof of work or token holdings in proof of stake) are engaging in truthful messaging -- that is, messages that reflect what has been broadcast to the mempool -- then truthful consensus can be achieved in equilibrium. Nonetheless, in trying to achieve coordination in this way, there is the potential for some disruption if adverse agents coordinate their interventions. The question we address in this paper is whether there are more efficient and more reliable ways to achieve truth in consensus by designing and encoding mechanisms. Mechanism design is the branch of economics that deals with creating incentives for self-interested agents with information not known to the designer to reveal that information truthfully and still be willing to participate in the relevant economic activity. For example, auctions can be viewed as mechanisms for truthful revelation of buyers' willingness to pay to a seller. Without a mechanism, buyers will not want to reveal their true value to a seller who might take advantage of that by charging a higher price. Similarly, without a mechanism, a seller has to guess buyer valuations when setting a price lest buyers choose not to buy or delay purchases. An auction, by specifying how a bid (a reflection of a buyer's true valuation) can be used, can be designed in such a way that the buyer has an incentive to tell the truth. This benefits buyers and the seller compared to a mechanism-less alternative but, critically, it relies on the mechanism being followed by the seller (\cite{akbarpour2020credible}) and bid information being communicated accurately (\cite{hurwicz1972informationally}, \cite{eliaz2002fault}). Typically, mechanisms are conceived of as being designed and then implemented in a centralized manner.
This would, on the face of it, put mechanisms at odds with use in permissionless blockchains, whose modus operandi is decentralized operation. However, while permissionless blockchains are often characterised as decentralized -- i.e., no one entity controls them and there is no single point of failure -- a key part is centralized by virtue of there needing to be consensus on the state of the blockchain. Moreover, the blockchain's code is public and itself regarded as immutable. Thus, the ingredients for both theoretical (i.e., obtaining truthful revelation using incentives) and practical (i.e., transparent, unchangeable and unique code) use of mechanisms are present. This motivates our current examination of that possibility. Our focus is on proof of stake protocols. In proof of work, the scope to deploy a mechanism does not exist because winning the computational game gives the proposer an unfettered right as to the block proposed. In proof of stake, by contrast, validating nodes are required to have committed an amount of tokens prior to potentially being selected to propose a block. That stake (technically, a bond) can then be used to provide incentives for any mechanism -- for instance, creating risk that if the node does not propose a truthful block that node will be worse off than had it not participated in the mechanism at all. Indeed, for behaviors that can be readily identified as illegitimate -- such as proposing two conflicting blocks, being unavailable after promising to be available or proposing a block that isn't chained to the genesis block (or a checkpoint) -- proof of stake blockchains automatically fine misbehaving nodes in a process that is called `slashing' (\cite{buterinminimal}). This relies on the protocol designers identifying specific behaviours that may be associated with ill intent rather than something more straightforward and robust.
Here our goal is to examine whether mechanisms exist to ensure that validating nodes propose truthful blocks more generally, using the information that exists amongst all nodes. We construct an explicit mechanism under BFT and show that the unique equilibrium in this mechanism involves consensus on truthful blocks from any pair of nodes. In fact, this can be done with an arbitrarily small fine which, itself, does not occur on the equilibrium path. Moreover, under our mechanism there is no need for multiple rounds of confirmation to confirm a block -- any randomly-selected pair of nodes suffices. Thus, it arguably has an efficiency property that improves the liveness of these blockchains. We also provide a bound for the share of the network that needs to be held by an attacker to succeed in a multi-node attack. A key insight here is that requiring nodes to send their messages before knowing the identity of the proposer strengthens this bound by an order of magnitude. In general, we conclude that our consensus mechanism is no more vulnerable to attacks than existing proof of work or proof of stake protocols, and that the number of rounds the mechanism runs to confirm a block provides a kind of ``advance warning'' not provided by existing protocols. For LCR blockchains, we offer an even simpler mechanism and show that in the unique (subgame perfect) equilibrium no dishonest forks arise. This stems from the fact that, under our mechanism, if a dishonest node is selected, it cannot get a transaction removed. This makes it suboptimal to dispute the transaction and the transaction is written to the blockchain. Given that, it is not worthwhile attempting the attack in the first place. Our results rely on two features of the blockchain environment. First, because of cryptographic requirements associated with certain messages (such as a message sending tokens to another address), the message space is limited in important ways that restrict the types of non-truths that can arise.
Second, because of the public nature of parent blocks and the mempool, participating nodes know what the truth is. This second feature means that multi-stage mechanisms of the type examined by \cite{moore1988subgame} exist to ensure truthful revelation in equilibrium. Our task is to find mechanisms for specific blockchain environments. It is important to note that our purpose here is to show the potential benefits of mechanisms to aid in achieving truthful consensus on blockchains. Of necessity, we consider simplified environments and, thus, abstract away from practical difficulties associated with coding such mechanisms. However, we do believe that the broad framework we offer could be used as the foundation for practical implementation of mechanisms on blockchains and improve their operation. While there has been much discussion of the use of mechanism design to inform aspects of the blockchain such as smart contracts (e.g., \cite{buterin2019flexible}, \cite{holden2021can}, \cite{gans2019fine}), only scant attention has been paid to blockchain consensus. For instance, \cite{leshno2020bitcoin} use a mechanism design approach to examine how nodes are selected to process transactions and receive block rewards and derive an impossibility result. \cite{garratt2022impossibility} examine truth-telling in blockchains and find another impossibility result. \cite{Roughgarden_2021} examines how mechanism design can improve transaction fee efficiency on blockchains. Finally, \cite{halaburda2021economic} do not look at mechanisms but instead study economic incentives in BFT protocols. Here our approach is to examine whether explicit mechanisms can be used to substitute for other consensus-resolving solutions in blockchains. The remainder of the paper is organized as follows. Section 2 discusses mechanisms for Byzantine Fault Tolerance, while Section 3 turns attention to POS blockchains relying on the Longest Chain Rule. Section 4 contains some brief concluding remarks. 
\section{Mechanism for Byzantine Fault Tolerance} The first broad consensus mechanism is proof of stake under Byzantine Fault Tolerance (or BFT). In the absence of an economic mechanism, consensus under BFT is achieved using a voting mechanism. These voting mechanisms have the following steps: \begin{enumerate} \item Transactions are broadcast as messages to the mempool \item Nodes stake and commit to be part of the validating pool \item Nodes observe messages \item One node is selected to propose a block \item Other nodes choose whether to confirm or reject the proposed block \item If at least two-thirds of nodes confirm the block, it is accepted; otherwise, it is rejected, another node is selected to propose a block, and the process begins again \end{enumerate} Typically, in voting to confirm a block, nodes check the technical validity of the proposed block and also whether other nodes are confirming the same block. Thus, communication is multi-lateral and network-wide in the process of achieving consensus. Here we consider whether a mechanism can replace the voting process and limit communication to just two randomly chosen nodes before appending a new block to the chain. \subsection{A Simultaneous Report Mechanism} The mechanism we propose is a special case of the Simultaneous Report (SR) Mechanism analysed by \cite{chen2018getting}.\footnote{The SR mechanism is a simplification of the multi-stage mechanisms explored by \cite{moore1988subgame}.} The baseline idea is that messages are broadcast publicly by blockchain users to the network and participating nodes assemble them into blocks of a fixed size based on the time broadcast. When a block is proposed to be committed to the blockchain, each node has in their possession a block of messages they have received. We assume that this block is common across all nodes; however, there are no restrictions on nodes in proposing an alternative block. 
The goal is to ensure that nodes, while able to propose alternative blocks, only propose and accept truthful blocks. Suppose there are $n$ nodes; nodes $i \in \{1,...,n-1\}$ each assemble ledger entries into a block of fixed size. If a node ends up proposing a block that is accepted, they receive a block reward, $R$. There is also an $n$th node who proposes a manipulated block. If that block is accepted, they receive a payoff $\theta$ that is private information in addition to the block reward, $R$. Nodes can send any message from a countably infinite set. Consider the following mechanism that is run after messages have been sent to the mempool: \begin{enumerate} \item One node is randomly chosen to be the \textit{proposer}, $p$, and another node, $c$, is chosen to be the \textit{confirmer}. \item The proposer proposes a block in the form of a message, $M_p$, while the confirmer sends a message, $M_c$. \item If $M_p = M_c$, then the block is committed to and added to the blockchain. The proposer receives $R$. \item If $M_p \neq M_c$, then the challenge stage begins with both $p$ and $c$ being fined $F > 0$. \end{enumerate} The \textbf{challenge stage} involves: \begin{enumerate} \item $p$ sends a new message $M^C_p$ based on the knowledge that there is a disagreement. \item If $M^C_p = M_c$, then $M^C_p$ is committed to the blockchain, $p$ receives $R$, and $c$ is refunded $F$. \item If $M^C_p \neq M_c$, then $p$'s proposal is discarded and the process begins again with $p$ and $c$ excluded from subsequent rounds. \end{enumerate} Given this, we can prove the following: \begin{proposition} Suppose the true block is $M_T$. Then the unique subgame perfect equilibrium outcome for the mechanism for any pair of nodes is $M_p=M_c=M_T$. \end{proposition} \begin{proof} Suppose that the selected pair does not include node $n$. Then, working backwards, if $M_p \neq M_c$, then $M_p \neq M_T$, $M_c \neq M_T$, or both, as the message space is a (countably) infinite set. 
In this case, the challenge stage is initiated and $p$ has the opportunity to send a new message. If $M_p = M_T$, then there is zero probability that $p$ could send $M^C_p = M_c$ and so $p$'s proposal is discarded and both nodes receive $-F$. If $M_p \neq M_T$, then by selecting $M^C_p = M_T$, with some probability (possibly equal to 1), $p$ receives $R-F$, rather than $-F$ with certainty from choosing some other message. Thus, in the challenge stage $M^C_p = M_T$. Anticipating this, it is optimal for $p$ to set $M_p = M_T$ and $c$ to set $M_c = M_T$. Now suppose that the selected pair includes node $n$. If node $n$ is the confirmer and the challenge stage is reached, then we have already shown that the proposer will set $M^C_p = M_T$. Given this, node $n$ will find it optimal to set $M_c = M_T$ and earn $0$ rather than $-F$. Alternatively, if node $n$ is the proposer, by our earlier argument, the other node will set $M_c = M_T$. If $n$ sets $M_p \neq M_T$, then there is a challenge round. In that round, $n$ will earn $R-F$ by setting $M^C_p = M_T$ and $-F$ otherwise. Given this, it is optimal for $n$ to set $M_p = M_T$ as it will earn $R$ rather than $R-F$. \end{proof} \\ \noindent It is easy to see that in the challenge stage, if $M_c$ is the truth, $p$ knows this and so finds it worthwhile to set $M^C_p = M_c$ and receive $R-F$. If $M_p$ is the truth, $p$ has a problem as it does not know what $M_c$ was. In this case, it ends up setting $M^C_p \neq M_c$ and receiving $-F$. Thus, the truth is revealed regardless. There seems something harsh about this last step, as the proposer may be truthful and still punished. That would arise only if $c$ has an incentive to message something other than the truth. If they message the truth, then they get $0$, as they expect $p$ to revise their message, and they receive a refund of $F$. If they message something else, then $p$ will never be able to guess that and so they will lose $F$. 
Thus, $c$ has no incentive to do anything other than be straightforward. It is useful to stress how remarkably powerful this mechanism is for obtaining consensus on truthful blocks. We note the following: \begin{itemize} \item $F$ can be arbitrarily small and the true block will be confirmed by any pair. \item Any randomly-selected pair is sufficient to confirm a block. Unlike BFT mechanisms, there is no need for multiple confirmation rounds, pre-commits or messages sent from more than two nodes. Once block transactions have been communicated publicly and formed into the truthful block, the mechanism can take place and confirmation is instantaneous. \item If there is more than one node with a private value from having a block other than the truthful block confirmed, the true block will still be confirmed. This outcome occurs even if two nodes with private preferences happen to be paired, so long as those nodes have preferences for distinct blocks. However, as we explore in the next section, if nodes have preferences for the same non-true block (e.g., if there is a coalition of nodes), the game is more complex. \item A key part of the mechanism is that while all nodes have common knowledge of the message for the true block, privately preferred blocks are unknown beyond individual nodes. This subverts any method by which coordination could arise on a non-true block. Once again, a coalition of nodes with a non-true preferred block could potentially coordinate and subvert the mechanism. \end{itemize} \noindent It is worth emphasising that the mechanism does rely critically on the messages of the true block being perfect and common knowledge. If this were not the case, the proof would be more complicated, but we conjecture that it would still hold given the results of \cite{chen2018getting}, which show that SR mechanisms are robust to some informational imperfections. 
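To make the incentives concrete, the payoffs of the two-stage mechanism can be sketched in a few lines of code. The numerical values of $R$ and $F$ below are purely illustrative, and the message labels are hypothetical stand-ins for blocks.

```python
# Illustrative payoffs for the two-stage Simultaneous Report (SR) mechanism.
# Parameter values are hypothetical; messages are reduced to string labels.

R, F = 10.0, 1.0          # block reward and fine (illustrative values)
M_T = "true_block"        # the commonly known truthful block

def sr_payoffs(m_p, m_c, m_p_challenge):
    """(proposer, confirmer) payoffs, given a fixed challenge-stage message."""
    if m_p == m_c:                  # immediate agreement: block committed
        return (R, 0.0)
    # disagreement: both fined F, challenge stage begins
    if m_p_challenge == m_c:        # proposer matches the confirmer's message
        return (R - F, 0.0)         # c is refunded F (net 0); p nets R - F
    return (-F, -F)                 # proposal discarded; both keep the fine

# Truthful play by both nodes commits the block at no cost to the confirmer:
print(sr_payoffs(M_T, M_T, M_T))                      # -> (10.0, 0.0)

# A proposer who deviates against a truthful confirmer does best by reverting
# to the truth in the challenge stage, but still forfeits F:
print(sr_payoffs("manipulated", M_T, M_T))            # -> (9.0, 0.0)

# Sticking with the manipulated block costs both nodes the fine:
print(sr_payoffs("manipulated", M_T, "manipulated"))  # -> (-1.0, -1.0)
```

The comparison of the three outcomes mirrors the backward-induction argument in the proof: the proposer's payoff is maximised by reporting the true block in the first stage.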
Finally, it is useful to note some practicalities in terms of implementing this mechanism. The mechanism relies on the two nodes being selected randomly. As will be discussed in detail below, randomness plays an important role in the mechanism working when there is more than one attacking node. Finding a pure randomisation device on-chain is a challenge for blockchains and, thus, we expect that this part of the mechanism will likely rely on an external randomisation input. The mechanism itself can run on-chain, but it would do so as a smart contract coded into the protocol. Such contracts are already a feature of many proof-of-stake protocols for other elements of their operation. \subsection{Robustness to Multi-Node Attacks} As noted above, while the proposer, if selected, has an incentive to tell the truth, this is based on a specific assumption that cannot be guaranteed for a permissionless blockchain (and maybe not all permissioned ones either): that the proposer and confirmer are different entities. What if the proposer and confirmer are the same person or part of an attacking coalition that shares the same incentive to confirm an alternative, non-true block? Suppose that an attacking coalition has a share, $s$, of all nodes. If they are the proposer, then, with probability $s$, they will be able to confirm the distorted block and receive $R+\theta$, and receive $-F$ otherwise. If they do not distort, they receive $R$ with certainty. (This assumes that the attacker has full knowledge of the fact that they are the proposer before setting $M_c$.) Thus, the proposer will try to attack if: \begin{equation} s(R+\theta)-(1-s)F > R \implies s > \frac{R+F}{R+\theta+F} \label{attack1} \end{equation} From this, it can be seen that this mechanism is not robust to an attack if the attacker has a sufficient share ($s$) of the network. We can compare this threshold to that typically considered for BFT networks. 
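As a numerical check of condition (\ref{attack1}), the following sketch verifies that the attack is profitable exactly when the coalition's share exceeds $\frac{R+F}{R+\theta+F}$; the parameter values are purely illustrative.

```python
# Numerical check of the attack condition: with coalition share s, attacking
# yields s(R + theta) - (1 - s)F in expectation against a certain R from
# honesty, so the attack pays exactly when s > (R + F) / (R + theta + F).
# Parameter values below are purely illustrative.

def attack_profit(s, R, F, theta):
    """Expected gain from attacking relative to honestly collecting R."""
    return s * (R + theta) - (1 - s) * F - R

R, F, theta = 1.0, 1.0, 10.0
s_star = (R + F) / (R + theta + F)   # = 1/6 for these values

assert attack_profit(s_star - 0.01, R, F, theta) < 0  # below threshold: attack loses
assert attack_profit(s_star + 0.01, R, F, theta) > 0  # above threshold: attack pays
```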
Attacks that may delay the confirmation of transactions are not possible in those networks if $s < \frac{1}{3}$. Notice here that the SR mechanism's attack threshold exceeds this if $R+F > \frac{\theta}{2}$. Thus, depending on the environment, the consensus protocol proposed here may be more secure than the usual BFT protocol. Moreover, security can be enhanced by increasing $R+F$, rather than being exogenously set as it is under usual BFT consensus. If we require nodes to send their messages \textit{before} knowing who is the proposer, then this slightly changes the equation. Now the attacker only succeeds with probability $s^2$ but also has an additional cost in that their confirmer role automatically leads to a fine. Thus, an attacker's choice depends on: \[s^2(R+\theta)-s(1-s)F-(1-s)F > R\] and so the threshold becomes $s > \sqrt{\frac{R+F}{R+\theta+F}}$, which is an order of magnitude stronger (modulo integer issues, which we ignore here).\footnote{In this case, the SR mechanism is more secure than the usual BFT consensus if $R+F > \frac{\theta}{8}$.} \subsection{Patient Multi-Node Attacks} The above calculations assume that an attack is considered by the attacker to be a once-off opportunity. This certainly is the case if, in the next round, truthful consensus is reached and the opportunity for an attack is removed. However, when there are multiple nodes, the incentive compatibility conditions in our proposed mechanism need to be reformulated for the possibility that, should consensus not be reached with one pair, at least one additional round of the mechanism will result with a new pair of nodes. To consider this, note the following: \begin{itemize} \item Truthful consensus will arise if both selected nodes are honest (i.e., have no distortion payoff, $\theta$). At the outset, this happens with probability $(1-s)^2$; \item Truthful consensus may arise if a proposer is honest and a confirmer is potentially not. 
Recall that in the mechanism, the proposer will propose an honest block and continue to do so in a challenge round. Thus, if the confirmer proposes anything other than the true block, they are fined $F$ and another pair is selected. Let $V_{sN-1,(1-s)N-1}$ be the expected payoff to the attacking coalition if there are $sN-1$ remaining coalition nodes and $(1-s)N-1$ remaining honest nodes. Then the incentive compatibility constraint for the confirmer is $V_{sN-1,(1-s)N-1} < F$. This scenario occurs, initially, with probability $(1-s)s$. \item What happens if the proposer is not honest? Recall that nodes cannot see each other's messages in the mechanism -- only whether they match or not. Thus, if a proposer of a distorted block is matched with an honest node, they will have a choice as to whether to adjust their message or not. In effect, they can either continue the disagreement or confirm the true block (under the assumption that the node they are paired with must be honest). Thus, their incentive compatibility constraint in the challenge stage is to propose an honest block if $V_{sN-1,(1-s)N-1} < 0$. If this constraint is satisfied, then a proposer will propose a distorted block only if $s(R+\theta) - (1-s)F \ge R$. If this constraint is not satisfied, then the proposer will propose a distorted block only if $s(R+\theta) - (1-s)(F-V_{sN-1,(1-s)N-1}) \ge R$. \end{itemize} Examining these conditions, therefore, requires solving for $V_{sN-1,(1-s)N-1}$. Exploring $V_{sN-1,(1-s)N-1}$, note that: \begin{multline} V_{sN-1,(1-s)N-1} = \tfrac{sN-1}{N-2}\Big( \max\{\tfrac{sN-1}{N-2}(R+\theta)-\tfrac{(1-s)N-1}{N-2}(F-V_{sN-2,(1-s)N-2}),R\} \\ + \tfrac{(1-s)N-1}{N-2}\max\{V_{sN-2,(1-s)N-2}-F,0\}\Big) \end{multline} The first probability is the probability that a potential attacker is selected as a proposer or confirmer. If they are a proposer or confirmer, the outcome depends upon whether their incentive compatibility constraint is satisfied or not. 
These are the next two terms within the brackets. Note that each of these depends upon $V_{sN-2,(1-s)N-2}$, which is the attacking coalition's expected payoff should no consensus be reached in the round. An important property of the recursive structure here is that, for $s \le \frac{1}{2}$, $V_{sN-1,(1-s)N-1} \ge V_{sN-2,(1-s)N-2}$, and this property continues between rounds if no consensus is reached. In this case, if the incentive compatibility constraint is satisfied for the proposer in the first round, it will continue to be satisfied and, thus, there will be no incentive for the attacker to start or continue the attack so long as $s < \frac{R+F}{R+\theta+F}$. What if $s > \frac{1}{2}$? In this case, the expected payoff from an attack could rise between rounds. Indeed, if the penultimate round is reached, then the attacker knows they will have both the proposer and confirmer with certainty in the final round, allowing them to confirm the distorted block and receive a certain payoff of $R+\theta$. In this case, in the penultimate round, there is one honest node and, say, $x$ nodes of the coalition. The incentive compatibility condition for the penultimate round would have to be such that $\frac{x}{x+1}(R+\theta)-\frac{1}{x+1}(F-R-\theta) < R$ or $(x+1)\theta < F$. Note that this is equivalent to $s < \frac{F}{\theta}-\frac{1}{2}$ as $s=\frac{1+2x}{2}$ or $x=\frac{2s-1}{2}$. It can be seen that so long as $F > \frac{3}{2}\theta$, an attack is never worthwhile. This makes it appear that $F$ has to be very large. In fact, the protocol could adjust $F$ so that it \textit{increases with each round}. Precisely what this formula would be, however, is a complex matter and we have not been able to calculate it yet. In any case, as $\theta$ is a free variable, calibrating $F$ to it would be a challenge. That said, the conditions here are similar to those regarding so-called ``majority'' attacks under both POW and POS. 
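The recursion above can be transcribed numerically. The sketch below is a literal implementation of the displayed formula, with an assumed terminal condition -- not stated explicitly in the text -- that once no honest nodes remain, the coalition controls the final pair and collects $R+\theta$; all parameter values are hypothetical.

```python
# Literal transcription of the displayed recursion for the coalition's
# expected payoff V, indexed by the number of remaining attacking (a) and
# honest (h) nodes. The terminal condition V(a, 0) = R + theta is an
# assumption; parameter values are illustrative.
from functools import lru_cache

@lru_cache(maxsize=None)
def V(a, h, R=1.0, theta=2.0, F=0.5):
    if a <= 0:
        return 0.0              # no attackers left: coalition payoff is zero
    if h <= 0:
        return R + theta        # assumed: coalition controls the final pair
    p_a, p_h = a / (a + h), h / (a + h)
    cont = V(a - 1, h - 1, R, theta, F)   # value if no consensus this round
    as_proposer = max(p_a * (R + theta) - p_h * (F - cont), R)
    as_confirmer = p_h * max(cont - F, 0.0)
    return p_a * (as_proposer + as_confirmer)

# As noted in the text, for s <= 1/2 the current-round value weakly exceeds
# the continuation value:
assert V(3, 5) >= V(2, 4)
# A larger fine F (weakly) lowers the coalition's expected payoff:
assert V(3, 5, F=0.1) >= V(3, 5, F=5.0)
```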
The difference is that, rather than building out forks of existing chains, here the attack conditions take place on a block-by-block basis. The effect is the same, as is the cost of an attack in terms of resources -- real or financial (see \cite{gans2021consensus}). Our contention would be that our proposed consensus mechanism is no more vulnerable than existing ones, but that it has the advantages that (a) it is significantly more efficient to run and operate under normal conditions; (b) it continues to make an attack probabilistically difficult, including forked-chain attacks; and (c) an attack can be indicated by the number of rounds the mechanism needs to run to confirm a block -- that is, in the absence of an attack, there will be few rounds, while an attack with a close to 51 percent majority will still likely take many rounds. We believe that it would be possible to calculate these probabilities more precisely, but that is left for future work. \section{Mechanism to Resolve Forks} Unlike the BFT protocol, POS blockchains that rely on the LCR expect forks to arise.\footnote{While we don't explore this possibility in this paper, BFT blockchains do sometimes generate forks, which require lengthy and costly human intervention and coordination to resolve. Our fork resolution mechanism could alternatively be used in conjunction with BFT consensus, although we only explore its use in LCR-type consensus here.} A fork is a situation where two chains of equal length are confirmed to the protocol that have a common ancestor but distinct blocks thereafter. When this occurs, nodes must decide which one to append new blocks to so that a new longest chain arises, which becomes the consensus chain. Because there is no voting protocol, LCR blockchains can have blocks confirmed very quickly, but because of the possibility of forks, consensus may not be final. 
Forks will arise simply because there may be lags in the communication of confirmed blocks to chains, and different sets of nodes may work on distinct blocks for a time. One procedure POS blockchain protocols use to limit this lack of coordination is to punish nodes who work on more than one chain. This can happen because nodes have the potential to earn rewards for confirming blocks regardless of the chain that arises and so do not necessarily have an incentive to select one over another.\footnote{\cite{saleh2021blockchain} argues that a proposing node does have an incentive, as only the chain that survives as the longest chain will earn them a block reward, while persistent lack of coordination devalues tokens and hence those rewards and stakes held.} Blockchains resort to slashing (that is, fining nodes some portion of their stake) to incentivise nodes to work on one chain.\footnote{In POW protocols, working on more than one chain requires actual resource expenditures rather than simply staking digital tokens, so this incentive issue does not arise.} While forks may be considered a necessary inconvenience, because they can arise as a matter of course there is potential for nodes to create forks for their own purposes. For instance, a node may create a blockchain fork in order to nullify transactions previously confirmed to past blocks. The reason they might do this is to allow a transaction that would otherwise be invalid to be confirmed to current or future blocks. The most famous example of this is the `double-spend' problem, whereby an agent attempts to spend their own tokens twice by confirming a transaction to a past block and then attempting to have that transaction omitted so they can spend those tokens again. This type of intervention relies on some agents having relied on the past block confirmation to trigger, say, the transfer of real-world goods to the agent. 
Given this, \cite{nakamoto2008bitcoin} recommended that agents not consider transactions economically final until a certain number of blocks had been confirmed to the chain following the block in question. However, the potential to revert or nullify past transactions reduces the reliability of the blockchain and the speed at which transactions and messages can be accepted as final. It is instructive to explain in more detail how an attack -- such as a `double-spend' attack -- actually works on POS with the LCR rule. An attacker privately works on a chain that nullifies a transaction in an already confirmed block that is part of the main chain. The transaction is removed so that the tokens remain in the attacker's account to be spent again. The goal is then to surface the private chain once it is as long as (or longer than) the current main chain, creating a fork. In proof of work, doing this is non-trivial, as it requires, at least, expending more electricity than those nodes working on the main chain. In proof of stake, as only the tokens of the attacker are staked on the private chain, this is simpler. What is more, the attacker, should they succeed, also receives the block rewards, $R$, associated with the number of blocks, $k$, they have worked on privately. Thus, their total reward from succeeding in this attack is $kR+\theta$ (where $\theta$ is the benefit from the double spend). The attack is made more difficult if there are robust penalties to other nodes from staking tokens to both chains in a fork. If other nodes have ``nothing at stake,'' they do not have an incentive to `pick sides' in the fork. If, eventually, nodes that stake tokens in both chains can be detected, they can be fined an amount that disincentivizes that behavior. However, detecting this can be difficult (\cite{bentov2016cryptocurrencies}, \cite{brown2019formal}). 
Another approach, to be employed in Ethereum's Casper upgrade, is to punish nodes who stake on the `wrong' chain.\footnote{The means of tracking these outcomes involves the use of code called a `dunkle'; see \url{https://eth.wiki/concepts/proof-of-stake-faqs0.}} This incentivizes nodes to do more work in establishing what they believe the right chain to be. However, this could be a significant challenge in that most nodes may not want to play an active role in resolving forks and would prefer to follow the LCR. Thus, at best, current mechanisms deter nodes from staking on different chains and so make it harder for an attacker to create an alternative longest chain. However, there is no real mechanism to determine which chain is the `true' chain. That is where we believe a mechanism can help. The potential for a mechanism that might resolve blockchain forks arises because it is difficult to tell if they are accidental or deliberate. In the latter case, they are akin to an ownership dispute. A natural question is whether there is a mechanism that can quickly determine which is the ``true'' chain. Adapting our analysis of Solomonic disputes (\cite{gansholden2022}), we answer this question in the affirmative. A fork is only consequential if different nodes claim that said fork is the correct one. Thus, there are competing ownership claims about the ``true'' chain. For simplicity, call the competing forks $A$ and $B$. The information structure is such that the nodes claiming that fork $A$ is the true one know they are honest. Nodes claiming that $B$ is the true chain know they are not honest. \subsection{A Solomonic Mechanism} Now consider the following mechanism. If a fork appears (without loss of generality, $B$) and is within $x$ blocks of the length of fork $A$, then the following mechanism is run between nodes that claim to hold a full record of the blockchain. 
\begin{enumerate} \item For each fork, the blocks subsequent to the last common parent are unpacked and transactions are compared. \item Valid messages that appear in both sets are immediately confirmed, at the earlier of the two time stamps across the forks.\footnote{Valid messages will have different time stamps but otherwise the same content. However, as they cover multiple blocks, there are some practical issues to resolve in comparing messages as being equivalent on each chain of the fork.} Other messages are collected and marked as disputed. \item One node from each chain is selected at random ($a$ for $A$ and $b$ for $B$). The node from the chain where a disputed transaction does not appear is asked to confirm that the transaction is invalid. If they do so, both are fined $F$, and they enter the dispute stage for each disputed transaction. \end{enumerate} \noindent The \textbf{dispute stage} involves: \begin{enumerate} \item If the transaction appears in $A$ and not in $B$, $a$ is asked to assert the legitimacy of the transaction. If $a$ asserts, then the transaction remains and the fine is burned. If $a$ does not assert, the transaction is discarded and $b$ has their fine refunded. \item If the transaction appears in $B$ and not in $A$, $b$ is asked to assert the legitimacy of the transaction. If $b$ asserts, then the transaction remains and the fine is burned. If $b$ does not assert, the transaction is discarded and $a$ has their fine refunded. \end{enumerate} \noindent Note that there may be numerous transactions that appear in one chain and not the other. The procedure here, between the two selected nodes, would be conducted for each disputed transaction, with the roles assigned depending on the chain in which the disputed transaction appears. We now need to specify the preferences of each type of node. An honest node has an interest in preserving the true blockchain. 
That preference has a monetary equivalent value of $H$, and they have a disutility arising from another blockchain being built upon of $D$, with $H > D$. By contrast, a dishonest node is only interested in having their preferred chain continue, for which they receive a monetary equivalent value of $\theta$. Given this, we can now prove the following: \begin{proposition} Dishonest forks do not arise in any subgame perfect equilibrium where at least one honest node is selected. \end{proposition} \begin{proof} Without loss of generality, suppose that $A$ is the true blockchain and a dishonest fork, $B$, arises with a transaction omitted from a past block. If $b$ confirms that the transaction is invalid, $a$ and $b$ are fined and the dispute stage begins. In the dispute stage, $a$ is asked to assert or not assert the legitimacy of the transaction. If $a$ asserts, as the transaction is valid and $a$ is honest, $a$ receives $H-F$ and $b$'s payoff remains at $-F$. If $a$ does not assert, $a$ receives $D-F$ and $b$'s payoff becomes $\theta$. In this case, $a$ has a preference to assert. In the first round, anticipating this, $b$ will decline to confirm the removal of the transaction, as this will result in a fine of $F$ for sure and no transaction being removed. Given that, it is not worthwhile attempting the attack in the first place. Now suppose that there are no dishonest nodes and that, in this case, $b$ is a node on a chain where the transaction does not appear (e.g., it may have been missed because of network issues). In this case, as $b$ knows they are honest and that dishonest nodes only attempt to remove transactions and not place them, $b$ will choose not to assert that the transaction is invalid. Thus, the transaction will remain and neither node will be fined. \end{proof} \\ \noindent The intuition for this result is simple. 
The mechanism is designed based on the notion that (1) nodes have information as to which chain they regard as truthful and which not; and (2) the attacker is trying to have a past transaction/message removed from the blockchain. The mechanism gives the opportunity for a node that had staked on each chain to be matched and to confirm whether there is a dispute over a transaction that appears in one but not in the other. If there is agreement, the transaction is confirmed or removed as the case may be and the mechanism ends. If there is a dispute, however, both nodes are fined, which creates an incentive to avoid the dispute. The dispute stage then focuses on the node from the chain that asserts the transaction is valid. As an attack involves a dishonest node trying to remove a transaction, this node is presumptively honest and so is given control over the decision. Thus, the transaction remains. Given this is the outcome of the dispute, a dishonest node will not trigger the dispute (as they will be fined) nor create a dishonest fork, because that is costly and will be unsuccessful. Importantly, the mechanism takes into account the possibility that forks may arise as a result of, say, network latency issues, rather than an attack. In this case, both nodes are honest. Given that nodes know that attacks involve the removal of transactions, when the mechanism is triggered by an accidental fork, the node from the chain without the transaction will agree their fork should be discarded if the transaction is, indeed, valid. Thus, the `correct' fork persists. Nonetheless, as the proposition qualifies, the mechanism does not prevent dishonest forks from arising when neither selected node is honest. This would require an attacker to distribute nodes across both forked chains; a possibility we consider next. 
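The dispute-stage incentives underlying Proposition 2 can be sketched directly. In the code below, the parameter values (with $H > D$, as assumed) are hypothetical, and the case shown is a disputed transaction that appears in chain $A$ but not in chain $B$.

```python
# Sketch of the dispute-stage payoffs in the Solomonic mechanism for the case
# where the disputed transaction appears in chain A but not in chain B.
# Parameter values are hypothetical, with H > D as assumed in the text.

H, D, F, THETA = 5.0, 1.0, 0.5, 3.0

def dispute_payoffs(a_asserts, b_is_dishonest):
    """(payoff to a, payoff to b) once b has disputed and both were fined F."""
    if a_asserts:
        # transaction remains; the fines are burned
        return (H - F, -F)
    # transaction discarded; b is refunded and, if dishonest, collects theta
    return (D - F, THETA if b_is_dishonest else 0.0)

# An honest a prefers to assert (H - F > D - F), so a dishonest b who
# disputes is fined for sure and never gets the transaction removed:
assert dispute_payoffs(True, True)[0] > dispute_payoffs(False, True)[0]
assert dispute_payoffs(True, True)[1] == -F
# Anticipating this, b's best response is not to dispute in the first place.
```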
\subsection{Multi-Node Attacks} As noted earlier, the `traditional' strategy for a double-spend attack on POS with the LCR is for an attacker to create a private chain without the transactions that appear in the main chain and then build that chain to be at least as long as, and perhaps longer than, the main chain. The idea of the Solomonic mechanism is that when a fork appears, regardless of the length of the competing chains, the mechanism resolves the dispute. As Proposition 2 shows, so long as one honest node is selected to be part of the dyad in the mechanism (which will happen if the attacker has concentrated its resources on building the private chain), the dishonest chain will be discarded. Hence, if it is expected that an honest node will be selected, there is no incentive for an attacker to create a dishonest fork in the first place. The Solomonic mechanism is engaged as a mechanism to resolve forks in lieu of a race to produce the longest chain. One implication of this is that there is no longer a concern about the `nothing-at-stake' problem. Recall that this problem arose because there were incentives for nodes to stake on two chains that were part of a fork, which would have the implication of slowing down the resolution of one chain into the longest chain. At the extreme, the nothing-at-stake problem could make it easier for an attacker to create the longest chain. Hence, current POS networks using the LCR create penalties for nodes who stake on multiple chains. However, when a Solomonic mechanism is used to resolve which chain should be confirmed, whether nodes stake on multiple chains is no longer a concern and, therefore, there is no reason to discourage this, let alone impose a slashing penalty. Indeed, as will be argued here, there is reason to encourage multiple-chain staking. 
The reason for this is that, when the mechanism is in place, an attacker has an interest in trying to control it by having its own nodes selected, excluding any honest node. If this happens, the attack can be successful and it is possible a dishonest chain is confirmed. Thus, in contrast to a protocol operating under the LCR, with a Solomonic mechanism an attacker has an incentive to stake all of its nodes on both chains. Suppose that an attacker has a share, $s$, of all nodes. If $A$ is the main chain and $B$ is the dishonest fork, as all nodes -- honest or not -- stake on each chain, the attacker may `control the mechanism' with probability $s^2$. This assumes that the attacker does not know, when engaging with the mechanism in the first round, whether the other node is one of their own. This can be achieved by sequentially drawing those nodes, with the first node forced to commit to a message before the second node is identified. What does this do to the incentive to engage in a double-spend attack? Suppose that the fork is $k$ blocks long. Recall that $B$, as it was produced by the attacker, will result in block rewards of $kR$ being awarded to them if $B$ persists. If $A$ persists then that agent receives $skR$ in expected block rewards. Thus, the expected return to the malicious agent is: $$(1-s)skR+s\Big(-F+s(kR+\theta)+(1-s)skR\Big)$$ Note that, if the attacker does not attack, there is no fork and the attacker earns $skR$. Comparing this to the return from an attack, we can see that an attack will only take place if: $$s > \frac{1}{2}+ \frac{\theta -\sqrt{(\theta +k R)^2-4 F k R}}{2 k R}$$ so long as $\frac{(k R + \theta)^2}{4kR} \ge F$. The higher is $F$, the less likely these conditions are satisfied and the less likely an attack occurs. Note that so long as $\theta \le \frac{4F-kR}{2}$, the threshold to deter an attack is higher than the usual condition for LCR consensus that $s < \frac{1}{2}$.
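The algebra behind this threshold can be checked numerically. In the sketch below (function names are ours), the attack payoff is compared with the no-attack payoff $skR$ on either side of the closed-form threshold; the attack is profitable only above it (and below the corresponding upper root of the quadratic).

```python
import math

def attack_payoff(s, k, R, theta, F):
    # With prob. (1-s) the first selected node is honest and the attack
    # fails (expected reward skR from chain A); with prob. s the attacker's
    # node is drawn, risks the fine F, and controls the dyad with prob. s.
    return (1 - s) * s * k * R + s * (-F + s * (k * R + theta) + (1 - s) * s * k * R)

def attack_threshold(k, R, theta, F):
    # Attack pays iff -kR s^2 + (kR + theta) s - F > 0; the lower root is
    # the closed-form threshold quoted in the text.
    disc = (theta + k * R) ** 2 - 4 * F * k * R
    if disc < 0:
        return None  # fine so large that no share s makes the attack pay
    return 0.5 + (theta - math.sqrt(disc)) / (2 * k * R)

k, R, theta, F = 10, 1.0, 2.0, 3.0
s_star = attack_threshold(k, R, theta, F)
assert attack_payoff(s_star + 0.01, k, R, theta, F) > (s_star + 0.01) * k * R
assert attack_payoff(s_star - 0.01, k, R, theta, F) < (s_star - 0.01) * k * R
# If F exceeds (kR + theta)^2 / 4kR, no attack is ever profitable.
assert attack_threshold(10, 1.0, 0.0, 5.0) is None
```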
Thus, as with our mechanism for BFT consensus, here a mechanism has the potential to provide more security depending upon the choices of $F$ and $R$. \subsection{Dealing with a Non-Traditional Double Spend Attack} The above mechanism involves eliminating forks created by traditional double-spend (or double-spend like) attacks. The traditional double-spend attack involves an attacker confirming a transaction that spends their tokens on the original chain and then removing that transaction with a dishonest chain, in the hope of getting that latter chain accepted as consensus. This allows the attacker to spend those tokens again. The current Solomonic mechanism subverts that attack by making it impossible for the attacker to remove the original confirmed transaction as part of a forked chain. The mechanism relies on the notion that attacks involve transactions being removed rather than retained. Thus, honest nodes have an incentive to defend transactions that are confirmed and dispute transactions that are proposed to be removed. The latter is also consistent with the effect of latency issues that may cause a fork when some nodes `miss' a transaction. However, what if the attacker modifies their attack? Recall that the attacker wants to nullify a past transaction involving tokens spent by the attacker. While forking the chain is a potential way to do this, there are others. What if, for instance, the past transaction can be held to be invalid? Normally, by virtue of being confirmed, the past transaction is valid -- that is, there are tokens in the wallet that can be sent to another wallet at that time. But if the attacker is forking a chain, the attacker, under POS, can fork the chain from an earlier block. In that block, the attacker can insert a transaction that spends the tokens (i.e., moves them to another wallet, perhaps under the attacker's control) prior to the past transaction the attacker is trying to remove.
If that is done as part of a fork, when the Solomonic mechanism is run, there will be two disputed transactions: one that appears on the dishonest chain but not on the original chain (the pre-spend) and one that appears on the original chain but not on the dishonest one (the original spend). When there are multiple disputed transactions, a natural way to run the mechanism would be to run it on the earliest disputed transaction first, since how that transaction is resolved may have implications for later ones. In fact, that is precisely what the attacker here is anticipating. Thus, the mechanism would, given the way it defaults to preserving transactions, preserve the pre-spend transaction and, by virtue of that, make the original spend transaction invalid. In other words, this attack can now succeed in generating a double-spend of the tokens in question. The problem here is that the current mechanism relies on (1) honest nodes knowing that they are honest and (2) the only way an attack would work being to nullify a transaction, with honest nodes knowing that. However, (2) is overcome in the proposed attack and thus, the mechanism fails. In order to restore a mechanism here, (2) would have to be replaced by the condition that honest nodes know that the original chain is the `true' chain. What would those conditions be? Another way of looking at the forked chains is not transaction by transaction (to see which are disputed) but by paths of tokens. In the original chain, tokens may be confirmed to move from Charles to Alice while in the new chain tokens move from Charles to Bob. Thus, the dispute is over the end point of the tokens (or, more generally, the path they have taken). In the case here, who owns the tokens now, Alice or Bob? To resolve this, we propose using a variant of the Solomonic mechanism.
The first step involves analysing the forked chains but instead of focusing on transactions that appear in one chain but not the other, the focus is on differences in the allocation of tokens in the last confirmed block of each chain (where we assume here that both chains are of equal length). If there has been a general double-spend attack, tokens that, say, were originally confirmed to be held by Alice would instead be held by Bob. The mechanism is as follows: If a fork appears (without loss of generality, $B$) and is within $x$ of the same number of blocks as fork $A$ then the following mechanism is run between nodes that claim to hold a full record of the blockchain. \begin{enumerate} \item For each fork, the allocations of tokens confirmed on the last block are compared. \item Transactions that lead to allocations that are in both sets are immediately confirmed at their original block time stamp. Other allocations are marked as disputed. \item One node from each chain is selected at random ($a$ for $A$ and $b$ for $B$). Each node is asked to confirm the current allocation of their chain based on their held full blockchain record. If they each agree on one chain, that chain is confirmed and the mechanism ends. If there is a disagreement, both are fined $F$, and they enter the dispute stage. \end{enumerate} \noindent The \textbf{dispute stage} involves: \begin{enumerate} \item One of the two nodes is selected at random and given the opportunity to assert the validity of their chain. \item If they do not assert the chain, the tokens are confirmed to the account on the other chain. \item If they assert the chain, the tokens are burned.
\end{enumerate} We will assume that an attacker weakly prefers leaving tokens in the account on the original chain to burning those tokens.\footnote{Even with a double-spend attack, the attacker faces some legal risk associated with being identified as the attacker, which may happen if the tokens are missing from their original spend.} Given this, we can prove the following. \begin{proposition} In any subgame perfect equilibrium, so long as at least one of the nodes participating in the mechanism is honest, a dishonest chain is never confirmed. \end{proposition} \begin{proof} Without loss of generality let $A$ be the true fork and $B$ the dishonest fork. Working backwards, consider the dispute stage. There are four cases to consider: \begin{enumerate} \item If $a$ (the node selected from $A$) is honest, if that agent asserts that $A$ is the true chain, this will result in the tokens being burned. $a$ receives $-F$ as a payoff. If $a$ chooses not to assert that $A$ is true, $a$ receives $D-F$, which is a lower amount. Thus, $a$ asserts $A$ is true. \item If $a$ (the node selected from $A$) is dishonest, that agent will assert that $B$ is the true chain. This will result in $B$ being confirmed, which is what a dishonest node wants. \item If $b$ (the node selected from $B$) is honest, that agent will assert that $A$ is the true chain. This will result in $A$ being confirmed, which is what an honest node wants. \item If $b$ (the node selected from $B$) is dishonest, if that agent asserts that $B$ is the true chain, this will result in the tokens being burned. If they assert that $A$ is the true chain, this will result in $A$ being confirmed. \end{enumerate} Now consider the choice of which chain to claim in the first round. An honest node will always find it worthwhile to assert that $A$ is the true chain. A dishonest node, under the assumptions of the Proposition, knows the other node is an honest node.
If that node asserts that $A$ is true, the original chain is confirmed. If that node asserts that $B$ is true, the dispute stage begins. This will result in either the tokens remaining on $A$ or the tokens being burned. So long as the former is weakly preferred by dishonest nodes, that node will claim $A$ is the true chain in order to avoid the fine, $F$. This proves the proposition. \end{proof} \noindent As before, the presence of an honest node prevents the attacker from achieving their aim of having the tokens allocated as in $B$ rather than $A$. If the attack involves any cost, the attack will, therefore, not take place. That said, this presumes that an honest node is part of the mechanism. This may not happen, and the attacker may be able to control both nodes in the mechanism. Put differently, this mechanism relies on honest nodes being able to tell which is the correct chain and hence, who rightfully owns the tokens. This is an assumption we make here, and there may be practical difficulties in nodes being able to keep a record of the blockchain and of which chain was the main chain in the immediate past. That said, there is good reason to think that this assumption is justified. Honest nodes have a lot of information when they see a fork going back many transactions in the chain. While there will always be forks due to latency issues, there will never be a ``dishonest fork'' on the equilibrium path of the game induced by our mechanism. Indeed, provided that the honest nodes have a higher prior that the correct chain is, indeed, the correct chain -- that is, priors are biased towards the truth -- the mechanism here will still have them acting as if the correct chain is the truth and will still deter attempts to confirm a dishonest chain as the consensus chain. \section{Concluding Remarks} We have shown how to construct revelation mechanisms to achieve consensus on blockchains under BFT and LCR.
A fundamental pillar of a mechanism-design approach to blockchain consensus is the use of information. Consistent with the central dogma of mechanism design, the designer is not presumed to possess more information than that held by agents in the mechanism. This contrasts, however, with some uses of slashing in existing approaches to consensus. Yet, through careful design choices, the mechanisms we examine here make very efficient use of the information that is held by existing nodes. We have also discussed the robustness of these mechanisms to multi-node attacks. We are of the view that mechanism-design-based consensus protocols have important advantages over existing POS protocols, and that they are likely to be of practical use as an alternative to POW protocols. In particular, rather than just being designed to satisfy stringent requirements of finality and liveness (themselves often probabilistic), mechanisms have choice parameters (e.g., the reward, $R$, and fines, $F$) that can themselves manipulate trade-offs between finality and liveness at the margin depending on the environment and preferences of blockchain users. This expands the set of blockchain consensus options for participants.
\section{Introduction} In the context of the $AdS_5/CFT_4$ correspondence a very interesting development was the understanding of the existence of an integrable structure on both sides of the correspondence \cite{Minahan:2002ve,Bena:2003wd}. In the $\mathcal{N}=4$ $SU(N)$ field theory the one-loop dilatation operator in the scalar sector was identified with the Hamiltonian of an integrable spin chain \cite{Minahan:2002ve}. Many interesting developments followed, see for example \cite{Berenstein:2002jq}-\cite{Gromov:2009tv}. In this note we are mainly interested in studying the integrability properties of field theories with less supersymmetry. In four dimensions, to remain in the perturbative regime, which allows a field theory computation, one is forced to take orbifold or marginal deformations of the original $\mathcal{N}=4$ theory \cite{Berenstein:2004ys}-\cite{Solovyov:2007pw}. Recently we gained a better understanding of the $AdS_4/CFT_3$ correspondence \cite{Schwarz:2004yj}-\cite{Jafferis:2008qz}. Indeed, it turns out that the three dimensional conformal field theories are Chern-Simons theories with matter. In particular, the authors of \cite{Aharony:2008ug} proposed a field theory dual to the $\mathbb{C}^4/\mathbb{Z}_k$ singularities, the so-called ABJM theory. This is an $\mathcal{N}=6$ Chern-Simons matter theory with gauge group $U(N) \times U(N)$ and two Chern-Simons levels satisfying the constraint $k_1 + k_2 = 0$. This theory appears to be integrable at least at the leading order in perturbation theory. Namely, the two-loop dilatation operator can be identified with the Hamiltonian of an integrable spin chain \cite{Minahan:2008hf}. Many nice developments followed in this context as well, see for example \cite{Bak:2008cp}-\cite{Agarwal:2008pu}. In particular, we are interested in understanding whether integrability is present in the less supersymmetric theories.
One could think that the possible generalizations of the basic example in three dimensions, the ABJM theory, are very similar to the related generalizations of the $\mathcal{N}=4$ four dimensional case, but they are slightly different. To compute the field theory dilatation operator, it is important that the theory has a weak coupling limit in which the elementary fields have canonical scaling dimensions. In four dimensions this is possible if the superpotential is a cubic function of the chiral superfields, while in three dimensions it is possible if the superpotential is a quartic function. This simple observation points out that in three dimensions there could be more theories which can be analyzed perturbatively than in four dimensions. Indeed, it turns out that in three dimensions also the non-orbifold theories can have a perturbative limit \cite{Gaiotto:2007qi,Gaiotto:2009mv,Jafferis:2008qz}. The second observation concerns the presence of Chern-Simons levels, which do not exist in the four dimensional case. There is a Chern-Simons level $k_i$ associated to every gauge group. They are integer numbers and we can vary their values without spoiling the superconformal symmetry. It turns out that, for a class of $\mathcal{N}=2$ Chern-Simons matter theories, if $\sum k_i = 0$ the field theory moduli space has a four complex dimensional branch that is a Calabi-Yau cone and can be understood as the space transverse to the M2 brane \cite{Martelli:2008si,Hanany:2008cd}. If instead $\sum k_i \ne 0$ the four dimensional branch typically disappears and this effect can be interpreted as turning on a Romans mass $F_0$ in the type IIA limit \cite{Gaiotto:2009mv,Gaiotto:2009yz}. Let us suppose that a theory has an integrable structure for some specific relations among the $k_i$ such that they satisfy $\sum k_i = 0$. It is easy to see that there exist two possible interesting deformations of this integrable point.
We can move in the space of possible integer values of the $k_i$ in such a way that we preserve the constraint or in a way in which we break the constraint. It is important to underline that these kinds of deformations do not exist in four dimensions and offer a new laboratory for studying integrability in the weak coupling regime. In this paper we start the analysis of these deformed theories. We take the ABJM theory as our basic example and deform it in such a way that $k_1 + k_2 \ne 0$. We plan to return to the other type of deformation in the near future. To be sure to remain in the perturbative regime it is important to deform the theory in such a way that it preserves at least $\mathcal{N}=3$ supersymmetry in three dimensions. Indeed, for $\mathcal{N}>2$ the Chern-Simons matter field theories are completely specified by the gauge group, the matter content, and the Chern-Simons levels, and they have a weak coupling limit for large values of $k_i$. These theories have a quartic superpotential and could be dual to the non-orbifold M theory backgrounds. The organization of the paper is as follows. In Section 2 we introduce our main example. In Section 3 we rewrite the theory in a form explicitly invariant under the global symmetries. In Section 4 we compute the two-loop mixing operator for the scalar sector of the theory. In Section 5 we compute the anomalous dimension of some operators. We observe that the degeneracy which, due to integrability, is present in the ABJM theory is lifted in the generic $k_1 \ne - k_2$ case. We finish with some conclusions; the appendix collects some useful formulae used in the main text. \section{The deformed ABJM} We are interested in studying the Chern-Simons theories described by the following action \begin{equation}\label{SN3} S= \frac{k_1}{4\pi} S_{CS}(V_{(1)}) + \frac{k_2}{4\pi} S_{CS}(V_{(2)}) + S_{kin}(Z^i,Z_{i}^{\dagger},W_j, W^{j \dagger}) + \int \ d^{2} \theta W(Z^i,W_j) + c.c.
\ ,\nonumber \end{equation} where \begin{eqnarray}\label{inter} & & S_{CS}(V_{(l)})= \int d^3x \ {\rm Tr} \[ \epsilon^{\mu \nu \lambda} \( A_{(l) \mu} \partial_{\nu} A_{(l) \lambda} + \frac{2i}{3} A_{(l) \mu} A_{(l) \nu} A_{(l) \lambda} +i \bar{\chi}_{(l)} \chi_{(l)} - 2 D_{(l)} \sigma_{(l)} \)\]\ , \nonumber \\ & & S_{kin}(Z^i,Z_{i}^{\dagger},W_j, W^{j \dagger}) = \int\ d^{4}\theta \ {\rm Tr} \ \( Z_{i}^{\dagger}e^{-V_{(1)}} Z^{i}e^{V_{(2)}} + W^{j \dagger}e^{-V_{(2)}} W_{j}e^{V_{(1)}} \) \ , \nonumber \\ & & W(Z^i,W_j)=\frac{2\pi }{k_1} {\rm Tr} \left(Z^i W_i Z^j W_j\right)+\frac{2\pi }{k_2} {\rm Tr} \left(W_i Z^i W_j Z^j\right) \ . \end{eqnarray} It is a three dimensional Chern-Simons theory with matter. The gauge group is $U(N)_1 \times U(N)_2$ and the $\mathcal{N}=2$ bifundamental chiral superfields $Z^i$ and $W_j$ transform in the fundamental of the first factor of the gauge group and the antifundamental of the second one, and vice versa for $Z^{\dagger}_i$ and $W^{\dagger j}$ (see figure \ref{quivk1k2}). \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{quiv.eps} \caption{\small The quivers for the ABJM theory with generic Chern-Simons levels.} \label{quivk1k2} \end{center} \end{figure} $k_1$, $k_2$ are integer numbers which from now on we call Chern-Simons levels. The three dimensional theory represented by the Lagrangian (\ref{SN3}) is $\mathcal{N}=3$ superconformal. It admits a perturbative limit for large values of the $k_i$. The Lagrangian has an $SU(2)_R \times SU(2)$ global symmetry, where the first factor is the R symmetry associated to the $\mathcal{N}=3$ superconformal symmetry, while the second $SU(2)$ is a global symmetry under which $Z^i$ and $W_j$ transform in the fundamental representation. In the particular case $k_1= -k_2$ the supersymmetry of the Lagrangian is enhanced to $\mathcal{N}=6$ and the global symmetry group to $SU(4)_R$.
In this case the lower bosonic components\footnote{We use the same symbols $Z^i$, $W_j$ for the superfields and for their lowest scalar components. We hope this will not cause too much confusion.} of the chiral superfields can be organized in the fundamental representation ${\bf 4}$: $Y^A=(Z_1,Z_2,W^\dagger_1,W^\dagger_2)$ and the upper ones in the antifundamental. Indeed, in this limit the Lagrangian (\ref{SN3}) reduces to the ABJM one \cite{Aharony:2008ug}, which is supposed to describe the three dimensional superconformal field theories living on $N$ M2 branes at $\mathbb{C}^4/\mathbb{Z}_k$ singularities. In this particular case the theory is integrable in the planar limit, at least at the two-loop order. The first check for the presence of integrability in the ABJM theory comes from the computation of the two-loop mixing matrix of anomalous dimensions for the scalar sector. Due to integrability the mixing matrix of anomalous dimensions is identical to an integrable Hamiltonian of the $SU(4)$ spin chain with the sites transforming under ${\bf 4}$ and ${\bf\bar{4}}$ \cite{Minahan:2008hf}. A natural question is whether the generic theory in eq. (\ref{SN3}) is still integrable. In the case $k_1 \ne -k_2$ the supersymmetry and the global symmetries are reduced, and the theory is supposed to be dual to some flux background. The four dimensional Calabi-Yau branch in the field theory moduli space disappears and the theory is proposed to be dual to a type IIA background with the Romans mass $F_0$ turned on: $k_1 + k_2= F_0$ \cite{Gaiotto:2009mv}. It is important to stress that this kind of deformation is not an orbifold deformation; this is a peculiarity of the Chern-Simons theories. In this paper we would like to take the first step towards understanding the question of integrability for this theory. We compute the dilatation operator in the scalar sector at leading order, which we then use to find the anomalous dimensions of some operators.
To make the computation more transparent we rewrite eq. (\ref{SN3}) in such a way that the $SU(2)_R\times SU(2)$ symmetry becomes apparent. We group the scalar fields into the tensors $O^a_i$ and $O^{\dagger i}_a$, where the indices from the beginning of the alphabet correspond to the $SU(2)_R$ and those from the middle to the $SU(2)$ symmetry group \begin{equation}\label{OO} O=\begin{pmatrix} Z^\dagger_1 & W_1 \\ Z^\dagger_2 & W_2 \end{pmatrix} \ ,\qquad \qquad O^{\dagger}=\begin{pmatrix} Z^1 & Z^2 \\ W^{\dagger 1} & W^{\dagger 2} \end{pmatrix} \ . \end{equation} They transform in the ${\bf (2,2)}$ of $SU(2)_R \times SU(2)$ as $U O V^\dagger$, $V O^\dagger U^\dagger$, where $U \in SU(2)$ and $V \in SU(2)_R$. The class of gauge invariant operators we are interested in has the form \begin{equation}\label{op} \mathcal{O}={\rm Tr} \( O^{\dagger i_1}_{a_1} O^{a_2}_{i_2} O^{\dagger i_3}_{a_3} O^{a_4}_{i_4}\dots O^{\dagger i_{2L-1}}_{a_{2L-1}} O^{a_{2L}}_{i_{2L}}\)\chi^{a_1i_2 a_3 i_4...a_{2L-1},i_{2L}}_{i_1 a_2 i_3 a_4...i_{2L-1},a_{2L}} \ , \end{equation} where $\chi$ is some tensor of $SU(2)_R \times SU(2)$. These operators need to be renormalized \begin{equation} \mathcal{O}_{ren}^M=Z^M_N(\Lambda) \mathcal{O}_{bare}^N \ , \end{equation} where $M$ and $N$ label all the possible operators, $\Lambda$ is a UV cutoff, and $Z$ subtracts all the UV divergences from the operator correlation functions. The object we are interested in is the matrix of anomalous dimensions $\Gamma$. It is defined as \begin{equation}\label{gamma} \Gamma=\frac{d \ln Z}{d \ln \Lambda} \ . \end{equation} The eigenstates of $\Gamma$ are conformal operators and the eigenvalues are the corresponding anomalous dimensions. It is convenient to represent the operators (\ref{op}) as states in a quantum spin chain with $2L$ sites. Every site transforms in the $({\bf 2},{\bf 2})$ representation of $SU(2)_R \times SU(2)$. The spin chain alternates between $O^{\dagger}$ and $O$ sites.
In this language the mixing matrix (\ref{gamma}) can be regarded as the Hamiltonian acting on the Hilbert space $(\bar{V} \otimes V)^{\otimes L}$. \section{$SU(2)_R \times SU(2)$ invariant potential} In this section we would like to write the action (\ref{SN3}) in terms of component fields and, in particular, to have an explicit expression for the potential in terms of $SU(2)_R \times SU(2)$ invariant objects. We start by integrating out all the auxiliary fields. In particular the spinorial fields $\chi_{(l)}$ and the bosonic fields $\sigma_{(l)}$, $D_{(l)}$ are all auxiliary fields and can be eliminated using the equations of motion. From the chiral superfields $Z^i$, $W_j$ we get the complex scalars $Z^i$, $W_j$ and the Dirac spinors $\zeta^i$, $\omega_j$. The potential $V$ can be divided into a part $V^{bos}$ containing only bosonic operators and a part $V^{ferm}$ containing bosonic and fermionic operators. Let us first consider the bosonic part. \subsection{The bosonic potential} The bosonic potential $V^{bos}$ gets contributions from the superpotential and from the Chern-Simons interactions, $V^{bos}= V^{bos}_{W} + V^{bos}_{CS}$. The superpotential part is \begin{equation} V^{bos}_{W}= {\rm Tr} \( \sum_{i,j} \Big|\partial_{Z^i}W \Big|^2+ \Big|\partial_{W_j}W \Big|^2 \) \ , \end{equation} where $W$ is the superpotential given in eq. (\ref{inter}). The Chern-Simons part is \begin{eqnarray} V^{bos}_{CS}&=&{\rm Tr} \left(Z^\dagger_i Z^{i}\sigma^2_{(1)}-2Z^{ i}\sigma_{(1)} Z^\dagger_i\sigma _{(2)}+Z^{ i}Z^\dagger_i\sigma^2_{(2)}\right)\nonumber \\&&+ {\rm Tr}\left(W^{\dagger i}W_i\sigma^2_{(2)}-2W_i\sigma_{(2)} W^{\dagger i}\sigma _{(1)}+W_iW^{\dagger i}\sigma^2_{(1)}\right) \ , \end{eqnarray} where \begin{eqnarray} \sigma_{(1)}= \frac{2\pi}{k_1} \( Z^\dagger_iZ^{ i}-W_iW^{\dagger i} \) \ , \qquad \sigma_{(2)}= \frac{2\pi}{k_2} \( W^{\dagger i}W_i-Z^{ i}Z^\dagger_i \) \ . \end{eqnarray} If we write a general ansatz by use of operators in eq.
(\ref{OO}) there exist 18 structures compatible with the symmetries and the canonical dimension of the bosonic fields\footnote{In principle we can write 36 structures which would correspond to the singlets of $SU(2)_R \times SU(2)$. From the group theory computation we get that there are only 25 singlets. It means that there are 11 linear relations among the structures. 36 structures are equivalent to 18 different structures modulo cyclic permutation and we find that invariance under cyclic permutation reduces the 11 relations to only 7.}. \begin{eqnarray}\label{ansatz} V^{bos}_{a_n}&& \!\!\!\!\!\!\!\!= a_1\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger j}_bO^{c}_kO^{\dagger k}_c+ a_2\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger k}_bO^{c}_kO^{\dagger j}_c +a_3\, {\rm Tr} \ O^{a}_iO^{\dagger j}_aO^{b}_kO^{\dagger i}_bO^{c}_jO^{\dagger k}_c \nonumber\\[0.2cm] && + a_4\, {\rm Tr} \ O^{a}_iO^{\dagger j}_aO^{b}_jO^{\dagger k}_bO^{c}_kO^{\dagger i}_c +a_5\, {\rm Tr} \ O^{a}_iO^{\dagger i}_bO^{b}_jO^{\dagger j}_cO^{c}_kO^{\dagger k}_a+a_6\, {\rm Tr} \ O^{a}_iO^{\dagger i}_bO^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger j}_a \nonumber\\ [0.2cm] && +a_7\, {\rm Tr} \ O^{a}_iO^{\dagger j}_bO^{b}_kO^{\dagger i}_cO^{c}_jO^{\dagger k}_a+a_8\, {\rm Tr} \ O^{a}_iO^{\dagger j}_bO^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger i}_a +a_9\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger j}_cO^{c}_kO^{\dagger k}_b \nonumber\\ [0.2cm] &&+a_{10}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger j}_b +a_{11}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_aO^{b}_kO^{\dagger i}_cO^{c}_jO^{\dagger k}_b+a_{12}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_a O^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger i}_b \nonumber\\ [0.2cm] && +a_{13}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_cO^{b}_jO^{\dagger j}_aO^{c}_kO^{\dagger k}_b+a_{14}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_cO^{b}_jO^{\dagger k}_aO^{c}_kO^{\dagger j}_b +a_{15}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_cO^{b}_kO^{\dagger i}_aO^{c}_jO^{\dagger k}_b \nonumber\\
[0.2cm] && +a_{16}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_cO^{b}_jO^{\dagger k}_aO^{c}_kO^{\dagger i}_b +a_{17}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_aO^{b}_jO^{\dagger i}_cO^{c}_kO^{\dagger k}_b+a_{18}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_aO^{b}_kO^{\dagger k}_cO^{c}_jO^{\dagger i}_b \nonumber \ , \\ \end{eqnarray} where $a_n$ are 18 arbitrary real parameters, which we need to fix by use of the explicit expressions for the bosonic potential in components, $V^{bos}_{a_n}=V^{bos}$. If we apply the $\dagger$-operation to the ansatz (\ref{ansatz}) we find that the first 16 terms are mapped into themselves, while the last two are mapped into each other. It means that the reality of the potential forces $a_{18}=a_{17}$. On top of that, it appears that some of the 18 structures are linearly dependent. Calling $O_n$ the operators corresponding to the coefficients $a_n$, we can find the seven linear relations \begin{eqnarray}\label{depend} &&3 O_9 - O_{13} - O_5 - O_1=0 \ , \qquad\qquad \ \ \ 3 O_{12} - O_{16} - O_4 - O_8=0 \ ,\nonumber\\ &&3 O_2 - O_3 - O_4 - O_1=0 \ , \qquad\qquad \ \ \ \ 3 O_6 - O_7 - O_5 - O_8=0\ ,\nonumber\\ &&3 O_{14} - O_{16} - O_{15} - O_{13}=0 \ , \qquad\qquad 3 O_{11} - O_3 - O_7 - O_{15}=0 \ ,\nonumber\\ &&\qquad\qquad\qquad O_{10}- O_9-O_{11}-O_{12}+O_{17}+O_{18}=0 \ . \end{eqnarray} In particular, if we try to solve the equation $V^{bos}_{a_n}=V^{bos}$ as a function of the $a_n$ we find a family of solutions parameterized by seven parameters $a_n$, due to the relations (\ref{depend}). To find the coefficients for the potential we first need to reduce the ansatz by use of (\ref{depend}) to 11 linearly independent structures and then solve $V^{bos}_{a_n}=V^{bos}$ for the coefficients. This means that there is no unique form of the potential when it is written in terms of the $SU(2)_R\times SU(2)$ fields.
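Relations of this type follow from the vanishing of the antisymmetrisation of three two-valued $SU(2)$ flavour indices, and can be verified numerically. The sketch below is a check of our own (not part of the derivation): it tests the relation $3 O_2 - O_3 - O_4 - O_1=0$ with random matrices standing in for the $O^a_i$ fields.

```python
# Numerical check of the relation 3 O_2 - O_3 - O_4 - O_1 = 0: the
# antisymmetrisation of three flavour indices, each taking two values,
# vanishes identically, for any gauge-index dimension N.
import numpy as np

rng = np.random.default_rng(0)
N = 4  # gauge-index dimension; the identity holds for any N

# O[a, i] and Od[i, a] are independent random N x N matrices: the relation
# is a flavour-index identity, so no reality condition is needed.
O = rng.standard_normal((2, 2, N, N)) + 1j * rng.standard_normal((2, 2, N, N))
Od = rng.standard_normal((2, 2, N, N)) + 1j * rng.standard_normal((2, 2, N, N))

def structure(perm):
    """Trace with SU(2)_R indices contracted adjacently and the upper
    flavour indices permuted by `perm` relative to the lower ones."""
    total = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                for i in range(2):
                    for j in range(2):
                        for k in range(2):
                            lo = (i, j, k)
                            up = tuple(lo[p] for p in perm)
                            m = O[a, i] @ Od[up[0], a] @ O[b, j] @ Od[up[1], b] \
                                @ O[c, k] @ Od[up[2], c]
                            total += np.trace(m)
    return total

O1 = structure((0, 1, 2))   # identity permutation
O2 = structure((0, 2, 1))   # a transposition (all three are equal by cyclicity)
O3 = structure((2, 0, 1))   # three-cycle
O4 = structure((1, 2, 0))   # the other three-cycle

assert abs(3 * O2 - O3 - O4 - O1) < 1e-9 * (abs(O1) + abs(O2) + abs(O3) + abs(O4) + 1)
```

The same pattern, with the $SU(2)_R$ indices permuted instead, covers the remaining relations.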
The concrete form of the mixing operator descends from the choice of these 11 structures but the eigenvalues of the mixing operator are independent of this choice. See additional comments in Appendix \ref{gaugechoice}. We found a choice of $a_n$ where 11 of the 18 coefficients are zero. The remaining non-zero coefficients are \begin{eqnarray} &&a_1=-\frac{4\pi^2}{3k_1^2}\ , \qquad a_8=-\frac{4\pi^2}{3k_2^2}\ , \qquad a_{10}=-\frac{8\pi^2}{k_1 k_2}\, \qquad a_{15}=\frac{16\pi^2}{3k_1k_2} \ , \nonumber \\ && \qquad \qquad a_{13}=\frac{16\pi^2(k_1+k_2)}{3k_1^2 k_2}\ , \qquad a_{16}=\frac{16\pi^2(k_1+k_2)}{3k_1k_2^2} \ . \end{eqnarray} In the following we will use these coefficients. The bosonic potential written in the explicit $SU(2)_R \times SU(2)$ invariant form is \begin{eqnarray}\label{potential} V^{bos} &=& -\frac{4\pi^2}{3k_1^2}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger j}_bO^{c}_kO^{\dagger k}_c-\frac{4\pi^2}{3k_2^2}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_bO^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger i}_a\nonumber\\ && -\frac{8\pi^2}{k_1 k_2}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_aO^{b}_jO^{\dagger k}_cO^{c}_kO^{\dagger j}_b +\frac{16\pi^2}{3k_1k_2}\, {\rm Tr} \ O^{a}_iO^{\dagger j}_cO^{b}_kO^{\dagger i}_aO^{c}_jO^{\dagger k}_b\nonumber\\ &&+\frac{16\pi^2(k_1+k_2)}{3k_1^2 k_2}\, {\rm Tr} \ O^{a}_iO^{\dagger i}_cO^{b}_jO^{\dagger j}_aO^{c}_kO^{\dagger k}_b+\frac{16\pi^2(k_1+k_2)}{3k_1k_2^2} \, {\rm Tr} \ O^{a}_iO^{\dagger j}_cO^{b}_jO^{\dagger k}_aO^{c}_kO^{\dagger i}_b\ .\nonumber\\ \end{eqnarray} With this choice of the coefficients the ABJM limit is apparent. Namely for $k_1+k_2=0$ the last two terms drop out and we obtain the ABJM potential written in $SU(2)_R \times SU(2)$ invariant way. Indeed in this limit the R-symmetry and flavor indices of the $O$ fields do not mix anymore due to the R symmetry enhancement to $SU(4)$ . 
The remaining coefficients are exactly the ones in \cite{Aharony:2008ug}. \subsection{The fermionic potential} Let us now proceed with the fermionic potential $V^{ferm}$. Our final goal is to compute the two-loop mixing matrix in the planar limit. Part of the contribution to the renormalization of the scalar operators $\mathcal{O}$ in eq. (\ref{op}) comes from fermions running in the loops. This interaction is due to the fermionic potential. The fermionic potential is a quartic function of the fields; each term contains two bosons and two fermions. The contributions are of two types: the first one, $V^{ferm}_{ffbb}$, contains terms consisting of two fermions followed by two bosons, while the second one, $V^{ferm}_{bfbf}$, has the coupling fermion-boson-fermion-boson. It is easy to see that the terms of the second type do not contribute to the mixing matrix at the planar level for the scalar operators. It is therefore enough to consider only the terms of the first type. The fermionic potential has two contributions, one coming from the superpotential, $V^{ferm}_W$, and the other from the Chern-Simons interactions, $V^{ferm}_{CS}$.
After integrating out the auxiliary fields we get \begin{eqnarray}\label{fermpot} V^{{\rm {ferm}}}_W&=&\frac{4\pi }{k_2}\left(\omega_i \zeta^i W_jZ^j+\zeta^i\omega_jZ^jW_i-\zeta^\dagger_i\omega^{\dagger i}Z^\dagger_jW^{\dagger j}-\omega^{\dagger i}\zeta^\dagger_j W^{\dagger j}Z^\dagger_{i}\right)\nonumber\\ &&+\frac{4\pi }{k_1}\left(\omega_i \zeta^j W_jZ^i+\zeta^i\omega_iZ^jW_j-\zeta^\dagger_i\omega^{\dagger j}Z^\dagger_jW^{\dagger i}-\omega^{\dagger i}\zeta^\dagger_i W^{\dagger j}Z^\dagger_{j}\right)+\ldots\nonumber\\ V^{{\rm {ferm}}}_{CS}&=&\frac{2\pi i}{k_1}\left(\zeta^i\zeta^\dagger_i-\omega^{\dagger i}\omega_i\right)\left(Z^jZ_j^\dagger-W^{\dagger j}W_j\right)+ \frac{2\pi i}{k_2}\left(\zeta^\dagger_i\zeta^i-\omega_i \omega^{\dagger i}\right)\left(Z^{\dagger}_jZ^j-W_jW^{\dagger j}\right)\nonumber\\ &&+\frac{4\pi i}{k_1}\left(\zeta^\dagger_i\zeta^j Z_j^\dagger Z^i+\omega_i\omega^{\dagger j}W_j W^{\dagger i}\right) +\frac{4 \pi i}{k_2}\left(\zeta^i\zeta_j^\dagger Z^j Z^\dagger_i+\omega^{\dagger i}\omega_jW^{\dagger j}W_i\right)+\ldots\nonumber\\ \end{eqnarray} The ellipsis stands for couplings in $V^{ferm}_{bfbf}$ which are not relevant for our computation. We would like to rewrite the fermionic potential in an $SU(2)_R \times SU(2)$ invariant way. In the ABJM case the superpartners of the scalars transform in the representation conjugate to that of the scalars; this is a manifestation of the fact that $SU(4)$ is the R-symmetry group of the fields. It means that for fermionic objects transforming under $SU(2)_R\times SU(2)$ the R-symmetry index should transform in the representation conjugate to that of the scalar superpartner. However, since we expect the scalars and spinors to belong to the same flavor multiplet, they should transform in the same representation of the $SU(2)$ flavor symmetry group.
This suggests the following ansatz \begin{eqnarray}\label{fermansatz} \psi^{\dagger 1i}=-i\zeta^i \ , &\qquad& \psi^{\dagger 2i}=\omega^{\dagger i} \ , \nonumber\\ \psi_{1j}=i\zeta^\dagger_j \ , &\qquad& \psi_{2j}=\omega_j \ . \end{eqnarray} The indices $i$, $j$ transform under the $SU(2)$ flavor symmetry and the explicitly written indices $1,2$ under the $SU(2)_R$ symmetry. The $SU(2)_R \times SU(2)$ invariant ansatz is then \begin{eqnarray} V^{\rm{ferm}}_{f_n}&=&f_1 {\rm Tr}\ O^{\dagger i}_a O^a_i \psi^{\dagger bj}\psi_{bj}+f_2 {\rm Tr} \ O^{\dagger i}_a O^a_j \psi^{\dagger bj}\psi_{bi}+ f_3 {\rm Tr} \ O^{\dagger i}_a O^b_i \psi^{\dagger aj}\psi_{bj}+f_4 {\rm Tr} \ O^{\dagger i}_a O^b_j \psi^{\dagger aj}\psi_{bi}\nonumber\\ &&\!\!\!\!\!\!+f_5 {\rm Tr}\ O_{i}^{ a} O_a^{\dagger i} \psi_{ bj}\psi^{\dagger bj}+f_6 {\rm Tr} \ O_{ i}^{ a} O_a^{\dagger j} \psi_{ bj}\psi^{\dagger bi}+ f_7 {\rm Tr} \ O_{ i}^a O^{\dagger i}_b \psi_{ aj}\psi^{\dagger bj}+f_8 {\rm Tr} \ O_{ i}^a O^{\dagger j}_b \psi_{ aj}\psi^{\dagger bi} \nonumber\\ &&+\ldots \end{eqnarray} The equation $V^{\rm{ferm}}_{f_n}= V^{\rm{ferm}}_W + V^{\rm{ferm}}_{CS}$ gives the solution \begin{eqnarray} &&f_1=-\frac{2\pi i}{k_1}\ ,\qquad f_2=0\ ,\qquad f_3=\frac{4\pi i}{k_1}\ , \qquad f_4=\frac{4\pi i}{k_2}\ ,\nonumber\\ &&f_5=-\frac{2\pi i}{k_2}\ ,\qquad f_6=0\ ,\qquad f_7=\frac{4\pi i}{k_2}\ , \qquad f_8=\frac{4\pi i}{k_1} \ .\end{eqnarray} The $SU(2)_R \times SU(2)$ invariant fermionic potential is then \begin{eqnarray}\label{ferpot} V^{\rm{ferm}}&=&-\frac{2\pi i}{k_1} {\rm Tr} O^{\dagger i}_a O^a_i \psi^{\dagger bj}\psi_{bj}+ \frac{4\pi i}{k_1} {\rm Tr} O^{\dagger i}_a O^b_i \psi^{\dagger aj}\psi_{bj}+\frac{4\pi i}{k_2} {\rm Tr} O^{\dagger i}_a O^b_j \psi^{\dagger aj}\psi_{bi}\nonumber\\ &&-\frac{2\pi i}{k_2} {\rm Tr} O_{ i}^a O^{\dagger i}_a \psi_{bj}\psi^{\dagger bj}+ \frac{4\pi i}{k_2} {\rm Tr} O_i^a O^{\dagger i}_b \psi_{aj}\psi^{\dagger bj}+\frac{4\pi i}{k_1} {\rm Tr} O_i^a O^{\dagger j}_b \psi_{aj}\psi^{\dagger bi}+\ldots\nonumber\\ \end{eqnarray} The fermionic potential reduces to the ABJM one in the limit $k_1+k_2=0$: using the relation $\delta^i_l\delta_k^j-\delta_k^i\delta^j_l=\epsilon^{ij}\epsilon_{kl}$ and the field redefinitions $O^a_i=Y^A$, $\epsilon_{ij}\psi^{\dagger aj}=\psi^{\dagger A}$, where $A$ is an $SU(4)$ index, the last two terms in each line combine into terms which mix the $SU(4)$ flavor, while the first term in each line gives a flavor non-mixing contribution. \section{The mixing operator} We now have all the tools to compute the dilatation operator $\Gamma$. The contributions to the dilatation operator come from the logarithmic divergences ($\ln \Lambda$) of the renormalization function $Z(\Lambda)$. The lowest contributions come at two loops, and the non-vanishing logarithmic divergences come from the graphs in figure \ref{graphs}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{graph.eps} \caption{\small The graphs that contribute to the mixing operator: (a) only scalar bosons are running inside the loops, (b) scalar bosons and fermions are running in the loops, (c) scalar bosons and gauge bosons in the loops.} \label{graphs} \end{center} \end{figure} The renormalization of the composite operators $\mathcal{O}$ in equation (\ref{op}) comes from three different kinds of graphs, where (a) only scalar fields, (b) scalar and fermionic fields and (c) scalar and gauge fields are running in the loops. We can analyze them separately. Before doing so, let us fix some notation. We are going to compute the Hamiltonian of an $SU(2)_R \times SU(2)$ spin chain in the representation $\bf(2,2)$, with alternating sites corresponding to the fields $O$, $O^\dagger$ in the operators $\mathcal{O}$.
At every site of the spin chain we have two indices of $SU(2)$, and the final Hamiltonian can be nicely expressed in terms of two basic operators acting on the group indices: the trace operator $K: V \otimes \bar{V} \rightarrow V \otimes \bar{V}$ or $\bar{K}: \bar{V} \otimes V \rightarrow \bar{V} \otimes V$, and the permutation operator $P: V \otimes V \rightarrow V \otimes V$ or $P: \bar{V} \otimes \bar{V} \rightarrow \bar{V} \otimes \bar{V}$. We distinguish between the operators acting on the R-symmetry indices ($K$, $P$) and the operators acting on the flavor indices ($\hat{K}$, $\hat{P}$): \begin{eqnarray} && K^{a'b}_{b'a}= \delta^{a'}_{b'} \delta^{b}_{a}\ ,\qquad \qquad \hat{K}^{i'j}_{j'i}= \delta^{i'}_{j'} \delta^{j}_{i} \ , \nonumber\\ && P^{a'b'}_{ba}= \delta^{a'}_{b} \delta^{b'}_{a}\ , \qquad \qquad \hat{P}^{i'j'}_{ji}= \delta^{i'}_{j} \delta^{j'}_{i} \ .\nonumber \end{eqnarray} The trace operator $K$ acts on nearest-neighbor sites, while the permutation operator $P$ acts on next-to-nearest-neighbor sites. The 't Hooft couplings $\lambda_i= N/k_i$ are our perturbative expansion parameters. The final expression for the mixing operator $\Gamma$ is a polynomial in $K$ and $P$ with coefficients that are functions of $\lambda_1$, $\lambda_2$. \subsection{Six-vertex two-loop diagram} In this subsection we give the part of the Hamiltonian which comes from the diagram with only scalar fields in the loops. The graph (a) in figure \ref{graphs} receives contributions from the various monomials in the sextic bosonic potential (\ref{potential}). The computation is done in two steps: first one computes the logarithmically divergent part, and then one carefully works out the $SU(2)_R\times SU(2)$ combinatorial structure. To write down the final result in the most transparent way we distinguish between the trace operators $\bar K_{l,l+1}$ and $K_{l,l+1}$.
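For readers who prefer an explicit realization, the trace and permutation operators on a pair of two-dimensional sites can be written as small matrices. The following numpy sketch (which, for simplicity, ignores the distinction between $V$ and $\bar V$) checks the basic algebraic relations $K^2=2K$, $P^2=1$ and $PK=KP=K$ that are implicitly used when reducing products of structures:

```python
import numpy as np

N = 2  # SU(2): each site carries a two-dimensional index

# Trace operator K on a pair of sites: K|a b> = delta_{ab} sum_c |c c>
K = np.zeros((N, N, N, N))
for a in range(N):
    for b in range(N):
        for ap in range(N):
            for bp in range(N):
                K[ap, bp, a, b] = float(ap == bp and a == b)

# Permutation operator P: P|a b> = |b a>
P = np.zeros((N, N, N, N))
for a in range(N):
    for b in range(N):
        P[b, a, a, b] = 1.0

# flatten the index pairs into 4x4 matrices
Km = K.reshape(N * N, N * N)
Pm = P.reshape(N * N, N * N)

assert np.allclose(Km @ Km, N * Km)        # K^2 = N K, with N = 2
assert np.allclose(Pm @ Pm, np.eye(N * N)) # P^2 = 1
assert np.allclose(Pm @ Km, Km)            # P K = K
assert np.allclose(Km @ Pm, Km)            # K P = K
```

The hatted flavor operators $\hat K$, $\hat P$ act in exactly the same way on the second pair of indices at each site.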
The first one acts as usual on the sites $\bar V \otimes V$ and gives zero on $V\otimes \bar V$, while the second one acts as usual on $V\otimes\bar V$ and gives zero on $\bar V \otimes V$. The part of the mixing operator coming from this graph is \begin{eqnarray} \Gamma_{{\rm {bos}}}&=&\frac{1}{2}\sum_{l=1}^{2L}\Big(-\lambda_1^2{\bar K}_{l,l+1}{\hat{\bar K}}_{l,l+1}-\lambda_2^2{ K}_{l,l+1}{\hat{K}}_{l,l+1}+2\lambda_1\lambda_2 P_{l,l+2}\hat P_{l,l+2}\nonumber\\&& -\lambda_1\lambda_2\big(1\hat 1+K_{l,l+1}P_{l,l+2} \ \hat{K}_{l,l+1}\hat{P}_{l,l+2} +{\bar K}_{l,l+1}P_{l,l+2} \ \hat{\bar K}_{l,l+1}\hat{P}_{l,l+2} \nonumber\\&&\qquad\qquad \ \ +P_{l,l+2}K_{l,l+1} \ \hat{P}_{l,l+2}\hat{K}_{l,l+1} +P_{l,l+2}{\bar K}_{l,l+1} \ \hat{P}_{l,l+2}\hat{\bar K}_{l,l+1}\big)\nonumber\\&&+4(\lambda_1\lambda_2+\lambda_1^2)P_{l,l+2}\hat{\bar K}_{l,l+1}+4(\lambda_1\lambda_2+\lambda_2^2)P_{l,l+2}\hat{ K}_{l,l+1}\Big) \ . \end{eqnarray} \subsection{Fermionic contribution} The fermionic potential (\ref{ferpot}) gives two types of contributions to the graph (b) in figure \ref{graphs}: a contribution proportional to the identity in the $SU(2)_R\times SU(2)$ indices, namely a vacuum-energy contribution coming from the first monomial in each of the two lines of (\ref{ferpot}), and an interacting contribution containing the $K$, $\hat{K}$ trace operators. The constant part of the full mixing matrix receives contributions also from graphs other than the ones in figure \ref{graphs}, for example from the renormalization of the propagator $\langle O^\dagger O\rangle$. We are not going to compute these diagrams; later, we fix this constant part using supersymmetry. For this reason we concentrate here only on the contributions coming from the last two monomials in each line of (\ref{ferpot}).
After computing the logarithmically divergent part of the graph (b) in figure \ref{graphs} and working out the combinatorial $SU(2)_R\times SU(2)$ structure, we obtain the fermionic contribution to the mixing operator \begin{eqnarray} \Gamma_{\rm {ferm}}&=&\sum_{l=1}^{2L}\Big(2(\lambda_2^2+\lambda_1\lambda_2)\bar K_{l,l+1}\hat 1+\lambda_1^2 \bar K_{l,l+1}\hat{\bar K}_{l,l+1} +2(\lambda_1^2+\lambda_1\lambda_2) K_{l,l+1}\hat 1+\lambda_2^2 K_{l,l+1}\hat{K}_{l,l+1}\Big) \ .\nonumber\\ \end{eqnarray} \subsection{The gauge bosons contribution} The last contribution to the mixing operator comes from the graph (c) in figure \ref{graphs}. The gauge bosons do not carry $SU(2)_R\times SU(2)$ indices, and we just need to compute the two-loop diagram with the correct coupling constants coming from the scalar-gauge interactions in the Lagrangian. The final result is \begin{equation} \Gamma_{\rm {gauge}}=-\frac{1}{2}\sum_{l=1}^{2L}\Big(\lambda_2^2{\bar K}_{l,l+1} \hat {\bar K}_{l,l+1}+\lambda^2_1{ K}_{l,l+1} \hat { K}_{l,l+1}\Big) \ . \end{equation} \subsection{Two-loop dilatation operator} The complete two-loop mixing operator is obtained by summing up $\Gamma_{{\rm {bos}}}$, $\Gamma_{\rm {ferm}}$ and $\Gamma_{\rm {gauge}}$. Before writing down the final expression we need to fix the constant contribution. Supersymmetry implies that the anomalous dimension of the symmetric traceless operators is equal to zero; this fixes the constant contribution. The complete\footnote{We would like to stress here that, since there are relations among the trace and permutation operators acting on two-dimensional indices, the above form of the Hamiltonian is not unique.
The action of the Hamiltonian is of course independent of the concrete representation in terms of $K$s and $P$s.} Hamiltonian can be written as \begin{eqnarray}\label{hful} \Gamma_{{\rm {full}}}&=&\frac{1}{2}\sum_{l=1}^{2L}\Big((\lambda_1^2-\lambda_2^2){\bar K}_{l,l+1}{\hat{\bar K}}_{l,l+1}+(\lambda_2^2-\lambda_1^2){ K}_{l,l+1}{\hat{K}}_{l,l+1}\nonumber\\ &&+4(\lambda_1\lambda_2+\lambda_1^2)(P_{l,l+2}\hat{\bar K}_{l,l+1}+ K_{l,l+1}\hat 1)+4(\lambda_1\lambda_2+\lambda_2^2)(P_{l,l+2}\hat{ K}_{l,l+1}+\bar K_{l,l+1} \hat 1)\nonumber \\&& -\lambda_1\lambda_2\big(2-2 P_{l,l+2}\hat P_{l,l+2}+K_{l,l+1}P_{l,l+2} \ \hat{K}_{l,l+1}\hat{P}_{l,l+2} +{\bar K}_{l,l+1}P_{l,l+2} \ \hat{\bar K}_{l,l+1}\hat{P}_{l,l+2} \nonumber\\&&\qquad\qquad \ \ +P_{l,l+2}K_{l,l+1} \ \hat{P}_{l,l+2}\hat{K}_{l,l+1} +P_{l,l+2}{\bar K}_{l,l+1} \ \hat{P}_{l,l+2}\hat{\bar K}_{l,l+1}\big)\Big) \ .\nonumber\\ \end{eqnarray} The last two lines are the only contributions to the mixing operator in the ABJM case. Indeed in the limit $k_1+k_2=0$ the Hamiltonian reduces to \begin{eqnarray} \Gamma_{{\rm {full}}}^{ABJM}&=&\frac{\lambda^2}{2}\sum_{l=1}^{2L}\Big(2-2 P_{l,l+2}\hat P_{l,l+2}+K_{l,l+1}P_{l,l+2} \ \hat{K}_{l,l+1}\hat{P}_{l,l+2} + P_{l,l+2}K_{l,l+1} \ \hat{P}_{l,l+2}\hat{K}_{l,l+1}\Big) \ \nonumber\\ \end{eqnarray} that is exactly the mixing operator in \cite{Minahan:2008hf} written in $SU(2)_R\times SU(2)$ invariant form, where we didn't distinguish between $K$, $P$ and $\bar{K}$, $\bar{P}$. It is nice to observe that one can define a parity operator $\mathcal{P}$ acting on the spin chain. Its action reverses the orientation of the chain from clockwise to anticlockwise or vice versa. 
In particular it acts on the operators as $$\mathcal{P}\hbox{ } {\rm Tr} \( O^{\dagger i_1}_{a_1} O^{a_2}_{i_2}...O^{\dagger i_{2L-1}}_{a_{2L-1}} O^{a_{2L}}_{i_{2L}}\) = {\rm Tr} \( O^{a_{2L}}_{i_{2L}} O^{\dagger i_{2L-1}}_{a_{2L-1}}...O^{a_{2}}_{i_{2}} O^{\dagger i_{1}}_{a_{1}}\)\ .$$ The parity operation\footnote{If we act with the parity operator on the Hamiltonian, the transformed Hamiltonian should act on the parity-transformed states as the original Hamiltonian acts on the non-transformed states. The vertices of such a transformed Hamiltonian are obtained from the full potential by acting on all the terms with the parity operator. This corresponds exactly to the exchange of $\lambda_1$ and $\lambda_2$ in eq. (\ref{hful}), or alternatively to the exchange of $K,\hat K$ and $\bar K,\hat {\bar K}$.} on the Hamiltonian (\ref{hful}) exchanges $\lambda_1$ and $\lambda_2$. The parity-transformed Hamiltonian is \begin{eqnarray} \mathcal{P} \hbox{ }\Gamma_{{\rm {full}}}\ \mathcal{P}&=&\frac{1}{2}\sum_{l=1}^{2L}\Big((\lambda_2^2-\lambda_1^2){\bar K}_{l,l+1}{\hat{\bar K}}_{l,l+1}+(\lambda_1^2-\lambda_2^2){ K}_{l,l+1}{\hat{K}}_{l,l+1}\nonumber\\ &&+4(\lambda_1\lambda_2+\lambda_2^2)(P_{l,l+2}\hat{\bar K}_{l,l+1}+ K_{l,l+1}\hat 1)+4(\lambda_1\lambda_2+\lambda_1^2)(P_{l,l+2}\hat{ K}_{l,l+1}+\bar K_{l,l+1} \hat 1)\nonumber \\&& -\lambda_1\lambda_2\big(2-2 P_{l,l+2}\hat P_{l,l+2}+K_{l,l+1}P_{l,l+2} \ \hat{K}_{l,l+1}\hat{P}_{l,l+2} +{\bar K}_{l,l+1}P_{l,l+2} \ \hat{\bar K}_{l,l+1}\hat{P}_{l,l+2} \nonumber\\&&\qquad\qquad \ \ +P_{l,l+2}K_{l,l+1} \ \hat{P}_{l,l+2}\hat{K}_{l,l+1} +P_{l,l+2}{\bar K}_{l,l+1} \ \hat{P}_{l,l+2}\hat{\bar K}_{l,l+1}\big)\Big)\nonumber \ .\\ \end{eqnarray} For $\lambda_1\neq \pm \lambda_2$ the parity symmetry of the Hamiltonian is broken by the terms in the first and second lines. The only values of $\lambda_1$ and $\lambda_2$ which correspond to a parity-invariant Hamiltonian are $\lambda_1=\pm\lambda_2$.
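The footnote's claim that parity can be implemented either as $\lambda_1\leftrightarrow\lambda_2$ or as the exchange $K,\hat K\leftrightarrow\bar K,\hat{\bar K}$ can be checked at the level of coefficients. Here is a sympy bookkeeping sketch; the dictionary keys are hypothetical shorthand labels for the operator structures in eq. (\ref{hful}) (\texttt{Kb} stands for $\bar K$ and a trailing \texttt{h} marks a hatted, flavor-space operator):

```python
import sympy as sp

l1, l2 = sp.symbols('lambda1 lambda2')
half = sp.Rational(1, 2)

# coefficients of Gamma_full with the overall 1/2 absorbed
G = {
    'KbKbh': half*(l1**2 - l2**2),  'KKh': half*(l2**2 - l1**2),
    'PKbh': 2*(l1*l2 + l1**2),      'K1h': 2*(l1*l2 + l1**2),
    'PKh':  2*(l1*l2 + l2**2),      'Kb1h': 2*(l1*l2 + l2**2),
    'const': -l1*l2,                'PPh': l1*l2,
    'KPKhPh': -half*l1*l2,  'KbPKbhPh': -half*l1*l2,
    'PKPhKh': -half*l1*l2,  'PKbPhKbh': -half*l1*l2,
}

# parity as the exchange of lambda1 and lambda2 ...
swap_lam = {s: c.subs({l1: l2, l2: l1}, simultaneous=True) for s, c in G.items()}

# ... or, alternatively, as the exchange of K <-> K-bar structures
Kswap = {'KbKbh': 'KKh', 'PKbh': 'PKh', 'K1h': 'Kb1h', 'KPKhPh': 'KbPKbhPh',
         'PKPhKh': 'PKbPhKbh', 'const': 'const', 'PPh': 'PPh'}
Kswap.update({v: k for k, v in list(Kswap.items())})
swap_K = {Kswap[s]: c for s, c in G.items()}

# the two prescriptions agree, as stated in the footnote
assert all(sp.expand(swap_lam[s] - swap_K[s]) == 0 for s in G)
```

The symmetric structures (the constant, $P\hat P$, and the four $KP$-type terms) are parity invariant on their own, which is why only the first two lines of eq. (\ref{hful}) break parity for $\lambda_1\neq\pm\lambda_2$.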
\section{Length four operators} A typical sign of integrability of a system is the presence of different operators with the same anomalous dimensions \cite{Beisert:2003tq,Kristjansen:2008ib}. In the ABJM case this happens, for example, for operators of length four \cite{Minahan:2008hf}. In that case the system is an $SU(4)$ spin chain alternating between the fundamental $\bf 4$ and antifundamental $\bf \bar{4}$ representations. The $\bf 4$ is associated with the vector $Y^A=(Z^1,Z^2,W^\dagger_1,W^\dagger_2)$, and the length four operators are ${\rm Tr} \left( Y^{A_1} Y^\dagger_{B_1}Y^{A_2} Y^\dagger_{B_2} \right) $. If we decompose these operators in representations of $SU(4)$ we find that they contain two singlets $\bf 1$, two adjoints $\bf 15$, one $\bf 20$ and one $\bf 84$. The two adjoint operators turn out to have the same anomalous dimension, $6\lambda^2$. The natural question is: what happens to these operators when $k_1 \ne - k_2$? Are they still degenerate? To answer these questions we consider the following operators \begin{equation}\label{ofour} {\rm Tr}\ O^{\dagger i_1}_{a_1} O^{a_2}_{i_2} O^{\dagger i_3}_{a_3} O^{a_4}_{i_4} \ . \end{equation} They decompose in representations of $SU(2)_R \times SU(2)$. In particular the $\bf 15$ of $SU(4)$ decomposes under $SU(2)_R\times SU(2)$ as \begin{equation} \bf 15 \rightarrow (3,1) + (1,3) + (3,3) \ . \nonumber \end{equation} For this reason, in this section we apply the Hamiltonian (\ref{hful}) to the operators in (\ref{ofour}) in the representations $\bf (3,1)$, $\bf (1,3)$ and $\bf (3,3)$. Operators with the same quantum numbers typically mix with each other under renormalization, so we need to consider all the operators of the same length that transform in the same representation.
The operators in the representations $\bf (3,1)$ and $\bf (1,3)$ come only from the decomposition of the $\bf 15$ of $SU(4)$, but there exist three other operators in the $\bf (3,3)$ representation: one coming from the $\bf 20$ and two from the $\bf 84$. As a result we have two operators in the $\bf (3,1)$, two in the $\bf (1,3)$, and five in the $\bf (3,3)$. In the following subsections we analyze their anomalous dimensions separately and check whether the degeneracy present in the integrable ABJM case survives or is lifted. \subsection{Operators in (3,1)} Let us start with the operators in the representation $({\bf 3,1})$. From the decomposition in the list (\ref{decomposition}) in the Appendix we know that there are six structures transforming in the representation ({\bf 3,1}): four coming from the {\bf 15} and two from the {\bf 45} and ${\bf \overline {45}}$ of $SU(4)$. Only the structures descending from the {\bf 15} of $SU(4)$ can form operators invariant under the trace. Indeed cyclicity relates four states and we get just two operators: \begin{eqnarray}\label{op31} {\rm Tr}\ |1-{ \bf 15}\rangle_{\bf (3,1)}&=&{\rm Tr}\ O^{\dagger i}_a O^a_i O^{\dagger m}_b O^c_m-{\rm trace} \ , \nonumber\\ {\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}&=&{\rm Tr}\ O^{\dagger m}_b O^a_i O^{\dagger i}_a O^c_m-{\rm trace} \ . \end{eqnarray} The first label enumerates the operators and the second one gives the corresponding $SU(4)$ multiplet.
Applying the mixing operator we obtain \small \begin{eqnarray}\label{31one} \Gamma \ {\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,1)}&=&2(\lambda_1^2-\lambda_1\lambda_2+\lambda_2^2){\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,1)}+(5\lambda_2-\lambda_1)(\lambda_1+\lambda_2){\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}\nonumber\\ &&+6\lambda_2(\lambda_1+\lambda_2)\Big({\rm Tr}\ O^{\dagger i}_a O^c_i O^{\dagger j}_b O^a_j+{\rm Tr}\ O^{\dagger i}_b O^a_i O^{\dagger j}_a O^c_j\Big)\nonumber\\[0.25cm] &=&2(\lambda_2^2+5\lambda_1\lambda_2+7\lambda_1^2){\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,1)}+(5\lambda_2-\lambda_1)(\lambda_1+\lambda_2){\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}\nonumber \end{eqnarray} \begin{eqnarray}\label{31two} \Gamma\ {\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}&=&2(\lambda_1^2-\lambda_1\lambda_2+\lambda_2^2){\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}+(5\lambda_1-\lambda_2)(\lambda_1+\lambda_2){\rm Tr} \ |1-{\bf 15}\rangle_{\bf (3,1)}\nonumber\\ &&+6\lambda_1(\lambda_1+\lambda_2)\Big({\rm Tr}\ O^{\dagger i}_a O^a_j O^{\dagger j}_b O^c_i+{\rm Tr}\ O^{\dagger i}_b O^c_j O^{\dagger j}_a O^a_i\Big)\nonumber\\[0.25cm] &=&2(\lambda_1^2+5\lambda_1\lambda_2+7\lambda_2^2){\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}+(5\lambda_1-\lambda_2)(\lambda_1+\lambda_2){\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,1)}\nonumber \end{eqnarray} \normalsize The application of the mixing operator on the states ${\rm Tr}\ |1-\bf 15\rangle_{\bf (3,1)}$ and ${\rm Tr}\ |2- \bf 15\rangle_{\bf (3,1)}$ produces structures which we cannot immediately match with the basis states. This comes from the fact that there are more structures than the linearly independent ones. There are 6 ways to organize the R-symmetry indices in such a way that they transform in representation {\bf 3} of $SU(2)_R$ and two ways to organize the flavor indices that transform in {\bf 1} of $SU(2)$. 
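The two relations (\ref{31one}) and (\ref{31two}) define a $2\times 2$ mixing matrix in the basis $\{{\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,1)},\ {\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,1)}\}$. Its trace, determinant and ABJM limit can be cross-checked with a short sympy sketch:

```python
import sympy as sp

l1, l2 = sp.symbols('lambda1 lambda2')

# 2x2 mixing matrix, entries read off from the action of Gamma above
M = sp.Matrix([
    [2*(l2**2 + 5*l1*l2 + 7*l1**2), (5*l2 - l1)*(l1 + l2)],
    [(5*l1 - l2)*(l1 + l2),         2*(l1**2 + 5*l1*l2 + 7*l2**2)],
])

# the eigenvalues are mean +- sqrt(disc); we verify this through the
# trace (sum of eigenvalues) and determinant (product of eigenvalues)
mean = 8*l1**2 + 10*l1*l2 + 8*l2**2
disc = (l1 + l2)**2 * (31*l1**2 - 46*l1*l2 + 31*l2**2)
assert sp.expand(M.trace() - 2*mean) == 0
assert sp.expand(M.det() - (mean**2 - disc)) == 0

# ABJM limit lambda1 = -lambda2 = lambda: M = 6 lambda^2 times the
# identity, so the two operators are degenerate with dimension 6 lambda^2
lam = sp.Symbol('lamda')
Mabjm = M.subs({l1: lam, l2: -lam})
assert (Mabjm - 6*lam**2*sp.eye(2)).expand() == sp.zeros(2)
```

Since $31\lambda_1^2-46\lambda_1\lambda_2+31\lambda_2^2$ has no real zeros away from the origin, the square root vanishes for real couplings only at $\lambda_1=-\lambda_2$.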
Using the relations from Appendix B these 12 structures can be related to the 6 basis structures which come from the decomposition of the {\bf 15}, {\bf 45} and ${\bf \overline {45}}$ of $SU(4)$. The eigenvalues are \begin{equation} 8\lambda_1^2+10\lambda_1\lambda_2+8\lambda_2^2\pm (\lambda_1+\lambda_2)\sqrt{31 \lambda_1^2-46 \lambda_1\lambda_2 +31\lambda_2^2} \ . \end{equation} For physical real values of $\lambda_1$, $\lambda_2$ the eigenvalues are degenerate only for $\lambda_1=-\lambda_2 = \lambda$. In this case our result reduces to the ABJM one \cite{Minahan:2008hf} and the two operators in (\ref{op31}) have the same anomalous dimension, $6\lambda^2$. In all other cases the degeneracy is lifted. \subsection{Operators in (1,3)} The operators in the representation $\bf(1,3)$, similarly to the previous case, come only from the decomposition of the {\bf 15} of $SU(4)$. As in the $\bf(3,1)$, we get only two operators \begin{eqnarray} {\rm Tr} \ |1-\bf 15\rangle_{\bf (1,3)}&=& {\rm Tr} \ O^{\dagger i}_a O^a_i O^{\dagger j}_b O^b_k-{\rm trace} \ , \nonumber\\ {\rm Tr} \ |2- \bf 15\rangle_{\bf (1,3)}&=&{\rm Tr} \ O^{\dagger j}_b O^a_i O^{\dagger i}_a O^b_k-{\rm trace} \ . \end{eqnarray} Again using the relations from Appendix B we obtain \begin{eqnarray} \Gamma \ {\rm Tr} \ |1-{\bf 15}\rangle_{\bf (1,3)}&=&2(3\lambda_1^2+\lambda_1\lambda_2+\lambda_2^2){\rm Tr}\ |1-{\bf 15}\rangle_{\bf (1,3)}\nonumber\\ &&+(\lambda_1+\lambda_2)(5\lambda_1+7\lambda_2){\rm Tr}\ |2- {\bf 15}\rangle_{\bf (1,3)}\ , \nonumber\\[0.2cm] \Gamma \ {\rm Tr} \ |2-{\bf 15}\rangle_{\bf (1,3)}&=&2(3\lambda_2^2+\lambda_1\lambda_2+\lambda_1^2){\rm Tr}\ |2-{\bf 15}\rangle_{\bf (1,3)}\nonumber\\ &&+(\lambda_1+\lambda_2)(5\lambda_2+7\lambda_1){\rm Tr}\ |1- {\bf 15}\rangle_{\bf (1,3)}\ . \end{eqnarray} The eigenvalues are: \begin{equation} 2(2\lambda_1^2+\lambda_1\lambda_2+2\lambda_2^2) \pm (\lambda_1+\lambda_2)\sqrt{3(13\lambda_1^2+22\lambda_1\lambda_2+13 \lambda_2^2)}.
\end{equation} As in the previous case, the mixing and the anomalous dimensions reduce to the ABJM ones \cite{Minahan:2008hf} in the limit $\lambda_1=-\lambda_2$; otherwise the degeneracy is lifted. \subsection{Operators in (3,3)} The $\bf (3,3)$ case is a bit more involved. As we can see in the list (\ref{decomposition}), there are nine structures transforming in $\bf (3,3)$ which come from the decomposition of the length four structures of $SU(4)$. Two of them, coming from the {\bf 45} and $\overline {\bf 45}$, do not correspond to any operators due to the antisymmetrization. Of the remaining seven structures, the four coming from the {\bf 15} of $SU(4)$ correspond to two trace-invariant operators. Altogether we have the following basis for the operators in $\bf (3,3)$.\footnote{In principle we can write two operators which would correspond to the decomposition of the {\bf 20}, one with the upper indices symmetrized and the lower antisymmetrized, and vice versa. By use of the relations in Appendix \ref{rel33} one can show that one of these two structures can be written as a linear combination of the remaining one, ${\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}$ and ${\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}$.} \begin{eqnarray}\label{basisoperators} {\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}&=& {\rm Tr}\ O^{\dagger i}_a O^a_i O^{\dagger j}_b O^c_k-{\rm trace} \ ,\nonumber\\[0.15cm] {\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}&=&{\rm Tr}\ O^{\dagger j}_b O^a_i O^{\dagger i}_a O^c_k-{\rm trace} \ ,\nonumber\\[0.15cm] {\rm Tr}\ |3-{\bf 20}\rangle_{\bf (3,3)}&=&{\rm Tr}\Big( O^{\dagger [i}_{(b} O^{[a}_{(k} O^{\dagger l]}_{e)} O^{d]}_{m)}-{\rm traces}\Big)\epsilon_{ad}\epsilon^{ce}\epsilon_{il}\epsilon^{jm}\nonumber\\&=&4\ {\rm Tr } \ O^{\dagger [i}_{(b} O^{[a}_{(k} O^{\dagger j]}_{a)} O^{c]}_{i)} -{\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}+{\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}- {\rm traces}\ ,\nonumber\\[0.15cm] {\rm Tr}\ |4-{\bf 84}\rangle_{\bf (3,3)}&=&{\rm Tr}\ \Big(O^{\dagger (j}_{(b}
O^{[a}_{[i} O^{\dagger m)}_{e)} O^{d]}_{l]}-{\rm traces}\Big)\epsilon_{ad}\epsilon^{ce}\epsilon^{il}\epsilon_{km}\nonumber\\ &=&4\ {\rm Tr } \ O^{\dagger (j}_{(b} O^{[a}_{[i} O^{\dagger i)}_{a)} O^{c]}_{k]} -\frac{1}{3} \ {\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}-\frac{1}{3}{\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}- {\rm traces}\ , \nonumber\\[0.15cm] {\rm Tr}\ |5-{\bf 84}\rangle_{\bf (3,3)}&=&{\rm Tr}\ \Big(O^{\dagger [i}_{[a} O^{(c}_{(k} O^{\dagger l]}_{d]} O^{e)}_{m)}-{\rm traces}\Big)\epsilon^{ad}\epsilon_{be}\epsilon_{il}\epsilon^{jm} \nonumber\\ &=&4\ {\rm Tr } \ O^{\dagger [j}_{[b} O^{(a}_{(i} O^{\dagger i]}_{a]} O^{c)}_{k)} -\frac{1}{3} \ {\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}-\frac{1}{3}{\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}- {\rm traces}\ . \nonumber\\ \end{eqnarray} The first number enumerates the operators and the second one gives the representation of $SU(4)$ to which each corresponds. The states ${\rm Tr}\ |1-{\bf 15}\rangle_{\bf (3,3)}$ and ${\rm Tr}\ |2-{\bf 15}\rangle_{\bf (3,3)}$ in the definition of the last three operators come from the decomposition of the traces of the $SU(4)$ operators, ${\bf 20}$ and ${\bf 84}$. To obtain the mixing matrix of anomalous dimensions we apply the Hamiltonian (\ref{hful}) to the above basis states. In general the result contains structures which do not match the five basis operators in the list (\ref{basisoperators}); to reduce them we use the relations listed in Appendix \ref{rel33}.
The mixing matrix is \footnotesize \begin{equation} \left( \begin{array}{ccccc} \frac{2}{3}\left(7\lambda_1^2+3\lambda_1\lambda_2+5 \lambda_2^2\right) & \frac{1}{3}\left(\lambda_1+\lambda_2\right)\left(7\lambda_1+5\lambda_2\right) &0 &-\frac{8}{3}\lambda_1(\lambda_1+\lambda_2) &-\frac{8}{3}\lambda_1(\lambda_1+\lambda_2)\\ \frac{1}{3}\left(\lambda_1+\lambda_2\right)\left(5\lambda_1+7\lambda_2\right) &\frac{2}{3}\left(5\lambda_1^2+3\lambda_1\lambda_2+7 \lambda_2^2\right) &0 &-\frac{8}{3}\lambda_2(\lambda_1+\lambda_2) &-\frac{8}{3}\lambda_2(\lambda_1+\lambda_2)\\ 0&0&2(\lambda_1-\lambda_2)^2&2(\lambda_1^2-\lambda_2^2)&-2(\lambda_1^2-\lambda_2^2)\\ -(\lambda_1+\lambda_2)(2\lambda_1+\lambda_2)&-(\lambda_1+\lambda_2)(\lambda_1+2\lambda_2)&-\lambda_1^2+\lambda_2^2&3(\lambda_1+\lambda_2)^2&(\lambda_1+\lambda_2)^2\\ -(\lambda_1+\lambda_2)(2\lambda_1+\lambda_2)&-(\lambda_1+\lambda_2)(\lambda_1+2\lambda_2)&-\lambda_1^2+\lambda_2^2&(\lambda_1+\lambda_2)^2&3(\lambda_1+\lambda_2)^2 \end{array}\right)\nonumber \end{equation} \normalsize In the ABJM-limit, $\lambda_1=-\lambda_2=\lambda$, the eigenstates and their corresponding eigenvalues are \cite{Minahan:2008hf} \begin{eqnarray} {\rm Tr} \ |1-\bf 15\rangle_{\bf (3,3)} \ :&& 6\lambda^2 \ , \nonumber\\ {\rm Tr} \ |2-\bf 15\rangle_{\bf (3,3)} \ : && 6\lambda^2 \ , \nonumber\\ {\rm Tr} \ |3-\bf 20\rangle_{\bf (3,3)} \ :&& 8\lambda^2 \ , \nonumber \\ {\rm Tr} \ |4-\bf 84\rangle_{\bf (3,3)} \ :&& 0 \ , \nonumber\\ {\rm Tr} \ |5-\bf 84\rangle_{\bf (3,3)} \ :&& 0 \ . \end{eqnarray} There are other particular values of $\lambda_1$, $\lambda_2$. For $\lambda_1=\lambda_2$ the theory is still parity invariant, but we don't find any degeneracy pairs among the eigenstates which would map one into each other under the parity transformation. 
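The ABJM-limit spectrum quoted above can be cross-checked with a short sympy sketch, with the entries copied from the $5\times 5$ mixing matrix displayed above:

```python
import sympy as sp

l1, l2, lam = sp.symbols('lambda1 lambda2 lamda')
r = sp.Rational

# the 5x5 mixing matrix in the (3,3) sector
M = sp.Matrix([
    [r(2,3)*(7*l1**2+3*l1*l2+5*l2**2), r(1,3)*(l1+l2)*(7*l1+5*l2), 0,
     -r(8,3)*l1*(l1+l2), -r(8,3)*l1*(l1+l2)],
    [r(1,3)*(l1+l2)*(5*l1+7*l2), r(2,3)*(5*l1**2+3*l1*l2+7*l2**2), 0,
     -r(8,3)*l2*(l1+l2), -r(8,3)*l2*(l1+l2)],
    [0, 0, 2*(l1-l2)**2, 2*(l1**2-l2**2), -2*(l1**2-l2**2)],
    [-(l1+l2)*(2*l1+l2), -(l1+l2)*(l1+2*l2), -(l1**2-l2**2),
     3*(l1+l2)**2, (l1+l2)**2],
    [-(l1+l2)*(2*l1+l2), -(l1+l2)*(l1+2*l2), -(l1**2-l2**2),
     (l1+l2)**2, 3*(l1+l2)**2],
])

# ABJM limit lambda1 = -lambda2 = lambda: the matrix becomes diagonal, so
# the eigenstates are the basis operators with eigenvalues
# {6, 6, 8, 0, 0} * lambda^2, matching the quoted spectrum
Mabjm = M.subs({l1: lam, l2: -lam})
assert Mabjm.expand() == sp.diag(6*lam**2, 6*lam**2, 8*lam**2, 0, 0)
```

Away from this point the matrix is not diagonal in this basis, and its eigenvectors are no longer the individual $SU(4)$ multiplet states.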
For values of $\lambda_1$, $\lambda_2$ with $\lambda_1\neq-\lambda_2$ we can find degeneracies among the eigenvalues of the mixing matrix, but since the theory is not parity invariant the operators with the same anomalous dimensions do not form parity pairs. These results suggest that the ABJM integrability is broken for generic values of $\lambda_1$ and $\lambda_2$. \subsection{Integrability and degeneracy} Let us now try to draw some conclusions about the integrability of the system.
As we claimed at the beginning of this section, a generic feature of integrability is the presence of degeneracy pairs \cite{Beisert:2003tq,Kristjansen:2008ib}, namely the existence of pairs of operators which have the same anomalous dimension and which are mapped into each other by the parity operator $\mathcal{P}$. In the ABJM spin chain the first example of degeneracy pairs occurs in the set of length four operators: they are the operators in the adjoint representation of $SU(4)$. In this section we checked that the $SU(2)_R\times SU(2)$ operators which are contained in the decomposition of the ABJM degeneracy pairs are no longer degeneracy pairs for generic $k_1$, $k_2$. This fact could be interpreted as weak evidence for the absence of integrability of the system for $k_1 \ne -k_2$. Let us explain why this is just weak evidence. First of all, the parity symmetry is broken by the Hamiltonian (\ref{hful}) for generic values of $k_1$, $k_2$. A nice observation is that parity is restored for $k_1=\pm k_2$. One of these two points is the ABJM limit, where degeneracy pairs appear and the system is integrable. The other point is still parity invariant, but there is no degeneracy in the anomalous dimensions. Even this observation is not conclusive: the original eigenvectors of the ABJM mixing matrix are no longer eigenvectors of the new Hamiltonian. The new eigenvectors do not form pairs under parity; they are actually parity eigenvectors, and we cannot claim that integrability is broken just because they do not have the same anomalous dimension. To say something stronger about the integrability of the theory one should, for example, compute the mixing of longer operators, or directly construct the integrable Hamiltonian associated to the $SU(2)_R\times SU(2)$ spin chain; but even in this case the claim might not be definitive.
Even with all these subtleties in mind, we would like to take the lifting of the degeneracy present in the ABJM limit as a hint against the integrability of the system. Of course, a more rigorous analysis is required. \section{Conclusions} In this note we began the analysis of deformed integrable Chern-Simons theories. As a first example we considered the ABJM theory with arbitrary Chern-Simons levels $k_1$, $k_2$. We constructed the complete two-loop mixing operator for the bosonic scalar sector of the theory and computed the anomalous dimensions of some length-four operators. We observed that the degeneracy of anomalous dimensions present in the integrable limit (the ABJM theory) disappears for generic $k_1$ and $k_2$. We interpreted this fact as weak evidence for the absence of integrability in these theories: when $k_1 + k_2 \ne 0$ the ABJM integrability seems to be destroyed. A possible future direction would be a deeper investigation of the integrability of these theories, in field theory and perhaps in the IIA string dual, to support or contradict our conclusions. Another nice application of the ideas presented in this note would be a more general analysis of the integrability of Chern-Simons quiver gauge theories. For example, it would be interesting to see what happens to the integrable properties of Chern-Simons theories obtained by orbifolding ABJM once we allow non-orbifold values for the various $k_i$. We hope to come back to this problem in the near future. We also hope to have convinced the reader that three-dimensional Chern-Simons theories are a nice laboratory for the study of integrability: due to the quartic interactions and the presence of the Chern-Simons levels, they allow a perturbative weak-coupling analysis of more general deformations than the four-dimensional examples.
\section*{Acknowledgments} We are happy to thank first of all Konstantin Zarembo for many nice discussions, and Claudio Destri, Giuseppe Policastro, Alessandro Tomasiello, Jan Troost, Alberto Zaffaroni and Andrey Zayakin for valuable conversations. W.S. would like to thank Ludwig-Maximilians-University and the University of Hamburg for their kind hospitality while part of this work was being done. D.~F.\ is supported by CNRS and ENS Paris. The research of W.~S. was supported in part by the ANR (CNRS-USAR) contract 05-BLAN-0079-01.
\section{Introduction} Following \cite{ig} and \cite{dixon}, we say that a subset $\left\{g_{1},g_{2},\hdots,g_{d}\right\}$ of a group $G$ \emph{invariably generates} $G$ if $\left\{g_{1}^{x_{1}},g_{2}^{x_{2}},\hdots,g_{d}^{x_{d}}\right\}$ generates $G$ for every $d$-tuple $(x_{1},x_{2},\hdots,x_{d})\in G^{d}$. The Chebotarev invariant $C(G)$ of $G$ is the expected value of the random variable $n$ that is minimal subject to the requirement that $n$ randomly chosen elements of $G$ invariably generate $G$. In \cite{kz}, Kowalski and Zywina conjectured that $C(G)=O(\sqrt{|G|})$ for every finite group $G$. Progress on the conjecture was first made in \cite{ig}, where it was shown that $C(G)=O(\sqrt{|G|}\log{|G|})$ (here, and throughout this paper, ``$\log$'' means $\log$ to base $2$). The conjecture was confirmed by the first author in \cite{ACheboGen}; more precisely, \cite[Theorem 1]{ACheboGen} states that \emph{there exists an absolute constant $\beta$ such that $C(G)\le \beta\sqrt{|G|}$ whenever $G$ is a finite group}. In this paper, we use a different approach to the problem. In doing so, we show that one can take $\beta=5/3$ when $G$ is soluble, and that this is best possible. Furthermore, we show that for each $\epsilon>0$, there exists a constant $c_{\epsilon}$ such that $C(G)\le (1+\epsilon)\sqrt{|G|}+c_{\epsilon}$. From \cite[Proposition 4.1]{kz}, one can see that this is also (asymptotically) best possible. Our main result is as follows. \begin{thm}\label{KZConjecture} Let $G$ be a finite group. \begin{enumerate}[(i)] \item For any $\epsilon>0$, there exists a constant $c_{\epsilon}$ such that $C(G)\le (1+\epsilon)\sqrt{|G|}+c_{\epsilon}$; \item If $G$ is a finite soluble group, then $C(G)\leq \frac{5}{3}\sqrt{|G|},$ with equality if and only if $G=C_{2}\times C_{2}$. \end{enumerate} \end{thm} We also derive an upper bound on $C(G)$, for a finite soluble group $G$, in terms of the set of \emph{crowns} for $G$.
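As an aside (not part of the paper's arguments), the equality case $G=C_2\times C_2$ of Theorem \ref{KZConjecture} Part (ii) can be verified directly. Since this group is abelian, conjugation is trivial and invariable generation coincides with ordinary generation, and one finds $C(C_2\times C_2)=10/3=\frac{5}{3}\sqrt{4}$. A short Python sketch checking this by brute force:

```python
from fractions import Fraction
from itertools import product

# C2 x C2 realised as F_2^2; the group is abelian, so conjugation is
# trivial and invariable generation coincides with ordinary generation.
G = list(product((0, 1), repeat=2))

def generates(vecs):
    """Do the given vectors generate all of F_2^2 under addition?"""
    span = {(0, 0)}
    for v in vecs:
        span |= {(v[0] ^ w[0], v[1] ^ w[1]) for w in span}
    return len(span) == 4

# 1 - P_I(k): probability that k uniform random elements fail to generate.
# Brute force over all |G|^k tuples for small k ...
brute = [Fraction(sum(not generates(t) for t in product(G, repeat=k)), 4**k)
         for k in range(7)]

# ... against inclusion-exclusion over the three order-2 subgroups:
# a tuple fails iff it lies inside one of them, so
# 1 - P_I(k) = 3*(1/2)^k - 2*(1/4)^k.
closed = [3 * Fraction(1, 2)**k - 2 * Fraction(1, 4)**k for k in range(7)]
assert brute == closed

# C(G) = sum_{k>=0} (1 - P_I(k)) = 3*2 - 2*(4/3) = 10/3,
# which attains the bound (5/3) * sqrt(|G|) = (5/3) * 2.
C = 3 * Fraction(2) - 2 * Fraction(4, 3)
print(C)  # 10/3
```

The same geometric-series evaluation works for any $(C_r)^{\delta}$ and reproduces the classical expected-generation formula quoted in Section 3.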
Before stating this result, we require the following notation: Let $G$ be a finite soluble group. Given an irreducible $G$-module $V$ which is $G$-isomorphic to a complemented chief factor of $G$, let $\delta_V(G)$ be the number of complemented factors in a chief series of $G$ which are $G$-isomorphic to $V$. Then set $\theta_V(G)=0$ if $\delta_{V}(G)=1$, and $\theta_{V}(G)=1$ otherwise. Also, let $q_{V}(G):=|\End_{G}(V)|$, let $n_{V}(G):=\dim_{\End_{G}(V)}{V}$, and let $H_{V}(G):=G/C_G(V)$ (we will suppress the $G$ in this notation when the group is clear from the context). Also, let $\sigma:=2.118456563\hdots$ be the constant appearing in \cite[Corollary 2]{Pomerance}. The aforementioned upper bound can now be stated as follows. \begin{thm}\label{CheboSolTheorem} Let $G$ be a finite soluble group, and let $A$ [respectively $B$] be a set of representatives for the irreducible $G$-modules which are $G$-isomorphic to a non-central [resp. central] complemented chief factor of $G$. Then $$C(G)\le \sum_{V\in A} \min\left\{(\delta_{V}\cdot\theta_{V}+c_{V})|V|,\left(\left\lceil\frac{\delta_{V}\cdot\theta_{V}}{n_{V}}\right\rceil+\frac{q_{V}^{n_V}}{q_{V}^{n_V}-1}\right)|H_{V}|\right\}+\max_{V\in B}{\del_{V}}+\sigma$$ where $c_{V}:=q_{V}/(q_{V}-1)\le 2$.\end{thm} The layout of the paper is as follows. In Section 2 we recall the notion of a \emph{crown} in a finite group. In Section 3 we prove Theorem \ref{CheboSolTheorem} and deduce a number of consequences, while Section 4 is reserved for the proof of Theorem \ref{KZConjecture} Part (i). Finally, we prove Theorem \ref{KZConjecture} Part (ii) in Section 5. \section{Crowns in finite groups} In this section, we recall the notion and the main properties of crowns in finite groups. Let $L$ be a monolithic primitive group and let $A$ be its unique minimal normal subgroup. For each positive integer $k$, let $L^k$ be the $k$-fold direct product of $L$.
The crown-based power of $L$ of size $k$ is the subgroup $L_k$ of $L^k$ defined by $$L_k=\{(l_1, \ldots , l_k) \in L^k \mid l_1 \equiv \cdots \equiv l_k \ {\mbox{mod}} A \}.$$ Equivalently, $L_k=A^k \diag L^k$. Following \cite{paz}, we say that two irreducible $G$-groups $V_1$ and $V_2$ are {$G$-equivalent} and we put $V_1 \sim_G V_2$, if there are isomorphisms $\phi: V_1\rightarrow V_2$ and $\Phi: V_1\rtimes G \rightarrow V_2\rtimes G$ such that the following diagram commutes: \begin{equation*} \begin{CD} 1@>>>V_{1}@>>>V_{1}\rtimes G@>>>G@>>>1\\ @. @VV{\phi}V @VV{\Phi}V @|\\ 1@>>>V_{2}@>>>V_{2}\rtimes G@>>>G@>>>1. \end{CD} \end{equation*} \ Note that two $G$\nobreakdash-isomorphic $G$\nobreakdash-groups are $G$\nobreakdash-equivalent. In the particular case where $V_1$ and $V_2$ are abelian the converse is true: if $V_1$ and $V_2$ are abelian and $G$\nobreakdash-equivalent, then $V_1$ and $V_2$ are also $G$\nobreakdash-isomorphic. It is proved (see for example \cite[Proposition 1.4]{paz}) that two chief factors $V_1$ and $V_2$ of $G$ are $G$-equivalent if and only if either they are $G$-isomorphic between them or there exists a maximal subgroup $M$ of $G$ such that $G/\core_G(M)$ has two minimal normal subgroups $N_1$ and $N_2$ $G$-isomorphic to $V_1$ and $V_2$ respectively. For example, the minimal normal subgroups of a crown-based power $L_k$ are all $L_k$-equivalent. Let $V=X/Y$ be a chief factor of $G$. A complement $U$ to $V$ in $G$ is a subgroup $U $ of $G$ such that $UV=G$ and $U \cap X=Y$. We say that $V=X/Y$ is a Frattini chief factor if $X/Y$ is contained in the Frattini subgroup of $G/Y$; this is equivalent to saying that $V$ is abelian and there is no complement to $V$ in $G$. The number $\delta_V(G)$ of non-Frattini chief factors $G$-equivalent to $V$ in any chief series of $G$ does not depend on the series. 
Now, we denote by $L_V$ the monolithic primitive group associated to $V$, that is $$L_{V}= \begin{cases} V\rtimes (G/C_G(V)) & \text{ if $V$ is abelian}, \\ G/C_G(V)& \text{ otherwise}. \end{cases} $$ If $V$ is a non-Frattini chief factor of $G$, then $L_V$ is a homomorphic image of $G$. More precisely, there exists a normal subgroup $N$ of $G$ such that $G/N \cong L_V$ and $\soc(G/N)\sim_G V$. Consider now all the normal subgroups $N$ of $G$ with the property that $G/N \cong L_V$ and $\soc(G/N)\sim_G V$: the intersection $R_G(V)$ of all these subgroups has the property that $G/R_G(V)$ is isomorphic to the crown-based power $(L_V)_{\delta_V(G)}$. The socle $I_G(V)/R_G(V)$ of $G/R_G(V)$ is called the $V$-crown of $G$ and it is a direct product of $\delta_V(G)$ minimal normal subgroups $G$-equivalent to $V$. \begin{lemma}{\cite[Lemma 1.3.6]{classes}}\label{corona} Let $G$ be a finite group with trivial Frattini subgroup. There exists a chief factor $V$ of $G$ and a non trivial normal subgroup $U$ of $G$ such that $I_G(V)=R_G(V)\times U.$ \end{lemma} \begin{lemma}{\cite[Proposition 11]{crowns}}\label{sotto} Assume that $G$ is a finite group with trivial Frattini subgroup and let $I_G(V), R_G(V), U$ be as in the statement of Lemma \ref{corona}. If $KU=KR_G(V)=G,$ then $K=G.$ \end{lemma} \section{Crown-based powers with abelian socle} The aim of this section is to prove Theorem \ref{CheboSolTheorem}. For a finite group $G$ and an irreducible $G$-group $V$, we write $\Omega_{G,V}$ for the set of maximal subgroups $M$ of $G$ such that either $\soc{(G/\core_{G}(M))}\sim_{G}V$ or $\soc{(G/\core_{G}(M))}\sim_G V\times V$. Also, for $M\in \Omega_{G,V}$, we write $\w{M}$ for the union of the $G$-conjugates of $M$. We will also say that the elements $g_{1}$, $g_{2}$, $\hdots$, $g_{k}\in G$ \emph{satisfy the $V$-property in $G$} if $g_{1}$, $g_{2}$, $\hdots$, $g_{k}\in \w{M}$ for some $M\in \Omega_{V}$. 
Finally, let $P_{G,V}^{\ast}(k)$ denote the probability that $k$ randomly chosen elements of $G$ satisfy the $V$-property in $G$. Suppose now that $V$ is abelian, and consider the faithful irreducible linear group $H:=G/C_G(V)$. We will denote by $\der(H,V)$ the set of the derivations from $H$ to $V$ (i.e. the maps $\zeta: H\to V$ with the property that $\zeta(h_1h_2)=\zeta(h_1)^{h_2}+\zeta(h_2)$ for every $h_1,h_2\in H$). If $v\in V$ then the map $\zeta_v:H\to V$ defined by $\zeta_v(h)=[h,v]$ is a derivation, called an \emph{inner derivation} from $H$ to $V$. The set $\ider(H,V)=\{\zeta_v\mid v \in V\}$ of the inner derivations from $H$ to $V$ is a subgroup of $\der(H,V)$ and the factor group $\h(H,V)=\der(H,V)/\ider(H,V)$ is the first cohomology group of $H$ with coefficients in $V.$ \begin{prop}\label{crucial} Let $H$ be a group acting faithfully and irreducibly on an elementary abelian $p$-group $V$. For a positive integer $u$, we consider the semidirect product $G=V^{u}\rtimes H$ where the action of $H$ is diagonal on $V^{u}$; that is, $H$ acts in the same way on each of the $u$ direct factors. Assume also that $u=\delta_{V}(G)$. View $V$ as a vector space over the field $F=End_{H}(V)$. Let $h_{1},\hdots,h_{k}\in H$, and $w_{1},\hdots,w_{k}\in V^{u}$, and write $w_{i}=(w_{i,1},w_{i,2},\hdots,w_{i,u})$. Assume that $h_{1}w_{1},h_{2}w_{2},\hdots,h_{k}w_{k}$ satisfy the $V$-property in $G$. Then for $1\le j\le u$, the vectors $$r_{j}:=(w_{1,j},w_{2,j},\hdots,w_{k,j})$$ of $V^{k}$ are linearly dependent modulo the subspace $W+D$, where $$\begin{aligned}W &:=\left\{(y_{1},y_{2},\hdots,y_{k})\text{ : }y_{i}\in [h_{i},V]\text{ for }1\le i\le k\right\} \text{, and}\\ D &:= \left\{(\zeta(h_1),\zeta(h_2),\hdots,\zeta(h_k) )\in V^{k}\text{ : }\zeta\in\der(H,V)\right\}.\end{aligned}$$ \end{prop} \begin{proof} Let $M$ be a maximal subgroup of $G$ such that $M\in \Omega_{V}$, and $h_{1}w_{1},\hdots,$ $h_{k}w_{k}\in \widetilde{M}$.
Since $u=\delta_{V}(G)$, $M$ cannot contain $V^{u}$, and hence $MV^{u}=G$. Thus, $M/M\cap V^{u}\cong H$, and hence there exists an integer $t\ge 0$ and elements $h_{k+1}w_{k+1},\hdots,h_{k+t}w_{k+t}\in M$ such that $h_{1},\hdots,h_{k},h_{k+1},\hdots,h_{k+t}$ invariably generate $H$. But then, \cite[Proposition 6]{ACheboGen} implies, in particular, that $r_{1},\hdots,r_{u}\in V^{k}$ are linearly dependent modulo $W+D$, as needed.\end{proof} Before proceeding to the proof of Theorem \ref{CheboSolTheorem}, we require the following easy result from probability theory. \begin{prop}\label{binomale} Write $B(k,p)$ for the binomial random variable with $k$ trials and probability $0<p\le 1$. Fix $l\ge 0$. Then $$\sum_{k=l}^{\infty} P(B(k,p)=l)\le \frac{1}{p}.$$\end{prop} \begin{proof} Note first that $$\binom{k}{l}x^{k-l}=\frac{1}{l!}\frac{d^{l}}{dx^{l}}x^{k}$$ where $\frac{d^{l}}{dx^{l}}x^{k}$ denotes the $l$-th derivative of $x^{k}$. Let $x=1-p$. By definition, $P(B(k,p)=l)=\binom{k}{l}(1-x)^{l}x^{k-l}$. Thus\begin{align*} \sum_{k=l}^{\infty}P(B(k,p)=l) &=(1-x)^{l}\sum_{k=l}^{\infty}\binom{k}{l}x^{k-l}\\ &=\frac{(1-x)^{l}}{l!}\sum_{k=l}^{\infty}\frac{d^{l}}{dx^{l}}x^{k}\\ &=\frac{(1-x)^{l}}{l!}\frac{d^{l}}{dx^{l}}\sum_{k=l}^{\infty}x^{k}\\ &\le\frac{(1-x)^{l}}{l!}\frac{d^{l}}{dx^{l}}\frac{1}{1-x}\\ &=\frac{(1-x)^{l}}{l!}\frac{l!}{(1-x)^{(l+1)}}=\frac{1}{1-x}=\frac{1}{p}\end{align*} as needed. (Note that the third equality above follows since the series $\sum_{k=l}^{\infty}x^{k}$ is convergent.)\end{proof} We shall also require the following. We remark that since $P^*_{G,V}(k)\le \sum_{\widetilde{M}\in \Omega_{V}}\left(\frac{|\widetilde{M}|}{|G|}\right)^{k}$ and $\frac{|\widetilde{M}|}{|G|}<1$, $\sum_{k=0}^{\infty} P^*_{G,V}(k)$ converges. \begin{prop}\label{firstred} Let $G$ be a finite group, and let $A$ [respectively $B$] be a set of representatives for the irreducible $G$-groups which are $G$-equivalent to a non-central [resp. central] non-Frattini chief factor of $G$. 
Then\begin{enumerate} \item $C(G)\le \sum_{V\in A}\sum_{k=0}^{\infty} P^*_{G,V}(k)+\max_{V\in B}\delta_{V}+\sigma$, and; \item If $\frat(G)=1$ and $U$ and $V$ are as in Lemma \ref{corona}, then $C(G)\le C(G/U)+\sum_{k=0}^{\infty} P^*_{G,V}(k)$.\end{enumerate} \end{prop} \begin{proof} By definition, $C(G)=\sum_{k=0}^{\infty} (1-P_{I}(G,k))$, where $P_{I}(G,k)$ denotes the probability that $k$ randomly chosen elements of $G$ invariably generate $G$. Let $P_{G,G/G'}(k)$ denote the probability that $k$ randomly chosen elements $g_1$, $\hdots$, $g_k$ of $G$ satisfy $\langle G'g_1,\hdots, G'g_k\rangle=G$. Then it is easy to see that \begin{equation}\label{firstredeq}1-P_{I}(G,k)\le 1-P_{G,G/G'}(k)+\sum_{V\in A} P^{\ast}_{G,V}(k).\end{equation} Clearly $P_{G,G/G'}(k)$ is the probability that a random $k$-tuple of elements from $G/G'$ generates $G/G'$. Hence, $C(G/G')=\sum_{k=0}^{\infty} (1-P_{G,G/G'}(k))$ is at most $d(G/G')+\sigma$ by \cite[Corollary 2]{Pomerance} (here, for a group $X$, $d(X)$ denotes the minimal number of elements required to generate $X$). Since $d(G/G')\le \max_{V\in B}\delta_V$, it follows from (\ref{firstredeq}) that $C(G)\le\max_{V\in B}\delta_V+\sigma+\sum_{V\in A}\sum_{k=0}^{\infty} P^{\ast}_{G,V}(k)$, and Part (i) follows. Assume that $\frat(G)=1$, and let $U$ and $V$ be as in Lemma \ref{corona}. Then \begin{equation}\label{nomoremax0}1-P_{I}(G,k)\le 1-P_{I}(G/U,k)+\sum_{W}P^*_{G,W}(k)\end{equation} where the sum in the second term goes over all complemented chief factors $W$ of $G$ not containing $U$. Now, if $M$ is a maximal subgroup of $G$ not containing $U$, then $M$ contains $R_{G}(V)$, by Lemma \ref{sotto}. Hence, $\core_G(M)$ contains $R_G(V)$, so $M\in \Omega_{G,V}$. Since $C(G)=\sum_{k=0}^{\infty} (1-P_{I}(G,k))$, Part (ii) now follows immediately from (\ref{nomoremax0}), and this completes the proof. \end{proof} The proof of Theorem \ref{CheboSolTheorem} will follow as a corollary of the proof of the next proposition. 
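As an aside, the tail-sum bound of Proposition \ref{binomale} can be sanity-checked numerically; in fact the series $\sum_{k\ge l}P(B(k,p)=l)$ sums to exactly $1/p$ (the negative binomial identity $\sum_{k\ge l}\binom{k}{l}x^{k-l}=(1-x)^{-(l+1)}$), so the stated bound is sharp. A small illustrative Python check, not part of the proofs:

```python
from math import comb

def tail_sum(p, l, N):
    """Partial sum of P(B(k, p) = l) over k = l, ..., N."""
    return sum(comb(k, l) * p**l * (1 - p)**(k - l) for k in range(l, N + 1))

for p in (0.25, 0.5, 0.9):
    for l in (0, 1, 5):
        s = tail_sum(p, l, 2000)
        assert s <= 1 / p + 1e-12     # the bound of the proposition
        assert abs(s - 1 / p) < 1e-6  # and it is attained in the limit
print("ok")
```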
For a finite group $G$, and an abelian chief factor $V$ of $G$, set $H_{V}=H_{V}(G):=G/C_G(V)$, $m=m_V=m_{V}(G):=\dim_{\End_{G}(V)}\h(H_{V},V)$, and write $p=p_{V}=p_{V}(G)$ for the probability that a randomly chosen element $h$ of $H_{V}$ fixes a non zero vector in $V$. Also, let $\delta_{V}=\delta_V(G)$ be the number of complemented factors in a chief series of $G$ which are $G$-isomorphic to $V$, and set $\theta_{V}=\theta_V(G)=0$ if $\delta_{V}=1$, and $\theta_{V}=1$ otherwise. Finally, let $q_{V}=q_{V}(G):=|\End_{G}(V)|$ and $n_{V}=n_{V}(G):=\dim_{\End_{G}(V)}{V}$. \begin{prop}\label{CheboProp} Let $G$ be a finite group with trivial Frattini subgroup, and let $U$, $V$ and $R=R_G(V)$ be as in Lemma \ref{corona}. If $V$ is nonabelian, then set $\alpha_U:=\sum_{k=0}^{\infty} P^*_{G,V}(k)$. If $V$ is abelian, then write $q=q_V$, $n=n_V$ and $H=H_V$, $p=p_V$ and $m=m_V$. Also, set $\delta=\delta_{V}$ and define $\theta=0$ if $\delta=1,$ $\theta=1$ otherwise, and set $$\alpha_U:=\begin{cases}\sum_{0\leq i\leq \delta-1}\frac{q^\delta}{q^\delta-q^i}\leq \delta+\frac{q}{(q-1)^2}& \text { if }H=1,\\ \min\left\{\left(\delta\cdot \theta+m+\frac{q}{q-1}\right)\frac{1}{p},\left(\lceil\frac{\delta\cdot \theta}{n}\rceil+\frac{q^n}{q^n-1}\right)|H|\right\}& \text { otherwise.} \end{cases}$$ Then $$C(G)\le C(G/U)+\alpha_U.$$ \end{prop} \begin{proof} By Proposition \ref{firstred} Part (ii), we have \begin{equation}\label{nomoremax} C(G)\le C(G/U)+\sum_{k=0}^{\infty} P^*_{G,V}(k).\end{equation} Thus, we just need to prove that $\sum_{k=0}^{\infty} P^{\ast}_{G,V}(k)\le \alpha_U$. Therefore, we may assume that $V$ is abelian. Writing bars to denote reduction modulo $R_G(V)$, note that if $M$ is a maximal subgroup of $G$ with $M\in \Omega_{G,V}$, then $R_G(V)\le M$ and $\overline{M}\in \Omega_{\overline{G},V}$. Hence, $P^{\ast}_{G,V}(k)\le P^{\ast}_{\ol{G},V}(k)$, so we may assume that $R_G(V)=1$. 
Thus, $G\cong V^{\delta}\rtimes H$, where $H$ acts faithfully and irreducibly on $V$, and diagonally on $V^{\delta}$. Suppose first that $|H|=1$. Then $G=V^{\delta}\cong (C_r)^{\delta}$, for some prime $r$, and $P^*_{G,V}(k)$ is the probability that $k$ randomly chosen elements of $G$ fail to generate $G$. Hence, $\sum_{k=0}^{\infty} P^*_{G,V}(k)$ is the expected number of random elements to generate $(C_r)^{\delta}$, which is well known to be $$\sum_{i=0}^{\delta-1}\frac{r^{\delta}}{r^{\delta}-r^{i}}.$$ See, for instance, \cite[top of page 193]{Pomerance}. So we may assume that $|H|>1$. Let $F=\End_HV$, so that $|F|=q$, $\dim_FV=n$, and $|V|=q^n$. Fix elements $x_{1}$, $x_{2}$ ,$\hdots$, $x_{k}$ in $G$, and for $i\in\{1,\dots,k\},$ let $x_i=w_ih_i$ with $w_i\in V^\delta$ and $h_i\in H.$ For $t\in \{1,\dots,\delta\}$ let $$\begin{aligned}r_t&=(\pi_t(w_1),\dots,\pi_t(w_k))\in V^{k}.\end{aligned}$$ where $\pi_{t}$ denotes projection onto the $t$-th direct factor of $V^{\delta}$. Moreover let \begin{align*}W &:=\left\{(u_{1},u_{2},\hdots,u_{k})\text{ : }u_{i}\in [h_{i},V]\text{ for }1\le i\le k\right\}\text{, and}\\ D &:= \left\{(\zeta(h_1),\zeta(h_2),\hdots,\zeta(h_k) )\in V^{k}\text{ : }\zeta\in\der(H,V)\right\}.\end{align*} By Proposition \ref{crucial}, $P^{\ast}_{G,V}(k)$ is at most the probability that $r_1,\dots,r_\delta$ are linearly dependent modulo $W+D$. 
Also, for an $f$-tuple $J:=(j_{1},j_{2},\hdots,j_{f})$ of distinct elements $j_i$ of $\left\{1,\hdots,k\right\}$, set $$r_{t,J}:=(\pi_t(w_{j_1}),\pi_t(w_{j_2}),\hdots,\pi_t(w_{j_f}))\in V^{f}$$ for $t\in \left\{1,\hdots,\delta\right\}$, and set \begin{align*}W_{J} &:=\left\{(u_{j_1},u_{j_2},\hdots,u_{j_f})\in V^{f}\text{ : }u_{i}\in [h_{j_i},V]\text{ for }1\le i\le f\right\}\text{, and}\\ D_{J} &:= \left\{(\zeta(h_{j_1}),\zeta(h_{j_2}),\hdots,\zeta(h_{j_f}) )\in V^{f}\text{ : }\zeta\in\der(H,V)\right\}.\end{align*} Notice that: $(\ast)$ If $J$ is fixed and $r_1,\dots,r_\delta$ are $F$-linearly dependent modulo $W+D$, then the vectors $r_{1,J},\dots,r_{\delta,J}$ of $V^{f}$ are $F$-linearly dependent modulo $W_J+D_J$. We will prove first that \begin{align}\sum_{k=0}^{\infty} P^{\ast}_{G,V}(k)\le (\deltat +m+c_{V})\frac{1}{p},\end{align} where $c_V$ is as in the statement of Theorem 2. To this end, let $\Delta_{l}$ be the subset of $H^k$ consisting of the $k$-tuples $(h_1,\dots,h_k)$ with the property that $C_V(h_i)\neq 0$ for precisely $l$ different choices of $i\in\{1,\dots,k\}.$ If $(h_1,\dots,h_k)\in \Delta_{l},$ then, by \cite[Lemma 7]{ACheboGen}, $W+D$ is a subspace of $V^{k}\cong F^{nk}$ of codimension at least $l-m$: so the probability that $r_{1},\dots,r_{\delta}$ are $F$-linearly dependent modulo $W+D$ is at most $$\begin{aligned}p_{l}&=1-\left(\frac{q^{nk}-q^{nk-l+m}}{q^{nk}}\right) \cdots\left(\frac{q^{nk}-q^{nk-l+m+\delta-1}}{q^{nk}}\right)\\ &=1-\left(1-\frac{1}{q^{l-m}}\right)\dots \left(1-\frac{q^{\delta-1}}{q^{l-m}}\right)\\&\leq \min\left\{1,\left(\frac{q^\delta-1}{q-1}\right)\frac{1}{q^{l-m}}\right\}\le \min\left\{1,1/q^{l-m-\deltat}\right\}. 
\end{aligned}$$ Hence, we have $$\begin{aligned}\sum_{k=0}^{\infty} P^{\ast}_{G,V}(k) &\le \sum_{k=0}^{\infty}\sum_{l=0}^{k} P(B(k,p)=l)\min\left\{1,q^{\deltat+m-l}\right\}\\ &\le \sum_{k=0}^{\infty} P(B(k,p)<\deltat+m )+\sum_{k=0}^{\infty}\sum_{l=\deltat+m}^{k} P(B(k,p)=l)q^{\deltat+m-l}\\ &\le \sum_{k=0}^{\infty} P(B(k,p)<\deltat+m )+\sum_{l=0}^{\infty}q^{-l}\!\!\!\!\!\sum_{k=l+\deltat+m}^{\infty} \!\!\!\!\!\!P(B(k,p)=l+\deltat+m)\\ &\le \frac{\deltat+m+c_{V}}{p} \end{aligned}$$ where $c_{V}=\frac{q}{q-1}$. Note that the last step above follows from Proposition \ref{binomale}. Thus, all that remains is to show that \begin{align}\sum_{k=0}^{\infty} P^{\ast}_{G,V}(k)\le \left(\left\lceil\frac{\deltat}{n}\right\rceil+\frac{q^n}{q^{n}-1}\right)|H|.\end{align} For this, we define $\Omega_{l}$ to be the subset of $H^k$ consisting of the $k$-tuples $(h_1,\dots,h_k)$ with the property that $h_i=1$ for precisely $l$ different choices of $i\in \{1,\dots,k\}.$ Suppose that $(h_1,\hdots,h_k) \in \Omega_l$, and set $J:=(j_{1},j_{2},\hdots,j_{l})$, where $j_{1}<j_{2}<\hdots<j_{l}$ and $\left\{j_{1},j_{2},\hdots,j_{l}\right\}=\left\{i\text{ }|\text{ }1\le i\le k, h_i=1\right\}$. Then, by $(\ast)$, the probability $p_{l}'$ that $r_{1}$, $r_{2}$, $\hdots$, $r_{\delta}$ are $F$-linearly dependent modulo $W+D$ is at most the probability that the vectors $r_{1,J}$, $r_{2,J}$, $\hdots$, $r_{\delta,J}\in V^{l}$ are $F$-linearly dependent modulo $W_{J}+D_{J}$. But $W_{J}+D_{J}=0$, by the definition of $J$. Thus we have $$\begin{aligned}p_{l}'&\le 1-\left(\frac{q^{nl}-1}{q^{nl}}\right) \cdots\left(\frac{q^{nl}-q^{\delta-1}}{q^{nl}}\right)\\ &=1-\left(1-\frac{1}{q^{nl}}\right)\dots \left(1-\frac{q^{\delta-1}}{q^{nl}}\right)\leq \min\left\{1,\left(\frac{q^\delta-1}{q-1}\right)\frac{1}{q^{nl}}\right\}\le \min\left\{1,\frac{1}{ q^{nl-\deltat}}\right\}.
\end{aligned}$$ Hence, if $\alpha:=\lceil \frac{\deltat}{n}\rceil$, and $p'=1/|H|$ is the probability that a randomly chosen element of $H$ is the identity, then we have \begin{align*} \sum_{k=0}^{\infty} P^{\ast}_{G,V}(k) &\le \sum_{k=0}^{\infty} P(B(k,p')<\alpha )+\sum_{k=0}^{\infty}\sum_{l=\alpha}^{k} P(B(k,p')=l)q^{\deltat-nl}\\ &\le \sum_{k=0}^{\infty} P(B(k,p')<\alpha )+\sum_{l=0}^{\infty}q^{-nl-n\alpha+\deltat}\sum_{k=l+\alpha}^{\infty} P(B(k,p')=l+\alpha)\\ &\le \sum_{k=0}^{\infty} P(B(k,p')<\alpha )+\sum_{l=0}^{\infty}q^{-nl}\sum_{k=l+\alpha}^{\infty} P(B(k,p')=l+\alpha)\\ &\le \frac{1}{p'}\left(\alpha +\frac{q^{n}}{q^{n}-1}\right) \end{align*} Note that the last step above again follows from Proposition \ref{binomale}. Since $p'=1/|H|$, (3.5) follows, whence the result. \end{proof} We are now ready to prove Theorem \ref{CheboSolTheorem}. \begin{proof}[Proof of Theorem \ref{CheboSolTheorem}] By Proposition \ref{firstred} Part (i), we have $$C(G)\le\max_{V\in B}\delta_V+\sigma+\sum_{k=0}^{\infty}\sum_{V\in A} P^{\ast}_{G,V}(k).$$ Thus, it will suffice to prove that \begin{align} \sum_{k=0}^{\infty} P^{\ast}_{G,V}(k)\le \min\left\{(\delta_{V}\cdot\theta_{V}+c_{V})q_{V}^{n_{V}},\left(\left\lceil\frac{\delta_{V}\cdot\theta_{V}}{n_V}\right\rceil+\frac{q_{V}^{n_{V}}}{q_{V}^{n_{V}}-1}\right)|H_{V}|\right\}\end{align} for each non-central complemented chief factor $V$ of $G$. However, since $\h(H,V)=0$ by \cite[Lemma 1]{st}, and since $p_{V}\ge |H_{v}|/|H|\ge 1/|V|$ (for any non-zero vector $v\in V$), this follows immediately from the proof of Proposition \ref{CheboProp}.\end{proof} \begin{cor}\label{DelAstCor} Let $G$ be a finite soluble group, and let $A$ and $B$ be as in Theorem \ref{CheboSolTheorem}. Then $$C(G)\le d(G)\sum_{V\in A}\left(1+\frac{{q_V^{n_{V}}}|H_V|}{{q_V^{n_{V}}}-1}\right)+\sigma.$$\end{cor} \begin{proof} For $V\in A\cup B$, set $\gamma_{V}:=\lceil\delta_{V}/n_{V}\rceil$, and $p'_{V}=1/|H_{V}|$. Note also that $n_V=|H_V|=1$ when $V\in B$.
Arguing as in the last paragraph of the proof of Proposition \ref{CheboProp}, we have \begin{align*}C(G) &\le \sum_{V\in A}\sum_{k=0}^{\infty}\sum_{l=0}^{k} \min\{q_V^{-n_{V}l+\delta_V},1 \} P(B(k,p'_V)=l)+ \max_{V\in B}{\del_{V}}\!+\sigma\\ &\le\sum_{V\in A}\sum_{k=0}^{\infty}\!P(B(k,p'_V)\!<\!\gamma_{V})\!+\!\!\sum_{V\in A}\sum_{l=0}^{\infty}q_V^{-n_{V}l}\!\!\!\!\!\sum_{k=l+\gamma_V}^{\infty}\!\!\!\!\!P(B(k,p'_V)=l+\gamma_{V} )+\\ &\quad \max_{V\in B}{\del_{V}}\!+\sigma\\ &\le \sum_{V\in A} \gamma_{V}/p'_{V} +\sum_{V\in A}\frac{q_{V}^{n_{V}}}{p'_{V}(q_{V}^{n_{V}}-1)}+\max_{V\in B}{\del_{V}}+\sigma\\ &\le \left(\max_{V\in A\cup B}\gamma_{V}\right)\sum_{V\in A}\left(1+\frac{q_{V}^{n_{V}}}{q_{V}^{n_{V}}-1}\right) |H_{V}|+\sigma. \end{align*} We remark that the third inequality above follows from Proposition \ref{binomale}. Finally, \cite[Theorem 1.4 and paragraph after the proof of Theorem 2.7]{ADV} imply that $d(G)=\max_{V\in A\cup B}\left\{1+a_{V}+\left\lfloor\frac{{\delta}_V-1}{n_V}\right\rfloor\right\}$, where $a_{V}=0$ if $V\in B$, and $a_{V}=1$ otherwise. In particular, $d(G)\ge \max_{V\in A\cup B}{\gamma_{V}}$, and the result follows. \end{proof} \section{Proof of Theorem \ref{KZConjecture} Part (i)} Before proceeding to the proof of Part (i) of Theorem \ref{KZConjecture}, we require the following result, which follows immediately from the arguments used in \cite[Proof of Proposition 10]{ACheboGen}. \begin{prop}\label{pH}{\cite[Proof of Proposition 10]{ACheboGen}} Let $H$ be a finite group acting faithfully and irreducibly on an elementary abelian group $V$, and denote by $p$ the probability that a randomly chosen element $h$ of $H$ centralises a nonzero vector of $V$. Also, write $m:=\dim_{\End_{H}(V)}\h(H,V)$. Assume that $\h(H,V)$ is nontrivial and that $|H|\geq |V|$.
Then there exists an absolute constant $C$ such that $p|H|\ge 2(m+1)^{2}$ if $|H|\ge C$.\end{prop} \begin{proof}[Proof of Theorem \ref{KZConjecture} Part (i)] Since $C(G)=C(G/\frat(G))$, we may assume that $\frat(G)=1$. Thus, Proposition \ref{CheboProp} applies: adopting the same notation as used therein, we have \begin{equation}\label{main} C(G)\leq C(G/U)+\alpha_U. \end{equation} Using (\ref{main}), the proof of the theorem reduces to proving that \begin{equation}\label{red}\alpha_U\leq (1+\beta_U)\sqrt{|{G}|}\end{equation} where $\beta_{U}\to 0$ as $|U|\to \infty.$ Indeed, suppose that (\ref{red}) holds, fix $\epsilon>0$, and suppose that Theorem \ref{KZConjecture} holds for groups of order less than $|G|$. Then since $|U|>1$, there exists a constant $c_{\epsilon}$ such that $C(G/U)\le (1+\epsilon)\sqrt{|G/U|}+c_{\epsilon}$. Hence, by (\ref{main}) and (\ref{red}) we have $C(G)\le (1+\beta_{U}+\frac{1+\epsilon}{\sqrt{|U|}})\sqrt{|G|}+c_{\epsilon}$. It is now clear that by choosing $|U|$ to be large enough, we have $C(G)\le (1+\epsilon)\sqrt{|G|}+c_{\epsilon}$, as needed. Assume first that $U$ is nonabelian. By \cite[Proof of Lemma 13]{ACheboGen}, there exist absolute constants $c_{1}$ and $c_{2}$ such that $$P^*_{G,V}(k) \le \min\left\{1,c_{1}\sqrt{|{G}|^{3}}(1-c_{2}/\log{|G|})^{k}\right\}.$$ Also, there exists a constant $c_{3}$ such that if $k\ge c_{3}(\log{|{G}|})^{2}$, then $c_{1}\sqrt{|{G}|^{3}}(1-c_{2}/\log{|G|})^{k}$ tends to $0$ as ${|G|}$ tends to $\infty$. It follows that \begin{align*} \alpha_U &= \sum_{k=0}^{\infty}P^*_{G,V}(k)\\ &\le \lceil c_{3}(\log{|{G}|})^{2}\rceil +c_{1}\sqrt{|{G}|^{3}}(1-c_{2}/\log{|{G}|})^{\lceil c_{3}(\log{|{G}|})^{2}\rceil}\sum_{k=0}^{\infty}(1-c_{2}/\log{|{G}|})^{k}\\ &= \lceil c_{3}(\log{|{G}|})^{2}\rceil +\frac{c_{1}}{c_{2}}\sqrt{|{G}|^{3}}\log{|G|}(1-c_{2}/\log{|{G}|})^{\lceil c_{3}(\log{|{G}|})^{2}\rceil}\end{align*} and (\ref{red}) holds.
So we may assume that $U$ is abelian, and hence $|{G}|\ge |V|^{\delta}|H|$. The inequality (\ref{red}) then follows easily from the definition of $\alpha_U$, except when $\delta=1$ and $|H|\geq |V|.$ Indeed, if $|H|\le |V|$ and $\delta=1$, then $\frac{q^n}{q^{n}-1}\to 1$ as $|U|=q^{n}\to\infty$; if $|H|\le |V|$ and $\delta>1$, then $$\left(\left\lceil \frac{\delta}{n}\right\rceil+\frac{q^{n}}{q^{n}-1}\right)|H|\le \frac{\left\lceil\frac{ \delta}{n}\right\rceil+\frac{q^n}{q^{n}-1}}{|V|^{\frac{\delta-1}{2}}}\sqrt{|G|}$$ which clearly gives us what we need, since $|U|=|V|^{\delta}$ is tending to $\infty$. The other cases are similar. So assume that $\delta=1$ and $|H|\geq |V|$. We distinguish two cases:\begin{enumerate}[(1)] \item $m\neq 0$ and $|V|\le |H|\le (m+1)^{2}|V|$. Denote by $p$ the probability that a randomly chosen element $h$ of $H$ centralizes a non-zero vector of $V$: By Proposition \ref{pH}, there exists an absolute constant $C$ such that $p|H|\ge 2(m+1)^{2}$ if $|H|\ge C$. Thus, $$\alpha_U\le \left(m+\frac{q}{q-1}\right)\frac{1}{p}\le (m+2)\frac{|H|}{2(m+1)^{2}}\le \frac{|H|}{m+1}\le \sqrt{|H||V|}$$ if $|H|\ge C$, from which (\ref{red}) follows. \item $|H| \ge |V|(m + 1)^2$. We remark first that, for any fixed nonzero vector $v$ in $V$, we have $p\ge \frac{|H_v|}{|H|}$, where $H_{v}$ denotes the stabiliser of $v$ in $H$. If $H$ is not a transitive linear group, then there is an orbit $\Omega$ for the action of $H$ on $V\backslash\left\{0\right\}$ with $|\Omega|\le q^{n}/2$. Choose $v\in \Omega$: we have $$\frac{1}{p}\le \frac{|H|}{|H_v|}\le \frac{q^n}{2},$$ hence $$\alpha_U\le \frac{m+2}{p}\le (m+1)q^{n}\le \sqrt{|H||V|}.$$ We remain with the case when $H$ is a transitive linear group. 
There are four infinite families:\begin{enumerate}[(a)] \item $H\le \Gamma L(1,q^{n})$; \item $SL(a,r)\unlhd H$, where $r^{a}=q^{n}$; \item $Sp(2a,r)\unlhd H$, where $a\ge 2$ and $r^{2a}=q^{n}$; \item $G_2(r)\unlhd H$, where $q$ is even, and $q^{n}=r^{6}$.\end{enumerate} Furthermore, $H$ and $m$ are exhibited in \cite[Table 7.3]{Cameron}: in each case, we have $m\le 1$. Furthermore, we have $|H|=(q^{n}-1)\rho$, where $\rho$ is the order of a point stabiliser. Hence, if $\rho\ge 9$, then $$\alpha_{U}\le \frac{m+2}{p}\le \frac{3}{p}\le 3|V|\le \sqrt{|H||V|}.$$ So we may assume that $\rho\le 8$. Suppose first that (a) holds. Then $H$ is soluble, so $m=0$. Also, $\rho=|H_{v}|\le 8$ implies that $n\le 8$. Hence, as $q^{n}\le q^{8}$ approaches $\infty$, $\frac{q}{q-1}$ approaches $1$, and (\ref{red}) follows since $$\alpha_U\le \frac{q}{q-1}|V|\le \frac{q}{q-1}\sqrt{|H||V|}.$$ So we may assume that (a) does not hold. In particular, if (b) or (c) holds then $a\ge 2$. It follows (in either of the cases (b), (c) or (d)) that if $q^n$ is large enough, then $|H|\ge 9q^{n}$, and so $$\alpha_{U}\le \frac{(m+2)}{p}\le 3q^{n}\le \sqrt{|H||V|}.$$ This gives us what we need, and completes the proof.\qedhere\end{enumerate}\end{proof} \section{Proof of Theorem \ref{KZConjecture} Part (ii)} In this section, we prove Part (ii) of Theorem \ref{KZConjecture} in a number of steps. The first is as follows: \begin{lemma}\label{tecn}Let $G$ be a finite soluble group with trivial Frattini subgroup, and let $U$ and $V$ be as in Lemma \ref{corona}. Assume that $V$ is abelian and non-central in $G$, and let $H=H_V$. 
Then $$\frac{\alpha_U}{|G|^{1/2}}< \frac{5}{3}\left(\frac{|U|^{1/2}-1}{|U|^{1/2}}\right)$$ except when $|H|<|V|$ and one of the following cases occurs: \begin{enumerate} \item $\delta=2$, $q^n=4$ and $|R_G(V)|=1.$ \item $\delta=2$, $q^n=3$ and $|R_G(V)|\leq 2.$ \item $\delta=1$, $4\leq q^n \leq 7$ and $|R_G(V)|=1.$ \item $\delta=1$, $q^n=3$ and $|R_G(V)|\leq 3.$ \end{enumerate} \end{lemma} \begin{proof} Note that $m=0$ since $H$ is soluble. We distinguish the following cases: \noindent Case 1) $|H| < |V|$ and $\delta\neq 1.$ Since $|G|=\lambda|H||V|^\delta$ for some positive integer $\lambda$, it suffices to prove \begin{equation}\frac{3\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{q^{n\delta/2}}{q^{n\delta/2}-1}\right)|H|}{5\lambda^{1/2}|H|^{1/2}q^{n\delta/2}}\leq \frac{3}{5\lambda^{1/2}}\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{(q^{n}-1)^{1/2}}{q^{n\delta/2}-1}\right)< 1. \end{equation} If $\delta\geq 3$ then $$\frac{3}{5\lambda^{1/2}}\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{(q^{n}-1)^{1/2}}{q^{n\delta/2}-1}\right)\leq \frac{3}{5\lambda^{1/2}}\left(3+\frac{q^n}{q^n-1}\right) \left(\frac{(q^{n}-1)^{1/2}}{q^{3n/2}-1}\right)< 1.$$ Suppose $\delta=2.$ If $q^n\geq 5$, then $$\frac{3}{5\lambda^{1/2}}\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{(q^{n}-1)^{1/2}}{q^{n\delta/2}-1}\right)\leq \frac{3}{5\lambda^{1/2}}\left(2+\frac{q^n}{q^n-1}\right) \left(\frac{(q^{n}-1)^{1/2}}{q^{n}-1}\right)< 1.$$ Suppose $\delta=2$ and $q^n=4.$ We have $|H|=3$, so if $\lambda \neq 1$, then $$\frac{3\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{q^{n\delta/2}}{q^{n\delta/2}-1}\right)|H|}{5\lambda^{1/2} |H|^{1/2}q^{n\delta/2}}\leq \frac{2\cdot 3^{1/2}}{3\cdot \lambda^{1/2}}< 1.$$ Suppose $\delta=2$ and $q^n=3.$ We have $|H|=2$, so if $\lambda>2,$ then $$\frac{3\left(\delta+\frac{q^n}{q^n-1}\right) \left(\frac{q^{n\delta/2}}{q^{n\delta/2}-1}\right)|H|}{5\lambda^{1/2} |H|^{1/2}q^{n\delta/2}}\leq \frac{21\cdot 2^{1/2}}{20\cdot \lambda^{1/2}}< 1.$$ \noindent Case 2)
$|H|\geq |V|$ (and consequently $n\neq 1$) and $\delta\neq 1.$ It suffices to prove that \begin{equation}\frac{3\left(\delta+\frac{q}{q-1}\right) \left(\frac{q^{n\delta/2}}{q^{n\delta/2}-1}\right)q^n}{5|H|^{1/2}q^{n\delta/2}}< 1. \end{equation} Suppose $q^n\neq 4.$ Then $$\frac{3\left(\delta+\frac{q}{q-1}\right) \left(\frac{q^{n\delta/2}}{q^{n\delta/2}-1}\right)q^n}{5|H|^{1/2}q^{n\delta/2}}\leq \frac{3\left(2+\frac{q}{q-1}\right) \left(\frac{q^{n}}{q^{n}-1}\right)}{5q^{n/2}}< 1.$$ Suppose $q^n= 4.$ We have $H=\GL(2,2)\cong \perm(3),$ and consequently $|H|=6$ and $p=2/3.$ Hence $$\frac{\alpha_U}{|G|^{1/2}}\cdot\frac{3}{5}\left(\frac{|U|^{1/2}}{|U|^{1/2}-1}\right) \leq \frac{(\delta+2)\cdot \frac 1 p \cdot \frac 3 5 \cdot \frac 4 3}{|H|^{1/2}\cdot 2^\delta}\leq \frac{6}{5\sqrt 6}< 1.$$ \noindent Case 3) $|H| < |V|$ and $\delta=1.$ Since $|G|=\lambda|H||V|^\delta$ for some positive integer $\lambda$, it suffices to prove \begin{equation}\frac{3\left(\frac{q^n}{q^n-1}\right)\left(\frac{q^{n/2}}{q^{n/2}-1}\right) |H|^{1/2}}{5\lambda^{1/2}q^{n/2}} < 1. \end{equation} If $q^n\geq 8$; or $4 \leq q^n\leq 7$ and $\lambda\neq 1$; or $q^n=3$ and $\lambda >3$, then $$\frac{3\left(\frac{q^n}{q^n-1}\right)\left(\frac{q^{n/2}}{q^{n/2}-1}\right) |H|^{1/2}}{5\cdot \lambda^{1/2}\cdot q^{n/2}}\leq \frac{3\left(\frac{q^n}{q^n-1}\right)\left(\frac{(q^n-1)^{1/2}}{q^{n/2}-1}\right)}{5\cdot \lambda^{1/2}} = \frac{3\left(\frac{q^n}{(q^n-1)^{1/2}(q^{n/2}-1)}\right)}{5\cdot \lambda^{1/2}}< 1.$$ \noindent Case 4) $|H|\geq |V|$ (and consequently $n\neq 1$) and $\delta= 1.$ It suffices to prove that \begin{equation}\label{54}\frac{3\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right)}{5|H|^{1/2}q^{n/2}p}< 1.
\end{equation} If $H$ is not a transitive linear group, then $|H|^{1/2}q^{n/2}p\geq 2,$ so it suffices to have $$\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right) \leq \frac{10}{3},$$ which is true if $(q,n)\neq (2,2).$ On the other hand, we may exclude the case $(q,n)=(2,2)$: indeed, the only soluble irreducible subgroup of $\GL(2,2)$ of order $\geq 4$ is $\GL(2,2)$ itself, which is transitive on the nonzero vectors. If $H$ is a transitive linear group, then $|H|=(q^n-1)\rho$, with $\rho$ the order of the stabiliser in $H$ of a nonzero vector, and $$\frac{3\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right)}{5|H|^{1/2}q^{n/2}p}\leq \frac{3\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right)}{5\sqrt{\rho}},$$ so it suffices to have $$\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right) \leq \frac{5\sqrt \rho}{3},$$ which is true if $q\geq 3$, and also if $q=2$ and $(q,n,\rho)\notin \{(2,4,2), (2,3,2), (2,3,3), (2,2,2)\}.$ We may exclude the case $(q,n,\rho)=(2,3,2)$: there is no transitive linear subgroup of $\GL(3,2)$ of order $14$. If $(q,n,\rho)=(2,4,2)$, then $H=\GL(1,16)\rtimes C_2$, hence $p=6/30$, so $|H|^{1/2}q^{n/2}p\geq 2$ and (\ref{54}) is true. If $(q,n,\rho)=(2,3,3)$, then $H=\gaml(1,8)$, and consequently $p=15/21$ and $$\frac{3\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right)}{5|H|^{1/2}q^{n/2}p}=\frac{3\cdot 2 \cdot\sqrt 8\cdot 21}{5\cdot (\sqrt 8-1) \cdot 15 \cdot \sqrt{21}\sqrt{8} }<1.$$ If $(q,n,\rho)=(2,2,2)$, then $H=\GL(2,2)$, and consequently $p=2/3$ and $$\frac{3\left(\frac{q}{q-1}\right) \left(\frac{q^{n/2}}{q^{n/2}-1}\right)}{5|H|^{1/2}q^{n/2}p}=\frac{3\cdot 2 \cdot 2\cdot 3}{5\cdot 2 \cdot 2 \cdot \sqrt{6} }<1. \qedhere$$ \end{proof} \begin{lemma}\label{dirab} If $G$ is one of the exceptional cases in the statement of Lemma \ref{tecn}, then $C(G)< \frac{5}{3}\sqrt{|G|}.$ \end{lemma} \begin{proof} This follows easily by direct computation.
We use MAGMA and the code from \cite[Appendix, page 36]{kz} to compute $C(G)$ explicitly whenever $G$ is a group satisfying the conditions of one of the exceptional cases of Lemma \ref{tecn}. \end{proof} The next step is to deal with the case of a central chief factor. \begin{lemma}\label{13} If $G\cong C_p^\delta$, then $C(G)\leq \frac{5}{3}\sqrt{|G|},$ with equality if and only if $G=C_2 \times C_2.$ \end{lemma} \begin{proof}If $p\neq 2$, or if $p=2$ and $\delta > 3$, then $$C(G) = \sum_{0\leq i\leq \delta-1}\frac{p^\delta}{p^\delta-p^i}\leq \delta+\frac{p}{(p-1)^2} < \frac{5\cdot p^{\delta/2}}3=\frac{5\cdot \sqrt{|G|}}3.$$ If $(p,\delta)=(2,1)$, then $$\frac{C(G)}{\sqrt{|G|}}=\frac{2}{\sqrt{2}}=\sqrt{2};$$ if $(p,\delta)=(2,2)$, then $$\frac{C(G)}{\sqrt{|G|}}=\frac{\frac{4}{2}+\frac{4}{3}}{{2}}=\frac{5}{3};$$ if $(p,\delta)=(2,3)$, then $$\frac{C(G)}{\sqrt{|G|}}=\frac{\frac{8}{4}+\frac{8}{6}+\frac{8}{7}}{\sqrt{8}} \approx 1.5826.\qedhere$$ \end{proof} \begin{proof}[Proof of Part (ii) of Theorem \ref{KZConjecture}] We prove the claim by induction on the order of $G.$ If $\frat (G)\neq 1$, then the conclusion follows immediately by induction, since $C(G)=C(G/\frat (G)).$ Otherwise, $G$ contains a normal subgroup $U$ as in Lemma \ref{sotto}. If $G=U\cong C_p^\delta,$ then the conclusion follows from Lemma \ref{13}. Otherwise, by Lemma \ref{dirab} we may assume that none of the exceptional cases of Lemma \ref{tecn} occurs, and Lemma \ref{tecn}, together with the inductive hypothesis, gives $$C(G)\leq C(G/U)+\alpha_U < \frac{5 \sqrt{|G|}}{3 \sqrt{|U|}}+\frac{ 5(\sqrt{|U|} -1)\sqrt{|G|}}{3\sqrt{|U|}}=\frac{5}{3}\sqrt{|G|}$$ as claimed.\end{proof}
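The closed-form arithmetic above can be spot-checked mechanically. The following Python sketch (purely illustrative, not part of the proof; the grids of primes and exponents are our own choices) evaluates $C(C_p^\delta)=\sum_{0\le i\le\delta-1}p^\delta/(p^\delta-p^i)$ for the small $2$-groups treated separately in Lemma \ref{13}, checks the general bound $C(G)<\frac{5}{3}\sqrt{|G|}$ on a sample grid, and verifies the closing identity of the induction step:

```python
import math

def C_elem_abelian(p, delta):
    """C(C_p^delta) = sum over 0 <= i <= delta-1 of p^delta / (p^delta - p^i)."""
    return sum(p ** delta / (p ** delta - p ** i) for i in range(delta))

# The three small 2-groups handled separately in the proof of Lemma 13.
assert math.isclose(C_elem_abelian(2, 1) / math.sqrt(2), math.sqrt(2))  # (p,delta)=(2,1)
assert math.isclose(C_elem_abelian(2, 2) / math.sqrt(4), 5 / 3)         # equality: C_2 x C_2
assert abs(C_elem_abelian(2, 3) / math.sqrt(8) - 1.5826) < 1e-3         # (p,delta)=(2,3)

# Strict bound C(G) < (5/3)*sqrt(|G|) for p odd, and for p = 2 with delta > 3,
# checked on an illustrative grid.
for p in (3, 5, 7):
    for delta in (1, 2, 3, 4):
        assert C_elem_abelian(p, delta) < (5 / 3) * p ** (delta / 2), (p, delta)
for delta in (4, 5, 6):
    assert C_elem_abelian(2, delta) < (5 / 3) * 2 ** (delta / 2), delta

# Closing identity of the induction step, for sample values g = |G|, u = |U|:
# 5*sqrt(g)/(3*sqrt(u)) + 5*(sqrt(u)-1)*sqrt(g)/(3*sqrt(u)) = (5/3)*sqrt(g).
for g, u in [(96, 16), (48, 4), (1024, 8)]:
    lhs = 5 * math.sqrt(g) / (3 * math.sqrt(u)) \
        + 5 * (math.sqrt(u) - 1) * math.sqrt(g) / (3 * math.sqrt(u))
    assert math.isclose(lhs, (5 / 3) * math.sqrt(g))
```

All assertions pass, in agreement with the values displayed in the proofs.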